User:Egm6322.s09.Three/Report2

Review of the previous report

Report 1 problem: diffusion operator

$$\mathcal D (u):=div(\kappa \cdot grad\ u)$$

$$=\frac{\partial }{\partial x_{i}}(\kappa_{ij} \frac{\partial u}{\partial x_{j}})$$

Expanding:

$$\sum_{i=1}^{2} \sum_{j=1}^{2}\frac{\partial }{\partial x_{i}}(\kappa_{ij} \frac{\partial u}{\partial x_{j}})$$

$$=\frac{\partial }{\partial x_{1}}(\kappa_{11} \frac{\partial u}{\partial x_{1}})+\frac{\partial }{\partial x_{1}}(\kappa_{12} \frac{\partial u}{\partial x_{2}})+\frac{\partial }{\partial x_{2}}(\kappa_{21} \frac{\partial u}{\partial x_{1}})+\frac{\partial }{\partial x_{2}}(\kappa_{22} \frac{\partial u}{\partial x_{2}})$$

Linear if $$\alpha$$ and $$\beta$$ are constants:

$$\mathcal L(\alpha u+\beta v)=\alpha \mathcal L(u)+ \beta \mathcal L(v)$$

$$\mathcal D(\alpha u+\beta v)=\alpha \mathcal D(u)+ \beta \mathcal D(v)$$

$$\forall \ u, v: \Omega\rightarrow \mathbb{R}, \quad \forall \ \alpha, \beta \in \mathbb{R}$$

$$div[\underline{\kappa} \cdot grad(\alpha u+ \beta v)] $$

The gradient is linear:

$$grad(\alpha u+ \beta v)=\alpha grad \ u+\beta grad \ v$$

Matrix multiplication is a linear operation.

$$\underline{A}(\alpha \cdot \underline{x}+\beta \cdot \underline{y})= \alpha \underline{A} \cdot \underline{x}+ \beta \underline{A} \cdot \underline{y}$$

$$\underline{\kappa} \cdot grad(\alpha u+\beta v)= \alpha \underline{\kappa} \cdot grad\ u + \beta \underline{\kappa} \cdot grad\ v$$

The divergence is also linear: for vector fields $$a, b: \Omega \rightarrow \mathbb{R}^{3}$$, $$div(\alpha a+\beta b)=\alpha\, div\ a+\beta\, div\ b$$. Hence $$\mathcal D(\cdot)$$ is linear.
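As a sanity check, the linearity of $$\mathcal D(\cdot)$$ can be verified numerically. The sketch below is a minimal illustration, assuming a constant conductivity tensor $$\kappa$$ (so that $$\mathcal D(u)=\kappa_{11}u_{xx}+(\kappa_{12}+\kappa_{21})u_{xy}+\kappa_{22}u_{yy}$$) with central-difference derivatives; the values of κ, u, and v are illustrative choices, not from the report.

```python
# Illustrative constant 2x2 conductivity tensor kappa
K = [[2.0, 0.5],
     [0.5, 1.0]]

def diffusion(u, x, y, h=1e-3):
    """D(u) = div(kappa . grad u) for CONSTANT kappa, i.e.
    k11*u_xx + (k12+k21)*u_xy + k22*u_yy, via central differences."""
    u_xx = (u(x + h, y) - 2*u(x, y) + u(x - h, y)) / h**2
    u_yy = (u(x, y + h) - 2*u(x, y) + u(x, y - h)) / h**2
    u_xy = (u(x + h, y + h) - u(x + h, y - h)
            - u(x - h, y + h) + u(x - h, y - h)) / (4*h**2)
    return K[0][0]*u_xx + (K[0][1] + K[1][0])*u_xy + K[1][1]*u_yy

# Two illustrative smooth scalar fields and constants alpha, beta
u = lambda x, y: x**2 * y
v = lambda x, y: x * y**3
alpha, beta = 2.0, -3.0

w = lambda x, y: alpha*u(x, y) + beta*v(x, y)

lhs = diffusion(w, 1.2, 0.7)
rhs = alpha*diffusion(u, 1.2, 0.7) + beta*diffusion(v, 1.2, 0.7)
print(abs(lhs - rhs) < 1e-6)   # True: D(alpha*u + beta*v) = alpha*D(u) + beta*D(v)
```

Since central differences are themselves linear in u, the two sides agree to round-off, independent of the step size h.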

=Transformation of Coordinates=

==Definition of a linear operator==
The word linear comes from the Latin word ‘linearis’, which means created by lines.

An operator is said to be linear if it has both additivity and homogeneity.

* Additive: $$\mathcal L(u+v)=\mathcal L(u) + \mathcal L(v), \quad \forall \ u, v: \Omega\rightarrow \mathbb{R}$$

* Homogeneous: $$\mathcal L (\alpha u)=\alpha \mathcal L(u), \quad \forall \ \alpha \in \mathbb{R}, \ \forall \ u: \Omega \rightarrow \mathbb{R}$$

Equivalence: $$\mathcal L (\alpha u+ \beta v)=\alpha \mathcal L(u)+ \beta \mathcal L(v)$$ is equivalent to additivity together with homogeneity.

To obtain additivity, set $$\alpha=\beta=1$$; to obtain homogeneity, set $$\beta=0$$.

From homogeneity of $$\mathcal L(\cdot)$$, selecting either $$\alpha=0$$ or $$u=0$$ shows that the image of zero under a linear operator $$\mathcal L$$ is zero:

$$\mathcal L(0)=0$$

Example of a linear map on $$\mathbb{R}^n$$ (the n-dimensional case)

$$\mathbb{R}^n \rightarrow \mathbb{R}^n$$ is a set-to-set map.

If $$x \in \mathbb{R}^n$$ and $$y \in \mathbb{R}^n$$,

then $$x \longmapsto y=Ax$$ is a point-to-point map.

$$A \in \mathbb{R}^{n \times n}$$ is an n×n matrix.

D (domain) = $$\mathbb{R}^n$$

R (range) = $$\mathbb{R}^n$$

==General case==
Now consider $$\mathbb{R}^m \rightarrow \mathbb{R}^n$$ with m $$\neq$$ n.

If $$x \in \mathbb{R}^m$$, $$y \in \mathbb{R}^n$$, and y=Ax,

then A is an n×m matrix, $$A \in \mathbb{R}^{n \times m}$$.

Now consider: y=Ax+b;

y is an n×1 matrix; A is an n×m matrix; x is an m×1 matrix; b is an n×1 matrix.

$$M(\cdot): \mathbb{R}^m \rightarrow \mathbb{R}^n$$

If $$x \in \mathbb{R}^m$$ and $$y \in \mathbb{R}^n$$,

then $$x \longmapsto y=Ax+b$$.

So clearly, $$M(0)=b \neq 0 \Rightarrow M(\cdot)$$ is not a linear map, and $$M(\cdot)$$ is not homogeneous, either.
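The failure of linearity for an affine map can be made concrete. A small Python sketch with illustrative A and b (any b ≠ 0 works):

```python
# Illustrative affine map M(x) = A x + b on R^2
A = [[1.0, 2.0],
     [3.0, 4.0]]
b = [1.0, -1.0]

def matvec(A, x):
    """Plain matrix-vector product."""
    return [sum(A[i][j] * x[j] for j in range(len(x))) for i in range(len(A))]

def M(x):
    """Affine map: y = A x + b."""
    Ax = matvec(A, x)
    return [Ax[i] + b[i] for i in range(len(b))]

# M(0) = b != 0, so M cannot be linear
print(M([0.0, 0.0]))          # equals b

# Additivity also fails: b is added once on the left, twice on the right
x, y = [1.0, 0.0], [0.0, 1.0]
lhs = M([x[i] + y[i] for i in range(2)])
rhs = [M(x)[i] + M(y)[i] for i in range(2)]
print(lhs != rhs)             # True
```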

==Example of Rotation==


In the case y=Rx+b

$$\begin{Bmatrix} y_1 \\ y_2 \end{Bmatrix}$$ = $$\begin{bmatrix} R_{11} &R_{12}    \\ R_{21} &R_{22} \end{bmatrix}$$$$\begin{Bmatrix} x_1 \\ x_2 \end{Bmatrix}$$ + $$\begin{Bmatrix} b_1 \\ b_2 \end{Bmatrix}$$
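For concreteness, here is a small numerical sketch of the rotation case, with an illustrative angle and offset b. It checks that the rotation part R preserves lengths (R is orthogonal), while the map as a whole still sends 0 to b, so it is affine rather than linear whenever b ≠ 0.

```python
import math

theta = math.pi / 6   # illustrative 30-degree rotation angle
R = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]
b = [2.0, -1.0]       # illustrative offset

def apply(x):
    """Component form of y = R x + b, matching the 2x2 system above."""
    y1 = R[0][0]*x[0] + R[0][1]*x[1] + b[0]
    y2 = R[1][0]*x[0] + R[1][1]*x[1] + b[1]
    return [y1, y2]

# The rotation part is orthogonal, so |R x| = |x|
x = [3.0, 4.0]
Rx = [R[0][0]*x[0] + R[0][1]*x[1], R[1][0]*x[0] + R[1][1]*x[1]]
print(abs(math.hypot(*Rx) - math.hypot(*x)) < 1e-12)   # True: lengths preserved

# But the full map sends 0 to b, not to 0
print(apply([0.0, 0.0]) == b)   # True
```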

 Note:  If A $$\Leftrightarrow$$ B, then A is equivalent to B.

If A $$\Rightarrow$$ B, then A is a sufficient condition for B.

If A $$\Leftarrow$$ B, then B is a sufficient condition for A, i.e., A is a necessary condition for B.

[A $$\Rightarrow$$ B]$$\Leftrightarrow$$ [negation of B $$\Rightarrow$$ negation of A]

==Transformation of Coordinates and the Jacobian==
Assume elements of $$\mathbb{R}^m$$ and $$\mathbb{R}^n$$ are vectors (tensors). The divergence maps a vector field (vector-valued function) into a scalar function; in other words, the domain and range of div(·) are function spaces.

 Note:  In P9-1, a 2nd-order linear PDE (e.g., Eqn (2)) cannot in general be transformed into the coordinates of Eqn (5).

In coordinates:

$$x=\Phi(\bar{x},\bar{y})$$

$$y=\Psi(\bar{x},\bar{y})$$

in coordinate transformation of Eqn(5) in P9-1

$$u(x,y)=u(\Phi(\bar{x},\bar{y}),\Psi(\bar{x},\bar{y}))$$

$$=u(\bar{x},\bar{y})$$           (abuse of notation by reusing "u")

$$=\bar{u}(\bar{x},\bar{y})$$      (more rigorous notation)

 Example: 

Let $$u(x)=ax+b$$

$$x=x(\bar{x})=\sin\bar{x}=\Phi(\bar{x})$$

$$u(x)=u(\Phi(\bar{x}))=a\sin(\bar{x})+b=u(\bar{x})=\bar{u}(\bar{x})$$

$$u(\bar{x})$$ is an abuse of notation; $$\bar{u}(\bar{x})$$ is more rigorous.

$$u_x(x,y)=\frac{\partial u}{\partial x}(x,y)=u_x(\Phi(\bar{x},\bar{y}),\Psi(\bar{x},\bar{y})) =\frac{\partial}{\partial x}\bar{u}(\bar{x},\bar{y}) =\frac{\partial \bar{u}}{\partial \bar{x}}\frac{\partial \bar{x}}{\partial x}+\frac{\partial \bar{u}}{\partial \bar{y}}\frac{\partial \bar{y}}{\partial x}$$

 Define:  $$\bar{x}=\bar{x}(x,y)$$=$$\bar{\Phi}(x,y)$$; $$\bar{y}=\bar{y}(x,y)$$=$$\bar{\Psi}(x,y)$$

then $$u_y(x,y)=\frac{\partial}{\partial y}\bar{u}(\bar{x},\bar{y})=\bar{u}_{\bar{x}}\frac{\partial \bar{x}}{\partial y}+\bar{u}_{\bar{y}}\frac{\partial \bar{y}}{\partial y}$$

 Matrix form: 

$$\partial_xu=[\frac{\partial \bar{x}}{\partial x},\frac{\partial \bar{y}}{\partial x}]\begin{Bmatrix} \partial_\bar{x} \\ \partial_\bar{y} \end{Bmatrix}(\bar{u})$$

$$ \begin{Bmatrix} \partial_x \\ \partial_y \end{Bmatrix}=\begin{bmatrix} \frac{\partial \bar{x}}{\partial x} & \frac{\partial \bar{y}}{\partial x}     \\ \frac{\partial \bar{x}}{\partial y} & \frac{\partial \bar{y}}{\partial y} \end{bmatrix} \begin{Bmatrix} \partial_\bar{x} \\ \partial_\bar{y} \end{Bmatrix}$$

 Jacobian Matrix:  $$\begin{bmatrix} \frac{\partial \bar{x}}{\partial x} & \frac{\partial \bar{y}}{\partial x}     \\ \frac{\partial \bar{x}}{\partial y} & \frac{\partial \bar{y}}{\partial y} \end{bmatrix}$$; (sometimes the Jacobian matrix is defined as the transpose of this matrix)
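The chain rule and the Jacobian entries above can be checked numerically. The sketch below uses an illustrative linear coordinate change (so the Jacobian entries are the constants 1, 2, 3, −1) and an illustrative function ū; both are assumptions for the demonstration, not from the report.

```python
# Illustrative coordinate change: xbar = x + 2y, ybar = 3x - y,
# and an illustrative function ubar(xbar, ybar) = xbar^2 * ybar.
def xbar(x, y): return x + 2.0*y
def ybar(x, y): return 3.0*x - y
def ubar(xb, yb): return xb**2 * yb

def u(x, y):
    """u(x, y) = ubar(xbar(x, y), ybar(x, y)), the composed function."""
    return ubar(xbar(x, y), ybar(x, y))

h = 1e-6
x0, y0 = 1.0, 2.0

# Left-hand side: u_x by a central difference
u_x = (u(x0 + h, y0) - u(x0 - h, y0)) / (2*h)

# Right-hand side: ubar_xbar * dxbar/dx + ubar_ybar * dybar/dx
xb0, yb0 = xbar(x0, y0), ybar(x0, y0)
ubar_xb = (ubar(xb0 + h, yb0) - ubar(xb0 - h, yb0)) / (2*h)
ubar_yb = (ubar(xb0, yb0 + h) - ubar(xb0, yb0 - h)) / (2*h)
chain = ubar_xb * 1.0 + ubar_yb * 3.0   # dxbar/dx = 1, dybar/dx = 3

print(abs(u_x - chain) < 1e-5)   # True: the chain rule holds
```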

Note: An easier and more general expression:

$$(x_1,...,x_n)\longmapsto(\bar{x}_1,...,\bar{x}_n)$$

Indicial notation:

$$\bar{x}_i=\bar{x}_i(x_1,...,x_n)$$

$$\underline{J}={\left[\frac{\partial \bar{x}_i}{\partial x_j}\right]}_{n\times n}$$

i: row index; j: column index.

=2nd-order PDEs: Alternative presentation, from particular to general=

==PDEs linear with respect to the 2nd derivative, but still not linear in general==
Intro: In order to learn more about PDEs in general, we go from particular examples to general expressions.

For the general 2nd order PDE:

$$div[\bar{\kappa}\cdot grad\, u]+f(x,y,u,u_{x},u_{y})=0$$

where $$\bar{\kappa}$$ is the conductivity tensor (2nd order) and:

$$\bar{\kappa}=\bar{\kappa}(x,y)$$

The portion of the equation with the 2nd order derivatives is given by: $$div[\bar{\kappa}\cdot grad\, u]$$

The rest of the equation, $$f(x,y,u,u_{x},u_{y})$$, may be a function of x, y, the unknown function u, or its first-order derivatives.

e.g.

$$div[\bar{\kappa}(x,y)\cdot grad\, u]+ ax^{2}+ by + \sqrt{u}+ (u_{x})^{4}+2(u_{y})^{2}=0$$

Note that:

$$D_{1}(\cdot):=div[\bar{\kappa}(x,y)\cdot grad\,(\cdot)]$$

is linear, and,

$$D_{2}(\cdot):=D_{1}+(\cdot)^{\frac{1}{2}}+[(\cdot)_{x}]^{4}+2[(\cdot)_{y}]^{2}$$

is nonlinear. Therefore:

$$D_{3}:=D_{2}+ax^{2}+by$$

is also nonlinear.
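The nonlinearity of $$D_{2}$$ can be exhibited at a single point by checking that homogeneity fails. The sketch below evaluates only the non-divergence terms $$\sqrt{u}+(u_{x})^{4}+2(u_{y})^{2}$$ of the example, for an illustrative field u > 0 (an assumption so the square root is defined); since these terms are already nonlinear, the full operator is too.

```python
import math

def N(u, x, y, h=1e-6):
    """sqrt(u) + (u_x)^4 + 2*(u_y)^2: the nonlinear terms of the example,
    with first derivatives taken by central differences."""
    u_x = (u(x + h, y) - u(x - h, y)) / (2*h)
    u_y = (u(x, y + h) - u(x, y - h)) / (2*h)
    return math.sqrt(u(x, y)) + u_x**4 + 2*u_y**2

u = lambda x, y: x**2 + y**2 + 1.0   # illustrative field, kept > 0

# Homogeneity fails: N(2u) != 2 N(u), so the operator is nonlinear
lhs = N(lambda x, y: 2.0*u(x, y), 1.0, 1.0)
rhs = 2.0*N(u, 1.0, 1.0)
print(abs(lhs - rhs) > 0.1)   # True: a large gap, driven by the (u_x)^4 term
```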

This shows the PDE is linear with respect to the 2nd-order derivatives, but nonlinear as a whole.

==PDEs: Particular to general==
2nd-order PDEs, linear in all orders, i.e., linear with respect to the 2nd-, 1st-, and zeroth-order terms.

General form:

$$au_{xx}+2bu_{xy}+cu_{yy}+du_{x}+eu_{y}+fu+g=0...(1)$$

The coefficients can be functions of x and y, or constants, in general; if they are functions of u, the PDE is said to be quasilinear.

$$ \begin{pmatrix} \partial _{x} & \partial_{y} \end{pmatrix} \begin{pmatrix} a & b\\ b & c \end{pmatrix}\begin{Bmatrix} \partial_{x}u\\ \partial_{y}u \end{Bmatrix}+\begin{pmatrix} d & e \end{pmatrix}\begin{Bmatrix} \partial_{x}u\\ \partial_{y}u \end{Bmatrix}+fu+g=0...(2)$$

Let the 2nd-order part be denoted $$\alpha := au_{xx}+2bu_{xy}+cu_{yy}$$.

$$\partial_{x}=\frac{\partial}{\partial x}(\cdot)$$ $$\partial_{y}=\frac{\partial}{\partial y}(\cdot) $$

$$\begin{bmatrix} \partial_{x} & \partial_{y} \end{bmatrix}\begin{Bmatrix} au_{x}+bu_{y}\\ bu_{x}+cu_{y} \end{Bmatrix}=\begin{Bmatrix} (au_{x})_{x}+(bu_{y})_{x}\\ (bu_{x})_{y}+(cu_{y})_{y} \end{Bmatrix}$$

Assuming u is smooth, $$u_{xy}=u_{yx}$$. If a, b, c are not constant, the product rule produces additional 1st-order terms:

$$a_{x}u_{x}+b_{x}u_{y}+b_{y}u_{x}+c_{y}u_{y}$$

Eqn (2) $$\Rightarrow$$ Eqn (1) only if a, b, c are constant, but d, e, f, g may be functions of x and y; Eqn (2) is more general.

$$(au_{x})_x+(bu_{x})_y+(bu_{y})_x+(cu_{y})_y+du_{x}+eu_{y}+fu+g=0$$ (3)

Showing that Eqn (2) and Eqn (1) have the same form:

$$(au_{x})_x+(bu_{x})_y+(bu_{y})_x+(cu_{y})_y+du_{x}+eu_{y}+fu+g=0$$

$$[au_{xx}+a_{x}u_{x}]+[2bu_{xy}+b_{y}u_{x}+b_{x}u_{y}]+[cu_{yy}+c_{y}u_{y}]+du_{x}+eu_{y}+fu+g=0$$

=>

$$au_{xx}+2bu_{xy}+cu_{yy}+[d+a_{x}+b_{y}]u_{x}+[e+b_{x}+c_{y}]u_{y}+fu+g=0$$

Noting that:

$$\overline{d}=[d+a_{x}+b_{y}]$$

$$\overline{e}=[e+b_{x}+c_{y}]$$

the form of Eqn (1) is recovered.
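The identity between the divergence form and the expanded form with the modified coefficients $$\overline{d}$$ and $$\overline{e}$$ can be spot-checked numerically. The sketch below uses illustrative polynomial coefficients a, b, c and field u (assumptions for the demonstration), with all derivatives taken by central differences.

```python
# Check that (a u_x)_x + (b u_x)_y + (b u_y)_x + (c u_y)_y
#   = a u_xx + 2b u_xy + c u_yy + (a_x + b_y) u_x + (b_x + c_y) u_y
# for illustrative smooth coefficients and field.
a = lambda x, y: x**2
b = lambda x, y: x * y
c = lambda x, y: y**2
u = lambda x, y: x**3 * y**2

h = 1e-4
def dx(f, x, y): return (f(x + h, y) - f(x - h, y)) / (2*h)
def dy(f, x, y): return (f(x, y + h) - f(x, y - h)) / (2*h)

x0, y0 = 1.0, 1.0

# Left-hand side: divergence form, outer derivatives of the fluxes
flux1 = lambda x, y: a(x, y) * dx(u, x, y)   # a u_x
flux2 = lambda x, y: b(x, y) * dx(u, x, y)   # b u_x
flux3 = lambda x, y: b(x, y) * dy(u, x, y)   # b u_y
flux4 = lambda x, y: c(x, y) * dy(u, x, y)   # c u_y
lhs = (dx(flux1, x0, y0) + dy(flux2, x0, y0)
       + dx(flux3, x0, y0) + dy(flux4, x0, y0))

# Right-hand side: expanded form with the modified 1st-order coefficients
u_x, u_y = dx(u, x0, y0), dy(u, x0, y0)
u_xx = (u(x0 + h, y0) - 2*u(x0, y0) + u(x0 - h, y0)) / h**2
u_yy = (u(x0, y0 + h) - 2*u(x0, y0) + u(x0, y0 - h)) / h**2
u_xy = dx(lambda x, y: dy(u, x, y), x0, y0)
rhs = (a(x0, y0)*u_xx + 2*b(x0, y0)*u_xy + c(x0, y0)*u_yy
       + (dx(a, x0, y0) + dy(b, x0, y0))*u_x
       + (dx(b, x0, y0) + dy(c, x0, y0))*u_y)

print(abs(lhs - rhs) < 1e-3)   # True (up to finite-difference error)
```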

==PDEs linear in all orders (more particular)==
Proof that the order of double differentiation is independent of the order of the variables, i.e., $$u_{xy}=u_{yx}$$.

Assume a uniform increment of length h in both the x and y directions.

Then

$$u_x= \lim_{h\to 0}\frac {u(x+h,y)-u(x,y)}{h}$$

$$u_y= \lim_{h\to 0}\frac {u(x,y+h)-u(x,y)}{h}$$

$$u_{xy}= \lim_{h\to 0}\frac {\frac{\partial u(x+h,y) }{\partial y}-\frac{\partial u(x,y)}{\partial y}}{h}$$

$$\Rightarrow \lim_{h\to 0}\frac{\left [\frac{u(x+h,y+h)-u(x+h,y)}{h} \right ]-\left [\frac{u(x,y+h)-u(x,y)}{h} \right ]}{h}$$

$$\Rightarrow \lim_{h\to 0}\frac{u(x+h,y+h)-u(x+h,y)-u(x,y+h)+u(x,y)}{h^{2}}$$---(1)

$$u_{yx}= \lim_{h\to 0}\frac {\frac{\partial u(x,y+h) }{\partial x}-\frac{\partial u(x,y)}{\partial x}}{h}$$

$$\Rightarrow \lim_{h\to 0}\frac{\left [\frac{u(x+h,y+h)-u(x,y+h)}{h} \right ]-\left [\frac{u(x+h,y)-u(x,y)}{h} \right ]}{h}$$

$$\Rightarrow \lim_{h\to 0}\frac{u(x+h,y+h)-u(x+h,y)-u(x,y+h)+u(x,y)}{h^{2}}$$---(2)

Hence, as is evident from (1) and (2), the order of double differentiation is indeed independent of the order of the variables.
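The limit formulas above translate directly into nested central differences. The sketch below (with an illustrative smooth u, an assumption for the demonstration) shows that the two orders of differentiation agree to finite-difference accuracy.

```python
import math

# Illustrative smooth field with a known mixed partial:
# u = sin(x) * y^3, so u_xy = u_yx = 3 cos(x) y^2
u = lambda x, y: math.sin(x) * y**3

h = 1e-4
def dx(f, x, y): return (f(x + h, y) - f(x - h, y)) / (2*h)
def dy(f, x, y): return (f(x, y + h) - f(x, y - h)) / (2*h)

x0, y0 = 0.5, 1.5
u_xy = dy(lambda x, y: dx(u, x, y), x0, y0)   # differentiate in x, then y
u_yx = dx(lambda x, y: dy(u, x, y), x0, y0)   # differentiate in y, then x

exact = math.cos(x0) * 3 * y0**2              # analytic mixed partial
print(abs(u_xy - u_yx) < 1e-6)    # True: order does not matter
print(abs(u_xy - exact) < 1e-4)   # True
```

Both nested differences reduce to the same four-point expression derived in (1) and (2), which is why they agree to round-off rather than merely to truncation error.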

=List of Contributing Members=

Egm6322.s09.Three.ge 17:13, 6 February 2009 (UTC) team coordinator

Egm6322.s09.three.liu 15:01, 6 February 2009 (UTC)

Egm6322.s09.Three.nav 15:38, 6 February 2009 (UTC)