User:Egm6322.s09.mafia/HW2

 See my comments below. Egm6322.s09 12:26, 7 February 2009 (UTC)

=Second Order PDEs, from Particular to General=

One case
--EGM6322.S09.TIAN 17:46, 24 April 2009 (UTC)

$$\kappa = \underline{\kappa} (x,y,u,u_x,u_y) \Rightarrow$$ the PDE is quasilinear.

Definition of quasilinear: for a PDE of order $$n$$ (i.e., the highest derivative terms are of order $$n$$), the coefficients of the $$n$$th-order derivatives are functions of $$(x,y,u,u_x,u_y,\dots,\frac{\partial^m u}{\partial x^p \partial y^q})$$.

Here we assume two independent variables; the derivatives appearing in the coefficients are of order $$m$$, with $$p+q=m$$ and $$m<n$$.

Another Case
--EGM6322.S09.TIAN 17:45, 24 April 2009 (UTC)

PDEs linear with respect to 2nd derivative, but still non-linear in general:

$$div( \kappa \cdot grad\, u)+ f(x,y,u,u_x,u_y)=0$$

Here $$\kappa = \kappa (x,y)$$, and $$f(x,y,u,u_x,u_y)$$ is in general nonlinear with respect to its arguments, e.g.

$$div( \kappa \cdot grad\, u)+ ax^2+by+\sqrt{u}+(u_x)^4+2(u_y)^2=0$$

Homework: Show the Linearity of the Equation

--EGM6322.S09.TIAN 17:46, 24 April 2009 (UTC)

Show that the equation is linear with respect to the 2nd derivative, but non-linear in general.

To make this clear, define:

$$D_1(\cdot):= div[ \kappa (x,y) \cdot grad (\cdot)]$$

We can then define:

$$D_2(\cdot):= D_1(\cdot)+ (\cdot)^{1/2}+ [(\cdot)_x]^4+ 2[(\cdot)_y]^2$$

$$D_3(\cdot):= D_2(\cdot)+ ax^2+by$$

The definition of linearity: $$\mathfrak{L} \left( \alpha u + \beta v \right) = \alpha \mathfrak{L} \left( u \right) + \beta \mathfrak{L} \left( v  \right)$$, for all $$u,v: \Omega \rightarrow \mathbb{R} $$ and all $$\alpha, \beta \in \mathbb{R}$$

Using this definition of linearity, the operator applied to $$\alpha u + \beta v$$ gives: $$D_1 \left( \alpha u + \beta v\right) = div \left[ \mathbf{\kappa} \cdot grad \left( \alpha u + \beta v \right) \right]$$

$$grad(\cdot)$$ is linear: $$grad(u) = \frac {\partial u}{\partial x_i} e_i$$ and $$\frac {\partial }{\partial x_i} (\cdot)$$ is linear, $$\therefore$$ $$grad \left( \alpha u + \beta v \right) = \alpha \frac {\partial u}{\partial x_i} e_i + \beta \frac {\partial v}{\partial x_i} e_i = \alpha \, grad(u) + \beta \, grad(v)$$

Matrix (tensor) multiplication is a linear operator: $$\bar{A} \cdot \left( \alpha \bar{x} + \beta \bar{y} \right) = \alpha \bar{A} \bar{x} + \beta \bar{A} \bar{y}$$

$$\therefore$$ $$\mathbf{\kappa} \cdot grad(\alpha u + \beta v) = \alpha \mathbf{\kappa} \cdot grad(u) + \beta \mathbf{\kappa} \cdot grad(v)$$

$$div(\cdot)$$ is linear because it is another differential operator. Let $$\bar{a}, \bar{b}: \Omega \rightarrow \mathbb{R}^3$$ and $$\alpha, \beta \in \mathbb{R}$$.

$$\therefore$$ $$div \left( \alpha \bar{a} + \beta \bar{b} \right) = \frac {\partial }{\partial x_i} \left( \alpha a_i + \beta b_i \right) = \alpha \frac {\partial a_i}{\partial x_i} + \beta \frac {\partial b_i}{\partial x_i}$$

Taking all this into consideration, $$D_1(\cdot)$$ satisfies the definition of linearity:

$$D_1 \left( \alpha u + \beta v \right) = \alpha \, div \left[ \mathbf{\kappa} \cdot grad(u) \right] + \beta \, div \left[ \mathbf{\kappa} \cdot grad(v) \right]$$, which means the 2nd-derivative part is linear.

Then,

$$D_2(\alpha u + \beta v) = D_1(\alpha u + \beta v)+ (\alpha u + \beta v)^{1/2}+ [(\alpha u + \beta v)_x]^4+ 2[(\alpha u + \beta v)_y]^2$$

$$\alpha D_2( u ) = \alpha D_1( u )+ \alpha( u )^{1/2}+ \alpha[( u )_x]^4+ 2\alpha[( u )_y]^2$$

$$\beta D_2( v ) = \beta D_1( v )+ \beta( v )^{1/2}+ \beta[( v )_x]^4+ 2\beta[( v )_y]^2$$

Obviously,

$$D_2(\alpha u + \beta v) \neq \alpha D_2( u ) + \beta D_2( v ) $$

$$\therefore$$ $$D_2(\cdot)= D_1(\cdot)+ (\cdot)^{1/2}+ [(\cdot)_x]^4+ 2[(\cdot)_y]^2$$ is non-linear.

Similarly, we can prove that:

$$D_3(\cdot)= D_2(\cdot)+ ax^2+by$$ is non-linear.

$$\therefore$$ the equation is linear with respect to the 2nd-order derivative but non-linear in general.
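To make the contrast concrete, the following is a minimal sympy sketch (the constant scalar $$\kappa$$ and the sample functions $$u$$ and $$v$$ are assumptions chosen purely for illustration) that applies the linearity test to $$D_1$$ and $$D_2$$:

<pre>
# Minimal sympy sketch: D1 passes the linearity test, D2 fails it.
# The constant kappa and the sample u, v are illustrative assumptions.
import sympy as sp

x, y, alpha, beta = sp.symbols('x y alpha beta')
kappa = sp.Integer(2)   # assumed constant scalar kappa, for simplicity

def D1(w):
    # div(kappa * grad w) for a scalar kappa
    return sp.diff(kappa*sp.diff(w, x), x) + sp.diff(kappa*sp.diff(w, y), y)

def D2(w):
    return D1(w) + sp.sqrt(w) + sp.diff(w, x)**4 + 2*sp.diff(w, y)**2

u = x**2*y               # sample functions, chosen arbitrarily
v = sp.sin(x) + y**3

# D1(alpha*u + beta*v) - alpha*D1(u) - beta*D1(v) simplifies to 0 ...
print(sp.simplify(D1(alpha*u + beta*v) - alpha*D1(u) - beta*D1(v)))
# ... but the corresponding difference for D2 does not:
print(sp.simplify(D2(alpha*u + beta*v) - alpha*D2(u) - beta*D2(v)))
</pre>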

The Expression of Second Order PDEs
--Egm6322.s09.lapetina 01:54, 17 April 2009 (UTC) The expression of second order PDEs shown here is original to Professor Loc Vu-Quoc. It cannot be found in any mainstream texts. In this section we will show how to express Second Order Linear PDEs in a matrix format.

Any second order linear PDE in two independent variables can be expressed as:

$$ \left \lfloor \partial_x \; \partial_y \right \rfloor \begin{bmatrix} a & b\\ b & c \end{bmatrix} \begin{Bmatrix} \partial_x u \\ \partial_y u \end{Bmatrix} + \left \lfloor d \; e \right \rfloor \begin{Bmatrix} \partial_x u \\ \partial_y u \end{Bmatrix} + fu + g = 0 $$

where the coefficients $$ \left \{ a, b,...,g \right \} $$ are functions of $$ \left \{ x,y \right \} $$ in general (this includes constants).

The first term is $$\alpha$$, a scalar produced by multiplying a row matrix, a square matrix, and a column matrix:

$$\alpha= \left \lfloor \partial_x \; \partial_y \right \rfloor \begin{bmatrix} a & b\\ b & c \end{bmatrix} \begin{Bmatrix} \partial_x u \\ \partial_y u \end{Bmatrix} = \left \lfloor \partial_x \; \partial_y \right \rfloor \begin{Bmatrix} au_x+bu_y \\ bu_x+cu_y \end{Bmatrix} =(au_x)_x+(bu_y)_x+(bu_x)_y+(cu_y)_y$$.

Using the product rule, this becomes:

$$ \alpha=a {u}_{xx}+a_x u_x +b {u}_{yx}+ b_x u_y+ b {u}_{xy}+ b_y u_x + c {u}_{yy}+ c_y u_y $$

Note: This was required for a homework assignment in Meeting 8.
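This expansion can be checked symbolically. Below is a short sympy sketch (generic functions $$a$$, $$b$$, $$c$$, $$u$$ of $$x$$ and $$y$$ are assumed for illustration) verifying that the product-rule expansion of $$\alpha$$ matches the formula above:

<pre>
# Symbolic check that (a*u_x)_x + (b*u_y)_x + (b*u_x)_y + (c*u_y)_y
# expands to the eight terms listed above (u_xy = u_yx for smooth u).
import sympy as sp

x, y = sp.symbols('x y')
a, b, c, u = [sp.Function(n)(x, y) for n in 'abcu']

alpha = (sp.diff(a*sp.diff(u, x), x) + sp.diff(b*sp.diff(u, y), x)
         + sp.diff(b*sp.diff(u, x), y) + sp.diff(c*sp.diff(u, y), y))

expanded = (a*sp.diff(u, x, 2) + sp.diff(a, x)*sp.diff(u, x)
            + 2*b*sp.diff(u, x, y) + sp.diff(b, x)*sp.diff(u, y)
            + sp.diff(b, y)*sp.diff(u, x) + c*sp.diff(u, y, 2)
            + sp.diff(c, y)*sp.diff(u, y))

print(sp.simplify(alpha - expanded))   # 0
</pre>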

The nature of $$a$$, $$b$$ and $$c$$ largely determines how the second order PDE can be expressed.

Constant Coefficients
If $$a$$, $$b$$, and $$c$$ are constants, their derivatives with respect to $$x$$ and $$y$$ become zero, and $$\alpha$$ reduces to:

$$\alpha=a {u}_{xx}+2 b {u}_{xy}+c {u}_{yy}$$.

This means that the matrix equation can be expressed as:

$$ a {u}_{xx}+2 b {u}_{xy}+c {u}_{yy}+d u_x+e u_y+f u+g=0 $$.

Two Independent Variables
If $$a$$, $$b$$ or $$c$$ is not constant but a function of the two independent variables, the expanded equation becomes

$$\left [ a {u}_{xx}+a_x u_x \right ] + \left [ 2b {u}_{xy} +b_y u_x+ b_x u_y \right ] + \left [c {u}_{yy} + c_y u_y \right ] +d u_x + e u_y + fu+ g=0$$

The first order derivatives can be grouped, and the resulting equation is:

$$ a {u}_{xx}+2 b {u}_{xy}+c {u}_{yy}+\bar d\ u_x+\bar e\ u_y+f u+g=0 $$

where

$$\bar d\ := d+a_x+b_y$$

$$\bar e\ := e+b_x+c_y$$

Note: This was required for homework in Meeting 8, and shown in Meeting 9.

Thus, even with $$a$$, $$b$$ or $$c$$ as non-constants, the matrix form of this equation remains:

$$ \left \lfloor \partial_x \; \partial_y \right \rfloor \begin{bmatrix} a & b\\ b & c \end{bmatrix} \begin{Bmatrix} \partial_x u \\ \partial_y u \end{Bmatrix} + \left \lfloor d \; e \right \rfloor \begin{Bmatrix} \partial_x u \\ \partial_y u \end{Bmatrix} + fu + g = 0 $$

But the resulting expanded equation uses $$ \bar d\ $$ and $$ \bar e\ $$ in place of $$d$$ and $$e$$. Thus we retain the general matrix form of the equation, with only a minimal adjustment of the first-order coefficients.

Note: Showing this was required for homework in Meeting 9.
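As a sanity check on this subsection, the grouping into $$\bar d$$ and $$\bar e$$ can be verified with sympy (generic coefficient functions of $$x$$ and $$y$$ are assumed for illustration):

<pre>
# Check that the matrix form equals the grouped form with d_bar, e_bar.
import sympy as sp

x, y = sp.symbols('x y')
a, b, c, d, e, f, g, u = [sp.Function(n)(x, y) for n in 'abcdefgu']

matrix_form = (sp.diff(a*sp.diff(u, x), x) + sp.diff(b*sp.diff(u, y), x)
               + sp.diff(b*sp.diff(u, x), y) + sp.diff(c*sp.diff(u, y), y)
               + d*sp.diff(u, x) + e*sp.diff(u, y) + f*u + g)

d_bar = d + sp.diff(a, x) + sp.diff(b, y)
e_bar = e + sp.diff(b, x) + sp.diff(c, y)

grouped = (a*sp.diff(u, x, 2) + 2*b*sp.diff(u, x, y) + c*sp.diff(u, y, 2)
           + d_bar*sp.diff(u, x) + e_bar*sp.diff(u, y) + f*u + g)

print(sp.simplify(matrix_form - grouped))   # 0
</pre>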

Homework Problem Involving Linearity

--Egm6322.s09.lapetina 01:56, 17 April 2009 (UTC)

In this problem it will be proven that the following equation is second order and linear with respect to all orders:

$$ (au_x)_x+(bu_y)_x+(bu_x)_y+(cu_y)_y + d u_x +e u_y +fu+g=0$$,

also expressed as:

$$ \left \lfloor \partial_x \; \partial_y \right \rfloor \begin{bmatrix} a & b\\ b & c \end{bmatrix} \begin{Bmatrix} \partial_x u \\ \partial_y u \end{Bmatrix} + \left \lfloor d \; e \right \rfloor \begin{Bmatrix} \partial_x u \\ \partial_y u \end{Bmatrix} + fu + g = 0 $$

where coefficients $$ \left \{ a, b, ... g \right \} $$ are continuous, differentiable functions of independent variables $$x$$ and $$y$$ only, and $$u$$ is a continuous, differentiable function of $$x$$ and $$y$$.

Expanded, it is:

$$ a {u}_{xx}+2 b {u}_{xy}+c {u}_{yy}+\bar d\ u_x+\bar e\ u_y+f u+g=0 $$

where

$$\bar d\ := d+a_x+b_y$$

$$\bar e\ := e+b_x+c_y$$.

By inspection, the highest-order derivative is of second order, which by definition makes this a second order PDE.

The second order components of the equation are:

$$a {u}_{xx}$$, $$2 b {u}_{xy}$$ and $$c {u}_{yy}$$, with coefficients $$a$$, $$b$$, and $$c$$, defined as functions of $$x$$ and $$y$$ only. As a result, this equation is linear with respect to the second order.

The first order components of the equation are:

$$(d+a_x+b_y) u_x $$ and $$(e+b_x+c_y) u_y$$.

These coefficients can also be represented as:

$$d + div [a,b]$$ and

$$e+ div[b, c]$$,

where $$div [a,b] := a_x + b_y$$, treating $$a$$ and $$b$$ as the components of a vector.

The divergence operator is linear, and since $$d$$ and $$e$$ are by definition continuous, differentiable functions of $$x$$ and $$y$$ only, the coefficients

$$\bar d\ := d+div [a,b]$$

and

$$\bar e\ := e+ div[b, c]$$

are also functions of $$x$$ and $$y$$ only, so the PDE is linear with respect to the first order.

Finally, $$f$$ and $$g$$ are both functions of $$x$$ and $$y$$ only, making the PDE linear with respect to the zero order.

 You need to define $$\displaystyle {\rm div} [a,b] := a_x + b_y$$, but it is probably best NOT to use this notation, unless you define the vector with $$\displaystyle a$$ and $$\displaystyle b$$ being its components; it is getting more complicated than necessary. Egm6322.s09 12:26, 7 February 2009 (UTC)
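These observations can be bundled into one symbolic test. The sketch below (generic coefficient functions assumed; the inhomogeneous term $$g$$ is dropped, since an operator retaining a $$g \neq 0$$ term is affine rather than linear) checks the linearity definition for the full operator:

<pre>
# Check L(alpha*u + beta*v) == alpha*L(u) + beta*L(v) for the operator
# formed by the second-, first-, and zero-order terms (g omitted).
import sympy as sp

x, y, alpha, beta = sp.symbols('x y alpha beta')
a, b, c, d, e, f = [sp.Function(n)(x, y) for n in 'abcdef']

def L(w):
    return (sp.diff(a*sp.diff(w, x), x) + sp.diff(b*sp.diff(w, y), x)
            + sp.diff(b*sp.diff(w, x), y) + sp.diff(c*sp.diff(w, y), y)
            + d*sp.diff(w, x) + e*sp.diff(w, y) + f*w)

u = sp.Function('u')(x, y)
v = sp.Function('v')(x, y)

print(sp.expand(L(alpha*u + beta*v) - alpha*L(u) - beta*L(v)))   # 0
</pre>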

Proof of the Mixed Derivative Theorem

--Egm6322.s09.lapetina 01:56, 17 April 2009 (UTC)

This proof is closely paraphrased from Thomas' Calculus.

Assume $$f(x,y)$$ and $$f_x$$, $$f_y$$, $${f}_{xy}$$, and $${f}_{yx}$$ are defined throughout an open region containing point $$\left ( a, b \right )$$ and are all continuous at $$\left ( a, b \right )$$.

Proving the equality of $${f}_{xy} \left ( a, b \right )$$ and $${f}_{yx} \left ( a, b \right )$$ requires four applications of the mean value theorem.

Point $$\left ( a, b \right )$$ lies within a rectangle $$R$$; let $$ h, k \in \mathbb{R} $$ be numbers such that the point $$\left ( a+h, b+k \right )$$ also lies within $$R$$.

Define the quantity $$\delta$$ by

$$\delta= F(a+h)-F(a)$$, where

$$F(x)=f(x, b+k)-f(x, b)$$

The first application of the mean value theorem is to $$F$$, and as a result:

$$\delta= h F'(c_1)$$, where $$c_1$$ is between $$a$$ and $$a+h$$.

We can take the derivative of $$F$$ with respect to $$x$$, and we are left with

$$F'(x)={f}_{x}(x, b+k)-{f}_{x}(x, b)$$

and

$$\delta= h \left [{f}_{x}(c_1, b+k)-{f}_{x}(c_1, b) \right ]$$

If we apply the mean value theorem to the function $$g(y)=f_x (c_1, y)$$, we have

$$g(b+k)-g(b)=kg'(d_1)$$,

or

$$f_x (c_1, b+k) -f_x (c_1, b)= k{f}_{xy} (c_1, d_1)$$

for some $$d_1$$ between $$b$$ and $$b+k$$; this rewrites the bracketed factor in the expression for $$\delta$$.

This leaves us with $$\delta = hk {f}_{xy}(c_1,d_1)$$ for some point in a second rectangle, $$R'$$, which has vertices $$(a,b), (a+h, b), (a, b+k), (a+h,b+k)$$, all of which lie within rectangle $$R$$.

Looking back at the original definition of $$\delta, F$$ and $$f$$, we see that

$$\delta = f(a+h, b+k)-f(a+h, b)-f(a, b+k)+f(a,b)$$, which can be expressed as:

$$\delta = \left [f(a+h, b+k)-f(a, b+k) \right ]- \left [f(a+h, b)-f(a,b) \right ]$$

which can be reduced to:

$$\delta= \phi (b+k)- \phi (b)$$

where $$\phi (y) =f(a+h, y)-f(a,y)$$.

Applying the Mean Value Theorem a third time, now to $$\phi$$, results in:

$$\delta = k \phi'(d_2)$$ for some $$d_2$$ between $$b$$ and $$b+k$$.

Differentiating $$\phi$$ with respect to $$y$$ results in:

$$\phi'(y)=f_y(a+h, y) -f_y (a,y)$$, which allows $$\delta$$ to be expressed as:

$$\delta= k \left [{f}_{y}(a+h, d_2)-{f}_{y}(a, d_2) \right ]$$.

If we apply the Mean Value Theorem a fourth time, we arrive at:

$$\delta =kh {f}_{yx} (c_2, d_2)$$ for some $$c_2$$ between $$a$$ and $$a+h$$. This means that:

$$\delta=hk\,{f}_{xy}(c_1, d_1)=hk\,{f}_{yx}(c_2, d_2)$$, so $${f}_{xy}(c_1, d_1)={f}_{yx}(c_2, d_2)$$, where $$(c_1, d_1)$$ and $$(c_2, d_2)$$ both lie within $$R'$$. If we allow $$h$$ and $$k$$ to approach zero, the rectangle $$R'$$ converges to the point $$(a,b)$$, and by the continuity of $${f}_{xy}$$ and $${f}_{yx}$$ we conclude $${f}_{xy}(a,b)={f}_{yx}(a,b)$$.
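A quick symbolic illustration of the theorem (the sample function below is an arbitrary smooth choice, not from the lecture):

<pre>
# Mixed derivative theorem: f_xy == f_yx for a smooth sample f.
import sympy as sp

x, y = sp.symbols('x y')
f = sp.exp(x*y)*sp.sin(x + y**2)   # arbitrary smooth sample function

f_xy = sp.diff(sp.diff(f, x), y)
f_yx = sp.diff(sp.diff(f, y), x)

print(sp.simplify(f_xy - f_yx))   # 0
</pre>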

Proof that the Divergence Operator is Linear

--Egm6322.s09.lapetina 01:56, 17 April 2009 (UTC)

Prove that the divergence operator is a linear operator.

This relies upon the fact that the derivative, acting on differentiable functions, is itself a linear operator. To prove this we will examine the definition of the derivative:

$$f'(x) = \lim_{h\rightarrow 0} \left [ \frac{f(x+h)-f(x)} {h}\ \right ]$$

The derivative is both homogeneous and additive, as shown by the Constant Multiple Rule: $$ \frac{d}{dx} \ (cu)= \lim_{h\rightarrow 0} \left [ \frac{cu(x+h)-cu(x)} {h}\ \right ]$$

$$\frac{d}{dx} (cu)= c  \lim_{h\rightarrow 0} \left [ \frac{u(x+h)-u(x)} {h}\ \right ]$$

$$\frac {d}{dx} (cu)=c \frac{du}{dx}$$

and the Derivative Sum Rule:

$$\frac{d}{dx} \ \left [u(x) +v(x) \right ]= \lim_{h\rightarrow 0} \left [ \frac{\left [u(x+h)+v(x+h) \right ]-\left [u(x)+v(x) \right ]} {h}\ \right ]$$

$$\frac{d}{dx} \ \left [u(x) +v(x) \right ]= \lim_{h\rightarrow 0} \left [\frac{u(x+h)-u(x)}{h} \ +\frac {v(x+h)-v(x)} {h}\ \right ]$$

$$\frac{d}{dx} \ \left [u(x) +v(x) \right ]= \lim_{h\rightarrow 0} \frac{u(x+h)-u(x)}{h} \ + \lim_{h\rightarrow 0}\frac {v(x+h)-v(x)} {h} = \frac {du}{dx} +\frac{dv} {dx} $$

Thus the derivative in a single variable is both homogeneous and additive, and therefore linear.

This can be extended to partial derivatives and the divergence operator in the following fashion:

Assume two real vector fields $$\overrightarrow{u}$$ and $$\overrightarrow{v}$$, and scalars $$\alpha, \beta \in \mathbb{R} $$.

If the divergence operator satisfies the following equation:

$$div \left ( \alpha \overrightarrow{u} + \beta \overrightarrow{v} \right ) = \alpha \, div \left ( \overrightarrow{u} \right ) + \beta \, div \left ( \overrightarrow{v} \right )$$

then it is linear.

Also, if the divergence operator is both homogeneous and additive, it is linear.

By definition,

$$ div \left ( \overrightarrow{u} \right ) = \sum_{k=1}^n \frac {\partial u_k}{\partial x_k}$$ where $$u_k$$ denotes the $$k$$th component of $$\overrightarrow{u}$$.

For $$\alpha$$ and $$\overrightarrow{u}$$, it is clear that:

$$ div \left ( \alpha \overrightarrow{u} \right) = \frac {\partial \left ( \alpha u_1 \right)}{\partial x_1} + \frac {\partial \left ( \alpha u_2 \right)}{\partial x_2}+ \dots + \frac {\partial \left ( \alpha u_n \right)}{\partial x_n}$$

Using the constant multiple rule, this becomes:

$$ div \left ( \alpha \overrightarrow{u} \right) = \alpha \frac {\partial u_1}{\partial x_1} + \alpha \frac {\partial u_2}{\partial x_2}+ \dots + \alpha \frac {\partial u_n}{\partial x_n} = \alpha \sum_{k=1}^n \frac {\partial u_k}{\partial x_k} = \alpha \, div \left ( \overrightarrow{u} \right)$$

Therefore, the divergence operator is homogeneous.

To prove that the divergence operator is additive, we will use both $$\overrightarrow{u}$$ and $$\overrightarrow{v}$$, with components $$u_k$$ and $$v_k$$.

$$ div \left ( \overrightarrow{u} +\overrightarrow{v} \right) = \frac {\partial \left ( u_1 + v_1 \right)}{\partial x_1} + \frac {\partial \left ( u_2 + v_2 \right)}{\partial x_2}+ \dots + \frac {\partial \left ( u_n + v_n \right)}{\partial x_n}$$

Because $$\overrightarrow{u}$$ and $$\overrightarrow{v}$$ are both real vector fields, the partial derivative distributes over the sum, and the above statement is equivalent to:

$$ div \left ( \overrightarrow{u} +\overrightarrow{v} \right) = \frac {\partial u_1}{\partial x_1} +  \frac {\partial v_1}{\partial x_1} + \frac {\partial u_2}{\partial x_2}+ \frac {\partial v_2}{\partial x_2} + \dots + \frac {\partial u_n}{\partial x_n} + \frac {\partial v_n}{\partial x_n}$$

Which can be expressed as:

$$ div \left ( \overrightarrow{u} +\overrightarrow{v} \right) = \sum_{k=1}^n \frac {\partial u_k}{\partial x_k} + \sum_{k=1}^n \frac {\partial v_k}{\partial x_k} = div \left ( \overrightarrow{u} \right ) + div \left ( \overrightarrow{v} \right )$$

indicating that the divergence operator is additive.

Because it is both additive and homogeneous, the divergence operator is linear.
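The same conclusion can be checked symbolically in $$\mathbb{R}^3$$; the sketch below (sample vector fields chosen arbitrarily) tests $$div(\alpha \overrightarrow{u} + \beta \overrightarrow{v}) = \alpha\, div(\overrightarrow{u}) + \beta\, div(\overrightarrow{v})$$:

<pre>
# Symbolic check of divergence linearity in R^3.
import sympy as sp

x1, x2, x3, alpha, beta = sp.symbols('x1 x2 x3 alpha beta')
X = (x1, x2, x3)

def div(F):
    # divergence: sum over components of dF_k/dx_k
    return sum(sp.diff(Fk, xk) for Fk, xk in zip(F, X))

u = (x1**2*x2, sp.sin(x3), x1*x2*x3)   # sample vector fields
v = (x2 + x3, x1*x3**2, sp.exp(x1))

lhs = div(tuple(alpha*uk + beta*vk for uk, vk in zip(u, v)))
rhs = alpha*div(u) + beta*div(v)

print(sp.simplify(lhs - rhs))   # 0
</pre>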

Clarification of topics from R1 (Repeat R1)
--Egm6322.s09.xyz 18:20, 24 April 2009 (UTC)

It has been observed by Dr. Vu-Quoc that some of the basic concepts concerning linearity and linear operators covered in R1 were not fully understood by students. These concepts were presented a second time by Dr. Vu-Quoc in the hope of communicating these ideas to students more effectively. Linearity (and the linear operator) is important in the study of PDEs. It forms the basis of study for Linear Transformation of Coordinates and future topics. A detailed review of these concepts is given here:

$$\blacktriangleright$$ Students were asked to expand the following expression, $$ D(u):= div \left [ \mathbf{k} \cdot grad(u) \right ] $$    (Coordinate form)
 * $$= \frac{\partial }{\partial x_i} \left [ k_{ij} \frac{\partial u}{\partial x_j} \right ]$$ where $$i,j \in \{1,2\}$$    (Component form)

Most students expanded this equation using the Leibniz Rule (aka the "Product Rule"). Doing only this would have been incomplete. Dr. Vu-Quoc wanted an expansion of the two indices $$(i,j)$$ in the following manner:
 * $$= \sum_{i}\sum_{j} \frac{\partial }{\partial x_i} \left [ k_{ij} \frac{\partial u}{\partial x_j} \right ]$$
 * $$= \frac{\partial }{\partial x_1} \left ( k_{11} \frac{\partial u}{\partial x_1} \right ) + \frac{\partial }{\partial x_1} \left ( k_{12} \frac{\partial u}{\partial x_2} \right ) + \frac{\partial }{\partial x_2} \left ( k_{21} \frac{\partial u}{\partial x_1} \right ) + \frac{\partial }{\partial x_2} \left ( k_{22} \frac{\partial u}{\partial x_2} \right )$$

Note: the above expression is the correct solution to the homework given in R1.
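The index expansion can also be verified mechanically; the following sympy sketch (generic $$k_{ij}(x_1,x_2)$$ and $$u(x_1,x_2)$$ are assumptions for illustration) compares the double sum with the four written-out terms:

<pre>
# Check: sum_i sum_j d/dx_i ( k_ij * du/dx_j ) equals the four-term expansion.
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
X = (x1, x2)
u = sp.Function('u')(x1, x2)
k = {(i, j): sp.Function('k%d%d' % (i, j))(x1, x2)
     for i in (1, 2) for j in (1, 2)}

lhs = sum(sp.diff(sum(k[i, j]*sp.diff(u, X[j-1]) for j in (1, 2)), X[i-1])
          for i in (1, 2))

rhs = (sp.diff(k[1, 1]*sp.diff(u, x1), x1) + sp.diff(k[1, 2]*sp.diff(u, x2), x1)
       + sp.diff(k[2, 1]*sp.diff(u, x1), x2) + sp.diff(k[2, 2]*sp.diff(u, x2), x2))

print(sp.simplify(lhs - rhs))   # 0
</pre>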

Leibniz Rule --Egm6322.s09.xyz 18:20, 24 April 2009 (UTC)

Leibniz Rule, also known as "the Product Rule", is given as: $$ \left ( f \cdot g \right )' = f' \cdot g + f \cdot g' $$ The product rule is a special case of the multivariable chain rule: $$\frac{d}{dx} \left ( ab \right ) = \frac{\partial }{\partial a} \left ( ab \right ) \frac{da}{dx} + \frac{\partial }{\partial b} \left ( ab \right ) \frac{db}{dx} = b \frac{da}{dx} + a \frac{db}{dx}$$ A more general form of Leibniz Rule for higher order derivatives is given by the following expression: $$\left( fg \right)^{(k)} = \sum_{r=0}^k \dbinom{k}{r}f^{(k-r)}g^{(r)}$$

where "$$f,g$$ are real (or complex) functions defined on an open interval of $$\mathbb{R}$$. Also, $$f$$ and $$g$$ are $$k$$ times differentiable" [source:]

$$\blacktriangleright$$ Students were asked to prove that the operator, $$D \left( u \right)$$, is linear. Recall the definition of linearity: $$\mathfrak{L} \left( \alpha u + \beta v \right) = \alpha \mathfrak{L} \left( u \right) + \beta \mathfrak{L} \left( v  \right)$$, for all $$u,v: \Omega \rightarrow \mathbb{R} $$ and all $$\alpha, \beta \in \mathbb{R}$$

Using this definition of linearity, the above expression for $$D(u)$$ applied to $$\alpha u + \beta v$$ is: $$D \left( \alpha u + \beta v\right) = div \left[ \mathbf{k} \cdot grad \left( \alpha u + \beta v \right) \right]$$


 * note: $$grad(\cdot)$$ is linear
 * $$grad(u) = \frac {\partial u}{\partial x_i} e_i$$ and $$\frac {\partial }{\partial x_i} (\cdot)$$ is linear
 * $$\therefore$$ $$grad \left( \alpha u + \beta v \right) = \alpha grad(u) + \beta grad(v)$$


 * note: Matrix (tensor) multiplication is a linear operator
 * Recall that $$\bar{A} \cdot \left( \alpha \bar{x} + \beta \bar{y} \right) = \alpha \bar{A} \bar{x} + \beta \bar{A} \bar{y}$$
 * $$\therefore$$ $$\mathbf{k} \cdot grad \left( \alpha u + \beta v \right) = \alpha \mathbf{k} \cdot grad(u) + \beta \mathbf{k} \cdot grad(v)$$


 * note: $$div(\cdot)$$ is linear because it is another differential operator
 * let $$\bar{a}, \bar{b}: \Omega \rightarrow \mathbb{R}^3$$ and $$\alpha, \beta \in \mathbb{R}$$
 * $$\therefore$$ $$div \left( \alpha \bar{a} + \beta \bar{b} \right) = \frac {\partial }{\partial x_i} \left( \alpha a_i + \beta b_i \right) = \alpha \frac {\partial a_i}{\partial x_i} + \beta \frac {\partial b_i}{\partial x_i}$$

Taking all this into consideration, $$D(\cdot)$$ satisfies the definition of linearity.

$$D \left( \alpha u + \beta v \right) = \alpha div \left[ \mathbf{k} \cdot grad(u) \right] + \beta div \left[ \mathbf{k} \cdot grad(v) \right]$$  (verified)$$\surd$$

Definition of a linear operator
--Egm6322.s09.xyz 18:21, 24 April 2009 (UTC)

A linear operator, $$\mathfrak{L} ( \cdot)$$, satisfies the following two criteria:

1. Additive
 * $$\mathfrak{L} \left( u + v \right) = \mathfrak{L}(u) + \mathfrak{L}(v)$$ for all $$u,v: \Omega \rightarrow \mathbb{R}$$


 * note: $$\mathfrak{L}\left(1 \cdot u + 1 \cdot v \right) = 1 \cdot \mathfrak{L}(u) + 1 \cdot \mathfrak{L}(v)$$

2. Homogeneous
 * $$\mathfrak{L}(\alpha u) = \alpha \mathfrak{L}(u)$$


 * note: $$ \mathfrak{L}(0) = 0 $$. This means that the image of zero under $$ \mathfrak{L}(\cdot) $$ is zero if $$\mathfrak{L}(\cdot) $$ is linear.

Example of linear map for $$\mathbb{R}^n$$ (n-dim case)
--Egm6322.s09.xyz 18:20, 24 April 2009 (UTC)

Given: $$\bar{A}: \mathbb{R}^n \rightarrow \mathbb{R}^n$$ where $$\mathbb{R}^n \ni \bar{x} $$, $$\mathbb{R}^n \ni \bar{y} $$, and $$\bar{x} \mapsto \bar{y} = \bar{A} \bar{x}$$. A pictorial representation of this linear map can be found in the image below.



An element of the domain (or the range) is a vector, represented as a column matrix $$\bar{x}$$. The image of $$\bar{x}$$ in the range is $$\bar{y}$$. The mapping from $$\bar{x}$$ to $$\bar{y}$$ is $$\bar{A}$$.

note: more generally, $$\mathfrak{D}$$omain = $$\mathbb{R}^m$$ and $$\mathfrak{R}$$ange = $$\mathbb{R}^n$$. The $$\bar{A}$$ matrix is then of order n x m, i.e. $$\bar{A} \in \mathbb{R}^{n \times m}$$. This can be verified in the following manner: $$\bar{y} = \bar{A} \bar{x}$$ where $$\bar{y}$$ is (n x 1), $$\bar{A}$$ is (n x m), $$\bar{x}$$ is (m x 1) (verified)$$\surd$$
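A small numpy sketch (the matrix $$\bar{A} \in \mathbb{R}^{2 \times 3}$$ and the vectors below are arbitrary illustrative choices) confirming that the map $$\bar{x} \mapsto \bar{A}\bar{x}$$ satisfies the linearity definition:

<pre>
# Numeric check: A(alpha*x + beta*y) == alpha*A*x + beta*A*y.
import numpy as np

A = np.array([[2.0, 0.0, 1.0],
              [1.0, 3.0, 0.0]])        # A in R^{2x3}: maps R^3 -> R^2
x = np.array([1.0, 2.0, 3.0])
y = np.array([4.0, 5.0, 6.0])
alpha, beta = 2.0, -1.5

print(np.allclose(A @ (alpha*x + beta*y),
                  alpha*(A @ x) + beta*(A @ y)))   # True
</pre>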

Example of not-linear mapping
--Egm6322.s09.xyz 18:27, 24 April 2009 (UTC)

 NOTE: Avoid the word "non-linear", which is more general; use "not-linear" instead to indicate affine mappings, which are not linear, i.e., not homogeneous. Egm6322.s09 21:48, 8 February 2009 (UTC)

Changed the title to "not-linear" --Egm6322.s09.xyz 18:26, 24 April 2009 (UTC)

Now consider the following mapping: $$\bar{y} = \bar{A}\bar{x} + \bar{b}$$. Note: this mapping is not linear because a zero $$\bar{x}$$-matrix does not map to a zero $$\bar{y}$$-matrix, so homogeneity fails. This is an affine mapping.

$$M: \mathbb{R}^m \rightarrow \mathbb{R}^n$$ where $$\mathbb{R}^m \ni \bar{x} $$, and $$\bar{x} \mapsto \bar{y} = \bar{A} \bar{x} + \bar{b}$$

Clearly, $$M \left ( \bar{0} \right) = \bar{b}$$. Therefore, $$M \left ( \cdot \right)$$ is not a linear map; it is an affine map.
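The failure of homogeneity is easy to see numerically; in the sketch below (with $$\bar{A}$$ and $$\bar{b}$$ chosen arbitrarily), $$M(\bar{0}) = \bar{b} \neq \bar{0}$$:

<pre>
# The affine map M(x) = A x + b fails the homogeneity test.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
b = np.array([5.0, 6.0])

def M(xbar):
    return A @ xbar + b

print(M(np.zeros(2)))                        # [5. 6.] = b, not the zero vector
print(M(2*np.ones(2)) - 2*M(np.ones(2)))     # [-5. -6.]: M(2x) != 2*M(x)
</pre>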



Notes on Mathematical Syntax
--Egm6322.s09.xyz 18:22, 24 April 2009 (UTC)

The following table is a reference summary of descriptions for special mathematical symbols that are used frequently in this course:

Example: Rotation followed by translation
--Egm6322.s09.xyz 18:22, 24 April 2009 (UTC)

$$ \begin{Bmatrix} y_1\\ y_2 \end{Bmatrix} = \begin{bmatrix} R_{11} & R_{12} \\ R_{21} & R_{22} \end{bmatrix} \begin{Bmatrix} x_1\\ x_2 \end{Bmatrix} + \begin{Bmatrix} b_1\\ b_2 \end{Bmatrix} $$
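A numpy illustration of this affine map (the angle $$\theta$$, the translation $$\bar{b}$$, and the input point are arbitrary choices):

<pre>
# Rotation by theta followed by translation by b.
import numpy as np

theta = np.pi/4
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
b = np.array([1.0, 2.0])

x = np.array([1.0, 0.0])
y = R @ x + b          # rotate x by theta, then translate by b
print(y)               # [1.7071..., 2.7071...]
</pre>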

Jean le Rond d'Alembert (1717 - 1783) --Egm6322.s09.xyz 18:22, 24 April 2009 (UTC)

d'Alembert was a French scholar in the areas of mathematics, fluid mechanics, and physics. He collaborated on one of the first encyclopedias, the Encyclopédie. He died due to a bladder illness, and was buried in an unmarked grave.


=Transformation of Coordinates=

--Egm6322.s09.lapetina 01:57, 17 April 2009 (UTC)



Functions of coordinate transformation in two dimensions can be expressed as:

$$ \begin{Bmatrix} x\\ y \end{Bmatrix} = \mathbf{V} \begin{Bmatrix} \bar {x} \\ \bar {y} \end{Bmatrix} $$

where $$\mathbf{V}$$ is a two by two matrix.

One example of a linear transformation of coordinates is the rotation of the coordinate axes by an angle $$\theta$$.

The matrix for this rotation is:

$$ \begin{bmatrix} \cos \theta & -\sin \theta \\ \sin \theta & \cos \theta \end{bmatrix} $$

Concepts
--EGM6322.S09.TIAN 17:44, 24 April 2009 (UTC)

The operator $$div(\cdot)$$ provides an example of a mapping of the form $$\mathbb{R}^m$$ $$\to$$ $$\mathbb{R}^n$$, i.e., $$(M:\mathbb{R}^m \to \mathbb{R}^n)$$.

$$\mathbb{R}^m, \mathbb{R}^n$$ are vector spaces (of tensors, matrices). The divergence maps a vector field (vector-valued function) into a scalar function. In other words, the domain and range of $$div(\cdot)$$ are function spaces.

$$x= \phi (\bar{x},\bar{y})$$

$$y= \psi (\bar{x},\bar{y})$$

Linear coordinate transformation, Eq. (5), p. 9-1:

$$u(x,y)= u \left ( \phi (\bar{x},\bar{y}), \psi (\bar{x},\bar{y}) \right )$$

$$=u(\bar{x},\bar{y})$$,  which is an abuse of notation by using "u".

$$=\bar{u}(\bar{x},\bar{y})$$,  which is a more rigorous notation.

One Example
--EGM6322.S09.TIAN 17:44, 24 April 2009 (UTC)

Let $$u(x)=ax+b$$

Condition: $$x=\phi (\bar x) = \sin \bar x$$

$$u(x)= u(\phi (\bar x)) = a \sin \bar {x} + b$$

=$$u(\bar x)$$ $$\gets$$  abuse of notation

=$$\bar {u} (\bar x)$$  $$\gets$$  more rigorous
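The same substitution can be reproduced mechanically with sympy (symbols named as in the example above):

<pre>
# u(x) = a*x + b composed with x = sin(xbar).
import sympy as sp

a, b, xbar = sp.symbols('a b xbar')
x = sp.sin(xbar)    # x = phi(xbar)
u = a*x + b         # u(phi(xbar))
print(u)            # a*sin(xbar) + b, i.e. the function ubar(xbar)
</pre>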

Another One
--EGM6322.S09.TIAN 17:43, 24 April 2009 (UTC)

$$u_x(x,y)= \frac{\partial u}{\partial x}(x,y) =u_x(\phi(\bar{x},\bar{y}),\psi(\bar{x},\bar{y})) = \frac{\partial }{\partial x} \bar{u}(\bar{x},\bar{y}) =\frac{\partial \bar u}{\partial \bar x} \frac{\partial \bar x}{\partial x} + \frac{\partial \bar u}{\partial \bar y} \frac{\partial \bar y}{\partial x} = \bar{u}_{\bar {x}} \frac{\partial \bar x}{\partial x} + \bar{u}_{\bar {y}} \frac{\partial \bar y}{\partial x}$$

Define:

$$\bar x$$ = $$\bar {x} (x,y)$$ = $$\bar {\phi} (x,y)$$

$$\bar y$$ = $$\bar {y} (x,y)$$ = $$\bar {\psi} (x,y)$$

$$u_y(x,y)= \frac{\partial }{\partial y}\bar u(\bar {x},\bar {y}) = \bar{u}_{\bar {x}} \frac{\partial \bar x}{\partial y} + \bar{u}_{\bar {y}} \frac{\partial \bar y}{\partial y}$$

Matrix Form
--EGM6322.S09.TIAN 17:42, 24 April 2009 (UTC)

$$\partial_x u= \big\lfloor \frac{\partial \bar x}{\partial x} \;\; \frac{\partial \bar y}{\partial x} \big\rfloor \begin{Bmatrix} \partial_ {\bar x} \\ \partial_ {\bar y} \end{Bmatrix} (\bar u)$$

Homework: Likewise for $$\partial_y $$ --EGM6322.S09.TIAN 17:41, 24 April 2009 (UTC)

Likewise for $$\partial_y $$

$$\partial_y u= \big\lfloor \frac{\partial \bar x}{\partial y} \;\; \frac{\partial \bar y}{\partial y} \big\rfloor \begin{Bmatrix} \partial_ {\bar x} \\ \partial_ {\bar y} \end{Bmatrix} (\bar u)$$

Then,

$$\begin{Bmatrix} \partial_ x \\ \partial_ y \end{Bmatrix} = \begin{bmatrix} \frac{\partial \bar x}{\partial x} & \frac{\partial \bar y}{\partial x} \\ \frac{\partial \bar x}{\partial y} & \frac{\partial \bar y}{\partial y} \end{bmatrix} \begin{Bmatrix} \partial_ {\bar x} \\ \partial_ {\bar y} \end{Bmatrix} $$

$$\begin{bmatrix} \frac{\partial \bar x}{\partial x} & \frac{\partial \bar y}{\partial x} \\ \frac{\partial \bar x}{\partial y} & \frac{\partial \bar y}{\partial y} \end{bmatrix} $$ is known as the Jacobian matrix (depending on convention, the Jacobian matrix is sometimes defined as the transpose of this matrix).

More simply and more generally,

$$(x_1,\cdots,x_n)$$ $$\to$$ $$(\bar{x}_1,\cdots,\bar{x}_n)$$

In indicial notation: $$\bar{x}_i = \bar{x}_i (x_1,\cdots,x_n)$$

$$J_{n \times n}$$ = $$\begin{bmatrix} \frac{\partial \bar x_i}{\partial x_j} \end{bmatrix}_{n \times n} $$

Here, "$$i$$"is the row index while "$$j$$"is the column index.

Carl Gustav Jacob Jacobi (1804-1851) --EGM6322.S09.TIAN 17:42, 24 April 2009 (UTC)

The Jacobian matrix was named after Carl Gustav Jacob Jacobi, a 19th century mathematician from Prussia. Born in 1804, Jacobi studied at Berlin University and later taught at Königsberg University. He is known for Jacobi's elliptic functions, the Jacobian, the Jacobi symbol, and the Jacobi identity. He died in Berlin in 1851.

= References =

= Signatures =

--EGM6322.S09.TIAN 14:49, 6 February 2009 (UTC)Miao Tian --Egm6322.s09.xyz 14:52, 6 February 2009 (UTC) --Egm6322.s09.lapetina 14:53, 6 February 2009 (UTC)