
One case

$$\kappa = \underline{\kappa} (x,y,u,u_x,u_y) \Rightarrow$$ the PDE is quasilinear.

Definition of quasilinear: for a PDE of order $$n$$ (i.e., the highest derivative terms are of order $$n$$), the coefficients of the $$n$$th-order derivatives are functions of $$\left(x,y,u,u_x,u_y,\cdots,\frac{\partial^m u}{\partial x^p \partial y^q}\right)$$

Here, we assume there are two independent variables; the derivatives appearing in the coefficients are of order $$m$$, with $$p+q=m$$ and $$m<n$$.

Another Case
PDEs linear with respect to 2nd derivative, but still non-linear in general:

$$div(\kappa \cdot grad\, u)+ f(x,y,u,u_x,u_y)=0$$

Here, $$\kappa = \kappa (x,y)$$, and $$f(x,y,u,u_x,u_y)$$ is nonlinear with respect to its arguments in general, e.g.

$$div(\kappa \cdot grad\, u)+ ax^2+by+\sqrt{u}+(u_x)^4+2(u_y)^2=0$$

Homework: Show The Linearity Of The Equation Above

Show that it is linear with respect to the 2nd derivatives, but non-linear in general.

To make it clear,

$$D_1(\cdot):= div[\kappa (x,y) \cdot grad (\cdot)]$$

We can do this:

$$D_2(\cdot):= D_1(\cdot)+ (\cdot)^{1/2}+ [(\cdot)_x]^4+ 2[(\cdot)_y]^2$$

$$D_3(\cdot):= D_2(\cdot)+ ax^2+by$$

The definition of linearity: $$\mathfrak{L} \left( \alpha u + \beta v \right) = \alpha \mathfrak{L} \left( u \right) + \beta \mathfrak{L} \left( v \right)$$, $$\forall$$ $$u,v: \Omega \rightarrow \mathbb{R}$$ and $$\forall$$ $$\alpha, \beta \in \mathbb{R}$$.

Using this definition of linearity, apply $$D_1$$ to $$\alpha u + \beta v$$: $$D_1 \left( \alpha u + \beta v\right) = div \left[ \mathbf{\kappa} \cdot grad \left( \alpha u + \beta v \right) \right]$$

$$grad(\cdot)$$ is linear: $$grad(u) = \frac {\partial u}{\partial x_i} e_i$$ and $$\frac {\partial}{\partial x_i} (\cdot)$$ is linear, $$\therefore$$ $$grad \left( \alpha u + \beta v \right) = \left( \alpha \frac {\partial u}{\partial x_i} + \beta \frac {\partial v}{\partial x_i} \right) e_i = \alpha\, grad(u) + \beta\, grad(v)$$

Matrix (tensor) multiplication is a linear operator $$\bar{A} \cdot \left( \alpha \bar{x} + \beta \bar{y} \right) = \alpha \bar{A} \bar{x} + \beta \bar{A} \bar{y}$$

$$\therefore$$ $$\mathbf{\kappa} \cdot grad(\alpha u + \beta v) = \alpha \mathbf{\kappa} \cdot grad(u) + \beta \mathbf{\kappa} \cdot grad(v)$$

$$div(\cdot)$$ is linear because it is another differential operator. Let $$\bar{a}, \bar{b}: \Omega \rightarrow \mathbb{R}^3$$ and $$\alpha, \beta \in \mathbb{R}$$.

$$\therefore$$ $$div \left( \alpha \bar{a} + \beta \bar{b} \right) = \frac {\partial }{\partial x_i} \left( \alpha a_i + \beta b_i \right) = \alpha \frac {\partial a_i}{\partial x_i} + \beta \frac {\partial b_i}{\partial x_i}$$

Taking all this into consideration, $$D_1$$ satisfies the definition of linearity:

$$D_1 \left( \alpha u + \beta v \right) = \alpha\, div \left[ \mathbf{\kappa} \cdot grad(u) \right] + \beta\, div \left[ \mathbf{\kappa} \cdot grad(v) \right]$$, which means the 2nd-derivative part is linear.

Then,

$$D_2(\alpha u + \beta v) = D_1(\alpha u + \beta v)+ (\alpha u + \beta v)^{1/2}+ [(\alpha u + \beta v)_x]^4+ 2[(\alpha u + \beta v)_y]^2$$

$$\alpha D_2( u ) = \alpha D_1( u )+ \alpha( u )^{1/2}+ \alpha[( u )_x]^4+ 2\alpha[( u )_y]^2$$

$$\beta D_2( v ) = \beta D_1( v )+ \beta( v )^{1/2}+ \beta[( v )_x]^4+ 2\beta[( v )_y]^2$$

Obviously,

$$D_2(\alpha u + \beta v) \neq \alpha D_2( u ) + \beta D_2( v )$$

$$\therefore$$ $$D_2(\cdot)= D_1(\cdot)+ (\cdot)^{1/2}+ [(\cdot)_x]^4+ 2[(\cdot)_y]^2$$ is non-linear.

Similarly, we can prove that:

$$D_3(\cdot)= D_2(\cdot)+ ax^2+by$$ is non-linear.

$$\therefore$$ the equation is linear with respect to the 2nd-order derivatives but non-linear in general.
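The non-linearity of $$D_2$$ admits a quick numerical counterexample. For arbitrarily chosen constant fields every derivative term vanishes, so $$D_2$$ reduces to the square-root term alone:

```python
import math

# constant fields u ≡ u0, v ≡ v0 (sample values): all derivative terms of D2
# vanish, leaving only the square-root term
u0, v0 = 4.0, 9.0
alpha, beta = 1.0, 1.0

lhs = math.sqrt(alpha * u0 + beta * v0)              # D2(alpha*u + beta*v)
rhs = alpha * math.sqrt(u0) + beta * math.sqrt(v0)   # alpha*D2(u) + beta*D2(v)
print(lhs, rhs)  # sqrt(13) vs 5.0 -- not equal, so D2 is non-linear
```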

Coordinate Transformation (continued)
$$div(\cdot)$$ is an example of a map $$M:\mathbb{R}^m \to \mathbb{R}^n$$.

$$\mathbb{R}^m, \mathbb{R}^n$$ here stand for spaces of vectors (tensors, matrices). $$div(\cdot)$$ maps a vector field (vector-valued function) into a scalar function. In other words, the domain and range of $$div(\cdot)$$ are function spaces.

$$x= \phi (\bar{x},\bar{y})$$

$$y= \psi (\bar{x},\bar{y})$$

Linear coordinate transformation Eq(5) P.9-1

$$u(x,y)= u(\phi (\bar{x},\bar{y}), \psi (\bar{x},\bar{y}))$$

$$=u(\bar{x},\bar{y})$$,  which is an abuse of notation by using "u".

$$=\bar{u}(\bar{x},\bar{y})$$,  which is a more rigorous notation.

Example
Let $$u(x)=ax+b$$

Condition: $$x=\phi (\bar x) = sin\, \bar x$$

$$u(x)= u(\phi (\bar x)) = a\, sin\, \bar {x} + b$$

=$$u(\bar x)$$ $$\gets$$  abuse of notation

=$$\bar {u} (\bar x)$$  $$\gets$$  more rigorous

Another One
$$u_x(x,y)= \frac{\partial u}{\partial x}(x,y) = u_x(\phi(\bar{x},\bar{y}),\psi(\bar{x},\bar{y})) = \frac{\partial }{\partial x} \bar{u}(\bar{x},\bar{y}) = \frac{\partial \bar u}{\partial \bar x} \frac{\partial \bar x}{\partial x} + \frac{\partial \bar u}{\partial \bar y} \frac{\partial \bar y}{\partial x} = \bar{u}_{\bar {x}} \frac{\partial \bar x}{\partial x} + \bar{u}_{\bar {y}} \frac{\partial \bar y}{\partial x}$$

Define:

$$\bar x$$ = $$\bar {x} (x,y)$$ = $$\bar {\phi} (x,y)$$

$$\bar y$$ = $$\bar {y} (x,y)$$ = $$\bar {\psi} (x,y)$$

$$u_y(x,y)= \frac{\partial }{\partial y}\bar u(\bar {x},\bar {y}) = \bar{u}_{\bar {x}} \frac{\partial \bar x}{\partial y} + \bar{u}_{\bar {y}} \frac{\partial \bar y}{\partial y}$$

Matrix Form
$$\partial_x u=$$ $$\big\lfloor$$ $$\frac{\partial \bar x}{\partial x}$$ $$\frac{\partial \bar y}{\partial x}$$ $$\big\rceil$$ $$\begin{Bmatrix} \partial_ {\bar x} \\ \partial_ {\bar y} \end{Bmatrix} $$ $$(\bar u)$$

Homework: Likewise For $$\partial_y $$

Likewise for $$\partial_y $$

$$\partial_y u=$$ $$\big\lfloor$$ $$\frac{\partial \bar x}{\partial y}$$ $$\frac{\partial \bar y}{\partial y}$$ $$\big\rceil$$ $$\begin{Bmatrix} \partial_ {\bar x} \\ \partial_ {\bar y} \end{Bmatrix} $$ $$(\bar u)$$

Then,

$$\begin{Bmatrix} \partial_ x \\ \partial_ y \end{Bmatrix} $$ = $$\begin{bmatrix} \frac{\partial \bar x}{\partial x} & \frac{\partial \bar y}{\partial x} \\ \frac{\partial \bar x}{\partial y} & \frac{\partial \bar y}{\partial y} \end{bmatrix} $$ $$\begin{Bmatrix} \partial_ {\bar x} \\ \partial_ {\bar y} \end{Bmatrix} $$

$$\begin{bmatrix} \frac{\partial \bar x}{\partial x} & \frac{\partial \bar y}{\partial x} \\ \frac{\partial \bar x}{\partial y} & \frac{\partial \bar y}{\partial y} \end{bmatrix} $$ is known as the Jacobian matrix (sometimes it is defined as the transpose of the Jacobian matrix).

Easier and more general,

$$(x_1,\cdots,x_n)$$ $$\to$$ $$(\bar{x_1},\cdots,\bar{x_n})$$

Indicial notation: $$\bar{x_i}$$ = $$\bar{x_i}$$ $$(x_1,\cdots,x_n)$$

$$J_{n \times n}$$ = $$\begin{bmatrix} \frac{\partial \bar x_i}{\partial x_j} \end{bmatrix}_{n \times n} $$

Here, "$$i$$" is the row index while "$$j$$" is the column index.
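The chain-rule relation above can be sketched numerically. The transformation $$\bar x = x + y^2$$, $$\bar y = xy$$ and the function $$\bar u$$ below are hypothetical sample choices, not from the lecture:

```python
import math

# hypothetical transformation: xbar = x + y^2, ybar = x*y (sample choice)
ubar = lambda xb, yb: math.sin(xb) + xb * yb
u = lambda x, y: ubar(x + y**2, x * y)

x0, y0, h = 0.5, 1.2, 1e-5
# left side: partial u / partial x by central difference
ux = (u(x0 + h, y0) - u(x0 - h, y0)) / (2 * h)

# right side: ubar_xbar * (dxbar/dx) + ubar_ybar * (dybar/dx),
# with dxbar/dx = 1 and dybar/dx = y for this transformation
xb0, yb0 = x0 + y0**2, x0 * y0
ubar_xb = (ubar(xb0 + h, yb0) - ubar(xb0 - h, yb0)) / (2 * h)
ubar_yb = (ubar(xb0, yb0 + h) - ubar(xb0, yb0 - h)) / (2 * h)
rhs = ubar_xb * 1.0 + ubar_yb * y0
print(ux - rhs)  # ~0
```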

Carl Gustav Jacob Jacobi (1804-1851)

The Jacobian matrix was named after Carl Gustav Jacob Jacobi, a 19th-century mathematician from Prussia. Born in 1804, Jacobi studied at Berlin University and later went on to teach at Königsberg University. He is known for Jacobi's elliptic functions, the Jacobian, the Jacobi symbol, and the Jacobi identity. He died in Berlin in 1851.

=Signatures=

--EGM6322.S09.TIAN 20:35, 5 February 2009 (UTC) Miao Tian

=Complete Form Of Matrix=

$$\alpha = \beta + \gamma$$

$$\beta$$ = $$\left \lfloor \partial_{\bar{x}} \; \partial_{\bar{y}} \right \rfloor $$ $$\mathbf{J} \mathbf{J^T}\begin{Bmatrix}\partial_{\bar{x}}\\\partial_{\bar{y}}\end{Bmatrix}$$

$$= \left \lfloor \partial_r \; \partial_{\theta} \right \rfloor \frac {1} {r^2} \begin{bmatrix} rc & rs\\ -s & c \end{bmatrix} \begin{bmatrix} rc & -s\\ rs & c \end{bmatrix} \begin{Bmatrix}\partial_r \\ \partial_{\theta}\end{Bmatrix}$$

$$= \left \lfloor \partial_r \ \partial_{\theta} \right \rfloor \begin{bmatrix} 1 & 0 \\ 0 & \frac {1}{r^2} \end{bmatrix} \begin{Bmatrix}\partial_r \\ \partial_{\theta}\end{Bmatrix}$$

$$= \partial_{rr} +$$ $$\frac {1} {r^2} $$ $$\partial_{\theta \theta} $$

Note: the entries of $$\left \lfloor \partial_{\bar{x}} \ \partial_{\bar{y}} \right \rfloor $$$$\mathbf{J} \mathbf{J^T}$$ are held fixed here; they are not differentiated further.
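For the polar transformation the product $$\mathbf{J} \mathbf{J^T}$$ can be checked numerically at an arbitrary sample point:

```python
import math

# polar transformation xbar_1 = r, xbar_2 = theta; J at a sample point
r, th = 1.7, 0.6
c, s = math.cos(th), math.sin(th)
J = [[c, s], [-s / r, c / r]]

# JJt[i][j] = sum_k J[i][k] * J[j][k]  (i.e. J J^T)
JJt = [[sum(J[i][k] * J[j][k] for k in range(2)) for j in range(2)]
       for i in range(2)]
print(JJt)  # ~ [[1, 0], [0, 1/r^2]]
```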

Three Methods To Figure Out $$\gamma$$
Method One:

$$\gamma = \left \lfloor \partial_r \ \partial_{\theta} \right \rfloor \mathbf{J} \mathbf{J^T} \begin{Bmatrix}\partial_r \\ \partial_{\theta}\end{Bmatrix}$$

Method Two:

$$= \left \lfloor \partial_x \ \partial_y \right \rfloor \mathbf{J} (x,y) \begin{Bmatrix}\partial_r \\ \partial_{\theta}\end{Bmatrix}$$

Method Three:

then re-express the result in $$( r, \theta )$$

$$= \left \lfloor \partial_x \ \partial_y \right \rfloor \mathbf{J} (r,\theta) \begin{Bmatrix}\partial_r \\ \partial_{\theta}\end{Bmatrix}$$

expressing result in $$( r, $$ $$\theta$$ )

Let us use Method Three:

$$\left \lfloor \partial_x \ \partial_y \right \rfloor \mathbf{J^T} = \left \lfloor \partial_x \ \partial_y \right \rfloor \begin{bmatrix} c & -\frac {s}{r}\\ s & \frac {c}{r} \end{bmatrix} = \left \lfloor ( \partial_x c + \partial_y s) \; \left( -\partial_x \left(\frac {s}{r}\right) + \partial_y \left(\frac {c}{r}\right) \right) \right \rfloor $$

$$\partial_x c = \frac{\partial}{\partial x} cos {\theta}(x,y) = \frac{\partial}{\partial r} ( cos {\theta}) \frac{\partial r}{\partial x} + \frac{\partial}{\partial {\theta}} ( cos {\theta}) \frac{\partial {\theta}}{\partial x}$$

$$\partial_x f(r,{\theta}) = \left( \frac{\partial f}{\partial r} \right) \frac{\partial r}{\partial x} + \frac{\partial f}{\partial {\theta}} \frac{\partial {\theta}}{\partial x}$$

$$\frac{\partial {\theta}}{\partial x} = \frac{\partial}{\partial x} tan^{-1} \left( \frac{y}{x} \right) = J_{21} (r,{\theta}) = \frac{\partial {\bar{x_2}}}{\partial {x_1}}$$

$$J_{21}= -\frac {sin{\theta}} {r}$$, so $$\partial_x cos {\theta} = (-sin{\theta}) \left( -\frac {s}{r} \right) = \frac {s^2}{r}$$

$$\partial_y sin {\theta}$$ = $$\frac {c^2}{r}$$

$$\Rightarrow$$ $$\partial_x c + \partial_y s$$ = $$\frac {1}{r}$$

$$\partial_x \left( \frac {s}{r} \right)= \frac{\partial} {\partial {r}} \left( \frac {s}{r} \right) \frac{\partial r }{\partial x} + \frac{\partial} {\partial {\theta}} \left( \frac {s}{r} \right) \frac{\partial {\theta} }{\partial x}$$

= $$-\frac {2sc} {r^2}$$

Similarly, $$\partial_y \left( \frac {c}{r} \right)= \frac{\partial} {\partial {r}} \left( \frac {c}{r} \right) \frac{\partial r }{\partial y} + \frac{\partial} {\partial {\theta}} \left( \frac {c}{r} \right) \frac{\partial {\theta} }{\partial y}$$

= $$-\frac {2sc} {r^2}$$
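These four derivative identities can be checked by central differences at an arbitrary sample point:

```python
import math

r_f = lambda x, y: math.hypot(x, y)
th_f = lambda x, y: math.atan2(y, x)
c_f = lambda x, y: math.cos(th_f(x, y))   # c = cos(theta) as a function of (x, y)
s_f = lambda x, y: math.sin(th_f(x, y))   # s = sin(theta) as a function of (x, y)

x0, y0, h = 0.8, 0.5, 1e-6
dx = lambda f: (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
dy = lambda f: (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)

r0, c0, s0 = r_f(x0, y0), c_f(x0, y0), s_f(x0, y0)
e1 = dx(c_f) - s0**2 / r0                                   # dx(cos θ) - s²/r
e2 = dy(s_f) - c0**2 / r0                                   # dy(sin θ) - c²/r
e3 = dx(lambda x, y: s_f(x, y) / r_f(x, y)) + 2 * s0 * c0 / r0**2
e4 = dy(lambda x, y: c_f(x, y) / r_f(x, y)) + 2 * s0 * c0 / r0**2
print(e1, e2, e3, e4)  # all ~0
```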

Homework: The Complete Form Of $$\gamma$$

$$\gamma = \begin{bmatrix} (\phi_{xx} \partial_{\bar{x}} + \psi_{xx} \partial_{\bar{y}}) & (\phi_{xy} \partial_{\bar{x}} + \psi_{xy} \partial_{\bar{y}}) \\ (\phi_{xy} \partial_{\bar{x}} + \psi_{xy} \partial_{\bar{y}}) & (\phi_{yy} \partial_{\bar{x}} + \psi_{yy} \partial_{\bar{y}}) \end{bmatrix} $$

Homework: Derive LP p.14 (1.2.13)

$$A=a \phi_x^2 + 2b \phi_x \phi_y +c \phi_y^2$$

$$ B=a \phi_x \psi_x + b( \phi_x \psi_y +\phi_y \psi_x ) + c \phi_y \psi_y  $$

$$C=a \psi_x^2 + 2b \psi_x \psi_y +c \psi_y^2$$

$$AC - B^2 =$$ $$(a \phi_x^2 + 2b \phi_x \phi_y +c \phi_y^2)$$ $$(a \psi_x^2 + 2b \psi_x \psi_y +c \psi_y^2)$$ - $$[a \phi_x \psi_x + b( \phi_x \psi_y +\phi_y \psi_x ) + c \phi_y \psi_y ]^2 $$

=$$a^2 \phi_x^2 \psi_x^2 + 2ab \phi_x^2 \psi_x \psi_y + ac \phi_x^2 \psi_y^2 + 2ab \phi_x \phi_y \psi_x^2 + 4b^2 \phi_x \phi_y \psi_x \psi_y + 2bc \phi_x \phi_y \psi_y^2 + ac \phi_y^2 \psi_x^2 + 2bc \phi_y^2 \psi_x \psi_y + c^2 \phi_y^2 \psi_y^2 $$ -

($$a^2 \phi_x^2 \psi_x^2 +b^2 \phi_x^2 \psi_y^2 +b^2 \phi_y^2 \psi_x^2 +2b^2 \phi_x \phi_y \psi_x \psi_y + c^2 \phi_y^2 \psi_y^2 +2ab \phi_x^2 \psi_x \psi_y + 2ab \phi_x \phi_y \psi_x^2 +2ac \phi_x \phi_y \psi_x \psi_y +2bc \phi_x \phi_y \psi_y^2 +2bc \phi_y^2 \psi_x \psi_y$$)

= $$(ac-b^2)(\phi_x \psi_y - \phi_y \psi_x)^2$$
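The identity can be spot-checked numerically with arbitrary sample values for $$a$$, $$b$$, $$c$$ and the first derivatives of $$\phi$$, $$\psi$$:

```python
# arbitrary sample values for a, b, c and phi_x, phi_y, psi_x, psi_y
a, b, c = 2.0, 0.7, -1.3
phx, phy, psx, psy = 0.9, -0.4, 1.5, 2.2

A = a * phx**2 + 2 * b * phx * phy + c * phy**2
B = a * phx * psx + b * (phx * psy + phy * psx) + c * phy * psy
C = a * psx**2 + 2 * b * psx * psy + c * psy**2

lhs = A * C - B**2
rhs = (a * c - b**2) * (phx * psy - phy * psx)**2
print(lhs - rhs)  # ~0
```

This is the reason the type of a second-order PDE (sign of $$ac-b^2$$) is invariant under a coordinate transformation with non-vanishing Jacobian determinant.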

Lecture 24
$$\tau\, div (grad\, \omega)+ p(x,y) = m\, \omega_{tt}$$

Here, $$div (grad\, \omega)= \omega_{xx}+\omega_{yy}$$, and $$m$$ is the mass per unit area.

Wave Equation In 1-D Space (actually a 2-D (x,t) Problem)
$$\tau \omega_{xx} - m \omega_{tt} +p =0$$

$$A= \begin{bmatrix} a & b \\ b & c \end{bmatrix} $$, $$detA=ac-b^2$$

In this case, $$a= \tau >0, b=0, c=-m<0 $$

$$detA=- \tau m <0 \Rightarrow hyperbolic$$

1-D space
$$\frac{d}{dx} (\kappa \frac{du}{dx}) + f = C \frac{du}{dt}$$

Here, $$\kappa$$ is the heat conductivity, $$f$$ is the heat source, and $$C$$ is the heat capacity.

We assume $$\kappa$$ is constant.

$$\kappa u_{xx} - C u_t +f =0$$

$$a= \kappa, b=0,  c=0 \Rightarrow  detA=0\Rightarrow parabolic$$

2-D space
$$div (\kappa\, grad\, u) + f = C \frac{\partial u}{\partial t}$$

We assume $$\kappa$$ is constant here.

$$\kappa (u_{xx} + u_{yy}) - C u_t +f =0$$

Generalizing to 3 independent variables $$(x,y,z)$$:

$$\big\lfloor \partial_x \; \partial_y \;\partial_z  \big\rceil \begin{bmatrix} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33} \end{bmatrix} \begin{Bmatrix} \partial_x u \\ \partial_y u \\ \partial_z u \end{Bmatrix} $$

Here,$$ \begin{bmatrix} A_{11} & A_{12} & A_{13}\\ A_{21} & A_{22} & A_{23}\\ A_{31} & A_{32} & A_{33} \end{bmatrix} = [A_{ij}]_{3 \times 3}$$

In 2-D case, $$A_{ij}=0 \; \forall \; i,j \; except \; A_{11} = A_{22}= \kappa >0 \Rightarrow detA=0 \Rightarrow parabolic$$
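The classification rule used in the examples above can be sketched as a small helper; the coefficient values below are the sample cases from these notes, normalized to 1:

```python
def classify(a, b, c):
    # the sign of det A = ac - b^2 decides the type of the 2nd-order PDE
    det = a * c - b * b
    if det < 0:
        return "hyperbolic"
    if det == 0:
        return "parabolic"
    return "elliptic"

print(classify(1.0, 0.0, -1.0))  # wave equation (tau = m = 1): hyperbolic
print(classify(1.0, 0.0, 0.0))   # heat equation (kappa = 1): parabolic
print(classify(1.0, 0.0, 1.0))   # Laplace equation: elliptic
```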

Solution of the unsteady heat equation without heat source (i.e. $$f=0$$) in polar coordinates.
Separation of Variables.

$$\frac {1}{r} \frac {\partial} {\partial r} \left( r \frac {\partial u} {\partial r} \right) + \frac {1}{r^2} \frac {\partial^2 u}{\partial \theta^2} = \frac {\partial u}{\partial t}\;(1)$$

Here, $$\frac {1}{r} \frac {\partial} {\partial r} \left( r \frac {\partial u} {\partial r} \right) + \frac {1}{r^2} \frac {\partial^2 u}{\partial \theta^2} = div (grad\, u)$$

$$u(r,\theta,t)=R(r)\Theta (\theta) T(t)\;(2)$$

Plug $$(2)$$ into $$(1)$$:

$$\frac {1}{rR} \frac {d} {dr} \left( r \frac {dR} {dr} \right) + \frac {1}{r^2 \Theta} \frac {d^2 \Theta}{d \theta^2} - \frac {1}{T} \frac {dT}{dt}=0$$

Homework: Expand $$T^*$$ in terms of $$cos\, \theta$$

$$T^* (\theta)= \frac {3T_0}{2} + \frac {T_0}{2} cos{2 \theta}$$

Expand it:

$$T^* (\theta)= \frac {3T_0}{2} + \frac {T_0}{2} (2 cos^2 \theta -1) = T_0 + T_0\, cos^2 \theta$$
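A numerical spot-check of the expansion (with an arbitrary sample $$T_0$$ and sample angles):

```python
import math

T0 = 2.5  # arbitrary sample value of T_0
# compare 3T0/2 + (T0/2) cos(2θ) against T0 + T0 cos²θ at several angles
err = max(abs((1.5 * T0 + 0.5 * T0 * math.cos(2 * th))
              - (T0 + T0 * math.cos(th)**2))
          for th in [0.0, 0.3, 1.1, 2.8, 5.6])
print(err)  # ~0: the two forms agree
```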

Lecture 27
$$\bar u (r, \theta)= R(r)\, \Theta (\theta)\, T(t_0) = \frac {1}{k}\, R(r)\, \Theta (\theta)\, k$$

Since $$k$$ is arbitrary, select $$k=1 $$

Now, why are $$B=B(\rho)$$ and $$C=C(\rho)$$?

Hint:

General initial condition : $$u(r, \theta, t_0)= {\bar u} (r, \theta)$$

Simplified case: $$u(r, \theta ,t_0)={\bar u} (\theta)$$

$$u(r, \theta_1 ,t_0)={\bar u (\theta_1)}$$

$$u(r, \theta_2 ,t_0)={\bar u (\theta_2)}$$

$${\bar u} (\theta)= \Theta (\theta)\, T(t_0) = \frac {1}{k}\, \Theta (\theta)\, k$$

Let $$\frac {1}{k} \Theta (\theta) = \Theta (\theta)$$ (absorbing the constant into $$\Theta$$, an abuse of notation), where

$$k=T(t_0)$$

$$\begin{cases} {\bar u} (\theta_1)= Bsin(\sqrt{\rho} \theta_1)+Ccos(\sqrt{\rho} \theta_1)\\ {\bar u} (\theta_2)= Bsin(\sqrt{\rho} \theta_2)+Ccos(\sqrt{\rho} \theta_2) \end{cases}$$

Then we can solve for $$B$$ and $$C$$, and we will find that $$B$$ and $$C$$ are functions of $$\rho$$:

$$B=B(\rho, \theta_1, \theta_2)$$

$$C=C(\rho, \theta_1, \theta_2)$$
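A sketch of solving the $$2\times2$$ system by Cramer's rule, showing that $$B$$ and $$C$$ change with $$\rho$$ (the values of $$\theta_1$$, $$\theta_2$$, $$\bar u(\theta_1)$$, $$\bar u(\theta_2)$$ are arbitrary sample inputs):

```python
import math

def solve_BC(rho, th1, th2, u1, u2):
    # Cramer's rule on:
    #   u1 = B sin(sqrt(rho) th1) + C cos(sqrt(rho) th1)
    #   u2 = B sin(sqrt(rho) th2) + C cos(sqrt(rho) th2)
    s1, c1 = math.sin(math.sqrt(rho) * th1), math.cos(math.sqrt(rho) * th1)
    s2, c2 = math.sin(math.sqrt(rho) * th2), math.cos(math.sqrt(rho) * th2)
    det = s1 * c2 - s2 * c1
    B = (u1 * c2 - u2 * c1) / det
    C = (s1 * u2 - s2 * u1) / det
    return B, C

B1, C1 = solve_BC(1.0, 0.3, 1.2, 1.0, 0.5)
B2, C2 = solve_BC(4.0, 0.3, 1.2, 1.0, 0.5)
# sanity: the first equation is reproduced by (B1, C1)
res = abs(B1 * math.sin(math.sqrt(1.0) * 0.3)
          + C1 * math.cos(math.sqrt(1.0) * 0.3) - 1.0)
print(B1, C1, B2, C2, res)  # B and C change with rho
```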

HW 2
Bessel Function:

$$z^2 \frac {d^2w}{dz^2} + z \frac {dw}{dz} + (z^2-\nu^2)w=0$$

We have one solution $$J_{\nu}(z)$$:

When $$\nu$$ is an integer $$n$$,

$$J_n(z)= \frac {1}{\pi} \int_{0}^{\pi} cos(z\, sin\, \theta -n \theta)\, d \theta$$

In the form $$J(z)=z^{\alpha} \sum_{k=0}^{\infty} \alpha_k z^k$$, we can write $$J_n(z)$$ as:

$$J_n(z)= \sum_{m=0}^{\infty} (-1)^m \frac {z^{n+2m}}{2^{n+2m}m!\Gamma(n+m+1)}$$

The other solution is to let $$\nu = -n$$:

$$J_{-n}(z)= \sum_{m=0}^{\infty} (-1)^m \frac {z^{-n+2m}}{2^{-n+2m}m!\Gamma(-n+m+1)}$$

Verify it :

$$z^2 \frac {d^2J_{-n}(z)}{dz^2}=$$ $$ \sum_{m=0}^{\infty}(-n+2m)(-n+2m-1) (-1)^m \frac {z^{-n+2m}}{2^{-n+2m}m!\Gamma(-n+m+1)}$$

$$z \frac {dJ_{-n}(z)}{dz}=$$ $$ \sum_{m=0}^{\infty}(-n+2m)(-1)^m \frac {z^{-n+2m}}{2^{-n+2m}m!\Gamma(-n+m+1)}$$
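The series for $$J_n(z)$$ can also be verified against the ODE numerically; the order $$n=1$$, the point $$z=1.5$$, and the use of finite differences for the derivatives are arbitrary choices for this sketch:

```python
import math

def J(n, z, terms=30):
    # series J_n(z) = sum_m (-1)^m z^(n+2m) / (2^(n+2m) m! Gamma(n+m+1))
    return sum((-1)**m * z**(n + 2 * m)
               / (2**(n + 2 * m) * math.factorial(m) * math.gamma(n + m + 1))
               for m in range(terms))

n, z, h = 1, 1.5, 1e-5
w = lambda t: J(n, t)
wp = (w(z + h) - w(z - h)) / (2 * h)               # dw/dz
wpp = (w(z + h) - 2 * w(z) + w(z - h)) / h**2      # d²w/dz²
residual = z**2 * wpp + z * wp + (z**2 - n**2) * w(z)
print(residual)  # ~0: the series satisfies Bessel's equation
```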

Lecture 32
(1) $$div (\underline {v}) = \frac {\partial v_i}{\partial x_i} = \frac {\partial v_i}{\partial x_j} \delta_{ij} = \frac {\partial v_i} {\partial \overline {x_k}} \frac {\partial \overline {x_k}} {\partial x_j}\delta_{ij}$$

(2)$$\underline {J} = \begin{bmatrix} \frac {\partial \overline {x_i}}{\partial x_j} \end{bmatrix} $$ $$= \begin{bmatrix} C & S \\ -\frac {S}{r} & \frac {C}{r} \end{bmatrix} $$

(3)$$\frac {\partial v_1}{\partial x_1} = \frac {\partial v_x}{\partial x}= \frac {\partial v_x}{\partial r} \frac {\partial r}{\partial x} + \frac {\partial v_x}{\partial \theta}  \frac {\partial \theta}{\partial x} $$

$$\frac {\partial \overline {x_1}}{\partial x_1} = J_{11}=C$$

$$\frac {\partial \overline {x_2}}{\partial x_1} = J_{21}= -\frac {S}{r}$$

(4) $$\underline {v}=v_x \underline {i}+ v_y \underline {j}= v_r(c \underline {i} + s \underline {j})+ v_{\theta}(-rs \underline {i} + rc \underline {j}) =(cv_r-rsv_{\theta}) \underline {i} + (sv_r+rcv_{\theta}) \underline {j} $$

Here we have,

$$\underline {e_r}=c \underline {i} + s \underline {j}$$

$$\underline {e_{\theta}}=-rs \underline {i} + rc \underline {j}$$

$$v_x=cv_r-rsv_{\theta}$$

$$v_y=sv_r+rcv_{\theta}$$

from equation (3),

$$\frac {\partial v_x}{\partial r}= \frac {\partial}{\partial r}(cv_r-rsv_{\theta})$$

Similarly,

$$\frac {\partial v_x}{\partial \theta}= \frac {\partial}{\partial \theta}(cv_r-rsv_{\theta})$$

So we have,

$$\frac {\partial v_1}{\partial x_1}= \frac {\partial v_x}{\partial x}= \left[\frac {s^2}{r} v_r +c^2 \frac {\partial v_r}{\partial r}- \frac {cs}{r} \frac {\partial v_r}{\partial \theta}\right] + \left[-rcs \frac {\partial v_{\theta}}{\partial r}+ s^2 \frac{\partial v_{\theta}}{\partial \theta}\right]$$

$$\frac {\partial v_2}{\partial x_2}= \frac {\partial v_y}{\partial y}= \left[\frac {c^2}{r} v_r +s^2 \frac {\partial v_r}{\partial r}+ \frac {cs}{r} \frac {\partial v_r}{\partial \theta}\right] + \left[rcs \frac {\partial v_{\theta}}{\partial r}+ c^2 \frac{\partial v_{\theta}}{\partial \theta}\right]$$

$$div (\underline {v}) = \frac {\partial v_x}{\partial x} + \frac {\partial v_y}{\partial y}= \frac {1}{r} \frac {\partial}{\partial r}(rv_r)+ \frac {\partial v_{\theta}}{\partial \theta}$$

Difference between the above expression and that in the book

In the book,

$$\underline {e_r}=c \underline {i} + s \underline {j}$$

$$\underline {e_{\theta}}=-s \underline {i} + c \underline {j}$$

not,

$$\underline {e_r}=c \underline {i} + s \underline {j}$$

$$\underline {e_{\theta}}=-rs \underline {i} + rc \underline {j}$$

So now we have,

$$\underline {v}=v_x \underline {i}+ v_y \underline {j}= v_r(c \underline {i} + s \underline {j})+ v_{\theta}(-s \underline {i} + c \underline {j}) =(cv_r-sv_{\theta}) \underline {i} + (sv_r+cv_{\theta}) \underline {j} $$

$$\frac {\partial v_x}{\partial r}= \frac {\partial}{\partial r}(cv_r-sv_{\theta})=c \frac {\partial v_r}{\partial r} - s \frac {\partial v_{\theta}}{\partial r}$$

$$\frac {\partial v_x}{\partial \theta}= \frac {\partial}{\partial \theta}(cv_r-sv_{\theta})=-sv_r+c \frac {\partial v_r}{\partial \theta} - cv_{\theta}-s \frac {\partial v_{\theta}}{\partial \theta}$$

$$\frac {\partial v_y}{\partial r}= \frac {\partial}{\partial r}(sv_r+cv_{\theta})=s \frac {\partial v_r}{\partial r} +c \frac {\partial v_{\theta}}{\partial r}$$

$$\frac {\partial v_y}{\partial \theta}= \frac {\partial}{\partial \theta}(sv_r+cv_{\theta})=cv_r+s \frac {\partial v_r}{\partial \theta} - sv_{\theta}+c \frac {\partial v_{\theta}}{\partial \theta}$$

And we have, $$\underline {J} = \begin{bmatrix} \frac {\partial \overline {x_i}}{\partial x_j} \end{bmatrix} $$ $$= \begin{bmatrix} C & S \\ -\frac {S}{r} & \frac {C}{r} \end{bmatrix} $$

$$div (\underline {v}) = \frac {\partial v_x}{\partial x} + \frac {\partial v_y}{\partial y} = \frac {\partial v_x}{\partial r} \cdot \frac {\partial r}{\partial x} +\frac {\partial v_x}{\partial \theta} \cdot \frac {\partial \theta}{\partial x} +\frac {\partial v_y}{\partial r} \cdot \frac {\partial r}{\partial y} +\frac {\partial v_y}{\partial \theta} \cdot \frac {\partial \theta}{\partial y}$$

$$=c\left(c \frac {\partial v_r}{\partial r} - s \frac {\partial v_{\theta}}{\partial r}\right)-\frac {s}{r}\left(-sv_r+c \frac {\partial v_r}{\partial \theta} - cv_{\theta}-s \frac {\partial v_{\theta}}{\partial \theta}\right) +s\left(s \frac {\partial v_r}{\partial r} +c \frac {\partial v_{\theta}}{\partial r}\right) +\frac {c}{r}\left(cv_r+s \frac {\partial v_r}{\partial \theta} - sv_{\theta}+c \frac {\partial v_{\theta}}{\partial \theta}\right)$$

$$=c^2 \frac {\partial v_r}{\partial r} - cs \frac {\partial v_{\theta}}{\partial r}+ \frac {s^2}{r}v_r -\frac {sc}{r} \frac {\partial v_r}{\partial \theta} + \frac {sc}{r} v_{\theta} + \frac {s^2}{r} \frac {\partial v_{\theta}}{\partial \theta} + s^2 \frac {\partial v_r}{\partial r} +sc \frac {\partial v_{\theta}}{\partial r} + \frac {c^2}{r} v_r + \frac {cs}{r} \frac {\partial v_r}{\partial \theta} - \frac {cs}{r} v_{\theta}+ \frac {c^2}{r} \frac {\partial v_{\theta}}{\partial \theta}$$

=$$\frac {\partial v_r}{\partial r}+ \frac {1}{r} v_r + \frac {1}{r} \frac {\partial v_{\theta}}{\partial \theta}$$

=$$\frac {1}{r} \frac {\partial}{\partial r}(rv_r)+ \frac {1}{r} \frac {\partial v_{\theta}}{\partial \theta}$$

That is the expression from the book.
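The polar-form divergence (book convention, unit basis vectors) can be checked against the Cartesian definition numerically; $$v_r$$ and $$v_\theta$$ below are arbitrary smooth sample components:

```python
import math

vr = lambda r, th: r**2 * math.cos(th)        # sample radial component
vth = lambda r, th: r * math.sin(2 * th)      # sample angular component

def cart(x, y):
    # v_x = c v_r - s v_theta,  v_y = s v_r + c v_theta (unit basis vectors)
    r, th = math.hypot(x, y), math.atan2(y, x)
    c, s = math.cos(th), math.sin(th)
    return (c * vr(r, th) - s * vth(r, th), s * vr(r, th) + c * vth(r, th))

x0, y0, h = 0.9, 0.6, 1e-5
vx = lambda x, y: cart(x, y)[0]
vy = lambda x, y: cart(x, y)[1]
div_xy = ((vx(x0 + h, y0) - vx(x0 - h, y0))
          + (vy(x0, y0 + h) - vy(x0, y0 - h))) / (2 * h)

r0, th0 = math.hypot(x0, y0), math.atan2(y0, x0)
d_rvr = ((r0 + h) * vr(r0 + h, th0) - (r0 - h) * vr(r0 - h, th0)) / (2 * h)
d_vth = (vth(r0, th0 + h) - vth(r0, th0 - h)) / (2 * h)
div_polar = d_rvr / r0 + d_vth / r0           # (1/r) d(r v_r)/dr + (1/r) dv_th/dth
print(div_xy - div_polar)  # ~0
```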

Verify the d'Alembert solution is the solution of Wave Equation

The classic wave equation is,

$$c_0^2 w_{xx}=w_{tt}$$

Initial Condition:

$$w(x,0)=f(x)$$

$$w_t(x,0)=g(x)$$

the solution is:

$$w(x,t)=\frac {1}{2} [f(x-c_0t)+f(x+c_0t)] + \frac {1}{2c_0} \int_{x-c_0t}^{x+c_0t} g(\xi)\, d \xi$$

Plug the solution into the wave equation,

$$\frac {\partial w}{\partial x}=\frac {1}{2} [f'(x-c_0t)+f'(x+c_0t)]+\frac {1}{2c_0}[g(x+c_0t)-g(x-c_0t)]$$

$$\frac {\partial^2 w}{\partial x^2}=\frac {1}{2} [f''(x-c_0t)+f''(x+c_0t)]+\frac {1}{2c_0}[g'(x+c_0t)-g'(x-c_0t)]$$

$$\frac {\partial w}{\partial t}=\frac {1}{2} [-c_0f'(x-c_0t)+c_0f'(x+c_0t)]+\frac {1}{2c_0}[c_0g(x+c_0t)+c_0g(x-c_0t)]$$

$$\frac {\partial^2 w}{\partial t^2}=\frac {1}{2} [c_0^2f''(x-c_0t)+c_0^2f''(x+c_0t)]+\frac {1}{2c_0}[c_0^2g'(x+c_0t)-c_0^2g'(x-c_0t)]$$

Then it's easy to see that

$$c_0^2w_{xx}=w_{tt}$$
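The verification can also be done numerically with sample $$f$$ and $$g$$; here $$G$$ is an antiderivative of $$g$$, so the integral term is $$[G(x+c_0t)-G(x-c_0t)]/2c_0$$ (all values below are arbitrary sample choices):

```python
import math

c0 = 1.3
f = lambda x: math.exp(-x**2)   # sample initial displacement
g = lambda x: math.sin(x)       # sample initial velocity
G = lambda x: -math.cos(x)      # antiderivative of g

def w(x, t):
    # d'Alembert solution
    return (0.5 * (f(x - c0 * t) + f(x + c0 * t))
            + (G(x + c0 * t) - G(x - c0 * t)) / (2 * c0))

x0, t0, h = 0.4, 0.7, 1e-4
wxx = (w(x0 + h, t0) - 2 * w(x0, t0) + w(x0 - h, t0)) / h**2
wtt = (w(x0, t0 + h) - 2 * w(x0, t0) + w(x0, t0 - h)) / h**2
resid = c0**2 * wxx - wtt
print(resid)  # ~0: the wave equation is satisfied
```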

Lecture 37
Cases $$\lambda_1 \lambda_2 >0$$ and $$\lambda_1 \lambda_2 <0$$,

$$\lambda_1 \lambda_2 >0$$ is the case of an ellipse

$$\lambda_1 \lambda_2 <0$$ is the case of a hyperbola

In both of these two cases, $$\lambda_1 \lambda_2 \neq 0$$.

Geometric interpretation:

Eigenvalue problem $$\Leftrightarrow$$ rotation of $$(x,y)$$ to $$(\overline {x},\overline{y})$$, which are parallel to the principal axes of the conic. $$\underline {V}$$ is orthogonal ($$\underline {V}\, \underline {V}^T =\underline {I}$$).

Next Step: Get rid of linear terms in

$$\left \lfloor \overline {x} \ \overline {y} \right \rfloor \underline {\Lambda} \binom{\overline {x}}{\overline {y}} + \left \lfloor \overline {d} \ \overline {e} \right \rfloor \binom{\overline {x}}{\overline {y}} +f=0$$  Equation (1)

Translation of Coordinate System.

Define the new coordinate system $$(\overline{\overline {x}}, \overline{\overline {y}})$$ as:

$$	\binom{\overline{\overline {x}}}{\overline{\overline {y}}}$$ = $$\binom {\overline {x}+r}{\overline {y}+s}$$

$$(r,s)$$ are unknowns; this point is the "center" of the ellipse or hyperbola.

$$\binom {\overline {x}}{\overline {y}}$$ = $$	\binom{\overline{\overline {x}}-r}{\overline{\overline {y}}-s}$$

Plugging it into Equation (1), we obtain:

$$\lambda_1 (\overline{\overline{x}}-r)^2+\lambda_2 (\overline{\overline{y}}-s)^2$$ $$+\overline {d} (\overline{\overline{x}}-r)+\overline {e} (\overline{\overline{y}}-s) +f =0$$

Find Out r and s

Expanding

$$\lambda_1 (\overline{\overline{x}}-r)^2+\lambda_2 (\overline{\overline{y}}-s)^2$$ $$+\overline {d} (\overline{\overline{x}}-r)+\overline {e} (\overline{\overline{y}}-s) +f =0$$

We obtain:

$$\lambda_1 \overline {\overline {x}}^2 + \lambda_2 \overline {\overline {y}}^2$$ $$+(\overline {d}- 2 \lambda_1 r )\overline {\overline {x}}$$ $$+(\overline {e}- 2 \lambda_2 s )\overline {\overline {y}}$$ $$+(\lambda_1 r^2 + \lambda_2 s^2 - \overline {d} r -\overline {e} s +f)=0$$

Because we need to get rid of the linear terms,

$$ \overline {d} - 2 \lambda_1 r =0$$

$$ \overline {e} - 2 \lambda_2 s =0$$,

Finally we have:

$$r= \frac{\overline {d}}{2 \lambda_1}$$

$$s= \frac{\overline {e}}{2 \lambda_2}$$

Find Out g

$$\lambda_1 (\overline{\overline{x}}-r)^2+\lambda_2 (\overline{\overline{y}}-s)^2=g$$

Comparing it to

$$\lambda_1 \overline {\overline {x}}^2 + \lambda_2 \overline {\overline {y}}^2$$ $$+(\lambda_1 r^2 + \lambda_2 s^2 - \overline {d} r -\overline {e} s +f)=0$$

we get,

$$g=-(\lambda_1 r^2 + \lambda_2 s^2 - \overline {d} r -\overline {e} s +f)$$
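A numerical spot-check of $$r$$, $$s$$, and $$g$$; the coefficients and the evaluation point are arbitrary sample values:

```python
# sample coefficients (lambda1, lambda2 nonzero)
l1, l2, d, e, f = 2.0, -0.5, 1.2, 0.8, 0.3
r = d / (2 * l1)
s = e / (2 * l2)
g = -(l1 * r**2 + l2 * s**2 - d * r - e * s + f)

# at an arbitrary point (xbar, ybar), the original quadratic with linear
# terms must equal lambda1 X^2 + lambda2 Y^2 - g in the translated coordinates
xb, yb = 0.7, -1.1
orig = l1 * xb**2 + l2 * yb**2 + d * xb + e * yb + f
X, Y = xb + r, yb + s          # translated coordinates
shifted = l1 * X**2 + l2 * Y**2 - g
print(orig - shifted)  # ~0
```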

Parabolic PDE

Now we have $$\lambda_1 u_{xx} + e\, u_y = g$$ (1)

and we want $$u_{\xi \xi} - u_{\eta} = g$$ (2)

Rewriting (1) in matrix form, we have:

$$\left \lfloor \partial_x \ \partial_y \right \rfloor \begin{bmatrix} \lambda_1     &  0      \\ 0             & 0 \end{bmatrix} \begin{bmatrix} \partial_x u           \\ \partial_y u            \end{bmatrix} + \left \lfloor 0 \ e \right \rfloor \begin{bmatrix} \partial_x u           \\ \partial_y u            \end{bmatrix}=g $$ (3)

We have already derived that,

$$\begin{bmatrix} \partial_x            \\ \partial_y \end{bmatrix} $$ = $$J_{\beta} \begin{bmatrix} \partial_{\xi}            \\ \partial_{\eta} \end{bmatrix} $$ (4)

$$J_{\beta}= \begin{bmatrix} 1/ \sqrt{\lambda_1}     &  0      \\ 0             & -1/e \end{bmatrix} $$ (5)

Plugging (4) and (5) into (3), we obtain:

$$\left \lfloor \partial_{\xi} \ \partial_{\eta} \right \rfloor J_{\beta}^T \begin{bmatrix} \lambda_1     &  0      \\ 0             & 0 \end{bmatrix} J_{\beta} \begin{bmatrix} \partial_{\xi} u           \\ \partial_{\eta} u            \end{bmatrix} + \left \lfloor 0 \ e \right \rfloor J_{\beta} \begin{bmatrix} \partial_{\xi} u           \\ \partial_{\eta} u            \end{bmatrix}=g $$

$$\left \lfloor \partial_{\xi} \ \partial_{\eta} \right \rfloor \begin{bmatrix} 1/ \sqrt{\lambda_1}     &  0      \\ 0             & -1/e \end{bmatrix} \begin{bmatrix} \lambda_1     &  0      \\ 0             & 0 \end{bmatrix} \begin{bmatrix} 1/ \sqrt{\lambda_1}     &  0      \\ 0             & -1/e \end{bmatrix} \begin{bmatrix} \partial_{\xi} u           \\ \partial_{\eta} u            \end{bmatrix} + \left \lfloor 0 \ e \right \rfloor J_{\beta} \begin{bmatrix} \partial_{\xi} u           \\ \partial_{\eta} u            \end{bmatrix}=g $$

The above equation is:

$$u_{\xi \xi} - u_{\eta} =g$$, which is exactly what we want.
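A spot-check of the transformed matrix products, assuming $$J_\beta = diag(1/\sqrt{\lambda_1},\, -1/e)$$; the $$-1/e$$ entry is an assumption chosen so that the $$e\,u_y$$ term becomes $$-u_\eta$$, and the values of $$\lambda_1$$ and $$e$$ are arbitrary samples:

```python
import math

# assumed form of J_beta: diag(1/sqrt(lambda1), -1/e)
l1, e = 3.0, 2.0
Jb = [[1.0 / math.sqrt(l1), 0.0], [0.0, -1.0 / e]]
A = [[l1, 0.0], [0.0, 0.0]]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

JbT = [[Jb[j][i] for j in range(2)] for i in range(2)]
M = matmul(matmul(JbT, A), Jb)   # J_beta^T A J_beta -> coefficient of u_xixi
# row vector [0 e] J_beta -> coefficient of the first-derivative term
row = [sum([0.0, e][k] * Jb[k][j] for k in range(2)) for j in range(2)]
print(M, row)  # M ~ [[1,0],[0,0]], row ~ [0,-1]  ->  u_xixi - u_eta
```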