User:Egm6322.s09.mafia/HW6

 See my comments below.

After you made a correction for a section with a comment box, you want to put a comment in that same comment box on what you did. Egm6322.s09 13:59, 12 April 2009 (UTC)

 Common problem: The notation for vectors (and more generally tensors) should be uniform throughout the report: Use boldface letters, instead of underline, or overhead arrow, or overhead hat. Each of you used a different notation, despite repeated reminders in class. Egm6322.s09 20:01, 12 April 2009 (UTC)

 A transient heat problem on a circular domain was not stated and solved. At least provide the statement; see notes of Meeting 30. Egm6322.s09 20:11, 12 April 2009 (UTC)

Sign everything!

=Photos of Student Interaction Using Co-operative Learning Technique=

--Egm6322.s09.xyz 01:45, 10 April 2009 (UTC)

One of the main objectives for the co-operative learning scheme is to enhance student interaction.

The following photos were taken during a study session amongst students as they completed the respective assignments given in Report 6.

As shown in the photos, students were able to use the co-operative learning framework as a tool to discuss relevant concepts, share homework solutions, and cultivate a sense of camaraderie with each other.

Photos of Student Interaction using Co-operative Learning --Egm6322.s09.xyz 01:45, 10 April 2009 (UTC)

=The Principle of Dimensional Homogeneity= Egm6322.s09.Three.ge 02:36, 9 April 2009 (UTC) ''' How to differentiate an integral (the Leibniz rule). ''' $$\frac{d}{dx}\int_{\zeta=A(x)}^{\zeta=B(x)}F(x,\zeta)d\zeta=\int_{\zeta=A(x)}^{\zeta=B(x)}\frac{\partial }{\partial x}F(x,\zeta)d\zeta$$

plus the two boundary terms:

$$+\,F \left[x,\zeta=B(x)\right]\frac {dB(x)}{dx}-F \left[x,\zeta=A(x)\right]\frac {dA(x)}{dx}$$

It should be noted that the principle of dimensional homogeneity must be satisfied for the previous equations.

$$\textrm{Let:} \ \textrm{  }\begin{matrix} \frac{d}{dx}\int_{\zeta=A(x)}^{\zeta=B(x)}F(x,\zeta)d\zeta=\alpha& &\int_{\zeta=A(x)}^{\zeta=B(x)}\frac{\partial }{\partial x}F(x,\zeta)d\zeta=\beta\\ F \left[x,\zeta=B(x)\right]\frac {dB(x)}{dx}=\gamma & &F \left[x,\zeta=A(x)\right]\frac {dA(x)}{dx}=\delta \end{matrix} $$

Thus,

$$[\alpha]=\textrm{the} \ \textrm{dimension} \ \textrm{of} \ \alpha=\frac{[F][\zeta]}{[x]}=[\beta]=[\gamma]=[\delta]$$
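The differentiation rule above can also be spot-checked symbolically. The sketch below (using Python's sympy; the integrand and limits are assumed illustrative choices, not from the report) compares the two sides of the Leibniz rule:

```python
import sympy as sp

x, zeta = sp.symbols('x zeta')

# assumed example data: F, A, B are illustrative choices
F = x * sp.sin(zeta)
A = x        # lower limit A(x)
B = x**2     # upper limit B(x)

# left-hand side: differentiate the integral directly
lhs = sp.diff(sp.integrate(F, (zeta, A, B)), x)

# right-hand side: integral of dF/dx plus the two boundary terms
rhs = (sp.integrate(sp.diff(F, x), (zeta, A, B))
       + F.subs(zeta, B) * sp.diff(B, x)
       - F.subs(zeta, A) * sp.diff(A, x))

print(sp.simplify(lhs - rhs))  # 0
```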

Homogeneous vs. Non-Homogeneous PDEs
Egm6322.s09.Three.ge 02:36, 9 April 2009 (UTC)

For a general second-order linear PDE, define the differential operator:

$$\mathcal D(u)=au_{xx}+2bu_{xy}+cu_{yy}+du_{x}+eu_{y}+fu$$

The differential equation is considered homogeneous if:

$$\mathcal D(u)=0$$

And non-homogeneous if:

$$\mathcal D(u)=g$$

Where $$g$$ is a nonzero forcing function.

The d'Alembert Solution
Egm6322.s09.Three.ge 02:36, 9 April 2009 (UTC)

Classical Wave equation:

$$(c_{0}^{2})w_{xx}=w_{tt}$$

Exact solution, d'Alembert solution:

$$w(x,t)=\frac{1}{2} [f(x-c_{0}t)+ f(x+c_{0}t)]+\frac{1}{2c_{0}}\int_{x-c_{0}t}^{x+c_{0}t}g(\zeta)d\zeta$$

Proof of Formula

The classic wave equation is,

$$c_0^2 w_{xx}=w_{tt}$$

Initial Condition:

$$w(x,0)=f(x)$$

$$w_t(x,0)=g(x)$$

the solution is:

$$w(x,t)=\frac {1}{2} [f(x-c_0t)+f(x+c_0t)] + \frac {1}{2c_0} \int_{x-c_0t}^{x+c_0t} g(\xi)	d \xi$$

Plug the solution into the wave equation,

$$\frac {\partial w}{\partial x}=\frac {1}{2} [f'(x-c_0t)+f'(x+c_0t)]+\frac {1}{2c_0}[g(x+c_0t)-g(x-c_0t)]$$

$$\frac {\partial^2 w}{\partial x^2}=\frac {1}{2} [f''(x-c_0t)+f''(x+c_0t)]+\frac {1}{2c_0}[g'(x+c_0t)-g'(x-c_0t)]$$

$$\frac {\partial w}{\partial t}=\frac {1}{2} [-c_0f'(x-c_0t)+c_0f'(x+c_0t)]+\frac {1}{2c_0}[c_0g(x+c_0t)+c_0g(x-c_0t)]$$

$$\frac {\partial^2 w}{\partial t^2}=\frac {1}{2} [c_0^2f''(x-c_0t)+c_0^2f''(x+c_0t)]+\frac {1}{2c_0}[c_0^2g'(x+c_0t)-c_0^2g'(x-c_0t)]$$

Then it's easy to see that

$$c_0^2w_{xx}=w_{tt}$$

--EGM6322.S09.TIAN 01:04, 10 April 2009 (UTC)
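The proof above can also be checked symbolically; a minimal sketch using sympy, with the initial data $$f$$ and $$g$$ left as generic functions:

```python
import sympy as sp

x, t, xi = sp.symbols('x t xi')
c0 = sp.symbols('c_0', positive=True)
f, g = sp.Function('f'), sp.Function('g')

# d'Alembert solution with generic initial data f, g
w = (sp.Rational(1, 2) * (f(x - c0*t) + f(x + c0*t))
     + sp.Integral(g(xi), (xi, x - c0*t, x + c0*t)) / (2*c0))

# residual of the wave equation c0^2 w_xx - w_tt
residual = sp.simplify(c0**2 * sp.diff(w, x, 2) - sp.diff(w, t, 2))
print(residual)  # 0
```

sympy applies the Leibniz rule automatically when differentiating the integral with variable limits, which is exactly the step done by hand above.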

=The General Theory of Separation of Variables= Egm6322.s09.Three.ge 03:03, 9 April 2009 (UTC)

The method of separation of variables seeks to solve a PDE in n independent variables by separating it into n ODEs and then solving these ODEs. For a linear PDE, this is done by assuming a solution that is a product of n expressions, each expression being a function of only one independent variable.

For example, for a given PDE, the solution will be of the form:

$$u(x_{1},x_{2},x_{3},\cdots x_{n})$$

Where:

$$x_{1},x_{2},x_{3},\cdots x_{n}$$

are independent variables.

To use the method of separation of variables, one assumes that the solution may be written as a product of expressions, with each expression being a function of only one independent variable.

$$u(x_{1},x_{2},x_{3},\cdots x_{n})=X_{1}(x_{1})X_{2}(x_{2})X_{3}(x_{3}) \cdots X_{n}(x_{n})$$

Plugging this product of terms back into the PDE, one may separate the variables, and thus solve n ODE's to solve the PDE. (reference Zwillinger)

 Not complete; there is an additive condition on the differential operator to allow for the application of the above separation of variables. There are also more details regarding the method of separation of variables of course, particularly when it is applied to nonlinear PDEs. Egm6322.s09 14:33, 12 April 2009 (UTC)

An example of Separation of Variables on a non-linear PDE is given below.

Nonlinear Separation of Variables --Egm6322.s09.lapetina 14:25, 9 April 2009 (UTC)

Solve the following form of problem:

$$F(x)(u_x)^2+G(y)(u_y)^2=a(x)+b(y)$$

with the following data:

$$F(x)=2x$$

$$G(y)=3y$$

$$a(x)=4x$$

$$b(y)=5y$$.

We know the solution takes the form:

$$u(x,y)=\phi (x)+\psi (y)$$.

which means:

$$u_x= \phi ' (x)$$

and $$u_y=\psi '(y)$$.

Plugging these into the original equation, we find: $$2x (\phi '(x))^2 -4x=-3y (\psi '(y))^2 + 5y$$.

We know each of these values must equal the same arbitrary quantity, $$\alpha$$. Thus:

$$2x (\phi '(x))^2 -4x=\alpha$$

$$-3y (\psi '(y))^2 + 5y=\alpha $$

Let's try $$\alpha=0$$, which results in:

$$\phi '(x)=\sqrt{2}$$ and

$$\psi '(y)=\sqrt{\frac{5}{3}}$$.

Further:

$$\phi(x)=x\sqrt{2}+C_1$$ and

$$\psi(y)=y\sqrt{\frac{5}{3}}+C_2$$.

Then:

$$u(x,y)=x\sqrt{2}+y\sqrt{\frac{5}{3}} + C_3$$.

Checking our answer:

$$2x (\sqrt{2})^2 +3y (\sqrt{\frac{5}{3}})^2= 4x + 5y$$ we see:

$$4x + 5y= 4x + 5y$$.

This solution for $$u$$ works.
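The check can also be done symbolically; a small sketch with sympy (the additive constant is dropped, since only derivatives enter the PDE):

```python
import sympy as sp

x, y = sp.symbols('x y')

# the separated solution found above (additive constant omitted)
u = sp.sqrt(2)*x + sp.sqrt(sp.Rational(5, 3))*y

# plug into F(x) u_x^2 + G(y) u_y^2 with F = 2x, G = 3y
lhs = 2*x*sp.diff(u, x)**2 + 3*y*sp.diff(u, y)**2
print(sp.simplify(lhs - (4*x + 5*y)))  # 0
```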

=The Expression of Divergence in Polar Coordinates=

The Primitive of Functions
The primitive of the function $$F$$ is:

$$G(x,\xi):=\int F(x,\xi)d\xi$$

e.g. The primitive of $$x$$ is:

$$\int x\,dx=\frac {1}{2}x^2+k$$

The definition for the integral of x is:

$$\int_{x=a}^{x=b}x dx=\frac {1}{2}(b^2-a^2)$$

'''HW: Check the definition of indefinite integral.''' The indefinite integral of a function $$f(x)$$ can be denoted by

$$\int f(x)dx$$

For example, if $$n \ne -1$$, we have

$$\int x^n dx=\frac {x^{n+1}}{n+1}+C$$

In this case, $$C$$ denotes an arbitrary constant: the function $$f(x)$$ has many indefinite integrals, any two of which differ by a constant. Consequently, once one antiderivative of $$f(x)$$ is found, every indefinite integral of the function is obtained by adding an arbitrary constant.

Egm6322.s09.three.liu 16:31, 24 April 2009 (UTC)

 Give reference from where you got the above definition. Egm6322.s09 20:01, 12 April 2009 (UTC)

 This part has been changed.Egm6322.s09.three.liu 14:07, 13 April 2009 (UTC)
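A computer algebra system returns one particular antiderivative (the $$C=0$$ member of the family); a short sketch of the power rule above using sympy (the positivity assumption on $$n$$ is only there to sidestep the $$n=-1$$ case):

```python
import sympy as sp

x = sp.symbols('x')
n = sp.symbols('n', positive=True)  # assumption: avoids the n = -1 case

antideriv = sp.integrate(x**n, x)  # one antiderivative (the C = 0 member)
check = sp.simplify(sp.diff(antideriv, x) - x**n)
print(antideriv, check)
```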

Generally, the definite integral of $$F(x,\xi)$$, written in terms of its primitive $$G$$, is

$$ \int_{\xi=A(x)}^{\xi=B(x)} F(x,\xi)d\xi=G(x,\xi=B(x))-G(x,\xi=A(x))$$

Differentiating with respect to $$x$$ and applying the chain rule to each term,

$$\frac {d}{dx}\int_{\xi=A(x)}^{\xi=B(x)}F(x,\xi)d\xi=\left[\frac {\partial G(x,B(x))}{\partial x}+\frac {\partial G(x,B(x))}{\partial \xi}\frac {dB(x)}{dx}\right]-\left[\frac {\partial G(x,A(x))}{\partial x}+\frac {\partial G(x,A(x))}{\partial \xi}\frac {dA(x)}{dx}\right]$$

Since $$\partial G/\partial \xi=F$$, and the difference of the $$\partial G/\partial x$$ terms equals the integral of $$\partial F/\partial x$$ over the (momentarily fixed) limits, this becomes

$$\frac {d}{dx}\int_{\xi=A(x)}^{\xi=B(x)}F(x,\xi)d\xi=\int_{\xi=A(x)}^{\xi=B(x)}\frac {\partial}{\partial x}F(x,\xi) d\xi+F(x,B(x))\frac {dB(x)}{dx}-F(x,A(x))\frac {dA(x)}{dx}$$

Divergence operator in polar coordinates


Basis vectors in polar coordinates:

$$\bar{\underline{e_i}}=\frac {\partial P}{\partial \bar{x_i}}; \quad i=1,2$$

$$\bar{\underline{e_1}}=\underline{e_r}=\frac {\partial P}{\partial r}=\frac {\partial (\overrightarrow{OP})}{\partial r}; (\bar{x_1}=r)$$

$$\overrightarrow{OP}=x\overrightarrow{i}+y\overrightarrow{j}=(rcos\theta)\overrightarrow{i}+(rsin\theta)\overrightarrow{j}$$

$$\bar{\underline{e_2}}=\underline{e_\theta}=\frac {\partial P}{\partial \theta}; (\bar{x_2}=\theta)$$

$$ \underline{e_r}=\frac {\partial P}{\partial r}=cos\theta\overrightarrow{i}+sin\theta\overrightarrow{j}$$

$$\underline{e_\theta}=\frac{\partial P}{\partial \theta}=-rsin \theta \overrightarrow{i}+rcos\theta \overrightarrow{j}$$

$$\begin{Vmatrix} \underline{e_r} \end{Vmatrix}=1$$

$$\begin{Vmatrix} \underline{e_\theta} \end{Vmatrix}=r\neq1$$

Egm6322.s09.three.liu 14:08, 10 April 2009 (UTC)

(1)$$div (\underline {v}) = \frac {\partial v_i}{\partial x_i}$$ $$= \frac {\partial v_i}{\partial x_j} \delta_{ij}$$ $$= \frac {\partial v_i} {\partial \overline {x_k}} \frac {\partial \overline {x_k}} {\partial x_j}\delta_{ij}$$

(2)$$\underline {J} = \begin{bmatrix} \frac {\partial \overline {x_i}}{\partial x_j} \end{bmatrix} $$ $$= \begin{bmatrix} C & S \\ -\frac {S}{r} & \frac {C}{r} \end{bmatrix} $$ where $$C \equiv \cos\theta$$ and $$S \equiv \sin\theta$$ (the lowercase $$c, s$$ below denote the same quantities)

(3)$$\frac {\partial v_1}{\partial x_1} = \frac {\partial v_x}{\partial x}= \frac {\partial v_x}{\partial r} \frac {\partial r}{\partial x} + \frac {\partial v_x}{\partial \theta}  \frac {\partial \theta}{\partial x} $$

$$\frac {\partial \overline {x_1}}{\partial x_1} = J_{11}=C$$

$$\frac {\partial \overline {x_2}}{\partial x_1} = J_{21}= -\frac {S}{r}$$

(4)$$\underline {v}=v_x \underline {i}+ v_y \underline {j}=$$ $$v_r(c \underline {i} + s \underline {j})+ v_{\theta}(-rs \underline {i} + rc \underline {j})$$ $$=(cv_r-rsv_{\theta}) \underline {i} + (sv_r+rcv_{\theta}) \underline {j} $$

Here we have,

$$\underline {e_r}=c \underline {i} + s \underline {j}$$

$$\underline {e_{\theta}}=-rs \underline {i} + rc \underline {j}$$

$$v_x=cv_r-rsv_{\theta}$$

$$v_y=sv_r+rcv_{\theta}$$

from equation (3),

$$\frac {\partial v_x}{\partial r}= \frac {\partial}{\partial r}(cv_r-rsv_{\theta})$$

Similarly,

$$\frac {\partial v_x}{\partial \theta}= \frac {\partial}{\partial \theta}(cv_r-rsv_{\theta})$$

So we have,

$$\frac {\partial v_1}{\partial x_1}= \frac {\partial v_x}{\partial x}=$$ $$[\frac {s^2}{r} v_r +c^2 \frac {\partial v_r}{\partial r}- \frac {cs}{r} \frac {\partial v_r}{\partial \theta}] + [-rcs \frac {\partial v_{\theta}}{\partial r}+ s^2 \frac{\partial v_{\theta}}{\partial \theta}]$$

$$\frac {\partial v_2}{\partial x_2}= \frac {\partial v_y}{\partial y}=$$ $$[\frac {c^2}{r} v_r +s^2 \frac {\partial v_r}{\partial r}+ \frac {cs}{r} \frac {\partial v_r}{\partial \theta}] + [rcs \frac {\partial v_{\theta}}{\partial r}+ c^2 \frac{\partial v_{\theta}}{\partial \theta}]$$

$$div (\underline {v}) = \frac {\partial v_x}{\partial x} + \frac {\partial v_y}{\partial y}= \frac {1}{r} \frac {\partial}{\partial r}(rv_r)+ \frac {\partial v_{\theta}}{\partial \theta}$$

--EGM6322.S09.TIAN 01:03, 10 April 2009 (UTC)

 You simply copied from the lecture notes, but did not fill in the detailed derivation in between. Also, what are the symbols $$\displaystyle s, c, S, C$$ ? There was no rigor in the use of symbols; this remark is not only applicable here, but also holds for the entire report; see my comment at the top regarding the notation for tensors and vectors. Egm6322.s09 12:03, 13 April 2009 (UTC)

'''Difference between the above expression and that in the book.''' In the book,

$$\underline {e_r}=c \underline {i} + s \underline {j}$$

$$\underline {e_{\theta}}=-s \underline {i} + c \underline {j}$$

not,

$$\underline {e_r}=c \underline {i} + s \underline {j}$$

$$\underline {e_{\theta}}=-rs \underline {i} + rc \underline {j}$$

So now we have,

$$\underline {v}=v_x \underline {i}+ v_y \underline {j}=$$ $$v_r(c \underline {i} + s \underline {j})+ v_{\theta}(-s \underline {i} + c \underline {j})$$ $$=(cv_r-sv_{\theta}) \underline {i} + (sv_r+cv_{\theta}) \underline {j} $$

$$\frac {\partial v_x}{\partial r}= \frac {\partial}{\partial r}(cv_r-sv_{\theta})=c \frac {\partial v_r}{\partial r} - s \frac {\partial v_{\theta}}{\partial r}$$

$$\frac {\partial v_x}{\partial \theta}= \frac {\partial}{\partial \theta}(cv_r-sv_{\theta})=-sv_r+c \frac {\partial v_r}{\partial \theta} - cv_{\theta}-s \frac {\partial v_{\theta}}{\partial \theta}$$

$$\frac {\partial v_y}{\partial r}= \frac {\partial}{\partial r}(sv_r+cv_{\theta})=s \frac {\partial v_r}{\partial r} +c \frac {\partial v_{\theta}}{\partial r}$$

$$\frac {\partial v_y}{\partial \theta}= \frac {\partial}{\partial \theta}(sv_r+cv_{\theta})=cv_r+s \frac {\partial v_r}{\partial \theta} - sv_{\theta}+c \frac {\partial v_{\theta}}{\partial \theta}$$

And we have, $$\underline {J} = \begin{bmatrix} \frac {\partial \overline {x_i}}{\partial x_j} \end{bmatrix} $$ $$= \begin{bmatrix} C & S \\ -\frac {S}{r} & \frac {C}{r} \end{bmatrix} $$

$$div (\underline {v}) = \frac {\partial v_x}{\partial x} + \frac {\partial v_y}{\partial y}= \frac {\partial v_x}{\partial r} \cdot \frac {\partial r}{\partial x} +\frac {\partial v_x}{\partial \theta} \cdot \frac {\partial \theta}{\partial x} +\frac {\partial v_y}{\partial r} \cdot \frac {\partial r}{\partial y} +\frac {\partial v_y}{\partial \theta} \cdot \frac {\partial \theta}{\partial y}$$

$$=c(c \frac {\partial v_r}{\partial r} - s \frac {\partial v_{\theta}}{\partial r})-\frac {s}{r}(-sv_r+c \frac {\partial v_r}{\partial \theta} - cv_{\theta}-s \frac {\partial v_{\theta}}{\partial \theta}) +s(s \frac {\partial v_r}{\partial r} +c \frac {\partial v_{\theta}}{\partial r}) +\frac {c}{r}(cv_r+s \frac {\partial v_r}{\partial \theta} - sv_{\theta}+c \frac {\partial v_{\theta}}{\partial \theta})$$

$$=c^2 \frac {\partial v_r}{\partial r} - cs \frac {\partial v_{\theta}}{\partial r}+ \frac {s^2}{r}v_r -\frac {sc}{r} \frac {\partial v_r}{\partial \theta} + \frac {sc}{r} v_{\theta} + \frac {s^2}{r} \frac {\partial v_{\theta}}{\partial \theta} + s^2 \frac {\partial v_r}{\partial r} +sc \frac {\partial v_{\theta}}{\partial r} + \frac {c^2}{r} v_r + \frac {cs}{r} \frac {\partial v_r}{\partial \theta} - \frac {cs}{r} v_{\theta}+ \frac {c^2}{r} \frac {\partial v_{\theta}}{\partial \theta}$$

$$=\frac {\partial v_r}{\partial r}+ \frac {1}{r} v_r + \frac {1}{r} \frac {\partial v_{\theta}}{\partial \theta}$$

$$=\frac {1}{r} \frac {\partial}{\partial r}(rv_r)+ \frac {1}{r} \frac {\partial v_{\theta}}{\partial \theta}$$

That is the expression from the book.

--EGM6322.S09.TIAN 01:03, 10 April 2009 (UTC)
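The book's expression can also be verified symbolically. The sketch below (using sympy; the components are left as generic functions of $$r$$ and $$\theta$$, in the book's physical-component notation) applies the chain rule exactly as above and recovers the book's divergence:

```python
import sympy as sp

r, th = sp.symbols('r theta', positive=True)
c, s = sp.cos(th), sp.sin(th)

# generic physical components (book notation)
vr = sp.Function('v_r')(r, th)
vt = sp.Function('v_theta')(r, th)

# Cartesian components via the unit basis vectors
vx = vr*c - vt*s
vy = vr*s + vt*c

# chain rule: d/dx = c d/dr - (s/r) d/dtheta, d/dy = s d/dr + (c/r) d/dtheta
def ddx(f): return c*sp.diff(f, r) - (s/r)*sp.diff(f, th)
def ddy(f): return s*sp.diff(f, r) + (c/r)*sp.diff(f, th)

div_cart = ddx(vx) + ddy(vy)
div_book = sp.diff(r*vr, r)/r + sp.diff(vt, th)/r
print(sp.simplify(div_cart - div_book))  # 0
```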

 Good. But you don't have to rederive the expression for the divergence in polar coordinates; there is a just simple step to go from the expression in the lecture to the expression in the book. It is best to distinguish the basis vector in the lecture $$\displaystyle \mathbf e_\theta = r (- \sin \theta \mathbf i + \cos \theta \mathbf j)$$, which is not a unit vector, and the basis vector in the book $$\displaystyle \hat \mathbf e_\theta = (- \sin \theta \mathbf i + \cos \theta \mathbf j) = \mathbf e_\theta / r$$, which is a unit vector. Similarly, you need to distinguish the component $$\displaystyle v_\theta$$ in the lecture, from the component $$\displaystyle \hat v_\theta$$ in the book (even though the same notation $$\displaystyle v_\theta$$ was used in the book), i.e., $$  \displaystyle \mathbf v   = v_r \mathbf e_r + v_\theta \mathbf e_\theta =  \hat v_r \hat \mathbf e_r + \hat v_\theta \hat \mathbf e_\theta $$. Of course, $$\displaystyle \mathbf e_r = \hat \mathbf e_r$$, and $$\displaystyle v_r = \hat v_r$$, but $$\displaystyle v_\theta \ne \hat v_\theta$$. The components $$\displaystyle (v_r, v_\theta)$$ are called the (regular) tensor components of $$\displaystyle \mathbf v$$, whereas the components $$\displaystyle (\hat v_r, \hat v_\theta)$$ are called the physical components of $$\displaystyle \mathbf v$$. Also, the hat over a boldface letter usually designates a unit vector. Egm6322.s09 14:33, 12 April 2009 (UTC)

Derive the Grad in Polar Coordinates (Method #1)

--Egm6322.s09.xyz 01:40, 10 April 2009 (UTC)

The following method is re-stated here from an online reference entitled, "Cylindrical Coordinates" by A.J. Mallinckrodt

Recall the translated variables from cartesian to polar coordinates:

$$x = r cos\theta$$

$$y = r sin\theta$$

$$z = z$$

The basis vectors in polar coordinates are:

$$\blacktriangleright \hat r = \frac{\vec r}{r} = \frac{x \hat i + y \hat j}{r} = \frac{x}{r} \hat i + \frac{y}{r} \hat j $$


 * $$= \frac{rcos\theta}{r} \hat i + \frac{rsin\theta}{r} \hat j$$

$$\therefore \hat r = cos\theta \hat i + sin\theta \hat j$$

$$\blacktriangleright \hat \theta = \hat z \times \hat r$$


 * $$=(\hat k) \times (\cos\theta \hat i + \sin\theta \hat j)$$


 * $$\therefore \hat \theta = - \sin\theta \hat i + \cos\theta \hat j$$

$$\blacktriangleright \hat z = \hat z$$

Assumptions:


 * 1. $$u =u(r, \theta, z)$$ is a scalar field
 * 2. $$du$$ is proportional to $$d\vec R$$, where $$\vec R$$ is the displacement vector

$$\therefore du = \frac{\partial u}{\partial r}dr + \frac{\partial u}{\partial \theta}d\theta + \frac{\partial u}{\partial z}dz$$

and

$$du = \vec \nabla u \cdot d\vec R$$

Equate both expressions for du:

$$\frac{\partial u}{\partial r}dr + \frac{\partial u}{\partial \theta}d\theta + \frac{\partial u}{\partial z}dz = (\vec \nabla u)_r dr + (\vec \nabla u)_{\theta} r d\theta + (\vec \nabla u)_z dz$$

By inspection,

$$\blacktriangleright (\vec \nabla u)_r = \frac{\partial u}{\partial r}$$

$$\blacktriangleright r (\vec \nabla u)_{\theta} = \frac{\partial u}{\partial \theta}$$, or, $$(\vec \nabla u)_{\theta} = \frac{1}{r}\frac{\partial u}{\partial \theta} $$

$$\blacktriangleright (\vec \nabla u)_z = \frac{\partial u}{\partial z} $$

Therefore, the gradient operator can be written in terms of the basis vectors:

$$\therefore \vec \nabla = \hat r \frac{\partial}{\partial r} + \frac{\hat \theta}{r}\frac{\partial}{\partial \theta} + \hat z \frac{\partial}{\partial z} $$

Let $$\vec A = A_r \hat r + A_{\theta} \hat \theta + A_z \hat z$$

Then,

$$\vec \nabla \cdot \vec A =\left[ \hat r \frac{\partial}{\partial r} + \frac{\hat \theta}{r}\frac{\partial}{\partial \theta} + \hat z \frac{\partial}{\partial z} \right] \cdot \left[ A_r \hat r + A_{\theta} \hat \theta + A_z \hat z \right] $$


 * $$= \hat r \left \{

\frac{\partial}{\partial r} [A_r \hat r] +\frac{\partial}{\partial r} [A_{\theta} \hat \theta] +\frac{\partial}{\partial r} [A_z \hat z] \right \}$$


 * $$+ \frac{\hat \theta}{r} \left \{

\frac{\partial}{\partial \theta} [A_r \hat r] +\frac{\partial}{\partial \theta} [A_{\theta} \hat \theta] +\frac{\partial}{\partial \theta} [A_z \hat z] \right \}$$


 * $$+ \hat z \left \{

\frac{\partial}{\partial z} [A_r \hat r] +\frac{\partial}{\partial z} [A_{\theta} \hat \theta] +\frac{\partial}{\partial z} [A_z \hat z] \right \}$$


 * $$=\hat r \left \{ \left( A_r \frac{\partial \hat r}{\partial r} + \hat r \frac{\partial A_r}{\partial r} \right)

+ \left( A_{\theta} \frac{\partial \hat \theta}{\partial r} + \hat \theta \frac{\partial A_{\theta}}{\partial r} \right) + \left( A_z \frac{\partial \hat z}{\partial r} + \hat z \frac{\partial A_z}{\partial r} \right) \right\}$$


 * $$+\frac{\hat \theta}{r} \left \{ \left( A_r \frac{\partial \hat r}{\partial \theta} + \hat r \frac{\partial A_r}{\partial \theta} \right)

+ \left( A_{\theta} \frac{\partial \hat \theta}{\partial \theta} + \hat \theta \frac{\partial A_{\theta}}{\partial \theta} \right) + \left( A_z \frac{\partial \hat z}{\partial \theta} + \hat z \frac{\partial A_z}{\partial \theta} \right) \right\}$$


 * $$+\hat z \left \{ \left( A_r \frac{\partial \hat r}{\partial z} + \hat r \frac{\partial A_r}{\partial z} \right)

+ \left( A_{\theta} \frac{\partial \hat \theta}{\partial z} + \hat \theta \frac{\partial A_{\theta}}{\partial z} \right) + \left( A_z \frac{\partial \hat z}{\partial z} + \hat z \frac{\partial A_z}{\partial z} \right) \right\}$$

The respective derivatives are determined from the basis vectors as follows:

$$\blacktriangleright \frac{\partial{\hat z}}{\partial r} = \frac{\partial{\hat z}}{\partial \theta} = \frac{\partial{\hat z}}{\partial z} = 0$$

$$\blacktriangleright \frac{\partial{\hat r}}{\partial r} = \frac{\partial{\hat r}}{\partial z} = 0$$

$$\blacktriangleright \frac{\partial{\hat r}}{\partial \theta} = -\sin \theta \hat i + \cos\theta \hat j = \hat \theta$$

$$\blacktriangleright \frac{\partial{\hat \theta}}{\partial r} = \frac{\partial{\hat \theta}}{\partial z} = 0$$

$$\blacktriangleright \frac{\partial{\hat \theta}}{\partial \theta} = -\cos \theta \hat i - \sin\theta \hat j = -\hat r$$

Substituting these values yields:

$$\vec \nabla \cdot \vec A = $$


 * $$=\hat r \left \{ \left( \cancel{A_r (0)} + \hat r \frac{\partial A_r}{\partial r} \right)

+ \left( \cancel{A_{\theta} (0)} + \hat \theta \frac{\partial A_{\theta}}{\partial r} \right) + \left( \cancel{A_z (0)} + \hat z \frac{\partial A_z}{\partial r} \right) \right\}$$


 * $$+\frac{\hat \theta}{r} \left \{ \left( A_r (\hat \theta) + \hat r \frac{\partial A_r}{\partial \theta} \right)

+ \left( A_{\theta} (-\hat r) + \hat \theta \frac{\partial A_{\theta}}{\partial \theta} \right) + \left( \cancel{A_z (0)} + \hat z \frac{\partial A_z}{\partial \theta} \right) \right\}$$


 * $$+\hat z \left \{ \left( \cancel{A_r (0)} + \hat r \frac{\partial A_r}{\partial z} \right)

+ \left( \cancel{A_{\theta} (0)} + \hat \theta \frac{\partial A_{\theta}}{\partial z} \right) + \left( \cancel{A_z (0)} + \hat z \frac{\partial A_z}{\partial z} \right) \right\}$$

Taking the respective dot products of the remaining terms yields:

$$\vec \nabla \cdot \vec A = \frac{\partial A_r}{\partial r} + \frac{1}{r} \left[ A_r + \frac{\partial A_{\theta}}{\partial \theta}\right] + \frac{\partial A_z}{\partial z} $$

or

$$\vec \nabla \cdot \vec A = \frac{1}{r} \frac{\partial }{\partial r}(A_r r) + \frac{1}{r} \frac{\partial A_{\theta}}{\partial \theta} + \frac{\partial A_z}{\partial z} $$

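The basis-vector derivatives listed above do the real work in this calculation, and they are easy to confirm symbolically (a sketch using sympy matrices for the $$\hat i, \hat j$$ components):

```python
import sympy as sp

th = sp.symbols('theta')

# unit basis vectors in the i, j plane
r_hat = sp.Matrix([sp.cos(th), sp.sin(th)])
th_hat = sp.Matrix([-sp.sin(th), sp.cos(th)])

print(sp.simplify(r_hat.diff(th) - th_hat))   # zero vector: d(r_hat)/dtheta = theta_hat
print(sp.simplify(th_hat.diff(th) + r_hat))   # zero vector: d(theta_hat)/dtheta = -r_hat
```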

 Provide the reference from where you got Method #1, which is clearly not the method taught in class. Of course, you should get the same results. The method taught in class is Method #2 below; see also above for the derivation of the divergence in polar coordinates as done in class. Good work on latex equations. Also, as mentioned, the hat over a boldface letter usually designates a unit vector. Egm6322.s09 14:33, 12 April 2009 (UTC)

Added hyperlink for the online source reference for Method#1 --Egm6322.s09.xyz 21:26, 15 April 2009 (UTC)

Derive the Grad in Polar Coordinates (Method #2) Egm6322.s09.bit.sahin 14:59, 10 April 2009 (UTC)

The gradient of a scalar field $$T$$ in Cartesian coordinates is

$$grad T=\frac{\partial T}{\partial x}\mathbf{i}+\frac{\partial T}{\partial y}\mathbf{j}$$

Rectangular coordinates and polar coordinates are related as follows

$$x=rcos\theta $$, $$y=rsin\theta $$

and additionally

$$r^{2}=x^{2}+y^{2}$$, $$\theta =tan^{-1}\left ( y/x \right )$$

The two polar basis vectors are expressed in terms of the rectangular basis vectors as follows

$$\mathbf{e_{r}}=cos\theta \mathbf{i}+sin\theta \mathbf{j}$$,

$$\mathbf{e_{\theta }}=-sin\theta \mathbf{i}+cos\theta \mathbf{j}$$

or

$$\mathbf{i}=cos\theta \mathbf{e_{r}}-sin\theta \mathbf{e_{\theta }}$$,

$$\mathbf{j}=sin\theta \mathbf{e_{r}}+cos\theta \mathbf{e_{\theta }}$$

By using the chain rule, it can be written that

$$\frac{\partial T}{\partial x}\mathbf{i}=\left (\frac{\partial T}{\partial r}\frac{\partial r}{\partial x}+\frac{\partial T}{\partial \theta}\frac{\partial \theta}{\partial x} \right )\left ( cos\theta \mathbf{e_{r}}-sin \theta \mathbf{e_{\theta}} \right )$$

$$\frac{\partial T}{\partial x}\mathbf{i}=\left (\frac{\partial T}{\partial r} cos \theta+\frac{\partial T}{\partial \theta} \frac{-sin \theta}{r} \right )\left ( cos\theta \mathbf{e_{r}}-sin \theta \mathbf{e_{\theta}} \right )$$

$$\frac{\partial T}{\partial x}\mathbf{i}=\frac{\partial T}{\partial r}cos^{2}\theta\mathbf{e_{r}}-\frac{\partial T}{\partial r}sin\theta cos\theta \mathbf{e_{\theta }}-\frac{\partial T}{\partial \theta }\frac{1}{r}sin\theta cos\theta \mathbf{e_{r}}+\frac{\partial T}{\partial \theta }\frac{1}{r}sin^{2} \theta \mathbf{e_{\theta}}$$

Similarly

$$\frac{\partial T}{\partial y}\mathbf{j}=\left (\frac{\partial T}{\partial r}\frac{\partial r}{\partial y}+\frac{\partial T}{\partial \theta}\frac{\partial \theta}{\partial y} \right )\left ( sin\theta \mathbf{e_{r}}+cos \theta \mathbf{e_{\theta}} \right )$$

$$\frac{\partial T}{\partial y}\mathbf{j}=\left (\frac{\partial T}{\partial r}sin \theta+ \frac{\partial T}{\partial \theta}\frac{cos \theta}{r} \right )\left ( sin\theta \mathbf{e_{r}}+cos \theta \mathbf{e_{\theta}} \right )$$

$$\frac{\partial T}{\partial y}\mathbf{j}=\frac{\partial T}{\partial r}sin^{2}\theta\mathbf{e_{r}}+\frac{\partial T}{\partial r}sin\theta cos\theta \mathbf{e_{\theta }}+\frac{\partial T}{\partial \theta }\frac{1}{r}sin\theta cos\theta \mathbf{e_{r}}+\frac{\partial T}{\partial \theta }\frac{1}{r}cos^{2} \theta \mathbf{e_{\theta}}$$

Eventually,

$$\frac{\partial T}{\partial x} \mathbf{i}+\frac{\partial T}{\partial y}\mathbf{j}=\frac{\partial T}{\partial r}\mathbf{e_{r}}cos^{2} \theta+\frac{\partial T}{\partial \theta}\frac{sin^{2} \theta \mathbf{e_{\theta}}}{r}+\frac {\partial T}{\partial r}sin^{2} \theta \mathbf{e_{r}}+\frac{\partial T}{\partial \theta}\frac{cos^{2} \theta \mathbf{e_{\theta}}}{r}$$

$$\frac{\partial T}{\partial x} \mathbf{i}+\frac{\partial T}{\partial y}\mathbf{j}=\frac{\partial T}{\partial r}\mathbf{e_{r}}+\frac{1}{r}\frac{\partial T}{\partial \theta}\mathbf{e_{\theta}}$$
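As a check on this final expression, one can start from a concrete Cartesian scalar field, compute its gradient, and compare the $$\mathbf{e_{r}}$$ and $$\mathbf{e_{\theta }}$$ components against the polar formula (a sketch using sympy; the field $$T$$ is an assumed example, not from the report):

```python
import sympy as sp

x, y, r, th = sp.symbols('x y r theta', positive=True)
polar = {x: r*sp.cos(th), y: r*sp.sin(th)}

# assumed example scalar field
T_xy = x**2 * y + y**3
T_pol = T_xy.subs(polar)

# Cartesian gradient, re-expressed in polar variables
Tx = sp.diff(T_xy, x).subs(polar)
Ty = sp.diff(T_xy, y).subs(polar)

# project onto e_r = (cos, sin) and e_theta = (-sin, cos)
grad_r = Tx*sp.cos(th) + Ty*sp.sin(th)
grad_th = -Tx*sp.sin(th) + Ty*sp.cos(th)

# compare with dT/dr and (1/r) dT/dtheta
print(sp.simplify(grad_r - sp.diff(T_pol, r)))      # 0
print(sp.simplify(grad_th - sp.diff(T_pol, th)/r))  # 0
```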

=The Reynolds Transport Theorem=

Egm6322.s09.Three.nav 13:46, 24 April 2009 (UTC)

The Leibniz integral rule, named after Gottfried Leibniz, accomplishes differentiation of an integral. It tells us that

$$\frac{\mathrm{d} }{\mathrm{d} x}\left(\int^{B(x)}_{A(x)}F(x,\xi)d\xi \right)= \int^{B(x)}_{A(x)}\frac{\partial F(x,\xi)}{\partial x}d\xi+\left(F(x, \xi= B(x))\frac{\mathrm{d} B}{\mathrm{d} x} \right)- \left(F(x, \xi= A(x))\frac{\mathrm{d} A}{\mathrm{d} x} \right)$$ --(1)

This can be derived from the fundamental theorem of calculus.

The Leibniz theorem is applied as a tool in deriving most of the conservation laws in fluid mechanics. Consider a one-dimensional control volume that is neither fixed nor material, but is simply bounded by surfaces a(t) and b(t). These surfaces move with a velocity different from the local fluid velocity. Then the derivative of the integral, as given by the Leibniz theorem, is written as

$$\frac{\mathrm{d} }{\mathrm{d} t}\left(\int^{b(t)}_{a(t)}F(x,t)dx \right)= \int^{b(t)}_{a(t)}\frac{\partial F(x,t)}{\partial t}dx+\left(F(x= b(t),t)\frac{\mathrm{d} b}{\mathrm{d} t} \right)- \left(F(x= a(t),t)\frac{\mathrm{d} a}{\mathrm{d} t} \right)$$--(2)

where a(t), b(t) form the boundaries of the control volume considered and move at speeds da/dt and db/dt respectively.

Generalizing to a three-dimensional volume V(t) bounded by the moving surface S(t), whose local velocity is $$\mathbf{u_{s}}$$, we obtain the following form:

$$\frac{\mathrm{d} }{\mathrm{d} t}\left(\int_{V(t)}F(x,t)dV \right)= \int_{V(t)}\frac{\partial F(x,t)}{\partial t}dV+\int_{S(t)}F\,\mathbf{u_{s}}\cdot\mathbf{dS}$$--(3)

The surface integral over area S(t) in (3) includes the contributions from both the boundaries, thus excluding the need for the previous form of the equation (2).

For a control volume that is fixed, i.e. $$\mathbf{u_{s}}=0$$, (3) becomes

$$\frac{\mathrm{d} }{\mathrm{d} t}\left(\int_{V}F(x,t)dV \right)= \int_{V}\frac{\partial F(x,t)}{\partial t}dV$$--(4)

For a material volume, i.e. a control volume whose boundaries move at the local fluid velocity $$\mathbf{u}$$ (so that $$\mathbf{u_{s}}=\mathbf{u}$$), (3) becomes

$$\frac{\mathrm{D} }{\mathrm{D} t}\left(\int_{V(t)}F(x,t)dV \right)= \int_{V(t)}\frac{\partial F(x,t)}{\partial t}dV+\int_{S(t)}F\,\mathbf{u}\cdot\mathbf{dS}$$--(5)

Eqn.(5) is sometimes referred to as the Reynolds transport theorem. The notation D/Dt conveys that we are following a material control volume. This theorem is often used in formulating the basic conservation laws of fluid mechanics.

Applying Gauss' theorem, which states that $$\oint_{S}\mathbf{X}\cdot\mathbf{dS}= \int_{V}(\nabla\cdot\mathbf{X})dV$$, (5) can be rewritten as

$$\frac{\mathrm{D} }{\mathrm{D} t}\left(\int_{V(t)}F(x,t)dV \right)= \int_{V(t)}\frac{\partial F(x,t)}{\partial t}dV+\int_{V(t)}\nabla\cdot(\mathbf{u}F)dV$$--(6)

Consider F as a physical property

Substituting $$F=\rho$$ (the mass density) in (6) gives us the equation for conservation of mass. For a material volume, the mass $$\int\rho \,dV$$ remains constant with time. Hence from (6),

$$0= \int_{V(t)}\frac{\partial \rho}{\partial t}dV+\int_{V(t)}\nabla\cdot(\rho \mathbf{u})dV$$

(or)

$$\int_{V(t)}\left (\frac{\partial \rho}{\partial t}+\nabla\cdot(\rho \mathbf{u}) \right) dV = 0 $$--(7)

Similarly, substituting $$F=\rho \mathbf{v}$$ (the linear momentum per unit volume) in (6) gives us the equation for conservation of momentum, here in the absence of applied forces:

$$\int_{V(t)}\left (\frac{\partial (\rho \mathbf{v})}{\partial t}+\nabla\cdot(\rho \mathbf{v}\mathbf{u}) \right) dV = 0 $$

Hence we see the relevance of the Leibniz integral rule in the context of fluid mechanics and the fundamental role it plays in deriving the basic laws on which the complex science of fluid mechanics is built.
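The integrand of the continuity equation (7) can be illustrated with a manufactured one-dimensional example: a density profile advected at constant speed satisfies it identically (a sketch in sympy; the Gaussian profile and constant speed are assumed choices for illustration):

```python
import sympy as sp

x, t = sp.symbols('x t')
c = sp.symbols('c', positive=True)  # assumed constant advection speed

rho = sp.exp(-(x - c*t)**2)  # assumed density profile, advected at speed c
u = c                        # uniform velocity field

# integrand of Eq.(7) in 1-D: d(rho)/dt + d(rho u)/dx
residual = sp.simplify(sp.diff(rho, t) + sp.diff(rho*u, x))
print(residual)  # 0
```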

'''Similarity between the Reynolds transport theorem and the Leibniz rule.''' We have the derivative of an integral in 1-D given as

$$\frac{\mathrm{d} }{\mathrm{d} x}\int_{\xi=A(x)}^{\xi=B(x)}F(x,\xi)d\xi=\int_{\xi=A(x)}^{\xi=B(x)}\frac{\partial  }{\partial x}F(x,\xi)d\xi+\frac{\mathrm{d} B(x)}{\mathrm{d} x}F[x,\xi=B(x)]-\frac{\mathrm{d} A(x)}{\mathrm{d} x}F[x,\xi=A(x)] $$

When extended to a three-dimensional domain, we have the Leibniz rule as follows:

$$\frac{\mathrm{d} }{\mathrm{d} t}\iiint_\mathrm{R(t)} T_{ij\ldots}(x_i,t)dV=\iiint_\mathrm{R}\frac{\partial T_{ij\ldots}}{\partial t}dV+\iint_\mathrm{S}n_kw_kT_{ij\ldots}dS$$

where $$T_{ij\ldots}$$ is any property, be it a scalar, vector, or tensor, which is a function of space and time. The rate at which the total amount of the property in the region R changes over time equals the rate of change of the property within the region plus the flux of the property through the surface, where $$n_k$$ is the outward unit normal and $$w_k$$ is the velocity of the moving surface.

The Reynolds transport theorem can be shown to be analogous to the Leibniz theorem as follows:

$$\left.\frac{\mathrm{d} N}{\mathrm{d} t}\right|_\mathrm{system}=\frac{\partial }{\partial t}\int_\mathrm{cv}NdV +\int_\mathrm{cs}N\vec{v}\cdot d\vec{A}$$

which can be further explained as

$$\frac{\mathrm{d} }{\mathrm{d} t}\iiint_\mathrm{MR}F(x_i,t)dV=\frac{\partial }{\partial t}\iiint_\mathrm{AR}FdV+\iint_\mathrm{CS}F\vec{v}\cdot d\vec{S}$$

which states that the rate of change of the property $$F(x_i,t)$$ in the material region (MR) equals the rate of change of the property in the arbitrary region (AR, the control volume) plus the flux of the property through the control surface (CS).

Bringing the time derivative inside the integral over the arbitrary region, the equation can be written as follows, which is the Leibniz rule:

$$\frac{\mathrm{d} }{\mathrm{d} t}\iiint_\mathrm{MR}F(x_i,t)dV=\iiint_\mathrm{AR}\frac{\partial }{\partial t}FdV+\iint_\mathrm{CS}F\vec{v}\cdot d\vec{S}$$

Egm6322.s09.bit.gk 17:41, 10 April 2009 (UTC)

 Could you start from the Reynolds Transport Theorem in 3-D, i.e., Eq.(5) above, and show that Eq.(5) reduces Eq.(1) for the 1-D case? Egm6322.s09 20:01, 12 April 2009 (UTC)

Derive Solution to White's Fluid Mechanics Problem

--Egm6322.s09.lapetina 14:25, 9 April 2009 (UTC)

White's problem features a cylinder centered within a cylindrical container of fluid, rotating with an angular velocity of $$\Omega_i$$. White tells us that the $$\theta$$ momentum equation reduces to:

$$\nabla^2 v_\theta = \frac{1}{r} \frac{d}{dr} \left ( r \frac {dv_\theta}{dr} \right )= \frac{v_\theta}{r^2}$$

Expanding the middle expression with the product rule, and equating it with the last, results in the following ordinary differential equation: $$v''_{\theta}+ \frac{1}{r}v'_{\theta} - \frac{v_\theta}{r^2}=0$$

Using the Method of Undetermined Coefficients we can solve ordinary differential equations like this one by assuming a form of a solution, plugging it into our original equation, and checking.

Let's try a solution of the form:

$$v_\theta=C_1 r + \frac{C_2}{r}$$.

so that:

$$\frac{d v_\theta}{dr}=C_1-\frac {C_2}{r^2}$$ and:

$$\frac{d^2 v_\theta}{dr^2}=\frac{2 C_2}{r^3}$$. We can plug these values back into our original equation, which leaves us with:

$$v_\theta=r^2 \left [ \frac{2 C_2}{r^{3}} \right ] + r \left [ C_1 - \frac{C_2}{r^2} \right ] $$

which reduces to:

$$v_\theta= C_1 r + \frac{C_2}{r} $$

Our boundary conditions for this problem are that at $$r_i$$, the inner radius, the fluid's tangential velocity matches that of the rotating cylinder, $$v_\theta=\Omega_i r_i$$, while at the outer radius, $$r_o$$, the velocity is $$0$$.

Applying these two boundary conditions to our general solution leaves us with the following two equations:

$$\Omega_i r_i=C_1 r_i +\frac {C_2} {r_i}$$ and

$$0=C_1 r_o+ \frac{C_2}{r_o}$$.

We see that $$C_1=\frac{-C_2}{ r^{2}_o}$$, and that $$C_1=-\frac{\Omega_i r^2_i}{(r^2_o-r^2_i )}$$ so that $$C_2=-C_1 r^2_o$$

Plugging these values into our general solution $$v_\theta= C_1 r + \frac{C_2}{r} $$ leaves us with:

$$v_\theta= \frac{\Omega_i r^2_i}{(r^2_o-r^2_i )} \frac {r^2_o} {r} - \frac{\Omega_i r^2_i}{(r^2_o-r^2_i )} r$$

$$r\, v_\theta =\Omega_i r^2_i \frac{r^2_o-r^2}{(r^2_o-r^2_i )}$$.

This can be rewritten as: $$v_\theta =\Omega_i \frac{r^2_i}{r} \frac{r^2_o-r^2}{(r^2_o-r^2_i )}$$

White's solution is expressed as:

$$v_\theta = \Omega_i r_i \frac{\frac{r_o}{r}-\frac{r}{r_o}}{\frac{r_o}{r_i}-\frac{r_i}{r_o}}$$

which can be manipulated in the following fashion:

$$v_\theta = \Omega_i r_i \frac{\frac{r_o}{r}-\frac{r}{r_o}}{\frac{r_o}{r_i}-\frac{r_i}{r_o}}\left [ \frac{r r_o r_i}{r r_o r_i} \right ]=\Omega_i \frac{r^2_i}{r} \frac{r^2_o-r^2}{(r^2_o-r^2_i )}$$.

The two expressions agree, confirming the assumed form of the solution.
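As an additional sanity check (not from White's text), the closed-form $$v_\theta(r)$$ can be verified numerically against both the differential equation and the two boundary conditions; the values of $$\Omega_i$$, $$r_i$$, and $$r_o$$ below are made up for illustration.

```python
# Numerical check of v_theta(r) = Omega_i r_i^2 / r * (r_o^2 - r^2)/(r_o^2 - r_i^2)
# against v'' + v'/r - v/r^2 = 0; Omega_i, r_i, r_o are illustrative values.

Omega_i, r_i, r_o = 3.0, 1.0, 2.0

def v(r):
    return Omega_i * r_i**2 / r * (r_o**2 - r**2) / (r_o**2 - r_i**2)

h = 1e-5
for r in (1.2, 1.5, 1.8):
    v1 = (v(r + h) - v(r - h)) / (2 * h)            # v' by central difference
    v2 = (v(r + h) - 2 * v(r) + v(r - h)) / h**2    # v'' by central difference
    assert abs(v2 + v1 / r - v(r) / r**2) < 1e-4    # ODE residual ≈ 0

assert abs(v(r_i) - Omega_i * r_i) < 1e-12   # inner wall moves with the cylinder
assert abs(v(r_o)) < 1e-12                   # outer wall is at rest
```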

 In the above box, you have not exactly applied the method of undetermined coefficient to derive the solution; essentially, you only verified the solution given in the book. Actually, you should assume the solution of the form $$\displaystyle r^n$$, then solve for the undetermined coefficient $$\displaystyle n$$; you will then have two possible solutions. Next, superpose these two solutions to obtain the expression shown in the above box. The solution just described is given the next two boxes. Egm6322.s09 10:13, 15 April 2009 (UTC)

Method 2 for the solution of the differential equation for flow between rotating cylinders Egm6322.s09.bit.sahin 15:41, 10 April 2009 (UTC)

$$\nabla^2 v_\theta = \frac{1}{r} \frac{d}{dr} \left ( r \frac {dv_\theta}{dr} \right )= \frac{v_\theta}{r^2}$$

By using the product rule we find

$$\frac{1}{r}\left ( \frac{\partial v_{\theta }}{\partial r}+\frac{\partial ^{2}v_{\theta }}{\partial r^{2}}r \right )=\frac{v_{\theta }}{r^{2}}$$

rearranging it

$$r^{2}\frac{\partial ^{2}v_{\theta }}{\partial r^{2}}+r\frac{\partial v_{\theta }}{\partial r}-v_{\theta }=0$$

This is basically a Cauchy equation, which is treated in detail in the following collapsible box.

The auxiliary equation for the above equation is (a=1, b=-1)

$$m^{2}-1=0$$

The roots are $$m_{1}=1$$ and $$m_{2}=-1$$. A fundamental system of real solutions for all positive r is

$$v_{\theta 1}=r$$, $$v_{\theta 2}=1/r$$

and the corresponding general solution for all those r is

$$v_{\theta }=c_{1}r+\frac{c_{2}}{r}$$

Cauchy Equation (Kreyszig, 1967) Egm6322.s09.bit.sahin 15:44, 10 April 2009 (UTC)

The so-called Cauchy equation or Euler equation

$$x^{2}y''+axy'+by=0$$, (a,b constant)

can also be solved by purely algebraic manipulations. By substituting

$$y=x^{m}$$

and its derivatives into the first equation we find

$$x^{2}m\left ( m-1 \right )x^{m-2}+axmx^{m-1}+bx^{m}=0$$.

By omitting the common power $$x^{m}$$, which is not zero when $$x\neq 0$$, we obtain the auxiliary equation

$$m^{2}+\left ( a-1 \right )m+b=0$$.

If the roots $$m_{1}$$ and $$m_{2}$$ of this equation are different, then the functions

$$y_{1}\left ( x \right )=x^{m_{1}}$$ and $$y_{2}\left ( x \right )=x^{m_{2}}$$

constitute a fundamental system of solutions for all $$x$$ for which these functions are defined. The corresponding general solution is

$$y=c_{1}x^{m_{1}}+c_{2}x^{m_{2}}$$
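The auxiliary-equation procedure in the box above can be sketched in code: compute the roots of $$m^{2}+(a-1)m+b=0$$ and confirm numerically that $$y=x^{m}$$ satisfies the Cauchy equation. The case $$a=1$$, $$b=-1$$ is the rotating-cylinder equation; the sample point $$x=2$$ is arbitrary.

```python
import numpy as np

# For x^2 y'' + a x y' + b y = 0, substituting y = x^m yields the
# auxiliary equation m^2 + (a-1) m + b = 0.  With a = 1, b = -1
# (the rotating-cylinder case) the roots should be m = +1 and m = -1.

a, b = 1.0, -1.0
m1, m2 = np.roots([1.0, a - 1.0, b]).real   # roots of m^2 + (a-1)m + b

# Confirm numerically that y = x^m solves the ODE at a sample point x = 2
h, x = 1e-5, 2.0
for m in (m1, m2):
    y = lambda t, m=m: t**m
    y1 = (y(x + h) - y(x - h)) / (2 * h)
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    assert abs(x**2 * y2 + a * x * y1 + b * y(x)) < 1e-4

print(sorted([round(float(m1), 6), round(float(m2), 6)]))
```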

=Classification of PDEs=

The Beauty of the Matrix Operator Approach
Egm6322.s09.bit.la

Under a change of coordinates with Jacobian matrix $$\mathbf{J}$$ (see the section on classification of PDEs in a previous report), the coefficient matrix $$\mathbf{A}$$ transforms as:

$$\mathbf{\bar{A}}=\mathbf{J}\mathbf{A}\mathbf{J}^T$$

$$\det \mathbf{\bar{A}}=\det\mathbf{J}\,\det\mathbf{A}\,\det\mathbf{J}^T $$

where: $$\det\mathbf{J}^T= \det\mathbf{J}$$, so that

$$\det\mathbf{\bar{A}}=(\det\mathbf{A})(\det\mathbf{J})^2$$

$$\bar{a}\bar{c}-\bar{b}^2=(ac-b^2)(\phi _x \psi_y- \phi _y \psi_x)^2$$

The beauty of the matrix operator approach is that this transformation rule for the discriminant follows in two lines from determinant identities, with no index-by-index expansion; since $$(\det\mathbf{J})^2>0$$, the sign of the discriminant, and hence the type of the PDE, is invariant under the coordinate change.

 What is "p 19.2"? You should not refer to the page number of the lecture transparencies in the report, since no one knows what is in that page. It is better to refer to the corresponding section in a previous report by creating an internal link; for example, see the box below the section on Another Axisymmetric Problem in Report R4, and compare this method of derivation with the above. Also, the above was a direct copy of the lecture transparency without additional explanation (what was the "beauty"?). Egm6322.s09 12:03, 13 April 2009 (UTC)
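The determinant identity above can be spot-checked numerically; the matrices below are arbitrary illustrative values, not taken from the lecture.

```python
import numpy as np

# Spot-check det(A_bar) = det(A) (det J)^2 for A_bar = J A J^T,
# using arbitrary illustrative matrices.

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])      # symmetric coefficient matrix [[a, b], [b, c]]
J = np.array([[1.0, 2.0],
              [0.5, 3.0]])      # nonsingular Jacobian of the coordinate change

A_bar = J @ A @ J.T
lhs = np.linalg.det(A_bar)
rhs = np.linalg.det(A) * np.linalg.det(J)**2
print(lhs, rhs)                 # both ≈ 20.0 for these matrices
assert np.isclose(lhs, rhs)
```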

The Canonical Form
$$\lfloor x \ y\rfloor\begin{bmatrix} a & b\\ b & c\end{bmatrix}\begin{Bmatrix}x\\ y\end{Bmatrix}+\lfloor d \ e \rfloor\begin{Bmatrix}x\\ y\end{Bmatrix} + f =0$$

$$ ax^2+2bxy+cy^{2}+dx+ey+f=0$$

 Add a blank space between $$\displaystyle x$$ and $$\displaystyle y$$ so not to confuse with the product $$\displaystyle xy$$, i.e., $$\displaystyle \lfloor x \ y \rfloor$$, instead of $$\displaystyle \lfloor x y \rfloor$$. Similarly, $$\displaystyle \lfloor d \ e \rfloor$$, instead of $$\displaystyle \lfloor d e \rfloor$$. Egm6322.s09 13:54, 13 April 2009 (UTC)



These are called canonical forms because they generate circles, ellipses, parabolas, and hyperbolas, the conic sections obtained by cutting a cone with a plane at different angles.

Ellipses

$$\left ( \frac{x}{a} \right )^2+\left ( \frac{y}{b} \right )^2=1$$

where : $$\left ( \frac{x}{a} \right )=\xi $$

and $$\left ( \frac{y}{b} \right )=\eta$$

If in the above equation $$a=b$$, then you will have a circle (a special case of the ellipse).

Parabolas

$$\pm \xi^2-\eta = 0$$

when:

$$+\xi^2 \Rightarrow$$ concave in the $$+\eta$$ direction

$$-\xi^2 \Rightarrow$$ concave in the $$-\eta$$ direction

$$ax^2-y=0$$

$$\pm (\sqrt{\left | a \right |}x)^2-y=0$$

Where:

$$\sqrt{\left | a \right |}x=\xi $$

and $$y=\eta$$

Hyperbolas

$$\xi ^2-\eta ^2=\pm 1 $$

$$\xi\eta=\pm 1$$

Goal: find a coordinate transformation that removes the mixed product $$xy$$.
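This goal can be illustrated numerically: rotating to the eigenvector basis of the coefficient matrix eliminates the cross term. The coefficients and the test point below are made up.

```python
import numpy as np

# Rotating coordinates by the eigenvectors of A = [[a, b], [b, c]] removes
# the mixed product xy from a x^2 + 2 b x y + c y^2 (illustrative numbers).

a, b, c = 2.0, 1.0, 3.0
A = np.array([[a, b], [b, c]])
lam, V = np.linalg.eigh(A)         # A = V diag(lam) V^T, V orthogonal

x = np.array([0.7, -1.3])          # arbitrary test point (x, y)
xi = V.T @ x                       # rotated coordinates (xi, eta)

original = x @ A @ x                              # has the 2 b x y cross term
rotated = lam[0] * xi[0]**2 + lam[1] * xi[1]**2   # pure squares, no cross term
assert np.isclose(original, rotated)
```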

The Eigenvalue Problem
$$\mathbf J =\begin{bmatrix} a & b\\ c & d \end{bmatrix}$$ and  $$\det \mathbf J= \left (ad-bc  \right )$$

$$\mathbf J^T =\begin{bmatrix} a & c\\ b & d \end{bmatrix}$$ and  $$\det \mathbf J^T= \left (ad-bc  \right )$$

We see that $$\det \mathbf J=\det \mathbf J^T$$

The eigenvalue problem of $$\mathbf A=\begin{bmatrix} a & b\\ b & c \end{bmatrix}$$ is $$\det\left ( \mathbf A-\lambda \mathbf I\right )=0$$. Solving for $$ \lambda$$, we have

$$det\left(\begin{bmatrix} a & b\\ b & c \end{bmatrix} -\lambda \begin{bmatrix} 1 & 0\\ 0 & 1 \end{bmatrix}\right) =0$$

$$\Rightarrow det\left( \begin{bmatrix} a-\lambda & b\\ b & c-\lambda \end{bmatrix}\right)=0 $$

$$\Rightarrow \left(a-\lambda\right)\left(c-\lambda\right)-b^2=0$$

$$\Rightarrow \lambda^2-\lambda(a+c)+(ac-b^2)=0$$    (1)

The two roots of $$\lambda$$ are

$$\lambda_1=\frac{(a+c )+\sqrt{(a+c)^2-4(ac-b^2)}}{2}$$

and

$$\lambda_2=\frac{(a+c )-\sqrt{(a+c)^2-4(ac-b^2)}}{2}$$

$$\therefore$$ from equation (1), the sum of the roots is $$(a+c)$$ and the product of the roots is $$(ac-b^2)$$

$$\therefore$$ from the factor theorem, equation (1) can be written as

$$\left (\lambda-\lambda_1  \right )\left (\lambda-\lambda_2   \right )=0$$
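These root properties can be checked numerically; the sketch below uses the same matrix as the MATLAB example later in this section ($$a=2$$, $$b=5$$, $$c=7$$).

```python
import numpy as np

# Check lambda_1 + lambda_2 = a + c and lambda_1 * lambda_2 = ac - b^2
# for the symmetric matrix [[a, b], [b, c]] with a = 2, b = 5, c = 7.

a, b, c = 2.0, 5.0, 7.0
lam = np.linalg.eigvalsh(np.array([[a, b], [b, c]]))
print(lam.sum(), lam.prod())                  # ≈ 9.0 and -11.0
assert np.isclose(lam.sum(), a + c)           # sum of roots = trace
assert np.isclose(lam.prod(), a * c - b**2)   # product of roots = determinant
```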

'''Theorem'''

Any real symmetric matrix of order $$n\times n$$ is diagonalizable, i.e., it has $$n$$ eigenvalues that are real numbers $$(\lambda_1,...,\lambda_n)$$. This matrix, called $$\mathbf A$$, can be decomposed as:

$$\mathbf A=\mathbf V \mathbf \Lambda \mathbf V^T$$

where, since $$\mathbf V$$ is an orthogonal matrix,

$$\mathbf V^{-1}=\mathbf V^T$$ and $$\mathbf \Lambda=\mathrm{diag}[\lambda_1,...,\lambda_n]$$ and $$\mathbf V\mathbf V^{-1}=\mathbf V\mathbf V^T=\mathbf I$$

$$\therefore \det(\mathbf I)=\det(\mathbf V\mathbf V^T)=(\det\mathbf V)^2$$

$$\Rightarrow \det\mathbf V=\pm1$$

$$\det\mathbf V=1$$ in a right-handed coordinate system

and

$$\det\mathbf V=-1$$ in a left-handed coordinate system

$$\therefore \det \mathbf A=\det\left( \mathbf V \mathbf \Lambda \mathbf V^T\right)$$

Egm6322.s09.bit.gk 17:34, 10 April 2009 (UTC)

Example of Eigenvalues and Eigenvectors

To find the eigenvalues and eigenvectors of

$$\mathbf A=\begin{bmatrix} 2 & 5\\ 5 & 7 \end{bmatrix}$$

Using MATLAB, with the syntax

$$[V,D]=\mathrm{eig}(\mathbf A)$$

where the eigenvalues, on the diagonal of $$D$$, are

$$D= \begin{bmatrix} -1.0902 & 0\\ 0      & 10.0902 \end{bmatrix}$$

and the eigenvectors, as columns of the modal matrix $$V$$, are

$$V= \begin{bmatrix} -0.8507 & 0.5257\\ 0.5257 & 0.8507 \end{bmatrix}$$

The modal matrix $$M$$ is represented generally as

$$M^{-1}AM=D$$, where $$A$$ is the given matrix and $$D$$ is the diagonal matrix containing the eigenvalues of $$A$$.

From the given eigenvalues, we can find the eigenvectors:

$$(A-\lambda_1 I)V_1=0$$

$$(A-\lambda_2 I)V_2=0$$

$$\Rightarrow AV_1=V_1\lambda_1$$

and

$$\Rightarrow AV_2=V_2\lambda_2$$

Writing these simultaneous equations in matrix form, we have

$$A\begin{bmatrix} V_1 &V_2 \end{bmatrix}=\begin{bmatrix} V_1 &V_2 \end{bmatrix}\lambda$$

where $$\lambda=\begin{bmatrix} \lambda_1 & 0\\ 0&\lambda_2 \end{bmatrix} $$ is the diagonal matrix of eigenvalues. $$ \therefore AV=V\lambda$$

This can also be shown by the numerical example from above using MATLAB.

$$A=\begin{bmatrix} 2 &5 \\ 5& 7 \end{bmatrix}$$   ;    $$V= \begin{bmatrix} -0.8507 & 0.5257\\ 0.5257 & 0.8507 \end{bmatrix}$$   ;     $$D= \begin{bmatrix} -1.0902 & 0\\ 0      & 10.0902 \end{bmatrix}$$

$$\therefore$$ using MATLAB, we have

$$AV=VD=\begin{bmatrix} 0.9271 & 5.3049 \\   -0.5736& 8.5834 \end{bmatrix} $$
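The same computation can be reproduced outside MATLAB; a sketch in Python/NumPy for the same matrix (eigenvector signs may differ between the two tools, but $$AV=VD$$ must still hold):

```python
import numpy as np

# Reproduce [V, D] = eig(A) for A = [[2, 5], [5, 7]] and check A V = V D.

A = np.array([[2.0, 5.0],
              [5.0, 7.0]])
lam, V = np.linalg.eigh(A)       # symmetric A: real eigenvalues, orthogonal V
D = np.diag(lam)

print(np.round(lam, 4))          # [-1.0902 10.0902], matching MATLAB's D
assert np.allclose(A @ V, V @ D)
assert np.allclose(V @ V.T, np.eye(2))   # V is orthogonal, V^{-1} = V^T
```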

Egm6322.s09.bit.gk 17:34, 10 April 2009 (UTC)

=References=

Kreyszig, E., ''Advanced Engineering Mathematics'', Wiley, 1967.

=Signatures=

--Egm6322.s09.xyz 01:46, 10 April 2009 (UTC)

Egm6322.s09.bit.la 00:49, 10 April 2009 (UTC)

--EGM6322.S09.TIAN 01:05, 10 April 2009 (UTC)

Egm6322.s09.three.liu 14:10, 10 April 2009 (UTC)

Egm6322.s09.Three.nav 15:05, 10 April 2009 (UTC)

Egm6322.s09.bit.sahin 15:49, 10 April 2009 (UTC)

Egm6322.s09.bit.gk 17:46, 10 April 2009 (UTC)

Egm6322.s09.Three.ge 18:14, 10 April 2009 (UTC)

--Egm6322.s09.lapetina 18:45, 10 April 2009 (UTC)