User:Egm6321.f12.team2.yan

= Problem 1.1 - Derive The Second Total Time Derivative of the equation (2) p.1a-4 =

Given
$$\displaystyle\left.f(S,t)\right|_{S = Y^1(t)} = f(Y^1(t),t)$$

Find
The second total time derivative of $$\displaystyle\left.f(S,t)\right|_{S = Y^1(t)} = f(Y^1(t),t)$$

Total Time Derivative
Consider $$\displaystyle S=Y^1(t) $$. By the chain rule, the total time derivative of the given function can be expressed as:

$$\displaystyle \frac{d}{dt}f(Y^1(t),t) = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial S}\frac{dY^1}{dt} $$

Substitute $$\displaystyle \dot Y^1 $$ for $$\displaystyle \frac{dY^1}{dt} $$:

$$\displaystyle \frac{df}{dt} = \frac{\partial f}{\partial t} + \frac{\partial f}{\partial S}\,\dot Y^1 \qquad (1.3)$$

Second Total Time Derivative
Take the total time derivative of equation (1.3), applying the chain rule to both $$\displaystyle \frac{\partial f}{\partial t} $$ and $$\displaystyle \frac{\partial f}{\partial S} $$:

$$\displaystyle \frac{d^2f}{dt^2} = \frac{\partial^2 f}{\partial t^2} + 2\frac{\partial^2 f}{\partial S\,\partial t}\,\dot Y^1 + \frac{\partial^2 f}{\partial S^2}\left(\dot Y^1\right)^2 + \frac{\partial f}{\partial S}\,\ddot Y^1 $$
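The chain-rule expansion of the second total time derivative can be spot-checked numerically. As a minimal sketch, assume the hypothetical functions $$f(S,t)=S^2t+\sin t$$ and $$Y^1(t)=t^3$$ (not from the lecture notes), and compare a central finite-difference second derivative of $$f(Y^1(t),t)$$ against the chain-rule expression:

```python
import math

# Hypothetical sample functions (illustration only): f(S,t) = S^2 t + sin t, Y(t) = t^3
f = lambda S, t: S**2 * t + math.sin(t)
Y = lambda t: t**3
Ydot = lambda t: 3 * t**2
Yddot = lambda t: 6 * t

# Partial derivatives of this sample f, computed by hand
f_S = lambda S, t: 2 * S * t
f_tt = lambda S, t: -math.sin(t)
f_St = lambda S, t: 2 * S          # mixed partial d^2 f / dS dt
f_SS = lambda S, t: 2 * t

def g(t):
    """f restricted to the trajectory S = Y(t)."""
    return f(Y(t), t)

def d2g_chain(t):
    """Chain-rule expansion of the second total time derivative."""
    S = Y(t)
    return (f_tt(S, t) + 2 * f_St(S, t) * Ydot(t)
            + f_SS(S, t) * Ydot(t)**2 + f_S(S, t) * Yddot(t))

t0, h = 0.7, 1e-4
fd = (g(t0 + h) - 2 * g(t0) + g(t0 - h)) / h**2  # central second difference
print(abs(fd - d2g_chain(t0)))  # small: finite-difference accuracy only
```

Here $$g(t)=t^7+\sin t$$, so both expressions should equal $$42t^5-\sin t$$ up to finite-difference error.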

=R*2.5=

Given
Consider the following function:

$$\displaystyle \phi(x,y) = x^2 y^{\frac{3}{2}} + 3\ln x + 2\ln y \qquad (2.5.1)$$

Find
$$G(y',y,x) = \frac{d}{dx}\phi(x,y) = 0$$

and show that (2.5.1) is an N1-ODE.

Solution
From chain-rule:

$$\frac{d\phi}{dx} = \frac{\partial\phi}{\partial x} + \frac{\partial\phi}{\partial y}\frac{dy}{dx}$$

Applying chain-rule to (2.5.1) yields:

$$ \begin{align} \frac{\partial\phi}{\partial x} &= 2xy^{\frac{3}{2}} + 3x^{-1} \\ \frac{\partial\phi}{\partial y} &= \frac{3}{2}x^2y^{\frac{1}{2}} + 2y^{-1} \end{align} $$

where $$ \displaystyle y' := \frac{dy}{dx} $$

Substituting the partials into the chain rule gives:

$$ G(y',y,x) = 2xy^{\frac{3}{2}} + 3x^{-1} + \left(\frac{3}{2}x^2y^{\frac{1}{2}} + 2y^{-1}\right)y' = 0 $$

Define $$ \displaystyle M := \frac{\partial\phi}{\partial x} $$ and $$ \displaystyle N := \frac{\partial\phi}{\partial y} $$, and take the partial derivatives of $$ \displaystyle M $$ and $$ \displaystyle N $$ with respect to $$ y $$ and $$ x $$ respectively:

$$ \begin{align} \frac{\partial M}{\partial y} &= 3xy^{\frac{1}{2}} \\ \frac{\partial N}{\partial x} &= 3xy^{\frac{1}{2}} \end{align} $$

Since $$ G $$ is first order in $$ y' $$ but non-linear in $$ y $$ (through the terms $$ y^{\frac{3}{2}} $$ and $$ y^{-1} $$), we can conclude that $$G(x,y,{y}')=0$$ is an N1-ODE.
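The exactness of the pair above can be spot-checked numerically: with $$M=\phi_x$$ and $$N=\phi_y$$ as computed, $$\partial M/\partial y$$ and $$\partial N/\partial x$$ should both equal $$3x\sqrt{y}$$. A minimal finite-difference sketch:

```python
import math

# Partials of phi from the derivation above
M = lambda x, y: 2 * x * y**1.5 + 3 / x          # phi_x
N = lambda x, y: 1.5 * x**2 * y**0.5 + 2 / y     # phi_y

h = 1e-6
def dM_dy(x, y): return (M(x, y + h) - M(x, y - h)) / (2 * h)
def dN_dx(x, y): return (N(x + h, y) - N(x - h, y)) / (2 * h)

# Exactness condition M_y = N_x holds at sample points; both equal 3*x*sqrt(y)
for x, y in [(1.0, 1.0), (2.0, 3.0), (0.5, 4.0)]:
    assert abs(dM_dy(x, y) - dN_dx(x, y)) < 1e-5
    assert abs(dM_dy(x, y) - 3 * x * math.sqrt(y)) < 1e-4
print("exactness condition verified")
```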

=R*2.6=

Given

$$ M=\frac{\partial \phi(x,y)}{\partial x} $$ (2.6.1)

$$ N=\frac{\partial \phi(x,y)}{\partial y} $$ (2.6.2)

$$ \frac{\partial M(x,y)}{\partial y}=\frac{\partial N(x,y)}{\partial x} $$ (2.6.3)

Find
Review calculus, and find the minimum degree of differentiability of the function $$\displaystyle \phi(x,y) $$. State the full theorem and provide a proof.

Mixed Derivative Theorem
If $$\displaystyle f(x,y)$$ and its partial derivatives $$\displaystyle f_{x}, f_{y}, f_{xy}$$ and $$\displaystyle f_{yx}$$ are defined in a neighborhood of $$\displaystyle (a,b)$$ and all are continuous at $$\displaystyle (a,b)$$, then $$\displaystyle f_{xy}(a,b)=f_{yx}(a,b)$$. Hence the minimum degree of differentiability for (2.6.3) to hold is that $$\displaystyle \phi(x,y) $$ be twice differentiable, with the mixed second partial derivatives $$\displaystyle \phi_{xy} $$ and $$\displaystyle \phi_{yx} $$ continuous at the point in question.
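The theorem can be illustrated numerically: for a smooth sample function (hypothetical, chosen here for illustration), a central-difference estimate of the mixed partial matches the hand-computed value regardless of the order of differentiation:

```python
import math

# A sample smooth function (hypothetical, illustration only)
f = lambda x, y: math.exp(x * y) + x**3 * math.sin(y)

h = 1e-4
def f_xy(a, b):
    # symmetric central-difference estimate of the mixed second partial;
    # this stencil is the same whether x or y is differenced first
    return (f(a + h, b + h) - f(a + h, b - h)
            - f(a - h, b + h) + f(a - h, b - h)) / (4 * h * h)

# Analytic mixed partial, equal in either order by the theorem:
# f_xy = e^{xy}(1 + xy) + 3x^2 cos(y)
mixed = lambda a, b: math.exp(a * b) * (1 + a * b) + 3 * a**2 * math.cos(b)

a, b = 0.3, 0.8
print(abs(f_xy(a, b) - mixed(a, b)))  # small: finite-difference accuracy only
```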

Mean Value Theorem
Assume $$\displaystyle f:\mathbb {R}^2\rightarrow \mathbb {R}$$ is differentiable. Define $$\displaystyle X_{0} = (x_{0}, y_{0})$$ and $$\displaystyle X = (x_{0} + u, y_{0} + v)$$.

Then there exists $$\displaystyle C$$ which lies on the line joining $$\displaystyle X_{0}$$ and $$\displaystyle X$$ such that
$$f(X) = f(X_{0}) + f'(C)(X-X_{0}) $$

i.e., there exists $$ c\in (0,1) $$ such that

$$f(x_{0} + u, y_{0} + v) - f(x_{0}, y_{0}) = uf_{x}(C) + vf_{y}(C) $$

where $$\displaystyle C = (x_{0} + cu, y_{0} + cv)$$

Proof
Suppose $$\displaystyle f $$ and its partial derivatives $$\displaystyle f_{x}, f_{y}, f_{xy}, f_{yx} $$ are defined in a neighborhood of $$\displaystyle (a,b) $$ and continuous at $$\displaystyle (a,b) $$. Define

$$F_{1}(x,y) = f(x+u,y+v) - f(x+u,y)-f(x,y+v)+f(x,y) $$ (2.6.4)

and

$$F_{2}(x,y) = f(x,y+v) - f(x,y) $$ (2.6.5)

From the mean value theorem, there exists $$ c\in(0,1) $$ such that

$$\displaystyle F_{2}(a+u,b)-F_{2}(a,b)=u\frac{\partial F_{2}}{\partial x}(a+cu,b) $$ (2.6.6)

Because

$$\displaystyle {F_{2}(a+u,b)-F_{2}(a,b)}=f(a+u,b+v) - f(a+u,b)-f(a,b+v)+f(a,b)=F_{1}(a,b) $$ (2.6.7)

replace $$F_{2}(a+u,b)-F_{2}(a,b)$$ in (2.6.6) with $$ F_{1}(a,b)$$:

$$\displaystyle \frac{F_{1}(a,b)}{u}=\frac{\partial F_{2}}{\partial x}(a+cu,b) $$ (2.6.8)

Referring to (2.6.5), the right hand side of (2.6.8) can be expressed as:

$$\displaystyle \frac{\partial F_{2}}{\partial x}(a+cu,b)=\frac{\partial f}{\partial x}(a+cu,b+v)-\frac{\partial f}{\partial x}(a+cu,b) $$ (2.6.9)

Applying the mean value theorem to (2.6.9):

$$\displaystyle \frac{\frac{\partial f}{\partial x}(a+cu,b+v)-\frac{\partial f}{\partial x}(a+cu,b)}{v}=\frac{\partial^{2}}{\partial x\partial y}f(a+cu,b+cv) $$ (2.6.10)

From (2.6.8), we know that $$\displaystyle \frac{\partial f}{\partial x}(a+cu,b+v)-\frac{\partial f}{\partial x}(a+cu,b) = \frac{F_{1}(a,b)}{u} $$, so (2.6.10) can then be written as:

$$\displaystyle \frac{F_{1}(a,b)}{uv}=\frac{\partial^{2}}{\partial x\partial y}f(a+cu,b+cv) $$ (2.6.11)

When $$\displaystyle u\rightarrow 0 $$ and $$\displaystyle v\rightarrow 0 $$, continuity of $$\displaystyle f_{xy} $$ at $$\displaystyle (a,b) $$ gives the limit of (2.6.11):

$$\displaystyle \lim_{u,v\rightarrow 0}\frac{F_{1}(a,b)}{uv}=\frac{\partial^{2}}{\partial x\partial y}f(a,b) $$

When we exchange $$u$$ with $$v$$, i.e. repeat the argument with $$\displaystyle F_{2}(x,y)=f(x+u,y)-f(x,y) $$ in place of (2.6.5), the same limit equals $$\displaystyle \frac{\partial^{2}}{\partial y\partial x}f(a,b) $$. Since both mixed partials are the limit of the same quantity, we can conclude that:

$$\displaystyle \frac{\partial^{2}}{\partial x\partial y}f(a,b)=\frac{\partial^{2}}{\partial y\partial x}f(a,b) $$

=R*2.8=

Find
Why is solving equation (2.11.1) for the integrating factor $$\displaystyle h(x,y) $$ usually not easy?

Solution
Solving equation (2.11.1) for the integrating factor $$\displaystyle h(x,y) $$ is usually not easy because it is a non-linear partial differential equation in the two variables $$x$$ and $$y$$. When both $$h_x \ne 0$$ and $$h_y \ne 0$$, the integration is complicated by the presence of both $$N$$ and $$M$$, which themselves depend on $$x$$ and $$y$$. In contrast, if either $$h_x = 0$$ or $$h_y = 0$$, the equation can be integrated with respect to a single variable, which makes the process much simpler.

=R*3.3-Solving for homogenous counterpart =

Given
Instead of identifying $$\displaystyle y_H(x) $$ from $$ \displaystyle h(x)=\exp\left[\int^{x}a_{0}(s)ds+k_{1}\right] $$ and $$ \displaystyle y(x)=\frac{1}{h(x)}\left[\int^x h(s)b(s)ds + k_2\right] $$

Find
Solve the homogeneous counterpart of $$\displaystyle y' + a_0(x)\,y=0 $$.

Solution
$$ \begin{align} \\& \displaystyle y' + a_0(x)\,y=0 \\& y'= -a_0(x)\,y \\& \int^y \frac{ds}{s}=-\int^x a_0(s)\,ds \\& y(x)=\exp \left[-\int^x a_0(s)\,ds + k \right] \\& \Rightarrow y_H(x)=\exp \left[-\int^x a_0(s)\,ds + k \right] \end{align} $$

We solved this by ourselves.
We solved this by ourselves.
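The homogeneous solution can be spot-checked numerically. As a minimal sketch, assume the hypothetical coefficient $$a_0(x)=x$$ (so $$\int^x a_0(s)\,ds = x^2/2$$) and $$k=0$$, and verify that $$y_H$$ satisfies $$y'+a_0(x)\,y=0$$:

```python
import math

# Hypothetical coefficient (not from the notes): a0(x) = x
a0 = lambda x: x
yH = lambda x: math.exp(-x**2 / 2)   # y_H(x) = exp[-int a0 ds + k], with k = 0

h = 1e-6
for x in [-1.0, 0.0, 0.5, 2.0]:
    dy = (yH(x + h) - yH(x - h)) / (2 * h)      # numerical y'
    assert abs(dy + a0(x) * yH(x)) < 1e-6       # y' + a0(x) y = 0
print("y_H solves the homogeneous equation")
```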

=R*3.2-Show the solution in the lecture notes agrees with King 2003 p.512=

Given
The solution of $$ \displaystyle y'+a_{0}\left(x\right)y=b\left(x\right) $$, written in the lecture notes as $$ \displaystyle y\left(x\right)=\frac{1}{h\left(x\right)}\left[\int^{x}h\left(s\right)b\left(s\right)ds+k_{2}\right] $$, is presented in King 2003 p.512 as:

$$ y\left(x\right)=y_{H}\left(x\right)+y_{P}\left(x\right) $$ (3.2.1)

where

$$ y_{H}\left(x\right)=A\exp\left[-\int^{x}P\left(s\right)ds\right] $$

and

$$ y_{P}\left(x\right)=\exp\left[-\int^{x}P\left(s\right)ds\right]\int^{x}Q\left(s\right)\exp\left[\int^{s}P\left(t\right)dt\right]ds $$

Note that the notations $$ P\left(x\right) $$ and $$ Q\left(x\right) $$ adopted in King 2003 p.512 correspond to $$ a_{0}\left(x\right) $$ and $$ b\left(x\right) $$ in the lecture notes respectively.

Find
Use $$ h\left(x\right)=exp\left[\int^{x}a_{0}\left(s\right)ds+k_{1}\right] $$ and $$ y\left(x\right)=\frac{1}{h\left(x\right)}\left[\int^{x}h\left(s\right)b\left(s\right)ds+k_{2}\right] $$ to identify $$ A$$, $$ y_{H}\left(x\right)$$ and $$ y_{P}\left(x\right)$$

Solution
Since

$$ \frac{1}{h\left(x\right)}=exp\left[-\int^{x}a_{0}\left(s\right)ds-k_{1}\right]=exp\left[-k_{1}\right]exp\left[-\int^{x}a_{0}\left(s\right)ds\right] $$  $$ \int^{x}h\left(s\right)b\left(s\right)ds=exp\left[k_{1}\right]\int^{x}b\left(s\right)exp\left[\int^{s}a_{0}\left(t\right)dt\right]ds $$

$$ y\left(x\right) $$ can then be expressed as:

$$ y\left(x\right)=k_{2}\exp\left[-k_{1}\right]\exp\left[-\int^{x}a_{0}\left(s\right)ds\right]+\exp\left[-\int^{x}a_{0}\left(s\right)ds\right]\int^{x}b\left(s\right)\exp\left[\int^{s}a_{0}\left(t\right)dt\right]ds $$ (3.2.4)

Comparing (3.2.1) and (3.2.4) term by term yields:

$$ \begin{align} \\& A=k_{2}exp\left[-k_{1}\right] \\& y_{H}\left(x\right)=k_{2}exp\left[-k_{1}\right]exp\left[-\int^{x}a_{0}\left(s\right)ds\right] \\& y_{P}\left(x\right)=exp\left[-\int^{x}a_{0}\left(s\right)ds\right]\int^{x}b\left(s\right)exp\left[\int^{s}a_{0}\left(t\right)dt\right]ds \end{align} $$
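These identifications can be spot-checked numerically. As a sketch, assume the hypothetical data $$a_0(x)=1$$ and $$b(x)=x$$ (not from King 2003); then $$y_H=A\,e^{-x}$$ and $$y_P=e^{-x}\int^x s\,e^s\,ds=x-1$$, and the sum should satisfy $$y'+a_0y=b$$:

```python
import math

# Hypothetical data (illustration only): a0(x) = 1, b(x) = x, so
#   y_H(x) = A exp(-x)  and  y_P(x) = exp(-x) * int s e^s ds = x - 1
A = 2.5
y = lambda x: A * math.exp(-x) + (x - 1)

h = 1e-6
for x in [0.0, 1.0, 3.0]:
    dy = (y(x + h) - y(x - h)) / (2 * h)   # numerical y'
    assert abs(dy + y(x) - x) < 1e-6       # y' + a0 y = b, with a0 = 1, b = x
print("y = y_H + y_P solves the ODE")
```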

As we can see, the results agree with those in King 2003 p.512.

We solved this by ourselves.

=R*3.1-Show that only one integration constant is required to solve L1-ODE-VC=

Find
Show that the integration constant $$ k_{1} $$ in (3.1.3) is NOT necessary, only $$ k_{2} $$ in $$ y\left(x\right)=\frac{1}{h\left(x\right)}\left[\int^{x}h\left(s\right)b\left(s\right)ds+k_{2}\right] $$ is necessary.

Solution
First, multiply (3.1.2) by $$ h\left(x\right) $$:

$$ h\left(x\right)y'+h\left(x\right)a_{0}\left(x\right)y=h\left(x\right)b\left(x\right) $$ (3.1.4)

Note that

$$ \exp\left[\int^{x}a_{0}\left(s\right)ds+k_{1}\right]=\exp\left[k_{1}\right]\exp\left[\int^{x}a_{0}\left(s\right)ds\right] $$

so (3.1.4) can be expressed as:

$$ \exp\left[k_{1}\right]\exp\left[\int^{x}a_{0}\left(s\right)ds\right]\left(y'+a_{0}\left(x\right)y\right)=\exp\left[k_{1}\right]\exp\left[\int^{x}a_{0}\left(s\right)ds\right]b\left(x\right) $$ (3.1.5)

$$ \exp\left[k_{1}\right] $$ appears on both sides of (3.1.5), which means the terms involving the integration constant $$ k_{1} $$ cancel out. Thus defining an integration constant $$ k_{1} $$ is unnecessary. We solved this by ourselves.

=R*3.4-Find the integration factor h=

Find
If (3.4.1) is not exact, find the integrating factor $$ h $$ to make it exact.

Solution
First, check whether (3.4.1) is exact: the test $$ \frac{\partial M}{\partial y}=\frac{\partial N}{\partial x} $$ fails, so (3.4.1) is not exact and an integrating factor is needed to make it exact. Assume the integrating factor $$ h $$ is a function of $$ x $$ only. Thus

$$ \frac{h'\left(x\right)}{h\left(x\right)}=\frac{1}{N}\left(\frac{\partial M}{\partial y}-\frac{\partial N}{\partial x}\right)=:n\left(x\right) $$

and

$$ h\left(x\right)=\exp\left[\int^{x}n\left(s\right)ds+k\right] $$

can then be applied here, where $$ n\left(s\right)=\frac{-2}{s^{2}}\left(s-s^{4}\right)=-2\left(\frac{1}{s}-s^{2}\right) $$

and the integrating factor $$ h\left(x\right) $$ can be obtained as follow:

$$ \begin{align} \\& h\left(x\right)=\exp\left[-2\int^{x}\left(\frac{1}{s}-s^{2}\right)ds+k\right] \\& =\exp\left[-2\log x+\frac{2}{3}x^{3}+k\right] \\& =x^{-2}\exp\left[\frac{2}{3}x^{3}+k\right] \end{align} $$

We solved this by ourselves.
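The integrating factor obtained above can be spot-checked numerically: $$h(x)=x^{-2}\exp\left[\frac{2}{3}x^3\right]$$ (taking $$k=0$$) should satisfy $$h'/h=n(x)=-2\left(\frac{1}{x}-x^2\right)$$. A minimal finite-difference sketch:

```python
import math

n = lambda x: -2 * (1 / x - x**2)                 # n(x) from the derivation above
hfac = lambda x: x**-2 * math.exp(2 * x**3 / 3)   # h(x) with k = 0

eps = 1e-6
for x in [0.5, 1.0, 1.5]:
    dh = (hfac(x + eps) - hfac(x - eps)) / (2 * eps)  # numerical h'
    assert abs(dh / hfac(x) - n(x)) < 1e-5            # h'/h = n(x)
print("h(x) satisfies h'/h = n(x)")
```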

=R*4.5 Show the equivalence of 2nd exactness condition=

Find
Show the equivalence of 2nd exactness condition.

Solution
On our honor, we did this assignment on our own, without looking at the solutions in previous semesters or other online solutions. From the lecture notes (1)-(2) p.21-8, after substituting the equations given in this problem into ($$):

Since the functions multiplying the coefficients $$ C_{0}, C_{1}, C_{2}, C_{3} $$ are linearly independent, we can conclude that $$ C_{0}=C_{1}=C_{2}=C_{3}=0 $$. The 2nd exactness condition can then be obtained:

Find
Use the Taylor series at $$ x = 0 $$(MacLaurin series) to derive ($$) and ($$).

Solution
On our honor, we did this assignment on our own, without looking at the solutions in previous semesters or other online solutions. The Maclaurin series is the Taylor series at $$ x=0 $$. Define $$ f_{1}\left(x\right)=\left(1-x\right)^{-a} $$, and take the derivatives of $$ f_{1}\left(x\right) $$ with respect to $$ x $$ in order to derive the Maclaurin series of $$ f\left(x\right)=f_{1}\left(x\right) $$

Substituting the results into ($$), ($$) can be derived as follows:

Unlike the derivation of ($$), deriving ($$) in the same fashion would be significantly longer and more difficult. A better way is to first derive the Maclaurin series of $$ \arctan \left( x \right) $$ and then divide that series by $$ x $$. Define $$ g\left(x\right)=\arctan \left( x \right) $$ for convenience, and follow the same process as in deriving ($$):

Substitute each term into ($$). Dividing ($$) by $$ x $$, ($$) can then be obtained:
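The resulting series can be spot-checked numerically. Term-by-term division of the standard Maclaurin series of $$\arctan x$$ by $$x$$ gives $$\frac{\arctan x}{x}=\sum_{k\ge 0}\frac{(-1)^k x^{2k}}{2k+1}$$, which should match the exact value near $$x=0$$:

```python
import math

def arctan_over_x_series(x, terms=30):
    # Maclaurin series: arctan(x)/x = sum_{k>=0} (-1)^k x^{2k} / (2k + 1)
    return sum((-1)**k * x**(2 * k) / (2 * k + 1) for k in range(terms))

for x in [0.1, 0.3, 0.5]:
    exact = math.atan(x) / x
    assert abs(arctan_over_x_series(x) - exact) < 1e-12
print("series matches arctan(x)/x near x = 0")
```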

Find
Show that in the region near x = 0, ($$) is true.

Solution
On our honor, we did this assignment on our own, without looking at the solutions in previous semesters or other online solutions. As the plotted comparison shows, the graphs of $$ F(5,-10;1;x) $$ and $$ \left(1-x\right)^{6}\left(1001x^{4}-1144x^{3}+396x^{2}-44x+1\right) $$ are almost the same; the deviation between the two functions is extremely small in the region near $$ x = 0 $$.
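The agreement can also be checked directly, without a plot. Since $$b=-10$$ is a negative integer, the Gauss hypergeometric series for $$F(5,-10;1;x)$$ terminates after the $$x^{10}$$ term, so both sides are polynomials and can be evaluated exactly:

```python
def hyp2f1_poly(a, b, c, x, terms=20):
    """Truncated Gauss hypergeometric series sum_k (a)_k (b)_k / ((c)_k k!) x^k;
    it terminates when b is a negative integer, so 20 terms suffice here."""
    total, coef = 0.0, 1.0
    for k in range(terms):
        total += coef * x**k
        coef *= (a + k) * (b + k) / ((c + k) * (k + 1))
    return total

# Closed form compared against in the problem
rhs = lambda x: (1 - x)**6 * (1001 * x**4 - 1144 * x**3 + 396 * x**2 - 44 * x + 1)

for x in [-0.2, 0.0, 0.1, 0.3]:
    assert abs(hyp2f1_poly(5, -10, 1, x) - rhs(x)) < 1e-9
print("F(5,-10;1;x) matches the closed form")
```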

Find
Show that ($$) agrees with King 2003 p.8 (1.6), i.e.,

with

Hint: $$\displaystyle \left( \frac{u_2}{u_1} \right)' = \frac{1}{h} $$

Discuss the feasibility of the following choices for variation of parameters:

Solution
On our honor, we did this assignment on our own, without looking at the solutions in previous semesters or other online solutions. Define $$ G\left(x\right)=\frac{u_{2}\left(x\right)}{u_{1}\left(x\right)} $$, thus

$$ h\left(x\right)=\frac{u_{1}^{2}}{u'_{2}u_{1}-u_{2}u'_{1}} $$

From the formula of integration by parts: $$ \int F\left(x\right)G'\left(x\right)dx=F\left(x\right)G\left(x\right)-\int G\left(x\right)F'\left(x\right)dx $$ where

and

($$) can then be written as:

After substituting all the terms into ($$):

($$) and ($$) are identical, which shows that (1) p.34-6 agrees with King 2003 p.8 (1.6).

For feasibility of case 1: $$ y\left(x\right)=U\left(x\right)\pm u_{1}\left(x\right) $$, $$ y'=U'\left(x\right)\pm u_{1}'\left(x\right) $$, and $$ y''=U''\left(x\right) \pm u_{1}''\left(x\right) $$, so

$$ a_{0}y+a_{1}y'+y''=\left(a_{0}U+a_{1}U'+U''\right) \pm \left(a_{0}u_{1}+a_{1}u_{1}'+u_{1}''\right) $$

For feasibility of case 2: $$ y={\frac {U \left( x \right) }{u_{1} \left( x \right) }} $$, $$ y'={\frac {U'\left( x \right) u_{1} \left( x \right) -U \left( x \right) u_{1}'\left( x \right) }{ u_{1}\left( x \right)^{2}}} $$, and $$ y'' $$ follows by differentiating again.

For feasibility of case 3: $$ y={\frac {u_{1} \left( x \right) }{U \left( x \right) }} $$, $$ y'={\frac {u_{1}'\left( x \right) U \left( x \right) -u_{1} \left( x \right) U'\left( x \right) }{ U\left( x \right)^{2}}} $$, and $$ y'' $$ follows by differentiating again.

None of the given trial solutions is feasible, because substituting them into a non-homogeneous L2-ODE-VC cannot produce a general non-homogeneous L1-ODE-VC: several terms containing $$U(x)$$ and its derivatives still remain after the substitution.

Given
where $$ \displaystyle r_2(x) = \frac{1}{x-1} $$, and the trial solution: $$ y=e^{r_2(x)} $$

Find
Explain why $$ r_2(x) $$ is not a valid root.

Solution
On our honor, we did this assignment on our own, without looking at the solutions in previous semesters or other online solutions. Substitute $$y,y',y''$$ into ($$): $$ r_2(x) $$ is not a valid root because any root of the characteristic equation has to be a constant, and, as we can see, ($$) is not equal to zero for every $$ x $$.

Solution
On our honor, we did this assignment on our own, without looking at the solutions in previous semesters or other online solutions. Since $$ P_{2}\left(x\right) $$ is a homogeneous solution to the Legendre equation with $$ n=2 $$, the method of reduction of order can be applied to find the second homogeneous solution. The Legendre equation takes the form

$$ \left(1-x^{2}\right)y''-2xy'+n\left(n+1\right)y=0 $$

From the reduction of order formula, King 2003 p.6 (1.3), i.e.,

$$ u_{2}\left(x\right)=u_{1}\left(x\right)\int^{x}\frac{1}{u_{1}\left(s\right)^{2}}\exp\left[-\int^{s}a_{1}\left(t\right)dt\right]ds $$

where $$ u_{1}\left(x\right) $$ in this case is equal to $$ P_{2}\left(x\right) $$, and $$ a_{1}\left(x\right)=\frac{-2x}{1-x^{2}} $$ is the coefficient of $$ y' $$ after normalizing the leading coefficient to one. The result shows that $$u_{2}\left(x\right)$$ derived from the method of reduction of order is the same as $$Q_{2}(x)$$ given in the problem.
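The conclusion can be spot-checked numerically. Assuming the standard closed form of the second Legendre function, $$Q_2(x)=\frac{P_2(x)}{2}\ln\frac{1+x}{1-x}-\frac{3x}{2}$$ (the problem's own expression for $$Q_2$$ is not reproduced here), it should satisfy the Legendre equation with $$n=2$$:

```python
import math

P2 = lambda x: (3 * x**2 - 1) / 2
# Standard second Legendre function (assumed closed form, illustration only)
Q2 = lambda x: P2(x) / 2 * math.log((1 + x) / (1 - x)) - 3 * x / 2

h = 1e-5
for x in [-0.5, 0.1, 0.6]:
    d1 = (Q2(x + h) - Q2(x - h)) / (2 * h)                 # numerical Q2'
    d2 = (Q2(x + h) - 2 * Q2(x) + Q2(x - h)) / h**2        # numerical Q2''
    residual = (1 - x**2) * d2 - 2 * x * d1 + 6 * Q2(x)    # Legendre, n = 2
    assert abs(residual) < 1e-4
print("Q_2 solves the Legendre equation for n = 2")
```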

Find
1. Find $$ \left\{ dx_{i}\right\} =\left\{ dx_{1},dx_{2},dx_{3}\right\} $$ in terms of $$ \left\{ \xi_{j}\right\} =\left\{ \xi_{1},\xi_{2},\xi_{3}\right\} $$ and $$ \left\{ d\xi_{k}\right\} =\left\{ d\xi_{1},d\xi_{2},d\xi_{3}\right\} $$

2. Find $$ ds^{2}=\underset{i}{\sum}\left(dx_{i}\right)^{2}=\underset{k}{\sum}\left(h_{k}\right)^{2}\left(d\xi_{k}\right)^{2} $$. Identify $$ \left\{ h_{i}\right\} $$ in terms of $$ \left\{ \xi_{i}\right\} $$.

3. Find $$ \triangle u $$ in cylindrical coordinates.

4. Use separation of variables to find the separated equations and compare to the Bessel equation (1) p.27-1.

2
Using the results obtained in ($$), ($$), and ($$): From ($$), we can identify:

$$ \left\{ h_{i}\right\} =\left\{ 1,\xi_{1},1\right\} $$
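The identified scale factors can be spot-checked numerically. Assuming the cylindrical map $$x_1=\xi_1\cos\xi_2$$, $$x_2=\xi_1\sin\xi_2$$, $$x_3=\xi_3$$, the Cartesian arc length of a small displacement should match $$\sum_k h_k^2\,(d\xi_k)^2$$ with $$\{h_i\}=\{1,\xi_1,1\}$$:

```python
import math

# Cylindrical coordinate map (assumed): x1 = xi1 cos xi2, x2 = xi1 sin xi2, x3 = xi3
def to_cartesian(xi1, xi2, xi3):
    return (xi1 * math.cos(xi2), xi1 * math.sin(xi2), xi3)

xi = (1.3, 0.7, -0.4)
dxi = (1e-6, 2e-6, -1e-6)   # small displacement in curvilinear coordinates

p = to_cartesian(*xi)
q = to_cartesian(*(a + d for a, d in zip(xi, dxi)))
ds2_cart = sum((b - a)**2 for a, b in zip(p, q))   # sum of (dx_i)^2

# {h_i} = {1, xi1, 1}
h = (1.0, xi[0], 1.0)
ds2_curv = sum((hk * d)**2 for hk, d in zip(h, dxi))

assert abs(ds2_cart - ds2_curv) / ds2_curv < 1e-4
print("ds^2 agrees with h = (1, xi1, 1)")
```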

3
The Laplace operator in general curvilinear coordinates is defined as:

$$ \triangle u=\frac{1}{h_{1}h_{2}h_{3}}\underset{i}{\sum}\frac{\partial}{\partial\xi_{i}}\left(\frac{h_{1}h_{2}h_{3}}{h_{i}^{2}}\frac{\partial u}{\partial\xi_{i}}\right) $$

and $$ h_{1}h_{2}h_{3}=\xi_{1} $$.

For $$ i = 1 $$:

$$ \frac{1}{\xi_{1}}\frac{\partial}{\partial\xi_{1}}\left(\xi_{1}\frac{\partial u}{\partial\xi_{1}}\right) $$

For $$ i = 2 $$:

$$ \frac{1}{\xi_{1}}\frac{\partial}{\partial\xi_{2}}\left(\frac{\xi_{1}}{\xi_{1}^{2}}\frac{\partial u}{\partial\xi_{2}}\right)=\frac{1}{\xi_{1}^{2}}\frac{\partial^{2}u}{\partial\xi_{2}^{2}} $$

For $$ i = 3 $$:

$$ \frac{1}{\xi_{1}}\frac{\partial}{\partial\xi_{3}}\left(\xi_{1}\frac{\partial u}{\partial\xi_{3}}\right)=\frac{\partial^{2}u}{\partial\xi_{3}^{2}} $$

Summing the three terms, $$ \triangle u $$ can then be obtained:

$$ \triangle u=\frac{1}{\xi_{1}}\frac{\partial}{\partial\xi_{1}}\left(\xi_{1}\frac{\partial u}{\partial\xi_{1}}\right)+\frac{1}{\xi_{1}^{2}}\frac{\partial^{2}u}{\partial\xi_{2}^{2}}+\frac{\partial^{2}u}{\partial\xi_{3}^{2}} $$
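The cylindrical Laplacian can be spot-checked against a known Cartesian value. For the sample function $$u=\xi_1^2$$ (i.e. $$x^2+y^2$$ in Cartesian coordinates, whose Laplacian is 4), the $$\xi_2$$ and $$\xi_3$$ terms vanish and the radial term should evaluate to 4:

```python
# Sample function u = xi1^2 (= x^2 + y^2), Cartesian Laplacian = 4
u = lambda xi1: xi1**2

h = 1e-5
for xi1 in [0.5, 1.0, 2.0]:
    du = lambda s: (u(s + h) - u(s - h)) / (2 * h)   # numerical du/dxi1
    # radial term (1/xi1) d/dxi1 (xi1 du/dxi1), by a centered difference
    lap = (xi1 + h / 2) * du(xi1 + h / 2) - (xi1 - h / 2) * du(xi1 - h / 2)
    lap /= xi1 * h
    assert abs(lap - 4.0) < 1e-4
print("cylindrical Laplacian of xi1^2 equals 4")
```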

4
Let $$ u\left(\xi_{1,}\xi_{2},\xi_{3}\right)=P\left(\xi_{1}\right)Q\left(\xi_{2}\right)R\left(\xi_{3}\right) $$ and $$ \triangle u=0 $$.

After substituting $$ u\left(\xi_{1},\xi_{2},\xi_{3}\right)=P\left(\xi_{1}\right)Q\left(\xi_{2}\right)R\left(\xi_{3}\right) $$ into the Laplacian we have:

$$ \frac{QR}{\xi_{1}}\frac{d}{d\xi_{1}}\left(\xi_{1}\frac{dP}{d\xi_{1}}\right)+\frac{PR}{\xi_{1}^{2}}\frac{d^{2}Q}{d\xi_{2}^{2}}+PQ\frac{d^{2}R}{d\xi_{3}^{2}}=0 $$

Multiply by $$ \frac{\xi_{1}^{2}}{PQR} $$:

$$ \frac{\xi_{1}}{P}\frac{d}{d\xi_{1}}\left(\xi_{1}\frac{dP}{d\xi_{1}}\right)+\frac{1}{Q}\frac{d^{2}Q}{d\xi_{2}^{2}}+\frac{\xi_{1}^{2}}{R}\frac{d^{2}R}{d\xi_{3}^{2}}=0 $$

If we define $$ \frac{1}{R}\frac{d^{2}R}{d\xi_{3}^{2}} = \beta $$ and $$ \frac{1}{Q}\frac{d^{2}Q}{d\xi_{2}^{2}} = -\nu^{2} $$ and rearrange:

$$ \xi_{1}^{2}\frac{d^{2}P}{d\xi_{1}^{2}}+\xi_{1}\frac{dP}{d\xi_{1}}+\left(\beta\xi_{1}^{2}-\nu^{2}\right)P=0 $$

The Bessel equation is of the form $$x^2 \frac{d^2 y}{dx^2} + x \frac{dy}{dx} + (x^2 - \alpha^2)y = 0$$.

Note: $$\xi_{1}, P\left(\xi_{1}\right), \nu $$ correspond to $$ x, y, \alpha $$ respectively.

From observation, there is an additional coefficient $$ \beta $$ in front of $$ \xi_{1}^{2} $$ as compared with the Bessel equation; rescaling the independent variable as $$ x=\sqrt{\beta}\,\xi_{1} $$ (for $$ \beta>0 $$) removes this coefficient and recovers the Bessel equation exactly.
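The rescaling argument can be spot-checked numerically. Taking $$\nu=0$$ and a sample value $$\beta=2$$ (both hypothetical choices for illustration), $$P(\xi_1)=J_0(\sqrt{\beta}\,\xi_1)$$ should satisfy the separated equation $$\xi_1^2P''+\xi_1P'+(\beta\xi_1^2-\nu^2)P=0$$, with $$J_0$$ evaluated from its Maclaurin series:

```python
import math

def J0(x, terms=25):
    # Maclaurin series of the Bessel function J_0
    return sum((-1)**k * (x / 2)**(2 * k) / math.factorial(k)**2
               for k in range(terms))

beta = 2.0                          # sample separation constant (hypothetical)
P = lambda xi: J0(math.sqrt(beta) * xi)

h = 1e-4
for xi in [0.5, 1.0, 1.5]:
    d1 = (P(xi + h) - P(xi - h)) / (2 * h)               # numerical P'
    d2 = (P(xi + h) - 2 * P(xi) + P(xi - h)) / h**2      # numerical P''
    residual = xi**2 * d2 + xi * d1 + beta * xi**2 * P(xi)  # nu = 0 case
    assert abs(residual) < 1e-4
print("P(xi1) = J_0(sqrt(beta) xi1) solves the separated equation")
```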