User:Egm6321.f12.team1.cla/report5

=Problem 5.1 - Matrix Exponential=

Link to lecture notes

Find:
Show that

Solution:
Matrix $$\mathbf A$$ has n rows and m columns, and the entry in row i and column j is denoted $$A_{i,j}$$. Shown below are a general matrix and its transpose, where the transpose is obtained by replacing each entry $$A_{i,j}$$ with $$A_{j,i}$$.

$$\mathbf A=\begin{bmatrix} A_{1,1} & A_{1,2} & \cdot & \cdot & \cdot & A_{1,m}\\ A_{2,1} & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{n,1} & \cdot & \cdot & \cdot & \cdot & A_{n,m} \end{bmatrix}$$

$$\mathbf A^T=\begin{bmatrix} A_{1,1} & A_{2,1} & \cdot & \cdot & \cdot & A_{m,1}\\ A_{1,2} & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{1,n} & \cdot & \cdot & \cdot & \cdot & A_{m,n} \end{bmatrix}$$

The formula for the exponential of the transpose of a matrix is shown below and is obtained by a slight modification of equation ($$)

By forming an identity matrix of matching size and plugging it into ($$) along with the transpose of matrix A, we can solve for the first 3 terms of the series. This is shown below.


 * $$=\begin{bmatrix}

1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}+\frac{1}{1}\begin{bmatrix} A_{11} & A_{2,1} & \cdot & \cdot & \cdot & A_{m,1}\\ A_{1,2} & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{1,n} & \cdot & \cdot & \cdot & \cdot & A_{mn} \end{bmatrix}+\frac{1}{2}\begin{bmatrix} A_{11} & A_{2,1} & \cdot & \cdot & \cdot & A_{m,1}\\ A_{1,2} & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{1,n} & \cdot & \cdot & \cdot & \cdot & A_{mn} \end{bmatrix}^2$$


 * $$=\begin{bmatrix}

1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}+\frac{1}{1}\begin{bmatrix} A_{11} & A_{2,1} & \cdot & \cdot & \cdot & A_{m,1}\\ A_{1,2} & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{1,n} & \cdot & \cdot & \cdot & \cdot & A_{mn} \end{bmatrix}+\frac{1}{2}\begin{bmatrix} A_{1,j}A_{i,1} & A_{2,j}A_{i,1} & \cdot & \cdot & \cdot & A_{n,j}A_{i,1}\\ A_{1,j}A_{i,2} & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{1,j}A_{i,m} & \cdot & \cdot & \cdot & \cdot & A_{n,j}A_{i,m} \end{bmatrix}$$


 * $$\exp(\mathbf A^T)=\begin{bmatrix}

1+A_{1,1}+\frac{1}{2}(A_{1,j}A_{i,1}) & A_{2,1}+\frac{1}{2}(A_{2,j}A_{i,1}) & \cdot & \cdot & \cdot & A_{n,1}+\frac{1}{2}(A_{n,j}A_{i,1})\\ A_{1,2}+\frac{1}{2}(A_{1,j}A_{i,2}) & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{1,m}+\frac{1}{2}(A_{i,j}A_{i,m}) & \cdot & \cdot & \cdot & \cdot & 1+A_{n,m}+\frac{1}{2}(A_{n,j}A_{i,m}) \end{bmatrix}$$

We can do the same thing by plugging in Matrix A and then taking the transpose after the first 3 terms in the series are added together to prove equation ($$).


 * $$=\begin{bmatrix}

1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}+\frac{1}{1}\begin{bmatrix} A_{11} & A_{12} & \cdot & \cdot & \cdot & A_{1m}\\ A_{21} & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{n1} & \cdot & \cdot & \cdot & \cdot & A_{nm} \end{bmatrix}+\frac{1}{2}\begin{bmatrix} A_{11} & A_{12} & \cdot & \cdot & \cdot & A_{1m}\\ A_{21} & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{n1} & \cdot & \cdot & \cdot & \cdot & A_{nm} \end{bmatrix}^2$$


 * $$=\begin{bmatrix}

1 & 0 & 0 & 0 & 0 & 0\\ 0 & 1 & 0 & 0 & 0 & 0\\ 0 & 0 & 1 & 0 & 0 & 0\\ 0 & 0 & 0 & 1 & 0 & 0\\ 0 & 0 & 0 & 0 & 1 & 0\\ 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}+\frac{1}{1}\begin{bmatrix} A_{11} & A_{12} & \cdot & \cdot & \cdot & A_{1m}\\ A_{21} & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{n1} & \cdot & \cdot & \cdot & \cdot & A_{nm} \end{bmatrix}+\frac{1}{2}\begin{bmatrix} A_{1,j}A_{i,1} & A_{1,j}A_{i,2} & \cdot & \cdot & \cdot & A_{1,j}A_{i,m}\\ A_{2,j}A_{i,1} & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{n,j}A_{i,1} & \cdot & \cdot & \cdot & \cdot & A_{n,j}A_{i,m} \end{bmatrix}$$


 * $$\exp[\mathbf A]=\begin{bmatrix}

1+A_{1,1}+\frac{1}{2}(A_{1,j}A_{i,1}) & A_{1,2}+\frac{1}{2}(A_{1,j}A_{i,2}) & \cdot & \cdot & \cdot & A_{1,m}+\frac{1}{2}(A_{i,j}A_{i,m})\\ A_{2,1}+\frac{1}{2}(A_{2,j}A_{i,1}) & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{n,1}+\frac{1}{2}(A_{n,j}A_{i,1}) & \cdot & \cdot & \cdot & \cdot & 1+A_{n,m}+\frac{1}{2}(A_{n,j}A_{i,m}) \end{bmatrix}$$


 * $$(\exp[\mathbf A])^T=\begin{bmatrix}

1+A_{1,1}+\frac{1}{2}(A_{1,j}A_{i,1}) & A_{2,1}+\frac{1}{2}(A_{2,j}A_{i,1}) & \cdot & \cdot & \cdot & A_{n,1}+\frac{1}{2}(A_{n,j}A_{i,1})\\ A_{1,2}+\frac{1}{2}(A_{1,j}A_{i,2}) & \cdot & &  &  & \cdot\\ \cdot & & \cdot &  &  & \cdot\\ \cdot & &  & \cdot &  & \cdot\\ \cdot & &  &  & \cdot & \cdot\\ A_{1,m}+\frac{1}{2}(A_{i,j}A_{i,m}) & \cdot & \cdot & \cdot & \cdot & 1+A_{n,m}+\frac{1}{2}(A_{n,j}A_{i,m}) \end{bmatrix}$$

Therefore ($$) is satisfied.

As an example

$$\mathbf A^T=\begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix}$$

Recall Equation ($$)


 * $$=\begin{bmatrix}

1 & 0\\ 0 & 1 \end{bmatrix}+\frac{1}{1}\begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix}+\frac{1}{2}\begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix}^2+\frac{1}{6}\begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix}^3$$


 * $$=\begin{bmatrix}

1 & 0\\ 0 & 1 \end{bmatrix}+\begin{bmatrix} 0 & -1\\ 1 & 0 \end{bmatrix}+\begin{bmatrix} \frac{-1}{2} & 0\\ 0 & \frac{-1}{2} \end{bmatrix}+\begin{bmatrix} 0 & \frac{1}{6}\\ \frac{-1}{6}& 0 \end{bmatrix}$$


 * $$\exp(\mathbf A^T)=\begin{bmatrix}

\frac{1}{2} & \frac{-5}{6}\\ \frac{5}{6}& \frac{1}{2} \end{bmatrix}$$

Recall Equation ($$)


 * $$=\begin{bmatrix}

1 & 0\\ 0 & 1 \end{bmatrix}+\frac{1}{1}\begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}+\frac{1}{2}\begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}^2+\frac{1}{6}\begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}^3$$


 * $$=\begin{bmatrix}

1 & 0\\ 0 & 1 \end{bmatrix}+\begin{bmatrix} 0 & 1\\ -1 & 0 \end{bmatrix}+\begin{bmatrix} \frac{-1}{2} & 0\\ 0 & \frac{-1}{2} \end{bmatrix}+\begin{bmatrix} 0 & \frac{1}{6}\\ \frac{-1}{6}& 0 \end{bmatrix}$$


 * $$\exp[\mathbf A]=\begin{bmatrix}

\frac{1}{2} & \frac{5}{6}\\ \frac{-5}{6}& \frac{1}{2} \end{bmatrix}$$


 * $$(\exp[\mathbf A])^T=\begin{bmatrix}

\frac{1}{2} & \frac{-5}{6}\\ \frac{5}{6}& \frac{1}{2} \end{bmatrix}$$

Therefore ($$) is satisfied.
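The identity $$\exp(\mathbf A^T)=(\exp\mathbf A)^T$$ in this example can also be checked numerically with many more series terms than the four used above. A minimal pure-Python sketch (the helper names `matmul`, `transpose`, and `expm_series` are our own):

```python
import math

def matmul(A, B):
    # naive matrix product for square lists of lists
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(row) for row in zip(*A)]

def expm_series(A, terms=30):
    # exp(A) = I + A/1! + A^2/2! + ... truncated at `terms` terms
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [[float(i == j) for j in range(n)] for i in range(n)]
    for k in range(1, terms):
        power = matmul(power, A)
        result = [[result[i][j] + power[i][j] / math.factorial(k)
                   for j in range(n)] for i in range(n)]
    return result

A = [[0.0, 1.0], [-1.0, 0.0]]
lhs = expm_series(transpose(A))   # exp(A^T)
rhs = transpose(expm_series(A))   # (exp A)^T
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
# with 30 terms both converge to the rotation matrix [[cos 1, -sin 1], [sin 1, cos 1]]
assert abs(lhs[0][0] - math.cos(1.0)) < 1e-12
```

With 30 terms the truncation error is far below machine precision, so the four-term values $$\pm\tfrac{5}{6},\tfrac{1}{2}$$ above are replaced by the exact $$\pm\sin 1,\cos 1$$.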

=Problem 5.2 Proof of finding the exponential of a diagonal matrix with complex numbers =

Link to lecture notes

Given:
The diagonal matrix below

where $$d_{1},...,d_{n}$$ are complex numbers of the form $$d=x+iy$$

Complex numbers can be written in polar form via the exponential, using the formula below,

Find:
Show that for a diagonal matrix with complex numbers, as seen in equation ($$), the formula below can be used to find the exponential of matrix D.

Solution:
First we will plug equation ($$) into equation ($$) to find the exponential of matrix D using the first method.

$$ \exp[D]=\mathbf I+\frac {1}{1!}D+\frac{1}{2!}D^2+\cdots$$



 * $$=\begin{bmatrix}

1 & 0\\ 0 & 1 \end{bmatrix}+\begin{bmatrix} d_{1} & 0\\ 0 & d_{n} \end{bmatrix}+\frac{1}{2}\begin{bmatrix} d_{1} & 0\\ 0 & d_{n} \end{bmatrix}^2+\frac{1}{6}\begin{bmatrix} d_{1} & 0\\ 0 & d_{n} \end{bmatrix}^3+...$$


 * $$=\begin{bmatrix}

1 & 0\\ 0 & 1 \end{bmatrix}+\begin{bmatrix} d_{1} & 0\\ 0 & d_{n} \end{bmatrix}+\begin{bmatrix} \frac{1}{2}d_{1}^2 & 0\\ 0 & \frac{1}{2}d_{n}^2 \end{bmatrix}+\begin{bmatrix} \frac{1}{6}d_{1}^3 & 0\\ 0 & \frac{1}{6}d_{n}^3 \end{bmatrix}+...$$

Now, the d terms in the matrix above are complex numbers and can be written in polar form, $$d=re^{i\theta}$$, using equation ($$).
 * $$=\begin{bmatrix}

1 & 0\\ 0 & 1 \end{bmatrix}+\begin{bmatrix} r_{1} e^{i\theta_{1}} & 0\\ 0 & r_{n} e^{i\theta_{n}} \end{bmatrix}+\begin{bmatrix} \frac{1}{2}r_{1}^2 e^{2i\theta_{1}} & 0\\ 0 & \frac{1}{2}r_{n}^2 e^{2i\theta_{n}} \end{bmatrix}+\begin{bmatrix} \frac{1}{6}r_{1}^3 e^{3i\theta_{1}} & 0\\ 0 & \frac{1}{6}r_{n}^3 e^{3i\theta_{n}} \end{bmatrix}$$

We can combine all four matrices into one.

Now we can find the exponential of the matrix D using equation ($$) to show that it gives the same answer as the method above.

We plug the d terms in equation ($$) into equation ($$) to get,


 * $$ exp[D]=\begin{bmatrix}

\exp(r_{1}e^{i\theta_{1}}) & 0\\ 0 & \exp(r_{n}e^{i\theta_{n}}) \end{bmatrix}$$

Now we use the series expansion of the exponential, equation ($$), on the terms of matrix D.


 * $$ exp[D]=\begin{bmatrix}

1+r_{1} e^{i\theta_{1}}+\frac{1}{2}r_{1}^2 e^{2i\theta_{1}}+\frac{1}{6}r_{1}^3 e^{3i\theta_{1}} & 0\\ 0 & 1+r_{n} e^{i\theta_{n}}+\frac{1}{2}r_{n}^2 e^{2i\theta_{n}}+\frac{1}{6}r_{n}^3 e^{3i\theta_{n}} \end{bmatrix}$$

The solution above is the same solution found in equation ($$) showing that equation ($$) can be used to find the exponential of a diagonal matrix.
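The fact that the exponential series acts entry-wise on a diagonal matrix can be spot-checked numerically for complex entries. A sketch with arbitrarily chosen sample entries `d1` and `dn` (hypothetical values, not from the problem statement):

```python
import cmath

d1, dn = 1 + 2j, -0.5 + 0.3j   # hypothetical complex diagonal entries

def scalar_exp_series(d, terms=40):
    # scalar series 1 + d + d^2/2! + ... truncated at `terms` terms
    total, term = 0j, 1 + 0j
    for k in range(terms):
        total += term
        term *= d / (k + 1)
    return total

# Powers of a diagonal matrix stay diagonal, so the matrix series
# acts independently on each diagonal entry:
expD = [[scalar_exp_series(d1), 0], [0, scalar_exp_series(dn)]]
assert abs(expD[0][0] - cmath.exp(d1)) < 1e-12
assert abs(expD[1][1] - cmath.exp(dn)) < 1e-12
```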

=Problem 5.3 Show that proofs from 5.1 and 5.2 can be combined if matrix A is diagonalizable =

Link to lecture notes

Given
$$ e^{\mathbf A} = \mathbf\Phi \, \mathrm{Diag} [e^{\lambda_1},e^{\lambda_2},\dots,e^{\lambda_n}] \, \mathbf\Phi^{-1}$$

To show
Prove the above statement.

Solution
This is only applicable when A is a diagonalizable matrix. For a scalar $$x$$, the exponential is defined as $$e^x = 1 + \frac{x}{1!} + \frac{x^2}{2!} + \cdots + \frac{x^n}{n!} + \cdots$$

Applying this to a matrix, $$e^A = \mathbf I + \frac{A}{1!} + \frac{A^2}{2!} + \cdots + \frac{A^n}{n!} + \cdots$$

Now consider the matrix $$\Lambda$$, the diagonal matrix of eigenvalues of A.

$$e^\Lambda = \mathbf I + \frac{\Lambda}{1!} + \frac{\Lambda^2}{2!} + \cdots + \frac{\Lambda^n}{n!} + \cdots$$

Multiplying this out we can see that the result is:

$$e^\Lambda = \left[ {\begin{array}{cc} 1 + \frac{\lambda_{11}}{1!} + \frac{\lambda_{11}^2}{2!} + \cdots + \frac{\lambda_{11}^n}{n!} & 0 \\ 0 & 1 + \frac{\lambda_{22}}{1!} + \frac{\lambda_{22}^2}{2!} + \cdots + \frac{\lambda_{22}^n}{n!} \end{array} } \right]$$

$$e^\Lambda = \left[ {\begin{array}{cc} e^{\lambda_{11}} & 0 \\ 0 & e^{\lambda_{22}} \end{array} } \right] $$ where $$\lambda_{ij}$$ denotes the entry of $$\Lambda$$ in the $$i,j$$ position.

For a diagonalizable matrix, $$A = \Phi \Lambda \Phi^{-1}$$, where $$\Phi$$ is the matrix of eigenvectors of A.

We can also see that

$$A^n = (\Phi \Lambda \Phi^{-1})^n = \Phi \Lambda^n \Phi^{-1}$$ since the interior $$\Phi^{-1}\Phi$$ factors cancel.

From this and the previous result we see that

$$e^A = \Phi e^\Lambda \Phi^{-1}$$
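The relation above can be checked numerically against the defining power series. A sketch assuming NumPy is available, with a hypothetical symmetric (hence diagonalizable) test matrix of our own choosing:

```python
import numpy as np

A = np.array([[2.0, 1.0], [1.0, 2.0]])   # hypothetical diagonalizable matrix
lam, Phi = np.linalg.eig(A)              # eigenvalues and eigenvector matrix

# e^A via the eigendecomposition: Phi diag(e^lambda) Phi^{-1}
expA_eig = Phi @ np.diag(np.exp(lam)) @ np.linalg.inv(Phi)

# e^A via the defining power series, truncated at 30 terms
expA_series = np.zeros_like(A)
power = np.eye(2)
for k in range(30):
    expA_series = expA_series + power
    power = power @ A / (k + 1)   # next term A^(k+1)/(k+1)!

assert np.allclose(expA_eig, expA_series)
```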

=Problem 5.4 - Show exponential matrix by finding eigenvalues & eigenvectors =

Link to lecture notes

Given:
The matrix $$\mathbf B t$$ is defined as

Solution:
Referring to the lecture notes,

where $$\mathbf \Phi=[\phi_1,\phi_2,...,\phi_n]$$ and $$\phi_1,\dots,\phi_n$$ are n linearly independent eigenvectors of the matrix $$\mathbf B t$$,

and $$\Lambda=\mathrm{Diag}[\lambda_1,\lambda_2,\dots,\lambda_n]$$, where $$\lambda_1,\dots,\lambda_n$$ are the corresponding eigenvalues of the matrix $$\mathbf B t$$.

Using Matlab's eig command to find the eigenvectors and eigenvalues of the matrix $$\mathbf B t$$, we obtain the eigenvectors

$$\phi_1=[1, -i]^T,$$

$$\phi_2=[1, i]^T,$$

so $$\mathbf \Phi=\begin{bmatrix} 1 &1 \\ -i& i \end{bmatrix}$$

and $$\mathbf \Phi ^{-1}=\begin{bmatrix} 0.5 & 0.5i \\ 0.5 & -0.5i \end{bmatrix}=\begin{bmatrix} i &-1 \\ i& 1 \end{bmatrix} \frac{1}{2i}$$

The eigenvalue diagonal matrix is $$\Lambda=\begin{bmatrix} it & 0 \\ 0 & -it \end{bmatrix}$$

Plugging $$\Phi ,\Lambda ,\Phi^{-1}$$ into Equation ($$) yields

$$\mathbf B t= \begin{bmatrix} 1 &1 \\ -i& i \end{bmatrix} \begin{bmatrix} it &0 \\ 0& -it \end{bmatrix} \begin{bmatrix} i &-1 \\ i& +1 \end{bmatrix} \frac{1}{2i}$$

which is exactly the same as Equation ($$)

Using Equation ($$) to solve for $$\exp[\mathbf B t]$$

Simplifying the result with MATLAB, we get Equation ($$)

$$\exp[\mathbf B t]= \begin{bmatrix} 1 &1 \\ -i& i \end{bmatrix} \begin{bmatrix} e^{it} &0 \\ 0& e^{-it} \end{bmatrix} \begin{bmatrix} i &-1 \\ i& 1 \end{bmatrix} \frac{1}{2i}=\begin{bmatrix} \cos t &-\sin t \\ \sin t & \cos t \end{bmatrix} \neq \begin{bmatrix} 1 &e^{-t} \\ e^{t} & 1 \end{bmatrix}$$
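The same computation can be reproduced outside Matlab. A sketch assuming NumPy, where the form of $$\mathbf B t$$ is taken from the product $$\Phi\Lambda\Phi^{-1}$$ computed above, at an arbitrary sample time:

```python
import numpy as np

t = 0.7                                # arbitrary sample value of t
Bt = np.array([[0.0, -t], [t, 0.0]])   # B t as given by Phi Lambda Phi^{-1} above

# exp[B t] via the eigendecomposition
lam, Phi = np.linalg.eig(Bt)           # eigenvalues are +/- i t
expBt = Phi @ np.diag(np.exp(lam)) @ np.linalg.inv(Phi)

rotation = np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])
assert np.allclose(expBt.real, rotation)
assert np.allclose(expBt.imag, 0.0)

# the matrix exponential is NOT the element-wise exponential
assert not np.allclose(expBt.real, np.exp(Bt))
```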

=Problem 5.5* - Show the first integral function=

Given:
and

Prove That:

 * $$\phi(x,y,p) = P(x)p + T(x) y + k$$

Solution:
We know

Comparing the terms containing $$y''$$ in ($$) and ($$), we can obtain
 * $$\phi_p = P(x)$$

Integrating w.r.t $$p$$, we get

Differentiating w.r.t. $$x, y$$, we get
 * $$\phi_x = P'(x)p + \frac {\partial k_1(x,y)} {\partial x}$$

and
 * $$\phi_y = \frac {\partial k_1(x,y)} {\partial y}$$

Substituting $$\phi_x, \phi_y, \phi_p $$ into ($$) and comparing with ($$)
 * $$R(x)y + Q(x) y' + \cancel {P(x) y''} = P'(x)p + \frac {\partial k_1(x,y)} {\partial x} + \frac {\partial k_1(x,y)}{\partial y} y' + \cancel {P(x) y''}$$
 * $$\Rightarrow R(x)y + Q(x) y'= P'(x)y' + \frac {\partial k_1(x,y)} {\partial x} + \frac {\partial k_1(x,y)}{\partial y} y'$$

Equating coefficients with $$y'$$


 * $$Q(x) = P'(x) + \frac {\partial k_1(x,y)}{\partial y} $$

and we are left with


 * $$R(x)y = \frac {\partial k_1(x,y)} {\partial x} $$

Integrating w.r.t. $$x$$
 * $$k_1(x,y) = \left( \int^{x} R(s)ds \right ) y + k_2(y)$$
 * $$\Rightarrow k_1(x,y) = T(x) y + k_2(y)$$

Substituting into ($$), we have

Differentiating w.r.t $$x$$
 * $$\phi_x = P'(x)p + T'(x) y $$

Differentiating w.r.t $$y$$
 * $$\phi_y = T(x) + k_2'(y) $$

Substituting into ($$)
 * $$P'(x)p + T'(x) y + [T(x) + k_2'(y)]y' = R(x)y + Q(x)y'$$
 * $$\Rightarrow T'(x) y + [P'(x) + T(x)]y' + k_2'(y)y' = R(x)y + Q(x)y'$$

Equating coefficients of $$y'$$ in both sides, we obtain
 * $$k_2'(y) = 0$$
 * $$\Rightarrow k_2(y) = k (Constant)$$

Substituting into ($$), we have


 * $$\phi(x,y,p) = P(x)p + T(x) y + k\qquad\blacksquare$$

Pavel Bhowmik (talk) 15:45, 31 October 2012 (UTC)

=Problem 5.6* - Solving a linear second order ordinary differential equation=

Given:
The necessary and sufficient conditions for exactness of a 2nd order linear differential equation are given as

Find:
1. Show that $$G$$ below is exact.

2. Find the first integral using equation below if exact.

3. Find the solution to $$y(x)$$ for equation ($$).

Part 1:
First we have to find out if equation ($$) is exact. The first exactness condition is satisfied because equation ($$) was given in the correct form. The second exactness condition is satisfied if equations ($$) and ($$) are satisfied. So let's find $$f$$ and $$g$$ and then all the terms for equations ($$) and ($$).

By definition $$f$$ and $$g$$ are as follows,


 * $$f=\cos(x)$$
 * $$g=(x^2-\sin(x))p+2xy$$

where $$p$$ is equal to  $$y'$$.


 * $$f_{xx}=-\cos(x)$$
 * $$f_{xy}=0$$
 * $$f_{yy}=0$$
 * $$f_{y}=0$$
 * $$f_{xp}=0$$
 * $$f_{yp}=0$$
 * $$g_{pp}=0$$
 * $$g_{xp}=2x-\cos(x)$$
 * $$g_{yp}=0$$
 * $$g_{y}=2x$$

Now we plug these terms into ($$),
 * $$f_{xx} + 2pf_{xy} + p^2f_{yy} = g_{xp}+pg_{yp}-g_{y}$$
 * $$-\cos(x) + 2p(0) + p^2(0) = 2x-\cos(x)+p(0)-2x$$

Canceling terms you are left with,
 * $$-\cos(x)=-\cos(x)$$

Plugging in terms for equation($$) we get,
 * $$f_{xp}+pf_{yp}+2f_{y}= g_{pp}$$
 * $$0+p(0)+2(0)= 0$$
 * $$0=0$$
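The two exactness conditions checked by hand above can also be verified symbolically. A sketch assuming SymPy is available, using the same $$f$$ and $$g$$:

```python
import sympy as sp

x, y, p = sp.symbols('x y p')
f = sp.cos(x)                          # coefficient of y'' in G
g = (x**2 - sp.sin(x)) * p + 2*x*y

# first condition: f_xx + 2p f_xy + p^2 f_yy = g_xp + p g_yp - g_y
cond1 = sp.simplify(
    sp.diff(f, x, 2) + 2*p*sp.diff(f, x, y) + p**2*sp.diff(f, y, 2)
    - (sp.diff(g, x, p) + p*sp.diff(g, y, p) - sp.diff(g, y))
)
# second condition: f_xp + p f_yp + 2 f_y = g_pp
cond2 = sp.simplify(
    sp.diff(f, x, p) + p*sp.diff(f, y, p) + 2*sp.diff(f, y) - sp.diff(g, p, 2)
)
assert cond1 == 0 and cond2 == 0   # G is exact
```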

Part 2:

 * $$\phi(x,y,p)=h(x,y)+\int f(x,y,p)dp$$

The integral of $$f$$ is easy to find because we already have $$f$$.
 * $$\phi(x,y,p)=h(x,y)+\int \cos(x)dp$$

However, finding $$h(x,y)$$ is not so easy because we cannot simply read the term off from $$G$$. We have to use equation ($$) and solve for $$h$$ with respect to $$x$$ and $$y$$ independently.
 * $$g=\phi_{x}+\phi_{y}p$$

Therefore by plugging in terms into equation ($$),
 * $$g=[h_{x}-\sin(x)p]+(h_{y}+0)y'=x^2p-\sin(x)p+2xy$$

Therefore,
 * $$h_{x}=2xy$$
 * $$\Rightarrow h(x,y)=x^2 y+k_{1}(y)$$
 * $$h_{y}=x^2 +k_{1}'(y)=x^2$$
 * $$\Rightarrow k_{1}'(y)=0$$
 * $$\Rightarrow k_{1}(y)=k_1$$
 * $$h(x,y)=x^2 y+k_{1}$$

Now we can plug the $$h(x,y)$$ into ($$),


 * $$\phi(x,y,p)=x^2y+\cos(x)p=k_{2}$$

Part 3:
Now we need to find $$y(x)$$ and we can do this by solving the first integral for $$y$$.
 * $$\phi(x,y,p)=yx^2+\cos(x)p=k_{2}$$

First put differential equation into the form,
 * $$\frac{x^2}{\cos(x)}y+y'=\frac{k_{2}}{\cos(x)}$$

where $$a_{0}(x):=\frac{x^2}{\cos(x)}$$ and $$b(x):=\frac{k_{2}}{\cos(x)}$$. We know that the integrating factor for this form of equation can be obtained as
 * $$h(x)=\exp \int^x a_{0}(x)dx$$

Plugging in the terms we get,
 * $$h(x)=\exp \int^x \frac{x^2}{\cos(x)}dx$$

We know we can get $$y(x)$$ from the equation below,
 * $$y(x)= \frac{\int^x h(s)b(s)ds+k_{3}}{h(x)}$$
 * $$y(x)= \frac{\int^x \exp \left[\int^s \frac{u^2}{\cos(u)}du \right] \frac{k_{2}}{\cos(s)}\,ds+k_{3}}{\exp \left[\int^x \frac{s^2}{\cos(s)}ds \right]}$$

We were unable to obtain a closed-form solution because the integral $$\int^x \frac{x^2}{\cos(x)}dx$$ inside the integrating factor $$h(x)$$ has no closed form.

=Problem 5.7* Proving the equivalence of symmetry of mixed partial derivatives in the first integral =

Link to lecture notes Airport Notes

Find:
Prove the conditions below using the given equations.
 * $$ \phi_{xy}=\phi_{yx},\quad \phi_{yp}=\phi_{py},\quad \phi_{px}=\phi_{xp}$$

Solution:
We already know $$g_{0}$$ from equation ($$). So let's start with $$g_{1}$$


 * $$g_{1}=\frac{\partial}{\partial p} \frac {d\phi}{dx}$$
 * $$g_{1}=\frac{\partial}{\partial p} [\phi_{x}+\phi_{y}y'+\phi_{y'}y'']$$
 * $$g_{1}= \phi_{xy'}+\phi_{yy'}y'+\phi_{y'y'}y''$$

Now we can plug this $$g_{1}$$ into equation ($$): $$\frac{d}{dx}g_{1}=\frac {d}{dx}[\phi_{xy'}+\phi_{yy'}y'+\phi_{y'y'}y'']$$

We have two of the terms to solve equation ($$) but we still need to find $$g_{2}$$


 * $$\frac{d^2}{dx^2}g_{2}$$

where $$g_{2}=\phi_{y'}$$
 * $$\frac{d^2}{dx^2}g_{2}=\frac{d}{dx}\left[\frac{d}{dx}\phi_{y'}\right]=\frac{d}{dx}[\phi_{y'x}+\phi_{y'y}y'+\phi_{y'y'}y'']$$

Now that we have $$g_{0},\frac{d}{dx}g_{1},\frac{d^2}{dx^2}g_{2}$$, we can plug these into equation ($$).


 * $$g_{0}-\frac{d}{dx}g_{1}+\frac{d^2}{dx^2}g_{2}=0$$
 * $$\phi_{xy}-\frac{d}{dx}[\phi_{xy'}+\phi_{yy'}y'+\phi_{y}+\phi_{y'y'}y'']+\frac{d}{dx}[\phi_{y'x}+\phi_{y'y}y'+\phi_{y'y'}y'']=0$$

Canceling terms we get,
 * $$\phi_{xy}-\frac{d}{dx}[\phi_{xy'}+\phi_{yy'}y'+\phi_{y}]+\frac{d}{dx}[\phi_{y'x}+\phi_{y'y}y']=0$$

Now we need to combine like terms in the equation by the degree of y
 * $$(\phi_{xy}-\phi_{yx})+\frac{d}{dx}(\phi_{xy'}-\phi_{y'x})+\frac{d}{dx}(\phi_{y'y}y'-\phi_{yy'}y')=0$$

Therefore, the only way for this equation to equal zero is for each term to vanish individually,
 * $$(\phi_{xy}-\phi_{yx})=0$$
 * $$\phi_{xy}=\phi_{yx}$$

Plugging p in for y' we get,
 * $$\frac{d}{dx}(\phi_{xp}-\phi_{px})=0$$
 * $$(\phi_{xp}-\phi_{px})=0$$
 * $$\phi_{xp}=\phi_{px}$$

The final condition is found by solving the last term,
 * $$\frac{d}{dx}[(\phi_{py}-\phi_{yp})y']=0$$
 * $$(\phi_{py}-\phi_{yp})=0$$
 * $$\phi_{py}=\phi_{yp}$$

=Problem 5.8* - Equivalence of the Second Exactness Conditions for N2-ODEs =

Link to lecture notes

Given
The second exactness condition for N2-ODEs is given by

$$g_0 - \frac{dg_1}{dx} + \frac{d^2g_2}{dx^2} = 0$$ where

$$g_j := \frac{\partial G}{\partial y^{(j)}}$$

$$\quad y^{(i)} := \frac{\partial^i y}{\partial x^i}$$

$$\quad\text{and}\quad G := g + fy''$$

Second Exactness also given by:

$$\displaystyle \begin{align} f_{xx} + 2pf_{xy} + p^2f_{yy} &= g_{xp} + pg_{yp} - g_y\\ f_{xp} + pf_{yp} + 2f_y &= g_{pp} \end{align} $$

To Show
Show that the two forms are equivalent

Solution
Let $$p := y'$$ and $$q:= y''$$.

Then, $$ g_0 = \frac{\partial}{\partial y} (g + fy'') = g_y + f_yq$$

$$ g_1 = \frac{\partial}{\partial p} (g + fy'') = g_p + f_pq$$

$$ \frac{dg_1}{dx} = g_{px} + g_{py}p + g_{pp}q + ( f_{px} + f_{py}p + f_{pp}q ) q + f_pq'$$

$$ g_2 = \frac{\partial}{\partial q} (g + fy'') = f$$

$$ \begin{align} \frac{d^2g_2}{dx^2} =& \frac{d}{dx} (f_x + f_yp + f_pq) \\ =&\qquad f_{xx} + f_{xy}p + f_{xp}q\\ &+ f_{yx}p + f_{yy}p^2 + f_{yp}pq + f_yq \\ &+ f_{px}q + f_{py}pq + f_{pp}q^2 + f_pq' \end{align} $$

$$\qquad = f_{xx} + 2f_{xy}p + f_{yy}p^2 + 2f_{xp}q + 2f_{yp}pq + f_{pp}q^2 +f_pq'+f_yq$$

Plugging the above results into

$$ g_0 - \frac{dg_1}{dx} + \frac{d^2g_2}{dx^2} = 0 $$

yields

$$ ( f_{xx} + 2pf_{xy} + p^2f_{yy} - g_{xp} - pg_{yp} + g_y ) + ( f_{xp} + pf_{yp} + 2f_y - g_{pp} ) q = 0$$

As noted in the lecture notes, since 1 and q are linearly independent, the only way for the above equation to hold is for both coefficients to be equal to zero.

Thus, $$ \begin{align} f_{xx} + 2pf_{xy} + p^2f_{yy} &= g_{xp} + pg_{yp} - g_y \\ f_{xp} + pf_{yp} + 2f_y &= g_{pp} \end{align} $$

as required.

=Problem 5.9 - Taylor series derivation =

Given:
Taylor series at $$x=0$$ (i.e., the Maclaurin series)

Derive:

 * 1


 * 2

Solution:

 * 1

Let $$f(x)=\frac{1}{1-x}$$

Taking derivatives of $$f(x)$$, we get

$$f'(x)=\frac{1}{(1-x)^2}$$

$$f''(x)=\frac{2}{(1-x)^3}$$

$$f'''(x)=\frac{3*2}{(1-x)^4}$$

$$\,\,\,\,.....$$

$$f^{(n)}(x)=\frac{n!}{(1-x)^{(n+1)}}$$

Plugging into the Taylor series in Equation ($$) yields

$$f(x)=\frac{1}{1-x}=\sum_{n=0}^{\infty}\frac{f^{(n)}(0)}{n!}x^n $$


 * $$\Rightarrow \frac{1}{1-x}=\sum_{n=0}^{\infty}\left. \frac{n!}{n!(1-x)^{(n+1)}} \right| _{x=0}x^n $$


 * $$\Rightarrow \frac{1}{1-x}=\sum_{n=0}^{\infty}x^n $$


 * $$\Rightarrow \frac{1}{1-x}=1+x+x^2+... $$
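The resulting geometric series can be spot-checked numerically for any sample point with $$|x|<1$$ (the value 0.3 below is an arbitrary choice):

```python
# partial sum of the geometric series vs the closed form 1/(1-x)
x = 0.3                                   # any |x| < 1
partial = sum(x**n for n in range(60))    # 60 terms is far past convergence here
assert abs(partial - 1 / (1 - x)) < 1e-12
```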


 * 2

Let $$f(x)=\frac{1}{x}\arctan(1+x)=\frac{1}{x} f_1(x)$$

where $$f_1(x)=\arctan(1+x)$$

Taking derivatives of $$f_1(x)$$, we get

$$f_1'(x)=\frac{1}{1+(1+x)^2}$$

$$f_1''(x)=-\frac{2(x+1)}{((x+1)^2+1)^2}$$

$$f_1'''(x)=\frac{8(x+1)^2}{((x+1)^2+1)^3}-\frac{2}{((x+1)^2+1)^2}$$

$$\,\,\,\,.....$$

To find the Taylor series of $$f_1(x)$$ at $$x=0$$, we refer to Wolfram Alpha

$$f_1(x)=\arctan(1+x)=\frac{\pi}{4}+\frac{x}{2}-\frac{x^2}{4}+\frac{x^3}{12}-\frac{x^5}{40}+\cdots$$

$$\Rightarrow f(x)=\frac{1}{x} \arctan(1+x)=\frac{\pi}{4x}+\frac{1}{2}-\frac{x}{4}+\frac{x^2}{12}-\frac{x^4}{40}+\cdots$$
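The truncated series for $$\arctan(1+x)$$ can be spot-checked at a small sample point (0.05 below is an arbitrary choice; the remainder is then of order $$x^6$$):

```python
import math

x = 0.05   # small x so the truncated series is accurate
series = math.pi / 4 + x / 2 - x**2 / 4 + x**3 / 12 - x**5 / 40
assert abs(math.atan(1 + x) - series) < 1e-7
```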

=Problem 5.10 Finding the local maximum of a hypergeometric function=

Prove That:

 * Use Matlab to plot $$_2 F_1(5,-10;1;x)$$ near $$x=0$$ to display the local maximum (or minimum in this region)

Proof
To prove this equation, we will assume a relation, the proof of which has been left for future work.


 * $$n! \sum\limits_{k=0}^m \dbinom{m}{k} \dbinom{k+n}{n} (-x)^k = \sum\limits_{k=0}^m \dbinom{m}{k} n! \dbinom{k+n}{n} (-x)^k $$
 * $$= \sum\limits_{k=0}^m \dbinom{m}{k} \frac {(k+n)!}{k!} (-x)^k $$
 * $$= \sum\limits_{k=0}^m \dbinom{m}{k}(-1)^k \frac {\mathrm{d}^n}{\mathrm{d}x^n} x^{k+n} $$
 * $$=  \frac {\mathrm{d}^n}{\mathrm{d}x^n} \sum\limits_{k=0}^m (-1)^k\dbinom{m}{k} x^{k+n} $$
 * $$=  \frac {\mathrm{d}^n}{\mathrm{d}x^n} \left[\left( \sum\limits_{k=0}^m \dbinom{m}{k} (-x)^k \right) x^n \right]$$

Similarly

Using ($$), ($$) and ($$), we get


 * $$\frac {\frac {\mathrm{d}^n} {\mathrm{d}x^n} (1-x)^m x^n}{\frac {\mathrm{d}^m} {\mathrm{d}x^m} (1-x)^n x^m} = \frac {n !}{m !} \frac {(1-x)^m}{(1-x)^n}.\quad\blacksquare$$

Proof
The Jacobi polynomials are defined via the hypergeometric function as follows :

Setting $$\alpha = 0, \beta = a+b-1, n=-a$$ we get


 * $$_2F_1(a,b;1;\frac{1-z}{2}) = \frac{(-a)!}{(1)_{-a}} P_{-a}^{(0,a+b-1)}(z)$$

Since $$ (1)_{-a} = (-a)!$$


 * $$_2F_1(a,b;1;\frac{1-z}{2}) = P_{-a}^{(0,a+b-1)}(z)$$

Using Rodrigues' formula as


 * $$P_n^{(\alpha,\beta)} (z) = \frac{(-1)^n}{2^n n!} (1-z)^{-\alpha} (1+z)^{-\beta} \frac{d^n}{dz^n} \left\{ (1-z)^\alpha (1+z)^\beta (1 - z^2)^n \right\} $$

Setting $$\alpha = 0$$


 * $$\Rightarrow\,_2F_1(a,b;1;\frac{1-z}{2}) = \frac{(-1)^{-a}}{2^{-a} (-a)!} (1+z)^{-a-b+1} \frac{d^n}{dz^n} \left\{(1-z)^{-a} (1+z)^{a+b-1 - a } \right\} $$

Setting $$z=1-2x$$


 * $$\Rightarrow \frac{\mathrm{d}f}{\mathrm{d}x} = \frac{\mathrm{d}f}{\mathrm{d}z}\frac{\mathrm{d}z}{\mathrm{d}x}$$
 * $$\Rightarrow \frac{\mathrm{d}f}{\mathrm{d}x} = \frac{\mathrm{d}f}{\mathrm{d}z}\left(-\frac{1}{2}\right)$$

Also and

Substituting ($$), ($$), ($$), ($$) into ($$) and simplifying, we get

Similarly, setting $$\alpha = 0, \beta = -a-b+1, n=b-1, z=1-2x$$ we get


 * $$\Rightarrow \frac {_2F_1(a,b;1;x)} {_2F_1(-b+1,-a+1;1;x)} = (1-x)^{-a-b+1}  \frac {(b-1)!} {(-a)!} \times   \frac {(1-x)^{-a}} {(1-x)^{b-1}}  \times  \frac {\frac {\mathrm{d}^{-a}} {\mathrm{d}x^{-a}} \left[(1-x)^{b-1}x^{-a}\right]} {\frac {\mathrm{d}^{b-1}} {\mathrm{d}x^{b-1}} \left[(1-x)^{-a}x^{b-1}\right]}$$

Setting $$n=-a, m=b-1$$ in ($$)
 * $$\frac {\frac {\mathrm{d}^{-a}} {\mathrm{d}x^{-a}} \left[(1-x)^{b-1}x^{-a}\right]} {\frac {\mathrm{d}^{b-1}} {\mathrm{d}x^{b-1}} \left[(1-x)^{-a}x^{b-1}\right]} = \frac {(-a)!} {(b-1)!} \times \frac {(1-x)^{b-1}} {(1-x)^{-a}} $$
 * $$\Rightarrow \frac {(b-1)!} {(-a)!} \times  \frac {(1-x)^{-a}} {(1-x)^{b-1}}  \times  \frac {\frac {\mathrm{d}^{-a}} {\mathrm{d}x^{-a}} \left[(1-x)^{b-1}x^{-a}\right]} {\frac {\mathrm{d}^{b-1}} {\mathrm{d}x^{b-1}} \left[(1-x)^{-a}x^{b-1}\right]} = 1 $$
 * $$\Rightarrow \frac {_2F_1(a,b;1;x)} {_2F_1(-b+1,-a+1;1;x)} = (1-x)^{-a-b+1}$$
 * $$_2F_1(a,b;1;x) = (1-x)^{-a-b+1}\,_2F_1(-b+1,-a+1;1;x)\qquad \blacksquare$$

Solution:
Using ($$) and setting $$a=5, b=-10$$, we get
 * $$_2F_1(5,-10;1;x) = (1-x)^{-5+10+1}\,_2F_1(-4,11;1;x)$$
 * $$\Rightarrow\,_2F_1(5,-10;1;x) = (1-x)^6\,_2F_1(-4,11;1;x)$$

Since the parameter $$-4$$ in the second factor is a negative integer, the summation will terminate after $$-(-4)+1=5$$ terms
 * $$\Rightarrow\,_2F_1(5,-10;1;x) = (1-x)^6\sum\limits_{k=0}^4 \frac{(-4)_k(11)_k}{(1)_k}\frac{x^k}{k!}$$

Since $$(1)_k = k!$$
 * $$\Rightarrow\,_2F_1(5,-10;1;x) = (1-x)^6\sum\limits_{k=0}^4 (-4)_k(11)_k \frac{x^k}{(k!)^2}$$
 * $$\Rightarrow\,_2F_1(5,-10;1;x) = (1-x)^6\left[ (-4)_0(11)_0 \frac{x^0}{(0!)^2} + (-4)_1(11)_1 \frac{x^1}{(1!)^2} + (-4)_2(11)_2 \frac{x^2}{(2!)^2} + (-4)_3(11)_3 \frac{x^3}{(3!)^2} + (-4)_4(11)_4 \frac{x^4}{(4!)^2}\right]$$
 * $$\Rightarrow\,_2F_1(5,-10;1;x) = (1-x)^6\left[ 1 + (-4) \times 11 x + (-4)\times(-3)\times 11 \times 12 \times\frac{x^2}{4} + (-4)\times(-3)\times(-2)\times 11 \times 12 \times 13\times \frac{x^3}{36}\right.$$
 * $$\left. + (-4)\times(-3)\times(-2)\times(-1)\times 11 \times 12 \times 13 \times 14 \times \frac{x^4}{(24)^2} \right]$$
 * $$\Rightarrow\,_2F_1(5,-10;1;x) = (1-x)^6 (1 - 44x + 396x^2 - 1144x^3 + 1001x^4)\qquad\blacksquare$$

Since $$_2F_1(5,-10;1;x)$$ is a polynomial with six of its roots at $$x=1$$, the first derivative of $$_2F_1(5,-10;1;x)$$ will have at least 5 roots at $$x=1$$. Also, since $$_2F_1(5,-10;1;x)$$ is of degree 10, its first derivative has degree 9 and at most 9 real roots. Since 5 of those roots coincide at the single point $$x=1$$, at most 5 distinct points remain where the first derivative can vanish. So $$_2F_1(5,-10;1;x)$$ can have at most 5 optima combined. The following graph displays all 5 of these optima with blue circles.
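The closed form derived above can be checked against the terminating hypergeometric series directly. A pure-Python sketch (the helper names `pochhammer` and `hyp2f1_terminating` are our own):

```python
from math import factorial

def pochhammer(a, k):
    # rising factorial (a)_k = a (a+1) ... (a+k-1)
    result = 1
    for i in range(k):
        result *= a + i
    return result

def hyp2f1_terminating(a, b, c, x):
    # b = -10 is a negative integer, so the series terminates at k = 10
    return sum(pochhammer(a, k) * pochhammer(b, k) / pochhammer(c, k)
               * x**k / factorial(k) for k in range(11))

x = 0.123   # arbitrary sample point
lhs = hyp2f1_terminating(5, -10, 1, x)
rhs = (1 - x)**6 * (1 - 44*x + 396*x**2 - 1144*x**3 + 1001*x**4)
assert abs(lhs - rhs) < 1e-9
```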

Pavel Bhowmik (talk) 15:48, 31 October 2012 (UTC)

=Problem 5.11 - Altitude by Hypergeometric function=

Link to lecture notes

Given:

 * $$n=3, a=2, b=10$$

Find:
Plot $$z(t)$$ versus $$t$$

Solution:

 * $$\int\frac{dz}{az^n+b}=\frac{z}{b}\,_{2}F_{1}\left(1,\frac{1}{n} ; 1+\frac{1}{n} ; -\frac{az^n}{b} \right )+k$$

Using ($$) solve for $$t$$


 * $$t=\frac{-1}{10}z\,_{2}F_{1}\left(1,\frac{1}{3} ; \frac{4}{3} ; \frac{-1}{5}z^3 \right )+k$$

Matlab was used to evaluate the hypergeometric function. An alternate form of the solution was given by the Wolfram Alpha computational engine and can be found at this link
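The antiderivative formula in ($$) can be spot-checked by differentiating it numerically at a sample point. A sketch assuming SciPy's `hyp2f1` is available:

```python
from scipy.special import hyp2f1

a, b, n = 2.0, 10.0, 3.0   # the given constants

def antiderivative(z):
    # closed form of integral dz / (a z^n + b) via the hypergeometric function
    return z / b * hyp2f1(1, 1/n, 1 + 1/n, -a * z**n / b)

# central-difference derivative should recover the integrand 1/(a z^n + b)
z, h = 0.8, 1e-6
deriv = (antiderivative(z + h) - antiderivative(z - h)) / (2 * h)
assert abs(deriv - 1 / (a * z**n + b)) < 1e-8
```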

=Problem 5.12* Showing the hypergeometric function is a solution of the differential equation =

Find:
Part 1. See if the differential equation G is exact.

Part 2. Determine whether the differential equation is in power form.

Part 3. Show that the hypergeometric function is a solution to G.

Part 1:
We need to see if G is exact in order to find the solution. In order to be exact the function G has to satisfy the exactness conditions in equations ($$) and ($$). Use ($$) and ($$) to find f and g.


 * $$f=x-x^2$$
 * $$g=-aby+[c-(a+b+1)x]p$$


 * $$f_{xp}=0$$
 * $$f_{yp}=0$$
 * $$f_{y}=0$$
 * $$g_{pp}=0$$

Plugging these terms in the equation ($$),


 * $$0+p(0)+2(0)=0$$
 * $$0=0$$

which satisfies the exactness condition.

The equation G also needs to satisfy ($$),


 * $$f_{xx}+2pf_{xy}+p^2f_{yy}=g_{xp}+pg_{yp}-g_{y}$$
 * $$f_{xx}=-2$$
 * $$f_{xy}=0$$
 * $$f_{yy}=0$$
 * $$g_{xp}=-(a+b+1)$$
 * $$g_{yp}=0$$
 * $$g_{y}=-ab$$

Plugging these terms into equation ($$) we get,

 * $$-2+2p(0)+p^2(0)=-(a+b+1)+p(0)+ab$$
 * $$-2=ab-(a+b+1)$$

No independent variables remain in this condition; it reduces to a constraint on the constants alone, $$ab-a-b+1=(a-1)(b-1)=0$$.
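The exactness conditions for this $$f$$ and $$g$$ can be evaluated symbolically to confirm that no independent variables survive in the first condition. A sketch assuming SymPy:

```python
import sympy as sp

x, y, p, a, b, c = sp.symbols('x y p a b c')
f = x - x**2
g = -a*b*y + (c - (a + b + 1)*x) * p

# first condition: f_xx + 2p f_xy + p^2 f_yy - (g_xp + p g_yp - g_y)
cond1 = sp.expand(
    sp.diff(f, x, 2) + 2*p*sp.diff(f, x, y) + p**2*sp.diff(f, y, 2)
    - (sp.diff(g, x, p) + p*sp.diff(g, y, p) - sp.diff(g, y))
)
# second condition: f_xp + p f_yp + 2 f_y - g_pp
cond2 = sp.simplify(
    sp.diff(f, x, p) + p*sp.diff(f, y, p) + 2*sp.diff(f, y) - sp.diff(g, p, 2)
)

assert cond2 == 0
# cond1 involves only the constants a, b — no x, y, or p remain
assert not (cond1.free_symbols & {x, y, p})
```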

Part 2:
The equation G is in power form so we can find an integrating factor to make G exact.

Part 3:
We need to find the first integral of the function G to find the solution.


 * $$\phi(x,y,p)=h(x,y)+\int f(x,y,p)dp$$

This lowers the degree of the function G. We know $$f(x,y,p)$$, so the second half of the equation can be easily found.


 * $$\int f(x,y,p)dp$$
 * $$\int (x-x^2)dp$$
 * $$(x-x^2)p$$

Plugging this into our equation for the first integral we get,
 * $$\phi(x,y,p)=h(x,y)+(x-x^2)p$$

To find the solution y(x) we need to put the integral into the form,
 * $$a_{0}(x)y+y'=b(x)$$

From here we can find the first part of the solution,
 * $$h(x)=\exp \int^x a_{0}(x)dx$$

Now that we found $$h(x)$$ we can find $$y(x)$$
 * $$y(x)= \frac{\int^x h(s)b(s)ds+k_{3}}{h(x)}$$
 * $$y(x)= \frac{\int^x \left( \exp \int^s a_{0}(u)du \right) b(s)\,ds+k_{3}}{\exp \int^x a_{0}(s)ds}$$

Now we have the solution $$y(x)$$ for the hypergeometric differential equation ($$) and need to show that the hypergeometric function is a solution. The $$y(x)$$ we found has exponentials in the numerator and denominator, and we can replace these with equation ($$) seen below, $$ \exp x=1+\frac {x}{1!}+\frac{x^2}{2!}+\cdots$$ and expand the result as a power series in $$x$$, which produces the hypergeometric series.

=Problem 5.13* -Show exactness of Legendre, Bessel and Hermite equations =

Given
Legendre equation:

Bessel equation:

Hermite equation:

Find
1. Verify the exactness of the designated L2-ODE-VC.

2. If 13.3 is not exact, check whether it is in power form, and see whether it can be made exact using IFM.

3. Verify the above equations are homogeneous solutions of 13.3

$$ \begin{align} H_0(x) &=1 \\ H_1(x) &= 2x \\ H_2(x) &= 4x^2-2 \end{align} $$

Legendre equation
The first exactness condition is that

$$G= g(x,y,p) + f(x,y,p)y'' = 0$$

$$G = \underbrace {(1 - x^2)}_{f(x,y,p)}y'' \underbrace {-2xp + n(n+1)y}_{g(x,y,p)} = 0$$

this equation satisfies the first exactness condition. In order to satisfy the 2nd exactness condition,

$$f_{xx} + 2pf_{xy}+p^2f_{yy} = g_{xp} + pg_{yp} - g_{y}$$

$$f_{xp} + pf_{yp} + 2f_y = g_{pp}$$

$$f(x,y,p) = (1-x^2)$$

$$g(x,y,p) = -2xp + n(n+1)y$$

$$ \begin{align} f_{xx} &=-2 \\ f_{xy} &= 0 \\ f_{yy} &= 0 \\ g_{xp} &= -2 \\ g_{yp} &= 0 \\ g_y   &= n(n+1) \\ f_{xp} &= 0 \\ f_{yp} &= 0 \\ f_y &= 0 \\ g_{pp} &= 0 \\ \end{align} $$

Substituting the above values in the conditions

$$-2 = -2 + 0 - n(n+1), \,\,\,\,0=0$$

It shows that 13.1 satisfies the 2nd exactness condition when n=0 or n= -1

Using second method:

$$g_0 - \frac {dg_1}{dx} + \frac {d^2g_2}{dx^2}=0$$

$$ \begin{align} g_0 &= \frac {\partial G}{\partial y^{(0)}} = n(n+1) \\ g_1 &= \frac {\partial G}{\partial y^{(1)}} = -2x \\ g_2 &= \frac {\partial G}{\partial y^{(2)}} = (1-x^2) \\ \frac {dg_1}{dx} &= -2 \\ \frac {d^2g_2}{dx^2} &= -2 \\ \end{align} $$

Substituting the above values, 13.1 gives

$$n(n+1) + 2 - 2 = 0$$

It also shows that this equation satisfies the second exactness condition when n = 0 or n = -1.
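The second-method computation above can be cross-checked symbolically; below is a minimal sketch using sympy (our tooling choice, not part of the report):

```python
# Cross-check of the second exactness condition
#   g0 - d(g1)/dx + d^2(g2)/dx^2 = 0
# for the Legendre equation (1 - x^2) y'' - 2x y' + n(n+1) y = 0.
import sympy as sp

x, n = sp.symbols('x n')

g0 = n*(n + 1)   # coefficient of y
g1 = -2*x        # coefficient of y'
g2 = 1 - x**2    # coefficient of y''

condition = sp.simplify(g0 - sp.diff(g1, x) + sp.diff(g2, x, 2))
print(condition)               # reduces to n*(n + 1)
print(sp.solve(condition, n))  # vanishes only for n = -1 and n = 0
```

This confirms that the condition holds only for n = 0 or n = -1.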

Bessel Equation
$$G= g(x,y,p) + f(x,y,p)y'' = 0$$

$$G = \underbrace {x^2}_{f(x,y,p)}y'' + \underbrace {xp + (x^2-v^2)y}_{g(x,y,p)} = 0$$

This equation satisfies the first exactness condition. In order to satisfy the second exactness condition,

$$f_{xx} + 2pf_{xy}+p^2f_{yy} = g_{xp} + pg_{yp} - g_{y}$$

$$f_{xp} + pf_{yp} + 2f_y = g_{pp}$$

$$f(x,y,p) = x^2$$

$$g(x,y,p) = xp + (x^2-v^2)y$$

$$ \begin{align} f_{xx} &= 2 \\ f_{xy} &= 0 \\ f_{yy} &= 0 \\ g_{xp} &= 1 \\ g_{yp} &= 0 \\ g_y   &= (x^2-v^2) \\ f_{xp} &= 0 \\ f_{yp} &= 0 \\ f_y &= 0 \\ g_{pp} &= 0 \\ \end{align} $$

Substituting the above values in the conditions,

$$2 \neq 1 + 0 -(x^2 -v^2), \,\,\,\,0=0$$

It shows that 13.2 does not satisfy the first of these conditions and hence is not exact.

Using the second method:

$$g_0 - \frac {dg_1}{dx} + \frac {d^2g_2}{dx^2}=0$$

$$ \begin{align} g_0 &= \frac {\partial G}{\partial y^{(0)}} = x^2-v^2 \\ g_1 &= \frac {\partial G}{\partial y^{(1)}} = x \\ g_2 &= \frac {\partial G}{\partial y^{(2)}} = x^2 \\ \frac {dg_1}{dx} &= 1 \\ \frac {d^2g_2}{dx^2} &= 2 \\ \end{align} $$

Substituting the above values, 13.2 gives

$$(x^2 - v^2) - 1 + 2 = x^2 - v^2 + 1 \neq 0$$

Again we see that the equation is not exact.
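As a sketch, the same symbolic check (with sympy, again our tooling choice) confirms non-exactness for the standard Bessel form $$x^2y'' + xy' + (x^2-v^2)y = 0$$:

```python
# Second-method exactness check for the Bessel equation
#   x^2 y'' + x y' + (x^2 - v^2) y = 0.
import sympy as sp

x, v = sp.symbols('x v')

g0 = x**2 - v**2   # coefficient of y
g1 = x             # coefficient of y'
g2 = x**2          # coefficient of y''

condition = sp.simplify(g0 - sp.diff(g1, x) + sp.diff(g2, x, 2))
print(condition)   # x**2 - v**2 + 1, not identically zero: not exact
```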

Hermite equation
First method: the Hermite equation satisfies the first exactness condition due to its form below.

$$\underbrace {1}_{f(x,y,p)}\cdot y'' \underbrace {-2xp + 2ny}_{g(x,y,p)} = 0$$

In order to satisfy the second exactness condition, equation 13.3 must satisfy the two conditions above with

$$f(x,y,p) = 1$$ $$g(x,y,p) = -2xp + 2ny$$

$$ \begin{align} f_{xx} &= 0 \\ f_{xy} &= 0 \\ f_{yy} &= 0 \\ g_{xp} &= -2 \\ g_{yp} &= 0 \\ g_y   &= 2n \\ f_{xp} &= 0 \\ f_{yp} &= 0 \\ f_y &= 0 \\ g_{pp} &= 0 \\ \end{align} $$

$$0 + 0 + 0 = -2 + 0 - 2n$$ $$2n = -2$$ $$n= -1$$

$$0=0$$

It shows that 13.3 satisfies the 2nd exactness condition when n=-1.

Using the second method: $$g_0 - \frac {dg_1}{dx} + \frac {d^2g_2}{dx^2}=0$$

$$ \begin{align} g_0 &= \frac {\partial G}{\partial y^{(0)}} = 2n \\ g_1 &= \frac {\partial G}{\partial y^{(1)}} = -2x \\ g_2 &= \frac {\partial G}{\partial y^{(2)}} = 1 \\ \frac {dg_1}{dx} &= -2 \\ \frac {d^2g_2}{dx^2} &= 0 \\ \end{align} $$

From the above values, the condition gives

$$2n + 2 = 0$$ $$n= -1 $$

It shows that this equation satisfies the second exactness condition when n = -1.
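The same check can be sketched for the Hermite equation $$y'' - 2xy' + 2ny = 0$$ (again with sympy, as an assumption of tooling):

```python
# Second-method exactness check for the Hermite equation y'' - 2x y' + 2n y = 0.
import sympy as sp

x, n = sp.symbols('x n')

g0 = 2*n             # coefficient of y
g1 = -2*x            # coefficient of y'
g2 = sp.Integer(1)   # coefficient of y''

condition = sp.simplify(g0 - sp.diff(g1, x) + sp.diff(g2, x, 2))
print(condition)               # 2*n + 2
print(sp.solve(condition, n))  # exact only for n = -1
```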

Part Two
From the previous part we see that equation 13.3 is exact when n = -1, so the integrating factor method is not needed.

Part Three
From eq 13.3, the boundary conditions from the given values are: where $$b=0$$, $$y(0)= 1,\, y'(0)=0$$; where $$b=1$$, $$y(0)= 0,\, y'(0)=2$$; and where $$b=2$$, $$y(0)= -2,\, y'(0)= 0$$. Where b=1, the initial conditions are given as

$$y(0)= a_0 = 0, y'(0) = a_1 = 2$$

It shows that all even coefficients will be equal to zero because each coefficient is a multiple of its second predecessor.

$$a_0 = a_2 = a_4 = a_6 = a_8 = a_{10} = ..... = 0$$

$$a_3 = \frac {2(1-1)}{(1+2)(1+1)}a_1 = 0$$

It shows that $$a_5 = a_7 = a_9 = .... = 0$$

Only $$a_1 = 2$$ is nonzero.

Thus,

$$y(x) = H_1(x) = a_0\cdot x^0 + a_1\cdot x^1 + a_2\cdot x^2 + ...... a_n\cdot x^n = 2x $$

Where b=2, the initial conditions are given as

$$y(0)= a_0 = -2, y'(0) = a_1 = 0$$

$$a_2 = \frac {2(0-2)}{(0+2)(0+1)}a_0 = 4$$

$$a_4 = \frac {2(2-2)}{(2+2)(2+1)}a_2 = 0 $$

It shows that $$a_4 = a_6 = a_8 = .... = 0$$

Only $$a_0 = -2$$ and $$a_2 = 4$$ are nonzero.

Thus, $$y(x) = H_2(x) = a_0 \cdot x^0 + a_1\cdot x^1 + a_2\cdot x^2 + ........ a_n\cdot x^n = 4x^2 -2 $$
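The coefficient recurrence used above, $$a_{k+2} = \frac {2(k-n)}{(k+2)(k+1)}a_k$$, can be sketched in code; the helper below is hypothetical, not part of the report, and matches $$H_1(x)=2x$$ and $$H_2(x)=4x^2-2$$ from the given values.

```python
# Sketch: generate series coefficients for y'' - 2x y' + 2n y = 0 from the
# recurrence a_{k+2} = 2(k - n) / ((k + 2)(k + 1)) * a_k used above.
from fractions import Fraction

def hermite_series_coeffs(n, a0, a1, terms=8):
    """Return [a_0, a_1, ...] given the two initial coefficients."""
    a = [Fraction(0)] * terms
    a[0], a[1] = Fraction(a0), Fraction(a1)
    for k in range(terms - 2):
        a[k + 2] = Fraction(2 * (k - n), (k + 2) * (k + 1)) * a[k]
    return a

# b = 1 with y(0) = 0, y'(0) = 2 gives y = 2x = H_1(x)
print([int(c) for c in hermite_series_coeffs(1, 0, 2)[:4]])   # [0, 2, 0, 0]
# b = 2 with y(0) = -2, y'(0) = 0 gives y = -2 + 4x^2 = H_2(x)
print([int(c) for c in hermite_series_coeffs(2, -2, 0)[:4]])  # [-2, 0, 4, 0]
```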

Where b=0, the initial conditions are given as $$\displaystyle y(0)= a_0 = 1, y'(0) = a_1 = 0$$. With n=0, equation 13.3 reduces to

$$y'' - 2xy' = 0$$

$$y(x)= c_1 + \frac {1}{2}\sqrt{\pi}\cdot c_2\cdot erfi(x)$$

From the initial values, $$c_1 = 1$$ and $$c_2 = 0$$, so that $$y(x)=H_0(x) = 1$$.
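The closed-form solution for the b=0 case can be verified by direct substitution; a minimal sympy sketch:

```python
# Verify that y = c1 + (sqrt(pi)/2) c2 erfi(x) solves y'' - 2x y' = 0.
import sympy as sp

x, c1, c2 = sp.symbols('x c1 c2')
y = c1 + sp.sqrt(sp.pi)/2 * c2 * sp.erfi(x)

residual = sp.simplify(sp.diff(y, x, 2) - 2*x*sp.diff(y, x))
print(residual)   # 0, so the solution satisfies the ODE
# Initial values y(0) = 1, y'(0) = 0 force c1 = 1, c2 = 0, i.e. y = H_0 = 1.
```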

$$ \begin{align} H_0(x) &=1 \\ H_1(x) &= 2x \\ H_2(x) &= 4x^2-2\\ \end{align} $$

These results show that the given values are homogeneous solutions to equation 13.3.

=Problem 5.14* - Find expressions using Euler's Formula =

Given:
$$X^{(4)}-K^4X=0$$

Assuming

$$r_{1,2}=\pm K$$ and $$r_{3,4}=\pm i\, K$$

Euler's Formula

Find:
Find the expression for $$X(x)$$ in terms of $$cos\, Kx, sin\, Kx, cosh\,Kx, sinh\,Kx$$.

Recall Euler's formula for $$cos\, Kx, sin\, Kx$$; a similar relation exists for $$cosh\, Kx, sinh\, Kx$$.

Solution:
Referring to Lecture Note

$$\Rightarrow X(x)=c_1\,e^{r_1 x}+c_2\,e^{r_2 x}+c_3\,e^{r_3 x}+c_4\,e^{r_4 x}$$

Plugging $$r_{1,2,3,4}$$ into Equation($$) yields

$$\Rightarrow X(x)=c_1\,e^{Kx}+c_2\,e^{-Kx}+c_3\,e^{i\, Kx}+c_4\,e^{-i\, Kx}$$

Recall the Euler formulas of Equation($$), referring to the lecture note:

$$e^{ikx}=cos(kx)+i\,sin(kx)$$

$$e^{-ikx}=cos(kx)-i\,sin(kx)$$

$$e^{kx}=cosh(kx)+sinh(kx)$$

$$e^{-kx}=cosh(kx)-sinh(kx)$$

Plugging the Euler formulas into $$X(x)$$, we get

$$\Rightarrow X(x)=c_1\,[cosh(Kx)+sinh(Kx)]+c_2\,[cosh(Kx)-sinh(Kx)]+c_3\,[cos(Kx)+i\,sin(Kx)]+c_4\,[cos(Kx)-i\,sin(Kx)]$$

$$\Rightarrow X(x)=(c_1+c_2)\,cosh(Kx)+(c_1-c_2)sinh(Kx)+(c_3+c_4)\,cos(Kx)+(c_3-c_4)i\,sin(Kx)$$

Referring to Lecture note, if

$$\left\{\begin{matrix} b_3:=c_3+c_4 \in \mathbb{R}\\ \,\,i\,b_4:=c_4-c_3 \in \mathbb{C} \end{matrix}\right.$$

with:$$b_3,b_4\in \mathbb{R}$$

$$\Rightarrow \left\{\begin{matrix} c_3:=\frac{1}{2}(b_3-ib_4 )\in \mathbb{C}\\ c_4:=\frac{1}{2}(b_3+ib_4 )\in \mathbb{C} \end{matrix}\right.$$

Thus: $$X(x)=(c_1+c_2)\,cosh(Kx)+(c_1-c_2)\,sinh(Kx)+b_3\,cos(Kx)+b_4\,sin(Kx)$$

The 4 real constants are:$$\{c_1,c_2,b_3,b_4\}\in \mathbb{R}^4$$
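The equivalence of the exponential and trigonometric/hyperbolic forms of $$X(x)$$ can be sketched numerically; the sample constants below are arbitrary, chosen only for illustration, with $$c_3, c_4$$ built from real $$b_3, b_4$$ so the sine coefficient comes out real.

```python
# Numerical check: X(x) in exponential form equals the cosh/sinh/cos/sin form
# when c3 = (b3 - i b4)/2 and c4 = (b3 + i b4)/2 for real b3, b4.
import cmath
import math

K = 1.3
c1, c2, b3, b4 = 0.7, -0.4, 1.1, 0.5
c3 = 0.5 * (b3 - 1j * b4)
c4 = 0.5 * (b3 + 1j * b4)

def X_exp(x):
    return (c1 * math.exp(K * x) + c2 * math.exp(-K * x)
            + c3 * cmath.exp(1j * K * x) + c4 * cmath.exp(-1j * K * x))

def X_trig(x):
    return ((c1 + c2) * math.cosh(K * x) + (c1 - c2) * math.sinh(K * x)
            + b3 * math.cos(K * x) + b4 * math.sin(K * x))

# The two forms agree at arbitrary sample points (up to rounding).
for x in (-1.0, 0.0, 0.5, 2.0):
    assert abs(X_exp(x) - X_trig(x)) < 1e-9
```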

=Contribution Table=

= References =