User:Egm6321.f12.team3/Report5

=R5.1 Proof that the Exponential of the Transpose of a Matrix equals the Transpose of the Matrix Exponential=

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given
$$\displaystyle \mathbf A \in \mathbb R^{(n \times n)}$$

and

Solution
We will first expand the LHS, then the RHS of ($$) using ($$) and compare the two expressions.

Expanding the LHS,

$$\underset{\color{red}n \times n}{\exp[\mathbf{A}^T]} = \underset{\color{red}n \times n}{\mathbf{I}^T} + \underset{\color{red}n \times n}{\frac{1}{1!}\mathbf{A}^T} + \underset{\color{red}n \times n}{\frac{1}{2!}[\mathbf{A}^T]^2} + \cdots = \underset{\color{red}n \times n}{\sum^{\infty}_{k=0} \frac{1}{k!} [\mathbf{A}^T]^k}$$

But we know that

$$\mathbf I^T = \mathbf I$$

Now expanding the RHS,

$$\underset{\color{red}n \times n}{\exp[\mathbf{A}]^T} = \left[\underset{\color{red}n \times n}{\mathbf{I}} + \underset{\color{red}n \times n}{\frac{1}{1!}\mathbf{A}} + \underset{\color{red}n \times n}{\frac{1}{2!}[\mathbf{A}]^2} + \cdots = \underset{\color{red}n \times n}{\sum^{\infty}_{k=0} \frac{1}{k!} \mathbf{A}^k}\right]^T$$

which, on transposing term by term, reduces to

$$\underset{\color{red}n \times n}{\exp[\mathbf{A}]^T} = \underset{\color{red}n \times n}{\mathbf{I}^T} + \underset{\color{red}n \times n}{\frac{1}{1!}\mathbf{A}^T} + \underset{\color{red}n \times n}{\frac{1}{2!}[\mathbf{A}^T]^2} + \cdots = \underset{\color{red}n \times n}{\sum^{\infty}_{k=0} \frac{1}{k!} [\mathbf{A}^T]^k}$$

or

Comparing ($$) and ($$)

We conclude that LHS = RHS; hence proved.
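As an independent numerical spot-check (not part of the original proof), the identity $$\exp[\mathbf{A}^T] = \exp[\mathbf{A}]^T$$ can be verified with the truncated power series; the 2×2 matrix below is an arbitrary sample choice:

```python
# Spot-check exp(A^T) = (exp A)^T for a sample 2x2 matrix,
# using the truncated power series sum_{k=0}^{N} A^k / k!.
import math

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def transpose(X):
    return [list(row) for row in zip(*X)]

def mat_exp(A, terms=30):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity, k = 0 term
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, A)                     # A^k
        coef = 1.0 / math.factorial(k)
        result = [[result[i][j] + coef * power[i][j] for j in range(n)] for i in range(n)]
    return result

A = [[0.5, 1.2], [-0.3, 0.8]]       # arbitrary sample matrix
lhs = mat_exp(transpose(A))         # exp(A^T)
rhs = transpose(mat_exp(A))         # (exp A)^T
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```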

='''R5.2. Exponentiation of a Complex Diagonal Matrix''' =

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given
A Diagonal Matrix

Problem
Show that

Solution
We know, from Lecture Notes ,

Let us consider a simple yet generic 4×4 complex diagonal matrix $$ \mathbf D$$

where $$i=\sqrt{-1}$$.

Applying ($$) to ($$) and expanding,

Simplifying Term 2 and the other higher-power terms (up to Term k) in the following way,

$$ \text{Matrix 3} = \begin{bmatrix} ai & 0 & 0 & 0 \\ 0 & bi & 0 & 0 \\ 0 & 0 & ci & 0 \\ 0 & 0 & 0 & di \end{bmatrix}*\begin{bmatrix} ai & 0 & 0 & 0 \\ 0 & bi & 0 & 0 \\ 0 & 0 & ci & 0 \\ 0 & 0 & 0 & di \end{bmatrix}$$

Similarly,

Using ($$) and ($$) in ($$) and carrying out simple matrix addition, we get,

But every diagonal term of the matrix is of the form,

Therefore, ($$) can be rewritten as,

$$\text{(ai)}, \text{(bi)}, \text{(ci)}, \text{(di)}$$ are nothing but the diagonal elements of the original matrix in ($$). Hence,

Similarly it can be easily found for an $$n \times n$$ complex diagonal matrix that

Hence Proved.
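A quick numerical spot-check of this result (with arbitrary sample values for the diagonal entries): since a diagonal matrix satisfies $$\mathbf D^k = \text{Diag}(d_i^k)$$, the exponential series acts entrywise and should reduce to $$\text{Diag}(e^{d_i})$$.

```python
# Spot-check for R5.2: for D = Diag(ai, bi, ci, di), the series exp(D)
# reduces to Diag(e^{ai}, e^{bi}, e^{ci}, e^{di}).
# The values a, b, c, d below are arbitrary sample choices.
import cmath, math

d = [1j * v for v in (0.7, -1.3, 2.0, 0.4)]   # diagonal entries ai, bi, ci, di

# Because D is diagonal, D^k = Diag(d_i^k), so the series acts entrywise.
series = [sum(di**k / math.factorial(k) for k in range(40)) for di in d]
closed = [cmath.exp(di) for di in d]

assert all(abs(s - c) < 1e-12 for s, c in zip(series, closed))
```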

=R5.3 Show form of Exponentiation of Matrix in terms of Eigenvalues of that matrix= On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given
A matrix $$ \mathbf{A}$$ can be decomposed as

where

$$\mathbf \Lambda $$ is the diagonal matrix of eigenvalues of matrix $$ \mathbf{A}$$

and

$$\mathbf \Phi $$ is the matrix established by n linearly independent eigenvectors $$ \boldsymbol\phi_i (i=1,2,3,\ldots,n)$$of matrix  $$ \mathbf{A}$$, that is,

Problem
Show that

$$\displaystyle \exp [\mathbf{A}] = \mathbf \Phi \,\text{Diag}[\,e^{\lambda_1},\,e^{\lambda_2},\ldots,\,e^{\lambda_n}] \mathbf \Phi^{-1}$$

Solution
The power series expansion of exponentiation of matrix $$ \mathbf{A}$$ in terms of that matrix has been given as

Since matrix $$ \mathbf{A}$$ can be decomposed as,

Expanding the $$ k$$-th power of matrix $$ \mathbf{A}$$ yields

where the factors $$ (\mathbf \Phi) $$ adjacent to the factors $$ (\mathbf \Phi^{-1} ) $$ all cancel in pairs, that is,

Thus, the equation ($$) can be expressed as

According to the equation ($$), now we have,

Referring to the conclusion obtained in R5.2, which is

Replacing the matrix $$\mathbf D $$ with $$\mathbf \Lambda $$ and the elements $$d_i $$ with $$\lambda_i $$, where $$ i=1,2,3,\ldots,n $$, and then substituting into ($$) yields
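The decomposition result can be spot-checked numerically: build $$\mathbf A = \mathbf \Phi \mathbf \Lambda \mathbf \Phi^{-1}$$ from a sample eigenbasis (the matrices below are arbitrary choices), then compare the series $$\exp[\mathbf A]$$ against $$\mathbf \Phi \,\text{Diag}[e^{\lambda_i}]\, \mathbf \Phi^{-1}$$.

```python
# Spot-check for R5.3: with A = Phi Lambda Phi^{-1} built from a sample
# eigenbasis, the series exp(A) should equal Phi Diag(e^{lambda_i}) Phi^{-1}.
import math

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_exp(A, terms=40):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, A)
        coef = 1.0 / math.factorial(k)
        result = [[result[i][j] + coef * power[i][j] for j in range(n)] for i in range(n)]
    return result

Phi     = [[1.0, 1.0], [1.0, 2.0]]    # sample eigenvector matrix
Phi_inv = [[2.0, -1.0], [-1.0, 1.0]]  # its exact inverse
lam     = [0.5, -0.25]                # sample eigenvalues
Lam     = [[lam[0], 0.0], [0.0, lam[1]]]

A = mat_mul(mat_mul(Phi, Lam), Phi_inv)
lhs = mat_exp(A)
rhs = mat_mul(mat_mul(Phi, [[math.exp(lam[0]), 0.0], [0.0, math.exp(lam[1])]]), Phi_inv)
assert all(abs(lhs[i][j] - rhs[i][j]) < 1e-12 for i in range(2) for j in range(2))
```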

=R5.4 Show Decomposed Form of Matrix and its Exponentiation = On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given
Exponentiation of a matrix $$ \mathbf{A}$$ can be decomposed as

The matrix $$ \mathbf B $$ is defined in the lecture notes as

Problem
Show

and

Solution
To show equation ($$), we first find the eigenvalues $$\lambda $$ of matrix $$\mathbf B $$ from the characteristic equation below, introducing $$\mathbf I $$ to represent the identity matrix.

$$\displaystyle f(\lambda)=\left | \lambda \mathbf I-\mathbf B \right |=0$$

$$\displaystyle \Rightarrow f(\lambda)=\left |\begin{bmatrix}\lambda&0\\0&\lambda\end{bmatrix}-\begin{bmatrix}0&-1\\1&0\end{bmatrix} \right |=\left |\begin{bmatrix}\lambda&1\\-1&\lambda\end{bmatrix}  \right |=0$$

$$\displaystyle \Rightarrow f(\lambda)=\lambda^2+1=0$$

$$\displaystyle \Rightarrow \lambda_1 =i, \lambda_2 = -i$$

Having obtained the two eigenvalues of matrix $$\mathbf B $$, we now solve for the corresponding eigenvectors.

$$\displaystyle (\lambda_1 \mathbf I-\mathbf B) \mathbf X =\mathbf O$$

$$\displaystyle \Rightarrow (\begin{bmatrix}\lambda_1&0\\0&\lambda_1\end{bmatrix}-\begin{bmatrix}0&-1\\1&0\end{bmatrix}) \begin{bmatrix}x_1\\x_2\end{bmatrix}=\begin{bmatrix}0\\0\end{bmatrix}$$

Thus, for the first value of $$\lambda $$, we have

{{NumBlk|:|$$\displaystyle \left\{\begin{matrix}\lambda_1 x_1+x_2=0\\ -x_1+\lambda_1x_2=0\end{matrix}\right.$$ |$$}}

Substituting $$\lambda_1 =i$$ into the equations above and solving yields, for the eigenvalue $$\lambda_1 =i$$, that

{{NumBlk|:|$$\displaystyle \left\{\begin{matrix}x_1=1\\ x_2=-i\end{matrix}\right.$$ |$$}}

Similarly, we have the equations used to solve for the eigenvector corresponding to $$\lambda_2 =-i$$,

{{NumBlk|:|$$\displaystyle \left\{\begin{matrix}\lambda_2 x_1+x_2=0\\ -x_1+\lambda_2x_2=0\end{matrix}\right.$$ |$$}}

Substituting $$\lambda_2 =-i$$ into the equations above and solving yields, for the eigenvalue $$\lambda_2 =-i$$, that

{{NumBlk|:|$$\displaystyle \left\{\begin{matrix}x_1=1\\ x_2=i\end{matrix}\right.$$ |$$}}

Now we have obtained two eigenvectors $$ \boldsymbol\phi_1$$ and $$ \boldsymbol\phi_2$$ of matrix $$ \mathbf B$$, where

Thus we have

Then, calculating the inverse matrix of matrix $$ \mathbf \Phi $$ yields

Therefore we reach the conclusion that,

$$\displaystyle \mathbf B =  \begin{bmatrix}1 & 1\\-i&i\end{bmatrix} \begin{bmatrix}{i} & 0\\0&{-i}\end{bmatrix} \begin{bmatrix}i & -1\\i&1\end{bmatrix}\frac{1}{2i}$$

According to the conclusion we have reached in R5.3, we have,

$$\displaystyle \exp [\mathbf{B}] = \mathbf \Phi \,\text{Diag}[\,e^{\lambda_1},\,e^{\lambda_2}] \mathbf \Phi^{-1}$$

$$\displaystyle \Rightarrow \exp[\mathbf B t] = \mathbf \Phi \,\text{Diag}[\,e^{\lambda_1 t},\,e^{\lambda_2 t}] \mathbf \Phi^{-1}$$

$$\displaystyle \Rightarrow \exp[\mathbf B t] = \begin{bmatrix}1 & 1\\-i&i\end{bmatrix} \begin{bmatrix}e^{it} & 0\\0&e^{-it}\end{bmatrix} \begin{bmatrix}i & -1\\i&1\end{bmatrix}\frac{1}{2i}$$

Carrying out the matrix multiplications on the right-hand side of the equation above yields

$$\displaystyle \exp[\mathbf B t] = \begin{bmatrix}e^{it} & e^{-it}\\-ie^{it}&ie^{-it}\end{bmatrix} \begin{bmatrix}i & -1\\i&1\end{bmatrix}\frac{1}{2i}$$

$$\displaystyle \Rightarrow \exp[\mathbf B t] = \begin{bmatrix}ie^{it}+ie^{-it} & e^{-it}-e^{it}\\e^{it}-e^{-it}&ie^{it}+ie^{-it}\end{bmatrix} \frac{1}{2i}$$

Consider Euler’s Formula,

Replacing $$\displaystyle i$$ with $$\displaystyle -i$$ yields

Solving ($$) together with ($$), we have

$$\displaystyle \left\{\begin{matrix}e^{it}=\cos t+i \sin t\\ e^{-it}=\cos t-i \sin t\end{matrix}\right.$$

{{NumBlk|:|$$\displaystyle \Rightarrow \left\{\begin{matrix} \cos t=\frac{1}{2}(e^{it}+e^{-it})\\ \sin t=\frac{1}{2i}(e^{it}-e^{-it}) \end{matrix}\right.$$ |$$}}

Substituting ($$)  into ($$)  yields

Obviously, this is not the entrywise exponential of $$\mathbf B t$$:

$$\displaystyle \begin{bmatrix}\cos t  & -\sin t\\\sin t&\cos t\end{bmatrix}\ne \begin{bmatrix}1   & e^{-t}\\e^t&1\end{bmatrix}$$
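The rotation-matrix result can be confirmed numerically (with an arbitrary sample value of $$t$$): the series $$\exp[\mathbf B t]$$ should match $$\begin{bmatrix}\cos t & -\sin t\\ \sin t & \cos t\end{bmatrix}$$ and differ from the entrywise exponential.

```python
# Spot-check for R5.4: the series exp(Bt), B = [[0,-1],[1,0]], should give the
# rotation matrix [[cos t, -sin t],[sin t, cos t]], not the entrywise exponential.
import math

def mat_mul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_exp(A, terms=40):
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    for k in range(1, terms):
        power = mat_mul(power, A)
        coef = 1.0 / math.factorial(k)
        result = [[result[i][j] + coef * power[i][j] for j in range(n)] for i in range(n)]
    return result

t = 0.9                               # arbitrary sample time
Bt = [[0.0, -t], [t, 0.0]]
series = mat_exp(Bt)
rotation = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]
entrywise = [[1.0, math.exp(-t)], [math.exp(t), 1.0]]

assert all(abs(series[i][j] - rotation[i][j]) < 1e-12 for i in range(2) for j in range(2))
assert any(abs(series[i][j] - entrywise[i][j]) > 0.1 for i in range(2) for j in range(2))
```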

=R*5.5 Generating a class of exact L2-ODE-VC = On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given
An L2-ODE-VC:

The first integral $$ \phi(x,y,p)$$ can also be expressed as:

Problem
Show that ($$) and ($$) lead to a general class of exact L2-ODE-VC of the form:

Nomenclature
$$\displaystyle p := y ' := \frac {dy}{dx}$$

Derivation of Eq. 5.5.3
The first exactness condition for L2-ODE-VC:

From ($$) and ($$), we can infer that

Integrating ($$) w.r.t. $$p$$, we obtain:

The partial derivatives of $$ \phi $$ w.r.t. $$x$$ and $$y$$ can be written as:

Substituting the partial derivatives of $$\phi$$ w.r.t. $$x$$, $$y$$ and $$p$$ [($$), ($$), ($$)] into ($$), we obtain:

Comparing ($$) with ($$), we can write:

Thus $$ \displaystyle \frac{\partial k(x,y)}{\partial x} = R(x) y $$

Integrating w.r.t x,

Substituting the $$ k(x,y) $$ obtained in ($$) back into the expression for $$ \phi $$ obtained in ($$), we obtain:

The partial derivative of $$\phi$$ ($$) w.r.t y,

But from ($$) and ($$), we see that $$ \displaystyle \phi_y = Q(x) $$.

So, $$Q(x)=T(x)+k'_1(y)$$

Since $$ Q(x)$$ is a function of $$x$$ alone, we can now say that $$ T(x) = Q(x) $$ and $$ {k_1}^{\prime}(y) = 0 $$.

Thus $$ k_1(y) $$ is a constant.

Hence we obtain the following expression for $$ \phi $$:

which represents a general class of Exact L2-ODE-VC.

=R*5.6 Solving a L2-ODE-VC = On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Problem
1. Show that ($$) is exact.

2. Find $$\displaystyle \phi$$

3. Solve for $$\displaystyle y(x)$$

Nomenclature
$$\displaystyle p := y ' := \frac {dy}{dx}$$

$$\displaystyle f_{ij} = \frac{\partial^{2}f}{\partial i \,\partial j}$$

Exactness Conditions
The exactness conditions for an N2-ODE (Nonlinear Second-Order Ordinary Differential Equation) are:

First Exactness condition

For an equation to be exact, it must be of the form

$$ \displaystyle G(x,y,y',y'') = g(x,y,p) + f(x,y,p)y'' = \frac{\mathrm{d} \phi}{\mathrm{d} x}$$

Second Exactness Condition

Work
We have

$$ \displaystyle G = (\cos x)y'' + (x^2 - \sin x)y' + 2xy = 0$$

Where we can identify

$$\displaystyle g(x,y,p) = (x^2 - \sin x)y' + 2xy$$

and

$$\displaystyle f(x,y,p)= \cos x $$

Thus the equation satisfies the first exactness condition.

For the second exactness condition, we first calculate the various partial derivatives of f and g.

$$\displaystyle g_p = x^2 - \sin x$$

$$\displaystyle g_{pp} = 0$$

$$\displaystyle g_{xp} = 2x - \cos x$$

$$\displaystyle g_{yp} = 0$$

$$\displaystyle g_y = 2x$$

$$\displaystyle f_p = 0$$

$$\displaystyle f_{xp} = f_{yp} = 0$$

$$\displaystyle f_y = f_{yy} = f_{xy} = 0$$

$$\displaystyle f_x = -\sin x$$

$$\displaystyle f_{xx} = -\cos x $$

Substituting the values in ($$) we get

$$\displaystyle L.H.S = -\cos x + 0 + 0 = -\cos x$$

$$\displaystyle R.H.S = 2x -\cos x + 0 - 2x = -\cos x$$

Therefore the first relation is satisfied.

Substituting the values in ($$) we get

$$\displaystyle L.H.S. = 0 + 0 + 0 = 0$$

$$\displaystyle R.H.S = 0$$

Therefore the second relation is satisfied as well.

Thus the second exactness condition is satisfied and the given differential equation is exact.

Now, we have $$\displaystyle f(x,y,p) = \phi _p $$

Integrating w.r.t. p, we get

$$\displaystyle \phi = \int \cos x\,dp + h(x,y) = p\cos x + h(x,y)$$

where h(x,y) is a function of integration as we integrated only partially w.r.t. p.

Partially differentiating ($$) w.r.t x

$$\displaystyle \phi_x = -p\sin x + h_x$$

Partially differentiating ($$) w.r.t y

$$\displaystyle \phi_y = h_y$$

From equation ($$), we have

$$\displaystyle g(x,y,p) = \phi_x + \phi_{y}\,y'$$

$$\displaystyle = -p\sin x + h_x + h_{y}\,p$$

$$\displaystyle = p(h_y - \sin x) + h_x $$

We have established that

$$\displaystyle g(x,y,p) = (x^2 - \sin x)y' + 2xy$$

Comparing the coefficients in the two expressions for $$g$$, we get,

$$\displaystyle h_y = x^2, \qquad h_x = 2xy$$

Integrating $$h_x$$ w.r.t. $$x$$,

$$\displaystyle h = x^{2}y + k_{1}(y)$$

Thus,

$$\displaystyle h_y = x^{2} + k_{1}'(y) = x^{2}, \therefore k_1 = const $$

Thus we have

$$\displaystyle \phi = \cos x \cdot y' + x^{2}y + k_1 = k_2$$

$$\displaystyle \phi = \cos x \cdot y' + x^{2}y = k$$

$$\displaystyle y' + \frac{x^{2}y}{\cos x} = \frac{k}{\cos x}$$

This first-order ODE can be solved using the familiar Integrating Factor Method.

$$\displaystyle h(x) = e^{\int \frac{x^{2}}{\cos x} \,dx} $$

$$\displaystyle y = \frac{1}{h(x)}\left[\int h(x)\,\frac {k}{\cos x}\,dx + c\right]$$

$$\displaystyle y = \frac {1}{e^{\int \frac{x^{2}}{\cos x}\,dx}} \left[\int e^{\int \frac{x^{2}}{\cos x} \,dx} \,\frac{k}{\cos x}\,dx + c \right]$$
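As a numerical sanity check of the exactness result, we can integrate the original ODE with a classical RK4 scheme and verify that the first integral $$\phi = \cos x \cdot y' + x^{2}y$$ stays constant along the solution; the initial conditions below are an arbitrary sample choice.

```python
# Numerical check for R5.6: along a solution of
# (cos x) y'' + (x^2 - sin x) y' + 2 x y = 0,
# the first integral phi = (cos x) y' + x^2 y should stay constant.
import math

def rhs(x, y, p):
    # y'' solved from the ODE (valid while cos x != 0)
    return ((math.sin(x) - x**2) * p - 2 * x * y) / math.cos(x)

def phi(x, y, p):
    return math.cos(x) * p + x**2 * y

x, y, p = 0.0, 1.0, 0.0   # sample initial conditions y(0) = 1, y'(0) = 0
h = 1e-3
phi0 = phi(x, y, p)
for _ in range(1000):     # classical RK4 up to x = 1 (cos x > 0 on [0, 1])
    k1y, k1p = p, rhs(x, y, p)
    k2y, k2p = p + 0.5*h*k1p, rhs(x + 0.5*h, y + 0.5*h*k1y, p + 0.5*h*k1p)
    k3y, k3p = p + 0.5*h*k2p, rhs(x + 0.5*h, y + 0.5*h*k2y, p + 0.5*h*k2p)
    k4y, k4p = p + h*k3p, rhs(x + h, y + h*k3y, p + h*k3p)
    y += h * (k1y + 2*k2y + 2*k3y + k4y) / 6
    p += h * (k1p + 2*k2p + 2*k3p + k4p) / 6
    x += h

assert abs(phi(x, y, p) - phi0) < 1e-8   # first integral is conserved
```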

=R*5.7 Show equivalence to symmetry of second partial derivatives of first integral = On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given
where

Problem
Show equivalence to symmetry of mixed second partial derivatives of first integral, that is

$$\displaystyle \phi_{xy}=\phi_{yx}, \phi_{py}=\phi_{yp}, \phi_{xp}=\phi_{px} $$

where

$$\displaystyle p(x):=y'(x)$$

Solution
From ($$), we have,

$$\displaystyle \frac{d}{dx}g_1=\frac{d}{dx}[\phi_{xp}+\phi_{yp}y'+\phi_{pp}y'']+\frac{d\phi_y }{dx} $$

Substituting ($$),($$) and ($$)  into ($$)  yields

$$\displaystyle \frac{\partial}{\partial y}(\frac{d \phi}{dx}) -\frac{d}{dx}[\phi_{xp}+\phi_{yp}y'+\phi_{pp}y'']-\frac{d }{dx}(\frac{\partial\phi}{\partial y})+\frac{d}{dx}[\phi_{px}+\phi_{py}y'+\phi_{pp}y'']=0 $$

Because

$$\displaystyle \frac{\partial}{\partial y}(\frac{d \phi}{dx})=\frac{\partial}{\partial y}(\frac{\partial \phi}{\partial x}+\frac{\partial \phi}{\partial y}\frac{dy}{dx}+\frac{\partial \phi}{\partial y'}\frac{dy'}{dx})=\phi_{xy}+\phi_{yy}\frac{dy}{dx}+\phi_{py}\frac{dy'}{dx} $$

$$\displaystyle \frac{d }{dx}(\frac{\partial\phi}{\partial y}) =[\frac{\partial}{\partial x}(\frac{\partial\phi}{\partial y})+\frac{\partial}{\partial y}(\frac{\partial\phi}{\partial y})\frac{dy}{dx}+\frac{\partial}{\partial y'}(\frac{\partial\phi}{\partial y})\frac{dy'}{dx}] =\phi_{yx}+\phi_{yy}\frac{dy}{dx}+\phi_{yp}\frac{dy'}{dx} $$

Thus

Substituting ($$) into ($$) yields

Because

Substitute ($$) into ($$), we have

$$\displaystyle g_0-\frac{dg_1}{dx}+\frac{d^2g_2}{dx^2}=(\phi_{xy}-\phi_{yx})+2(\phi_{py}-\phi_{yp})y''+\frac{d}{dx}(\phi_{px}-\phi_{xp})+y'\frac{d}{dx}(\phi_{py}-\phi_{yp})=0 $$

$$y''$$ and $$y'$$ are the second and first derivatives of an arbitrary solution $$y$$ of any second-order ODE for which the equation $$g_0-\frac{dg_1}{dx}+\frac{d^2g_2}{dx^2}=0 $$ holds. Hence the factor $$2y''+y'\frac{d}{dx}$$, which consists of derivatives of the solution function together with the derivative operator and therefore depends in part on the solution of the ODE, can be arbitrary; it is thus linearly independent of the derivative operator $$\frac{d}{dx}$$, which is the factor of the third term on the left-hand side of ($$).

Similarly, comparing the first and third terms on the left-hand side of ($$) shows that the factor 1 of the first term (which can be treated as the unit basis of the function space) and the derivative operator of the third term (which is another basis of the derivative function space) are linearly independent of each other.

For the left-hand side of ($$) to be zero under all circumstances, we must have,

while

From ($$),since the factor $$2y''+y'\frac{d}{dx}$$ is arbitrary, we obtain,

Thus,

From ($$), consider $$(\phi_{px}-\phi_{xp}) $$ to be a function of the variables $$x$$, $$y$$ and $$p$$, which can be represented as $$h(x,y,p) $$; thus,

Since the partial derivative operators $$\frac{\partial}{\partial x},\frac{\partial}{\partial y},\frac{\partial}{\partial p}$$ are linearly independent, we have,

Obviously the only condition by which the three equations above are all satisfied is that the function $$h(x,y,p)$$ is a numerical constant.

Thus, we have

where $$C $$ is a constant. To find the value of the constant $$C $$, proceed as follows.

Integrating both sides of ($$) with respect to $$x$$,

where the term $$f(y,p)$$ is an arbitrarily selected function of the independent variables $$y$$ and $$p$$. Then integrating both sides of ($$) with respect to $$p$$,

where the term $$g(x,y)$$ is an arbitrarily selected function of the variables $$x$$ and $$y$$.

Taking the partial derivative of both sides of ($$) with respect to $$x$$ gives

Then taking the partial derivative of both sides of ($$) with respect to $$p$$,

Because the right-hand side of ($$) is a function of the two variables $$y$$ and $$p$$, while the left-hand side is a function of $$p$$ only, the equation ($$) could not hold if the constant $$C $$ had a non-zero value. Thus, the only condition under which equation ($$) is satisfied is $$C=0 $$ with $$\frac{\partial}{\partial p}[f(y,p)]=0 $$, that is, $$f(y,p)= f(y)$$.

Substituting $$C=0 $$ into ($$) yields,

Thus we have

We are now left with $$\displaystyle (\phi_{xy}-\phi_{yx}) = 0$$

Thus

='''R*5.8. Working with the coefficients in 1st exactness condition'''=

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Problem
Using the coefficients in the 1st exactness condition, prove that ($$) can be written in the form

Nomenclature
For an equation to be exact, it must be of the form

$$ \displaystyle G(x,y,y',y'') = g(x,y,p) + f(x,y,p)y'' $$

Using the chain and product rules,

Plugging Eqs. (2), (3), and (4) into Eq. (1),

After cancellation of the opposite terms,

Now, we can group the terms

$$ \displaystyle(f_{xx} + 2pf_{xy} + p^2f_{yy} - g_{px} - pg_{py} +g_y)1 = \bar g$$

and

$$ \displaystyle (f_{xp} +pf_{yp} +2f_y - g_{pp})q = \bar f $$

Since 1 and $$q$$ (i.e. the second derivative of $$y$$) are in general linearly independent, for the equation to hold true, their coefficients must both be equal to zero.

Thus we say that

$$ \displaystyle \bar g = (f_{xx} + 2pf_{xy} + p^2f_{yy} - g_{px} - pg_{py} +g_y)1 = 0$$

and

$$ \displaystyle \bar f = (f_{xp} +pf_{yp} +2f_y - g_{pp})q = 0 $$

Which is the required proof.

=R5.9: Use of Maclaurin Series=

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Problem
Use the Taylor Series at x=0 (Maclaurin Series) to derive

$$\displaystyle (1-x)^{-a} = 1+ ax + a(a+1)\frac {x^2}{2!} + a(a+1)(a+2)\frac {x^3}{3!}+\cdots = F(a,b;b;x) $$

$$\displaystyle \frac{1}{x} \arctan(1+x) = 1 - \frac{x^2}{3} + \frac{x^4}{5} - \frac{x^6}{7} + \cdots = F \left( \frac{1}{2},1;\frac{3}{2};-x^2 \right) $$

Solution
The Taylor series expansion of a function f(x) about a real or complex number c is given by the formula

When the expansion point is zero, i.e. c = 0, the resulting series is called the Maclaurin series.

Part a
We have the function

$$\displaystyle f(x) = \frac {1}{(1-x)^a}$$

Rewriting the Maclaurin series expansion,

Substituting the values from the tables in ($$) we get

Where $$(a)_0 := 1$$

$$(a)_k := a(a+1)(a+2)\cdots(a+k-1)$$

We can represent

$$ \sum_{k=0} ^ {\infty } \frac{(a)_k \, (b)_k}{(c)_k}\, \frac {x^k}{k!} = F(a,b;c;x) $$

($$) can be written as $$\sum_{k=0} ^ {\infty } \frac{(a)_k (b)_k}{(b)_k}\frac {x^k}{k!} = F(a,b;b;x)$$, hence proved.
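The binomial series result of part a can be spot-checked numerically (with arbitrary sample values of $$a$$ and $$x$$, $$|x|<1$$): the partial sums $$\sum_k (a)_k x^k/k!$$ should converge to $$(1-x)^{-a}$$.

```python
# Spot-check for part a: sum_k (a)_k x^k / k! converges to (1 - x)^(-a) for |x| < 1.
# Terms are built iteratively via term_{k+1} = term_k * (a + k) * x / (k + 1)
# to avoid overflow from large Pochhammer values and factorials.
def binom_series(a, x, terms=200):
    total, term = 0.0, 1.0      # term starts at (a)_0 x^0 / 0! = 1
    for k in range(terms):
        total += term
        term *= (a + k) * x / (k + 1)
    return total

a, x = 1.5, 0.3                 # arbitrary sample values
assert abs(binom_series(a, x) - (1.0 - x)**(-a)) < 1e-12
```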

Part b
We have the function

$$\displaystyle \frac{1}{x}\arctan(1+x)$$

We will use a slightly different approach here compared to part a of the solution. We will expand $$\displaystyle \arctan(1+x)$$ and multiply the resulting expansion by $$\displaystyle \frac {1}{x}$$

Rewriting the Maclaurin series expansion,

Substituting the values from the tables in ($$) we get

Multiplying ($$) with $$\displaystyle \frac {1}{x}$$

$$\displaystyle \frac{1}{x} \arctan(1+x) = \frac{\pi}{4x} + \frac {1}{2\cdot 1!} - \frac{1}{2\cdot 2!}\, x+\frac{1}{2\cdot 3!}\,x^2+ \cdots$$

This expression does not match the expression that we have been asked to prove. We believe this is because of a misprint, and the expression to be derived must be $$\frac{1}{x} \arctan(x)$$

Expanding $$\arctan(x)$$ using the Maclaurin series

Rewriting the Maclaurin series expansion,

Substituting the values from the tables in ($$) we get

Multiplying ($$) with $$\displaystyle \frac {1}{x}$$

$$\displaystyle \frac{1}{x}\arctan(x) = \frac {1}{1!} - \frac{2}{3!}\,x^2 + \frac{24}{5!}\,x^4 - \cdots = 1 - \frac{x^2}{3} + \frac{x^4}{5} - \cdots$$

which is the required RHS expression.
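The corrected series of part b can likewise be spot-checked (with an arbitrary sample $$x$$, $$0<|x|<1$$) against the built-in arctangent:

```python
# Spot-check for part b: 1 - x^2/3 + x^4/5 - ... equals arctan(x)/x for 0 < |x| < 1.
import math

def arctan_over_x(x, terms=80):
    # partial sum of sum_k (-1)^k x^(2k) / (2k + 1)
    return sum((-1)**k * x**(2 * k) / (2 * k + 1) for k in range(terms))

x = 0.4                                   # arbitrary sample value
assert abs(arctan_over_x(x) - math.atan(x) / x) < 1e-12
```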

=R5.10 Gauss Hypergeometric Series =

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Problem
1. Use MATLAB to plot $$F(5,-10;1;x)$$ near x=0 to show the local maximum (or maxima) in this region.

Solution
The MATLAB code, shown below, will plot the hypergeometric function $$F(5,-10;1;x)$$ over the interval: $$0\leq x \leq 0.8$$.

The plot of the hypergeometric function near x=0 reveals a local maximum of 0.1481 at x = 0.23.
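The MATLAB listing is not reproduced here; the following is a hypothetical Python stand-in for the same computation. Because $$b=-10$$ makes the series terminate, $$F(5,-10;1;x)$$ is evaluated directly from its first eleven Pochhammer terms, and a grid scan then locates the interior local maximum near x = 0.23.

```python
# Hypothetical Python stand-in for the MATLAB plot script: evaluate the
# terminating series F(5,-10;1;x) and scan [0, 0.8] for a local maximum.
def pochhammer(a, k):
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

def F(x, a=5.0, b=-10.0, c=1.0):
    fact = 1.0
    total = 0.0
    for k in range(11):          # (b)_k = 0 for k >= 11, so the series terminates
        if k > 0:
            fact *= k            # running k!
        total += pochhammer(a, k) * pochhammer(b, k) / (pochhammer(c, k) * fact) * x**k
    return total

xs = [i / 1000.0 for i in range(801)]    # grid on 0 <= x <= 0.8
ys = [F(x) for x in xs]
# interior local maxima: y[i-1] < y[i] > y[i+1]
peaks = [(xs[i], ys[i]) for i in range(1, len(xs) - 1) if ys[i - 1] < ys[i] > ys[i + 1]]
x_peak, y_peak = peaks[0]                # first local max, near x = 0.23
```

Plotting the pairs (xs, ys), e.g. with matplotlib, reproduces the figure described above.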



The hypergeometric function $$F(5,-10;1;x)$$ can be expressed as $$\sum^\infty_{k=0}\frac{(a)_k(b)_k}{(c)_k}\frac{x^k}{k!}$$ using the Pochhammer symbol.

Here $$ a=5$$, $$ b=-10$$ and $$c=1$$.

The hypergeometric series represented by $$F(5,-10;1;x)$$ terminates after the 11th term because the constant b = -10: starting with the 12th term in the series, the factor $$(b+k-1)$$ appears in the numerator.

For the 12th term in the series k = 11, so $$ (-10 + 11 -1 ) = 0$$, and every later term also contains this zero factor.

The hypergeometric series represented by the function $$F(5,-10;1;x)$$ can be written in expanded form:

If the expansion of $$ (1-x)^6\cdot(1001x^4-1144x^3+396x^2-44x+1)$$ agrees with ($$), then it is a valid representation of the hypergeometric function.

$$(1-x)^6 = 1-6x+15x^2-20x^3+15x^4-6x^5+x^6$$


 * $$1001x^4-6006x^5+15015x^6-20020x^7+15015x^8-6006x^9+1001x^{10}$$
 * $$-1144x^3+6864x^4-17160x^5+22880x^6-17160x^7+6864x^8-1144x^9$$
 * $$396x^2-2376x^3+5940x^4-7920x^5+5940x^6-2376x^7+396x^8$$
 * $$-44x+264x^2-660x^3+880x^4-660x^5+264x^6-44x^7$$
 * $$1-6x+15x^2-20x^3+15x^4-6x^5+x^6$$

Combining all like terms yields the following:

$$1 - 50x + 675x^2 -4200x^3 + 14700x^4 -31752x^5 + 44100x^6 -39600x^7 + 22275x^8 -7150x^9 +1001x^{10}$$

The expansion of ($$) agrees with the expanded form of the hypergeometric function ($$), which confirms that ($$) is true.

= R 5.11 Calculation of Time Taken by a projectile to hit the Ground =

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given
Where $$ _2F_1 (a_1,a_2;b_1;x)$$ is a Hypergeometric Function.

Problem
Consider the integral in (3) Pg.63-8 and ($$)

$$ \int_{z(0)=0}^{z(t)}\frac{dz}{az^n+b}= -\int_0^t dt = -t $$

Let n=3, a=2 and b=10

For each value of time (t), solve for altitude z(t), plot z(t) vs t, and find the time when projectile returns to ground.

Solution
The given integral is a reduced form of integral (3) Pg.63-8, which relates the mass of a projectile, the forces acting upon it when moving through air (the air resistance, which is a function of its height, and its own weight), and the time taken for the projectile to reach the ground. Thus it represents a real-world problem whose solution must actually exist.

We have been given the values of n, a and b. Substituting the values in ($$), we get:

$$ \int \frac{dz}{2z^3+10} = \frac{1}{10}z\ _2F_1 \left(1,\frac{1}{3};1+ \frac{1}{3};-2\frac{z^3}{10}\right) $$

$$ = \frac{1}{10}z\ _2F_1 \left(1,\frac{1}{3}; \frac{4}{3};\frac{-2z^3}{10}\right) $$

According to Wolfram Alpha, the solution of the above hypergeometric function contains complex terms, which does not seem to make sense, as the function represents a real-world problem with real numbers.

When expanded, this is a series that goes to infinity, as there is no negative integer parameter in the hypergeometric function that would make one of the terms go to zero. But a projectile cannot stay in the air forever, and at a particular point in time the function must necessarily go to zero. This does not seem to happen when we look at the hypergeometric function.
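For small $$z$$, at least, the hypergeometric representation of the antiderivative can be checked numerically: with $$a=1$$, $$b=1/3$$, $$c=4/3$$ the series simplifies term by term, and the partial sums should match direct quadrature of $$1/(2z^3+10)$$ from 0 to $$z$$.

```python
# Sanity check: for small z the series form of (z/10) 2F1(1, 1/3; 4/3; -2 z^3 / 10)
# matches direct quadrature of 1/(2 z^3 + 10) on [0, z].
def integrand(z):
    return 1.0 / (2.0 * z**3 + 10.0)

def series(z, terms=60):
    # (z/10) sum_k (1)_k (1/3)_k / ((4/3)_k k!) (-z^3/5)^k
    # simplifies to (1/10) sum_k (-1/5)^k z^(3k+1) / (3k+1)
    return 0.1 * sum((-0.2)**k * z**(3 * k + 1) / (3 * k + 1) for k in range(terms))

def quad(z, n=20000):
    # composite trapezoid rule on [0, z]
    h = z / n
    s = 0.5 * (integrand(0.0) + integrand(z)) + sum(integrand(i * h) for i in range(1, n))
    return s * h

z = 0.5                                    # arbitrary small sample altitude
assert abs(series(z) - quad(z)) < 1e-8
```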

=R5.12: Hypergeometric Function=

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Problem
1. Is (1)p.64-10 exact?

2. Is (1)p.64-10 in the power form of (3) p.21-1?

3. Verify that F(a,b;c;x) is indeed a solution of (1) p.64-10.

Solution
1. In order for (1) p.64-10 to be exact, it must first be in the form of (2)p.16-4, with g and f defined in (3)-(4) p.16-4, as seen below.

Therefore, the first exactness condition is satisfied.

In order to satisfy the second exactness condition, the following derivatives must be found.

By substituting into the 1st relation, (1) p.16-5:

This is not true for all values of a and b, so the 1st relation is not valid.

By substituting into the 2nd relation, (2) p.16-5:

This is true, so the 2nd relation is valid.

One of the relations is not valid, therefore the second exactness condition is not satisfied.

Hence, (1) p.64-10 is not exact.

2. The following equalities must be true for (1) p.64-10 to be in power form of (3) p.21-1.

Since there are no values of $$\displaystyle \alpha, \beta , r, s $$ that make these equalities true, then (1) p.64-10 is not in power form.

3. In order to verify that F(a,b;c;x) is a solution of (1) p.64-10, we select the example of $$\displaystyle y = F(1,1;1;x)$$, since $$ a, b, c \in \mathbb R $$.

Next, the first and second derivatives of y must be found.

Substituting into (1) p.64-10:

This equation is valid, therefore, F(a,b;c;x) is a solution for (1) p.64-10.
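A numerical check of this verification, under the assumption (not confirmed by the notes reproduced here) that (1) p.64-10 is the standard Gauss hypergeometric equation $$x(1-x)y'' + [c-(a+b+1)x]y' - ab\,y = 0$$: the example $$y = F(1,1;1;x) = 1/(1-x)$$ should make the residual vanish.

```python
# Hedged check: assuming (1) p.64-10 is the Gauss hypergeometric equation
# x(1-x) y'' + [c - (a+b+1)x] y' - a b y = 0, verify that
# y = F(1,1;1;x) = 1/(1-x) satisfies it at sample points.
a = b = c = 1

def y(x):   return 1.0 / (1.0 - x)
def yp(x):  return 1.0 / (1.0 - x)**2      # y'
def ypp(x): return 2.0 / (1.0 - x)**3      # y''

def residual(x):
    return x * (1 - x) * ypp(x) + (c - (a + b + 1) * x) * yp(x) - a * b * y(x)

assert all(abs(residual(x)) < 1e-12 for x in (0.1, 0.3, 0.5, 0.7))
```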

=R*5.13 Exactness of Legendre and Hermite equations =

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given
Given

Legendre equation:

Hermite equation:

Problem
1. Verify the exactness of the Legendre ($$) and Hermite ($$) equations.

2. If Hermite equation is not exact, check whether it is in power form, and see whether it can be made exact using IFM with $$\displaystyle h(x,y)=x^m y^n $$.

3. The first few Hermite polynomials are:


 * $$\displaystyle H_0(x)=1$$


 * $$\displaystyle H_1(x)=2x$$


 * $$\displaystyle H_2(x)=4x^2-2$$

Verify that these are homogeneous solutions to the Hermite differential equation ($$).

Nomenclature

 * $$\displaystyle p =: y ' =: \frac {dy}{dx}$$


 * $$\displaystyle f_{ij} = \frac{d^{2}f}{\partial i \partial j}$$

Exactness of Legendre equation
To satisfy the first exactness condition, the Legendre equation ($$) should be of the form:


 * $$\displaystyle G = \underbrace{(1-x^2)}_{f(x,y,p)} y^{''} \underbrace{- 2x y^{\prime} + n(n+1)y}_{g(x,y,p)}$$

Hence ($$) satisfies the first exactness condition.

The second exactness condition can be checked in two ways.

Method 1

The second exactness condition is satisfied if ($$) satisfies ($$) and ($$):

Computing derivatives,

Substituting these into ($$) and ($$),

($$) $$ \rightarrow -2 + 2p \cdot 0 + p^2 \cdot 0 = -2 + p \cdot 0 - n(n+1) $$.

($$) $$ \rightarrow 0 + p \cdot 0 + 2 \cdot 0 = 0 $$.

Hence, the second exactness condition is satisfied when $$\displaystyle n=0 $$ or $$\displaystyle n=-1 $$.

Method 2

The second exactness condition is met if ($$) satisfies:

where $$\displaystyle g_i := \frac{\partial G}{\partial y ^{(i)}}$$

Computing the derivatives,

Substituting these in ($$) yields,


 * $$ n(n+1) - (-2) -2 = 0 $$
 * $$ n(n+1) = 0 $$

Again we see that the second exactness condition is satisfied when $$\displaystyle n=0 $$ or $$\displaystyle n=-1 $$.

Exactness of Hermite equation
To satisfy the first exactness condition, the Hermite equation ($$) should be of the form ($$):


 * $$\displaystyle G = \underbrace{1}_{f(x,y,p)} \cdot y^{''} \underbrace{- 2x y^{\prime} + 2ny}_{g(x,y,p)}$$

Hence ($$) satisfies the first exactness condition.

The second exactness condition can be checked in two ways.

Method 1

The second exactness condition is satisfied if ($$) satisfies ($$) and ($$):

Computing the derivatives,

Substituting these into ($$) and ($$),

($$) $$ \rightarrow 0 + 2p \cdot 0 + p^2 \cdot 0 = -2 + p \cdot 0 - 2n  $$.

($$) $$ \rightarrow 0 + p \cdot 0 + 2 \cdot 0 = 0 $$.

Hence, the second exactness condition is satisfied only when $$\displaystyle n=-1 $$. This is a necessary condition.

Method 2

The second exactness condition is met if ($$) satisfies ($$).

Computing the derivatives,

Substituting these in ($$) yields,


 * $$ 2n - (-2) + 0 = 0$$
 * $$  2(n+1) = 0 $$

The second exactness condition is satisfied only when $$\displaystyle n=-1 $$.

Power form and making the Hermite equation exact using IFM
We have seen that the Hermite equation ($$) is not exact when $$\displaystyle n\neq -1$$.

The power form of L2-ODE-VC is

Comparing ($$) with ($$), we can see that the Hermite equation is of the power form with:


 * $$\displaystyle \alpha=1;\beta=-2;\gamma=2n.$$
 * $$\displaystyle r=0;s=1;t=0.$$

Hence, we can consider an integrating factor which is in power form, $$\displaystyle h(x,y)=x^my^n$$

Replacing the 'n' term in ($$) with $$ \alpha $$ to avoid confusion, we need to find $$m,n\in \mathbb{R}$$, such that the following N2-ODE is exact:

The Hermite equation ($$) can be written as:

($$) should satisfy ($$) and ($$) to meet the second exactness condition.

Computing the derivatives,

Substituting the derivatives in ($$) and ($$),

($$) $$\displaystyle \rightarrow \displaystyle m(m-1)x^{m-2}y^n+2p\cdot mnx^{m-1}y^{n-1}+p^2n(n-1)x^my^{n-2}=-2(m+1)x^my^n+p(-2nx^{m+1}y^{n-1})-2\alpha (n+1)x^my^n+2nx^{m+1}y^{n-1}p $$

($$) $$\displaystyle \rightarrow 0+p\cdot 0+2nx^my^{n-1}=0 $$

We can see that the second exactness condition can be satisfied only when $$\displaystyle n=0$$. When $$\displaystyle n=0$$,
 * $$\displaystyle m(m-1)x^{m-2}=-2(m+1)x^m-2\alpha x^m$$.

Hence we can say
 * $$\displaystyle m(m-1)=0$$
 * $$\displaystyle m+1+\alpha =0$$

Therefore, $$\displaystyle m=1,\alpha =-2 \,$$ is a solution.

Hence, ($$) can be made exact using the integrating factor $$\displaystyle h(x,y)=x$$

Verification of homogeneous solutions of the Hermite equation
Case 1 


 * $$\displaystyle y=H_0(x)=1$$
 * $$\displaystyle y'=H_0'(x)=0$$
 * $$\displaystyle y''=H_0''(x)=0$$

Substituting in ($$), $$\displaystyle 0-2x\cdot 0+0\cdot 1=0$$

Case 2 


 * $$\displaystyle y=H_1(x)=2x$$
 * $$\displaystyle y'=H_1'(x)=2$$
 * $$\displaystyle y''=H_1''(x)=0$$

Substituting in ($$), $$\displaystyle 0-2x\cdot 2+2\cdot 2x=0$$

Case 3 


 * $$\displaystyle y=H_2(x)=4x^2-2$$
 * $$\displaystyle y'=H_2'(x)=8x$$
 * $$\displaystyle y''=H_2''(x)=8$$

Substituting in ($$), $$\displaystyle 8-2x\cdot 8x+4\cdot (4x^2-2)=0$$

Hence the given first three Hermite polynomials are homogeneous solutions of the Hermite equation.
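The three substitutions above can be bundled into a short numerical check: each $$H_n$$ should satisfy $$y'' - 2xy' + 2ny = 0$$ at arbitrary sample points, not just identically after simplification.

```python
# Spot-check: H_n satisfies y'' - 2x y' + 2n y = 0; verify H0, H1, H2 at sample x.
cases = [
    (0, lambda x: 1.0,              lambda x: 0.0,     lambda x: 0.0),  # H0
    (1, lambda x: 2.0 * x,          lambda x: 2.0,     lambda x: 0.0),  # H1
    (2, lambda x: 4.0 * x**2 - 2.0, lambda x: 8.0 * x, lambda x: 8.0),  # H2
]
for n, H, Hp, Hpp in cases:
    for x in (-1.0, 0.0, 0.5, 2.0):
        assert abs(Hpp(x) - 2 * x * Hp(x) + 2 * n * H(x)) < 1e-12
```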

=R*5.14 Expressions for X(x) =

On our honor, we did this problem on our own, without looking at the solutions in previous semesters or other online solutions.

Given
Given

Where


 * $$\displaystyle r_{1,2} = \pm K $$


 * $$\displaystyle r_{3,4} = \pm i \, K $$

Problem
Find expressions for $$ \displaystyle X(x)$$ in terms of $$ \displaystyle \cos Kx, \sin Kx, \cosh Kx, \sinh Kx$$

Solution
By definition,
 * $$ \displaystyle e^{iKx} = \cos Kx +i\sin Kx$$ and
 * $$ \displaystyle e^{Kx} = \cosh Kx + \sinh Kx$$

Hence, ($$) can be written as:


 * $$\displaystyle X(x) = C_1 (\cosh Kx \ + \sinh Kx) + C_2 (\cosh Kx \ - \sinh Kx) + C_3 (\cos Kx + i \sin Kx) + C_4 (\cos Kx - i\sin Kx) $$

where,
 * $$\displaystyle c_1 = C_1 + C_2  $$
 * $$\displaystyle c_2 = C_1 - C_2 $$
 * $$\displaystyle c_3 = C_3 + C_4  $$
 * $$\displaystyle c_4 = i(C_3 - C_4) $$

Hence, ($$) is a generic expression for X(x).
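The change of basis can be verified numerically with arbitrary sample constants (choosing $$C_3$$, $$C_4$$ as a conjugate pair so that $$X(x)$$ is real-valued):

```python
# Check (with arbitrary sample constants) that
# C1 e^{Kx} + C2 e^{-Kx} + C3 e^{iKx} + C4 e^{-iKx}
# equals c1 cosh Kx + c2 sinh Kx + c3 cos Kx + c4 sin Kx
# with c1 = C1 + C2, c2 = C1 - C2, c3 = C3 + C4, c4 = i(C3 - C4).
import cmath, math

K = 1.7                               # sample wavenumber
C1, C2 = 0.4, -0.9                    # sample real constants
C3, C4 = 0.3 + 0.2j, 0.3 - 0.2j       # conjugate pair, so X(x) is real

c1, c2 = C1 + C2, C1 - C2
c3, c4 = C3 + C4, 1j * (C3 - C4)

for x in (-0.5, 0.0, 1.2):
    exp_form = (C1 * math.exp(K * x) + C2 * math.exp(-K * x)
                + C3 * cmath.exp(1j * K * x) + C4 * cmath.exp(-1j * K * x))
    trig_form = (c1 * math.cosh(K * x) + c2 * math.sinh(K * x)
                 + c3 * math.cos(K * x) + c4 * math.sin(K * x))
    assert abs(exp_form - trig_form) < 1e-12
```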

=Contributing Members=

=References=