User:Egm4313.s12.team1.armanious/team1/R5

=Report 5=

Statement
Find the radius of convergence for:


$$\displaystyle r(x)=\sum_{k=0}^{\infty } (k + 1) k x^k $$     (1.0)
$$\displaystyle r(x)=\sum_{k=0}^{\infty }\frac{(-1)^{k}}{\gamma ^{k}} x^{2k} $$     (1.1)

And find the radius of convergence for the Taylor series of 3) sin(x) about x=0, 4) log(1+x) about x=0, and 5) log(1+x) about x=1.

Solution
1)

First, we establish what $$ d_{k}$$ is. As found in the notes (section 7-c), we consider an infinite power series of the form:
$$\displaystyle r(x)=\sum_{k=0}^{\infty }d_kx^k $$     (1.2)

The radius is then calculated using the formula below:


$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{d_{k+1}}{d_{k}} \right | \right ] ^{-1} $$     (1.3)

In the case of equation (1.0), $$ d_{k}$$ is equal to:

$$\displaystyle d_{k}= (k+1)(k) $$     (1.4)
This means, from equation (1.3):


$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{(k+2)(k+1)}{(k+1)(k)} \right | \right ] ^{-1} $$     (1.5)

This simplifies to:


$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{k+2}{k} \right | \right ] ^{-1} $$     (1.6)

This limit has the indeterminate form $$ \frac{\infty }{\infty} $$, so L'Hôpital's rule applies. The radius of convergence is then represented by:


$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{1}{1} \right | \right ] ^{-1} $$     (1.7)

This limit is simply 1, and the radius of convergence is therefore 1.
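This ratio-test computation can be spot-checked numerically; a minimal sketch in plain Python (the function name `d` is just an illustration):

```python
# Coefficients d_k = (k+1)k from equation (1.4); the ratio
# |d_{k+1}/d_k| should approach 1, giving R_c = 1/1 = 1.
def d(k):
    return (k + 1) * k

ratios = [d(k + 1) / d(k) for k in (10, 100, 1000, 10000)]
print(ratios)  # approaches 1 as k grows
```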

2)

In the case of equation (1.1), after the index change j=2k (and then relabeling j back to k), $$ d_{k}$$ is equal to:

$$\displaystyle d_{k}= \frac {(-1)^{k/2}}{\gamma ^{k/2}} $$     (1.8)

This means, from equation (1.3):


$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{(-1)^{(k+1)/2}\,\gamma^{k/2} }{\gamma ^{(k+1)/2}\,(-1)^{k/2}} \right | \right ] ^{-1} $$     (1.9)

$$ \gamma $$ is a constant in this case, and the radius of convergence simplifies to this form:

$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{(-1)^{1/2}}{\gamma ^{1/2}} \right | \right ] ^{-1} $$     (1.10)

Taking the absolute value (which removes the unit-magnitude factor $$(-1)^{1/2}$$) and inverting, the radius of convergence becomes $$ \sqrt{\left | \gamma \right |} $$.
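The result $$R_{c}=\sqrt{\left | \gamma \right |}$$ can be sanity-checked numerically for an assumed test value of $$\gamma$$, say $$\gamma = 4$$ (so $$R_{c}=2$$): inside the radius the terms decay to zero, outside they grow without bound.

```python
# Terms of sum_k (-1)^k x^(2k) / gamma^k with the assumed test value
# gamma = 4, so the radius of convergence should be sqrt(4) = 2.
def term(k, x, gamma=4.0):
    return (-1) ** k * x ** (2 * k) / gamma ** k

inside = abs(term(200, 1.9))    # |x| < 2: terms decay toward 0
outside = abs(term(200, 2.1))   # |x| > 2: terms grow without bound
print(inside, outside)
```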

3)

The Taylor series that represents sin(x) about x=0 is:

$$\displaystyle \sin x=\sum_{n=0}^{\infty}\frac{(-1)^n}{(2n+1)!}x^{2n+1} $$     (1.11)

We put this in the form found in equation (1.2) first, which will allow us to read off $$ d_{k}$$. To put (1.11) in that form, we let k=2n+1, so that k runs over the odd integers and n=(k-1)/2:

$$\displaystyle r(x)=\sum_{k=1,3,5,\ldots }\frac{(-1)^{ \frac{k-1}{2}}}{k!} x^k $$     (1.12)

This means $$ d_{k} $$ would equal:

$$\displaystyle d_{k}= \frac{(-1)^{ \frac{k-1}{2}}}{k!} $$     (1.13)

This means, from equation (1.3):


$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{ (-1)^{ \frac{k}{2}}\, k! } {(k+1)!\, (-1)^{ \frac{k-1}{2}}} \right | \right ] ^{-1} $$     (1.14)

This simplifies to:


$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{1}{k+1} \right | \right ] ^{-1} $$     (1.15)

This inverted limit makes $$ R_{c}= \infty $$. This is the best possible case, as it means the series converges for all values of x.

4)

The Taylor series that represents log(1+x) about x=0 is:

$$\displaystyle \log (1+x)=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}x^k $$     (1.16)

So $$d_k $$ would be defined as shown below:

$$\displaystyle d_{k}= \frac{(-1)^{k+1}}{k} $$     (1.17)

This means, from equation (1.3):


$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{(-1)^{k+2}\,k }{(k+1)\,(-1)^{k+1}} \right | \right ] ^{-1} $$     (1.18)

This simplifies to:


$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{(-1)\,k }{k+1} \right | \right ] ^{-1} $$     (1.19)

This limit has the indeterminate form $$ \frac{\infty }{\infty} $$, so L'Hôpital's rule applies. The radius of convergence is then represented by:


$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{-1}{1} \right | \right ] ^{-1} $$     (1.20)

This absolute, inverted limit is 1, so the radius of convergence is 1.
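As a numerical check that $$R_{c}=1$$ here, partial sums of the series converge to log(1+x) for |x| &lt; 1; a plain Python sketch:

```python
import math

# Partial sums of sum_{k>=1} (-1)^(k+1) x^k / k, the series for log(1+x);
# inside |x| < 1 they converge to math.log(1 + x).
def partial(x, n):
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n + 1))

x = 0.5
err = abs(partial(x, 60) - math.log(1 + x))
print(err)
```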

5)

The Taylor series that represents log(1+x) about x=1 is found by expanding in powers of (x-1):

$$\displaystyle \log (1+x)=\log 2+\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k\, 2^{k}}(x-1)^k $$     (1.21)

So $$d_k $$ (the coefficient of $$(x-1)^k$$; the constant term does not affect the radius) would be defined as shown below:

$$\displaystyle d_{k}= \frac{(-1)^{k+1}}{k\, 2^{k}} $$     (1.22)

This means, from equation (1.3):

$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{(-1)^{k+2}\,k\,2^{k} }{(k+1)\,2^{k+1}\,(-1)^{k+1}} \right | \right ] ^{-1} $$     (1.23)

This simplifies to:

$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{k }{2(k+1)} \right | \right ] ^{-1} $$     (1.24)

This limit has the indeterminate form $$ \frac{\infty }{\infty} $$, so L'Hôpital's rule applies. The radius of convergence is then represented by:

$$\displaystyle R_{c} = \left [ \lim_{k \rightarrow \infty } \left | \frac{1}{2} \right | \right ] ^{-1} $$     (1.25)

This absolute, inverted limit is 2, so the radius of convergence is 2 (the distance from the expansion point x=1 to the singularity of log(1+x) at x=-1).

Author
Solved and Typed By -Egm4313.s12.team1.silvestri (talk) 17:26, 24 March 2012 (UTC)

Reviewed By - --Egm4313.s12.team1.durrance (talk) 17:33, 30 March 2012 (UTC)

Statement
Determine whether the following functions are linear independent using the Wronskian.


$$\displaystyle f(x)=x^2, \quad g(x)=x^4 $$     (2.0)

$$\displaystyle f(x)=\cos(x), \quad g(x)=\sin(3x) $$     (2.1)

Using the Gramian over the interval [a,b] = [-1,1], come to the same conclusion.

Wronskian Solution (1)
The Wronskian is defined as:
$$\displaystyle W(f,g):=\det \begin{bmatrix} f & g\\f'&g'\end{bmatrix}=fg'-gf' $$     (2.2)

If the Wronskian is not identically zero on the interval, then the functions are linearly independent.

Values needed are:
$$\displaystyle f'=2x $$     (2.3)

$$\displaystyle g'=4x^3 $$     (2.4)

Substituting all of the conditions into Eq. (2.2):
$$\displaystyle W(f,g)=\det \begin{bmatrix} f & g\\f'&g'\end{bmatrix}=x^2(4x^3)-2x(x^4)=4x^5-2x^5=2x^{5}\neq 0 $$     (2.5)

Because the Wronskian does not equal zero, Eqs. (2.0) are linearly independent.
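A quick numerical spot-check of (2.5) in plain Python (function names are illustrative):

```python
# Wronskian W = f*g' - g*f' for f = x^2, g = x^4; by (2.5) this is 2x^5.
def wronskian(x):
    f, g = x ** 2, x ** 4
    fp, gp = 2 * x, 4 * x ** 3
    return f * gp - g * fp

print([wronskian(x) for x in (1.0, 2.0, -1.5)])
```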

Wronskian Solution (2)
The Wronskian is defined as:
$$\displaystyle W(f,g):=\det \begin{bmatrix} f & g\\f'&g'\end{bmatrix}=fg'-gf' $$     (2.2)

If the Wronskian is not identically zero on the interval, then the functions are linearly independent.

Values needed are:
$$\displaystyle f'=-\sin(x) $$     (2.6)

$$\displaystyle g'=3\cos(3x) $$     (2.7)

Substituting all of the conditions into Eq. (2.2):

$$\displaystyle W(f,g)=\det \begin{bmatrix} f & g\\f'&g'\end{bmatrix}=3\cos(x)\cos(3x)+\sin(x)\sin(3x)\neq 0 $$     (2.8)

Because the Wronskian is not identically zero (at x=0, for example, it equals 3), Eqs. (2.1) are linearly independent.
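Evaluating (2.8) at a sample point confirms it is nonzero; a plain Python sketch:

```python
import math

# Wronskian from (2.8): 3cos(x)cos(3x) + sin(x)sin(3x); at x = 0 it is 3.
def wronskian(x):
    return 3 * math.cos(x) * math.cos(3 * x) + math.sin(x) * math.sin(3 * x)

print(wronskian(0.0))
```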

Gramian Solution (1)
The Gramian is defined as:
$$\displaystyle \Gamma(f,g):=\det \begin{bmatrix} \langle f, f \rangle & \langle f, g \rangle \\ \langle g, f \rangle & \langle g, g \rangle \end{bmatrix} $$     (2.9)

Where the scalar product is defined as:
$$\displaystyle \langle f, g \rangle := \int_a^b f(x) g(x) \,dx $$     (2.10)

When the Gramian does not equal zero, the functions are linearly independent.

So calculating the scalar products:
$$\displaystyle \langle f, f \rangle = \int_{-1}^1 (x^2)(x^2)\,dx = 2/5 $$

$$\displaystyle \langle f, g \rangle = \int_{-1}^1 (x^2)(x^4)\,dx = 2/7 $$

$$\displaystyle \langle g, f \rangle = \int_{-1}^1 (x^4)(x^2)\,dx = 2/7 $$

$$\displaystyle \langle g, g \rangle = \int_{-1}^1 (x^4)(x^4)\,dx = 2/9 $$     (2.11)

Substituting those values into Eq. (2.9) and calculating the determinant:
$$\displaystyle \Gamma(f,g)=\det \begin{bmatrix} \langle f, f \rangle & \langle f, g \rangle \\ \langle g, f \rangle & \langle g, g \rangle \end{bmatrix}= (2/5)(2/9)-(2/7)(2/7)=16/2205\neq 0 $$     (2.12)

Because the determinant does not equal zero, Eqs. (2.0) are linearly independent.
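The inner products in (2.11) and the determinant in (2.12) can be verified with a simple midpoint-rule integration (plain Python; `inner` is an illustrative helper):

```python
# Midpoint-rule approximation of <x^p, x^q> = integral of x^(p+q) over [-1, 1].
def inner(p, q, n=200000):
    h = 2.0 / n
    s = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        s += x ** (p + q) * h
    return s

ff, fg, gg = inner(2, 2), inner(2, 4), inner(4, 4)
det = ff * gg - fg * fg
print(det)  # close to 16/2205
```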

Gramian Solution (2)
The Gramian is defined as:
$$\displaystyle \Gamma(f,g):=\det \begin{bmatrix} \langle f, f \rangle & \langle f, g \rangle \\ \langle g, f \rangle & \langle g, g \rangle \end{bmatrix} $$     (2.9)

Where the scalar product is defined as:
$$\displaystyle \langle f, g \rangle := \int_a^b f(x) g(x) \,dx $$     (2.10)

When the Gramian does not equal zero, the functions are linearly independent.

So calculating the scalar products:
$$\displaystyle \langle f, f \rangle = \int_{-1}^1 \cos^2 (x)\,dx = 1+\tfrac{\sin 2}{2} \approx 1.4546 $$

$$\displaystyle \langle f, g \rangle = \int_{-1}^1 \cos(x)\sin(3x)\,dx = 0 $$

$$\displaystyle \langle g, f \rangle = \int_{-1}^1 \sin(3x)\cos(x)\,dx = 0 $$

$$\displaystyle \langle g, g \rangle = \int_{-1}^1 \sin^2 (3x)\,dx = 1-\tfrac{\sin 6}{6} \approx 1.0466 $$     (2.13)

(The cross terms vanish because the integrand is odd over the symmetric interval; the squared terms are evaluated in radians.)

Substituting those values into Eq. (2.9) and calculating the determinant:

$$\displaystyle \Gamma(f,g)=\det \begin{bmatrix} \langle f, f \rangle & \langle f, g \rangle \\ \langle g, f \rangle & \langle g, g \rangle \end{bmatrix}= (1.4546)(1.0466)-0 \approx 1.5224\neq 0 $$     (2.14)

Because the determinant does not equal zero, Eqs. (2.1) are linearly independent.
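These integrals (evaluated in radians) can be checked with the same midpoint rule; a sketch:

```python
import math

# Midpoint-rule check of the inner products over [-1, 1] for
# f = cos(x), g = sin(3x); all trig is in radians.
def inner(f, g, n=200000):
    h = 2.0 / n
    s = 0.0
    for i in range(n):
        x = -1.0 + (i + 0.5) * h
        s += f(x) * g(x) * h
    return s

f = math.cos
g = lambda x: math.sin(3 * x)
ff, fg, gg = inner(f, f), inner(f, g), inner(g, g)
print(ff, fg, gg, ff * gg - fg * fg)
```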

Author
Solved and Typed By - Egm4313.s12.team1.durrance (talk) 18:35, 26 March 2012 (UTC)

Reviewed By ---Egm4313.s12.team1.stewart (talk) 19:03, 26 March 2012 (UTC)

Statement
From [https://elearning2.courses.ufl.edu/access/content/group/UFL-EGM4313-5641-12012/Lecture%20Notes/iea.s12.sec7c.djvu Lect. 7c Pg. 38] Verify that $$\mathbf{b_{1}},\mathbf{b_{2}}\!$$ are linearly independent using the Gramian.


$$\displaystyle \mathbf{b_{1}}=2\mathbf{e}_{1}+7\mathbf{e}_{2} $$     (3.1)

$$\displaystyle \mathbf{b_{2}}=1.5\mathbf{e}_{1}+3\mathbf{e}_{2} $$     (3.2)

Solution
The Gramian:
$$\displaystyle \Gamma(f,g):=\det \begin{bmatrix} \langle f, f \rangle & \langle f, g \rangle \\ \langle g, f \rangle & \langle g, g \rangle \end{bmatrix} $$     (3.3)

But for the vectors given the Gramian looks like this:


$$\displaystyle \Gamma(\mathbf{b_{1}},\mathbf{b_{2}}):=\det \begin{bmatrix} \langle\mathbf{b_{1}}, \mathbf{b_{1}} \rangle & \langle \mathbf{b_{1}}, \mathbf{b_{2}} \rangle \\ \langle \mathbf{b_{2}}, \mathbf{b_{1}} \rangle & \langle \mathbf{b_{2}}, \mathbf{b_{2}} \rangle \end{bmatrix} $$     (3.4)

And:


$$\displaystyle \langle \mathbf{b_{i}}, \mathbf{b_{j}} \rangle \equiv \mathbf{b_{i}} \cdot \mathbf{b_{j}} $$     (3.5)

And for the scalar dot product of the orthonormal basis vectors:

$$\displaystyle \langle \mathbf e_1, \mathbf e_2 \rangle = 0, \qquad \langle \mathbf e_1, \mathbf e_1 \rangle = \langle \mathbf e_2, \mathbf e_2 \rangle = 1 $$     (3.6)

Therefore:
$$\displaystyle \langle \mathbf{b_{1}}, \mathbf{b_{1}} \rangle = (2\mathbf{e}_{1}+7\mathbf{e}_{2}) \cdot (2\mathbf{e}_{1}+7\mathbf{e}_{2})=(2)(2)+(7)(7)=4+49=53 $$     (3.7)


$$\displaystyle \langle \mathbf{b_{1}}, \mathbf{b_{2}} \rangle = (2\mathbf{e}_{1}+7\mathbf{e}_{2}) \cdot (1.5\mathbf{e}_{1}+3\mathbf{e}_{2}) = (2)(1.5)+(7)(3)=3+21=24 $$     (3.8)


$$\displaystyle \langle \mathbf{b_{2}}, \mathbf{b_{1}} \rangle = (1.5\mathbf{e}_{1}+3\mathbf{e}_{2}) \cdot (2\mathbf{e}_{1}+7\mathbf{e}_{2})= (1.5)(2)+(3)(7)=3+21=24 $$     (3.9)


$$\displaystyle \langle \mathbf{b_{2}}, \mathbf{b_{2}} \rangle = (1.5\mathbf{e}_{1}+3\mathbf{e}_{2}) \cdot (1.5\mathbf{e}_{1}+3\mathbf{e}_{2})= (1.5)(1.5)+(3)(3)=2.25+9=11.25 $$     (3.10)

The determinant to solve for the Gramian is:


$$\displaystyle \Gamma(\mathbf{b_{1}},\mathbf{b_{2}})=\det \begin{bmatrix} \langle\mathbf{b_{1}}, \mathbf{b_{1}} \rangle & \langle \mathbf{b_{1}}, \mathbf{b_{2}} \rangle \\ \langle \mathbf{b_{2}}, \mathbf{b_{1}} \rangle & \langle \mathbf{b_{2}}, \mathbf{b_{2}} \rangle \end{bmatrix}=\langle\mathbf{b_{1}}, \mathbf{b_{1}} \rangle \langle \mathbf{b_{2}}, \mathbf{b_{2}} \rangle -\langle \mathbf{b_{1}}, \mathbf{b_{2}} \rangle \langle \mathbf{b_{2}}, \mathbf{b_{1}} \rangle $$     (3.11)

Plugging in values:
$$\displaystyle \Gamma(\mathbf{b_{1}},\mathbf{b_{2}})=(53)(11.25)-(24)(24) $$     (3.12)


$$\displaystyle \Gamma(\mathbf{b_{1}},\mathbf{b_{2}})=596.25-576 $$     (3.13)


$$\displaystyle \Gamma(\mathbf{b_{1}},\mathbf{b_{2}})=20.25 $$     (3.14)

Qualifications for a linearly independent system of vectors:
$$\displaystyle \mathbf{\Gamma } \neq 0 \Rightarrow \mathbf{\Gamma^{-1} } \text{ exists} \Rightarrow \mathbf{c}=\mathbf{\Gamma } ^{-1}\mathbf{d} $$     (3.15)

So the vectors $$\mathbf{b_{1}},\mathbf{b_{2}}\!$$ are linearly independent because:
$$\displaystyle \mathbf{\Gamma }= 20.25 \neq 0 $$     (3.16)
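The arithmetic in (3.7)-(3.14) is easy to confirm in plain Python, writing the vectors in e1, e2 components:

```python
# b1 = 2e1 + 7e2 and b2 = 1.5e1 + 3e2 as component tuples; the Gram
# determinant <b1,b1><b2,b2> - <b1,b2><b2,b1> should equal 20.25.
def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

b1, b2 = (2.0, 7.0), (1.5, 3.0)
gram = dot(b1, b1) * dot(b2, b2) - dot(b1, b2) * dot(b2, b1)
print(dot(b1, b1), dot(b2, b2), dot(b1, b2), gram)
```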

Author
Solved and Typed By - Egm4313.s12.team1.stewart (talk) 18:20, 25 March 2012 (UTC)

Reviewed By - Egm4313.s12.team1.rosenberg (talk) 05:20, 28 March 2012 (UTC)

Statement
Show that $$ y_p(x) = \sum_{i=1}^{n}y_{p,i}(x)\!$$ is indeed the overall particular solution of the L2-ODE-VC $$ y''_{p,i} + p(x)y'_{p,i} + q(x)y_{p,i} = r_i(x)\!$$ with the excitation $$ r(x) = r_1(x) + r_2(x) + ... + r_n(x) = \sum_{i=1}^{n}r_i(x)\!$$. Discuss the choice of $$ y_p(x)\!$$, for example for $$ r(x) = K\cos(\omega x)\!$$. Why would you need to have both $$ \cos(\omega x)\!$$ and $$ \sin(\omega x)\!$$ in $$ y_p(x)\!$$?

Solution
The following represent particular solutions and their derivatives, equated into a summation:
$$\displaystyle y_p(x) = y_{p,1}(x) + y_{p,2}(x) + ... + y_{p,n}(x) \rightarrow y_p(x) = \sum_{i=1}^{n}y_{p,i}(x) $$     (4.0)


$$\displaystyle y_p'(x) = y_{p,1}'(x) + y_{p,2}'(x) + ... + y_{p,n}'(x) \rightarrow y_p'(x) = \sum_{i=1}^{n}y_{p,i}'(x) $$     (4.1)


$$\displaystyle y_p''(x) = y_{p,1}''(x) + y_{p,2}''(x) + ... + y_{p,n}''(x) \rightarrow y_p''(x) = \sum_{i=1}^{n}y_{p,i}''(x) $$     (4.2)

Because the given ODE is linear, the superposition principle applies. Each $$ y_{p,i}\! $$ is a particular solution for the corresponding $$ r_i\! $$, and when there are multiple excitations, the solution for the sum of the excitations is the sum of the particular solutions. Using these particular solutions, we can show that they sum to a solution of the following L2-ODE-VC (4.3) with the given excitation:


$$\displaystyle y_{p,i}'' + p(x)y_{p,i}' + q(x)y_{p,i} = r_i(x) $$     (4.3)


$$\displaystyle \sum_{i=1}^{n}y_{p,i}''(x) + p(x)\sum_{i=1}^{n}y_{p,i}'(x) + q(x)\sum_{i=1}^{n}y_{p,i}(x) = \sum_{i=1}^{n}r_i(x) $$     (4.4)

For $$ r(x) = K\cos(\omega x)\!$$, you need both $$ \cos(\omega x)\!$$ and $$ \sin(\omega x)\!$$ in $$ y_p(x)\!$$ because solving for the particular solution requires taking derivatives of $$ y_p\!$$, and differentiating $$ \cos(\omega x)\!$$ produces $$ \sin(\omega x)\!$$ terms (and vice versa). Unless both terms are present in $$ y_p(x)\!$$, there is no coefficient available to cancel the $$ \sin(\omega x)\!$$ terms generated by the derivatives.
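As a concrete numerical illustration of this superposition principle, consider the hypothetical example $$ y'' - y = e^{2x} + x $$ (not from the notes): $$ y_{p,1} = e^{2x}/3 $$ solves $$ y''-y=e^{2x} $$, $$ y_{p,2} = -x $$ solves $$ y''-y=x $$, and their sum solves the combined equation:

```python
import math

# Hypothetical check: y_p = y_p1 + y_p2 = e^(2x)/3 - x should satisfy
# y'' - y = e^(2x) + x, so the residual below should vanish.
def residual(x):
    yp = math.exp(2 * x) / 3 - x
    ypp = 4 * math.exp(2 * x) / 3  # exact second derivative of y_p
    return ypp - yp - (math.exp(2 * x) + x)

print(max(abs(residual(x)) for x in (-1.0, 0.0, 0.5, 2.0)))
```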

Author
Solved and Typed By - --Egm4313.s12.team1.wyattling (talk) 19:17, 26 March 2012 (UTC)

Reviewed By - Egm4313.s12.team1.armanious (talk) 05:25, 30 March 2012 (UTC)

Statement
1. Show that $$\cos 7x$$ and $$\sin 7x$$ are linearly independent using the Wronskian and the Gramian (1 period). 2. Find 2 equations for the two unknowns M, N, and solve for M, N. 3. Find the overall solution $$y(x)$$ that corresponds to the initial conditions (3b) p.3-7: $$y(0)=1, y'(0)=0$$. Plot the solution over 3 periods.

Solution
Part 1a

When using the Wronskian, if the result is not identically zero, then the two functions are linearly independent of each other. The Wronskian can be defined as:

$$\displaystyle W(f,g):=\det\begin{bmatrix} f &g \\ f'&g' \end{bmatrix}=fg'-gf' $$     (5.1)

Let's set $$f = \cos (7x)$$ and $$g = \sin (7x)$$ so that we can find $$f', g'$$:

$$\displaystyle f' = -7\sin (7x), \quad g' = 7\cos (7x) $$     (5.2)

Plugging these values into the Wronskian equation yields:

$$\displaystyle W(f,g)=\det\begin{bmatrix} \cos (7x) &\sin (7x) \\ -7\sin (7x)&7\cos (7x) \end{bmatrix}=7\cos ^2 (7x) + 7\sin ^2 (7x) $$     (5.3)

$$\displaystyle 7\cos ^2 (7x) + 7\sin ^2 (7x) = 7 \neq 0 $$     (5.4)

Thus f and g are in fact linearly independent of each other.

Part 1b

Now we solve it using the Gramian, which can be defined as:

$$\displaystyle \Gamma (f,g):=\det\begin{bmatrix} \langle f,f \rangle &\langle f,g \rangle \\ \langle g,f \rangle &\langle g,g \rangle \end{bmatrix} $$     (5.5)

As with the Wronskian, f and g are linearly independent of each other when $$\Gamma (f,g)\neq 0 $$. Integrating over one period implies that our bounds will be $$(0, \frac{2\pi }{7})$$:

$$\displaystyle \langle f,f \rangle =\int_{0}^{\frac{2\pi }{7}}\cos ^{2}(7x)dx $$     (5.6)

Setting $$u=7x$$ gives us $$du=7dx$$, which also changes our limits of integration to $$(0,2\pi )$$, giving us:

$$\displaystyle \langle f,f \rangle =\frac{1}{7}\int_{0}^{2\pi }\cos ^{2}(u)du $$     (5.7)

Integrating gives us:

$$\displaystyle \frac{1}{7}\left [\frac{u}{2} +\frac{1}{4} \sin 2u \right ] _{0}^{2\pi } $$     (5.8)

Which equals:

$$\displaystyle \frac{\pi }{7} $$     (5.9)

$$\displaystyle \langle g,g \rangle =\int_{0}^{\frac{2\pi }{7}}\sin ^{2}(7x)dx $$     (5.10)

Setting $$u=7x$$ gives us $$du=7dx$$, which also changes our limits of integration to $$(0,2\pi )$$, giving us:

$$\displaystyle \langle g,g \rangle =\frac{1}{7}\int_{0}^{2\pi }\sin ^{2}(u)du $$     (5.11)

Integrating gives us:

$$\displaystyle \frac{1}{7}\left [\frac{u}{2} -\frac{1}{4} \sin 2u \right ] _{0}^{2\pi } $$     (5.12)

Which equals:

$$\displaystyle \frac{\pi }{7} $$     (5.13)

$$\displaystyle \langle f,g \rangle =\langle g,f \rangle =\int_{0}^{\frac{2\pi }{7}}\cos (7x)\sin (7x)dx $$     (5.14)

Because cos and sin are orthogonal to each other over a full period, we can say without carrying out the integration that $$\langle f,g \rangle =\langle g,f \rangle =0$$. Plugging these values into our Gramian equation gives us:

$$\displaystyle \Gamma (f,g)=\det\begin{bmatrix} \frac{\pi }{7}&0 \\ 0 &\frac{\pi }{7} \end{bmatrix} =\frac {\pi ^2}{49} \neq 0 $$     (5.15)

Thus, we again see that f and g are linearly independent of each other.

Part 2

From the notes, we are given the following information:

$$\displaystyle y''-3y'-10y=3\cos (7x) $$     (5.16)

$$\displaystyle y_{p}(x)=M\cos (7x)+N\sin (7x) $$     (5.17)

$$\displaystyle y'_{p}(x)=-7M\sin (7x)+7N\cos (7x) $$     (5.18)

$$\displaystyle y''_{p}(x)=-49M\cos (7x)-49N\sin (7x) $$     (5.19)

Plugging (5.17)-(5.19) back into our original equation (5.16) yields:

$$\displaystyle -49M\cos (7x)-49N\sin (7x)+21M\sin (7x)-21N\cos (7x)-10M\cos (7x)-10N\sin (7x)=3\cos (7x) $$     (5.20)

Collecting like terms leaves us with:

$$\displaystyle (-59M-21N)\cos (7x)+(21M-59N)\sin (7x)=3\cos (7x) $$     (5.21)

Now we can equate the coefficients of $$\cos (7x)$$ and $$\sin (7x)$$ on both sides to get two coupled equations for M and N:

$$\displaystyle -59M-21N = 3 $$     (5.22)

$$\displaystyle 21M-59N = 0 $$     (5.23)

Solving this coupled system (its determinant is $$(-59)(-59)-(-21)(21)=3922$$) gives:

$$\displaystyle M=\frac{-177}{3922}\approx -0.0451, \qquad N=\frac{-63}{3922}\approx -0.0161 $$
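The values of M and N can be verified by substituting $$y_{p}$$ back into the left-hand side of (5.16); the residual against $$3\cos (7x)$$ should vanish (plain Python sketch):

```python
import math

# y_p = M cos(7x) + N sin(7x) with the solved coefficients; plugging into
# y'' - 3y' - 10y should reproduce the excitation 3 cos(7x).
M, N = -177 / 3922, -63 / 3922

def residual(x):
    yp = M * math.cos(7 * x) + N * math.sin(7 * x)
    dyp = -7 * M * math.sin(7 * x) + 7 * N * math.cos(7 * x)
    ddyp = -49 * M * math.cos(7 * x) - 49 * N * math.sin(7 * x)
    return ddyp - 3 * dyp - 10 * yp - 3 * math.cos(7 * x)

print(max(abs(residual(x)) for x in (0.0, 0.3, 1.0, 2.5)))
```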

Part 3

The overall solution to the equation is expressed as $$ y(x)=y_{p}(x)+y_{h}(x)$$. We must now find the homogeneous solution. Writing our given equation in homogeneous form gives us:

$$\displaystyle y''-3y'-10y=0 $$     (5.24)

Rewriting in characteristic form:

$$\displaystyle \lambda ^2-3\lambda -10 = 0 $$     (5.25)

We can solve for our roots using simple factoring:

$$\displaystyle (\lambda -5)(\lambda +2) = 0 $$     (5.26)

Thus:

$$\displaystyle \lambda_{1,2}=(5,-2) $$     (5.27)

Yielding:

$$\displaystyle y_{h}(x)=c_{1} e^{5x}+c_{2} e^{-2x} $$     (5.28)

Now using the given initial conditions, we can solve for $$c_{1},c_{2}$$. The initial conditions apply to the overall solution $$ y(x)=y_{h}(x)+y_{p}(x)$$, so the particular solution must be included:

$$\displaystyle y(x)=c_{1} e^{5x}+c_{2} e^{-2x}-\frac{177}{3922}\cos (7x)-\frac{63}{3922}\sin (7x) $$     (5.29)

$$\displaystyle y'(x)=5c_{1}e^{5x}-2c_{2}e^{-2x}+\frac{1239}{3922}\sin (7x)-\frac{441}{3922}\cos (7x) $$     (5.30)

Plugging in the initial conditions $$y(0)=1$$ and $$y'(0)=0$$ we have:

$$\displaystyle c_{1}+c_{2}-\frac{177}{3922}=1, \qquad 5c_{1}-2c_{2}-\frac{441}{3922}=0 $$     (5.31)

Solving for each gives us:

$$\displaystyle c_{1}=\frac{163}{518}\approx 0.3147, \qquad c_{2}=\frac{271}{371}\approx 0.7305 $$     (5.32)

So our final solution is:

$$\displaystyle y(x)=\frac{163}{518} e^{5x}+\frac{271}{371} e^{-2x} -\frac{177}{3922} \cos (7x) - \frac{63}{3922} \sin (7x) $$     (5.33)

To plot over 3 periods of the excitation (period $$\frac{2\pi }{7}$$), we plot from $$x=0$$ to $$x=\frac{6\pi }{7}$$.

Author
Solved and Typed By - Egm4313.s12.team1.rosenberg (talk) 18:45, 30 March 2012 (UTC)

Reviewed By -Egm4313.s12.team1.silvestri (talk) 18:44, 30 March 2012 (UTC)

Statement
Consider the following L2-ODE-CC; see p.6-6:
$$\displaystyle{y}''+4{y}'+13{y}=2e^{-2x}\cos(3x)$$     (6.0)

Homogeneous solution:

$$\displaystyle{y}_{h}(x)=e^{-2x}[A\cos3x+B\sin3x]$$     (6.1)

Particular solution:

$$\displaystyle{y}_{p}(x)=xe^{-2x}[M\cos3x+N\sin3x]$$     (6.2)

Complete the solution for this problem.

Find the overall solution $$y(x)$$ that corresponds to the initial condition (3b) p.3-7:

$$\displaystyle y(0)=1,y'(0)=0$$     (6.3)

Solution
Start by finding $$y_{p}'$$ and $$y_{p}''$$.
$$\displaystyle {y}_{p}'={{e}^{-2x}}[(-3Mx-2Nx+N)\sin 3x+(-2Mx+3Nx+M)\cos 3x]$$     (6.4)
$$\displaystyle {y}_{p}''={{e}^{-2x}}[(12Mx-6M-5Nx-4N)\sin 3x+(-5Mx-4M-12Nx+6N)\cos 3x]$$     (6.5)

Substitute $$y_p$$ and its derivatives into (6.0) to find $$M$$ and $$N$$. All of the $$x$$-terms cancel, as do the $$M$$-terms in the cosine coefficient and the $$N$$-terms in the sine coefficient:

$$\displaystyle {y}_{p}''+4{y}_{p}'+13{y}_{p}={{e}^{-2x}}[(-6M)\sin 3x+(6N)\cos 3x]$$     (6.6)
Setting this equal to the excitation from (6.0):

$$\displaystyle -6M{{e}^{-2x}}\sin 3x+6N{{e}^{-2x}}\cos 3x=2{{e}^{-2x}}\cos 3x$$     (6.7)

From (6.7), matching coefficients gives:

$$\displaystyle -6M=0, \qquad 6N=2$$     (6.8)

Solving for $$M$$ and $$N$$:
$$\displaystyle M=0, \qquad N=\frac{1}{3}$$     (6.9)

Which gives us the particular solution:

$$\displaystyle {y}_{p}=\frac{1}{3}x{{e}^{-2x}}\sin 3x$$     (6.10)

For the general solution,
$$\displaystyle y={{y}_{h}}+{{y}_{p}}$$     (6.11)

$$\displaystyle y=e^{-2x}[A\cos3x+B\sin3x]+\frac{1}{3}x{{e}^{-2x}}\sin 3x$$     (6.12)
$$\displaystyle y=e^{-2x}[A\cos3x+(B+\frac{1}{3}x)\sin3x]$$     (6.13)

To solve for $$A$$ and $$B$$, we use the initial conditions from (6.3):

$$\displaystyle y(0)=e^{-2(0)}[A\cos3(0)+(B+\frac{1}{3}(0))\sin3(0)]=1$$     (6.14)

Which simplifies to:

$$\displaystyle A=1$$     (6.15)

For the second initial condition from (6.3):
 * {| style="width:100%" border="0"

$$\displaystyle y'={{e}^{-2x}}[(-3A-2B+\frac{1}{3}-\frac{2}{3}x)\sin 3x+(-2A+3B+x)\cos 3x]$$     (6.16)

$$\displaystyle y'(0)={{e}^{-2(0)}}[(-3A-2B+\frac{1}{3}-\frac{2}{3}(0))\sin 3(0)+(-2A+3B+(0))\cos 3(0)]=0$$     (6.17)

With $$A=1$$ from (6.15), this reduces to:

$$\displaystyle 3B-2=0$$     (6.18)

$$\displaystyle B=\frac{2}{3}$$     (6.19)

We can now write $$y$$ with all coefficients known.

$$\displaystyle y={{e}^{-2x}}[(1)\cos 3x+(\frac{2}{3}+\frac{1}{3}x)\sin 3x]$$     (6.20)

Final Equation:

$$\displaystyle y={{e}^{-2x}}[\cos 3x+(\frac{2}{3}+\frac{1}{3}x)\sin 3x]$$     (6.21)
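As a numerical sanity check of (6.21): the homogeneous solution in (6.12) implies characteristic roots $$-2\pm 3i$$, and the right-hand side of (6.7) is $$2e^{-2x}\cos 3x$$, so the governing equation is taken here to be $$y''+4y'+13y=2e^{-2x}\cos 3x$$ with $$y(0)=1,\ y'(0)=0$$ (an assumption, since the original statement of the ODE is not repeated in this section). A minimal Python sketch checks the initial conditions and the ODE residual by finite differences:

```python
import math

def y(x):
    # Final solution (6.21)
    return math.exp(-2*x) * (math.cos(3*x) + (2/3 + x/3) * math.sin(3*x))

# Assumed ODE (from the roots -2 +/- 3i and the excitation in (6.7)):
# y'' + 4y' + 13y = 2 e^{-2x} cos 3x, with y(0)=1, y'(0)=0.
def residual(x, h=1e-5):
    yp  = (y(x + h) - y(x - h)) / (2 * h)          # central difference y'
    ypp = (y(x + h) - 2*y(x) + y(x - h)) / h**2    # central difference y''
    return ypp + 4*yp + 13*y(x) - 2*math.exp(-2*x)*math.cos(3*x)

print(abs(y(0) - 1) < 1e-12)                       # True: y(0) = 1
print(abs((y(1e-6) - y(-1e-6)) / 2e-6) < 1e-5)     # True: y'(0) = 0
print(all(abs(residual(x)) < 1e-4 for x in (0.0, 0.5, 1.0, 2.0)))  # True
```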



Author
Solved and Typed By - Egm4313.s12.team1.essenwein (talk) 00:28, 20 March 2012 (UTC)

Reviewed By - --Egm4313.s12.team1.stewart (talk) 18:29, 25 March 2012 (UTC)

Statement
'''[https://elearning2.courses.ufl.edu/access/content/group/UFL-EGM4313-5641-12012/Lecture%20Notes/iea.s12.sec8b.djvu See R5.7 Lect. 8b pg. 11]:'''


$$\mathbf{v}=4\mathbf e_1+2\mathbf e_2=c_1\mathbf b_1+c_2\mathbf b_2$$

The oblique basis vectors b1, b2 are:

$$\mathbf b_1 =2\mathbf e_1 + 7\mathbf e_2$$

$$\mathbf b_2 =1.5\mathbf e_1 + 3\mathbf e_2$$

1. Find the components c1, c2 using the Gramian matrix.

2. Verify the result found above.

Solution
To find the components of $$\mathbf v$$ in the oblique basis, the Gram matrix must be used. The Gram matrix is defined as such:

$$  \displaystyle \boldsymbol \Gamma (\mathbf b_1, \mathbf b_2) := \begin{bmatrix} \langle \mathbf b_1, \mathbf b_1 \rangle & \langle \mathbf b_1, \mathbf b_2 \rangle \\ \langle \mathbf b_2, \mathbf b_1 \rangle & \langle \mathbf b_2, \mathbf b_2 \rangle \end{bmatrix} $$     (7.0)

This matrix requires several scalar products to be found. One important feature of scalar products will be presented here and implicitly used throughout these calculations:

$$  \displaystyle \langle \mathbf e_1, \mathbf e_2 \rangle = 0 $$     (7.1)

Equation 7.1 states that the scalar product of two perpendicular vectors is zero. To find the Gram matrix, three scalar products must be found:

$$  \displaystyle \langle \mathbf b_1, \mathbf b_1 \rangle = (2)(2)+(7)(7)=53 $$     (7.2)

$$  \displaystyle \langle \mathbf b_1, \mathbf b_2 \rangle =\langle \mathbf b_2, \mathbf b_1 \rangle =(2)(1.5)+(7)(3)=24 $$     (7.3)

$$  \displaystyle \langle \mathbf b_2, \mathbf b_2 \rangle = (1.5)(1.5)+(3)(3)=11.25 $$     (7.4)

Using the above values, the Gram matrix is found to be:

$$  \displaystyle \boldsymbol \Gamma (\mathbf b_1, \mathbf b_2) = \begin{bmatrix} 53 & 24 \\ 24 & 11.25 \end{bmatrix} $$     (7.5)

To find the components, the Gram matrix must be used to solve the following equation:

$$  \displaystyle \begin{bmatrix} \langle \mathbf b_1, \mathbf b_1 \rangle & \langle \mathbf b_1, \mathbf b_2 \rangle \\ \langle \mathbf b_2, \mathbf b_1 \rangle & \langle \mathbf b_2, \mathbf b_2 \rangle \end{bmatrix}\begin{Bmatrix}c_1\\c_2 \end{Bmatrix}=\begin{Bmatrix}\langle \mathbf b_1, \mathbf v \rangle\\\langle \mathbf b_2, \mathbf v \rangle \end{Bmatrix} $$     (7.6)

The right-hand side of the equation can be found using:

$$  \displaystyle \langle \mathbf b_1, \mathbf v \rangle =(2)(4)+(7)(2)=22 $$     (7.7)

$$  \displaystyle \langle \mathbf b_2, \mathbf v \rangle =(1.5)(4)+(3)(2)=12 $$     (7.8)

Using the known values, (7.6) becomes:

$$  \displaystyle \begin{bmatrix} 53 & 24 \\ 24 & 11.25 \end{bmatrix}\begin{Bmatrix}c_1\\c_2 \end{Bmatrix}=\begin{Bmatrix}22\\12 \end{Bmatrix} $$     (7.9)

To find the components, the Gramian (the determinant of the Gram matrix) and the inverse of the Gram matrix must be found.

$$  \displaystyle \Gamma = \begin{vmatrix} 53 & 24 \\ 24 & 11.25 \end{vmatrix}=(53)(11.25)-(24)(24)=20.25 $$     (7.10)

$$  \displaystyle \boldsymbol \Gamma ^{-1} = \frac{1}{20.25}\begin{bmatrix} 11.25 & -24 \\ -24 & 53 \end{bmatrix}=\begin{bmatrix} \frac{5}{9} & -\frac{32}{27} \\ -\frac{32}{27} & \frac{212}{81} \end{bmatrix} $$     (7.11)

This can be used in the matrix equation to solve for the components:

$$  \displaystyle \begin{Bmatrix}c_1\\c_2 \end{Bmatrix}= \begin{bmatrix} \frac{5}{9} & -\frac{32}{27} \\ -\frac{32}{27} & \frac{212}{81} \end{bmatrix}\begin{Bmatrix}22\\12 \end{Bmatrix}=\begin{Bmatrix}-2\\ \frac{16}{3} \end{Bmatrix} $$     (7.12)

Therefore the vector v with respect to the oblique basis is:

$$  \displaystyle \mathbf v = -2 \mathbf b_1 + \frac{16}{3} \mathbf b_2 $$     (7.13)

As a check, substitute the definition of each basis vector in terms of e1 and e2:

$$  \displaystyle 4\mathbf e_1 + 2\mathbf e_2 = -2 (2\mathbf e_1 + 7\mathbf e_2) + \frac{16}{3} (1.5\mathbf e_1 + 3\mathbf e_2)=-4\mathbf e_1 -14\mathbf e_2 + 8\mathbf e_1 + 16\mathbf e_2 $$     (7.14)

Which reduces to:

$$  \displaystyle 4\mathbf e_1 + 2\mathbf e_2 = 4\mathbf e_1 + 2\mathbf e_2 $$     (7.15)
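The computation above can be reproduced in a few lines. This is a minimal Python sketch (plain tuples and 2x2 Cramer's rule, no particular library assumed):

```python
# Oblique-basis decomposition via the Gram matrix, following (7.5)-(7.12)
b1 = (2.0, 7.0)
b2 = (1.5, 3.0)
v  = (4.0, 2.0)

def dot(u, w):
    # Scalar product in the orthonormal basis e1, e2 (uses (7.1) implicitly)
    return u[0]*w[0] + u[1]*w[1]

G   = [[dot(b1, b1), dot(b1, b2)],
       [dot(b2, b1), dot(b2, b2)]]      # Gram matrix (7.5)
rhs = [dot(b1, v), dot(b2, v)]          # right-hand side (7.7)-(7.8)

det = G[0][0]*G[1][1] - G[0][1]*G[1][0]           # Gramian (7.10)
c1  = (rhs[0]*G[1][1] - G[0][1]*rhs[1]) / det     # Cramer's rule
c2  = (G[0][0]*rhs[1] - rhs[0]*G[1][0]) / det

print(det)       # 20.25
print(c1, c2)    # -2.0 5.333333333333333  (= 16/3)
# Check: c1*b1 + c2*b2 should recover v = (4, 2)
print(c1*b1[0] + c2*b2[0], c1*b1[1] + c2*b2[1])
```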

Author
Solved and Typed By - Egm4313.s12.team1.armanious (talk) 01:40, 25 March 2012 (UTC)

Reviewed By - --Egm4313.s12.team1.stewart (talk) 18:31, 25 March 2012 (UTC)

Statement
'''[https://elearning2.courses.ufl.edu/access/content/group/UFL-EGM4313-5641-12012/Lecture%20Notes/iea.s12.sec8b.djvu see R5.8 Lect. 8b pg. 16]:''' Find the integral: $$\int x^n \log(1+x) dx\!$$

Solution

$$  \displaystyle \int x^n \log(1+x) dx $$     (8.0)

To find this integral, integration by parts must be used. The formula for integration by parts is:

$$  \displaystyle \int udv=uv-\int vdu $$     (8.1)

Let:

$$  \displaystyle u = \log(1+x) $$     (8.2)

$$  \displaystyle du=\frac{dx}{1+x} $$     (8.3)

$$  \displaystyle dv=x^ndx $$     (8.4)

$$  \displaystyle v=\frac{x^{n+1}}{n+1} $$     (8.5)

Using these expressions in the formula yields:
$$  \displaystyle \int x^n \log(1+x)dx=\frac{x^{n+1}}{n+1}\log(1+x)-\frac{1}{n+1}\int \frac{x^{n+1}}{1+x}dx + C $$     (8.6)

Now, the integral $$\int \frac{x^{n+1}}{1+x} dx\!$$ must be found. To start, the fraction must be expanded using long division:

$$  \displaystyle \frac{x^{n+1}}{1+x}=x^n-x^{n-1}+x^{n-2}-x^{n-3}+...+\frac{(-1)^{n+1}}{x+1}=\frac{(-1)^{n+1}}{x+1}+\sum_{k=0}^{n}(-1)^kx^{n-k} $$     (8.7)

This expression can now be easily integrated to yield the following:

$$  \displaystyle \int \frac{x^{n+1}}{1+x}dx=\int\frac{(-1)^{n+1}}{x+1}dx+\int\sum_{k=0}^{n}(-1)^kx^{n-k}dx $$     (8.8)

$$  \displaystyle \int\frac{(-1)^{n+1}}{x+1}dx=(-1)^{n+1}\log(1+x) $$     (8.9)

$$  \displaystyle \int\sum_{k=0}^{n}(-1)^kx^{n-k}dx=\sum_{k=0}^{n}\int(-1)^kx^{n-k}dx= \sum_{k=0}^{n}\frac{(-1)^k}{n-k+1}x^{n-k+1} $$     (8.10)

Therefore:

$$  \displaystyle \int \frac{x^{n+1}}{1+x} dx= (-1)^{n+1}\log(1+x)+\sum_{k=0}^{n}\frac{(-1)^k}{n-k+1}x^{n-k+1} $$     (8.11)

Substituting into the original equation:

$$  \displaystyle \int x^n \log(1+x)dx=\frac{x^{n+1}}{n+1}\log(1+x)-\frac{1}{n+1}((-1)^{n+1}\log(1+x)+\sum_{k=0}^{n}\frac{(-1)^k}{n-k+1}x^{n-k+1}) + C $$     (8.12)

Simplifying this yields:

$$  \displaystyle \int x^n \log(1+x)dx=\frac{1}{n+1}[(x^{n+1}-(-1)^{n+1})\log(1+x)-\sum_{k=0}^{n}\frac{(-1)^k}{n-k+1}x^{n-k+1}]+C $$     (8.13)

To illustrate this, two test cases with n=0 and n=1 will be used. For n=0:
$$  \displaystyle \int x^0 \log(1+x)dx=\frac{1}{0+1}[(x^{0+1}-(-1)^{0+1})\log(1+x)-\frac{(-1)^0}{0-0+1}x^{0-0+1}]+C $$     (8.14)

This simplifies to:

$$  \displaystyle \int \log(1+x)dx=(x+1)\log(1+x)-x+C $$     (8.15)

For n=1:

$$  \displaystyle \int x^1 \log(1+x)dx=\frac{1}{1+1}[(x^{1+1}-(-1)^{1+1})\log(1+x)-\frac{(-1)^0}{1-0+1}x^{1-0+1}-\frac{(-1)^1}{1-1+1}x^{1-1+1}]+C $$     (8.16)

This simplifies to:

$$  \displaystyle \int x \log(1+x)dx=\frac{1}{2}[(x^{2}-1)\log(1+x)-\frac{1}{2}x^{2}+x]+C $$     (8.17)

In fact, this formula can be further generalized for any $$\int x^n \log(r+x)dx \!$$ where r is any real number. The most notable step that changes is the long division expansion. Each term in the expansion increases by a factor of $$r^{k} \!$$. The result is:
$$  \displaystyle \frac{x^{n+1}}{r+x}=x^n-rx^{n-1}+r^2x^{n-2}-r^3x^{n-3}+...+\frac{(-r)^{n+1}}{x+r}=\frac{(-r)^{n+1}}{x+r}+\sum_{k=0}^{n}(-r)^kx^{n-k} $$     (8.18)

It is important to note that this particular step will fail when r=0 because $$0^{0} \!$$ is undefined. A special case with r=0 will also be shown for full generality. The rest of the process is identical to that shown above, with every $$\log (1+x) \!$$ term replaced with $$\log (r+x) \!$$. The final result of the integration yields:

$$  \displaystyle \int x^n \log(r+x)dx=\frac{1}{n+1}[(x^{n+1}-(-r)^{n+1})\log(r+x)-\sum_{k=0}^{n}\frac{(-r)^k}{n-k+1}x^{n-k+1}]+C; \quad r\neq 0 $$     (8.19)

When r=0, the integral is of the form $$\int x^n \log(x)dx \!$$, which can be integrated using integration by parts.
$$  \displaystyle u = \log(x) $$     (8.20)

$$  \displaystyle du=\frac{dx}{x} $$     (8.21)

$$  \displaystyle dv=x^ndx $$     (8.22)

$$  \displaystyle v=\frac{x^{n+1}}{n+1} $$     (8.23)

Using these in (8.1) yields:

$$  \displaystyle \int x^n \log(x)dx=\frac{x^{n+1}}{n+1}\log(x)-\frac{1}{n+1}\int \frac{x^{n+1}}{x}dx + C $$     (8.24)

Now, the integral $$\int \frac{x^{n+1}}{x} dx\!$$ must be found. This is simply:

$$  \displaystyle \int \frac{x^{n+1}}{x}dx=\int x^ndx=\frac{x^{n+1}}{n+1}+C $$     (8.25)

Substituting (8.25) into (8.24) yields:

$$  \displaystyle \int x^n \log(x)dx=\frac{x^{n+1}}{n+1}\log(x)-\frac{x^{n+1}}{(n+1)^2} + C $$     (8.26)

Thus, the overall solution for any real value of r is:

$$  \displaystyle \int x^n \log(r+x)dx=\begin{cases} \frac{1}{n+1}[(x^{n+1}-(-r)^{n+1})\log(r+x)-\sum_{k=0}^{n}\frac{(-r)^k}{n-k+1}x^{n-k+1}]+C & \text{ if } r\neq 0\\ \frac{x^{n+1}}{n+1}\log(x)-\frac{x^{n+1}}{(n+1)^2} + C& \text{ if } r=0 \end{cases} $$     (8.27)
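Formula (8.27) is easy to spot-check numerically. The sketch below compares the claimed antiderivative (with C=0) against composite Simpson quadrature of the integrand on an interval where $$r+x>0$$; the function names and the test interval are arbitrary choices for this check, not part of the derivation:

```python
import math

def F(n, r, x):
    # Antiderivative from (8.27), C = 0; valid for r + x > 0 (x > 0 if r = 0)
    if r == 0:
        return x**(n+1) * math.log(x) / (n+1) - x**(n+1) / (n+1)**2
    s = sum((-r)**k / (n - k + 1) * x**(n - k + 1) for k in range(n + 1))
    return ((x**(n+1) - (-r)**(n+1)) * math.log(r + x) - s) / (n + 1)

def simpson(f, a, b, m=2000):
    # Composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i*h) for i in range(1, m))
    return s * h / 3

# r = 1 reproduces the original integral (8.13); r = 0 exercises (8.26)
for n, r in [(0, 1.0), (1, 1.0), (3, 0.5), (2, 0.0)]:
    exact   = F(n, r, 3.0) - F(n, r, 1.0)
    numeric = simpson(lambda x: x**n * math.log(r + x), 1.0, 3.0)
    print(n, r, abs(exact - numeric) < 1e-8)   # True for each case
```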

Author
Solved and Typed By - Egm4313.s12.team1.armanious (talk) 02:51, 25 March 2012 (UTC)

Reviewed By - Egm4313.s12.team1.silvestri (talk) 18:29, 30 March 2012 (UTC)

Statement
Consider the following L2-ODE-CC with log(1+x) as the excitation:
$$\displaystyle y''-3y'+2y=r(x) $$     (9.0)

$$\displaystyle r(x)=\log(1+x) $$     (9.1)

Also, consider the initial conditions:
$$\displaystyle y(- \frac{3}{4})=1, \quad y'(- \frac{3}{4})=0 $$     (9.2)

1) Project the excitation r(x) onto the polynomial basis

$$\displaystyle \left \{ b_{j}(x) = x^j, \ j=0,1,...,n \right \} $$     (9.3)

i.e., find $$ d_{j} $$ such that:

$$\displaystyle r(x)\approx r_n (x) = \sum_{j=0}^{n} d_jx^j $$     (9.4)

for x in [-.75, 3], and for n= 0,1. Plot $$ r(x) $$ and $$ r_n(x) $$ to show uniform approximation and convergence. Note that $$ <x^i, r>=\int_{a}^{b} x^i\log(1+x) dx $$

In a separate series of plots, compare the approximation of the function log(1+x) by 2 methods:

A. Projection on polynomial basis (1) p8-17

B. Taylor series expansion about $$ \hat x =0 $$

Observe and discuss the pros and cons of each method.

2) Find $$ y_n (x) $$ such that:


$$\displaystyle y''_n +ay'_n + by_n = r_n(x) $$     (9.5)

With the same initial conditions (9.2).

Plot $$ y_n(x)$$ for n=0,1 for x in [-.75,3]

In a series of separate plots, compare the results obtained with the projected excitation on polynomial basis to those with truncated Taylor series of the excitation. Plot also the numerical solution as a baseline for comparison.

Solution
Part 1: First, we project the excitation r(x)=log(1+x) onto the polynomial basis. We know that $$ <b_i,b_j> \cdot d_j=<b_i,r> $$. $$ <b_0, b_0>$$ is the only term of concern for the $$ \gamma $$ matrix when n=0.

$$\displaystyle <b_0,b_0> = \int_{-3/4}^{3} x^0x^0 dx=(3+3/4)=3.75 $$     (9.6)

$$<b_0,r>$$ for n=0 is calculated below:

$$\displaystyle <b_0,r> = \int_{-3/4}^{3} x^0\log(1+x)dx=2.141751035 $$     (9.7)

This means that, as stated in the opening sentence:

$$\displaystyle <b_0,b_0> \cdot d_0=<b_0,r> $$ $$\displaystyle 3.75 \cdot d_0=2.141751035 $$ $$\displaystyle d_0=.5711336093 $$     (9.8)

With that in mind, $$ r_n(x)$$ is developed from equation (9.4) as shown below:

$$\displaystyle r(x)\approx r_0 (x) = \sum_{j=0}^{0} d_jx^j=.5711336093 $$     (9.9)

Now projecting the excitation onto the polynomial basis with n=1: We know that (in matrix form) $$ \gamma \cdot d_j=<b_i,r> $$. Beginning with the $$ \gamma $$ matrix:

$$\displaystyle \gamma = \begin{bmatrix} <b_0,b_0> & <b_0,b_1>\\ <b_1,b_0>& <b_1,b_1> \end{bmatrix} $$     (9.9)

This matrix becomes:

$$\displaystyle \gamma = \begin{bmatrix} 3.75 & 4.21875\\ 4.21875& 9.140625 \end{bmatrix} $$     (9.10)

The vector containing $$<b_i,r>$$, defined as c, is shown below:

$$\displaystyle c =\begin{Bmatrix} <b_0,r>\\ <b_1,r> \end{Bmatrix} = \begin{Bmatrix} \int_{-3/4}^{3} x^0\log(1+x)dx\\ \int_{-3/4}^{3} x^1\log(1+x)dx \end{Bmatrix} = \begin{Bmatrix} 2.141751035\\ 5.007550553 \end{Bmatrix} $$     (9.10)

We then find the $$ d_j $$ matrix in the following way:

$$\displaystyle d_j = \gamma ^{-1} \cdot c =\begin{Bmatrix} -.0939750342\\ .5912076831 \end{Bmatrix} $$     (9.11)

With that in mind, $$ r_n(x)$$ is developed from equation (9.4) as shown below:

$$\displaystyle r(x)\approx r_1 (x) = \sum_{j=0}^{1} d_jx^j=-.0939750342+.5912076831x $$     (9.12)
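The Gram entries and projection coefficients above can be cross-checked with a short script. The monomial scalar products are evaluated in closed form and $$<x^i, \log(1+x)>$$ by Simpson quadrature; the helper names here are ad hoc for this check:

```python
import math

a, b = -0.75, 3.0

def simpson(f, a, b, m=4000):
    # Composite Simpson's rule with m (even) subintervals
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i*h) for i in range(1, m))
    return s * h / 3

def mono(i, j):
    # <x^i, x^j> = integral of x^(i+j) over [a, b], in closed form
    p = i + j + 1
    return (b**p - a**p) / p

G = [[mono(0, 0), mono(0, 1)],
     [mono(1, 0), mono(1, 1)]]
c = [simpson(lambda x: x**i * math.log(1 + x), a, b) for i in (0, 1)]

det = G[0][0]*G[1][1] - G[0][1]*G[1][0]
d0  = (c[0]*G[1][1] - G[0][1]*c[1]) / det
d1  = (G[0][0]*c[1] - c[0]*G[1][0]) / det
print(G)         # [[3.75, 4.21875], [4.21875, 9.140625]]
print(c)         # ~ [2.141751, 5.007551]
print(d0, d1)    # ~ -0.0939750  0.5912077
```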

The projections with n=0 [in blue] and n=1 [in green] are compared to the actual excitation log(1+x) [in red] in the graph below:

Part 2: First we create the characteristic equation in standard form:

$$\displaystyle{\lambda^2-3 \lambda +2=0}$$     (9.13)

Factoring, we can find what $$ \lambda \!$$ equals:

$$\displaystyle{(\lambda - 2)(\lambda -1) = 0}$$     (9.14)

$$\displaystyle{\lambda = 2,\lambda =1}$$     (9.15)

Given two distinct real roots, the homogeneous solution is:

$$  \displaystyle y_h(x)=C_1e^{2x} + C_2e^{x} $$     (9.16)

By using the method of undetermined coefficients, for n=0, the excitation $$ r_0(x)=.5711336093$$ is analyzed to yield a particular solution. In assessing a polynomial with a power of 0, the form of the particular solution will look like this:


$$  \displaystyle y_p (x)= A_0 $$     (9.17)

The first and second derivatives of $$ A_0 $$, being a constant, are 0. So when plugging in $$ y_p (x)$$ into (9.0), $$ A_0 $$ is determined to be:

$$\displaystyle .5711336093=2 A_0 $$ $$\displaystyle .2855668047= A_0 $$     (9.18)

The general solution, after adding $$ y_h $$ and $$ y_p $$, then becomes:

$$\displaystyle y_g(x)=C_1e^{2x} + C_2e^{x} + .2855668047 $$     (9.18)

We consider the initial conditions by taking the first derivative of the general solution:


$$  \displaystyle y'_g(x) = 2C_1e^{2x} + C_2e^{x} $$     (9.19)

By plugging in -3/4 for x, 1 for y, and 0 for y', we can solve for the constants $$C_1, C_2 \!$$:

$$  \displaystyle y_g(-3/4)=1 =C_1e^{2(-3/4)} + C_2e^{-3/4} + .2855668047 $$     (9.20)

$$  \displaystyle y'_g(-3/4)=0= 2C_1e^{2(-3/4)} + C_2e^{-3/4} $$     (9.21)

Solving these equations gives $$C_1 = -3.201867, \ C_2 = 3.024910 \!$$. The resulting complete solution with consideration for initial conditions then becomes:

$$  \displaystyle y_g(x)=(-3.201867)e^{2x} + (3.024910)e^{x} + .2855668047 $$     (9.22)

By using the method of undetermined coefficients, for n=1, the excitation $$ r_1(x)=-.0939750342+.5912076831x$$ is analyzed to yield a particular solution. In assessing a polynomial with a power of 1, the form of the particular solution will look like this:

$$  \displaystyle y_p (x)= A_1 x+ A_0 $$     (9.23)

The first derivative of $$ y_p $$ is simply $$ A_1 $$ and the second derivative is 0. So when plugging in $$ y_p (x)$$ into (9.0), $$ A_0, A_1 $$ are determined to be:

$$\displaystyle .5912076831=2 A_1 $$ $$\displaystyle .2956038416= A_1 $$ $$\displaystyle -.0939750342= -3A_1+2A_0 $$ $$\displaystyle -.0939750342= -3(.2956038416)+2A_0 $$ $$\displaystyle .3964182452=A_0 $$     (9.24)

The general solution, after adding $$ y_h $$ and $$ y_p $$, then becomes:

$$\displaystyle y_g(x)=C_1e^{2x} + C_2e^{x} + .2956038416x + .3964182452 $$     (9.25)

We consider the initial conditions by taking the first derivative of the general solution:


$$  \displaystyle y'_g(x) = 2C_1e^{2x} + C_2e^{x} + .2956038416 $$     (9.26)

By plugging in -3/4 for x, 1 for y, and 0 for y', we can solve for the constants $$C_1, C_2 \!$$:

$$  \displaystyle y_g(-3/4)=1 =C_1e^{2(-3/4)} + C_2e^{-3/4} + .2956038416(-3/4) + .3964182452 $$     (9.27)

$$  \displaystyle y'_g(-3/4)=0= 2C_1e^{2(-3/4)} + C_2e^{-3/4} + .2956038416 $$     (9.28)

Solving these equations gives $$C_1 = -5.023474, \ C_2 = 4.120048 \!$$. The resulting complete solution with consideration for initial conditions then becomes:

$$  \displaystyle y_g(x)=(-5.023474)e^{2x} + (4.120048)e^{x} + .2956038416x + .3964182452 $$     (9.29)
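As a cross-check on the constants for both the n=0 and n=1 cases, the two initial-condition equations can be solved directly. This is a minimal sketch; the variable names are ad hoc:

```python
import math

x0 = -0.75
ea, eb = math.exp(2*x0), math.exp(x0)   # coefficients of C1 and C2

def solve(A1, A0):
    # y = C1 e^{2x} + C2 e^{x} + A1 x + A0, with y(x0) = 1, y'(x0) = 0
    r1 = 1 - A1*x0 - A0        # from y(x0)=1:   C1*ea + C2*eb = r1
    r2 = -A1                   # from y'(x0)=0:  2*C1*ea + C2*eb = r2
    C1 = (r2 - r1) / ea        # subtract the first equation from the second
    C2 = (r1 - C1*ea) / eb
    return C1, C2

print(solve(0.0, 0.2855668047))           # n=0: ~ (-3.201867, 3.024910)
print(solve(0.2956038416, 0.3964182452))  # n=1: ~ (-5.023474, 4.120048)
```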

Graphed below over the interval [-3/4, 3], the approximations are shown: in green, the approximation with n=1; in blue, the approximation with n=0; and in red, the truncated Taylor series with n=1.

Author
Solved and Typed By - Egm4313.s12.team1.silvestri (talk) 21:26, 29 March 2012 (UTC)

Reviewed By - Egm4313.s12.team1.durrance (talk) 19:02, 30 March 2012 (UTC)