User:Egm4313.s12.team8/R5

=R5.1 - Find $$R_c$$ for five series=

Problem
$$ \displaystyle \text{Find } R_c \text{ for the following series:}$$ $$ \displaystyle \text{1. } r(x) = \sum_{k=0}^\infty(k+1)kx^k $$ $$ \displaystyle \text{2. } r(x) = \sum_{k=0}^\infty\frac{(-1)^k}{\gamma^k}x^{2k} $$ $$ \displaystyle \gamma = \text{constant} $$

$$ \displaystyle \text{Use (2)-(3) p.7-31 to find } R_c \text{ for the Taylor series of}$$

$$ \displaystyle \text{3. } \sin x \text{ at } \hat{x}=0 $$ $$ \displaystyle \text{4. } \log(1+x) \text{ at } \hat{x}=0 $$ $$ \displaystyle \text{5. } \log(1+x) \text{ at } \hat{x}=1 $$

Solution
$$ \displaystyle \text{From 7-31, }$$ $$ \displaystyle r(x) = \sum_{k=0}^\infty d_k x^k $$ $$ \displaystyle \text{The radius of convergence of the series (1) is given by } $$ $$ \displaystyle R_c = \left[ \lim_{k \to \infty} \left| \frac{d_{k+1}}{d_k} \right| \right]^{-1}$$ $$ \displaystyle \text{or} $$ $$ \displaystyle R_c = \left[ \lim_{k \to \infty} \sqrt[k] {| d_k |} \right]^{-1}$$

$$ \displaystyle \text{1. } r(x) = \sum_{k=0}^\infty(k+1)kx^k $$ $$ \displaystyle d_k = (k+1)k $$ $$ \displaystyle d_{k+1} = (k+2)(k+1) $$ $$ \displaystyle R_c = \left[ \lim_{k \to \infty} \left| \frac{(k+2)(k+1)}{(k+1)k} \right| \right]^{-1}$$ $$ \displaystyle R_c = \left[ \lim_{k \to \infty} \left| 1+ \frac{2}{k} \right| \right]^{-1}$$ $$ \displaystyle R_c = \left[ 1+0 \right]^{-1}$$ $$ \displaystyle R_c = 1$$

$$ \displaystyle \text{2. } r(x) = \sum_{k=0}^\infty\frac{(-1)^k}{\gamma^k}x^{2k} $$ $$ \displaystyle \text{Let } t=x^2 \text{, so that } r = \sum_{k=0}^\infty d_k t^k \text{ with } d_k = \frac{(-1)^k}{\gamma^k} $$ $$ \displaystyle d_{k+1} = \frac{(-1)^{k+1}}{\gamma^{k+1}} $$ $$ \displaystyle R_{c,t} = \left[ \lim_{k \to \infty} \left| \frac{\frac{(-1)^{k+1}}{\gamma^{k+1}}}{\frac{(-1)^k}{\gamma^k}} \right| \right]^{-1} = \left[ \lim_{k \to \infty} \left| \frac{-1}{\gamma} \right| \right]^{-1} = \left[ \frac{1}{|\gamma|} \right]^{-1} = |\gamma| $$ $$ \displaystyle \text{Since the series is in } x^{2k} \text{ rather than } x^k \text{, it converges for } x^2 < |\gamma| \text{, so} $$ $$ \displaystyle R_c = \sqrt{|\gamma|} $$

$$ \displaystyle \text{3. Taylor Series of } \sin x \text{ at } \hat{x}=0 \text{ is}$$ $$ \displaystyle \sin(x) = \sum_{k=0}^\infty \frac{(-1)^k}{(2k+1)!} x^{2k+1} $$ $$ \displaystyle d_k = \frac{(-1)^k}{(2k+1)!} $$ $$ \displaystyle d_{k+1} = \frac{(-1)^{k+1}}{(2(k+1)+1)!} $$

$$ \displaystyle \text{Then,} $$ $$ \displaystyle R_c = \left[ \lim_{k \to \infty} \left| \frac{d_{k+1}}{d_k} \right| \right]^{-1}$$ $$ \displaystyle R_c = \left[ \lim_{k \to \infty} \left| \frac{\frac{(-1)^{k+1}}{(2(k+1)+1)!}}{\frac{(-1)^k}{(2k+1)!}} \right| \right]^{-1}$$ $$ \displaystyle R_c = \left[ \lim_{k \to \infty} \left| \frac{(2k+1)!}{(2k+3)!} \right| \right]^{-1}$$ $$ \displaystyle R_c = \lim_{k \to \infty} \frac{(2k+3)!}{(2k+1)!} = \lim_{k \to \infty} (2k+3)(2k+2)$$ $$ \displaystyle R_c = \infty$$

$$ \displaystyle \text{4. Taylor Series of } \log(1+x) \text{ at } \hat{x}=0 \text{ is}$$ $$ \displaystyle \log(1+x) = \sum_{k=1}^\infty \frac{(-1)^{k+1}}{k} x^{k} $$ $$ \displaystyle d_k = \frac{(-1)^{k+1}}{k} $$ $$ \displaystyle d_{k+1} = \frac{(-1)^{k+2}}{k+1} $$

$$ \displaystyle R_c = \left[ \lim_{k \to \infty} \left| \frac{\frac{(-1)^{k+2}}{k+1}}{\frac{(-1)^{k+1}}{k}} \right| \right]^{-1}$$ $$ \displaystyle R_c = \left[ \lim_{k \to \infty} \left| \frac{(-1)^{k+2}k}{(-1)^{k+1}(k+1)} \right| \right]^{-1}$$ $$ \displaystyle R_c = \left[ \lim_{k \to \infty} \frac{k}{k+1} \right]^{-1} = \left[ 1 \right]^{-1}$$ $$ \displaystyle R_c = 1$$

$$ \displaystyle \text{5. Taylor Series of } \log(1+x) \text{ at } \hat{x}=1 \text{ is}$$ $$ \displaystyle \log(1+x) = \log(2) + \sum_{k=1}^\infty \frac{(-1)^{k+1}}{2^kk} (x-1)^{k} $$ $$ \displaystyle d_k = \frac{(-1)^{k+1}}{2^kk} $$ $$ \displaystyle d_{k+1} = \frac{(-1)^{k+2}}{2^{k+1}(k+1)} $$

$$ \displaystyle R_c = \left[ \lim_{k \to \infty} \left| \frac{(-1)^{k+2}\,2^kk}{(-1)^{k+1}\,2^{k+1}(k+1)} \right| \right]^{-1}$$ $$ \displaystyle R_c = \left[ \lim_{k \to \infty} \frac{k}{2(k+1)} \right]^{-1} = \left[ \frac{1}{2} \right]^{-1}$$ $$ \displaystyle R_c = 2 $$ $$ \displaystyle \text{so the series converges for } |x-1|<2 \text{, i.e., for } -1<x<3. $$
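As a quick numerical cross-check (not part of the original report), the ratio-test limits for series 1 and 4 can be approximated at a large finite k using exact rational arithmetic; the Python sketch below uses illustrative names:

```python
from fractions import Fraction

def radius_from_ratio(d, k=10**4):
    """Approximate R_c = [lim |d_{k+1}/d_k|]^{-1} by evaluating at a large finite k."""
    return abs(Fraction(d(k)) / Fraction(d(k + 1)))

# Series 1: d_k = (k+1)k, so the ratio tends to 1 and R_c = 1
r1 = radius_from_ratio(lambda k: (k + 1) * k)

# Series 4 (log(1+x) about x=0): d_k = (-1)^{k+1}/k, again R_c = 1
r4 = radius_from_ratio(lambda k: Fraction((-1) ** (k + 1), k))

assert abs(float(r1) - 1) < 1e-3
assert abs(float(r4) - 1) < 1e-3
```

The finite-k ratio differs from the limit by O(1/k), which is why a loose tolerance suffices.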

=R5.2 - Verify linear independence using Wronskian and Gramian=

Problem Statement
Determine whether the following pairs of functions are linearly independent using the Wronskian. Then use the Gramian to verify the result over the interval $$\ [a,b]=[-1,1]$$

1) $$\ f(x)=x^2, g(x)=x^4$$

2) $$\ f(x)=\cos x, g(x)=\sin 3x$$

Part 1
$$\ f(x)=x^2, g(x)=x^4$$

The Wronskian of two functions, f and g, is defined as:

$$\ W(f,g)=\det \begin{bmatrix} f & g \\f' & g' \end{bmatrix}$$

$$\ f(x)=x^2 \Rightarrow f'(x)=2x$$

$$\ g(x)=x^4 \Rightarrow g'(x)=4x^3$$

By substituting functions the Wronskian becomes:

$$\ W(x^2,x^4)=\det \begin{bmatrix} x^2 & x^4 \\2x & 4x^3 \end{bmatrix}$$

$$\ W(x^2,x^4)=(x^2)(4x^3)-(x^4)(2x)=4x^5-2x^5=2x^5$$

The Wronskian is not identically zero (it vanishes only at $$x=0$$), therefore the two functions are linearly independent.

The Gramian is defined as:

$$\ \Gamma(f,g)=\det \begin{bmatrix} \langle f,f \rangle & \langle f,g \rangle \\ \langle g,f \rangle & \langle g,g \rangle \end{bmatrix}$$

where:

$$\ \langle f,g \rangle=\int_a^b f(x)g(x)\ dx$$

For the above functions:

$$\ \langle x^2,x^2 \rangle=\int_{-1}^1 (x^2)(x^2)\ dx=\int_{-1}^1 x^4\ dx=\frac{x^5}{5} \mid_{-1}^1=\frac{2}{5}$$

$$\ \langle x^2,x^4 \rangle=\int_{-1}^1 (x^2)(x^4)\ dx=\int_{-1}^1 x^6\ dx=\frac{x^7}{7} \mid_{-1}^1=\frac{2}{7}$$

$$\ \langle x^4,x^2 \rangle=\int_{-1}^1 (x^4)(x^2)\ dx=\int_{-1}^1 x^6\ dx=\frac{x^7}{7} \mid_{-1}^1=\frac{2}{7}$$

$$\ \langle x^4,x^4 \rangle=\int_{-1}^1 (x^4)(x^4)\ dx=\int_{-1}^1 x^8\ dx=\frac{x^9}{9} \mid_{-1}^1=\frac{2}{9}$$

The Gramian becomes:

$$\ \Gamma(x^2,x^4)=\det \begin{bmatrix} 2/5 & 2/7 \\ 2/7 & 2/9 \end{bmatrix}$$

$$\ \Gamma=(\frac{2}{5})(\frac{2}{9})-(\frac{2}{7})(\frac{2}{7})=\frac{16}{2205} \ne 0$$

The Gramian does not equal 0, therefore the functions are linearly independent.
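The four scalar products above are integrals of monomials, $$\langle x^m, x^n \rangle = \int_{-1}^1 x^{m+n}\,dx$$, which equal $$\tfrac{2}{m+n+1}$$ for even $$m+n$$ and vanish otherwise. A Python sketch (an illustrative cross-check, not part of the report) reproduces the Gramian exactly with rational arithmetic:

```python
from fractions import Fraction

def inner_monomials(m, n):
    """<x^m, x^n> = integral of x^(m+n) over [-1, 1]."""
    p = m + n
    return Fraction(2, p + 1) if p % 2 == 0 else Fraction(0)

G = [[inner_monomials(2, 2), inner_monomials(2, 4)],
     [inner_monomials(4, 2), inner_monomials(4, 4)]]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]

# matches the hand computation: det = (2/5)(2/9) - (2/7)^2 = 16/2205 != 0
assert det == Fraction(16, 2205)
```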

Part 2
$$\ f(x)=\cos x, g(x)=\sin 3x$$

The Wronskian of two functions, f and g, is defined as:

$$\ W(f,g)=\det \begin{bmatrix} f & g \\f' & g' \end{bmatrix}$$

$$\ f(x)=\cos x \Rightarrow f'(x)=-\sin x$$

$$\ g(x)=\sin 3x \Rightarrow g'(x)=3\cos 3x$$

By substituting the functions, the Wronskian becomes:

$$\ W(\cos x,\sin 3x)=\det \begin{bmatrix} \cos x & \sin 3x \\-\sin x & 3\cos 3x \end{bmatrix}$$

$$\ W(\cos x,\sin 3x)=(\cos x)(3\cos 3x)-(\sin 3x)(-\sin x)=3\cos x\cos 3x+\sin x\sin 3x$$

The Wronskian is not identically zero, therefore the two functions are linearly independent.

The Gramian is defined as:

$$\ \Gamma(f,g)=\det \begin{bmatrix} \langle f,f \rangle & \langle f,g \rangle \\ \langle g,f \rangle & \langle g,g \rangle \end{bmatrix}$$

where:

$$\ \langle f,g \rangle=\int_a^b f(x)g(x)\ dx$$

For the above functions:

$$\ \langle \cos x,\cos x \rangle=\int_{-1}^1 \cos^{2}x\ dx=\int_{-1}^1 \frac{1+\cos 2x}{2}\ dx=\left[\frac{x}{2}+\frac{\sin 2x}{4}\right]_{-1}^1=1.455$$

$$\ \langle \cos x,\sin 3x \rangle=\int_{-1}^1 (\cos x)(\sin 3x)\ dx=\frac{1}{2}\int_{-1}^1 [\sin 4x+\sin 2x]\ dx=\frac{1}{2}\left[-\frac{\cos 4x}{4}-\frac{\cos 2x}{2}\right]_{-1}^1=0$$

$$\ \langle \sin 3x,\cos x \rangle=\langle \cos x,\sin 3x \rangle=0$$

$$\ \langle \sin 3x,\sin 3x \rangle=\int_{-1}^1 \sin^{2}3x\ dx=\int_{-1}^1 \frac{1-\cos 6x}{2}\ dx=\left[\frac{x}{2}-\frac{\sin 6x}{12}\right]_{-1}^1=1.0465$$

The Gramian becomes:

$$\ \Gamma(\cos x,\sin 3x)=\det \begin{bmatrix} 1.455 & 0 \\ 0 & 1.0465 \end{bmatrix}$$

$$\ \Gamma=(1.455)(1.0465)-(0)(0)=1.522 \ne 0$$

The Gramian does not equal 0, therefore the functions are linearly independent.
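The same Gramian can be checked by numerical quadrature. The Python sketch below (an illustrative cross-check, not part of the report) uses a hand-rolled composite Simpson rule to reproduce the entries and the determinant:

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

f = lambda x: math.cos(x)
g = lambda x: math.sin(3 * x)

ff = simpson(lambda x: f(x) * f(x), -1, 1)   # ≈ 1 + sin(2)/2
fg = simpson(lambda x: f(x) * g(x), -1, 1)   # odd integrand -> 0
gg = simpson(lambda x: g(x) * g(x), -1, 1)   # ≈ 1 - sin(6)/6
det = ff * gg - fg * fg

assert abs(ff - 1.4546) < 1e-3
assert abs(fg) < 1e-9
assert abs(gg - 1.0466) < 1e-3
assert abs(det - 1.5224) < 1e-3
```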

=R5.3 - Verify linear independence of vectors using Gramian=

Problem Statement
Verify that $$\ b_1$$ and $$\ b_2$$ in (1)-(2) p.7-34 are linearly independent using the Gramian.

$$\ b_1=2e_1+7e_2$$

$$\ b_2=1.5e_1+3e_2$$

Solution
For vectors, the Gramian is defined as:

$$\ \boldsymbol \Gamma(b_1,b_2)=\begin{bmatrix} \langle b_1,b_1 \rangle & \langle b_1,b_2 \rangle\\ \langle b_2,b_1 \rangle & \langle b_2,b_2 \rangle \end{bmatrix}$$

where:

$$\ \langle b_i,b_j \rangle=b_i \cdot b_j$$

For the given vectors, the dot products are:

$$\ \langle b_1,b_1 \rangle=(2e_1+7e_2) \cdot (2e_1+7e_2)=(2)(2)+(7)(7)=4+49=53$$

$$\ \langle b_1,b_2 \rangle=(2e_1+7e_2) \cdot (1.5e_1+3e_2)=(2)(1.5)+(7)(3)=3+21=24$$

$$\ \langle b_2,b_1 \rangle=(1.5e_1+3e_2) \cdot (2e_1+7e_2)=(1.5)(2)+(3)(7)=3+21=24$$

$$\ \langle b_2,b_2 \rangle=(1.5e_1+3e_2) \cdot (1.5e_1+3e_2)=(1.5)(1.5)+(3)(3)=2.25+9=11.25$$

So the Gramian matrix becomes:

$$\ \boldsymbol \Gamma(b_1,b_2)=\begin{bmatrix} 53 & 24\\ 24 & 11.25 \end{bmatrix}$$

Finding the determinant of the Gramian matrix gives the Gramian:

$$\ \Gamma=(53)(11.25)-(24)(24)=596.25-576=20.25 \ne 0$$

The Gramian does not equal 0, therefore the vectors $$\ b_1$$ and $$\ b_2$$ are linearly independent.
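Because the vectors are two-dimensional, the whole computation fits in a few lines; a Python sketch (illustrative cross-check only) confirms the matrix and its determinant:

```python
b1 = (2.0, 7.0)     # b1 = 2 e1 + 7 e2
b2 = (1.5, 3.0)     # b2 = 1.5 e1 + 3 e2

dot = lambda u, v: u[0] * v[0] + u[1] * v[1]

G = [[dot(b1, b1), dot(b1, b2)],
     [dot(b2, b1), dot(b2, b2)]]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]

assert G == [[53.0, 24.0], [24.0, 11.25]]
assert abs(det - 20.25) < 1e-12   # nonzero -> linearly independent
```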

=R5.4 - Demonstrate superposition principle=

Problem Statement
(1) Show that: $$\displaystyle y_{p}(x)=\sum_{i=1}^{n}y_{p,i}(x) $$    (5.1) is indeed the overall particular solution of the L2-ODE-VC: $$\displaystyle y''+p(x)y'+q(x)y=r(x) $$     (5.2) with the excitation: $$\displaystyle r(x)=r_{1}(x)+r_{2}(x)+...+r_{n}(x)=\sum_{i=1}^{n}r_{i}(x) $$    (5.3) (2) Discuss the choice of $$\displaystyle y_{p}(x) $$ in the above table, e.g., for: $$\displaystyle r(x)=k\cos(\omega x) $$ Why would you need to have both $$\displaystyle \cos(\omega x), \sin(\omega x) $$ in $$\displaystyle y_{p}(x) $$?

Solution (1)
Using the following equation: $$\displaystyle r_{i}(x)=y_{p,i}''+p(x)y_{p,i}'+q(x)y_{p,i} $$    (5.4) for the individual excitation and particular-solution pairs gives us the following: $$\displaystyle r_{1}(x)=y_{p,1}''+p(x)y_{p,1}'+q(x)y_{p,1} $$    (5.5) $$\displaystyle r_{2}(x)=y_{p,2}''+p(x)y_{p,2}'+q(x)y_{p,2} $$    (5.6) $$\displaystyle r_{3}(x)=y_{p,3}''+p(x)y_{p,3}'+q(x)y_{p,3} $$    (5.7) Now, adding (5.5), (5.6), and (5.7) and using the linearity of differentiation gives us: $$\displaystyle r_{1}(x)+r_{2}(x)+r_{3}(x)=(y_{p,1}+y_{p,2}+y_{p,3})''+p(x)(y_{p,1}+y_{p,2}+y_{p,3})'+q(x)(y_{p,1}+y_{p,2}+y_{p,3} )$$    (5.8) Equation (5.8) shows, for n=3, that the overall particular solution of (5.2) with excitation (5.3) is indeed equation (5.1); the same argument extends to any number of terms n.

Solution (2)
We know that the given example for an excitation is the periodic excitation: $$\displaystyle r(x)=k\cos(\omega x) $$ When we decompose a periodic excitation into a Fourier trigonometric series, we find: $$\displaystyle r(x)=a_{0}+\sum_{n=1}^{\infty }[a_{n}\cos(n\omega x)+b_{n}\sin(n\omega x)] $$ Since the particular solution should depend on the excitation, and since differentiating $$\displaystyle \cos(\omega x) $$ produces $$\displaystyle \sin(\omega x) $$ terms (and vice versa), a trial solution containing only one of the two could not match both the sine and cosine coefficients after substitution. We therefore need both $$\displaystyle \cos(\omega x), \sin(\omega x) $$ in $$\displaystyle y_{p}(x) $$ to obtain the correct particular solution.
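The superposition argument of Solution (1) rests on the linearity of the operator $$y \mapsto y''+p(x)y'+q(x)y$$. The Python sketch below (illustrative only; the coefficient choices p(x)=x, q(x)=1, the test functions, and the sample points are arbitrary assumptions, with derivatives taken by finite differences) demonstrates that linearity numerically:

```python
import math

def d1(f, x, h=1e-4):
    # central difference for y'
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    # central difference for y''
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

def L(y, x):
    # sample variable-coefficient operator with p(x) = x, q(x) = 1
    return d2(y, x) + x * d1(y, x) + 1.0 * y(x)

y1, y2 = math.sin, math.exp   # arbitrary smooth test functions

# L[y1 + y2] should equal L[y1] + L[y2] at every sample point
max_diff = max(abs(L(lambda t: y1(t) + y2(t), x) - (L(y1, x) + L(y2, x)))
               for x in [0.3, 1.1, 2.0])
assert max_diff < 1e-5
```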

=R5.5 - Demonstrate linear independence & solve an ODE=

Problem Statement
1. Show that cos(7x) and sin(7x) are linearly independent using the Wronskian and Gramian.

2. Find 2 equations for the 2 unknowns M and N and solve for M and N.

3. Find the overall solution that corresponds to the initial condition.

Plot the solution over 3 periods.

Solution
1) Show that cos(7x) and sin(7x) are linearly independent using the Wronskian and Gramian.

The Wronskian is defined as:

$$ \displaystyle W(f,g):=det\begin{bmatrix} f &g \\ f'&g' \end{bmatrix}=fg'-gf' $$

The Wronskian proves linear independence if:

$$ \displaystyle W(f,g)\neq 0 $$

Filling in the formula:

$$ \displaystyle f= cos(7x) $$

$$ \displaystyle g= sin(7x) $$

$$ \displaystyle f'=-7sin(7x) $$

$$ \displaystyle g'=7cos(7x) $$

$$ \displaystyle W(f,g):=det\begin{bmatrix} cos(7x) &sin(7x) \\ -7sin(7x)&7cos(7x) \end{bmatrix}= cos(7x)*7cos(7x)-sin(7x)*(-7sin(7x)) $$

$$ \displaystyle W(f,g):= 7cos^{2}(7x)+7sin^{2}(7x)=7\neq 0 $$

Since the Wronskian does not equal 0, the two functions are linearly independent.

The Gramian is defined as:

$$\displaystyle \Gamma (f,g):=det\begin{bmatrix} \langle f,f \rangle & \langle f,g \rangle \\ \langle g,f \rangle & \langle g,g \rangle \end{bmatrix} $$

Where the scalar product notation is:

$$\displaystyle \langle f,g \rangle:=\int_{a}^{b}f(x)g(x)dx $$

The Gramian proves linear independence if:

$$ \displaystyle \Gamma (f,g)\neq 0 $$

So for this problem, integrating over one period:

$$\displaystyle \langle \cos(7x),\cos(7x) \rangle:=\int_{0}^{\frac{2\pi}{7}}\cos(7x)\cos(7x)dx $$

This integral must be solved using u substitution and the following equations:

$$ \displaystyle u=7x $$

$$ \displaystyle du=7dx $$

The integral then reduces to:

$$\displaystyle \langle \cos(7x),\cos(7x) \rangle=\frac{1}{7}\int_{0}^{2\pi}\cos^{2}(u)\,du= \frac{\pi}{7}$$

$$\displaystyle \langle \sin(7x),\sin(7x) \rangle:=\int_{0}^{\frac{2\pi}{7}}\sin(7x)\sin(7x)dx $$

Which using the same u substitution reduces to:

$$\displaystyle \langle \sin(7x),\sin(7x) \rangle=\frac{1}{7}\int_{0}^{2\pi}\sin^{2}(u)\,du= \frac{\pi}{7}$$

$$\displaystyle \langle \sin(7x),\cos(7x) \rangle=\langle \cos(7x),\sin(7x) \rangle=\int_{0}^{\frac{2\pi}{7}}\sin(7x)\cos(7x)dx $$

Since $$\sin(7x)\cos(7x)=\frac{1}{2}\sin(14x)$$, which integrates to zero over a full period (the orthogonality of the trigonometric system), both off-diagonal scalar products vanish:

$$\displaystyle \langle \sin(7x),\cos(7x) \rangle=\langle \cos(7x),\sin(7x) \rangle=0 $$

$$\displaystyle \Gamma (f,g):=det\begin{bmatrix} \frac{\pi}{7} &0 \\ 0 & \frac{\pi}{7} \end{bmatrix}= \frac{\pi^{2}}{49} \neq 0$$

So once again the two functions are proven linearly independent.

2) Find 2 equations for the 2 unknowns M and N and solve for M and N.

Given the following L2-ODE-CC:

$$\displaystyle y^{''}-3y^{'}-10y=3cos(7x) $$

$$\displaystyle y_{p}=Mcos(7x)+ Nsin(7x) $$

$$\displaystyle y'_{p}=-7Msin(7x)+ 7Ncos(7x) $$

$$\displaystyle y''_{p}=-49Mcos(7x)- 49Nsin(7x) $$

Plugging these back into the original equation one obtains:

$$\displaystyle -49Mcos(7x)- 49Nsin(7x)+ 21Msin(7x)- 21Ncos(7x)- 10Mcos(7x) -10Nsin(7x)= 3cos(7x) $$

Collecting like terms:

$$\displaystyle -59Mcos(7x)- 59Nsin(7x)+ 21Msin(7x)- 21Ncos(7x)= 3cos(7x) $$

Equating coefficients the following two equations are found:

$$\displaystyle -59M -21N=3 $$

$$\displaystyle -59N +21M=0 $$

Solving these equations:

$$\displaystyle M=\frac{-177}{3922} $$

$$\displaystyle N=\frac{-63}{3922} $$

Therefore the particular solution is:

$$\displaystyle y_{p}(x)=\frac{-177}{3922}cos(7x)+ \frac{-63}{3922}sin(7x) $$
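Substituting the computed M and N back into the ODE should make the residual vanish identically; the Python sketch below (an illustrative cross-check, not part of the report) evaluates the residual at a few arbitrary sample points:

```python
import math

M, N = -177 / 3922, -63 / 3922   # coefficients found above

def yp(x):  return M * math.cos(7 * x) + N * math.sin(7 * x)
def yp1(x): return -7 * M * math.sin(7 * x) + 7 * N * math.cos(7 * x)
def yp2(x): return -49 * M * math.cos(7 * x) - 49 * N * math.sin(7 * x)

# residual of y'' - 3y' - 10y - 3cos(7x), which should be 0 for all x
max_res = max(abs(yp2(x) - 3 * yp1(x) - 10 * yp(x) - 3 * math.cos(7 * x))
              for x in [0.0, 0.4, 1.3, 2.7])
assert max_res < 1e-12
```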

3) Find the overall solution that corresponds to the initial conditions:

$$ \displaystyle y(0)=1 $$

$$ \displaystyle y'(0)=0 $$

Now to solve for the homogeneous solution:

$$\displaystyle y^{''}-3y^{'}-10y=3cos(7x) $$

The corresponding homogeneous equation follows the form:

$$\displaystyle y^{''}+ay^{'}+by=0 $$

So:

$$\displaystyle a^{2}-4b=9+40=49 $$

Since this is greater than 0 we have Case I, two distinct real roots.

$$\displaystyle y_{h}(x)=c_{1}e^{\lambda_{1}x}+ c_{2}e^{\lambda_{2}x} $$

$$\displaystyle \lambda_{1}=\frac{3+7}{2}=5 $$

$$\displaystyle \lambda_{2}=\frac{3-7}{2}=-2 $$

$$\displaystyle y_{h}(x)=c_{1}e^{5x}+ c_{2}e^{-2x} $$

The initial conditions must be applied to the overall solution $$\displaystyle y(x)=y_{h}(x)+y_{p}(x) $$, since the particular solution also contributes to $$y(0)$$ and $$y'(0)$$:

$$\displaystyle y(x)=c_{1}e^{5x}+ c_{2}e^{-2x}- \frac{177}{3922}\cos(7x)- \frac{63}{3922}\sin(7x) $$

$$\displaystyle y'(x)=5c_{1}e^{5x}- 2c_{2}e^{-2x}+ \frac{1239}{3922}\sin(7x)- \frac{441}{3922}\cos(7x) $$

$$\displaystyle y(0)=1=c_{1}+c_{2}-\frac{177}{3922} $$

$$\displaystyle y'(0)=0=5c_{1}-2c_{2}-\frac{441}{3922} $$

Solving this system of equations:

$$\displaystyle c_{1}=\frac{163}{518}\approx 0.3147 $$

$$\displaystyle c_{2}=\frac{271}{371}\approx 0.7305 $$

$$\displaystyle y(x)=y_{p}+y_{h} $$

$$\displaystyle y(x)= \frac{163}{518}e^{5x}+ \frac{271}{371}e^{-2x}- \frac{177}{3922}\cos(7x)- \frac{63}{3922}\sin(7x) $$

Plot
Solution plotted over 3 periods.

$$\displaystyle P=\frac{2\pi}{7}, 3P=\frac{6\pi}{7} $$

Matlab code:

>> x=linspace(0,(6*pi)/7);

>> y=(163/518).*exp(5.*x)+(271/371).*exp(-2.*x)-(177/3922).*cos(7.*x)-(63/3922).*sin(7.*x);

>> plot(x,y)

=R5.6 - Solve an ODE given the solution's form=

Problem Statement
Find the solution $$ y(x) $$ to the problem given the following:

$$ y''+4y'+13y=2e^{-2x} \cos(3x) $$

$$ y_h(x)=(e^{-2x})(A\cos(3x)+B\sin(3x))$$

$$ y_{p}(x)=x(e^{-2x})(M\cos(3x)+N\sin(3x)) $$

$$ y(0)=1, \, y'(0)=0 $$

Solution
First determine the first and second derivatives of the particular solution $$ y_p(x)$$:

$$ y'_{p}(x)=\frac{d}{dx}\frac{(x (M \cos(3 x)+N \sin(3 x))}{e^{2 x}} = e^{-2 x} (\sin(3 x) (-3 M x-2 N x+N)+\cos(3 x) (-2 M x+M+3 N x)) $$

$$ y''_{p}(x)=\frac{d^2}{dx^2}\frac{(x (M \cos(3 x)+N \sin(3 x))}{e^{2 x}} = e^{-2 x} (\sin(3 x) (6 M (2 x-1)-N (5 x+4))-\cos(3 x) (M (5 x+4)+6 N (2 x-1))) $$

Inserting these derivatives into the original ODE yields:

$$ e^{-2 x} (\sin(3 x) (6 M (2 x-1)-N (5 x+4))-\cos(3 x) (M (5 x+4)+6 N (2 x-1)))+4(e^{-2 x} (\sin(3 x) (-3 M x-2 N x+N)+\cos(3 x) (-2 M x+M+3 N x)))+13(x(e^{-2x})(M\cos(3x)+N\sin(3x)))=2e^{-2x}\cos(3x) $$

Cancelling like terms and equating the remaining coefficients of $$e^{-2x}\cos(3x)$$ and $$e^{-2x}\sin(3x)$$ leads to the following values:

$$ M=0 $$

$$ N=\frac{1}{3} $$
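These values can be verified independently of the algebra above by substituting the candidate particular solution back into the ODE. The Python sketch below (an illustrative cross-check, not part of the report, using finite-difference derivatives) evaluates the residual at a few sample points:

```python
import math

def yp(x):
    # candidate particular solution with M = 0, N = 1/3
    return (x / 3) * math.exp(-2 * x) * math.sin(3 * x)

def d1(f, x, h=1e-5):
    return (f(x + h) - f(x - h)) / (2 * h)

def d2(f, x, h=1e-4):
    return (f(x + h) - 2 * f(x) + f(x - h)) / h ** 2

# residual of y'' + 4y' + 13y - 2 e^{-2x} cos(3x), which should be ~0
max_res = max(abs(d2(yp, x) + 4 * d1(yp, x) + 13 * yp(x)
                  - 2 * math.exp(-2 * x) * math.cos(3 * x))
              for x in [0.0, 0.5, 1.2])
assert max_res < 1e-4
```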

Given that $$ y(x)= y_{h}(x)+y_{p}(x)$$ insert the homogeneous and particular solutions with the calculated M and N to express $$y(x)$$:

$$ y(x)=(e^{-2x})(A\cos(3x)+B\sin(3x))+x(e^{-2x})((0)\cos(3x)+(1/3)\sin(3x)) $$

simplifying to:

$$ y(x)=(e^{-2x})(A\cos(3x)+B\sin(3x))+x(e^{-2x})((1/3)\sin(3x)) $$

Using the first initial condition $$ y(0)=1 $$:

$$y(0)=(e^{0})(A\cos(0)+B\sin(0))+0(e^{0})((1/3)\sin(0))=A=1$$

Showing that: $$ A=1 $$

To use the second initial condition $$ y'(0)=0 $$ first calculate $$y'$$ knowing that $$A=1$$:

$$ y' = (1/3) e^{-2 x} (3 (3 B+x-2) \cos(3 x)-2 (3 B+x+4) \sin(3 x)) $$

Evaluating at $$x=0$$ gives $$ y'(0)=3B-2=0 $$

Solving for B yields:

$$ B=\frac{2}{3} $$

Now with all variables $$M, N, A, B$$ solved, $$y(x)$$ can be expressed:

$$ y(x)=(e^{-2x})(\cos(3x)+(\frac{2}{3})\sin(3x))+x(e^{-2x})((1/3)\sin(3x)) $$

The following plot illustrates the solution's behavior over 3 periods:



=R5.7 - Projection of a vector=

Problem Statement
Given:

$$ \mathbf v=4\mathbf {e_1}+2\mathbf {e_2}=c_1\mathbf {b_1}+c_2\mathbf {b_2} $$

$$ \mathbf {b_1}=2\mathbf {e_1}+7\mathbf {e_2} $$

$$ \mathbf {b_2}=1.5\mathbf {e_1}+3\mathbf {e_2} $$

1. Find the components $$ c_1, c_2 $$ using the Gram matrix.

2. Verify the result by using $$ \mathbf {b_1}, \mathbf {b_2}$$ in $$ \mathbf {v}$$.

Solution
1. Since $$\mathbf {b_1}$$ and $$\mathbf {b_2}$$ are linearly independent we conclude:

$$\Gamma \neq 0$$, so the inverse of the Gram matrix exists and $$ \mathbf{c} = \mathbf{\Gamma}^{-1}\mathbf{d} $$

Where $$\mathbf {c} = [c_1, c_2]^T$$

and $$\mathbf{d} = [<\mathbf {b_1}, \mathbf {v}>, <\mathbf {b_2}, \mathbf {v}>]^T$$

Solving for the Gram Matrix gives:

$$\mathbf\Gamma=\left[ \begin{array}{cc} 4+49 & 3+21 \\ 3+21 & 2+(\frac14)+9 \end{array} \right] = \left[ \begin{array}{cc} 53 & 24 \\ 24 & 11+(\frac14) \end{array}\right]$$

Inverting the matrix using Wolfram Alpha yields:

$$ \mathbf{\Gamma}^{-1}=(\frac{1}{81})\left[ \begin{array}{cc} 45 & -96 \\ -96 & 212 \end{array}\right]$$

Solving for $$ \mathbf{d} $$ gives:

$$ \mathbf{d}=\left[ \begin{array}{c} 8+14 \\ 6+6 \end{array}\right]=\left[ \begin{array}{c} 22 \\ 12 \end{array}\right]$$

Using the relation to solve for $$\mathbf {c}$$:

$$ \mathbf{c} = \mathbf{\Gamma}^{-1}\mathbf{d}=(\frac{1}{81})\left[ \begin{array}{cc} 45 & -96 \\ -96 & 212 \end{array}\right]\left[ \begin{array}{c} 22 \\ 12 \end{array}\right]=\left[ \begin{array}{c} -2 \\ \frac{16}{3} \end{array}\right]$$

$$ c_1=-2, \, c_2=\frac{16}{3} $$

2. Verifying the result by using the calculated $$c_1,c_2$$ and $$ \mathbf {b_1}, \mathbf {b_2}$$ in $$ \mathbf {v}$$ gives:

$$ 4\mathbf {e_1}+2\mathbf {e_2}=c_1\mathbf {b_1}+c_2\mathbf {b_2} $$

$$ 4\mathbf {e_1}+2\mathbf {e_2}=(2c_1+\frac{3}{2}c_2)\mathbf {e_1}+(7c_1+3c_2)\mathbf {e_2}$$

Simplifying to:

$$4\mathbf {e_1}+2\mathbf {e_2}=4\mathbf {e_1}+2\mathbf {e_2}$$

Yielding equal expressions and thereby verifying the results for $$c_1$$ and $$c_2$$.
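The 2-by-2 system can also be solved directly by Cramer's rule; the Python sketch below (an illustrative cross-check, not part of the report) reproduces the components and the verification:

```python
# Gram matrix and right-hand side from the work above
G = [[53.0, 24.0], [24.0, 11.25]]
d = [22.0, 12.0]

# Cramer's rule for the 2x2 system G c = d
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]   # 20.25
c1 = (d[0] * G[1][1] - G[0][1] * d[1]) / det
c2 = (G[0][0] * d[1] - d[0] * G[1][0]) / det

assert abs(c1 - (-2)) < 1e-12
assert abs(c2 - 16 / 3) < 1e-12

# verify v = c1 b1 + c2 b2 reproduces (4, 2)
v = (c1 * 2 + c2 * 1.5, c1 * 7 + c2 * 3)
assert abs(v[0] - 4) < 1e-9 and abs(v[1] - 2) < 1e-9
```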

=R5.8 - Antiderivatives=

Problem Statement
Find the integral $$\displaystyle \int x^n \log(1+x)dx $$ for n=0,1, first by using integration by parts and then with the help of the General Binomial Theorem:

$$\displaystyle (x+y)^n=\sum_{k=0}^{n}\begin{pmatrix}n\\k \end{pmatrix} x^{n-k}y^k $$ Where $$\displaystyle \begin{pmatrix}n\\k \end{pmatrix}=\frac{n!}{k!(n-k)!} $$

For n=0
$$\displaystyle \int x^n \log(1+x)dx = \int x^0 \log(1+x)dx= \int \log(1+x)dx$$

Using Integration By Parts: $$\displaystyle \int udv= uv-\int vdu$$ $$\displaystyle u=\log(1+x) \text{ and } du=\frac{1}{1+x}dx$$ $$\displaystyle dv=dx \text{ and } v=x$$

$$\displaystyle \int \log(1+x)dx= x\log(1+x)-\int \frac{x}{1+x}dx$$ $$\displaystyle \int \log(1+x)dx= x\log(1+x)-\int (1- \frac{1}{1+x})dx$$ $$\displaystyle \int \log(1+x)dx= x\log(1+x)-x+\log(1+x)+C$$ Group together like terms, factor the log(1+x), and simplify to get: $$\displaystyle \int \log(1+x)dx= (x+1)\log(1+x)-x+C$$

For n=1
$$\displaystyle \int x^n \log(1+x)dx = \int x^1 \log(1+x)dx= \int x\log(1+x)dx$$

Using Integration By Parts: $$\displaystyle \int udv= uv-\int vdu$$ $$\displaystyle u=\log(1+x) \text{ and } du=\frac{1}{1+x}dx$$ $$\displaystyle dv=x\,dx \text{ and } v=\frac{1}{2}x^2$$

$$\displaystyle \int x\log(1+x)dx= \frac{1}{2}x^2 \log(1+x)-\int \frac{x^2}{2(1+x)}dx$$ $$\displaystyle \int x\log(1+x)dx= \frac{1}{2}x^2 \log(1+x)-\frac{1}{2}\int (x+\frac{1}{1+x}-1)dx$$ $$\displaystyle \int x\log(1+x)dx= \frac{1}{2}x^2 \log(1+x)-\frac{1}{2}[\frac{1}{2}x^2+\log(1+x)-x]+C$$ $$\displaystyle \int x\log(1+x)dx= \frac{1}{2}x^2 \log(1+x)-\frac{1}{4}x^2-\frac{1}{2}\log(1+x)+\frac{1}{2}x+C$$

Group together like terms, factor, and simplify to get: $$\displaystyle \int x\log(1+x)dx= \frac{1}{2}(x^2-1)\log(1+x)-\frac{1}{2}x(\frac{1}{2}x-1)+C$$
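Both antiderivatives can be checked by differentiating them numerically and comparing against the integrands; the Python sketch below (an illustrative cross-check, not part of the report) does this at a few sample points:

```python
import math

def F0(x):
    # result for n = 0: (x+1) log(1+x) - x
    return (x + 1) * math.log(1 + x) - x

def F1(x):
    # result for n = 1: (1/2)(x^2 - 1) log(1+x) - (1/2) x ((1/2) x - 1)
    return 0.5 * (x * x - 1) * math.log(1 + x) - 0.5 * x * (0.5 * x - 1)

def d1(f, x, h=1e-6):
    # central finite difference
    return (f(x + h) - f(x - h)) / (2 * h)

# F0' should match log(1+x) and F1' should match x log(1+x)
max_err = max(max(abs(d1(F0, x) - math.log(1 + x)),
                  abs(d1(F1, x) - x * math.log(1 + x)))
              for x in [0.5, 1.0, 2.5])
assert max_err < 1e-8
```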

=R5.9 - Solve L2-ODE-CC with basis projection of excitation=

Problem Statement
Consider the L2-ODE-CC with $$\log(1+x)$$ as excitation:
 * $$y'' - 3y' + 2y = r(x)$$
 * $$r(x) = \log(1+x)$$

and the initial conditions:
 * $$y(-\frac{3}{4}) = 1, \, y'(-\frac{3}{4}) = 0$$

Part 1
Project the excitation $$r(x)$$ on the polynomial basis:
 * $$\{ b_i(x) = x^i, \, i=0,1,...,n \}$$

i.e. find $$d_i$$ such that:
 * $$r(x) \approx r_n(x) = \sum_{i=0}^n d_i \, x^i$$

for $$x$$ in $$\left[ -\frac{3}{4}, 3 \right]$$, and for n=3,6,9.

Plot $$r(x)$$ and $$r_n(x)$$ to show uniform approximation and convergence.

Note that $$\langle x^i, r \rangle = \int_a^b x^i \log(1+x) \, dx$$

In separate series of plots, compare the approximation of the function $$\log(1+x)$$ by 2 methods: Observe and discuss the pros and cons of each method.
 * Projection on a polynomial basis.
 * Taylor series expansion about $$\hat x = 0$$

Part 2
Find $$y_n(x)$$ such that:
 * $$y_{n}'' + a y_{n}' + b y_n = r_n(x)$$

with the above initial conditions.

Plot $$y_n(x)$$ for n=3,6,9, for x in $$\left[ -\frac{3}{4}, 3 \right]$$.

In a series of separate plots, compare the results obtained with the projected excitation on polynomial basis to those with truncated Taylor series of the excitation. Plot also the numerical solution as a baseline for comparison.

Part 1
To solve for the coefficients of the projection of a function onto a (generally non-orthogonal) basis, we use the following equation:
 * $$\mathbf{ \Gamma c } = \mathbf{ d }$$

Where $$\mathbf \Gamma$$, $$\mathbf c$$ and $$\mathbf d$$ are given by the following:
 * $$\mathbf \Gamma ( \{ b_i \} ) = \left[

\begin{array}{ccccc} \langle b_0, b_0 \rangle & \langle b_0 , b_1 \rangle & \cdots & \langle b_0 , b_{n - 1} \rangle & \langle b_0 , b_n \rangle \\ \langle b_1, b_0 \rangle & \langle b_1 , b_1 \rangle & \cdots & \langle b_1 , b_{n - 1} \rangle & \langle b_1 , b_n \rangle \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \langle b_{n-1}, b_0 \rangle & \langle b_{n-1} , b_1 \rangle & \cdots & \langle b_{n-1} , b_{n - 1} \rangle & \langle b_{n-1} , b_n \rangle \\ \langle b_n, b_0 \rangle & \langle b_n , b_1 \rangle & \cdots & \langle b_n , b_{n - 1} \rangle & \langle b_n , b_n \rangle \end{array} \right] $$


 * $$\mathbf c = \left[

\begin{array}{c} c_0 \\ c_1 \\ \vdots \\ c_{n-1} \\ c_n \end{array} \right] $$


 * $$\mathbf d = \left[

\begin{array}{c} \langle b_0, f \rangle \\ \langle b_1, f \rangle \\ \vdots \\ \langle b_{n-1}, f \rangle \\ \langle b_n, f \rangle \end{array} \right] $$

And where the scalar product for two functions is defined as:
 * $$\langle f, g \rangle := \int_a^b f(x) g(x) \, dx$$

Using the polynomial basis:
 * $$\{ b_i(x) = x^i, \, i=0,1,...,n \}$$

Over the region $$[a, \, b]$$ where:
 * $$a = -\frac{3}{4} \,, \; b = 3$$

We obtain the following matrix equation:
 * $$\left[

\begin{array}{ccccc} \langle 1, 1 \rangle & \langle 1 , x \rangle & \cdots & \langle 1 , x^{n-1} \rangle & \langle 1 , x^{n} \rangle \\ \langle x, 1 \rangle & \langle x , x \rangle & \cdots & \langle x , x^{n-1} \rangle & \langle x , x^{n} \rangle \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \langle x^{n-1}, 1 \rangle & \langle x^{n-1} , x \rangle & \cdots & \langle x^{n-1} , x^{n-1} \rangle & \langle x^{n-1} , x^{n} \rangle \\ \langle x^n, 1 \rangle & \langle x^n , x \rangle & \cdots & \langle x^n , x^{n-1} \rangle & \langle x^n , x^n \rangle \end{array} \right] \left[ \begin{array}{c} c_0 \\ c_1 \\ \vdots \\ c_{n-1} \\ c_n \end{array} \right] = \left[ \begin{array}{c} \langle 1, \ln(x+1) \rangle \\ \langle x, \ln(x+1) \rangle \\ \vdots \\ \langle x^{n-1}, \ln(x+1) \rangle \\ \langle x^n, \ln(x+1) \rangle \end{array} \right] $$

Solving with n = 0
Evaluating the scalar products gives us:
 * $$\left[

\begin{array}{c} 3.75 \end{array} \right] \mathbf c =\left[ \begin{array}{c} 2.14175 \end{array} \right]$$

Solving for $$\mathbf c$$ then yields:
 * $$\mathbf c = \left[

\begin{array}{c} 0.571134 \\ \end{array} \right]$$

This results in the following polynomial:
 * $$r_0(x) = 0.571134$$

Solving with n = 1
Evaluating the scalar products gives us:
 * $$\left[

\begin{array}{cccc} 3.75 & 4.21875 \\ 4.21875 & 9.14063 \end{array} \right] \mathbf c =\left[ \begin{array}{c} 2.14175 \\ 5.00755 \end{array} \right]$$

Solving for $$\mathbf c$$ then yields:
 * $$\mathbf c = \left[

\begin{array}{c} -0.093975 \\ 0.591208 \end{array} \right]$$

This results in the following polynomial:
 * $$r_1(x) = 0.591208x - 0.093975$$
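The n = 1 coefficients can be reproduced programmatically: the Gram entries are integrals of monomials over $$[-\frac{3}{4}, 3]$$, and the right-hand side uses the closed-form antiderivatives of $$\log(1+x)$$ and $$x\log(1+x)$$ derived in R5.8. The Python sketch below (an illustrative cross-check with a hand-rolled 2x2 solve, not part of the report):

```python
import math

a, b = -0.75, 3.0

def mono_inner(i, j):
    """<x^i, x^j> over [a, b]."""
    p = i + j + 1
    return (b ** p - a ** p) / p

def log_inner0():
    """<1, log(1+x)> via the antiderivative (x+1) log(1+x) - x."""
    F = lambda x: (x + 1) * math.log(1 + x) - x
    return F(b) - F(a)

def log_inner1():
    """<x, log(1+x)> via (1/2)(x^2-1) log(1+x) - (1/2) x ((1/2) x - 1)."""
    F = lambda x: 0.5 * (x * x - 1) * math.log(1 + x) - 0.5 * x * (0.5 * x - 1)
    return F(b) - F(a)

G = [[mono_inner(0, 0), mono_inner(0, 1)],
     [mono_inner(1, 0), mono_inner(1, 1)]]
d = [log_inner0(), log_inner1()]

# Cramer's rule for the 2x2 system G c = d
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
c0 = (d[0] * G[1][1] - G[0][1] * d[1]) / det
c1 = (G[0][0] * d[1] - d[0] * G[1][0]) / det

assert abs(c0 - (-0.093975)) < 1e-4
assert abs(c1 - 0.591208) < 1e-4
```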

Comparison of different n
Plotting these two polynomials against the original function yields the following:
 * [[Image:R5.9 r,n=0,1.png]]

We can easily see the rapid convergence of this method to the function. For n=0, the average error is 0.549822. For n=1, the average error is 0.165758. Continuing through higher n, we can see that the average error is roughly halved from n to n+1.

Comparison to Taylor series
The Taylor series about x = 0 is given as follows:
 * $$\log(1+x) \approx \sum_{i=1}^n \frac{(-1)^{i+1}}{i} x^i$$

Generating polynomials for n = 0 and 1:
 * $$\begin{align}

r_{taylor, 0}(x) &= 0\\ r_{taylor, 1}(x) &= x \end{align}$$

Plotting these against their respective projections:
 * [[Image:R5.9 n=0+taylor.png]]
 * [[Image:R5.9 n=1+taylor.png]]

As we can see, this function is clearly better modeled within this domain by the polynomial obtained via projection than that from its Taylor expansion about 0. While both are "accurate" (this statement applies more for higher n) for x between -1 and 1, when we look at the graph past 1 for the Taylor polynomial approximations, they rapidly diverge from the actual value of the function. Therefore, while immensely easier to compute, Taylor polynomials are terrible for this range without extending them as in Report 4 by generating a new polynomial wherever the old one diverges and combining the functions piecewise. The major disadvantage of projections, computational complexity, is easily mitigated nowadays with symbolic math software.

Part 2
From our work in Report 4, we know the homogeneous solution to the ODE is of the form:
 * $$y_h(x) = k_1 e^x + k_2 e^{2x}$$

Also from Report 4, we know that the particular solution to the ODE is of the form:
 * $$y_p(x) = \sum_{i=0}^{\infty} z_i x^i \approx \sum_{i=0}^{n} z_i x^i$$

From Report 4, we know we can solve for the coefficients by solving the matrix equation:
 * $$\mathbf{ A z } = \mathbf c$$

Where $$\mathbf A$$ and $$\mathbf z$$ are given:
 * $$\mathbf A = \begin{bmatrix}

b &     a &  2     &      0 &       0 &         &       0 \\ 0 &     b & 2a     &      6 &       0 &         &       0 \\ 0 &     0 &  b     &     3a &      12 &         &       0 \\ &       &        & \ddots & \ddots  &  \ddots &       0 \\ 0 &     0 &      0 &      0 &       b & a (n-1) & n (n-1) \\ 0 &     0 &      0 &      0 &       0 &       b & a n     \\ 0 &     0 &      0 &      0 &       0 &       0 &   b \end{bmatrix}$$
 * $$\mathbf z = \left[

\begin{array}{c} z_0 \\ z_1 \\ \vdots \\ z_{n-1} \\ z_n \end{array}\right]$$

And $$\mathbf c$$, being the coefficients of our excitation function, determined above:
 * $$\mathbf c = \mathbf \Gamma ^{-1} \mathbf d = \left[

\begin{array}{ccccc} \langle 1, 1 \rangle & \langle 1 , x \rangle & \cdots & \langle 1 , x^{n - 1} \rangle & \langle 1 , x^n \rangle \\ \langle x, 1 \rangle & \langle x , x \rangle & \cdots & \langle x , x^{n - 1} \rangle & \langle x , x^n \rangle \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \langle x^{n-1}, 1 \rangle & \langle x^{n-1} , x \rangle & \cdots & \langle x^{n-1} , x^{n-1} \rangle & \langle x^{n-1} , x^n \rangle \\ \langle x^n, 1 \rangle & \langle x^n , x \rangle & \cdots & \langle x^n , x^{n-1} \rangle & \langle x^n , x^n \rangle \end{array} \right]^{-1} \left[ \begin{array}{c} \langle 1, \ln(x+1) \rangle \\ \langle x, \ln(x+1) \rangle \\ \vdots \\ \langle x^{n-1}, \ln(x+1) \rangle \\ \langle x^n, \ln(x+1) \rangle \end{array} \right] $$

From our ODE, we determine that $$a=-3$$ and $$b=2$$. Therefore, to solve for the coefficients of $$y_{p,n}$$, we solve the following matrix equation:
 * $$\begin{bmatrix}

2 &     -3 &  2     &      0 &       0 &         &       0 \\     0 &      2 & -6     &      6 &       0 &         &       0 \\     0 &      0 &  2     &     -9 &      12 &         &       0 \\       &        &        & \ddots & \ddots  &  \ddots &       0 \\ 0 &     0 &      0 &      0 &       2 & -3 (n-1) & n (n-1) \\ 0 &     0 &      0 &      0 &       0 &       2 & -3 n     \\ 0 &     0 &      0 &      0 &       0 &       0 &   2 \end{bmatrix} \mathbf z = \left[ \begin{array}{ccccc} \langle 1, 1 \rangle & \langle 1 , x \rangle & \cdots & \langle 1 , x^{n-1} \rangle & \langle 1 , x^n \rangle \\ \langle x, 1 \rangle & \langle x , x \rangle & \cdots & \langle x , x^{n-1} \rangle & \langle x , x^n \rangle \\ \vdots & \vdots & \ddots & \vdots & \vdots \\ \langle x^{n-1}, 1 \rangle & \langle x^{n-1} , x \rangle & \cdots & \langle x^{n-1} , x^{n-1} \rangle & \langle x^{n-1} , x^n \rangle \\ \langle x^n, 1 \rangle & \langle x^n , x \rangle & \cdots & \langle x^n , x^{n-1} \rangle & \langle x^n , x^n \rangle \end{array} \right]^{-1} \left[ \begin{array}{c} \langle 1, \ln(x+1) \rangle \\ \langle x, \ln(x+1) \rangle \\ \vdots \\ \langle x^{n-1}, \ln(x+1) \rangle \\ \langle x^n, \ln(x+1) \rangle \end{array} \right] $$

We will then apply the following initial conditions to our solution to solve for the constants in the homogeneous solution, thus giving us the final solution:
 * $$y(-\frac{3}{4}) = 1, \, y'(-\frac{3}{4}) = 0$$

Solving with n = 0
Generating our A matrix using the above definition and using our previous value for c, we see:
 * $$\begin{bmatrix}

2 \end{bmatrix} \mathbf z = \left[ \begin{array}{c} 0.571134 \end{array} \right] $$

Then, solving for z, the coefficients of our particular solution:
 * $$\mathbf z = \left[

\begin{array}{c} 0.285567 \end{array} \right]$$

We can now write our solution as the combination of the homogeneous and particular:
 * $$y_0(x) = k_1 e^x + k_2 e^{2 x} + 0.285567$$

Applying initial conditions and solving for the constants gives us our final solution:
 * $$y_0(x) = 3.02491 e^x - 3.20187 e^{2 x} + 0.285567$$

Solving with n = 1
Generating our A matrix using the above definition and using our previous value for c, we see:
 * $$\begin{bmatrix}

2 & -3 \\ 0 & 2 \end{bmatrix} \mathbf z = \left[ \begin{array}{c} -0.093975 \\ 0.591208 \\ \end{array} \right] $$

Then, solving for z, the coefficients of our particular solution:
 * $$\mathbf z = \left[

\begin{array}{c} 0.396418 \\ 0.295604 \\ \end{array} \right]$$

We can now write our solution as the combination of the homogeneous and particular:
 * $$y_1(x) = k_1 e^x+k_2 e^{2 x}+0.295604 x+0.396418$$

Applying initial conditions and solving for the constants gives us our final solution:
 * $$y_1(x) = 4.12005 e^x - 5.02347 e^{2 x} +0.295604x+0.396418$$
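The n = 1 particular-solution coefficients and the constants $$k_1, k_2$$ can be cross-checked in a few lines; the Python sketch below (illustrative only; variable names are ad hoc) back-substitutes the upper-triangular system and applies the initial conditions at $$x=-\frac{3}{4}$$:

```python
import math

a_coef, b_coef = -3.0, 2.0
c = [-0.093975, 0.591208]    # projected excitation coefficients for n = 1

# back-substitute the upper-triangular 2x2 system A z = c
z1 = c[1] / b_coef
z0 = (c[0] - a_coef * z1) / b_coef
assert abs(z0 - 0.396418) < 1e-4
assert abs(z1 - 0.295604) < 1e-4

# apply y(-3/4) = 1, y'(-3/4) = 0 to y = k1 e^x + k2 e^{2x} + z1 x + z0
x0 = -0.75
e1, e2 = math.exp(x0), math.exp(2 * x0)
r1 = 1 - z1 * x0 - z0        # k1 e1 + k2 e2 = r1
r2 = -z1                     # k1 e1 + 2 k2 e2 = r2
k2 = (r2 - r1) / e2
k1 = (r1 - k2 * e2) / e1
assert abs(k1 - 4.12005) < 1e-3
assert abs(k2 - (-5.02347)) < 1e-3
```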

Comparison of different n
Using MATLAB to obtain the numeric solution for y, we can then plot it versus our projected solutions for y.
 * [[Image:R5.9 y,n=0,1.png]]

Here we observe, as expected, that again, the higher order polynomial more closely fits the graph.

Comparison to corresponding Taylor series
To generate the Taylor series solution for n=0, we observe that the Taylor series expansion of our excitation is 0, leading to the solution being equivalent to that of the homogeneous equation:
 * $$y(x) = k_1 e^x + k_2 e^{2x}$$

Substituting in our initial conditions and solving for $$k_1$$ and $$k_2$$ yields:
 * $$y(x) = 4.234 e^x - 4.48169 e^{2x}$$

To generate the Taylor series solution for n=1, we reuse the MATLAB code from Report 4.

Plotting these against their corresponding solutions determined above, we obtain the following plots for our three values of n:
 * [[Image:R5.9 y,n=0+taylor.png]]
 * [[Image:R5.9 y,n=1+taylor.png]]

For n=0, it is clear from the first plot that the projected solution more closely matches the numeric solution. This is expected, as it features a contribution from the excitation, whereas the Taylor series solution features no such contribution.

We can zoom in on portions of each graph to determine which method features better performance for n=1:
 * [[Image:R5.9 y,n=1+taylor zoomed.png]]
 * [[Image:R5.9 y,n=1+taylor zoomed 2.png]]

In these plots, we can observe that the projection-derived function has marginally more error than the Taylor-derived function.

=Team Contributions=