User:Eml5526.s11.team4/HW2

= Problem 2.1 - Derive Heat Problem in 1-D =

Problem Statement


Derive the heat problem in 1-D (1.1).

Solution
Starting with the first law of thermodynamics

where no work is being done,

Assuming the outer edge is insulated, there are two contributions,

Bringing in the sign for the normal vectors,

Noting the Taylor Series is

or equivalently

Bringing the Taylor series, (1.7), into the first law with no work, (1.5), we have

We note that $$f(x,t)$$ is the heat source term in (1.8) acting along the length of the body, so that the equation becomes

Addressing the time term by explicitly including density and the heat capacity,

For the problem at hand, the density and the area change with respect to $$x$$. With a linear change in area, the average area times the width gives an exact measure of the volume. We proceed by finding the average area, using a Taylor series approximation for the far side,

The time term then becomes,

Applying the same manipulation to the density,

Carrying out the product and dropping higher-order terms results in

Combining (1.9) and (1.14) results in

Again dropping higher order terms (second-order and higher),

Utilizing Fourier's Law,

where temperature is proportional to the internal energy and (1.17) is per unit area, (1.16) becomes

Finally, simplifying

Part 5
From lecture slide 7-2,4,

$$ \vec{\mathbf{K}}=\left[K_{ij}\right]=\mathbf{w}_{i}\cdot \mathbf{b}_{j} = \mathbf{a}_{i}\cdot \mathbf{b}_{j} $$

where
 * $$ \mathbf{w}_{i}=\mathbf{a}_{i}, \qquad \mathbf{b}_{1}=\mathbf{a}_{1}+\mathbf{a}_{2}+\mathbf{a}_{3}, \quad \mathbf{b}_{2}=2\mathbf{a}_{1}-\mathbf{a}_{2}+3\mathbf{a}_{3}, \quad \mathbf{b}_{3}=3\mathbf{a}_{1}+2\mathbf{a}_{2}+6\mathbf{a}_{3} $$

From Eq. (2-6) and using Eq. (2-2),

$$ \vec{\mathbf{F}}=\left\{F_{i}\right\}=\left\{\mathbf{a}_{i}\cdot \mathbf{v}\right\} $$ So,

Part 6
$$ \mathbf{d}=\vec{\mathbf{K}}^{-1}\vec{\mathbf{F}} $$

$$ \mathbf{d} $$ is the same as $$ \mathbf{d} $$ in Part 4.

 Matlab Code 

 Result 
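With the $$ \mathbf{b}_{j} $$ defined above, the solve $$ \mathbf{d}=\vec{\mathbf{K}}^{-1}\vec{\mathbf{F}} $$ can be sketched as follows (in Python rather than the original Matlab). The sketch assumes the basis $$ \{\mathbf{a}_{i}\} $$ is orthonormal, so that $$ \mathbf{a}_{i}\cdot\mathbf{a}_{j}=\delta_{ij} $$, and uses a hypothetical vector $$ \mathbf{v} $$ for illustration.

```python
import numpy as np

# Sketch of d = K^{-1} F for Parts 5 and 6.
# Assumption: {a_i} orthonormal, so the b_j defined above have coordinate columns:
C = np.array([[1.0,  2.0, 3.0],   # coefficients of a_1 in b_1, b_2, b_3
              [1.0, -1.0, 2.0],   # coefficients of a_2
              [1.0,  3.0, 6.0]])  # coefficients of a_3

K_petrov = C                      # K_ij = a_i . b_j  (Petrov-Galerkin, nonsymmetric)
K_bubnov = C.T @ C                # K_ij = b_i . b_j  (Bubnov-Galerkin, symmetric)

v = np.array([1.0, 2.0, 3.0])     # hypothetical load components a_i . v (assumed)
F = v                             # F_i = a_i . v with orthonormal a_i
d = np.linalg.solve(K_petrov, F)  # solve K d = F
```

Note that $$ C^{T}C $$ is symmetric by construction while $$ C $$ itself is not, matching the observation in Part 7 about the Bubnov-Galerkin and Petrov-Galerkin stiffness matrices.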

Part 7
We can observe that $$ \mathbf{K} $$ is symmetric and $$ \vec{\mathbf{K}} $$ is nonsymmetric.

The Petrov-Galerkin method produces the stiffness matrix more quickly than the Bubnov-Galerkin method, and the vector v is obtained quickly as well.

However, the Bubnov-Galerkin stiffness matrix is symmetric, which makes it easier to handle numerically.

= Problem 2.3  =

Problem Statement
Refer to lecture slide [[media:fe1.s11.mtg8.djvu|mtg-8]] for the problem statement.

Given
Using the basis { ai }, let $$\underline{w}(x)=\sum\limits_{i=1}^{n}{{{\beta }_{i}}{{\underline{a}}_{i}}(x)}$$; then the equation
$$ \displaystyle \underline{w}(x)\underline{P}(\underline{v})=0 \quad \forall\, \underline{w}(x)=\sum\limits_{i=1}^{n}{{{\beta }_{i}}{{\underline{a}}_{i}}(x)} $$     (3.1)

becomes

$$ \displaystyle \underline{w}(x)\underline{P}(\underline{v})=0 \quad \forall \{{{\beta }_{1}},...,{{\beta }_{n}}\}\in {{\mathbb{R}}^{n}} \quad s.t.\ \underline{w}(x)=\sum\limits_{i=1}^{n}{{{\beta }_{i}}{{\underline{a}}_{i}}(x)} $$     (3.2)

Objectives
Show that equation (2) is equivalent to
$$ \displaystyle {{\underline{a}}_{i}}(x)\underline{P}(\underline{v})=0 \quad i=1,2,...,n $$     (3.3)

Solutions
1). First, we derive Eq. (3) from Eq. (2).

Eq. (1) is valid for any function $$\underline{w}(x)$$ that is a linear combination of these linearly independent basis functions. Thus, Eq. (1) is valid for any particular set of $${{\beta }_{i}}$$ values in $$\underline{w}(x)=\sum\limits_{i=1}^{n}{{{\beta }_{i}}{{\underline{a}}_{i}}(x)}$$.

For example, we choose $${{\beta }_{1}}=1,{{\beta }_{2}}={{\beta }_{3}}=....={{\beta }_{n}}=0$$, so that $$\underline{w}(x)=\sum\limits_{i=1}^{n}{{{\beta }_{i}}{{\underline{a}}_{i}}(x)}={{\underline{a}}_{1}}(x)$$. By substituting $$\underline{w}(x)$$ into Eq. (2), we obtain:
$$ \displaystyle \underline{w}(x)\underline{P}(v)={{\underline{a}}_{1}}(x)\underline{P}(v)=0 $$

which is Eq. (3) for $$i=1$$.

Then, in a similar way, we choose $${{\beta }_{2}}=1,{{\beta }_{1}}={{\beta }_{3}}=....={{\beta }_{n}}=0$$ and obtain:

$$ \displaystyle \underline{w}(x)\underline{P}(v)={{\underline{a}}_{2}}(x)\underline{P}(v)=0 $$

which is Eq. (3) for $$i=2$$.

Similarly, continuing to choose $${{\beta }_{i}}=1,{{\beta }_{1}}={{\beta }_{2}}=..={{\beta }_{i-1}}={{\beta }_{i+1}}..={{\beta }_{n}}=0$$, we obtain:

$$ \displaystyle \underline{w}(x)\underline{P}(v)={{\underline{a}}_{i}}(x)\underline{P}(v)=0 $$

which is Eq. (3) for any value of $$i$$. So, Eq. (3) is derived from Eq. (2).

2). Secondly, we derive Eq. (2) from Eq. (3).

{ ai(x), i=1,2…n } is a family of linearly independent basis functions, and according to Eq. (3) we have:

$$ \displaystyle {{\underline{a}}_{1}}(x)\underline{P}(v)=0 $$

$$ \displaystyle {{\underline{a}}_{2}}(x)\underline{P}(v)=0 $$

$$ \displaystyle \vdots $$

$$ \displaystyle {{\underline{a}}_{n}}(x)\underline{P}(v)=0 $$

Linearly combining the above equations with any set of $${{\beta }_{i}}$$ values, we obtain:

$$ \displaystyle \begin{align} & {{\beta }_{1}}[{{\underline{a}}_{1}}(x)\underline{P}(v)]+{{\beta }_{2}}[{{\underline{a}}_{2}}(x)\underline{P}(v)]+...+{{\beta }_{n}}[{{\underline{a}}_{n}}(x)\underline{P}(v)] \\ & ={{\beta }_{1}}\cdot 0+{{\beta }_{2}}\cdot 0+...+{{\beta }_{n}}\cdot 0 \\ & =0 \\ \end{align} $$

Obviously,

$$ \displaystyle \begin{align} & {{\beta }_{1}}[{{\underline{a}}_{1}}(x)\underline{P}(v)]+{{\beta }_{2}}[{{\underline{a}}_{2}}(x)\underline{P}(v)]+...+{{\beta }_{n}}[{{\underline{a}}_{n}}(x)\underline{P}(v)] \\ & =\left[\sum\limits_{i=1}^{n}{{{\beta }_{i}}{{\underline{a}}_{i}}(x)}\right]\underline{P}(v) \\ & =\underline{w}(x)\underline{P}(v) \\ & =0 \\ \end{align} $$

Because the set of $${{\beta }_{i}}$$ values is arbitrary, Eq. (2) is valid for any $$\underline{w}(x)$$.

So, we obtain Eq.(2) using Eq.(3).

From the above two results, we conclude that Eq. (2) is equivalent to Eq. (3).
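The two directions of the equivalence can also be illustrated numerically. The sketch below uses hypothetical finite-dimensional stand-ins (a random matrix of basis vectors and a residual constructed to be orthogonal to them), not the problem's actual operator:

```python
import numpy as np

# Numerical illustration of the equivalence: a residual orthogonal to every
# basis vector is orthogonal to every linear combination of them.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))          # rows are stand-in basis vectors a_1..a_3 in R^5
# Construct a stand-in residual P(v) orthogonal to every a_i (project out the row space).
r = rng.standard_normal(5)
P_v = r - A.T @ np.linalg.lstsq(A.T, r, rcond=None)[0]

# Eq.(3): each basis vector annihilates the residual.
per_basis = A @ P_v                      # entries a_i . P(v)
# Eq.(2): so does any linear combination w = sum_i beta_i a_i.
beta = rng.standard_normal(3)
w = beta @ A
combo = w @ P_v
```

Both `per_basis` and `combo` vanish to numerical precision, mirroring the two directions of the proof.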

Problem Statement
Given that

$$\textbf{b}_i \cdot \textbf{P}(\textbf{v})=0 \qquad i=1,...,n$$ .....equation 1

$$\textbf{w} \cdot \textbf{P}(\textbf{v})=0  \qquad \forall \textbf{w}\in \mathbb{R}^{n}$$......equation 2

such that $$\textbf{w}= \sum \alpha _i\textbf{b}_i$$

Prove

equation 1 => equation 2

Solution
Equation 1 can be rewritten as multiple equations

such that

$$\textbf{b}_1 \cdot \textbf{P}(\textbf{v})=0$$    .... equation a

$$\textbf{b}_2 \cdot \textbf{P}(\textbf{v})=0$$    .... equation b

$$\textbf{b}_3 \cdot \textbf{P}(\textbf{v})=0$$    .... equation c

up to

$$\textbf{b}_n \cdot \textbf{P}(\textbf{v})=0$$    .... equation n

Here $$\textbf{b}_1,\textbf{b}_2,\textbf{b}_3,...,\textbf{b}_n$$ are basis vectors in the given coordinate system. Each of the equations a, b, c, ..., n can be considered a particular case in the same coordinate system.

From the given fact that $$\textbf{w}= \sum \alpha _i\textbf{b}_i$$

we can infer $$\textbf{w}= \alpha _1\textbf{b}_1+\alpha _2\textbf{b}_2+\alpha _3\textbf{b}_3+...+\alpha _n\textbf{b}_n$$ where $$\alpha _1,\alpha _2,\alpha _3,...,\alpha _n$$ are the components of the vector $$\textbf{w}$$

This implies, by equations a through n, that $$(\alpha _1\textbf{b}_1+\alpha _2\textbf{b}_2+\alpha _3\textbf{b}_3+...+\alpha _n\textbf{b}_n) \cdot \textbf{P}(\textbf{v})=\alpha _1(\textbf{b}_1 \cdot \textbf{P}(\textbf{v}))+...+\alpha _n(\textbf{b}_n \cdot \textbf{P}(\textbf{v}))=0$$

This is simply another way of writing

$$\textbf{w} \cdot \textbf{P}(\textbf{v})=0$$

Hence the implication (equation 1 => equation 2) is proved.

Problem Statement
Consider $$F=\{ 1, \cos iwx, \sin iwx \}$$ on the interval $$[0,T]$$, with $$i=1,2$$.

1). Construct $$\displaystyle \underline{\Gamma }(F) $$ and observe the properties of $$\displaystyle \underline \Gamma  (F)$$.

2). Find $$\det \underline{\Gamma }(F)$$.

3). Conclude that F is an orthogonal basis, i.e., $$\displaystyle {{\Gamma }_{ij}}=\left\langle {{b}_{i}},{{b}_{j}} \right\rangle ={{\delta }_{ij}}$$

Solutions
1). Calculate the entries of the Gram matrix, with $$\displaystyle w=\frac{2\pi }{T}$$:
$$ \displaystyle {{\Gamma }_{ij}}=\left\langle {{b}_{i}},{{b}_{j}} \right\rangle =\int\limits_{\Omega }{{{b}_{i}}(x){{b}_{j}}(x)dx} $$

where $${{b}_{1}}=1$$, $${{b}_{2}}=\cos wx$$, $${{b}_{3}}=\cos 2wx$$, $${{b}_{4}}=\sin wx$$, $${{b}_{5}}=\sin 2wx$$.

$$ \displaystyle {{\Gamma }_{11}}=\int_{0}^{T}{1\,dx}=T $$

$$ \displaystyle {{\Gamma }_{12}}={{\Gamma }_{21}}=\int_{0}^{T}{\cos wx\,dx}=\frac{\sin wT}{w}=0 $$

$$ \displaystyle {{\Gamma }_{22}}=\int_{0}^{T}{{{\left( \cos wx \right)}^{2}}dx}=\frac{2wx+\sin 2wx}{4w}\Big|_{0}^{T}=\frac{T}{2} $$

$$ \displaystyle {{\Gamma }_{13}}={{\Gamma }_{31}}=\int_{0}^{T}{\cos 2wx\,dx}=\frac{\sin 2wT}{2w}=0 $$

$$ \displaystyle {{\Gamma }_{23}}={{\Gamma }_{32}}=\int_{0}^{T}{\cos wx\cdot \cos 2wx\,dx}=\frac{3\sin wx+\sin 3wx}{6w}\Big|_{0}^{T}=0 $$

$$ \displaystyle {{\Gamma }_{33}}=\int_{0}^{T}{{{(\cos 2wx)}^{2}}dx}=\frac{4wx+\sin 4wx}{8w}\Big|_{0}^{T}=\frac{T}{2} $$

$$ \displaystyle {{\Gamma }_{14}}={{\Gamma }_{41}}=\int_{0}^{T}{\sin wx\,dx}=\frac{-\cos wx}{w}\Big|_{0}^{T}=0 $$

$$ \displaystyle {{\Gamma }_{24}}={{\Gamma }_{42}}=\int_{0}^{T}{\cos wx\cdot \sin wx\,dx}=\frac{-\cos 2wx}{4w}\Big|_{0}^{T}=0 $$

$$ \displaystyle {{\Gamma }_{34}}={{\Gamma }_{43}}=\int_{0}^{T}{\cos 2wx\cdot \sin wx\,dx}=-\frac{\cos 3wx-3\cos wx}{6w}\Big|_{0}^{T}=0 $$

$$ \displaystyle {{\Gamma }_{44}}=\int_{0}^{T}{{{(\sin wx)}^{2}}dx}=\frac{2wx-\sin 2wx}{4w}\Big|_{0}^{T}=\frac{T}{2} $$

$$ \displaystyle {{\Gamma }_{15}}={{\Gamma }_{51}}=\int_{0}^{T}{\sin 2wx\,dx}=\frac{-\cos 2wx}{2w}\Big|_{0}^{T}=0 $$

$$ \displaystyle {{\Gamma }_{25}}={{\Gamma }_{52}}=\int_{0}^{T}{\cos wx\cdot \sin 2wx\,dx}=\frac{-2{{\cos }^{3}}wx}{3w}\Big|_{0}^{T}=0 $$

$$ \displaystyle {{\Gamma }_{35}}={{\Gamma }_{53}}=\int_{0}^{T}{\cos 2wx\cdot \sin 2wx\,dx}=\frac{-\cos 4wx}{8w}\Big|_{0}^{T}=0 $$

$$ \displaystyle {{\Gamma }_{45}}={{\Gamma }_{54}}=\int_{0}^{T}{\sin wx\cdot \sin 2wx\,dx}=\frac{2{{\sin }^{3}}wx}{3w}\Big|_{0}^{T}=0 $$

$$ \displaystyle {{\Gamma }_{55}}=\int_{0}^{T}{{{(\sin 2wx)}^{2}}dx}=\frac{4wx-\sin 4wx}{8w}\Big|_{0}^{T}=\frac{T}{2} $$

So, the Gram matrix is:

$$ \displaystyle \Gamma =\left[ \begin{matrix} T & 0 & 0 & 0 & 0 \\ 0 & T/2 & 0 & 0 & 0 \\ 0 & 0 & T/2 & 0 & 0 \\ 0 & 0 & 0 & T/2 & 0 \\ 0 & 0 & 0 & 0 & T/2 \\ \end{matrix} \right] $$

It is observed that this matrix is a diagonal matrix, i.e., only the diagonal components are nonzero.

2). Calculate the determinant of the Gram matrix,

$$ \displaystyle \det (\Gamma )=\det \left[ \begin{matrix} T & 0 & 0 & 0 & 0 \\ 0 & T/2 & 0 & 0 & 0 \\ 0 & 0 & T/2 & 0 & 0 \\ 0 & 0 & 0 & T/2 & 0 \\ 0 & 0 & 0 & 0 & T/2 \\ \end{matrix} \right]=\frac{T^{5}}{16} $$

3). From the matrix, it is found that only the diagonal components, i.e. $$i=j$$, have nonzero values:

$$ \displaystyle {{\Gamma }_{ij}}=0 \quad i\ne j $$

$$ \displaystyle {{\Gamma }_{ij}}\ne 0 \quad i=j $$

This matches the pattern of the Kronecker delta (all off-diagonal entries vanish). So, F is an orthogonal basis.

Problem statement
Family of basis functions given:

with the same domain $$\displaystyle \Omega =[0,1]$$.

Do the following steps to determine whether $$\displaystyle \mathcal{F}$$ is an orthogonal family.

1)Construct the Gramian matrix $$\displaystyle \Gamma \left( \mathcal{F} \right)$$, and observe its property

2)Find the determinant of $$\displaystyle \Gamma \left( \mathcal{F} \right)$$

3)Conclude on the orthogonality of $$\displaystyle \mathcal{F}$$.

Solution
1)Construct the Gramian matrix $$\displaystyle \Gamma \left( \mathcal{F} \right)$$, and observe its property

First, let us assign notation to the family $$\displaystyle \mathcal{F}$$:

The Gramian matrix is defined as

Since,

then,

Then we can construct the Gramian matrix,

And we know that it is a symmetric matrix.

2)Find the determinant of $$\displaystyle \Gamma \left( \mathcal{F} \right)$$

The determinant can be calculated via WolframAlpha:

3)Conclude on the orthogonality of $$\displaystyle \mathcal{F}$$.

Since the determinant of $$\displaystyle \Gamma \left( \mathcal{F} \right)$$ is non-zero, the family of basis functions $$\displaystyle \mathcal{F}$$ is linearly independent. Orthogonality, however, requires the off-diagonal entries of $$\displaystyle \Gamma \left( \mathcal{F} \right)$$ to vanish; a non-zero determinant alone does not establish that $$\displaystyle \mathcal{F}$$ is an orthogonal family.
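As a numerical sketch of the Gramian construction on $$\Omega=[0,1]$$, the code below uses the hypothetical stand-in family $$\{1, x, x^2\}$$ (not necessarily the family $$\mathcal{F}$$ above), whose Gramian is the 3x3 Hilbert matrix:

```python
import numpy as np

# Sketch of the Gramian construction on Omega = [0,1], using the hypothetical
# stand-in family {1, x, x^2} for illustration.
x = np.linspace(0.0, 1.0, 100001)
basis = [np.ones_like(x), x, x**2]

def inner(f, g):
    # trapezoidal approximation of the L2 inner product on [0, 1]
    h = f * g
    return float(np.sum((h[1:] + h[:-1]) * 0.5 * np.diff(x)))

Gamma = np.array([[inner(bi, bj) for bj in basis] for bi in basis])
# For these monomials the Gramian is the Hilbert matrix
# [[1, 1/2, 1/3], [1/2, 1/3, 1/4], [1/3, 1/4, 1/5]]:
# symmetric, with a small but non-zero determinant (1/2160), yet not diagonal.
```

For this stand-in family the determinant is non-zero even though the off-diagonal entries are non-zero as well, which is why the determinant and the diagonality of the Gramian are checked as separate properties.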

Problem Statement
Given:

Where $$\displaystyle \underline{w}(x)\cong {{\underline{w}}^{h}}(x)=\sum\limits_{i=1}^{n}{{{C}_{i}}{{\underline{b}}_{i}}(x)}$$, $$\displaystyle \underline{u}(x)\cong {{\underline{u}}^{h}}(x)=\sum\limits_{j=1}^{n}{{{d}_{j}}{{\underline{b}}_{j}}(x)}$$.

Show that Eq. (8.1) is equivalent to Eq. (8.2).

==Solutions ==

1). First, we derive Eq. (8.2) from Eq. (8.1).

Eq. (8.1) is valid for any function wh (x) that is a linear combination of these linearly independent basis functions.

Thus, Eq. (8.1) is valid for any particular set of Ci values in $$\displaystyle {{\underline{w}}^{h}}(x)=\sum\limits_{i=1}^{n}{{{C}_{i}}{{\underline{b}}_{i}}(x)}$$.

For example, we choose C1=1, C2=C3=..=Cn=0, so that $$\displaystyle {{\underline{w}}^{h}}(x)=\sum\limits_{i=1}^{n}{{{C}_{i}}{{\underline{b}}_{i}}(x)}={{\underline{b}}_{1}}(x)$$. By substituting $$\displaystyle {{\underline{w}}^{h}}(x)$$ into Eq. (8.1), we obtain:

Which is Eq. (8.2) for $$\displaystyle i=1$$.

Then, in a similar way, we choose C2=1, C1=C3=..=Cn=0 and obtain:

Which is Eq. (8.2) for $$\displaystyle i=2$$.

Similarly, continuing to choose Ci=1, C1=C2=.. Ci-1 = Ci+1 =..=Cn=0, we obtain:

Which is Eq. (8.2) for any value of $$\displaystyle i$$. So, Eq. (8.2) is derived from Eq. (8.1).

2). Secondly, we derive Eq. (8.1) from Eq. (8.2).

{ bi(x), i=1,2…n } is a family of linearly independent basis functions, and according to Eq. (8.2) we have:

Linearly combining the above equations with any set of Ci values, we obtain:

Obviously,

Because the set of Ci values is arbitrary, Eq. (8.1) is valid for any wh (x). So, we obtain Eq. (8.1) from Eq. (8.2).

From the above two results, we conclude that Eq. (8.1) is equivalent to Eq. (8.2).

Problem Statement
Consider $$\scriptstyle \left\{b_j(x); j=0,1,...,n\right\}=\cos(jx+\phi)$$. Select $$\phi$$ such that $$\scriptstyle b_j(x=0)\ne 0$$. Consider $$\phi=\pi/4$$ and $$\pi/2$$.


 * 1) Let $$\scriptstyle{ n=2  \rightarrow \text{ndof} = n+1 = 2+1=3 }$$, where ndof is the number of degrees of freedom and  $$\scriptstyle \textbf{ d} = \left\{d_j;j=0,...,n\right\}$$.
 * 2) Find two equations that enforce boundary conditions for $$u^h(x) = \sum_{j=0}^n{d_jb_j(x)}$$
 * 3) Find one more equation to solve for $$\textbf{ d} =\left\{ d_j\right\}_{3\times 1}\ (j=0,1,2)$$ by projecting the residue $$P(u^h)$$ onto a basis function $$b_k(x)$$ with $$k=0,1,2$$, such that the additional equation is linearly independent of the equations in part 2.

 * 4) Display the three equations in matrix form,

$$\textbf{Kd=F}$$

and observe the symmetry properties of $$\textbf{K}$$.
 * 5) Solve for $$\textbf{d}$$.
 * 6) Construct $$u^h(x)$$ and plot $$u^h(x)$$ versus $$u(x)$$.
 * 7) Repeat 1) through 6) for $$n=4$$ and $$n=6$$.

Exact Solution
After applying boundary conditions

$$ n = 2 $$
with

Boundary Conditions
The boundary conditions are

Which defines two of the needed three unknowns.

Additional Equation
For the third condition, we project the residue on the basis function.

Recalling

So that

Matrix Form
There are now three equations for three unknowns which can be expressed in matrix form.

Noting that $$ \mathbf{K} $$ is not symmetric.

$$ \phi = \pi/4 $$
Solving the system for $$ \phi = \pi/4 $$



$$ \phi = \pi/2 $$
Solving the system for $$ \phi = \pi/2 $$



$$n=4$$
with

Boundary Conditions
The boundary conditions are

Which defines two of the needed five unknowns.

Additional Equations
As with $$ n = 2$$, we project the residue onto the basis functions, this time for $$ b_1, b_2, \text{ and } b_3$$

$$\phi = \pi/4 $$
With the boundary conditions adding the two equations

In matrix form

$$ \begin{bmatrix} 0.70711 & -0.21296 & -0.93723 & -0.79982 & 0.072944\\ 0 & -0.97706 & -0.69742 & 1.8007 & 3.9893\\ 0 & -0.29193 & -0.71256 & -0.37114 & 1.5396\\ 0 & -0.17814 & -2.3464 & -6.2838 & -7.1682\\ 0 & -0.041238 & -2.7928 & -8.9403 & -12.901 \end{bmatrix}\begin{bmatrix}d_0\\d_1\\d_2\\d_3\\d_4\end{bmatrix}= \begin{bmatrix} 0\\ 4.0\\ -0.80986\\ 0.53759\\ 1.3074 \end{bmatrix} $$

With the solution

$$ \mathbf{d}=\begin{bmatrix} 0.1317\\ 5.6638\\ -1.5368\\ 0.40319\\ -0.066163 \end{bmatrix} $$



$$\phi = \pi/2 $$
With the boundary conditions adding the two equations

The above system does not involve $$d_0$$, and is thus singular.

$$n=6$$
with

Boundary Conditions
The boundary conditions are

Which defines two of the needed seven unknowns.

Additional Equations
As with $$ n = 2$$, we project the residue onto the basis functions, this time for $$ b_1, b_2, b_3, b_4, \text{ and } b_5$$

$$\phi = \pi/4 $$
With the boundary conditions adding the two equations

In matrix form

$$ \begin{bmatrix}0.70711 & -0.21296 & -0.93723 & -0.79982 & 0.072944 & 0.87864 & 0.87652\\ 0 & -0.97706 & -0.69742 & 1.8007 & 3.9893 & 2.3874 & -2.8882\\ 0 & -0.29193 & -0.71256 & -0.37114 & 1.5396 & 4.896 & 8.1699\\ 0 & -0.17814 & -2.3464 & -6.2838 & -7.1682 & -0.29708 & 11.966\\ 0 & -0.041238 & -2.7928 & -8.9403 & -12.901 & -7.7865 & 5.9511\\ 0 & 0.096228 & -1.792 & -7.2568 & -13.709 & -15.728 & -9.7467\\ 0 & 0.19584 & -0.047533 & -2.8032 & -10.066 & -20.402 & -27.035 \end{bmatrix} \begin{bmatrix}d_0\\d_1\\d_2\\d_3\\d_4\\d_5\\d_6\end{bmatrix}= \begin{bmatrix} 0\\ -4.0\\ -0.80986\\ 0.53759\\ 1.3074\\ 1.2783\\ 0.71075 \end{bmatrix} $$

With the solution

$$ \mathbf{d}=\begin{bmatrix} 0.14316\\ 5.6686\\ -1.6181\\ 0.53522\\ -0.16633\\ 0.03925\\ -0.0055692 \end{bmatrix} $$



$$\phi = \pi/2 $$
As before, this system of equations is singular.
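The singularity for $$\phi=\pi/2$$ can be traced to the basis values at $$x=0$$: since $$b_j(0)=\cos(j\cdot 0+\phi)=\cos\phi$$ for every $$j$$, choosing $$\phi=\pi/2$$ makes every basis function vanish at $$x=0$$, so no combination of the $$d_j$$ can enforce a boundary condition there. A quick check in Python:

```python
import math

# b_j(x) = cos(j*x + phi); at x = 0 every basis function equals cos(phi),
# independent of j. With phi = pi/2 this is 0, so the x = 0 boundary row of
# K is identically zero and the system K d = F becomes singular.
def b(j, x, phi):
    return math.cos(j * x + phi)

vals_pi4 = [b(j, 0.0, math.pi / 4) for j in range(7)]   # phi = pi/4: all cos(pi/4)
vals_pi2 = [b(j, 0.0, math.pi / 2) for j in range(7)]   # phi = pi/2: all zero
```

This is exactly the condition $$b_j(x=0)\ne 0$$ stated in the problem, which $$\phi=\pi/4$$ satisfies and $$\phi=\pi/2$$ violates.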