User:Eml5526.s11.team5/sub2

= HW 2.1 =

Problem
Derive the expression for the differential equation governing the 1-D heat problem.

Find
By taking an infinitesimal slice of the bar (shown in red in Figure 1), $$ dx $$, develop the expression for the thermal response of the 1-D heat problem.

Solution
Heat is transferred by conduction, convection, and thermal radiation; in this problem we consider heat conduction only.

Consider a small infinitesimal slice of the bar of thickness $$ dx $$, as shown in the figure.

Let the heat flux be denoted by $$ q $$.

Here $$ \tilde{m}(x) = m(x)\,c $$
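Although the full derivation is not reproduced here, the energy balance on the slice can be sketched as follows (a standard derivation; the cross-sectional area $$ A $$ and volumetric heat source $$ Q $$ are assumptions introduced for illustration, and $$ \tilde{m}(x) = m(x)\,c $$ is the heat capacity per unit length as defined above):

```latex
% Energy balance on the slice dx: (heat in) - (heat out) + (generation)
%   = (rate of change of stored energy)
q(x)A - q(x+dx)A + QA\,dx = \tilde{m}(x)\,dx\,\frac{\partial T}{\partial t}
% Divide by dx and let dx -> 0:
-\frac{\partial (qA)}{\partial x} + QA = \tilde{m}(x)\,\frac{\partial T}{\partial t}
% Fourier's law q = -k\,\partial T/\partial x then gives the 1-D heat equation:
\frac{\partial}{\partial x}\!\left(kA\,\frac{\partial T}{\partial x}\right) + QA
  = \tilde{m}(x)\,\frac{\partial T}{\partial t}
```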

--Lokeshdahiya 20:32, 2 February 2011 (UTC)

= HW 2.2 =

Given
1.
$$ \underline{a}_i \bullet \underline{a}_j = \delta_{ij} $$ (Eq. 2.2.1)

where $$ \delta_{ij} = \begin{cases} 1, & i = j \\ 0, & i \neq j \end{cases} $$

2.
$$ \underline{b}_j = b_{jk} \underline{a}_k $$ (Eq. 2.2.2)

3.
$$ [b_{jk}] = \begin{bmatrix} 1 & 1 & 1 \\ 2 & -1 & 3 \\ 3 & 2 & 6 \end{bmatrix} $$

e.g. $$ \underline{b}_2 = 2\underline{a}_1 + (-1)\underline{a}_2 + 3\underline{a}_3 $$

Consider

$$ \underline{v} = 5\underline{a}_1 - 7\underline{a}_2 - 4\underline{a}_3 $$ (Eq. 2.2.3)

Find
1. $$ \det[b_{jk}] $$

2. The Gram matrix $$ \underline{\Gamma}(\underline{b}_1, \underline{b}_2, \underline{b}_3) = \underline{K} $$ and $$ \det(\underline{\Gamma}) $$

3. $$ \underline{F} = \{F_i\} = \{\underline{b}_i \bullet \underline{v}\} $$

4. Solve $$ \underline{K} \underline{d} = \underline{F} $$ for $$ \underline{d} = \{v_j\} $$

5. Use $$ \underline{w}_i \bullet \underline{P}(\underline{v}) = 0 $$ to find $$ \bar{\underline{K}} \underline{d} = \bar{\underline{F}} $$. What are $$ \bar{\underline{K}} $$ and $$ \bar{\underline{F}} $$? Here $$ \underline{d} = \{v_j\} $$, and $$ \underline{w}_i $$ is a linearly independent family of vectors (a basis), e.g. $$ \{\underline{w}_i,\ i = 1, 2, 3, ..., n\} $$

6. Solve for $$ \underline{d} $$; compare to $$ \underline{d} $$ in part 4

7. Observe symmetry properties of $$ \underline{K} $$ and $$ \bar{\underline{K}} $$

Solution
1.


$$ \det[b_{jk}] = \det \begin{bmatrix} 1 & 1 & 1 \\ 2 & -1 & 3 \\ 3 & 2 & 6 \end{bmatrix} $$


Expanding along the first row (cofactor expansion),

$$ \det[a_{ij}] = a_{11}(a_{22}a_{33} - a_{32}a_{23}) - a_{12}(a_{21}a_{33} - a_{31}a_{23}) + a_{13}(a_{21}a_{32} - a_{31}a_{22}) $$ (Eq. 2.2.4)


$$ 1(-1 \cdot 6 - 2 \cdot 3) - 1(2 \cdot 6 - 3 \cdot 3) + 1(2 \cdot 2 - 3 \cdot (-1)) = 1(-6-6) - 1(12-9) + 1(4+3) = -12 - 3 + 7 = -8 $$

$$ \det[ b_{jk}] = -8 \quad $$
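The determinant can be checked numerically; a quick sketch in Python/NumPy (the original work used MATLAB, so this is an equivalent stand-in):

```python
import numpy as np

# Rows of B are the components of b_1, b_2, b_3 in the a-basis (Eq. 2.2.2)
B = np.array([[1, 1, 1],
              [2, -1, 3],
              [3, 2, 6]], dtype=float)

print(int(round(np.linalg.det(B))))  # -8
```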

2.


$$ \underline{\Gamma} = (\underline{b}_i \bullet \underline{b}_j) $$ (Eq. 2.2.5)

$$ \underline{b}_1 = 1\underline{a}_1 + 1\underline{a}_2 + 1\underline{a}_3 $$ (Eq. 2.2.6a)

$$ \underline{b}_2 = 2\underline{a}_1 + (-1)\underline{a}_2 + 3\underline{a}_3 $$ (Eq. 2.2.6b)

$$ \underline{b}_3 = 3\underline{a}_1 + 2\underline{a}_2 + 6\underline{a}_3 $$ (Eq. 2.2.6c)

$$ \Gamma_{11} = \underline{b}_1 \bullet \underline{b}_1 $$

$$ \Gamma_{12} = \underline{b}_1 \bullet \underline{b}_2 $$

$$ \Gamma_{13} = \underline{b}_1 \bullet \underline{b}_3 $$

$$ \Gamma_{21} = \underline{b}_2 \bullet \underline{b}_1 $$

$$ \vdots $$

$$ \Gamma_{ij} = \underline{b}_i \bullet \underline{b}_j $$ (Eq. 2.2.7)

Also, since the dot product is commutative,

$$ \Gamma_{ij} = \underline{b}_i \bullet \underline{b}_j = \underline{b}_j \bullet \underline{b}_i = \Gamma_{ji} $$ (Eq. 2.2.8)

so the $$ \underline{\Gamma} $$ matrix is symmetric.

$$ \underline{\Gamma} = \begin{bmatrix} \Gamma_{11} & \Gamma_{12} & \Gamma_{13} \\ \Gamma_{21} & \Gamma_{22} & \Gamma_{23} \\ \Gamma_{31} & \Gamma_{32} & \Gamma_{33} \end{bmatrix} $$ (Eq. 2.2.9)

Using MATLAB (code is attached below):

$$ \underline{\Gamma} = \begin{bmatrix} 3 & 4 & 11 \\ 4 & 14 & 22 \\ 11 & 22 & 49 \end{bmatrix} $$

$$ \det(\underline{\Gamma}) = 64 $$

3.

$$ \underline{F} = \{\underline{b}_i \bullet \underline{v}\} = [b_{jk}][v] = \begin{bmatrix} 1 & 1 & 1 \\ 2 & -1 & 3 \\ 3 & 2 & 6 \end{bmatrix} \begin{bmatrix} 5 \\ -7 \\ -4 \end{bmatrix} $$ (Eq. 2.2.10)

$$ \underline{F} = \begin{bmatrix} -6 \\ 5 \\ -23 \end{bmatrix} $$

4.

$$ \underline{\Gamma} \underline{d} = \underline{F} \Rightarrow [d_j] = [\Gamma]^{-1} [F] $$ (Eq. 2.2.11)

$$ [d_j] = \begin{bmatrix} 3 & 4 & 11 \\ 4 & 14 & 22 \\ 11 & 22 & 49 \end{bmatrix}^{-1} \begin{bmatrix} -6 \\ 5 \\ -23 \end{bmatrix} $$

$$ [d_j] = \begin{bmatrix} 8.3750 \\ 5.6250 \\ -4.8750 \end{bmatrix} $$
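Part 4 can be verified by solving the linear system directly (note the inverse, not the transpose); a sketch in Python/NumPy:

```python
import numpy as np

B = np.array([[1, 1, 1],
              [2, -1, 3],
              [3, 2, 6]], dtype=float)
Gamma = B @ B.T                          # K = Gram matrix
v = np.array([5, -7, -4], dtype=float)   # a-basis components of v (Eq. 2.2.3)
F = B @ v                                # F_i = b_i . v  ->  [-6, 5, -23]

d = np.linalg.solve(Gamma, F)            # d = Gamma^{-1} F
print(d)  # [ 8.375  5.625 -4.875]
```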

5. Consider $$ \underline{w}_i \bullet \underline{P}(\underline{v}) = 0 $$ and assume $$ \underline{w}_i = \underline{a}_i $$

Therefore

$$ \underline{a}_i \bullet \underline{P}(\underline{v}) = 0 $$ (Eq. 2.2.12)

By definition of $$ \underline{P(v)} $$

$$ \underline{P}(\underline{v}) = \sum_{j=1}^n \underline{b}_j v_j - \underline{v} = \underline{0} $$ (Eq. 2.2.13)

$$ \Rightarrow \underline{a}_i \bullet \sum_{j=1}^n \underline{b}_j v_j = \underline{a}_i \bullet \underline{v} $$ (Eq. 2.2.14)

$$ \begin{matrix} \sum_{j=1}^n & (\underline{a}_i \bullet \underline{b}_j) & v_j & = & \underline{a}_i \bullet \underline{v} \\ & \Downarrow & \Downarrow & & \Downarrow \\ & \bar{K}_{ij} & \underline{d} & & \bar{\underline{F}} \end{matrix} $$ (Eq. 2.2.15)

From the given Eq. 2.2.2, $$ \underline{a}_i \bullet \underline{b}_j = b_{ji} $$ (since the a-basis is orthonormal), so

$$ \bar{\underline{K}} = [b_{jk}]^T = \begin{bmatrix} 1 & 2 & 3 \\ 1 & -1 & 2 \\ 1 & 3 & 6 \end{bmatrix} $$

Row by row,

$$ (\underline{a}_1 \bullet \underline{b}_1,\ \underline{a}_1 \bullet \underline{b}_2,\ \underline{a}_1 \bullet \underline{b}_3) = (1, 2, 3) $$ (Eq. 2.2.16a)

$$ (\underline{a}_2 \bullet \underline{b}_1,\ \underline{a}_2 \bullet \underline{b}_2,\ \underline{a}_2 \bullet \underline{b}_3) = (1, -1, 2) $$ (Eq. 2.2.16b)

$$ (\underline{a}_3 \bullet \underline{b}_1,\ \underline{a}_3 \bullet \underline{b}_2,\ \underline{a}_3 \bullet \underline{b}_3) = (1, 3, 6) $$ (Eq. 2.2.16c)

Following the same procedure as before, the Gram matrix of the rows of $$ \bar{\underline{K}} $$ is

$$ \underline{\Gamma}_a = \begin{bmatrix} 14 & 5 & 25 \\ 5 & 6 & 10 \\ 25 & 10 & 46 \end{bmatrix} $$ (Eq. 2.2.17)

$$ \det(\underline{\Gamma}_a) = 64 $$

Concerning $$ \bar{\underline{F}} $$: since the a-basis is orthonormal, $$ \bar{F}_i = \underline{a}_i \bullet \underline{v} = v_i $$, i.e. the components of $$ \underline{v} $$ in the a-basis carry over through the identity matrix

$$ [I] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} $$ (Eq. 2.2.18)

Therefore:

$$ [\bar{F}] = [I][v] $$ (Eq. 2.2.19)

$$ [\bar{F}] = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} 5 \\ -7 \\ -4 \end{bmatrix} = \begin{bmatrix} 5 \\ -7 \\ -4 \end{bmatrix} $$ (Eq. 2.2.20)

6.

$$ \bar{\underline{K}} \underline{d} = \bar{\underline{F}} \Rightarrow [d_j] = [\bar{\underline{K}}]^{-1} [\bar{\underline{F}}] $$ (Eq. 2.2.21)

$$ [d_j] = \begin{bmatrix} 1 & 2 & 3 \\ 1 & -1 & 2 \\ 1 & 3 & 6 \end{bmatrix}^{-1} \begin{bmatrix} 5 \\ -7 \\ -4 \end{bmatrix} = \begin{bmatrix} 8.3750 \\ 5.6250 \\ -4.8750 \end{bmatrix} $$

By inspection, the solution $$ \underline{d} $$ is the same as in part 4.
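The weighted-residual system of parts 5 and 6 can be checked the same way (again Python/NumPy standing in for the MATLAB code):

```python
import numpy as np

B = np.array([[1, 1, 1],
              [2, -1, 3],
              [3, 2, 6]], dtype=float)
v = np.array([5, -7, -4], dtype=float)

K_bar = B.T          # K_bar_ij = a_i . b_j = b_ji (a-basis orthonormal)
F_bar = v            # F_bar_i = a_i . v = v_i

d = np.linalg.solve(K_bar, F_bar)
print(d)  # [ 8.375  5.625 -4.875] -- identical to part 4
```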

7.

$$ \underline{K} = \underline{\Gamma} $$ is symmetric (by the commutativity of the dot product), whereas $$ \bar{\underline{K}} = [b_{jk}]^T $$ is not symmetric. The determinants of the two Gram matrices, however, came out the same: $$ \det(\underline{\Gamma}) = \det(\underline{\Gamma}_a) = 64 = \det[b_{jk}]^2 $$.

= HW 2.3 =

== 2.3.1 Background ==

In defining basis  $$ \left \{\underline b_{i},i=1,...,n  \right \} $$   for   $$ \mathbb{R}^{n} $$,   $$ \left \{\underline b_{i} \right \} $$   is not necessarily orthonormal, where:

$$ \underline{b}_i \cdot \underline{b}_j \neq \delta_{ij}, \quad \delta_{ij} = \left\{\begin{matrix} 1 & for\ i=j \\ 0 & for\ i\neq j \end{matrix}\right. $$ (Eq. 2.3.1)

(Fig. 2.3.1)

A vector  $$ \textrm{\underline v} $$   within basis   $$ \left \{\underline b_{i} \right \} $$   may be defined as:

$$ \underline{v} = \sum_{i=1}^{n} v_i \underline{b}_i $$ (Eq. 2.3.2)

or

$$ \underline{v} = \sum_{j=1}^{n} v_j \underline{b}_j $$ (Eq. 2.3.3)

So if

$$ \underline{P}(\underline{v}) = \sum_{j=1}^{n} v_j \underline{b}_j - \underline{v} = \underline{0} $$ (Eq. 2.3.4)

then

$$ \underline{b}_i \cdot \underline{P}(\underline{v}) = \underline{b}_i \cdot \sum_{j=1}^{n} v_j \underline{b}_j - \underline{b}_i \cdot \underline{v} = 0 $$ (Eq. 2.3.5)

And if

$$ \underline{b}_i \cdot \sum_{j=1}^{n} v_j \underline{b}_j = \sum_{j=1}^{n} (\underline{b}_i \cdot \underline{b}_j) v_j = \underline{b}_i \cdot \underline{v} $$ (Eq. 2.3.6)

then

$$ \underline{b}_i \cdot \underline{P}(\underline{v}) = 0\ \forall\ i = 1, ..., n $$ (Eq. 2.3.7)

The following equation

$$ \sum_{j=1}^{n} (\underline{b}_i \cdot \underline{b}_j) v_j = \underline{b}_i \cdot \underline{v} $$ (Eq. 2.3.8)

can also be written as

$$ \left[ K_{ij} \right] \left\{ v_j \right\} = \left\{ F_i \right\} $$ (Eq. 2.3.9)

where

$$ \left[ K_{ij} \right] = (\underline{b}_i \cdot \underline{b}_j) $$ (Eq. 2.3.10)

The equation above may also be written in the form:

$$ \underline{K}\,\underline{d} = \underline{F} $$ (Eq. 2.3.11)

$$ \underline{K}\,\underline{d} = \underline{F} $$ may also be obtained by dotting $$ \underline{P}(\underline{v}) $$ as defined above with any linearly independent family of vectors (a basis), such as $$ \left\{ \underline{w}_i,\ i=1,...,n \right\} $$. In that case,

$$ \underline{w}_i \cdot \underline{P}(\underline{v}) = 0\ \forall\ i = 1, ..., n $$ (Eq. 2.3.12)

This equation is more general than

$$ \underline{b}_i \cdot \underline{P}(\underline{v}) = 0\ \forall\ i = 1, ..., n $$ (Eq. 2.3.13)

as previously defined.

Considering now

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0\ \forall\ \underline{w} \in \mathbb{R}^{n} $$ (Eq. 2.3.14)

and using our $$ \left\{ \underline{b}_i \right\} $$ basis while setting

$$ \underline{w} = \sum_{i} \alpha_i \underline{b}_i $$ (Eq. 2.3.15)

our above equation becomes equivalent to two statements:

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0\ \forall\ \underline{w} = \sum_{i} \alpha_i \underline{b}_i \in \mathbb{R}^n $$ (Eq. 2.3.16)

and

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0\ \forall\ \left\{ \alpha_1, ..., \alpha_n \right\} \in \mathbb{R}^n $$, such that $$ \underline{w} = \sum_{i} \alpha_i \underline{b}_i $$ (Eq. 2.3.17)

To prove that this last equation is equivalent to

$$ \underline{b}_i \cdot \underline{P}(\underline{v}) = 0\ \forall\ i = 1, ..., n $$ (Eq. 2.3.18)

three choices are made.

In Choice #1, out of all arbitrary $$ \left\{\alpha_1, ..., \alpha_n\right\} $$, set $$ \alpha_1 = 1 $$, $$ \alpha_2 = ... = \alpha_n = 0 $$, i.e. $$ \left\{ \alpha_i \right\} = \left\{ 1, 0, ..., 0 \right\} $$

Then,

$$ \underline{w} = \sum_{i} \alpha_i \underline{b}_i = \sum_{i} \left\{ 1, 0, ..., 0 \right\} \underline{b}_i = \underline{b}_1 $$ (Eq. 2.3.19)

and therefore

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0 $$ (Eq. 2.3.20)

and

$$ \underline{b}_1 \cdot \underline{P}(\underline{v}) = 0 $$ (Eq. 2.3.21)

In Choice #2, where  $$ \alpha_2=1\quad$$, $$\alpha_1=0\quad$$, $$\alpha_3=...=\alpha_n=0 \quad$$  ,   $$ \left \{ \alpha _i \right \}=\left \{ 0,1,0,...,0 \right \} $$

Then,

$$ \underline{w} = \sum_{i} \alpha_i \underline{b}_i = \sum_{i} \left\{ 0, 1, 0, ..., 0 \right\} \underline{b}_i = \underline{b}_2 $$ (Eq. 2.3.22)

and therefore

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0 $$ (Eq. 2.3.23)

and

$$ \underline{b}_2 \cdot \underline{P}(\underline{v}) = 0 $$ (Eq. 2.3.24)

In Choice #3, where  $$ \alpha_n=1,\ \alpha_1=...=\alpha_{n-1}=0 $$   ,$$ \left \{ \alpha_i \right \}=\left \{ 0,..., 0,1 \right \} $$

Then,

$$ \underline{w} = \sum_{i} \alpha_i \underline{b}_i = \sum_{i} \left\{ 0, ..., 0, 1 \right\} \underline{b}_i = \underline{b}_n $$ (Eq. 2.3.25)

and therefore

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0 $$ (Eq. 2.3.26)

and

$$ \underline{b}_n \cdot \underline{P}(\underline{v}) = 0\ \forall\ \left\{ \alpha_1, ..., \alpha_n \right\} \in \mathbb{R}^n $$ such that $$ \underline{w} = \sum_{i} \alpha_i \underline{b}_i $$ (Eq. 2.3.27)

This end result is perfectly equivalent to:

$$ \underline{b}_i \cdot \underline{P}(\underline{v}) = 0\ \forall\ i = 1, ..., n $$ (Eq. 2.3.28)
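A small numerical illustration of the argument above (a sketch; the basis matrix and target vector are made up for the demonstration): enforcing $$ \underline{b}_i \cdot \underline{P}(\underline{v}) = 0 $$ for every $$ i $$ and solving Eq. 2.3.9 recovers $$ \underline{v} $$ exactly.

```python
import numpy as np

# A made-up, non-orthonormal basis for R^3: rows are b_1, b_2, b_3
B = np.array([[1.0, 0.5, 0.0],
              [0.2, 2.0, 0.3],
              [0.0, 1.0, 1.5]])
v = np.array([3.0, -1.0, 2.0])   # target vector (standard components)

K = B @ B.T                      # K_ij = b_i . b_j   (Eq. 2.3.10)
F = B @ v                        # F_i  = b_i . v
coeffs = np.linalg.solve(K, F)   # the v_j of Eq. 2.3.9

# P(v) = sum_j v_j b_j - v should vanish
residual = B.T @ coeffs - v
print(np.allclose(residual, 0.0))  # True
```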

== 2.3.2 Solution      ==

Now we shall re-attempt the analysis above, redefining our basis to be $$ \left\{\underline{a}_i,\ i=1,...,n \right\} $$, this time where

$$ \underline{a}_i \cdot \underline{a}_j = \delta_{ij} $$, where $$ \delta_{ij} = \left\{\begin{matrix} 1 & for\ i=j \\ 0 & for\ i\neq j \end{matrix}\right. $$ (Eq. 2.3.29)

which makes it an orthonormal basis.

(Fig. 2.3.2)

As before, a vector  $$ \textrm {\underline v} $$   within basis   $$ \left \{\underline a_{i} \right \} $$   may be defined as:

$$ \underline{v} = \sum_{i=1}^{n} v_i \underline{a}_i $$ (Eq. 2.3.30)

or

$$ \underline{v} = \sum_{j=1}^{n} v_j \underline{a}_j $$ (Eq. 2.3.31)

So if

$$ \underline{P}(\underline{v}) = \sum_{j=1}^{n} v_j \underline{a}_j - \underline{v} = \underline{0} $$ (Eq. 2.3.32)

Then

$$ \underline{a}_i \cdot \underline{P}(\underline{v}) = \underline{a}_i \cdot \sum_{j=1}^{n} v_j \underline{a}_j - \underline{a}_i \cdot \underline{v} = 0 $$ (Eq. 2.3.33)

And if

$$ \underline{a}_i \cdot \sum_{j=1}^{n} v_j \underline{a}_j = \underline{a}_i \cdot \underline{v} = \sum_{j=1}^{n} (\underline{a}_i \cdot \underline{a}_j) v_j = \sum_{j=1}^{n} \delta_{ij} v_j = v_i $$ (Eq. 2.3.34)

then

$$ \underline{a}_i \cdot \underline{P}(\underline{v}) = 0\ \forall\ i = 1, ..., n $$ (Eq. 2.3.35)

Considering again

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0\ \forall\ \underline{w} \in \mathbb{R}^{n} $$ (Eq. 2.3.36)

but this time setting

$$ \underline{w} = \sum_{i} \beta_i \underline{a}_i $$ (Eq. 2.3.37)

then the above equation is equivalent to:

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0\ \forall\ \left\{ \beta_1, ..., \beta_n \right\} \in \mathbb{R}^n $$, such that $$ \underline{w} = \sum_{i} \beta_i \underline{a}_i $$ (Eq. 2.3.38)

To now prove that the above equation is equivalent to

$$ \underline{a}_i \cdot \underline{P}(\underline{v}) = 0\ \forall\ i = 1, ..., n $$ (Eq. 2.3.39)

three choices are made once again:

For Choice #1, out of all arbitrary $$ \left\{\beta_1, ..., \beta_n\right\} $$, set $$ \beta_1 = 1 $$, $$ \beta_2 = ... = \beta_n = 0 $$, i.e. $$ \left\{ \beta_i \right\} = \left\{ 1, 0, ..., 0 \right\} $$

Then

$$ \underline{w} = \sum_{i} \beta_i \underline{a}_i = \sum_{i} \left\{ 1, 0, ..., 0 \right\} \underline{a}_i = \underline{a}_1 $$ (Eq. 2.3.40)

and therefore,

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0 $$ (Eq. 2.3.41)

and so

$$ \underline{a}_1 \cdot \underline{P}(\underline{v}) = 0 $$ (Eq. 2.3.42)

For Choice #2, where  $$ \beta_2=1 \quad $$, $$\beta_1=0 \quad$$, $$\beta_3=...=\beta_n=0 \quad$$  ,   $$ \left \{ \beta _i \right \}=\left \{ 0,1,0,...,0 \right \} $$

Therefore,

$$ \underline{w} = \sum_{i} \beta_i \underline{a}_i = \sum_{i} \left\{ 0, 1, 0, ..., 0 \right\} \underline{a}_i = \underline{a}_2 $$ (Eq. 2.3.43)

and

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0 $$ (Eq. 2.3.44)

so therefore,

$$ \underline{a}_2 \cdot \underline{P}(\underline{v}) = 0 $$ (Eq. 2.3.45)

In Choice #3, where  $$ \beta_n=1,\ \beta_1=...=\beta_{n-1}=0 $$  ,   $$ \left \{ \beta_i \right \}=\left \{ 0,..., 0,1 \right \} $$

So then

$$ \underline{w} = \sum_{i} \beta_i \underline{a}_i = \sum_{i} \left\{ 0, ..., 0, 1 \right\} \underline{a}_i = \underline{a}_n $$ (Eq. 2.3.46)

and accordingly,

$$ \underline{w} \cdot \underline{P}(\underline{v}) = 0 $$ (Eq. 2.3.47)

So finally,

$$ \underline{a}_n \cdot \underline{P}(\underline{v}) = 0\ \forall\ \left\{ \beta_1, ..., \beta_n \right\} \in \mathbb{R}^n $$, such that $$ \underline{w} = \sum_{i} \beta_i \underline{a}_i $$ (Eq. 2.3.48)

This end result is perfectly equivalent to:

$$ \underline{a}_i \cdot \underline{P}(\underline{v}) = 0\ \forall\ i = 1, ..., n $$ (Eq. 2.3.49)

= HW 2.4 =

Given/Find
Show $$ \int \frac{x^2}{1+x}\,dx = \frac{x^2}{2} - x + \log(1+x) + k $$

Do the following problems to lead to this result:

1) Show $$ \int \log(x)\,dx = x\log(x) - x $$
2) Show $$ \int x\log(x)\,dx = \frac{1}{2}x^2\left[\log(x) - \frac{1}{2}\right] $$
3) Find $$ \int \frac{x^2}{1+cx}\,dx $$
4) Find $$ \int \frac{x^2}{a+bx}\,dx $$
5) Find the exact solution $$ u(x) $$ of $$ \frac{d}{dx}\left[(2+3x)\frac{du}{dx}\right] + 5x = 0 $$
6) Plot $$ u(x) $$

1) Show $$ \int \log(x)\,dx = x\log(x) - x $$
Using integration by parts,

$$ \int u\,dv = uv - \int v\,du $$ (Eq. 2.4.1)

Let $$ u = \log(x) $$ and $$ dv = dx $$, so that $$ du = \frac{dx}{x} $$ and $$ v = x $$ (Eq. 2.4.2)

Entering values from Eq. 2.4.2 into Eq. 2.4.1 yields

$$ x\log(x) - \int 1\,dx $$

$$ = x\log(x) - x $$
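As a quick numerical sanity check of this antiderivative (a sketch using central finite differences; not part of the original assignment), the derivative of $$ x\log(x) - x $$ should return $$ \log(x) $$:

```python
import math

def antiderivative(x):
    # Result of part 1: the integral of log(x) dx
    return x * math.log(x) - x

h = 1e-6
for x in [0.5, 1.0, 2.0, 5.0]:
    # Central-difference approximation of d/dx [x log x - x]
    slope = (antiderivative(x + h) - antiderivative(x - h)) / (2 * h)
    assert abs(slope - math.log(x)) < 1e-6

print("d/dx (x log x - x) = log x verified numerically")
```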

2) Show $$ \int x\log(x)\,dx = \frac{1}{2}x^2\left[\log(x) - \frac{1}{2}\right] $$
Using integration by parts, let $$ u = \log(x) $$ and $$ dv = x\,dx $$, so that $$ du = \frac{dx}{x} $$ and $$ v = \frac{x^2}{2} $$ (Eq. 2.4.3)

Now entering values from Eq. 2.4.3 into Eq. 2.4.1 yields

$$ \frac{x^2}{2}\log(x) - \int \frac{x}{2}\,dx = \frac{1}{2}x^2\left[\log(x) - \frac{1}{2}\right] $$

3) Find $$ \int \frac{x^2}{1+cx}\,dx $$
This is the special case $$ a = 1 $$, $$ b = c $$ of problem 4 below; from Eq. 2.4.16,

$$ \int \frac{x^2}{1+cx}\,dx = \frac{x^2}{2c} - \frac{x}{c^2} + \frac{1}{c^3}\log(1+cx) + k $$

4) Find $$ \int \frac{x^2}{a+bx}\,dx $$
Using integration by parts, let $$ u = x^2 $$ and $$ dv = \frac{dx}{a+bx} $$, so that $$ du = 2x\,dx $$ and $$ v = \frac{\log(a+bx)}{b} $$ (Eq. 2.4.10)

Now, entering values from Eq. 2.4.10 into Eq. 2.4.1 yields,

$$ \int \frac{x^{2}}{a+bx}\,dx = \frac{\log(a+bx)}{b}\,x^{2} - \int \frac{\log(a+bx)}{b}\,2x\,dx $$ (Eq. 2.4.11)

Now, integration by parts must be done again. Let $$ u = 2x $$ and $$ dv = \frac{\log(a+bx)}{b}\,dx $$, so that $$ du = 2\,dx $$ and $$ v = \left(\frac{x}{b} + \frac{a}{b^2}\right)\log(a+bx) - \frac{x}{b} $$ (Eq. 2.4.12)

The equation now takes the form

$$ \frac{x^2}{b}\log(a+bx) - \left[ uv - \int v\,du \right] $$ (Eq. 2.4.13)

Entering the values from $$ Eq. 2.4.12 $$ into $$ Eq. 2.4.13 $$ yields,

$$ \frac{x^2}{b}\log(a+bx) - \left[ \left( \frac{x}{b}\log(a+bx) + \frac{a}{b^2}\log(a+bx) - \frac{x}{b}\right)(2x) - \int \left( \frac{x}{b}\log(a+bx) + \frac{a}{b^2}\log(a+bx) - \frac{x}{b}\right) 2\,dx \right] $$

Distributing terms,

<span id="(1)">
 * {| style="width:100%" border="0"

$$\frac{x^2}{b} log(a+bx) - \frac{2x^2}{b} log(a+bx) + \frac{2ax}{b^2}log(a+bx) + \frac{2x^2}{b} + \frac{2}{b}\int{xlog(a+bx)}dx + \frac{2a}{b^2}\int{log(a+bx)}dx - \frac{2}{b}\int{x}dx $$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right"> $$ \displaystyle (Eq. 2.4.14)
 * }

Solving each integral in $$ Eq. 2.4.14 $$ individually,

<span id="(1)">
 * {| style="width:100%" border="0"

$$\frac{2}{b}\int{xlog(a+bx)}dx = \frac{x^2}{b}\log(a+bx)-\frac{a^2}{b^3}\log(a+bx)-\frac{x^2}{2b}+\frac{ax}{b^2} $$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right"> $$ \displaystyle
 * }

<span id="(1)">
 * {| style="width:100%" border="0"

$$ \frac{2a}{b^2}\int{log(a+bx)}dx = \frac{2ax}{b^2}log(a+bx)+\frac{2a^2}{b^3}log(a+bx) - \frac{2ax}{b^2} $$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right"> $$ \displaystyle
 * }

<span id="(1)">
 * {| style="width:100%" border="0"

$$\frac{2}{b}\int{x}dx = -\frac{x^2}{b} $$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right">$$ \displaystyle (Eq. 2.4.15)
 * }

Substituting $$ Eq. 2.4.15 $$ into $$ Eq. 2.4.14 $$ yields

Which simplifies to

<span id="(1)">
 * {| style="width:100%" border="0"

$$\frac{x^2}{2b}-\frac{ax}{b^2} + \frac{a^2}{b^3}log(a+bx) + k_{constant} $$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right">$$ \displaystyle (Eq. 2.4.16)
 * }
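Eq. 2.4.16 can be checked numerically: differentiating the right-hand side should recover the integrand. A minimal sketch using a central difference, with the values a = 2, b = 3, x = 1.5 chosen only for illustration:

```python
import math

def F(x, a, b):
    # antiderivative from Eq. 2.4.16 (constant of integration omitted)
    return x**2/(2*b) - a*x/b**2 + (a**2/b**3)*math.log(a + b*x)

def f(x, a, b):
    # the original integrand x^2 / (a + bx)
    return x**2/(a + b*x)

a, b, x, h = 2.0, 3.0, 1.5, 1e-6
dF = (F(x + h, a, b) - F(x - h, a, b)) / (2*h)  # central difference ~ F'(x)
print(abs(dF - f(x, a, b)) < 1e-6)  # True
```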

5) Find the exact solution for

$$ \frac{d}{dx}\left[ (2+3x)\frac{du}{dx}\right] + 5x = 0 \; ; \quad \forall x \in ] 0,1 [ $$

With the following Boundary Conditions:

Essential: $$ \quad u(1)=4 $$ (Eq. 2.4.17)

Natural: $$ \quad -\frac{du}{dx}(x=0) = 6 $$ (Eq. 2.4.18)

Rearranging,

$$ \frac{d}{dx}\left[(2+3x)\frac{du}{dx}\right]=-5x $$ (Eq. 2.4.19)

Integrate both sides of Eq. 2.4.19,

$$ \int d \left[(2+3x)\frac{du}{dx}\right] = \int -5x\, dx $$

$$ (2+3x)\frac{du}{dx}=\frac{-5x^2}{2}+c_1 $$

$$ \frac{du}{dx}=\frac{-5x^2}{2(2+3x)}+\frac{c_1}{2+3x} $$ (Eq. 2.4.20)

Apply the Natural B.C., Eq. 2.4.18, to find $$ c_1 $$,

$$ -\frac{du}{dx}(x=0) =6 \implies -6= 0+ \frac{c_1}{2} $$

$$ \quad c_1=-12 $$ (Eq. 2.4.21)

Enter the value from Eq. 2.4.21 into Eq. 2.4.20,

$$ \frac{du}{dx}=\frac{-5x^2}{2(2+3x)}+\frac{-12}{2+3x} $$

$$ du=\frac{-5x^2}{2(2+3x)}\,dx+\frac{-12}{2+3x}\,dx $$ (Eq. 2.4.22)

Integrate both sides of Eq. 2.4.22,

$$ \int du=\int\frac{-5x^2}{2(2+3x)}\,dx+\int\frac{-12}{2+3x}\,dx $$

$$ u=-\frac{5}{2}\int \frac{x^2}{2+3x}\,dx-12\int \frac{dx}{2+3x} $$

Using the result of Eq. 2.4.16 with $$ a=2, \; b=3 $$,

$$ u=\frac{-5}{2}\left[\frac{x^2}{6}-\frac{2x}{9}+\frac{4\,log(2+3x)}{27}\right]-12\left[\frac{log(2+3x)}{3}\right]+c_2 $$

$$ u=\frac{-5x^2}{12} +\frac{5x}{9}-\frac{118}{27}log(2+3x)+c_2 $$ (Eq. 2.4.23)

Apply the Essential B.C., Eq. 2.4.17, to find $$ c_2 $$,

$$ u(1)=4 \implies 4=\frac{-5}{12} +\frac{5}{9}-\frac{118}{27}log(5)+c_2 $$

$$ \quad c_2=10.895 $$ (Eq. 2.4.24)

Enter the value from Eq. 2.4.24 into Eq. 2.4.23 to obtain the exact solution,

$$ u(x)=\frac{-5x^2}{12} +\frac{5x}{9}-\frac{118}{27}log(2+3x)+10.895 $$

6) Plot $$ \quad u(x) $$
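A short script can verify both boundary conditions and sample u(x) for the plot in part 6; a minimal sketch in Python (log taken as the natural logarithm, as in the derivation):

```python
import math

def u(x):
    # exact solution, Eq. 2.4.23 with c2 = 10.895 (Eq. 2.4.24)
    return -5*x**2/12 + 5*x/9 - (118/27)*math.log(2 + 3*x) + 10.895

def du_dx(x):
    # slope from Eq. 2.4.20 with c1 = -12 (Eq. 2.4.21)
    return -5*x**2/(2*(2 + 3*x)) - 12/(2 + 3*x)

# check both boundary conditions
print(round(u(1.0), 3))       # essential BC: u(1) = 4
print(round(-du_dx(0.0), 3))  # natural BC: -u'(0) = 6

# sample points on [0, 1] for plotting u(x)
pts = [(x/10, u(x/10)) for x in range(11)]
```

The list `pts` can be handed to any plotting tool (e.g. matplotlib) to produce the requested curve.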


= HW 2.5 =

Given
$$ \underline{b}_i \bullet \underline{P}(\underline{V})=0 $$

Find
Show that $$ \underline{b}_i \bullet \underline{P}(\underline{V})=0 $$, where i = 1, 2, . . ., n, is the same as $$ \underline{W} \bullet \underline{P}(\underline{V})=0 $$ such that $$ \underline{W}=\sum\limits_{i=1}^{n}\alpha_i \underline{b}_i $$

Solution
For i = 1, 2,. . ., n

$$ \underline{b}_i \bullet \underline{P}(\underline{V})=0 $$ (Eq. 2.5.1)

Multiplying both sides by an arbitrary constant $$ \alpha_i $$,

$$ \alpha_i\left(\underline{b}_i \bullet \underline{P}(\underline{V})\right)=0 $$ (Eq. 2.5.2)

From the properties of the dot product, $$ c\left( \underline{u}\bullet \underline{v} \right)=(c\,\underline{u})\bullet \underline{v}=\underline{u}\bullet (c\,\underline{v}) $$, where c is an arbitrary scalar. Therefore, Eq. 2.5.2 becomes,

$$ \alpha_i\underline{b}_i \bullet \underline{P}(\underline{V})=0 $$

For i = 1;

$$ \alpha_1\underline{b}_1 \bullet \underline{P}(\underline{V})=0 $$ (Eq. 1)

For i = 2;

$$ \alpha_2\underline{b}_2 \bullet \underline{P}(\underline{V})=0 $$ (Eq. 2)

Similarly, for i = n;

$$ \alpha_n\underline{b}_n \bullet \underline{P}(\underline{V})=0 $$ (Eq. n)

Adding the equations Eq. 1, Eq. 2, . . ., up to Eq. n, we get,

$$ \alpha_1\underline{b}_1 \bullet \underline{P}(\underline{V}) + \alpha_2\underline{b}_2 \bullet \underline{P}(\underline{V}) + \cdots + \alpha_n\underline{b}_n \bullet \underline{P}(\underline{V})=0 $$ (Eq. 2.5.3)

Using the distributive property of the dot product, Eq. 2.5.3 can be written as,

$$ \left(\alpha_1\underline{b}_1 + \alpha_2\underline{b}_2 + \cdots + \alpha_n\underline{b}_n\right)\bullet \underline{P}(\underline{V})=0 $$ (Eq. 2.5.4)

The above equation can be written as,

$$ \left(\sum\limits_{i=1}^{n}\alpha_i\underline{b}_i\right)\bullet \underline{P}(\underline{V}) = 0 $$ (Eq. 2.5.5)

Since $$ \sum\limits_{i=1}^{n}\alpha_i\underline{b}_i = \underline{W}, $$ Eq. 2.5.5 becomes

$$ \underline{W}\bullet \underline{P}(\underline{V})=0 $$ (Eq. 2.5.6)
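The argument can be illustrated numerically. The basis vectors, coefficients, and the vector standing in for P(V) below are made-up examples, not part of the problem:

```python
def dot(u, v):
    return sum(x*y for x, y in zip(u, v))

# hypothetical data: two basis vectors, and a vector orthogonal to both
b = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]]
PV = [0.0, 0.0, 7.0]
assert all(dot(bi, PV) == 0 for bi in b)  # Eq. 2.5.1 holds for each i

# any linear combination W = sum(alpha_i * b_i) is then also orthogonal to P(V)
alphas = [2.5, -4.0]
W = [sum(a*bi[k] for a, bi in zip(alphas, b)) for k in range(3)]
print(dot(W, PV))  # 0.0
```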

Vignesh Solai Rameshbabu--Eml5526.s11.team5.srv 21:09, 30 January 2011 (UTC)

= HW 2.6 =

Given
$$ F=\left \{ 1, cos\, i\omega x, sin\, i\omega x \right \}, \quad i = \left \{ 1, 2 \right \} $$

on the interval $$ \Omega =[0,T] $$

i.e., $$ F=\left \{ 1, cos\, \omega x, cos\, 2\omega x, sin\, \omega x, sin\, 2\omega x \right \} = \left \{b_1(x), b_2(x), b_3(x), b_4(x), b_5(x) \right \} $$

Find
1) Construct $$ \Gamma_{5 \times 5}(F) $$. Observe the properties of $$ \Gamma $$.

2) Find $$ det\,\Gamma(F) $$.

3) Conclude whether F is an orthogonal basis, i.e., whether $$ \Gamma_{ij} = 0 $$ for $$ i \neq j $$.

Trigonometric Properties Used
$$ \implies \int_{0}^{T} cos (n \omega x)\,dx = 0, \quad \forall n = \left \{ 1, 2, 3, ....\right \}, \quad \omega = \frac{2\pi}{T} $$

$$ \implies \int_{0}^{T} sin (n \omega x)\,dx = 0, \quad \forall n = \left \{ 1, 2, 3, ....\right \}, \quad \omega = \frac{2\pi}{T} $$

$$ sin(x)cos(y) = \left [sin(x+y) + sin(x-y) \right ]/2,$$

$$cos(x)sin(y) = \left [sin(x+y) - sin(x-y) \right]/2,$$

$$cos(x)cos(y) = \left [cos(x-y) + cos(x+y) \right]/2,$$

$$sin(x)sin(y) = \left [cos(x-y) - cos(x+y)\right]/2.$$

Solution
1)

The Gram Matrix is defined as follows: $$ \Gamma_{ij}=<b_i,b_j>=\int_\Omega b_i(x)b_j(x)\,dx $$

It can be observed that $$ \Gamma_{ij}=\Gamma_{ji} $$, i.e. the $$ \Gamma $$ matrix is symmetric

[Since $$ \int_\Omega b_i(x)b_j(x)\,dx = \int_\Omega b_j(x)b_i(x)\,dx $$]

Now let us calculate each element in the $$ \Gamma $$ matrix:

$$ \Gamma_{1,1} = \int_{0}^{T} dx = \left [ x \right ]_{0}^{T} = T $$

$$ \Gamma_{1,2} = \Gamma_{2,1} = \int_{0}^{T} cos\, \omega x\, dx = 0 $$

$$ \Gamma_{1,3} = \Gamma_{3,1} = \int_{0}^{T} cos\, 2\omega x\, dx = 0 $$

$$ \Gamma_{1,4} = \Gamma_{4,1} = \int_{0}^{T} sin\, \omega x\, dx = 0 $$

$$ \Gamma_{1,5} = \Gamma_{5,1} = \int_{0}^{T} sin\, 2\omega x\, dx = 0 $$

$$ \Gamma_{2,2} = \int_{0}^{T} cos^{2} \omega x\, dx = \int_{0}^{T} \frac {cos\, 2\omega x +1}{2}\, dx = \frac {T}{2} $$

$$ \Gamma_{2,3} = \Gamma_{3,2} = \int_{0}^{T} cos\, \omega x\, cos\, 2\omega x\, dx = \frac {1}{2} \int_{0}^{T} \left [ cos\, \omega x + cos\, 3\omega x \right ] dx = 0 $$

$$ \Gamma_{2,4} = \Gamma_{4,2} = \int_{0}^{T} cos\, \omega x\, sin\, \omega x\, dx = \frac {1}{2} \int_{0}^{T} sin\, 2\omega x\, dx = 0 $$

$$ \Gamma_{2,5} = \Gamma_{5,2} = \int_{0}^{T} cos\, \omega x\, sin\, 2\omega x\, dx = \frac {1}{2} \int_{0}^{T} \left [ sin\, 3\omega x + sin\, \omega x \right ] dx = 0 $$

$$ \Gamma_{3,3} = \int_{0}^{T} cos^{2} 2\omega x\, dx = \frac {1}{2} \int_{0}^{T} \left [ cos\, 4\omega x + 1 \right ] dx = \frac {T}{2} $$

$$ \Gamma_{3,4} = \Gamma_{4,3} = \int_{0}^{T} cos\, 2\omega x\, sin\, \omega x\, dx = \frac {1}{2} \int_{0}^{T} \left [ sin\, 3\omega x - sin\, \omega x \right ] dx = 0 $$

$$ \Gamma_{3,5} = \Gamma_{5,3} = \int_{0}^{T} cos\, 2\omega x\, sin\, 2\omega x\, dx = \frac {1}{2} \int_{0}^{T} sin\, 4\omega x\, dx = 0 $$

$$ \Gamma_{4,4} = \int_{0}^{T} sin^{2} \omega x\, dx = \frac {1}{2} \int_{0}^{T} \left [ 1 - cos\, 2\omega x \right ] dx = \frac{T}{2} $$

$$ \Gamma_{4,5} = \Gamma_{5,4} = \int_{0}^{T} sin\, \omega x\, sin\, 2\omega x\, dx = \frac {1}{2} \int_{0}^{T} \left [ cos\, \omega x - cos\, 3\omega x \right ] dx = 0 $$

$$ \Gamma_{5,5} = \int_{0}^{T} sin^{2} 2\omega x\, dx = \frac {1}{2} \int_{0}^{T} \left [ 1 - cos\, 4\omega x \right ] dx = \frac{T}{2} $$



a) $$ \Gamma_{5 \times 5} \left (F \right) = \begin{bmatrix} T & 0 & 0 & 0 & 0\\ 0 & \frac{T}{2} & 0 & 0 & 0\\ 0 & 0 & \frac{T}{2} & 0 & 0\\ 0 & 0 & 0 & \frac{T}{2} & 0\\ 0 & 0 & 0 & 0 & \frac{T}{2} \end{bmatrix} $$

b) It is observed that the $$ \Gamma $$ matrix is:
 * Diagonal
 * Symmetric

2)
 * $$ \det \left ( \Gamma \right ) = T\left(\frac{T}{2}\right)^{4} = \frac {T^{5}}{16} $$
 * Since $$ \det \left ( \Gamma \right ) \ne 0, \; F $$ is a linearly independent family of functions.

3)
 * It can be observed that $$ \forall i \ne j, \; \Gamma_{ij} = 0, $$ hence $$ F $$ is an orthogonal basis.
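The Gram matrix above can be reproduced numerically; a minimal sketch, taking T = 2 as an arbitrary example and approximating each integral with a composite trapezoidal rule:

```python
import math

T = 2.0
w = 2*math.pi/T  # omega = 2*pi/T
basis = [lambda x: 1.0,
         lambda x: math.cos(w*x), lambda x: math.cos(2*w*x),
         lambda x: math.sin(w*x), lambda x: math.sin(2*w*x)]

def inner(f, g, n=20000):
    # composite trapezoidal rule for <f, g> on [0, T]
    h = T/n
    s = 0.5*(f(0)*g(0) + f(T)*g(T)) + sum(f(k*h)*g(k*h) for k in range(1, n))
    return s*h

G = [[inner(bi, bj) for bj in basis] for bi in basis]
print([round(G[i][i], 4) for i in range(5)])  # diagonal T, T/2, ...: [2.0, 1.0, 1.0, 1.0, 1.0]
off = max(abs(G[i][j]) for i in range(5) for j in range(5) if i != j)
print(off < 1e-6)  # True: off-diagonal entries vanish
```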

--Eml5526.s11.team5.vijay 19:01, 1 February 2011 (UTC)--

= HW 2.7 =

Given
Consider $$ F=\left \{ 1,x,x^2,x^3,x^4 \right \} $$ on the interval $$ \Omega =[0,1] $$

Find
1. Construct the $$ \Gamma(F)$$ matrix and observe its properties.

2. Find the determinant of $$ \Gamma(F)$$.

3. Conclude if F is an orthogonal family, i.e., $$ \Gamma_{ij} = \delta_{ij} $$

Solution
1)

The Gram Matrix is defined as:

$$ \Gamma_{ij}=<b_i,b_j>=\int_\Omega b_i(x)b_j(x)\,dx $$ (Eq. 2.7.1)

For the case where $$ F=\left \{ 1,x,x^2,x^3,x^4 \right \} $$,

$$ b_1(x)=1 $$

$$ b_2(x)=x $$

$$ b_3(x)=x^2 $$

$$ b_4(x)=x^3 $$

$$ b_5(x)=x^4 $$

Now we need to calculate each element in the $$ \Gamma $$ matrix using Eq. 2.7.1.

Since $$ \int_\Omega b_i(x)b_j(x)\,dx = \int_\Omega b_j(x)b_i(x)\,dx $$, $$ \Gamma_{ij} $$ will be equivalent to $$ \Gamma_{ji} $$.

$$ \Gamma_{1,1} = \int_{0}^{1} 1 \cdot 1\, dx = \left [ x\right ]_{0}^{1} = 1 $$

$$ \Gamma_{1,2} = \Gamma_{2,1} = \int_{0}^{1} 1 \cdot x\, dx = \left [ x^2/2\right ]_{0}^{1} = 1/2 $$

$$ \Gamma_{1,3} = \Gamma_{3,1} = \int_{0}^{1} 1 \cdot x^2\, dx = \left [ x^3/3 \right ]_{0}^{1} = 1/3 $$

$$ \Gamma_{1,4} = \Gamma_{4,1} = \int_{0}^{1} 1 \cdot x^3\, dx = \left [ x^4/4 \right ]_{0}^{1} = 1/4 $$

$$ \Gamma_{1,5} = \Gamma_{5,1} = \int_{0}^{1} 1 \cdot x^4\, dx = \left [ x^5/5 \right ]_{0}^{1} = 1/5 $$

$$ \Gamma_{2,2} = \int_{0}^{1} x \cdot x\, dx = \left [ x^3/3\right ]_{0}^{1} = 1/3 $$

$$ \Gamma_{2,3} = \Gamma_{3,2} = \int_{0}^{1} x \cdot x^2\, dx = \left [ x^4/4\right ]_{0}^{1} = 1/4 $$

$$ \Gamma_{2,4} = \Gamma_{4,2} = \int_{0}^{1} x \cdot x^3\, dx = \left [ x^5/5\right ]_{0}^{1} = 1/5 $$

$$ \Gamma_{2,5} = \Gamma_{5,2} = \int_{0}^{1} x \cdot x^4\, dx = \left [ x^6/6\right ]_{0}^{1} = 1/6 $$

$$ \Gamma_{3,3} = \int_{0}^{1} x^2 \cdot x^2\, dx = \left [ x^5/5 \right ]_{0}^{1} = 1/5 $$

$$ \Gamma_{3,4} = \Gamma_{4,3} = \int_{0}^{1} x^2 \cdot x^3\, dx = \left [ x^6/6\right ]_{0}^{1} = 1/6 $$

$$ \Gamma_{3,5} = \Gamma_{5,3} = \int_{0}^{1} x^2 \cdot x^4\, dx = \left [ x^7/7\right ]_{0}^{1} = 1/7 $$

$$ \Gamma_{4,4} = \int_{0}^{1} x^3 \cdot x^3\, dx = \left [ x^7/7\right ]_{0}^{1} = 1/7 $$

$$ \Gamma_{4,5} = \Gamma_{5,4} = \int_{0}^{1} x^3 \cdot x^4\, dx = \left [ x^8/8\right ]_{0}^{1} = 1/8 $$

$$ \Gamma_{5,5} = \int_{0}^{1} x^4 \cdot x^4\, dx = \left [ x^9/9\right ]_{0}^{1} = 1/9 $$

Assembling the matrix we get:

$$ \Gamma \left (F \right) = \begin{bmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5}\\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6}\\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7}\\ \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8}\\ \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8} & \frac{1}{9} \end{bmatrix} $$

The matrix is observed to be symmetric, but not diagonal.

2)

Using the MATLAB function det(X), the determinant of $$ \Gamma $$ is found to be:

 * $$ \det \left ( \Gamma \right ) = 3.7493 \times 10^{-12} $$

3)
To conclude if $$ F $$ is orthogonal we compare $$ \Gamma $$ to the matrix formed by the Kronecker Delta.

The Kronecker Delta is defined by:

$$ \delta_{ij} = \begin{cases} 1, & \mbox{ for } i=j \\ 0, & \mbox{ for } i \neq j \end{cases} $$

Thus the general form of the matrix is:

$$ \begin{bmatrix} 1 & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & 1 \end{bmatrix} $$

Since $$ \Gamma_{ij} \neq 0 $$ for $$ i \neq j $$, the $$ \Gamma $$ matrix is not diagonal, and F is not an orthogonal family.

--Eml5526.s11.team5.savery 19:49, 2 February 2011 (UTC)

= HW 2.8 =

Given
$$ \int_{\Omega}w^{h}(x)P(u^{h}(x))dx = 0 \;\; \forall\; w^{h}(x) $$

$$ \int_{\Omega}b_{i}(x)P(u^{h}(x))dx = 0 \;\; for\;\; i = 1,2,...,n $$

Find
1.

$$ \int_{\Omega}w^{h}(x)P(u^{h}(x))dx = 0 \;\Rightarrow\; \int_{\Omega}b_{i}(x)P(u^{h}(x))dx = 0 $$

2.

$$ \int_{\Omega}b_{i}(x)P(u^{h}(x))dx = 0 \;\Rightarrow\; \int_{\Omega}w^{h}(x)P(u^{h}(x))dx = 0 $$

Solution
Let

$$ \int_{\Omega}w^{h}(x)P(u^{h}(x))dx = 0 $$ (Eq. 2.8.1)

$$ \int_{\Omega}b_{i}(x)P(u^{h}(x))dx = 0 $$ (Eq. 2.8.2)

1) To prove

$$ \int_{\Omega}w^{h}(x)P(u^{h}(x))dx = 0 \;\Rightarrow\; \int_{\Omega}b_{i}(x)P(u^{h}(x))dx = 0 $$

where $$ w^{h}(x) = \sum_{i=1}^{n}c_{i}b_{i}(x) $$.

Choice 1: $$ \big[c_{1}, c_{2}, c_{3}, ..., c_{n}\big] = \big[ 1,0,0,...,0 \big] $$

$$ \therefore\; w^{h}(x) = b_{1}(x) $$

Therefore Eq. 2.8.1 becomes,

$$ \int_{\Omega}b_{1}(x)P(u^{h}(x))dx = 0 $$ (Eq. 2.8.3)

Choice 2: $$ \big[c_{1}, c_{2}, c_{3}, ..., c_{n}\big] = \big[ 0,1,0,...,0 \big] $$

$$ \therefore\; w^{h}(x) = b_{2}(x) $$

Therefore Eq. 2.8.1 becomes,

$$ \int_{\Omega}b_{2}(x)P(u^{h}(x))dx = 0 $$ (Eq. 2.8.4)

Choice n: $$ \big[c_{1}, c_{2}, c_{3}, ..., c_{n}\big] = \big[ 0,0,0,...,1 \big] $$

$$ \therefore\; w^{h}(x) = b_{n}(x) $$

Therefore Eq. 2.8.1 becomes,

$$ \int_{\Omega}b_{n}(x)P(u^{h}(x))dx = 0 $$ (Eq. 2.8.5)

In general, Eq. 2.8.3, Eq. 2.8.4, . . ., Eq. 2.8.5 can be written as,

$$ \int_{\Omega}b_{i}(x)P(u^{h}(x))dx = 0 \;\; for\;\; i = 1,2,...,n $$

which is the same as Eq. 2.8.2.

2)To prove

<span id="(1)">
 * {| style="width:100%" border="0"

$$  \int_{\Omega}b_{i}(x)P(u^{h}(x))dx = 0     \;\Rightarrow\;  \int_{\Omega}w^{h}(x)P(u^{h}(x))dx = 0 $$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right"> $$ \displaystyle
 * }

For i = 1, 2,. . ., n

<span id="(1)">
 * {| style="width:100%" border="0"

$$\int_{\Omega}b_{i}(x)P(u^{h}(x))dx = 0 \quad $$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right"> $$ \displaystyle (Eq. 2.8.6)
 * }

Multiplying the arbitrary constant $$\;\;c_{i}\;\;$$   on both sides ;

<span id="(1)">
 * {| style="width:100%" border="0"

$$\;\int_{\Omega}c_{i}b_{i}(x)P(u^{h}(x))dx = 0 \;\; for\;\; i = 1,2,...,n$$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right"> $$ \displaystyle (Eq. 2.8.7)
 * }

For i = 1;

<span id="(1)">
 * {| style="width:100%" border="0"

$$\;\int_{\Omega}c_{1}b_{1}(x)P(u^{h}(x))dx = 0 $$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right"> $$ \displaystyle (Eq.1)
 * }

For i = 2;

<span id="(1)">
 * {| style="width:100%" border="0"

$$\;\int_{\Omega}c_{2}b_{2}(x)P(u^{h}(x))dx = 0 $$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right"> $$ \displaystyle (Eq.2)
 * }

Similarly, For all i = n;

<span id="(1)">
 * {| style="width:100%" border="0"

$$\;\int_{\Omega}c_{n}b_{n}(x)P(u^{h}(x))dx = 0 $$ $$
 * style="width:95%" |
 * style="width:95%" |
 * <p style="text-align:right"> $$ \displaystyle (Eq.n)
 * }

Adding Eq. 1 through Eq. n, we get

$$ \int_{\Omega}c_{1}b_{1}(x)\,P(u^{h}(x))\,dx + \int_{\Omega}c_{2}b_{2}(x)\,P(u^{h}(x))\,dx + \dots + \int_{\Omega}c_{n}b_{n}(x)\,P(u^{h}(x))\,dx = 0 $$ (Eq. 2.8.8)

$$ \int_{\Omega}\left[ c_{1}b_{1}(x) + c_{2}b_{2}(x) + \dots + c_{n}b_{n}(x)\right] P(u^{h}(x))\,dx = 0 $$ (Eq. 2.8.9)

The above equation can be written as

$$ \int_{\Omega}\sum_{i=1}^{n}c_{i}b_{i}(x)\,P(u^{h}(x))\,dx = 0 $$ (Eq. 2.8.10)

Since $$ \sum\limits_{i=1}^{n} c_{i}b_{i}(x) = w^{h}(x) $$, Eq. 2.8.10 becomes

$$ \int_{\Omega}w^{h}(x)\,P(u^{h}(x))\,dx = 0 $$ (Eq. 2.8.11)

Because the constants $$ c_{i} $$ were arbitrary, this holds for every such weighting function:

$$ \int_{\Omega}w^{h}(x)\,P(u^{h}(x))\,dx = 0 \;\; \forall\, w^{h}(x) $$ (which is the same as Eq. 2.8.1)
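The linearity step used in the proof (summing the weighted single-basis integrals equals integrating against the combined weight $$ w^h = \sum_i c_i b_i $$) can be checked numerically. The bases, residual, and constants below are arbitrary illustrative choices, not from the text; this is a Python sketch, not part of the original work.

```python
# Numerical check of the step from Eq. 2.8.7 to Eq. 2.8.11:
# sum_i c_i * integral(b_i * P) == integral((sum_i c_i * b_i) * P).
import numpy as np
from scipy.integrate import quad

b = [lambda x: 1.0, lambda x: x, lambda x: x**2]   # illustrative bases b_i(x)
P = lambda x: np.cos(3.0 * x) - 0.5                # an arbitrary residual P(u^h)(x)
c = [2.0, -1.0, 0.5]                               # arbitrary constants c_i

# Left side: sum of the individually weighted integrals over Omega = ]0,1[
lhs = sum(ci * quad(lambda x, bi=bi: bi(x) * P(x), 0.0, 1.0)[0]
          for ci, bi in zip(c, b))

# Right side: one integral against the combined weight w^h(x)
w = lambda x: sum(ci * bi(x) for ci, bi in zip(c, b))
rhs = quad(lambda x: w(x) * P(x), 0.0, 1.0)[0]

assert abs(lhs - rhs) < 1e-9
```

The check passes because integration is linear; the two sides differ only by reordering finite sums and integrals.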

Vignesh Solai Rameshbabu--Eml5526.s11.team5.srv 21:09, 30 January 2011 (UTC)

= HW 2.9 =

Given
$$ \omega = \,]0,1[ $$ (Eq. 2.3.1)

$$ a_2 = 2 $$ (Eq. 2.3.2)

$$ f = 3 $$ (Eq. 2.3.3)

Steady state:

$$ \frac{\partial u}{\partial t}(x,t) = 0 $$ (Eq. 2.3.4)

On $$ \Gamma_g $$ (at $$ x = 1 $$), $$ g = 0 $$:

$$ u(x=1) = 0 $$ (Eq. 2.3.5)

On $$ \Gamma_h $$ (at $$ x = 0 $$), $$ h = 4 $$:

$$ -\frac{du}{dx}(0) = 4 $$ (Eq. 2.3.6)

Find
Find the approximate solution $$ u^h $$ and compare it with the exact solution.

Consider basis functions $$ b_j(x),\; j = 0, 1, 2, \dots, n $$:

$$ b_j(x) = \cos(jx+\phi) $$ (Eq. 2.3.7)

Consider $$ \phi = \frac{\pi}{4} $$ and $$ \phi = \frac{\pi}{2} $$.

(1) Let $$ n = 2 \Rightarrow ndof = n+1 = 3 $$. Generate the variables considered for the $$ n = 2 $$ case.

(2) Find 2 equations that enforce the boundary conditions for $$ u^h(x) = \sum_{j=0}^{n} d_j b_j(x) $$.

(3) Find 1 more equation to solve for $$ \underline{d} = \{d_j\},\; j = 0, 1, 2, $$ by projecting the residual $$ P(u^h) $$ on a basis function $$ b_k(x) $$ with $$ k = 0, 1, 2 $$. The additional equation must be linearly independent of the 2 equations in part (2).

(4) Display the 3 equations in matrix form

$$ \underline{K}\,\underline{d} = \underline{F} $$ (Eq. 2.3.8)

Observe whether $$ \underline{K} $$ is symmetric.

(5) Solve for $$ \underline{d} $$.

(6) Construct $$ u_n^h(x) $$ and plot $$ u_n^h(x) $$ vs. $$ u(x) $$.

(7) Repeat steps (1) to (6) for

 * (7.1) $$ n = 4 $$
 * (7.2) $$ n = 6 $$

(8) Compute $$ u_n^h(x = 0.5) $$ for $$ n = 2, 4, 6 $$ and the error

$$ e_n(0.5) = u(0.5) - u_n^h(0.5) $$ (Eq. 2.3.9)

Plot $$ e_n(0.5) $$ vs. $$ n $$.

Solution
(1)

For $$ n = 2 $$:

$$ \underline{b} = \begin{bmatrix} b_0 & b_1 & b_2 \end{bmatrix} $$ (Eq. 2.3.10)

From the basis-function expression $$ b_j(x) = \cos(jx+\phi) $$:

$$ \underline{b} = \begin{bmatrix} \cos(0x+\phi) & \cos(1x+\phi) & \cos(2x+\phi) \end{bmatrix} = \begin{bmatrix} \cos(\phi) & \cos(x+\phi) & \cos(2x+\phi) \end{bmatrix} $$ (Eq. 2.3.11)

Unknown coefficient vector:

$$ \underline{d} = \begin{bmatrix} d_0 & d_1 & d_2 \end{bmatrix} $$ (Eq. 2.3.12)

(2)

Approximate solution:

$$ u^h = \sum_{i=0}^{n=2} d_i b_i $$ (Eq. 2.3.13)

$$ u^h(x) = d_0\cos(\phi) + d_1\cos(x+\phi) + d_2\cos(2x+\phi) $$ (Eq. 2.3.14)

At the B.C. $$ u(1) = 0 $$:

$$ u^h(1) = 0 = d_0\cos(\phi) + d_1\cos(1+\phi) + d_2\cos(2+\phi) = \begin{bmatrix} \cos(\phi) & \cos(1+\phi) & \cos(2+\phi) \end{bmatrix} \begin{bmatrix} d_0 \\ d_1 \\ d_2 \end{bmatrix} $$

At the B.C. $$ -\frac{du}{dx}(0) = 4 $$:

$$ (u^h)'(x) = \frac{du^h}{dx} = \sum_{i=0}^{n=2} d_i b_i' $$ (Eq. 2.3.15)

$$ \underline{b}' = \begin{bmatrix} 0 & -\sin(x+\phi) & -2\sin(2x+\phi) \end{bmatrix} $$ (Eq. 2.3.16)

$$ (u^h)'(0) = -4 = -d_1\sin(\phi) - 2d_2\sin(\phi) \;\Rightarrow\; \begin{bmatrix} 0 & \sin(\phi) & 2\sin(\phi) \end{bmatrix} \begin{bmatrix} d_0 \\ d_1 \\ d_2 \end{bmatrix} = 4 $$
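The two boundary-condition rows above can be formed numerically. This is a Python/NumPy sketch (the original work used MATLAB), assuming $$ \phi = \pi/4 $$ and $$ n = 2 $$ with $$ b_j(x) = \cos(jx+\phi) $$:

```python
# Forming the two boundary-condition rows of K for phi = pi/4, n = 2.
import numpy as np

phi = np.pi / 4.0
j = np.arange(3)                           # basis indices j = 0, 1, 2

# u^h(1) = 0  ->  [cos(phi), cos(1+phi), cos(2+phi)] . d = 0
row_dirichlet = np.cos(j * 1.0 + phi)

# -(u^h)'(0) = 4 with b_j'(x) = -j*sin(j*x + phi)
#  ->  [0, sin(phi), 2*sin(phi)] . d = 4
row_neumann = j * np.sin(j * 0.0 + phi)

# row_dirichlet ~ [0.7071, -0.2130, -0.9372]
# row_neumann   ~ [0.0,     0.7071,  1.4142]
```

These match the first two rows of the numeric $$ \underline{K} $$ assembled in part (4) below.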

(3)

The third equation is generated by projecting the residual of the ODE onto a basis function.

Rewriting Eq. 2.3.7:

$$ b_j(x) = \cos(jx+\phi) $$

In this more specific, self-adjoint case:

$$ P(u^h) := [a_2 u']' + a_0 u - g = 0 $$ (Eq. 2.3.17)

noting

$$ g = 0 \Rightarrow $$ no forcing function (steady state)

$$ a_0 u = f = 3 $$ (given)

$$ a_2 = 2 $$ (given)

$$ P(u^h) := [2u']' + 3 = 0 $$. Since $$ a_2 = 2 $$ is a constant (not a function of $$ x $$),

$$ P(u^h) := 2u'' + 3 = 2\frac{d^2u}{dx^2} + 3 = 0 $$ (Eq. 2.3.18)

From its definition above, $$ u^h $$ is

$$ u^h = \sum_{i=0}^{n=2} d_i b_i = d_0\cos(\phi) + d_1\cos(x+\phi) + d_2\cos(2x+\phi) $$ (Eq. 2.3.19)

Taking the 2nd derivative with respect to $$ x $$:

$$ (u^h)'' = 0 - d_1\cos(x+\phi) - 4d_2\cos(2x+\phi) $$ (Eq. 2.3.20)

Therefore

$$ P(u^h) := 2\left(-d_1\cos(x+\phi) - 4d_2\cos(2x+\phi)\right) + 3 $$ (Eq. 2.3.21)

Afterwards, noting that

$$ \int_\omega w\, P(u^h)\, dx = 0 \;\Rightarrow\; w_i = b_i \;\Rightarrow\; \int_\omega b_i\, P(u^h)\, dx = 0 $$ (Eq. 2.3.22)

which is also noted in 10-4, Eq. (2),

and substituting for $$ P(u^h) $$ from the equation above, we get

$$ \int_\omega b_i \left[ 2\left(-d_1\cos(x+\phi) - 4d_2\cos(2x+\phi)\right) + 3 \right] dx = 0 $$ (Eq. 2.3.23)

$$ \int_\omega b_i \left[ 2\left(-d_1\cos(x+\phi) - 4d_2\cos(2x+\phi)\right) \right] dx = \int_\omega -3\, b_i\, dx $$ (Eq. 2.3.24)

recalling that $$ \underline{b} = \begin{bmatrix} \cos(\phi) \\ \cos(x+\phi) \\ \cos(2x+\phi) \end{bmatrix} $$

and taking any one of the resulting equations as the last needed equation (here the one weighted by the first basis function, $$ b_0 $$), we get

$$ \int_\omega \begin{bmatrix} \cos(\phi) \\ \cos(x+\phi) \\ \cos(2x+\phi) \end{bmatrix} \left[ 2\left(-d_1\cos(x+\phi) - 4d_2\cos(2x+\phi)\right) \right] dx = \int_\omega \begin{bmatrix} \cos(\phi) \\ \cos(x+\phi) \\ \cos(2x+\phi) \end{bmatrix} (-3)\, dx $$ (Eq. 2.3.25)

Doing the symbolic integration and evaluating over the interval $$ \omega = \,]0,1[ $$ with $$ \phi = \frac{\pi}{4} $$, and writing each weighted equation with the $$ d $$-terms on the left and the constant on the right, the outputs are:

$$ \begin{bmatrix} 0 & 0.3818 & -1.0137 \\ 0 & 0.2919 & 0.7126 \\ 0 & 0.1781 & 2.3464 \end{bmatrix} \begin{bmatrix} d_0 \\ d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} 2.1213 \\ 0.8099 \\ -0.5376 \end{bmatrix} $$

where the rows correspond to weighting by $$ b_0, b_1, b_2 $$ respectively; only one row (the first) is taken as the third equation.
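The first reported row can be reproduced by numerical quadrature. A Python/SciPy sketch (the original used MATLAB symbolic integration), assuming $$ \phi = \pi/4 $$ and weighting by $$ b_0 = \cos\phi $$: moving the $$ d $$-terms to the left and the constant 3 to the right, the $$ b_0 $$-weighted equation reads $$ 2I(b_0\cos(x+\phi))\,d_1 + 8I(b_0\cos(2x+\phi))\,d_2 = 3I(b_0) $$, with $$ I(\cdot) $$ the integral over $$ ]0,1[ $$.

```python
# Reproducing the b_0-weighted residual equation's coefficients by quadrature.
import numpy as np
from scipy.integrate import quad

phi = np.pi / 4.0
b0 = np.cos(phi)                # b_0(x) = cos(phi) is constant in x

# Coefficient of d_1, coefficient of d_2, and right-hand side
k1 = 2.0 * quad(lambda x: b0 * np.cos(x + phi), 0.0, 1.0)[0]
k2 = 8.0 * quad(lambda x: b0 * np.cos(2.0 * x + phi), 0.0, 1.0)[0]
f0 = 3.0 * quad(lambda x: b0 + 0.0 * x, 0.0, 1.0)[0]

# k1 ~ 0.3818, k2 ~ -1.0137, f0 ~ 2.1213, matching the first row above
```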



(4)

Assembling $$ \underline{K}\,\underline{d} = \underline{F} $$ from the 2 B.C. constraints and the 1 additional equation generated by the weighted-residual method:

$$ \begin{bmatrix} \cos(\phi) & \cos(1+\phi) & \cos(2+\phi) \\ 0 & \sin(\phi) & 2\sin(\phi) \\ 0 & 0.3818 & -1.0137 \end{bmatrix} \begin{bmatrix} d_0 \\ d_1 \\ d_2 \end{bmatrix} = \begin{bmatrix} 0 \\ 4 \\ 2.1213 \end{bmatrix} $$ (Eq. 2.3.26)

Numerically, at $$ \phi = \frac{\pi}{4} $$:

$$ \underline{K} = \begin{bmatrix} 0.7071 & -0.2130 & -0.9372 \\ 0 & 0.7071 & 1.4142 \\ 0 & 0.3818 & -1.0137 \end{bmatrix}, \qquad \underline{F} = \begin{bmatrix} 0 \\ 4 \\ 2.1213 \end{bmatrix} $$

Note that $$ \underline{K} $$ is not symmetric.

(5)

Solving $$ \underline{d} = \underline{K}^{-1}\underline{F} $$ at $$ \phi = \frac{\pi}{4} $$ gives

$$ \underline{d} = \begin{bmatrix} 1.7193 \\ 5.6137 \\ 0.0216 \end{bmatrix} $$

(For $$ \phi = \frac{\pi}{2} $$, $$ \cos(\phi) = 0 $$, so the first column of $$ \underline{K} $$ vanishes and the matrix is singular; MATLAB issues a singularity warning.)
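The quoted solution can be cross-checked by solving the system with the rounded numeric entries above. A Python/NumPy sketch (the original used MATLAB):

```python
# Solving K d = F with the numeric entries reported for phi = pi/4.
import numpy as np

K = np.array([[0.7071, -0.2130, -0.9372],
              [0.0,     0.7071,  1.4142],
              [0.0,     0.3818, -1.0137]])
F = np.array([0.0, 4.0, 2.1213])

d = np.linalg.solve(K, F)
# d ~ [1.719, 5.614, 0.022], agreeing with the quoted d to about 3 decimals
```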

(6)

Constructing $$ u^h $$ in MATLAB from $$ u^h = \sum_{i=0}^{n=2} d_i b_i $$ gives

 0.02156*cos(phi + 2.0*x) + 5.614*cos(phi + x) + 1.719*cos(phi)

Exact solution:

The general expression for a 2nd-order linear ODE is

$$ a_2(x)y'' + a_1(x)y' + a_0(x)y = g(x) $$ (Eq. 2.3.27)

Taking the more specific self-adjoint case for this problem, we have

$$ [a_2(x)y']' + a_0(x)y = g(x) $$ (Eq. 2.3.17)

$$ g(x) = 0 \Rightarrow $$ no forcing function (steady state)

$$ a_0(x)y = f(x) = 3 $$ (given)

$$ a_2 = 2 $$ (given)

Therefore Eq. 2.3.17 becomes

$$ [2u']' + 3 = 0 $$ (Eq. 2.3.28)

$$ \frac{d}{dx}\left(2\frac{du}{dx}\right) + 3 = 0 $$

$$ \int d\left(2\frac{du}{dx}\right) = \int -3\, dx $$

$$ 2\frac{du}{dx} = -3x + c_1' \;\Rightarrow\; \frac{du}{dx} = -1.5x + c_1 $$ (Eq. 2.3.29)

At the B.C. $$ -\frac{du}{dx}(0) = 4 \Rightarrow $$

$$ -4 = -1.5(0) + c_1 \Rightarrow c_1 = -4 $$

$$ \int du = \int (-1.5x - 4)\, dx $$

$$ u = \frac{-1.5}{2}x^2 - 4x + c_2 $$ (Eq. 2.3.30)

At the B.C. $$ u(1) = 0 \Rightarrow $$

$$ 0 = \frac{-1.5}{2}(1)^2 - 4(1) + c_2 \Rightarrow c_2 = 4.75 $$

$$ u(x) = -0.75x^2 - 4x + 4.75 $$ (Eq. 2.3.31)
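The exact solution can be cross-checked symbolically. A SymPy sketch (not part of the original MATLAB work): solve $$ 2u'' + 3 = 0 $$ subject to $$ u(1) = 0 $$ and $$ u'(0) = -4 $$.

```python
# Symbolic cross-check of Eq. 2.3.31.
import sympy as sp

x = sp.symbols('x')
u = sp.Function('u')

sol = sp.dsolve(
    sp.Eq(2 * u(x).diff(x, 2) + 3, 0),   # the ODE 2 u'' + 3 = 0
    u(x),
    ics={u(1): 0,                         # Dirichlet B.C. u(1) = 0
         u(x).diff(x).subs(x, 0): -4},    # Neumann B.C. -u'(0) = 4
)
exact = sp.expand(sol.rhs)
# exact = -3*x**2/4 - 4*x + 19/4, i.e. u(x) = -0.75 x^2 - 4x + 4.75
```

This confirms the constants $$ c_1 = -4 $$ and $$ c_2 = 4.75 $$ obtained above.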

Taking the exact-solution formula into account, plots were generated for the different n values (n = 2, 4, 6).

[Figure: the different u values plotted on one graph for the different member numbers n.]

[Figure: highly zoomed-in view; zooming in makes clear that n = 4 is already a good approximation for this series.]

The errors at $$ x = 0.5 $$ came out to be:

 * n = 2: $$ e_2(0.5) = -0.2291 $$
 * n = 4: $$ e_4(0.5) = -0.0102 $$
 * n = 6: $$ e_6(0.5) = 0.0000 $$



--Eml5526.s11.team5.JA 20:30, 2 February 2011 (UTC)

= References =

= Contributors =

--Eml5526.s11.team5.JA 20:29, 2 February 2011 (UTC)

--Eml5526.s11.team5.savery 20:33, 2 February 2011 (UTC)

--Eml5526.s11.team5.mcdaniel 20:35, 2 February 2011 (UTC)

--Eml5526.s11.team5.srv 20:36, 2 February 2011 (UTC)

--Lokeshdahiya 20:41, 2 February 2011 (UTC)

--Eml5526.s11.team5.vijay 20:42, 2 February 2011 (UTC)

--Eml5526.s11.team5.smith 20:50, 2 February 2011 (UTC)