
=Problem 2.3:=

Given
The equivalence validation was posed in meeting 8-3 as a second case, in which the basis vectors are orthonormal. First case, as given:
$$ \displaystyle \underline{w}=\sum\limits_{i}{{{\alpha }_{i}}{{\underline{a}}_{i}}}\quad \forall \left\{ {{\alpha }_{1}},\ldots ,{{\alpha }_{n}} \right\}\in {{R}^{n}} $$     (3.1)

Second case:

$$ \displaystyle \underline{w}=\sum\limits_{i}{{{b}_{i}}{{\underline{a}}_{i}}}\quad \forall \left\{ {{b}_{1}},\ldots ,{{b}_{n}} \right\}\in {{R}^{n}} $$     (3.2)

where the $$ \displaystyle a_{i} $$'s are orthonormal basis vectors. We can characterize orthonormal basis vectors by


$$ \displaystyle {{\underline{a}}_{i}}.{{\underline{a}}_{j}}={{\delta }_{ij}} $$     (3.3)

$$ \displaystyle {{\delta }_{ij}}=\left\{ \begin{matrix} 1 & for & i=j \\ 0 & for & i\ne j \\ \end{matrix} \right. $$     (3.4)
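The orthonormality condition in Eqs. (3.3)-(3.4) can be checked numerically. A minimal sketch, assuming (for illustration only) the standard basis of $$R^{3}$$ as the orthonormal family:

```python
# Numerical sketch of Eqs. (3.3)-(3.4): for an orthonormal family,
# a_i . a_j equals the Kronecker delta. The standard basis of R^3 is
# used here purely as an illustrative orthonormal family.

def dot(u, v):
    """Euclidean scalar product of two same-length vectors."""
    return sum(ui * vi for ui, vi in zip(u, v))

# Standard basis of R^3 -- orthonormal by construction.
a = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]

for i in range(3):
    for j in range(3):
        delta = 1 if i == j else 0          # Eq. (3.4)
        assert dot(a[i], a[j]) == delta     # Eq. (3.3)
```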

The operator $$\displaystyle \underline{\underline{P}}(\underline{v})$$ was defined in Eq. (4) of meeting 7-2. This operator leads to a linear system with a stiffness matrix, an unknown vector, and a force vector.
$$ \displaystyle \underline{\underline{P}}(\underline{v}):=\sum\limits_{j=1}^{n}{{{\underline{b}}_{j}}{{v}_{j}}}-\underline{v}=0 $$     (3.5)

$$ \displaystyle {{\underline{b}}_{i}}.\underline{\underline{P}}(\underline{v})=0 $$     (3.6)

$$ \displaystyle {{\underline{b}}_{i}}.\sum\limits_{j}{{{\underline{b}}_{j}}{{v}_{j}}}={{\underline{b}}_{i}}.\underline{v} $$     (3.7)

$$ \displaystyle {{\left[ {{K}_{ij}} \right]}_{n\times n}}{{\left\{ {{v}_{j}} \right\}}_{n\times 1}}={{\left\{ {{F}_{i}} \right\}}_{n\times 1}} $$     (3.8)
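The chain from Eq. (3.5) to the linear system of Eq. (3.8) can be sketched numerically. A minimal illustration, assuming a small linearly independent (not orthonormal) family $$\underline{b}_{j}$$ in $$R^{3}$$ and a target vector $$\underline{v}$$, both chosen only for illustration:

```python
# Minimal sketch of Eqs. (3.5)-(3.8): expand v in the span of a basis {b_j}
# by solving K c = F with K_ij = b_i . b_j and F_i = b_i . v.
# The basis b and the vector v below are illustrative assumptions.
from fractions import Fraction

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

def solve(K, F):
    """Gauss-Jordan elimination in exact Fraction arithmetic (adequate for
    the small, well-conditioned systems used here)."""
    n = len(F)
    A = [[Fraction(K[i][j]) for j in range(n)] + [Fraction(F[i])] for i in range(n)]
    for c in range(n):
        p = next(r for r in range(c, n) if A[r][c] != 0)   # pivot row
        A[c], A[p] = A[p], A[c]
        for r in range(n):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [x - f * y for x, y in zip(A[r], A[c])]
    return [A[i][n] / A[i][i] for i in range(n)]

b = [(1, 0, 0), (1, 1, 0), (1, 1, 1)]   # linearly independent, not orthonormal
v = (2, 3, 5)

K = [[dot(bi, bj) for bj in b] for bi in b]   # stiffness matrix, Eq. (3.8)
F = [dot(bi, v) for bi in b]                  # force vector,     Eq. (3.7)
coeffs = solve(K, F)

# Reconstruct v from the coefficients: v = sum_j coeffs_j * b_j, i.e. P(v) = 0.
recon = tuple(sum(c * bj[k] for c, bj in zip(coeffs, b)) for k in range(3))
assert recon == (2, 3, 5)
```

Because the $$\underline{b}_{j}$$ here are not orthonormal, $$K$$ is not the identity and the system must genuinely be solved; with an orthonormal family $$K$$ would reduce to the identity matrix by Eq. (3.3).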

Find
Show that $$ \displaystyle \underline{w}.\underline{\underline{P}}(\underline{v})=0 $$ is equivalent to $$ \displaystyle {{\underline{a}}_{i}}.\underline{\underline{P}}(\underline{v})=0 $$

Solution
For $$ \displaystyle i=1,\ldots ,n $$ we have n equations and n unknowns. Since the components of vector w are arbitrary, we may choose them at our convenience to construct the proof.

Choice 1: $$ \displaystyle \beta _{1}=1,\ \beta _{2}=\cdots =\beta _{n}=0 $$, then we observe:


$$ \displaystyle \underline{w}={{\underline{a}}_{1}} $$     (3.9)

$$ \displaystyle \left\{ {{b}_{1}},\ldots ,{{b}_{n}} \right\}=\left\{ {{\alpha }_{1}},\ldots ,{{\alpha }_{n}} \right\}\Rightarrow {{\underline{a}}_{1}}={{\underline{b}}_{1}} $$     (3.10)

$$ \displaystyle {{\underline{a}}_{1}}.\underline{\underline{P}}(\underline{v})=0 $$     (3.11)

Choice 2: $$ \displaystyle \left\{ {{\beta }_{1}},\ldots ,{{\beta }_{n}} \right\}=\left\{ 0,1,0,\ldots ,0 \right\} $$, then:


$$ \displaystyle \underline{w}={{\underline{a}}_{2}} $$     (3.12)

$$ \displaystyle \left\{ {{b}_{1}},\ldots ,{{b}_{n}} \right\}=\left\{ {{\alpha }_{1}},\ldots ,{{\alpha }_{n}} \right\}\Rightarrow {{\underline{a}}_{2}}={{\underline{b}}_{2}} $$     (3.13)

$$ \displaystyle {{\underline{a}}_{2}}.\underline{\underline{P}}(\underline{v})=0 $$     (3.14)


$$ \displaystyle \vdots $$

Choice n: $$ \displaystyle \left\{ {{\beta }_{1}},\ldots ,{{\beta }_{n}} \right\}=\left\{ 0,\ldots ,0,1 \right\} $$, then:


$$ \displaystyle \underline{w}={{\underline{a}}_{n}} $$     (3.15)

$$ \displaystyle \left\{ {{b}_{1}},\ldots ,{{b}_{n}} \right\}=\left\{ {{\alpha }_{1}},\ldots ,{{\alpha }_{n}} \right\}\Rightarrow {{\underline{a}}_{n}}={{\underline{b}}_{n}} $$     (3.16)

$$ \displaystyle {{\underline{a}}_{n}}.\underline{\underline{P}}(\underline{v})=0 $$     (3.17)

So by selecting proper components for vector w we arrive at the same equations as given by Eq. (3.1); in fact, Eq. (3.1) is a more general case than Eq. (3.2). We can also broaden the combinations of the components of vector w. For example:
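The choice argument of Eqs. (3.9)-(3.17) can be sketched numerically: the scalar product $$\underline{w}.\underline{\underline{P}}(\underline{v})$$ is linear in the coefficients $$\beta_i$$, and each unit choice of $$\beta$$ collapses $$\underline{w}$$ to one $$\underline{a}_i$$. The residual vector and dimension below are illustrative assumptions:

```python
# Sketch of the equivalence argument: w . r, for w = sum_i beta_i a_i,
# decomposes as sum_i beta_i (a_i . r), and choosing beta as a unit
# coordinate vector extracts one equation a_i . r = 0 at a time.
# r stands in for P(v); its value and the dimension are illustrative.

def dot(u, v):
    return sum(ui * vi for ui, vi in zip(u, v))

n = 4
a = [tuple(1 if k == i else 0 for k in range(n)) for i in range(n)]  # orthonormal

r = (7, -2, 0, 3)   # an arbitrary residual, stands in for P(v)

# Linearity: w . r decomposes over the basis for ANY coefficients beta.
beta = [2, -1, 4, 5]
w = tuple(sum(b * ai[k] for b, ai in zip(beta, a)) for k in range(n))
assert dot(w, r) == sum(b * dot(ai, r) for b, ai in zip(beta, a))

# Unit choices: beta = e_i collapses w to a_i, so requiring w . r = 0 for
# every such w is exactly the system a_i . r = 0, i = 1, ..., n.
for i in range(n):
    e = [1 if j == i else 0 for j in range(n)]
    w_i = tuple(sum(b * ai[k] for b, ai in zip(e, a)) for k in range(n))
    assert w_i == a[i]   # Eqs. (3.9), (3.12), (3.15)
```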


$$ \displaystyle \left\{ {{\beta }_{1}},\ldots ,{{\beta }_{n}} \right\}=\left\{ 1,1,1,0,\ldots ,0 \right\} $$     (3.18)

$$ \displaystyle {{\underline{w}}_{1}}={{\beta }_{1}}{{\underline{a}}_{1}}+{{\beta }_{2}}{{\underline{a}}_{2}}+{{\beta }_{3}}{{\underline{a}}_{3}} $$     (3.19)

$$ \displaystyle {{\underline{w}}_{1}}={{\underline{b}}_{1}} $$     (3.20)

$$ \displaystyle {{\underline{b}}_{1}}={{\beta }_{1}}{{\underline{a}}_{1}}+{{\beta }_{2}}{{\underline{a}}_{2}}+{{\beta }_{3}}{{\underline{a}}_{3}} $$     (3.21)

This is essentially the same vector as the basis vector $$\displaystyle \underline{b}_{1}$$, and we can validate $$\displaystyle {{\underline{b}}_{1}}.\underline{\underline{P}}(\underline{v})=0$$. After choosing all such vectors for w, we must check that the determinant of their Gram matrix is nonzero; then we can say that our new vectors form an arbitrary linearly independent family.

Expressing the vectors b in terms of the orthonormal vectors a is exactly the same as the case presented in meeting 8-2, where we validated that the scalar product of any linearly independent function with $$\displaystyle \underline{\underline{P}}(\underline{v})$$ is equal to zero. Therefore we can always express a linearly independent basis vector b in terms of our orthonormal vectors.

=Problem 2.7:Determining orthogonality of family of functions=

Given
The problem was stated in meeting 10-3. Our family of functions is:


$$ \displaystyle F=\left\{ 1,x,{{x}^{2}},{{x}^{3}},{{x}^{4}} \right\} $$     (7.1)

Linear independence of a family of functions can be tested by setting up the Gram matrix: if the functions of the family are linearly independent, the determinant of their Gram matrix must be nonzero.


$$ \displaystyle \mathbf{\Gamma }\left( {{b}_{1}}(x),\ldots ,{{b}_{n}}(x) \right)={{\left[ \left\langle {{b}_{i}},{{b}_{j}} \right\rangle \right]}_{n\times n}} $$     (7.2)

$$ \displaystyle {{\mathbf{\Gamma }}_{ij}}=\left\langle {{b}_{i}},{{b}_{j}} \right\rangle =\int_{\Omega }{{{b}_{i}}(x){{b}_{j}}(x)dx} $$     (7.3)
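For the monomial family of Eq. (7.1), every entry of Eq. (7.3) reduces to a power-rule integral: $$\left\langle x^{i-1},x^{j-1}\right\rangle =\int_{0}^{1}x^{i+j-2}dx=\tfrac{1}{i+j-1}$$. A short exact-arithmetic sketch of the entries:

```python
# Exact Gram entries for b_k(x) = x^(k-1) on [0, 1], per Eq. (7.3):
# <b_i, b_j> = integral_0^1 x^(i+j-2) dx = 1/(i+j-1).
from fractions import Fraction

def gram_entry(i, j):
    """<x^(i-1), x^(j-1)> on [0, 1], with 1-based indices as in Eq. (7.2)."""
    return Fraction(1, i + j - 1)

# 5x5 Gram matrix of the family {1, x, x^2, x^3, x^4}.
Gamma = [[gram_entry(i, j) for j in range(1, 6)] for i in range(1, 6)]

assert Gamma[0][0] == 1                  # <b_1, b_1> = 1
assert Gamma[0][1] == Fraction(1, 2)     # <b_1, b_2> = 1/2
assert Gamma[4][4] == Fraction(1, 9)     # <b_5, b_5> = 1/9
```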

We can identify our basis functions as:

$$ \displaystyle {{b}_{1}}(x)=1 $$     (7.4)

$$ \displaystyle {{b}_{2}}(x)=x $$     (7.5)

$$ \displaystyle {{b}_{3}}(x)={{x}^{2}} $$     (7.6)

$$ \displaystyle {{b}_{4}}(x)={{x}^{3}} $$     (7.7)

$$ \displaystyle {{b}_{5}}(x)={{x}^{4}} $$     (7.8)

Find
Determine whether this family of basis functions is orthogonal over the domain $$\displaystyle \Omega =\left[ 0,1 \right] $$.

Solution
First we set up the Gram matrix.


$$ \displaystyle \mathbf{\Gamma }=\begin{bmatrix} {{b}_{11}} & {{b}_{12}} & {{b}_{13}} & {{b}_{14}} & {{b}_{15}} \\ {{b}_{21}} & {{b}_{22}} & {{b}_{23}} & {{b}_{24}} & {{b}_{25}} \\ {{b}_{31}} & {{b}_{32}} & {{b}_{33}} & {{b}_{34}} & {{b}_{35}} \\ {{b}_{41}} & {{b}_{42}} & {{b}_{43}} & {{b}_{44}} & {{b}_{45}} \\ {{b}_{51}} & {{b}_{52}} & {{b}_{53}} & {{b}_{54}} & {{b}_{55}} \end{bmatrix} $$     (7.9)

Now we can evaluate the components of this matrix. Since the scalar product is symmetric, the components that are symmetric about the diagonal are equal to each other.


$$ \displaystyle {{b}_{11}}=\left\langle {{b}_{1}},{{b}_{1}} \right\rangle =\int_{0}^{1}{1\cdot 1\,dx}=1 $$     (7.10)

$$ \displaystyle {{b}_{22}}=\left\langle {{b}_{2}},{{b}_{2}} \right\rangle =\int_{0}^{1}{x\cdot x\,dx}=\frac{1}{3} $$     (7.11)

$$ \displaystyle {{b}_{33}}=\left\langle {{b}_{3}},{{b}_{3}} \right\rangle =\int_{0}^{1}{{{x}^{2}}\cdot {{x}^{2}}dx}=\frac{1}{5} $$     (7.12)

$$ \displaystyle {{b}_{44}}=\left\langle {{b}_{4}},{{b}_{4}} \right\rangle =\int_{0}^{1}{{{x}^{3}}\cdot {{x}^{3}}dx}=\frac{1}{7} $$     (7.13)

$$ \displaystyle {{b}_{55}}=\left\langle {{b}_{5}},{{b}_{5}} \right\rangle =\int_{0}^{1}{{{x}^{4}}\cdot {{x}^{4}}dx}=\frac{1}{9} $$     (7.14)


$$ \displaystyle {{b}_{12}}={{b}_{21}}=\left\langle {{b}_{1}},{{b}_{2}} \right\rangle =\int_{0}^{1}{x\,dx}=\frac{1}{2} $$     (7.15)

$$ \displaystyle {{b}_{13}}={{b}_{31}}=\left\langle {{b}_{1}},{{b}_{3}} \right\rangle =\int_{0}^{1}{{{x}^{2}}dx}=\frac{1}{3} $$     (7.16)

$$ \displaystyle {{b}_{14}}={{b}_{41}}=\left\langle {{b}_{1}},{{b}_{4}} \right\rangle =\int_{0}^{1}{{{x}^{3}}dx}=\frac{1}{4} $$     (7.17)

$$ \displaystyle {{b}_{15}}={{b}_{51}}=\left\langle {{b}_{1}},{{b}_{5}} \right\rangle =\int_{0}^{1}{{{x}^{4}}dx}=\frac{1}{5} $$     (7.18)

$$ \displaystyle {{b}_{23}}={{b}_{32}}=\left\langle {{b}_{2}},{{b}_{3}} \right\rangle =\int_{0}^{1}{x\cdot {{x}^{2}}dx}=\frac{1}{4} $$     (7.19)

$$ \displaystyle {{b}_{24}}={{b}_{42}}=\left\langle {{b}_{2}},{{b}_{4}} \right\rangle =\int_{0}^{1}{x\cdot {{x}^{3}}dx}=\frac{1}{5} $$     (7.20)

$$ \displaystyle {{b}_{34}}={{b}_{43}}=\left\langle {{b}_{3}},{{b}_{4}} \right\rangle =\int_{0}^{1}{{{x}^{2}}\cdot {{x}^{3}}dx}=\frac{1}{6} $$     (7.21)
$$ \displaystyle {{b}_{45}}={{b}_{54}}=\left\langle {{b}_{4}},{{b}_{5}} \right\rangle =\int_{0}^{1}{{{x}^{3}}\cdot {{x}^{4}}dx}=\frac{1}{8} $$     (7.22)
$$ \displaystyle {{b}_{25}}={{b}_{52}}=\left\langle {{b}_{2}},{{b}_{5}} \right\rangle =\int_{0}^{1}{x\cdot {{x}^{4}}dx}=\frac{1}{6} $$     (7.23)

$$ \displaystyle {{b}_{35}}={{b}_{53}}=\left\langle {{b}_{3}},{{b}_{5}} \right\rangle =\int_{0}^{1}{{{x}^{2}}\cdot {{x}^{4}}dx}=\frac{1}{7} $$     (7.24)

We conclude with:


$$ \displaystyle \mathbf{\Gamma }=\begin{bmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} \\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} \\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7} \\ \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8} \\ \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8} & \frac{1}{9} \end{bmatrix} $$     (7.25)
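The matrix of Eq. (7.25) is the well-known 5×5 Hilbert matrix, so its determinant can be cross-checked exactly. A sketch in exact rational arithmetic, which also confirms the floating-point determinant quoted below:

```python
# Exact cross-check of Eq. (7.25): the Gram matrix of {1, x, ..., x^4}
# on [0, 1] is the 5x5 Hilbert matrix with entries 1/(i+j+1) (0-based).
from fractions import Fraction

n = 5
G = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det(M):
    """Determinant by fraction-exact Gaussian elimination."""
    M = [row[:] for row in M]
    d = Fraction(1)
    for c in range(len(M)):
        p = next(r for r in range(c, len(M)) if M[r][c] != 0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, len(M)):
            f = M[r][c] / M[c][c]
            M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return d

exact = det(G)
assert exact == Fraction(1, 266716800000)         # exact determinant
assert abs(float(exact) - 3.7493e-12) < 1e-16     # matches the quoted value
```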

We can easily take the determinant of the Gram matrix using a solver; MATLAB gives the determinant as $$\displaystyle 3.7493\times 10^{-12}$$, which is nonzero, so the family is linearly independent. The Gram matrix is symmetric, so its transpose equals itself; but for an orthogonal matrix the transpose must equal the inverse. The inverse of the Gram matrix, obtained from MATLAB, is:


$$ \displaystyle {{\mathbf{\Gamma }}^{-1}}=1.0\times {{10}^{5}}\begin{bmatrix} 0.0002 & -0.0030 & 0.0105 & -0.0140 & 0.0063 \\ -0.0030 & 0.0480 & -0.1890 & 0.2688 & -0.1260 \\ 0.0105 & -0.1890 & 0.7938 & -1.1760 & 0.5670 \\ -0.0140 & 0.2688 & -1.1760 & 1.7920 & -0.8820 \\ 0.0063 & -0.1260 & 0.5670 & -0.8820 & 0.4410 \end{bmatrix} $$     (7.26)

The inverse of the Gram matrix is not equal to its transpose, so the matrix is not orthogonal. Equivalently, the off-diagonal entries $$\displaystyle \left\langle {{b}_{i}},{{b}_{j}} \right\rangle$$ for $$i\ne j$$ are nonzero, so the family of functions is not orthogonal over $$\left[ 0,1 \right]$$.
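The conclusion can also be confirmed directly from the definition in Eq. (3.3): a family is orthogonal exactly when every off-diagonal Gram entry vanishes. A short sketch of that check:

```python
# Direct orthogonality check: the family {1, x, ..., x^4} is orthogonal
# on [0, 1] iff every off-diagonal Gram entry <b_i, b_j>, i != j, is zero.
# Here each such entry is 1/(i+j+1) != 0 (0-based indices), so it is not.
from fractions import Fraction

n = 5
G = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

off_diagonal = [G[i][j] for i in range(n) for j in range(n) if i != j]
assert all(entry != 0 for entry in off_diagonal)   # family is NOT orthogonal
```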
