User:Eml5526.s11.team6/hwk2

Homework 2

=Problem 2.1=

Find
For the heat problem, derive the following partial differential equation by performing a balance of heat:

$$ \frac{\partial}{\partial x}\left [ A(x)k(x)\frac{\partial u}{\partial x} \right ]+f(x,t)=A(x) \rho(x)c\cdot \frac{\partial u}{\partial t}\ $$

Solution
Consider the free-body diagram of an infinitesimally small section of length dx of the 1-D conductive bar with varying material properties, at a distance x from the origin, where A(x) is the varying cross-sectional area of the bar, k(x) the conductivity of the material, rho(x) the mass density, u the temperature (constrained to vary only along the x-axis for a 1-D bar), and t the time.



Where, heat flux at $$\displaystyle x=q(x)$$,

heat flux at $$\displaystyle x+dx=q(x+dx)$$

outward heat flow through A(x)=Q(x), at n(x)=-1,

$$\displaystyle Q(x)=q(x)A(x)n(x)= -q(x)A(x)$$ $$\displaystyle (Eq. 2.1.1) $$

Similarly, outward heat flow through $$\displaystyle A(x+dx)=Q(x+dx)$$, at $$\displaystyle n(x+dx)=1$$,

$$\displaystyle Q(x+dx)=q(x+dx)A(x+dx)n(x+dx)= q(x+dx)A(x+dx) $$ $$\displaystyle (Eq. 2.1.2) $$

Also, internal heat source per unit volume= r(x,t)

Hence, heat source per unit length= r(x,t)A(x) = f(x,t)$$\displaystyle (Eq. 2.1.3) $$

Now, balance of heat can be done as follows:

$$\displaystyle H_{1}= $$heat flow in control volume

$$\displaystyle =-[Q(x)+Q(x+dx)]$$ $$\displaystyle =-[-q(x)A(x)+q(x+dx)A(x+dx)]$$ $$\displaystyle (Eq. 2.1.4) $$

$$\displaystyle H_{2}= $$Heat by internal source r(x,t) $$\displaystyle =f(x,t)dx$$$$\displaystyle (Eq. 2.1.5) $$

$$\displaystyle H_{3}= $$rate of heat stored in the element due to temperature change $$\displaystyle=\;\; A(x)\rho (x)c\cdot\frac{\partial u}{\partial t}dx$$$$\displaystyle (Eq. 2.1.6) $$

Now, for balance of heat

$$\displaystyle H_{1}+H_{2}=H_{3}$$$$\displaystyle (Eq. 2.1.7) $$

Therefore, from $$\displaystyle (Eq. 2.1.4) $$, $$\displaystyle (Eq. 2.1.5) $$,  $$\displaystyle (Eq. 2.1.6) $$,  $$\displaystyle (Eq. 2.1.7) $$ we can write,

$$\displaystyle q(x)A(x)-q(x+dx)A(x+dx)+f(x,t)dx=\;\; A(x)\rho (x)c\cdot\frac{\partial u}{\partial t}dx$$$$\displaystyle (Eq. 2.1.8) $$

For brevity, write $$\displaystyle Q(x,t)=q(x,t)A(x)$$ for the rate of heat flow through the cross section at x, so that (Eq. 2.1.8) becomes

$$\displaystyle Q(x,t)-Q(x+dx,t)+f(x,t)dx=\;\; A(x)\rho (x)c\cdot\frac{\partial u}{\partial t}dx$$

Expanding $$Q(x+dx,t)$$ in a Taylor series about x and neglecting the higher order terms (h.o.t.) yields:

$$ \cancel {Q(x,t)}-[\cancel {Q(x,t)}+\frac {\partial Q}{\partial x}dx+(h.o.t.)]+f(x,t)dx=\;\; A(x)\rho (x)c\cdot\frac{\partial u}{\partial t}dx$$

Hence, dividing through by dx,

$$ -\frac {\partial }{\partial x}(Q(x,t))+f(x,t)=\;\; A(x)\rho (x)c\cdot\frac{\partial u}{\partial t}$$$$\displaystyle (Eq. 2.1.9) $$

where, by Fourier's law, $$\displaystyle q(x)=-k(x)\frac {\partial u}{\partial x}$$, so that $$\displaystyle Q(x,t)= -A(x)k(x)\frac {\partial u}{\partial x}$$.

Therefore we get

$$ \frac{\partial}{\partial x}\left [ A(x)k(x)\frac{\partial u}{\partial x} \right ]+f(x,t)=A(x) \rho(x)c\cdot \frac{\partial u}{\partial t} $$
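The balance of heat above can be sanity-checked numerically. The sketch below uses made-up material functions (not part of the assignment): with $$Q(x,t)=-A(x)k(x)\frac{\partial u}{\partial x}$$, the source f is *defined* from the PDE, and then the integral balance over a small cell [a, a+h] is verified to close.

```python
# Sanity check of the heat balance: -dQ/dx + f = A*rho*c*du/dt implies
#   Q(a) - Q(a+h) + f*h  ≈  (A*rho*c*du/dt)*h  over a small cell [a, a+h].
# A, k, rho, u below are arbitrary test data, not from the assignment.
import math

c = 2.0
def A(x):    return 1.0 + 0.5*x                 # varying cross section
def k(x):    return 2.0 + x                     # varying conductivity
def rho(x):  return 3.0 - x                     # varying density
def u(x, t): return math.sin(x)*math.exp(-t)    # assumed temperature field

def du_dx(x, t, eps=1e-6):
    return (u(x + eps, t) - u(x - eps, t)) / (2.0*eps)

def du_dt(x, t, eps=1e-6):
    return (u(x, t + eps) - u(x, t - eps)) / (2.0*eps)

def Q(x, t):   # heat flow rate through the section at x (Fourier's law)
    return -A(x)*k(x)*du_dx(x, t)

def f(x, t, eps=1e-5):   # heat source per unit length defined from the PDE
    dQdx = (Q(x + eps, t) - Q(x - eps, t)) / (2.0*eps)
    return A(x)*rho(x)*c*du_dt(x, t) + dQdx

a, h, t = 0.3, 1e-3, 0.7
xm = a + h/2.0                        # midpoint-rule quadrature point
lhs = Q(a, t) - Q(a + h, t) + f(xm, t)*h
rhs = A(xm)*rho(xm)*c*du_dt(xm, t)*h
print(abs(lhs - rhs))                 # small compared to |rhs| (~1e-3 here)
```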

=Problem 2.2=

Given
Reference for all formulae is Mtg 7. $$ \{ {{\mathbf{b}}_i},i = 1, \cdots ,n\} $$ are basis vectors for $$ {{\mathbf{R}}^n} $$ which are not necessarily orthonormal.

In order to find $$ \{ {{v}_i} \} $$ such that $$ {\mathbf{v}} = \sum\limits_{i = 1}^n v_i {{\mathbf{b}}_i} $$, we consider $$ \{ {{\mathbf{a}}_i},i = 1, \cdots ,n\} $$, an orthonormal basis in Cartesian coordinates, with $$ {{\mathbf{b}}_j} = b{_{jk}}{{\mathbf{a}}_k} $$ and $$ {{\mathbf{v}}=5{{\mathbf{a}}_1}-7{{\mathbf{a}}_2}-4{{\mathbf{a}}_3}} $$

where $$ \left[ b_{jk} \right] = \left[ {\begin{array}{ccc} 1&1&1 \\  2&{ - 1}&3 \\   3&2&6 \end{array}} \right] $$

Find
A) Find $$ \det \left[ b_{jk} \right] $$

B) Find $$ \det \Gamma $$ where $$ {\mathbf{\Gamma }} = \left\{ {{\mathbf{b}}_i} \cdot {{\mathbf{b}}_j} \right\} = {\mathbf{K}} $$

C) Find $$ {\mathbf{F}} = \left\{ {{{\mathbf{b}}_{\mathbf{i}}} \cdot {\mathbf{v}}} \right\}$$

D) Solve equation (5) on page 2 of Mtg 7 for $$ {\mathbf{d}} = \left\{ d_i \right\} $$

E) Use equation (1) on page 4 of Mtg 7 to form $$ \overline {\mathbf{K}} {\mathbf{d}} = \overline {\mathbf{F}} $$; what are $$ \overline {\mathbf{K}} $$ and $$ \overline {\mathbf{F}} $$?

F) Solve for $$ {\mathbf{d}}$$ and compare $$ {\mathbf{K}}$$ with $$ \overline {\mathbf{K}} $$

G) Observe symmetric properties of $$ {\mathbf{K}}$$ and $$ \overline {\mathbf{K}} $$ and discuss pros and cons of these two methods

Solution
A) The determinant of $$ \left[ b_{jk} \right] $$ is as follows:

B) Write $$ \left[ \Gamma \right] = \left[ K \right] $$ in matrix form:

Then the determinant is as follows:

C) Write $$ \left[ F \right] $$ in matrix form:

D) Use equation (2.2.5) on page 2 of Mtg 7 and write it in matrix form:

$$ \left[ {\begin{array}{ccc} 3&4&{11} \\  4&{14}&{22} \\   {11}&{22}&{49} \end{array}} \right] \left[ d \right] = \left[ {\begin{array}{c} { - 6} \\  5 \\   { - 23} \end{array}} \right] $$

Multiplying the above equation by the inverse of matrix K, we get d in matrix form:

E) Consider $$ {w_i} = {a_i},i = 1, \cdots ,n $$ and follow the steps above; it is then straightforward to get $$ \overline {\mathbf{K}} $$ and $$ \overline {\mathbf{F}} $$ in matrix form:

This is because the $$ {a_i} $$ are orthonormal, so their Gram matrix is the identity; the entries $$ a_i \cdot b_j $$ simply pick out the components $$ b_{ji} $$.

The same holds for the matrix $$ \overline {\mathbf{F}} $$.

F) As in step D, we get d in matrix form by finding the inverse of $$ \overline {\mathbf{K}} $$ using MATLAB.

Note that matrix d obtained in step D and step F are the same.
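Steps A–F can be cross-checked with a short script (a sketch in Python rather than the MATLAB used above; the numbers all follow from the given $$[b_{jk}]$$ and v):

```python
# Cross-check of steps A-F for the given [b_jk] and v.
import numpy as np

B = np.array([[1., 1., 1.],     # row i holds the components b_ik of b_i
              [2., -1., 3.],
              [3., 2., 6.]])
v = np.array([5., -7., -4.])    # components of v in the orthonormal a-basis

detB = np.linalg.det(B)                 # A) det[b_jk]
K = B @ B.T                             # B) K_ij = Gamma_ij = b_i . b_j
detK = np.linalg.det(K)                 # det Gamma = (det B)^2
F = B @ v                               # C) F_i = b_i . v
d = np.linalg.solve(K, F)               # D) K d = F
Kbar = B.T                              # E) Kbar_ij = a_i . b_j = b_ji
Fbar = v                                #    Fbar_i = a_i . v
dbar = np.linalg.solve(Kbar, Fbar)      # F) Kbar d = Fbar

print(detB, detK)                       # ≈ -8 and 64
print(d)
print(np.allclose(d, dbar))             # same d from both methods
```

Note that K is symmetric by construction (it is B Bᵀ), while K̄ = Bᵀ is not, which is the point of part G.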

G) $$ {\mathbf{K}}$$ is always a symmetric matrix because its entries are the dot products $$\underline{b}_{i}\cdot\underline{b}_{j}=\underline{b}_{j}\cdot\underline{b}_{i}$$. However, $$ \overline {\mathbf{K}} $$ is in general unsymmetric (here $$ \overline {\mathbf{K}} = [b_{jk}]^T $$, which is not symmetric).

The Bubnov-Galerkin method ($$w_i = b_i$$) makes the stiffness matrix K slightly more difficult to compute, but K is symmetric, and the basis vectors can be chosen arbitrarily.

The Petrov-Galerkin method ($$w_i \neq b_i$$, here $$w_i = a_i$$) gives an easier way to compute the stiffness matrix, but the resulting $$ \overline {\mathbf{K}} $$ is unsymmetric, and without orthonormal vectors the system of equations becomes more difficult to solve.

=Problem 2.3=

Find
Show that (Eq. 2.3.2) is equivalent to (Eq. 2.3.3).

Solution
(Eq. 2.3.2) is equivalent to (Eq. 2.3.3) given arbitrary choices of $$\beta_n$$. For example,

Choice 1
 * Choosing $$\beta_1 = 1, \beta_2 = ... = \beta_n = 0$$, and substituting into (Eq. 2.3.2) yields
 * $$ a_1 \cdot \underline P \left ( \underline{v} \right )=0$$

Choice 2
 * Choosing $$\beta_2 = 1, \beta_1 = \beta_3 = ... = \beta_n = 0$$, and substituting into (Eq. 2.3.2) yields
 * $$ a_2 \cdot \underline P \left ( \underline{v} \right )=0$$

Choice n
 * Choosing $$\beta_n = 1, \beta_1 = ... = \beta_{n-1} = 0$$, and substituting into (Eq. 2.3.2) yields
 * $$ a_n \cdot \underline P \left ( \underline{v} \right )=0$$

Therefore, since (Eq. 2.3.2) must hold for every choice of the $$ \beta_i $$ within the real domain, it follows that $$ a_i \cdot \underline P \left ( \underline{v} \right )=0$$ for each $$i = 1,...,n$$, which is (Eq. 2.3.3).

=Problem 2.4=

Find
To prove the solution provided by Wolfram Alpha:

Prove the following steps and use them to obtain the final proof of the above equation:

Part 1: $$ \int log(x)dx = x*log(x) - x$$ Solution provided by Wolfram Alpha using integration by parts.

Part 2: $$ \int x*log(x)dx = \frac {1}{2}*x^2* \left [ log(x) - \frac {1}{2} \right ] $$

Part 3: $$ \int \frac {x^2 dx}{1+C*x} = \frac {2*log(C*x + 1) + x*(C*x-2)*C}{2*C^3} $$

Part 4: $$ \int \frac {x^2 dx}{A+C*x} = \frac {2*A^2*log(C*x + A) + C*x*(C*x-2*A)}{2*C^3} $$

Part 5: Now refer to meeting 9 [[media:fe1.s11.mtg9.djvu|Mtg 9 (c)]] and find exact solution $$\displaystyle u(x)$$ for problem (3)

Part 6: Plot $$\displaystyle u(x)$$

Solution
Part 1

Prove Solution provided by Wolfram Alpha

using integration by parts, i.e.,

Part 2

Prove Solution provided by Wolfram Alpha

using integration by parts, (Eq. 2.4.3)

Part 3

Prove Solution provided by Wolfram Alpha

Using integration by parts, (Eq. 2.4.3)

Continuing the integration by parts with the latter part of the RHS of (Eq. 2.4.15), given (Eq. 2.4.7) and the substitution given in (Eq. 2.4.14),

Applying integration by parts again, we obtain the following relations:

Finally, the integral in the RHS of (Eq. 2.4.19) can be solved and simplified using (Eq. 2.4.12) and the integration by substitution given in (Eq. 2.4.14). Combining the former portion of the RHS of (Eq. 2.4.15), the former portion of the RHS of (Eq. 2.4.19), and the latter portion of the RHS of (Eq. 2.4.20), and then simplifying, the final solution to the integral in (Eq. 2.4.13) is found to be

Part 4

Prove Solution provided by Wolfram Alpha

and the first integration by parts given in Step 3, (Eq. 2.4.15),

Continuing with the integration by parts with the latter part of the RHS of (Eq. 2.4.24),

and given the second integration by parts in Step 3, (Eq. 2.4.19)

Finally, the integral in the RHS of (Eq. 2.4.27) can be solved for and simplified using (Eq. 2.4.12) and the integration by substitution given in (Eq. 2.4.23), i.e.,

Combining the former portion of the RHS of (Eq. 2.4.24), the former portion of the RHS of (Eq. 2.4.27), the latter portion of the RHS of (Eq. 2.4.28), and then simplifying, the final solution to the integral in (Eq. 2.4.22) can be found in a similar fashion to (Eq. 2.4.20) of Step 3, i.e.,

Final Step

Given (Eq. 2.4.29) from Step 4, if A = C = 1, then

Thus, the required result stated in the 'Find' section, (Eq. 2.4.1), is achieved.

Part 5

Given:

$$\frac{d}{dx}[(2+3x)*(\frac{du}{dx})]+5x = 0 \forall x \in \left]0,1 \right[  $$

$$ u(1)=4, -(\frac{du}{dx}) (x=0) = 6 $$

To find:

Find the exact solution for u(x):

Solution:

$$ \frac{d}{dx}[(2+3x)(\frac{du}{dx})]+5x=0 $$

$$ \frac{d}{dx}[(2+3x)(\frac{du}{dx})]= - 5x $$

Integrating with respect to x

$$ (2+3x) \frac{(du)}{(dx)}= \frac{-5x^2}{2}+k1 $$

$$ \frac{du}{dx}= \frac{-5x^2}{2(2+3x)}+\frac{k1}{(2+3x)} $$

Integrating again with respect to x and using eq.2.4.22

$$ u(x)= \frac{-5}{2}\left[\frac{8\log(3x+2)+3x(3x-4)}{2\cdot 3^3}\right]+k1\left[\frac{\ln(2+3x)}{3}\right]+k2 $$

To find the constants, we use the given boundary conditions:

$$ \frac{du}{dx}= \frac{-5*x^2}{2(2+3x)}+\frac{k1}{(2+3x)} $$

$$ but \frac{-du}{dx}(x=0)=6 $$

Substituting x=0 in the above equation,

$$ k1= -12 $$

Substituting $$ k1=-12 $$ and simplifying, we get:

$$ u(x)= \frac{-118}{27}*log(2+3x)-\frac{5*x^2}{12}+\frac{5}{9}x+k2 $$

Also u(x)=4 at x=1

Substituting x=1 in the expression for u(x)

$$ k2= 10.896 $$

Thus the final expression for u(x) is

$$ u(x)= \frac{-118}{27}\log(2+3x)-\frac{5x^2}{12}+\frac{5x}{9}+10.896 $$

Part 6
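Part 6 asks for a plot; as a sketch, the closed-form u(x) from Part 5 can be evaluated on a grid for any plotting tool. The snippet below (Python is an assumption; any environment works) also verifies both boundary conditions and the ODE numerically, keeping k2 in closed form instead of the rounded 10.896:

```python
# Evaluate the exact solution u(x) of Part 5 and check ODE + BCs.
import math

k2 = 4.0 + (118.0/27.0)*math.log(5.0) + 5.0/12.0 - 5.0/9.0   # from u(1) = 4

def u(x):
    return -(118.0/27.0)*math.log(2.0 + 3.0*x) - 5.0*x*x/12.0 + 5.0*x/9.0 + k2

def du(x, eps=1e-6):                  # central-difference derivative
    return (u(x + eps) - u(x - eps)) / (2.0*eps)

print(u(1.0))      # essential BC: u(1) = 4
print(-du(0.0))    # natural BC:  -u'(0) = 6

# ODE check: d/dx[(2+3x) u'(x)] + 5x should vanish on ]0,1[
def flux(x):
    return (2.0 + 3.0*x)*du(x)

for xx in (0.25, 0.5, 0.75):
    eps = 1e-4
    residual = (flux(xx + eps) - flux(xx - eps)) / (2.0*eps) + 5.0*xx
    print(abs(residual) < 1e-3)
```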



=Problem 2.5=

Given
We have Eq. (2) from page 7-2 of mtg 7 ,

$$ \underline{b} _{i}  .\underline{P}(\underline{v})=0 $$

where  i =  1, 2 ,..., n

Find
Show that $$ \underline{w} .\underline{P}(\underline{v})=0 $$ ,

where $$  \underline{w} = \sum_{i=1}^{n}\alpha_{i} \underline{b} _{i} $$ ,

and $$ \forall (\alpha _{1},\alpha _{2},...,\alpha _{n})\in R^{n} $$

Solution
We have,

$$ \underline{b} _{i}  .\underline{P}(\underline{v})=0 $$

With i = 1 to n, we can then write -

$$ \underline{b}_{1}.\underline{P}(\underline{v})=0 $$

$$ \underline{b}_{2}.\underline{P}(\underline{v})=0 $$

$$ \underline{b}_{3}.\underline{P}(\underline{v})=0 $$

$$ \vdots $$

$$ \underline{b} _{n}.\underline{P}(\underline{v})=0 $$

Now consider a collection of arbitrary numbers $$  (  \alpha _{1}, \alpha _{2} ,..., \alpha _{n}  ) $$, i.e. $$ \alpha _{i} $$ with i = 1 to n.

Now, multiplying each of the above n equations on both sides by the corresponding $$ \alpha _{i} $$, we get the following new set of n equations -

$$ \alpha _{1}. \underline{b}_{1} .\underline{P}(\underline{v})=0 $$

$$ \alpha _{2}. \underline{b}_{2}.\underline{P}(\underline{v})=0 $$

$$ \alpha _{3}. \underline{b}_{3}.\underline{P}(\underline{v})=0 $$

$$ \vdots $$

$$ \alpha _{n}. \underline{b}_{n}.\underline{P}(\underline{v})=0 $$

Adding the above n equations, we get the following -

$$ \alpha _{1} \underline{b}_{1}.\underline{P}(\underline{v})+ $$ $$ \alpha _{2} \underline{b}_{2}.\underline{P}(\underline{v})+...+ $$ $$ \alpha _{n} \underline{b}_{n}.\underline{P}(\underline{v})=0 $$

which in summation notation can be expressed as follows:

$$ \sum_{i=1}^{n}\alpha_{i} \underline{b}_{i} .\underline{P}(\underline{v})=0 $$

But as stated in the 'Find' section above,

$$ \underline{w} = \sum_{i=1}^{n}\alpha_{i} \underline{b}_{i}$$ ,

Hence we can now say that,

$$ \underline{w}.\underline{P}(\underline{v})=0 $$

This shows that equation (2) of page 7-2 of Mtg 7 can be used to obtain equation (2) of page 8-2 of Mtg 7.
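The argument can be illustrated numerically with made-up vectors in R⁴ (a sketch): if a vector P is orthogonal to every $$b_i$$, it is orthogonal to any linear combination $$w = \sum_i \alpha_i b_i$$.

```python
# Numerical illustration: b_i . P = 0 for all i  =>  w . P = 0.
import numpy as np

P = np.array([1.0, -2.0, 1.0, 3.0])

def remove_P_component(x):
    """Project out the component of x along P, so that x . P = 0."""
    return x - (x @ P) / (P @ P) * P

rng = np.random.default_rng(0)
b = [remove_P_component(rng.standard_normal(4)) for _ in range(3)]
for bi in b:
    print(abs(bi @ P) < 1e-12)    # each b_i . P = 0 by construction

alpha = rng.standard_normal(3)    # arbitrary coefficients
w = sum(a * bi for a, bi in zip(alpha, b))
print(abs(w @ P) < 1e-12)         # hence w . P = 0
```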

=Problem 2.6=

Given
$$F=\left \{ 1,cos(\omega x),cos(2\omega x),sin(\omega x),sin(2\omega x) \right \} , \Omega =[0,T], i=1,2$$

Find
1) Construct $$\Gamma_{5\times 5}(F)$$ and observe the proportionality of $$\Gamma$$.

2) Find det Γ(F)

3) Conclude whether F is an orthogonal basis, i.e. whether $$\Gamma _{ij}=\left \langle b_{i},b_{j} \right \rangle=\delta _{ij}$$

Solution
We have, from the reference Mtg 10 page 2:

1) $$ \mathbf{\Gamma} _{5x5}=\int_{\Omega }{}b_{i}\cdot b_{j}\;dx=\int_{0}^{T}b_{i}\cdot b_{j}\;dx $$

Constructing a matrix of $$b_i \cdot b_j$$ yields:

$$ \Rightarrow \begin{bmatrix} 1\cdot 1 & cos(\omega x)\cdot 1 & cos(2\omega x)\cdot 1 & sin(\omega x)\cdot 1 & sin(2\omega x)\cdot 1\\ 1\cdot cos(\omega x) & cos(\omega x)\cdot cos(\omega x) & cos(2\omega x)\cdot cos(\omega x) & sin(\omega x)\cdot cos(\omega x) & sin(2\omega x)\cdot cos(\omega x)\\ 1\cdot cos(2\omega x) & cos(\omega x)\cdot cos(2\omega x) & cos(2\omega x)\cdot cos(2\omega x) & sin(\omega x)\cdot cos(2\omega x) & sin(2\omega x)\cdot cos(2\omega x)\\ 1\cdot sin(\omega x) & cos(\omega x)\cdot sin(\omega x) & cos(2\omega x)\cdot sin(\omega x) & sin(\omega x)\cdot sin(\omega x) & sin(2\omega x)\cdot sin(\omega x)\\ 1\cdot sin(2\omega x) & cos(\omega x)\cdot sin(2\omega x) & cos(2\omega x)\cdot sin(2\omega x) & sin(\omega x)\cdot sin(2\omega x) & sin(2\omega x)\cdot sin(2\omega x) \end{bmatrix} $$

Multiplying out the above matrix and integrating term by term gives the following antiderivatives, to be evaluated between the limits 0 and T to obtain $$ \mathbf{\Gamma} _{5x5} $$:

$$\Rightarrow \begin{bmatrix} x & \frac{sin(\omega x)}{\omega } & \frac{sin(2\omega x)}{2\omega } & -\frac{cos(\omega x)}{\omega } & -\frac{cos(2\omega x)}{2\omega }\\ \frac{sin(\omega x)}{\omega } & \frac{1}{2}(x+\frac{sin(2\omega x)}{2\omega }) & \frac{1}{6\omega }(3sin(\omega x)+sin(3\omega x)) & -\frac{1}{2\omega }cos^2(\omega x) & -\frac{2}{3\omega }cos^3(\omega x)\\ \frac{sin(2\omega x)}{2\omega } & \frac{1}{6\omega }(3sin(\omega x)+sin(3\omega x)) & \frac{1}{8\omega }(4\omega x+sin(4\omega x)) & \frac{1}{6\omega }(3cos(\omega x)-cos(3\omega x)) & -\frac{1}{8\omega }cos(4\omega x)\\ -\frac{cos(\omega x)}{\omega } & -\frac{1}{2\omega }cos^2(\omega x) & \frac{1}{6\omega }(3cos(\omega x)-cos(3\omega x)) & \frac{1}{2}(x-\frac{sin(\omega x)cos(\omega x)}{\omega}) & \frac{2}{3\omega }sin^3(\omega x)\\ -\frac{cos(2\omega x)}{2\omega } & -\frac{2}{3\omega }cos^3(\omega x) & -\frac{1}{8\omega }cos(4\omega x) & \frac{2}{3\omega}sin^3(\omega x) & \frac{1}{8\omega}(4\omega x-sin(4\omega x)) \end{bmatrix} $$

By definition of Angular Frequency

$$2\pi =\omega T\rightarrow \omega=\frac{2\pi}{T}$$

Evaluating each entry of the matrix above between the limits 0 and T, using a table of integrals, yields the following terms for the final integrated matrix.

$$\begin{array}{l} \int_0^T {1 \cdot 1\,dx} = x|_0^T = T \\ \int_0^T {1 \cdot \cos (\omega x)dx} = \frac{1}{\omega }\sin (\omega x)|_0^T = 0 \\ \int_0^T {1 \cdot \cos (2\omega x)dx} = \frac{1}{2\omega}\sin (2\omega x)|_0^T = 0 \\ \int_0^T {1 \cdot \sin (\omega x)dx} = - \frac{1}{\omega }\cos (\omega x)|_0^T = 0 \\ \int_0^T {1 \cdot \sin (2\omega x)dx} = - \frac{1}{2\omega}\cos (2\omega x)|_0^T = 0 \\ \int_0^T {\cos (\omega x) \cdot \cos (\omega x)dx} = \left( {\frac{\sin(2\omega x)}{4\omega} + \frac{x}{2}} \right)|_0^T = \frac{T}{2} \\ \int_0^T {\cos (\omega x) \cdot \cos (2\omega x)dx} = \frac{3\sin(\omega x)+\sin(3\omega x)}{6\omega}|_0^T = 0 \\ \int_0^T {\cos (\omega x) \cdot \sin (\omega x)dx} = - \frac{\cos^2(\omega x)}{2\omega}|_0^T = 0 \\ \int_0^T {\cos (\omega x) \cdot \sin (2\omega x)dx} = - \frac{2\cos^3(\omega x)}{3\omega}|_0^T = 0 \\ \int_0^T {\cos (2\omega x) \cdot \cos (2\omega x)dx} = \left( {\frac{\sin(4\omega x)}{8\omega} + \frac{x}{2}} \right)|_0^T = \frac{T}{2} \\ \int_0^T {\cos (2\omega x) \cdot \sin (\omega x)dx} = \frac{3\cos(\omega x)-\cos(3\omega x)}{6\omega}|_0^T = 0 \\ \int_0^T {\cos (2\omega x) \cdot \sin (2\omega x)dx} = - \frac{\cos(4\omega x)}{8\omega}|_0^T = 0 \\ \int_0^T {\sin (\omega x) \cdot \sin (\omega x)dx} = \left( { - \frac{\sin(2\omega x)}{4\omega} + \frac{x}{2}} \right)|_0^T = \frac{T}{2} \\ \int_0^T {\sin (\omega x) \cdot \sin (2\omega x)dx} = \frac{2\sin^3(\omega x)}{3\omega}|_0^T = 0 \\ \int_0^T {\sin (2\omega x) \cdot \sin (2\omega x)dx} = \left( { - \frac{\sin(4\omega x)}{8\omega} + \frac{x}{2}} \right)|_0^T = \frac{T}{2} \end{array}$$

The above integral results were verified with Wolfram Alpha.

The $$ \mathbf{\Gamma} _{5x5} $$ matrix is thus:

$$ \mathbf{\Gamma} _{5x5} = \begin{bmatrix} T&0&0&0&0\\ 0&\frac{T}{2}&0&0&0\\ 0&0&\frac{T}{2}&0&0\\ 0&0&0&\frac{T}{2}&0\\ 0&0&0&0&\frac{T}{2} \end{bmatrix} $$

We note that this Gram matrix is a diagonal matrix.

2) Determinant of the above Gram matrix $$ \mathbf{\Gamma} _{5x5} $$:

The determinant of the above Γ matrix, calculated using MATLAB, is

$$ \det \mathbf{\Gamma} _{5x5} = T\left(\frac{T}{2}\right)^4 = \frac{T^5}{16} $$

We note that the determinant is non-zero, and hence the given functions form a family of linearly independent basis functions.

3) We see that the $$ \mathbf{\Gamma} _{5x5} $$ derived above is diagonal but not equal to the Kronecker delta: the diagonal entries are T and T/2 rather than 1. The basis F is therefore orthogonal but not orthonormal.


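As a cross-check, the Gram matrix can be computed numerically for one particular period (a sketch: T = 2π is an arbitrary choice, so ω = 1, and trapezoidal quadrature on a fine grid stands in for the symbolic integration above):

```python
# Numerical cross-check of Gamma_5x5 for T = 2*pi (omega = 1).
import numpy as np

T = 2.0*np.pi
w = 2.0*np.pi/T
x = np.linspace(0.0, T, 20001)

def trapz(f):
    """Composite trapezoidal rule on the fixed grid x."""
    dx = x[1] - x[0]
    return (f[:-1] + f[1:]).sum()*dx/2.0

basis = [np.ones_like(x), np.cos(w*x), np.cos(2*w*x),
         np.sin(w*x), np.sin(2*w*x)]
G = np.array([[trapz(bi*bj) for bj in basis] for bi in basis])

expected = np.diag([T, T/2, T/2, T/2, T/2])
print(np.max(np.abs(G - expected)))   # ~ 0: diagonal, but not the identity
```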

=Problem 2.7=

Given
$$F=\left \{ 1,x,x^2,x^3,x^4 \right \},

\Omega =[0,1]$$

Find
1) Construct $$\Gamma_{5\times 5}(F)$$ and observe the proportionality of $$\Gamma$$.

2) Find det Γ(F)

3) Conclude whether F is an orthogonal basis, i.e. whether $$\Gamma _{ij}=\left \langle b_{i},b_{j} \right \rangle=\delta _{ij}$$

Solution
1) $$ \mathbf{\Gamma} _{5x5}=\int_{\Omega }{}b_{i}\cdot b_{j}\;dx=\int_{0}^{1}b_{i}\cdot b_{j}\;dx $$

Constructing a matrix of $$b_i \cdot b_j$$ yields:

$$ \Rightarrow \begin{bmatrix} 1\cdot 1 & x\cdot 1 & x^2\cdot 1 & x^3\cdot 1 & x^4\cdot 1\\ 1\cdot x & x\cdot x & x^2\cdot x & x^3\cdot x & x^4\cdot x\\ 1\cdot x^2 & x\cdot x^2 & x^2\cdot x^2 & x^3\cdot x^2 & x^4\cdot x^2\\ 1\cdot x^3 & x\cdot x^3 & x^2\cdot x^3 & x^3\cdot x^3 & x^4\cdot x^3\\ 1\cdot x^4 & x\cdot x^4 & x^2\cdot x^4 & x^3\cdot x^4 & x^4\cdot x^4 \end{bmatrix} $$

Multiplying out the above matrix yields:

$$ \Rightarrow \begin{bmatrix} 1 & x & x^2 & x^3 & x^4\\ x & x^2 & x^3 & x^4 & x^5\\ x^2 & x^3 & x^4 & x^5 & x^6\\ x^3 & x^4 & x^5 & x^6 & x^7\\ x^4 & x^5 & x^6 & x^7 & x^8 \end{bmatrix} $$

Integrating the above matrix with respect to x yields:

$$ \Rightarrow \begin{bmatrix} x & \frac{x^2}{2} & \frac{x^3}{3} & \frac{x^4}{4} & \frac{x^5}{5}\\ \frac{x^2}{2} & \frac{x^3}{3} & \frac{x^4}{4} & \frac{x^5}{5} & \frac{x^6}{6}\\ \frac{x^3}{3} & \frac{x^4}{4} & \frac{x^5}{5} & \frac{x^6}{6} & \frac{x^7}{7}\\ \frac{x^4}{4} & \frac{x^5}{5} & \frac{x^6}{6} & \frac{x^7}{7} & \frac{x^8}{8}\\ \frac{x^5}{5} & \frac{x^6}{6} & \frac{x^7}{7} & \frac{x^8}{8} & \frac{x^9}{9} \end{bmatrix}_{0}^{1} $$

Evaluating the integrated matrix with the limits of [0,1] yielded:

$$ \Rightarrow \begin{bmatrix} 1-0 & \frac{1^2}{2}-0 & \frac{1^3}{3}-0 & \frac{1^4}{4}-0 & \frac{1^5}{5}-0\\ \frac{1^2}{2}-0 & \frac{1^3}{3}-0 & \frac{1^4}{4}-0 & \frac{1^5}{5}-0& \frac{1^6}{6}-0\\ \frac{1^3}{3}-0 & \frac{1^4}{4}-0 & \frac{1^5}{5}-0 & \frac{1^6}{6}-0& \frac{1^7}{7}-0\\ \frac{1^4}{4}-0 & \frac{1^5}{5}-0 & \frac{1^6}{6}-0 & \frac{1^7}{7}-0& \frac{1^8}{8}-0\\ \frac{1^5}{5}-0 & \frac{1^6}{6}-0 & \frac{1^7}{7}-0 & \frac{1^8}{8}-0& \frac{1^9}{9}-0 \end{bmatrix} $$

The Gram matrix of this family of linearly independent basis functions is thus:

$$ \mathbf{\Gamma} _{5x5} = \begin{bmatrix} 1 & \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5}\\ \frac{1}{2} & \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6}\\ \frac{1}{3} & \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7}\\ \frac{1}{4} & \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8}\\ \frac{1}{5} & \frac{1}{6} & \frac{1}{7} & \frac{1}{8} & \frac{1}{9} \end{bmatrix} $$

Γ is a symmetric matrix about the diagonal.

2) The determinant of the above Γ matrix, computed using MATLAB, is

$$ \det \mathbf{\Gamma} _{5x5} = \frac{1}{266716800000} \approx 3.75\times 10^{-12} $$

which is small but non-zero, so the given functions form a family of linearly independent basis functions.

3) For F to be an orthogonal basis in the sense above we would need $$\Gamma_{ij} = \delta_{ij}$$, i.e. Γ would have to be the identity matrix, with 1's on the diagonal and zeros everywhere else. Since Γ has non-zero off-diagonal entries, F is not an orthogonal basis.




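Since $$\Gamma_{ij} = \int_0^1 x^{i+j}\,dx = \frac{1}{i+j+1}$$ for i, j = 0..4 (the 5×5 Hilbert matrix), the determinant can also be obtained exactly with rational arithmetic, with no floating-point round-off (a sketch in Python; MATLAB was used above):

```python
# Exact determinant of Gamma (the 5x5 Hilbert matrix) with rationals.
from fractions import Fraction

n = 5
G = [[Fraction(1, i + j + 1) for j in range(n)] for i in range(n)]

def det(M):
    """Determinant by fraction-exact Gaussian elimination (no pivoting
    needed: the Hilbert matrix is positive definite)."""
    M = [row[:] for row in M]
    d = Fraction(1)
    for k in range(len(M)):
        d *= M[k][k]                    # product of pivots
        for i in range(k + 1, len(M)):
            r = M[i][k] / M[k][k]
            M[i] = [a - r*b for a, b in zip(M[i], M[k])]
    return d

print(det(G))    # 1/266716800000, about 3.75e-12: tiny but non-zero
```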

=Problem 2.8=

Refer to lecture slides [[media:fe1.s11.mtg11.djvu|11-1]] and [[media:fe1.s11.mtg10.djvu|10-4]] for the problem statement.

Given
where $$\displaystyle w^h(x)$$ and $$\displaystyle u^h(x)$$ are approximations of $$\displaystyle w(x)$$ and $$\displaystyle u(x)$$, respectively, as follows

and $$\displaystyle b_i(x)$$ for $$\displaystyle i=1,...,n$$ is a family of linearly independent basis functions.

Find
Show $$\displaystyle (Eq  2.8.1)\Leftrightarrow(Eq   2.8.2)$$.

Part 1
$$\displaystyle (8.1)\Rightarrow(8.2)$$

Since (Eq 8.1) is valid for all weighting functions $$\displaystyle w^h(x)$$, we can choose arbitrary values of $$\displaystyle c_i$$ to satisfy (Eq 8.3).

Case 1

Let $$\displaystyle \{c_1,c_2,c_3,...c_{n-1},c_n\}=\{1,0,0,...,0,0\}$$.

From (Eq 8.3):

Then (Eq 8.1) becomes:

Case 2

Let $$\displaystyle \{c_1,c_2,c_3,...c_{n-1},c_n\}=\{0,1,0,...,0,0\}$$.

From (Eq 8.3):

Then (Eq 8.1) becomes:


 * $$\displaystyle \vdots$$


 * $$\displaystyle \vdots$$

Case n

Let $$\displaystyle \{c_1,c_2,c_3,...c_{n-1},c_n\}=\{0,0,0,...,0,1\}$$.

From (Eq 8.3):

Then (Eq 8.1) becomes:

Hence, $$\displaystyle (Eq 8.1)\Rightarrow(Eq 8.2)$$:

Part 2
$$\displaystyle (Eq 8.2)\Rightarrow(Eq 8.1)$$

Multiplying both sides of (Eq 8.2) by $$\displaystyle c_i$$ yields

Since (Eq 8.12) equals zero for every value of i, we can sum (Eq 8.12) over all i without changing the result.

Hence,

=Problem 2.9=

Given
General one dimensional Model 1.0/Data set 2 on page 1 of Mtg 12:

where $$ {a_2} = 2,f = 3 $$

Consider $$ {b_j}\left( x \right) = \cos \left( {jx + \phi } \right) $$ and $$ \phi = \frac{1}{4}\pi $$ or $$ \phi  = \frac{3}{4}\pi $$

Find
A) Let n=2. What is the ndof of this problem?

B) Find 2 equations that enforce the boundary conditions for $$ {u^h}\left( x \right) = \sum\limits_{j = 0}^n d_j b_j\left( x \right) $$

C) Find 1 more equation to solve for $$ {\mathbf{d}} = {\left\{ d_j \right\}_{3 \times 1}} $$ by projecting the residue $$ {\mathbf{P}}\left( u^h \right) $$ on a basis function $$ {b_k}(x) $$ with $$ k = 0,1,2 $$ s.t. the additional equation is linearly independent of the above 2 equations

D) Display 3 equations in matrix form $$ {\mathbf{Kd}} = {\mathbf{F}} $$ and observe symmetric property of $$ {\mathbf{K}} $$

E) Solve for $$ {\mathbf{d}} $$

F) Construct $$ u_n^h\left( x \right) $$ and plot $$ u_n^h\left( x \right) \ vs. \ u\left( x \right) $$ ($$ u\left( x \right) $$ is the exact solution)

G) Repeat (A) to (F) for n=4 and n=6

H) Compare $$ u_n^h\left( {x = 0.5} \right) $$ for $$ n = 2,4,6 $$. If the error is $$ {e_n}\left( {0.5} \right) = u\left( {0.5} \right) - {u^h}\left( {0.5} \right) $$, plot $$ {e_n}\left( {0.5} \right) \ vs. \ n $$

Solution
A) Since j runs from 0 to n, the number of degrees of freedom is n+1, i.e. 3 when n=2.

B) From (Eq 9.2) we can get $$ \sum\limits_{j = 0}^2 {{d_j}{b_j}\left( 1 \right)} = 0 $$

Then rewrite the above equation in matrix form:

where

From (Eq 9.3) we can get $$ \sum\limits_{j = 0}^2 {{d_j}b_j^{\left( 1 \right)}\left( 0 \right)} =  - 4 $$

Then rewrite the above equation in matrix form:

where

C) Because the residue is $$ P\left( u^h \right) = \frac{d}{dx}\left( {{a_2}\frac{d u^h}{dx}} \right) + f\left( x \right)$$, projecting the residue on $$b_k(x)$$ gives:

$$ \sum\limits_{j = 0}^2 {\left\{ {\int_0^1 {{b_k}\left( x \right)\left[ {\frac{d}{dx}\left( {{a_2}\frac{d{b_j}\left( x \right)}{dx}} \right)} \right]dx} } \right\}} {d_j} = - \int_0^1 {{b_k}\left( x \right)f\left( x \right)dx} $$

Then rewrite the above equation in matrix form:

where

E)-H) To simplify the computation, the following Fortran code, based on (Eq 9.8), (Eq 9.9) and (Eq 9.10), was used to calculate the results for n=2,4,6 and $$\phi = \frac{1}{4}\pi,\frac{3}{4}\pi$$:

<pre>
module lib
  implicit none
contains
  ! invert a small square matrix onto itself (Gauss-Jordan, no pivoting)
  subroutine invert(matrix)
    implicit none
    real, intent(in out) :: matrix(:,:)
    integer :: i, k, n
    real :: con
    n = ubound(matrix,1)
    do k = 1, n
      con = matrix(k,k); matrix(k,k) = 1.
      matrix(k,:) = matrix(k,:)/con
      do i = 1, n
        if (i /= k) then
          con = matrix(i,k); matrix(i,k) = 0.0
          matrix(i,:) = matrix(i,:) - matrix(k,:)*con
        end if
      end do
    end do
    return
  end subroutine invert
end module lib

program main
  use lib
  implicit none
  integer :: n, i, j, k, m, l
  real, allocatable :: KK(:,:), d(:), F(:), b(:,:)
  real :: fi(2), pi, x(1001), y(1001), yh(1001), e(3,2)
  n = 6; pi = 3.1415926; fi(1) = pi/4.0; fi(2) = pi*3.0/4.0
  e = 0.0
  open (unit=11, file="egm6611.s11.team6.hw2.9.1.plt", status="replace")
  open (unit=12, file="egm6611.s11.team6.hw2.9.2.plt", status="replace")
  open (unit=13, file="egm6611.s11.team6.hw2.9.3.plt", status="replace")
  do l = 2, n, 2
    write(*,*) "When the degree of freedom is ", l+1
    do i = 1, 2
      allocate(KK(l+1,l+1), d(l+1), F(l+1), b(2,l+1))
      do j = 0, l
        b(1,j+1) = cos(j*1.0 + fi(i))        ! b_j(1)
        b(2,j+1) = -1.0*j*sin(j*0.0 + fi(i)) ! b_j'(0)
      end do
      do k = 0, l
        do j = 0, l
          if (k == j) then
            KK(k+1,j+1) = -1.0*k/2.0*(2.0*k - sin(2*fi(i)) + sin(2*fi(i)+2*k))
          else
            KK(k+1,j+1) = (2*j*j*j*(cos(fi(i))*sin(fi(i)) - cos(fi(i)+k)*sin(fi(i)+j)) &
                          - k*(cos(fi(i))*sin(fi(i)) - cos(fi(i)+j)*sin(fi(i)+k)))/(j*j - k*k)
          end if
          if (k == 0) then
            F(k+1) = -3.0*cos(fi(i))
          else
            F(k+1) = -3.0*(sin(fi(i)+k) - sin(fi(i)))/k
          end if
        end do
      end do
      write(*,*) "when phi= ", fi(i)
      do j = 1, l+1            ! replace the first two rows by the BCs
        KK(1,j) = b(1,j)
        KK(2,j) = b(2,j)
      end do
      F(1) = 0
      F(2) = -4
      call invert(KK)
      do k = 1, l+1            ! d = KK^{-1} F
        d(k) = 0.0
        do j = 1, l+1
          d(k) = d(k) + KK(k,j)*F(j)
        end do
      end do
      write(*,*) "solution vector d"
      write(*,*) d
      x = 0.0; y = 0.0; yh = 0.0
      do m = 1, 1001
        x(m) = (m-1)/1000.0
        y(m) = -3.0/4.0*x(m)*x(m) - 4.0*x(m) + 19.0/4.0   ! exact solution
        do k = 1, l+1
          yh(m) = yh(m) + d(k)*cos((k-1)*x(m) + fi(i))
        end do
      end do
      do k = 1, l+1            ! error at x = 0.5
        e(l/2,i) = e(l/2,i) + d(k)*cos((k-1)*0.5 + fi(i))
      end do
      e(l/2,i) = (-3.0/4.0*0.5*0.5 - 4.0*0.5 + 19.0/4.0) - e(l/2,i)
      if (i == 1) then
        write(11,*) "zone"
        do m = 1, 1001
          write(11,"(3F12.4)") x(m), y(m), yh(m)
        end do
      else
        write(12,*) "zone"
        do m = 1, 1001
          write(12,"(3F12.4)") x(m), y(m), yh(m)
        end do
      end if
      deallocate(KK, d, F, b)
    end do ! for i = 1, 2
  end do ! for l = 2, 4, 6
  write(13,"(A,2F12.4)") "2", e(1,1), e(1,2)
  write(13,"(A,2F12.4)") "4", e(2,1), e(2,2)
  write(13,"(A,2F12.4)") "6", e(3,1), e(3,2)
end program main
</pre>

Using this code we get the output result for all the situations as following:

With the degree of freedom = 3: with phi = 0.7853981 (pi/4), solution vector d = (1.142170, 7.209867, -0.7765067); with phi = 2.356194 (3 pi/4), solution vector d = (-17.24394, 13.96153, -4.152339).

With the degree of freedom = 5: with phi = 0.7853981 (pi/4), solution vector d = (-2.044705, 13.62609, -7.081253, 2.811028, -0.5599513); with phi = 2.356194 (3 pi/4), solution vector d = (-16.73928, 16.00889, -7.965427, 2.459494, -0.4499143).

With the degree of freedom = 7: with phi = 0.7853981 (pi/4), solution vector d = (-2.067435, 14.06402, -8.531017, 4.714233, -1.883228, 0.4795609, -5.8918476E-02); with phi = 2.356194 (3 pi/4), solution vector d = (-19.44370, 26.57353, -25.56540, 18.85248, -9.806686, 3.187393, -0.5081997).

A) For n=2,4,6 and $$ \phi = \frac{1}{4}\pi $$, we compare the results with the exact solution in the following image:



B) For n=2,4,6 and $$ \phi = \frac{3}{4}\pi $$, we compare the results with the exact solution in the following image:



In both images, we can observe that the approximate solution gets closer to the exact solution as n increases.

For the two phase shifts, we compare how the error converges in the following image:



Both phase shifts give results that converge by around n=6.
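As a cross-check of the Fortran program, the n=2, φ=π/4 case can be reproduced in a few lines (a sketch: the weighted-residual row here projects on b₂, one admissible choice of the extra equation, and uses trapezoidal quadrature; since a different projection choice changes d itself, only the two enforced boundary values are verified):

```python
# Cross-check of the n=2, phi = pi/4 Galerkin system K d = F.
import numpy as np

phi = np.pi/4.0
a2, f = 2.0, 3.0

def b(j, x):   return np.cos(j*x + phi)
def db(j, x):  return -j*np.sin(j*x + phi)
def d2b(j, x): return -j*j*np.cos(j*x + phi)

x = np.linspace(0.0, 1.0, 4001)
def trapz(g):
    dx = x[1] - x[0]
    return (g[:-1] + g[1:]).sum()*dx/2.0

K = np.zeros((3, 3)); F = np.zeros(3)
K[0] = [b(j, 1.0) for j in range(3)];  F[0] = 0.0    # BC: u^h(1) = 0
K[1] = [db(j, 0.0) for j in range(3)]; F[1] = -4.0   # BC: (u^h)'(0) = -4
for j in range(3):                                   # project residue on b_2
    K[2, j] = trapz(b(2, x)*a2*d2b(j, x))
F[2] = -trapz(b(2, x)*f*np.ones_like(x))

d = np.linalg.solve(K, F)
uh1 = sum(d[j]*b(j, 1.0) for j in range(3))
duh0 = sum(d[j]*db(j, 0.0) for j in range(3))
print(uh1, duh0)    # ~ 0 and -4: both boundary conditions hold
```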

=Contributing Team Members=
 * Eml5526.s11.team6.joglekar 19:52, 2 February 2011 (UTC)
 * Eml5526.s11.team6.kurth 00:57, 2 February 2011 (UTC)
 * Eml5526.s11.team6.gravois 00:59, 2 February 2011 (UTC)
 * Eml5526.s11.team6.tupsakhare 01:11, 2 February 2011 (UTC)
 * Eml5526.s11.team6.deshpande 03:37, 2 February 2011 (UTC)
 * Eml5526.s11.team6.vork 12:23, 2 February 2011 (UTC)