User:Eml4500.f08.bottle.loschak/HW6

HW 6
Equation (1), a partial differential equation (PDE), was given in lecture 29 and used to derive Equation (2).

$$ \frac{\partial }{\partial x}\left[(EA)\frac{\partial u}{\partial x} \right] + f = m\ddot{u} $$             (1)

$$ \mathbf{M\ddot{d}}+\mathbf{Kd} = \mathbf{F} $$             (2)

The partial differential equation, Equation (1), represents a summation of forces on infinitesimally thin sections $$dx$$ over the bounds 0 to length L. Each term in Equation (2) represents the same quantity discretely, as a matrix of values at individual points; Equation (1) is simply the continuous expression of the same thing. Therefore, summing $$m\ddot{u}$$ continuously between 0 and L gives very nearly the same result as the matrix expression $$\mathbf{M\ddot{d}}$$, and the same is true for the other terms.

Knowing that Equations (1) and (2) are the continuous and discrete versions of the same system, we can derive the continuous Equation (3) and use the discrete case to verify that our process is correct.

First, we know that $$\int w(x)g(x)dx = 0$$ for all $$w(x)$$. In the case that $$w(x) = g(x)$$, the above equation becomes $$\int g^{2}(x)dx = 0$$. Since $$g^{2}(x) \geq 0$$ everywhere, the integral can only vanish if $$g(x) = 0$$.
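A minimal numerical illustration of this argument (the residual $$g(x)=\sin \pi x$$ on $$[0,1]$$ is an arbitrary choice, not from the homework): any residual that is not identically zero makes $$\int g^{2}dx$$ strictly positive, so forcing the integral to zero forces $$g = 0$$.

```python
import numpy as np

# Arbitrary nonzero candidate residual g(x) on [0, 1] (illustrative choice only)
x = np.linspace(0.0, 1.0, 1001)
g = np.sin(np.pi * x)

# g^2(x) >= 0 everywhere, so the integral vanishes only if g is identically zero
integral = np.trapz(g**2, x)
print(integral)  # strictly positive because g is not identically zero
```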

Therefore, since Equation (1) can be rearranged to equal zero, multiplying that residual by $$w(x)$$ and integrating over the bar gives Equation (3).

$$ 0 = \int_{x = 0}^{x = L}{w(x) \left\{ \frac{\partial }{\partial x}\left[ EA\frac{\partial u}{\partial x}\right] + f - m\ddot{u} \right\} dx} $$             (3)

To verify this process, we recall a proof for discrete matrices from lecture 24 that was used to justify weighting matrices.

For the case where

$$\textbf{W}^{T}_{1x6}(\textbf{Kd}-\textbf{F})_{6x1}=0_{1x1} \quad \text{for all } \textbf{W},$$

we use the following proof from Team Bottle HW 5.

Proof: Select $$ \textbf{W}^{T}_{1x6}=\begin{bmatrix} 1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}_{1x6} $$. Plugging this $$\textbf{W}$$ back into the Principle of Virtual Work equation gives

$$ \begin{bmatrix}1 & 0 & 0 & 0 & 0 & 0 \end{bmatrix} (\textbf{Kd}-\textbf{F}) = 1 \cdot \bigg[ \sum_{j=1}^{6}{K_{1j}d_{j}} - F_{1} \bigg]+ \sum_{i=2}^{6}{0\cdot \bigg[\sum_{j=1}^{6}{K_{ij}d_{j}} - F_{i} \bigg]}=0 $$

Therefore,

$$ \sum_{j=1}^{6}{K_{1j}d_{j}}=F_{1} $$

Following through as before, each remaining choice of unit vector for $$\textbf{W}$$ zeroes out every row except one and isolates that row's equation:

$$ \textbf{W}^{T}_{1x6}=\begin{bmatrix} 0 & 1 & 0 & 0 & 0 & 0 \end{bmatrix}_{1x6} \quad\Rightarrow\quad \sum_{j=1}^{6}{K_{2j}d_{j}}=F_{2} $$

$$ \textbf{W}^{T}_{1x6}=\begin{bmatrix} 0 & 0 & 1 & 0 & 0 & 0 \end{bmatrix}_{1x6} \quad\Rightarrow\quad \sum_{j=1}^{6}{K_{3j}d_{j}}=F_{3} $$

$$ \textbf{W}^{T}_{1x6}=\begin{bmatrix} 0 & 0 & 0 & 1 & 0 & 0 \end{bmatrix}_{1x6} \quad\Rightarrow\quad \sum_{j=1}^{6}{K_{4j}d_{j}}=F_{4} $$

$$ \textbf{W}^{T}_{1x6}=\begin{bmatrix} 0 & 0 & 0 & 0 & 1 & 0 \end{bmatrix}_{1x6} \quad\Rightarrow\quad \sum_{j=1}^{6}{K_{5j}d_{j}}=F_{5} $$

$$ \textbf{W}^{T}_{1x6}=\begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 1 \end{bmatrix}_{1x6} \quad\Rightarrow\quad \sum_{j=1}^{6}{K_{6j}d_{j}}=F_{6} $$

Hence, collecting all six row equations gives us $$ \textbf{Kd}=\textbf{F} $$.
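The unit-vector argument above is easy to check numerically. The sketch below uses a hypothetical symmetric 6x6 matrix and arbitrary displacements (illustrative values, not from the homework): with $$\textbf{F}=\textbf{Kd}$$, each unit-vector choice of $$\textbf{W}$$ recovers one row equation with zero residual.

```python
import numpy as np

# Hypothetical 6x6 stiffness matrix and displacement vector (illustrative only)
rng = np.random.default_rng(0)
K = rng.random((6, 6))
K = K + K.T              # symmetrize, as a stiffness matrix would be
d = rng.random(6)
F = K @ d                # force vector consistent with Kd = F

# Choosing W as each unit vector in turn isolates row i: sum_j K_ij d_j - F_i
for i in range(6):
    W = np.zeros(6)
    W[i] = 1.0
    residual = W @ (K @ d - F)
    print(i, residual)   # each residual is 0 (up to round-off)
```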

HW from slide 33-5:

Given the following equation, find an expression for $$ \mathbf{k^{(i)}} $$.

$$ \mathbf{k}^{(i)}_{2x2}=\int_{\tilde{x}=0}^{\tilde{x}=L^{(i)}}\mathbf{B^{T}}(\tilde{x})(EA)(\tilde{x})\mathbf{B}(\tilde{x})d\tilde{x}$$

It is already known that

$$ \mathbf{B}(\tilde{x}) = \begin{bmatrix}\frac{dN_{i}}{d\tilde{x}}(\tilde{x}) & \frac{dN_{i+1}}{d\tilde{x}}(\tilde{x})\end{bmatrix} $$

$$ \mathbf{B^{T}}(\tilde{x}) = \begin{bmatrix}\frac{dN_{i}}{d\tilde{x}}(\tilde{x})\\ \frac{dN_{i+1}}{d\tilde{x}}(\tilde{x})\end{bmatrix} $$

(Note that $$\mathbf{B}$$ contains the derivatives of the shape functions, since it maps the nodal displacements to the strain $$\frac{\partial u}{\partial \tilde{x}}$$.)

Also, from lecture 33, it is known that

$$A(\tilde{x})=N^{(i)}_1(\tilde{x})A_1+N^{(i)}_2(\tilde{x})A_2$$ $$E(\tilde{x})=N^{(i)}_1(\tilde{x})E_1+N^{(i)}_2(\tilde{x})E_2$$

Since $$N_{i}$$ is assigned to Node 1 and $$N_{i+1}$$ is assigned to Node 2, the following two equations are true.

$$N_{i}(\tilde{x}) = N_{1}^{(i)}(\tilde{x}) $$ $$N_{i+1}(\tilde{x}) = N_{2}^{(i)}(\tilde{x}) $$

According to the definition of N, at the point $$\tilde{x} = 0$$, $$N_{1}^{(i)} = 1$$ and $$N_{2}^{(i)} = 0$$; at the point $$\tilde{x} = L^{(i)}$$, $$N_{1}^{(i)} = 0$$ and $$N_{2}^{(i)} = 1$$. The linear shape functions satisfying these conditions are

$$ N_{1}^{(i)}(\tilde{x}) = 1-\frac{\tilde{x}}{L^{(i)}} \qquad N_{2}^{(i)}(\tilde{x}) = \frac{\tilde{x}}{L^{(i)}} $$

so their derivatives are constants:

$$ \frac{dN_{1}^{(i)}}{d\tilde{x}} = -\frac{1}{L^{(i)}} \qquad \frac{dN_{2}^{(i)}}{d\tilde{x}} = \frac{1}{L^{(i)}} $$

Plugging these expressions back into our original $$ \mathbf{k^{(i)}} $$ equation results in

$$ \mathbf{k}^{(i)} = \int_{\tilde{x}=0}^{\tilde{x}=L^{(i)}} \frac{1}{L^{(i)}}\begin{Bmatrix} -1\\ 1 \end{Bmatrix} E(\tilde{x})A(\tilde{x}) \frac{1}{L^{(i)}}\begin{bmatrix} -1 & 1 \end{bmatrix} d\tilde{x} = \frac{1}{(L^{(i)})^{2}}\begin{bmatrix} 1 & -1\\ -1 & 1 \end{bmatrix} \int_{\tilde{x}=0}^{\tilde{x}=L^{(i)}} E(\tilde{x})A(\tilde{x})\, d\tilde{x} $$

Multiplying the 2x1 and 1x2 matrices together pulled the constant $$\mathbf{B^{T}}\mathbf{B}$$ outside the integral. Expanding the product of the two linear interpolations gives

$$ E(\tilde{x})A(\tilde{x}) = \left( N_{1}^{(i)} \right)^{2}E_{1}A_{1} + N_{1}^{(i)}N_{2}^{(i)}\left( E_{1}A_{2}+E_{2}A_{1} \right) + \left( N_{2}^{(i)} \right)^{2}E_{2}A_{2} $$

and, using $$ \int_{0}^{L^{(i)}}\left( N_{1}^{(i)} \right)^{2}d\tilde{x} = \int_{0}^{L^{(i)}}\left( N_{2}^{(i)} \right)^{2}d\tilde{x} = \frac{L^{(i)}}{3} $$ and $$ \int_{0}^{L^{(i)}}N_{1}^{(i)}N_{2}^{(i)}\,d\tilde{x} = \frac{L^{(i)}}{6} $$,

$$ \int_{\tilde{x}=0}^{\tilde{x}=L^{(i)}} E(\tilde{x})A(\tilde{x})\, d\tilde{x} = \frac{L^{(i)}}{6}\left( 2E_{1}A_{1} + E_{1}A_{2} + E_{2}A_{1} + 2E_{2}A_{2} \right) $$

Therefore,

$$ \mathbf{k}^{(i)}_{2x2} = \frac{2E_{1}A_{1} + E_{1}A_{2} + E_{2}A_{1} + 2E_{2}A_{2}}{6L^{(i)}} \begin{bmatrix} 1 & -1\\ -1 & 1 \end{bmatrix} $$
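As a numerical cross-check of the element stiffness integral, the sketch below evaluates $$\int \mathbf{B^{T}}(EA)\mathbf{B}\,d\tilde{x}$$ by quadrature with linearly interpolated $$E$$ and $$A$$, using hypothetical nodal values ($$L$$, $$E_{1}$$, $$E_{2}$$, $$A_{1}$$, $$A_{2}$$ are illustrative choices), and compares against the closed form $$\frac{2E_{1}A_{1}+E_{1}A_{2}+E_{2}A_{1}+2E_{2}A_{2}}{6L}\begin{bmatrix} 1 & -1\\ -1 & 1 \end{bmatrix}$$ that follows from the constant $$\mathbf{B}=\begin{bmatrix} -1/L & 1/L \end{bmatrix}$$.

```python
import numpy as np

# Hypothetical element data (illustrative values only)
L = 2.0
E1, E2 = 200.0, 100.0
A1, A2 = 3.0, 1.5

# Linearly interpolated E(x~) and A(x~) via the shape functions N1, N2
xt = np.linspace(0.0, L, 20001)
N1 = 1.0 - xt / L
N2 = xt / L
EA = (N1 * E1 + N2 * E2) * (N1 * A1 + N2 * A2)

# Constant strain-displacement row B = [-1/L, 1/L]; B^T B factors out of the integral
B = np.array([[-1.0 / L, 1.0 / L]])
k_quad = (B.T @ B) * np.trapz(EA, xt)

# Closed-form result derived above
k_closed = (2*E1*A1 + E1*A2 + E2*A1 + 2*E2*A2) / (6.0 * L) * np.array([[1.0, -1.0],
                                                                       [-1.0, 1.0]])
print(k_quad)
print(k_closed)
```

Note that for a uniform bar ($$E_{1}A_{1}=E_{2}A_{2}=EA$$) the closed form reduces to the familiar $$\frac{EA}{L}\begin{bmatrix} 1 & -1\\ -1 & 1 \end{bmatrix}$$, a useful sanity check.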