Nonlinear finite elements/Matrices

Much of finite elements revolves around forming matrices and solving systems of linear equations using matrices. This learning resource gives you a brief review of matrices.

Matrices
Suppose that you have a linear system of equations

$$\begin{align} a_{11} x_1 + a_{12} x_2 + a_{13} x_3 + a_{14} x_4 &= b_1 \\ a_{21} x_1 + a_{22} x_2 + a_{23} x_3 + a_{24} x_4 &= b_2 \\ a_{31} x_1 + a_{32} x_2 + a_{33} x_3 + a_{34} x_4 &= b_3 \\ a_{41} x_1 + a_{42} x_2 + a_{43} x_3 + a_{44} x_4 &= b_4 \end{align} ~. $$

Matrices provide a simple way of expressing these equations. Thus, we can instead write

$$\begin{bmatrix} a_{11} & a_{12} & a_{13} & a_{14} \\ a_{21} & a_{22} & a_{23} & a_{24} \\ a_{31} & a_{32} & a_{33} & a_{34} \\ a_{41} & a_{42} & a_{43} & a_{44} \end{bmatrix} \begin{bmatrix} x_1 \\ x_2 \\ x_3 \\ x_4 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \\ b_3 \\ b_4 \end{bmatrix} ~. $$

An even more compact notation is

$$\left[\mathsf{A}\right] \left[\mathsf{x}\right] = \left[\mathsf{b}\right] ~\text{or}~ \mathbf{A} \mathbf{x} = \mathbf{b} ~. $$

Here $$\mathbf{A}$$ is a $$4\times 4$$ matrix while $$\mathbf{x}$$ and $$\mathbf{b}$$ are $$4\times 1$$ matrices. In general, an $$m \times n$$ matrix $$\mathbf{A}$$ is a rectangular array of numbers arranged in $$m$$ rows and $$n$$ columns.

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{bmatrix}~. $$
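These definitions translate directly into code. The sketch below (plain Python; the numerical values are made up for illustration) stores a matrix as a list of rows and applies the rule $$b_i = \sum_j a_{ij} x_j$$ to recover the right-hand side of the linear system:

```python
# Sketch: representing the linear system A x = b with nested lists.
# The numeric entries are illustrative, not taken from the text.
A = [[2.0, 1.0, 0.0, 0.0],
     [1.0, 3.0, 1.0, 0.0],
     [0.0, 1.0, 3.0, 1.0],
     [0.0, 0.0, 1.0, 2.0]]
x = [1.0, 2.0, 3.0, 4.0]

def matvec(A, x):
    """b_i = sum_j a_ij * x_j -- one equation of the system per entry of b."""
    return [sum(a_ij * x_j for a_ij, x_j in zip(row, x)) for row in A]

b = matvec(A, x)
print(b)  # [4.0, 10.0, 15.0, 11.0]
```

Each entry of `b` is exactly one of the four scalar equations written out above.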

Practice Exercises
Practice: Expressing Linear Equations As Matrices

Types of Matrices
Common types of matrices that we encounter in finite elements are:


 * a  row vector that has one row and $$n$$ columns.

$$\mathbf{v} = \begin{bmatrix} v_1 & v_2 & v_3 & \dots & v_n \end{bmatrix} $$
 * a  column vector that has $$n$$ rows and one column.

$$\mathbf{v} = \begin{bmatrix} v_1 \\ v_2 \\ v_3 \\ \vdots \\ v_n \end{bmatrix} $$
 * a  square matrix that has an equal number of rows and columns.
 * a  diagonal matrix which is a square matrix with only the diagonal elements ($$a_{ii}$$) nonzero.

$$\mathbf{A} = \begin{bmatrix} a_{11} & 0 & 0 & \dots & 0 \\ 0 & a_{22} & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & a_{nn} \end{bmatrix}~. $$
 * the  identity matrix ($$\mathbf{I}$$) which is a diagonal matrix with each of its nonzero elements ($$a_{ii}$$) equal to 1.

$$\mathbf{A} = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 \\ 0 & 1 & 0 & \dots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \dots & 1 \end{bmatrix}~. $$
 * a  symmetric matrix which is a square matrix with elements such that $$a_{ij} = a_{ji}$$.

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{12} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{13} & a_{23} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & a_{3n} & \dots & a_{nn} \end{bmatrix}~. $$
 * a  skew-symmetric matrix which is a square matrix with elements such that $$a_{ij} = -a_{ji}$$.

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ -a_{12} & a_{22} & a_{23} & \dots & a_{2n} \\ -a_{13} & -a_{23} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ -a_{1n} & -a_{2n} & -a_{3n} & \dots & a_{nn} \end{bmatrix}~. $$

Note that the diagonal elements of a skew-symmetric matrix have to be zero: $$a_{ii} = -a_{ii} \Rightarrow a_{ii} = 0$$.
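The defining component relations above are easy to test directly. A minimal Python sketch (the 3×3 example matrices are illustrative, not taken from the text):

```python
# Sketch: checking a_ij = a_ji (symmetric) and a_ij = -a_ji (skew-symmetric)
# componentwise for square matrices stored as lists of rows.
def is_symmetric(A):
    n = len(A)
    return all(A[i][j] == A[j][i] for i in range(n) for j in range(n))

def is_skew_symmetric(A):
    n = len(A)
    return all(A[i][j] == -A[j][i] for i in range(n) for j in range(n))

S = [[1, 2, 3],
     [2, 4, 5],
     [3, 5, 6]]   # symmetric: mirror image across the diagonal
K = [[0, 2, -1],
     [-2, 0, 3],
     [1, -3, 0]]  # skew-symmetric: note the forced zero diagonal

print(is_symmetric(S), is_skew_symmetric(K))  # True True
```

The zero diagonal of `K` is exactly the consequence $$a_{ii} = -a_{ii} \Rightarrow a_{ii} = 0$$ noted above.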

Matrix addition
Let $$\mathbf{A}$$ and $$\mathbf{B}$$ be two $$m \times n$$ matrices with components $$a_{ij}$$ and $$b_{ij}$$, respectively. Then

$$\mathbf{C} = \mathbf{A} + \mathbf{B} \implies c_{ij} = a_{ij} + b_{ij} $$

Multiplication by a scalar
Let $$\mathbf{A}$$ be an $$m \times n$$ matrix with components $$a_{ij}$$ and let $$\lambda$$ be a scalar quantity. Then,

$$\mathbf{C} = \lambda\mathbf{A} \implies c_{ij} = \lambda a_{ij} $$
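Both of the operations above act componentwise, which makes them one-liners in code. A minimal Python sketch with illustrative 2×2 values:

```python
# Sketch: componentwise matrix addition and scalar multiplication.
def mat_add(A, B):
    """c_ij = a_ij + b_ij; A and B must share the same m x n shape."""
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scalar_mul(lam, A):
    """c_ij = lambda * a_ij."""
    return [[lam * a for a in row] for row in A]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]

print(mat_add(A, B))    # [[6, 8], [10, 12]]
print(scalar_mul(2, A)) # [[2, 4], [6, 8]]
```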

Multiplication of matrices
Let $$\mathbf{A}$$ be an $$m \times n$$ matrix with components $$a_{ij}$$. Let $$\mathbf{B}$$ be a $$p \times q$$ matrix with components $$b_{ij}$$.

The product $$\mathbf{C} = \mathbf{A} \mathbf{B}$$ is defined only if $$n = p$$. The matrix $$\mathbf{C}$$ is an $$m \times q$$ matrix with components $$c_{ij}$$. Thus,

$$\mathbf{C} = \mathbf{A} \mathbf{B} \implies c_{ij} = \sum^n_{k=1} a_{ik} b_{kj} $$

Similarly, the product $$\mathbf{D} = \mathbf{B} \mathbf{A}$$ is defined only if $$q = m$$. The matrix $$\mathbf{D}$$ is a $$p \times n$$ matrix with components $$d_{ij}$$. We have

$$\mathbf{D} = \mathbf{B} \mathbf{A} \implies d_{ij} = \sum^m_{k=1} b_{ik} a_{kj} $$

Clearly, $$\mathbf{C} \ne \mathbf{D}$$ in general, i.e., the matrix product  is not commutative.

However, matrix multiplication is  distributive. That means

$$\mathbf{A} (\mathbf{B} + \mathbf{C}) = \mathbf{A} \mathbf{B} + \mathbf{A} \mathbf{C} ~. $$

The product is also  associative. That means

$$\mathbf{A} (\mathbf{B} \mathbf{C}) = (\mathbf{A} \mathbf{B}) \mathbf{C} ~. $$
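The summation formula, the failure of commutativity, and associativity can all be checked directly. A minimal Python sketch (illustrative 2×2 matrices):

```python
# Sketch: c_ij = sum_k a_ik * b_kj, and a spot-check that AB != BA
# while A(BC) == (AB)C.
def mat_mul(A, B):
    """Defined only when the column count of A equals the row count of B."""
    return [[sum(row[k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for row in A]

A = [[1, 2], [3, 4]]
B = [[0, 1], [1, 0]]
C = [[2, 0], [1, 1]]

print(mat_mul(A, B))  # [[2, 1], [4, 3]]
print(mat_mul(B, A))  # [[3, 4], [1, 2]] -- so AB != BA in general

# Associativity does hold (exact here, since the entries are integers):
assert mat_mul(A, mat_mul(B, C)) == mat_mul(mat_mul(A, B), C)
```

Here `B` swaps columns when multiplied on the right and swaps rows when multiplied on the left, which makes the non-commutativity easy to see.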

Transpose of a matrix
Let $$\mathbf{A}$$ be an $$m \times n$$ matrix with components $$a_{ij}$$. Then the  transpose of the matrix is defined as the $$n \times m$$ matrix $$\mathbf{B} = \mathbf{A}^T$$ with components $$b_{ij} = a_{ji}$$. That is,

$$\mathbf{B} = \mathbf{A}^T = \begin{bmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & a_{m3} & \dots & a_{mn} \end{bmatrix}^T = \begin{bmatrix} a_{11} & a_{21} & a_{31} & \dots & a_{m1} \\ a_{12} & a_{22} & a_{32} & \dots & a_{m2} \\ a_{13} & a_{23} & a_{33} & \dots & a_{m3} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{1n} & a_{2n} & a_{3n} & \dots & a_{mn} \end{bmatrix} $$

An important identity involving the transpose of matrices is

$$ (\mathbf{A} \mathbf{B})^T = \mathbf{B}^T \mathbf{A}^T ~. $$
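This identity can be verified numerically. A minimal Python sketch with illustrative matrices (a 2×3 and a 3×2, so the shapes also exercise the $$n \times m$$ transpose rule):

```python
# Sketch: b_ij = a_ji (rows become columns), used to check (AB)^T = B^T A^T.
def transpose(A):
    return [list(col) for col in zip(*A)]

def mat_mul(A, B):
    return [[sum(row[k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for row in A]

A = [[1, 2, 3], [4, 5, 6]]    # 2 x 3
B = [[1, 0], [0, 1], [1, 1]]  # 3 x 2

lhs = transpose(mat_mul(A, B))                  # (AB)^T, a 2 x 2 matrix
rhs = mat_mul(transpose(B), transpose(A))       # B^T A^T
print(lhs == rhs)  # True
```

Note that `mat_mul(transpose(A), transpose(B))` would not even be shape-compatible here, which is one way to remember why the order must reverse.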

Determinant of a matrix
The  determinant of a matrix is defined only for square matrices.

For a $$2 \times 2$$ matrix $$\mathbf{A}$$, we have

$$\mathbf{A} = \begin{bmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{bmatrix} \implies \det(\mathbf{A}) = \begin{vmatrix} a_{11} & a_{12} \\ a_{21} & a_{22}\end{vmatrix} = a_{11} a_{22} - a_{12} a_{21} ~. $$

For an $$n \times n$$ matrix, the determinant is calculated by expanding into  minors as

$$\begin{align} &\det(\mathbf{A}) = \begin{vmatrix} a_{11} & a_{12} & a_{13} & \dots & a_{1n} \\ a_{21} & a_{22} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & a_{n3} & \dots & a_{nn} \end{vmatrix} \\ &= a_{11} \begin{vmatrix} a_{22} & a_{23} & \dots & a_{2n} \\ a_{32} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n2} & a_{n3} & \dots & a_{nn} \end{vmatrix} - a_{12} \begin{vmatrix} a_{21} & a_{23} & \dots & a_{2n} \\ a_{31} & a_{33} & \dots & a_{3n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n3} & \dots & a_{nn} \end{vmatrix} + \dots \pm a_{1n} \begin{vmatrix} a_{21} & a_{22} & \dots & a_{2(n-1)} \\ a_{31} & a_{32} & \dots & a_{3(n-1)} \\ \vdots & \vdots & \ddots & \vdots \\ a_{n1} & a_{n2} & \dots & a_{n(n-1)} \end{vmatrix} \end{align}$$

In short, the determinant of a matrix $$\mathbf{A}$$ has the value

$$ \det(\mathbf{A}) = \sum^n_{j=1} (-1)^{1+j} a_{1j} M_{1j} $$

where $$M_{ij}$$ is the determinant of the submatrix of $$\mathbf{A}$$ formed by eliminating row $$i$$ and column $$j$$ from $$\mathbf{A}$$.
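The cofactor expansion translates naturally into a recursive function. A minimal Python sketch (illustrative matrices; this recursion costs $$O(n!)$$ operations, so numerical libraries compute determinants via LU factorization instead):

```python
# Sketch: det(A) = sum_j (-1)^(1+j) a_1j M_1j, expanding along the first row.
def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0
    for j in range(n):
        # Minor M_1j: delete row 1 and column j.
        minor = [row[:j] + row[j + 1:] for row in A[1:]]
        total += (-1) ** j * A[0][j] * det(minor)
    return total

print(det([[1, 2], [3, 4]]))  # 1*4 - 2*3 = -2

B = [[2, 0, 1],
     [1, 3, 2],
     [1, 1, 4]]
print(det(B))  # 2*(3*4 - 2*1) - 0 + 1*(1*1 - 3*1) = 20 - 2 = 18
```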

Some useful identities involving the determinant are given below.


 * If $$\mathbf{A}$$ is an $$n \times n$$ matrix, then

$$\det(\mathbf{A}) = \det(\mathbf{A}^T)~. $$
 * If $$\lambda$$ is a constant and $$\mathbf{A}$$ is an $$n \times n$$ matrix, then

$$\det(\lambda\mathbf{A}) = \lambda^n\det(\mathbf{A}) \implies \det(-\mathbf{A}) = (-1)^n\det(\mathbf{A}) ~. $$
 * If $$\mathbf{A}$$ and $$\mathbf{B}$$ are two $$n \times n$$ matrices, then

$$\det(\mathbf{A}\mathbf{B}) = \det(\mathbf{A})\det(\mathbf{B})~. $$
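These identities can be spot-checked on small examples. A sketch using the explicit $$2 \times 2$$ determinant formula (the matrices and scalar are illustrative):

```python
# Sketch: verifying det(A) = det(A^T), det(lam*A) = lam^n det(A),
# and det(AB) = det(A) det(B) on 2x2 examples (n = 2 here).
def det2(A):
    return A[0][0] * A[1][1] - A[0][1] * A[1][0]

def mat_mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1, 2], [3, 5]]
B = [[2, 1], [0, 3]]
lam = 4

At = [[A[j][i] for j in range(2)] for i in range(2)]
assert det2(A) == det2(At)                                      # transpose
assert det2([[lam * a for a in r] for r in A]) == lam**2 * det2(A)  # scaling
assert det2(mat_mul2(A, B)) == det2(A) * det2(B)                # product
print("all three identities hold")
```

A single numerical check is not a proof, of course, but it is a quick way to catch a misremembered identity.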

If you think you understand determinants, take the quiz.

Inverse of a matrix
Let $$\mathbf{A}$$ be an $$n \times n$$ matrix. The  inverse of $$\mathbf{A}$$ is denoted by $$\mathbf{A}^{-1}$$ and is defined such that

$$ \mathbf{A} \mathbf{A}^{-1} = \mathbf{I} $$

where $$\mathbf{I}$$ is the $$n \times n$$ identity matrix.

The inverse exists only if $$\det(\mathbf{A}) \ne 0$$. A  singular matrix does not have an inverse.

An important identity involving the inverse is

$$ (\mathbf{A}\mathbf{B})^{-1} = \mathbf{B}^{-1} \mathbf{A}^{-1} ~, $$

since this leads to

$$ (\mathbf{A} \mathbf{B})^{-1} (\mathbf{A} \mathbf{B}) = (\mathbf{B}^{-1} \mathbf{A}^{-1}) (\mathbf{A} \mathbf{B} ) = \mathbf{B}^{-1} (\mathbf{A}^{-1} \mathbf{A}) \mathbf{B} = \mathbf{B}^{-1} \mathbf{I} \mathbf{B} = \mathbf{B}^{-1} \mathbf{B} = \mathbf{I} ~. $$
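The identity can be checked numerically using the closed-form $$2 \times 2$$ inverse, $$\mathbf{A}^{-1} = \frac{1}{\det(\mathbf{A})}\begin{bmatrix} a_{22} & -a_{12} \\ -a_{21} & a_{11} \end{bmatrix}$$. A minimal Python sketch (illustrative values):

```python
# Sketch: checking (AB)^{-1} = B^{-1} A^{-1} with the closed-form 2x2 inverse.
def inv2(A):
    (a, b), (c, d) = A
    det = a * d - b * c  # must be nonzero, else A is singular
    return [[d / det, -b / det], [-c / det, a / det]]

def mul2(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[1.0, 2.0], [3.0, 5.0]]
B = [[2.0, 1.0], [0.0, 3.0]]

lhs = inv2(mul2(A, B))          # (AB)^{-1}
rhs = mul2(inv2(B), inv2(A))    # B^{-1} A^{-1} -- note the reversed order
print(lhs)
print(rhs)
```

Up to floating-point rounding the two results agree; swapping the order on the right-hand side would not.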

Some other identities involving the inverse of a matrix are given below.

 * The determinant of a matrix is equal to the multiplicative inverse of the determinant of its inverse.

$$\det(\mathbf{A}) = \cfrac{1}{\det(\mathbf{A}^{-1})}~. $$
 * The determinant of a  similarity transformation of a matrix is equal to the determinant of the original matrix.

$$\det(\mathbf{B} \mathbf{A} \mathbf{B}^{-1}) = \det(\mathbf{A}) ~. $$

We usually use numerical methods such as Gaussian elimination to compute the inverse of a matrix.
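Gaussian elimination for the inverse can be sketched as Gauss-Jordan reduction of the augmented matrix $$[\mathbf{A} \mid \mathbf{I}]$$: when the left half has been reduced to $$\mathbf{I}$$, the right half is $$\mathbf{A}^{-1}$$. A minimal Python sketch (illustrative 2×2 input; a teaching sketch with partial pivoting only, not production code):

```python
# Sketch: Gauss-Jordan elimination on [A | I] to obtain A^{-1}.
def inverse(A):
    n = len(A)
    # Build the augmented matrix [A | I].
    M = [row[:] + [float(i == j) for j in range(n)] for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: use the largest-magnitude entry in this column.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]  # a zero pivot here means A is singular
        M[col] = [v / p for v in M[col]]
        # Eliminate this column from every other row.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [v - f * w for v, w in zip(M[r], M[col])]
    return [row[n:] for row in M]  # right half is now A^{-1}

A = [[2.0, 1.0], [1.0, 3.0]]
Ainv = inverse(A)
print(Ainv)  # det A = 5, so A^{-1} = [[0.6, -0.2], [-0.2, 0.4]]
```

A singular matrix makes the pivot `p` zero, which is the algorithmic face of the $$\det(\mathbf{A}) \ne 0$$ condition above.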

Eigenvalues and eigenvectors
A thorough explanation of this material can be found at Eigenvalue, eigenvector and eigenspace. For now, consider the following examples:

 * Let

$$ \mathbf{A} = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}, \quad \mathbf{v} = \begin{bmatrix} 6 \\ -5   \end{bmatrix}, \quad \mathbf{t} = \begin{bmatrix} 7 \\ 4   \end{bmatrix}~. $$

Which vector is an eigenvector for $$ \mathbf{A} $$?

We have $$ \mathbf{A}\mathbf{v} = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}\begin{bmatrix} 6 \\ -5   \end{bmatrix} = \begin{bmatrix} -24 \\ 20   \end{bmatrix} = -4\begin{bmatrix} 6 \\ -5   \end{bmatrix} $$, and $$ \mathbf{A}\mathbf{t} = \begin{bmatrix} 1 & 6 \\ 5 & 2 \end{bmatrix}\begin{bmatrix} 7 \\ 4   \end{bmatrix} = \begin{bmatrix} 31 \\ 43   \end{bmatrix}~.$$

Thus, $$ \mathbf{v}$$ is an eigenvector of $$ \mathbf{A} $$ with eigenvalue $$-4$$, while $$ \mathbf{t} $$ is not, because $$ \mathbf{A}\mathbf{t} $$ is not a scalar multiple of $$ \mathbf{t} $$.

 * Is $$ \mathbf{u} = \begin{bmatrix} 1 \\ 4   \end{bmatrix} $$ an eigenvector for $$ \mathbf{A} = \begin{bmatrix} -3 & -3 \\ 1 & 8 \end{bmatrix}$$?

We have $$ \mathbf{A}\mathbf{u} = \begin{bmatrix} -3 & -3 \\ 1 & 8 \end{bmatrix}\begin{bmatrix} 1 \\ 4   \end{bmatrix} = \begin{bmatrix} -15 \\ 33   \end{bmatrix}$$, which is not a scalar multiple of $$ \mathbf{u} $$. Therefore $$ \mathbf{u} $$ is not an eigenvector of $$ \mathbf{A}~.$$
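The test applied in both examples — is $$\mathbf{A}\mathbf{v}$$ a scalar multiple of $$\mathbf{v}$$? — can be sketched in Python:

```python
# Sketch: v is an eigenvector of A exactly when A v is parallel to v.
def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def is_eigenvector(A, v, tol=1e-12):
    w = matvec(A, v)
    # Read off the candidate eigenvalue from the first nonzero entry of v,
    # then check proportionality in every component.
    k = next(i for i, x in enumerate(v) if x != 0)
    lam = w[k] / v[k]
    return all(abs(wi - lam * vi) < tol for wi, vi in zip(w, v))

A = [[1, 6], [5, 2]]
print(is_eigenvector(A, [6, -5]))  # True:  A v = -4 v
print(is_eigenvector(A, [7, 4]))   # False: A t = [31, 43], not parallel to t
```

This reproduces the hand computations above; for unknown eigenpairs one would instead solve $$\det(\mathbf{A} - \lambda\mathbf{I}) = 0$$ or use a numerical eigensolver.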