Numerical Analysis/Truncation Errors

This page discusses the truncation error of numerical methods for ordinary differential equations (ODEs).

Definition
Truncation errors are defined as the errors that result from using an approximation in place of an exact mathematical procedure.

There are two ways to measure the error. Assume that our one-step methods take the form $$ y_{n+1} = y_n + hA(t_n, y_n, h, f). $$
 * 1) Local Truncation Error (LTE): the error, $$ \tau $$, introduced by the approximation method at each step.
 * 2) Global Truncation Error (GTE): the error, $$ e $$, given by the absolute difference between the correct value and the approximate value.
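As a concrete illustration (a sketch not from the original page, using Euler's method on $$ y' = y $$, $$ y(0) = 1 $$, whose exact solution is $$ e^t $$), the two errors can be measured directly:

```python
import math

def euler_step(f, t, y, h):
    # One step of the explicit Euler method: y_{n+1} = y_n + h*f(t_n, y_n).
    return y + h * f(t, y)

f = lambda t, y: y          # y' = y, exact solution y(t) = e^t
h = 0.1

# Local truncation error: one step started from the exact value y(0) = 1.
lte = math.exp(h) - euler_step(f, 0.0, 1.0, h)

# Global truncation error at t = 1 after N = 1/h = 10 steps.
t, y = 0.0, 1.0
for _ in range(10):
    y = euler_step(f, t, y, h)
    t += h
gte = abs(math.exp(1.0) - y)

print(lte)   # about h^2/2 = 0.005 (plus higher-order terms)
print(gte)   # noticeably larger: the per-step errors accumulate
```

The single-step error is tiny, while the end-of-interval error is roughly $$ N $$ such errors compounded.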

Why do we care about truncation errors?
In the case of one-step methods, the local truncation error measures the amount by which the exact solution of the differential equation fails to satisfy the difference equation defined by the method. The local truncation error of multistep methods is defined analogously.

A one-step method with local truncation error $$\tau_n(h) $$ at the nth step:
 * This method is consistent with the differential equation it approximates if
 * $$ \lim_{h \to 0} \max_{1 \leq n \leq N} | \tau_n(h) | = 0. $$

Note that here we assume that the approximation values are exactly equal to the true solution at every step.
 * The method is convergent with respect to the differential equation it approximates if
 * $$ \lim_{h \to 0} \max_{1 \leq n \leq N} | y_n - y(t_n) | = 0, $$

where $$y_n $$ denotes the approximation obtained from the method at the nth step, and $$y(t_n)$$ the exact value of the solution of the differential equation.

How do we avoid truncation errors?
Truncation error cannot be eliminated entirely, but it can be controlled through the step size: the truncation error generally increases as the step size increases, while the roundoff error decreases as the step size increases, so in practice the step size is chosen to balance the two.
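The growth of truncation error with step size can be seen numerically. This sketch (an illustration, not part of the original page) applies Euler's method to $$ y' = -y $$, $$ y(0) = 1 $$ and tabulates the error at $$ t = 1 $$; at the opposite extreme, for very small $$ h $$ the accumulated roundoff of the many floating-point operations would eventually dominate.

```python
import math

# Global error of Euler's method for y' = -y, y(0) = 1, measured at t = 1.
f = lambda t, y: -y
exact = math.exp(-1.0)

errors = []
for h in (0.2, 0.1, 0.05, 0.025):
    y = 1.0
    for i in range(round(1.0 / h)):
        y += h * f(i * h, y)
    errors.append(abs(y - exact))
    print(h, errors[-1])   # the truncation error shrinks as h shrinks
```

Each halving of $$ h $$ roughly halves the error, as expected for a first-order method.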

Relationship Between Local Truncation Error and Global Truncation Error
The global truncation error (GTE) is one order lower than the local truncation error (LTE). That is, 
 * if $$\tau_n(h) = O(h^{p +1})$$, then $$e_n(h) = O(h^{p})$$.
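This relationship can be verified numerically by estimating observed orders. The sketch below (an illustration assuming Euler's method, for which $$ p = 1 $$) compares the one-step error and the end-of-interval error at two step sizes for $$ y' = y $$, $$ y(0) = 1 $$:

```python
import math

def euler_step(f, t, y, h):
    # One explicit Euler step.
    return y + h * f(t, y)

f = lambda t, y: y
exact = math.exp  # exact solution of y' = y, y(0) = 1

def lte(h):
    # Error of a single step started from the exact value.
    return abs(exact(h) - euler_step(f, 0.0, 1.0, h))

def gte(h):
    # Error at t = 1 after 1/h steps.
    y = 1.0
    for i in range(round(1.0 / h)):
        y = euler_step(f, i * h, y, h)
    return abs(exact(1.0) - y)

# Observed order: error(h)/error(h/2) is about 2^order.
order_lte = math.log2(lte(0.1) / lte(0.05))
order_gte = math.log2(gte(0.1) / gte(0.05))
print(order_lte)  # close to 2  (= p + 1)
print(order_gte)  # close to 1  (= p)
```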

Proof
We assume perfect knowledge of the true solution at the initial time step. Let $$\tilde{y}(t)$$ be the exact solution of

$$ \Big\{ \begin{align} y' &= f(t,y)\text{, and} \\ y(t_n) &= y_n\,. \end{align} $$

The truncation error at step $$ n+1 $$ is defined as $$ \tau_{n+1}(h) = \tilde{y}(t_{n+1}) - y_{n+1}. $$ Also, the global errors are defined as

$$ \begin{align} e_0(h) &= 0\,, \\ e_{n+1}(h) &= y(t_{n+1}) - y_{n+1} \\ &= [y(t_{n+1}) - \tilde{y}(t_{n+1})] + [ \tilde{y}(t_{n+1}) - y_{n+1} ]\,. \end{align} $$

According to the triangle inequality, we obtain that

 * $$ |e_{n+1}(h)| \leq |y(t_{n+1}) - \tilde{y}(t_{n+1})| + |\tilde{y}(t_{n+1}) - y_{n+1}|\,. $$

The second term on the right-hand side is the truncation error $$ \tau_{n+1}(h) $$. Here we assume that the method has local truncation error of order $$ p+1 $$, that is,
 * $$ |\tau_{n+1}(h)| \leq Ch^{p+1}\,. $$

Thus,
 * $$ |e_{n+1}(h)| \leq |y(t_{n+1}) - \tilde{y}(t_{n+1})| + Ch^{p+1}\,. $$

The first term on the right-hand side is the difference between two exact solutions of the ODE started from different values at $$ t_n $$.

Both $$ y(t) $$ and $$\tilde{y}(t) $$ satisfy $$ y' = f(t,y)$$, so

$$ \Big\{ \begin{align} y'(t) &= f(t,y)\text{, and} \\ \tilde{y}'(t) &= f(t,\tilde{y})\,. \end{align} $$

Subtracting one equation from the other gives
 * $$ y'(t) - \tilde{y}'(t) = f(t,y) - f(t,\tilde{y})\,,\quad\text{so} $$
 * $$ |y'(t) - \tilde{y}'(t)| = |f(t,y) - f(t,\tilde{y})|\,. $$

Since $$ f $$ is Lipschitz continuous in $$ y $$ with constant $$ L $$,
 * $$ |y'(t) - \tilde{y}'(t)| \leq L|y(t) - \tilde{y}(t)|, $$ where $$ t > t_n. $$

By Gronwall's inequality,

$$ \begin{align} |y(t) - \tilde{y}(t)| &\leq |y(t_n) - \tilde{y}(t_n)| \exp\left(\int_{t_n}^t L\,ds\right) \\ &= e^{L(t-t_n)}|y(t_n) - \tilde{y}(t_n)|\,, \end{align} $$ where $$ t \in [t_n, t_{n+1}].$$

Setting $$ t = t_{n+1} $$ and using $$ \tilde{y}(t_n) = y_n $$, we have
 * $$ |y(t_{n+1}) - \tilde{y}(t_{n+1})| \leq e^{Lh}|y(t_n) - \tilde{y}(t_n)| = e^{Lh}|e_n(h)|\,. $$

Plugging these bounds into the estimate for $$ |e_{n+1}(h)| $$ above, we get
 * $$ |e_{n+1}(h)| \leq e^{Lh}|e_n(h)| + Ch^{p+1}\,. $$

Note that this recursive inequality is valid for all values of $$ n $$.

Next, we use it to estimate $$|e_N (h)|, $$ where we assume $$ Nh = T $$.

Let $$ \alpha = e^{Lh}. $$ Dividing both sides of the recursive inequality by $$ \alpha^{n+1}, $$ we get that
 * $$ \frac{|e_{n+1}(h)|}{\alpha^{n+1}} \leq \frac{|e_n(h)|}{\alpha^{n}} + Ch^{p+1}\frac{1}{\alpha^{n+1}}\,. $$

Summing over $$ n = 0, 1, 2, \ldots, N-1 $$:
 * $$\frac{|e_{1}(h)|}{\alpha^{1}} \leq \frac{|e_0(h)|}{\alpha^0} + Ch^{p+1}\frac{1}{\alpha^{1}}\,,$$
 * $$\frac{|e_{2}(h)|}{\alpha^{2}} \leq \frac{|e_1(h)|}{\alpha^1} + Ch^{p+1}\frac{1}{\alpha^{2}}\,,$$
 * $$\vdots$$

and
 * $$\frac{|e_{N}(h)|}{\alpha^{N}} \leq \frac{|e_{N-1}(h)|}{\alpha^{N-1}} + Ch^{p+1}\frac{1}{\alpha^{N}}\,.$$

Then we obtain

$$ \begin{align} \frac{|e_{N}(h)|}{\alpha^{N}} &\leq \frac{|e_0(h)|}{\alpha^0} + Ch^{p+1}\left(\frac{1}{\alpha^{1}} + \frac{1}{\alpha^{2}} + \cdots + \frac{1}{\alpha^{N}}\right) \\ &= \frac{|e_0(h)|}{\alpha^0} + Ch^{p+1}\left[\frac{1}{\alpha^{N}}\left(1+ \alpha + \alpha^{2} + \cdots + \alpha^{N-1}\right)\right] \\ &= \frac{|e_0(h)|}{\alpha^0} + Ch^{p+1}\left[\frac{1}{\alpha^{N}}\left(\frac{\alpha^{N} - 1}{ \alpha - 1}\right)\right] \,. \end{align} $$

Since $$e_0(h) = 0,$$ multiplying both sides by $$ \alpha^{N} $$ gives
 * $$ |e_N (h)| \leq Ch^{p+1}\left(\frac{\alpha^{N} - 1}{ \alpha - 1}\right)\,. $$

Using the inequality $$ e^x - 1 \geq x, $$ we get

$$ \begin{align} \alpha - 1 &= e^{Lh} -1 \geq Lh\quad\text{and} \\ \alpha^N - 1 &= e^{LNh} -1 = e^{LT} -1\,. \end{align} $$

Therefore, we can obtain that
 * $$|e_N (h)| \leq Ch^{p+1}(\frac{e^{LT} - 1}{Lh}) = C(\frac{e^{LT} - 1}{L})h^{p}. $$

That is, we have shown that
 * $$ \tau_{n+1}(h) = O(h^{p+1})\quad\text{and}$$
 * $$|e_N (h)| = O(h^p), $$

so we can conclude that the global truncation error is one order lower than the local truncation error.
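The final bound can be sanity-checked numerically. For Euler's method on $$ y' = y $$ over $$ [0, 1] $$ we have $$ L = 1 $$, $$ p = 1 $$, and since the one-step error of Euler's method is $$ \frac{h^2}{2}|y''(\xi)| $$ with $$ |y''| \leq e $$ on this interval, the constant $$ C = e/2 $$ works (these concrete values are assumptions of this sketch):

```python
import math

L, T = 1.0, 1.0
C = math.e / 2          # bound on |y''|/2 = |y|/2 on [0, 1] for y(t) = e^t

def euler_error(h):
    # Measured global error of Euler's method for y' = y at t = T.
    y = 1.0
    for _ in range(round(T / h)):
        y += h * y
    return abs(math.exp(T) - y)

for h in (0.1, 0.01):
    bound = C * (math.exp(L * T) - 1) / L * h   # C (e^{LT} - 1) h^p / L, p = 1
    print(h, euler_error(h), bound)             # measured error stays below the bound
```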

Graph
In this graph, $$ c = a + \frac{b - a}{2}. $$ The red line is the true value, the green line is the first step, and the blue line is the second step.
 * $$\overline{AB} $$ is the local truncation error at step 1, $$ \tau_1 = e_1 $$, equal to $$\overline{CD}. $$
 * $$ \overline{DE} $$ is the separation that arises because, after the first step, we are on the wrong solution of the ODE.
 * $$ \overline{EF} $$ is $$ \tau_2. $$

Thus, $$\overline{CF} $$ is the global truncation error at step 2, $$ e_2. $$

From this we can see that
 * $$ e_{n+1} = e_n + h[ A(t_n, y(t_n), h, f) - A(t_n, y_n, h, f) ] + \tau_{n+1}. $$

Then,
 * $$ e_2 = e_1 + h[ A(t_1, y(t_1), h, f) - A(t_1, y_1, h, f) ] + \tau_2. $$
 * $$ e_2 = \overline{AB} + \overline{DE} + \overline{EF}. $$

Exercise
Find the order of the two-step Adams-Bashforth method. You need to show the order of the truncation error.

Solution: The basic method is to use Taylor expansions to derive the approximation and to cancel terms to as high a power of $$ h $$ as possible. The two-step Adams-Bashforth method is $$ y_{n+1} = y_{n} + h\left( \tfrac32 f(t_{n}, y_{n}) - \tfrac12 f(t_{n-1}, y_{n-1})\right). $$

By Taylor expansion, assuming that we know the exact values at $$ t_n $$ and $$ t_{n-1} $$,
 * $$ y(t_{n+1}) = y(t_n) + hy'(t_n) + \frac{h^2}{2!}y''(t_n) + \frac{h^3}{3!}y'''(t_n) + O(h^4) $$
 * $$ y_{n+1} = y(t_n) + \tfrac32 hy'(t_n) - \tfrac12 h\left( y'(t_n) - h y''(t_n) +\frac{h^2}{2} y'''(t_n) + O(h^3)\right). $$

Truncation error $$ \tau = y(t_{n+1}) - y_{n+1} = h^3y'''(t_n)\left( \frac{1}{6} + \frac{1}{4} \right) + O(h^4) = \frac{5}{12}h^3y'''(t_n) + O(h^4) = O(h^3). $$

Since the truncation error $$ \tau $$ is $$ O(h^3) $$, the global truncation error $$ e $$ is $$ O(h^2). $$

Therefore, the order of the two-step Adams-Bashforth method is 2.
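The conclusion can be checked empirically. Below is a minimal sketch (not from the original exercise) of the two-step Adams-Bashforth method; the second starting value is taken from the exact solution, an assumption made here to isolate the method's own error:

```python
import math

def ab2(f, y0, y1, t0, h, n):
    """Two-step Adams-Bashforth: y_{n+1} = y_n + h*(3/2 f_n - 1/2 f_{n-1})."""
    ys = [y0, y1]
    for i in range(1, n):
        t_prev, t_curr = t0 + (i - 1) * h, t0 + i * h
        ys.append(ys[-1] + h * (1.5 * f(t_curr, ys[-1]) - 0.5 * f(t_prev, ys[-2])))
    return ys[-1]

f = lambda t, y: y           # y' = y, exact solution e^t

def gte(h):
    n = round(1.0 / h)
    # Start-up value y_1 taken from the exact solution (an assumption).
    y = ab2(f, 1.0, math.exp(h), 0.0, h, n)
    return abs(math.exp(1.0) - y)

# Halving h should cut the error by about 2^2 = 4 for a second-order method.
order = math.log2(gte(0.05) / gte(0.025))
print(order)   # close to 2
```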