User:Egm6341.s11.team2/hwk5

=Problem 5.1: Error By Different Discretizations=

Ref: Lecture 23-1, Lecture 26-4

''' This problem was solved without referring to S10. '''

Given: The Following Functions
Consider the following three functions:

$$\displaystyle f(x) = \frac{1} $$

$$\displaystyle g(x) = {x^{1/3}} $$

$$\displaystyle h(x) = \exp (\sin (x)) $$

Find: The Error Based on the Following Discretizations
 A)  Use uniform discretization to compute $$ {I_1} = \int\limits_0^6 {f(x)}\,dx $$, and find the error and the error ratio as the number of intervals n is doubled

 B)  Use uniform discretization, non-uniform discretization, and Gauss-Legendre quadrature to compute $$ {I_2} = \int\limits_0^1 {g(x)}\,dx $$, and find the error and the error ratio as the number of intervals n is doubled

 C)  Use uniform discretization to compute $$ {I_3} = \int\limits_0^{2\pi } {h(x)}\,dx $$, and find the error and the error ratio as the number of intervals n is doubled

The Error of Numerical Integration Using Uniform Discretization

 * HW5 P1 A Trap.png


 * HW5_P1_A_TrapError.png‎


 * HW5_P1_A_SimpsonError.png

Comment: As the results show, the critical error ratios of 4 and 16 are achieved using the Trapezoidal and Simpson rules, respectively.
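These critical ratios can be checked with a short script. The homework used Matlab; below is a minimal Python sketch of the same experiment. Since the definition of f(x) is garbled in the source, the integrand $$1/(1+x)$$ is only a stand-in, chosen because its exact integral over [0,6] is ln 7.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n uniform intervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

def simpson(f, a, b, n):
    """Composite Simpson rule; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + k * h) for k in range(1, n, 2))
    s += 2 * sum(f(a + k * h) for k in range(2, n, 2))
    return h * s / 3

f = lambda x: 1.0 / (1.0 + x)   # stand-in smooth integrand (f(x) is garbled in the source)
exact = math.log(7.0)           # int_0^6 dx/(1+x) = ln 7

for rule, name in [(trapezoid, "trap"), (simpson, "simpson")]:
    prev = None
    for n in [8, 16, 32, 64]:
        err = abs(rule(f, 0.0, 6.0, n) - exact)
        ratio = prev / err if prev else float("nan")
        print(f"{name} n={n:3d} err={err:.3e} ratio={ratio:.2f}")
        prev = err
```

For a smooth integrand, the error ratio per doubling of n approaches 4 for the trapezoidal rule ($$O(h^2)$$) and 16 for Simpson ($$O(h^4)$$).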

The Error of Numerical Integration Using Uniform Discretization, Non-uniform Discretization, and Gauss-Legendre Quadrature

 * HW5 P1 B TrapError uniform.png


 * HW5 P1 B SimpsonError uniform.png

Comment: The results show that with uniform discretization, the error ratio never reaches the critical ratios of 4 and 16 for the Trapezoidal and Simpson rules, respectively.


 * HW5 P1 B TrapError Non-uniform.png


 * HW5 P1 B SimpsonError Non-uniform.png

Comment: Even Chebyshev non-uniform discretization does not bring the error ratio to the critical ratios of 4 and 16 for the Trapezoidal and Simpson rules, respectively. The derivative of $$x^{1/3}$$ is singular at $$x=0$$, which prevents the usual convergence rates.


 * HW5 P1 B Gauss.png
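The Gauss-Legendre result can be reproduced with a small sketch. The homework used Matlab; this Python fragment hard-codes the standard tabulated 4-point Gauss-Legendre nodes and weights on [-1,1] and maps them to [0,1]. The exact value of $$I_2$$ is $$\int_0^1 x^{1/3}dx = 3/4$$.

```python
# Standard tabulated 4-point Gauss-Legendre nodes and weights on [-1, 1]
NODES = [-0.8611363116, -0.3399810436, 0.3399810436, 0.8611363116]
WEIGHTS = [0.3478548451, 0.6521451549, 0.6521451549, 0.3478548451]

def gauss_legendre_01(f):
    """4-point Gauss-Legendre quadrature mapped from [-1,1] to [0,1]."""
    # x = (t + 1)/2, dx = dt/2
    return 0.5 * sum(w * f(0.5 * (t + 1.0)) for t, w in zip(NODES, WEIGHTS))

g = lambda x: x ** (1.0 / 3.0)
approx = gauss_legendre_01(g)
print(approx, abs(approx - 0.75))   # exact integral is 3/4
```

Even with only 4 points the error is roughly on the order of $$10^{-3}$$; the singular derivative of $$x^{1/3}$$ at $$x=0$$ keeps Gauss-Legendre from its usual rapid convergence, although it still outperforms the trapezoidal and Simpson rules here.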

The Error of Numerical Integration Using Uniform Discretization for a Periodic Function

 * HW5 P1 C TrapError uniform.png


 * HW5 P1 C SimpsonError uniform.png

Comment: The results show that the Trapezoidal rule achieves a much faster error decrease at the same n than the Simpson rule. This is expected: for a smooth periodic integrand integrated over a full period, the end-point error terms of the trapezoidal rule cancel and the convergence is faster than any fixed algebraic order.
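The rapid trapezoidal convergence for the periodic integrand can be demonstrated directly. A minimal Python sketch (the homework used Matlab); since $$\int_0^{2\pi}e^{\sin x}dx$$ has no elementary closed form, the reference value is taken from a very fine trapezoidal grid.

```python
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n uniform intervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

h_fun = lambda x: math.exp(math.sin(x))
a, b = 0.0, 2.0 * math.pi

ref = trapezoid(h_fun, a, b, 4096)   # effectively converged reference value
for n in [2, 4, 8, 16]:
    err = abs(trapezoid(h_fun, a, b, n) - ref)
    print(f"n={n:2d} err={err:.3e}")
```

The error decays geometrically with n rather than like $$n^{-2}$$; by n = 16 it is already near machine precision.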

=Problem 5.2: Linear State Space Model =

Given: The Linear State Space Model (LSSM)
Where LSSM has the general form
$$ \displaystyle \mathbf{x_{k+1}}_{\color{red}(nx1)} = \mathbf{F}_{\color{red}(nxn)} \mathbf{x_{k}}_{\color{red}(nx1)} $$     (2.1)

And $$ \displaystyle \mathbf{F}_{\color{red}(nxn)} $$ is defined as
$$ \displaystyle \mathbf{F}_{\color{red}(nxn)} = \mathbf{I}_{\color{red}(nxn)} + \Delta \mathbf{A}_{\color{red}(nxn)} $$     (2.2)

If we choose $$ \displaystyle {\color{red} n=2} $$ and define the matrices in (2.2) as
$$ \displaystyle \mathbf{I}_{\color{red}(2x2)} = \left[ \begin{matrix} 1 & 0 \\   0 & 1  \\ \end{matrix} \right], \quad \mathbf{\Delta} = 0.02, \quad \mathbf{A}_{\color{red}(2x2)} = \left[ \begin{matrix} -0.2 & 1    \\   -1   & -0.2  \\ \end{matrix} \right] $$     (2.3)

In addition we will consider the k-th step of the system $$ \displaystyle \mathbf{x_k} $$ and initial point $$ \displaystyle \mathbf{x_0} $$, in (2.1) as
$$ \displaystyle \mathbf{x_k}_{\color{red}(2x1)} = \left\{ \begin{matrix} x_k^1 \\ x_k^2 \\ \end{matrix} \right\} \quad and \quad \mathbf{x_0}_{\color{red}(2x1)} = \left\{ \begin{matrix} x_0^1 \\ x_0^2 \\ \end{matrix} \right\} = \left\{ \begin{matrix} 3 \\   -2  \\ \end{matrix} \right\} $$     (2.3)

1. Run LSSM
And plot $$ \displaystyle \{ \mathbf{x_j}, j=0,1,...\} $$ in the state space $$ \displaystyle \left( x^1,x^2\right) =\left( x,y\right) $$

2. Find the Equilibrium Point
As $$ \displaystyle \underset{k \to \infty}{\mathop{\lim }} \mathbf{x_{k+1}} = \underset{k \to \infty}{\mathop{\lim }} \mathbf{F^{k+1}\cdot x_0} =: \mathbf{ \hat{x}} $$

a) Plot $$ \mathbf{ \hat{x}} $$
Using a BIG RED DOT

b) Plot $$ \displaystyle \mathbf{x_0} $$
Using a BIG BLUE DOT in the same plane as $$ \displaystyle \{ \mathbf{x_j}, j=0,1,...\} $$, which is plotted using small dots

3. Gaussian Random Noise:
====a) Let $$ \displaystyle \mathbf{G} = \alpha \cdot \left\{ \begin{matrix}  1  \\   1  \\ \end{matrix} \right\}_{\color{red}2x1}  $$.====

4. Cauchy Random Noise:
====a) Let $$ \displaystyle \mathbf{G} = \alpha \cdot \left\{ \begin{matrix}  1  \\   1  \\ \end{matrix} \right\}_{\color{red}2x1}  $$.====

Hint:
Find a Matlab command to generate $$ \displaystyle \{ \mathbf{\theta_{j}}, j=0,1,2,... \} $$ in single-slit diffraction experiment.

1. Run LSSM
And plot $$ \displaystyle \{ \mathbf{x_j}, j=0,1,...\} $$ in the state space $$ \displaystyle \left( x^1,x^2\right) =\left( x,y\right) $$

Matlab Code and Plot of the LSSM:
 * HW5-P2-1.tif

Using the above Matlab code we were able to track the evolution of the state to the equilibrium point. We considered $$ \displaystyle n = 1001 $$ points to obtain the equilibrium position. It is reasonable to infer that this behavior continues as $$ \displaystyle n \rightarrow \infty $$, since eigenvalue analysis confirms the stability of the phase space. Namely, for a given $$ \displaystyle 2x2 $$ matrix $$ \displaystyle \mathbf{A} $$ of the general form,


$$ \displaystyle \mathbf{A}_{\color{red}(2x2)} = \left[ \begin{matrix} a & b \\ c & d \\ \end{matrix} \right] $$     (2.4)

Remarks:
The matrix must satisfy the following two conditions to be classified as stable: i) the trace $$ \displaystyle T = a+d < 0 $$, and ii) the determinant $$ \displaystyle D = ad-bc > 0 $$.

For our matrix $$ \displaystyle \mathbf{A} $$ we get: i) the trace $$ \displaystyle T = -0.2+(-0.2) = -0.4 < 0 $$, and ii) the determinant $$ \displaystyle D = (-0.2)(-0.2)-(1)(-1)=1.04 > 0 $$. Therefore, we observe a stable near-equilibrium at $$ \displaystyle n=1001 $$, which sufficiently demonstrates the behavior of this system. Since it was easy enough to carry out the numerical verification at larger $$ \displaystyle n $$, a calculation was also run for $$ \displaystyle n=10001 $$, which produced points on the order of $$ \displaystyle 10^{-17} $$.
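The stability claims can be checked numerically. A minimal Python sketch (the homework used Matlab), using $$\Delta$$ and $$\mathbf{A}$$ from (2.3): it verifies the trace/determinant conditions on the continuous-time matrix and, for the discrete map, that both eigenvalues of $$\mathbf{F} = \mathbf{I} + \Delta\mathbf{A}$$ lie inside the unit circle.

```python
import cmath

# System matrices from (2.2)-(2.3)
delta = 0.02
A = [[-0.2, 1.0], [-1.0, -0.2]]
F = [[1.0 + delta * A[0][0], delta * A[0][1]],
     [delta * A[1][0], 1.0 + delta * A[1][1]]]

def eig2(M):
    """Eigenvalues of a 2x2 matrix via the characteristic polynomial."""
    tr = M[0][0] + M[1][1]
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    disc = cmath.sqrt(tr * tr - 4.0 * det)
    return (tr + disc) / 2.0, (tr - disc) / 2.0

# Continuous-time test: trace(A) < 0 and det(A) > 0
print("trace(A) =", A[0][0] + A[1][1],
      " det(A) =", A[0][0] * A[1][1] - A[0][1] * A[1][0])
# Discrete-time test: both eigenvalues of F inside the unit circle
for lam in eig2(F):
    print("|lambda(F)| =", abs(lam))
```

Both eigenvalues of $$\mathbf{F}$$ have modulus about 0.9962 < 1, so the iteration contracts toward the equilibrium.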

2. Find the Equilibrium Point
As $$ \displaystyle k $$ goes to infinity, the iterates converge to the equilibrium $$ \displaystyle \mathbf{ \hat{x}} = \mathbf{0} $$, since all eigenvalues of $$ \displaystyle \mathbf{F} $$ lie inside the unit circle.

a) Plot $$ \mathbf{ \hat{x}} $$
Using a BIG RED DOT

b) Plot $$ \displaystyle \mathbf{x_0} $$
Using a BIG BLUE DOT in the same plane as $$ \displaystyle \{ \mathbf{x_j}, j=0,1,...\} $$, which is plotted using small dots

Matlab Code:

Matlab Plot of the LSSM:
 * HW5-P2-1.tif
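As a cross-check of the Matlab plot, here is a minimal Python sketch of the same iteration from $$\mathbf{x_0} = \{3, -2\}$$; after 1000 steps the state has contracted by roughly $$|\lambda|^{1000} \approx 0.02$$, consistent with convergence toward the origin.

```python
# Iterate x_{k+1} = F x_k from x_0 = [3, -2]; the trajectory spirals
# into the equilibrium at the origin since |eig(F)| < 1.
delta = 0.02
A = [[-0.2, 1.0], [-1.0, -0.2]]
F = [[1.0 + delta * A[0][0], delta * A[0][1]],
     [delta * A[1][0], 1.0 + delta * A[1][1]]]

def step(F, x):
    """One step of the linear state space model."""
    return [F[0][0] * x[0] + F[0][1] * x[1],
            F[1][0] * x[0] + F[1][1] * x[1]]

x = [3.0, -2.0]
traj = [x]
for _ in range(1000):        # n = 1001 points, as in the text
    x = step(F, x)
    traj.append(x)

print("x_1000 =", traj[-1])  # close to the equilibrium (0, 0)
```

Plotting `traj` in the (x, y) plane (e.g. with matplotlib) reproduces the inward spiral of the Matlab figure.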

3. Gaussian Random Noise:
====a) Let $$ \displaystyle \mathbf{G} = \alpha \cdot \left\{ \begin{matrix}  1  \\   1  \\ \end{matrix} \right\}_{\color{red}2x1}  $$.====

c) Use the Matlab command randn to generate $$ \displaystyle \{ \mathbf{w_{j}}, j=0,1,2,... \} $$.
Matlab Code:

d) Plot $$ \displaystyle \{ \mathbf{x_{j}}, j=0,1,2,... \} $$ for $$ \displaystyle \alpha=0.5,1,2 $$.
Shown Below is the Linear State Space Model With Random Gaussian Noise and $$ \displaystyle {\color{red}\alpha = 0.5} $$:
 * HW5-P2-3-guassianNoiseAlpha_05.tif

Shown Below is the Linear State Space Model With Random Gaussian Noise and $$ \displaystyle {\color{red}\alpha = 1} $$:
 * HW5-P2-3-guassianNoiseAlpha_2.tif

Shown Below is the Linear State Space Model With Random Gaussian Noise and $$ \displaystyle {\color{red}\alpha = 2} $$:
 * HW5-P2-3-guassianNoiseAlpha_2.tif

4. Cauchy Random Noise:
====a) Let $$ \displaystyle \mathbf{G} = \alpha \cdot \left\{ \begin{matrix}  1  \\   1  \\ \end{matrix} \right\}_{\color{red}2x1}  $$.====

c) Generate Cauchy-distributed $$ \displaystyle \{ \mathbf{w_{j}}, j=0,1,2,... \} $$ (note that randn gives Gaussian samples; see the hint below on uniformly distributed angles).
Matlab Code:

d) Plot $$ \displaystyle \{ \mathbf{x_{j}}, j=0,1,2,... \} $$ for $$ \displaystyle \alpha=0.5,1,2 $$.
Shown Below is the Linear State Space Model With Cauchy Noise and $$ \displaystyle {\color{red}\alpha = 0.5} $$:
 * HW5-P2-4cauchyNoiseAlpha_05.tif

Shown Below is the Linear State Space Model With Cauchy Noise and $$ \displaystyle {\color{red}\alpha = 1} $$:
 * HW5-P2-4-cauchyNoiseAlpha_1.tif

Shown Below is the Linear State Space Model With Cauchy Noise and $$ \displaystyle {\color{red}\alpha = 2} $$:
 * HW5-P2-4cauchyNoiseAlpha_2.tif

Hint:
Find a Matlab command to generate $$ \displaystyle \{ \mathbf{\theta_{j}}, j=0,1,2,... \} $$ in single-slit diffraction experiment.
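On the hint: if $$\theta_j$$ is uniformly distributed on $$(-\pi/2, \pi/2)$$, as the deflection angle in the single-slit analogy, then $$\tan\theta_j$$ follows a standard Cauchy distribution; in Matlab this is `tan(pi*(rand-0.5))`. A minimal Python sketch of the same idea:

```python
import math, random

def cauchy_sample(rng):
    """Standard Cauchy draw: tangent of a uniform angle in (-pi/2, pi/2),
    equivalent to Matlab's tan(pi*(rand-0.5))."""
    theta = math.pi * (rng.random() - 0.5)
    return math.tan(theta)

rng = random.Random(0)
w = sorted(cauchy_sample(rng) for _ in range(10000))
# The sample median and quartiles are stable (median ~0, Q3 ~1),
# but the sample mean is not: the Cauchy distribution has heavy tails.
print("median =", w[len(w) // 2], " Q3 =", w[3 * len(w) // 4])
```

The heavy tails are exactly what produces the occasional large jumps seen in the Cauchy-noise trajectories above.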

=Problem 5.3 Cauchy Distribution and Normal Distribution=

From the lecture slide Mtg26

Given:
The Cauchy distribution:

$$\displaystyle C({{x}_{0}},\gamma ):\quad p(x)=\frac{1}{\pi \gamma \left[ 1+{{\left( \frac{x-{{x}_{0}}}{\gamma } \right)}^{2}} \right]}$$

The Normal or Gauss distribution:

$$\displaystyle N(\mu ,\sigma ):\quad p(x)=\frac{1}{\sigma \sqrt{2\pi }}\,{{e}^{-\frac{{{(x-\mu )}^{2}}}{2{{\sigma }^{2}}}}}$$

and the definition of quartiles $$\displaystyle {{Q}_{1}}$$, $$\displaystyle {{Q}_{2}}$$ and $$\displaystyle {{Q}_{3}}$$: the points at which the cumulative distribution function equals 1/4, 1/2 and 3/4, respectively.

Find:
a) The quartiles $$\displaystyle \left\{ Q_{1}^{C},Q_{3}^{C} \right\}$$ for the Cauchy distribution

b) The quartiles $$\displaystyle \left\{ Q_{1}^{G},Q_{3}^{G} \right\}$$ for the Normal distribution

c) Let $$\displaystyle {{x}_{0}}=\mu =0$$ and $$\displaystyle {{\gamma }^{C}}=1$$ where $$\displaystyle {{\gamma }^{C}}$$ is the half width of $$\displaystyle C({{x}_{0}},\gamma )$$. Find $$\displaystyle {{\sigma }^{1}}$$ such that the half width of normal distribution $$\displaystyle {{\gamma }^{G}}=1$$. Then plot $$\displaystyle C(0,1)$$ and $$\displaystyle N(0,{{\sigma }^{1}})$$

d) Find $$\displaystyle \left\{ Q_{1}^{C},Q_{3}^{C} \right\}$$ for $$\displaystyle C(0,1)$$ and $$\displaystyle \left\{ Q_{1}^{G},Q_{3}^{G} \right\}$$ for $$\displaystyle N(0,{{\sigma }^{1}})$$. Then plot them with comments on results.

a) Three quartiles for the Cauchy distribution
From WolframAlpha we obtain the general expressions for the quartiles $$\displaystyle Q_{1}^{C}$$ and $$\displaystyle Q_{3}^{C}$$:

$$\displaystyle Q_{1}^{C}={{x}_{0}}-\gamma ,\qquad Q_{3}^{C}={{x}_{0}}+\gamma$$

(the median is $$\displaystyle Q_{2}^{C}={{x}_{0}}$$ by symmetry). We can prove this by finding the cumulative distribution function of the Cauchy distribution,

$$\displaystyle {{F}^{C}}(x)=\frac{1}{2}+\frac{1}{\pi }\arctan \left( \frac{x-{{x}_{0}}}{\gamma } \right)$$

Then, setting $$\displaystyle {{F}^{C}}(Q_{1}^{C})=\tfrac{1}{4}$$ gives $$\displaystyle \arctan \left( \frac{Q_{1}^{C}-{{x}_{0}}}{\gamma } \right)=-\frac{\pi }{4}$$, so $$\displaystyle Q_{1}^{C}={{x}_{0}}-\gamma$$.

Similarly, $$\displaystyle {{F}^{C}}(Q_{3}^{C})=\tfrac{3}{4}$$ gives $$\displaystyle Q_{3}^{C}={{x}_{0}}+\gamma$$.

b) Three quartiles for the Normal distribution
From WolframAlpha, the quartiles of the normal distribution are

$$\displaystyle Q_{1}^{G}=\mu -0.6745\,\sigma ,\qquad Q_{3}^{G}=\mu +0.6745\,\sigma$$

(the median is $$\displaystyle Q_{2}^{G}=\mu$$ by symmetry). The cumulative distribution function of the normal distribution is

$$\displaystyle {{F}^{G}}(x)=\frac{1}{2}\left[ 1+\operatorname{erf}\left( \frac{x-\mu }{\sigma \sqrt{2}} \right) \right]$$

Setting $$\displaystyle {{F}^{G}}=\tfrac{1}{4}$$ and $$\displaystyle \tfrac{3}{4}$$ and solving numerically confirms the validity of the quartiles given.

c) Compare the Cauchy and Gauss distributions with the same half width.
With the information given, our Cauchy distribution with half width equal to 1 is

$$\displaystyle C(0,1)=\frac{1}{\pi (1+{{x}^{2}})}$$

To find the matching normal distribution we need the maximum value of $$\displaystyle N(0,\sigma )$$, which is attained at $$\displaystyle x=0$$. Hence,

$$\displaystyle {{p}_{\max }}=\frac{1}{\sigma \sqrt{2\pi }}$$

Then we must find the x coordinates of the half-width nodes, where the density falls to half its maximum:

$$\displaystyle {{e}^{-\frac{{{x}^{2}}}{2{{\sigma }^{2}}}}}=\frac{1}{2}\quad \Rightarrow \quad x=\pm \sigma \sqrt{2\ln 2}$$

The difference between the two roots should be twice the half width, which is 2; hence,

$$\displaystyle 2\sigma \sqrt{2\ln 2}=2\quad \Rightarrow \quad {{\sigma }^{1}}=\frac{1}{\sqrt{2\ln 2}}\approx 0.8493$$

which means the normal distribution is $$\displaystyle N(0,0.8493)$$.

The plots of the two densities can be produced with WolframAlpha.



d) Find the quartiles for the particular Cauchy and Normal distributions. Then plot them with comments on the results.
By the formulas from parts a) and b) we have,

$$\displaystyle Q_{1}^{C}=-1,\quad Q_{3}^{C}=+1\quad \text{for}\ C(0,1)$$

$$\displaystyle Q_{1}^{G}\approx -0.5729,\quad Q_{3}^{G}\approx +0.5729\quad \text{for}\ N(0,0.8493)$$

The following figure is plotted by Adam Franklin,


Corresponding Matlab Code

Comments

From the results above we can see that the range from $$\displaystyle Q_{1}^{C}$$ to $$\displaystyle Q_{3}^{C}$$ is significantly larger than that from $$\displaystyle Q_{1}^{G}$$ to $$\displaystyle Q_{3}^{G}$$. The probability density of the normal distribution becomes very small outside (-0.5729, 0.5729), while for the Cauchy distribution the corresponding interval extends to (-1, 1). As the figures show, the Cauchy pdf lies generally below the normal pdf around the peak, leaving more probability in the "tails". This means the Cauchy distribution assigns more probability to large outcomes.
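The quartile values quoted above can be reproduced numerically. A minimal Python sketch (the figure itself was produced in Matlab): the Cauchy quartile follows from inverting its CDF in closed form, while the normal quartile is obtained by bisection on the erf-based CDF with $$\sigma^1 = 1/\sqrt{2\ln 2}$$.

```python
import math

# sigma chosen so that the normal half width (at half maximum) equals 1
sigma = 1.0 / math.sqrt(2.0 * math.log(2.0))   # ~0.8493

def normal_cdf(x, mu=0.0, s=1.0):
    return 0.5 * (1.0 + math.erf((x - mu) / (s * math.sqrt(2.0))))

def quantile(cdf, p, lo=-50.0, hi=50.0):
    """Invert a monotone CDF by bisection."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Q3 of C(0,1): inverse CDF gives tan(pi*(p - 1/2)) with p = 3/4
q3_cauchy = math.tan(math.pi * (0.75 - 0.5))
q3_normal = quantile(lambda x: normal_cdf(x, 0.0, sigma), 0.75)
print("Q3^C =", q3_cauchy, " Q3^G =", q3_normal)   # ~1.0 vs ~0.5729
```

The same `quantile` helper applied to the standard normal recovers the familiar 0.6745 factor.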

=Problem 5.4 Mass-Spring-Damper with Gaussian noise and Cauchy noise=

Given: The Following Spring System
An ideal mass-spring-damper system with mass m, spring constant k and viscous damper of damping coefficient c is subject to an oscillatory force u. This can be illustrated in the following image.



Find:
1) Derive equations of motion in terms of d, c, k, m, u.

2) Let $$ {\mathbf{x}} = \left\{ \begin{matrix} d \\  {\dot d} \end{matrix} \right\} = \left\{ \begin{matrix} x^1 \\ x^2 \end{matrix} \right\}$$. Find $$\left( {{\mathbf{F}},{\mathbf{G}}} \right) $$

3) Find $${c_{cr}}$$ in terms of k, m such that this system is critically damped.

4) Let k=1, m=1/2, $$ {{\mathbf{x}}_0} = {\left[ {0.8, - 0.4} \right]^T} $$

a) For u=0, plot $${{\mathbf{x}}_k}$$ for $$c = \frac{1}{2}{c_{cr}},{c_{cr}},\frac{3}{2}{c_{cr}}$$.

b) For u=0.5 Gaussian noise and $$c = \frac{3}{2}{c_{cr}}$$, plot $${{\mathbf{x}}_k}$$.

c) For u=0.5 Cauchy noise and $$c = \frac{3}{2}{c_{cr}}$$, plot $${{\mathbf{x}}_k}$$.

Solution:
1) The spring exerts a restoring force on the mass,

$$\displaystyle {{F}_{s}}=-kd$$     (5.4.1)

and the damper a viscous damping force:

$$\displaystyle {{F}_{d}}=-c\dot{d}$$     (5.4.2)

Applying Newton's second law with the applied force u,

$$\displaystyle m\ddot{d}=u+{{F}_{s}}+{{F}_{d}}=u-c\dot{d}-kd$$     (5.4.3)

we derive the equation of motion:

$$\displaystyle m\ddot{d}+c\dot{d}+kd=u$$     (5.4.4)

2) From Eq. 5.4.4, we can solve for $${\ddot d}$$ as follows:

$$\displaystyle \ddot{d}=\frac{1}{m}\left( u-c\dot{d}-kd \right)$$

Therefore we can rewrite $$\dot{\mathbf{x}}$$ in matrix form

$$\displaystyle \dot{\mathbf{x}}=\left[ \begin{matrix} 0 & 1 \\ -k/m & -c/m \\ \end{matrix} \right]\mathbf{x}+\left\{ \begin{matrix} 0 \\ 1/m \\ \end{matrix} \right\}u=:\mathbf{A}\mathbf{x}+\mathbf{B}u$$

If we make a discretization $$\displaystyle {{t}_{j}}=j\Delta$$ and use the forward Euler method

$$\displaystyle \dot{\mathbf{x}}\approx \frac{{{\mathbf{x}}_{j+1}}-{{\mathbf{x}}_{j}}}{\Delta }$$

we have

$$\displaystyle {{\mathbf{x}}_{j+1}}=\left( \mathbf{I}+\Delta \mathbf{A} \right){{\mathbf{x}}_{j}}+\Delta \mathbf{B}{{u}_{j}}=:\mathbf{F}{{\mathbf{x}}_{j}}+\mathbf{G}{{u}_{j}}$$

3) We can rewrite Eq. 5.4.4 as

$$\displaystyle \ddot{d}+2\zeta {{\omega }_{0}}\dot{d}+\omega _{0}^{2}d=\frac{u}{m}$$

where $$\omega_0 = \sqrt{ k \over m } $$, $$\zeta = { c \over 2 \sqrt{m k} }$$

When $$\zeta = 1$$, the system is said to be critically damped. A critically damped system converges to zero faster than any other and without oscillating. Setting $$\zeta = 1$$ therefore gives

$$\displaystyle {{c}_{cr}}=2\sqrt{mk}$$     (5.4.12)

See the literature on damping for further explanation.

4) a) The following Matlab code is used to solve the problem without any noise; the damping coefficient can be chosen to create the under-damped, critically damped, or over-damped cases.



Comment:

Different c values produce different damping cases. From the figure above, we can see that a critically damped system converges to zero faster than any other and without oscillating.
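The three damping cases can be reproduced with a short script. A minimal Python sketch of the forward-Euler iteration (the homework used Matlab; the step size $$\Delta = 0.02$$ is an assumption carried over from Problem 5.2):

```python
import math

k, m = 1.0, 0.5
c_cr = 2.0 * math.sqrt(m * k)        # critical damping, Eq. 5.4.12
delta = 0.02                          # assumed step size, as in Problem 5.2

def simulate(c, steps=2000, u=lambda j: 0.0):
    """Forward-Euler iteration x_{j+1} = F x_j + G u_j for the
    mass-spring-damper state x = [d, d_dot]."""
    d, v = 0.8, -0.4                  # x_0
    traj = [(d, v)]
    for j in range(steps):
        d_new = d + delta * v
        v_new = v + delta * (u(j) - c * v - k * d) / m
        d, v = d_new, v_new
        traj.append((d, v))
    return traj

for label, c in [("under", 0.5 * c_cr), ("critical", c_cr), ("over", 1.5 * c_cr)]:
    d_final = simulate(c)[-1][0]
    print(f"{label}-damped: d after 2000 steps = {d_final:.2e}")
```

Passing a noisy `u` (Gaussian or Cauchy draws scaled by 0.5) to `simulate` reproduces parts b) and c).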

b) The following Matlab code is used to solve the problem with Gaussian random noise in the over-damped case.



Comment:

When the system has Gaussian noise, convergence can be achieved; however, the final result is not exactly zero, since noise is injected at every step and the state keeps oscillating around zero.

c) The following Matlab code is used to solve the problem with Cauchy noise in the over-damped case.



Comment:

When the system has Cauchy noise, the overall behavior is similar to the Gaussian case. However, the excursions around zero are much larger, because the heavy-tailed Cauchy distribution occasionally produces very large noise samples.

=Problem 5.5: Richardson Extrapolation and Romberg Integration applied to HW* 2.4=

''' This problem was solved without referring to S10 homework. '''

Statement:
1) Modify the Matlab code from HW*2.4 to make it more efficient, i.e. use Richardson extrapolation to compute higher order integral estimates starting from the trapezoidal rule. 2) Construct a Romberg table for results in 1). Compare to results from HW* 2.4.

Solution:
1) From problem 2.4, the composite trapezoidal rule was implemented for 2, 4, 8, 16, 32, and 64 node points. Equation 5.5.1 was then implemented to find the higher order integral estimates:

$$\displaystyle {{T}_{k+1}}(n)=\frac{{{4}^{k+1}}\,{{T}_{k}}(2n)-{{T}_{k}}(n)}{{{4}^{k+1}}-1}$$     (5.5.1)

The code is easily modified to include greater numbers of nodes or even higher order integral estimates.

The order of the error on each integral estimate can be found using Equation 5.5.2:

$$\displaystyle {{E}_{k}}(n)=O\left( {{h}^{2(k+1)}} \right),\qquad h=\frac{b-a}{n}$$     (5.5.2)

2) The following table presents the results of using Romberg integration on problem 2.4. The subscript on $$T$$ corresponds to the iteration. Since the method starts with the composite trapezoidal rule, $$T_0$$ corresponds to integration by the trapezoidal rule for various numbers of nodes.

Using Equation 5.5.2, the error of $$T_3(4)$$ is on the order of $$10^{-7}$$. Thus, using this method, the third order estimate with only 4 nodes has $$10^{-7}$$ accuracy. However, this requires knowing the composite trapezoidal estimates for 2, 4, 8, 16 and 32 nodes. For comparison, the composite trapezoidal rule alone only reaches $$10^{-6}$$ accuracy using 512 nodes, and the composite Simpson rule reaches $$10^{-6}$$ accuracy with 16 nodes. Legendre-Gauss quadrature reaches $$10^{-7}$$ accuracy with only 4 nodes. If the quadrature weights are readily available, Legendre-Gauss quadrature converges to an accurate solution with many fewer nodes than the other methods. The Romberg integration scheme provides another useful method for numerically calculating the value of an integral, especially because it reuses previous results to improve the integral estimate.
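The Romberg construction described above can be sketched compactly. Since the integrand of HW 2.4 is not reproduced here, the Python fragment below uses $$e^x$$ on [0,1] as a stand-in; the recursion is Equation 5.5.1 applied column by column.

```python
import math

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def romberg_table(f, a, b, levels=4):
    """T[k][i] holds T_k(n) for n = 2^(i+1); row k+1 is the Richardson
    extrapolation T_{k+1}(n) = (4^(k+1) T_k(2n) - T_k(n)) / (4^(k+1) - 1)."""
    T = [[trapezoid(f, a, b, 2 ** (i + 1)) for i in range(levels)]]
    for k in range(levels - 1):
        T.append([(4 ** (k + 1) * T[k][i + 1] - T[k][i]) / (4 ** (k + 1) - 1)
                  for i in range(levels - 1 - k)])
    return T

f = lambda x: math.exp(x)      # stand-in integrand (HW 2.4's is not shown here)
exact = math.e - 1.0
T = romberg_table(f, 0.0, 1.0)
for k, row in enumerate(T):
    print(f"T{k} errors:", ["%.2e" % abs(v - exact) for v in row])
```

Each row of the printed table gains roughly two orders of accuracy in h, mirroring the Romberg table in the text.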

=Problem 5.6: Computing CT_k(n) =

''' This problem was solved without referring to S10. '''

From the lecture slide Mtg 30-2

Given:
From [[media:Nm1.s11.mtg30.djvu|Lecture 30-1]], the corrected Trap. Rule is

Find:
1) Compute $$I_n$$ using $$CT_k(n)$$, for k = 1 and n = 2, 4, 8, ... until the error is of order $$10^{-6}$$

2) Compute $$I_n$$ using $$CT_k(n)$$, for k = 2 and n = 2, 4, 8, ... until the error is of order $$10^{-6}$$

3) Compute $$I_n$$ using $$CT_k(n)$$, for k = 3 and n = 2, 4, 8, ... until the error is of order $$10^{-6}$$

Solution:
The corrected Trap. Rule is

$$\displaystyle C{{T}_{k}}(n)={{T}_{0}}(n)-\sum\limits_{i=1}^{k}{{{d}_{i}}{{h}^{2i}}\left[ {{f}^{(2i-1)}}(b)-{{f}^{(2i-1)}}(a) \right]}$$     (6.1)

where

$$\displaystyle h=\frac{b-a}{n}$$

and

$$\displaystyle {{d}_{i}}=\frac{{{B}_{2i}}}{(2i)!}$$

$$B_{2i}$$ are Bernoulli numbers. The values for $$ d_{i}, i=1,2,3 $$ are

$$\displaystyle {{d}_{1}}=\frac{1}{12},\qquad {{d}_{2}}=-\frac{1}{720},\qquad {{d}_{3}}=\frac{1}{30240}$$

We can develop expressions for $$CT_1(n)$$,$$CT_2(n)$$ and  $$CT_3(n)$$ using Eq 6.1.

$$CT_1(n)$$

$$\displaystyle C{{T}_{1}}(n)={{T}_{0}}(n)-\frac{{{h}^{2}}}{12}\left[ {f}'(b)-{f}'(a) \right]$$

where $$T_0(n) $$ is the comp. trap. rule below.


$$\displaystyle T_0(n) = \frac{b-a}{n} \left[ \frac{1}{2}f_0 + f_1 + \ldots + f_{n-1} + \frac{1}{2}f_n \right]$$     (6.7)

$$CT_2(n)$$

$$\displaystyle C{{T}_{2}}(n)={{T}_{0}}(n)-\frac{{{h}^{2}}}{12}\left[ {f}'(b)-{f}'(a) \right]+\frac{{{h}^{4}}}{720}\left[ {f}'''(b)-{f}'''(a) \right]$$

$$CT_3(n)$$

$$\displaystyle C{{T}_{3}}(n)=C{{T}_{2}}(n)-\frac{{{h}^{6}}}{30240}\left[ {{f}^{(5)}}(b)-{{f}^{(5)}}(a) \right]$$

The matlab code below was used to generate the above table.
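A minimal Python sketch of the corrected trapezoidal rule (the Matlab code and the problem's actual integrand are not shown in the source, so $$e^x$$ on [0,1] is used as a stand-in; conveniently, all of its odd derivatives are again $$e^x$$):

```python
import math

D = [1.0 / 12.0, -1.0 / 720.0, 1.0 / 30240.0]   # d_i = B_{2i}/(2i)!

def trapezoid(f, a, b, n):
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def corrected_trap(f, derivs, a, b, n, k):
    """CT_k(n) = T_0(n) - sum_{i=1..k} d_i h^(2i) [f^(2i-1)(b) - f^(2i-1)(a)];
    derivs[i-1] must be the (2i-1)-th derivative of f."""
    h = (b - a) / n
    ct = trapezoid(f, a, b, n)
    for i in range(1, k + 1):
        ct -= D[i - 1] * h ** (2 * i) * (derivs[i - 1](b) - derivs[i - 1](a))
    return ct

f = lambda x: math.exp(x)       # stand-in integrand; f', f''', f^(5) all equal f
derivs = [f, f, f]
exact = math.e - 1.0

for k in (1, 2, 3):
    n = 2
    while abs(corrected_trap(f, derivs, 0.0, 1.0, n, k) - exact) > 1e-6:
        n *= 2
    print(f"k={k}: error below 1e-6 at n={n}")
```

Each additional end-point correction raises the order by two, so the target error is met at much smaller n as k grows.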

=Problem 5.7: Discuss Pros and Cons of Different Numeric Integration Methods =

''' This problem was solved by referencing Team 2, S10 homework. '''

From the lecture slide Mtg 30-1

Problem Statement:
Discuss the pros and cons of the following quadrature methods (Lecture 30-1):


 * 1) Taylor's Series
 * 2) Composite Trapezoidal Rule
 * 3) Composite Simpson's Rule
 * 4) Romberg Table (Including Richardson's Extrapolation)
 * 5) Corrected Trapezoidal Rule.

1) Taylor's Series

Pros:

 * 1) Local behavior about the point of expansion is very accurate, fast and computationally tractable.
 * 2) The number of operations is typically small for a higher order approximation of simple functions, such as polynomials.
 * 3) With the aid of symbolic packages, complex function behavior is easily found and accurately captured.
 * 4) The solution is highly accurate for a small order compared to the Trapezoidal and Simpson's rules.
 * 5) It is one step and explicit.

Cons:

 * 1) One needs to know the "smoothness" and global behavior of the function, since computing derivatives of an unbounded solution would be pointless.
 * 2) The number of operations may be large for complex functions, for example functions containing non-distributable products, quotients, or compositions of trigonometric and exponential terms, to name a few.
 * 3) In addition, oscillatory functions, especially those having a small period (high frequency), are more cumbersome to approximate using a Taylor series.
 * 4) It needs the explicit form of the derivatives of the function.

2) Composite Trapezoidal Rule

Pros:

 * 1) The method is quite simple to implement, compared to other methods.
 * 2) It takes a piecewise linear approximation of the function, hence execution is fast.
 * 3) The solution converges reliably as the number of intervals grows.
 * 4) The weighting coefficients are nearly equal to each other.
 * 5) Convergence for periodic functions is very quick when integrating over an integer multiple of the period.
 * 6) The function (in the integrand) need only be twice continuously differentiable on the domain.

Cons:

 * 1) The error is much higher compared to the other methods.
 * 2) This method is less accurate for nonlinear functions, since it uses a straight-line approximation.
 * 3) Steep concavity (or convexity) of the integrand can result in under- or overestimates, via the second-derivative error term. For a concave-up integrand (positive second derivative) one observes negative error and therefore an overestimate; similarly, for a concave-down integrand (negative second derivative) one observes positive error and therefore an underestimate.

3) Composite Simpson's Rule

Pros:

 * 1) It assumes piecewise quadratic interpolation, hence the accuracy is much higher than that of the trapezoidal rule.
 * 2) It can integrate polynomials up to third order exactly.
 * 3) The error is smaller than that of the trapezoidal rule.
 * 4) The weighting coefficients are simple and do not fluctuate in magnitude.

Cons:

 * 1) A large number of ordinates is needed within the interval.

4) Romberg Table (Including Richardson's Extrapolation)

Pros:

 * 1) The error is reduced substantially as we go from $$T_k(n)$$ to $$T_{k+1}(n)$$.
 * 2) It is efficient in the sense that $$T_{k+1}(n)$$ is calculated from the already computed values of $$T_k(n)$$ and $$T_k(2n)$$.
 * 3) This reduces the time and space requirements. Also, the table format is convenient for tracing the history of the calculation and making comparisons.

Cons:

 * 1) In Richardson extrapolation, we neglect the higher-order terms $$a_i h^{2i}$$.

5) Corrected Trapezoidal Rule

Pros:

 * 1) When integrals of periodic functions are approximated numerically, the corrected trapezoidal rule is the best choice; in that situation the error can even reach zero.

Cons:

 * 1) This method requires derivatives of the function at the end points of the interval.
 * 2) Bernoulli numbers must be calculated for each order of derivative.

=Problem 5.8: Theorem of Higher Order Error for Trap. Rule =

''' This problem was solved without referring to S10. '''

From the lecture slide Mtg 30-2

Given
Higher Order Error for Trap. rule Eq (5) from lecture 30-2, which is:

Solution
Part 1: Change of Variables

First, transform the variables of the error function for higher order trapezoidal rule,


$$\displaystyle
\begin{align} E_{n}^{T} = I-{{I}_{n}} &=\int\limits_{a}^{b}{f(x)dx}-{{T}_{0}}(n) \\ &=\sum\limits_{k=0}^{n-1}{\left[ \int\limits_{{{x}_{k}}}^{{{x}_{k+1}}}{f(x)dx}-\frac{h}{2}\{f({{x}_{k}})+f({{x}_{k+1}})\} \right]} \\ \end{align}
$$     (8.4)
where $${{x}_{k}}:=a+kh,\ h=\frac{(b-a)}{n}$$.

Now, we transform $$\int\limits_{{{x}_{k}}}^{{{x}_{k+1}}}{f(x)dx}$$ to an integral over $$[-1,+1]$$, $$\int\limits_{-1}^{+1}{f(x(t))\frac{h}{2}dt}$$, via the mapping
$$\displaystyle
\begin{align} x(t)&=\frac{{{x}_{k+1}}-{{x}_{k}}}{2}t+\frac{{{x}_{k+1}}+{{x}_{k}}}{2} \\ & =\frac{h}{2}t+\frac{{{x}_{k+1}}+{{x}_{k}}}{2} \\ \end{align}
$$     (8.5)

From the transformation, we see the following:


$$\displaystyle
\begin{align} x(-1)&={{x}_{k}} \\ x(0)&=\frac{{{x}_{k+1}}+{{x}_{k}}}{2} \\ x(+1)&={{x}_{k+1}} \\ dx&=\frac{h}{2}dt \\ \end{align}
$$     (8.6)

By using Eq 8.6, we can rewrite the trapezoidal error as follows:


$$\displaystyle
\begin{align} E_{n}^{T} &=\sum\limits_{k=0}^{n-1}{\left[ \int\limits_{-1}^{+1}{f(x(t))\frac{h}{2}dt}-\frac{h}{2}\{f(x(-1))+f(x(+1))\} \right]} \\ & =\frac{h}{2}\sum\limits_{k=0}^{n-1}{\left[ \int\limits_{-1}^{+1}{f(x(t))dt}-\{f(x(-1))+f(x(+1))\} \right]} \\ \end{align}
$$     (8.7)

By defining $${{g}_{k}}(t)=f(x(t))$$ such that $$x\in [{{x}_{k}},{{x}_{k+1}}]$$, we have


$$\displaystyle
E_{n}^{T}=\frac{h}{2}\sum\limits_{k=0}^{n-1}{\left[ \int\limits_{-1}^{+1}{{{g}_{k}}(t)dt}-\{{{g}_{k}}(-1)+{{g}_{k}}(+1)\} \right]}
$$     (8.8)

Part 2: Show the Equality

Next, integrate by parts.


$$\displaystyle
\int\limits_{-1}^{+1}{(-t)\,g_{k}^{(1)}(t)dt}=\left[ (-t){{g}_{k}}(t) \right]_{t=-1}^{t=+1}-\int\limits_{-1}^{+1}{{{g}_{k}}(t)\,d(-t)}=-\left\{ {{g}_{k}}(+1)+{{g}_{k}}(-1) \right\}+\int\limits_{-1}^{+1}{{{g}_{k}}(t)dt}
$$     (8.9)

Part 3: Higher Order Terms

Lastly, since we have already defined the function $${{g}_{k}}(t)=f(x(t)), x\in [{{x}_{k}},{{x}_{k+1}}]$$ and from the transformation above, we have $$dx=\frac{h}{2}dt$$.

Successively differentiating the function $$\displaystyle {{g}_{k}}(t)$$ with respect to $$\displaystyle t$$, we get
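By the chain rule, since $$x(t)$$ is linear with $$\frac{dx}{dt}=\frac{h}{2}$$, each differentiation brings out one factor of $$h/2$$:

$$\displaystyle g_{k}^{(m)}(t)={{\left( \frac{h}{2} \right)}^{m}}{{f}^{(m)}}(x(t)),\qquad m=1,2,\ldots$$

This is the source of the successively higher powers of $$h$$ in the error expansion.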

=References=

=Contributing Members & Referenced Lecture=