User:Egm6341.s11.team4/HW5

=Problem 5.1: Problem Type and Quadrature Technique=

Problem given on Lecture Slides 23-1, 23-2, 23-4, and 26-4.

Given
1. $$ I=\int_{0}^{6}\frac{dx}{1+(x-3)^{2}} $$

2. $$ I=\int_{0}^{1}x^{\frac{1}{3}} dx $$

3. $$ I=\int_{0}^{2\pi} e^{\sin(x)} dx $$

Objective

For integral 1:

1. Develop a plot of the errors and error ratios for composite Trapezoidal and composite Simpson's quadrature using uniform discretization as the number of discretization points goes from 2 to 128.

2. Plot the piecewise linear and piecewise quadratic approximations to demonstrate the error difference between the quadrature approximations.

For integral 2:

1. Develop a plot of the errors and error ratios for composite Trapezoidal and composite Simpson's quadrature using uniform discretization as the number of discretization points goes from 2 to 128.

2. Develop a plot of the errors and error ratios for composite Trapezoidal and composite Simpson's quadrature using non-uniform discretization, with a finer discretization near 0, as the number of discretization points goes from 2 to 128.

3. Use Gauss-Legendre quadrature to approximate the integral.

For integral 3:

1. Develop a plot of the errors and error ratios for composite Trapezoidal and composite Simpson's quadrature using uniform discretization as the number of discretization points goes from 2 to 32.

Part 1: Runge Phenomenon
$$ I=\int_{0}^{6}\frac{dx}{1+(x-3)^{2}} $$

Performing the Composite Trapezoidal and Composite Simpson's Quadrature with an increasing number of uniformly distributed nodes we obtain the following table.

As expected, Simpson's quadrature outperforms Trapezoidal quadrature for an equivalent number of nodes. Looking at the ratio of successive errors as the number of nodes increases, we see that although the Trapezoidal error ratio starts out large, it settles between 3 and 4, near the theoretical prediction of 4 implied by the $$ \mathcal{O}(h^{2}) $$ error term. The error ratio for Simpson's quadrature varies significantly as the number of nodes increases: it starts out small, overshoots, and then converges as n goes to infinity. We would expect the error ratio to be about 16 due to the $$ \mathcal{O}(h^{4}) $$ error term. (Note: After comparing these results with those obtained by other teams, it appears that the error ratio is a numerically sensitive quantity; this sensitivity may explain why our error ratio does not reflect the theoretical value of 16.)
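The behavior described above is easy to reproduce. Below is an illustrative Python sketch (not the team's MATLAB code; function names are ours); the exact value used for the errors is $$I = 2\arctan 3$$:

```python
import math

def f(x):
    return 1.0 / (1.0 + (x - 3.0) ** 2)

def trapezoid(f, a, b, n):
    # composite trapezoidal rule on n uniform panels
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(f, a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h * s / 3.0

exact = 2.0 * math.atan(3.0)        # antiderivative is arctan(x - 3)
for n in (2, 4, 8, 16, 32, 64, 128):
    eT = abs(exact - trapezoid(f, 0.0, 6.0, n))
    eS = abs(exact - simpson(f, 0.0, 6.0, n))
    print(n, eT, eS)
```

At large n the successive-error ratios approach 4 and 16 respectively, consistent with the discussion above.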

Now we will plot the function against the piecewise linear and piecewise quadratic approximations used in Trapezoidal and Simpson's quadrature. The following three figures display the function and its approximations on one graph for a varying number of discretization points. As expected, the approximating polynomials produced by the composite quadrature vary significantly from the original function for a small number of nodes and become virtually indistinguishable from it as n goes to infinity.

Since Simpson's quadrature is more effective for this problem, it is worthwhile to investigate the source of this advantage. While the two quadrature approximations both appear very similar over the majority of the function, the Simpson's quadrature outperforms the Trapezoidal quadrature at the peak. The following figure magnifies the function peak for n=32.



As can be seen in the above figure, the Simpson's quadrature does a substantially better job of approximating the function at peaks. In a problem exhibiting behavior such as this one we would expect Simpson's quadrature to be an effective choice for numerical integration.

Part 2: Discretization Improvement
$$ I=\int_{0}^{1}x^{\frac{1}{3}} dx $$

Performing the Trapezoidal and Simpson's Quadrature with an increasing number of uniformly distributed nodes, performing Trapezoidal and Simpson's Quadrature with a smaller discretization near zero, and performing Gauss-Legendre Quadrature, we obtain the following table.

Plotting the true function against its quadrature approximations yields the figures below.

As expected, the trapezoidal quadrature improves substantially as the number of nodes increases. Additionally, the error ratio is consistently around 2.5, relatively near to our theoretical ratio of 4. It should also be noted, by quick inspection of the above plots, that the bulk of the quadrature error arises due to inadequate approximation near zero. The function is well approximated by a linear model away from 0, but near 0 it appears to be highly non-linear. This is our first clue that a node redistribution may improve quadrature.

Here, we can immediately see how Simpson's quadrature is a powerful tool. When n=2, though the function approximation is sub-par, error cancellation still allows us to obtain a strong quadrature result. The error ratio is consistently near 2.5, not near the value of 16 we expect from theory; this is a clue that we may not be applying Simpson's rule as effectively as possible. As n increases, we once again see that our function is not as well approximated near 0 as it is elsewhere. Node redistribution would likely aid Simpson's quadrature as well.

In order to aid the quadrature, we developed a node distribution method that created a smaller discretization near zero, where our integration methods were least effective. This was done through a modification of the Chebyshev nodes. Instead of applying the Chebyshev nodes directly, which would refine the discretization near both endpoints, we took only the negative half of the Chebyshev nodes. These nodes varied from -1 to 0 and were disproportionately located near -1. To make these nodes work for our integration problem from 0 to 1, we added 1 to all node positions. This gave us a node distribution from 0 to 1 with a finer discretization near zero, as desired. It should also be noted that, since Simpson's rule requires three equidistant nodes for each panel, a node was placed between all neighboring pseudo-Chebyshev nodes, equidistant from its neighbors. The same non-uniform node distribution was used for both the composite Trapezoidal quadrature and composite Simpson's quadrature for a fair comparison.
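The node construction described above can be sketched as follows. This is an illustrative Python reconstruction, not the team's code; the helper name `graded_nodes` and the exact half-Chebyshev angle formula are our assumptions:

```python
import math

def graded_nodes(m):
    # take the m+1 Chebyshev points with angle in [pi/2, pi]; their cosines
    # sweep [0, -1] and cluster near -1, so adding 1 gives points on [0, 1]
    # clustered near 0
    base = sorted(math.cos(math.pi * (0.5 + 0.5 * j / m)) + 1.0
                  for j in range(m + 1))
    # insert the midpoint of every panel so each Simpson panel gets three
    # equally spaced nodes
    nodes = []
    for left, right in zip(base, base[1:]):
        nodes.extend([left, 0.5 * (left + right)])
    nodes.append(base[-1])
    return nodes

print(graded_nodes(4))
```

The resulting list runs from 0 to 1, is strictly increasing, and has its smallest panels next to 0, as described.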

Our table shows the decided advantage that Trapezoidal quadrature with node redistribution has over both uniform-discretization quadrature methods. The error produced with two nodes is, as expected, the same with and without node redistribution. This is due to the requirement we made in our node distribution that every other node be equidistant from its neighbors, which means that, when n=2, the nodes are placed in identical locations for uniform and non-uniform discretization. As n grows larger, however, we can see the vast improvement from node redistribution. A visual check of the plots produced by Trapezoidal quadrature with and without node redistribution for n=8 shows that the error is visibly smaller. Additionally, our average error ratio of about 4.5 is substantially closer to the theoretical value of 4.

The node redistribution aids the Simpson quadrature as well as the trapezoidal quadrature. The node redistribution made smaller panels for the composite Simpson's quadrature near 0, meaning that the very non-parabolic behavior we saw in our previous graph that was contained to the first panel will have a less significant impact on our quadrature. A visual comparison for n=8 between Simpson's quadrature with and without node redistribution shows that there is a substantial visible improvement in our approximation. At n=16 the error of the non-uniform composite Simpson's quadrature is already an order of magnitude smaller than that of the uniform quadrature. Additionally, the ratio of the errors is much improved. By n=64 we are seeing an error ratio of 18.6, very near the theoretical value of 16.

Lastly, it should be noted that the Gauss-Legendre quadrature method is as effective as, or more effective than, any of the previously discussed methods. Though it requires the computation of the weights and nodes, when this can be done efficiently it is clear that Gauss-Legendre quadrature is a powerful method.

Part 3: Periodic Functions
$$ I=\int_{0}^{2\pi} e^{\sin(x)} dx $$

It is worth noting that while Simpson's quadrature is generally considered superior to Trapezoidal quadrature, there are classes of problems where this may not be the case, such as periodic functions. Performing the quadrature approximation of the provided integral with a uniform discretization, we obtain the following table.

As can be seen, the Trapezoidal quadrature is substantially better than the Simpson's quadrature for even a small number of nodes. At n=8 the Trapezoidal error is 4 orders of magnitude smaller than the Simpson's error. Furthermore, the error ratio is roughly 30,000, far exceeding the theoretical value of 4 for the composite trapezoidal rule. Meanwhile, the Simpson's error ratio is only about 50 for the same number of nodes. This problem emphasizes how composite Trapezoidal quadrature beats composite Simpson's quadrature for periodic functions. As the following plot indicates, this is at least partially due to the composite Trapezoidal approximation picking up the advantages of error cancellation for periodic functions and composite Simpson's approximation losing it.

As can be seen, the composite Trapezoidal rule sometimes underestimates the function and sometimes overestimates it, resulting in error cancellation. Meanwhile the composite Simpson's rule consistently overestimates the function.
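The comparison above can be reproduced with a short script. The reference value is $$2\pi I_0(1)$$, where $$I_0$$ is the modified Bessel function of order zero, evaluated here by its power series (an illustrative Python sketch, not the team's code):

```python
import math

def f(x):
    return math.exp(math.sin(x))

def trapezoid(a, b, n):
    # composite trapezoidal rule on n uniform panels
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def simpson(a, b, n):
    # composite Simpson's rule; n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if i % 2 else 2) * f(a + i * h) for i in range(1, n))
    return h * s / 3.0

# reference value: I = 2*pi*I_0(1), with I_0 evaluated by its power series
I0 = sum(0.25 ** k / math.factorial(k) ** 2 for k in range(25))
exact = 2.0 * math.pi * I0

eT = abs(exact - trapezoid(0.0, 2.0 * math.pi, 8))
eS = abs(exact - simpson(0.0, 2.0 * math.pi, 8))
print(eT, eS)
```

Even at n=8 the trapezoidal error is already tiny, illustrating the spectral accuracy of the trapezoidal rule on smooth periodic integrands.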

Part 3 Code
This problem was solved by Brendan Mahon

=Problem 5.2: Linear State Space Models=

Problem Statement
Refer to lecture notes [[media:nm1.s11.mtg25.djvu|25-2]] and [[media:nm1.s11.mtg25.djvu|26-1]] for detailed description

Given
Linear State Space Model without Random Noise (LSSM):

$$\displaystyle {{x}_{k+1}}=F{{x}_{k}} $$

Linear State Space Model with Random Noise (LSSMRN):

$$\displaystyle {{x}_{k+1}}=F{{x}_{k}}+G{{w}_{k+1}} $$

For this specific case, we choose

$$\displaystyle {{F}_{n\times n}}={{I}_{n\times n}}+\Delta {{A}_{n\times n}},\quad n=2 $$

and $$I=\left( \begin{matrix}  1 & 0  \\   0 & 1  \\ \end{matrix} \right)$$, $$A=\left( \begin{matrix}   -0.2 & 1  \\   -1 & -0.2  \\ \end{matrix} \right)$$, $$\Delta =0.02$$, $${{x}_{0}}=\left( \begin{matrix}   3  \\   -2  \\ \end{matrix} \right)$$

Objective

1). Run the LSSM and plot $${{x}_{j}},j=0,1,2,\cdots $$ in the space $$({{x}^{1}},{{x}^{2}})$$.

2). Find the equilibrium point as

$$\displaystyle \underset{k\to \infty }{\mathop{\lim }}\,{{x}_{k+1}}=\underset{k\to \infty }{\mathop{\lim }}\,{{F}^{k+1}}{{x}_{0}}=:\hat{x} $$

Plot $$\hat{x}$$ as a big red dot and $${{x}_{0}}$$ as a big blue dot in the same plot as the small dots $${{x}_{j}},j=1,2,\cdots $$.

3). Let $$G=\left( \begin{matrix} 1 \\   1  \\ \end{matrix} \right)\alpha $$, so that $${{w}_{k+1}}$$ is a scalar. Use the Matlab command randn to generate $$({{w}_{j}},j=0,1,2,\cdots )$$. Plot $${{x}_{j}},j=0,1,2,\cdots $$ for $$\alpha =0.5,1,2$$.

4). Same as 3), but with Cauchy random noise. Hint: Find a Matlab command to generate the angles $${{\theta }_{j}},j=0,1,2,\cdots $$ as in single slit diffraction.

2). Find equilibrium point and run LSSM

$$\displaystyle \hat{x}=\underset{k\to \infty }{\mathop{\lim }}\,{{F}^{k+1}}{{x}_{0}}=\underset{k\to \infty }{\mathop{\lim }}\,{{\left( \begin{matrix}  0.996 & 0.02  \\   -0.02 & 0.996  \\ \end{matrix} \right)}^{k+1}}\left( \begin{matrix}   3  \\   -2  \\ \end{matrix} \right)=\left( \begin{matrix}   0  \\   0  \\ \end{matrix} \right) $$
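The deterministic iteration can be checked numerically: the eigenvalues of $$F$$ are $$0.996\pm 0.02i$$, with modulus $$\approx 0.9962<1$$, so the trajectory spirals into the origin. A minimal Python sketch (plotting omitted; Python rather than the report's MATLAB, with our variable names):

```python
# parameters from the problem statement: F = I + Delta*A
Delta = 0.02
A = [[-0.2, 1.0], [-1.0, -0.2]]
F = [[1.0 + Delta * A[0][0], Delta * A[0][1]],
     [Delta * A[1][0], 1.0 + Delta * A[1][1]]]

x = [3.0, -2.0]                       # x_0
traj = [tuple(x)]
for _ in range(5000):                 # iterate x_{k+1} = F x_k
    x = [F[0][0] * x[0] + F[0][1] * x[1],
         F[1][0] * x[0] + F[1][1] * x[1]]
    traj.append(tuple(x))
# traj spirals toward the equilibrium x_hat = (0, 0)
```

After a few thousand steps the state is numerically indistinguishable from the equilibrium $$\hat{x}=(0,0)$$.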

4). Run LSSM with Cauchy Random Noise
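For the Cauchy-noise run, the hint amounts to drawing a uniform angle and taking its tangent. A hedged Python sketch of the standard inverse-CDF construction (the helper name `cauchy_sample` is ours, not from the report):

```python
import math, random

def cauchy_sample(rng, x0=0.0, gamma=1.0):
    # inverse-CDF draw: if U ~ Uniform(0,1), then
    # x0 + gamma*tan(pi*(U - 1/2)) is Cauchy(x0, gamma)
    return x0 + gamma * math.tan(math.pi * (rng.random() - 0.5))

rng = random.Random(0)
alpha = 0.5                                   # noise scale, G = alpha*[1, 1]^T
F = [[0.996, 0.02], [-0.02, 0.996]]
x = [3.0, -2.0]
for _ in range(1000):                         # x_{k+1} = F x_k + G w_{k+1}
    w = cauchy_sample(rng)
    x = [F[0][0] * x[0] + F[0][1] * x[1] + alpha * w,
         F[1][0] * x[0] + F[1][1] * x[1] + alpha * w]
```

Because the Cauchy distribution has heavy tails, occasional very large draws of w kick the trajectory far from the origin, unlike the Gaussian case.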


This problem was solved by Hailong Chen

=Problem 5.3 Cauchy Heavy Tails=

Given
For 1) and 2), Quartile points: $$ Q_1 \ $$, $$ Q_2 \ $$ and $$ Q_3 \ $$.

Cauchy pdf :

Gauss(normal) pdf :

For 3) and 4): $$ x_0 = \mu = 0 \ $$ and $$ \gamma^{C}=1 \ $$, where $$ \gamma^{C} \ $$ is the half width of $$ C(x_0, \gamma^{C}) \ $$ and $$ \gamma^{G} \ $$ is the half width of $$ N(\mu, \sigma) \ $$.

Objectives
1) Find $$ \left \{ Q_1, Q_3 \right \} \ $$ for $$ C \left ( x_0, \gamma \right ) \ $$

2) Find $$ \left \{ Q_1, Q_3 \right \} \ $$ for $$ N \left ( \mu, \sigma \right ) \ $$

3) Find $$ \sigma^1 $$ such that $$ \gamma^{G} =1 $$, and plot $$ C \left ( 0,1 \right ) $$ and $$ N \left ( 0, \sigma^1 \right ) $$

4) Find and plot $$ \left \{ Q_1^C, Q_3^C \right \} $$ for $$ C \left ( x_0, \gamma \right ) $$ and $$ C \left ( 0, 1 \right ) $$; also find and plot $$ \left \{ Q_1^G, Q_3^G \right \} $$ for $$ N \left ( \mu, \sigma \right ) $$ and $$ N \left ( 0, \sigma^1 \right ) $$

The cumulative distribution function of the Cauchy pdf
Substituting Q1 for x in eq(3.1), we get eq(3.2).

From eq(3.2)

Substituting Q3 for x in eq(3.1), we get eq(3.4).

From eq(3.4)

The cumulative distribution function of the Gauss pdf
Substituting Q1 for x in eq(3.8), we get eq(3.9).

Substituting Q3 for x in eq(3.8), we get eq(3.10).
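The quartile results can be cross-checked numerically: the Cauchy CDF $$F(x)=\tfrac{1}{2}+\tfrac{1}{\pi}\arctan\!\left(\tfrac{x-x_0}{\gamma}\right)$$ inverts in closed form, while the Gaussian CDF can be inverted by bisection. An illustrative Python sketch (function names are ours, not from the report):

```python
import math

def cauchy_quartiles(x0, gamma):
    # invert F(x) = 1/2 + arctan((x - x0)/gamma)/pi at p = 1/4 and 3/4;
    # tan(-pi/4) = -1 and tan(pi/4) = 1, so Q1 = x0 - gamma, Q3 = x0 + gamma
    return (x0 + gamma * math.tan(math.pi * (0.25 - 0.5)),
            x0 + gamma * math.tan(math.pi * (0.75 - 0.5)))

def normal_quartiles(mu, sigma):
    def cdf(x):
        return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))
    def invert(p):
        lo, hi = mu - 50.0 * sigma, mu + 50.0 * sigma
        for _ in range(200):                  # plain bisection on the CDF
            mid = 0.5 * (lo + hi)
            if cdf(mid) < p:
                lo = mid
            else:
                hi = mid
        return 0.5 * (lo + hi)
    return invert(0.25), invert(0.75)

print(cauchy_quartiles(0.0, 1.0), normal_quartiles(0.0, 1.0))
```

For C(0,1) the quartiles are exactly ±1, while for N(0,1) they are ≈ ±0.6745, illustrating the much wider interquartile spread of the heavy-tailed Cauchy.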

Find standard deviation for half width of Gaussian distribution
Gaussian pdf is

The height of Gaussian pdf is

We want to find the value of $$ \sigma $$ that places the half width (at half maximum) at 1.
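Setting the Gaussian density at $$x=1$$ equal to half its peak value gives $$e^{-1/(2\sigma^2)}=\tfrac{1}{2}$$, i.e. $$\sigma^{1}=1/\sqrt{2\ln 2}\approx 0.8493$$. A short Python check of this value (variable names are ours):

```python
import math

# half width at half maximum of N(0, sigma) is sigma*sqrt(2*ln 2);
# setting it equal to 1 (the half width of C(0,1)) gives sigma^1
sigma1 = 1.0 / math.sqrt(2.0 * math.log(2.0))

def normal_pdf(x, mu, s):
    return math.exp(-((x - mu) ** 2) / (2.0 * s * s)) / (s * math.sqrt(2.0 * math.pi))

# check: the density at x = 1 is exactly half the peak density
ratio = normal_pdf(1.0, 0.0, sigma1) / normal_pdf(0.0, 0.0, sigma1)
print(sigma1, ratio)
```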

MatLab Code

Plot



===Find $$ \left \{ Q_{1}^{C}, Q_{3}^{C} \right \} $$ for $$ C(x_0,\gamma) \ $$ and $$ C(0,1) \ $$ and find $$ \left \{ Q_{1}^{G}, Q_{3}^{G} \right \} $$ for $$ N(\mu,\sigma) \ $$ and $$ N(0,\sigma^1) \ $$.===

From 1)

From 2)

Plot $$ \left \{ Q_{1}^{C}, Q_{3}^{C} \right \} $$ for $$ C(0,1) \ $$ and $$ \left \{ Q_{1}^{G}, Q_{3}^{G} \right \} $$ for $$ N(0,\sigma^1) \ $$

MatLab Code

Plot



=Problem 5.4: Spring-Mass-Damper System=

Objective

1. Derive the equation of motion in terms of d, c, k, m, u.

2. Let $$\underline{x}=\left\{ \begin{matrix} d \\ {\dot{d}} \\ \end{matrix} \right\}$$. Find ( $$\underline{F}$$, $$\underline{G}$$ ).

3. Find c in terms of k, m such that the system is critically damped.

4. Let k=1, m=1/2, $${{x}_{0}}={{[0.8,-0.4]}^{T}}$$.

a). For u=0, plot $$\underline{x}$$ for $$c=0.5{{C}_{cr}},{{C}_{cr}},1.5{{C}_{cr}}$$

b). For u = 0.5 Gaussian noise and $$c=1.5{{C}_{cr}}$$, plot $$\underline{x}$$

c). For u = 0.5 Cauchy noise and $$c=1.5{{C}_{cr}}$$, plot $$\underline{x}$$

Solution

1. From the balance of forces on the mass m, we obtain:

$$\begin{align} & {{F}_{c}}=c\centerdot \dot{d} \\ & {{F}_{k}}=k\centerdot d \\ \end{align}$$

$$u-{{F}_{c}}-{{F}_{k}}=m\centerdot a=m\centerdot \ddot{d}$$

So the equation of motion can be represented as:

$$u-c\centerdot \dot{d}-k\centerdot d=m\centerdot \ddot{d}$$

2. From the above, we can derive the following equation:
$$\ddot{d}=-\frac{k}{m}\centerdot d-\frac{c}{m}\centerdot \dot{d}+\frac{u}{m}$$

And it is obvious that:
$$\dot{d}=0\centerdot d+1\centerdot \dot{d}+0$$

Then, we can form the matrix equation using the above two equations:
$$\left[ \begin{matrix} {\dot{d}} \\ {\ddot{d}} \\ \end{matrix} \right]=\left[ \begin{matrix} 0 & 1 \\   -\frac{k}{m} & -\frac{c}{m}  \\ \end{matrix} \right]\centerdot \left[ \begin{matrix} d \\ {\dot{d}} \\ \end{matrix} \right]+\frac{1}{m}\centerdot \left[ \begin{matrix} 0 \\   u  \\ \end{matrix} \right]$$

Discretizing the model with $$\underline{\dot{x}}=({{\underline{x}}_{k+1}}-{{\underline{x}}_{k}})/h$$, we obtain:
$${{\underline{x}}_{k+1}}=[\underline{I}+h\centerdot \left[ \begin{matrix} 0 & 1 \\   -\frac{k}{m} & -\frac{c}{m}  \\ \end{matrix} \right]]\centerdot {{\underline{x}}_{k}}+\frac{h}{m}\centerdot \left[ \begin{matrix} 0 \\   u  \\ \end{matrix} \right]$$

Simplifying the above equation,
$${{\underline{x}}_{k+1}}=\left[ \begin{matrix} 1 & h \\ -\frac{kh}{m} & 1-\frac{hc}{m} \\ \end{matrix} \right]\centerdot {{\underline{x}}_{k}}+\frac{h}{m}\centerdot \left[ \begin{matrix} 0 \\   u  \\ \end{matrix} \right]$$

So,

$$\begin{align} & \underline{F}=\left[ \begin{matrix} 1 & h \\ -\frac{kh}{m} & 1-\frac{hc}{m} \\ \end{matrix} \right] \\ & \underline{G}=\frac{h}{m}\left[ \begin{matrix} 0 \\ 1 \\ \end{matrix} \right] \\ \end{align}$$

3. The equation of motion is represented as:

$$\ddot{d}=-\frac{k}{m}\centerdot d-\frac{c}{m}\centerdot \dot{d}+\frac{u}{m}$$

Then, the natural frequency and damping ratio are $$\omega_0 = \sqrt{\frac{k}{m}}$$ and $$\zeta = \frac{c}{2 m \omega_0} \, .$$

Critically damped (ζ=1): the system returns to equilibrium as quickly as possible without oscillating. Setting $$\zeta = 1$$ gives:

$${{C}_{cr}}=2\centerdot \sqrt{km}$$
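With the part-4 values k=1, m=1/2, the critical damping is $$C_{cr}=2\sqrt{km}=\sqrt{2}\approx 1.414$$. A small Python sketch of the discretized model at $$c=C_{cr}$$ (the forward-Euler step h and the variable names are our choices, not the team's code) shows the expected non-oscillatory decay:

```python
import math

# part-4 values: k = 1, m = 1/2, x0 = [0.8, -0.4], u = 0
k, m = 1.0, 0.5
c = 2.0 * math.sqrt(k * m)           # critical damping C_cr = 2*sqrt(k*m)
h = 0.001                            # forward-Euler step size
d, v = 0.8, -0.4
ds = [d]
for _ in range(10000):               # integrate to t = 10: x_{k+1} = (I + h*A) x_k
    d, v = d + h * v, v + h * (-(k / m) * d - (c / m) * v)
    ds.append(d)
# d decays toward zero without changing sign: no oscillation at c = C_cr
```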

4. a). For u=0, plot $$\underline{x}$$ for $$c=0.5{{C}_{cr}},{{C}_{cr}},1.5{{C}_{cr}}$$

b). For u = 0.5 Gaussian noise and $$c=1.5{{C}_{cr}}$$, plot $$\underline{x}$$

c). For u = 0.5 Cauchy noise and $$c=1.5{{C}_{cr}}$$, plot $$\underline{x}$$



This problem was solved by shengfeng yang

=Problem 5.5: Modifying MATLAB Code to Calculate the Composite Trapezoidal Rule More Efficiently=

Objective

 * 1) Modify the MATLAB code used in HW 2.4 to make calculating the composite trapezoidal rule more efficient, i.e., $$ \displaystyle T_0(2^j)=T_0(2^{(j-1)})+\cdots$$
 * 2) Create a Romberg table and compare results to those initially calculated in HW 2.4

Modified MATLAB Code and Development Process
Recall the formula for the composite trapezoidal rule


 * $$T_0(n)=\frac{(b-a)}{n}\left(\frac{1}{2}f(x_0)+f(x_1)+f(x_2)+\cdots+f(x_{n-1})+\frac{1}{2}f(x_n)\right)$$

where $$ h = \frac{b-a}{n},\;\;x_0=a,$$ and $$\displaystyle x_n=b$$. In practice it is common to successively double n, the number of panels used (typically starting from a power of two), until satisfactory convergence is obtained, effectively halving each subinterval. The new set of nodes includes all of the previously used $${x_i, i=0,1,2,...,n}$$ as well as new nodes which bisect the previous panels. To increase computational efficiency, the values of the function evaluated at all the nodes during the previous computation are reused to calculate successive estimates. This yields the following relationship

Note that the $$\frac{1}{2}$$ factor in the result of (5.1) is due to the halving of the step-size; $$ h_{new} = \frac{1}{2}h_{old}$$.

This concept was employed to modify a previous MATLAB code to use this technique to cut down on computational cost. The modified code is found below.
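The reuse described above can be sketched as follows. This Python version (names ours, not the team's MATLAB) returns the whole sequence $$T_0(1), T_0(2), T_0(4), \ldots$$ while evaluating f only at the new midpoints at each halving:

```python
import math

def trapezoid_doubling(f, a, b, levels):
    # T[j] = T0(2^j); each halving reuses all previous f-evaluations and
    # only adds the new panel midpoints:
    #   T0(2n) = T0(n)/2 + h_new * (sum of f at the new midpoints)
    T = [(b - a) * 0.5 * (f(a) + f(b))]
    n = 1
    for _ in range(levels):
        h = (b - a) / (2 * n)
        mids = sum(f(a + (2 * i + 1) * h) for i in range(n))
        T.append(0.5 * T[-1] + h * mids)
        n *= 2
    return T

T = trapezoid_doubling(math.exp, 0.0, 1.0, 4)   # approximates e - 1
```

Each doubling costs only n new function evaluations instead of the 2n+1 a fresh computation would require.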

Romberg Table
The Romberg Table involves using previously calculated numerical estimates of an integral to produce higher order estimates to surprisingly high accuracy. Equation [[media:nm1.s11.mtg29.djvu| (3)p.29-5]] gives the relation

A MATLAB code was developed to take the output of the modified composite trapezoidal rule and create a Romberg table and is found below:
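A compact sketch of the table construction, using the standard Richardson-extrapolation relation $$R_{j,k}=\frac{4^{k}R_{j,k-1}-R_{j-1,k-1}}{4^{k}-1}$$ (shown in Python rather than MATLAB; the example integral $$\int_0^1 e^x dx = e-1$$ is ours):

```python
import math

def trap(f, a, b, n):
    # composite trapezoidal rule on n uniform panels
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

def romberg(f, a, b, levels):
    # column 0 holds T0(2^j); higher columns apply Richardson extrapolation:
    # R[j][k] = (4^k R[j][k-1] - R[j-1][k-1]) / (4^k - 1)
    R = [[trap(f, a, b, 2 ** j)] for j in range(levels + 1)]
    for j in range(1, levels + 1):
        for k in range(1, j + 1):
            R[j].append((4 ** k * R[j][k - 1] - R[j - 1][k - 1]) / (4 ** k - 1))
    return R

R = romberg(math.exp, 0.0, 1.0, 4)   # test integral: int_0^1 e^x dx = e - 1
```

With only 16 panels, the bottom-right entry is many orders of magnitude more accurate than the plain trapezoid value in the same row.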

These codes were used to find estimates of the integral

The following MATLAB script performed the computations for this case:

Matlab Code:

Comments
The second column, corresponding to the composite trapezoidal rule approximations, matches those previously calculated in 2.4 exactly. It is observed that at higher levels of the table (higher k) a much smaller number of panels is needed for far greater accuracy. This shows that the Romberg table is a powerful tool for acquiring accurate numerical estimates of integrals with minimal computational effort.

Solved by William Kurth.

=Problem 5.6: Computation of I_n using CT_k(n)=

Refer to lecture slide [[media:nm1.s11.mtg30.djvu|30-1]] for the problem statement.

Given

(6.1) $$\displaystyle I:=\int_{a}^{b}\underbrace{\frac{e^x-1}{x}}_{f(x)}dx$$

for $$\displaystyle a = -1$$ and $$\displaystyle b = 1$$.

Corrected Trapezoidal Rule:

(6.2) $$\displaystyle CT_k(n) = CT_{k-1}(n) + a^0_kh^{2k} + O(h^{2(k+1)})$$

Where

(6.3) $$\displaystyle a_i^0 = d_i \left[ f^{(2i-1)}(b) - f^{(2i-1)}(a) \right]$$

and $$\displaystyle h = \frac{b-a}{n}$$.

Composite Trapezoidal Rule:

(6.4) $$\displaystyle CT_{0}(n) = T_0(n) = \frac{b-a}{n} \left[ \frac{1}{2}f_0 + f_1 + \ldots + f_{n-1} + \frac{1}{2}f_n \right]$$

Where $$\displaystyle d_i = -B_{2i}/(2i)!$$ and $$B_{2i}$$ is the $$2i$$-th Bernoulli number.

Objective

a.) Compute $$I_n$$ using $$CT_k(n)$$, for k = 1 and n = 2, 4, 8, 16, ... until the error is of order $$10^{-6}$$

b.) Compute $$I_n$$ using $$CT_k(n)$$, for k = 2 and n = 2, 4, 8, 16, ... until the error is of order $$10^{-6}$$

c.) Compute $$I_n$$ using $$CT_k(n)$$, for k = 3 and n = 2, 4, 8, 16, ... until the error is of order $$10^{-6}$$

Solution
Using Wolfram Alpha we find that the exact value of the integral to 12 significant digits is:

I = 2.11450175075
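As a cross-check, the corrected rule $$CT_1(n)$$ can be reproduced directly from eqs. (6.2)-(6.4). The Python sketch below (names ours, not the report's MATLAB) uses $$f'(x)=(xe^x-e^x+1)/x^2$$, so $$f'(1)=1$$ and $$f'(-1)=1-2/e$$:

```python
import math

def f(x):
    # (e^x - 1)/x has a removable singularity at x = 0 with limit 1
    return 1.0 if x == 0.0 else (math.exp(x) - 1.0) / x

def T0(a, b, n):
    # composite trapezoidal rule on n uniform panels
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + i * h) for i in range(1, n)))

a, b = -1.0, 1.0
I = 2.11450175075                    # reference value quoted in the text
# d1 = -B_2/2! = -1/12, with f'(1) = 1 and f'(-1) = 1 - 2/e
a1 = (-1.0 / 12.0) * (1.0 - (1.0 - 2.0 / math.e))

def CT1(n):
    h = (b - a) / n
    return T0(a, b, n) + a1 * h ** 2

print([abs(I - CT1(n)) for n in (2, 4, 8, 16)])
```

Doubling n divides the $$CT_1$$ error by roughly 16, as the remaining $$O(h^4)$$ term predicts.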

a.) Compute $$I_n$$ using $$CT_k(n)$$ for k = 1
Using equations 6.2, 6.3 and 6.4 we compute $$CT_1(n)$$


(6.5) $$\displaystyle CT_1(n) = T_0(n) + a^0_1h^{2} + O(h^{4})$$

$$\displaystyle a^0_1 = d_1 \left[ f^{(1)}(1) - f^{(1)}(-1) \right] = \frac{-1}{12}\left[1 - 0.264241117657 \right] = -0.0613132401952$$

The MatLab code below was used for different values of n to generate the following table:

function [CTk1,Enk1] = CorTrapk1(a,b,n)
%Corrected Trapezoidal Rule for k = 1
%  f(x) = (e^x - 1)/x

h = (b - a)/n;
ni = n+1;
I = 2.11450175075;
a1 = -0.061313240195;

x = linspace(a,b,ni);
f = (exp(x) - 1)./x;
f((n/2)+1) = 1;        % remove the 0/0 at x = 0: the limit of f is 1

i = 1;
To = 0;
Toi = zeros(1,ni);
while i <= ni
    if i == 1 || i == ni
        Toi(i) = 0.5*f(i);
    else
        Toi(i) = f(i);
    end
    To = To + Toi(i);
    i = i+1;
end

To = h*To;
CTk1 = To + a1*h^2
Enk1 = abs(I - CTk1)

end

b.) Compute $$I_n$$ using $$CT_k(n)$$ for k = 2
Using equations 6.2, 6.3 and 6.5 we compute $$CT_2(n)$$


(6.6) $$\displaystyle CT_2(n) = CT_{1}(n) + a^0_2h^{4} + O(h^{6})$$

$$\displaystyle a^0_2 = d_2 \left[ f^{(3)}(1) - f^{(3)}(-1) \right] = \frac{1}{720}\left[0.563436343081 - 0.113928941256 \right] = 0.000624315835 $$

The MatLab code below was used for different values of n to generate the following table:

function [CTk2,Enk2] = CorTrapk2(a,b,n)
%Corrected Trapezoidal Rule for k = 2
%  f(x) = (e^x - 1)/x

h = (b - a)/n;
I = 2.11450175075;
a2 = 0.000624315835;   % a^0_2 from eq (6.3); positive, since d_2 = +1/720

CTk2 = CorTrapk1(a,b,n) + a2*h^4
Enk2 = abs(I - CTk2)

end

c.) Compute $$I_n$$ using $$CT_k(n)$$ for k = 3
Using equations 6.2, 6.3 and 6.6 we compute $$CT_3(n)$$


(6.7) $$\displaystyle CT_3(n) = CT_{2}(n) + a^0_3h^{6} + O(h^{8})$$

$$\displaystyle a^0_3 = d_3 \left[ f^{(5)}(1) - f^{(5)}(-1) \right] = \frac{-1}{30240}\left[0.395599547802 - 0.071302178109 \right] = -0.000010724119 $$

The MatLab code below was used for different values of n to generate the following table:

function [CTk3,Enk3] = CorTrapk3(a,b,n)
%Corrected Trapezoidal Rule for k = 3
%  f(x) = (e^x - 1)/x

h = (b - a)/n;
I = 2.11450175075;
a3 = -0.000010724119;

CTk3 = CorTrapk2(a,b,n) + a3*h^6
Enk3 = abs(I - CTk3)

end

This problem was solved by Erle Fields

=Problem 5.7: Pros and Cons of Various Numerical Integration Techniques=

Objective
Discuss the pros and cons of the following numerical integration methods:
 * 1) Taylor Series
 * 2) Composite Trapezoidal Rule
 * 3) Composite Simpson's Rule
 * 4) Romberg Table (Richardson Extrapolation)
 * 5) Corrected Trapezoidal Rule

Solution
Solved by William Kurth.

=Problem 5.8: Proof of Identities=

Given
Refer to Lecture notes [[media:nm1.s11.mtg30.djvu|30-2]] and [[media:nm1.s11.mtg30.djvu|30-3]] for detailed description.

Objective
Show that


$$\displaystyle E_{n}^{T}=\frac{h}{2}\sum\limits_{k=0}^{n-1}{[\int\limits_{-1}^{+1}{{{g}_{k}}(t)dt}-\{{{g}_{k}}(-1)+{{g}_{k}}(+1)\}]} $$

$$\displaystyle \int\limits_{-1}^{+1}{(-t)g_{k}^{(1)}(t)dt}=\int\limits_{-1}^{+1}{{{g}_{k}}(t)dt}-[{{g}_{k}}(-1)+{{g}_{k}}(+1)] $$

$$\displaystyle g_{k}^{(i)}(t)={{\left( \frac{h}{2} \right)}^{i}}{{f}^{(i)}}(x(t)) $$

1). Function transformation

$$\displaystyle \begin{align} & E_{n}^{T}=I-{{I}_{n}}=\int\limits_{a}^{b}{f(x)dx}-{{T}_{0}}(n) \\ & =\sum\limits_{k=0}^{n-1}{[\int\limits_{{{x}_{k}}}^{{{x}_{k+1}}}{f(x)dx}-\frac{h}{2}\{f({{x}_{k}})+f({{x}_{k+1}})\}]} \\ \end{align} $$

where $${{x}_{k}}:=a+kh,\ h=(b-a)/n$$. Now, we transform $$\int\limits_{{{x}_{k}}}^{{{x}_{k+1}}}{f(x)dx}$$ into an integral over $$t\in [-1,+1]$$ using the change of variables
$$\displaystyle \begin{align} & x(t)=\frac{{{x}_{k+1}}-{{x}_{k}}}{2}t+\frac{{{x}_{k+1}}+{{x}_{k}}}{2} \\ & =\frac{h}{2}t+\frac{{{x}_{k+1}}+{{x}_{k}}}{2} \\ \end{align} $$

From the above transformation relationship, it is obvious that
$$\displaystyle \left\{ \begin{matrix} x(-1)={{x}_{k}} \\ x(0)=\frac{{{x}_{k+1}}+{{x}_{k}}}{2} \\ x(+1)={{x}_{k+1}} \\ dx=\frac{h}{2}dt \\ \end{matrix} \right. $$

Hence we have:
$$\displaystyle \begin{align} & E_{n}^{T}=\sum\limits_{k=0}^{n-1}{[\int\limits_{-1}^{+1}{f(x(t))\frac{h}{2}dt}-\frac{h}{2}\{f(x(-1))+f(x(+1))\}]} \\ & =\frac{h}{2}\sum\limits_{k=0}^{n-1}{[\int\limits_{-1}^{+1}{f(x(t))dt}-\{f(x(-1))+f(x(+1))\}]} \\ \end{align} $$

If we define $${{g}_{k}}(t)=f(x(t))$$ for $$x\in [{{x}_{k}},{{x}_{k+1}}]$$, then we have

$$\displaystyle E_{n}^{T}=\frac{h}{2}\sum\limits_{k=0}^{n-1}{[\int\limits_{-1}^{+1}{{{g}_{k}}(t)dt}-\{{{g}_{k}}(-1)+{{g}_{k}}(+1)\}]} $$

2). Integration by parts

Integrating by parts, we have

$$\displaystyle \begin{align} & \int\limits_{-1}^{+1}{(-t)g_{k}^{(1)}(t)dt}=[(-t){{g}_{k}}(t)]_{t=-1}^{t=+1}-\int\limits_{-1}^{+1}{{{g}_{k}}(t)d(-t)} \\ & =\int\limits_{-1}^{+1}{{{g}_{k}}(t)dt}-[{{g}_{k}}(-1)+{{g}_{k}}(+1)] \\ \end{align} $$
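The integration-by-parts identity can be sanity-checked numerically for a sample integrand, e.g. $$g(t)=e^{t}$$, for which both sides equal $$-2/e$$. A small Python check (the quadrature helper and names are ours):

```python
import math

def g(t):
    return math.exp(t)

def gprime(t):
    return math.exp(t)

def quad(fn, a, b, n=20000):
    # fine composite trapezoid; accurate enough for a sanity check
    h = (b - a) / n
    return h * (0.5 * (fn(a) + fn(b)) + sum(fn(a + i * h) for i in range(1, n)))

lhs = quad(lambda t: -t * gprime(t), -1.0, 1.0)
rhs = quad(g, -1.0, 1.0) - (g(-1.0) + g(1.0))
print(lhs, rhs)
```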

3). Successive differentiation

Since we have already defined $${{g}_{k}}(t)=f(x(t))$$ for $$x\in [{{x}_{k}},{{x}_{k+1}}]$$, and from the transformation relationship $$dx=\frac{h}{2}dt$$, successively differentiating $${{g}_{k}}(t)$$ with respect to $$t$$ (noting that $$x(t)$$ is linear, so $$x'(t)=\frac{h}{2}$$ is constant), we get

$$\displaystyle g_{k}^{(i)}(t)={{f}^{(i)}}(x(t))\cdot {{\left( x'(t) \right)}^{i}}={{\left( \frac{h}{2} \right)}^{i}}{{f}^{(i)}}(x(t)) $$

This problem was solved by Hailong Chen

=Contributing Members & Referenced Lecture=

= Signatures =


 * Brendan Mahon 16:34, 22 March 2011 (UTC)
 * shengfeng yang 18:34, 22 March 2011 (UTC)
 * Taeho Kim 20:38, 22 March 2011 (UTC)
 * Erle Fields 03:41, 23 March 2011 (UTC)
 * William Kurth 16:22, 23 March 2011 (UTC)
 * Hailong Chen 15:34, 23 March 2011 (EST)