User:Egm6341.s10.Team4.riherd

Mark Riherd

pmriherd@gmail.com

= 7.6 - Hermite-Simpson Integration of Coupled ODEs =

Given
This is a flight control problem. The dynamics of flight are well documented and will not be described in detail here.

General Method:

By using collocation to force our Hermite-Simpson interpolation to exactly solve the given ODEs at the midpoint $$ \displaystyle z_{i+1/2}$$, we find that our time stepping method takes the form

$$ \displaystyle \mathbf{z_{i+1}}-\mathbf{z_{i}}=\frac{h}{6}(\mathbf{f_i}+4\mathbf{f_{i+1/2}}+\mathbf{f_{i+1}})$$

Additionally, our Hermite interpolation gives the midpoint state $$ \displaystyle \mathbf{z_{i+1/2}}=\tfrac{1}{2}(\mathbf{z_i}+\mathbf{z_{i+1}})+\tfrac{h}{8}(\mathbf{f_i}-\mathbf{f_{i+1}})$$, at which $$ \displaystyle \mathbf{f_{i+1/2}}=\mathbf{f}(\mathbf{z_{i+1/2}})$$.

However, $$ \displaystyle \mathbf{z_{i+1}}$$ cannot be determined explicitly from this equation. Given $$ \displaystyle \mathbf{z_{i}}$$, the Newton-Raphson method can be employed in order to find $$ \displaystyle \mathbf{z_{i+1}}$$. By converting $$ \displaystyle \mathbf{z_{i+1}}-\mathbf{z_{i}}=\frac{h}{6}(\mathbf{f_i}+4\mathbf{f_{i+1/2}}+\mathbf{f_{i+1}})$$ to $$ \displaystyle \mathbf{F(z_i,z_{i+1})}=\mathbf{0} $$, the Newton-Raphson method takes the form

$$ \displaystyle \mathbf{z_{i+1}^{k+1}}=\mathbf{z_{i+1}^{k}}- \left[\mathbf{\frac{dF(z_i,z_{i+1}^{k})}{dz_{i+1}}}\right]^{-1}\mathbf{F(z_i,z_{i+1}^{k})} $$

where

$$ \displaystyle \mathbf{F(z_i,z_{i+1})}= \mathbf{z_{i+1}}-\mathbf{z_{i}}- \frac{h}{6}(\mathbf{f_i}+4\mathbf{f_{i+1/2}}+\mathbf{f_{i+1}})=\mathbf{0} $$
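Since the MATLAB source for this section is not reproduced on this page, the stepping scheme can be illustrated with a small sketch (Python here; a finite-difference Jacobian stands in for the analytic one derived in the Solution, and the predictor, tolerance, and iteration limit are arbitrary choices):

```python
import numpy as np

def hermite_simpson_step(f, z_i, h, newton_iters=20, tol=1e-12):
    """One Hermite-Simpson step: solve F(z_{i+1}) = 0 for z_{i+1} by Newton-Raphson.

    F(z_{i+1}) = z_{i+1} - z_i - (h/6)(f_i + 4 f_{i+1/2} + f_{i+1}), with the
    midpoint state z_{i+1/2} = (z_i + z_{i+1})/2 + (h/8)(f_i - f_{i+1}).
    """
    f_i = f(z_i)

    def F(z_ip1):
        f_ip1 = f(z_ip1)
        z_mid = 0.5 * (z_i + z_ip1) + (h / 8.0) * (f_i - f_ip1)
        return z_ip1 - z_i - (h / 6.0) * (f_i + 4.0 * f(z_mid) + f_ip1)

    z = z_i + h * f_i          # explicit Euler predictor as the initial guess
    n = z.size
    eps = 1e-7                 # finite-difference step for the Jacobian
    for _ in range(newton_iters):
        Fz = F(z)
        if np.linalg.norm(Fz) < tol:
            break
        J = np.empty((n, n))   # dF/dz_{i+1} by forward differences
        for k in range(n):
            dz = np.zeros(n)
            dz[k] = eps
            J[:, k] = (F(z + dz) - Fz) / eps
        z = z - np.linalg.solve(J, Fz)
    return z

# one step of z' = -z from z = 1 (exact answer: exp(-h))
z1 = hermite_simpson_step(lambda z: -z, np.array([1.0]), 0.1)
```

For a linear test equation such as $$ \displaystyle z'=-z$$, one Newton iteration essentially lands on the solution, and the step reproduces $$ \displaystyle e^{-h}$$ to the scheme's fourth-order accuracy.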

Equations of Motion:

Let

$$ \displaystyle \begin{bmatrix} z \end{bmatrix} = \begin{bmatrix} x \\ y \\ v \\ \gamma\end{bmatrix}$$

The equations of motion that we are concerned with are

$$ \displaystyle \begin{bmatrix} \dot{z} \end{bmatrix} = \begin{bmatrix} \dot{x} \\ \dot{y} \\ \dot{v} \\ \dot{\gamma} \end{bmatrix} = \begin{bmatrix} v \cos \gamma \\ v \sin \gamma \\ \frac{1}{m}((T-D) \cos \alpha - L \sin \alpha) - g \sin \gamma \\ \frac{T-D}{mv}\sin \alpha+\frac{L}{mv}\cos \alpha -\frac{g}{v}\cos \gamma \end{bmatrix} $$

The aerodynamic forces appearing in these equations are

$$ \displaystyle D=\tfrac{1}{2}C_d\rho v^2 S_{ref} $$

$$ \displaystyle L=\tfrac{1}{2}C_l\rho v^2 S_{ref} $$

where

$$ \displaystyle C_d=a_1\alpha^2+a_2\alpha+a_3 $$

$$ \displaystyle C_l=b_1\alpha+b_2 $$

$$ \displaystyle \rho=c_1y^2+c_2y+c_3 $$

$$ \displaystyle a_1=-1.9431 \tfrac{1}{rad^2}$$

$$ \displaystyle a_2=-0.1499 \tfrac{1}{rad}$$

$$ \displaystyle a_3=0.2359 $$

$$ \displaystyle b_1=21.9 \tfrac{1}{rad}$$

$$ \displaystyle b_2=0.0 $$

$$ \displaystyle c_1=3.312\times 10^{-9} \tfrac{kg}{m^5}$$

$$ \displaystyle c_2=-1.142\times 10^{-4} \tfrac{kg}{m^4}$$

$$ \displaystyle c_3=1.224 \tfrac{kg}{m^3}$$

The initial conditions are

$$ \displaystyle x(t=0)=0.0 m$$

$$ \displaystyle y(t=0)=30.0 m $$

$$ \displaystyle v(t=0)=272.0 \tfrac{m}{s}$$

$$ \displaystyle \gamma(t=0)=0.0 rad $$

The flight is controlled using the angle of attack and the thrust, which are shown below
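The equations of motion and fitted coefficients above can be collected into a state-derivative routine. A minimal Python sketch follows; note that the mass, reference area, and the constant control values used in the demo line are placeholders, since those quantities (and the control histories in the figure) are not reproduced on this page:

```python
import math

# coefficients from the problem statement
A1, A2, A3 = -1.9431, -0.1499, 0.2359        # C_d = a1*alpha^2 + a2*alpha + a3
B1, B2 = 21.9, 0.0                            # C_l = b1*alpha + b2
C1, C2, C3 = 3.312e-9, -1.142e-4, 1.224       # rho = c1*y^2 + c2*y + c3
G = 9.81                                      # m/s^2
S_REF = 1.0   # placeholder: reference area is not reproduced on this page
M = 1000.0    # placeholder: vehicle mass is not reproduced on this page

def zdot(z, T, alpha):
    """State derivative [x', y', v', gamma'] for the state z = [x, y, v, gamma]."""
    x, y, v, gamma = z
    rho = C1 * y**2 + C2 * y + C3
    D = 0.5 * (A1 * alpha**2 + A2 * alpha + A3) * rho * v**2 * S_REF
    L = 0.5 * (B1 * alpha + B2) * rho * v**2 * S_REF
    return [
        v * math.cos(gamma),
        v * math.sin(gamma),
        ((T - D) * math.cos(alpha) - L * math.sin(alpha)) / M - G * math.sin(gamma),
        ((T - D) * math.sin(alpha) + L * math.cos(alpha)) / (M * v) - (G / v) * math.cos(gamma),
    ]

# derivative at the stated initial conditions, with zero-thrust, zero-alpha placeholder controls
zdot0 = zdot([0.0, 30.0, 272.0, 0.0], 0.0, 0.0)
```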



Find
We are asked to determine $$x$$, $$y$$, $$v$$, and $$\gamma$$ using the Hermite-Simpson/Newton-Raphson method and then verify this with MATLAB's ODE solver ode45.

Solution
In performing this analysis, we see that our Jacobian, $$ \displaystyle \mathbf{\tfrac{dF}{dz_{i+1}}}$$, can be described as

$$ \displaystyle \mathbf{\frac{dF(z_{i},z_{i+1})}{dz_{i+1}}}= \mathbf{\frac{dz_{i+1}}{dz_{i+1}}}-\mathbf{\frac{dz_{i}}{dz_{i+1}}}- \frac{h}{6}(\mathbf{\frac{df_{i}}{dz_{i+1}}}+4\mathbf{\frac{df_{i+1/2}}{dz_{i+1}}}+ \mathbf{\frac{df_{i+1}}{dz_{i+1}}})$$

Using the interpolation for $$ \displaystyle z_{i+1/2}$$ as a function of $$ \displaystyle z_{i}$$ and $$ \displaystyle z_{i+1}$$ described above, this becomes

$$ \displaystyle \mathbf{\frac{dF(z_{i},z_{i+1})}{dz_{i+1}}}= \mathbf{\frac{dz_{i+1}}{dz_{i+1}}}-\mathbf{\frac{dz_{i}}{dz_{i+1}}}- \frac{h}{6}(\mathbf{\frac{df_{i}}{dz_{i+1}}}+4\mathbf{\frac{df_{i+1/2}}{dz_{i+1/2}}\frac{dz_{i+1/2}}{dz_{i+1}}}+ \mathbf{\frac{df_{i+1}}{dz_{i+1}}})$$

In evaluating this equation, and approximating $$ \displaystyle \mathbf{\tfrac{df_{i+1/2}}{dz}}$$ by $$ \displaystyle \mathbf{\tfrac{df_{i+1}}{dz}}$$, our Jacobian can be described as

$$ \displaystyle \mathbf{\frac{dF(z_{i},z_{i+1})}{dz_{i+1}}}= \mathbf{I}- \frac{h}{6}(3\mathbf{I}-\frac{h}{2}\mathbf{\frac{df_{i+1}}{dz_{i+1}}}) \mathbf{\frac{df_{i+1}}{dz_{i+1}}}$$

Thus, as long as $$ \displaystyle \tfrac{df}{dz} $$ can be calculated, the Jacobian can be found.

The obvious next step is to calculate $$ \displaystyle \tfrac{df}{dz} $$. The full expansion of $$ \displaystyle \tfrac{df}{dz} $$ can be seen to be

$$ \displaystyle
\frac{\mathbf{df}}{\mathbf{dz}}= \begin{bmatrix} 0 & 0 & \cos\gamma & -v \sin\gamma \\ 0 & 0 & \sin\gamma & v \cos\gamma \\ 0 & \frac{1}{m}(-\frac{dD}{dy}\cos\alpha -\frac{dL}{dy}\sin\alpha) & \frac{1}{m}(-\frac{dD}{dv}\cos\alpha -\frac{dL}{dv}\sin\alpha) & -g\cos\gamma \\ 0 & \frac{1}{mv}(-\frac{dD}{dy}\sin\alpha+\frac{dL}{dy}\cos\alpha) & -\frac{T}{mv^2}\sin\alpha -\frac{1}{m}\frac{d}{dv}(\frac{D}{v})\sin\alpha+\frac{1}{m}\frac{d}{dv}(\frac{L}{v})\cos\alpha+\frac{g}{v^2}\cos\gamma & \frac{g}{v}\sin\gamma \end{bmatrix}
$$
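Because the entries of this matrix are easy to mistype, it is useful to check an analytic $$ \displaystyle \tfrac{d\mathbf{f}}{d\mathbf{z}}$$ against a finite-difference approximation. A small generic checker, demonstrated here on a toy system with a known Jacobian rather than the flight dynamics:

```python
import numpy as np

def fd_jacobian(f, z, eps=1e-6):
    """Forward-difference approximation of df/dz for a vector field f(z)."""
    f0 = np.asarray(f(z), dtype=float)
    J = np.empty((f0.size, z.size))
    for k in range(z.size):
        dz = np.zeros_like(z)
        dz[k] = eps
        J[:, k] = (np.asarray(f(z + dz), dtype=float) - f0) / eps
    return J

# demo: pendulum-like toy dynamics with a known analytic Jacobian
f = lambda z: np.array([z[1], -np.sin(z[0])])
z0 = np.array([0.3, 0.1])
J_fd = fd_jacobian(f, z0)
J_exact = np.array([[0.0, 1.0], [-np.cos(0.3), 0.0]])
```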

Performing these evaluations, we see that the two forms of numerical integration produce very similar output.



Source Codes

 * Main program
 * Calculate the Jacobian
 * Calculate f
 * Function for ode45 to use

= 7.12 - Manipulation of Elliptical Integration =

Given
$$ \displaystyle C=a\int_{t=0}^{2\pi}\sqrt{1+e^2\cos^2 t}\, dt $$

Find
We are asked to show that $$ \displaystyle C=a\int_{t=0}^{2\pi}\sqrt{1+e^2\cos^2 t}\, dt =4 a\int_{\alpha=0}^{\tfrac{\pi}{2}}\sqrt{1+e^2\sin^2\alpha}\, d\alpha $$

Solution
Let $$ \displaystyle t=\tfrac{\pi}{2}-\alpha$$, so that $$ \displaystyle \cos t=\sin\alpha$$ and $$ \displaystyle dt=-d\alpha$$.

Putting this into our equation,

$$ \displaystyle C = a \int_{-3\pi/2}^{\pi/2} \sqrt{1+e^2 \sin^2 \alpha}\, d\alpha = a \int_{0}^{2\pi} \sqrt{1+e^2 \sin^2 \alpha}\, d\alpha $$

where the second equality holds because the integrand is periodic, so any interval of integration of length $$ \displaystyle 2\pi$$ gives the same value.

The function $$ \displaystyle \sqrt{1+e^2 \sin^2 \alpha}$$ is periodic with period $$ \displaystyle \pi$$, and within each period the function is even about the point $$ \displaystyle \alpha=\tfrac{\pi}{2}$$. Thus, by symmetry, this integral can be broken up into 4 parts of equal magnitude such that

$$ \displaystyle C= 4 a\int_{\alpha=0}^{\tfrac{\pi}{2}}\sqrt{1+e^2\sin^2\alpha}\, d\alpha $$
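The identity can also be confirmed numerically; a short sketch using composite Simpson quadrature with an arbitrary test value of $$ \displaystyle e$$:

```python
import math

def simpson(g, a, b, n=2000):
    """Composite Simpson's rule with n (even) sub-intervals."""
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n))
    return s * h / 3

e = 0.5  # arbitrary test value of the parameter e
full = simpson(lambda t: math.sqrt(1 + e**2 * math.cos(t)**2), 0.0, 2 * math.pi)
quarter = 4 * simpson(lambda u: math.sqrt(1 + e**2 * math.sin(u)**2), 0.0, math.pi / 2)
```

The two quantities agree to within quadrature round-off for any choice of $$ \displaystyle e$$.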

= 6.6 - Inversion of Local Domain Transformation =

Given
In using our Hermite interpolation we map $$ \displaystyle t \in [t_i,t_{i+1}]$$ to $$ \displaystyle s \in [0,1]$$ using the transformation $$ \displaystyle t(s)=(1-s)t_i+st_{i+1}$$.

Find
We are asked to invert this transformation from the form of $$ \displaystyle t=t(s)$$ to $$ \displaystyle s=s(t)$$

Solution
Inverting this transformation to find s in terms of t is a simple algebra problem.

$$ \displaystyle t=(1-s)t_i+st_{i+1} $$

$$ \displaystyle t=t_i+s(t_{i+1}-t_{i}) $$

$$ \displaystyle s=\frac{t-t_{i}}{t_{i+1}-t_{i}} $$
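The forward and inverse maps can be verified with a trivial round-trip check:

```python
def t_of_s(s, t_i, t_ip1):
    """Map s in [0, 1] to t in [t_i, t_{i+1}]."""
    return (1 - s) * t_i + s * t_ip1

def s_of_t(t, t_i, t_ip1):
    """Inverse map: t in [t_i, t_{i+1}] back to s in [0, 1]."""
    return (t - t_i) / (t_ip1 - t_i)

# round trip on an arbitrary sub-interval
t_i, t_ip1 = 2.0, 2.5
checks = [abs(s_of_t(t_of_s(s, t_i, t_ip1), t_i, t_ip1) - s) for s in (0.0, 0.25, 1.0)]
```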

= 5.6 - Evaluation of d2i =

Given
The hyperbolic cotangent is defined as $$ \displaystyle\ \coth(x)=\frac{\cosh(x)}{\sinh(x)}=\frac{e^x+e^{-x}}{e^x-e^{-x}} $$.

The function $$ \displaystyle\ f(x)=x\coth(x)=\sum_{i=0}^{\infty}\frac{2^{2i}B_{2i}}{(2i)!}x^{2i}$$, where $$B_{2i}$$ are the Bernoulli numbers.

Find
We are asked to verify the values of $$d_{2i}$$ for $$2i=2, 4,$$ and $$6$$, where $$d_{2i}$$ is the constant $$ \displaystyle\ d_{2i}=\frac{-B_{2i}}{(2i)!}$$. These were previously given as $$d_2=-1/12$$, $$d_4=1/720$$, and $$d_6=-1/30240$$.

We are also asked to determine the values of $$d_8$$ and $$d_{10}$$.

Additionally, we are asked to compare these values of $$d_{2i}$$ to another relationship for $$d_{2i}$$ used in our error analysis of the trapezoidal rule, where $$ \displaystyle\ d_{2i}=\frac{p_{2i}(1)}{2^{2i}}$$.

Solution
Expanding $$\cosh(x)$$ and $$\sinh(x)$$ in terms of the exponential function and then expanding those exponential functions as series around the point x=0, we see that

$$ \displaystyle\ f(x)=x\coth(x)=\sum_{i=0}^{\infty}-2^{2i}d_{2i}x^{2i}$$

$$ \displaystyle\ \cosh(x)=1+\frac{1}{2}x^2+\frac{1}{24}x^4+\frac{1}{720}x^6+\frac{1}{40320}x^8+\frac{1}{3628800}x^{10}+...$$

$$ \displaystyle\ \sinh(x)=x+\frac{1}{6}x^3+\frac{1}{120}x^5+\frac{1}{5040}x^7+\frac{1}{362880}x^9+\frac{1}{39916800}x^{11}+...$$

Substituting all of this into the function $$ \displaystyle\ f(x)=x\coth(x)$$, we see that

$$ \displaystyle\ f(x)=\frac{x+\frac{1}{2}x^3+\frac{1}{24}x^5+\frac{1}{720}x^7+\frac{1}{40320}x^9+\frac{1}{3628800}x^{11}+...}{x+\frac{1}{6}x^3+\frac{1}{120}x^5+\frac{1}{5040}x^7+\frac{1}{362880}x^9+\frac{1}{39916800}x^{11}+...}$$

Performing the appropriate polynomial division (which will not be shown here, due to its length and the lack of a clear way to show long division in LaTeX), we see that

$$ \displaystyle\ f(x)=x\coth(x)=1+\frac{1}{3}x^2-\frac{1}{45}x^4+\frac{2}{945}x^6-\frac{1}{4725}x^8+\frac{2}{93555}x^{10}-... $$

With this series expansion of $$x\coth(x)$$ in hand, we can read off the values of $$ \displaystyle\ 2^{2i}d_{2i} $$ as the negatives of these coefficients, giving $$d_2=-\tfrac{1}{12}$$, $$d_4=\tfrac{1}{720}$$, $$d_6=-\tfrac{1}{30240}$$, $$d_8=\tfrac{1}{1209600}$$, and $$d_{10}=-\tfrac{1}{47900160}$$.
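The long division itself can be reproduced mechanically in exact rational arithmetic: divide the series for $$x\cosh(x)$$ by that for $$\sinh(x)$$, then apply $$d_{2i}=-(\text{coefficient})/2^{2i}$$. A sketch using Python's fractions module:

```python
from fractions import Fraction
from math import factorial

N = 12  # highest power of x retained
cosh_c = [Fraction(1, factorial(k)) if k % 2 == 0 else Fraction(0) for k in range(N + 1)]
sinhx_c = [Fraction(1, factorial(k + 1)) if k % 2 == 0 else Fraction(0) for k in range(N + 1)]  # sinh(x)/x

# power-series long division: q satisfies q * (sinh(x)/x) = cosh(x), so x*coth(x) = sum q[k] x^k
q = [Fraction(0)] * (N + 1)
r = cosh_c[:]
for k in range(N + 1):
    q[k] = r[k] / sinhx_c[0]
    for j in range(k, N + 1):
        r[j] -= q[k] * sinhx_c[j - k]

d = {2 * i: -q[2 * i] / 4**i for i in range(1, 6)}  # d_{2i} = -coefficient / 2^{2i}
```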

Additionally, we are asked to calculate $$d_{2i}$$ using a definition given by the polynomial functions defined in our trapezoidal error analysis, where $$ \displaystyle\ d_{2i}=\frac{p_{2i}(1)}{2^{2i}}$$.

From analysis done in class and coefficients as given by Kessler, P. (Trapezoidal Rule Error, http://www.mechanicaldust.com/UCB/math128a/extrap.pdf, 2008), we know that

$$ \displaystyle\ p_2(t)=-\frac{t^2}{2}+\frac{1}{6}, p_2(1)=-\frac{1}{3}$$

$$ \displaystyle\ p_4(t)=-\frac{t^4}{24}+\frac{t^2}{12}-\frac{7}{360}, p_4(1)=\frac{1}{45}$$

$$ \displaystyle\ p_6(t)=-\frac{t^6}{720}+\frac{t^4}{144}-\frac{7t^2}{720}+\frac{31}{15120}, p_6(1)=-\frac{2}{945}$$

$$ \displaystyle\ p_8(t)=-\frac{t^8}{40320}+\frac{t^6}{4320}-\frac{7t^4}{8640}+\frac{31t^2}{30240}-\frac{127}{604800}, p_8(1)=\frac{1}{4725} $$

$$ \displaystyle\ p_{10}(t)=-\frac{t^{10}}{3628800}+\frac{t^8}{241920}-\frac{7t^6}{259200}+\frac{31t^4}{362880}-\frac{127t^2}{1209600}+\frac{73}{3421440}, p_{10}(1)=-\frac{2}{93555} $$

With these polynomials in hand, we are able to calculate the values for $$d_{2i}$$ using the formula given above relating $$p_{2i}(1)$$ and $$d_{2i}$$.
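As a consistency check, the $$p_{2i}(1)$$ values above can be divided by $$2^{2i}$$ in exact arithmetic and compared with the Bernoulli-number values of $$d_{2i}$$; a small sketch:

```python
from fractions import Fraction as F

# p_{2i}(1) values read off from the polynomials above
p1 = {2: F(-1, 3), 4: F(1, 45), 6: F(-2, 945), 8: F(1, 4725), 10: F(-2, 93555)}
d_from_p = {k: v / 2**k for k, v in p1.items()}  # d_{2i} = p_{2i}(1) / 2^{2i}
```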

Comparing these values, we see that the two formulas for $$d_{2i}$$ are consistent with each other.

= Problem 4.9 =

Given
We have been given various integration methods throughout the class; we are now asked to compare their efficiency and accuracy.

Find
We are asked to compare the second column of the Romberg table ($$T_1(n)$$) to the Composite Simpson's Rule, and derive any relationship that exists.

We are asked to integrate the function $$ \displaystyle\ f(x)=\frac{e^x-1}{x}$$ over the interval $$ \displaystyle\ x \in [0,1]$$ using the following methods and to compare their computational cost in terms of run time -
 * Composite Trapezoidal (optimized)
 * Composite Simpson's Rule
 * Romberg extrapolation
 * chebfun sum command - a function that uses Chebyshev polynomials to integrate
 * the MATLAB functions trapz, quad, and clencurt.

Solution
For the first three methods, pre-existing codes written by the authors were used in order to evaluate the computational time required to perform the integration. For the Romberg extrapolation, the computation time was determined as the time required to compute $$T_0(n)$$ plus the time to perform the extrapolation.

When Simpson's Rule and $$T_1$$ (Richardson extrapolation) are compared to each other, the results are identical out to all of the significant digits calculated; thus we believe that the two are equivalent processes. Furthermore,

$$ \displaystyle T_1(n)=\frac{4T_0(2n)-T_0(n)}{4-1}$$

Writing $$ \displaystyle T_0(2n)=h(\tfrac{1}{2}f_0+f_1+f_2+...+f_{2n-1}+\tfrac{1}{2}f_{2n})$$ and $$ \displaystyle T_0(n)=2h(\tfrac{1}{2}f_0+f_2+f_4+...+\tfrac{1}{2}f_{2n})$$, where $$ \displaystyle h$$ is the spacing of the $$ \displaystyle 2n+1$$ points, we can see that $$ \displaystyle T_1(n)=\frac{h}{3}(f_0+4f_1+2f_2+4f_3+...+4f_{2n-1}+f_{2n})$$, which is equal to Simpson's Rule on the same $$ \displaystyle 2n$$ sub-intervals.
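This equivalence between the $$T_1$$ column and composite Simpson's Rule can be confirmed numerically; a short sketch on the problem's integrand:

```python
import math

def trap(g, a, b, n):
    """Composite trapezoidal rule T_0(n)."""
    h = (b - a) / n
    return h * (g(a) / 2 + sum(g(a + k * h) for k in range(1, n)) + g(b) / 2)

def simpson(g, a, b, n):
    """Composite Simpson's rule on n (even) sub-intervals."""
    h = (b - a) / n
    return (h / 3) * (g(a) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n)) + g(b))

f = lambda x: (math.exp(x) - 1) / x if x != 0 else 1.0  # limit value at x = 0
n = 8
T1 = (4 * trap(f, 0.0, 1.0, 2 * n) - trap(f, 0.0, 1.0, n)) / 3
```

`T1` and `simpson(f, 0, 1, 2*n)` agree to round-off, since they are algebraically the same sum.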

For the Trapezoidal Rule and Romberg Extrapolation

For the Simpson's Rule

For the MATLAB functions and Chebyshev

= Problem 4.10 =

Find
We are asked to integrate the function $$ \displaystyle\ f(x)=\frac{1}{1+x^2}$$ over the interval $$ \displaystyle\ x \in [-5,5]$$ using the following methods and to compare their computational cost in terms of run time -
 * Composite Trapezoidal (optimized)
 * Composite Simpson's Rule
 * Romberg extrapolation
 * chebfun sum command - a function that uses Chebyshev polynomials to integrate
 * the MATLAB functions trapz, quad, and clencurt.

Solution
We operate in the same manner as above, only with a different $$ \displaystyle f(x)$$ and bounds of integration.

For the Trapezoidal Rule and Romberg Extrapolation

For the Composite Simpson's Rule

For the MATLAB functions

= Problem 3.last =

Given
Romberg extrapolation of the composite trapezoidal rule is defined as $$ \displaystyle\ T_k(n)=\frac{2^{2k}T_{k-1}(2n)-T_{k-1}(n)}{2^{2k}-1}$$

Find
We are asked to develop an efficient code that performs Romberg extrapolation, determines its efficiency, and compares its results to the Taylor series integration method, using the function $$ \displaystyle\ f(x)=\frac{e^x-1}{x}$$.

Solution
The method for applying the trapezoidal rule has been well defined in previous work, thus it will not be shown again here.

The Romberg extrapolation is described above, thus we will move right to our results.

In performing the Romberg extrapolation, we found that each additional level of extrapolation of the composite trapezoidal values yields an additional level of accuracy, with the order of accuracy increasing at each level. However, even with this accelerated convergence to the exact solution, Romberg extrapolation still does not converge nearly as quickly as the Taylor series approximation of this integral does.

Thus, while the Romberg extrapolation technique is helpful, the Taylor series method still converges much more quickly as a method of numerical integration.

Figure - Error of Various Integration Methods as a Function of the Number of Sub-Intervals Integrated Over

Code to run calculations -
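Since the linked MATLAB code is not reproduced on this page, a minimal Python sketch of the Romberg recurrence defined in the Given section:

```python
import math

def romberg(g, a, b, levels):
    """Romberg table: row j starts from the trapezoidal rule with 2**j sub-intervals;
    column k applies T_k(n) = (2^(2k) T_{k-1}(2n) - T_{k-1}(n)) / (2^(2k) - 1)."""
    R = []
    for j in range(levels):
        n = 2 ** j
        h = (b - a) / n
        t0 = h * (g(a) / 2 + sum(g(a + i * h) for i in range(1, n)) + g(b) / 2)
        row = [t0]
        for k in range(1, j + 1):
            row.append((4**k * row[k - 1] - R[j - 1][k - 1]) / (4**k - 1))
        R.append(row)
    return R

f = lambda x: (math.exp(x) - 1) / x if x != 0 else 1.0  # limit value at x = 0
table = romberg(f, 0.0, 1.0, 6)
best = table[-1][-1]
```

The bottom-right entry of the table agrees with the Taylor-series value of the integral, $$ \displaystyle \sum_{i=1}^{\infty}\tfrac{1}{(i!)\,i}$$, to near machine precision.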

= Problem 2.6 =

Given
We know that E(x) is defined as $$ \displaystyle\ E(x)=f(x)-f_n(x)$$.

Find
We are asked to show that the $$(n+1)^{th}$$ derivative of our error $$ \displaystyle\ E(x)=f(x)-f_n(x)$$ is $$\displaystyle\ E^{(n+1)}(x)=f^{(n+1)}(x)$$.

Solution
Assuming that $$ \displaystyle\ f_n(x)$$ is n+1 times differentiable, then $$ \displaystyle\ E_n^{(n+1)}(x)=f^{(n+1)}(x)-f_n^{(n+1)}(x)$$.

Because $$ \displaystyle\ f_n(x) \in P_n$$, $$ \displaystyle\ f_n(x)$$ takes the form $$ \displaystyle\ f_n(x)=a_nx^n+a_{n-1}x^{n-1}+...+a_1x+a_0$$.

Taking the $$n^{th}$$ derivative of $$ \displaystyle\ f_n(x)$$, we see that $$ \displaystyle\ f^{(n)}_n(x)=(n!)a_n$$, a constant. Taking the $$(n+1)^{th}$$ derivative of $$ \displaystyle\ f_n(x)$$, we see that $$ \displaystyle\ f^{(n+1)}_n(x)=0$$.

Substituting this back into our equation for $$ \displaystyle\ E^{(n+1)}_n(x)$$, we see that $$ \displaystyle\ E^{(n+1)}(x)=f^{(n+1)}(x)-f^{(n+1)}_n(x)=f^{(n+1)}(x)-(0)=f^{(n+1)}(x)$$.

= Problem 2.12 =

Given
$$ \displaystyle\ \alpha(t)=\int_{-t}^{t} f(x(t'))dt'$$ where $$ \displaystyle\ f(x(t))=F(t)$$.

Find
Show that $$ \displaystyle\ \alpha^{(1)}(t)=F(-t)+F(t) $$.

Solution
We are asked to show that for $$ \displaystyle\ \alpha(t)=\int_{-t}^{t} f(x(t'))dt'=\int_{-t}^{t} F(t')dt'$$, where $$ \displaystyle\ f(x(t))=F(t)$$, we have $$ \displaystyle\ \alpha^{(1)}(t)=F(-t)+F(t)$$.

Using the Fundamental Theorem of Calculus (second part), we know that for some differentiable function $$ \displaystyle\ G(t)$$ with derivative $$ \displaystyle\ g(t)=G'(t)$$, $$ \displaystyle\ \int_a^b g(t)dt=G(b)-G(a)$$.

Allowing $$ \displaystyle\ f(x(t))=F(t)$$ and $$ \displaystyle\ G'(t)=F(t)$$, then $$ \displaystyle\ \alpha(t)=\int_{-t}^{t} F(t')dt'=G(t)-G(-t)$$.

If the derivative of $$ \displaystyle \alpha $$ is now taken, then $$ \displaystyle\ \alpha^{(1)}(t)=\frac{d}{dt}(G(t)-G(-t))=G'(t)+G'(-t)$$.

Substituting in our definition $$ \displaystyle\ F(t)=G'(t)$$, we see that

$$ \displaystyle\ \alpha^{(1)}(t)=F(-t)+F(t) $$.

= Homework #1 =

Problem 1.8
We are asked to evaluate the integral $$\int_0^1 f(x)dx$$, where $$f(x)=\frac{e^x-1}{x}$$, using a Taylor series approximation of $$f(x)$$, and also using the trapezoidal rule and Simpson's Rule.

Taylor Series

The function $$e^x$$ can be approximated using Taylor series to find that $$e^x=\displaystyle\sum_{i=0}^{\infty}\frac{x^i}{i!}.$$ Inserting this into our function, $$f(x)$$, and simplifying the expression, we find that $$f(x)=\displaystyle\sum_{i=1}^{\infty}\frac{x^{(i-1)}}{i!}.$$ While this form of our function presents an exact solution, we need to truncate it in order to solve it numerically. Therefore, let $$f_n(x)$$ be our approximation of $$f(x)$$ and let $$f_n(x)=\displaystyle\sum_{i=1}^{n}\frac{x^{(i-1)}}{i!}.$$ This function, $$f_n(x)$$, can be integrated analytically, with $$\displaystyle\ \int_0^1 f_n(x)dx=\sum_{i=1}^{n}\frac{1}{(i!)\,i}$$; the results of this are presented below in Table 1.8.1.
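The truncated-series integral is only a few lines of code; a sketch (Python here rather than the original MATLAB):

```python
from math import factorial

def taylor_integral(n):
    """Integral over [0, 1] of the truncated series f_n(x) = sum_{i=1}^n x^(i-1)/i!."""
    return sum(1.0 / (factorial(i) * i) for i in range(1, n + 1))

# the partial sums increase monotonically and converge factorially fast
approximations = [taylor_integral(n) for n in (2, 4, 8, 16)]
```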

While $$f_n(x)$$ can be integrated exactly, it still has some degree of error because it is only an approximation of $$f(x)$$. The error in this approximation can be analyzed by integrating the remainder of $$f_n(x)$$, where $$\displaystyle\ R_{n}(x)=f(x)-f_n(x)=\frac{x^{n}}{(n+1)!}exp(\xi),$$ for some $$\displaystyle\ \xi \in [0,x] $$.

Integrating the remainder of our function, $$ f(x)$$, we come to see that our error is $$ \displaystyle\ E_n=I-I_n=\int_0^1 [f(x)-f_n(x)]dx=\int_0^1 \frac{x^n}{(n+1)!}exp(\xi)dx$$. This can be evaluated to find that $$ \displaystyle\ E_n=exp(\xi)\frac{1}{(n+1)!(n+1)} $$. Thus, our maximum error using the Taylor series is $$ \displaystyle\ E_{n,max}=\max|exp(\xi)|\frac{1}{(n+1)!(n+1)}=\frac{e}{(n+1)!(n+1)} $$. The results of this are also shown in Figure 1.8.1 below.

Trapezoidal Rule

The trapezoidal rule is a method of numerical integration that uses linear interpolation between two points in order to calculate the area under a given curve, $$f(x)$$.

For two points on a curve, at $$x=a$$ and $$x=b$$, the trapezoidal rule can be expressed as $$\displaystyle\ I_1=\int_a^b f_n(x)dx =\frac{b-a}{2}(f(a)+f(b)).$$

For $$n+1$$ points, the domain $$ \displaystyle\ x \in [a,b] $$ can be broken up into n segments of equal length, h, and the trapezoidal rule works out such that $$ \displaystyle\ I_n=\frac{h}{2}(f_0+2f_1+2f_2+...+2f_{n-1}+f_n)$$, where $$\displaystyle\ f_i=f(x_i)$$.

For the case that $$f(x)=\frac{e^x-1}{x}$$, results of this method are shown below in Table 1.8.1.

The error of this case, based on the result of using the Taylor series approximation of f(x) for a large value of n, is also available in Table 1.8.1 below.

Simpson's Rule

Whereas the trapezoidal rule uses a linear method of interpolation, Simpson's Rule uses a 2nd order polynomial to determine the area under a given curve, $$f(x)$$.

For three points on a curve, at $$x=a$$, $$\tfrac{a+b}{2}$$, and $$b$$, $$\displaystyle\ I_2=\int_a^b f_n(x)dx =\frac{b-a}{6}(f(a)+4f(\frac{a+b}{2})+f(b))$$.

For $$n+1$$ points, the domain $$ \displaystyle\ x \in [a,b]$$ can be broken into n segments of equal length, h (with n even), and Simpson's Rule works out such that $$ \displaystyle\ I_n=\frac{h}{3}(f_0+4f_1+2f_2+4f_3+...+2f_{n-2}+4f_{n-1}+f_n)$$, where $$ \displaystyle\ f_i=f(x_i)$$.

For the case that $$f(x)=\frac{e^x-1}{x}$$, results of this method are shown below in Table 1.8.1.

The error of this case, based on the result of using the Taylor series approximation of f(x) for a large value of n, is also available in Table 1.8.1 below.
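Both composite rules above are straightforward to sketch for this integrand (again Python rather than the original MATLAB); the value of $$f$$ at $$x=0$$ is taken as its limit, 1:

```python
import math

f = lambda x: (math.exp(x) - 1) / x if x != 0 else 1.0  # limit value at x = 0

def trapezoid(g, a, b, n):
    """I_n = (h/2)(f_0 + 2f_1 + ... + 2f_{n-1} + f_n) with h = (b - a)/n."""
    h = (b - a) / n
    return (h / 2) * (g(a) + 2 * sum(g(a + k * h) for k in range(1, n)) + g(b))

def simpson(g, a, b, n):
    """I_n = (h/3)(f_0 + 4f_1 + 2f_2 + ... + 4f_{n-1} + f_n), n even."""
    h = (b - a) / n
    return (h / 3) * (g(a) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, n)) + g(b))

I_trap = trapezoid(f, 0.0, 1.0, 100)
I_simp = simpson(f, 0.0, 1.0, 100)
```

With 100 sub-intervals, Simpson's Rule is several orders of magnitude closer than the trapezoidal rule to the converged Taylor-series value of the integral, as expected from their respective error orders.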

Problem 1.11
From interpolation theory, we know that $$ \displaystyle l_i(x)=\prod _{j=0,j\ne i}^{n} \frac {x-x_j}{x_i-x_j}$$.

Additionally, we define $$ \displaystyle P_2(x)=\sum_{i=0}^{2}l_{i}(x)f(x_i)$$, and we wish to show that $$ \displaystyle P_2(x_j)=f(x_j)$$ at the interpolation points.

For $$ n=2$$:

$$\displaystyle l_0=\prod _{j=0,j\ne i}^{n=2} \frac {x-x_j}{x_i-x_j}=\frac {x-x_1}{x_0-x_1}\frac {x-x_2}{x_0-x_2}$$

$$\displaystyle l_1=\prod _{j=0,j\ne i}^{n=2} \frac {x-x_j}{x_i-x_j}=\frac {x-x_0}{x_1-x_0}\frac {x-x_2}{x_1-x_2}$$

$$\displaystyle l_2=\prod _{j=0,j\ne i}^{n=2} \frac {x-x_j}{x_i-x_j}=\frac {x-x_0}{x_2-x_0}\frac {x-x_1}{x_2-x_1}$$

Evaluating the functions $$ \displaystyle l_0(x)$$, $$ \displaystyle l_1(x)$$, and $$ \displaystyle l_2(x)$$ at the interpolation points $$ \displaystyle x_0, x_1 $$ and $$ \displaystyle x_2$$, we find that:

$$ \displaystyle l_0(x_0)=1, l_0(x_1)=0 $$ and $$ \displaystyle l_0(x_2)=0$$

$$ \displaystyle l_1(x_0)=0, l_1(x_1)=1 $$ and $$ \displaystyle l_1(x_2)=0$$

$$ \displaystyle l_2(x_0)=0, l_2(x_1)=0 $$ and $$ \displaystyle l_2(x_2)=1$$

Reinserting these values of $$ \displaystyle l_i(x_j)$$ back into $$ \displaystyle P_2(x_j)$$, we can see that

$$ \displaystyle P_2(x_0)=(1)f(x_0)+(0)f(x_1)+(0)f(x_2)=f(x_0)$$

$$ \displaystyle P_2(x_1)=(0)f(x_0)+(1)f(x_1)+(0)f(x_2)=f(x_1)$$

$$ \displaystyle P_2(x_2)=(0)f(x_0)+(0)f(x_1)+(1)f(x_2)=f(x_2)$$

Thus, we can see that $$ \displaystyle P_2(x_j)_{j=0,1,2}=\sum_{i=0}^{n=2}l_i(x_j)f(x_i)=f(x_j)$$.
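The Kronecker-delta property $$ \displaystyle l_i(x_j)=\delta_{ij}$$ shown above is easy to confirm numerically for any set of distinct nodes; a small sketch with arbitrary node values:

```python
def lagrange_basis(xs, i, x):
    """l_i(x) = product over j != i of (x - x_j)/(x_i - x_j)."""
    out = 1.0
    for j, xj in enumerate(xs):
        if j != i:
            out *= (x - xj) / (xs[i] - xj)
    return out

xs = [0.0, 0.5, 2.0]  # arbitrary distinct nodes
delta = [[lagrange_basis(xs, i, xs[j]) for j in range(3)] for i in range(3)]
```

Evaluating $$l_i$$ at a node $$x_j$$ zeroes one factor exactly when $$i\ne j$$ and makes every factor exactly 1 when $$i=j$$, so the computed table is the identity matrix up to round-off.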