University of Florida/Egm6341/s10.team3.aks/HW4

Ref. Lecture notes [[media:Egm6341.s10.mtg20.djvu|p.20-2]]

Problem Statement
Discuss the pros and cons of the methods below.

1. Taylor series
2. Composite trapezoidal rule
3. Composite Simpson's rule
4. Romberg table (including Richardson's extrapolation)
5. Corrected trapezoidal rule $$ CT_k (n) $$

Solution
Taylor Series:

Pros

1. A powerful tool for approximating simple or complex functions by a polynomial, with an explicit error term in the form of a remainder.

2. For oscillatory functions, a high-order Taylor expansion can be a good choice, although it can suffer loss of significance in the computation if not programmed carefully.

Cons

1. The function must be sufficiently differentiable on the interval over which it is approximated by a polynomial.

2. The number of operations can be large. This is not always a problem these days, considering how cheap and fast computers are at present. In general, the higher the order of the Taylor scheme, the larger the step size can be made. An operation count will tell whether it is more effective to compute a lot at each step and take larger steps, or to compute little at each step and make do with smaller step sizes. Usually it is more advantageous to go with a good lower-order method and small steps, but this depends on the problem.
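As an illustration of the cost per term, a truncated Taylor series for $$e^x$$ can be evaluated with one multiply and one add per term; the function name and the chosen order below are illustrative, not from the notes.

```python
import math

def taylor_exp(x, order):
    """Partial sum of the Taylor series of e^x about 0, built term by term."""
    term, total = 1.0, 1.0
    for k in range(1, order + 1):
        term *= x / k        # x^k / k! computed incrementally, no factorials
        total += term
    return total

print(abs(taylor_exp(1.0, 15) - math.e))   # remainder is tiny at this order
```

Evaluating the same partial sum at a large negative argument (e.g. `taylor_exp(-20.0, 50)`) illustrates the loss-of-significance problem mentioned above: large alternating terms cancel catastrophically.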

Composite Trapezoidal rule:

Pros:

1. This method works well for periodic functions such as the trigonometric functions ($$\sin x$$, $$\cos x$$), and for them it is very accurate because the error depends on the difference between the odd-order derivatives of the integrand at the two limits, which cancels over a full period.

2. Better accuracy than the simple trapezoidal rule, and also easy to use.

Cons:

1. Its convergence rate is low ($$O(h^2)$$), so a large number of subintervals is needed for high accuracy.
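A minimal sketch of the composite rule (the helper name is an assumption, not from the notes); the error ratio printed at the end demonstrates the $$O(h^2)$$ convergence rate:

```python
import numpy as np

def composite_trapezoid(f, a, b, n):
    """Composite trapezoidal rule on [a, b] with n equal subintervals."""
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    return h * (y[0] / 2 + y[1:-1].sum() + y[-1] / 2)

# O(h^2): doubling n should roughly quarter the error for smooth f.
exact = 2.0                                   # int_0^pi sin(x) dx
err8  = abs(composite_trapezoid(np.sin, 0.0, np.pi, 8)  - exact)
err16 = abs(composite_trapezoid(np.sin, 0.0, np.pi, 16) - exact)
print(err8 / err16)   # close to 4
```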

Composite Simpson's rule:

Pros:

1. Exact for polynomials of degree three or less.

2. More accurate than the composite trapezoidal rule.

Cons:

1. The rule can be applied only over an even number of subintervals between the limits, so the value of $$n$$ in Simpson's rule is always even, i.e. $$n = 2i$$ where $$i = 1, 2, 3, 4, \ldots$$
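A sketch of the composite rule enforcing the even-$$n$$ restriction (the helper name is illustrative); it reproduces the exactness-for-cubics property even with the minimal $$n = 2$$:

```python
import numpy as np

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule on [a, b]; n must be even."""
    if n % 2 != 0:
        raise ValueError("n must be even")
    x = np.linspace(a, b, n + 1)
    y = f(x)
    h = (b - a) / n
    # Weights 1, 4, 2, 4, ..., 2, 4, 1 on the n+1 equally spaced nodes.
    return h / 3 * (y[0] + 4 * y[1:-1:2].sum() + 2 * y[2:-1:2].sum() + y[-1])

print(composite_simpson(lambda x: x**3, 0.0, 1.0, 2))  # 0.25 = int_0^1 x^3 dx
```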

Romberg table (including Richardson's extrapolation):

Pros:

1. The successive computation of $$\;I(2n)$$ is cheap and fast when $$\;I(n)$$ has already been computed, since the previously evaluated function values are reused and only the new midpoints need to be evaluated.

2. Better accuracy than the other methods above.

Cons:

1. Romberg integration is successful only when the integrand satisfies the hypotheses of the Euler–Maclaurin formula (in particular, sufficient smoothness on the interval).
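A sketch of the Romberg table (function name and level count are illustrative): the first column halves the step and reuses the previous trapezoidal sum, which is exactly the cheap $$I(n) \to I(2n)$$ update noted above, and each row is then Richardson-extrapolated.

```python
import numpy as np

def romberg(f, a, b, levels):
    """Romberg table: trapezoidal first column + Richardson extrapolation.
    f must accept NumPy arrays (e.g. np.exp)."""
    R = np.zeros((levels, levels))
    h = b - a
    R[0, 0] = h * (f(a) + f(b)) / 2
    for k in range(1, levels):
        h /= 2
        # Only the new midpoints are evaluated: I(2n) reuses I(n)'s sum.
        mids = a + h * np.arange(1, 2 ** k, 2)
        R[k, 0] = R[k - 1, 0] / 2 + h * f(mids).sum()
        # Richardson extrapolation across the row.
        for j in range(1, k + 1):
            R[k, j] = R[k, j - 1] + (R[k, j - 1] - R[k - 1, j - 1]) / (4 ** j - 1)
    return R[-1, -1]

print(abs(romberg(np.exp, 0.0, 1.0, 5) - (np.e - 1)))  # far smaller than the
# plain trapezoidal error on the same 16 subintervals
```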

= (5) Prove =
* $$E:= \int\limits_{-1}^{1}(-t)g^{(1)}(t)\, dt = [P_2(t)g^{(1)}(t)]_{-1}^{1} - \int\limits_{-1}^{1} P_2(t) g^{(2)}(t)\, dt $$

Ref. Lecture notes [[media:Egm6341.s10.mtg20.djvu|p.21-2]]

Problem Statement
Prove that $$E:= \int\limits_{-1}^{1}(-t)g^{(1)}(t)\, dt = [P_2(t)g^{(1)}(t)]_{-1}^{1} - \int\limits_{-1}^{1} P_2(t) g^{(2)}(t)\, dt $$

where

$$ P_1(t)= -t $$

and $$ P_2(t)= \int P_1(t)\, dt $$ is an antiderivative of $$ P_1(t) $$.

Solution
$$ E:= \int\limits_{-1}^{1} \underbrace{(-t)}_{P_1(t)} g^{(1)}(t)\, dt = \int\limits_{-1}^{1} P_1(t) g^{(1)}(t)\, dt $$

Integrating by parts with $$ u = g^{(1)}(t) $$ and $$ dv = P_1(t)\, dt $$:

$$ = \left[ g^{(1)}(t) \int P_1(t)\, dt \right]_{-1}^{1} - \int\limits_{-1}^{1} \left[ \frac{d}{dt} g^{(1)}(t) \int P_1(t)\, dt \right] dt $$

$$ = \left[ g^{(1)}(t)\, P_2(t) \right]_{-1}^{1} - \int\limits_{-1}^{1} g^{(2)}(t)\, P_2(t)\, dt $$

where $$ P_2(t) = \int P_1(t)\, dt = \int (-t)\, dt = -\frac{t^2}{2} + \alpha $$, with $$ \alpha $$ the integration constant.
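As a sanity check, the identity can be verified numerically. The sketch below assumes the sample choices $$ g(t) = e^t $$ and $$ \alpha = 0.3 $$ (both arbitrary: the identity holds for any twice-differentiable $$g$$ and any integration constant, since the constant's contributions to the boundary term and the integral cancel).

```python
import numpy as np

ALPHA = 0.3  # arbitrary integration constant (assumption for illustration)

def P2(t):
    """P_2(t) = -t^2/2 + alpha, an antiderivative of P_1(t) = -t."""
    return -t**2 / 2 + ALPHA

def trap(y, t):
    """Plain composite trapezoidal quadrature of samples y on the grid t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2)

t = np.linspace(-1.0, 1.0, 200_001)   # fine grid for the quadrature
g1 = np.exp(t)                        # g'(t)  for g(t) = e^t
g2 = np.exp(t)                        # g''(t)

lhs = trap(-t * g1, t)                # E = int_{-1}^{1} (-t) g'(t) dt
boundary = P2(1.0) * np.exp(1.0) - P2(-1.0) * np.exp(-1.0)
rhs = boundary - trap(P2(t) * g2, t)
print(lhs, rhs)   # both approximately -2/e
```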