User:Egm6341.s11.team4/HW3

=Problem 3.1: Taylor Series and Lagrange Approximation Comparison=

As assigned in lecture slide 14-1.

Given
1. $$\displaystyle f(x)=\sin(x) $$

Objectives
1. Find the Taylor Series approximation of f(x) about $$\frac{3\pi}{8}$$

2. Find the 4th degree Lagrange approximation of f(x)

3. Find the degree necessary for the error in the Taylor Series to be less than the Lagrange approximation at the point $$ \frac{7\pi}{8}$$.

4. Use the Lagrange Interpolation Estimation Theorem as presented in 11-3 to come up with an upper bound on the Lagrange error and check that the calculated Lagrange error is within this bound.

Solution
All MATLAB code is included in a collapsible box at the end of this problem.

In order to solve this problem, we must first develop our Lagrange interpolation polynomial for sin(x) and our Taylor Series approximations of the function as expanded about the point $$x_{0}=\frac{3\pi}{8}$$.

Fortunately, we have already developed the Taylor Series for sin(x) about the point $$x_{0}=\frac{3\pi}{8}$$ previously in homework 2.

Though we will cover the derivation of the Taylor Series approximations here, for a more complete derivation please refer back to homework 2.

First, we will start with the general Taylor Series expansion as presented in the lecture slides


 * $$ P_{n}(x) = f(x_{0}) + \frac{(x-x_{0})}{1!} f^{(1)}(x_{0}) +...+\frac{(x-x_{0})^{n}}{n!}f^{(n)}(x_{0}) \!$$

For the case of sin(x), the equation then becomes


 * $$ f(x) = \sin(x_{0}) + \frac{(x-x_{0})^{1}}{1!}\cos(x_{0}) - \frac{(x-x_{0})^{2}}{2!}\sin(x_{0}) - \frac{(x-x_{0})^{3}}{3!}\cos(x_{0}) + ...\!$$

The periodicity of the sine function shows up in our evaluation of the Taylor Series and can be capitalized upon. Since the derivatives of sine cycle with period 4 ($$\sin, \cos, -\sin, -\cos$$), evaluating the nth derivative is a matter of determining the remainder when n is divided by 4 and matching the derivative with the appropriate trigonometric function based on that remainder. This process is captured in code below using the sub-function TaylorDer.
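The cycle described above can be sketched as follows. This is a minimal Python sketch of the idea behind the MATLAB sub-function TaylorDer (the MATLAB original is in the collapsible box at the end of the problem; the Python function name here is just illustrative):

```python
import math

def taylor_der(n, x0):
    """n-th derivative of sin evaluated at x0, using the period-4 cycle
    sin -> cos -> -sin -> -cos -> sin -> ..."""
    r = n % 4
    if r == 0:
        return math.sin(x0)
    if r == 1:
        return math.cos(x0)
    if r == 2:
        return -math.sin(x0)
    return -math.cos(x0)
```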

The general Taylor Series equation is developed in the code presented below in the section labeled Taylor.

Next, we must develop the Lagrange approximation of the function sin(x). New code had to be developed for this, but the basic premise is to develop the Lagrange basis polynomials and then take a linear combination of them to obtain the Lagrange approximation.

In a Lagrange approximation, we start with a set of data points that are a part of the function that we are attempting to approximate.


 * $$(x_0, y_0),\ldots,(x_j, y_j),\ldots,(x_k, y_k)$$

The Lagrange basis polynomials are then obtained using the following formula.


 * $$\ell_j(x) := \prod_{\begin{smallmatrix}0\le m\le k\\ m\neq j\end{smallmatrix}} \frac{x-x_m}{x_j-x_m} = \frac{(x-x_0)}{(x_j-x_0)} \cdots \frac{(x-x_{j-1})}{(x_j-x_{j-1})} \frac{(x-x_{j+1})}{(x_j-x_{j+1})} \cdots \frac{(x-x_k)}{(x_j-x_k)}.$$

where k is the degree of the Lagrange approximation. It should also be noted that the notation $$ \ell_j(x) $$ is actually shorthand notation for $$ \ell_{j,k}(x) $$. Since the Lagrange basis polynomials are products of k first-order terms $$ (\frac{x-x_m}{x_j-x_m}) $$, the Lagrange basis polynomials are all kth-degree polynomials.


 * $$ \ell_{j,k}(x) \in \mathcal{P}_{k} $$

In our case, we are told to take the 4th degree Lagrange approximation, so k=4. This means our Lagrange basis polynomials will all be 4th order. Now, to obtain our actual Lagrange approximation, we take a linear combination of our Lagrange basis polynomials.


 * $$L(x) := \sum_{j=0}^{k} y_j \ell_j(x)$$

Where $$ y_j $$ is the true function value at our j-th intersection point and is known from our original data set. Since the final Lagrange approximation is simply a linear combination of the basis polynomials, it too will be of the k-th order. The Lagrange approximation intersects with the true function value at k+1 points over the given interval. These intersection points are the original data points from which we developed our Lagrange basis polynomials.
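In code, the two formulas above amount to a double loop over the data points. The following is a minimal Python sketch of the idea (the actual implementation used here is the MATLAB code in the collapsible box at the end of the problem):

```python
def lagrange_basis(x_nodes, j, t):
    """Evaluate the j-th Lagrange basis polynomial l_j(t) for the given nodes."""
    lj = 1.0
    for m, xm in enumerate(x_nodes):
        if m != j:
            lj *= (t - xm) / (x_nodes[j] - xm)
    return lj

def lagrange_interp(x_nodes, y_nodes, t):
    """L(t) = sum_j y_j * l_j(t): linear combination of the basis polynomials."""
    return sum(y_nodes[j] * lagrange_basis(x_nodes, j, t)
               for j in range(len(x_nodes)))
```

Each basis polynomial is 1 at its own node and 0 at every other node, which is why L(t) reproduces the data exactly at the interpolation points.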

We now can develop the Lagrange approximation and Taylor Series approximations necessary for this problem, but we are also instructed to analyze the Lagrange error using the Lagrange Interpolation Error Theorem as presented in lecture slide 11-3. This is stated below


 * $$ e^{L}_{n}(t)=\frac{q_{n+1}(t)}{(n+1)!}f^{(n+1)}(\xi) $$

where


 * $$ q_{n+1}(t):=\prod_{j=0}^{n}(t-x_{j}) \in \mathcal{P}_{n+1} $$

This equation can be simplified for our problem. Since we are dealing with the function sin(x), a function we know to be cyclic in its derivatives, the term $$ f^{(n+1)}(\xi) $$ can be bounded. Every derivative of the sine function is plus or minus sine or cosine, and we know the range of these trigonometric functions to be bounded between -1 and 1. This allows us to say that


 * $$ |e^{L}_{4}(t)|\leq \frac{|q_{5}(t)|}{5!} $$

Since we want to determine the degree of the Taylor Series about $$ x_{0}=\frac{3\pi}{8} $$ required such that the Taylor Series becomes a better approximation of the function sin(x) than the 4th degree Lagrange approximation when evaluated at the point $$ \frac{7\pi}{8} $$, we expect to see a final relationship of the errors


 * $$ |e^{T}_{n}(\frac{7\pi}{8})| \leq |e^{L}_{4}(\frac{7\pi}{8}) | \leq |\frac{q_{5}(\frac{7\pi}{8})}{5!}| $$

Now that we have a system to develop our Lagrange approximation, our Taylor Series approximations, and an upper bound on the remainder, we have to write code that develops these functions automatically and then determines the Taylor Series degree n at which the Taylor Series error at $$ \frac{7\pi}{8} $$ becomes less than the Lagrange error. To do this, we develop the Lagrange approximation and its error in one independent loop, and the bound on the remainder in another. The Taylor Series approximation and error, meanwhile, are developed in a while loop that adds the next degree to the Taylor Series approximation as long as the Taylor Series error remains greater than the Lagrange approximation error. Once this condition is no longer met, the code prints the Taylor Series degree at which the condition was satisfied, the Taylor Series error at the point $$ \frac{7\pi}{8} $$, the Lagrange approximation error, and the upper bound on the Lagrange error. The commented code for this problem is included below.
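The while-loop search described above can be sketched in a few lines of Python (the original solution is MATLAB code in the collapsible box; here the Lagrange error is simply passed in as a number):

```python
import math

def taylor_sin(x, x0, n):
    """Degree-n Taylor polynomial of sin about x0 (derivatives of sin cycle
    with period 4: sin, cos, -sin, -cos)."""
    cycle = (math.sin(x0), math.cos(x0), -math.sin(x0), -math.cos(x0))
    return sum(cycle[k % 4] * (x - x0) ** k / math.factorial(k)
               for k in range(n + 1))

def taylor_degree_to_beat(t, x0, lagrange_err):
    """Smallest degree n whose Taylor error at t drops below lagrange_err."""
    n = 0
    while abs(math.sin(t) - taylor_sin(t, x0, n)) > lagrange_err:
        n += 1
    return n
```

With $$x_0=\frac{3\pi}{8}$$, $$t=\frac{7\pi}{8}$$, and the Lagrange error 0.001476 reported in the code output below, this search stops at degree 6.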

After running the program below, we obtain the following plot of our function and its approximations.



As can be seen, both our Lagrange and Taylor Series approximations are exceptionally close to the true function over the span 0 to $$ \pi $$ (the three functions are virtually indistinguishable on the above plot). Meanwhile, our code output tells us that the Taylor Series needed to be of degree 6 or higher before it produced a better approximation than the Lagrange Interpolation of degree 4. Additionally, as expected, our Taylor Series error was smaller than our Lagrange error, which in turn was less than the upper bound on the Lagrange error predicted by the Lagrange Interpolation Error Theorem.

Code Output:

Taylor Series Degree = 6

$$ (|e^{T}_{6}(\frac{7\pi}{8})| = 0.000905) < (|e^{L}_{4}(\frac{7\pi}{8})| = 0.001476) < (|\frac{q_{5}(\frac{7\pi}{8})}{5!}| = 0.008170) $$

This problem was solved by Brendan Mahon

=Problem 3.2=

Given
Note: shorthand notation was used, however it is implied that these equations refer to the Lagrange interpolation of a continuous function $$\displaystyle f$$ which has $$(n+1)$$ continuous derivatives on $$\displaystyle I_t := \mathcal{H}(t,x_0,x_1,...,x_n)$$ (the smallest interval containing the points $$t, x_0, x_1,...,x_n$$) as defined on lecture slides [[media:Nm1.s11.mtg11.djvu|11-2,11-3]].

Objective
Show that

Solution
First we will rearrange the second term in (2.1) as follows

From equation (2) on lecture slide [[media:Nm1.s11.mtg11.djvu|11-3]]:

where $$\displaystyle \xi \in I_t$$.

Rearranging (2.4) and substituting it back into (2.3) yields

From equation (3) on lecture slide [[media:Nm1.s11.mtg11.djvu|11-3]]

Upon inspection it can easily be seen that

From (2.5) and (2.7)

Solved by William Kurth.

=Problem 3.3=

Given
1.

Function $$ f(x)=\frac{e^{x}-1}{x} $$    on [-1,1]

Discretization Points (uniform) $$ \displaystyle [x_{0},...,x_{4}] $$, n=4

Evaluation Point $$ \displaystyle t=5 $$

Evaluation Point $$ \displaystyle x=0.75 $$

Lagrange Basis Polynomial to Plot $$\displaystyle i=2 \Rightarrow l_{2,4}(\cdot) $$

2.

Function $$ f(x)=\frac{1}{1+4x^{2}} $$    on [-5,5]

Discretization Points (uniform) $$\displaystyle [x_{0},...,x_{8}] $$, n=8

Evaluation Point $$ \displaystyle t=4.5 $$

Evaluation Point $$\displaystyle x=3 $$

Lagrange Basis Polynomial to Plot $$\displaystyle i=3 \Rightarrow l_{3,8}(\cdot) $$

Objectives
Plot 3 figures similar to those on lecture slide 14-2, using uniform discretization.

1.

1.1: Find $$ e^{L}_{4}(f,t) $$ and $$ e^{L}_{4}(f,x) $$

1.2: Plot the Lagrange Interpolation and the original function on the same plot.

1.3: Plot the stated Lagrange Basis Polynomial (i=2) over the span of the function, ensuring that it is zero valued at all the basis data points except the i-th data point corresponding to that basis polynomial, where it has a value of one.

1.4: Plot the remainder component $$ q_{5} $$ to ensure that it is zero valued at all the basis data points. This ensures the Lagrange interpolation is exact when evaluated at the original basis data points.

2.

2.1: Find $$ e^{L}_{8}(f,t) $$ and $$ e^{L}_{8}(f,x) $$

2.2: Plot the Lagrange Interpolation and the original function on the same plot. Additionally, comment on the observed Runge Phenomenon in the Lagrange Interpolation.

2.3: Plot the stated Lagrange Basis Polynomial (i=3) over the span of the function, ensuring that it is zero valued at all the basis data points except the i-th data point corresponding to that basis polynomial, where it has a value of one.

2.4: Plot the remainder component $$ q_{9} $$ to ensure that it is zero valued at all the basis data points. This ensures the Lagrange interpolation is exact when evaluated at the original basis data points.

Solution
All MATLAB code is included in a collapsible box at the end of this problem.

Both parts 1 and 2 can be solved in the same manner. We will rigorously develop the method used in evaluating part 1 and then show how the same method is used in part 2. MATLAB code, affixed to the end of this solution, was used to develop all plots. The same MATLAB code was used for both parts 1 and 2 with only minor variations to accommodate the differences in the stated problems.

1.

First, we must obtain the data points for our Lagrange Interpolation. Since we want a uniform discretization, we make a vector x that divides the span over which we're plotting the function into k+1 points. For problem 1, this is 5 points ranging from -1 to 1.


 * $$ [-1, -0.5, 0, 0.5, 1] $$

Now, we must evaluate the function f(x) at these five points to develop our vector y of function values. This is done by simply plugging in the individual x values into our f(x) function. For problem 1, it is worth noting that in Homework 1 Problem 1.1 we found that there is a singularity at 0 for our given function f(x), but that the limit of the function at 0 is valued at 1 when taken from either side. Therefore, when evaluating the function value at 0 instead of plugging the value 0 into our function f(x), we explicitly state that the value is 1. This allows us to proceed with our Lagrange Interpolation as before.
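A minimal Python sketch of this discretization with the singularity handled explicitly (the actual solution used MATLAB; `math.expm1` is used here because it evaluates $$e^x-1$$ accurately near 0):

```python
import math

def f(x):
    """f(x) = (e^x - 1)/x, patched with its limit value 1 at the
    removable singularity x = 0."""
    if x == 0.0:
        return 1.0
    return math.expm1(x) / x

k = 4                                               # degree of the interpolation
nodes = [-1.0 + 2.0 * i / k for i in range(k + 1)]  # uniform points on [-1, 1]
values = [f(x) for x in nodes]
```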

With the five data points required for a 4th degree Lagrange interpolation now obtained, we use the following algorithm to obtain our Lagrange Basis polynomials, as done previously in problem 3.1.


 * $$\ell_j(x) := \prod_{\begin{smallmatrix}0\le m\le k\\ m\neq j\end{smallmatrix}} \frac{x-x_m}{x_j-x_m} = \frac{(x-x_0)}{(x_j-x_0)} \cdots \frac{(x-x_{j-1})}{(x_j-x_{j-1})} \frac{(x-x_{j+1})}{(x_j-x_{j+1})} \cdots \frac{(x-x_k)}{(x_j-x_k)}.$$

where k is the degree of the Lagrange Interpolation (in our case k=4) and $$ x_j $$ is the jth x value from our data points. These Lagrange Basis Polynomials are obtained in their own for loop and stored as an array (this will be helpful in plotting an individual Lagrange Basis over the relevant span later on). They are then used in linear combination to obtain our final Lagrange Interpolation according to the following formula.


 * $$L(x) := \sum_{j=0}^{k} y_j \ell_j(x)$$

Since our individual Lagrange Basis Polynomials are all of degree k, our final Lagrange Interpolation function is also of degree k (k=4 in this problem).

The following plot shows both the true function and the Lagrange Interpolation over the span -1 to 1.



As can be seen, the two functions are a near perfect match over the region. This is because the Lagrange Interpolation weaves in and out of the true function, intersecting at the data points obtained from our uniform discretization. When using the Lagrange Interpolation outside of this region, the error can grow rapidly, since the interpolation no longer weaves about the true function. This can be seen in the following image, a plot of the true function and the Lagrange Interpolation over the region -1 to 5.



The value 5 was selected since we must later on determine the error of the Lagrange interpolation at this point (as can be seen graphically, we should expect the error to be very large since 5 is well outside of the region -1 to 1 from which the interpolation was obtained).

It can be seen from both of these figures that the Lagrange Interpolation is exactly correct at the data points from which the Lagrange Basis Polynomials are obtained. The reason for this can quickly be seen from a plot of an individual Lagrange Basis polynomial over the relevant span.



This is the Lagrange Basis Polynomial corresponding to point i=2. It can be seen that the function is zero valued at every point in our uniform discretization (these points are indicated by the grid lines) except for point $$ x_2 $$, the point to which this Lagrange Basis Polynomial corresponds. At this point, the basis polynomial has a value of 1. This pattern is true for every Lagrange Basis Polynomial. Since the Lagrange Interpolation is a linear combination of the basis polynomials times the function values corresponding to them, it should be clear that the value of the Lagrange Interpolation at point $$ x_2 $$ is


 * $$\displaystyle L(x_2)=0+0+1*y_2+0+0 $$
 * $$\displaystyle L(x_2)=y_2 $$

Next, we want to determine the function $$ q_5(t) $$. This is a component of the Lagrange Interpolation Estimation Theorem as presented in Lecture Slide 11-3.


 * $$ e_{n}^{L}(f,t):=f(t)-f_{n}^{L}(t) = \frac{q_{n+1}(t)}{(n+1)!}f^{(n+1)}(\xi) $$

where $$ \xi $$ is a point somewhere in the interval [-1,1]. We use the following formula to obtain $$ q_{n+1} $$


 * $$ q_{n+1} := \prod_{j=0}^{n}(x-x_j)  \in \mathcal{P}_{n+1}  $$

In our case, n=4. To obtain this function in code, a separate for loop was made to evaluate the polynomial $$ q_5 $$. After this polynomial was determined, it was plotted over the relevant span of -1 to 1.
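The loop for $$ q_{n+1} $$ is a single running product; a minimal Python sketch (the original is MATLAB code in the collapsible box):

```python
def q(t, nodes):
    """q_{n+1}(t) = (t - x_0)(t - x_1)...(t - x_n); zero at every node."""
    prod = 1.0
    for xj in nodes:
        prod *= (t - xj)
    return prod
```

By construction q vanishes at every data point, since one of its factors is zero there.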



As can be seen, the function is zero valued at every one of our data points (indicated by the grid lines). This is to be expected since we have previously shown that the Lagrange Interpolation exactly evaluates the function at the data points from which it is obtained.

Lastly, we wish to evaluate the errors $$ e_{4}^{L}(f,t) $$ and $$ e_{4}^{L}(f,x) $$ at t=5 and x=0.75 respectively.

This is done using the following equation


 * $$ e_{n}^{L}(f,t):=f(t)-f_{n}^{L}(t) $$

Substituting in for t=5, we obtain


 * $$ e_{4}^{L}(\frac{e^{x}-1}{x},5)=\frac{e^{5}-1}{5}-f_{4}^{L}(5)=11.0239 $$

This error is, as expected, very large. We could see graphically from our figure showing the true function and the Lagrange Interpolation that, when the evaluation point is far outside of the region of data points from which we developed our Lagrange Interpolation, the error is very large. The Lagrange Interpolation no longer weaves about the true function and the divergence between the two is unbounded. When determining the error at the point $$x=0.75 $$, however, we will find that the error is very small.


 * $$ e_{4}^{L}(\frac{e^{x}-1}{x},0.75)=\frac{e^{0.75}-1}{0.75}-f_{4}^{L}(0.75)=1.6274e-04 $$

This is also to be expected, since the value 0.75 is within the span of the points on which our Lagrange Interpolation is based. Since the Lagrange Interpolation must intersect with the true function both before and after the point at which we are evaluating the error, we expect that the Lagrange Interpolation will not stray excessively far from the true function (in part 2, however, we will see that the problem of Runge's Phenomenon means that this assumption does not always hold true).

2.

We now analyze the function

$$ \displaystyle f(x)=\frac{1}{1+4x^{2}} $$ on [-5, 5]

using uniform discretization and an 8th degree Lagrange Interpolation.

First, we must obtain our data points. Using uniform discretization, we obtain the following vector of our x values

$$ [x_0, ..., x_8]=[-5, -3.75, -2.5, -1.25, 0, 1.25, 2.5, 3.75, 5] $$

Now, we must evaluate the function f(x) at these nine points to develop our vector y of function values. This is done by simply plugging in the individual x values into our f(x) function. With these quantities in hand, we proceed with developing our Lagrange Interpolation in an identical manner as was done in part 1 (in fact, the same code was simply copied and pasted with the only adjustment being our value for the degree of the approximation was changed from k=4 to k=8).

After obtaining our Lagrange Interpolation, we plotted the Lagrange Interpolation along with the value of our true function over the span -5 to 5.



As can be seen, our high order Lagrange Interpolation of this function is actually very poor over this region, particularly at the edges. This is due to Runge's Phenomenon, wherein a polynomial interpolation of high order tends to oscillate substantially at the edges of the interval. The reason for this oscillation and increase in error becomes obvious when looking at the Lagrange Interpolation Estimation Theorem.


 * $$ e_{n}^{L}(f,t):=f(t)-f_{n}^{L}(t) = \frac{q_{n+1}(t)}{(n+1)!}f^{(n+1)}(\xi) $$

Let's isolate the derivative term in this equation. For the function we previously analyzed in part 1, it can be seen (using Wolfram Alpha) that the nth derivative gets smaller and smaller when evaluated at the interval edge as n increases. For the function in question, however, the magnitude of successive derivatives at the interval edges grows, causing an increasing error bound in our Lagrange Interpolation Estimation Theorem. Using Wolfram Alpha to evaluate the derivatives at the interval edge, we obtain


 * $$ \left|\frac{d^{4}}{dx^{4}}\frac{1}{1+4x^{2}}\right| \approx 0.00179 $$
 * $$ \left|\frac{d^{5}}{dx^{5}}\frac{1}{1+4x^{2}}\right| \approx 0.0021 $$
 * $$ \left|\frac{d^{6}}{dx^{6}}\frac{1}{1+4x^{2}}\right| \approx 0.0029 $$
 * $$ \left|\frac{d^{7}}{dx^{7}}\frac{1}{1+4x^{2}}\right| \approx 0.0044 $$
 * $$ \left|\frac{d^{8}}{dx^{8}}\frac{1}{1+4x^{2}}\right| \approx 0.0077 $$

The trend shows that as we use a higher degree Lagrange interpolation, the error at the interval edge actually becomes more severe, explaining the severe oscillations we see in our figure.
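This growth can be checked numerically. The sketch below (Python; the node counts and sampling grid are my own choices, not taken from the original MATLAB code) compares the worst-case interpolation error over [-5, 5] for uniform-node Lagrange interpolation of degree 4 versus degree 8:

```python
def runge(x):
    return 1.0 / (1.0 + 4.0 * x * x)

def lagrange_interp(x_nodes, y_nodes, t):
    # L(t) = sum_j y_j * prod_{m != j} (t - x_m)/(x_j - x_m)
    total = 0.0
    for j, xj in enumerate(x_nodes):
        lj = 1.0
        for m, xm in enumerate(x_nodes):
            if m != j:
                lj *= (t - xm) / (xj - xm)
        total += y_nodes[j] * lj
    return total

def max_uniform_error(degree, a=-5.0, b=5.0, samples=2001):
    """Worst observed |f(t) - L(t)| on a fine grid, uniform nodes."""
    nodes = [a + (b - a) * i / degree for i in range(degree + 1)]
    values = [runge(x) for x in nodes]
    grid = [a + (b - a) * i / (samples - 1) for i in range(samples)]
    return max(abs(runge(t) - lagrange_interp(nodes, values, t)) for t in grid)
```

Raising the degree from 4 to 8 makes the worst-case error larger, not smaller, which is Runge's Phenomenon in action.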

Despite these oscillations, our Lagrange Basis polynomials should still be zero valued at every data point except the one corresponding to it, at which point it should have a value of one. Plotting the basis polynomial $$ l_{3,8}(x) $$ we see



The Lagrange Basis Polynomial is, as expected, zero at every evaluation point except the one corresponding to it, where it has a value of one. The grid lines indicate the evaluation points, with $$ x_{3}=-1.25 $$ being the point at which we expect to see the function reach a value of one. This plot matches our expectations.

Next, we evaluate the function $$ q_{9} $$. This function is a significant component of the Lagrange Interpolation Estimation Theorem and, as before, we expect to see it zero valued at every one of the evaluation points.



As expected, the function has a value of zero at the grid marks corresponding to our evaluation points. It should be noted, however, that the function oscillates significantly (reaching a value of almost 40,000 near the interval edge). Since the Lagrange Interpolation Estimation Theorem states that the error is proportional to the function value of $$ q_{9}(x) $$, this plot further indicates that our Lagrange Interpolation will have severe errors near the interval edge.

Lastly, we must find our Lagrange errors for t=4.5 and x=3. We will use the same formula used previously


 * $$ e_{n}^{L}(f,t):=f(t)-f_{n}^{L}(t) $$

When evaluated for our function and our point t=4.5


 * $$ |e_{8}^{L}(\frac{1}{1+4x^{2}},4.5)|=|\frac{1}{1+4\cdot 4.5^{2}}-f_{8}^{L}(4.5)|=1.7840 $$

Since the true value is about 0.012, the error in the Lagrange Interpolation at this point, very near the interval edge, is, as expected, very large due to Runge's Phenomenon. Our next evaluation point is x=3. This point is farther from the interval edge and, since Runge's Phenomenon is less severe away from the edges, we expect the error to be substantially smaller.


 * $$ |e_{8}^{L}(\frac{1}{1+4x^{2}},3)|=|\frac{1}{1+4\cdot 3^{2}}-f_{8}^{L}(3)|=0.3786 $$

Though this error is, as expected, much smaller than at t=4.5, it is still very large compared to the true function value of about 0.027. Looking at our plot of the true function and its Lagrange Interpolation, it can be seen that the value x=3 lies roughly at an oscillatory peak of our Lagrange Interpolation, so our error is almost at a local maximum at that point. Since the error produced by our Lagrange Interpolation in part 2 was greater than the error produced by our interpolation in part 1, it is clear that one should be aware of Runge's Phenomenon before deciding to employ a high order polynomial approximation for function evaluation, especially when evaluating points near the interval edge.

This problem was solved by Brendan Mahon

=Problem 3.4=

Given
The equation of the bifolium curve in polar coordinates: $$r(t)=2\sin (t){{\cos }^{2}}(t)$$, with $$t\in [0,\pi ]$$.

Objectives
1. Do literature research to find the history and applications (if any) of this classic curve

2. Compute the area in one leaf to $$10^{-6}$$ accuracy.

History and application of bifolium
The folium curve was first described by Johannes Kepler in 1609. It has two general equations: $$({{x}^{2}}+{{y}^{2}})({{x}^{2}}+bx+{{y}^{2}})=4axy$$ in Cartesian form and $$\rho =4a\cos t{{\sin }^{2}}t-b\cos t$$ in polar coordinates. The name means "leaf-shaped" in Latin. The curve has three forms, known as the simple folium (or single folium), the bifolium (or double folium), and the trifolium, corresponding to the cases $$b=4a$$, $$b=0$$ and $$b=a$$, respectively. In 1638, Rene Descartes first discussed the type with Cartesian equation $${{x}^{3}}+{{y}^{3}}=3axy$$, thereafter named the folium of Descartes. Descartes found the correct shape of the curve in the positive quadrant, but he wrongly believed that this leaf shape was repeated in each quadrant like the four petals of a flower. The problem of determining the tangent to the curve was proposed to Gilles de Roberval who, having made the same incorrect assumption, called the curve fleur de jasmin after the four-petal jasmine bloom, a name that was later dropped. The folium of Descartes has an asymptote $$x+y+a=0$$. See The universal book of mathematics, by David J. Darling, for more information.

The bifolium has been studied by Longchamps (1886) and Henri Brocard (1887).

For more information about bifolium, refer to wolfram_Bifolium and 2Dcurves_Bifolium.

Matlab Code for above plots:

Compute the area of one leaf
The exact value of the area of one leaf: integrating in polar coordinates over $$t\in (0,\frac{\pi }{2})$$, we get the exact value of the area of one leaf:

 * $$\displaystyle Area=\frac{1}{2}\int\limits_{0}^{\frac{\pi }{2}}{{{r}^{2}}(t)}\,dt=\frac{1}{2}\int\limits_{0}^{\frac{\pi }{2}}{4{{\sin }^{2}}(t){{\cos }^{4}}(t)}\,dt=\frac{\pi }{16} $$

Method 1: Using Composite Trapezoidal rule and Composite Simpson's rule


In this method, we have two approaches to calculate the area of one leaf: the Composite Trapezoidal rule and the Composite Simpson's rule. In order to compute the area of one leaf, we divide the area between the upper leaf curve and the vertical axis into two parts B and C, as depicted in the figure above. We use the equation in Cartesian coordinates in this method, that is $${{({{x}^{2}}+{{y}^{2}})}^{2}}=2{{x}^{2}}y$$. First, we use Matlab to find the right-most and top-most point of the folium curve of one leaf:

The right-most point is (0.6495, 0.3750), reached at $$t=\frac{\pi}{6}\approx 0.52$$.

The top-most point is (0.5, 0.5), reached at $$t=\frac{\pi}{4}$$.

Matlab Code: The equation in Cartesian coordinates is implicit, and we cannot express the dependent variable y in terms of the independent variable x. But we can express x in terms of y. For convenience, we therefore choose to project in the y-direction to calculate the area of one leaf. Matlab Code for solving the explicit expressions: using Matlab, we get the following expressions for x on one leaf:

 * $$\displaystyle {{x}_{1}}=\sqrt{y+y\sqrt{1-2y}-{{y}^{2}}}, \qquad {{x}_{2}}=\sqrt{y-y\sqrt{1-2y}-{{y}^{2}}} $$

$${{x}_{1}}$$ is the equation for the bottom curve, and $${{x}_{2}}$$ is the equation for the top curve. Next, we use two approaches to calculate the area of the two different parts.
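Both quadrature rules can be sketched in Python as follows, applied to the branch difference $$x_{1}(y)-x_{2}(y)$$ over $$y\in [0,\tfrac{1}{2}]$$ (the original solution is MATLAB; the panel count here is my own choice):

```python
import math

def branch_gap(y):
    """x1(y) - x2(y): horizontal width of the leaf at height y."""
    s = y * math.sqrt(1.0 - 2.0 * y)
    x1 = math.sqrt(y + s - y * y)
    x2 = math.sqrt(max(y - s - y * y, 0.0))  # clamp tiny negative round-off
    return x1 - x2

def composite_trapezoid(f, a, b, n):
    """Composite Trapezoidal rule with n panels."""
    h = (b - a) / n
    return h * (0.5 * f(a) + sum(f(a + i * h) for i in range(1, n)) + 0.5 * f(b))

def composite_simpson(f, a, b, n):
    """Composite Simpson's rule with n panels (n must be even)."""
    h = (b - a) / n
    total = f(a) + f(b)
    for i in range(1, n):
        total += (4.0 if i % 2 else 2.0) * f(a + i * h)
    return total * h / 3.0
```

Both estimates converge to the exact leaf area $$\frac{\pi}{16}\approx 0.196350$$.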

Using Composite Trapezoidal rule:

Matlab Code: The result:



Using Composite Simpson’s rule: Matlab Code: The result:



Method 2: Using sum of area of subdivided triangles


We use the equation in polar coordinates in this method, that is $$r=2\sin (t){{\cos }^{2}}(t)$$. Matlab Code:
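A minimal Python sketch of the triangle-fan idea (the MATLAB original is referenced above; the panel count is my own choice): each thin triangle with a vertex at the origin and radii $$r(t_{i})$$, $$r(t_{i+1})$$ has area $$\tfrac{1}{2}r(t_{i})r(t_{i+1})\sin (\Delta t)$$, and summing them approximates $$\tfrac{1}{2}\int r^{2}\,dt$$.

```python
import math

def r(t):
    return 2.0 * math.sin(t) * math.cos(t) ** 2

def leaf_area_triangles(n):
    """Sum the areas of n thin triangles fanned out from the origin."""
    dt = (math.pi / 2.0) / n
    return sum(0.5 * r(i * dt) * r((i + 1) * dt) * math.sin(dt)
               for i in range(n))
```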

The result:



Comment
As seen from above results using three different methods to compute the area of one leaf, we can conclude that:

Composite Simpson's rule converges much faster than Composite Trapezoidal rule;

The Sum of Triangle Area method converges much faster than both Composite Trapezoidal rule and Composite Simpson's rule.



Matlab Code:

Reference
1. Wolfram_Bifolium

2. Composite Trapezoidal rule

3. Composite Simpson's rule

Problem solved by Hailong Chen.

=Problem 3.5: Prove $$ {{e}^{(n+1)}}(x)={{f}^{(n+1)}}(x)-0 $$=

Refer to lecture slide [[media:nm1.s11.mtg16.djvu|mtg-16]] for the problem statement.

Given

 * $$\displaystyle {{e}_{n}}(x)=f(x)-{{f}_{n}}(x) $$     (Eq 1)

where

 * $$\displaystyle {{f}_{n}}(x)=\sum\limits_{i=0}^{n}{{{l}_{i,n}}(x)f({{x}_{i}})} $$     (Eq 2)

Objectives
Prove that $$\displaystyle {{e}^{(n+1)}}(x)={{f}^{(n+1)}}(x)-0 $$.

Solution
Substituting Eq. (2) into Eq. (1) yields:
 * $$\displaystyle {{e}_{n}}(x)=f(x)-{{f}_{n}}(x)=f(x)-\sum\limits_{i=0}^{n}{{{l}_{i,n}}(x)f({{x}_{i}})} $$     (Eq 3)

where $$\displaystyle {{l}_{i,n}}(x)=\prod\limits_{j=0,j\ne i}^{n}{\frac{x-{{x}_{j}}}{{{x}_{i}}-{{x}_{j}}}}$$ is an nth-degree polynomial. Next, taking the (n+1)th-order derivative of both sides of Eq. (3) gives:
 * $$\displaystyle {{e}^{(n+1)}}(x)={{f}^{(n+1)}}(x)-{{[{{f}_{n}}(x)]}^{(n+1)}}={{f}^{(n+1)}}(x)-{{[\sum\limits_{i=0}^{n}{{{l}_{i,n}}(x)f({{x}_{i}})}]}^{(n+1)}} $$     (Eq 4)

Because $$\displaystyle {{l}_{i,n}}(x)$$ is an nth-degree polynomial, $$\displaystyle {{f}_{n}}(x)=\sum\limits_{i=0}^{n}{{{l}_{i,n}}(x)f({{x}_{i}})}$$ is also an nth-degree polynomial, so taking the (n+1)th-order derivative yields:
 * $$\displaystyle {{[{{f}_{n}}(x)]}^{(n+1)}}={{[\sum\limits_{i=0}^{n}{{{l}_{i,n}}(x)f({{x}_{i}})}]}^{(n+1)}}=0 $$     (Eq 5)

So,
 * $$\displaystyle {{e}^{(n+1)}}(x)={{f}^{(n+1)}}(x)-{{[{{f}_{n}}(x)]}^{(n+1)}}={{f}^{(n+1)}}(x)-0 $$     (Eq 6)

This problem was solved by Shengfeng Yang

=Problem 3.6: Prove that $$ q_{n+1}^{(n+1)} = (n+1)! $$=

Refer to lecture slide [[media:nm1.s11.mtg16.djvu|16-3]] for the problem statement.

Given

 * $$\displaystyle q_{n+1}^{(n+1)} = (n+1)!$$     (6.1)

Objective
Prove the equality given in equation (6.1).

Solution
From lecture [[media:nm1.s11.mtg11.djvu|11-3]] we have


 * $$\displaystyle q_{n+1} := \prod_{j=0}^n (x-x_j) = (x-x_0)(x-x_1)\cdots(x-x_n)$$     (6.2)

Since (6.2) is a polynomial we can re-write it as:


 * $$\displaystyle (x-x_0)(x-x_1)\cdots(x-x_n) = x^{n+1} + c_nx^n + \cdots + c_2x^2 + c_1x + c_0$$     (6.3)

Where $$ c_n $$ through $$ c_0 $$ are constants.

We now have to calculate the $$(n+1)^{th}$$ derivative.

The 1st derivative is:


 * $$\displaystyle \frac{d}{dx}\big(x^{n+1} + c_nx^n + \cdots + c_2x^2 + c_1x + c_0\big) = (n+1)x^{n} + nc_nx^{n-1} + \cdots + 2c_2x + c_1 + 0$$     (6.4)

The 2nd derivative:


 * $$\displaystyle \frac{d^2}{dx^2}\big(x^{n+1} + c_nx^n + \cdots + c_2x^2 + c_1x + c_0\big) = n(n+1)x^{n-1} + (n-1)nc_nx^{n-2} + \cdots + 2c_2 + 0$$     (6.5)

As we can see, the lower-order terms of x are zeroed out as we continue to take each derivative.

The nth derivative:


 * $$\displaystyle \frac{d^n}{dx^n}\big(x^{n+1} + c_nx^n + \cdots + c_2x^2 + c_1x + c_0\big) = 2\cdot 3\cdots (n-1)\cdot n\cdot (n+1)\,x + 1\cdot 2\cdot 3\cdots (n-1)\cdot n\,c_n + 0$$     (6.6)

Finally, the $$ (n+1)^{th} $$ derivative:


 * $$\displaystyle \frac{d^{n+1}}{dx^{n+1}}\big(x^{n+1} + c_nx^n + \cdots + c_2x^2 + c_1x + c_0\big) = 1\cdot 2\cdot 3\cdots (n-1)\cdot n\cdot (n+1) + 0 = (n+1)!$$     (6.7)

Thus proving the original statement.
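The proof can also be checked mechanically: build the coefficients of $$q_{n+1}$$ by multiplying out its factors, differentiate the coefficient list n+1 times, and confirm that the constant $$(n+1)!$$ remains. A small Python check (the node values are my own choice):

```python
from functools import reduce

def poly_mul(p, q):
    """Multiply two coefficient lists (ascending powers of x)."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_derivative(p):
    """Differentiate a coefficient list once."""
    return [i * c for i, c in enumerate(p)][1:]

nodes = [0, 1, 2, 3, 4]                      # n + 1 = 5 nodes, so n = 4
coeffs = reduce(poly_mul, ([-xj, 1] for xj in nodes), [1])
for _ in range(len(nodes)):                  # differentiate n + 1 = 5 times
    coeffs = poly_derivative(coeffs)
# coeffs is now the constant polynomial [(n+1)!] = [5!] = [120]
```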

This problem was solved by Erle Fields

=Problem 3.7=

Objectives
Show that the two expressions in the equation below are equivalent.

Solution
Change the sign and remove the absolute value bars. eq(7.1)

Integrate from a to b. eq(7.2)

Factor 1/6 out of the bracket and simplify the parentheses. eq(7.3)

Simplify the bracket by cancelling common elements. eq(7.4)

We have shown that the given expressions are equivalent. eq(7.5)

=Problem 3.8=

Objectives
Show that the two expressions in the equation below are equivalent.

Solution
Split the domain of integration [a, b] into [a, (a+b)/2] and [(a+b)/2, b]. eq(8.1)

Change the sign and remove the absolute value bars. eq(8.2)

Integrating the two functions, we get eq(8.3).

Simplifying eq(8.3), we get eq(8.4).

Factor 1/32 out of the bracket and simplify the parentheses. eq(8.5)

Simplify the parentheses and cancel common elements. eq(8.6)

We have shown that the given expressions are equivalent. eq(8.7)