User:Egm4313.s12.team8.tclamb/R4

=Problem R4.4 - Extend the Accuracy of the Solution Beyond $$\hat x = 0$$=

==Problem Statement==
Back away a little from the brink of non-convergence at $$x = 1$$ for the Taylor series of $$\log (1+x)$$ about $$\hat x = 0$$, and consider the point $$x_1 = 0.9$$.

Find the values of $$y_n(x_1)$$ and $$y_n'(x_1)$$ that will serve as initial conditions for the next iteration, extending the domain of accuracy of the analytical solution. Choose $$n$$ sufficiently high that $$y_n(x_1)$$ and $$y_n'(x_1)$$ differ from the numerical solution by no more than $$10^{-5}$$.
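For context, the series in question is $$\log(1+x) = \sum_{k=1}^{\infty} (-1)^{k+1} x^k / k$$, which converges for $$-1 < x \le 1$$ but only slowly near the endpoint. A short Python sketch (standing in for the MATLAB used in the report) illustrates how many terms are needed at $$x_1 = 0.9$$:

```python
import math

def log1p_taylor(x, n):
    """Partial sum of the Taylor series of log(1+x) about 0, using n terms."""
    return sum((-1) ** (k + 1) * x ** k / k for k in range(1, n + 1))

x1 = 0.9
exact = math.log(1.0 + x1)

# The error shrinks roughly like x1**n / n, so dozens of terms are needed
# before the partial sum agrees with log(1.9) to within 1e-5.
for n in (10, 30, 90):
    print(n, abs(log1p_taylor(x1, n) - exact))
```

This slow decay of the error near the edge of the convergence interval is what motivates restarting the expansion at a new point.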

==Solution==
The following MATLAB code generates the numerical solution to the problem:
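The original MATLAB listing is not reproduced in this excerpt, and neither is the ODE itself, so the following is only a generic Python sketch of the workflow: integrate a second-order ODE $$y'' = f(x, y, y')$$ numerically up to $$x_1 = 0.9$$ to obtain reference values. The right-hand side $$y'' = -y$$ below is a placeholder, chosen purely because its exact solution ($$y = \cos x$$ for these initial conditions) makes the sketch easy to check:

```python
import math

# Placeholder ODE: the actual equation from the report is not reproduced
# here, so y'' = -y (exact solution y = cos x with y(0)=1, y'(0)=0)
# stands in for illustration.
def rhs(x, y, dy):
    return -y

def rk4(x0, y0, dy0, x_end, steps):
    """Classical RK4 for the first-order system y' = dy, dy' = rhs(x, y, dy)."""
    h = (x_end - x0) / steps
    x, y, dy = x0, y0, dy0
    for _ in range(steps):
        k1y, k1d = dy,            rhs(x,         y,             dy)
        k2y, k2d = dy + h/2*k1d,  rhs(x + h/2,   y + h/2*k1y,   dy + h/2*k1d)
        k3y, k3d = dy + h/2*k2d,  rhs(x + h/2,   y + h/2*k2y,   dy + h/2*k2d)
        k4y, k4d = dy + h*k3d,    rhs(x + h,     y + h*k3y,     dy + h*k3d)
        y  += h/6 * (k1y + 2*k2y + 2*k3y + k4y)
        dy += h/6 * (k1d + 2*k2d + 2*k3d + k4d)
        x  += h
    return y, dy

# Numerical reference values of y and y' at x1 = 0.9.
y_ref, dy_ref = rk4(0.0, 1.0, 0.0, 0.9, 1000)
print(y_ref, math.cos(0.9))
```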

The following MATLAB function will generate the output, as well as the coefficients, of a series solution to the ODE:
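The MATLAB function itself is not reproduced here, but the evaluation step it performs can be sketched generically: given series coefficients $$a_k$$ about $$\hat x$$ (produced by the ODE's recurrence relation, not shown), the truncated solution and its derivative are $$y_n(x) = \sum_{k=0}^{n} a_k (x-\hat x)^k$$ and $$y_n'(x) = \sum_{k=1}^{n} k\,a_k (x-\hat x)^{k-1}$$. A Python sketch of that evaluation:

```python
import math

def eval_series(coeffs, x, center=0.0):
    """Evaluate a truncated power series sum(a_k * (x - center)**k) and
    its first derivative at x, given coefficients a_0..a_n."""
    t = x - center
    y = sum(a * t ** k for k, a in enumerate(coeffs))
    dy = sum(k * a * t ** (k - 1) for k, a in enumerate(coeffs) if k >= 1)
    return y, dy

# Sanity check with a known expansion: log(1+x) has a_0 = 0 and
# a_k = (-1)**(k+1) / k, so y approaches log(1.5) and dy approaches 1/1.5.
coeffs = [0.0] + [(-1) ** (k + 1) / k for k in range(1, 41)]
y, dy = eval_series(coeffs, 0.5)
print(y, math.log(1.5))
print(dy, 1.0 / 1.5)
```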

Plotting the absolute error at $$x_1$$ as a function of $$n$$ gives:
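The plot itself is not reproduced here, but the underlying computation can be sketched: for each truncation order $$n$$, form $$|y_n(x_1) - y_{\text{ref}}(x_1)|$$ against the numerical reference and take the $$n$$ that minimizes it. The error model below is synthetic (a geometric truncation term plus a growing round-off floor, purely for illustration); the real values come from the series and the ODE solver:

```python
# Synthetic stand-in for |y_n(x1) - y_ref(x1)|: truncation error falls
# geometrically while an accumulating round-off floor eventually dominates,
# producing a minimum at a finite n. (Illustrative constants only.)
def abs_error(n):
    truncation = 0.5 ** n
    roundoff = 1e-4 * n
    return truncation + roundoff

orders = range(1, 41)
errors = [abs_error(n) for n in orders]
best_n = min(orders, key=abs_error)
print(best_n, abs_error(best_n))
```

The same argmin-over-$$n$$ selection, applied to the actual error data, is what identifies $$n = 15$$ in the report.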

The minimum error is thus found at $$n = 15$$ for both $$y_n(x_1)$$ and $$y_n'(x_1)$$ and is roughly:

This is a couple of orders of magnitude greater than the desired $$10^{-5}$$. The error grows for $$n > 15$$ because round-off error in the finite-precision computation comes to dominate once the series truncation error becomes small.

Therefore, the initial conditions for the next iteration, which extend the domain of accuracy of the analytical solution, are: $$\begin{align} y_{15}(x_1) &= -22.1530 \\ y_{15}'(x_1) &= -55.9732 \end{align}$$
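These values seed the power-series method restarted about the new expansion point $$\hat x = x_1 = 0.9$$: in a Taylor expansion about $$\hat x$$, the zeroth and first coefficients equal the function value and the derivative at the center, so the mapping is (the remaining $$a_k$$ follow from the ODE's recurrence relation, not shown here):

```latex
y(x) = \sum_{k=0}^{\infty} a_k (x - 0.9)^k,
\qquad
a_0 = y_{15}(0.9) = -22.1530,
\qquad
a_1 = y_{15}'(0.9) = -55.9732
```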