User:Elve~enwikiversity

Components for the project report

Introduction
The subject of this work is the order of accuracy of numerical methods for solving ordinary differential equations. When we mention "order of accuracy" for a particular numerical method, we usually mean the order of the global truncation error. The global error is the cumulative error in the numerical solution produced over the interval on which we need to solve the ODE. The local truncation error is the error in the numerical solution (round-off errors produced in the computation are excluded) generated at a particular step, when the previous step solution is considered exact (although it is not exact, unless the previous step is the boundary or initial condition). The local truncation error is usually denoted as $$O(h^{(n+1)})\,$$, where n is the order of accuracy on the entire interval of the differential equation and h is the step size (for a fixed step size discretization). The importance of the order of a numerical method is reflected in the fact that we need fewer steps to numerically solve an ODE with a prescribed error tolerance if we use a higher order numerical method. This issue becomes very important when we try to solve an ODE that describes a relatively fast changing physical process, such as internal combustion in IC engines, vibrations in structures, etc.

The following two examples show how to determine the order of a numerical method. The first example deals with a family of second order Runge-Kutta methods. It shows how we can derive the conditions on the parameters introduced in the general form of the method, and the possible choices for those parameters. The second example can be used as an exercise, since some steps are hidden and can be revealed by selecting the marked button to show the missing steps that complete the derivation. The example shows how the conditions are derived for a Runge-Kutta type general form with slope evaluations at three different locations within the current step. The main discussion concerns the comparison of the Taylor series expansion and the corresponding numerical method recurrence equation. Based on this term-by-term comparison, we draw a conclusion about the order of the local truncation error, which is then used to make a statement about the order of the method. The examples are followed by a concept quiz, which lets the user review the theoretical details used in the examples. Finally, we end the analysis with a conclusion.

Motivation
The main reason for choosing the topic of determining the order of a numerical method is the relative scarcity of examples showing how the order is actually determined. Numerical analysis pages on Wikiversity/Wikipedia quite frequently mention the "order of the method", but contain very few proofs of such statements. It is very often sufficient to know the order of a method without getting into the details of how to prove it. However, for a thorough understanding of a method, we do need to know those details. This is particularly the case when we need to modify a method, or when we need to combine two or more methods (such as using a lower order method for computing the initial points for a multistep method, etc.)

Example 1. Determination of the parameters to establish a second order Runge Kutta method
Let a single step numerical method for solving an ODE of the type

$$y'(t)=f(t,y(t)),\quad y(t_0)=y_0\,$$ (1.1)

be given by the following [2,4] recurrence formula

$$ y_{n+1}=y_n+h(a_1k_1+a_2k_2) \,$$ (1.2)

with

$$k_1=f(t_n,y_n)\,$$

$$k_2=f(t_n+ph,y_n+hqk_1)\,$$

Using Taylor series expansion about $$t_n\,$$, the following can be obtained

$$y(t_n+h)=y(t_n)+hy'(t_n)+\frac{h^2}{2}y''(t_n)+\frac{h^3}{6}y'''(t_n)+O(h^4)\,$$

Assuming that the previous point in (1.2) is exact (which we may do, because we are analyzing the local truncation error, i.e. the error generated at the last step), we will denote $$ y(t_n)=y_n, y(t_n+h)=\overline{y_{n+1}}\,$$. Therefore, both $$y_n\,$$ and $$\overline{y_{n+1}}\,$$ are considered exact values of $$y(t)\,$$ at $$t=t_n\,$$ and $$t=t_n+h\,$$, respectively.

Then,

$$\overline{y_{n+1}}=y_n+hy'(t_n)+\frac{h^2}{2}y''(t_n)+\frac{h^3}{6}y'''(t_n)+O(h^4)\,$$ (1.3)

Using the differential equation (1.1) and replacing the derivatives of y, (1.3) becomes

$$\overline{y_{n+1}}=y_n+hf(t_n,y_n)+\frac{h^2}{2}f'(t_n,y_n)+\frac{h^3}{6}f''(t_n,y_n)+O(h^4)\,$$ (1.4)

where

$$f'(t_n,y_n)=\left(\frac{\partial f}{\partial t}+\frac{\partial f}{\partial y}\frac{dy}{dt}\right)_{t_n,y_n}\,$$ (1.5)

or in the compact form (1.5) is $$f'(t_n,y_n)=\left(f_t+f_yf\right)_{t_n,y_n},\,$$

and

$$f''(t_n,y_n)=\left(\frac{\partial f'}{\partial t}+\frac{\partial f'}{\partial y}\frac{dy}{dt}\right)_{t_n,y_n}\,$$ (1.6)

or in the compact form (1.6) is $$f''(t_n,y_n)=\left ( f_{tt}+2f_{ty}f+f_tf_y+f_{yy}f^2+f_yf_yf \right )_{t=t_n,y=y_n}\,$$

After substituting the expressions for $$f'$$ and $$f''$$ into the Taylor series expansion (1.4), the following is obtained

$$\overline{y_{n+1}}=y_n+hf+\frac{h^2}{2}\left(f_t+f_yf\right)+\frac{h^3}{6}\left(f_{tt}+2f_{ty}f+f_tf_y+f_{yy}f^2+f_y^2f\right)+O(h^4)\,$$ (1.7)

where all functions on the right hand side are evaluated at $$(t_n,y_n)\,$$.

Now, our objective is to adjust the method's recurrence equation (1.2), such that it can be compared, term by term, with the equation obtained using the Taylor series expansion (1.7). This step requires the Taylor series expansion of the $$ f(t_n+ph,y_n+hqk_1)\,$$ term about the $$(t_n,y_n)\,$$ point, as follows.

$$f(t_n+ph,y_n+hqk_1)=f+phf_t+qhk_1f_y+\frac{h^2}{2}\left(p^2f_{tt}+2pqk_1f_{ty}+q^2k_1^2f_{yy}\right)+O(h^3)\,$$ (1.8)

or, since $$k_1=f\,$$, the compact symbolic form of (1.8) is:

$$f(t_n+ph,y_n+hqf)=f+h\left(pf_t+qf_yf\right)+\frac{h^2}{2}\left(p^2f_{tt}+2pqf_{ty}f+q^2f_{yy}f^2\right)+O(h^3)\,$$ (1.9)

By substituting (1.9) into (1.2), the following is obtained

$$y_{n+1}=y_n+h(a_1+a_2)f+h^2a_2\left(pf_t+qf_yf\right)+\frac{h^3}{2}a_2\left(p^2f_{tt}+2pqf_{ty}f+q^2f_{yy}f^2\right)+O(h^4)\,$$ (1.10)

Finally, the method's equation (1.10) and the exact one step solution (1.7) can be compared. By subtracting (1.10) from (1.7), the following error expression is obtained.

$$\overline{y_{n+1}}-y_{n+1}=h\left(1-a_1-a_2\right)f+h^2\left(\frac{1}{2}-a_2p\right)f_t+h^2\left(\frac{1}{2}-a_2q\right)f_yf+Ch^3+O(h^4)\,$$ (1.11)

where

$$C=\left(\frac{1}{6}-\frac{a_2p^2}{2}\right)f_{tt}+\left(\frac{1}{3}-a_2pq\right)f_{ty}f+\left(\frac{1}{6}-\frac{a_2q^2}{2}\right)f_{yy}f^2+\frac{1}{6}\left(f_tf_y+f_y^2f\right)\,$$ (1.12)

It can be noticed that the multiplier expression of $$h^3\,$$ in (1.12) does not cancel out in (1.11) for any combination of the parameters, since there are terms that appear only once, without any parameter. This means that the dominant term in the error expression cannot be of higher order than $$O(h^3)\,$$. In order to achieve a local truncation error of the third order, all terms in the error expression containing $$ h^0, h^1, h^2\,$$ must be eliminated by adjusting the four parameters. By satisfying equations (1.13-1.15), the local error is of $$O(h^3)\,$$ and, consequently, the global error is of order $$O(h^2)\,$$. Therefore, the resulting method is a second order method.

The equation set that must be satisfied is the following

$$a_1+a_2=1\,$$ (1.13)

$$a_2p=\frac{1}{2}\,$$ (1.14)

$$a_2q=\frac{1}{2}\,$$ (1.15)

There are three equations (1.13, 1.14, 1.15) but four unknown parameters. This means that one parameter must be chosen. Now, it is tempting to ask whether the extra parameter (an additional degree of freedom) could be used to cancel the next term in the error expression. However, we have already seen that this is not possible in this case, due to the additional terms that are not multiplied by any parameter.

Since one parameter can be chosen, the second order method does not have a unique form.
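The one-parameter family can be tabulated with a short sketch (Python is assumed here; the function name is illustrative). Given a chosen weight $$a_2\,$$, the conditions (1.13-1.15) determine the remaining parameters:

```python
def rk2_parameters(a2):
    """Solve the second order conditions a1 + a2 = 1, a2*p = a2*q = 1/2
    for a chosen weight a2 != 0 (the free parameter)."""
    if a2 == 0:
        raise ValueError("a2 = 0 violates the last two order conditions")
    a1 = 1.0 - a2
    p = q = 0.5 / a2
    return a1, a2, p, q

print(rk2_parameters(0.5))  # case 1 below: (0.5, 0.5, 1.0, 1.0)
print(rk2_parameters(1.0))  # case 2 below: (0.0, 1.0, 0.5, 0.5)
```

The two choices printed above correspond exactly to the two cases worked out next.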


 * (case 1) Choose $$a_1=a_2\,$$, then $$a_1=a_2=1/2, p=q=1\,$$

The method equation (1.2) for these parameters becomes

$$ y_{n+1}=y_n+\frac{h}{2}(k_1+k_2) \,$$

with

$$ k_1=f(t_n,y_n)\,$$

$$k_2=f(t_n+h,y_n+hk_1)\,$$

or in a compact form

$$ y_{n+1}=y_n+\frac{h}{2}(f(t_n,y_n)+f(t_n+h,y_n+hf(t_n,y_n))) \,$$
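The second order claim for case 1 (known as Heun's method) can be checked numerically. The following sketch (Python; the test problem $$y'=y, y(0)=1\,$$ is an illustrative choice, not part of the derivation) integrates to t = 1 with successively halved step sizes; the global error should shrink by a factor of about $$2^2=4\,$$ per halving:

```python
import math

def heun_step(f, t, y, h):
    # Case 1 parameters: a1 = a2 = 1/2, p = q = 1
    k1 = f(t, y)
    k2 = f(t + h, y + h * k1)
    return y + 0.5 * h * (k1 + k2)

def solve(f, t0, y0, t_end, n):
    # Integrate with n fixed steps of the one step method
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = heun_step(f, t, y, h)
        t += h
    return y

# Test problem: y' = y, y(0) = 1, exact solution y(1) = e
f = lambda t, y: y
errors = [abs(solve(f, 0.0, 1.0, 1.0, n) - math.e) for n in (50, 100, 200)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print([round(r, 2) for r in ratios])  # both ratios close to 4
```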


 * (case 2) If $$ a_1=0\,$$ is chosen (notice that $$ a_2=0\,$$ cannot be selected, because of the last two equations), then

$$ y_{n+1}=y_n+hk_2 \,$$

$$k_1=f(t_n,y_n)\,$$

$$k_2=f(t_n+1/2h,y_n+1/2hk_1)\,$$

or in a compact form

$$ y_{n+1}=y_n+hf(t_n+\frac{h}{2},y_n+\frac{h}{2}f(t_n,y_n)) \,$$
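A quick way to see the claimed local order $$O(h^3)\,$$ for case 2 (known as the explicit midpoint method) is numerical: halving h should reduce the one-step error by roughly $$2^3=8\,$$. A Python sketch, using $$y'=y\,$$ as an illustrative test problem:

```python
import math

def midpoint_step(f, t, y, h):
    # Case 2 parameters: a1 = 0, a2 = 1, p = q = 1/2
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    return y + h * k2

# Local truncation error on y' = y, y(0) = 1: after one step the
# exact value is e^h, and the previous point is exact by assumption
f = lambda t, y: y
local_errors = [abs(midpoint_step(f, 0.0, 1.0, h) - math.exp(h)) for h in (0.1, 0.05)]
ratio = local_errors[0] / local_errors[1]
print(round(ratio, 2))  # close to 8
```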

Example/Exercise 2. Computing the order of a single step ODE numerical method with three slope evaluations (third order Runge-Kutta)
Let the recurrence equation of a method of Runge-Kutta type, with three slope evaluations at each step, be given by

$$ y_{n+1}=y_n+h(a_1k_1+a_2k_2+a_3k_3) \,$$ (2.1)

with

$$ k_1=f(t_n,y_n),\,$$

$$ k_2=f(t_n+p_1h,y_n+hq_{11}k_1),\,$$

$$ k_3=f(t_n+p_2h,y_n+h(q_{21}k_1+q_{22}k_2)),\,$$

Taylor series expansion of $$y(t_n+h)\,$$ about $$t_n\,$$ is the same as in Example 1. Therefore, we will just use the final expression (1.7), since the derivation procedure is the same. For convenience, the final expression is repeated here; it will serve as the reference equation for the comparison with the method's recurrence equation. Since the formulas for the given form of the recurrence equation get complicated, we will use the compact symbolic notation for the derivatives introduced in Example 1.

$$\overline{y_{n+1}}=y_n+hf+\frac{h^2}{2}\left(f_t+f_yf\right)+\frac{h^3}{6}\left(f_{tt}+2f_{ty}f+f_tf_y+f_{yy}f^2+f_y^2f\right)+O(h^4)\,$$ (2.2)

The Taylor expansion of the terms in (2.2) is shown up to $$O(h^4)\,$$, rather than up to $$O(h^5)\,$$ as we should do in order to check whether the next higher order terms eventually cancel out; instead, we will assume that the method cannot achieve better local accuracy than fourth order, or equivalently, a global error better than third order. This spares us the third level expansion of the two variable function f, which has 18 terms and would be inappropriate due to its length (even if the compact symbolic notation is used).

After we prepared the Taylor series expansion, we need to adjust the method's recurrence equation such that it can be compared with the Taylor series (2.2).

The method's recurrence equation, with each k replaced by its Taylor expansion about $$(t_n,y_n)\,$$, becomes (2.3):

$$y_{n+1}=y_n+ha_1f(t_n,y_n)+ha_2(f+p_1hf_t+q_{11}hf_yf+\frac {h^2}{2}(f_{tt}p_1^2+f_{yy}q_{11}^2f^2+2f_{ty}p_1q_{11}f)+O_2(h^3))+\,$$

$$ha_3(f+p_2hf_t+f_y\left(q_{21}hf+q_{22}h(f+p_1hf_t+q_{11}hf_yf+O_3(h^2))\right)+\frac {1}{2}(f_{tt}(p_2h)^2+f_{yy}h^2(q_{21}f+\,$$

$$q_{22}(f+\underbrace{p_1hf_t+q_{11}hf_yf+O_4(h^2)}_{O_5(h)}))^2+2f_{ty}p_2h^2(q_{21}f+q_{22}(f+\underbrace{p_1hf_t+q_{11}hf_yf+O_4(h^2)}_{O_5(h)}))))\,$$

Now, we need to group the terms in a similar way as they are grouped in the Taylor series (2.2), such that we can establish the conditions on the parameters that will yield the same terms as in the Taylor expansion up to the terms containing $$h^4\,$$. The grouped form is

$$y_{n+1}=y_n+h(a_1+a_2+a_3)f+h^2\left[(a_2p_1+a_3p_2)f_t+\left(a_2q_{11}+a_3(q_{21}+q_{22})\right)f_yf\right]+\,$$

$$h^3\left[\frac{a_2p_1^2+a_3p_2^2}{2}f_{tt}+\left(a_2p_1q_{11}+a_3p_2(q_{21}+q_{22})\right)f_{ty}f+\frac{a_2q_{11}^2+a_3(q_{21}+q_{22})^2}{2}f_{yy}f^2+a_3q_{22}p_1f_tf_y+a_3q_{22}q_{11}f_y^2f\right]+O(h^4)\,$$ (2.4)

By comparing the two expressions (2.4) and (2.2), the following system of equations is obtained.

$$a_1+a_2+a_3=1\,$$ (2.5)

$$a_2p_1+a_3p_2=\frac{1}{2}\,$$ (2.6)

$$a_2q_{11}+a_3(q_{21}+q_{22})=\frac{1}{2}\,$$ (2.7)

$$a_2p_1^2+a_3p_2^2=\frac{1}{3}\,$$ (2.8)

$$a_2p_1q_{11}+a_3p_2(q_{21}+q_{22})=\frac{1}{3}\,$$ (2.9)

$$a_3q_{22}p_1=\frac{1}{6}\,$$ (2.10)

$$a_2q_{11}^2+a_3(q_{21}+q_{22})^2=\frac{1}{3}\,$$ (2.11)

$$a_3q_{22}q_{11}=\frac{1}{6}\,$$ (2.12)

At first glance, the system is closed: there are eight equations (2.5 through 2.12), which matches the number of undetermined parameters. However, only six equations are independent; the rest can be obtained from those six. By dividing (2.10) by (2.12), we obtain $$q_{11}=p_1\,$$. Similarly, by subtracting (2.11) from (2.9), we see that $$p_2=q_{21}+q_{22}\,$$. When we substitute these two results into the remaining equations, it is evident that (2.6) and (2.7) are the same, and that (2.8) and (2.9) are the same. Therefore, two equations can be obtained from the other six, and we have to choose two parameters in order to obtain a solution for the rest.

For example, we can choose $$p_2=1, q_{11}=1/2\,$$; then we obtain the following recurrence equation.

$$ y_{n+1}=y_n+\frac{h}{6}(k_1+4k_2+k_3) \,$$ (2.13)

where

$$k_1=f(t_n,y_n)\,$$

$$k_2=f(t_n+1/2h,y_n+1/2hk_1)\,$$

$$k_3=f(t_n+h,y_n-hk_1+2hk_2)\,$$

The recurrence equation (2.13) is the well known third order Runge-Kutta method [1,3] (List of Runge–Kutta methods), which indicates that our approach was correct.
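The third order claim can also be checked numerically. The sketch below (Python; the test problem $$y'=y\,$$ is an illustrative choice) halves the step size and observes the global error shrink by about $$2^3=8\,$$:

```python
import math

def rk3_step(f, t, y, h):
    # Third order Runge-Kutta method (2.13)
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + h, y - h * k1 + 2.0 * h * k2)
    return y + (h / 6.0) * (k1 + 4.0 * k2 + k3)

def solve(f, t0, y0, t_end, n):
    # Integrate with n fixed steps
    h = (t_end - t0) / n
    t, y = t0, y0
    for _ in range(n):
        y = rk3_step(f, t, y, h)
        t += h
    return y

# Test problem: y' = y, y(0) = 1, exact solution y(1) = e
f = lambda t, y: y
errors = [abs(solve(f, 0.0, 1.0, 1.0, n) - math.e) for n in (25, 50, 100)]
ratios = [errors[i] / errors[i + 1] for i in range(2)]
print([round(r, 2) for r in ratios])  # both ratios close to 8
```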

Quiz - Method order computations, local and global truncation error
{What is the main importance of having a higher order ODE method?
|type="()"}
- It is always better to increase the accuracy of the solution
+ It requires fewer steps for the same accuracy restriction
- We need the method to be more consistent
- We need the method to be more stable
- It provides a better margin of stability

{The number of slope predictions (the number of k's) in an RK method is related to the order of the method as follows. The order of the method is:
|type="()"}
- exactly equal to the number of slope predictions
- equal to the number of k's (slope evaluations) plus one
+ upper bounded by the number of k's
- not related to the number of k's

{An n-th order method means that
|type="()"}
- the global truncation error is of the order $$O(h^{n+1})\,$$
- the local truncation error is of the order $$O(h^{n})\,$$
+ the global truncation error is of the order $$O(h^{n})\,$$
- the maximum of the global and local error is of n-th order

{The order of the local truncation error is related to the global truncation error in the following way:
|type="()"}
- they are about equal
- they are not related
- the global is one order higher than the local
+ the local is one order higher than the global

{Is the following true? The global truncation error is always less than the local, since the errors partially cancel out over the steps.
|type="()"}
- true
+ false

{What is the highest order that we can achieve with a Runge Kutta type method by using 5 k's?
|type="()"}
- We can achieve 5th order
- That method is not used, since it is not consistent
+ We can achieve 4th order
- We can achieve even higher order; it is sixth order accurate

{How is the order of single step methods related to the consistency of the method?
|type="()"}
+ At least first order indicates that the method is consistent
- To be consistent, a method must be of second order or higher
- Not related at all; a single step method can be inconsistent and be of any order

{What is the main "tool" to prove the order of a numerical method for solving ODEs of type $$y'=f(t,y(t))\,$$?
|type="()"}
- Solving a specific "stiff" problem $$y'=f(t,y(t))=-Ky\,$$ and showing that it converges for a specific real positive K
- Taylor series expansion for one variable
+ Taylor series expansion for one and two variables
- Solving the ODE analytically and then numerically

{If we show that all terms in a numerical method's recurrence equation match the terms in the Taylor series expansion up to the terms with $$h^n\,$$, but we do not analyze whether further cancellation of terms exists, the following is true:
|type="()"}
- The method is of order n
- The method is of order n, since the global error is of the order $$O(h^{n})\,$$
+ The order of the method could be higher than n-1, but it is at least n-1

{If we use an explicit n-th (n>1) order accurate multistep method with s steps (s>2) to solve an ODE, but we use an (n-1)-th order method to calculate the missing initial points needed to start the multistep method, what is the global order of accuracy of the calculation?
|type="()"}
- There is no such combined method; we cannot do that
- The overall method is n-th order, since we use the (n-1)-th order method only for a few initial points
+ The overall method is of order n-1, since the initial error from the less accurate method remains in the cumulative (global) error
- The order of the method could be either n or n-1
- The order of the method is something between n-1 and n, just like the RK45 method

{What do we get as an order of error from the following expression $$\frac{1}{h}O(h^2)+10O(h^3)\,$$?
|type="()"}
- Something between $$O(h^2)\,$$ and $$O(h^3)\,$$
- $$O(h^2)\,$$
- $$O(h^3)\,$$
+ $$O(h)\,$$

{What do we get as an order of error from the following expression $$\sin(h^2)\frac{1}{h}O(h^2)\,$$?
|type="()"}
- Something between $$O(h^2)\,$$ and $$O(h)\,$$
- $$O(h^2)\,$$
+ $$O(h^3)\,$$
- $$O(h)\,$$

{What do we get as an order of error from the following expression $$e^h\cos(h)\frac{1}{h}O(h^2)\,$$?
|type="()"}
- It must be of order $$O(h^2)\,$$
- $$O(h^3)\,$$
+ $$O(h)\,$$

Discussion: Proposed and actual changes/contributions to the website
The added material within this topic is somewhat different from the proposed material. The main deviation from the proposal is that the second example was turned into a third order method instead of the proposed fourth order one, due to the complexity and length of the Taylor series expansion of a two variable function up to the fourth order terms with respect to the step size h. The second deviation is that the mentioned exercise is shown as a second example, and an additional quiz is included for users to test their understanding of the subject material. The added material will still be adjusted several times, as interesting examples or quiz questions are added.

Conclusion
The main reason that we need to consider the order of a numerical method for solving ODEs is that it directly determines the step size, i.e. the number of steps for which we need to evaluate the recurrence equation. This directly influences the amount of computational work needed to obtain the prescribed accuracy.

The main approach for computing the order of a numerical method is to first use the Taylor series expansion of the exact solution to obtain the next value, namely $$y_{n+1}\,$$, using the previous value $$y_n\,$$ and the right hand side function $$ f(t,y(t))\,$$ evaluated at $$t_n\,$$. We cannot say in advance up to which order we need to expand those terms in the Taylor series, since we are solving for the order of the method. The general rule is to expand the terms one order higher than we expect the method order to be.

The second part of computing the method order is to write all terms in the given method's recurrence equation in terms of the functions f and y evaluated at $$t_n\,$$. Hence, evaluations of the function f appearing in the recurrence equation at points other than $$(t_n,y_n)\,$$ need to be replaced with a truncated Taylor series expansion about $$(t_n,y_n)\,$$.

The third part is to compare the two expressions obtained in the two aforementioned ways. The power of h up to which the terms from the adjusted recurrence equation match the terms in the Taylor series expansion determines the order of the local truncation error. Once we have computed the order of the local truncation error, the global truncation error is one order less than the local error order.

Since the order of a numerical method for ODEs is, by definition, equal to the global error order, we only need to compute the local truncation error order and subtract one from it.
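The three-part procedure above can also be checked empirically: running a method with step sizes h and h/2 and taking the base-2 logarithm of the error ratio estimates the global order. A minimal Python sketch (the function name `observed_order` and the forward Euler reference with test problem $$y'=y\,$$ are illustrative choices):

```python
import math

def observed_order(step, f, t0, y0, t_end, exact, n=100):
    """Estimate the global order of a one step method as log2 of the
    ratio of global errors obtained with step sizes h and h/2."""
    def solve(n_steps):
        h = (t_end - t0) / n_steps
        t, y = t0, y0
        for _ in range(n_steps):
            y = step(f, t, y, h)
            t += h
        return y
    e1 = abs(solve(n) - exact)
    e2 = abs(solve(2 * n) - exact)
    return math.log2(e1 / e2)

def euler_step(f, t, y, h):
    # Forward Euler, known to be first order, used as a reference
    return y + h * f(t, y)

p = observed_order(euler_step, lambda t, y: y, 0.0, 1.0, 1.0, math.e)
print(round(p, 2))  # close to 1
```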

There are some methods that do not have a precisely determined order. This is usually the case with predictor corrector methods, like the Runge-Kutta 45 method.

Project Proposal: Examples of the order determination for ordinary differential equations numerical methods
This is the Project Proposal for adding new material in the section of Numerical analysis at Wikiversity

Title: Numerical method for ordinary differential equations: Examples of order determination

Introduction

The subject of this work is order of accuracy of numerical methods for solving ordinary differential equations. When we mention "order of accuracy" for a particular numerical method, we usually mean the order of the global truncation error. The local truncation error is usually denoted as $$O(h^{(n+1)})\,$$, where n is the order of accuracy on the entire interval of the differential equation and h is the step size (for fixed step size discretization).

Review of some of existing methods order (This will be a list on the website with short descriptions of the order for existing known methods)

Plan of specific steps/activities

In the frame of the project of adding new material to the website, the following activities are planned.

The activities will consist of adding:
 * 1) New examples of determination of local and global truncation error
 * 2) *The general approach to deriving the conditions on the parameters involved in second order Runge-Kutta formulas (starting from general parameters, conditions such as where to evaluate intermediate steps, the relations among the weighting coefficients, and the consistency conditions will be derived)
 * 3) *Examples of how to prove second order accuracy for at least two different Runge-Kutta methods
 * 4) *Examples of how to prove fourth order accuracy for at least two different Runge-Kutta methods
 * 5) *New examples (different from the existing ones on Wikipedia) of how to prove the order of a multistep method.
 * 6) Exercises for determination of the truncation errors order
 * 7) Links to additional specific material where necessary and/or fairly helpful.

Conclusion

The purpose of this project is to add new material, primarily examples, to the website, such that an interested reader can become familiar with how the order of a numerical method is determined. A significant amount of material in the form of formulas and derivations with short explanations will be added. Runge-Kutta (single step) methods and a couple of multistep method formulas will be analyzed. The results of the analysis will be in the form of $$O(h^{(n+1)})\,$$ for each particular method analyzed.

References

Various textbooks, articles and websites will be used to acquire several formulas for methods that will be analyzed. Those references will be added.

test for equation numbering

Five interesting facts about Wikipedia/Wikiversity websites are:


 * 1) It is an open source platform where everybody can contribute an appropriate study material
 * 2) A user can create an account relatively easily, but even without an account a user can edit existing pages
 * 3) Beginners can practice editing pages using the Sandbox. This is the place where one can write related or unrelated material. Since the pages in the "Sandbox" are deleted regularly, it is not important to be precise here (unless the user wants to transfer the page from the Sandbox as a regular page).
 * 4) A file can be uploaded to the website and a link created using  [[Media:Filename]] syntax.
 * 5) The symbols/formulae can be added by typing textual commands (unlike with the MS Word equation editor).

An example of a formula follows:


 * $$Pressure=\chi(\omega)\left(\mathcal{P}\int\limits_{-\infty}^{\infty}{F_x(\alpha,x,y,z)\over\alpha-\alpha_1}d\alpha\right)^{\nabla\left({\sum\limits_{k=0}^{k=\infty}{\frac{\partial G_k(x,y,z)}{\partial x}}}\right)}$$

Newton's method convergence rate

(a) By reading the page on Newton's method, one can notice that there is no proof of the convergence rate. Although there are many ways in which this method can fail, its main advantage over many other methods is that its convergence rate is quadratic. It would be convenient to have a proof directly displayed or attached as a file via a link. This would also help students dealing with this or similar subjects to see the general approach to analyzing the convergence rate of a numerical method, such that a similar analysis can be applied to other numerical methods.


 * For equation numbering, see w:Help:Displaying_a_formula.
 * Only number equations that you actually refer to.
 * Put in cross links to topics you mention, like Taylor's Theorem

(b) Proof of quadratic convergence for Newton's iterative method. The function $$f(x)\,$$ can be represented by a Taylor series expansion about a point that is considered to be close to a root of f(x). Let us denote this root as $${\alpha}$$. The Taylor series expansion of f(x) about a point $$x_0\,$$ is:
 * $$ f(x)=f(x_0)+{f^\prime(x_0)}\left(x-x_0\right)+\frac 1 {2!}{f^{\prime\prime} (x_0)}\left(x-x_0\right)^2+\frac 1 {3!}{f^{\prime\prime\prime} (x_0)}\left(x-x_0\right)^3+...                                                $$ ______(1)

For x=$$\alpha$$; f(x)=f($$\alpha$$)=0, since x=$$\alpha$$ is the root. Then, (1) becomes:
 * $$ 0=f(x_0)+{f^\prime(x_0)}\left(\alpha-x_0\right)+\frac 1 {2!}{f^{\prime\prime} (x_0)}\left(\alpha-x_0\right)^2+\frac 1 {3!}{f^{\prime\prime\prime} (x_0)}\left(\alpha-x_0\right)^3+...                                                $$ ______(2)

Using the Lagrange form of the Taylor series expansion remainder
 * $$ R_m=\frac 1 {(m+1)!}f^{(m+1)}(\xi)(\alpha-x_0)^{(m+1)}, \text{ where } \xi\in[x_0,\alpha] $$ ______(3)

the equation (2) becomes
 * $$ 0=f(x_0)+{f^\prime(x_0)}\left(\alpha-x_0\right)+\frac 1 {2!}{f^{\prime\prime} (\xi)}\left(\alpha-x_0\right)^2 $$ ______(4)

It should be noted that the point $$x_0$$ stands for the initial guess, but the same form of expansion is valid for Taylor series expansion about an arbitrary point $$x_n\,$$ obtained after n iterations
 * $$ 0=f(x_n)+{f^\prime(x_n)}\left(\alpha-x_n\right)+\frac 1 {2!}{f^{\prime\prime} (\xi)}\left(\alpha-x_n\right)^2 $$ ______(5)

By dividing equation (5) by $$ f^\prime(x_n)\,$$ and rearranging the expression, the following can be obtained:
 * $$ \frac {f(x_n)}{f^\prime(x_n)}+\left(\alpha-x_n\right)=-\frac 1 {2!}\frac {f^{\prime\prime} (\xi)}{f^\prime(x_n)}\left(\alpha-x_n\right)^2 $$ ______(6)

Now, the goal is to bring in the new iteration n+1 and relate it to the old one, n. At this point, Newton's formula can be used to transform expression (6). Newton's formula is given by
 * $$ x_{n+1}=x_{n}-\frac {f(x_n)}{f^\prime(x_n)}\,$$ _____(7)

The intention is to eliminate the (generally unknown) root $$\alpha$$ from (6) using formula (7), and to obtain the error terms involved in the relation. Equation (6) can be adjusted for expression (7):
 * $$ \underbrace{\underbrace{\frac {f(x_n)}{f^\prime(x_n)}-x_n}_{-x_{n+1}}+\alpha}_{\epsilon_{n+1}}=-\frac 1 {2!}\frac {f^{\prime\prime} (\xi)}{f^\prime(x_n)}\underbrace{\left(\alpha-x_n\right)^2}_{{\epsilon^2}_{n}} $$ ______(8)

That is,
 * $$ \epsilon_{n+1}=-\frac 1 {2!}\frac {f^{\prime\prime} (\xi)}{f^\prime(x_n)}{{\epsilon}^2}_n \, $$ _____(9)

After taking the absolute value of both sides of (9) (a scalar function of a scalar variable is considered here), the following is obtained:
 * $$ \left | {\epsilon_{n+1}}\right | = {\frac 1 {2}\left |{\frac {f^{\prime\prime} (\xi)}{f^\prime(x_n)}}\right |{{\epsilon}^2}_n} \, $$ _____(10)

Equation (10) shows that the convergence rate is quadratic if "no problem" occurs with the first derivative (e.g. the cases when the first derivative at the root is zero, such as $$ f(x)=(x-a)^3, a\in R \,$$) and the initial guess is not "too far away" from the root, such that $$ f^{\prime\prime} (\xi) \approx f^{\prime\prime} (x_n)\,$$. Finally, (10) can be expressed in the following way:
 * $$ \left | {\epsilon_{n+1}}\right | \le M{{\epsilon}^2}_n \, $$ _____(11)

where M would be the supremum, over the interval between $$ x_n\,$$ and $$ \alpha\, $$, of the variable coefficient of $${\epsilon^2}_n \,$$.
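The quadratic convergence expressed by (11) can be observed numerically. The sketch below (Python; the test function $$f(x)=x^2-2\,$$ with root $$\alpha=\sqrt{2}\,$$ is an arbitrary illustrative choice for which $$f'(\alpha)\neq 0\,$$, so (10) applies) shows each error bounded by the square of the previous one:

```python
import math

def newton(f, fprime, x0, iterations):
    # Newton's iteration (7): x_{n+1} = x_n - f(x_n) / f'(x_n)
    xs = [x0]
    for _ in range(iterations):
        x = xs[-1]
        xs.append(x - f(x) / fprime(x))
    return xs

# Illustrative problem: f(x) = x^2 - 2, root alpha = sqrt(2)
alpha = math.sqrt(2.0)
xs = newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, 1.5, 3)
errors = [abs(x - alpha) for x in xs]
print(errors)  # each error roughly the square of the previous one
```

Only three iterations are run: beyond that, the error hits the floating point round-off floor and the squaring pattern is no longer visible.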

(The equation numbering works on Wikipedia, but it does not work on Wikiversity, I am moving my page to Wikipedia. The new page is http://en.wikipedia.org/wiki/User:Elvek) (add on November 2, 2010: the problem with equation numbering has been solved by user mjmohio, the numbering works now)

Interesting Exercise
What do you think is $$2\lim_{t\rightarrow t_*} \frac{\sin(t-t_*)\tan(t-t_*)}{(t-t_*)^2}$$?

After you try your best to answer, click here to see the correct answer: 2

A more complete answer is here: $$2\lim_{t\rightarrow t_*} \frac{\sin(t-t_*)\tan(t-t_*)}{(t-t_*)^2}=2\lim_{t\rightarrow t_*} \frac {1} {\cos(t-t_*)}\lim_{t\rightarrow t_*} \frac {\sin^2(t-t_*)} {(t-t_*)^2}= 2\left[ \lim_{t\rightarrow t_*} \frac {\sin(t-t_*)} {(t-t_*)}\right ]^2=2$$
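For a numerical sanity check of the limit (the value $$t_*=1\,$$ below is an arbitrary choice):

```python
import math

def g(t, t_star):
    # The expression under the limit: 2*sin(t-t_*)*tan(t-t_*)/(t-t_*)^2
    d = t - t_star
    return 2.0 * math.sin(d) * math.tan(d) / d ** 2

# Approach t_* = 1 from the right with shrinking offsets
values = [g(1.0 + 10.0 ** (-k), 1.0) for k in (1, 2, 3)]
print(values)  # tends to 2
```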

Quizzes
{SVD decomposition exists
|type="()"}
- only in special cases of an orthogonal matrix
+ always, for any square matrix
- only for matrices with a zero determinant (singular matrices)
- only when all eigenvalues of a symmetric matrix are positive (positive definite)
- maybe, I have no idea

{ The integral $$\int_0^\pi \left[\begin{array}{c c}t & t^2 \\ \sin(t) & t\cos(t)\end{array} \right]dt $$ is { $$ \left[\begin{array}{c c}\pi^2/2 & \pi^3/3 \\ 2 & -2\end{array} \right]$$ }
|type="{}"}

The determinant of the integral above is { $$ -\pi^2(1+ \frac {2}{3} \pi) $$ }

{ For the following right-hand-side vector b
|type="{}"}

$$ \displaystyle \ b_1 = -1$$ $$ \displaystyle \ b_2 = -5 $$ $$ \displaystyle \ b_3 = -5 $$

in the following matrix equation $$ \begin{bmatrix} i&0&0\\0&5i&0\\0&0&5i\end{bmatrix}\begin{bmatrix} x_1\\x_2\\x_3\end{bmatrix} = \begin{bmatrix} b_1\\b_2\\b_3\end{bmatrix} $$

type in the unique solution

$$ \displaystyle \ x_1 = $$ { i } $$ \displaystyle \ x_2 = $$ { i } $$ \displaystyle \ x_3 = $$ { i }
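Since the matrix above is diagonal, each component solves independently; a short Python sketch (using the built-in complex type, where `1j` plays the role of i) confirms the quiz answer:

```python
# Diagonal system: each component solves as x_k = b_k / A_kk
A_diag = [1j, 5j, 5j]
b = [-1.0, -5.0, -5.0]
x = [bk / ak for ak, bk in zip(A_diag, b)]
print(x)  # each component equals i
```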