User:Hh687711

A list of five things I learned about Wikipedia/Wikiversity

 * An amazing and astounding creation of the 21st century.
 * A good source for gaining a great deal of information and knowledge.
 * A collection of knowledge contributed by countless Wikipedia/Wikiversity users.
 * A thoughtful design for speakers of every language.
 * An open platform for expressing and sharing your skills and knowledge.

Some complicated math formulas

 * 1) The (sample) mean of a set of observations $$x_1, x_2, ..., x_n$$ is equal to $$\bar{X} = \frac{1}{n}\sum_{i=1}^{n}x_i$$


 * 2) The (sample) variance of a set of observations $$x_1, x_2, ..., x_n$$ is defined as $$s^2 = \frac{1}{n-1}\sum_{i=1}^{n}(x_i - \bar{X})^2$$


 * 3) The (sample) standard deviation is denoted by $$s$$ and is defined as the square root of the variance: $$s = \sqrt{s^2}$$


 * 4) The (sample) coefficient of variation of a set of observations $$x_1, x_2, ..., x_n$$ is $$cv = \frac{s}{\bar{X}}$$


 * 5) A normal distribution with mean $$\mu$$ and variance $$\sigma^2$$ has density $$f(x) = \frac{1}{\sqrt{2\pi\sigma^2}}e^{-\frac{1}{2\sigma^2}(x-\mu)^2}$$
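As a hedged illustration, the five formulas above can be computed with Python's standard library; the data values here are an assumption chosen only for the example:

```python
import math
import statistics

x = [2, 3, 5, 8, -10, 4]  # illustrative observations (an assumption)
n = len(x)

mean = sum(x) / n                                   # 1) sample mean
var = sum((xi - mean) ** 2 for xi in x) / (n - 1)   # 2) sample variance (n - 1 denominator)
sd = math.sqrt(var)                                 # 3) sample standard deviation
cv = sd / mean                                      # 4) coefficient of variation

# The hand-rolled values agree with the standard-library implementations:
assert mean == statistics.mean(x)
assert abs(var - statistics.variance(x)) < 1e-12
assert abs(sd - statistics.stdev(x)) < 1e-12

def normal_pdf(t, mu, sigma2):
    """5) Density of a normal distribution with mean mu and variance sigma2."""
    return math.exp(-((t - mu) ** 2) / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)
```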

A link to my edit on the Talk page
http://en.wikipedia.org/wiki/Talk:Bisection_method

It is a bad proposal, since it is not explained clearly and thoroughly. The page editor should check whether similar demonstrations already exist; if so, the editor can combine them with the new content that is to be added. Next time I will illustrate a theorem or a concept more compactly and carefully.

Correction: Given a continuous function f(x) defined on the interval [a, b], with f(a) and f(b) of opposite sign, the Intermediate Value Theorem guarantees a number c in (a, b) with f(c) = 0. The bisection method repeatedly halves subintervals of [a, b], at each step locating the half containing c.
 * Find points $$a_0$$ and $$b_0$$ such that $$a_0 < b_0$$ and $$f(a_0)$$ and $$f(b_0)$$ have opposite signs, that is, $$f(a_0) \times f(b_0) < 0$$.
 * Find the midpoint $$c_0 = (b_0+a_0)/2$$ and find $$f(c_0)$$.
 * If $$f(c_0)$$ = 0, then $$c = c_0$$, and we are done.
 * If $$f(c_0)$$ and $$f(a_0)$$ have the same sign, c belongs to $$(c_0, b_0)$$. Set $$a_1 = c_0$$ and $$b_1 = b_0$$.
 * If $$f(c_0)$$ and $$f(a_0)$$ have opposite signs, c belongs to $$(a_0, c_0)$$. Set $$a_1 = a_0$$ and $$b_1 = c_0$$.
 * Reapply the process to interval $$[a_1, b_1]$$.
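The steps above can be sketched as a short Python function. This is a minimal illustration: the tolerance, iteration cap, and the example function $$x^2 - 2$$ are assumptions chosen for the demonstration, not from the text.

```python
def bisect(f, a, b, tol=1e-10, max_iter=100):
    """Bisection method: find c in (a, b) with f(c) = 0, given f(a)*f(b) < 0."""
    if f(a) * f(b) >= 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    for _ in range(max_iter):
        c = (a + b) / 2                # midpoint of the current bracket
        fc = f(c)
        if fc == 0 or (b - a) / 2 < tol:
            return c                   # exact root found, or bracket small enough
        if fc * f(a) > 0:              # same sign as f(a): root lies in (c, b)
            a = c
        else:                          # opposite signs: root lies in (a, c)
            b = c
    return (a + b) / 2

# Example: the root of x^2 - 2 on [1, 2] is sqrt(2)
root = bisect(lambda x: x * x - 2, 1.0, 2.0)
```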

Example
Find the mean value $$\bar{x}$$ of the following data points:

x: 2, 3, 5, 8, -10, 4. Solution: By the definition of the mean,

$$\bar{x} = \frac{1}{n}\sum_{i=1}^{n}x_i$$.

Thus, $$\bar{x} = \frac{1}{6}\sum_{i=1}^{6}x_i = \frac{1}{6}(2 + 3 + 5 + 8 -10 +4) = 2 $$.
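A quick Python check of this arithmetic (also computing the sample variance of the same data with the $$n-1$$ definition given earlier):

```python
x = [2, 3, 5, 8, -10, 4]
n = len(x)
mean = sum(x) / n                                   # (2 + 3 + 5 + 8 - 10 + 4) / 6
var = sum((xi - mean) ** 2 for xi in x) / (n - 1)   # sample variance, n - 1 denominator
print(mean, var)  # 2.0 38.8
```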

Quiz
{What is $$\frac{1}{n}\sum_{i=1}^{n}x_i$$?
|type="()"}
- Variance value.
- Covariance.
+ Mean.
- Standard deviation.

{Find the variance $$s^2$$ of the following data points:
|type="{}"}

x: 2, 3, 5, 8, -10, 4

The variance $$\displaystyle s^2 = $$ { 38.8 }

Edit a Wikiversity page
Having reviewed the Numerical Analysis Wikiversity page, I found that the polynomial interpolation concept quiz was missing the error term of an interpolation polynomial. Hence, I added a question about the error term.

http://en.wikiversity.org/wiki/Topic:Numerical_analysis/Polynomial_interpolation_concept_quiz

Claims of Final Project Topic
My final project topic is "truncation error". I plan to use the Wikipedia page as a template and to improve some parts. I will explain the definitions of truncation error in more detail. Besides, I will add proofs of the local truncation error (LTE) and the global truncation error (GTE). Moreover, I will attach some examples illustrating how to determine and prove the order of a numerical method. I hope my project can help people who are interested in learning about truncation errors.


 * Do not duplicate things already on Wikipedia. You could add proofs in Wikiversity and link to them from Wikipedia. Avoid overlapping with Topic:Numerical Analysis/Order of RK methods. Mjmohio (talk) 16:28, 7 November 2012 (UTC)

Project Report for User:Hh687711
For Introduction to Numerical Analysis, Fall 2012.

Introduction
My final project is about Truncation Errors. The topic is important because truncation error is one of the most significant errors in Numerical Analysis, yet until now it has not been included in Topic:Numerical analysis. It is difficult to understand using only Wikipedia because the Wikipedia page is rather general. I try to take a much deeper look at this topic.

To facilitate learning of this topic, I added theory showing how the local truncation error tells us how far the numerical approximation fails to satisfy the differential equation. Moreover, I organized the proof of the relationship between local and global truncation errors and provided a graph illustrating it. I also added an exercise to develop skill in using the Taylor series to estimate truncation errors.

Contribution
I created the Truncation Errors page of the Numerical Analysis content on Wikiversity, which contains a brief description of truncation errors, the reason why they matter, and a proof of the relationship between local and global truncation errors.

I chose this particular example because I tried to present, in an easy-to-understand format, how to perform Taylor expansions to derive the approximation method and to find the truncation error.

I also edited Truncation error (numerical integration) to correct a typo in Relationship between local and global truncation errors: I revised $$\tau_{n} $$ to $$\tau_{n+1} $$, giving $$ e_{n+1} = e_n + h \Big( A(t_n, y(t_n), h, f) - A(t_n, y_n, h, f) \Big) + \tau_{n+1}. $$
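As an illustrative sketch not taken from the report, the link between step size and global error can be observed numerically: applying forward Euler to $$y' = y$$, $$y(0) = 1$$ (exact solution $$e^t$$), halving $$h$$ roughly halves the global error at $$t = 1$$, consistent with a first-order method. The test problem and step sizes are assumptions chosen for illustration.

```python
import math

def euler_global_error(h, t_end=1.0):
    """Global error at t_end of forward Euler applied to y' = y, y(0) = 1."""
    n = round(t_end / h)
    y = 1.0
    for _ in range(n):
        y += h * y            # one Euler step: y_{k+1} = y_k + h * f(t_k, y_k)
    return abs(y - math.exp(t_end))

e1 = euler_global_error(0.1)
e2 = euler_global_error(0.05)
# For a first-order method the error ratio e1/e2 should be close to 2.
print(e1 / e2)
```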

Future Work
I decided that although including the concept of avoiding truncation error would be good, it was too much for this project, so I just made an outline that others can fill in.

It would be beneficial if someone could give a thorough explanation of the numerical error. (The total numerical error is the sum of the truncation error and the round-off error.) That would also show that the truncation error generally increases as the step size increases, while the round-off error decreases as the step size increases.

Conclusions
In this project I reinforced the idea that truncation errors occur when exact mathematical formulations are represented by approximations.

I think this is a valuable contribution because it illustrates the fundamental and essential ideas of truncation error and shows how to use Taylor series to estimate truncation errors.