Trigonometric interpolation


What is trigonometric interpolation?
Say we have a set of data points that is approximately periodic. Examples of such data include a sound bite or an electric signal. We want to approximate the behavior of such phenomena using an interpolant, so we are looking for an interpolant that is itself periodic. What examples of periodic functions exist? The obvious ones are the sine and cosine functions, so we would want to relate the sine and cosine functions to our set of data points. Therefore, we seek a trigonometric polynomial $$\pi_n(x)$$ such that:


 * $$\pi_n(x) = a_0 + \sum_{j=1}^{n} (a_j \cos (jx) + b_j \sin (jx))$$

As one can see, we have 2n + 1 unknowns to determine in order to derive $$\pi_n$$. Why do we call this function a polynomial? In fact, it is a degree $$n$$ polynomial in $$\cos x$$ and $$\sin x$$. This can be seen from Euler's formula together with de Moivre's theorem:


 * $$e^{ix} = \cos x + i \sin x$$
 * $$e^{ijx} = (\cos x + i \sin x)^j = \cos (jx) + i \sin (jx)$$

Hence, we can see that, in fact, the interpolant is indeed a polynomial in terms of $$\cos x$$ and $$\sin x$$. From the above identities, one can derive these two statements:


 * $$\cos x = \frac{e^{ix}+e^{-ix}}{2}$$      $$\sin x = \frac{e^{ix} - e^{-ix}}{2i}$$

These identities will be the basis of how we will interpolate a function trigonometrically, as our new function will be of the form:


 * $$\pi_n(x) = \sum_{j=0}^{n-1} \gamma_j e^{ijx}$$

Discrete Fourier Transform
For this lesson, we will focus on the case where the data to be interpolated is a sequence of samples taken at equal time steps.

Now we need a technique to interpolate a set of given data points with a function of the same form as $$\pi_n$$. First of all, since the values of $$\pi_n$$ need to be real numbers, one can determine that the coefficients $$\gamma_j$$ must occur in complex conjugate pairs. Now, what would be a good choice of nodes to use for the interpolation? It is time to introduce a concept called the nth roots of unity:

Definition: The nth roots of unity, denoted $$\omega_n^0, \omega_n^1, \cdots, \omega_n^{n-1}$$ are all of the complex numbers z such that $$z^n - 1 = 0$$ holds true.

A geometric interpretation of the nth roots of unity: first, draw the unit circle in the complex plane. Then, since 1 is always a root of $$z^n - 1 = 0$$, draw the first dividing line from the origin to the point (1,0). After that, divide the rest of the circle into n equal parts; the n division points on the circle are the roots. For example, with $$n=4$$, the four solutions to $$z^4 - 1 = 0$$ are 1, -1, i, and -i. From this geometric picture, we can derive, using de Moivre's theorem, the general form of $$\omega_n^k$$:

$$ \omega_n^k = e^{-i\frac{2k\pi}{n}} = \cos\left(\frac{2k\pi}{n}\right) - i \sin \left(\frac{2k\pi}{n}\right)$$

This formula makes intuitive sense: one starts at 0 radians, splits the full circle of $$2\pi$$ radians into n equal parts, and then travels clockwise through k of those divisions to reach the desired root.
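As a quick sanity check, these roots can be computed directly under the clockwise convention above. A minimal Python sketch (the helper name `roots_of_unity` is ours):

```python
import cmath

def roots_of_unity(n):
    """Return [w_n^0, w_n^1, ..., w_n^(n-1)] with w_n^k = e^(-2*pi*i*k/n)."""
    return [cmath.exp(-2j * cmath.pi * k / n) for k in range(n)]
```

For n = 4 this recovers 1, -i, -1, and i, up to floating-point error.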

Now that we have our nodes for interpolation, it is actually quite easy to now find the coefficients. The method for doing so is called the Discrete Fourier Transform.

Definition: Let $$\mathbf{x} = [x_0, x_1, \cdots, x_{n-1}]$$. The Discrete Fourier Transform $$\mathbf{y} = [y_0, y_1, \cdots, y_{n-1}]$$ is determined by:


 * $$y_m = \sum_{k=0}^{n-1} x_k\omega_n^{mk}$$, $$m = 0, 1, \cdots, n-1$$.
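
The sum above translates directly into code. A minimal sketch (the helper name `dft` is ours), using $$\omega_n = e^{-2\pi i/n}$$:

```python
import cmath

def dft(x):
    """Direct O(n^2) evaluation of y[m] = sum_k x[k] * w_n^(m*k)."""
    n = len(x)
    w = cmath.exp(-2j * cmath.pi / n)  # w_n, the clockwise nth root of unity
    return [sum(x[k] * w ** (m * k) for k in range(n)) for m in range(n)]
```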

It is often easier to think of this process in matrix-vector notation. First of all, we will define the Fourier matrix:

Definition: For a sequence of n data points, the Fourier matrix $$\mathbf{F}_n$$ used in the DFT algorithm is simply the Vandermonde matrix whose nodes are the nth roots of unity, i.e. its (m, k) entry is $$\omega_n^{mk}$$. For example, if we have $$n = 4$$ then:


 * $$\mathbf{F}_4 = \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & \omega_4 & \omega_4^2 & \omega_4^3 \\ 1 & \omega_4^2 & \omega_4^4 & \omega_4^6 \\ 1 & \omega_4^3 & \omega_4^6 & \omega_4^9 \end{array} \right) = \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & -i & -1 & i \\ 1 & -1 & 1 & -1 \\ 1 & i & -1 & -i \end{array} \right)$$

We can now write the DFT in terms of matrix-vector multiplication:


 * $$\mathbf{y} = \mathbf{F}_n \mathbf{x}$$
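
A sketch of this matrix-vector view (the helper names `fourier_matrix` and `matvec` are ours), building $$\mathbf{F}_n$$ entrywise as $$\omega_n^{mk}$$:

```python
import cmath

def fourier_matrix(n):
    """F_n[m][k] = w_n^(m*k) with w_n = e^(-2*pi*i/n)."""
    w = cmath.exp(-2j * cmath.pi / n)
    return [[w ** (m * k) for k in range(n)] for m in range(n)]

def matvec(A, x):
    """Plain matrix-vector product, so matvec(fourier_matrix(n), x) is the DFT of x."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]
```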

Because our transformation is a matrix-vector multiplication, finding the coefficients, which are the elements of the y vector in our DFT, can be done in $$O(n^2)$$ time. It is also easy to convert back from these coefficients to the original x vector. It can be shown that the following is true of the Fourier matrix:


 * $$\mathbf{F}_n^{-1} = \frac{1}{n}\mathbf{F}_n^H$$

Here, H denotes the conjugate (Hermitian) transpose of $$\mathbf{F}_n$$. Hence, all we need to do to find the inverse DFT matrix is transpose the matrix, take the conjugates of its entries, and divide all entries by n, all of which can be done in $$O(n^2)$$ time. We now have what is called the Inverse Discrete Fourier Transform, which, in matrix-vector notation, is simply:


 * $$\mathbf{x} = \mathbf{F}_n^{-1}\mathbf{y}$$
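
In summation form, conjugating $$\mathbf{F}_n$$ (it is symmetric, so transposing changes nothing) just flips the sign in the exponent, which gives a sketch like the following (helper names `dft` and `idft` are ours):

```python
import cmath

def dft(x):
    n = len(x)
    w = cmath.exp(-2j * cmath.pi / n)
    return [sum(x[k] * w ** (m * k) for k in range(n)) for m in range(n)]

def idft(y):
    """Inverse DFT: use the conjugate root e^(+2*pi*i/n), then divide by n."""
    n = len(y)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(y[m] * w ** (m * k) for m in range(n)) / n for k in range(n)]
```

A round trip `idft(dft(x))` recovers x up to floating-point error.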

Fast Fourier Transform
Now that we have a method of transforming samples from the raw data into the Fourier coefficients, one can wonder if we can do better than $$O(n^2)$$. Well, we can, and the method for doing so is called the Fast Fourier Transform. In this method, n is assumed to be even.

This method takes advantage of some properties of the Fourier matrix. Let $$\mathbf{P}_n$$ be the permutation matrix that places the even-numbered columns before the odd-numbered columns. Let $$\mathbf{D}_{n/2}$$ be the diagonal matrix whose diagonal entries are the first $$n/2$$ powers of $$\omega_n$$, namely $$\omega_n^0, \omega_n^1, \cdots, \omega_n^{n/2-1}$$, with zeros for the other entries. First of all, if we were to multiply the matrix $$\mathbf{F}_4$$ by $$\mathbf{P}_4$$, we would have:


 * $$\mathbf{F}_4\mathbf{P}_4 = \left( \begin{array}{cccc} 1 & 1 & 1 & 1 \\ 1 & -1 & -i & i \\ 1 & 1 & -1 & -1 \\ 1 & -1 & i & -i \end{array} \right)$$

Now, comparing to the matrix $$\mathbf{F}_2 = \left( \begin{array}{cc} 1 & 1 \\ 1 & -1 \end{array} \right)$$, we can see that the top left and bottom left corners of $$\mathbf{F}_4\mathbf{P}_4$$ each contain the $$\mathbf{F}_2$$ matrix. Given that $$\mathbf{D}_2 = \left( \begin{array}{cc} 1 & 0 \\ 0 & -i \end{array} \right)$$, one can also see that the top right corner of the matrix is $$\mathbf{D}_2\mathbf{F}_2$$ and the bottom right corner is $$-\mathbf{D}_2\mathbf{F}_2$$. Hence, we have the following statement:

$$\mathbf{F}_4\mathbf{P}_4 = \left( \begin{array}{cc} \mathbf{F}_2 & \mathbf{D}_2\mathbf{F}_2 \\ \mathbf{F}_2 & -\mathbf{D}_2\mathbf{F}_2 \\ \end{array} \right)$$
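
This block factorization of $$\mathbf{F}_4\mathbf{P}_4$$ can be checked numerically. A small sketch (the helper name `fourier_matrix` is ours):

```python
import cmath

def fourier_matrix(n):
    w = cmath.exp(-2j * cmath.pi / n)
    return [[w ** (m * k) for k in range(n)] for m in range(n)]

F2 = fourier_matrix(2)
F4 = fourier_matrix(4)
# Apply P_4 on the right: even-numbered columns (0, 2) first, then odd (1, 3).
F4P4 = [[row[0], row[2], row[1], row[3]] for row in F4]
# D_2 = diag(1, w_4); since it is diagonal, D_2 F_2 just scales the rows of F_2.
w4 = cmath.exp(-2j * cmath.pi / 4)
DF = [F2[0][:], [w4 * v for v in F2[1]]]
```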

In fact, this recursive structure applies to any Fourier matrix whose size is a power of two, and it is the basis for the Fast Fourier Transform algorithm for computing the DFT. Here is a summary of the Fast Fourier Transform:

Given x, the input vector, y the output vector, n the number of data points, and omega the root of unity:

1. If n = 1, let y = x and exit the procedure. Otherwise, continue to step 2.
2. Split x into 2 sequences p and s, with p containing the numbers in the even positions of the sequence x and s containing those numbers in the odd positions of x.
3. Recursively call this algorithm with input p, output q, n = n/2, and omega = omega^2.
4. Recursively call this algorithm with input s, output t, n = n/2, and omega = omega^2.
5. Output y such that, for k = 0, 1, ..., n-1, $$y[k] = q[k \bmod (n/2)] + \omega^k t[k \bmod (n/2)]$$.
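
The steps above can be sketched as a recursive Python function (the function name `fft` is ours; n is assumed to be a power of two). Passing omega^2 to the half-size calls happens implicitly here, because each call recomputes the root for its own length, and $$(e^{-2\pi i/n})^2 = e^{-2\pi i/(n/2)}$$:

```python
import cmath

def fft(x):
    """Radix-2 FFT of x; len(x) must be a power of two."""
    n = len(x)
    if n == 1:
        return list(x)                # step 1: base case
    q = fft(x[0::2])                  # steps 2-3: transform the even positions
    t = fft(x[1::2])                  # steps 2 and 4: transform the odd positions
    w = cmath.exp(-2j * cmath.pi / n)
    # step 5: combine the two half-size transforms
    return [q[k % (n // 2)] + w ** k * t[k % (n // 2)] for k in range(n)]
```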

This algorithm does $$O(n)$$ operations per level of recursion and has $$O(\log_2 n)$$ levels of recursion, so the running time for this particular method is $$O(n \log_2 n)$$, which is significantly faster than the $$O(n^2)$$ direct evaluation of the DFT. The inverse DFT can be calculated with this algorithm as well, with a few modifications.

However, there are some limitations to this algorithm. First, the algorithm assumes that the number of data points is a power of two, and that the samples are equally spaced and come from a periodic signal. Therefore, if one applies the FFT to a set of points that is not periodic, or pads or truncates a sequence to force its length to a power of 2, spurious noise can result.

Applications
There are many applications of DFT and trigonometric interpolation. It is used to filter noise in a signal by first applying DFT to a discrete signal, then setting all of the undesired frequencies to zero. After that, inverse DFT is performed on the new data to obtain the filtered signal.
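
As a toy illustration of this recipe (a sketch, not production signal processing; the helpers `dft` and `idft` are our own direct implementations): sample a slow cosine plus a fast cosine, zero the high-frequency bins, and transform back.

```python
import cmath, math

def dft(x):
    n = len(x)
    w = cmath.exp(-2j * cmath.pi / n)
    return [sum(x[k] * w ** (m * k) for k in range(n)) for m in range(n)]

def idft(y):
    n = len(y)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(y[m] * w ** (m * k) for m in range(n)) / n for k in range(n)]

n = 16
clean = [math.cos(2 * math.pi * k / n) for k in range(n)]     # frequency 1
noisy = [c + 0.3 * math.cos(2 * math.pi * 5 * k / n)          # plus frequency 5
         for k, c in enumerate(clean)]

y = dft(noisy)
# Zero every bin except m = 0, 1, and n-1; keeping bins symmetrically
# (m together with n-m) keeps the reconstructed signal real.
cutoff = 2
filtered = [v.real for v in idft([v if m < cutoff or m > n - cutoff else 0
                                  for m, v in enumerate(y)])]
```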

Some convolutions are more easily computed in the frequency domain, so one would use the DFT to convert the data from the time domain to the frequency domain.
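
For circular (periodic) convolution this is the convolution theorem: the DFT of a circular convolution is the pointwise product of the individual DFTs. A sketch (helper names ours):

```python
import cmath

def dft(x):
    n = len(x)
    w = cmath.exp(-2j * cmath.pi / n)
    return [sum(x[k] * w ** (m * k) for k in range(n)) for m in range(n)]

def idft(y):
    n = len(y)
    w = cmath.exp(2j * cmath.pi / n)
    return [sum(y[m] * w ** (m * k) for m in range(n)) / n for k in range(n)]

def circular_convolution(a, b):
    """(a * b)[k] = sum_j a[j] * b[(k - j) mod n], via the frequency domain."""
    products = [p * q for p, q in zip(dft(a), dft(b))]
    return [v.real for v in idft(products)]
```

Convolving with a shifted impulse cyclically shifts the sequence, which gives an easy correctness check.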

The Fast Fourier Transform can also be used to evaluate polynomials quickly, by choosing the n evaluation points to be the nth roots of unity.
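
Concretely, with the convention $$\omega_n = e^{-2\pi i/n}$$ used here, the DFT of a polynomial's coefficient vector is exactly the list of the polynomial's values at $$\omega_n^0, \omega_n^1, \cdots, \omega_n^{n-1}$$. A small check (helper names ours):

```python
import cmath

def dft(x):
    n = len(x)
    w = cmath.exp(-2j * cmath.pi / n)
    return [sum(x[k] * w ** (m * k) for k in range(n)) for m in range(n)]

coeffs = [1, 2, 3, 4]          # p(z) = 1 + 2z + 3z^2 + 4z^3
n = len(coeffs)
w = cmath.exp(-2j * cmath.pi / n)

def p(z):
    return sum(c * z ** j for j, c in enumerate(coeffs))

values = dft(coeffs)           # equals [p(w^0), p(w^1), p(w^2), p(w^3)]
```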