
Investigation and Comparison of Spectral Methods, Spectral Element Methods, and Pseudo-Spectral Methods

=Introduction=

The goal of this article is to introduce the basics of spectral, spectral element, and pseudo-spectral methods for solving partial differential equations. The scope of the paper is limited to information and discussion suitable for someone without a deep knowledge of these numerical methods. Therefore there will not be many equations in this article, but rather a high level discussion of the methods. The tone will be an informal investigation of these methods. It is my goal to take the reader on the journey of discovery and insight that I had when investigating the topic.

==Background==
Scientists and engineers have long worked on how best to solve differential equations. Particularly tricky are partial differential equations (PDEs). For some partial differential equations there exist methods by which a solution can be determined analytically; one example is the method of separation of variables. For other partial differential equations, there exists no developed theory to determine the analytical solution. That is where numerical methods come into the picture: engineers and scientists opt to solve the troublesome partial differential equation using approximations of the terms of the PDE in an effort to obtain an approximate solution. The spectral methods discussed in this report are numerical methods for solving partial differential equations.

=1 - What are Spectral Methods?=

Spectral methods were originally developed in 1944 but only came to popularity in the 1990s after much work by two scientists, Gottlieb and Orszag, in the 1980s. Spectral methods can be used as a type of spatial discretization. They are similar to methods of weighted residuals in that they assume a form for the solution and then seek to minimize the error caused by that assumed solution.

==Description of the Method==
The goal of the spectral method is to solve a partial differential equation. For simple illustration purposes, consider a general linear partial differential equation like the one below.

$$ L u(\mathbf{x}) = s(\mathbf{x}) $$

In this equation $$ \mathbf{x} $$ represents a vector that could contain multiple dimensions, i.e. x, y, z, etc. The L is a lazy (compact) way of writing a combination of a bunch of derivatives. $$ s(\mathbf{x}) $$ is just a function that is known; we want to solve for $$ u(\mathbf{x}) $$, not $$ s(\mathbf{x}) $$.

The solution $$ u(\mathbf{x}) $$ is expanded as a linear combination of basis functions, which are referred to as trial functions. This is an approximation because the true solution may not be expressible exactly as a linear combination of the selected basis functions.

You may be asking yourself: 'how do I "expand" a function?' 'Where did these basis functions come from?' 'What is a basis function?'

==Basis Functions==
Basis functions are essentially any group of terms that can be linearly combined to express a function. The most familiar example is a vector basis: given an arbitrary vector, we apply an orthogonal coordinate system and use a combination of little unit vectors that point along the coordinate lines in order to express the original vector in terms of the basis vectors. Mathematically, the expansion of a function into a linear combination of basis functions looks something like the following.

$$ u(x) \approx \sum_{n=0}^{N} a_{n} \phi_{n}(x) $$

The above equation reads: "the function u(x) can be approximated by a sum of a bunch of functions $$ \phi_n(x) $$." Basis functions need not always be orthogonal, but spectral methods utilize orthogonal basis functions; the finite element method, by contrast, often uses non-orthogonal basis functions. Consider figure 1.1 below. Observe how in the case of the vector V the orthogonal green and red basis vectors can be arranged to express V in terms of the basis vectors. This is an example where the vector is perfectly described by the basis vectors. In the case of functions, the basis functions may not always be able to perfectly describe the function that is being approximated. Such is the case when a square wave is approximated using trigonometric functions in a Fourier series; the resulting approximation can never be perfectly accurate because of the Gibbs phenomenon.



In spectral methods the basis functions are typically Fourier modes for periodic domains and orthogonal polynomials, such as Chebyshev polynomials, for non-periodic domains.
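As a quick numerical illustration of the Gibbs phenomenon mentioned above, the sketch below (my own toy example, not from any reference) sums the truncated Fourier series of a unit square wave and measures the overshoot near the jump. No matter how many terms are kept, the peak stays roughly 18% above the true value of 1.

```python
import numpy as np

def square_wave_partial_sum(x, n_terms):
    """Truncated Fourier series of a unit square wave:
    (4/pi) * sum_{k=1}^{n_terms} sin((2k-1)x) / (2k-1)."""
    s = np.zeros_like(x)
    for k in range(1, n_terms + 1):
        m = 2 * k - 1
        s += (4.0 / np.pi) * np.sin(m * x) / m
    return s

# dense grid on the open interval (0, pi) so the first overshoot peak is resolved
x = np.linspace(0.0, np.pi, 20001)[1:-1]
for n_terms in (10, 50, 200):
    overshoot = square_wave_partial_sum(x, n_terms).max()
    print(n_terms, overshoot)   # the overshoot does not go away as terms are added
```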

==Continuing On==
An excellent presentation of the spectral method was given by H. Isliker. I will use that presentation as a guide for the following explanation.

To keep things simple we could consider only a one-dimensional differential equation with the following definition for L (with constant coefficients a, b, and c):

$$ L = a \frac{d^{2}}{dx^{2}} + b \frac{d}{dx} + c $$

Therefore the differential equation takes the following form.

$$ a \frac{d^{2}u}{dx^{2}} + b \frac{du}{dx} + c u = s(x) $$

We can write this equation in another form by moving all terms to one side.

$$ a \frac{d^{2}u}{dx^{2}} + b \frac{du}{dx} + c u - s(x) = 0 $$

So we want to solve this equation. It is a very simple non-homogeneous second order linear ordinary differential equation with constant coefficients. There are many methods to solve this type of problem; one example would be the method of undetermined coefficients. Taking from what we talked about before, we expand the solution in terms of orthogonal functions (trial functions).

$$ u^{N}(x) = \sum_{n=0}^{N} a_{n} \phi_{n}(x) $$

where $$ u^{N}(x) $$ is a representation for the numerical approximation. We plug this approximation into the original differential equation. Keep in mind that we have arbitrarily assumed a form for the solution, which means that the rearranged equation above will not be exactly zero over the whole domain. There will be an error, which is called the residual:

$$ R(x) = L u^{N}(x) - s(x) $$

The residual is in general a function of x, not a constant.

Now we have an issue. Our trial function has many unknown coefficients, but we only have one differential equation. How do we solve for all of the unknown coefficients? That is where the idea of test functions comes into play. Test functions are just another set of functions for which we demand that the inner product of the residual with each test function be zero over the domain:

$$ ( \chi_{n}, R ) = 0, \quad n = 0, 1, \ldots, N $$

The functions $$ \chi_{n} $$ are the test functions, and they can take any form. The parentheses represent the inner product of the test functions with the residual. The inner product is defined in general as follows.

$$ ( f, g ) = \int_{\Omega} f(x) \, g(x) \, dx $$
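As a sanity check on the inner product idea, the snippet below (a toy example of mine, computed with a simple trapezoid-rule quadrature) verifies numerically that sine modes on [0, π] are orthogonal: the inner product of a mode with itself is π/2, while the inner product of two different modes is essentially zero.

```python
import numpy as np

# Orthogonality of sine modes on [0, pi]:
# (sin(mx), sin(nx)) = pi/2 when m == n, and 0 otherwise
x = np.linspace(0.0, np.pi, 2001)
h = x[1] - x[0]

def inner(f, g):
    # trapezoid rule; the integrand vanishes at both endpoints here
    return np.sum(f * g) * h

same_mode = inner(np.sin(2 * x), np.sin(2 * x))
cross_mode = inner(np.sin(2 * x), np.sin(5 * x))
print(same_mode, cross_mode)   # ~pi/2 and ~0
```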

Now the choice of the test functions is a point where the user of the method has freedom. Any test functions can be selected. Common choices are: 1.) Test functions = trial functions (Galerkin method) 2.) Test functions = Dirac delta functions (collocation or pseudo-spectral method)

Once the choice of test functions is made the only step left is to solve for the coefficients of the trial functions that would enforce the requirement of a zero inner product of the residual with the test functions over the domain.

==Accuracy of the Method==
In the literature the accuracy of spectral methods is referred to as 'spectral accuracy'. This sounds fancy. For smooth solutions the accuracy of spectral methods is much better than that of fixed-order finite difference methods; spectral methods exhibit something called 'exponential convergence'. This high accuracy means that solutions can be obtained on coarser grids. That is, given a required level of accuracy, spectral methods will require fewer grid points to obtain the solution. This is important in the field of high performance computing, where very fine grids are used to solve an array of differential equations. Any algorithm that can give higher accuracy solutions for less computing power is critical. This is especially important in the emerging field of exascale computing, where minimizing energy consumption is a key parameter.

==Limitations of the Method==
Despite their wonderful accuracy, spectral methods are not perfectly robust numerical methods. They assume the solution of the PDE to be a combination of smooth, global, orthogonal functions over the entire domain. This global nature of the scheme means that a discrepancy in the solution at any point in the domain, such as a shock in a compressible flow simulation, reverberates throughout the entire domain and propagates error everywhere. Therefore the original spectral methods, as they were developed, are best suited to smooth problems such as incompressible flows with simple geometry.

==Summary==
To summarize this section: 1.) Find a differential equation. 2.) Assume a trial solution. 3.) Substitute the trial solution into the original differential equation to find the residual. 4.) Choose a way to express that the residual be minimized via test functions, i.e. the Galerkin or collocation method. 5.) Solve for the unknown coefficients of the trial function using the equations derived from step 4. 6.) Plot the solution and compare accuracy.
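The steps above can be sketched in a few lines of Python. This is a minimal toy example of my own devising (the equation -u'' = s on [0, π] with homogeneous boundary conditions, the sine basis, and the trapezoid-rule inner product are all my choices, not from a reference), following the Galerkin route where the test functions equal the trial functions.

```python
import numpy as np

# Toy Galerkin spectral solve of -u'' = s on [0, pi] with u(0) = u(pi) = 0,
# using sine functions as both trial and test functions.
x = np.linspace(0.0, np.pi, 2001)
h = x[1] - x[0]

def inner(f, g):
    # trapezoid-rule inner product; integrands vanish at the endpoints here
    return np.sum(f * g) * h

s = np.sin(3 * x) + 0.5 * np.sin(5 * x)                  # known right-hand side
u_exact = np.sin(3 * x) / 9 + 0.5 * np.sin(5 * x) / 25   # analytical solution

# Galerkin condition: (-u'' - s, phi_n) = 0  =>  a_n * n^2 * (pi/2) = (s, phi_n)
u_num = np.zeros_like(x)
for n in range(1, 17):
    phi = np.sin(n * x)
    a_n = inner(s, phi) / (n * n * (np.pi / 2))
    u_num += a_n * phi

max_err = np.abs(u_num - u_exact).max()
print(max_err)
```

Because the sine basis is orthogonal, each coefficient decouples and can be computed independently; with a non-orthogonal basis a full linear system would have to be solved instead.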

=2 - What are Spectral Element Methods?=

The spectral element method is distinguished from the spectral method discussed in section 1 by the division of the global domain into finite sub-domains; in this respect it is considered a finite element method. Within each of those sub-domains, high order polynomials are used as basis functions, much like in the spectral method discussed in section 1.

==Description of the Method==
This method is used much like the finite element method. Spectral element methods are often called hp-finite element methods: h represents the step size or grid spacing, and p represents the order of the polynomial basis used within each element. Thus one can increase accuracy either by decreasing the grid spacing, h, or by increasing the order of the polynomial basis, p. The spectral element method can be viewed as a sort of limit of the finite element method in which the order of the polynomial basis (p) within each element becomes very large. Also, in finite element methods the basis functions are not always orthogonal, but the polynomial basis functions used in the spectral element method are orthogonal polynomials such as Chebyshev polynomials.
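To see the effect of raising p, the sketch below (a single-element toy example of mine, using NumPy's Chebyshev utilities) interpolates exp(x) at Chebyshev nodes of increasing polynomial order; the maximum error falls off roughly exponentially with p, which is the behavior hp-methods exploit.

```python
import numpy as np

# p-refinement on one element [-1, 1]: interpolate exp(x) at Chebyshev nodes
# with increasing polynomial order p and watch the max error collapse
xs = np.linspace(-1.0, 1.0, 1001)
for p in (2, 4, 8, 12):
    # Chebyshev points of the first kind, p + 1 of them
    nodes = np.cos(np.pi * (2 * np.arange(p + 1) + 1) / (2 * (p + 1)))
    coeffs = np.polynomial.chebyshev.chebfit(nodes, np.exp(nodes), p)
    err = np.abs(np.polynomial.chebyshev.chebval(xs, coeffs) - np.exp(xs)).max()
    print(p, err)
```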

==Accuracy of Spectral Element Methods==
The spectral element method is an attractive choice for solving partial differential equations. A famous incompressible flow solver, Nek5000, uses a spectral element method for spatial discretization; it is a code known for its performance and parallel efficiency. Because the elements approximate the solution locally, the global solution is not sensitive to discontinuities in the domain in the way spectral methods are. The use of elements in the spectral element method also allows solutions to be obtained for complicated geometries. I believe the global spectral methods encounter the Gibbs phenomenon when applied to complicated geometries or domains with discontinuities in the solutions.

==Limitations of Spectral Element Methods==
Because the spectral element method is not exactly a spectral method, the order of accuracy of the elements is less than what one would expect from a pure spectral method. While this is a limitation, the robustness of the method for complicated geometries mitigates the lower order of accuracy by widening the range of problems to which the method can be applied.

=3 - What are Pseudo-Spectral Methods?=

The pseudo-spectral methods appear to have been developed from a need for computational efficiency. They are very similar to the spectral methods discussed in section 1 and can be considered a special case of spectral methods.

==Description of Pseudo-Spectral Methods==
Pseudo-spectral methods are also called spectral collocation methods. Collocation essentially means that instead of test functions that vary smoothly, the test functions are Dirac delta functions. These Dirac delta functions are zero everywhere in the domain except at one location; the positioning of the points where the Dirac delta functions are non-zero is called 'collocation', and there is much research on the best way to position these points. The trial function coefficients must therefore be chosen so that the residual is exactly zero at the collocation points picked within the domain.
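A minimal collocation sketch (again a toy problem of my own devising, not from a reference) for -u'' = s on [0, π] with a sine trial basis: instead of integrating the residual against smooth test functions, we force it to zero exactly at a set of interior collocation points, which turns the problem into a small linear system.

```python
import numpy as np

# Collocation (pseudo-spectral) sketch for -u'' = s on [0, pi], u(0) = u(pi) = 0.
# Trial solution u = sum_n a_n sin(n x); then -u'' = sum_n a_n n^2 sin(n x).
N = 8
n = np.arange(1, N + 1)
x_c = np.pi * np.arange(1, N + 1) / (N + 1)   # interior collocation points

s = lambda x: np.sin(3 * x) + 0.5 * np.sin(5 * x)
u_exact = lambda x: np.sin(3 * x) / 9 + 0.5 * np.sin(5 * x) / 25

# Force the residual to zero at each collocation point: A @ a = s(x_c),
# where A[j, i] = n_i^2 * sin(n_i * x_j)
A = (n ** 2) * np.sin(np.outer(x_c, n))
a = np.linalg.solve(A, s(x_c))

xs = np.linspace(0.0, np.pi, 501)
u_num = np.sin(np.outer(xs, n)) @ a
err = np.abs(u_num - u_exact(xs)).max()
print(err)
```

Here the right-hand side lies exactly in the trial space, so the collocation solution recovers the analytical one up to round-off.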

A motivation for the "pseudo" in pseudo-spectral methods is to make the computational cost of approximating the solution as small as possible. Calculating the product of two functions directly in the frequency domain (a convolution) carries a CPU cost of $$ O(N^{2}) $$, whereas converting the frequency-domain functions to real space and performing a pointwise multiplication costs $$ O(N \log_{2} N) $$. Thus the pseudo-spectral methods solve the differential equation using expressions in both the frequency and real domains. The fast Fourier transform (FFT) is used to convert the solution quickly from one domain to the other.
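This cost argument can be demonstrated directly. The sketch below (my own example) computes the spectrum of the product of two functions both ways: by an O(N²) circular convolution of the two spectra, and the pseudo-spectral way, by transforming to real space with the FFT, multiplying pointwise, and transforming back. The two results agree to machine precision, as the convolution theorem says they must.

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
f = np.cos(3 * x)
g = np.sin(2 * x)

fh, gh = np.fft.fft(f), np.fft.fft(g)

# O(N^2) route: circular convolution of the spectra gives the spectrum of f*g
conv = np.array(
    [np.sum(fh * np.roll(gh[::-1], k + 1)) for k in range(N)]
) / N

# O(N log N) pseudo-spectral route: back to real space, multiply, forward again
prod_hat = np.fft.fft(np.fft.ifft(fh) * np.fft.ifft(gh))

diff = np.abs(conv - prod_hat).max()
print(diff)
```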

To summarize this section: 1.) Find a differential equation. 2.) Assume a trial solution. 3.) Substitute the trial solution into the original differential equation to find the residual. 4.) Express that the residual be minimized via the collocation method. 5.) Take any non-linear terms, FFT to the real domain, perform the multiplication, then FFT back to the frequency domain. 6.) Solve for the unknown coefficients of the trial function using the equations derived from step 4. 7.) Plot the solution and compare accuracy.

==Accuracy of Pseudo-Spectral Methods==
From my research, the main strength of the pseudo-spectral methods is their ability to be used efficiently by computers. The method appears to have accuracy on the order of that of the standard spectral methods. I would imagine that the accuracy is not exactly the same because of the error introduced by converting the frequency-domain functions to the real domain for the multiplication steps.

==Limitations of Pseudo-Spectral Methods==
Recall that the pseudo-spectral method solves the differential equation using approximations for the solution in both the frequency domain and the physical domain. Whenever an expression that exists in the frequency domain is represented in the physical domain on a finite grid, there will be aliasing. This is an interesting problem to consider.

As a thought experiment, consider a function that is composed of many different frequencies. Now suppose that we wish to express that function in real space on a grid with a finite number of points. There is a limitation: the highest wavenumber (frequency) that can be represented on the grid is given by the Nyquist frequency, which corresponds to a wavelength of two grid spacings.



Imagine that the black dots in figure 3.1 are grid points that you wish to use to construct a wave. Observe how the high frequency red wave is not properly reconstructed when the black dots are connected. The blue wave has the same amplitude as the high frequency wave, but its frequency is much lower. Keep that in mind: high frequency waves are translated into low frequency waves because of the finite resolution of the grid.
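The situation in figure 3.1 is easy to reproduce numerically. In the sketch below (with a sampling rate I chose purely for illustration), a 9 Hz sine sampled at 8 samples per second is indistinguishable from a 1 Hz sine at the sample points, because 9 Hz lies above the 4 Hz Nyquist limit of that grid.

```python
import numpy as np

fs = 8.0                            # sampling rate: 8 samples per second
t = np.arange(16) / fs              # two seconds of sample times

high = np.sin(2 * np.pi * 9 * t)    # 9 Hz wave, above the 4 Hz Nyquist limit
low = np.sin(2 * np.pi * 1 * t)     # 1 Hz wave

alias_diff = np.abs(high - low).max()
print(alias_diff)                   # the two waves agree at every sample point
```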

Recall that the frequency domain basis functions must be converted to real domain functions for the pseudo-spectral method to perform the multiplication of any two functions (again, because it is computationally cheaper). The basis functions will experience aliasing on the finite grid that they are being translated to. To grasp this, imagine that there are many buckets in real space, each holding the functions of a particular frequency. As we move basis functions from the frequency domain to the real domain, we pick them up and drop them into the appropriate bucket. Up to the Nyquist frequency (which is set by the grid spacing) this works fine. As we move on to functions with frequencies higher than the Nyquist frequency, we begin to see the high frequency functions as lower frequency functions, just like in Figure 3.1, and because we see a lower frequency we put those functions into the wrong buckets. This can have negative effects on the solution because extra functions appear to come out of nowhere. Also, if important frequencies arise as a result of the multiplication, they will be aliased to lower frequencies and will not be properly captured.

There are ways to counteract this issue. One common example is essentially to use more grid points during the translation phase, i.e. zero-padding the spectrum before transforming to real space (often done with a grid extended by a factor of 3/2, the so-called "3/2 rule"). That allows the product to be captured more accurately and removes the aliasing error from the retained modes.
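Here is a small sketch of the zero-padding idea (my own toy example): squaring cos(3x) produces a mode-6 component, which on an 8-point grid aliases onto mode 2. Evaluating the product on a twice-finer grid and then keeping only the coarse-grid modes removes the spurious energy.

```python
import numpy as np

N = 8
x = 2 * np.pi * np.arange(N) / N
f = np.cos(3 * x)          # f^2 contains mode 6, above the coarse Nyquist mode 4

# Naive pointwise square on the coarse grid: mode 6 aliases onto mode 2
naive = np.fft.fft(f * f) / N
print(abs(naive[2]))       # spurious aliased energy at k = 2

# Dealiasing by zero-padding: pad the spectrum onto a finer grid,
# multiply there, then look only at the coarse-grid modes
M = 2 * N
fh = np.fft.fft(f)
padded = np.zeros(M, dtype=complex)
padded[: N // 2] = fh[: N // 2]
padded[-N // 2:] = fh[-N // 2:]
f_fine = np.fft.ifft(padded) * (M / N)         # rescale after padding
prod_fine = np.fft.fft(f_fine * f_fine) / M    # normalized fine-grid spectrum
print(abs(prod_fine[2]))   # aliasing at k = 2 is gone
```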

= Closing Remarks =

I hope that the reader can appreciate the fundamentals of spectral methods and the differences between a few of the different flavors of these methods. From my research it is clear that the field of spectral methods has a large and rich history, of which I have only scratched the surface in this article. It was wonderful to write about, and in the process learn about, the world of spectral methods. -Chris Neal 2014

=References=