Quantum Simulation/Exact diagonalization

The many-body problem
A common theme in all branches of quantum physics is to identify the eigenstates $$|\phi_i\rangle$$ of a Hamiltonian $$H$$, and the respective eigenenergies $$E_i$$. Once equipped with the eigenstates, we can relatively easily calculate many properties of interest, such as the time evolution of the system or its thermal behavior, the latter according to the relation


 * $$\rho = Z^{-1}\exp(-\beta H) = Z^{-1} \sum\limits_i \exp(-\beta E_i)|\phi_i\rangle\langle\phi_i|$$,

where $$Z = \mathrm{Tr} \{\exp(-\beta H)\}$$, which gives us the density operator $$\rho$$ at the inverse temperature $$\beta$$. A special case occurs at (or close to) zero temperature, as then the ground state $$|\phi_g\rangle$$ provides the dominant contribution. Nevertheless, the system can still undergo phase transitions if the ground state energy $$E_g$$ itself exhibits non-analytic behavior. Consequently, finding the ground state is often a crucial step to establish a thorough understanding of a quantum system.
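As a concrete illustration, the relation above can be evaluated numerically once the eigendecomposition of $$H$$ is known. The following is a minimal sketch using numpy for a small dense Hamiltonian; the helper name `thermal_density_matrix` is our own, not from any library.

```python
import numpy as np

def thermal_density_matrix(H, beta):
    """rho = exp(-beta H) / Z, built from the eigendecomposition of H."""
    energies, states = np.linalg.eigh(H)  # eigenenergies E_i, columns are |phi_i>
    # Shift by the ground state energy for numerical stability;
    # the shift cancels between the numerator and the partition function Z.
    weights = np.exp(-beta * (energies - energies[0]))
    Z = weights.sum()
    # rho = sum_i exp(-beta E_i)/Z |phi_i><phi_i|
    return (states * (weights / Z)) @ states.conj().T

# Example: a single spin-1/2 in a transverse field, H = sigma_x
sigma_x = np.array([[0.0, 1.0], [1.0, 0.0]])
rho = thermal_density_matrix(sigma_x, beta=1.0)
print(np.trace(rho))  # trace of rho is 1 by construction
```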

In the following, we will mainly focus on discussing ground state properties of many-body systems containing only a few local degrees of freedom. A prominent class of such many-body models are spin $$1/2$$ models, which consist of two-level systems localized at the sites of some lattice. As a specific example, let us turn to the one-dimensional transverse field Ising model, whose Hamiltonian is given by


 * $$H = g\sum\limits_{i=1}^N \sigma_x^{(i)} - \sum\limits_{i=1}^{N-1}\sigma_z^{(i)}\sigma_z^{(i+1)}.$$

Here, we have expressed the spins in terms of Pauli matrices, while $$g$$ can be interpreted as the strength of a field transverse to the quantization axis of the spins, and all energies are measured in units of the interaction strength between two neighboring spins. While this model is actually exactly solvable in the limit $$N \to \infty$$, we will use it to illustrate various approaches to many-body problems.
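For small $$N$$, this Hamiltonian can be built explicitly as a $$2^N \times 2^N$$ matrix from Kronecker products of Pauli matrices. A minimal numpy sketch (the helper names `op_at` and `tfim_hamiltonian` are our own):

```python
import numpy as np

# Pauli matrices and the single-site identity
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
id2 = np.eye(2)

def op_at(op, i, N):
    """Embed a single-site operator at site i into the 2^N-dimensional space."""
    out = np.eye(1)
    for j in range(N):
        out = np.kron(out, op if j == i else id2)
    return out

def tfim_hamiltonian(N, g):
    """H = g sum_i sigma_x^(i) - sum_i sigma_z^(i) sigma_z^(i+1), open boundaries."""
    H = np.zeros((2**N, 2**N))
    for i in range(N):
        H += g * op_at(sx, i, N)
    for i in range(N - 1):
        H -= op_at(sz, i, N) @ op_at(sz, i + 1, N)
    return H
```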

Without going into specific details, let us consider the two possible limits of the transverse Ising model. For $$g\to\infty$$, we can ignore the interaction, and the problem reduces to a single spin in a magnetic field. Consequently, the ground state is given by
 * $$|\phi_g\rangle = \prod\limits_{i=1}^N \frac{1}{\sqrt{2}}(|\uparrow\rangle_i-|\downarrow\rangle_i).$$

In the limit of zero external field, $$g\to 0$$, we just have to minimize the interaction energy, which leads to two distinct fully polarized ground states, $$|\phi_g^{(1)}\rangle = |\downarrow\downarrow\downarrow\cdots\rangle$$ and $$|\phi_g^{(2)}\rangle = |\uparrow\uparrow\uparrow\cdots\rangle$$. Note that the Hamiltonian does not distinguish between up and down spins (a so-called $$\mathbb{Z}_2$$ symmetry), but the two possible ground states do. This symmetry breaking is a manifestation of the system being in two different quantum phases for $$g \ll 1$$ (ferromagnet) and $$g \gg 1$$ (paramagnet). From the exact solution, it is known that the quantum phase transition occurs at a critical transverse field of $$g_c = 1$$.
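Both limits are easy to verify numerically for a small chain. The sketch below writes out the two-spin Hamiltonian explicitly: at $$g = 0$$ the two lowest eigenvalues are degenerate (the two polarized states), while for $$g \gg 1$$ the ground state is unique, with energy close to $$-Ng$$.

```python
import numpy as np

sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])
I2 = np.eye(2)

def tfim2(g):
    """Two-spin transverse field Ising Hamiltonian."""
    return g * (np.kron(sx, I2) + np.kron(I2, sx)) - np.kron(sz, sz)

# g = 0: twofold degenerate ground state manifold (broken Z2 symmetry)
print(np.linalg.eigvalsh(tfim2(0.0))[:2])    # two degenerate eigenvalues at -1

# g >> 1: unique paramagnetic ground state with E_g close to -N*g
print(np.linalg.eigvalsh(tfim2(100.0))[:2])
```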

Exact and not-so-exact diagonalization
The term "exact diagonalization" is often used in a slightly misleading manner. In general, to find the eigenvalues of a $$d$$-dimensional Hamiltonian, one has to find the roots of its characteristic polynomial of degree $$d$$, for which no closed-form solution exists for $$d>4$$. Of course, we can still hope to numerically approximate the eigenvalues to arbitrary accuracy, but the fact that computers operate with fixed-precision numbers makes this endeavour substantially more complicated.

That aside, finding the eigenvalues by solving the characteristic polynomial is a bad idea, as finding roots of high-degree polynomials is a numerically ill-conditioned task. In fact, one of the most powerful methods for finding the roots of such a polynomial is to generate a matrix that has the same characteristic polynomial and find its eigenvalues using a different algorithm! A much better strategy is to find a unitary (or orthogonal, if all matrix elements of the Hamiltonian are real) transformation that makes the Hamiltonian diagonal, i.e.,


 * $$ H \to U^{\dagger} H U.$$

The general strategy is to construct the matrix $$U$$ in an iterative way,


 * $$ H \to U_1^\dagger H U_1 \to U_2^\dagger U_1^\dagger H U_1 U_2 \to \cdots$$

until the matrix becomes diagonal. The columns of $$U = U_1 U_2 U_3 \cdots$$ then contain the eigenvectors of $$H$$. There are many different algorithms for actually performing the diagonalization, and it is usually a good idea to resort to existing libraries (such as LAPACK) for this task. However, if we are only interested in finding the low-energy eigenvalues of a particular Hamiltonian, we can make a few simplifications that will allow for much faster computations.
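In practice, the iterative construction of $$U$$ is hidden inside library routines. In Python, `numpy.linalg.eigh` wraps the corresponding LAPACK drivers for Hermitian matrices; the sketch below diagonalizes a small transverse Ising chain (the helper `chain_term` is our own):

```python
import numpy as np

# Full dense diagonalization of a small transverse Ising chain via LAPACK
N, g = 8, 1.0
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
sz = np.array([[1.0, 0.0], [0.0, -1.0]])

def chain_term(op1, i, N, op2=None):
    """Tensor product with op1 at site i (and op2 at site i+1, if given)."""
    out = np.eye(1)
    for j in range(N):
        if j == i:
            out = np.kron(out, op1)
        elif op2 is not None and j == i + 1:
            out = np.kron(out, op2)
        else:
            out = np.kron(out, np.eye(2))
    return out

H = g * sum(chain_term(sx, i, N) for i in range(N))
H -= sum(chain_term(sz, i, N, sz) for i in range(N - 1))

energies, U = np.linalg.eigh(H)  # columns of U are the eigenvectors of H
print("ground state energy per spin:", energies[0] / N)
```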

The power method
Initially, we pick a random state $$|\phi_0\rangle$$ that has a small but nonzero overlap with the ground state, i.e., $$\langle \phi_0|\phi_g\rangle \ne 0$$. Then, we repeatedly apply the Hamiltonian to this state and normalize the result,


 * $$|\phi_{n+1}\rangle = \mathcal{N} H |\phi_n\rangle$$,

where $$\mathcal{N}$$ denotes the normalization operation. This iteration eventually converges to the eigenvector with the largest absolute eigenvalue, so by subtracting a suitable constant energy from the Hamiltonian, we can always ensure that this is the ground state. The key advantage is that the matrix-vector multiplications occurring in each iteration can be implemented very efficiently: in most cases (as for the transverse Ising model), the number of nonzero entries per column of such a sparse Hamiltonian is much smaller than the dimension of the Hilbert space (here: $$N$$ vs. $$2^N$$).
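A dense-matrix sketch of this iteration (in a real application, $$H$$ would be stored in a sparse format so that applying it costs only the number of nonzero entries; the function name is our own):

```python
import numpy as np

def power_method_ground_state(H, tol=1e-12, max_iter=100000):
    """Ground state of H via the power method on the shifted matrix H - s*1."""
    # Any shift s >= max eigenvalue works; a cheap bound is the maximum
    # absolute row sum (Gershgorin), which guarantees that |E_g - s| is the
    # largest shifted eigenvalue in magnitude.
    s = np.abs(H).sum(axis=1).max()
    phi = np.random.default_rng(0).normal(size=H.shape[0])
    phi /= np.linalg.norm(phi)
    energy = phi @ H @ phi
    for _ in range(max_iter):
        phi = H @ phi - s * phi       # apply (H - s), sparse-friendly form
        phi /= np.linalg.norm(phi)    # the normalization operation N
        new_energy = phi @ H @ phi
        if abs(new_energy - energy) < tol:
            break
        energy = new_energy
    return new_energy, phi
```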

Lanczos algorithm
The power method can be readily improved by using not only a single state during each iteration, but a larger set of states that is extended until convergence is reached. This procedure is known as the Lanczos algorithm and is implemented as follows:
 * 1) Pick a random state $$|\phi_0\rangle$$ as in the power method.
 * 2) Construct a second state $$|\phi_1\rangle$$ according to
 * $$|\phi_1\rangle = H|\phi_0\rangle - \frac{\langle\phi_0|H|\phi_0\rangle}{\langle \phi_0|\phi_0\rangle} |\phi_0\rangle.$$
 * 3) Recursively construct an orthogonal set of states given by
 * $$|\phi_{n+1}\rangle = H|\phi_n\rangle - a_n |\phi_n\rangle - b_n^2|\phi_{n-1}\rangle,$$
 * where the coefficients $$a_n$$ and $$b_n$$ are given by
 * $$ a_n = \frac{\langle\phi_n|H|\phi_n\rangle}{\langle \phi_n|\phi_n\rangle},\;\;\;b_n^2 = \frac{\langle\phi_n|\phi_n\rangle}{\langle \phi_{n-1}|\phi_{n-1}\rangle}.$$
 * 4) Diagonalize the tridiagonal matrix given by
 * $$ H = \left(\begin{array}{ccccc}a_0 & b_1 & 0 & 0 & \cdots\\ b_1 & a_1 & b_2 & 0 & \cdots\\ 0 & b_2 & a_2 & b_3 & \cdots\\ 0 & 0 & b_3 & a_3 & \cdots\\ \vdots & \vdots & \vdots & \vdots & \ddots\end{array}\right).$$
 * 5) If the ground state energy has not converged to the desired accuracy, increase $$n$$ by one and proceed at step 3.
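A compact numpy sketch of these steps (the function name is our own). For numerical stability, the states are normalized at every step, which leaves the tridiagonal coefficients $$a_n$$ and $$b_n$$ unchanged:

```python
import numpy as np

def lanczos_ground_energy(H, max_steps=100, tol=1e-12, seed=0):
    """Ground state energy of H via the Lanczos recursion sketched above."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=H.shape[0])
    v /= np.linalg.norm(v)              # random |phi_0>
    v_prev = np.zeros_like(v)
    a, b = [], []                       # diagonal a_n, off-diagonal b_n
    e_old = np.inf
    for n in range(max_steps):
        w = H @ v - (b[-1] * v_prev if b else 0.0)
        a.append(v @ w)                 # a_n = <phi_n|H|phi_n>
        w -= a[-1] * v                  # orthogonalize against |phi_n>
        # Diagonalize the tridiagonal matrix built from a_n and b_n
        T = np.diag(a) + np.diag(b, 1) + np.diag(b, -1)
        e_new = np.linalg.eigvalsh(T)[0]
        if abs(e_new - e_old) < tol:
            break
        e_old = e_new
        beta = np.linalg.norm(w)
        if beta < 1e-14:                # exact invariant subspace reached
            break
        b.append(beta)
        v_prev, v = v, w / beta
    return e_new
```

As a sanity check, the result can be compared against a full diagonalization of a small random symmetric matrix; production codes should additionally re-orthogonalize (or simply use `scipy.sparse.linalg.eigsh`).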



While the exact ground state is only reached when $$n$$ is equal to the dimension of the Hilbert space, the remarkable feature of the Lanczos algorithm is that typically only a few hundred iterations are necessary. However, there is one important caveat: due to finite-precision arithmetic, the orthogonality between the states $$|\phi_n\rangle$$ is quickly lost. For practical applications, it is therefore advisable to use an improved algorithm that re-orthogonalizes the states.

Some remarks on complexity
Let us briefly consider the difficulty inherent in exact diagonalization studies. On each lattice site, we have two degrees of freedom, referring to a spin pointing either up or down. However, for $$N$$ spins, we have $$2^N$$ possible spin configurations. The consequences of this exponential scaling cannot be overstated: if we want to store the vector corresponding to the ground state in a computer using double precision, we would need 8 TB of memory for $$N=40$$, and for $$N=300$$, the number of basis states already exceeds the number of atoms in the observable universe! From these considerations, we see that exact diagonalization can only work in the limit of small system sizes.
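The numbers quoted above follow from simple arithmetic: a state vector of $$N$$ spins has $$2^N$$ amplitudes of 8 bytes each in double precision.

```python
# Memory for one real state vector of N spins: 2^N amplitudes, 8 bytes each
for N in (20, 30, 40):
    n_bytes = 2**N * 8
    print(f"N = {N}: {n_bytes / 1e9:.3g} GB")  # N = 40 already needs ~8.8e3 GB
```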