Interactive Quantum Many-Body Simulations using Matrix Product States

Matrix product states (MPS) are a way to reduce the computational cost of calculating properties of quantum many-body systems on a computer.

If you want to learn more, skip to the theory part; but if you just want to let your computer calculate some cool physics, select a model and an algorithm and press start!

Simulation

This site requires your browser to support Javascript and WebAssembly.

1. Select a Model

A quantum mechanical model describes particles that behave according to the rules of quantum mechanics. In many-body physics, most models are simplifications of real physical situations. Simplifications and approximations are almost always necessary because the formulas get complicated very quickly. However, these models can still provide useful predictions for real-world experiments.

1D Transverse Field Ising Model

The one-dimensional Ising model describes spin-1/2 particles arranged on a chain, where each particle interacts only with its nearest neighbors. Additionally, we add a transverse magnetic field. The Ising model can describe paramagnetic and ferromagnetic behaviour.

1D Bose-Hubbard Model

The one-dimensional Bose-Hubbard model describes a Boson chain with nearest neighbor interactions.

Details

The Hamiltonian of the transverse field Ising model is:

H = -J \sum_{\langle i,j \rangle} \sigma_{z,i} \sigma_{z,j} - B \sum_{j} \sigma_{x,j}

Nearest-neighbor interactions in one dimension mean that each particle interacts only with its left and right neighbor. This interaction is described by the first term: -J \sigma_{z,i} \sigma_{z,j}. J is the interaction strength between two neighboring spins σ_z on sites i and j. The alignment is ferromagnetic (parallel, ↑↑) if J > 0 and antiferromagnetic (antiparallel, ↑↓) if J < 0. The sum then accounts for this interaction over all neighboring pairs.

The second term with the σ_x Pauli matrix describes a spin interacting with a magnetic field of strength B in the x direction. In general, spins want to align themselves with the direction of the external field, so in this case the x direction.

For small B (relative to J), the system is in an ordered phase, since the spins align in the +z or -z direction. When B gets larger, the system goes into a quantum disordered phase because the spins no longer align in the z direction.
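To make this concrete, here is a minimal exact-diagonalization sketch in plain numpy (not the site's MPS code; all parameter values are illustrative): it builds the Hamiltonian above for a short open chain and computes the ground-state energy.

```python
import numpy as np

# Exact-diagonalization sketch (illustrative values, not the site's MPS code):
# build H = -J sum sz_i sz_{i+1} - B sum sx_j for a short open chain.
sz = np.diag([1.0, -1.0])                       # Pauli sigma_z
sx = np.array([[0.0, 1.0], [1.0, 0.0]])         # Pauli sigma_x
id2 = np.eye(2)

def op_on_site(op, site, L):
    """Embed a single-site operator into the full 2^L-dimensional space."""
    full = np.array([[1.0]])
    for s in range(L):
        full = np.kron(full, op if s == site else id2)
    return full

def tfi_hamiltonian(L, J, B):
    dim = 2 ** L
    H = np.zeros((dim, dim))
    for i in range(L - 1):                      # nearest-neighbor coupling
        H -= J * op_on_site(sz, i, L) @ op_on_site(sz, i + 1, L)
    for j in range(L):                          # transverse field
        H -= B * op_on_site(sx, j, L)
    return H

H = tfi_hamiltonian(L=8, J=1.0, B=0.5)
E0 = np.linalg.eigvalsh(H)[0]                   # ground-state energy
```

For L=8 the matrix is only 256×256, so dense diagonalization is still cheap; the exponential wall appears only a few sites later.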

The Hamiltonian of the one-dimensional Bose-Hubbard model is:

H = -t \sum_i \left( \hat{a}_i^\dagger \hat{a}_{i+1} + \hat{a}_i \hat{a}_{i+1}^\dagger \right) + \frac{U}{2} \sum_i \hat{n}_i (\hat{n}_i - 1) - \mu \sum_i \hat{n}_i

Here t is the hopping matrix element, \hat{a}_i^\dagger and \hat{a}_i the creation and annihilation operators acting on lattice site i, respectively, U the on-site repulsion, \hat{n}_i the particle number operator and μ the chemical potential.

The first sum describes bosons hopping to a neighboring site, the second term describes the repulsion of many bosons on the same site and the third sum describes the energy required to add or remove a particle from the system.

This model exhibits two phases, a superfluid phase and a Mott-insulating phase. In the Mott-insulating phase, the system is incompressible and has integer filling, meaning that every site is occupied by an integer number of particles. Although Bose-Einstein condensation does not occur in this one-dimensional model, the system still exhibits long-range coherence in the superfluid phase.
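As a sketch (plain numpy, illustrative parameters, nothing from the site's implementation), the same Hamiltonian can be built exactly in the occupation-number basis for a tiny chain with a cutoff of n_max bosons per site:

```python
import numpy as np
from itertools import product

# Exact-diagonalization sketch of the Bose-Hubbard model (illustrative
# parameters): enumerate all occupation patterns and fill in the matrix.
def bose_hubbard(L, n_max, t, U, mu):
    basis = list(product(range(n_max + 1), repeat=L))   # all occupation patterns
    index = {b: k for k, b in enumerate(basis)}
    dim = len(basis)
    H = np.zeros((dim, dim))
    for k, occ in enumerate(basis):
        # Diagonal part: on-site repulsion and chemical potential.
        H[k, k] = sum(0.5 * U * n * (n - 1) - mu * n for n in occ)
        # Off-diagonal part: hopping -t (a^dag_{i+1} a_i + h.c.).
        for i in range(L - 1):
            if occ[i] > 0 and occ[i + 1] < n_max:       # a boson hops i -> i+1
                new = list(occ)
                new[i] -= 1
                new[i + 1] += 1
                amp = -t * np.sqrt(occ[i] * (occ[i + 1] + 1))
                j = index[tuple(new)]
                H[j, k] += amp
                H[k, j] += amp                          # Hermitian conjugate
    return H

H = bose_hubbard(L=3, n_max=2, t=0.5, U=2.0, mu=1.0)
E0 = np.linalg.eigvalsh(H)[0]                           # ground-state energy
```

The basis grows as (n_max + 1)^L, which is exactly why the cutoff n_max matters so much for the tensor sizes later on.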


2. Select an Algorithm

These algorithms can be used to calculate properties of quantum many-body models.

Time Evolving Block Decimation (TEBD)

TEBD is a computational method used to simulate how quantum systems evolve over time.

Calculate a Phase Diagram with TEBD

We can plot a phase diagram for different models by calculating the ground state for a given parameter set. Many ground state calculations are performed, and the end result of each calculation becomes one pixel in the phase diagram.

Density Matrix Renormalization Group (DMRG)

DMRG is a computational method to efficiently compute the ground state of a quantum system.

Calculate a Phase Diagram with DMRG

We can plot a phase diagram for different models by calculating the ground state for a given parameter set. Many ground state calculations are performed, and the end result of each calculation becomes one pixel in the phase diagram.

Details

Time evolution of a quantum state is achieved by applying the time evolution operator U = e^{-i \hat{H} t} to the wavefunction ψ.

To do this efficiently, we perform a Suzuki-Trotter expansion of U, meaning we decompose it into smaller steps. Many types of expansions are possible, but here we use a simple first-order "brickwall" scheme. We decompose U into n_steps smaller steps of δt:

U = \exp(-i \hat{H} t) \approx \left[ \exp(-i \hat{H} \delta t) \right]^{n_{\mathrm{steps}}}

We now take ψ in MPS form and apply the small-time-step U to only two sites at a time, in an alternating fashion (shown in the illustration). After repeating this n_steps times, we have evolved ψ in time by t.

The size of the time step δt is very important: if it is too large, the numerical error becomes large. However, if it is too small, the simulation takes a very long time or might not advance at all.
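The brickwall scheme can be sketched on a dense state vector (the real algorithm applies these gates to an MPS instead; L, J, B, δt and the even/odd sweep order here are illustrative assumptions):

```python
import numpy as np

# Dense-statevector sketch of the first-order "brickwall" scheme for the TFI
# model (illustrative only: the real algorithm applies these gates to an MPS,
# and all parameter values here are made up).
L, J, B, dt, n_steps = 6, 1.0, 0.5, 0.05, 40

sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])

# Two-site bond Hamiltonian; the field term is split over both sites of the
# bond (boundary sites then see only half the field -- fine for a sketch).
h_bond = (-J * np.kron(sz, sz)
          - B / 2 * (np.kron(sx, np.eye(2)) + np.kron(np.eye(2), sx)))
w, V = np.linalg.eigh(h_bond)
gate = (V * np.exp(-1j * w * dt)) @ V.conj().T   # exp(-i h_bond dt), a 4x4 gate

def apply_gate(psi, gate, i, L):
    """Apply a two-site gate to sites i and i+1 of a dense state vector."""
    psi = psi.reshape(2 ** i, 4, 2 ** (L - i - 2))
    psi = np.einsum('ab,xbz->xaz', gate, psi)
    return psi.reshape(-1)

psi = np.zeros(2 ** L, dtype=complex)
psi[0] = 1.0                                     # start from all spins up
for _ in range(n_steps):
    for i in range(0, L - 1, 2):                 # even bonds first ...
        psi = apply_gate(psi, gate, i, L)
    for i in range(1, L - 1, 2):                 # ... then odd bonds
        psi = apply_gate(psi, gate, i, L)
norm = np.linalg.norm(psi)                       # unitary gates preserve the norm
```

The even/odd alternation is exactly the "brickwall" pattern: each layer of gates commutes internally, so it can be applied in one pass.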

Imaginary time evolution

TEBD can also be used to find the ground state of \hat{H}. If we take an imaginary time step -i δt, U becomes:

U = \left[ \exp(-\hat{H} \delta t) \right]^{n_{\mathrm{steps}}}

This corresponds to an exponential decay of all excited-state components, so the initial wavefunction relaxes into the ground state.

In practice, DMRG is usually much faster for computing ground states than TEBD with imaginary time evolution.
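A toy dense-matrix sketch of why imaginary time evolution works (a random Hermitian matrix stands in for a real Hamiltonian; all values are illustrative): repeatedly applying exp(-H δτ) and renormalizing drives any initial state toward the ground state.

```python
import numpy as np

# Toy sketch: imaginary time evolution drives a random state into the ground
# state. A random Hermitian matrix stands in for a real Hamiltonian.
rng = np.random.default_rng(0)
dim = 16
A = rng.normal(size=(dim, dim))
H = (A + A.T) / 2                               # random Hermitian "Hamiltonian"
w, V = np.linalg.eigh(H)

dtau, n_steps = 0.1, 500
U = (V * np.exp(-w * dtau)) @ V.T               # exp(-H dtau)

psi = rng.normal(size=dim)
psi /= np.linalg.norm(psi)
for _ in range(n_steps):
    psi = U @ psi
    psi /= np.linalg.norm(psi)                  # the decay is not unitary

overlap = abs(psi @ V[:, 0])                    # overlap with true ground state
```

Each step suppresses every excited component by exp(-ΔE δτ) relative to the ground state, so the overlap with the true ground state approaches 1.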

Here, we use the TEBD algorithm with imaginary time evolution to calculate the ground states while varying two parameters of the given model.
Before you run this, you should run a normal TEBD calculation with imaginary time evolution to find the optimal parameters. n_steps must be large enough for the energy to converge, meaning that the energy does not change significantly when adding more steps. If this is not the case, the TEBD algorithm is not calculating the correct ground state and the phase diagram will be wrong.

What is a phase diagram?

From these ground states we can then determine and plot a specific observable against the two varied parameters. Any jumps or weird, non-linear shapes usually indicate a phase transition, meaning the material behaves fundamentally differently in these parameter regions. One example of this is the phase diagram of water, where the parameters are temperature and pressure. A phase of matter can not only be fluid, solid, or gaseous; it can also refer to the material being ferromagnetic, paramagnetic, ordered, disordered, insulating, conducting, compressible, incompressible and a lot more.

Different models have different interesting observables which show different phases of matter:

  • Transverse field Ising model: magnetization M
  • Bose-Hubbard model: particle density n:
    In the Mott-insulating phase, the particle density is an integer.
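The phase-diagram loop can be sketched with exact diagonalization standing in for TEBD/DMRG (tiny chain, illustrative parameter grids). Since a finite system never spontaneously breaks the symmetry, the sketch plots sqrt(⟨M_z²⟩) instead of ⟨M_z⟩:

```python
import numpy as np

# Phase-diagram sketch for a tiny TFI chain: every (J, B) grid point yields
# one ground-state order-parameter value, i.e. one pixel of the diagram.
# Exact diagonalization stands in for TEBD/DMRG; all values are illustrative.
sz = np.diag([1.0, -1.0])
sx = np.array([[0.0, 1.0], [1.0, 0.0]])
id2 = np.eye(2)

def full_op(op, site, L):
    """Embed a single-site operator into the full 2^L-dimensional space."""
    out = np.array([[1.0]])
    for s in range(L):
        out = np.kron(out, op if s == site else id2)
    return out

def order_parameter(L, J, B):
    """sqrt(<M_z^2>) of the exact ground state (finite systems do not break
    the symmetry spontaneously, so <M_z> itself would vanish)."""
    H = np.zeros((2 ** L, 2 ** L))
    for i in range(L - 1):
        H -= J * full_op(sz, i, L) @ full_op(sz, i + 1, L)
    for j in range(L):
        H -= B * full_op(sx, j, L)
    psi = np.linalg.eigh(H)[1][:, 0]            # ground state
    M = sum(full_op(sz, i, L) for i in range(L)) / L
    return np.sqrt(psi @ (M @ M) @ psi)

L = 6
Js = np.linspace(0.5, 2.0, 4)
Bs = np.linspace(0.0, 1.5, 4)
diagram = np.array([[order_parameter(L, J, B) for B in Bs] for J in Js])
```

Plotting `diagram` as an image then gives the (very low resolution) phase diagram: large for small B/J, small in the disordered phase.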
[Illustration: the two-site DMRG update, showing the reshape, effective ground-state search, SVD and update steps]

To use DMRG, we start by representing the energy expectation value ⟨ψ|Ĥ|ψ⟩ in matrix product state form, using matrix product operators (MPO) W for the Hamiltonian.

Then we perform many two-site updates in sequence. For sites n and n+1, the two-site update looks like this:

  1. Split the network into an effective Hamiltonian H_eff and the two-site wavefunction Θ. The effective Hamiltonian consists of the two blocks L[n] and R[n+1] and the two MPOs between them; it acts on Θ, whose complex conjugate Θ* closes the network.
  2. Reshape H_eff into a matrix and Θ into a vector.
  3. Using the Lanczos algorithm, we now find the ground state of H_eff, which we call Θ̃. The Lanczos algorithm requires an initial guess for the ground state, for which we use Θ.
  4. Now we compress the new Θ̃ to keep the computational effort low. This is done by performing a singular value decomposition, which rewrites Θ̃ as a product of three matrices Θ̃ = U S V†, where S contains the singular values. We now simply throw away the very small singular values ("truncation"), since they contribute little information.
  5. Finally, we use the new optimized tensors to update L[n+1] and R[n], which will be used in later sweeps.

We have to perform multiple "sweeps" of these two-site updates until the (energy, magnetization, ...) expectation values converge. One sweep usually starts on the left with sites 1 and 2, progresses to sites N-1 and N, and then goes back to 1 and 2.

Before you run this, you should run a normal DMRG calculation to find the optimal parameters.

What is a phase diagram?

From these ground states we can then determine and plot a specific observable against the two varied parameters. Any jumps or weird, non-linear shapes usually indicate a phase transition, meaning the material behaves fundamentally differently in these parameter regions. One example of this is the phase diagram of water, where the parameters are temperature and pressure. A phase of matter can not only be fluid, solid, or gaseous; it can also refer to the material being ferromagnetic, paramagnetic, ordered, disordered, insulating, conducting, compressible, incompressible and a lot more.

Different models have different interesting observables which show different phases of matter:

  • Transverse field Ising model: magnetization M
  • Bose-Hubbard model: particle density n:
    In the Mott-insulating phase, the particle density is an integer.

3. Adjust Parameters and Start

Select the parameters for your model and algorithm. You can also choose one or multiple presets, which may even give you an explanation of the results! Canceling may still take a few seconds, and the calculation cannot be resumed afterwards.

Presets

Start Calculation

Simulations might take a looooong time with certain parameters. This browser tab must be visible, otherwise the calculation will pause.

Simulation time prediction


4. Results

Final results

Start a simulation to get results.

Matrix Product State Theory

Why are quantum many body models interesting?

Because our world is full of them! For instance, all matter consists of atoms, which in turn consist of protons, neutrons and electrons. These small particles obey the laws of quantum mechanics and, within any material, there are many of them, all interacting with each other. Thus, it is a quantum many-body system.

In physics, we often start at the very fundamental model and then expand from there, until we are happy with the accuracy of the model's predictions. The Ising model, for example, can be considered basic, but it can already describe material properties. It assumes a grid of spins, each interacting only with its nearest neighbors. This may sound unrealistic at first, but it is actually not too far off from reality: The magnetic properties of metals are mostly due to the spin of their valence electrons.

While the valence electrons are not perfectly aligned on a grid, they are spaced out rather consistently. Also, every electron interacts with all other electrons and the nuclei, but the interaction with the closest valence electrons will be the strongest. That's why the nearest-neighbor assumption can be used to save a lot of computational effort. In summary, the (3D) Ising model is not too bad of a description here. It already shows different phases of matter for different parameters, namely an (anti)ferromagnetic and a disordered phase (you can see this for yourself when you run the "Phase Diagram" algorithm above!)

While we can create 2D, 1D and even 0D systems in the real world, most systems are 3D. Every dimension makes the calculations exponentially harder, so I am sticking to 1D for now. You have to start somewhere!

Why are quantum many body models so hard to calculate?

If we again use the Ising model (but the 1D version), we are working with L spins that can each be in two states: either up (↑) or down (↓). This means the local dimension, d, equals 2.

Often we are interested in the ground state energy E_0 (the lowest energy state that the system can be in) of such systems, as the ground state contains lots of interesting physics. To calculate this energy, we need to diagonalize a matrix of size d^L × d^L. This can be extremely hard, even for just dozens of spins or particles.
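A quick back-of-the-envelope sketch of this exponential wall: just storing the dense d^L × d^L Hamiltonian for spin-1/2 (d = 2) in double precision quickly becomes impossible.

```python
# Back-of-the-envelope: the dense Hamiltonian of L spin-1/2 sites is a
# (2^L x 2^L) matrix of float64 values -- its size explodes with L.
d = 2
for L in (10, 20, 30, 40):
    dim = d ** L
    gigabytes = dim * dim * 8 / 1e9             # 8 bytes per float64 entry
    print(f"L={L}: {dim} x {dim} matrix, ~{gigabytes:.1e} GB")
```

Already at L = 20 the matrix would need on the order of terabytes, long before "dozens of spins" is reached.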

How do matrix product states help?

The idea of matrix product states (MPS) is to split big matrices and vectors up into a product of smaller ones. This can dramatically reduce the computational effort because we can often deal with a few sites at a time, instead of the whole system all at once.

This is shown in the illustration, where we turn the wavefunction ψ into a product of tensors M[i], one for each lattice site.

Such a schematic is called a tensor network diagram: each box is a scalar, vector, matrix or tensor, depending on the number of "legs" (connections). Each leg corresponds to one dimension of the container: a vector has one leg, a matrix two and a tensor more than two. We see that we actually form a product of tensors (and not of matrices, as the name "matrix product state" might suggest) for each lattice site. That's why the general term is tensor network or tensor product state. Matrix product states are just a special case in one dimension, where the tensor contractions resemble matrix multiplications.
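The decomposition itself is just a chain of SVDs. A minimal sketch (plain numpy, toy sizes, no truncation): split one site off at a time until the whole vector is a product of three-leg tensors, then contract them back to verify nothing was lost.

```python
import numpy as np

# Minimal sketch: decompose a state vector psi into one three-leg tensor
# M[i] per site by splitting off one site at a time with an SVD.
rng = np.random.default_rng(2)
L, d = 6, 2
psi = rng.normal(size=d ** L)
psi /= np.linalg.norm(psi)

tensors = []
rest = psi.reshape(1, -1)                       # dummy virtual leg on the left
for site in range(L - 1):
    chi_left = rest.shape[0]
    rest = rest.reshape(chi_left * d, -1)       # group (left, physical) legs
    U, S, Vh = np.linalg.svd(rest, full_matrices=False)
    tensors.append(U.reshape(chi_left, d, -1))  # M[site]: (left, phys, right)
    rest = np.diag(S) @ Vh                      # push the remainder to the right
tensors.append(rest.reshape(rest.shape[0], d, 1))

# Contracting all tensors back together must recover psi exactly.
out = tensors[0]
for M in tensors[1:]:
    out = np.tensordot(out, M, axes=([-1], [0]))
out = out.reshape(-1)
```

Without truncation this is exact but saves nothing; the savings come from cutting small singular values at each split, which bounds the tensor ("bond") dimensions.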

If you are interested in more details, take a look at the Wikipedia article or this freely accessible paper.

Implementation

This project was inspired by a lecture on computational quantum many-body physics from my master's studies. Most of the code is a reimplementation of what we wrote or were supplied with in the lecture, but in C++.

I then compiled the C++ code to WebAssembly using emscripten and glued it together with Javascript. When you load the page, your browser compiles the WebAssembly to machine code that can run natively on your device. This way, the code is executed much faster than if everything were implemented in Javascript, but it still works on every (somewhat modern) device.

The C++ code that performs the calculations can be found here.

Performance

Benchmarks

I benchmarked the single-threaded TEBD algorithm with the TFI model for Chromium, Firefox and native (meaning the code is run like a normal program and not in the browser). To keep the comparison between native and browser fair, the whole benchmark process is contained in a single call to a C++ function. One parameter was varied while the others were kept fixed at L = 12, n_max = 200, ϵ = 10^{-10} and χ_max = 20. Both times the code was compiled with the O2 optimization option, the native code with gcc and the WebAssembly with emscripten. I ran each benchmark on my desktop PC and my much weaker laptop.

[Benchmark plots: run time of the TEBD algorithm for Chromium, Firefox and native, generated with Matplotlib]

The plots show the run time of the algorithm for the three different platforms against the parameter that was varied, so lower is better. The markers represent the measured values and the lines fitted functions. We observe that native has a clear performance advantage over the two browsers. Interestingly, Firefox performed much better than Chromium, which I did not expect. Note that I performed just one run on each computer, which is not representative for comparing browser performance overall.

The fit function for the L benchmark is an exponential function f(L) = a·2^{b·L} + c; the one for the number of steps is linear: f(n_steps) = a·n_steps + b.

Potential improvements

Currently, everything is run in a single process. Some TEBD decompositions and especially the phase diagrams could be easily improved through multiprocessing. However, I have not yet found a way to get it working with emscripten.

One of the biggest performance killers on the website is the live plotting. Since the calculations and the website share the same thread, we have to periodically hand back control to the browser; otherwise it freezes while a calculation is in progress. Here, I am doing this after every algorithm step: a Javascript function updating the plots is called from the C++ side, and then control flow is handed back to the browser using emscripten_sleep(0). Skipping this and letting the browser freeze increased the performance by a factor of up to 4, depending on the model and algorithm.

MathML

There is a "new" way to render mathematical formulas and symbols on websites! Since 2023, all major browsers finally support MathML. No more rendering images of formulas in advance or with MathJax! Admittedly, it looks a bit worse than MathJax in some cases, but no Javascript is required, it is trivially styleable using CSS and even trivial to integrate using a markdown-to-HTML converter.

E = mc² + AI
Click this text to unlock benchmark options

The number of lattice sites. The computational cost increases exponentially with larger L.

The maximum bond dimension. The bigger χ_max, the more numbers can be used to represent the interaction between two neighboring particles. This can increase the accuracy but also the computational cost.

When performing the Schmidt decomposition, Schmidt values smaller than ϵ are disregarded, which reduces the bond dimension. A smaller ϵ can increase the accuracy but also the computational cost. In general it should be very small, at most 10^{-10}.

After each algorithm step, the system energy is calculated. If the energy difference between two successive algorithm steps is less than τ, the calculation is stopped, as the result is no longer changing. Set τ = 0 to force the algorithm to run all n steps/sweeps.

The number of time steps δt to run the TEBD algorithm for.
If τ>0, the calculation might be stopped early after converging.

The amount of time the system is evolved by with each step. If δt is too large, the accuracy is reduced. If δt is very small, you need many more steps, which increases the computational cost.

This selects whether δt is a real or imaginary number.
If it is real, the time evolution operator is complex:

U = \exp(-i \hat{H} \delta t)

Because exp(-ix) = cos(x) - i·sin(x), the system will evolve in real time and perform oscillations.
If it is imaginary with δt = -i δτ, the imaginary unit in the exponent cancels out and the time evolution operator becomes a real exponential:

U = \exp(-\hat{H} \delta\tau)

The system will decay into the ground state.

The decomposition determines in which order the time evolution operator is applied onto the bonds. Higher order decompositions improve the accuracy but may increase computational cost.

TEBD applies the time evolution operator only on two particles at once. After one TEBD step, all particles must have been evolved by δt. The decomposition is the way of achieving this: One could evolve each bond by δt, or evolve each bond just by δt/2 but twice, and so on.

The number of DMRG sweeps to perform. One sweep optimizes all bonds from left to right and then right to left.
If τ>0, the calculation might be stopped early after converging.

A large positive exchange energy means the spins want to align themselves parallel to each other.
A large negative exchange energy means the spins want to align themselves antiparallel to each other.

A larger magnetic field means the spins want to point in x direction more.

The direction in which the spins point when the simulation starts.

The on-site repulsion. The larger it is, the worse it is for particles to sit on the same lattice site.

The hopping matrix element describes how mobile the bosons are. A larger t means the particles can hop to neighboring sites more easily.

The chemical potential. The higher it is, the more particles want to sit on the same site.

The maximum number of particles that can occupy a lattice site.
One of the tensor dimensions is determined by this number.
This means we need to limit it; otherwise the tensors become huge and the calculation takes ages.

The initial number of particles per lattice site.

Note: TEBD can get stuck when the initial configuration is totally symmetric. That's why a small perturbation is applied to the occupation number of each lattice site.
This is done by adding a random value in [0, 0.1] to each initial B[i] tensor entry and then renormalizing the tensors.
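A sketch of that perturbation (the tensor shapes here are made up for illustration):

```python
import numpy as np

# Sketch of the perturbation described above; the tensor shapes are made up
# for illustration (4 sites, n_max = 2, bond dimension 1).
rng = np.random.default_rng(3)
B_tensors = [np.ones((1, 3, 1)) for _ in range(4)]   # toy initial B[i] tensors

for i, B in enumerate(B_tensors):
    B = B + rng.uniform(0.0, 0.1, size=B.shape)      # perturb every entry
    B_tensors[i] = B / np.linalg.norm(B)             # renormalize the tensor
```

The perturbation is small enough not to change the physics, but it breaks the exact symmetry that would otherwise trap TEBD in the initial configuration.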