Interactive Quantum Many-Body Simulations using Matrix Product States
Matrix product states (MPS) are a way to reduce the computational cost of calculating properties of quantum many-body systems on a computer.
If you want to learn more, skip to the theory part; if you just want to let your computer calculate some cool physics, select a model and an algorithm and press start!
Simulation
This site requires your browser to support Javascript and WebAssembly.
1. Select a Model
A quantum mechanical model describes particles that behave according to the rules of quantum mechanics. In many-body physics, the models most commonly used are simplifications of real physical situations. Simplifications and approximations are almost always necessary because the formulas become complicated very quickly. However, these models can still provide useful predictions for real-world experiments.
1D Transverse Field Ising Model
The one-dimensional Ising model describes spin-1/2 particles arranged on a chain, where each particle interacts only with its nearest neighbors. Additionally, we add a transverse magnetic field. The Ising model can describe paramagnetic and ferromagnetic behaviour.
1D Bose-Hubbard Model
The one-dimensional Bose-Hubbard model describes a chain of bosons that hop between neighboring sites and repel each other when occupying the same site.
The Hamiltonian of the transverse field Ising model is:

$$ H = -J \sum_{i} \sigma^z_i \sigma^z_{i+1} - h \sum_{i} \sigma^x_i $$

Nearest-neighbor interactions in one dimension mean that each particle interacts only with its left and right neighbor. This interaction is described by the first term, $-J \sum_i \sigma^z_i \sigma^z_{i+1}$, where $J$ is the interaction strength between two neighboring spins on sites $i$ and $i+1$. The alignment is ferromagnetic (parallel, $\uparrow\uparrow$) if $J > 0$ and antiferromagnetic (antiparallel, $\uparrow\downarrow$) if $J < 0$. The sum accounts for this interaction over all neighboring pairs.

The second term, with the Pauli matrix $\sigma^x$, describes a spin-1/2 particle interacting with a magnetic field of strength $h$ in the $x$ direction. In general, spins want to align themselves with the external field, so in this case in the $x$ direction.

For small $h$ (relative to $J$), the system is in an ordered phase, since the spins align in the $+z$ or $-z$ direction. When $h$ gets larger, the system goes into a quantum disordered phase, because the spins no longer align in the $z$ direction.
The Hamiltonian of the one-dimensional Bose-Hubbard model is:

$$ H = -t \sum_{i} \left( b^\dagger_i b_{i+1} + b^\dagger_{i+1} b_i \right) + \frac{U}{2} \sum_{i} n_i (n_i - 1) - \mu \sum_{i} n_i $$

Here $t$ is the hopping matrix element, $b^\dagger_i$ and $b_i$ are the creation and annihilation operators acting on lattice site $i$, $U$ is the on-site repulsion, $n_i = b^\dagger_i b_i$ is the particle number operator and $\mu$ is the chemical potential.
The first sum describes bosons hopping to a neighboring site, the second term describes the repulsion of many bosons on the same site and the third sum describes the energy required to add or remove a particle from the system.
This model exhibits two phases: a superfluid phase and a Mott-insulating phase. In the Mott-insulating phase, the system is incompressible and has integer filling, meaning that every site is occupied by an integer number of particles. Although Bose-Einstein condensation does not occur in this one-dimensional model, the system still exhibits long-range coherence in the superfluid phase.
2. Select an Algorithm
These algorithms can be used to calculate properties of quantum many-body models.
Time Evolving Block Decimation (TEBD)
TEBD is a computational method used to simulate how quantum systems evolve over time.
Calculate a Phase Diagram with TEBD
We can plot a phase diagram for different models by calculating the ground state for a given parameter set. Many ground state calculations are performed, and the end result of each calculation becomes one pixel in the phase diagram.
Density Matrix Renormalization Group (DMRG)
DMRG is a computational method to efficiently compute the ground state of a quantum system.
Calculate a Phase Diagram with DMRG
We can plot a phase diagram for different models by calculating the ground state for a given parameter set. Many ground state calculations are performed, and the end result of each calculation becomes one pixel in the phase diagram.
Time evolution of a quantum state is achieved by applying the time evolution operator $U(t) = e^{-iHt}$ to the wavefunction $|\psi\rangle$.

To do this efficiently, we perform a Suzuki-Trotter expansion of $U(t)$, meaning we decompose it into smaller steps. Many types of expansions are possible, but here we use a simple first-order "brickwall" scheme. Writing the Hamiltonian as a sum of two-site terms, $H = \sum_i h_{i,i+1}$, we decompose $U(t)$ into $N$ smaller steps of size $\Delta t = t / N$:

$$ e^{-iHt} = \left( e^{-iH\Delta t} \right)^N \approx \left( \prod_{i\ \mathrm{even}} e^{-i h_{i,i+1} \Delta t} \prod_{i\ \mathrm{odd}} e^{-i h_{i,i+1} \Delta t} \right)^N $$

We now take $|\psi\rangle$ in MPS form and apply the small-time-step operator $e^{-i h_{i,i+1} \Delta t}$ to only two sites at once, in an alternating fashion (shown in the illustration). After repeating this $N$ times, we have evolved $|\psi\rangle$ in time by $t$.

The size of the time step $\Delta t$ is very important: if it is too large, the numerical error becomes large; if it is too small, the simulation takes very long or might not advance at all.
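To make the brickwall structure more concrete, here is a rough C++ sketch of a single TEBD step. The type and function names (MPS, TwoSiteGate, apply_gate) are illustrative placeholders, not the actual classes used by this project:

```cpp
#include <vector>

// Hypothetical placeholder types: in the real code these hold the MPS site
// tensors and the exponentiated two-site gate exp(-i h_{i,i+1} dt).
struct MPS {};
struct TwoSiteGate {};

// Contract the gate into sites (i, i+1), then split and truncate the result
// back into MPS form (stub here; the real code does this with an SVD).
void apply_gate(MPS& /*psi*/, const TwoSiteGate& /*gate*/, int /*i*/) {}

// One first-order "brickwall" step: all even bonds first, then all odd bonds.
void tebd_step(MPS& psi, const std::vector<TwoSiteGate>& gates) {
    const int bonds = static_cast<int>(gates.size());  // = number of sites - 1
    for (int i = 0; i < bonds; i += 2) apply_gate(psi, gates[i], i);  // even bonds
    for (int i = 1; i < bonds; i += 2) apply_gate(psi, gates[i], i);  // odd bonds
}
// Repeating tebd_step N times evolves the state by t = N * dt.
```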
Imaginary time evolution
TEBD can also be used to find the ground state of $H$. If we take an imaginary time step, $\Delta t \to -i\,\Delta\tau$, the time evolution operator for one step becomes:

$$ e^{-iH\Delta t} \;\longrightarrow\; e^{-\Delta\tau H} $$
This corresponds to an exponential suppression of all excited states, so the initial wavefunction decays towards the ground state.
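To see why this works, one can expand the initial state in the energy eigenbasis (a standard argument, not specific to this implementation): with $H|n\rangle = E_n |n\rangle$,

$$ e^{-\tau H} |\psi\rangle = \sum_n c_n\, e^{-\tau E_n} |n\rangle \;\propto\; c_0 |0\rangle + \sum_{n>0} c_n\, e^{-\tau (E_n - E_0)} |n\rangle , $$

so for large $\tau$ all excited states are suppressed exponentially and only the ground state $|0\rangle$ survives (after renormalizing the state).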
In practice, DMRG is usually much faster at computing the ground state than TEBD with imaginary time evolution.
Here, we use the TEBD algorithm with imaginary time evolution to calculate the ground states while varying two parameters of the given model.

Before you run this, you should run a normal TEBD calculation with imaginary time evolution to find the optimal parameters. The number of time steps must be large enough for the energy to converge, meaning that the energy does not change significantly when adding more steps. If this is not the case, the TEBD algorithm is not calculating the correct ground state and the phase diagram will be wrong.
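Schematically, the phase diagram calculation is just a double loop over the two varied parameters. The following C++ sketch illustrates the idea with made-up helper names (run_imaginary_tebd, measure_observable), not the project's actual interface:

```cpp
#include <vector>

// Hypothetical stand-ins for the project's actual routines.
struct GroundState {};
GroundState run_imaginary_tebd(double p1, double p2) { return {}; }
double measure_observable(const GroundState& /*gs*/) { return 0.0; }

// One full ground-state calculation per (p1, p2) pair; its observable
// becomes one pixel of the phase diagram.
std::vector<std::vector<double>> phase_diagram(const std::vector<double>& p1_values,
                                               const std::vector<double>& p2_values) {
    std::vector<std::vector<double>> pixels;
    for (double p1 : p1_values) {
        std::vector<double> row;
        for (double p2 : p2_values) {
            GroundState gs = run_imaginary_tebd(p1, p2);
            row.push_back(measure_observable(gs));
        }
        pixels.push_back(row);
    }
    return pixels;
}
```

Since every pixel is an independent calculation, this is also why the phase diagrams would parallelize well (see the performance section below).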
What is a phase diagram?
From these ground states we can then determine a specific observable and plot it against the two varied parameters. Any jumps or other sharp, non-linear features usually indicate a phase transition, meaning that the material behaves fundamentally differently in these parameter regions. One example of this is the phase diagram of water, where the parameters are temperature and pressure. A phase of matter is not limited to fluid, solid, or gaseous; the term can also refer to the material being ferromagnetic, paramagnetic, ordered, disordered, insulating, conducting, compressible, incompressible and a lot more.
Different models have different interesting observables which show different phases of matter:
- Transverse field Ising model: magnetization
- Bose-Hubbard model: particle density. In the Mott-insulating phase, the particle density is an integer.
To use DMRG, we start by representing the energy expectation value $\langle \psi | H | \psi \rangle$ in matrix product state form, using a matrix product operator (MPO) for the Hamiltonian.
Then we perform many two-site updates in sequence. For sites $i$ and $i+1$, the two-site update looks like this:
- Split the network into an effective Hamiltonian $H_{\mathrm{eff}}$ and the two-site wavefunction $\theta$. The effective Hamiltonian consists of the left and right blocks $L$ and $R$ (contractions of the MPS, its complex conjugate, and the MPO) and the two MPO tensors between them.
- Reshape $H_{\mathrm{eff}}$ into a matrix and $\theta$ into a vector.
- Using the Lanczos algorithm, we now find the ground state of $H_{\mathrm{eff}}$, which we call $\theta'$. The Lanczos algorithm requires an initial guess for the ground state, for which we use the current $\theta$.
- Now we compress the new $\theta'$ to keep the computational effort low. This is done by performing a singular value decomposition, which rewrites $\theta'$ as a product of three matrices, $\theta' = U S V^\dagger$, where the diagonal matrix $S$ contains the singular values. We then simply throw away the very small singular values ("truncation"), since they do not contribute much information.
- Finally, we use the new optimized tensors to update the blocks $L$ and $R$, which will be used in later sweeps.
We have to perform multiple "sweeps" of these two-site updates until the expectation values (energy, magnetization, ...) converge. One sweep usually starts on the left with sites $1$ and $2$, progresses to sites $N-1$ and $N$, and then goes back to sites $1$ and $2$.
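The sweep structure can be summarized in a few lines of C++; two_site_update stands in for the whole two-site update described in the list above, and all names here are hypothetical placeholders rather than the project's real classes:

```cpp
// Hypothetical placeholder types for the state and the Hamiltonian MPO.
struct MPS { int length = 0; };
struct MPO {};

// Optimize sites (i, i+1): build H_eff, run Lanczos, truncate, update L and R.
void two_site_update(MPS& /*psi*/, const MPO& /*H*/, int /*i*/) {}

// One sweep: left to right over all bonds, then back right to left.
void dmrg_sweep(MPS& psi, const MPO& H) {
    for (int i = 0; i + 1 < psi.length; ++i) two_site_update(psi, H, i);
    for (int i = psi.length - 2; i >= 0; --i) two_site_update(psi, H, i);
}
// Sweeps are repeated until the energy (or another observable) stops changing.
```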
Before you run this, you should run a normal DMRG calculation to find the optimal parameters.
The explanation under "What is a phase diagram?" in the TEBD phase-diagram section above applies here as well, including the relevant observables for each model.
3. Adjust Parameters and Start
Select the parameters for your model and algorithm. You can also choose one or more presets, which may even give you an explanation of the results! Canceling may still take a few seconds, and the calculation cannot be resumed afterwards.
Presets
Start Calculation
Simulations might take a looooong time with certain parameters. This browser tab must stay visible; otherwise the calculation will pause.
4. Results
Final results
Start a simulation to get results.
Matrix Product State Theory
Why are quantum many body models interesting?
Because our world is full of them! For instance, all matter consists of atoms, which in turn consist of protons, neutrons and electrons. These small particles obey the laws of quantum mechanics and, within any material, there are many of them, all interacting with each other. Thus, it is a quantum many-body system.
In physics, we often start with a very fundamental model and then expand from there until we are happy with the accuracy of the model's predictions. The Ising model, for example, can be considered basic, but it can already describe material properties. It assumes a grid of spins, each interacting only with its nearest neighbors. This may sound unrealistic at first, but it is actually not too far off from reality: the magnetic properties of metals are mostly due to the spin of their valence electrons.
While the valence electrons are not perfectly aligned on a grid, they are spaced out rather consistently. Also, every electron interacts with all other electrons and the nuclei, but the interaction with the closest valence electrons will be the strongest. That is why the nearest-neighbor assumption can be used to save a lot of computational effort. In summary, the (3D) Ising model is not too bad of a description here. It already shows different phases of matter for different parameters, namely an (anti)ferromagnetic and a disordered phase (you can see this for yourself when you run the "Phase Diagram" algorithm above!).
While we can create 2D, 1D and even 0D systems in the real world, most systems are 3D. Every dimension makes the calculations exponentially harder, so I am sticking to 1D for now. You have to start somewhere!
Why are quantum many body models so hard to calculate?
If we again use the Ising model (but the 1D version), we are working with spins that can each be in two states: up $\uparrow$ or down $\downarrow$. This means the local dimension $d$ equals 2.
Often we are interested in the ground state energy (the energy of the lowest state the system can be in) of such systems, as the ground state contains lots of interesting physics. To calculate this energy exactly, we need to diagonalize a matrix of size $d^N \times d^N$, where $N$ is the number of particles. This can be extremely hard, even for just dozens of spins or particles.
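To get a feeling for how quickly this becomes hopeless, a quick back-of-the-envelope estimate (not specific to this code):

$$ \dim \mathcal{H} = d^N = 2^N, \qquad 2^{20} \approx 10^6, \qquad 2^{50} \approx 1.1 \times 10^{15} . $$

Just storing a single state vector for 50 spins would already take petabytes of memory, let alone diagonalizing a matrix of that size.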
How do matrix product states help?
The idea of matrix product states (MPS) is to split big matrices and vectors up into a product of smaller ones. This can dramatically reduce the computational effort because we can often deal with a few sites at a time, instead of the whole system all at once.
This is shown in the illustration, where we turn the wavefunction $|\psi\rangle$ into a product of smaller tensors, one for each lattice site.
Such a schematic is called a tensor network diagram: each box is a scalar, vector, matrix or tensor, depending on the number of "legs" (connections). Each leg corresponds to one dimension of the container: a vector has one leg, a matrix two and a tensor more than two. We see that we actually form a product of tensors (and not matrices, as the name "matrix product state" might suggest), one for each lattice site. That is why the general term is tensor network or tensor product state. Matrix product states are just a special case in one dimension, where the tensor contractions resemble matrix multiplications.
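Written as a formula, the decomposition in the diagram corresponds to the standard MPS form (generic notation, not the site's own derivation):

$$ |\psi\rangle = \sum_{s_1,\dots,s_N} A^{[1]s_1} A^{[2]s_2} \cdots A^{[N]s_N}\, |s_1 s_2 \dots s_N\rangle , $$

where each $A^{[i]s_i}$ is a small matrix (a tensor with one physical leg and two bond legs), and the matrix products over the bond indices reproduce the coefficients of the full wavefunction.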
If you are interested in more details, take a look at the Wikipedia article or this freely accessible paper.
Implementation
This project was inspired by a lecture on computational quantum many-body physics from my master's studies. Most of the code is a reimplementation, in C++, of what we wrote or were supplied with in the lecture.
I then compiled the C++ code to WebAssembly using emscripten and glued it together with JavaScript. When you load the page, your browser compiles the WebAssembly to machine code that can run natively on your device. This way, the code executes much faster than if everything were implemented in JavaScript, but it still works on every (somewhat modern) device.
The C++ code that performs the calculations can be found here.
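As an illustration of how the C++ and JavaScript sides can talk to each other, here is a minimal hypothetical example; run_simulation and its parameters are made up for illustration, the real project exposes its own interface:

```cpp
#include <emscripten/emscripten.h>

extern "C" {

// EMSCRIPTEN_KEEPALIVE prevents the function from being optimized away, so
// the JavaScript glue code can call it after the WebAssembly module loads.
// Name and parameters are hypothetical.
EMSCRIPTEN_KEEPALIVE
double run_simulation(double J, double h, int steps) {
    double energy = 0.0;
    // ... the actual TEBD/DMRG calculation would go here ...
    (void)J; (void)h; (void)steps;
    return energy;
}

}  // extern "C"
```

On the JavaScript side, such a function can then be reached through the Module object that emscripten generates, for example with Module.ccall.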
Performance
Benchmarks
I benchmarked the single-threaded TEBD algorithm with the TFI model in Chromium, in Firefox, and natively (meaning the code runs as a normal program and not in the browser).
To keep the comparison between native and browser fair, the whole benchmark process is contained in a single function call to a C++ function.
One parameter was varied at a time while the others were kept fixed.
Both times the code was compiled with the O2 optimization option: the native code with gcc and the WebAssembly with emscripten.
I ran each benchmark on my desktop PC and my much weaker laptop.
The plots show the run time of the algorithm on the three platforms against the parameter that was varied, so lower is better. The markers represent the measured values and the lines are fitted functions. We observe that native execution has a clear performance advantage over the two browsers. Interestingly, Firefox performed much better than Chromium, which I did not expect. Note that I performed just one run on each computer, which is not representative enough for a general comparison of browser performance.
The fit function for the first benchmark is a power function, $f(x) = a\,x^b$; the one for the number of steps is linear, $f(x) = a\,x + b$.
Potential improvements
Currently, everything is run in a single process. Some TEBD decompositions, and especially the phase diagrams, could easily be sped up through multiprocessing. However, I have not yet found a way to get this working with emscripten.
One of the biggest performance killers on the website is the live plotting. Since the calculations and the website share the same thread, we have to periodically hand control back to the browser; otherwise it freezes while a calculation is in progress. Here, I am doing this after every algorithm step: a JavaScript function that updates the plots is called from the C++ side, and then the control flow is handed back to the browser using emscripten_sleep(0). Skipping this and letting the browser freeze increased the performance by a factor of up to 4, depending on the model and algorithm.
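The pattern looks roughly like this in C++ (ASYNCIFY must be enabled for emscripten_sleep to work; the JavaScript function update_plots is just an illustrative name, not part of this project's real interface):

```cpp
#include <emscripten.h>

void algorithm_step() { /* one TEBD/DMRG step (stub) */ }

void run(int n_steps) {
    for (int step = 0; step < n_steps; ++step) {
        algorithm_step();
        // Call into JavaScript to update the live plots
        // (update_plots is assumed to be defined on the page).
        EM_ASM({ update_plots($0); }, step);
        // Hand control back to the browser so the page does not freeze.
        emscripten_sleep(0);
    }
}
```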
MathML
There is a "new" way to render mathematical formulas and symbols in websites!
Finally since 2023, all major browsers support MathML.
No more rendering images of formulas in advance or with mathjax
! Admittedly, it looks a bit worse than mathjax
in some cases,
but there is no javascript required, it is trivially styleable using CSS and even trivial to integrate using a markdown-to-html converter.