Given a quantum many-body Hamiltonian, one of the first goals is usually to identify the possible ground-state phases and map out the corresponding phase diagram in parameter space. Generally, each phase is associated with an observable, called the order parameter, that takes a non-zero value only if the ground state is in that particular phase. Plotting the phase diagram then becomes a matter of determining the regions where the various order parameters vanish.
While solving the whole many-body problem is a computationally expensive task, we will try to show that plotting the phase diagram does not necessarily require all of the information contained in the ground state, which allows us to make some smart optimizations.
The Bose-Hubbard model
Let us consider a really simple system to demonstrate this: the 1D Bose-Hubbard model, which describes interacting bosons in an optical lattice. What follows is largely based on things I worked on during my master’s thesis, so I will skip some details that can be found there. The Hamiltonian for the system is given as follows:
\[ \hat{H} = -t\sum_{i} (\hat{a}^{\dagger}_i \hat{a}_{i+1} + \hat{a}^{\dagger}_{i+1} \hat{a}_{i} ) + \frac{U}{2}\sum_{i} n_i (n_i - 1) - \mu \sum_i n_i \]
where \(\hat{a}_i\)/\(\hat{a}^{\dagger}_i\) are bosonic annihilation/creation operators satisfying the canonical commutation relations \([\hat{a}_i, \hat{a}^{\dagger}_j] = \delta_{ij}\). The system is governed by three parameters: \(t\), the hopping strength, which controls tunneling between sites; \(U\), the repulsive interaction between atoms on the same site; and \(\mu\), the chemical potential, which determines the number of atoms in the system. We may measure all quantities in units of \(U\), reducing our parameter space to \((t/U, \mu/U)\).
Examining the limiting cases of the Hamiltonian, we can get an idea of the possible ground state phases of the system. In the limit \(t/U \ll 1\), the system tends to localize into the lattice sites, resulting in a Mott-insulator phase which is incompressible and has a fixed particle number per site. On the other hand, for \(t/U \gg 1\) the ground state is a completely delocalized state of atoms condensed into the lowest Bloch mode. One may then expect some kind of Bose-Einstein condensate phase that has a coherent superposition of Fock states on each lattice site. We then expect the global \(U(1)\) symmetry to be broken, allowing us to use \(\langle \hat{a}_i \rangle\) as an order parameter for this phase.
(*Strictly speaking, what we have is a superfluid phase, since a true BEC does not occur in 1D at any temperature. However, in what follows we work in the mean-field limit, where we explicitly break the \(U(1)\) symmetry anyway. Within this approximation, we effectively do have a BEC precisely whenever the system exhibits superfluidity, even though the true many-body state may not spontaneously break the symmetry. I will elaborate on this in a future post.)
A mean-field analysis
While we could proceed with a full many-body simulation, perhaps using matrix product states, the point can be made already at the mean-field level. Note that from a physics standpoint, the mean-field approach is absolutely terrible in one dimension, as it ignores the quantum fluctuations that are dominant in lower dimensions. As such, the rest of this post should be treated as an overview of the numerical tricks involved rather than a truthful display of the nature of the phase transition in the full many-body system.
The mean-field approach proceeds as follows: (1) we first decompose the annihilation operator in terms of its expectation value in the ground state, \(\Psi_i = \langle \hat{a}_i \rangle\) (which is also the order parameter), and a small fluctuation \(\delta \hat{a}_i\), such that \(\hat{a}_i = \Psi_i + \delta \hat{a}_i\), and then (2) ignore the \(\mathcal{O}(\delta \hat{a}_i^2)\) contributions to the Hamiltonian. Assuming that we work in the thermodynamic limit of a translationally invariant system, the problem reduces to solving a single-site Hamiltonian (the same for every site by virtue of translation invariance):
\[ \hat{H}_{i, MF}(\Psi) = -2t\Psi(\hat{a}_i + \hat{a}^{\dagger}_i) + \frac{U}{2}n_i(n_i-1) - \mu n_i + 2t \Psi^2 \]
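To see where the \(\Psi\)-dependent terms come from, expand a single hopping term to first order in the fluctuations (taking \(\Psi\) real and uniform, as we do throughout):

\[ \hat{a}^{\dagger}_i \hat{a}_{i+1} = (\Psi + \delta\hat{a}^{\dagger}_i)(\Psi + \delta\hat{a}_{i+1}) \approx \Psi(\hat{a}^{\dagger}_i + \hat{a}_{i+1}) - \Psi^2 \]

Doing the same for the reversed hop and regrouping the sum by site produces the linear \(-2t\Psi(\hat{a}_i + \hat{a}^{\dagger}_i)\) term and the constant shift \(2t\Psi^2\); the factor of 2 is just the coordination number of a 1D chain.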
We see that this is now a self-consistency problem: define the map \(f(\Psi)\) by diagonalizing \(\hat{H}_{MF}(\Psi)\) to obtain its ground state \(|\psi_{gs}(\Psi)\rangle\) and computing \(f(\Psi) = \langle \psi_{gs}(\Psi)| \hat{a}_i | \psi_{gs}(\Psi) \rangle\). The order parameter of the ground state is then a fixed point, \(\Psi = f(\Psi)\).
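In code, one application of this map amounts to a single diagonalization. Here is a minimal sketch (assuming a matrix â for the annihilation operator and a function H that builds the mean-field matrix for a given \(\Psi\); both are constructed properly below):

# one application of the self-consistency map: ground state of H(psi) → new psi
f(psi, H, â) = (v = eigvecs(H(psi))[:, 1]; v' * â * v)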
A naive phase diagram
We will solve this self-consistent system using the most naive technique: fixed-point iteration. Basically, we begin with some initial guess \(\Psi^{(0)}\) and compute \(\Psi^{(n)} = f(\Psi^{(n-1)})\) repeatedly until convergence is achieved. It is important to note that \(f\) must satisfy certain constraints for convergence to be guaranteed regardless of the initial guess (for instance, being a contraction near the fixed point, as in the Banach fixed-point theorem). While I do not know of a mathematical proof for this particular system, numerically it does seem to converge (although this is no longer true if nearest-neighbour interactions are introduced in the system).
using LinearAlgebra, Plots

# cutoff necessary since bosonic space is infinite dimensional
mean_field_ops(nmax) = (diagm(1 => sqrt.(1:nmax)), diagm(0 => 0:nmax))
BHM(â, n̂; t, mu, psi, U=1.0) = -mu * n̂ + 0.5 * U * n̂ * (n̂ - I) - 2t * psi * (â + â') + 2t * psi^2 * I
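# quick sanity check (not in the original post): within the cutoff the truncated
# operators still satisfy n̂ = â†â (≈ rather than ==, since sqrt.(1:nmax).^2 only
# reproduces 1:nmax up to floating-point error)
let (â, n̂) = mean_field_ops(4)
    @assert n̂ ≈ â' * â
end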
function solve(â, n̂, H; atol=1e-8, callback=identity)
    # initial guess
    state = rand(eltype(â), size(â, 1))
    psi = state' * â * state
    psi_old = psi
    while true
        ## f(psi)
        state = eigvecs(H(psi))[:, 1]   # ground state of the mean-field Hamiltonian
        state ./= norm(state)
        psi = state' * â * state
        ##
        # check convergence
        (norm(psi - psi_old) < atol) && break
        psi_old = psi
        # for injecting other custom logic later on; by default it does nothing
        callback(psi)
    end
    return state, psi
end

solve(â, n̂; t, mu, atol=1e-8) = solve(â, n̂, psi -> BHM(â, n̂, t=t, mu=mu, psi=psi), atol=atol)
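For instance, for the parameter point \(t = 0.1\), \(\mu = 0.5\) (which, as the convergence plots below show, lies in the superfluid regime), a usage sketch looks like:

â, n̂ = mean_field_ops(4)
state, psi = solve(â, n̂; t=0.1, mu=0.5)   # psi converges to a non-zero value here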
We can now plot a naive phase diagram by simply looping over a grid of parameter values and visualizing the magnitude of the order parameter.
begin
    â, n̂ = mean_field_ops(4)
    npoints = 50
    ts = range(0.0, 0.12, npoints)
    mus = range(0.0, 3.0, npoints)
    res = zeros(length(mus), length(ts))
    # can be multi-threaded since each computation is entirely independent of the others
    # (start Julia with e.g. `--threads=auto` for this to actually run in parallel)
    Threads.@threads for idx in CartesianIndices(res)
        H = psi -> BHM(â, n̂, t=ts[idx.I[2]], mu=mus[idx.I[1]], psi=psi)
        _, res[idx] = solve(â, n̂, H)
    end
    heatmap(ts, mus, res)
end
As expected, there are Mott-insulator lobes where the order parameter vanishes, with a second-order transition to the superfluid phase where the \(U(1)\) symmetry is broken. However, if all we are interested in is the phase boundary, and not the actual value of the order parameter, we can do much better.
Since we know that what we are looking for is a fixed point, we might expect that simply checking whether \(f(0) = 0\) would immediately tell us whether we are in the Mott-insulator regime, without having to go through the iterative procedure. However, it turns out that \(\Psi = 0\) is always a fixed point of the system; it is simply an unstable one when the system is in the superfluid regime. Perhaps we can still salvage this line of thinking. Let us check the actual convergence of the order parameter for some parameter values to get a better idea of what's going on:
# callback to record data on every iteration
struct RecordObservable{T}
    data::Vector{T}
end
(o::RecordObservable)(psi) = push!(o.data, psi)

function convergence_plot(t, mu; n=7, xlims)
    p = plot(framestyle=:box, ylabel="Order parameter", xlabel="Number of iterations",
             title="\n(t=$t | mu=$mu)", legend=:topright, xlims=xlims)
    history = RecordObservable(Float64[])
    # n runs, each starting from an independent random initial guess
    for _ in 1:n
        history = RecordObservable(Float64[])
        solve(â, n̂, psi -> BHM(â, n̂, t=t, mu=mu, psi=psi), callback=history)
        plot!(history.data, lab="", ls=:dashdot, lw=1.5)
    end
    hline!([history.data[end]], c=:black, lw=2, alpha=0.75, lab="Converged value")
    return p
end

display(plot(convergence_plot(0.06, 0.5, xlims=[1, 20]), convergence_plot(0.1, 0.5, xlims=[1, 20]), size=(1000, 400), margin=10Plots.mm))
We see that the convergence proceeds strictly monotonically towards the stable fixed point. This means that all we need to do is begin with \(\Psi^{(0)} = \epsilon \approx 0\) and check whether it increases or decreases after a single iteration: if \(f(\epsilon) > \epsilon\), the trivial fixed point is unstable and the system is a superfluid; otherwise it is a Mott insulator. The accuracy of the phase boundary is then of course limited by how small we choose \(\epsilon\). Using this technique, we can plot the phase diagram much faster.
function isSuperfluid(H; psi=1e-8)
    ## f(psi), starting from the tiny seed psi
    state = eigvecs(H(psi))[:, 1]
    state ./= norm(state)
    # superfluid iff the order parameter grew under a single iteration
    return (state' * â * state) > psi
end

res = zeros(length(mus), length(ts))
Threads.@threads for idx in CartesianIndices(res)
    H = psi -> BHM(â, n̂, t=ts[idx.I[2]], mu=mus[idx.I[1]], psi=psi)
    res[idx] = isSuperfluid(H)
end
heatmap(ts, mus, res)
This is already great, but we can do even better. Notice how, for a given value of \(\mu/U\), there is exactly one point in \(t/U\) where the order parameter first becomes non-zero. This means that we can find the transition point precisely by simply using the bisection method along \(t/U\) for each \(\mu/U\), giving us a high-precision phase boundary for very little computational work (the error halves with each bisection, dropping as \(2^{-n}\) after \(n\) steps).
function bisection(f, low, high; atol=1e-8)
    # assumes f(low) = true and f(high) = false; maintains this invariant while shrinking the bracket
    mid = 0.0
    while !isapprox(low, high, atol=atol)
        mid = (low + high) / 2
        if f(mid)
            low = mid
        else
            high = mid
        end
    end
    return mid
end

mus = range(0.0, 3.0, length=100)
ts = zeros(size(mus))
Threads.@threads for idx in eachindex(mus)
    # f(t) = "is a Mott insulator at hopping t"; true at t = 0, false at t = 0.1
    ts[idx] = bisection(t -> !isSuperfluid(psi -> BHM(â, n̂, t=t, mu=mus[idx], psi=psi)), 0.0, 0.1)
end
plot(ts, mus, legend=:topright, lw=2, c=:black, lab="")
scatter!(ts, mus, c=:black, lab="", xlims=[0, 0.12])
To obtain a boundary of similar resolution by solving for the ground state on an entire grid of parameter values would be several orders of magnitude slower, and wasteful in terms of the information we actually use. Of course, there is plenty of information in the actual ground state, such as the correlations in the system, that provides more insight into the nature of the phase transition, but none of it is required if all we want is a boundary. Such a scheme would still work if there are more than two phases, although more book-keeping may be involved to find a combination of order parameters that serves as a binary quantity identifying each regime, as sketched below.
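A rough sketch of that book-keeping (hypothetical: order_params stands for whatever set of order-parameter evaluations a given model provides at a parameter point p; it is not part of the code above):

# hypothetical sketch: a phase is identified by which order parameters vanish
vanishing(p; atol=1e-8) = abs.(order_params(p)) .< atol
same_phase(p1, p2) = vanishing(p1) == vanishing(p2)
# parametrize a line p(s) between two points known to lie in different phases,
# then locate the boundary with the same bisection routine as before:
# s_c = bisection(s -> same_phase(p_low, p(s)), 0.0, 1.0)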
Finally, we note that the observations here were specific to the fact that the mean-field approach resulted in a self-consistent equation. Furthermore, if the phase boundary gets more complex, for example with two jumps in \(t/U\) for a given \(\mu/U\), as in the true 1D phase diagram, applying the bisection method also becomes harder. So I suppose the goal of this post was not to provide a one-size-fits-all solution, but rather to display some general ideas that could be adapted, or serve as inspiration, to simplify the determination of phase boundaries for other systems and numerical methods.