Akshay Shankar
2022-11-28
Self-consistency loop: \[H \equiv
H\{\psi_i, \rho_i\} \hspace{1cm}\forall i \in \text{unit cell}\]
\[H \cdot|\Psi_{gs}\rangle = E_{gs} \cdot
|\Psi_{gs}\rangle\] \[\psi_i = \langle
\Psi_{gs} | \hat{a}_i | \Psi_{gs} \rangle \hspace{1cm} \rho_i = \langle
\Psi_{gs} | \hat{n}_i | \Psi_{gs} \rangle\]
Recast in terms of a multi-variate function:
\[f(\{\psi_i, \rho_i\}) \rightarrow
\text{Diagonalize } H\{\psi_i, \rho_i\} \rightarrow \text{Evaluate
}\Psi_{gs} \text{ expectation values} \rightarrow \{\psi_i',
\rho_i'\}\]
Self-consistency \(\equiv\) finding a fixed point of this function.
\[f(\{\psi_i^*, \rho_i^*\}) = \{\psi_i^*, \rho_i^*\}\]
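As a toy, hypothetical instance of this map (not the actual cluster Hamiltonian above), a single-site Gutzwiller-style mean field for the Bose-Hubbard model keeps only \(\psi\) and drops \(\rho_i\); the cutoff \(n_{\max}\) and the couplings \(zt\), \(U\), \(\mu\) below are illustrative values, not from the text:

```python
import numpy as np

# Minimal single-site Gutzwiller-style mean field for the Bose-Hubbard model,
# as a toy stand-in for the cluster map f({psi_i, rho_i}) above.
# n_max, zt, U, mu are illustrative values, not taken from the text.
n_max = 6
a = np.diag(np.sqrt(np.arange(1.0, n_max + 1)), k=1)   # annihilation operator
num = np.diag(np.arange(n_max + 1.0))                  # number operator

def f(psi, zt=0.4, U=1.0, mu=0.5):
    """One self-consistency step: build H(psi), diagonalize, return <a>_gs."""
    h = -zt * psi * (a + a.T) + 0.5 * U * (num @ num - num) - mu * num
    _, v = np.linalg.eigh(h)
    gs = v[:, 0]                 # ground state (lowest eigenvalue)
    return gs @ a @ gs           # new order parameter psi' = <a>_gs

psi = 0.1
for _ in range(1000):
    psi_new = f(psi)
    if abs(psi_new - psi) < 1e-12:
        break
    psi = psi_new
```

At these couplings the fixed point sits in the superfluid phase, so the converged \(\psi^*\) is nonzero and satisfies \(f(\psi^*) = \psi^*\) to numerical precision.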
Given a function \(f(x)\), find a fixed point \(x^*\) such that \(f(x^*) = x^*\).
\[f(x^{(0)}) = x^{(1)}\] \[f(x^{(1)}) = x^{(2)}\] \[\vdots\] \[f(x^{(n)}) \approx x^{*}\]
Repeatedly apply \(f\) to an initial guess \(x^{(0)}\) until convergence:
\[f(f(f(...f(x)))) \rightarrow
x^*\]
Seems easy enough?
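For a scalar toy case, the naive iteration is just a loop; \(f = \cos\) is a standard example (not from the text) whose fixed point is the Dottie number \(x^* \approx 0.739\):

```python
import numpy as np

def fixed_point_iterate(f, x0, tol=1e-12, max_iter=1000):
    """Naive Picard iteration: apply f until successive iterates agree."""
    x = x0
    for k in range(max_iter):
        x_new = f(x)
        if abs(x_new - x) < tol:
            return x_new, k + 1
        x = x_new
    return x, max_iter

x_star, n_iter = fixed_point_iterate(np.cos, 1.0)
```

Since \(|\cos'(x^*)| \approx 0.67 < 1\), the iteration converges linearly, in a few dozen steps here.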
The iteration can fail to converge, manifesting as 2-cycles, like so:
\[f(\{\psi_A, \psi_B, \rho_A, \rho_B\}) = \{\psi_A', \psi_B', \rho_A', \rho_B'\}\] \[f(\{\psi_A', \psi_B', \rho_A', \rho_B'\}) = \{\psi_A, \psi_B, \rho_A, \rho_B\}\]
Slow convergence is deceptively similar to a 2-cycle over short runs:
\[f(\{\psi_A, \psi_B, \rho_A, \rho_B\})
\approx \{\psi_A', \psi_B', \rho_A', \rho_B'\}\]
\[f(\{\psi_A', \psi_B', \rho_A',
\rho_B'\}) \approx \{\psi_A, \psi_B, \rho_A, \rho_B\}\]
The relative error with respect to the true fixed point decreases only sub-linearly with the number of iterations. \[\underbrace{f(f(f(...f(\{\psi_A, \psi_B, \rho_A, \rho_B\}))))}_{\text{a looooot of times}} \rightarrow \{\psi_A^*, \psi_B^*, \rho_A^*, \rho_B^*\}\]
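The distinction is easy to reproduce with a toy scalar map (an illustrative example, not the physical model): when \(f'(x^*)\) is close to \(-1\), the iterates oscillate between two slowly drifting values, so a short run looks exactly like a 2-cycle even though it eventually converges.

```python
# f'(x*) = -0.99, close to -1: iterates oscillate and converge very slowly,
# so a short run is indistinguishable from a genuine 2-cycle.
f = lambda x: 1.0 - 0.99 * x      # fixed point x* = 1/1.99

xs = [0.0]
for _ in range(10):
    xs.append(f(xs[-1]))          # consecutive iterates still differ by ~0.9

x = xs[-1]
for _ in range(5000):             # ...but the apparent "cycle" collapses
    x = f(x)
```

After ten steps consecutive iterates still differ by order one; only after thousands of steps does the oscillation die out.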
Use more sophisticated solvers (Anderson/Nesterov
acceleration).
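A minimal sketch of Anderson acceleration (in the Walker–Ni mixing form; the window size \(m\) and damping \(\beta\) are conventional defaults, not values from the text): the next iterate is extrapolated from a least-squares fit over the last few residuals \(g_k = f(x_k) - x_k\).

```python
import numpy as np

def anderson_fixed_point(f, x0, m=5, beta=1.0, tol=1e-10, max_iter=500):
    """Anderson acceleration: extrapolate the next iterate from a window
    of the last m residuals g_k = f(x_k) - x_k (Walker-Ni form)."""
    x = np.asarray(x0, dtype=float)
    X, G = [], []                      # histories of iterates and residuals
    for k in range(max_iter):
        g = f(x) - x
        if np.linalg.norm(g) < tol:
            return x, k
        X.append(x); G.append(g)
        if len(X) > m:                 # keep only the last m entries
            X.pop(0); G.pop(0)
        if len(X) == 1:
            x = x + beta * g           # first step: plain (damped) Picard
        else:
            dX = np.array([X[i + 1] - X[i] for i in range(len(X) - 1)]).T
            dG = np.array([G[i + 1] - G[i] for i in range(len(G) - 1)]).T
            # least-squares mixing coefficients over the residual history
            gamma, *_ = np.linalg.lstsq(dG, g, rcond=None)
            x = x + beta * g - (dX + beta * dG) @ gamma
    return x, max_iter
```

On a linear map with an eigenvalue near \(-1\) (the oscillating, 2-cycle-like situation above) this converges in a handful of steps, where plain iteration needs hundreds.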
Naive method: compute the order parameter on a grid of parameter values and
find the points where it jumps.
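The grid scan can be sketched in a few lines; the data below is a toy step profile standing in for a computed order parameter (the jump location 0.395 is illustrative, not a real result):

```python
import numpy as np

def locate_jump(grid, order_param):
    """Return the midpoint of the grid interval with the largest jump."""
    i = int(np.argmax(np.abs(np.diff(order_param))))
    return 0.5 * (grid[i] + grid[i + 1])

# toy data: a first-order-like jump at mu ~ 0.395 (illustrative only)
mu = np.linspace(0.0, 1.0, 101)
psi = np.where(mu < 0.395, 0.0, 0.8)
```

The resolution is limited by the grid spacing, which is why this counts as the naive method.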
Train the network weights to minimize \(\langle \hat{H} \rangle\); this
requires a better gradient-descent algorithm.
Start with a spin-\(1/2\) Heisenberg model:
\[H = -J \sum_{\langle i, j\rangle}
\vec{S_i} \cdot \vec{S_j}\]
\[H =
-J \sum_{\langle b \rangle} \left[\frac{1}{2} \left(S_{i(b)}^+S_{j(b)}^- +
S_{j(b)}^+S_{i(b)}^-\right) + S_{i(b)}^z S_{j(b)}^z\right]\]
\[H = -J\sum_b \left(\underbrace{H_{b,
1}}_{\text{off-diagonal}} + \underbrace{H_{b,
2}}_{\text{diagonal}}\right)\]
\[Z = \text{Tr}\left[\exp(-\beta H)\right]\]
\[Z = \text{Tr}\left[ \sum_n
\frac{(-\beta)^n}{n!} \cdot \left(\sum_b \left(H_{b, 1} + H_{b, 2}\right)\right)^n
\right ]\]
\[Z = \sum_n
\frac{(-\beta)^n}{n!} \cdot \sum_{|\alpha\rangle}\sum_{S_n} \langle
\alpha | \left (\prod_{\{b, i\} \in S_n} H_{b, i} \right) | \alpha
\rangle\]
A configuration of the system is the pair \([|\alpha\rangle, S_n]\). Sample
these ergodically to compute diagonal observables.
XXZ spin-\(1/2\) model:
\[H = \frac{J_x}{2} \sum_{\langle i, j \rangle} (S_{i}^+S_{j}^- + S_{j}^+S_{i}^-) + J_z\sum_{\langle i, j \rangle} S_{i}^z S_{j}^z + h_z \sum_i S_i^z\] eBHM with hard-core bosons:
\[H = -t\sum_{\langle i, j \rangle} (a_i^{\dagger} a_j + a_j^{\dagger}a_i) + V\sum_{\langle i, j\rangle} n_i n_j - \mu \sum_i n_i\]
Map the operators like so: \[S_i^+ \equiv a_i^{\dagger} \hspace{1cm} S_i^z \equiv (n_i - 1/2)\] Analogous quantities: \[t \equiv \frac{J_x}{2} \hspace{1cm} V \equiv J_z \hspace{1cm} \mu = J_z - h_z\]
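The single-site part of the mapping can be checked directly with \(2 \times 2\) matrices, ordering the basis as \(\{|0\rangle, |1\rangle\} = \{|\!\downarrow\rangle, |\!\uparrow\rangle\}\):

```python
import numpy as np

# Single-site check of the spin <-> hard-core boson mapping,
# basis ordering {|0>, |1>} = {|down>, |up>}.
a = np.array([[0.0, 1.0],
              [0.0, 0.0]])            # hard-core boson annihilation
n = a.T @ a                           # number operator = diag(0, 1)
Sp = np.array([[0.0, 0.0],
               [1.0, 0.0]])           # S+ : |down> -> |up>
Sz = np.diag([-0.5, 0.5])
# expected: Sp == a^dagger and Sz == n - 1/2
```

The opposite signs of the hopping terms (\(+J_x/2\) in the XXZ model vs.\ \(-t\) in the eBHM) can be absorbed by the sublattice gauge \(a_i \rightarrow -a_i\) on one sublattice of a bipartite lattice, so the spectra agree.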
Binder’s cumulant: \[U_L = 1 - \frac{\langle m^4 \rangle_L}{3\langle m^2 \rangle^2_L}\]
\[t = 0 \hspace{0.2cm} (T = T_c) \hspace{1cm} \implies \hspace{1cm} U_L = \text{const.} \hspace{0.2cm} \forall L,\] where \(t = (T - T_c)/T_c\) is the reduced temperature, so the \(U_L(T)\) curves for different \(L\) cross at \(T_c\).
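The two limiting values of the cumulant are easy to verify numerically with synthetic samples of the order parameter (toy distributions, not simulation data): a Gaussian \(m\) (disordered phase) gives \(\langle m^4 \rangle = 3\langle m^2 \rangle^2\), hence \(U_L \rightarrow 0\), while \(m = \pm m_0\) (ordered phase) gives \(U_L \rightarrow 2/3\).

```python
import numpy as np

rng = np.random.default_rng(0)

def binder(m):
    """Binder cumulant U_L from samples of the order parameter m."""
    m = np.asarray(m)
    return 1.0 - np.mean(m**4) / (3.0 * np.mean(m**2) ** 2)

# disordered phase: m Gaussian around 0        ->  U -> 0
u_disordered = binder(rng.normal(0.0, 1.0, 10**6))
# ordered phase: m concentrated at +/- m0      ->  U -> 2/3
u_ordered = binder(rng.choice([-0.3, 0.3], 10**6))
```

Between these limits, \(U_L\) interpolates in an \(L\)-dependent way except at \(T_c\), which is what makes the crossing a transition-point estimator.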
The transition point does not seem to match?
Finite-size scaling is not discernible visually?