Hartree-Fock (HF) theory is an algorithm for finding an approximate expression for the ground state of a given Hamiltonian. The basic ingredients are a chosen single-particle basis \( \{\psi_{\alpha}\} \) and a single Slater determinant built from it, together with the Hartree-Fock Hamiltonian \( \hat{h}^{\mathrm{HF}}=\hat{t}+\hat{u}_{\mathrm{ext}}+\hat{u}^{\mathrm{HF}} \), where the single-particle potential \( \hat{u}^{\mathrm{HF}} \) is determined self-consistently by the Hartree-Fock algorithm discussed below.
We will show that the Hartree-Fock Hamiltonian \( \hat{h}^{\mathrm{HF}} \) equals our definition of the operator \( \hat{f} \) discussed in connection with the new definition of the normal-ordered Hamiltonian (see later lectures), that is we have, for a specific matrix element, $$ \langle p |\hat{h}^{\mathrm{HF}}| q \rangle =\langle p |\hat{f}| q \rangle=\langle p|\hat{t}+\hat{u}_{\mathrm{ext}}|q \rangle +\sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS}, $$ meaning that $$ \langle p|\hat{u}^{\mathrm{HF}}|q\rangle = \sum_{i\le F} \langle pi | \hat{V} | qi\rangle_{AS}. $$ The so-called Hartree-Fock potential \( \hat{u}^{\mathrm{HF}} \) brings an explicit medium dependence due to the summation over all single-particle states below the Fermi level \( F \). It also brings in an explicit dependence on the two-body interaction (in nuclear physics we can also have complicated three- or higher-body forces). The two-body interaction, with its contributions from the other fermions in the medium, creates an effective mean field in which a given fermion moves, in addition to the external potential \( \hat{u}_{\mathrm{ext}} \) which confines the motion of the fermion. For systems like nuclei, there is no external confining potential. Nuclei are examples of self-bound systems, where the binding arises due to the intrinsic nature of the strong force. For nuclear systems there is thus no external one-body potential in the Hartree-Fock Hamiltonian.
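As a small illustration, the defining relation for \( \hat{u}^{\mathrm{HF}} \) translates directly into code once the antisymmetrized two-body matrix elements are available. The sketch below is a minimal example, assuming the elements \( \langle pi|\hat{V}|qi\rangle_{AS} \) are stored in a NumPy array `v_as` and that `occupied` lists the states below the Fermi level; both names are illustrative and not part of the program discussed later in these notes.

```python
import numpy as np

# Minimal sketch: Hartree-Fock potential u^HF_{pq} = sum_{i<=F} <pi|V|qi>_AS.
# v_as[p, i, q, j] = <pi|V|qj>_AS and `occupied` are assumed, illustrative names.
def hartree_fock_potential(v_as, occupied):
    n = v_as.shape[0]
    u_hf = np.zeros((n, n))
    for p in range(n):
        for q in range(n):
            u_hf[p, q] = sum(v_as[p, i, q, i] for i in occupied)
    return u_hf
```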
Before we proceed we need some definitions. We will assume that the interacting part of the Hamiltonian can be approximated by a two-body interaction. This means that our Hamiltonian is written as the sum of a one-body part and a two-body part $$ \begin{equation} \hat{H} = \hat{H}_0 + \hat{H}_I = \sum_{i=1}^A \hat{h}_0(x_i) + \sum_{i < j}^A \hat{v}(r_{ij}), \label{Hnuclei} \end{equation} $$ with $$ \begin{equation} \hat{H}_0=\sum_{i=1}^A \hat{h}_0(x_i). \label{hinuclei} \end{equation} $$ The one-body part, through the external potential \( u_{\mathrm{ext}}(x_i) \), is normally approximated by a harmonic oscillator or a Woods-Saxon potential, or, for electronic systems, by the Coulomb interaction an electron feels from the nucleus. However, other potentials are fully possible, such as one derived from the self-consistent solution of the Hartree-Fock equations to be discussed here.
Our Hamiltonian is invariant under the permutation (interchange) of two particles. Since we deal with fermions however, the total wave function is antisymmetric. Let \( \hat{P} \) be an operator which interchanges two particles. Due to the symmetries we have ascribed to our Hamiltonian, this operator commutes with the total Hamiltonian, $$ [\hat{H},\hat{P}] = 0, $$ meaning that \( \Psi_{\lambda}(x_1, x_2, \dots , x_A) \) is an eigenfunction of \( \hat{P} \) as well, that is $$ \hat{P}_{ij}\Psi_{\lambda}(x_1, x_2, \dots,x_i,\dots,x_j,\dots,x_A)= \beta\Psi_{\lambda}(x_1, x_2, \dots,x_i,\dots,x_j,\dots,x_A), $$ where \( \beta \) is the eigenvalue of \( \hat{P} \). We have introduced the suffix \( ij \) in order to indicate that we permute particles \( i \) and \( j \). The Pauli principle tells us that the total wave function for a system of fermions has to be antisymmetric, resulting in the eigenvalue \( \beta = -1 \).
In our case we assume that we can approximate the exact eigenfunction with a Slater determinant $$ \begin{equation} \Phi(x_1, x_2,\dots ,x_A,\alpha,\beta,\dots, \sigma)=\frac{1}{\sqrt{A!}} \left| \begin{array}{ccccc} \psi_{\alpha}(x_1)& \psi_{\alpha}(x_2)& \dots & \dots & \psi_{\alpha}(x_A)\\ \psi_{\beta}(x_1)&\psi_{\beta}(x_2)& \dots & \dots & \psi_{\beta}(x_A)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \psi_{\sigma}(x_1)&\psi_{\sigma}(x_2)& \dots & \dots & \psi_{\sigma}(x_A)\end{array} \right|, \label{eq:HartreeFockDet} \end{equation} $$ where \( x_i \) stand for the coordinates and spin values of particle \( i \) and \( \alpha,\beta,\dots, \sigma \) are the quantum numbers needed to specify the single-particle states.
The single-particle functions \( \psi_{\alpha}(x_i) \) are eigenfunctions of the one-body Hamiltonian \( \hat{h}_0 \), that is $$ \hat{h}_0(x_i)=\hat{t}(x_i) + \hat{u}_{\mathrm{ext}}(x_i), $$ with eigenvalues $$ \hat{h}_0(x_i) \psi_{\alpha}(x_i)=\left(\hat{t}(x_i) + \hat{u}_{\mathrm{ext}}(x_i)\right)\psi_{\alpha}(x_i)=\varepsilon_{\alpha}\psi_{\alpha}(x_i). $$ The energies \( \varepsilon_{\alpha} \) are the so-called non-interacting single-particle energies, or unperturbed energies. The total energy is in this case the sum over all single-particle energies, if no two-body or more complicated many-body interactions are present.
Let us denote the ground state energy by \( E_0 \). According to the variational principle we have $$ E_0 \le E[\Phi] = \int \Phi^*\hat{H}\Phi d\mathbf{\tau}, $$ where \( \Phi \) is a trial function which we assume to be normalized $$ \int \Phi^*\Phi d\mathbf{\tau} = 1, $$ where we have used the shorthand \( d\mathbf{\tau}=dx_1dx_2\dots dx_A \).
Before we proceed with a more compact representation of a Slater determinant, we would like to repeat some linear algebra properties which will be useful for our derivations of the energy as a function of a Slater determinant, for Hartree-Fock theory and, later, for the nuclear shell model.
The inverse of a matrix is defined by $$ \mathbf{A}^{-1} \cdot \mathbf{A} = I. $$ A unitary matrix \( \mathbf{A} \) is one whose inverse is its adjoint $$ \mathbf{A}^{-1}=\mathbf{A}^{\dagger}. $$ A real unitary matrix is called orthogonal and its inverse is equal to its transpose. A Hermitian matrix is equal to its own adjoint (it is self-adjoint), that is $$ \mathbf{A}=\mathbf{A}^{\dagger}. $$
Relations | Name | matrix elements |
---|---|---|
\( A = A^{T} \) | symmetric | \( a_{ij} = a_{ji} \) |
\( A = \left (A^{T} \right )^{-1} \) | real orthogonal | \( \sum_k a_{ik} a_{jk} = \sum_k a_{ki} a_{kj} = \delta_{ij} \) |
\( A = A^{ * } \) | real matrix | \( a_{ij} = a_{ij}^{ * } \) |
\( A = A^{\dagger} \) | hermitian | \( a_{ij} = a_{ji}^{ * } \) |
\( A = \left (A^{\dagger} \right )^{-1} \) | unitary | \( \sum_k a_{ik} a_{jk}^{ * } = \sum_k a_{ki}^{ * } a_{kj} = \delta_{ij} \) |
Since we will deal with Fermions (identical and indistinguishable particles) we will form an ansatz for a given state in terms of so-called Slater determinants determined by a chosen basis of single-particle functions.
For a given \( n\times n \) matrix \( \mathbf{A} \) we can write its determinant $$ det(\mathbf{A})=|\mathbf{A}|= \left| \begin{array}{ccccc} a_{11}& a_{12}& \dots & \dots & a_{1n}\\ a_{21}&a_{22}& \dots & \dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ a_{n1}& a_{n2}& \dots & \dots & a_{nn}\end{array} \right|, $$ in a more compact form as $$ |\mathbf{A}|= \sum_{i=1}^{n!}(-1)^{p_i}\hat{P}_i a_{11}a_{22}\dots a_{nn}, $$ where \( \hat{P}_i \) is a permutation operator which permutes the column indices \( 1,2,3,\dots,n \) and the sum runs over all \( n! \) permutations. The quantity \( p_i \) represents the number of transpositions of column indices that are needed in order to bring a given permutation back to its initial ordering, in our case given by \( a_{11}a_{22}\dots a_{nn} \) here.
A simple \( 2\times 2 \) determinant illustrates this. We have $$ det(\mathbf{A})= \left| \begin{array}{cc} a_{11}& a_{12}\\ a_{21}&a_{22}\end{array} \right|= (-1)^0a_{11}a_{22}+(-1)^1a_{12}a_{21}, $$ where in the last term we have interchanged the column indices \( 1 \) and \( 2 \). The natural ordering we have chosen is \( a_{11}a_{22} \).
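The permutation expansion above can also be checked numerically. The sketch below is a simple illustration (by no means an efficient algorithm): it sums over all \( n! \) permutations of the column indices, weighting each term with the parity of the permutation, and compares with NumPy's determinant.

```python
import numpy as np
from itertools import permutations

# Determinant via the permutation expansion |A| = sum_P (-1)^p a_{1,P(1)} ... a_{n,P(n)}
def det_by_permutations(A):
    n = A.shape[0]
    total = 0.0
    for perm in permutations(range(n)):
        # parity of the permutation: count the number of inversions
        inversions = sum(1 for i in range(n) for j in range(i + 1, n)
                         if perm[i] > perm[j])
        sign = (-1) ** inversions
        prod = 1.0
        for row, col in enumerate(perm):
            prod *= A[row, col]
        total += sign * prod
    return total

A = np.random.rand(3, 3)
print(det_by_permutations(A), np.linalg.det(A))  # the two numbers should agree
```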
With the above we can rewrite our Slater determinant in a more compact form. In the Hartree-Fock method the trial function is the Slater determinant of Eq. \eqref{eq:HartreeFockDet} which can be rewritten as $$ \Phi(x_1,x_2,\dots,x_A,\alpha,\beta,\dots,\nu) = \frac{1}{\sqrt{A!}}\sum_{P} (-)^P\hat{P}\psi_{\alpha}(x_1) \psi_{\beta}(x_2)\dots\psi_{\nu}(x_A)=\sqrt{A!}\hat{A}\Phi_H, $$ where we have introduced the antisymmetrization operator \( \hat{A} \) defined by the summation over all possible permutations of two particles.
It is defined as $$ \begin{equation} \hat{A} = \frac{1}{A!}\sum_{p} (-)^p\hat{P}, \label{antiSymmetryOperator} \end{equation} $$ with \( p \) standing for the parity of the permutation \( \hat{P} \), that is the number of transpositions needed to build it. We have introduced for later use the so-called Hartree-function, defined by the simple product of all possible single-particle functions $$ \Phi_H(x_1,x_2,\dots,x_A,\alpha,\beta,\dots,\nu) = \psi_{\alpha}(x_1) \psi_{\beta}(x_2)\dots\psi_{\nu}(x_A). $$
Both \( \hat{H}_0 \) and \( \hat{H}_I \) are invariant under all possible permutations of any two particles and hence commute with \( \hat{A} \) $$ \begin{equation} [H_0,\hat{A}] = [H_I,\hat{A}] = 0. \label{commutionAntiSym} \end{equation} $$ Furthermore, \( \hat{A} \) satisfies $$ \begin{equation} \hat{A}^2 = \hat{A}, \label{AntiSymSquared} \end{equation} $$ since every permutation of the Slater determinant reproduces it.
The expectation value of \( \hat{H}_0 \) $$ \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = A! \int \Phi_H^*\hat{A}\hat{H}_0\hat{A}\Phi_H d\mathbf{\tau} $$ is readily reduced to $$ \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = A! \int \Phi_H^*\hat{H}_0\hat{A}\Phi_H d\mathbf{\tau}, $$ where we have used Eqs. \eqref{commutionAntiSym} and \eqref{AntiSymSquared}. The next step is to replace the antisymmetrization operator by its definition and to replace \( \hat{H}_0 \) with the sum of one-body operators $$ \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = \sum_{i=1}^A \sum_{p} (-)^p\int \Phi_H^*\hat{h}_0\hat{P}\Phi_H d\mathbf{\tau}. $$
The integral vanishes if two or more particles are permuted in only one of the Hartree-functions \( \Phi_H \) because the individual single-particle wave functions are orthogonal. We obtain then $$ \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau}= \sum_{i=1}^A \int \Phi_H^*\hat{h}_0\Phi_H d\mathbf{\tau}. $$ Orthogonality of the single-particle functions allows us to further simplify the integral, and we arrive at the following expression for the expectation values of the sum of one-body Hamiltonians $$ \begin{equation} \int \Phi^*\hat{H}_0\Phi d\mathbf{\tau} = \sum_{\mu=1}^A \int \psi_{\mu}^*(x)\hat{h}_0\psi_{\mu}(x)dx. \label{H1Expectation} \end{equation} $$
We introduce the following shorthand for the above integral $$ \langle \mu | \hat{h}_0 | \mu \rangle = \int \psi_{\mu}^*(x)\hat{h}_0\psi_{\mu}(x)dx, $$ and rewrite Eq. \eqref{H1Expectation} as $$ \begin{equation} \int \Phi^*\hat{H}_0\Phi d\tau = \sum_{\mu=1}^A \langle \mu | \hat{h}_0 | \mu \rangle. \label{H1Expectation1} \end{equation} $$
The expectation value of the two-body part of the Hamiltonian is obtained in a similar manner. We have $$ \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = A! \int \Phi_H^*\hat{A}\hat{H}_I\hat{A}\Phi_H d\mathbf{\tau}, $$ which reduces to $$ \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = \sum_{i < j=1}^A \sum_{p} (-)^p\int \Phi_H^*\hat{v}(r_{ij})\hat{P}\Phi_H d\mathbf{\tau}, $$ by following the same arguments as for the one-body Hamiltonian.
Because of the dependence on the inter-particle distance \( r_{ij} \), the terms in which the two interacting particles are interchanged no longer vanish, and we get $$ \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = \sum_{i < j=1}^A \int \Phi_H^*\hat{v}(r_{ij})(1-P_{ij})\Phi_H d\mathbf{\tau}, $$ where \( P_{ij} \) is the permutation operator that interchanges particle \( i \) and particle \( j \). Again we use the assumption that the single-particle wave functions are orthogonal.
We obtain $$ \begin{align} \int \Phi^*\hat{H}_I\Phi d\mathbf{\tau} = \frac{1}{2}\sum_{\mu=1}^A\sum_{\nu=1}^A &\left[ \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j)\hat{v}(r_{ij})\psi_{\mu}(x_i)\psi_{\nu}(x_j) dx_idx_j \right. \label{_auto1}\\ &\left. - \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j) \hat{v}(r_{ij})\psi_{\nu}(x_i)\psi_{\mu}(x_j) dx_idx_j \right]. \label{H2Expectation} \end{align} $$ The first term is the so-called direct term. It gives rise to the Hartree term in Hartree-Fock theory, while the second is due to the Pauli principle and is called the exchange term and gives rise to the Fock term in the Hartree-Fock equations. The factor \( 1/2 \) is introduced because we now run over all pairs twice.
The last equation allows us to introduce some further definitions. The single-particle wave function \( \psi_{\mu}(x) \), specified by the quantum numbers \( \mu \) and the coordinate \( x \), is given by the overlap $$ \psi_{\mu}(x) = \langle x | \mu \rangle . $$
We introduce the following shorthands for the above two integrals $$ \langle \mu\nu|\hat{v}|\mu\nu\rangle = \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j)\hat{v}(r_{ij})\psi_{\mu}(x_i)\psi_{\nu}(x_j) dx_idx_j, $$ and $$ \langle \mu\nu|\hat{v}|\nu\mu\rangle = \int \psi_{\mu}^*(x_i)\psi_{\nu}^*(x_j) \hat{v}(r_{ij})\psi_{\nu}(x_i)\psi_{\mu}(x_j) dx_idx_j. $$
Our functional can then be written in a compact version as $$ E[\Phi] = \sum_{\mu}^A \langle \mu | \hat{h}_0 | \mu\rangle+ \frac{1}{2}\sum_{\mu\nu}^A\left[\langle \mu\nu |\hat{v}|\mu\nu\rangle-\langle \nu\mu |\hat{v}|\mu\nu\rangle\right]. $$
Since the interaction is invariant under the interchange of two particles it means for example that we have $$ \langle \mu\nu|\hat{v}|\mu\nu\rangle = \langle \nu\mu|\hat{v}|\nu\mu\rangle, $$ or in the more general case $$ \langle \mu\nu|\hat{v}|\sigma\tau\rangle = \langle \nu\mu|\hat{v}|\tau\sigma\rangle. $$
The direct and exchange matrix elements can be brought together if we define the antisymmetrized matrix element $$ \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}= \langle \mu\nu|\hat{v}|\mu\nu\rangle-\langle \mu\nu|\hat{v}|\nu\mu\rangle, $$ or for a general matrix element $$ \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{AS}= \langle \mu\nu|\hat{v}|\sigma\tau\rangle-\langle \mu\nu|\hat{v}|\tau\sigma\rangle. $$ It has the symmetry property $$ \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{AS}= -\langle \mu\nu|\hat{v}|\tau\sigma\rangle_{AS}=-\langle \nu\mu|\hat{v}|\sigma\tau\rangle_{AS}. $$ The antisymmetric matrix element is also hermitian, implying $$ \langle \mu\nu|\hat{v}|\sigma\tau\rangle_{AS}= \langle \sigma\tau|\hat{v}|\mu\nu\rangle_{AS}. $$
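If the matrix elements are stored as a four-index array, the antisymmetrized elements and their symmetry property are easy to verify numerically. In the sketch below the array `v` is a random placeholder on which we impose the permutation symmetry of the previous paragraph; it does not represent a physical interaction.

```python
import numpy as np

# Build <mn|v|st>_AS = <mn|v|st> - <mn|v|ts> and check the stated symmetries.
n = 4
v = np.random.rand(n, n, n, n)
v = 0.5 * (v + v.transpose(1, 0, 3, 2))        # impose <mn|v|st> = <nm|v|ts>
v_as = v - v.transpose(0, 1, 3, 2)             # direct minus exchange
print(np.allclose(v_as, -v_as.transpose(0, 1, 3, 2)))  # True: antisymmetric in (s,t)
print(np.allclose(v_as, -v_as.transpose(1, 0, 2, 3)))  # True: antisymmetric in (m,n)
```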
With these notations we rewrite the energy functional as $$ \begin{equation} \int \Phi^*\hat{H_I}\Phi d\mathbf{\tau} = \frac{1}{2}\sum_{\mu=1}^A\sum_{\nu=1}^A \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}. \label{H2Expectation2} \end{equation} $$
Adding the contribution from the one-body operator \( \hat{H}_0 \) to \eqref{H2Expectation2} we obtain the energy functional $$ \begin{equation} E[\Phi] = \sum_{\mu=1}^A \langle \mu | h | \mu \rangle + \frac{1}{2}\sum_{{\mu}=1}^A\sum_{{\nu}=1}^A \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}. \label{FunctionalEPhi} \end{equation} $$ In our coordinate space derivations below we will spell out the Hartree-Fock equations in terms of their integrals.
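A direct transcription of Eq. \eqref{FunctionalEPhi} into code could look as follows. The arrays `h` and `v_as` and the list `occupied` are illustrative placeholders for the one-body matrix elements, the antisymmetrized two-body matrix elements and the \( A \) occupied single-particle states, respectively.

```python
# Sketch of the energy of a Slater determinant, Eq. (FunctionalEPhi):
# h[p, q] = <p|h|q>, v_as[p, q, r, s] = <pq|v|rs>_AS, occupied = the A hole states.
def slater_determinant_energy(h, v_as, occupied):
    e_onebody = sum(h[i, i] for i in occupied)
    e_twobody = 0.5 * sum(v_as[i, j, i, j] for i in occupied for j in occupied)
    return e_onebody + e_twobody
```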
The calculus of variations involves problems where the quantity to be minimized or maximized is an integral.
In the general case we have an integral of the type $$ E[\Phi]= \int_a^b f(\Phi(x),\frac{\partial \Phi}{\partial x},x)dx, $$ where \( E \) is the quantity which is to be minimized or maximized. The problem is that although \( f \) is a function of the variables \( \Phi \), \( \partial \Phi/\partial x \) and \( x \), the exact dependence of \( \Phi \) on \( x \) is not known. This means again that even though the integral has fixed limits \( a \) and \( b \), the path of integration is not known. In our case the unknown quantities are the single-particle wave functions and we wish to choose an integration path which makes the functional \( E[\Phi] \) stationary. This means that we want to find minima, maxima or saddle points. In physics we normally search for minima. Our task is therefore to find the minimum of \( E[\Phi] \) so that its variation \( \delta E \) is zero subject to specific constraints. In our case the constraints appear as the integral which expresses the orthogonality of the single-particle wave functions. The constraints can be treated via the technique of Lagrangian multipliers.
Let us specialize to the expectation value of the energy for one particle in three-dimensions. This expectation value reads $$ E=\int dxdydz \psi^*(x,y,z) \hat{H} \psi(x,y,z), $$ with the constraint $$ \int dxdydz \psi^*(x,y,z) \psi(x,y,z)=1, $$ and a Hamiltonian $$ \hat{H}=-\frac{1}{2}\nabla^2+V(x,y,z). $$ We will, for the sake of notational convenience, skip the variables \( x,y,z \) below, and write for example \( V(x,y,z)=V \).
The integral involving the kinetic energy can be written as, with the function \( \psi \) vanishing strongly for large values of \( x,y,z \) (given here by the limits \( a \) and \( b \)), $$ \int_a^b dxdydz \psi^* \left(-\frac{1}{2}\nabla^2\right) \psi = -\frac{1}{2}\left.\psi^*\nabla\psi\right|_a^b+\int_a^b dxdydz\frac{1}{2}\nabla\psi^*\cdot\nabla\psi. $$ The boundary term vanishes since \( \psi \) goes to zero at the limits. We will drop the limits \( a \) and \( b \) in the remaining discussion. Inserting this expression into the expectation value for the energy and taking the variational minimum we obtain $$ \delta E = \delta \left\{\int dxdydz\left( \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi\right)\right\} = 0. $$
The constraint appears in integral form as $$ \int dxdydz \psi^* \psi=\mathrm{constant}, $$ and multiplying with a Lagrangian multiplier \( \lambda \) and taking the variational minimum we obtain the final variational equation $$ \delta \left\{\int dxdydz\left( \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi-\lambda\psi^*\psi\right)\right\} = 0. $$ We introduce the function \( f \) $$ f = \frac{1}{2}\nabla\psi^*\nabla\psi+V\psi^*\psi-\lambda\psi^*\psi= \frac{1}{2}(\psi^*_x\psi_x+\psi^*_y\psi_y+\psi^*_z\psi_z)+V\psi^*\psi-\lambda\psi^*\psi, $$ where we have skipped the dependence on \( x,y,z \) and introduced the shorthand \( \psi_x \), \( \psi_y \) and \( \psi_z \) for the various derivatives.
For \( \psi^* \) the Euler-Lagrange equations yield $$ \frac{\partial f}{\partial \psi^*}- \frac{\partial }{\partial x}\frac{\partial f}{\partial \psi^*_x}-\frac{\partial }{\partial y}\frac{\partial f}{\partial \psi^*_y}-\frac{\partial }{\partial z}\frac{\partial f}{\partial \psi^*_z}=0, $$ which results in $$ -\frac{1}{2}(\psi_{xx}+\psi_{yy}+\psi_{zz})+V\psi=\lambda \psi. $$ We can then identify the Lagrangian multiplier as the energy of the system. The last equation is nothing but the standard Schroedinger equation and the variational approach discussed here provides a powerful method for obtaining approximate solutions of the wave function.
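As a simple numerical illustration of the last statement, we can discretize a one-dimensional analogue with \( V=x^2/2 \) (a harmonic oscillator in units \( \hbar=m=\omega=1 \)) and diagonalize the resulting finite-difference Hamiltonian; the lowest eigenvalue, which plays the role of the Lagrangian multiplier, comes out close to the exact ground-state energy \( 1/2 \). The grid size and box length below are arbitrary choices for this sketch.

```python
import numpy as np

# Finite-difference check (1D, hbar = m = omega = 1): diagonalizing
# H = -0.5 d^2/dx^2 + 0.5 x^2 on a grid gives a lowest eigenvalue close to 0.5.
n, xmax = 1000, 10.0
x, dx = np.linspace(-xmax, xmax, n, retstep=True)
kinetic = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), 1)
                            + np.diag(np.ones(n - 1), -1)
                            - 2.0 * np.eye(n))
hamiltonian = kinetic + np.diag(0.5 * x**2)
print(np.linalg.eigvalsh(hamiltonian)[0])  # close to 0.5
```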
In deriving the Hartree-Fock equations, we will expand the single-particle functions in a known basis and vary the coefficients, that is, the new single-particle wave function is written as a linear expansion in terms of a fixed chosen orthogonal basis (for example the well-known harmonic oscillator functions or the hydrogen-like functions etc). We define our new Hartree-Fock single-particle basis by performing a unitary transformation on our previous basis (labelled with greek indices) as $$ \begin{equation} \psi_p^{HF} = \sum_{\lambda} C_{p\lambda}\phi_{\lambda}. \label{eq:newbasis} \end{equation} $$ In this case we vary the coefficients \( C_{p\lambda} \). If the basis has infinitely many solutions, we need to truncate the above sum. We assume that the basis \( \phi_{\lambda} \) is orthogonal. A unitary transformation keeps the orthogonality, as discussed in exercise 1 below.
In the previous slide we stated that a unitary transformation keeps the orthogonality, as discussed in exercise 1 below. To see this consider first a basis of vectors \( \mathbf{v}_i \), $$ \mathbf{v}_i = \begin{bmatrix} v_{i1} \\ \dots \\ \dots \\v_{in} \end{bmatrix} $$ We assume that the basis is orthogonal, that is $$ \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}. $$ An orthogonal or unitary transformation $$ \mathbf{w}_i=\mathbf{U}\mathbf{v}_i, $$ preserves the dot product and orthogonality since $$ \mathbf{w}_j^T\mathbf{w}_i=(\mathbf{U}\mathbf{v}_j)^T\mathbf{U}\mathbf{v}_i=\mathbf{v}_j^T\mathbf{U}^T\mathbf{U}\mathbf{v}_i= \mathbf{v}_j^T\mathbf{v}_i = \delta_{ij}. $$
This means that if the coefficients \( C_{p\lambda} \) belong to a unitary or orthogonal transformation (using the Dirac bra-ket notation) $$ \vert p\rangle = \sum_{\lambda} C_{p\lambda}\vert\lambda\rangle, $$ orthogonality is preserved, that is \( \langle \alpha \vert \beta\rangle = \delta_{\alpha\beta} \) and \( \langle p \vert q\rangle = \delta_{pq} \).
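A quick numerical check of this statement: a unitary matrix drawn from a QR decomposition preserves the inner products of an orthonormal set of vectors (here we use the conjugate transpose to cover the general complex case).

```python
import numpy as np

# Check that a unitary transformation preserves orthonormality.
n = 5
U, _ = np.linalg.qr(np.random.rand(n, n) + 1j * np.random.rand(n, n))  # random unitary
V = np.eye(n, dtype=complex)                    # orthonormal basis vectors as columns
W = U @ V                                       # transformed vectors w_i = U v_i
print(np.allclose(W.conj().T @ W, np.eye(n)))   # True: <w_j|w_i> = delta_ij
```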
This property is extremely useful when we build up a basis of many-body states based on Slater determinants.
Note also that although a basis \( \vert \alpha\rangle \) contains an infinity of states, for practical calculations we always have to make some truncations.
Before we develop the Hartree-Fock equations, there is another very useful property of determinants that we will use both in connection with Hartree-Fock calculations and later shell-model calculations.
Consider the following determinant $$ \left| \begin{array}{cc} \alpha_1b_{11}+\alpha_2b_{12}& a_{12}\\ \alpha_1b_{21}+\alpha_2b_{22}&a_{22}\end{array} \right|=\alpha_1\left|\begin{array}{cc} b_{11}& a_{12}\\ b_{21}&a_{22}\end{array} \right|+\alpha_2\left| \begin{array}{cc} b_{12}& a_{12}\\b_{22}&a_{22}\end{array} \right| $$
We can generalize this to an \( n\times n \) matrix and have $$ \left| \begin{array}{cccccc} a_{11}& a_{12} & \dots & \sum_{k=1}^n c_k b_{1k} &\dots & a_{1n}\\ a_{21}& a_{22} & \dots & \sum_{k=1}^n c_k b_{2k} &\dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots & \dots \\ a_{n1}& a_{n2} & \dots & \sum_{k=1}^n c_k b_{nk} &\dots & a_{nn}\end{array} \right|= \sum_{k=1}^n c_k\left| \begin{array}{cccccc} a_{11}& a_{12} & \dots & b_{1k} &\dots & a_{1n}\\ a_{21}& a_{22} & \dots & b_{2k} &\dots & a_{2n}\\ \dots & \dots & \dots & \dots & \dots & \dots\\ \dots & \dots & \dots & \dots & \dots & \dots\\ a_{n1}& a_{n2} & \dots & b_{nk} &\dots & a_{nn}\end{array} \right| . $$ This is a property we will use in our Hartree-Fock discussions.
We can generalize the previous results, now with all elements \( a_{ij} \) being given as functions of linear combinations of various coefficients \( c \) and elements \( b_{ij} \), $$ \left| \begin{array}{cccccc} \sum_{k=1}^n b_{1k}c_{k1}& \sum_{k=1}^n b_{1k}c_{k2} & \dots & \sum_{k=1}^n b_{1k}c_{kj} &\dots & \sum_{k=1}^n b_{1k}c_{kn}\\ \sum_{k=1}^n b_{2k}c_{k1}& \sum_{k=1}^n b_{2k}c_{k2} & \dots & \sum_{k=1}^n b_{2k}c_{kj} &\dots & \sum_{k=1}^n b_{2k}c_{kn}\\ \dots & \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots &\dots \\ \sum_{k=1}^n b_{nk}c_{k1}& \sum_{k=1}^n b_{nk}c_{k2} & \dots & \sum_{k=1}^n b_{nk}c_{kj} &\dots & \sum_{k=1}^n b_{nk}c_{kn}\end{array} \right|=det(\mathbf{C})det(\mathbf{B}), $$ where \( det(\mathbf{C}) \) and \( det(\mathbf{B}) \) are the determinants of \( n\times n \) matrices with elements \( c_{ij} \) and \( b_{ij} \) respectively. This is a property we will use in our Hartree-Fock discussions. Convince yourself about the correctness of the above expression by setting \( n=2 \).
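Instead of working out the \( n=2 \) case by hand, one can also verify the identity numerically; the matrix with elements \( \sum_k b_{ik}c_{kj} \) is simply the product \( \mathbf{B}\mathbf{C} \).

```python
import numpy as np

# Numerical "convince yourself" check for n = 2 (works for any n):
# det of the matrix with elements sum_k b_{ik} c_{kj} equals det(C) det(B).
B = np.random.rand(2, 2)
C = np.random.rand(2, 2)
print(np.linalg.det(B @ C), np.linalg.det(C) * np.linalg.det(B))  # equal numbers
```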
With our definition of the new basis in terms of an orthogonal basis we have $$ \psi_p(x) = \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x). $$ If the coefficients \( C_{p\lambda} \) belong to an orthogonal or unitary matrix, the new basis is also orthogonal. Our Slater determinant in the new basis \( \psi_p(x) \) is written as $$ \frac{1}{\sqrt{A!}} \left| \begin{array}{ccccc} \psi_{p}(x_1)& \psi_{p}(x_2)& \dots & \dots & \psi_{p}(x_A)\\ \psi_{q}(x_1)&\psi_{q}(x_2)& \dots & \dots & \psi_{q}(x_A)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \psi_{t}(x_1)&\psi_{t}(x_2)& \dots & \dots & \psi_{t}(x_A)\end{array} \right|=\frac{1}{\sqrt{A!}} \left| \begin{array}{ccccc} \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_1)& \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{p\lambda}\phi_{\lambda}(x_A)\\ \sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_1)&\sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{q\lambda}\phi_{\lambda}(x_A)\\ \dots & \dots & \dots & \dots & \dots \\ \dots & \dots & \dots & \dots & \dots \\ \sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_1)&\sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_2)& \dots & \dots & \sum_{\lambda} C_{t\lambda}\phi_{\lambda}(x_A)\end{array} \right|, $$ which is nothing but \( det(\mathbf{C})det(\Phi) \), with \( det(\Phi) \) being the determinant given by the basis functions \( \phi_{\lambda}(x) \).
It is normal to choose a single-particle basis defined as the eigenfunctions of parts of the full Hamiltonian. The typical situation consists of the solutions of the one-body part of the Hamiltonian, that is we have $$ \hat{h}_0\phi_{\lambda}=\epsilon_{\lambda}\phi_{\lambda}. $$ The single-particle wave functions \( \phi_{\lambda}({\bf r}) \), defined by the quantum numbers \( \lambda \) and \( {\bf r} \) are defined as the overlap $$ \phi_{\lambda}({\bf r}) = \langle {\bf r} | \lambda \rangle . $$
In our discussions hereafter we will use our definitions of single-particle states above and below the Fermi (\( F \)) level given by the labels \( ijkl\dots \le F \) for so-called single-hole states and \( abcd\dots > F \) for so-called particle states. For general single-particle states we employ the labels \( pqrs\dots \).
In Eq. \eqref{FunctionalEPhi}, restated here $$ E[\Phi] = \sum_{\mu=1}^A \langle \mu | h | \mu \rangle + \frac{1}{2}\sum_{{\mu}=1}^A\sum_{{\nu}=1}^A \langle \mu\nu|\hat{v}|\mu\nu\rangle_{AS}, $$ we found the expression for the energy functional in terms of the basis function \( \phi_{\lambda}({\bf r}) \). We then varied the above energy functional with respect to the basis functions \( |\mu \rangle \). Now we are interested in defining a new basis defined in terms of a chosen basis as defined in Eq. \eqref{eq:newbasis}. We can then rewrite the energy functional as $$ \begin{equation} E[\Phi^{HF}] = \sum_{i=1}^A \langle i | h | i \rangle + \frac{1}{2}\sum_{ij=1}^A\langle ij|\hat{v}|ij\rangle_{AS}, \label{FunctionalEPhi2} \end{equation} $$ where \( \Phi^{HF} \) is the new Slater determinant defined by the new basis of Eq. \eqref{eq:newbasis}.
Using Eq. \eqref{eq:newbasis} we can rewrite Eq. \eqref{FunctionalEPhi2} as $$ \begin{equation} E[\Psi] = \sum_{i=1}^A \sum_{\alpha\beta} C^*_{i\alpha}C_{i\beta}\langle \alpha | h | \beta \rangle + \frac{1}{2}\sum_{ij=1}^A\sum_{{\alpha\beta\gamma\delta}} C^*_{i\alpha}C^*_{j\beta}C_{i\gamma}C_{j\delta}\langle \alpha\beta|\hat{v}|\gamma\delta\rangle_{AS}. \label{FunctionalEPhi3} \end{equation} $$
We wish now to minimize the above functional. We introduce again a set of Lagrange multipliers, noting that since \( \langle i | j \rangle = \delta_{i,j} \) and \( \langle \alpha | \beta \rangle = \delta_{\alpha,\beta} \), the coefficients \( C_{i\gamma} \) obey the relation $$ \langle i | j \rangle=\delta_{i,j}=\sum_{\alpha\beta} C^*_{i\alpha}C_{i\beta}\langle \alpha | \beta \rangle= \sum_{\alpha} C^*_{i\alpha}C_{i\alpha}, $$ which allows us to define a functional to be minimized that reads $$ \begin{equation} F[\Phi^{HF}]=E[\Phi^{HF}] - \sum_{i=1}^A\epsilon_i\sum_{\alpha} C^*_{i\alpha}C_{i\alpha}. \label{_auto2} \end{equation} $$
Minimizing with respect to \( C^*_{i\alpha} \), remembering that the equations for \( C^*_{i\alpha} \) and \( C_{i\alpha} \) can be written as two independent equations, we obtain $$ \frac{d}{dC^*_{i\alpha}}\left[ E[\Phi^{HF}] - \sum_{j}\epsilon_j\sum_{\alpha} C^*_{j\alpha}C_{j\alpha}\right]=0, $$ which yields for every single-particle state \( i \) and index \( \alpha \) (recalling that the coefficients \( C_{i\alpha} \) are matrix elements of a unitary (or orthogonal for a real symmetric matrix) matrix) the following Hartree-Fock equations $$ \sum_{\beta} C_{i\beta}\langle \alpha | h | \beta \rangle+ \sum_{j=1}^A\sum_{\beta\gamma\delta} C^*_{j\beta}C_{j\delta}C_{i\gamma}\langle \alpha\beta|\hat{v}|\gamma\delta\rangle_{AS}=\epsilon_i^{HF}C_{i\alpha}. $$
We can rewrite this equation as (changing dummy variables) $$ \sum_{\beta} \left\{\langle \alpha | h | \beta \rangle+ \sum_{j}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}\right\}C_{i\beta}=\epsilon_i^{HF}C_{i\alpha}. $$ Note that the sums over greek indices run over the number of basis set functions (in principle an infinite number).
Defining $$ h_{\alpha\beta}^{HF}=\langle \alpha | h | \beta \rangle+ \sum_{j=1}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}, $$ we can rewrite the new equations as $$ \begin{equation} \sum_{\beta}h_{\alpha\beta}^{HF}C_{i\beta}=\epsilon_i^{HF}C_{i\alpha}. \label{eq:newhf} \end{equation} $$ The latter is nothing but a standard eigenvalue problem.
It suffices to tabulate the matrix elements \( \langle \alpha | h | \beta \rangle \) and \( \langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS} \) once and for all. Successive iterations require thus only a look-up in tables over one-body and two-body matrix elements. These details will be discussed below when we solve the Hartree-Fock equations numerically.
Our Hartree-Fock matrix is thus $$ \hat{h}_{\alpha\beta}^{HF}=\langle \alpha | \hat{h}_0 | \beta \rangle+ \sum_{j=1}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}. $$ The Hartree-Fock equations are solved in an iterative way, starting with a guess for the coefficients \( C_{j\gamma}=\delta_{j,\gamma} \) and solving the equations by diagonalization until the new single-particle energies \( \epsilon_i^{\mathrm{HF}} \) change by less than a prefixed quantity.
Normally we assume that the single-particle basis \( |\beta\rangle \) forms an eigenbasis for the operator \( \hat{h}_0 \), meaning that the Hartree-Fock matrix becomes $$ \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+ \sum_{j=1}^A\sum_{\gamma\delta} C^*_{j\gamma}C_{j\delta}\langle \alpha\gamma|\hat{v}|\beta\delta\rangle_{AS}. $$ The Hartree-Fock eigenvalue problem $$ \sum_{\beta}\hat{h}_{\alpha\beta}^{HF}C_{i\beta}=\epsilon_i^{\mathrm{HF}}C_{i\alpha}, $$ can be written out in a more compact form as $$ \hat{h}^{HF}\hat{C}=\epsilon^{\mathrm{HF}}\hat{C}. $$
The Hartree-Fock equations are, in their simplest form, solved in an iterative way, starting with a guess for the coefficients \( C_{i\alpha} \). We label the coefficients as \( C_{i\alpha}^{(n)} \), where the superscript \( n \) stands for iteration \( n \). To set up the algorithm we can proceed as follows: with the starting guess \( C_{i\alpha}^{(0)}=\delta_{i,\alpha} \) we construct the Hartree-Fock matrix \( \hat{h}_{\alpha\beta}^{HF} \) and diagonalize it.
Solving the Hartree-Fock eigenvalue problem yields then new eigenvectors \( C_{i\alpha}^{(1)} \) and eigenvalues \( \epsilon_i^{HF(1)} \). These are in turn used to build a new Hartree-Fock matrix, and the procedure is repeated until the single-particle energies change by less than a chosen tolerance.
We can rewrite the ground state energy by adding and subtracting \( \hat{u}^{HF}(x_i) \) $$ E_0^{HF} =\langle \Phi_0 | \hat{H} | \Phi_0\rangle = \sum_{i\le F}^A \langle i | \hat{h}_0 +\hat{u}^{HF}| i\rangle+ \frac{1}{2}\sum_{i\le F}^A\sum_{j \le F}^A\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]-\sum_{i\le F}^A \langle i |\hat{u}^{HF}| i\rangle, $$ which results in $$ E_0^{HF} = \sum_{i\le F}^A \varepsilon_i^{HF} + \frac{1}{2}\sum_{i\le F}^A\sum_{j \le F}^A\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]-\sum_{i\le F}^A \langle i |\hat{u}^{HF}| i\rangle. $$ Our single-particle states \( ijk\dots \) are now single-particle states obtained from the solution of the Hartree-Fock equations.
Using our definition of the Hartree-Fock single-particle energies we obtain then the following expression for the total ground-state energy $$ E_0^{HF} = \sum_{i\le F}^A \varepsilon_i - \frac{1}{2}\sum_{i\le F}^A\sum_{j \le F}^A\left[\langle ij |\hat{v}|ij \rangle-\langle ij|\hat{v}|ji\rangle\right]. $$ This form will be used in our discussion of Koopmans' theorem.
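Once the Hartree-Fock equations are solved, the last expression is straightforward to evaluate. The sketch below assumes the single-particle energies `eps` and the antisymmetrized two-body matrix elements `v_as_hf`, transformed to the Hartree-Fock basis, are available as NumPy arrays; the names are illustrative only.

```python
# Ground-state energy in terms of Hartree-Fock single-particle energies:
# E_0 = sum_{i<=F} eps_i - 1/2 sum_{ij<=F} <ij|v|ij>_AS (elements in the HF basis).
def hf_ground_state_energy(eps, v_as_hf, occupied):
    e = sum(eps[i] for i in occupied)
    e -= 0.5 * sum(v_as_hf[i, j, i, j] for i in occupied for j in occupied)
    return e
```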
We have $$ E[\Phi^{\mathrm{HF}}(N)] = \sum_{i=1}^N \langle i | \hat{h}_0 | i \rangle + \frac{1}{2}\sum_{ij=1}^N\langle ij|\hat{v}|ij\rangle_{AS}, $$ where \( \Phi^{\mathrm{HF}}(N) \) is the new Slater determinant defined by the new basis of Eq. \eqref{eq:newbasis} for \( N \) electrons (same \( Z \)). If we assume that the single-particle wave functions in the new basis do not change when we remove one electron or add one electron, we can then define the corresponding energy for the \( N-1 \) system as $$ E[\Phi^{\mathrm{HF}}(N-1)] = \sum_{i=1; i\ne k}^N \langle i | \hat{h}_0 | i \rangle + \frac{1}{2}\sum_{ij=1;i,j\ne k}^N\langle ij|\hat{v}|ij\rangle_{AS}, $$ where we have removed a single-particle state \( k\le F \), that is a state below the Fermi level.
Calculating the difference $$ E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \langle k | \hat{h}_0 | k \rangle + \frac{1}{2}\sum_{i=1;i\ne k}^N\langle ik|\hat{v}|ik\rangle_{AS} + \frac{1}{2}\sum_{j=1;j\ne k}^N\langle kj|\hat{v}|kj\rangle_{AS}, $$ we obtain, since the two sums are identical and the antisymmetrized matrix element with \( j=k \) vanishes, $$ E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \langle k | \hat{h}_0 | k \rangle + \sum_{j=1}^N\langle kj|\hat{v}|kj\rangle_{AS}, $$ which is just our definition of the Hartree-Fock single-particle energy $$ E[\Phi^{\mathrm{HF}}(N)]- E[\Phi^{\mathrm{HF}}(N-1)] = \epsilon_k^{\mathrm{HF}}. $$
Similarly, we can now compute the difference (we label the single-particle states above the Fermi level as \( abcd > F \)) $$ E[\Phi^{\mathrm{HF}}(N+1)]- E[\Phi^{\mathrm{HF}}(N)]= \epsilon_a^{\mathrm{HF}}. $$ These two equations can thus be used to compute the ionization energy and the electron affinity, respectively. Koopmans' theorem states that for example the ionization energy of a closed-shell system is given by the energy of the highest occupied single-particle state. If we assume that changing the number of electrons from \( N \) to \( N+1 \) does not change the Hartree-Fock single-particle energies and eigenfunctions, then Koopmans' theorem simply states that the ionization energy of an atom is given by the single-particle energy of the last bound state. In a similar way, we can also define the electron affinities.
As an example, consider a simple model for atomic sodium, Na. Neutral sodium has eleven electrons, with the weakest bound one being confined to the \( 3s \) single-particle state. The energy needed to remove an electron from neutral sodium is rather small, 5.1391 eV, a feature which pertains to all alkali metals. Having performed a Hartree-Fock calculation for neutral sodium would then allow us to compute the ionization energy by using the single-particle energy for the \( 3s \) state, namely \( \epsilon_{3s}^{\mathrm{HF}} \).
From these considerations, we see that Hartree-Fock theory allows us to make a connection between experimental observables (here ionization and affinity energies) and the underlying interactions between particles. In this sense, we are now linking the dynamics and structure of a many-body system with the laws of motion which govern the system. Our approach is a reductionistic one, meaning that we want to understand the laws of motion in terms of the particles or degrees of freedom which we believe are the fundamental ones. Our Slater determinant, being constructed as the product of various single-particle functions, follows this philosophy.
With similar arguments as in atomic physics, we can now use Hartree-Fock theory to make a link between nuclear forces and separation energies. Changing to nuclear systems, we define $$ E[\Phi^{\mathrm{HF}}(A)] = \sum_{i=1}^A \langle i | \hat{h}_0 | i \rangle + \frac{1}{2}\sum_{ij=1}^A\langle ij|\hat{v}|ij\rangle_{AS}, $$ where \( \Phi^{\mathrm{HF}}(A) \) is the new Slater determinant defined by the new basis of Eq. \eqref{eq:newbasis} for \( A \) nucleons, where \( A=N+Z \), with \( N \) now being the number of neutrons and \( Z \) the number of protons. If we assume again that the single-particle wave functions in the new basis do not change from a nucleus with \( A \) nucleons to a nucleus with \( A-1 \) nucleons, we can then define the corresponding energy for the \( A-1 \) system as $$ E[\Phi^{\mathrm{HF}}(A-1)] = \sum_{i=1; i\ne k}^A \langle i | \hat{h}_0 | i \rangle + \frac{1}{2}\sum_{ij=1;i,j\ne k}^A\langle ij|\hat{v}|ij\rangle_{AS}, $$ where we have removed a single-particle state \( k\le F \), that is a state below the Fermi level.
Calculating the difference $$ E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] = \langle k | \hat{h}_0 | k \rangle + \frac{1}{2}\sum_{i=1;i\ne k}^A\langle ik|\hat{v}|ik\rangle_{AS} + \frac{1}{2}\sum_{j=1;j\ne k}^A\langle kj|\hat{v}|kj\rangle_{AS}, $$ which, with the same steps as for the atomic case, becomes $$ E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] = \langle k | \hat{h}_0 | k \rangle + \sum_{j=1}^A\langle kj|\hat{v}|kj\rangle_{AS}, $$ which is just our definition of the Hartree-Fock single-particle energy $$ E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] = \epsilon_k^{\mathrm{HF}}. $$
Similarly, we can now compute the difference (recall that the single-particle states \( abcd > F \)) $$ E[\Phi^{\mathrm{HF}}(A+1)]- E[\Phi^{\mathrm{HF}}(A)]= \epsilon_a^{\mathrm{HF}}. $$ If we then recall that the binding energy differences $$ BE(A)-BE(A-1) \hspace{0.5cm} \mathrm{and} \hspace{0.5cm} BE(A+1)-BE(A), $$ define the separation energies, we see that the Hartree-Fock single-particle energies can be used to define separation energies. We have thus our first link between nuclear forces (included in the potential energy term) and an observable quantity defined by differences in binding energies.
We have thus the following interpretations (if the single-particle fields do not change) $$ BE(A)-BE(A-1)\approx E[\Phi^{\mathrm{HF}}(A)]- E[\Phi^{\mathrm{HF}}(A-1)] = \epsilon_k^{\mathrm{HF}}, $$ and $$ BE(A+1)-BE(A)\approx E[\Phi^{\mathrm{HF}}(A+1)]- E[\Phi^{\mathrm{HF}}(A)] = \epsilon_a^{\mathrm{HF}}. $$ If we use \( {}^{16}\mbox{O} \) as our closed-shell nucleus, we could then interpret the separation energy $$ BE(^{16}\mathrm{O})-BE(^{15}\mathrm{O})\approx \epsilon_{0p^{\nu}_{1/2}}^{\mathrm{HF}}, $$ and $$ BE(^{16}\mathrm{O})-BE(^{15}\mathrm{N})\approx \epsilon_{0p^{\pi}_{1/2}}^{\mathrm{HF}}. $$
Similarly, we could interpret $$ BE(^{17}\mathrm{O})-BE(^{16}\mathrm{O})\approx \epsilon_{0d^{\nu}_{5/2}}^{\mathrm{HF}}, $$ and $$ BE(^{17}\mathrm{F})-BE(^{16}\mathrm{O})\approx\epsilon_{0d^{\pi}_{5/2}}^{\mathrm{HF}}. $$ We can continue like this for all \( A\pm 1 \) nuclei where \( A \) is a good closed-shell (or subshell closure) nucleus. Examples are \( {}^{22}\mbox{O} \), \( {}^{24}\mbox{O} \), \( {}^{40}\mbox{Ca} \), \( {}^{48}\mbox{Ca} \), \( {}^{52}\mbox{Ca} \), \( {}^{54}\mbox{Ca} \), \( {}^{56}\mbox{Ni} \), \( {}^{68}\mbox{Ni} \), \( {}^{78}\mbox{Ni} \), \( {}^{90}\mbox{Zr} \), \( {}^{88}\mbox{Sr} \), \( {}^{100}\mbox{Sn} \), \( {}^{132}\mbox{Sn} \) and \( {}^{208}\mbox{Pb} \), to mention some possible cases.
We can thus make our first interpretation of the separation energies in terms of the simplest possible many-body theory. If we also recall that the so-called energy gap for neutrons (or protons) is defined as $$ \Delta S_n= 2BE(N,Z)-BE(N-1,Z)-BE(N+1,Z), $$ for neutrons and the corresponding gap for protons $$ \Delta S_p= 2BE(N,Z)-BE(N,Z-1)-BE(N,Z+1), $$ we can define the neutron and proton energy gaps for \( {}^{16}\mbox{O} \) as $$ \Delta S_{\nu}=\epsilon_{0d^{\nu}_{5/2}}^{\mathrm{HF}}-\epsilon_{0p^{\nu}_{1/2}}^{\mathrm{HF}}, $$ and $$ \Delta S_{\pi}=\epsilon_{0d^{\pi}_{5/2}}^{\mathrm{HF}}-\epsilon_{0p^{\pi}_{1/2}}^{\mathrm{HF}}. $$
Consider a Slater determinant built up of orthogonal single-particle orbitals \( \phi_{\lambda} \), with \( \lambda = 1,2,\dots,A \).
The unitary transformation $$ \psi_a = \sum_{\lambda} C_{a\lambda}\phi_{\lambda}, $$ brings us into the new basis. The new basis has quantum numbers \( a=1,2,\dots,A \).
a) Show that the new basis is orthogonal.
b) Show that the new Slater determinant constructed from the new single-particle wave functions can be written as the determinant based on the previous basis and the determinant of the matrix \( C \).
c) Show that the old and the new Slater determinants are equal up to a complex constant with absolute value unity.
Hint: \( C \) is a unitary matrix.
The single-particle energies obtained by solving the Hartree-Fock equations can be directly related to experimental separation energies. Since Hartree-Fock theory is the starting point for several many-body techniques (density functional theory, random-phase approximation, shell-model etc), the aim here is to develop a computer program to solve the Hartree-Fock equations in a given single-particle basis, here the harmonic oscillator.
The equations are often rewritten in terms of a so-called density matrix, which is defined as $$ \begin{equation} \rho_{\gamma\delta}=\sum_{i=1}^{N}\langle\gamma|i\rangle\langle i|\delta\rangle = \sum_{i=1}^{N}C_{i\gamma}C^*_{i\delta}. \label{_auto3} \end{equation} $$ It means that we can rewrite the Hartree-Fock Hamiltonian as $$ \hat{h}_{\alpha\beta}^{HF}=\epsilon_{\alpha}\delta_{\alpha,\beta}+ \sum_{\gamma\delta} \rho_{\gamma\delta}\langle \alpha\gamma|V|\beta\delta\rangle_{AS}. $$ It is convenient to use the density matrix since we can precalculate in every iteration the product of two eigenvector components \( C \).
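With the eigenvectors stored as columns, as returned by np.linalg.eigh in the program below, the density matrix is a single matrix product over the \( N \) occupied (here assumed to be the lowest) columns. The short function below is a vectorized alternative, under that assumption, to the explicit double loop used in the program.

```python
import numpy as np

# Density matrix rho_{gamma delta} = sum_{i occupied} C_{i gamma} C*_{i delta},
# with the convention C[gamma, i] = component gamma of eigenvector i.
def density_matrix(C, Nparticles):
    C_occ = C[:, :Nparticles]          # the Nparticles lowest eigenvectors
    return C_occ @ C_occ.conj().T
```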
Note that \( \langle \alpha\vert\hat{h}_0\vert\beta \rangle \) denotes the matrix elements of the one-body part of the starting Hamiltonian. For self-bound nuclei \( \langle \alpha\vert\hat{h}_0\vert\beta \rangle \) is the kinetic energy, whereas for neutron drops, \( \langle \alpha \vert \hat{h}_0 \vert \beta \rangle \) represents the harmonic oscillator Hamiltonian since the system is confined in a harmonic trap. If we are working in a harmonic oscillator basis with the same \( \omega \) as the trapping potential, then \( \langle \alpha\vert\hat{h}_0 \vert \beta \rangle \) is diagonal.
An example of a Python code which performs the Hartree-Fock calculation is shown here. In setting up your code you will need to write a function which sets up the single-particle basis, the matrix elements \( t_{\alpha\gamma} \) of the one-body operator (stored in the array singleparticleH in the code below), the antisymmetrized TBMEs (called nninteraction in the code below) and the density matrix elements \( \rho_{\beta\delta} \) (called DensityMatrix below). The Python program shows how one can, in a brute force way, read in matrix elements in \( m \)-scheme and compute the Hartree-Fock single-particle energies for four major shells. The interaction which has been used is the so-called N3LO interaction of Machleidt and Entem, renormalized with the Similarity Renormalization Group (SRG) method, using an oscillator energy \( \hbar\omega=10 \) MeV.
The nucleon-nucleon two-body matrix elements are in \( m \)-scheme and are fully antisymmetrized. The Hartree-Fock program uses the density matrix discussed above in order to compute the Hartree-Fock matrix. Here we display the Hartree-Fock part only, assuming that single-particle data and two-body matrix elements have already been read in.
The input file spdata.dat contains the information on all single-particle quantum numbers needed to define this space. In total we have \( 40 \) single-particle states labeled by \( n \), \( j \), \( l \) and \( m \), where \( m \) is the projection of the total single-particle angular momentum \( j \). To every set of single-particle quantum numbers there is a unique number \( p \) identifying them, meaning that the two-body matrix elements in the file twobody.dat are identified as \( \langle pq \vert \hat{v}\vert rs \rangle \).
You will need to read these two files and set up arrays which store the matrix elements while running the program.
""" Star HF-iterations, preparing variables and density matrix """
""" Coefficients for setting up density matrix, assuming only one along the diagonals """
C = np.eye(spOrbitals) # HF coefficients
DensityMatrix = np.zeros([spOrbitals,spOrbitals])
for gamma in range(spOrbitals):
for delta in range(spOrbitals):
sum = 0.0
for i in range(Nparticles):
sum += C[gamma][i]*C[delta][i]
DensityMatrix[gamma][delta] = Decimal(sum)
maxHFiter = 100
epsilon = 1.0e-10
difference = 1.0
hf_count = 0
oldenergies = np.zeros(spOrbitals)
newenergies = np.zeros(spOrbitals)
while hf_count < maxHFiter and difference > epsilon:
print "############### Iteration %i ###############" % hf_count
HFmatrix = np.zeros([spOrbitals,spOrbitals])
for alpha in range(spOrbitals):
for beta in range(spOrbitals):
""" If tests for three-dimensional systems, including isospin conservation """
if l[alpha] != l[beta] and j[alpha] != j[beta] and mj[alpha] != mj[beta] and tz[alpha] != tz[beta]: continue
""" Setting up the Fock matrix using the density matrix and antisymmetrized NN interaction in m-scheme """
sumFockTerm = 0.0
for gamma in range(spOrbitals):
for delta in range(spOrbitals):
if (mj[alpha]+mj[gamma]) != (mj[beta]+mj[delta]) and (tz[alpha]+tz[gamma]) != (tz[beta]+tz[delta]): continue
sumFockTerm += DensityMatrix[gamma][delta]*nninteraction[alpha][gamma][beta][delta]
HFmatrix[alpha][beta] = Decimal(sumFockTerm)
""" Adding the one-body term, here plain harmonic oscillator """
if beta == alpha: HFmatrix[alpha][alpha] += singleparticleH[alpha]
spenergies, C = np.linalg.eigh(HFmatrix)
""" Setting up new density matrix in m-scheme """
DensityMatrix = np.zeros([spOrbitals,spOrbitals])
for gamma in range(spOrbitals):
for delta in range(spOrbitals):
sum = 0.0
for i in range(Nparticles):
sum += C[gamma][i]*C[delta][i]
DensityMatrix[gamma][delta] = Decimal(sum)
newenergies = spenergies
""" Brute force computation of difference between previous and new sp HF energies """
sum =0.0
for i in range(spOrbitals):
sum += (abs(newenergies[i]-oldenergies[i]))/spOrbitals
difference = sum
oldenergies = newenergies
print "Single-particle energies, ordering may have changed "
for i in range(spOrbitals):
print('{0:4d} {1:.4f}'.format(i, Decimal(oldenergies[i])))
hf_count += 1
Running the program for a nucleus like \( {}^{16}\mbox{O} \), we see that the nucleon-nucleon force brings a natural spin-orbit splitting for the \( 0p \) states (and for the other states, except the \( s \)-states). Since we are using the \( m \)-scheme for our calculations, we observe that there are several states with the same eigenvalues. The number of eigenvalues corresponds to the degeneracy \( 2j+1 \) and is well respected in our calculations, as seen from the table below.
The values of the lowest-lying states are (\( \pi \) for protons and \( \nu \) for neutrons)
Quantum numbers | Energy [MeV] |
---|---|
\( 0s_{1/2}^{\pi} \) | -40.4602 |
\( 0s_{1/2}^{\pi} \) | -40.4602 |
\( 0s_{1/2}^{\nu} \) | -40.6426 |
\( 0s_{1/2}^{\nu} \) | -40.6426 |
\( 0p_{1/2}^{\pi} \) | -6.7133 |
\( 0p_{1/2}^{\pi} \) | -6.7133 |
\( 0p_{1/2}^{\nu} \) | -6.8403 |
\( 0p_{1/2}^{\nu} \) | -6.8403 |
\( 0p_{3/2}^{\pi} \) | -11.5886 |
\( 0p_{3/2}^{\pi} \) | -11.5886 |
\( 0p_{3/2}^{\pi} \) | -11.5886 |
\( 0p_{3/2}^{\pi} \) | -11.5886 |
\( 0p_{3/2}^{\nu} \) | -11.7201 |
\( 0p_{3/2}^{\nu} \) | -11.7201 |
\( 0p_{3/2}^{\nu} \) | -11.7201 |
\( 0p_{3/2}^{\nu} \) | -11.7201 |
\( 0d_{5/2}^{\pi} \) | 18.7589 |
\( 0d_{5/2}^{\nu} \) | 18.8082 |
We can use these results to attempt our first link with experimental data, namely to compute the shell gap or the separation energies. The shell gap for neutrons is given by $$ \Delta S_n= 2BE(N,Z)-BE(N-1,Z)-BE(N+1,Z). $$ For \( {}^{16}\mbox{O} \) we have an experimental value for the shell gap of \( 11.51 \) MeV for neutrons, while our Hartree-Fock calculations result in \( 25.65 \) MeV. This means that correlations beyond a simple Hartree-Fock calculation with a two-body force play an important role in nuclear physics. The splitting between the \( 0p_{3/2}^{\nu} \) and the \( 0p_{1/2}^{\nu} \) state is 4.88 MeV, while the experimental value for the gap between the ground state \( 1/2^{-} \) and the first excited \( 3/2^{-} \) states is 6.08 MeV. The two-nucleon spin-orbit force plays a central role here. In our discussion of nuclear forces we will see how the spin-orbit force comes into play here.
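The numbers quoted here follow directly from the table of single-particle energies above:

```python
# Neutron single-particle energies read off the table above (in MeV)
eps_0d52_nu = 18.8082
eps_0p12_nu = -6.8403
eps_0p32_nu = -11.7201
print(eps_0d52_nu - eps_0p12_nu)   # ~25.65 MeV, the Hartree-Fock neutron shell gap
print(eps_0p12_nu - eps_0p32_nu)   # ~4.88 MeV, the 0p spin-orbit splitting
```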
Our Hamiltonian contains one-body, two-body and three-body contributions and in the equations below, we label states below the Fermi level \( F \) as \( i,j,\ldots \) while states above the Fermi level are defined by \( a,b,\ldots \). General single-particle states are given by the letters \( p,q \dots \). The quantities \( pq\dots \) represent the quantum numbers of various single-particle states, namely \( p=(n_p,l_p,j_p,m_{j_p},t_{z_p}) \). The anticommutation relations for creation and annihilation operators with respect to a given reference state are then given by $$ \begin{equation*} \left\{a_p^\dagger, a_q \right\}= \delta_{pq}, p, q \leq F \hspace{0.5cm} \left\{a_p, a_q^\dagger \right\} = \delta_{pq}, p, q > F. \end{equation*} $$
Introducing a reference state \( |\Phi_0\rangle \) as our new vacuum state leads to a redefinition of the Hamiltonian in terms of a constant reference energy \( E_0 \) defined as $$ \begin{equation*} E_0 = \sum_{i\le F}\langle i | \hat{h}_0|i\rangle+\frac{1}{2}\sum_{ij\le F} \langle ij|\hat{v}|ij\rangle +\frac{1}{6}\sum_{ijk\le F} \langle ijk|\hat{w}|ijk\rangle, \end{equation*} $$ and a normal-ordered Hamiltonian $$ \begin{equation*} \hat{H}_N=\sum_{pq}\langle p|\tilde{f}|q\rangle a^\dagger_p a_q+\frac{1}{4} \sum_{pqrs} \langle pq|\tilde{v}|rs\rangle a^\dagger_p a^\dagger_q a_s a_r +\frac{1}{36} \sum_{\substack{pqr \\stu}} \langle pqr|\hat{w}|stu\rangle a^\dagger_p a^\dagger_q a^\dagger_r a_u a_t a_s \end{equation*} $$ where $$ \begin{equation*} \langle p| \tilde{f}|q\rangle = \langle p|\hat{h}_0|q\rangle +\sum_{i\le F} \langle pi|\hat{v}|qi\rangle+\frac{1}{2}\sum_{ij\le\alpha_F} \langle pij|\hat{w}|qij\rangle, \end{equation*} $$ represents a correction to the single-particle operator \( \hat{h}_0 \) due to contributions from the nucleons below the Fermi level.
An important ingredient in studies of effective interactions and their applications to nuclear structure is the so-called monopole interaction, normally defined in terms of a nucleon-nucleon interaction \( \hat{v} \) $$ \begin{equation} \bar{V}_{\alpha_p\alpha_q} = \frac{\sum_{J}(2J+1) \langle (\alpha_p\alpha_q)J | \hat{v} | (\alpha_p\alpha_q)J \rangle }{\sum_{J}(2J+1)}, \label{eq:monopole1} \end{equation} $$ where the total angular momentum of a two-body state \( J \) runs over all possible values. In the above equation we have defined a nucleon-nucleon interaction in a so-called angular-momentum coupled representation, with the symbol \( \alpha_{p,q} \) representing all possible quantum numbers except the magnetic substates \( m_{j_{p,q}} \). The monopole Hamiltonian can be interpreted as an angle-averaged matrix element. We have assumed that the single-particle angular momenta \( j_p \) and \( j_q \) couple to a total two-particle angular momentum \( J \). The summation over \( J \) with the weight \( 2J+1 \) can be replaced by \( \sum_J(2J+1)=(2j_p+1)(2j_q+1) \) if \( \alpha_p\ne \alpha_q \). If \( \alpha_p=\alpha_q \), and assuming that our states can represent either protons or neutrons, we can generalize this to $$ \begin{equation} \sum_{J}(2J+1)=(2j_p+1)(2j_q+1-\delta_{\alpha_p\alpha_q}). \label{eq:jvalue} \end{equation} $$
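The monopole matrix element of Eq. \eqref{eq:monopole1} is just a weighted average and is trivial to implement. In the sketch below the dictionary vJ, mapping each allowed total angular momentum \( J \) to the diagonal matrix element \( \langle (\alpha_p\alpha_q)J|\hat{v}|(\alpha_p\alpha_q)J\rangle \), is an assumed, illustrative input.

```python
# Monopole (angle-averaged) matrix element from J-coupled diagonal elements.
def monopole(vJ):
    # vJ: dictionary {J: <(alpha_p alpha_q)J|v|(alpha_p alpha_q)J>}
    numerator = sum((2 * J + 1) * element for J, element in vJ.items())
    denominator = sum(2 * J + 1 for J in vJ)
    return numerator / denominator
```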
The spherical single-particle states provide an important ingredient for the formation of shells and for the interplay between spherical configurations and deformation in nuclei. Large shell gaps obtained from a monopole Hamiltonian are a prerequisite to obtain certain magic numbers. Equation \eqref{eq:monopole1} can also be expressed in terms of the medium-modified two-body interaction defined in Eq. \eqref{eq:twobodyveff}, that is we can have $$ \begin{equation} \tilde{V}_{\alpha_p\alpha_q} = \frac{\sum_{J}(2J+1) \langle (\alpha_p\alpha_q)J | \tilde{v} | (\alpha_p\alpha_q)J \rangle }{\sum_{J}(2J+1)}. \label{eq:monopole2} \end{equation} $$
The single-particle energy \( \epsilon_p \) resulting from, for example, a self-consistent Hartree-Fock field, or from first order in many-body perturbation theory, is given by (in an uncoupled basis) $$ \begin{equation*} \epsilon_p=\langle p| \tilde{f}|p\rangle = \langle p|\hat{h}_0|p\rangle +\sum_{i\le F} \langle pi|\hat{v}|pi\rangle+\frac{1}{2}\sum_{ij\le F} \langle pij|\hat{w}|pij\rangle, \end{equation*} $$ where we have included the three-body interaction as well.
In Eqs. \eqref{eq:spequation1} and \eqref{eq:spequation2}, we have used a compact notation for the single-particle states, with the symbol \( \alpha_p \) etc. representing all possible quantum numbers except the magnetic substates \( m_{j_p} \), that is \( \alpha_p=(n_p,l_p,j_p,t_{z_p}) \). The symbol \( \alpha_F \) stands now for all single-particle states up to the Fermi level, excluding again the magnetic substates. In the above two-body interaction matrix elements \( \langle (\alpha_p\alpha_i)J | \hat{v}(\tilde{v}) |(\alpha_p\alpha_i)J \rangle \) we have dropped additional quantum numbers like the isospin projection. Our interactions are diagonal in the projection of the total isospin but break both isospin symmetry and charge symmetry.
Using the definition of the single-particle energy in Eq. \eqref{eq:spequation1}, the definition of the monopole matrix element in Eqs. \eqref{eq:monopole1} or \eqref{eq:monopole2} and Eq. \eqref{eq:jvalue}, we can rewrite Eq. \eqref{eq:spequation1} as $$ \begin{equation} \epsilon_{\alpha_{p}}=\langle \alpha_p|\hat{h}_0|\alpha_p\rangle+\sum_{\alpha_i\le \alpha_F}N_{\alpha_i}\bar{V}_{\alpha_p\alpha_i}, \label{eq:spmono1} \end{equation} $$ with \( N_{\alpha_i}=2j_{\alpha_i}+1 \), and Eq. \eqref{eq:spequation2} as $$ \begin{equation} \epsilon_{\alpha_{p}}=\langle \alpha_p|\hat{h}_0|\alpha_p\rangle+\sum_{\alpha_i\le \alpha_F}N_{\alpha_i}\tilde{V}_{\alpha_p\alpha_i}. \label{eq:spmono2} \end{equation} $$
The effective interaction is a scalar two-body operator. A general scalar two-body operator \( \hat{v} \) can be written as $$ \begin{equation} \hat{v} = \sum_{k} \hat{v}_{k} = \sum_{k} C^{(k)}\cdot Q^{(k)}, \label{_auto4} \end{equation} $$ where the operators \( C^{(k)} \) and \( Q^{(k)} \) are irreducible spherical tensor operators of rank \( k \), acting in spin and coordinate space, respectively. The value of \( k \) is limited to \( k\le 2 \) since the total eigenspin of the two-nucleon system is either \( 0 \) or \( 1 \). The term with \( k=0 \) refers to the central component of the two-body operator. The values of \( k=1 \) and \( k=2 \) are called the vector and the tensor components, respectively. The vector term is also called the two-body spin-orbit term, although it also contains the anti-symmetric spin-orbit term. Using standard angular momentum algebra it is rather straightforward to relate the matrix elements \( \hat{v}_{k} \) to those of say \( \hat{v} \) or \( \tilde{v} \).
One possible decomposition of the effective interaction is to express the \( k \)-th component of the interaction \( \langle (\alpha_p\alpha_q)J|\hat{v}_{k}|(\alpha_r\alpha_s)J\rangle \) in a \( jj \)-coupled basis, where \( \hat{v}_k \) is related to the matrix elements \( \langle (\alpha_p\alpha_q)J|\hat{v}|(\alpha_r\alpha_s)J\rangle \) (or \( \langle (\alpha_p\alpha_q)J|\tilde{v}|(\alpha_r\alpha_s)J\rangle \)) through the relation $$ \begin{align} \langle (\alpha_p\alpha_q)J|\hat{v}_{k}|(\alpha_r\alpha_s)J\rangle&=(-1)^{J}(2k+1) \sum_{LL'SS'}\langle \alpha_p\alpha_q|LSJ\rangle \langle \alpha_r\alpha_s|L'S'J\rangle \left\{ \begin{array}{ccc} L&S&J\\ S'&L'&k \end{array} \right\} \nonumber \\ &\times \sum_{J'}(-1)^{J'}(2J'+1)\left\{ \begin{array}{ccc} L&S&J'\\ S'&L'&k \end{array} \right\} \sum_{\alpha_{p}'\alpha_{q}'\alpha_{r}'\alpha_{s}'}\langle \alpha_p'\alpha_q'|LSJ'\rangle \nonumber\\ &\times \langle \alpha_r'\alpha_s'|L'S'J'\rangle \langle (\alpha_p'\alpha_q')J'|\hat{v}|(\alpha_r'\alpha_s')J'\rangle . \label{eq:jj} \end{align}$$
The two-particle matrix elements are normalized and antisymmetrized. A similar expression applies to the medium-modified two-body interaction \( \tilde{v} \) of Eq. \eqref{eq:twobodyveff} as well. The symbol \( \langle \alpha_p\alpha_q|LSJ \rangle \) is a shorthand for the \( LS-jj \) transformation coefficient, $$ \begin{equation*} \langle \alpha_p\alpha_q|LSJ \rangle = \sqrt{(2j_{p}+1)(2j_{q}+1)(2L+1)(2S+1)} \left\{ \begin{array}{ccc} l_{p}&\frac{1}{2}&j_{p}\\ l_{q}&\frac{1}{2}&j_{q}\\ L &S &J \end{array} \right\} \end{equation*} $$
To derive Eq. \eqref{eq:jj}, we have used the fact that the two-body matrix elements of \( \hat{v}_k \) can also be interpreted in the representation of the \( LS \)-coupling scheme.