Friday, February 27, 2015

Matrix for 2 interacting electrons.

I have had second thoughts about discarding the two states that had spin 1. In retrospect, I think it would be better for illustrating and understanding the essential physics if we kept them and worked with a 6x6 matrix. Over the weekend, please put in the off-diagonal matrix elements representing coupling between states connected by a single "hop". Come to class Monday with your own 6x6 matrix made of \(E_1\), \(U\), and \(I\).
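For orientation only (this is not the assigned answer; in particular, the signs of the hopping entries depend on conventions you should sort out yourselves), one way the bookkeeping could look is this: order the six two-electron states as \( \lvert \uparrow\downarrow, 0 \rangle, \lvert 0, \uparrow\downarrow \rangle, \lvert \uparrow, \downarrow \rangle, \lvert \downarrow, \uparrow \rangle, \lvert \uparrow, \uparrow \rangle, \lvert \downarrow, \downarrow \rangle \). The two doubly occupied states carry the extra Coulomb energy \(U\), a single hop connects each of them to each of the two opposite-spin singly occupied states, and the two spin-1 states are not connected to anything by a single hop:

\( H \sim \begin{pmatrix} 2E_1+U & 0 & I & I & 0 & 0 \\ 0 & 2E_1+U & I & I & 0 & 0 \\ I & I & 2E_1 & 0 & 0 & 0 \\ I & I & 0 & 2E_1 & 0 & 0 \\ 0 & 0 & 0 & 0 & 2E_1 & 0 \\ 0 & 0 & 0 & 0 & 0 & 2E_1 \end{pmatrix} \)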

Thursday, February 26, 2015

Special homework project.

It would be really helpful for me to get an idea of your thoughts and feelings on what we have covered so far. For each section, would you discuss:
how much you liked it,
what parts you understood or did not understand so well,
the meaning and significance of each part to you,
any connections you see with other topics in this class (or elsewhere), and anything else you might like to add.

If I remember correctly, the things we have covered so far could be briefly described as:
1) Bloch states for a spatially periodic 1D crystal (starting with the Schrodinger equation)
2) bands which form when square-well atoms, each with 3 bound states, are arranged into a 1D crystal,
3) electron specific heat of a metal,
4) magnetic susceptibility of a metal,
5) influence of e-e interaction on magnetic susceptibility,
6) origin of ferromagnetic instability, ferromagnetism as a spontaneous spin alignment driven by e-e interaction,
7) Description of a single band (for a 1D crystal) with only nearest-neighbor overlap (I), using a matrix-vector form of the Schrodinger equation,
8) Introduction of disorder into the matrix formulation.

You can send me your thoughts by email. It is fine, encouraged even, to do a quick response and then follow up with a more nuanced, detailed response when you have time.  Thanks. 
Zack

PS. Please feel free to ask questions here or by email.

Tuesday, February 24, 2015

Developed Time Step Simulation

Here is a more developed form of the time-step simulation. I've included the actual Python script used to run it at the bottom of this post (if anybody knows a better way to post it, let me know). Note that it is based on Python 2 and requires the NumPy and Gnuplot Python libraries to run. I've tried to document it well, but let me know if you have any questions.

Below are a few examples of what it can do. The first is a simulation of a perfect set of 1000 wells. The second is a simulation with the exact same parameters, but now with random Gaussian disorder* of I/2 throughout, where I = 1 eV. The third has random Gaussian disorder of I/3 and (as was requested by Mark) a linear adjustment to the wells, representing something like an electric field. Note that this last one also includes a new feature: a title showing the disorder level and a countdown timer.

*Disorder here means the standard deviation of the Gaussian distribution.
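For readers who just want the gist before digging into the full script, here is a stripped-down sketch of the kind of loop it runs. This is not the posted script: the parameter names, the simple first-order Euler step, and the hbar = 1 units are my own illustrative choices.

import numpy as np

# Sketch of a time-step simulation for N wells with nearest-neighbor coupling I,
# Gaussian on-site disorder, and an optional linear "electric field" tilt.
N = 1000                # number of wells
I = 1.0                 # nearest-neighbor coupling in eV
disorder = I / 2.0      # standard deviation of the Gaussian on-site disorder
tilt = 0.0              # linear adjustment to the well energies (eV per well)

# Hamiltonian matrix: random well energies on the diagonal, I on the off-diagonals.
diag = np.random.normal(0.0, disorder, N) + tilt * np.arange(N)
H = np.diag(diag) + I * (np.eye(N, k=1) + np.eye(N, k=-1))

# Start with the electron localized on the middle well.
psi = np.zeros(N, dtype=complex)
psi[N // 2] = 1.0

# Crude first-order Euler stepping, with hbar = 1 so dt is in units of hbar/I.
# A very small step plus a renormalization keeps the error in check; the real
# script may well use a more careful integrator.
dt = 0.005
for step in range(2000):
    psi = psi - 1j * dt * np.dot(H, psi)
    psi /= np.linalg.norm(psi)

print(np.abs(psi) ** 2)   # probability of finding the electron in each well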


Perfect Wells

Disorder I/2

Disorder I/3 and linear dependence

Sunday, February 22, 2015

Special Guest, reading assignment.

On Wednesday we will have a special guest. Any questions that you have about the fundamental physics of electrons in crystals may be discussed then. (Also, if you have a chance to read the paper "More is different" (you can google it) before Wednesday, that would be good.)

Time Evolution Preliminary results

I've got a workable form of the time evolution simulation. Below are the 40-frame gif results of two different trials. The first was generated using 2,000 time steps of hbar/I each, and the second was generated using 200,000 time steps of hbar/(100*I) each. In both cases I was set to 1 eV, the initial wave was localized to well 500, and the initial matrix and vector were of size 1000. I'm thinking I must have made a mistake in how I treated the second one, though, as it looks drastically different from the first. My thought was that if I reduce the time interval by a factor of 100, then I should increase the number of intervals by the same factor so that things come out even. Is that idea incorrect?
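A quick check of the totals (just arithmetic, not a diagnosis of the difference): \( 2000 \times \hbar/I = 200000 \times \hbar/(100 I) = 2000\,\hbar/I \), so the two runs do span the same total evolution time, which for I = 1 eV is about \( 1.3 \times 10^{-12} \) s.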




Thursday, February 19, 2015

Sunday, February 15, 2015

Special Projects (additional items added)

A. Calculate the magnetic susceptibility, \(\chi\), and specific heat of a metal in the normal state for a given bandwidth and several values of e-e interaction strength. Make them both intensive quantities. Figure out the units of susceptibility (this is an E&M problem. I think that perhaps magnetization (m) and the thing called H in E&M have the same units? Is that wrong?). Graph them as a function of T from T=0 to 100 K. Show how you can use a specific heat measurement to infer both the bandwidth (or density of states at the Fermi level) and the strength of the e-e interaction in a non-magnetic (aka paramagnetic, aka normal) metal. Make up a problem that has about 20 simulated data points for \(\chi\) and \( C_v\) and where the student is asked to fit the data points and infer the bandwidth and e-e interaction strength.
      Suggestion: Start with \(e^2/a =0\). Then work up to 0.5 eV, 1 eV, 1.5 eV.  Use a bandwidth of 2 or 3 eV. What is the essential nature of \(\chi\) vs T in a reasonable range of T?
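A minimal numerical sketch of the kind of machinery this project needs, assuming a flat (constant density of states) band of width \(E_{band}\) at half filling. The per-spin density of states, the units, and the spot where the e-e enhancement would enter are my own assumptions, to be swapped for whatever comes out of the class derivation.

import numpy as np

# Flat band of width E_band, half filled, treated with Fermi-Dirac statistics.
kB = 8.617e-5                 # Boltzmann constant in eV/K
E_band = 2.0                  # bandwidth in eV
n_e = 1.0                     # electrons per site (half filling)
D = n_e / E_band              # assumed constant density of states per spin, per site

E = np.linspace(-E_band / 2, E_band / 2, 20001)
dE = E[1] - E[0]

def fermi(E, mu, T):
    x = np.clip((E - mu) / (kB * T), -700, 700)   # clip to avoid overflow at low T
    return 1.0 / (np.exp(x) + 1.0)

def chemical_potential(T):
    # Bisect on mu until the band holds n_e electrons at temperature T.
    lo, hi = -E_band, E_band
    for _ in range(60):
        mu = 0.5 * (lo + hi)
        n = 2 * D * np.sum(fermi(E, mu, T)) * dE   # factor of 2 for spin
        lo, hi = (mu, hi) if n < n_e else (lo, mu)
    return 0.5 * (lo + hi)

def internal_energy(T):
    return 2 * D * np.sum(E * fermi(E, chemical_potential(T), T)) * dE

T = np.linspace(10.0, 100.0, 91)
U = np.array([internal_energy(t) for t in T])
Cv = np.gradient(U, T)        # specific heat per site, in eV/K

# Non-interacting (Pauli) susceptibility in units of mu_B^2 per site; a mean-field
# e-e enhancement would divide this by something like (1 - (e^2/a) * D).
chi0 = 2 * D * np.ones_like(T)

print(Cv[0], Cv[-1], chi0[0])

Plotting Cv and the enhanced \(\chi\) against T for each value of \(e^2/a\) then gives the curves the project asks for.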

B. Consider a degenerate "Fermi liquid" at low temperature. Specifically, consider a half-filled band with upward curvature. (You can do downward curvature as well if you like.) Explain how an applied electric field shifts the occupation boundaries and leads to a modest number of electrons (much less than N) carrying current. Explain what fraction of the electrons is responsible for the net current and what their typical speed is. Get an estimate of the conductivity. You can assume that the size of the boundary shift is very small compared to \(\pi/a\). That assumption can be used to greatly simplify the integral over q and to simplify your final result for the conductivity. (There is no benefit, and significant cost, to not making that assumption!)
     Then redo the whole thing for a 1/8 or a 1/4 filled band using the approximation that E is quadratic in k and with an effective mass in the denominator (i.e., \( E_q = \hbar^2 q^2/(2 m_o m^*) \)). Try to get a simple expression for the conductivity. Use n = N/V where appropriate. Conductivity should be defined to be intensive.
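For the quadratic-band case, one conventional form the answer could reduce to, assuming the steady-state shift of the occupation boundary is set by some scattering time \(\tau\) (an ingredient not specified above, so treat it as an added assumption), is \( \delta q \approx e\mathcal{E}\tau/\hbar \) for applied field \( \mathcal{E} \), which leads to \( \sigma \approx n e^2 \tau/(m_o m^*) \) with \( n = N/V \). In that picture the fraction of electrons actually responsible for the net current is of order \( \delta q/q_F \), and those electrons move at roughly the Fermi velocity.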

C. Use the energy eigenstate formulation to calculate the time-dependent spreading of a wave function expressed as a wave packet. Start with a free electron in a Gaussian initial state. Then do this also for an electron in a crystal using the energy eigenstates associated with a single band of crystal energy eigenstates. What range of crystal eigenstates does it take to localize an electron to one lattice site? What range of energy eigenstates does it take to localize an electron to a finite, but larger, region, say 100 lattice sites? For the latter problem you could use a Gaussian envelope function to define your initial t=0 wave function.
      (For this project I think you will need an atomic wave function to work with. My suggestion would be a Gaussian, due to the absence of a singularity at the origin. The delta-function ground state might also be a possibility. Actual square well states will be difficult to work with because of their piecewise nature. I think that the results are not much affected by this choice, as long as the atomic state is exponentially localized.)
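A sketch of the free-electron half of this project, using the energy-eigenstate (plane-wave) expansion; the grid size, packet width, and \(\hbar = m = 1\) units are illustrative choices, and the crystal version would replace the \(k^2/2\) phases with the band energies.

import numpy as np

# Spread of a free-electron Gaussian wave packet, computed by expanding the t = 0
# state in plane-wave energy eigenstates and evolving each one's phase exactly.
L, Npts = 400.0, 4096
x = np.linspace(-L / 2, L / 2, Npts, endpoint=False)
dx = x[1] - x[0]

sigma0 = 1.0
psi0 = np.exp(-x**2 / (4 * sigma0**2))          # Gaussian envelope at t = 0
psi0 /= np.sqrt(np.sum(np.abs(psi0)**2) * dx)   # normalize

k = 2 * np.pi * np.fft.fftfreq(Npts, d=dx)      # plane-wave wavenumbers
c = np.fft.fft(psi0)                            # expansion coefficients

for t in [0.0, 5.0, 20.0]:
    psi_t = np.fft.ifft(c * np.exp(-1j * (k**2 / 2) * t))   # E_k = k^2/2 phases
    width = np.sqrt(np.sum(x**2 * np.abs(psi_t)**2) * dx)   # rms width of |psi|^2
    print(t, width)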

D. Use a matrix formulation to calculate the time dependence (spreading) of a wave function representing one electron in a spatially periodic perfect lattice. (Do this numerically, say for a 1000-site lattice.) After doing this for a perfect lattice, try doing it for a model in which the off-diagonal elements are all the same (as before), but the diagonal elements have some degree of random disorder about a central mean value (the same mean value as you used in the previous part).
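One possible numerical route for this project (a sketch with example parameter values, not a prescription): diagonalize the matrix once and let each energy eigenstate carry its own phase, rather than stepping through time.

import numpy as np

# Tight-binding chain: constant off-diagonal element I, Gaussian disorder of
# standard deviation W on the diagonal (set W = 0 for the perfect lattice).
N, I, W = 1000, 1.0, 0.5                          # sites, coupling (eV), disorder (eV)
H = np.diag(np.random.normal(0.0, W, N)) + I * (np.eye(N, k=1) + np.eye(N, k=-1))
vals, vecs = np.linalg.eigh(H)                    # energy eigenvalues and eigenvectors

psi0 = np.zeros(N)
psi0[N // 2] = 1.0                                # electron starts on the middle site
c = np.dot(vecs.T, psi0)                          # expansion in energy eigenstates

t = 50.0                                          # time in units of hbar / (1 eV)
psi_t = np.dot(vecs, np.exp(-1j * vals * t) * c)  # psi(t) = sum_n c_n exp(-i E_n t) |n>
prob = np.abs(psi_t) ** 2
print(np.sqrt(np.sum(prob * (np.arange(N) - N // 2) ** 2)))   # rms spread from the start site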
 
E. Consider a lattice with only two sites and two electrons. Each electron can be localized on either site in a single-electron eigenstate in which it has an energy of -8 eV (or call it \(E_o\)). If the electrons are on the same site then there is a Coulomb energy of +2 eV. If they are not on the same site the Coulomb energy is zero. Including spin, enumerate all possible states of this system and the energy of each state. Draw simple pictures to illustrate.
        Do the same thing for 3 electrons on 3 sites, and then perhaps for 4 electrons on 4 sites.

F. Investigate and discuss the nature and origin of anti-ferromagnetism.  What is a half-filled Hubbard band? What does it have to do with magnetism? ... ....  (perhaps we can flesh this out more. Also, I think this is related to E.)

G. Understanding superconductivity: The key thing in superconductivity is not so much the specific nature of the interaction that leads to pairing (which can vary), but rather the tendency of electrons at a Fermi surface to be highly susceptible to pairing. The fermion nature of electrons plays a critical role here. In three dimensions a Fermi surface could be a sphere in k-space inside which all states are occupied (by electrons) and outside which all states are empty. Cooper showed that electrons at the Fermi surface could be unstable with respect to pairing, forming pairs of fermions which then act as bosons. These boson pairs then develop a phase coherence that leads to a superfluidity known as superconductivity. As a project one could study Cooper pair formation, phase coherence and superfluidity, or some other aspect of superconductivity.

H. Calculations of basic quantities in the effective mass approximation. ... more to follow

Feel free to ask questions and suggest other projects. Allow yourself enough time to understand the problem etc… There may be flaws in these problems that will take time and discussion to work out. (or maybe not?)

Wednesday, February 11, 2015

Today's Class.

I hope you have all the sign issues sorted out and are getting some interesting results for \(\chi (T)\) in the normal state, and \( m(T)\) below the temperature at which spontaneous magnetization emerges, \(T_c\). Your results should take the form of graphs, with very well-written figure captions. Additionally, there should be a discussion, but make the figures, the graphs, central to the reader's experience. I can't make it to class today, so you can turn those in to my mailbox anytime tomorrow if you like.

Tuesday, February 3, 2015

Helmholtz free energy *first draft

\( F=U-TS \) is that weird formula that we wanted to minimize. But why?

Note: This is my first attempt; I am not sure if it's valid to imagine a system being built at constant temperature T.

To answer why, one must understand what U and TS are in this expression and the conditions our little crystal is in. It sits in a huge heat bath at temperature T and has constant volume, so no work can be done on or by it. Hence the only interaction with the outside is thermal.

In general, U in the Helmholtz free energy equation is just the internal energy of the system; I think we made an approximation in class, since we plugged in U at T = 0 K.

If you want to be persuaded that the U we used in class is the energy of the system at T = 0 K, read the following:

*Explanation: The above holds because we did not use Fermi-Dirac statistics to calculate U; instead we plugged f(T,E) = 1 into \( U= \int E D(E) f(T,E)\,dE \). In \( f(T,E)=\frac{1}{e^{(E-\mu)/kT}+1} \), the Fermi energy lies above every occupied state, so for those states the exponent is negative and \( \lim_{T\to 0} f(T,E)=1 \).

How can TS be interpreted?

First, \( dQ = T\,dS \) for a reversible process, so if a system is held at constant temperature T while its entropy is raised from 0 to S, the heat absorbed is \( \int_0^S T\,dS' = TS \).

Therefore TS can be thought of as the total heat added to a system at temperature T as its entropy is raised from 0 to S.

Now we combine these two quantities to get \(F=U-TS \)

Minimizing this quantity will allow us to have a minimum energy U internally while extracting as much heat from the system as we can!

I think that I need to think more and maybe redo all of it


Special Projects.

Seems like people are interested in learning more about how Fermi statistics affect the electron-electron interaction, and why we minimize the Helmholtz energy, F = E - TS, to find the equilibrium state of a system coupled to a heat bath (that is, at a well-defined temperature T).

For the latter issue, if someone(s) want to look into that and report back, I think people would be interested. I think that it is possible to connect that directly to the 2nd law of thermo, the one about entropy increasing. The Helmholtz energy plays a big role in a lot of physics, so it is worth understanding.
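For anyone taking that up, here is one compact version of the connection (a sketch, assuming the bath is so large that its temperature stays fixed at T and that the heat it absorbs is \( -\Delta U_{sys} \)): applying the 2nd law to the system plus the bath gives \( \Delta S_{total} = \Delta S_{sys} + \Delta S_{bath} = \Delta S_{sys} - \Delta U_{sys}/T = -\Delta F/T \ge 0 \), so while the total entropy can only increase, \( F = U - TS \) of the system alone can only decrease, and equilibrium sits at the minimum of F.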

For the e-e interaction issue, perhaps the following calculation might be helpful. Consider a square well from x=0 to 1 nm. Suppose there are two electrons in the well and that the occupied states are sin(10 pi x) and sin(11 pi x), for example. I think that you can make a symmetric combination of those or an anti-symmetric combination. For those two cases, calculate the expectation value of \(|x_1-x_2|\). See if they are substantially different. Perhaps this will tell us if there are correlations built into these states and if electrons avoid each other more in one than the other? Does this make sense? Feel free to ask questions about it. Here is a normalization integral to get things started. (I could not get W-A to do the integral when the \(|x_1-x_2| = |x-y| \) term was added.)

Integrate [2 sin^2(9 pi x) 2 sin^2(8 pi y)] from x=0 to 1, y=0 to 1
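To sidestep W-A, here is a rough numerical version of that calculation (a sketch, not the worked answer; the grid size is arbitrary and x is measured in units of the 1 nm well width).

import numpy as np

# Compare <|x1 - x2|> for the symmetric and antisymmetric combinations of
# sin(10 pi x) and sin(11 pi x) in a unit-width well.
n = 400
x = (np.arange(n) + 0.5) / n                     # midpoint grid on (0, 1)
X1, X2 = np.meshgrid(x, x, indexing="ij")

phi_a = np.sqrt(2) * np.sin(10 * np.pi * x)      # normalized single-particle states
phi_b = np.sqrt(2) * np.sin(11 * np.pi * x)

for sign, label in [(+1, "symmetric"), (-1, "antisymmetric")]:
    psi = (np.outer(phi_a, phi_b) + sign * np.outer(phi_b, phi_a)) / np.sqrt(2)
    prob = psi**2 / n**2                         # |psi(x1, x2)|^2 dx1 dx2 on the grid
    norm = prob.sum()                            # should come out close to 1
    sep = (np.abs(X1 - X2) * prob).sum() / norm  # expectation value of |x1 - x2|
    print(label, sep)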


Monday, February 2, 2015

Midterm Question



This looks great. Let's make this a 3-part problem.
part A: Calculate the spontaneous magnetization as a function of temperature for \(e^2/a = 1.5\; eV\) and \(E_{band}\) = 1.4 eV. (This part is for zero applied magnetic field.) For this part create a graph of magnetization vs T. Define a \(T_c\) and look at the behavior in the vicinity of it. The part in the square root is a lot more important than the 1/T outside it. What is the critical exponent?

Part B: Calculate the magnetic susceptibility in the normal state, as Arjun has explained.
For both parts, the most important thing is your graph. That is what I will look at first. Do a really nice graph, hand drawn, not too large, with a nice title, labels and scales, and with an excellent caption that explains everything in a succinct and beautiful manner. 

Part C. (added Feb 6) Calculate and plot the magnetic susceptibility as a function of T for \(e^2/a = 1.5\; eV\) and \(E_{band}\) = 1.7 eV.
===========================================

I decided to go back to my entropy calculation after something Zack mentioned, and I found a sign error in my arithmetic. \( C_0 \) remains the same, but the expression for the polynomial up to order \( O(x^m) \) is:
\( \sum_{k \in \text{evens} }^m 2^k \left[ \frac{1}{(k-1)N^{k-1} } - \frac{(1-N)}{kN^k} \right] \left( N_\uparrow - \frac{N}{2} \right)^k \)

Alternatively written as
\( \sum_{k \in \text{evens} }^m 2^k \left[ \frac{N - (k-1)}{k(k-1)N^k} \right] \left( N_\uparrow - \frac{N}{2} \right)^k \)

The first two terms (to order \( O(x^6) \) ) are:
\( \frac{N-1}{2N^2} 2^2(N_\uparrow -\frac{N}{2})^2 + \frac{N-3}{12N^4} 2^4(N_\uparrow -\frac{N}{2})^4 \)

And this is definitely 100% correct. I checked it against Mathematica and everything. This is a Taylor expansion from
\( 0.5 \ln (N^2 - N_\Delta ^2 ) + N_\Delta \tanh ^{-1} \left( \frac{N_\Delta }{N} \right) \quad \text{where} \quad N_\Delta = N_\uparrow - N_\downarrow \)

which is itself part of the Stirling Approximation for
\( \ln \left( \frac{N!}{N_\uparrow ! N_\downarrow ! } \right) \)
-Aaron


EDIT: Added the revised expression that Aaron derived for the entropy and changed the symbol for the band energy to avoid confusion.  -Arjun

Sunday, February 1, 2015

Entropy

I think that if you set the bandwidth equal to zero, that will make the entropy calculation easier. I guess what we need to know is: for a given value of \(N_{\uparrow} - N_{\downarrow}\), how many states are there?  That is, what is \(\Omega\)?

Then we can find the equilibrium state of the system by looking for the minima of \(F = E - TS\),
where \( S = k \ln\Omega \).  Does that make sense?

Here is a suggestion. I think what we really want is just a graph of \( kT \ln\Omega \) as a function of \(N_{\uparrow} - N_{\downarrow}\), and then to maybe add that to the e-e energy term and graph that to see which one controls the equilibrium. (Or really, how they both influence it...) So instead of getting all caught up in formal math, Stirling's formula, etc. (which is fine, but might take too much time), maybe someone could just do that numerically for a particular case like N=1000 or so, and for a particular value of \(e^2/a\), say 1 or 2 eV. Then graph the free energy as a function of \(N_{\uparrow} - N_{\downarrow}\) for a few values of T. Does that make any sense?
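Here is a rough numerical sketch of that suggestion. The particular form of the e-e energy term below is only a placeholder I made up (swap in whatever expression the class derivation gives), and the temperatures are chosen simply to bracket where the two terms trade off for these example numbers.

import numpy as np
from math import lgamma

kB = 8.617e-5                     # Boltzmann constant in eV/K
N = 1000
e2_over_a = 1.0                   # e-e energy scale in eV (example value)

N_up = np.arange(0, N + 1)
M = 2 * N_up - N                  # N_up - N_down

# Exact ln(Omega) = ln( N! / (N_up! N_down!) ) via log-gamma functions.
lnOmega = np.array([lgamma(N + 1) - lgamma(u + 1) - lgamma(N - u + 1) for u in N_up])

# Placeholder e-e term: energy lowered as the spins polarize (a quadratic guess only).
E_ee = -e2_over_a * M.astype(float) ** 2 / (4 * N)

for T in [3000.0, 6000.0, 9000.0]:
    F = E_ee - kB * T * lnOmega   # free energy as a function of N_up - N_down
    print(T, M[np.argmin(F)])     # the polarization that minimizes F at this T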

By whatever method, either analytically or numerically, what we need to keep moving forward is an approximate expression for S that includes terms up to 4th order in \(N_{\uparrow} - N_{\downarrow}\). I think S has a maximum at \(N_{\uparrow} - N_{\downarrow}=0\) and one can do a Taylor series expansion near there including the \((N_{\uparrow} - N_{\downarrow})^2\) and \((N_{\uparrow} - N_{\downarrow})^4\) terms. One can also fit a numerically generated S in that way.

Review of Friday's class