Wednesday, March 18, 2015

Band Width and Wave localization as a function of Disorder

Links for program, Mathematica Script, etc.:
simulation.c
Mathematica Package
etc

First I'd like to showcase my results from the simulation for the band widths, all using disorder in increments of 0.02 from 0 to 1, scaled by t, the tunneling energy.


I was a bit unsure at the time whether it was physical for the energy of a well to be greater than the value of t. The Bounded data represents the wells that were capped at this value, or in other words generated from a normal distribution cut off at the ends. I eventually dismissed that concern and allowed the well energies to take whatever values the distribution produced. This is the unbounded data, and it seems to fit a quadratic given by \(4x^2+x+4\), where \(x\) and the band width are both on the scale of \(t\). Here's a link to what the distribution tends to look like. Twelve samples were used; these take the longest to solve.
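To make the bounded/unbounded distinction concrete, here is a minimal sketch (in Python rather than the C used for the real runs) of the two ways of drawing well energies; clipping at \(\pm t\) is just one simple way to realize the cutoff, and all parameter values are illustrative.

import numpy as np

rng = np.random.default_rng(0)

def well_energies(n_wells, disorder, t=1.0, bounded=False):
    # Draw site energies from a normal distribution whose standard
    # deviation is `disorder` (in units of t).  If bounded, cap the
    # draws at +/- t, mimicking a normal distribution cut off at the ends.
    e = rng.normal(0.0, disorder * t, n_wells)
    if bounded:
        e = np.clip(e, -t, t)   # the "Bounded" data sets
    return e                    # otherwise the "unbounded" data sets

print(well_energies(1000, 0.5)[:5])   # e.g. 1000 wells at disorder 0.5 t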


Next I ran a simulation with a sample rate of 50, comparing the width of the wave through the lattice on the same scale of disorder. As you can see, it fits an inverse relationship very well. Something about the nature of normal distributions tells me this should have been expected, but I can't remember why at the moment.
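One quick way to quantify how well the points follow an inverse relationship is a least-squares fit to width = a/disorder + c; a sketch, with placeholder arrays standing in for the measured columns:

import numpy as np
from scipy.optimize import curve_fit

def inverse_model(d, a, c):
    return a / d + c

# placeholder data -- substitute the simulated disorder/width columns here
disorder = np.arange(0.02, 1.0001, 0.02)
width = 3.0 / disorder + 0.5

params, _ = curve_fit(inverse_model, disorder, width)
print("fit: a = %.3f, c = %.3f" % tuple(params))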

These next images show both the nature of the simulation and how the width changes throughout the simulation.


Here a small sample was taken over a short span of time. You can clearly see how, at higher disorder, the width converges more rapidly than at lower disorder. With no disorder, the width never converges.


And here is nearly the same graph with a much higher sample rate and over a much longer time. You can see by the error bars that even with 500 samples, the wave can behave quite surprisingly as it moves about finding the lowest energy state.

I'll try to run more simulations to get a more accurate Width vs. Disorder plot, but at 50,000 iterations and 500 samples the program ran more than 8 hours so I'll decrease the accuracy by a bit in the code.

If anyone wants to run the code themselves, I've included the current version as source. The inputs can only be changed by editing the file itself, and the output should be redirected to a file. It is formatted so that it can be pasted directly into Excel.

I'd like to thank the people over at Florida State University, whose Normal Random Number Generators package I used to do these simulations in C.

I'd like to discuss the actual physical constraints on these types of simulations, and understand their limitations.

Saturday, March 14, 2015

Fresh post! Bound for more discussion and more comments. Three electron three well matrix formulation


*(Don't stare at the picture too long!)

On the left of the picture above you can see the basis I used to create the 20x20 matrix, much like the two electron two well system. This is also the paper on which I derived the matrix (I have checked it multiple times); the squares represent the overlap integral and the squares with an asterisk are the negative ones. As you can see, it's Hermitian, just as expected for the Hamiltonian.

A hard part of this task was determining when to put a minus sign in front of a transition. I have no idea why we put the minus sign there, but I came up with a pattern and it seems to work. I assigned a positive contribution \(+I\) when \( \uparrow,.. \rightarrow ..,\uparrow\) and \( ..,\downarrow \rightarrow \downarrow,.. \), and a negative contribution \(-I\) when \( ..,\uparrow \rightarrow \uparrow,.. \) and \( \downarrow,.. \rightarrow ..,\downarrow \). Also, \( \uparrow\downarrow,.. \rightarrow \downarrow,\uparrow \) is negative \(I\) and \( \uparrow\downarrow,.. \rightarrow  \uparrow,\downarrow\) is positive \(I\). I know this seems weird, but I needed to come up with a pattern that applies in all the cases I encountered and that incorporates the pattern used for the two electron, two well system.

Since it's a huge matrix, I decided to plug in U = 4 eV and I = 1 eV to get numerical values as solutions. (Note: since it's a 20x20 matrix, it has 20 eigenvectors and eigenvalues.) Below you can see our beautiful Hamiltonian H:

0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0
0  0  0  0  0  0  0  0  0  0  0  0  1  0  0  0  1  0  0  0
0  0  0  0  0  0  0  0  0  0  1  0 -1  0  1  0 -1  0  0  0
0  0  0  0  0  0  0  0  0  0 -1  0  0  0 -1  0  0  0  0  0
0  0  0  0  0  0  0  0  0  0  0  1  0  0  0  1  0  0  0  0
0  0  0  0  0  0  0  0  0  0  0 -1  0  1  0 -1  0  1  0  0
0  0  0  0  0  0  0  0  0  0  0  0  0 -1  0  0  0 -1  0  0
0  0  0  0  0  0  0  0  4  0 -1  0 -1  0  0  0  0  0  0  0
0  0  0  0  0  0  0  0  0  4  0 -1  0  1  0  0  0  0  0  0
0  0  0  1 -1  0  0  0 -1  0  4  0  0  0  0  0  0  0  0  0
0  0  0  0  0  1 -1  0  0 -1  0  4  0  0  0  0  0  0  0  0
0  0  1 -1  0  0  0  0 -1  0  0  0  4  0  0  0  0  0  0  0
0  0  0  0  0  0  1 -1  0  1  0  0  0  4  0  0  0  0  0  0
0  0  0  1 -1  0  0  0  0  0  0  0  0  0  4  0  0  0  1  0
0  0  0  0  0  1 -1  0  0  0  0  0  0  0  0  4  0  0  0 -1
0  0  1 -1  0  0  0  0  0  0  0  0  0  0  0  0  4  0  1  0
0  0  0  0  0  0  1 -1  0  0  0  0  0  0  0  0  0  4  0 -1
0  0  0  0  0  0  0  0  0  0  0  0  0  0  1  0  1  0  4  0
0  0  0  0  0  0  0  0  0  0  0  0  0  0  0 -1  0 -1  0  4

(To try out your own I and U, just use the form above and replace the values accordingly.)
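If you would rather not diagonalize by hand, the sketch below feeds the matrix exactly as written above (U = 4 eV, I = 1 eV) to NumPy; the ends of the sorted spectrum can then be compared with the highest and lowest energies quoted below.

import io
import numpy as np

H_text = """
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0
0 0 0 0 0 0 0 0 0 0 1 0 -1 0 1 0 -1 0 0 0
0 0 0 0 0 0 0 0 0 0 -1 0 0 0 -1 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 1 0 0 0 1 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 -1 0 1 0 -1 0 1 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 0 0 -1 0 0
0 0 0 0 0 0 0 0 4 0 -1 0 -1 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 4 0 -1 0 1 0 0 0 0 0 0
0 0 0 1 -1 0 0 0 -1 0 4 0 0 0 0 0 0 0 0 0
0 0 0 0 0 1 -1 0 0 -1 0 4 0 0 0 0 0 0 0 0
0 0 1 -1 0 0 0 0 -1 0 0 0 4 0 0 0 0 0 0 0
0 0 0 0 0 0 1 -1 0 1 0 0 0 4 0 0 0 0 0 0
0 0 0 1 -1 0 0 0 0 0 0 0 0 0 4 0 0 0 1 0
0 0 0 0 0 1 -1 0 0 0 0 0 0 0 0 4 0 0 0 -1
0 0 1 -1 0 0 0 0 0 0 0 0 0 0 0 0 4 0 1 0
0 0 0 0 0 0 1 -1 0 0 0 0 0 0 0 0 0 4 0 -1
0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 0 1 0 4 0
0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 -1 0 -1 0 4
"""

H = np.loadtxt(io.StringIO(H_text))     # 20x20, in eV (U = 4, I = 1)
assert np.allclose(H, H.T)              # sanity check: real symmetric (Hermitian)

vals, vecs = np.linalg.eigh(H)          # eigenvalues in ascending order
print(vals)                             # lowest first, highest last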

Eigenvalue/Eigenvector solutions and comparison with two electron two well system
Just like the 2 electron 2 well case we get two 0 eigenvalues for the eigenvectors:
(1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0)=\( \downarrow, \downarrow, \downarrow \)
  and
(0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0) =\( \uparrow, \uparrow, \uparrow \)

Now let's start with the highest and lowest energy eigenvectors/eigenvalues.

At the highest energy, E = 5.819 eV, I get the state:

(0,0,0,0,0,-0.122,0.244,-0.122,0,0.559,0,-0.509,0,0.509,0,-0.201,0,0.201,0,0)
Eigenvector,\( \mathbf{e}=-0.122 \cdot \uparrow, \downarrow, \downarrow +0.244 \downarrow, \uparrow, \downarrow-0.122\downarrow, \downarrow, \uparrow\)
\( + 0.559 \uparrow \downarrow, \downarrow, .. -0.509 \uparrow \downarrow, .., \downarrow+0.509 \downarrow, \uparrow \downarrow, ... -0.201 ..,\uparrow\downarrow, \downarrow + 0.201 \downarrow,..,\uparrow\downarrow \)

For comparison, in my older post with the derivation of the two electron two well system (also with I = 1 eV, U = 4 eV) I got a highest energy eigenvalue of E = 5 eV,
with eigenvector,\(\mathbf{e}= 0.957\uparrow\downarrow,..+0.116 \uparrow, \downarrow -0.116 \downarrow, \uparrow+0.957 ..,\uparrow\downarrow\)

At the lowest energy, E = -1.2 eV, we get the eigenstate:

(0,0,0,0,0,0.365,-0.730,0.365,0,-0.088,0,-0.228,0,0.228,0,-0.211,0,0.211,0,0)
Eigenvector \( \mathbf{e}= 0.365 \uparrow, \downarrow, \downarrow - 0.730 \downarrow, \uparrow, \downarrow +0.365 \downarrow,\downarrow,\uparrow \)
\(-0.088 \uparrow \downarrow, \downarrow, .. - 0.228 \uparrow \downarrow, .. , \downarrow+0.228 \downarrow, \uparrow \downarrow, .. -0.211 ..,\uparrow\downarrow,\downarrow +0.211 \downarrow,..,\uparrow\downarrow \)

In my old post I also calculated the minimum eigenvalue and eigenvector, finding an eigenvalue of E = -1 eV
with eigenvector:\(\mathbf{e}=-1/9\uparrow\downarrow,.. +0.5\uparrow, \downarrow -0.5 \downarrow, \uparrow -1/9..,\uparrow\downarrow\)

As you can see, both lowest energy eigenstates do not completely avoid states with coulomb repulsion but still have the lowest energy. However, we mentioned in class that for the 2\(e^-\) 2 well system the lowest energy eigenstate has net spin 0. The 3\(e^-\) 3 well system does not have net spin 0, raising the following question: is our previous assertion that there is a relationship between the states we have obtained for the 2 electron system and anti-ferromagnetism well founded?
*Not sure if important:*
*********************************************************************************
As a sanity check of my results, I wanted to mention that I did find a state with E = U = 4 eV for the three electron three well system that has the same energy as an eigenstate of the two electron system.

For 3e 3w system:
E=4eV,
\(\mathbf{e}= -1/2 \uparrow\downarrow,\downarrow,.. + 1/2 \uparrow\downarrow,..,\downarrow\)
\(+1/2 \downarrow, \uparrow\downarrow,.. - 1/2 ..,\uparrow\downarrow,\downarrow\)

For 2e 2w system:
E=4eV  with eigenvector:
=\(\mathbf{e}= \frac{1}{\sqrt{2}}[-(\uparrow\downarrow,..)+(..,\uparrow\downarrow)] \)
=\(.707[-(\uparrow\downarrow,..)+(..,\uparrow\downarrow)]\)
*********************************************************************************
If you are interested in more states than the ones I already provided, you can read the next part of the post or explore them yourself using the basis and the results of the eigenvalue/eigenvector calculator at the end of the post.

Layout and Comparison of 4 lowest energy eigenstates
E=-1.2 eV

\(\mathbf{e}= 0.365 \uparrow, \downarrow, \downarrow - 0.730 \downarrow, \uparrow, \downarrow +0.365 \downarrow,\downarrow,\uparrow \)
\(-0.088 \uparrow \downarrow, \downarrow, .. - 0.228 \uparrow \downarrow, .. , \downarrow
+0.228 \downarrow, \uparrow \downarrow, .. -0.211 ..,\uparrow\downarrow,\downarrow
+0.211 \downarrow,..,\uparrow\downarrow \)

First note the high amplitude on the state without coulomb repulsion, \(-0.730\downarrow, \uparrow, \downarrow\), which is connected via one hop to four coulomb repulsion states: \(- 0.228 \uparrow \downarrow, .. , \downarrow\), \(+0.228 \downarrow, \uparrow \downarrow, ..\), \( -0.211 ..,\uparrow\downarrow,\downarrow\) and \(+0.211 \downarrow,..,\uparrow\downarrow\). Now note that \(0.365 \uparrow, \downarrow, \downarrow\) and \(0.365 \downarrow,\downarrow,\uparrow\) also have considerable amplitude, and they have their corresponding one-hop coulomb repulsion pairs \(-0.228 \uparrow \downarrow, .. , \downarrow\), \(-0.211 ..,\uparrow\downarrow,\downarrow\) and \(0.228 \downarrow,\uparrow\downarrow,..\), \( 0.211\downarrow,..,\uparrow\downarrow \) respectively. These observations suggest that the state \(\downarrow, \uparrow, \downarrow\) is special and has the highest amplitude because of its ability to tunnel to four states. The overall form of the eigenvector allows tunneling between many states, even ones with coulomb repulsion, since the decrease in kinetic energy via reduction of confinement counteracts the coulomb repulsion. Ultimately, note the very small amplitude state \(-0.088 \uparrow \downarrow, \downarrow,..\), which is only connected via one hop to \( \downarrow, \uparrow \downarrow,.. \). This last state will be discussed further under the next lowest state.

E=-1.162eV

\(\mathbf{e}= -0.369 \uparrow,\uparrow,\downarrow + 0.738 \uparrow, \downarrow, \uparrow -0.369 \downarrow, \uparrow, \uparrow\)
\( - 0.214 \uparrow\downarrow,..,\uparrow + 0.214 \uparrow,\uparrow\downarrow,.. - 0.214 .., \uparrow\downarrow, \uparrow+ 0.214 \uparrow,..,\uparrow\downarrow \)

Note that this eigenvector is also composed of a high amplitude on the state without coulomb repulsion, \( 0.738 \uparrow, \downarrow, \uparrow\), which is connected by one hop to the four states \(- 0.214 \uparrow\downarrow,..,\uparrow\), \(0.214 \uparrow,\uparrow\downarrow,..\), \(- 0.214 .., \uparrow\downarrow, \uparrow\) and \(+ 0.214 \uparrow,..,\uparrow\downarrow\). The eigenvector also has \(-0.369 \uparrow,\uparrow,\downarrow\) and \(-0.369 \downarrow, \uparrow, \uparrow \) with their one-hop doubly occupied partners \(0.214 \uparrow,\uparrow\downarrow,..\), \(0.214 \uparrow,..,\uparrow\downarrow\) and \(- 0.214 \uparrow\downarrow,..,\uparrow\), \(- 0.214 .., \uparrow\downarrow, \uparrow\) respectively. This is strikingly similar to the lowest energy eigenstate but with up and down reversed. These tunneling relationships seem to be the key to what makes these first two states have so much lower energy! (Note: it turns out that including a state that has access to four electron states greatly decreases the energy.)

Another important note is that the lowest energy state had a small \(-0.088 \uparrow \downarrow, \downarrow,..\) state which seems to be the only striking difference between the two lowest energy states (everything else is the same if you reverse up and down directions). It seems to be the only explanation why the previous state has lower energy than this one. It allows the tunneling to a further state that must decrease the degree of confinement and hence the kinetic energy of the system.

E=-0.494eV

\(\mathbf{e}= 0.664 \uparrow,\uparrow,\downarrow -0.664 \downarrow,\uparrow,\uparrow\)
\(- 0.073\uparrow\downarrow,\uparrow,..-0.164 \uparrow\downarrow,.., \uparrow-0.164 \uparrow,\uparrow\downarrow,..-0.164..,\uparrow\downarrow, \uparrow- 0.164 \uparrow,..,\uparrow\downarrow+0.073 ..,\uparrow,\uparrow\downarrow \)

First note that this energy is higher than the previous one by 0.668 eV, while the two eigenstates discussed earlier differ by only 0.038 eV (only around 6% of 0.668 eV). The first question to ask is why? So let's look at the differences.

In contrast to the states presented previously, it does not include a high amplitude state that can tunnel to four different states via one hop. Also, it looks a lot like the previous eigenvector if one removed the contribution of the high amplitude \( \uparrow, \downarrow, \uparrow \) state and then renormalized accordingly. It can then be inferred that excluding the state that has access to four electron states increases the energy.

More specifically, the eigenvector includes two high amplitude states, \( 0.664 \uparrow,\uparrow,\downarrow\) and \(-0.664 \downarrow,\uparrow,\uparrow\), each of which can tunnel in one step to only two states: \(0.164\uparrow,..,\uparrow\downarrow \), \(0.164 \uparrow,\uparrow\downarrow,..\) and \(-0.164 \uparrow\downarrow,.., \uparrow\), \(-0.164..,\uparrow\downarrow, \uparrow \) respectively.

Finally, the tiny amplitude contributions \(0.073 ..,\uparrow,\uparrow\downarrow\) and \(- 0.073\uparrow\downarrow,\uparrow,..\) seem to be similar to the small amplitude state in the lowest energy eigenvector. They reduce the energy by allowing further tunneling even though they include coulomb repulsion!

E=-0.472eV

\( \mathbf{e}=0.669 \uparrow,\downarrow,\downarrow -0.669 \downarrow,\downarrow,\uparrow \)
\(-0.150 \uparrow\downarrow,..,\downarrow -0.150 \downarrow, \uparrow\downarrow,.. -0.166 .., \uparrow\downarrow,\downarrow -0.166 \downarrow,..,\uparrow\downarrow-0.074 .., \downarrow, \uparrow\downarrow \)

The eigenstate above is related to the previous one just as the lowest energy eigenstate is related to the next lowest. It would be the same as the previous one if you switched the orientations of down and up (the amplitudes are also similar); however, it differs in the small amplitude states. This state has only one small amplitude state that includes coulomb repulsion, \(-0.074 .., \downarrow, \uparrow\downarrow\), in contrast with the two states \(0.073 ..,\uparrow,\uparrow\downarrow\) and \(- 0.073\uparrow\downarrow,\uparrow,..\) we had before. I think that the 0.022 eV change in energy is solely because the two small amplitude states lower the kinetic energy and avoid coulomb repulsion more efficiently than the single small amplitude state.

The last two eigenstates seem to achieve their low energies using the same principle as the first two, tunneling. I think that they do a lousy job compared to the first two eigenstates, but they still get to have energy lower than \(E_0 = 0\) eV.
Available info and solutions:
This is the basis used:

Important link!

This talk is very interesting, I think, and involves a number of things we have covered to some extent  including localization involving disorder and spontaneous symmetry breaking. It also has a nice introduction to "More is Different", as well as a discussion of non-equilibrium quantum stat mech. This person may be here as a new professor next year. (I think both the intro and then the part that starts about 1/3 of the way through might be most interesting and relevant to us.)
Please comment.
http://ic.ucsc.edu/videoarchive/candidate/rahul-nandkishore.mov

Wednesday, March 11, 2015

Thoughts on our discussion of the "Theory of Everything" article

I've been thinking more about our discussion today in class, and I think I have a better idea of why I don't like the concept proposed in the paper. The "Theory of Everything" paper suggests that, at a fundamental level, we cannot derive the laws/theories of some higher level processes with the theories of lower ones (as the example I'll stick with: that high temperature superconductivity might be fundamentally unexplainable with Schrodinger's equation and Maxwell's equations).

We talked about not liking it because we want all of the math to line up; that the math and equations for, say, high temperature superconductivity, should ultimately line up with Maxwell's equations and Schrodinger's equation. But after thinking about it, I'm not sure this is the real problem I (at least) have with it. The real problem, I think, is causation.

The reason we think high-temp superconductivity's (HTS) equations should relate back to Maxwell/Schrodinger equations is that we think HTS should be caused by those more fundamental theories. And as we discussed, if that's not the case, that should just mean Maxwell and Schrodinger aren't the complete story, and we need a more general fundamental theory that will allow us to describe HTS from fundamental principles. What this paper describes, that fundamental principles simply cannot be used to describe higher level ones, seems to me to imply that higher level processes are not caused by lower level ones. If a process is caused by something, we should be able to trace that causation back, or start from fundamentals and show how the fundamentals cause the higher level behavior. My (potentially flawed) idea here is that if one thing causes another, we should, in one way or another (even if it requires something more advanced than our current tool of math), be able to trace that causation through theory.

If that is a correct assumption, that would mean that if HTS cannot be traced back to fundamental theories, that it must not have a cause. It would be a completely spontaneous process that occurs for absolutely no reason. It would mean that if you were to analyze HTS and perform the classic child analysis of continuously asking "why", that if you kept asking why some part of HTS happens the way it does, then you would eventually run into a dead end. Which flies in the face of everything (I think) being a physicist is about. We never let go of that child-like behavior, and we always strive towards some fundamental "why" of the universe. Which is why I'm so unwilling to accept the paper's premise. As far as I know, the one and only truly spontaneous process physicists are willing to accept (and only begrudgingly) is the big bang. From then on, I would think that all things must be caused by something else, and as such that all things can be traced back to more fundamental equations and theories.

I'm really curious about everybody else's opinion on this, and whether or not I might be off base.

Sunday, March 8, 2015

Superconductivity.

This will be our last new topic for this quarter.  I haven't thought of much to say yet, but I highlighted a paper on the BCS theory which is linked here.
https://drive.google.com/file/d/0B_GIlXrjJVn4SU5wQnd1MU5BSFU/view?usp=sharing
  Please feel free to comment, question, discuss, etc.

I think that one important thing to be aware of is that it is not so much the phonon aspect that is critical (there are other "mediating bosons"* that can cause pairing and lead to superconductivity). Rather, it is the susceptibility of normal state electrons to pairing due to the existence of a Fermi surface and Fermi-Dirac statistics. This is the rather startling and unexpected thing. Once the electrons pair, one can view the pairs as bosons and the superconductivity as a Bose condensation (into a charged super-fluid state), which is a non-trivial thing as well because it involves quantum phase coherence (whatever that means).

Also, here is rough introduction to some of the theory from http://www.superconductors.org/oxtheory.htm: Electrical resistance in metals arises because electrons propagating through the solid are scattered due to deviations from perfect translational symmetry. These are produced either by impurities (giving rise to a temperature independent contribution to the resistance) or the phonons - lattice vibrations - in a solid. In a superconductor below its transition temperature Tc, there is no resistance because these scattering mechanisms are unable to impede the motion of the current carriers. The current is carried in all known classes of superconductor by pairs of electrons known as Cooper pairs. The mechanism by which two negatively charged electrons are bound together is still controversial in "modern" superconducting systems such as the copper oxides or alkali metal fullerides, but well understood in conventional superconductors such as aluminium in terms of the mathematically complex BCS (Bardeen Cooper Schrieffer) theory. The essential point is that below Tc the binding energy of a pair of electrons causes the opening of a gap in the energy spectrum at Ef (the Fermi energy - the highest occupied level in a solid), separating the pair states from the "normal" single electron states. The size of a Cooper pair is given by the coherence length which is typically 1000Å (though it can be as small as 30Å in the copper oxides). The space occupied by one pair contains many other pairs, and there is thus a complex interdependence of the occupancy of the pair states. There is then insufficient thermal energy to scatter the pairs, as reversing the direction of travel of one electron in the pair requires the destruction of the pair and many other pairs due to the nature of the many-electron BCS wavefunction. The pairs thus carry current unimpeded…

Wednesday, March 4, 2015

How do you feel about matrices?

Today while we were calculating some things some people raised some bona fide questions about the matrix approach. I don't want to sweep that under the rug. Let's discuss here how you feel about matrix formulation of quantum physics (for electrons in a lattice). Or anything else that is on your mind.

Physics 155 homework. due Friday.

1a. Write all eigenvectors and their eigenvalues to linear order in t/U (assuming t/U < 1). (You can set \(E_o\) to zero if you want. I don't think it matters; that is, I don't think it influences any eigenvectors, and I think it is just added to each eigenvalue (energy).)

1b. For each eigenvector, present your understanding of its nature. To what physical situation does it correspond? Relate that to its energy.

2. Comment on the post "How do you feel about matrices"

3. What else would you like me to ask you? Any suggestions?

4. Extra credit: Is there a possible relationship between states we have obtained for the 2 electron system and anti-ferromagnetism?

Matrix multiplication problem.

The ability to readily visualize a matrix-vector multiplication is of great value in physics. Here is a little problem to test how comfortable you are with matrix-vector multiplication and using that to find eigenvectors and their eigenvalues. Give it a try. (Don't look at anything after the break unless you are stuck.)

Using just guessing and matrix-vector multiplication, find one or more eigenvectors of this matrix:
$$ \begin{matrix} 1 & 1 &1 &1 \\ 1 & 1 & 1 &1 \\  1 & 1 & 1 &1 \\ 1 & 1 & 1 &1 \\ \end{matrix} $$

As you may know, the sum of the eigenvalues is equal to the trace of the matrix. Is there one eigenvector that "exhausts the trace"?
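If you want to check a guess numerically without giving anything away, a small sketch: multiply the candidate vector by the matrix and test whether the result is a scalar multiple of it (that scalar being the eigenvalue).

import numpy as np

M = np.ones((4, 4))        # the all-ones matrix from the problem above

def check_guess(v):
    # Return the eigenvalue if v is an eigenvector of M, otherwise None.
    v = np.asarray(v, dtype=float)
    w = M @ v
    nz = v != 0
    if not np.any(nz):
        return None
    lam = w[nz][0] / v[nz][0]
    return lam if np.allclose(w, lam * v) else None

print(check_guess([1, 2, 3, 4]))   # not an eigenvector, so this prints None; try your own guesses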

There is new info

in the post showing results from numerical matrix calculations below. (Now called localization length vs disorder.) Please check that out and comment when you have time.

Tuesday, March 3, 2015

Physics 155 Outline

Here is an outline of our topics and goals for this quarter. Some parts have been changed to be how we should have done it, or how we would like to see it now, in retrospect.

1. Lattice vibrations in spatially periodic systems.  We used a mass-spring type approach to look at the vibrational eigenmodes of a 1-dimensional chain of nuclei. This didn't really fit in with anything else, and it turned out to be confusing, so it is not a point of emphasis.

2. Electron states in a spatially periodic system. For a lattice of atoms, we showed that each atom bound state evolves into a band of N crystal states. This association between atom states and crystal bands is a critical starting point for solid state physics.  With "projection techniques" (appendix A) and approximations, we showed how band state energies depend on an atom state energy, an overlap integral, \(I_{11}\), and a crystal quantum number q (or k).

3. Bands arising from a 3 bound state square well. We looked explicitly at the bands that form for a lattice composed of square wells. We saw that bandwidth depends critically (exponentially) on how far the atomic state wave-function extends outside the well and that the bandwidth of the highest energy band is largest because its atomic state extends furthest.

4. Conductivity of a metal. We discussed how conductivity can be described, for a half-filled band at T=0, as a shifting of occupation boundaries in k-space leading to an imbalance of electron velocities and thus a net current.

5. Specific heat of a metal. We calculated the specific heat of a metal at low T, \(C_v = dE/dT\), using an integral expression for E and taking the derivative of the Fermi function inside the integral and making approximations.

6. Spontaneous magnetization. We created a simplified model for electron-electron interaction and showed that, in the case of a narrow bandwidth, e-e interaction can lead to spontaneous magnetization below a critical temperature, \(T_c\). The driving force for this phase transition and instability is the electron-electron repulsion, which is diminished when electrons spins align. The e-e interaction has to compete with both the band energy and entropy, both of which favor a state with no spin alignment, known as the normal state.

7. Magnetic susceptibility. Using a free energy minimization approach, we showed that magnetic susceptibility is enhanced by e-e interaction even when that interaction is too weak to establish spontaneous magnetization.

8. Matrix approach to band theory. The states of a single band can be obtained by expressing the quantum Hamiltonian as a Hermitian matrix with \( E_o\) on the diagonal and \( I_{11}\) on the off-diagonal. This matrix is written in the "local basis", however, its eigenstates are Bloch states and its eigenvalues are the Bloch state energies that we got before (Appendix A). The matrix approach allows the introduction of disorder and other variation into the crystal.

9. Matrix approach generalized to include electron-electron repulsion. Bloch states extend throughout the crystal and are very useful, but they are basically one-electron states. Sometimes that is not good enough. For example, anti-ferromagnetism and superconductivity cannot be explained in any way using one-electron states. We introduce a model for 2 electrons in a 2 site lattice that includes on-site electron-electron repulsion, U.

10. The nature of superconductivity. ... I get the impression that people really want to learn about superconductivity, even though it might be difficult. Is that true?

11. "More is Different"  "The Theory of Everything" (see attached papers)

Appendix A: Electron eigenstates can be written in "Bloch form", i.e., as a sum involving the particular atomic state with a shifted peak position and systematically varying "phase factor" coefficients. These states are completely itinerant, i.e, the probability density is the same at every site in the crystal. The Bloch form is:
\(\psi_k (x) = \Sigma_n e^{inak} \psi_o (x-na)\).
The projection involves multiplying the expression \( E_k \psi_k (x) = H \psi_k (x)\) on the left by \(\psi_o(x)\) and integrating to get \(E_k = E_o + 2 I_{11} \cos(ak)\), where \( I_{11}\) is a nearest neighbor overlap integral with units of energy.
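As a numerical illustration of item 8 and Appendix A (a sketch with arbitrary values of \(E_o\), \(I_{11}\) and N, and with periodic boundary conditions so that the Bloch result is exact):

import numpy as np

N, a = 100, 1.0          # number of sites and lattice spacing (arbitrary)
E0, I11 = -5.0, -1.0     # eV, illustrative values

# Hamiltonian in the local basis: E0 on the diagonal, I11 between neighbors,
# with periodic boundary conditions.
H = E0 * np.eye(N) + I11 * (np.eye(N, k=1) + np.eye(N, k=-1))
H[0, -1] = H[-1, 0] = I11

numerical = np.sort(np.linalg.eigvalsh(H))

k = 2 * np.pi * np.arange(N) / (N * a)            # allowed crystal momenta
bloch = np.sort(E0 + 2 * I11 * np.cos(a * k))     # Appendix A result

print(np.max(np.abs(numerical - bloch)))          # agrees to machine precision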

Two electrons, two wells matrix formulation, Updated with example !!!

https://drive.google.com/file/d/0BwYBTR2Eeem-ck52cTNoU0QtTHM/view?usp=sharing

Monday, March 2, 2015

Eigenvalue and eigenvectors for the 2 electron matrix.

Please post your questions, comments and results related to the 6x6 matrix describing 2 electrons and two sites. Feel free to edit the matrix in here, as well as some eigenvalues and their eigenvectors.

The best initial focus for this is, I believe, to complete the eigenvalue spectrum; that is to find all 6 eigenvalues and to think about their arrangement as a function of energy from lowest to highest. To get an explicit result from which we can begin to understand the essential physics, let's look for a result valid to 1st order in I/U, where I/U is less than one, and also the specific case I = 1 eV and U = 4 eV.

For each eigenvector there is a story. Why does it have the energy it has? How does it acquire that; what is the nature of its eigenvector? All of the eigenvectors have a story. Some of them have really interesting stories. Find them and tell their stories here. 

PS. To give you a starting point to think (and dream and wonder) about this, I believe eigenvectors think mostly about one thing: energy. Some of them are pretty vain and they are focused on their own eigenvalue. (To be fair, some of them think also about relationships, e.g., orthogonality.)

Localization Length vs Disorder

I've done some more studies with time evolution, and placed the results in a publicly accessible google drive directory linked below. In the "disorder" folder is a collection of plots that show the time evolution of the one electron system across a range of "disorder" values (the potential is defined as E_0 + R, where R is a random number from a gaussian distribution with mean of zero and standard deviation of "disorder"). In the moving_I/2 folder are a collection of similar plots, but now with a sloping potential defined as E_0 + R + well*(I/2).

Here is an interesting plot. This shows something we could call the localization length, as a function of disorder. It is calculated with zero sloping potential but with site disorder as outlined above. The horizontal axis is disorder. The quantity on the vertical axis is the large-time asymptotic value of the width of the wave-function. That is, as a function of time, the wave-function gets wider for a while but then it stops getting wider and settles on a particular width, hence we can call that a localization length. (For zero disorder, that would be infinite because it never stops getting wider no matter how long you go in time.)
I am not sure actually how this is defined, since a wave-function width less than 1 is a little puzzling. Maybe there is a normalization issue or something? Another point: to get really accurate values at low disorder, one probably needs to let the wave function evolve for a longer time. I wonder if there is a threshold level of disorder at which this quantity changes from being infinite (no localization) to finite?
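For what it's worth, one natural definition of the width being plotted (a sketch; the actual script may define it differently, which could bear on the puzzling widths below 1) is the root-mean-square spread of \(|\psi_n|^2\) over sites, with the localization length taken as its late-time value:

import numpy as np

def width(psi):
    # RMS spread, in units of the lattice spacing, of the probability
    # distribution |psi_n|^2 over the sites.
    p = np.abs(psi) ** 2
    p = p / p.sum()                      # guard against normalization drift
    n = np.arange(len(psi))
    mean = np.sum(n * p)
    return np.sqrt(np.sum((n - mean) ** 2 * p))

def localization_length(psi_of_t, last=100):
    # Average the width over the last `last` stored time steps.
    return np.mean([width(psi) for psi in psi_of_t[-last:]])

psi0 = np.zeros(1000, complex)
psi0[500] = 1.0
print(width(psi0))                       # 0.0 for a state on a single site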

Friday, February 27, 2015

Matrix for 2 interacting electrons.

I have had second thoughts about discarding the two states that had spin 1. In retrospect, I think it would be better for illustrating and understanding the essential physics if we kept them and worked with a 6x6 matrix. Over the weekend please put in the off-diagonal matrix elements, representing coupling between states connected by a single "hop".  Come to class Monday with your own 6x6 Matrix made of \(E_1\), U and I.

Thursday, February 26, 2015

Special homework project.

It would be really helpful for me to get an idea of your thoughts and feelings on what we have covered so far. For each section, would you discuss:
how much you liked it,
what parts you understood or did not understand so well,
the meaning and significance of each part to you,
any connections you see with other topics in this class (or elsewhere), and anything else you might like to add.

If I remember correctly, the things we have covered so far could be briefly described as:
1) Bloch states for a spatially periodic 1D crystal (starting with the Schrodinger equation)
2) bands which form from a 3 bound state square well atom arranged into a 1D crystal,
3) electron specific heat of a metal,
4) magnetic susceptibility of a metal,
5) influence of e-e interaction on magnetic susceptibility,
6) origin of ferromagnetic instability, ferromagnetism as a spontaneous spin alignment driven by e-e interaction,
7) Description of a single band with only nearest neighbor overlap (I) (for a 1D crystal) using a matrix vector form of the Schrodinger equation,
8) Introduction of disorder into the matrix formulation.

You can send me your thoughts by email. It is fine, encouraged even, to do a quick response and then follow up with a more nuanced, detailed response when you have time.  Thanks. 
Zack

PS. Please feel free to ask questions here or by email.

Tuesday, February 24, 2015

Developed Time Step Simulation

Here is a more developed form of the time step simulation. I've included the actual python script used to run it at the bottom of this post (if anybody knows a better way to post it let me know). Note that it is based on python 2, and requires the NumPy and Gnuplot python libraries to run. I've tried to document it well, but let me know if you have any questions.

Below are a few examples of what it can do. The first is a simulation of a perfect set of 1000 wells. The second is a simulation with the exact same parameters, but now with random gaussian disorder* of I/2 throughout, where I = 1 eV. The third has random gaussian disorder of I/3, and (as was requested by Mark) a linear adjustment to the wells, representing something like an electric field. Note that this last one also includes a new feature of a title with the disorder level and a countdown timer.

*disorder means the standard deviation of the gaussian distribution
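Since the script is attached at the bottom of the post rather than quoted here, here is a rough sketch of the setup described above (1000 wells, Gaussian disorder on the diagonal, and an optional linear term standing in for the field); the parameter values are illustrative, with the slope of 0.5 matching the well*(I/2) ramp mentioned in the localization-length post above.

import numpy as np

def build_hamiltonian(n=1000, I=1.0, disorder=0.0, slope=0.0, seed=0):
    # Tight-binding matrix: E0 plus Gaussian disorder (std = `disorder`)
    # plus a linear ramp slope*site on the diagonal, I on the off-diagonals.
    rng = np.random.default_rng(seed)
    E0 = 0.0
    diag = E0 + rng.normal(0.0, disorder, n) + slope * np.arange(n)
    H = np.diag(diag)
    H += I * (np.eye(n, k=1) + np.eye(n, k=-1))
    return H

H_perfect  = build_hamiltonian(disorder=0.0)              # "Perfect Wells"
H_disorder = build_hamiltonian(disorder=0.5)              # disorder of I/2
H_field    = build_hamiltonian(disorder=1/3, slope=0.5)   # I/3 disorder plus a linear ramp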


Perfect Wells




Disorder I/2



Disorder I/3 and linear dependence
 




Sunday, February 22, 2015

Special Guest, reading assignment.

On Wednesday we will have a special guest. Any questions that you have about the fundamental physics of electrons in crystals may be discussed on Wednesday. (Also, if you have a chance to  read the paper "More is different" (you can google it) before Wednesday, that would be good.)

Time Evolution Preliminary results

I've got a workable form of the time evolution simulation. Below are the 40 frame gif results of two different trials. The first was generated using 2,000 time steps of hbar/I each, and the second was generated using 200,000 time steps of hbar/(100*I) each. In both cases I was set to 1 eV, the initial wave was localized to well 500, and the initial matrix and vector were of size 1000. I'm thinking I must have made a mistake in how I treated the second one though, as it looks drastically different from the first. My thought was that if I reduce the time interval by a factor of 100, then I should increase the number of intervals by the same factor to cover the same total time. Is that idea incorrect?
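The idea itself seems right: cutting the step by 100 and taking 100 times as many steps covers the same total time, and with an exact (unitary) propagator the two runs would agree. Whether they agree in practice depends on the stepping scheme; a simple first-order update, for instance, is not unitary, and its norm error accumulates differently for different step sizes. Here is a sketch of that comparison on a small perfect lattice (a guess at the kind of scheme involved, not a diagnosis of the actual script):

import numpy as np
from scipy.linalg import expm

hbar, I, n = 1.0, 1.0, 100                   # small lattice so the check runs fast
H = I * (np.eye(n, k=1) + np.eye(n, k=-1))   # perfect lattice, E0 = 0

psi0 = np.zeros(n, complex)
psi0[n // 2] = 1.0                           # localized start (like well 500 of 1000)

def euler(psi, dt, steps):
    # First-order stepping psi <- psi - (i*dt/hbar) H psi (not unitary).
    for _ in range(steps):
        psi = psi - 1j * dt / hbar * (H @ psi)
    return psi

T = 20.0                                     # total time in units of hbar/I
exact = expm(-1j * H * T / hbar) @ psi0      # unitary reference evolution

for dt in (0.1, 0.001):
    psi = euler(psi0.copy(), dt, round(T / dt))
    print(dt, np.linalg.norm(psi),           # norm drift away from 1
          np.linalg.norm(psi - exact))       # error relative to the exact result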




Thursday, February 19, 2015

Sunday, February 15, 2015

Special Projects (additional items added)

A. Calculate the magnetic susceptibility, \(\chi\), and specific heat of a metal in the normal state for a given bandwidth and several values of e-e interaction strength. Make them both intensive quantities. Figure out the units of susceptibility (this is an E&M problem. I think that perhaps magnetization (m) and the thing called H in E&M have the same units? Is that wrong?). Graph them as a function of T from T=0 to 100 K. Show how you can use specific heat measurement to infer both the bandwidth (or density of states at the Fermi level) and the strength of the e-e interaction in a non-magnetic (aka paramagnetic, aka normal) metal. Make up a problem that has about 20 simulated data points for \(\chi\) and \( C_v\) and where the student is asked to fit the data points and infer bandwidth and e-e interaction strength.
      Suggestion: Start with \(e^2/a =0\). Then work up to 0.5 eV, 1 eV, 1.5 eV.  Use a bandwidth of 2 or 3 eV. What is the essential nature of \(\chi\) vs T in a reasonable range of T?

B. Consider a degenerate "Fermi liquid" at low temperature. Specifically, consider a half-filled band with upward curvature. (You can do downward curvature as well as you like.) Explain how an applied electric field shifts the occupation boundaries and leads to a modest number of electrons (much less than N) carrying current. Explain what fraction of electrons are responsible for the net current and what their typical speed is. Get an estimate of the conductivity. You can assume that the size of the boundary shift is very small compared to \(\pi/a\). That assumption can be used to greatly simplify the integral over q and to simplify your final result for the conductivity. (There is no benefit, and significant cost, to not making that assumption!)
     Then redo the whole thing for a 1/8 or a 1/4 filled band using the approximation that E is quadratic in k and with an effective mass in the denominator (i.e., \( E_q = \hbar^2 q^2/(2 m_o m^*)\)). Try to get a simple expression for the conductivity. Use n = N/V where appropriate. Conductivity should be defined to be intensive.

C. Use energy eigenstate formulation to calculate the time dependent spreading of a wave function expressed as a wave packet. Start with a free electron in a gaussian initial state. Then do this also for an electron in a crystal using the energy eigenstates associated with a single band of crystal energy eigenstates. What range of crystal eigenstates does it take to localize an electron to one lattice site?  What range of energy eigenstates does it take to localize an electron to a finite, but larger region, say 100 lattice sites. For the latter problem you could use a Gaussian envelope function to define your initial t=0 wave function.
      (For this project I think you will need an atomic wave-function to work with. My suggestion would be a gaussian, due to the absence of a singularity at the origin. The delta function ground state might also be a possibility. Actual square well states will be difficult to work with because of the piece-wise nature. I think that the results are not much affected by this choice, as long as the atomic state is exponentially localized.)

D. Use a matrix formulation to calculate the time dependence (spreading) of a wave function representing one electron in a spatially periodic perfect lattice. (Do this numerically, say for a 1000 site lattice.) After doing this for a perfect lattice, try doing it for a model in which the off-diagonal elements are all the same (as before), but the diagonal elements have some degree of random disorder with respect to a central mean value (a mean value the same as you used in the previous part).
 
E. Consider a lattice with only two sites and two electrons. Each electron can be localized on either site in a single-electron eigenstate in which it has an energy of -8 eV (or call it \(E_o\)). If the electrons are on the same site then there is a coulomb energy of +2 eV. If they are not on the same site the coulomb energy is zero. Including spin, enumerate all possible states of this system and what the energy of each state is. Draw simple pictures to illustrate.
        Do the same thing for 3 electrons on 3 sites, and then perhaps for 4 electrons on 4 sites.

F. Investigate and discuss the nature and origin of anti-ferromagnetism.  What is a half-filled Hubbard band? What does it have to do with magnetism? ... ....  (perhaps we can flesh this out more. Also, I think this is related to E.)

G. Understanding superconductivity: The key thing in superconductivity is not so much the specific nature of the interaction that leads to pairing (which can vary), but rather the tendency of electrons at a Fermi surface to be highly susceptible to pairing. The Fermion nature of electrons thus plays a critical role. In 3 dimensions a Fermi surface could be a sphere in k-space inside which all states are occupied (by electrons) and outside which all states are empty. Cooper showed that electrons at the Fermi surface could be unstable with respect to pairing, forming pairs of Fermions which then act as bosons. These boson pairs then develop a phase coherence that leads to a superfluidity known as superconductivity. As a project one could study Cooper pair formation, phase coherence and superfluidity, or some other aspect of superconductivity.

H. Calculations of basic quantities in the effective mass approximation. ... more to follow

Feel free to ask questions and suggest other projects. Allow yourself enough time to understand the problem etc… There may be flaws in these problems that will take time and discussion to work out. (or maybe not?)

Wednesday, February 11, 2015

Today's Class.

I hope you have all the sign issues sorted out and are getting some interesting results for \(\chi (T)\) in the normal state, and \( m(T)\) below the temperature at which spontaneous magnetization emerges, \(T_c\). Your results should take the form of graphs, with very well-written figure captions. Additionally, there should be a discussion, but make the figures, the graphs, central to the reader's experience. I can't make it to class today, so you can turn those in to my mailbox anytime tomorrow if you like.

Tuesday, February 3, 2015

Helmholtz free energy *first draft

\( F=U-TS \) is that weird formula that we wanted to minimize. But why?

Note: This is my first attempt; I am not sure if it's valid to imagine a system being built at constant temperature T.

To answer why, one must understand what U and TS are in this expression and the conditions our little crystal is in. It is in a huge temperature bath at T and our crystal has constant volume so no work can be done on or by it. Hence the only interaction with the outside system is thermal.

In general U in the Helmholtz free energy equation is just the internal energy of the system; I think we made an approximation in class, as we plugged in U at T = 0 K.

If you want to be persuaded that in class the U we used is the energy of the system at T=0K read the following:

*Explanation: The above holds since we did not use any Fermi-Dirac statistics to calculate U; instead we plugged in f(T) = 1 in \( U= \int E D(E) f(T,E)\,dE \). In \(f(T,E)=\frac{1}{e^{(E-\mu)/kT}+1}\), the Fermi energy is by definition greater than the energy of any occupied state, making the exponent negative, hence \(\lim_{T\to 0}f(T,E)=1 \).

How can TS be interpreted?

First, \(dQ = T\,dS\) by definition; hence if a system is at constant temperature T and you raise its entropy from 0 to S you get \( \int_0^S T\,dS = TS \).

Therefore TS can be thought of as the total heat added to a system at temperature T as its entropy is raised from 0 to S.

Now we combine these two quantities to get \(F=U-TS \)

Minimizing this quantity will allow us to have a minimum energy U internally while extracting as much heat from the system as we can!

I think that I need to think more and maybe redo all of it


Special Projects.

Seems like people are interested in learning more about how Fermi statistics affect the electron-electron interaction and why we minimize the Helmholtz energy, F = E - TS, to find the equilibrium state of a system coupled to a heat bath (that is, at a well-defined temperature T).

For the latter issue, if someone(s) want to look into that and report back, I think people would be interested. I think that it is possible to connect that directly to the 2nd law of thermo, the one about entropy increasing. The Helmholtz energy plays a big role in a lot of physics, so it is worth understanding.

For the e-e interaction issue, perhaps the following calculation might be helpful. Consider a square well from x = 0 to 1 nm. Suppose there are two electrons in the well and that the occupied states are sin(10 pi x) and sin(11 pi x), for example. I think that you can make a symmetric combination of those or an anti-symmetric combination. For those two cases, calculate the expectation value of \(|x_1-x_2|\). See if they are substantially different. Perhaps this will tell us if there are correlations built into these states and if electrons avoid each other more in one than the other? Does this make sense? Feel free to ask questions about it. Here is a normalization integral to get things started. (I could not get W-A to do the integral when the \(|x_1-x_2| = |x-y| \) term was added.)

Integrate [2 sin^2(9 pi x) 2 sin^2(8 pi y)] from x=0 to 1, y=0 to 1
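Since W-A balks at the \(|x_1-x_2|\) term, here is a numerical sketch of the suggested calculation, using the sin(10 pi x) and sin(11 pi x) pair from above (swap in other quantum numbers, e.g. 8 and 9, as desired); x is in nm.

import numpy as np
from scipy.integrate import dblquad

a, b = 10, 11                                            # the two occupied states
phi = lambda n, x: np.sqrt(2) * np.sin(n * np.pi * x)    # normalized on [0, 1]

def mean_separation(sign):
    # <|x1 - x2|> for the symmetric (+1) or antisymmetric (-1) spatial combination.
    def integrand(x2, x1):
        psi = (phi(a, x1) * phi(b, x2) + sign * phi(b, x1) * phi(a, x2)) / np.sqrt(2)
        return abs(x1 - x2) * psi ** 2
    value, _ = dblquad(integrand, 0, 1, 0, 1)
    return value

print("symmetric:    ", mean_separation(+1))
print("antisymmetric:", mean_separation(-1))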


Monday, February 2, 2015

Midterm Question



This looks great. Let's make this a 3-part problem.
part A: Calculate the spontaneous magnetization as a function of temperature for \(e^2/a = 1.5\; eV\) and \(E_{band}\) = 1.4 eV. (This part is for zero applied magnetic field.) For this part create a graph of magnetization vs T. Define a \(T_c\) and look at the behavior in the vicinity of that. The part in the square root is a lot more important than the 1/T outside it. What is the critical exponent?

Part B: Calculate the magnetic susceptibility in the normal state, as Arjun has explained.
For both parts, the most important thing is your graph. That is what I will look at first. Do a really nice graph, hand drawn, not too large, with a nice title, labels and scales, and with an excellent caption that explains everything in a succinct and beautiful manner. 

Part C. (added Feb 6) Calculate and plot the magnetic susceptibility as a function of T for \(e^2/a = 1.5\; eV\) and \(E_{band}\) = 1.7 eV.
===========================================

I decided to go back to my entropy calculation after something Zack mentioned, and I found a sign error in my arithmetic. \( C_0 \) remains the same, but the expression for the polynomial up to order \( O(x^m) \) is:
\( \sum_{k \in \text{evens} }^m 2^k \left[ \frac{1}{(k-1)N^{k-1} } - \frac{(1-N)}{kN^k} \right] \left( N_\uparrow - \frac{N}{2} \right)^k \)

Alternatively written as
\( \sum_{k \in \text{evens} }^m 2^k \left[ \frac{N - (k-1)}{k(k-1)N^k} \right] \left( N_\uparrow - \frac{N}{2} \right)^k \)

The first two terms (to order \( O(x^6) \) ) are:
\( \frac{N-1}{2N^2} 2^2(N_\uparrow -\frac{N}{2})^2 + \frac{N-3}{12N^4} 2^4(N_\uparrow -\frac{N}{2})^4 \)

And this is definitely 100% correct. I checked it against Mathematica and everything. This is a Taylor expansion from
\( 0.5 \ln (N^2 - N_\Delta ^2 ) + N_\Delta \tanh ^{-1} \left( \frac{N_\Delta }{N} \right) \quad \text{where} \quad N_\Delta = N_\uparrow - N_\downarrow \)

which is itself part of the Stirling Approximation for
\( \ln \left( \frac{N!}{N_\uparrow ! N_\downarrow ! } \right) \)
-Aaron


EDIT: Added the revised expression that Aaron derived for the entropy and changed the symbol for the band energy to avoid confusion.  -Arjun

Sunday, February 1, 2015

Entropy

I think that if you set the bandwidth equal to zero, that will make the entropy calculation easier. I guess what we need to know is: for a given value of \(N_{\uparrow} - N_{\downarrow}\), how many states are there?  That is, what is \(\Omega\)?

Then we can find the equilibrium state of the system by looking for the minima of F=E-TS,
where \( S= k ln(\Omega)\).  Does that make sense?

Here is a suggestion. I think what we really want is just a graph of \( kT ln\Omega\) as a function of \(N_{\uparrow} - N_{\downarrow}\) and then to maybe add that to the e-e energy term and graph that to see which one controls the equilibrium.  (or really, how they both influence it...) So instead of getting all caught up in formal math, and Stirling's formula etc. (which is fine, but might take too much time) maybe someone could just do that numerically for a particular case like N=1000 or so, and for a particular value of \(e^2/a\), like say 1 or 2 eV. Then graph the Free Energy as a function of \(N_{\uparrow} - N_{\downarrow}\) for a few values of T. Does that make any sense?

By whatever method, either analytically or numerically, what we need to keep moving forward is an approximate expression for S that includes terms up to 4th order in \(N_{\uparrow} - N_{\downarrow}\). I think S has a maximum at \(N_{\uparrow} - N_{\downarrow}=0\) and one can do a Taylor series expansion near there including the \((N_{\uparrow} - N_{\downarrow})^2\) and \((N_{\uparrow} - N_{\downarrow})^4\) terms. One can also fit a numerically generated S in that way.
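Here is a sketch of that numerical route, taking the bandwidth to be zero, using \(\Omega = N!/(N_\uparrow ! N_\downarrow !)\) for the counting, and taking the pair energies from the "Results/Progress from Class" post below; the value of \(e^2/a\) is illustrative.

import numpy as np
from scipy.special import gammaln

k_B = 8.617e-5                    # eV/K
N = 1000
e2_over_L = 1.0 / N               # e^2/L = (e^2/a)/N, with e^2/a = 1 eV here

def free_energy(N_up, T):
    # F = E_ee - T*S with zero bandwidth, for a given N_up and temperature T (K).
    N_dn = N - N_up
    same = 0.5 * e2_over_L * (N_up * (N_up - 1) / 2 + N_dn * (N_dn - 1) / 2)
    opposite = e2_over_L * N_up * N_dn
    # S = k ln(Omega), Omega = N!/(N_up! N_dn!), via log-gamma to avoid overflow
    lnOmega = gammaln(N + 1) - gammaln(N_up + 1) - gammaln(N_dn + 1)
    return same + opposite - T * k_B * lnOmega

N_up = np.arange(0, N + 1)
for T in (100, 300, 1000):
    F = np.array([free_energy(n, T) for n in N_up])
    print(T, "K: free energy minimum at N_up =", N_up[np.argmin(F)])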

Review of Friday's class

Wednesday, January 28, 2015

Results/Progress from Class




In the above expression the first part of the sum is the energy assuming constant density of states and one energy band that starts from 0.

Considering only the first part, we showed that differentiating with respect to \( N_+ \) gives \( N_+=N/2 \) for minimum energy. Therefore equilibrium dictates an equal distribution of spin up and spin down.

The next two terms in the sum above were derived by counting. The first term is for the up-up or down-down spin pairs and the second for the up-down pairs. As you can see, we followed Zack's instructions for the energy contribution of each configuration.

We did not have time to graph or to take the derivative of the expression to find the new equilibrium as the counting took some time but I think everyone is on the same page!

Rationalization of the expression above:

\(N_{\pm} \) C 2 yields the number of possible up-up or down-down pairs

\( N_+*(N-N_+) \) yields the number of up down spins.

A sanity check is to verify that the total number of pairs, N C 2, satisfies

N C 2= \(N_+ \) C 2 + \(N_- \) C 2 + \( N_+*(N-N_+) \)

*C means choose
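The check works out as an identity, using \(N_+ + N_- = N\):

\( \binom{N_+}{2}+\binom{N_-}{2}+N_+N_- = \frac{N_+(N_+-1)+N_-(N_--1)+2N_+N_-}{2} = \frac{(N_++N_-)^2-(N_++N_-)}{2} = \binom{N}{2} \)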

Today's Class (Wednesday).

I won't be there for today's class. What I would like you to do is to work together, perhaps in groups, on creating the model for magnetism based on the concepts in the post "Influence of electron interactions in a band metal."
The key things are the dependence of the total band energy on \(N_{up}\) using the constant D(E) assumption, and the dependence of the total e-e energy on \(N_{up}\) based on counting pairs and our assumed difference between same spin and opposite spin pairs. (See my reply to George's comment on the post "Influence of electron interactions in a band metal.")

I would strongly suggest that you not spend much time questioning the assumptions. I don't think you will get anywhere with that. We will address those issues on Friday.

Count pairs. Get graphs of the total band energy and the e-e energy as a function of \(N_{up}\). 

Phonon Density of States within Monoatomic 1D Lattice





Tuesday, January 27, 2015

Simplest 2-D 3-D phonon configurations solved, Discussion on 2-D 3-D density of states calculation




Hund's Rule

Underlying physics

The underlying physics that elucidates why the electron-electron interaction might be influenced by the spin state involves multi-electron states, and that is something we haven't really gone over. The simplest case is two electrons, but even that has some interesting subtleties. It is not just which states are occupied that matters. I'll try to do a post on that later, but if anyone has some experience and interest in two electron states, feel free to do a post. I believe that the influence of the spin state on the e-e interaction has to do with the correlations built into the multi-electron state via symmetrization.

Influence of electron interactions in a band metal.

Let's make it a goal to finish by Friday our problem involving the possible influence of electron repulsion in a band metal. Specifically, we are considering N electrons in a band that has a total of 2N states and which has a constant DOS. So \(D_o B = 2N\), where B is the bandwidth and N is the number of sites in the crystal.

We know how to calculate the energy of the system (at T=0) by integrating over the occupied states. (Can someone post that integral here?) We spoke about imagining separating the DOS into two equal halves, one for spin up and one for spin down. With that in mind one can then calculate the total energy of the system as a function of N_up and N_down. Can someone do that and post it here? Can you post a graph of the energy of the system as a function of \(N_{up}\)? Then we can consider and discuss what the equilibrium value is and what might influence the extent of fluctuations away from equilibrium. We assume \(N_{up} + N_{down} =N\). Does that make sense? What does that plot look like? What is the equilibrium value?

Now let's suppose every electron interacts with every other electron with an interaction expectation value \(e^2/L\)  (where L=Na). How many electron pairs are there if there are 100 atoms and 100 electrons in this band? How many are there if there are N electrons? What is the total electron-electron interaction energy?

Then let's make a slightly different assumption. Let's suppose that every spin up electron interacts with every other spin up electron with a smaller energy, \(e^2/2L\). Same for down-down pairs. The interactions between up and down pairs we leave at the larger value \(e^2/L\). (Does that make sense? Perhaps someone can explain that more lucidly?) How does this new assumption change and influence things? What is the total electron-electron repulsion energy with this assumption? Graph it as a function of \(N_{up}\). Think about it and discuss.
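For the integral requested above, one way to write it at T = 0 under the constant-DOS assumption (a sketch; each spin species gets half of the total DOS, so the per-spin density of states is \(N/B\)):

\( E_{band} = \int_0^{E_\uparrow}\frac{N}{B}E\,dE + \int_0^{E_\downarrow}\frac{N}{B}E\,dE = \frac{B}{2N}\left(N_{up}^2+N_{down}^2\right), \qquad E_\sigma = \frac{N_\sigma B}{N}, \)

which, with \(N_{up}+N_{down}=N\) held fixed, is minimized at \(N_{up}=N_{down}=N/2\).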

Sunday, January 25, 2015

Phonon Dispersion






E vs q for All States

Below are my general solutions to the energies for an infinite set of square wells, as well as my specific solutions to the E1 and E3 states.

Here are my constants:

Well width: a = 2b = 1 Angstrom 

Well Depth: \(V_0\) = 250 eV

Well Spacing (from one center to another): c = 1.75 Angstroms (chosen because it causes essentially no difference in E1, but a very large difference in E3)

E1:

E1 = -226 eV  , k = 2.51 Angstroms^(-1) , \(\kappa\) = 7.7 Angstroms^(-1)
                        D = 1.26 Angstroms^(-1/2), B = 0.39 Angstroms^(-1/2)


E3:

 E3 = -50 eV  , k = 7.24 Angstroms^(-1) , \(\kappa\) = 3.62 Angstroms^(-1)
                        D = 1.14 Angstroms^(-1/2), B = -1.01 Angstroms^(-1/2)


\( E_q^1 = -226 + 0.46cos(1.75q) \)

\( E_q^3 = -50 + 32cos(1.75q)      \)




Here are the steps to get here and general equations for all states, in case I got anything wrong:

Following from previous discussions we have

\( E_q = E_0 + 2cos(qc)I_{11}^m \)

\( I_{11}^m = < \phi_0 (x) | V (x-c) | \phi_0 (x-c)> \)

\( < \phi_0 (x) | V (x-c) | \phi_0 (x-c)> = \int_{c-b}^{c+b} \phi_0^* (x) V_0 (x-c) \phi_0  (x-c) dx \)

\(  = V_0 \int_{c-b}^{c+b} \phi_0^* (x) \phi_0  (x-c) dx \)

\(  \phi_0(x) \)  between c-b and c+b is the exponentially falling tail of the wave function centered at x = 0, and is the same for even and odd functions.

\(\phi_0 (x-c)\) between c-b and c+b is the sinusoidal wave function centered at x=c, and varies from even to odd.

\(  = V_0 D \int_{c-b}^{c+b} e^{- \kappa x} \phi_0  (x-c) dx \)

odd \(  = V_0 D A \int_{c-b}^{c+b} e^{- \kappa x} sin(k(x-c)) dx \)

even \(  = V_0 D B \int_{c-b}^{c+b} e^{- \kappa x} cos(k(x-c)) dx \)

odd \( \frac{V_0 D A e^{-\kappa c}}{k^2 + \kappa^2} [ k(e^{2b\kappa} - 1)   cos(bk) -  \kappa(e^{2b\kappa} + 1)   sin(bk)   ]   \)

even \( \frac{V_0 D B e^{-\kappa c}}{k^2 + \kappa^2} [ \kappa(e^{2b\kappa} - 1)   cos(bk) +  k(e^{2b\kappa} + 1)   sin(bk)   ]   \)



odd \( E_q = E_0 + \frac{V_0 D A e^{-\kappa c} cos(qc)}{k^2 + \kappa^2} [ k(e^{2b\kappa} - 1)   cos(bk) -  \kappa(e^{2b\kappa} + 1)   sin(bk)   ]   \)

even \( E_q = E_0 + \frac{V_0 D B e^{-\kappa c} cos(qc)}{k^2 + \kappa^2} [ \kappa(e^{2b\kappa} - 1)   cos(bk) +  k(e^{2b\kappa} + 1)   sin(bk)   ]   \)



\( E_q^1 = -226 + 327,050e^{-7.7c}cos(cq) \)

\( E_q^3 = -50 + 17,885e^{-3.62c}cos(cq)      \)





Does this look reasonable?

Friday, January 23, 2015

Finite Well Wave Functions


Here are some plots I made for our 1A wide, 250 eV deep well:



\(\Psi_0\):


\(\Psi_1\):



\(\Psi_3\):


 And here they are plotted on the same graph:

I'll add the constants used to generate them in a comment.


Wednesday, January 21, 2015

What to do now?

Here are my suggestions. Feel free to add your own.

1) I think we need a beautiful graph showing the 3 (or 4) bound states all together on the same scale so we can really compare how they look and how far they extend outside the well.

2) Some \(I_{11}^m\) calculations for m=1, 2, 3 for some value of a.

3) Some nice plots of E vs q with the bandwidths and band gaps all worked out?

4) Other stuff? What would you like to see?

Also, can someone do a lattice specific heat post (in response to question 2)?

How about if we post stuff here and try to get this all done by Friday? Then we can start some new interesting things.

PS. Multiplying by \(c^2\) to get away from mks units and into eV-A or eV-nm can be helpful. For example, for calculating k and kappa, one can multiply the numerator and denominator inside the square root by \(c^2\) to get:
\( \kappa = \sqrt{\frac{2mc^2(-E)}{\hbar^2 c^2}} \)

\( k = \sqrt{\frac{2mc^2(E+V_0)}{\hbar^2c^2}} \)
where,
\(mc^2 = 0.511 \times 10^6 \) eV
and
\(\hbar c = 1970\) eV-A
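As a tiny worked example of that trick (nothing new, just the E1 numbers from the post above plugged in):

```c
/* Sketch: k and kappa in inverse Angstroms via the eV-Angstrom constants. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double mc2   = 0.511e6;   /* eV                            */
    const double hbarc = 1970.0;    /* eV * A                        */
    const double V0    = 250.0;     /* eV                            */
    const double E     = -226.0;    /* eV, the E1 value quoted above */

    double kappa = sqrt(2.0 * mc2 * (-E))     / hbarc;
    double k     = sqrt(2.0 * mc2 * (E + V0)) / hbarc;
    printf("kappa = %.2f 1/A, k = %.2f 1/A\n", kappa, k);   /* about 7.7 and 2.5 */
    return 0;
}
```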

Creating bands using square well eigenstates.

Now that we have the single square well bound state problem pretty well under control, let's use those states to create bands. One thing you can do to start is a conceptual drawing. In this we are guided by the Bloch form of the crystal eigenstate and the approximation we used before, where only \(I_{11}\) and maybe another overlap integral, one that is unitless, were kept. Keep it simple. Don't expand the approach beyond that.

1st suggestion:  Use specific states and actual numbers for A, B, D, k and kappa.  Anything you do with symbols will probably be inconclusive and unhelpful. My preference is for the 250 eV deep, 1A well, but any of them are okay. Note that for the 1A wells E3 is the highest energy bound state.*

2nd suggestion: Pick a good separation. One with some overlap but not too much. Maybe about 1 A? or 0.5 A?

3rd suggestion: Sketch the integrands. Pay attention to signs. Remember that \(I_{11}\) is an overlap integral with units of energy and that the q dependence is separate from that. The sign of \(I_{11}\) affects the qualitative nature of the q dependence.
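If it helps with sketching, here is a small tabulation of the even-state integrand's shape (a sketch assuming the E1 constants and c = 1.75 A). Since the single-well potential is constant over the integration window, the sign and shape of the integrand are set by the product of the exponential tail and the neighboring sinusoid, which is what gets printed.

```c
/* Sketch: tabulate D*exp(-kappa*x) * B*cos(k*(x-c)) for x in [c-b, c+b],
   i.e. the wave-function product that controls the sign of I11 (even case).
   Two tab-separated columns for plotting. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double D = 1.26, B = 0.39;     /* E1 constants (Angstroms^-1/2) */
    const double kappa = 7.7, k = 2.51;  /* 1/Angstrom                    */
    const double c = 1.75, b = 0.5;      /* Angstroms                     */
    const int n = 200;

    for (int i = 0; i <= n; i++) {
        double x = (c - b) + 2.0 * b * i / n;
        printf("%g\t%g\n", x, D * exp(-kappa * x) * B * cos(k * (x - c)));
    }
    return 0;
}
```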

Tuesday, January 20, 2015

Sunday, January 18, 2015

Calculating energy as a function of lattice spacing

For this post I will attempt to write out an expression for \( E_q \) in terms of the even and odd \( \phi_0 \).
Odd wave functions \( \phi(x-mc)_{odd} = \begin{cases} -D e^{\kappa (x-mc)}, & x < mc-a/2 \\ A sin(k(x-mc)), & mc-a/2 < x < mc+a/2 \\ D e^{-\kappa (x-mc)}, & x > mc+a/2 \end{cases}  \)

Even wave functions \( \phi(x-mc)_{even} = \begin{cases} D e^{\kappa (x-mc)}, & x < mc-a/2 \\ B cos(k(x-mc)), & mc-a/2 < x < mc+a/2 \\ D e^{-\kappa (x-mc)}, & x > mc+a/2 \end{cases} \)
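For what it's worth, here is how those piecewise forms look as code (just a transcription of the definitions above; the numbers in main are E1-style constants used purely as placeholders):

```c
/* Sketch: the odd and even single-well states phi(x - m*c), exactly as defined
   above, written as C functions.  Constants are placeholders for whichever
   bound state is being used. */
#include <stdio.h>
#include <math.h>

typedef struct { double A, B, D, k, kappa, a, c; } WellState;

static double phi_odd(double x, int m, const WellState *s)
{
    double u = x - m * s->c;                 /* coordinate relative to well m */
    if (u < -s->a / 2.0) return -s->D * exp( s->kappa * u);
    if (u >  s->a / 2.0) return  s->D * exp(-s->kappa * u);
    return s->A * sin(s->k * u);
}

static double phi_even(double x, int m, const WellState *s)
{
    double u = x - m * s->c;
    if (u < -s->a / 2.0) return s->D * exp( s->kappa * u);
    if (u >  s->a / 2.0) return s->D * exp(-s->kappa * u);
    return s->B * cos(s->k * u);
}

int main(void)
{
    WellState s = { 1.0, 0.39, 1.26, 2.51, 7.7, 1.0, 1.75 };  /* placeholder numbers */
    for (double x = -1.5; x <= 3.5; x += 0.05)                /* wells m = 0 and m = 1 */
        printf("%g\t%g\t%g\n", x, phi_even(x, 0, &s), phi_odd(x, 1, &s));
    return 0;
}
```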

Just like the First Homework, we can construct a Hamiltonian, but this time...

Preliminary Part for Homework

Here is what I got as result for the preliminary part:

The ground state energy, n=1, of a finite well with width a = 0.6 Å:

\(E_n = \frac{\hbar ^2 \pi^2}{2ma^2} n^2 \)
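The quoted formula is the familiar infinite-well expression; as a quick numerical sanity check (using \(mc^2 = 0.511\times10^6\) eV and \(\hbar c = 1970\) eV-A, the same constants used elsewhere on this blog), here is what it evaluates to for a = 0.6 Å:

```c
/* Sketch: evaluate E_n = (hbar*c)^2 * pi^2 * n^2 / (2 * mc^2 * a^2) in eV. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double mc2 = 0.511e6, hbarc = 1970.0;   /* eV, eV*A  */
    const double a = 0.6;                         /* Angstroms */
    const double pi = acos(-1.0);

    for (int n = 1; n <= 3; n++) {
        double En = hbarc * hbarc * pi * pi * n * n / (2.0 * mc2 * a * a);
        printf("n = %d : E_n = %.0f eV\n", n, En);   /* n = 1 comes out near 104 eV */
    }
    return 0;
}
```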

Saturday, January 17, 2015

Homework for Wednesday.

New homework post. I would really like you to do this by Wednesday. That way we can keep moving. Does that seem reasonable? Can you do it??  Plus, problem 1 provides the underpinning for this class and your future understanding of solid state physics, so it is worth wasting some time on I hope.

Friday, January 16, 2015

Specific Heat

Hey everyone.
I promised to complete a write-up about our findings in class, and here I am.  Earlier, we found the relationship between the total energy (let's call this \( U \) for now) and the density of states \( D(E) \) :
\[ U = \int _{\text{bottom}} ^{\text{top}} D(E) \, f(E) \, E \, dE \quad \text{where} \quad f(E) = \frac{1}{1+\exp{[(E-\mu)/kT]}} \]
For neatness' sake, I'll use \( E_b \) and \(E_t\) to denote the bottom and the top of the band, respectively.
Q: How do we relate this to specific heat?
From \( C_V = \frac{\Delta U}{\Delta T} \) (the specific heat at constant volume), we can write
\[ C_V = \frac{\partial}{\partial T} \int_{E_b}^{E_t} D(E)\,f(E)\,E\,dE \\
 = \int_{E_b}^{E_t} D(E)\, \frac{\partial f(E)}{\partial T}\,E\,dE. \\
\text{Given that} \quad \frac{\partial f(E_b)}{\partial T}D(E_b) \approx 0 \quad \text{and} \quad \frac{\partial f(E_t)}{\partial T}D(E_t) \approx 0 \quad \text{rather strongly,} \]
and since \( \frac{\partial f(E)}{\partial T} \) is sharply peaked around \( E = \mu \) (so only states in that narrow window contribute), we can justify \( D(E) \approx D(E_0) = constant \), the density of states near \( \mu \).  So,
\[ C_V \approx D(E_0) \int_{E_b}^{E_t} \frac{\partial f(E)}{\partial T}\,E\,dE = D(E_0) \int_{E_b}^{E_t} \frac{\frac{E-\mu}{kT^2}\exp{[(E-\mu)/kT]}}{(1+\exp{[(E-\mu)/kT]})^2}\,E\,dE \]
where \( \mu \) is something we didn't really get to discuss in depth... anyway,
\[ C_V \approx \frac{D(E_0)}{T} \int_{E_b}^{E_t} \frac{(E-\mu)\exp{[(E-\mu)/kT]}}{(1+\exp{[(E-\mu)/kT]})^2}\,E\,\frac{dE}{kT}. \]
We can make this integration easier by letting our limits go to infinity instead.  Physically, we're only including negligible contributions by doing this, so I'm pretty comfortable with that.  Furthermore, substituting  \( x = (E-\mu)/kT \)  and  \( dx = dE/kT \),  we get
\[ C_V \approx \frac{D(E_0)(kT)^2}{T} \int_{-\infty}^{\infty} \frac{\frac{E}{kT}xe^x}{(1+e^x)^2}\,dx  \\
  \approx  D(E_0)k^2T \left[ \int_{-\infty}^{\infty} \frac{\frac{E-\mu}{kT}xe^x}{(1+e^x)^2}\,dx + \int_{-\infty}^{\infty} \frac{\frac{\mu}{kT}xe^x}{(1+e^x)^2}\,dx \right] \\
\approx D(E_0)k^2T \left[ \int_{-\infty}^{\infty} \frac{x^2e^x}{(1+e^x)^2}\,dx + \frac{\mu}{kT} \int_{-\infty}^{\infty} \frac{xe^x}{(1+e^x)^2}\,dx \right] \\
\approx D(E_0)k^2T \int_{-\infty}^{\infty} \frac{x^2e^x}{(1+e^x)^2}\,dx \]
since the second integral vanishes: its integrand is odd in \(x\) (the weight \( e^x/(1+e^x)^2 \) is even).  We are left with an immediately repulsive integral, but closer inspection à la Wolfram reveals true beauty:
\[ C_V \approx D(E_0)k^2T \frac{\pi^2}{3} \]
As promised, this constant (\( \pi^2/3 \)) is a small contribution in the scheme of things, since \( D(E_0) \), which is proportional to the number of atoms in the crystal, is such a large number by comparison.
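For anyone who, like me, distrusts an integral handed over by Wolfram, here is a small numerical check (a sketch, nothing more) that \(\int_{-\infty}^{\infty} x^2e^x/(1+e^x)^2\,dx\) really is \(\pi^2/3\) and that the companion integral \(\int xe^x/(1+e^x)^2\,dx\) vanishes; the weight dies off so fast that cutting the range at \(\pm40\) is plenty.

```c
/* Sketch: trapezoid-rule check of the two integrals used above.
   int x^2 e^x/(1+e^x)^2 dx should give pi^2/3 ~ 3.2899,
   int x   e^x/(1+e^x)^2 dx should give 0 (odd integrand). */
#include <stdio.h>
#include <math.h>

static double weight(double x)   /* e^x/(1+e^x)^2, even in x; written to avoid overflow */
{
    double e = exp(-fabs(x));
    return e / ((1.0 + e) * (1.0 + e));
}

int main(void)
{
    const double L = 40.0;        /* cutoff; the weight is ~e^-40 out here */
    const int n = 400000;
    const double pi = acos(-1.0);
    double h = 2.0 * L / n, s2 = 0.0, s1 = 0.0;

    for (int i = 0; i <= n; i++) {
        double x = -L + i * h;
        double w = (i == 0 || i == n) ? 0.5 : 1.0;   /* trapezoid weights */
        s2 += w * x * x * weight(x);
        s1 += w * x * weight(x);
    }
    printf("x^2 integral = %.6f  (pi^2/3 = %.6f)\n", s2 * h, pi * pi / 3.0);
    printf("x   integral = %.2e\n", s1 * h);
    return 0;
}
```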

Really, the take-away seems to be that heat capacity is linear with respect to temperature... at least to a first order approximation.  At the same time, it's interesting to note that this approach implies that different values of \(\mu\) and \(E\) don't affect the heat capacity whatsoever, though that could be a result inherent to either the rough handling of \(\mu\) or the idealized 1-dimensional nature of the problem.

Have a great 3-day weekend everyone.

Tuesday, January 13, 2015

Density of States

Additionally, let's work on density of states as a function of energy. So far we have calculated the energy of a bunch of states (one band) as a function of q. Can we turn that into a density of states as a function of energy? 

To provide ourselves a concrete goal, let's work toward calculating the specific heat* of a 1D crystal with a band of the states we just calculated that is:
1) full
2) half full

*The specific heat is the amount of heat per unit mass required to raise the temperature by one degree Celsius... (i googled it)
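On the question above about turning \(E(q)\) into \(D(E)\): one blunt way (a sketch only, using \(E_0 = -50\) eV and \(I = 16\) eV as stand-in numbers so that \(E_q = E_0 + 2I\cos(qc)\) resembles the E3 band from the earlier post) is to sample q uniformly across the first Brillouin zone, evaluate the energy, and histogram the results. The pile-up of states at the band edges shows up immediately.

```c
/* Sketch: estimate D(E) for one band E(q) = E0 + 2*I*cos(q*c) by sampling q
   uniformly over the first Brillouin zone and histogramming the energies.
   Output columns: bin center (eV), D(E) normalized so the band integrates to 1
   (multiply by the number of q states for absolute counts). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double E0 = -50.0, I = 16.0, c = 1.75;   /* stand-in band parameters */
    const double pi = acos(-1.0);
    const int    nq = 2000000, nbins = 60;
    const double Emin = E0 - 2.0 * I, Emax = E0 + 2.0 * I;
    const double dE = (Emax - Emin) / nbins;
    double hist[60] = { 0.0 };

    for (int i = 0; i < nq; i++) {
        double q = -pi / c + 2.0 * pi / c * (i + 0.5) / nq;
        double E = E0 + 2.0 * I * cos(q * c);
        int bin = (int)((E - Emin) / dE);
        if (bin < 0) bin = 0;
        if (bin > nbins - 1) bin = nbins - 1;
        hist[bin] += 1.0;
    }
    for (int b = 0; b < nbins; b++)
        printf("%g\t%g\n", Emin + (b + 0.5) * dE, hist[b] / (nq * dE));
    return 0;
}
```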

Discussion Regarding N Atom System with N Free Electrons

I wanted to start a discussion regarding the question posed at the end of the last class regarding how many states would be filled in a system of N atoms and N free electrons.

My thoughts are that N states would be filled when disregarding spin. I think this is sort of analogous to the problem we solved for the normal modes of masses connected by springs (although maybe it isn't). In the normal mode situation we found that for N degrees of freedom there are N normal modes, and I think that in the case of states, if we have N atoms and N free electrons, we can kind of think of it as N degrees of freedom, so N states get filled. (I don't really like this explanation, so if you have any better ideas please post them!)

Now when we include spin I think we will find that there will be N/2 states filled. I think that each electron has two possible states that it can fill, one of those being a shared state with another electron of opposite spin. If the probability of an electron occupying the same state as another electron is 1/2, which I think it is in our current model where we disregard electron-electron interaction, then the number of states filled should be N/2.

Does this sound reasonable to you guys?

Friday, January 9, 2015

Homework problem

\(E_q \Psi_q = H \Psi_q \)
Find \(E_q\) vs. \(q\).

Would someone like to do a more detailed post of this problem? Please let me know if you would like to be added as an author for this blog.  Then you can do your own post which makes editing possible, etc.

For this problem, you can assume that you are given a single atom potential as well as the energy of the ground state for that and the ground state wave function. The crystal potential is a sum of single atom potentials with appropriate displacement, and we assume eigenstates in the form given in class, i.e., single atom wave functions with phase factors that depend on q and on the displacement.
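To make the ansatz a bit more tangible, here is a sketch of the Bloch-sum construction \(\Psi_q(x) = \sum_m e^{iqmc}\phi_0(x-mc)\), with a Gaussian standing in for the single-atom ground state purely as a placeholder (the real \(\phi_0\) would come from the given single-atom problem). It prints the real part of the sum for one value of q.

```c
/* Sketch: real part of the Bloch-sum trial state
   Psi_q(x) = sum_m exp(i*q*m*c) * phi0(x - m*c),
   with a Gaussian as a stand-in for the single-atom ground state. */
#include <stdio.h>
#include <math.h>

static double phi0(double x)                 /* placeholder single-atom state */
{
    const double width = 0.3;
    return exp(-x * x / (2.0 * width * width));
}

int main(void)
{
    const double c = 1.75;                   /* atom spacing (placeholder) */
    const double q = 1.0;                    /* some q in the first zone   */
    const int M = 20;                        /* include atoms m = -M..M    */

    for (double x = -5.0; x <= 5.0; x += 0.05) {
        double re = 0.0;
        for (int m = -M; m <= M; m++)
            re += cos(q * m * c) * phi0(x - m * c);  /* Re of the phase factor */
        printf("%g\t%g\n", x, re);
    }
    return 0;
}
```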

Eigenvectors and dispersion relation.

Here is my summary of what we did last class with regard to finding the eigenvectors for an infinite 1D lattice. The eigenvectors are indexed by q and the oscillation frequency depends on q. The relationship between frequency and q is called a dispersion relation. One has to limit the range of q to avoid duplicate eigenvectors. Can someone delineate how that works and what the range of q should be before class today?  Please comment here.
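As a tiny numerical illustration of the duplicate-eigenvector point (a sketch with a made-up lattice spacing a = 1): the displacement pattern \(u_n \propto \cos(qna)\) for some q and for \(q + 2\pi/a\) agree at every lattice site, which is why q can be restricted to a window of width \(2\pi/a\).

```c
/* Sketch: q and q + 2*pi/a produce the same displacements at the lattice sites,
   so q can be restricted to the first Brillouin zone without losing anything. */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double a  = 1.0;                 /* lattice spacing (arbitrary)  */
    const double pi = acos(-1.0);
    const double q1 = 0.7;                 /* some q inside the zone       */
    const double q2 = q1 + 2.0 * pi / a;   /* the same q shifted by 2*pi/a */

    for (int n = 0; n < 8; n++)            /* compare the patterns site by site */
        printf("n = %d : cos(q1*n*a) = %+.6f   cos(q2*n*a) = %+.6f\n",
               n, cos(q1 * n * a), cos(q2 * n * a));
    return 0;
}
```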

Sunday, January 4, 2015

Latex posting and testing.

Feel free to test your latex phrases here. I think that things enclosed with a slash-paren will be interpreted as latex now. Anyone want to write a short tutorial?

\(E=E_o\)