Electronic Properties of Materials/Quantum Mechanics for Engineers/The Fundamental Postulates

There are four basic postulates that underlie quantum mechanics.

Postulate I: Observables and operators are related.

Postulate II: Measurement collapses the wave function.

Postulate III: There exists a state function that allows expectation values to be calculated.

Postulate IV: The wave function evolves according to the time-dependent Schrödinger equation.

Postulate I

Each self-consistent, well-defined observable has a linear operator that satisfies the eigenvalue equation $\hat{A}\phi = a\phi$, where $A$ is the observable, $\hat{A}$ is the operator, $a$ is the measured eigenvalue, and $\phi$ is the eigenfunction of $\hat{A}$. In a given system you have a different eigenfunction for every eigenvalue, so often you will see $\hat{A}\phi_n = a_n\phi_n$, which specifies that $\phi_n$ is the eigenfunction belonging to the eigenvalue $a_n$. Thus, this postulate links an observable to a mathematical operator.
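If you want to see this postulate in action numerically, here is a minimal sketch in which the operator is represented by a 2×2 Hermitian matrix; the matrix itself is an arbitrary illustrative choice.

```python
import numpy as np

# A minimal sketch: the "operator" is a 2x2 Hermitian matrix (arbitrary choice)
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues a_n and eigenvectors ("eigenfunctions") phi_n of A
eigenvalues, eigenvectors = np.linalg.eigh(A)

for n, a_n in enumerate(eigenvalues):
    phi_n = eigenvectors[:, n]
    # Check the eigenvalue equation: A phi_n = a_n phi_n
    print(a_n, np.allclose(A @ phi_n, a_n * phi_n))
```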

What are Mathematical Operators?

An "operator" is thing or mathematical expression which operates on a function and makes it different. For example:

In this function, is the mathematical operator defined as the derivative with respect to . This means that if we later have operating on some function of , we can then apply additional operators to the function which change the result, but still follow the same rule. For example, let's apply an operator, , which rotates the function 90° about the z-axis.


Furthermore, applying a "divide by three" operator, or an Identity Operator, which leaves the function unchanged, yields similar results.
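Here is a short symbolic sketch of these operator ideas; the test function $x^2\sin(x)$ and the operator names are arbitrary illustrative choices.

```python
import sympy as sp

x = sp.symbols('x')
f = x**2 * sp.sin(x)              # an arbitrary test function

D = lambda g: sp.diff(g, x)       # derivative operator: D f = df/dx
T = lambda g: g / 3               # "divide by three" operator
identity = lambda g: g            # identity operator: leaves f unchanged

print(D(f))                       # the derivative of f
print(T(D(f)))                    # the same result, divided by three
print(identity(f) == f)           # True
```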

Physically Significant Operator Observables

Physically meaningful observables all have operators, which come about in a variety of ways, but one way to start thinking about them is as the corresponding classical quantities, quantized by the addition of $i$ and $\hbar$. If you look at these cases long enough, you'll eventually start seeing that there's a pattern to it.

Let's take the example of linear momentum, $p$. I will give it the operator $\hat{p}$, a vector operator equal to $-i\hbar\nabla$. While you can look at the whole vector in three dimensions, the gradient allows us to look at each component separately, so let's simplify this problem and look only at the x-component, $\hat{p}_x = -i\hbar\,\frac{\partial}{\partial x}$.

Applying this operator to some function, $\phi(x)$, gives:

$\hat{p}_x\phi(x) = -i\hbar\frac{\partial\phi(x)}{\partial x} = p_x\,\phi(x)$

Solving this differential equation provides one solution by applying the planewave equations:

$\phi(x) = Ae^{ikx} = A\left[\cos(kx) + i\sin(kx)\right]$

The solution is just a planewave with wave number $k = p_x/\hbar$.

This isn't very exciting on its own, as $k$ (and therefore $p_x = \hbar k$) can take any value, so it doesn't look "quantized". Physically, this represents a free particle (i.e. a particle alone in an infinite vacuum), and the quantization comes from the boundary conditions we apply.
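We can check the planewave claim symbolically; this is a minimal sketch in which $\hbar$ and $k$ are kept as symbols.

```python
import sympy as sp

x, k = sp.symbols('x k', real=True)
hbar = sp.symbols('hbar', positive=True)

phi = sp.exp(sp.I * k * x)            # planewave trial solution

# Momentum operator in one dimension: p_x = -i*hbar*d/dx
p_phi = -sp.I * hbar * sp.diff(phi, x)

# phi is an eigenfunction: the ratio (p_x phi)/phi is the constant hbar*k
print(sp.simplify(p_phi / phi))       # hbar*k
```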

Application of Boundary Conditions

<FIGURE> "Born-von Karman Boundary Conditions" (These boundary conditions could be pictured as a box or as a ring.)

Let's apply periodic boundary conditions (PBC), called "Born-von Karman boundary conditions" (<FIGURE>). With these we are essentially putting the particle in a one-dimensional box where it is free to move within the box, but once it leaves the box it loops back around in space and reenters the box from the other side. The box has some size, $L$, which gives us the quantization. This concept can also be pictured as a ring with radius $r = L/2\pi$.

These boundary conditions restrict the solutions, because the solutions must match at the boundaries. Thus:

$\phi(x) = \phi(x+L) \quad\Rightarrow\quad Ae^{ikx} = Ae^{ik(x+L)}$

This isn't obviously solvable, so we go in and substitute sine and cosine as described in the planewave equations, which gives:

$e^{ikL} = \cos(kL) + i\sin(kL) = 1$

Since the right-hand side of the equation must be equal to one, we can conclude that $kL = 2\pi n$ with $n = 0, \pm 1, \pm 2, \ldots$ Following this logic:

$k_n = \frac{2\pi n}{L}, \qquad p_n = \hbar k_n = \frac{n h}{L}$

Now we have a quantized solution. Going back to the idea of the ring boundary condition, we can substitute $L = 2\pi r$, which gives $n\lambda = 2\pi r$, and come upon the de Broglie hypothesis from Chapter 1 ($\lambda = h/p$), showing us that when Planck initially quantized particles he was thinking of a periodic situation. Additionally, we can develop the Bohr model of the atom by combining these two concepts.

<FIGURE> "Bohr Atom Model from de Broglie Equations" (Description)

Effect of Boundary Conditions

This is what makes nanoscience interesting! When the dimensions of a structure are small enough they affect the quantization. If we can control the dimensionality at a nanoscale, we can control the quantum nature of electrons.
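To get a feel for the numbers, here is a rough sketch comparing the spacing between the first two energy levels, $E_n = p_n^2/2m$ with $p_n = nh/L$, for an electron in a nanometer-sized box versus a centimeter-sized one; the two box lengths are illustrative choices.

```python
import numpy as np

h = 6.626e-34        # Planck constant (J*s)
m = 9.109e-31        # electron mass (kg)

def level_spacing(L, n=1):
    """Gap E_{n+1} - E_n for quantized momenta p_n = n*h/L and E = p^2/(2m)."""
    E = lambda j: (j * h / L) ** 2 / (2 * m)
    return E(n + 1) - E(n)

# Illustrative box sizes: a 1 nm nanostructure versus a 1 cm macroscopic box
for L in (1e-9, 1e-2):
    gap_eV = level_spacing(L) / 1.602e-19
    print(f"L = {L:g} m: E_2 - E_1 is about {gap_eV:.2g} eV")
```

For the nanometer box the spacing is of order electron-volts, which is easily observable; for the centimeter box it is some fourteen orders of magnitude smaller and effectively continuous.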

Another well-defined observable is energy. In classical mechanics there are several ways to formulate the equations of motion (Newtonian, Lagrangian, Hamiltonian). I'm not going to talk about these, but you should know that the quantum-mechanical formalism matches the classical Hamiltonian formalism. For systems where the kinetic energy depends on momentum and the potential energy depends on position, the Hamiltonian operator takes the simple form:

$\hat{H} = \hat{T} + \hat{V}$, where $\hat{T}$ is the kinetic energy and $\hat{V}$ is the potential energy.

For now we are going to talk about particles in a vacuum, which sets the potential energy ($\hat{V}$) to zero, so we are simply looking at the kinetic energy ($\hat{T}$). We can take the equation for kinetic energy, $T = \frac{p^2}{2m}$, from classical mechanics and substitute in our momentum operator, $\hat{p} = -i\hbar\nabla$, to get a simplified equation for $\hat{T}$, namely $\hat{T} = -\frac{\hbar^2}{2m}\nabla^2$, where $\nabla^2$ is referred to as the Laplacian operator.

Simplification of $\nabla^2$:

$\nabla^2 = \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} + \frac{\partial^2}{\partial z^2}$

Once again, we can simplify this to a one-dimensional problem by utilizing the expanded form of $\nabla^2$:

$\hat{T}_x = -\frac{\hbar^2}{2m}\frac{\partial^2}{\partial x^2}$

We are taking the second derivative, so as the operator acts it returns the curvature of the function, showing us that the kinetic energy operator is proportional to a function's curvature. Thus, solutions with tighter curvature will have higher energies than slowly varying functions.
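A small numerical sketch of the curvature idea, using two illustrative test functions, sin(x) and sin(3x): the finite-difference second derivative, and hence the kinetic-energy operator, returns a value roughly nine times larger for the more tightly curved function.

```python
import numpy as np

x = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
dx = x[1] - x[0]

def kinetic_energy(phi, dx, hbar=1.0, m=1.0):
    """Estimate <phi| -hbar^2/(2m) d^2/dx^2 |phi> with finite differences."""
    d2phi = np.gradient(np.gradient(phi, dx), dx)     # numerical second derivative
    return -hbar ** 2 / (2 * m) * np.sum(phi * d2phi) * dx

slow = np.sin(x)        # gently curving function
fast = np.sin(3 * x)    # same amplitude, tighter curvature

print(kinetic_energy(slow, dx))   # roughly pi/2
print(kinetic_energy(fast, dx))   # roughly 9*pi/2, nine times larger
```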

Ideally, we want to solve the time-independent Schrödinger equation:

$\hat{H}\phi = E\phi \quad\Rightarrow\quad -\frac{\hbar^2}{2m}\frac{\partial^2\phi(x)}{\partial x^2} = E\,\phi(x)$

What solves this? Planewaves! As it turns out, planewaves are a common solution in quantum mechanics!

$\phi(x) = Ae^{ikx} + Be^{-ikx}$

Here we can see that our eigenvalues are $E = \frac{\hbar^2 k^2}{2m}$; breaking the equation up term by term gives us:

$\hat{T}\left(Ae^{ikx}\right) = \frac{\hbar^2 k^2}{2m}Ae^{ikx}, \qquad \hat{T}\left(Be^{-ikx}\right) = \frac{\hbar^2 k^2}{2m}Be^{-ikx}$

These eigenvalues are consistent with our earlier finding that:

$E = \frac{p^2}{2m}, \qquad p = \hbar k$

Note: Our earlier equation had one component due to the single derivative present in the parent equation, while our current solution has two components due to the double derivative present in the parent equation.

Here the momentum is telling us what the value of $k$ is, and the $A$ and $B$ coefficients are telling us whether the wave travels to the right or to the left. As you may have guessed, the energy and the momentum are commensurate with each other; we can know them both at the same time. In quantum mechanics, if operators "commute" then they share eigenfunctions. We should notice that if $A$ or $B$ is zero, then the eigenfunctions of energy are also eigenfunctions of momentum. Generally, $\hat{A}$ and $\hat{B}$ commute if:

$[\hat{A},\hat{B}] = \hat{A}\hat{B} - \hat{B}\hat{A} = 0$

For example, let's look at momentum and energy, where $f(x)$ is some test function:

$[\hat{p}_x,\hat{T}]f = \hat{p}_x(\hat{T}f) - \hat{T}(\hat{p}_x f) = \frac{i\hbar^3}{2m}\frac{\partial^3 f}{\partial x^3} - \frac{i\hbar^3}{2m}\frac{\partial^3 f}{\partial x^3} = 0$

Since $[\hat{p}_x,\hat{T}]f = 0$ for any $f$, $\hat{p}_x$ and $\hat{T}$ commute.

Let's try a different operator. This time, let's compare position and momentum.

$[\hat{x},\hat{p}_x]f = \hat{x}(\hat{p}_x f) - \hat{p}_x(\hat{x}f) = -i\hbar x\frac{\partial f}{\partial x} + i\hbar f + i\hbar x\frac{\partial f}{\partial x} = i\hbar f$

Here $[\hat{x},\hat{p}_x] = i\hbar \neq 0$, meaning that $\hat{x}$ and $\hat{p}_x$ do not commute. This means that momentum and position do not commute and thus do not share eigenfunctions. As it so happens, this is all tied to observation and the fundamental uncertainty in our knowledge.
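Both commutators can be verified symbolically; this is a minimal sketch using an arbitrary test function $f(x)$ and treating the operators as functions acting on it.

```python
import sympy as sp

x = sp.symbols('x', real=True)
hbar, m = sp.symbols('hbar m', positive=True)
f = sp.Function('f')(x)                                # arbitrary test function

p = lambda g: -sp.I * hbar * sp.diff(g, x)             # momentum operator
T = lambda g: -hbar ** 2 / (2 * m) * sp.diff(g, x, 2)  # kinetic energy operator
X = lambda g: x * g                                    # position operator

# [T, p] f = T(p f) - p(T f): vanishes, so kinetic energy and momentum commute
print(sp.simplify(T(p(f)) - p(T(f))))                  # 0

# [x, p] f = x(p f) - p(x f): equals i*hbar*f, so position and momentum do not
print(sp.simplify(X(p(f)) - p(X(f))))                  # I*hbar*f(x)
```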

Recall the Heisenberg Uncertainty Principle:

$\Delta x\,\Delta p_x \ge \frac{\hbar}{2}$

When operators commute we say that the observables associated with the operators are "compatible", meaning that they can be measured simultaneously to arbitrary precision. (This is related to the Schwarz inequality...) Without proof, I will tell you that:

If $[\hat{A},\hat{B}] \neq 0$, then $\Delta A\,\Delta B \ge \frac{1}{2}\left|\langle[\hat{A},\hat{B}]\rangle\right|$, where $\langle\;\rangle$ refers to the "expectation value".

So, for $\hat{x}$ and $\hat{p}_x$ (working with $[\hat{x},\hat{p}_x] = i\hbar$), $\Delta x\,\Delta p_x \ge \frac{\hbar}{2}$. *see B&J p.215
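As a numerical sketch of the bound, consider a Gaussian wave packet (an illustrative choice that happens to saturate the inequality) and estimate $\Delta x$ and $\Delta p$ directly from the discretized wave function, with $\hbar$ set to 1.

```python
import numpy as np

hbar = 1.0
sigma = 1.0                                    # illustrative packet width
x = np.linspace(-10, 10, 4001)
dx = x[1] - x[0]

# Normalized Gaussian wave packet (real, centered at x = 0)
psi = (2 * np.pi * sigma ** 2) ** -0.25 * np.exp(-x ** 2 / (4 * sigma ** 2))

expect = lambda integrand: np.sum(integrand) * dx      # simple numerical integral

x_var = expect(psi * x ** 2 * psi) - expect(psi * x * psi) ** 2
d2psi = np.gradient(np.gradient(psi, dx), dx)
p_var = expect(psi * (-hbar ** 2) * d2psi)             # <p> = 0 for a real psi

print(np.sqrt(x_var) * np.sqrt(p_var))                 # close to hbar/2 = 0.5
```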

This is a BIG DEAL! It means that it is impossible to simultaneously know certain things. (Remember our thought experiment from Chapter 2?) What's more, this is purely a quantum effect. Consider again, momentum. What if we precisely measure the momentum to be $p_0 = \hbar k_0$? Then the particle's wave function is $\psi(x) = Ae^{ik_0x}$.

Remember, in the probabilistic interpretation:

$P(x)\,dx = |\psi(x)|^2\,dx = \psi^*\psi\,dx$

<FIGURE> "Incompatible Observables" (The probability density is the constant value $|A|^2$.)

$|\psi|^2 = A^*e^{-ik_0x}\,Ae^{ik_0x} = |A|^2$

But $A$ is just the normalization constant, so the probability distribution appears as a constant (FIGURE). If we know $p$ precisely, then we know nothing about $x$! There is equal probability of finding the particle anywhere in the range $-\infty < x < \infty$.

Thus, $x$ and $p$ are incompatible observables.

Postulate II

A measurement of observable $A$ that yields the value $a_n$ leaves the system in the state $\phi_n$.

We say that the measurement "collapses the wave function" to $\phi_n$, where $\phi_n$ is the eigenfunction of the particular value measured. Immediate subsequent measurements will thus yield the same value $a_n$, as the wave function will remain collapsed onto that eigenfunction until another property is measured, as seen in Chapter 2.

What is important here? Before the initial measurement, the expectation of the measurement is given statistically from $\psi$, a superposition of possible states. After the act of measuring, $\psi = \phi_n$, one particular state, for subsequent measurements. Note that this is very similar to solving partial differential equations: when solving a partial differential equation you get a general solution that is a linear superposition of all possible solutions, which is analogous to what we see here.
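Here is a toy numerical sketch of collapse: the three eigenvalues and the coefficients of the superposition are invented purely for illustration, and outcomes are drawn with probability $|c_n|^2$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Invented eigenvalues a_n and coefficients c_n with |c_n|^2 = 0.5, 0.3, 0.2
a = np.array([1.0, 2.0, 3.0])
c = np.sqrt(np.array([0.5, 0.3, 0.2]))
prob = np.abs(c) ** 2

# First measurement: outcome a_n is drawn with probability |c_n|^2
n = rng.choice(len(a), p=prob)
print("measured:", a[n])

# Collapse: the state is now the single eigenfunction phi_n ...
c_after = np.zeros_like(c)
c_after[n] = 1.0

# ... so an immediate repeat measurement returns the same value with certainty
print("repeat:  ", a[rng.choice(len(a), p=np.abs(c_after) ** 2)])
```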

Postulate III

There exists a state function, called the "wave function", $\Psi$, that represents the state of the system at any given instant. All the information we could know about the system is contained in this state function, which is continuous and differentiable.

For any observable, $a$, we can find the expectation value, $\langle a \rangle$, for measuring $a$ from $\Psi$:

$\langle a \rangle = \int \Psi^*\,\hat{A}\,\Psi\; d\tau$

Here $\Psi^*$ is the complex conjugate of $\Psi$, and $d\tau$ is an abbreviation for the volume element $dx\,dy\,dz$.

Review of Statistics (and the meaning of the "expectation value", $\langle x \rangle$)

In statistics, $\langle x \rangle$ is the expectation value of $x$, and when all goes well in sampling theory:

$\langle x \rangle = \sum_i P_i\, x_i$

Within this framework, if you know all the possibilities then you can essentially write the state function for the system. Let's say I have a bag with 5 pennies, 3 dimes, and 2 quarters. The probability of me pulling any given coin type out of the bag is:

$P(\text{penny}) = \tfrac{5}{10} = 0.5, \qquad P(\text{dime}) = \tfrac{3}{10} = 0.3, \qquad P(\text{quarter}) = \tfrac{2}{10} = 0.2$

For a continuous probability distribution:

$\langle x \rangle = \int P(x)\, x\; dx$
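A quick numerical check of the coin example, taking the coin values in cents:

```python
# Coin values in cents and how many of each are in the bag
values = {"penny": 1, "dime": 10, "quarter": 25}
counts = {"penny": 5, "dime": 3, "quarter": 2}

total = sum(counts.values())
P = {coin: n / total for coin, n in counts.items()}

# Expectation value <x> = sum_i P_i x_i
expectation = sum(P[coin] * values[coin] for coin in values)
print(P)              # {'penny': 0.5, 'dime': 0.3, 'quarter': 0.2}
print(expectation)    # 8.5 cents per draw, on average
```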

State Functions in Quantum Mechanics

Applying this statistical expectation value to our quantum state function gives us:

$\langle a \rangle = \int \Psi^*\,\hat{A}\,\Psi\; d\tau$

If $\Psi$ is an eigenfunction of $\hat{A}$, so that $\hat{A}\Psi = a_n\Psi$, then since $a_n$ is just a number we can pull it out of the integral and simplify to $\langle a \rangle = a_n\int\Psi^*\Psi\,d\tau = a_n$.

Postulate IV

The state function, $\Psi$, develops according to the equation:

$i\hbar\frac{\partial\Psi}{\partial t} = \hat{H}\Psi$

This is the time-dependent Schrödinger equation, and it holds in the non-relativistic regime. (Note that this equation is a postulate; there is no proof of it.) As it happens, to account for relativity we either fix our solutions by perturbation methods or instead solve the Dirac equation:

$i\hbar\frac{\partial\Psi}{\partial t} = \left(c\,\boldsymbol{\alpha}\cdot\hat{\mathbf{p}} + \beta mc^2\right)\Psi$
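Here is a minimal numerical sketch of time evolution under the time-dependent Schrödinger equation for a two-level system; the Hamiltonian matrix, the time step, and $\hbar = 1$ are arbitrary illustrative choices.

```python
import numpy as np

hbar = 1.0
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])                      # arbitrary Hermitian Hamiltonian

# Propagator U(dt) = exp(-i H dt / hbar), built from the eigendecomposition of H
dt = 0.1
E, V = np.linalg.eigh(H)
U = V @ np.diag(np.exp(-1j * E * dt / hbar)) @ V.conj().T

psi = np.array([1.0, 0.0], dtype=complex)       # initial state
for _ in range(100):
    psi = U @ psi                               # step the wave function forward in time

print(np.abs(psi) ** 2)                         # occupation probabilities after t = 10
print(np.vdot(psi, psi).real)                   # total probability stays 1 (unitary evolution)
```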

These four postulates give us the basis for everything we do in Quantum Mechanics, and the reason they work out is tied to linear Hermitian operators. The solution to the eigenvalue equation has special properties, wherein the eigenfunctions are orthonormal. For an arbitrary system with bound states:

$\hat{A}\phi_n = a_n\phi_n$; where $n = 1, 2, 3, \ldots$, and $a_n$ is the eigenvalue which corresponds to the eigenfunction $\phi_n$.

Orthonormality

An orthonormal set of functions satisfies:

$\int \phi_m^*\,\phi_n\; d\tau = \delta_{mn}$

Here $\delta_{mn}$ is the Kronecker delta ($\delta_{mn} = 1$ when $m = n$ and $0$ otherwise). This property is a consequence of Sturm-Liouville theory, whereby the set of eigenfunctions, $\{\phi_n\}$, spans Hilbert space (sometimes only a sub-space), the function-space where $\Psi$ lives. Hilbert space can be thought of as an equivalent of Euclidean space, where vectors live, which has some set of vectors $\{\hat{e}_i\}$. If that set of vectors is orthonormal and spans the space, then they can act as a basis for all other vectors in that space, and we can write any arbitrary vector as a sum of these basis vectors.
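As a numerical check of orthonormality, here is a sketch using the ring (Born-von Karman) planewave eigenfunctions $\phi_n(x) = e^{ik_nx}/\sqrt{L}$ with $k_n = 2\pi n/L$; the box length $L = 1$ is an arbitrary choice.

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 2000, endpoint=False)
dx = x[1] - x[0]

def phi(n):
    """Normalized planewave eigenfunction on a ring of circumference L."""
    return np.exp(2j * np.pi * n * x / L) / np.sqrt(L)

# The overlap integrals reproduce the Kronecker delta
for m in (-1, 0, 1):
    for n in (-1, 0, 1):
        overlap = np.sum(np.conj(phi(m)) * phi(n)) * dx
        print(m, n, round(abs(overlap), 6))    # 1 when m == n, 0 otherwise
```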

Those who have taken linear algebra might also remember a bunch of rules about eigenvalues, eigenvectors, and so on. Well, they all apply to what you're going to see here, and in fact there is a matrix notation that allows one to directly map all of quantum mechanics onto sets of matrices and vectors.

Hilbert Space

With this orthonormality property, we can express $\Psi$ using the set $\{\phi_n\}$ as a basis:

$\Psi = \sum_n c_n\,\phi_n$

Just as with Euclidean space, the coefficients $c_n$ are the projections of $\Psi$ onto the $\phi_n$. The value of this is that we can solve for $c_n$ by taking the equivalent of an inner product (dot product):

$c_n = \int \phi_n^*\,\Psi\; d\tau$

The fact that we can have a basis which is orthonormal and spans the space is very important: it allows us to write the wave function, gives us a way to describe it in Hilbert space, and lets us describe the coefficients as the projection of the wave function onto each particular eigenfunction.

Think back to expectation values, where $\langle a \rangle = \int \Psi^*\hat{A}\,\Psi\, d\tau$. Expanding $\Psi$ in the basis and working term by term:

$\langle a \rangle = \int\Big(\sum_m c_m^*\phi_m^*\Big)\hat{A}\Big(\sum_n c_n\phi_n\Big)d\tau = \sum_{m,n} c_m^*\,c_n\,a_n \int \phi_m^*\phi_n\, d\tau = \sum_{m,n} c_m^*\,c_n\,a_n\,\delta_{mn}$

Thus,

$\langle a \rangle = \sum_n |c_n|^2\, a_n$

Therefore the probability of measuring a particular value $a_n$ is $|c_n|^2$, given by the coefficient which is the projection of the wave function onto that particular eigenfunction. If you think about this physically in vector space, it kind of makes sense! We're saying that if I have a vector that's mostly in the $\hat{e}_1$ direction, then it's going to have behavior that's also "mostly" in the $\hat{e}_1$ direction, while there is still some probability of measuring it along the other directions. So, when we talk about superposition, it's a linear sum of eigenfunctions. Remembering that with each eigenfunction there is a coefficient, which is the projection of the wave function onto that eigenfunction, this tells us the probability of measuring any particular value.
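Here is a short numerical sketch of these projections, reusing the ring planewaves from above with an invented superposition state.

```python
import numpy as np

L = 1.0
x = np.linspace(0, L, 2000, endpoint=False)
dx = x[1] - x[0]
phi = lambda n: np.exp(2j * np.pi * n * x / L) / np.sqrt(L)

# An invented normalized superposition of the n = 0, 1, 2 eigenfunctions
c_true = np.array([0.5, 0.5j, np.sqrt(0.5)])
psi = sum(c * phi(n) for n, c in enumerate(c_true))

# Projection c_n = integral(phi_n^* psi dx) recovers each coefficient
c = np.array([np.sum(np.conj(phi(n)) * psi) * dx for n in range(3)])

print(np.round(c, 3))               # approximately [0.5, 0.5j, 0.707]
print(np.round(np.abs(c) ** 2, 3))  # probabilities [0.25, 0.25, 0.5], which sum to 1
```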

Return to Stern-Gerlach

We have some operator, $\hat{S}_z$, which operates on some function, $\phi$, and returns the value $\pm\frac{\hbar}{2}$. This system has only two solutions (in the case of the silver atom):

$\hat{S}_z\phi_{\uparrow} = +\frac{\hbar}{2}\phi_{\uparrow}, \qquad \hat{S}_z\phi_{\downarrow} = -\frac{\hbar}{2}\phi_{\downarrow}$

When we had that initial beam of atoms passing through vacuum, we initially didn't know anything about the state; it was randomized:

$\psi = \frac{1}{\sqrt{2}}\left(\phi_{\uparrow} + \phi_{\downarrow}\right)$

This says that the probability of measuring each outcome is 50/50! Furthermore, the wave function is normalized, so the sum of the probabilities is equal to one. If this were not true, we would have to go through and scale the vector until it is normalized. Now let's say we measure $\hat{S}_z$ and find an "up" spin, meaning that $\psi$ has collapsed to $\phi_{\uparrow}$. Now that we have made this measurement, the probability of subsequently finding an "up" result is one and the probability of finding a "down" result is zero.

What about $\hat{S}_x$?

$\hat{S}_x\chi_{\uparrow} = +\frac{\hbar}{2}\chi_{\uparrow}; \qquad \hat{S}_x\chi_{\downarrow} = -\frac{\hbar}{2}\chi_{\downarrow}$

This system has two possible results, analogous to the ones shown with $\hat{S}_z$. We can write both systems together as:

$\psi = \frac{1}{\sqrt{2}}\left(\phi_{\uparrow} + \phi_{\downarrow}\right) = \frac{1}{\sqrt{2}}\left(\chi_{\uparrow} + \chi_{\downarrow}\right)$

The sets $\{\phi\}$ and $\{\chi\}$ are incompatible. When we measure one observable, the state vector snaps to one of its basis functions; when we then measure the other, it snaps to one of the other set's basis functions.

Most importantly, we can collapse $\psi$ into either a $\phi$ state or a $\chi$ state, but not both. These two operators are incommensurate: they don't commute, and if they don't commute they must form different basis sets within Hilbert space. We can write the two expansions side by side, as each set is still equal to the wave function, but information about one set does not tell us anything about the other set.
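Finally, a minimal numerical sketch of this two-level algebra in the standard spin-1/2 matrix representation, with $\hbar$ set to 1.

```python
import numpy as np

hbar = 1.0
Sz = hbar / 2 * np.array([[1, 0], [0, -1]], dtype=complex)
Sx = hbar / 2 * np.array([[0, 1], [1, 0]], dtype=complex)

# Sz and Sx do not commute, so they cannot share a basis of eigenfunctions
print(Sz @ Sx - Sx @ Sz)                    # nonzero matrix (equal to i*hbar*Sy)

# Eigenbases of the two operators (eigh orders columns "down", then "up")
_, phi = np.linalg.eigh(Sz)
_, chi = np.linalg.eigh(Sx)

# Project the Sz "up" state onto the Sx basis: each outcome has probability 1/2
phi_up = phi[:, 1]
print(np.abs(chi.conj().T @ phi_up) ** 2)   # [0.5, 0.5]
```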

The collapse of $\psi$ to either a $\phi$ state or a $\chi$ state is unique to quantum mechanics and is why we can't simultaneously know these two observables!