## Angular momentum

Apologies for being so quiet lately. I thought that I’d share this little blog post examining sayabiki (the pulling back of the scabbard with the left hand to draw the Japanese katana with the right hand) from the physicist’s perspective of conservation of angular momentum:

Sayabiki and Angular Momentum

This is an interesting point, although I think that it is more relevant to, say, the dynamics of a throw in Aikidō than to drawing the sword in Iaidō. While I don’t doubt that using sayabiki to exploit the conservation of angular momentum does allow a more powerful draw, I think that the primary reason for sayabiki is simple mechanics: if one does not retract the scabbard, then the sword will not come cleanly out of the scabbard mouth; if the sword comes out at all, its cutting tip will damage the scabbard, and quite possibly the Iaidōka’s hand as well!

## Torsors and affine spaces, or: I keep forgetting where I started

The topic of this post is torsors, which occur naturally throughout mathematics and physics whenever we have natural notions of relative — but not absolute — sizes, positions, temperatures and so forth. This post owes a lot to this 2009 post by John Baez, and so I’ll shamelessly borrow some (but not all) examples from him:

• Even after choosing a unit of voltage (e.g. the SI unit, the volt), it makes no sense to say that the voltage at some point p in a circuit is, say, 7V. It does, however, make sense to say that the voltage at p, relative to that at another point q, is 7V. Relative to that chosen reference value at q, voltages are real numbers — but we are free to change the reference point, and without a reference point, voltages are not real numbers, but they do live in a real torsor.
• A good geographical example is longitude: we habitually measure longitude on Earth in degrees relative to the Greenwich meridian. However, the choice of the Greenwich meridian is basically arbitrary, and if we were to change to the Cairo, Paris, or Washington meridian instead, it would not change the difference in longitude between any two points on Earth. Longitudes are not elements of the circle group S1 (angles); it is longitude differences that are angles in S1, whereas longitudes live in an S1-torsor.
• The previous two examples both indicate that, whatever a “torsor” is, it’s like a well-behaved algebraic structure (like the real line ℝ or the circle group S1) in which the usual reference point, the origin (0 in ℝ and 1 in S1), has been “forgotten”. The usual setting of plane geometry, going all the way back to ancient Greece, is like this too: there is no preferred origin for plane Euclidean geometry, and you are free to work relative to one corner of your graph paper, or relative to some point on the ground in your Athenian sand-pit.
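The longitude example can be made concrete in a few lines. In this sketch (entirely my own; the function name and sample coordinates are illustrative, not from the post), re-coordinatising from the Greenwich meridian to the Paris meridian changes every longitude but no longitude difference:

```python
def longitude_difference(a_deg: float, b_deg: float) -> float:
    """Difference of two longitudes, normalised to (-180, 180] degrees."""
    d = (a_deg - b_deg) % 360.0
    return d - 360.0 if d > 180.0 else d

# Coordinates relative to the Greenwich meridian (degrees east).
greenwich = {"Cairo": 31.2, "Paris": 2.35, "Washington": -77.0}

# Re-coordinatise relative to the Paris meridian: subtract Paris's longitude.
paris = {city: lon - greenwich["Paris"] for city, lon in greenwich.items()}

# Every individual coordinate changed, but no pairwise difference did:
# differences are angles in S1, while the coordinates live in a torsor.
for a in greenwich:
    for b in greenwich:
        assert abs(longitude_difference(greenwich[a], greenwich[b])
                   - longitude_difference(paris[a], paris[b])) < 1e-12
print("differences agree under change of reference meridian")
```

Only `longitude_difference` is well-defined on the torsor itself; the individual coordinates depend on the (arbitrary) reference meridian.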

So… what’s going on here?

## Annihilation, creation, and ladder operators

These are some notes, mostly for my own benefit, on annihilation, creation, and ladder operators in quantum mechanics, with a few remarks towards the end on angular momentum, spin and Clebsch–Gordan coefficients.

First, the abstract definition: if T, L: V → V are linear operators on a vector space V over a field K, then L is said to be a ladder operator for T if there is a scalar c ∈ K such that the commutator of T and L satisfies

$\displaystyle [T, L] := TL - LT = cL.$

The operator L is called a raising operator for T if c is real and positive, and a lowering operator for T if c is real and negative.

The motivation behind this definition is that if (λ, v) ∈ K × V is an eigenpair for T (i.e. Tv = λv), then a quick calculation reveals that (λ + c, Lv) is an eigenpair for T:

$\displaystyle T(Lv) = (TL)v = (LT + [T,L])v = L(Tv) + cLv = (\lambda + c) (L v).$

Ladder operators come up in quantum mechanics because many of the elementary operations on quantum systems act as ladder operators and increase or decrease the eigenvalues of other operators. Those eigenvalues often encode important information about the system, and the increments and decrements provided by the ladder operators often come in discrete, rather than continuous, values. Annihilation and creation operators are a prime example of this phenomenon.
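As a concrete check of the definition (a sketch of my own, not from the post): on V = ℂⁿ, take T to be the number operator N = a†a of a truncated harmonic oscillator and L = a† the creation operator, for which [N, a†] = a†, i.e. c = 1:

```python
import numpy as np

# Truncated harmonic oscillator on C^n: a is annihilation, a† creation,
# N = a† a the number operator. Then a† is a ladder operator for N with c = 1.
n = 6
a = np.diag(np.sqrt(np.arange(1.0, n)), k=1)   # annihilation operator
adag = a.conj().T                              # creation operator a†
N = adag @ a                                   # number operator: diag(0, ..., n-1)

# The ladder relation [N, a†] = N a† - a† N = 1 · a† holds exactly here.
assert np.allclose(N @ adag - adag @ N, adag)

# Eigenvalue shift: if N v = k v, then N (a† v) = (k + 1) (a† v).
k = 2
v = np.zeros(n)
v[k] = 1.0                # eigenvector of N with eigenvalue k
w = adag @ v              # the raised vector
assert np.allclose(N @ w, (k + 1) * w)
print("[N, a†] = a† and the eigenvalue shift both verified")
```

(The truncation to ℂⁿ is harmless for these two checks, although other identities, such as [a, a†] = I, do fail in the top corner of a truncated matrix.)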

## Preprint: Stratified graphene-noble metal systems for low-loss plasmonics applications

Lauren Rast, Vinod Tewary and I have just posted to the arXiv a preprint of our joint paper “Stratified graphene-noble metal systems for low-loss plasmonics applications”, which has been accepted for publication in Physical Review B later this year.

Graphene — which is basically a sheet of the carbon found in graphite, just a few atomic layers thick — is often hailed in the press as a “wonder material”, usually because of material properties such as its exceptionally high strength-to-weight ratio. Another area of interest, however, is the study of the electronic and optical properties of graphene.

Graphene has a very high carrier mobility, meaning that if you excite it by shooting some light at it, then the electrons in the graphene sheet will wiggle about in a very pleasant manner (called a plasmon) and thereby transmit a large chunk of the energy contained in the incident light as electrical current through the graphene sheet. This is exactly what you want a solar cell to do if you want to generate electricity from sunlight — and a graphene-based “thin film” solar cell would be much thinner and lighter than today’s solar cells.

But there’s a tiny problem: graphene doesn’t do this very well at visible frequencies of light. Silver, on the other hand, does respond very sympathetically to visible light, but is much more lossy. But, a-ha! What if one were to make a graphene and silver sandwich? Could one have the best of both worlds, i.e. absorption in the visible range like silver, and low electronic energy loss like graphene?

Our paper is a theoretical and numerical exploration of such sandwich structures, and we show that such composites do go a long way towards this best of both worlds.

## Uncertainty Principles II: Entropy Bounds

In my previous post on uncertainty principles, the lower bounds were on the standard deviations of self-adjoint linear operators on a Hilbert space H. The most general such inequality was the Schrödinger inequality

$\displaystyle \sigma_A^2 \sigma_B^2 \geq \left| \frac{\langle \{A, B\} \rangle - 2 \langle A \rangle \langle B \rangle}{2} \right|^2 + \left| \frac{\langle [A, B] \rangle}{2i} \right|^2,$

and the classic special case was the (Kennard form of) the Heisenberg Uncertainty Principle, in which A and B are the position and momentum operators Q and P respectively:

$\displaystyle \sigma_P \sigma_Q \geq \frac{\hbar}{2}.$

One problem with the Robertson and Schrödinger bounds, though, is that the lower bound depends upon the state ψ; this deficiency is obscured in the Kennard inequality because the commutator of the position and momentum operators is a constant, namely −iℏ. It would be nice to have uncertainty principles for more general settings in which the lower bound does not depend on the quantum state. Also, we don’t have to restrict ourselves to standard deviation as the only measure of non-concentration of a (measurement of a) distribution.
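As a quick illustration that the Kennard bound really is state-independent (indeed, centred Gaussians of every width saturate it), here is a small quadrature sketch, entirely my own, in units with ℏ = 1; `kennard_product` is a name I made up:

```python
import numpy as np

# Check that sigma_P * sigma_Q = hbar/2 for centred Gaussians of *every*
# width, working in units where the reduced Planck constant hbar = 1.
hbar = 1.0
x = np.linspace(-40.0, 40.0, 8001)
dx = x[1] - x[0]

def kennard_product(s: float) -> float:
    """sigma_P * sigma_Q for a centred Gaussian of width s, by quadrature."""
    psi = (2.0 * np.pi * s**2) ** -0.25 * np.exp(-x**2 / (4.0 * s**2))
    dpsi = np.gradient(psi, dx)                          # numerical psi'
    sigma_Q = np.sqrt(np.sum(x**2 * psi**2) * dx)        # <Q> = 0 here
    sigma_P = np.sqrt(np.sum((hbar * dpsi) ** 2) * dx)   # <P> = 0 here
    return sigma_P * sigma_Q

for s in (0.5, 1.0, 2.0):
    print(round(kennard_product(s), 3))   # ≈ 0.5 = hbar/2 for every width
```

Here σ_P is computed as ‖Pψ‖, which equals ⟨P²⟩^(1/2) since ⟨P⟩ = 0 for a real, centred ψ.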

## Uncertainty Principles

I have been thinking recently about game theory, and some of the reading I’ve been doing has been on connections between mixed strategies in game theory and uncertainty principles — the most famous of which is probably the Heisenberg Uncertainty Principle from quantum mechanics. This inspired me to dust off my Hilbert space theory and prove the famous principle for myself.

In quantum mechanics, a state of a quantum system is just an element ψ of a suitable Hilbert space H that has unit norm, i.e. one for which ‖ψ‖ := √〈ψ, ψ〉 = 1. Typically, H is a Lebesgue L2 space of complex-valued functions like L2(ℝ; ℂ) for a single particle with one degree of freedom — moving on a line, say — in which case the inner product is

$\displaystyle \langle \psi, \varphi \rangle := \int_{\mathbb{R}} \psi(x) \overline{\varphi(x)} \, dx.$

The square of the absolute value of ψ is the probability density of the system, so you get the expected value of properties (“observables”) of the system by integrating them against |ψ|².

More precisely, the average of a linear operator A: H → H (in a particular state ψ) is denoted by 〈A〉 and is simply the inner product 〈Aψ, ψ〉. The standard deviation of A (again in a particular state ψ) is denoted by σA and defined to be 〈(A − 〈A〉)²〉^(1/2), by analogy with the standard deviation of a random variable in probability/statistics. When the standard deviation of A is small, you have a very precise/certain measurement of some aspect A of the system. Uncertainty principles crop up when you ask the question, “Can I have high-precision measurements of two aspects of the system at the same time?”
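Before moving on, here is a small numerical sketch (my own discretisation, not from the post) of these definitions, taking A to be the multiplication operator (Aψ)(x) = x ψ(x) and approximating the integrals on a finite grid:

```python
import numpy as np

# Compute <A> = <A psi, psi> and sigma_A = <(A - <A>)^2>^(1/2) on a grid,
# for A = multiplication by x and a Gaussian state psi.
x = np.linspace(-10.0, 10.0, 4001)
dx = x[1] - x[0]

psi = np.exp(-(x - 1.0) ** 2 / 4)                   # Gaussian centred at 1
psi = psi / np.sqrt(np.sum(np.abs(psi) ** 2) * dx)  # normalise: ||psi|| = 1

def expect(Apsi):
    """<A> = <A psi, psi>, with the inner product <f, g> = ∫ f ḡ dx."""
    return (np.sum(Apsi * np.conj(psi)) * dx).real

mean_A = expect(x * psi)                            # <A>
sigma_A = np.sqrt(expect((x - mean_A) ** 2 * psi))  # <(A - <A>)^2>^(1/2)
print(round(mean_A, 3), round(sigma_A, 3))          # 1.0 1.0 for this state
```

For this particular Gaussian, |ψ|² has mean 1 and variance 1, so the computed 〈A〉 and σA should both be (numerically very close to) 1.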

The answer turns out to depend on whether or not the two measurement operators “commute”, i.e. whether or not you get the same answer regardless of the order in which you do them. For any two operators A and B from H into itself, we define their commutator by [A, B] := AB − BA, and their anti-commutator by {A, B} := AB + BA.

The best-known uncertainty principle is Heisenberg’s, which concerns the position and momentum operators Q and P respectively. The position operator Q acts as (Qψ)(x) := xψ(x), and the momentum operator P acts as (Pψ)(x) := −iℏψ′(x). The constant ℏ is the reduced Planck constant; the important point is that it’s a real number that’s strictly greater than zero. Kennard’s formulation of the Heisenberg Uncertainty Principle is the inequality

$\displaystyle \sigma_P \sigma_Q \geq \frac{\hbar}{2},$

from which it follows that we cannot have arbitrarily precise measurements of position and momentum at the same time, since Kennard’s inequality provides a non-zero lower bound on the product of their standard deviations. Kennard’s principle follows from the easy relationship [P, Q] = −iℏ and the following uncertainty principle of Robertson for any two self-adjoint operators A and B:

$\displaystyle \sigma_A \sigma_B \geq \left| \frac{\langle [A, B] \rangle}{2i} \right|.$

Robertson’s principle, in turn, follows from an inequality of Schrödinger:

$\displaystyle \sigma_A^2 \sigma_B^2 \geq \left| \frac{\langle \{A, B\} \rangle - 2 \langle A \rangle \langle B \rangle}{2} \right|^2 + \left| \frac{\langle [A, B] \rangle}{2i} \right|^2.$

Schrödinger’s inequality is, it turns out, rather easy to prove. The only inequality you need is the Cauchy–Schwarz inequality (which is basically Inequality #1 in any introductory course on Hilbert spaces); the rest is a cunning change of variables at the beginning and a few lines of algebraic massage.

Proof. Fix an arbitrary state ψ in H. The key notational trick is to introduce f := (A − 〈A〉)ψ and similarly g := (B − 〈B〉)ψ. With this notation, σA = ‖f‖ and σB = ‖g‖. Thus, by the Cauchy–Schwarz inequality,

$\displaystyle \sigma_A^2 \sigma_B^2 = \|f\|^2 \|g\|^2 \geq |\langle f, g \rangle|^2.$

This is actually the only inequality that we need; the rest is all algebraic massage of the right-hand side of the above inequality in z := 〈f, g〉 and its complex conjugate z̄ = 〈g, f〉. In particular, re-write the right-hand side of the above application of the Cauchy–Schwarz inequality as

$\displaystyle |z|^2 = |\operatorname{Re}(z)|^2 + |\operatorname{Im}(z)|^2 = \left| \frac{z + \bar{z}}{2} \right|^2 + \left| \frac{z - \bar{z}}{2i} \right|^2$

and substitute in the following two easy facts, which use the self-adjointness of A and B:

$\displaystyle z := \langle f, g \rangle = \langle BA \rangle - \langle A \rangle \langle B \rangle$

and, similarly, z̄ = 〈g, f〉 = 〈AB〉 − 〈A〉〈B〉. When you do this, the Schrödinger inequality, as stated above, drops out very nicely. (The appearance of BA rather than AB in z only flips the sign of the commutator term, which the absolute value ignores.) ∎
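The whole chain of inequalities is easy to sanity-check numerically. The following sketch (my own, not from the post) draws random self-adjoint matrices and random unit states on ℂⁿ and verifies the Schrödinger inequality in each case:

```python
import numpy as np

# Verify the Schrödinger inequality on C^n for random self-adjoint A, B
# and random unit states psi, with inner product <f, g> = sum_i f_i * conj(g_i).
rng = np.random.default_rng(0)
n = 8

def random_hermitian(n: int) -> np.ndarray:
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2.0

def ev(T: np.ndarray, psi: np.ndarray) -> complex:
    """<T> = <T psi, psi>; note that np.vdot conjugates its *first* argument."""
    return np.vdot(psi, T @ psi)

for trial in range(100):
    A, B = random_hermitian(n), random_hermitian(n)
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)
    psi /= np.linalg.norm(psi)
    varA = ev(A @ A, psi) - ev(A, psi) ** 2        # sigma_A^2
    varB = ev(B @ B, psi) - ev(B, psi) ** 2        # sigma_B^2
    anti = ev(A @ B + B @ A, psi) - 2.0 * ev(A, psi) * ev(B, psi)
    comm = ev(A @ B - B @ A, psi)
    lhs = varA.real * varB.real
    rhs = abs(anti / 2.0) ** 2 + abs(comm / 2.0j) ** 2
    assert lhs >= rhs - 1e-10
print("Schrödinger inequality verified for 100 random (A, B, psi) triples")
```

Since the only inequality in the proof is Cauchy–Schwarz, the gap between the two sides closes exactly when f and g are linearly dependent; for random data the inequality is strict.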