In my previous post on uncertainty principles, the lower bounds were on the standard deviations of self-adjoint linear operators on a Hilbert space *H*. The most general such inequality was the Schrödinger inequality

*σ*_{A}^{2} *σ*_{B}^{2} ≥ | (〈{*A*, *B*}〉 − 2〈*A*〉〈*B*〉) ⁄ 2 |^{2} + | 〈[*A*, *B*]〉 ⁄ 2*i* |^{2},
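As a quick numerical sanity check of this inequality, the sketch below (assuming NumPy; the choice of the Pauli matrices σ_x and σ_y as *A* and *B* is just a convenient example, not anything from the theory above) computes both sides for a random pure state:

```python
import numpy as np

# Two concrete self-adjoint operators on C^2: the Pauli matrices sigma_x, sigma_y
A = np.array([[0, 1], [1, 0]], dtype=complex)
B = np.array([[0, -1j], [1j, 0]], dtype=complex)

# A random normalized state psi
rng = np.random.default_rng(0)
psi = rng.normal(size=2) + 1j * rng.normal(size=2)
psi /= np.linalg.norm(psi)

def expect(M, psi):
    # <M> = <psi, M psi>; real when M is self-adjoint
    return (psi.conj() @ M @ psi).real

var_A = expect(A @ A, psi) - expect(A, psi) ** 2   # sigma_A^2
var_B = expect(B @ B, psi) - expect(B, psi) ** 2   # sigma_B^2

anti = psi.conj() @ (A @ B + B @ A) @ psi          # <{A, B}>
comm = psi.conj() @ (A @ B - B @ A) @ psi          # <[A, B]>

lhs = var_A * var_B
rhs = abs((anti - 2 * expect(A, psi) * expect(B, psi)) / 2) ** 2 \
    + abs(comm / 2j) ** 2
assert lhs >= rhs - 1e-9   # the Schrödinger inequality holds
```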

and the classic special case was the Kennard form of the Heisenberg Uncertainty Principle, in which *A* and *B* are the position and momentum operators *Q* and *P* respectively:

*σ*_{P} *σ*_{Q} ≥ ℏ⁄2.

One problem with the Robertson and Schrödinger bounds, though, is that the lower bound depends upon the state *ψ*; this deficiency is obscured in the Kennard inequality because the commutator of the position and momentum operators is a constant multiple of the identity, namely [*Q*, *P*] = *i*ℏ. It would be nice to have uncertainty principles for more general settings in which the lower bound does not depend on the quantum state. Also, we don’t have to restrict ourselves to standard deviation as the only measure of non-concentration of a (measurement of a) distribution.

Let *A* and *B* be self-adjoint linear operators on a complex Hilbert space *H* of finite dimension *d*. This basically means that we’re considering the quantum mechanics of a system with *d* distinct (deterministic) states; up to re-weighting, the space of possible quantum states is ℂ^{d}. Let *a*_{1}, …, *a*_{d} and *b*_{1}, …, *b*_{d} be systems of orthonormal eigenstates (elements of *H*) for *A* and *B* respectively. Given a quantum state *ψ* let

*p*_{j} := |〈*a*_{j}, *ψ*〉|^{2} and *q*_{j} := |〈*b*_{j}, *ψ*〉|^{2}.

Note that, by orthonormality, the *p*_{j} and *q*_{j} are sequences of non-negative numbers each summing to 1, so they can be seen as probability distributions *p* and *q* on the finite set {1, …, *d*}. The **entropy** of any probability distribution *p* — in particular these distributions *p* and *q* associated to *A*, *B* and *ψ* — on this finite set is

Ent(*p*) := − ∑_{j} *p*_{j} log *p*_{j}.

In particular, we might write

Ent(*A*, *ψ*) := − ∑_{j} |〈*a*_{j}, *ψ*〉|^{2} log |〈*a*_{j}, *ψ*〉|^{2}

and similarly for Ent(*B*, *ψ*).
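These definitions translate directly into code. The sketch below (assuming NumPy; the function names are my own) computes the distribution *p* and its entropy for a toy two-dimensional example, taking the eigenbasis of *A* to be the standard basis:

```python
import numpy as np

def entropy(dist, eps=1e-12):
    """Shannon entropy -sum_j p_j log p_j, with the convention 0 log 0 = 0."""
    d = dist[dist > eps]
    return float(-(d * np.log(d)).sum())

def measurement_dist(eigvecs, psi):
    """p_j = |<a_j, psi>|^2, with the orthonormal eigenstates a_j as columns."""
    return np.abs(eigvecs.conj().T @ psi) ** 2

# Toy example: standard basis as the eigenbasis, equal-superposition state
a_basis = np.eye(2, dtype=complex)
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)

p = measurement_dist(a_basis, psi)   # the uniform distribution [0.5, 0.5]
ent = entropy(p)                     # log 2, the maximum in dimension 2
```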

Maassen and Uffink establish the following lower bound on the *sum* of the two entropies (as opposed to the earlier inequalities on the *products* of standard deviations):

Ent(*A*, *ψ*) + Ent(*B*, *ψ*) ≥ − 2 log *c*(*A*, *B*).

Here *c*(*A*, *B*) is a non-negative constant that depends only on *A* and *B* and not on the state *ψ*:

*c*(*A*, *B*) := max_{j, k} |〈*a*_{j}, *b*_{k}〉|.
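Since *c*(*A*, *B*) only involves the two eigenbases, it is easy to compute. A minimal sketch, assuming NumPy (the function name and the choice of the Hadamard basis for the toy check are my own):

```python
import numpy as np

def c_const(a_basis, b_basis):
    """c(A, B) = max_{j,k} |<a_j, b_k>|, with eigenstates as columns."""
    return float(np.max(np.abs(a_basis.conj().T @ b_basis)))

# Toy check in d = 2: standard basis vs. the Hadamard basis,
# where every inner product has modulus 1/sqrt(2)
a_basis = np.eye(2, dtype=complex)
b_basis = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
c = c_const(a_basis, b_basis)   # 1/sqrt(2), giving the entropy bound log 2
```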

Note that if *A* and *B* have an eigenstate in common, then this maximum is 1, and the lower bound on the sum of the entropies is 0. At the other extreme, all the inner products |〈*a*_{j}, *b*_{k}〉| equal 1⁄√*d*, in which case the two eigenstate bases are said to be **mutually unbiased**, and we get

Ent(*A*, *ψ*) + Ent(*B*, *ψ*) ≥ log *d* > 0.

(It makes sense to assume that *d* ≥ 2, and hence that log *d* > 0, since a system with only one possible state is rather uninteresting!)
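The mutually unbiased case can also be checked numerically. The sketch below (again assuming NumPy) pairs the standard basis with the discrete-Fourier basis, a standard example of mutually unbiased bases, and verifies the Maassen–Uffink bound for a random state:

```python
import numpy as np

d = 4
# Standard basis for A; discrete-Fourier basis for B (mutually unbiased)
a_basis = np.eye(d, dtype=complex)
b_basis = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) \
    / np.sqrt(d)

# c(A, B) = 1/sqrt(d), so the entropy bound is -2 log c = log d
c = np.max(np.abs(a_basis.conj().T @ b_basis))

# A random normalized state psi
rng = np.random.default_rng(1)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

def entropy(dist, eps=1e-12):
    dd = dist[dist > eps]
    return float(-(dd * np.log(dd)).sum())

p = np.abs(a_basis.conj().T @ psi) ** 2
q = np.abs(b_basis.conj().T @ psi) ** 2
total = entropy(p) + entropy(q)
assert total >= np.log(d) - 1e-9   # the Maassen-Uffink bound
```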

There are extensions to other entropies, such as Rényi entropies. The extension to self-adjoint operators on *infinite*-dimensional Hilbert spaces seems to be a tricky issue, though, as far as I can tell.