Uncertainty Principles II: Entropy Bounds

In my previous post on uncertainty principles, the lower bounds were on (products of) the standard deviations of measurements of self-adjoint linear operators on a Hilbert space $H$. The most general such inequality was the Schrödinger inequality

$\sigma_A^2 \, \sigma_B^2 \;\ge\; \left| \frac{\langle \{A,B\} \rangle - 2 \langle A \rangle \langle B \rangle}{2} \right|^2 + \left| \frac{\langle [A,B] \rangle}{2i} \right|^2,$

and the classic special case was the Kennard form of the Heisenberg Uncertainty Principle, in which A and B are the position and momentum operators Q and P respectively:

$\sigma_P \, \sigma_Q \ge \hbar/2.$

One problem with the Robertson and Schrödinger bounds, though, is that the lower bound depends upon the state ψ; this deficiency is obscured in the Kennard inequality because the commutator of the momentum and position operators is a constant multiple of the identity, namely $[P, Q] = -i\hbar$. It would be nice to have uncertainty principles for more general settings in which the lower bound does not depend on the quantum state. Also, we don't have to restrict ourselves to standard deviation as the only measure of non-concentration of the distribution of a measurement.

Let A and B be self-adjoint linear operators on a complex Hilbert space $H$ of finite dimension $d$. This basically means that we're considering the quantum mechanics of a system with $d$ distinct (deterministic) states; up to re-weighting, the space of possible quantum states is $\mathbb{C}^d$. Let $a_1, \ldots, a_d$ and $b_1, \ldots, b_d$ be systems of orthonormal eigenstates (elements of $H$) for A and B respectively, which exist by the spectral theorem. Given a quantum state ψ, let

$p_j := |\langle a_j, \psi \rangle|^2 \quad\text{and}\quad q_j := |\langle b_j, \psi \rangle|^2.$
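As a concrete sketch of these definitions, here is a hypothetical two-dimensional example, with the Pauli matrices $Z$ and $X$ standing in for A and B and NumPy's Hermitian eigensolver supplying the eigenbases:

```python
import numpy as np

# Hypothetical example: A = Pauli Z, B = Pauli X, so d = 2.
A = np.array([[1, 0], [0, -1]], dtype=complex)
B = np.array([[0, 1], [1, 0]], dtype=complex)

# The columns of a_basis and b_basis are orthonormal eigenstates a_j, b_k.
_, a_basis = np.linalg.eigh(A)
_, b_basis = np.linalg.eigh(B)

psi = np.array([1, 0], dtype=complex)  # a unit-norm quantum state

# p_j = |<a_j, psi>|^2 and q_j = |<b_j, psi>|^2.
p = np.abs(a_basis.conj().T @ psi) ** 2
q = np.abs(b_basis.conj().T @ psi) ** 2
assert np.isclose(p.sum(), 1.0) and np.isclose(q.sum(), 1.0)
```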

Note that, by orthonormality of the eigenbases and the normalization of ψ, the $p_j$ and $q_j$ are sequences of non-negative numbers, each summing to 1, so they can be seen as probability distributions $p$ and $q$ on the finite set $\{1, \ldots, d\}$. The entropy of a probability distribution $p$ on this finite set (in particular, of the distributions $p$ and $q$ associated to A, B, and ψ) is

$\mathrm{Ent}(p) := -\sum_j p_j \log p_j.$
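In code this is nearly a one-liner, with the usual convention that $0 \log 0 = 0$ (its limiting value), which matters because some $p_j$ may vanish. A minimal sketch, using the natural logarithm as in the formula above:

```python
import numpy as np

def ent(p):
    """Shannon entropy -sum_j p_j log p_j (natural log, with 0 log 0 := 0)."""
    p = np.asarray(p, dtype=float)
    nz = p[p > 0]                 # drop zero entries: 0 log 0 contributes 0
    return float(-np.sum(nz * np.log(nz)))
```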

In particular, we might write

$\mathrm{Ent}(A, \psi) := -\sum_j |\langle a_j, \psi \rangle|^2 \log |\langle a_j, \psi \rangle|^2$

and similarly for Ent(B, ψ).
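Putting the two snippets above together (and assuming for simplicity that the eigenvalues are non-degenerate, so that the distribution does not depend on the choice of orthonormal eigenbasis), $\mathrm{Ent}(A, \psi)$ might be computed as:

```python
def ent_obs(A, psi):
    """Ent(A, psi): entropy of the measurement distribution of A in state psi.
    Reuses ent() from the previous snippet."""
    _, basis = np.linalg.eigh(A)  # columns are orthonormal eigenstates of A
    return ent(np.abs(basis.conj().T @ psi) ** 2)
```

For the Pauli example earlier, `ent_obs(A, psi)` is 0 (the state ψ is an eigenstate of $Z$), while `ent_obs(B, psi)` is $\log 2$.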

Maassen and Uffink establish the following lower bound on the sum of the two entropies (as opposed to the earlier inequalities on products of standard deviations):

$\mathrm{Ent}(A, \psi) + \mathrm{Ent}(B, \psi) \ge -2 \log c(A, B).$

Here $c(A, B)$ is a non-negative constant that depends only on A and B, not on the state ψ:

$c(A, B) := \max_{j,k} |\langle a_j, b_k \rangle|.$
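This constant is equally easy to compute numerically. The sketch below (reusing the helpers above, on a hypothetical random example) also spot-checks the Maassen–Uffink bound:

```python
def overlap_constant(A, B):
    """c(A, B) = max over j, k of |<a_j, b_k>| for the two eigenbases."""
    _, a_basis = np.linalg.eigh(A)
    _, b_basis = np.linalg.eigh(B)
    return float(np.max(np.abs(a_basis.conj().T @ b_basis)))

# Spot-check the entropic bound for a random Hermitian A, diagonal B, random psi.
rng = np.random.default_rng(0)
d = 4
M = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
A = M + M.conj().T
B = np.diag(np.arange(d)).astype(complex)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
assert ent_obs(A, psi) + ent_obs(B, psi) >= -2 * np.log(overlap_constant(A, B)) - 1e-9
```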

Note that if A and B have an eigenstate in common, then this maximum is 1, and the lower bound on the sum of the entropies is 0. The other extreme is when all the inner products $|\langle a_j, b_k \rangle|$ equal $1/\sqrt{d}$, in which case the two eigenstate bases are said to be mutually unbiased, and we get

$\mathrm{Ent}(A, \psi) + \mathrm{Ent}(B, \psi) \ge \log d > 0.$

(It makes sense to assume that $d \ge 2$, and hence that $\log d > 0$, since a system with only one possible state is rather uninteresting!)
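As a quick numerical check of the mutually unbiased case: the standard basis and the discrete Fourier basis form a classic mutually unbiased pair in every dimension, since each entry of the unitary DFT matrix has modulus $1/\sqrt{d}$, so the $\log d$ bound should hold for any state. (In the Pauli example earlier, the sum of the entropies was exactly $\log 2$, so the bound is sharp there.)

```python
d = 5
# Unitary DFT matrix: its columns form a basis mutually unbiased
# with respect to the standard basis.
F = np.exp(2j * np.pi * np.outer(np.arange(d), np.arange(d)) / d) / np.sqrt(d)

rng = np.random.default_rng(1)
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)

p = np.abs(psi) ** 2                # standard-basis outcome distribution
q = np.abs(F.conj().T @ psi) ** 2   # Fourier-basis outcome distribution
assert ent(p) + ent(q) >= np.log(d) - 1e-9
```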

There are extensions to other entropies, such as Rényi entropies. The extension to self-adjoint operators on infinite-dimensional Hilbert spaces seems to be a tricky issue, though, as far as I can tell.
