I shall make no attempt at maintaining a uniform level of mathematics in this blog. For instance, the previous post showed a simple approach to a relatively advanced (or, at least, new-ish) concept, and the one before that I found quite difficult, and I am still not sure I get it entirely. In this post, I’m going to try to explain a very basic concept, which most people who read blogs like this may consider trivial. This would be somewhat like a talk I would give to students who have recently learnt what Hilbert spaces and bounded linear operators are. What I’m feeling for is a kind of intuition at a very basic level, which can be used to build upon for a more refined intuition in more complex cases. Unless I approach new concepts this way myself, I soon find myself lost. In a way, this post approximates the way I would approach a new concept, and thus is more about process than content.
The reason for this post is that I was reading a bit of ergodic theory again, with an eye to understanding the number-theoretic arguments that rely on it (Furstenberg’s proof of Szemerédi’s theorem, and the like). When I saw the use of unitary operators in the formulation of von Neumann’s ergodic theorem, I wondered how I would explain their use to a student with only a scant knowledge of operator theory. The discussion will initially focus on the kind of unitary transformation a student should already have encountered in linear algebra, but will later involve some more abstract notions.
Let’s first settle on a suitable Hilbert space to use. For now, since we’re going to discuss physical ideas involving lengths and angles, we’ll stick with Euclidean space, and see how the ideas generalise to other spaces later. We start with $\mathbb{R}^n$, with the usual inner product

$$\langle x, y \rangle = \sum_{i=1}^{n} x_i y_i.$$

Any real $n \times n$ matrix $A$ can be seen as a bounded linear operator $A : \mathbb{R}^n \to \mathbb{R}^n$. (Exercise: Show this.) Since the way in which we “measure” elements of the Hilbert space is through the inner product (and the norm deriving from it), interesting operators would be ones which have some quantifiable relationship with it. For instance, a subspace could be seen as the set of vectors that all satisfy some angular/directional relationship (such as being contained in a plane), and the projection onto this subspace isolates the part of a vector that satisfies this relationship. So it would seem that a fairly natural question to ask is: which operators preserve the length and angular relationships between vectors?
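(If you want to see boundedness concretely rather than prove it, here is a minimal numerical sketch in NumPy; the matrix and vectors are arbitrary choices of mine, and the fact being illustrated is that the best constant $C$ with $\|Ax\| \le C\|x\|$ is the largest singular value of $A$.)

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))   # an arbitrary real 3x3 matrix

# The operator norm of A is its largest singular value.
C = np.linalg.norm(A, ord=2)

for _ in range(5):
    x = rng.standard_normal(3)
    ratio = np.linalg.norm(A @ x) / np.linalg.norm(x)
    print(f"||Ax||/||x|| = {ratio:.4f} <= C = {C:.4f}")
```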
The simplest operations on a space might be considered to be translation, rotation and dilation (and the identity mapping, which is not that interesting). Starting with dilation, suppose that $T$ is an operator that takes a vector $x$ and transforms it to a vector of length $c\|x\|$, for some fixed $c > 0$, in the same direction as $x$. Obviously, this operator acts as

$$Tx = cx.$$

The operator can of course be written as a diagonal matrix with all entries equal to $c$. It is easily seen that unless $c = 1$, this operator does not preserve inner products, that is, $\langle Tx, Ty \rangle = c^2 \langle x, y \rangle \neq \langle x, y \rangle$. The same goes for translation, that is, an operator that translates any $x$ to $x + v$ for some fixed $v \neq 0$.
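(Both failures are quick to check numerically; a small sketch, with the scale factor and shift vector being arbitrary choices:)

```python
import numpy as np

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

# Dilation by c: inner products get scaled by c^2.
c = 2.0
print(np.dot(c * x, c * y), "vs", np.dot(x, y))   # 4.0 vs 1.0

# Translation by v: not even linear, and inner products change too.
v = np.array([1.0, 1.0])
print(np.dot(x + v, y + v), "vs", np.dot(x, y))   # 8.0 vs 1.0
```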
You may now complain that these operators do indeed preserve length and angular relationships, and they do to a certain extent. However, they do not preserve those relationships with the origin, which is why the lengths differ. The third kind of transformation does this. Any rotation on $\mathbb{R}^n$ can be seen to preserve inner products. When you think geometrically of the inner product in terms of the projection of one vector on another, it is clear that the absolute orientation of these vectors does not play a part. Furthermore, given that the rotation can be represented by an $n \times n$ matrix $U$, we can now deduce the defining property of a unitary operator. Denoting the transpose of a matrix $U$ by $U^T$, we see that

$$\langle x, y \rangle = \langle Ux, Uy \rangle = \langle x, U^T U y \rangle$$

for all $x, y$. Since this must hold for all relevant $x$ and $y$, it must be that $U^T U = I$. (Any matrix satisfying this property has determinant $-1$ or $1$; the rotations are the ones with determinant $1$.)
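(To see this concretely, here is a short NumPy check with a two-dimensional rotation; the angle and the vectors are arbitrary choices for illustration:)

```python
import numpy as np

theta = 0.7  # an arbitrary rotation angle
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 2.0])
y = np.array([3.0, -1.0])

print(np.allclose(U.T @ U, np.eye(2)))       # True: U^T U = I
print(np.dot(U @ x, U @ y), np.dot(x, y))    # equal inner products
print(np.linalg.det(U))                      # 1.0, as befits a rotation
```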
(Another nice example to consider is that of a rotation in the complex plane $\mathbb{C}$, which is just multiplication by a complex number of modulus 1.)
With some idea of what a unitary operator in this space is, we can try to make sense of the von Neumann ergodic theorem.
Theorem. Let $U$ be a unitary operator on a Hilbert space $H$. Let $P$ be the orthogonal projection onto the subspace of invariant vectors $\{x \in H : Ux = x\}$. Then, for any $x \in H$,

$$\lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} U^n x = Px.$$
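(Since we are working with matrices anyway, the theorem is easy to test numerically. Below is a minimal sketch with an example I made up for illustration: a unitary on $\mathbb{R}^3$ that fixes the first axis and rotates the other two, so the ergodic averages should converge to the projection onto the first axis.)

```python
import numpy as np

theta = np.sqrt(2)  # theta/(2*pi) is irrational, so the rotation block fixes only 0
U = np.eye(3)
U[1:, 1:] = [[np.cos(theta), -np.sin(theta)],
             [np.sin(theta),  np.cos(theta)]]

x = np.array([1.0, 2.0, 3.0])
Px = np.array([x[0], 0.0, 0.0])   # projection onto the invariant subspace

# Ergodic average (1/N) * sum_{n=0}^{N-1} U^n x
N = 100_000
avg, v = np.zeros(3), x.copy()
for _ in range(N):
    avg += v
    v = U @ v
avg /= N

print(avg)                        # close to [1, 0, 0]
print(np.linalg.norm(avg - Px))   # small
```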
Suppose that we were to apply this theorem to the case of rotations in $\mathbb{R}^2$, which is not much different from considering rotations in three dimensions. We immediately run into a problem. If $U$ is a rotation, what is an invariant subspace? Unless $U$ is a full rotation, only the origin is invariant, and a full rotation is the same as the identity operator! However, this does not imply that the content of the theorem is entirely void. Since the zero operator is the projection onto the invariant subspace which is the origin, we see that for any $x$,

$$\lim_{N \to \infty} \frac{1}{N} \sum_{n=0}^{N-1} U^n x = 0.$$

Hence, if we sum up the rotations (which are not full rotations) of any vector, we must get an average of $0$, which makes intuitive sense. In the case of rational rotations (i.e. rational multiples of $2\pi$), it is not too hard to prove, and in the case of irrational ones it would seem sensible from the Weyl equidistribution theorem.
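(Both cases can be checked with a few lines of NumPy; the angles here are arbitrary representatives of the rational and irrational cases:)

```python
import numpy as np

def rotation_average(theta, x, N=10_000):
    """Ergodic average (1/N) * sum_{n<N} U^n x for rotation by theta."""
    U = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    avg, v = np.zeros(2), x.copy()
    for _ in range(N):
        avg += v
        v = U @ v
    return avg / N

x = np.array([1.0, 0.0])
print(rotation_average(2 * np.pi / 5, x))  # rational rotation: averages vanish
print(rotation_average(np.sqrt(2), x))     # irrational rotation: tends to 0
```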
One of the first examples one usually encounters in ergodic theory is the shift operator. Since my introduction to ergodic theory came via Furstenberg’s proof of Szemerédi’s theorem, this felt natural. However, supposing that you are used to unitarity meaning rotation, is there a way to interpret the shift operator as such?
To make sense of this, we have to slightly expand our idea of unitarity. Consider the following operator acting on vectors in $\mathbb{R}^2$:

$$U = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}.$$

It is easy to see that this matrix takes the unit vector $e_1$ to the unit vector $e_2$, and vice versa. Furthermore, the conjugate transpose of $U$ is itself, and $U^T U = U^2 = I$. Thus $U$ is unitary, and is in fact a permutation of the axes. Of course, in two dimensions there are only two permutations possible (it is the permutation group on two elements, after all!), so things will get more interesting in higher dimensions. (Exercise: Show that any permutation of the axes in $\mathbb{R}^n$ is unitary. Can all of these permutations be seen as rotations?) In fact, any permutation of an orthonormal basis in a Hilbert space is unitary (think of how a permutation of the basis would affect the inner product). Thus, if we think of the shift operator as a permutation of a countable basis, we see that it fits into the general intuition of a unitary operator.
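(As a finite-dimensional stand-in for the shift, an illustration of my own rather than anything from the theory: the cyclic shift on $\mathbb{R}^n$ is a permutation of the axes, its invariant subspace is the span of $(1, \dots, 1)$, and so its ergodic averages replace every coordinate of a vector with the mean of its entries. A short sketch:)

```python
import numpy as np

n = 5
S = np.roll(np.eye(n), 1, axis=0)   # cyclic shift: e_1 -> e_2 -> ... -> e_n -> e_1

print(np.allclose(S.T @ S, np.eye(n)))  # True: permutation matrices are unitary

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])

# Ergodic average over one full cycle (S^n = I, so N = n already suffices).
avg = sum(np.linalg.matrix_power(S, k) @ x for k in range(n)) / n
print(avg)        # every entry equals the mean of x
print(x.mean())   # 2.8
```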
Clearly, there is still much to unpack here. What kind of unitary operators do we get on some common Hilbert spaces, such as $\ell^2(\mathbb{Z})$ or $L^2(\mathbb{R})$? What does unitary equivalence mean? What does it mean when a unitary operator is self-adjoint? Posing questions like these is, I believe, more valuable than reading a single textbook’s introduction to the subject, because you’re trying to make sense of something in terms of what you already know, and you can allow curiosity to lead you. At some point, the concepts will take on a life of their own and your frame of reference will expand, ready to begin making sense of the next, more advanced concept.
Learning mathematics is a lot messier than most textbooks (and lectures) would have us believe. Embrace it.