Eigenvalues with NumPy (and Cython)

Caveat: I am not a Python expert. I’m just messing around with it.

In a previous post, I used Numpy to calculate some eigenvectors of a four-by-four random matrix. Of course, these values were not exact, since we weren’t using symbolic computation. I have been experimenting with Cython recently in order to speed up some of my (very long) Python calculations, and I was wondering if I could use it to write code which gets close to Numpy speed and accuracy in the computation of eigenvalues. Numpy libraries are C-optimised already, so we shouldn’t be surprised if I can’t easily surpass them. However, this is a good exercise and introduction to using Cython for matrices.

Using Cython is slightly more complicated than simply slotting new commands into Python. You have to install Cython, create a .pyx file with your code, and create a setup.py file. This tutorial has everything you need to get started:

https://pythonprogramming.net/introduction-and-basics-cython-tutorial/

We will now implement an algorithm in Numpy to calculate the largest eigenvalue of a matrix, and compare it to standard Numpy functions. This is based on the power method, which works as follows: Given an initial vector x_0, we form x_1 = A x_0, x_2 = A x_1, and so on. This will give us an approximation to the eigenvector belonging to the dominant eigenvalue. To get the eigenvalue corresponding to the vector, we observe that the eigenvalue \lambda belonging to an eigenvector x is given by the Rayleigh quotient:

\lambda = \frac{(A x)^T x}{x^Tx}.

First, I am going to give the Python code, applied to a simple matrix for which it is easy to calculate the eigenvalues and eigenvectors by hand.

import numpy as np
import time

# This function takes iterates of the matrix multiplied by an initial vector 
# It also gives the Rayleigh quotient at the end 
# We give inputs the matrix x, the initial vector i, and the number of iterations

def largesteig(x,i,n):
    for j in range(n):
        y = np.matmul(x,i)
        i = y
    a = i    
    b = np.matmul(x,a)
    bb = b.transpose()
    bbb = np.matmul(bb,a)
    #print(bbb)
    c = np.matmul(a.transpose(),a)
    d = bbb/c
    return d

# We now choose the matrix to which we'll apply the function       
    
B = np.array([[2,-1,0],[-1,2,-1],[0,-1,2]])
I = np.array([[1,1,1]])
K = I.transpose()

def eiglist(A,J,n):
    for i in range(n):
        print(largesteig(A,J,i))

t1 = time.time()

eiglist(B,K,15)

t2 = time.time()

print('The algorithm took {} seconds'.format(t2-t1))

Note that we really weren’t careful in choosing the initial vector. The output of the iterations is as follows:

[[0.66666667]]
[[2.]]
[[3.33333333]]
[[3.41176471]]
[[3.41414141]]
[[3.41421144]]
[[3.4142135]]
[[3.41421356]]
[[3.41421356]]
[[3.41421356]]
[[3.41421356]]
[[3.41421356]]
[[3.41421356]]
[[3.41421356]]
[[3.41421356]]
The algorithm took 0.003749370574951172 seconds

It seems like this clearly converges, and quite quickly. (Of course, things can still go awry. I have had experiences where the output seems to stabilise, but with more iterations goes completely wrong.) We can compare the speed with Numpy’s usual way of solving for eigenvalues:

import numpy as np
import time

B = np.array([[2,-1,0],[-1,2,-1],[0,-1,2]])
I = np.array([[1,1,1]])
K = I.transpose()


t1 = time.time()

A = np.linalg.eigvals(B)
a = max(A)

t2 = time.time()

print(a)
print("Numpy took {} seconds.".format(t2-t1))

This gives us the output

3.4142135623730914
Numpy took 0.0005006790161132812 seconds.

This is a lot faster, which should not surprise us greatly, since the Numpy code is optimised. We can increase the speed of our power method code by specifying fewer iterations (or preferably, by introducing some convergence criteria), but we still won’t get close to Numpy. Now let us see if we can close the gap by using some Cython. I am not under the illusion that we are going to beat Numpy to the largest eigenvalue, but even getting close should demonstrate the worth of using Cython in your Python code. All we are really going to do is say what types the different variables are. Usually, this is not necessary in Python, because it is “dynamically typed”. While this makes Python easy to read, it unfortunately slows it down quite a lot. The following script will not be the usual .py file, but .pyx instead.

import numpy as np
import time
cimport numpy as np

def largesteig(np.ndarray[np.int_t, ndim = 2] x, np.ndarray[np.int_t, ndim=2] i,int n):
    for j in range(n):
        y = np.matmul(x,i)
        i = y
    b = np.matmul(x,i)
    bb = b.transpose()
    bbb = np.matmul(bb,i)
    #print(bbb)
    c = np.matmul(i.transpose(),i)
    d = bbb/c
    return d

All we have done is specify the matrix properties (integer entries and dimension). Notice that we had to cimport numpy apart from importing numpy as usual. Before we can use it, we have to create the following .py script. Just call it setup.py:

from distutils.core import setup 
from Cython.Build import cythonize
import numpy

setup(ext_modules = cythonize('Power_method_with_Cython.pyx'),include_dirs=[numpy.get_include()])

In the terminal, we then run the command

home@home:pathtoyourfiles$ python setup.py build_ext --inplace

Finally, we have the following Python script to use what we have built on an example:

import Power_method_with_Cython as PmC
import numpy as np
import time

B = np.array([[2,-1,0],[-1,2,-1],[0,-1,2]])
I = np.array([[1,1,1]])
K = I.transpose()

t1 = time.time()

print(PmC.largesteig(B,K,10))

t2 = time.time() 

print('The algorithm took {} seconds'.format(t2-t1))

Once we have run the above, we get the output

[[3.41421356]]
The algorithm took 0.00029778480529785156 seconds

This is actually pretty unbelievable – we have only added a few type declarations in our functions, specifically to do with matrices, and we are already surpassing Numpy speed! The algorithm may still take longer than Numpy though, depending on the number of iterations used. Then there are further issues about convergence of the method, since a solution is not necessarily stable under more iterations. Still, pretty good for a small change in the code. There are even more ways to speed things up, some of which I might add to this post (once I understand them). Interestingly, I’ve discovered that some of the Cython commands which are supposed to speed things up could actually slow them down, but more on that later.
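As an aside on the convergence issue mentioned above, here is a minimal plain-Python/NumPy sketch (not the Cython version; the tolerance and iteration cap are arbitrary choices) of the power method with normalisation at each step and a simple stopping criterion:

import numpy as np

def power_method(A, x0, tol=1e-10, maxiter=1000):
    # Normalise at every step to avoid overflow, and stop once the
    # Rayleigh quotient settles down to within the tolerance
    x = x0 / np.linalg.norm(x0)
    lam_old = 0.0
    for _ in range(maxiter):
        x = A @ x
        x = x / np.linalg.norm(x)
        lam = x @ (A @ x)          # Rayleigh quotient of a unit vector
        if abs(lam - lam_old) < tol:
            break
        lam_old = lam
    return lam

B = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
print(power_method(B, np.array([1.0, 1.0, 1.0])))   # should print roughly 3.41421356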

Some useful links:

https://stackoverflow.com/questions/28727717/how-are-numpy-eigenvalues-and-eigenvectors-computed

https://blog.paperspace.com/faster-numpy-array-processing-ndarray-cython/

Primality and duality in linear programming

Duality is an extraordinarily important concept in both linear and nonlinear programming. However, the equivalence of the primal and dual problems is not completely obvious, and it is difficult to give it a geometric interpretation. We can nevertheless get a pretty good idea of how it works by showing how each problem can be derived from the other. The following is mostly based on the notes from AA Ahmadi, but I have expanded them a little. Specifically, I show how you can also solve the dual problem from the primal. (I also recommend these notes for a really intuitive explanation of Farkas’ Lemma.)

Let us consider the following problem as the primal:

\begin{array}{ll}  & \min  c^T x \\ \text{subject to } &  Ax = b \\ \text{and } &  x\geq 0, \end{array}

where x\geq 0 means that each component of x is \geq 0.

We contend that this problem will have a solution if the dual problem does:

\begin{array}{ll} & \max b^T y \\ \text{subject to } & A^T y \leq c. \end{array}

In both the above problems, we let x,c\in \mathbb{R}^n, b,y \in \mathbb{R}^m and A an m\times n real matrix. When necessary, we shall refer to the primal problem as (P) and the dual as (D). Note that it does not really matter which one you pick as primal or dual, since the dual of the dual has to be the primal again. What we want to do is to show that the primal problem will have a solution as long as the dual one does.

We can start be rephrasing the problem (P) as follows:

\begin{array}{ll}  & \max \gamma    \\ \textrm{ such that } &  \forall x \begin{bmatrix} Ax = b \\ x\geq 0 \end{bmatrix} \implies \gamma \leq c^T x. \end{array}

The strategy here can be seen as maximising the lower bound for c^T x, instead of minimising c^Tx directly. We can rephrase this as an ostensibly stronger problem. The above will have a solution as long as there exist \eta \in \mathbb{R}^m, \mu \in \mathbb{R}^n with \mu \geq 0 such that

\forall x \quad c^T x - \gamma = \eta^T (Ax-b) +\mu^T x.

The stronger problem can now be reformulated as

\begin{array}{ll} & \max_{\gamma, \eta, \mu } \gamma  \\ \text{such that } & \forall x: \ c^T x - \gamma = \eta^T (Ax-b) +\mu^T x, \quad \mu \geq 0. \end{array}

Now, affine functions of x will be equal everywhere if and only if their coefficients are equal, hence the problem can be written as

\begin{array}{ll} & \max \gamma \\ \textrm{such that } & c = A^T \eta + \mu \\ & \gamma = \eta^T b \\& \mu \geq 0. \end{array}

But this is the same as solving the problem

\begin{array}{ll} & \max b^T y \\ \textrm{such that } & A^T y \leq c \\  \end{array}

which is the dual problem. Thus, being able to solve the dual problem means we can solve the primal problem. Conversely, suppose that we can solve the primal problem. We phrase the dual problem as

\begin{array}{ll} & \min \gamma \\ \textrm{ such that } & \forall y \quad  A^Ty \leq c \implies b^Ty \leq \gamma \end{array}

The dual problem can be solved if there exists some \mu \geq 0 such that:

\forall y \quad \gamma - b^T y   =  \mu^T (c-A^T y).

This works because \mu is nonnegative: whenever c - A^Ty \geq 0, the quantity \mu^T (c-A^T y) is also nonnegative, so \gamma - b^T y \geq 0 on the feasible set, and we can minimise \gamma to bring it as close to b^T y as possible. Once again, we look at the coefficients in the above affine function of y, and can set them to be equal. We therefore get the problem

\begin{array}{ll}  \min  &\gamma = c^Tx  \\ \textrm{such that } &  A x= b\\ & x \geq 0, \end{array}

which is of course just the primal problem again.
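As a quick numerical illustration of the equivalence (the matrix A and the vectors b and c below are made up for the example, and I use SciPy’s linprog purely for convenience), we can check that the optimal values of (P) and (D) agree:

import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0],
              [1.0, 2.0, 0.0]])      # m = 2, n = 3
b = np.array([4.0, 5.0])
c = np.array([2.0, 3.0, 4.0])

# Primal (P): min c^T x subject to Ax = b, x >= 0
primal = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3, method="highs")

# Dual (D): max b^T y subject to A^T y <= c, y free; linprog minimises, so flip the sign
dual = linprog(-b, A_ub=A.T, b_ub=c, bounds=[(None, None)] * 2, method="highs")

print(primal.fun)      # optimal value of (P)
print(-dual.fun)       # optimal value of (D); the two numbers should coincide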

The usefulness of primality and duality is not limited to spaces such as \mathbb{R}^n. In fact, these basic concepts will hold for any vector lattice, which allows us to do things like semidefinite programming.

Random stuff I like 1

This is about stuff that has improved my life, even just by giving me a moment’s respite from the ongoing madness. (Weirdly, sometimes the best way to escape from a world in crisis is to read about other worlds in crisis.)

I am currently reading The Indian Clerk by David Leavitt as well as Frank Ramsey: A Sheer Excess of Powers by Cheryl Misak. Although the former is a novel, so far it seems to capture the spirit of Hardy and Littlewood’s Cambridge excellently. Since both take place at more or less the same time, one gets to see the mathematical (and wider) scene of the time from some different perspectives. Of course, we know that both books end in tragedy, with two of the greatest minds of the 20th century lost far too early.

I love Cal Newport‘s work, and since we’re in similar jobs (I have also published a few things in computer science), I take his advice seriously. The principles in Deep Work have caused me to reevaluate basically everything I do, and I am about to read it yet again, because I feel I have strayed from the Deep Work path. This is mostly because during lockdown, my e-mail and administrative load has exploded. I suspect I am not alone in this. Because there is as yet no best-practice for this type of situation, there is much wasting of effort and redundancy. So, I am quite excited for his new book, A World Without E-Mail, because that just sounds like bliss to me. Unfortunately, I have no idea how I will convince the powers that be that they should stop sending me meeting requests at all hours…It is also worth giving his book Digital Minimalism a look, which has substantially changed the way I approach the working day.

I picked up Jupiter’s Legacy on a whim the other day, not even knowing that Netflix was making the series. Usually, anything Mark Millar writes is excellent, and so is this. If there were super-heroes, this is probably what would happen… (Careful when buying this though – the original collected editions are not all called “Jupiter’s Legacy”, but the Netflix editions are.)

The Three-Body Problem: The best sci-fi I have read in the last 5 years, bar none. I know I am a bit late to the party (as I so often am), because the book has been out for a while. Reading it became the best part of my day.

I’ve finally found the solution to having many, many notebooks cluttering up my study, and it is the Rocketbook. Expensive for a notebook, sure, but it allows me to easily capture my notes on Google Drive, and means I won’t run out of notebook space when travelling. Although I still have a weakness for a decent notebook, especially European ones, whose paper quality is sometimes amazing.

Next time, back to the maths.

The spectral theorem 1: Matrices (with NumPy)

This post will again not contain anything very advanced, but try to explain a relatively advanced concept by breaking it down into the ideas that led to its formulation. Once again, the star is that wonderful thing, linear algebra. I have gone from really disliking it upon first encounter (as a naive undergrad), to having immense appreciation for the way in which it can illustrate fundamental mathematical concepts. Here, I will first present some computations that could lead to the formulation of the spectral theorem in the finite dimensional case, and in the next installment I will consider instances of the generalisation. This is similar to how I would approach any new mathematical concept, which is first to reduce it to something I have a decent understanding of, before trying to understand it in any generality.

One version of the spectral theorem can be stated as follows:

Theorem. Suppose A is a compact self-adjoint operator on a Hilbert space H. Then there is an orthonormal basis of H which consists of eigenvectors of A, and for which each eigenvalue is real.

Recall that a compact operator is a linear operator such that the image of any bounded subset has compact closure. To get a good idea of the Spectral Theorem, we must first get an idea of what a compact self-adjoint operator would be in a finite dimensional setting. For the moment, let us only consider real matrices. Since the adjoint of a matrix is its conjugate transpose, in this case we consider only symmetric matrices. The question of compactness is trivial in this case. Our Hilbert space will just be \mathbf{R}^n with the usual inner (dot) product. In this case, the spectral theorem states that we should be able to find an orthonormal basis of \mathbf{R}^n consisting of eigenvectors of our matrix A, with corresponding real eigenvalues. Let us consider some simple cases.

A = \begin{bmatrix} 3 & 2 & 1 \\ 2 & 2 & 3 \\  1 & 3 & 5 \end{bmatrix}.

Since I don’t feel like working this out by hand, I’ll use some Python:

import numpy as np
A = np.array([[3,2,1],[2,2,3],[1,3,5]])
a = np.linalg.eig(A)
print(a) 

This gives us the following output:

(array([ 7.66203005,  2.67899221, -0.34102227]),
 array([[-0.3902192 , -0.83831649,  0.38072882],
        [-0.53490509, -0.13015656, -0.83482681],
        [-0.74940344,  0.52941923,  0.39763018]]))

Note that the eigenvectors are given as the columns of the array. Now, it would not be too much to expect that the eigenvectors will give us a basis for \mathbf{R}^3. It would be a stretch to expect these to be orthonormal, however. We can test this out quickly:

b = [np.dot(a[1][i],a[1][j]) for i in range(3) for j in range(3)]
print(b)  

(If you look at the code, you’ll see that I actually took the dot product of the rows, not the columns. Why would this have the same result?)

[0.9999999999999998, -5.551115123125783e-17, 2.7755575615628914e-16, -5.551115123125783e-17, 0.9999999999999998, -4.440892098500626e-16, 2.7755575615628914e-16, -4.440892098500626e-16, 1.0000000000000004]

Those values are awfully close to either 0 or 1, and we can suspect that a precise calculation will probably give us those exact answers. But was this just luck, or would it work on any matrix? Let’s generate a random 4 by 4 matrix, which we multiply by its transpose to make it symmetric (where else do we see this product being used to generate Hermitian operators?):

B = np.random.rand(4,4)
BB = np.matmul(B, B.transpose())
bb = np.linalg.eig(BB)
c = [np.dot(bb[1][i],bb[1][j]) for i in range(3) for j in range(3)]
print(c)
[1.0000000000000007, -8.881784197001252e-16, 5.689893001203927e-16, -8.881784197001252e-16, 1.000000000000001, 1.6653345369377348e-16, 5.689893001203927e-16, 1.6653345369377348e-16, 0.9999999999999989]

Once again, we get entries that are practically 0 or 1, and it seems as if NumPy actually gives us the vectors in the desired form. You can get exact results using symbolic computation, for instance using SymPy:

import sympy as sym
A = sym.Matrix([[3,2,1],[2,2,3],[1,3,5]])
A.eigenvects()

This takes surprisingly long however, and doesn’t tell us much we don’t know. We can now formulate a possible finite dimensional version of the spectral theorem:

Conjecture 1. The eigenvectors of a symmetric n \times n real matrix A form an orthonormal basis of \mathbf{R}^n.

Let us first check this in the case where the matrix has full rank, and we have n linearly independent eigenvectors. Suppose that \vec{v}_1, \vec{v}_2, \dots ,\vec{v}_n are linearly independent eigenvectors of A with eigenvalues \lambda_1, \lambda_2, \dots ,\lambda_n. For i \neq j:

(A\vec{v}_i)\cdot (A\vec{v}_j) = (A\vec{v}_i)^T (A\vec{v}_j) = \vec{v}_i^T A^T A \vec{v}_j = \lambda_j^2 \vec{v}_i^T \vec{v}_j

and

(A\vec{v}_j)\cdot (A\vec{v}_i) = (A\vec{v}_j)^T (A\vec{v}_i) = \vec{v}_j^T A^T A \vec{v}_i = \lambda_i^2 \vec{v}_j^T \vec{v}_i.

Since the dot product is commutative, we must have either that \lambda_i^2 = \lambda_j^2 or \vec{v}_i \cdot \vec{v}_j = 0. The latter part is clearly what we desire, but what if \lambda_i^2 = \lambda_j^2? If we take a look at the wording of the Spectral Theorem, we can see that it does not state that any set of eigenvectors form a basis, but that there is a basis which consists of eigenvectors. So if we have, say, that A\vec{v}_1 = \lambda \vec{v}_1 and A\vec{v}_2 = \lambda \vec{v}_2, we can form a new vector \vec{w} as follows:

\vec{w} = \vec{v}_2 - \frac{\vec{v}_1\cdot \vec{v}_2}{\| \vec{v}_1\|^2}\vec{v}_1.

This new vector is also an eigenvector with eigenvalue \lambda:

A\vec{w} = \lambda \vec{v}_2 - \frac{\vec{v}_1 \cdot \vec{v}_2}{\| \vec{v}_1 \|^2}A\vec{v}_1 = \lambda \vec{w}.

Furthermore, \vec{w} will be orthogonal to \vec{v}_1. To get orthonormality, we just normalise \vec{w}. (If there are more than two vectors with the same eigenvalue, we just continue applying the Gram-Schmidt process.) We have therefore used the eigenvectors \vec{v}_1 and \vec{v}_2 with the same eigenvalue to find two orthonormal vectors (\vec{v}_1 and \vec{w}/\|\vec{w}\|) with the same eigenvalue as before.

We can see that we must now replace Conjecture 1 with one that is more in line with the statement of the Spectral Theorem:

Conjecture 2. Given a real n\times n symmetric matrix A, there is an orthonormal basis of \mathbf{R}^n consisting of eigenvectors of A.

Now, where could this conjecture go wrong? In the above examples, we had nice, non-zero eigenvalues. So let us consider a matrix in which the columns are linearly dependent, and which has an eigenvalue of 0 with multiplicity 2:

A = \begin{bmatrix} 1&1&1\\1&1&1\\1&1&1 \end{bmatrix}.

A = np.array([[1,1,1],[1,1,1],[1,1,1]])
np.linalg.eig(A)

(array([-2.22044605e-16,  3.00000000e+00,  0.00000000e+00]),
 array([[-0.81649658,  0.57735027,  0.        ],
        [ 0.40824829,  0.57735027, -0.70710678],
        [ 0.40824829,  0.57735027,  0.70710678]]))

As expected, we only have one non-zero eigenvalue, but we still have three orthonormal eigenvectors. NumPy has kindly normalised them for us, but it is easy to see that our eigenvectors will be multiples of (-2,1,1), (1,1,1) and (0,-1,1). Clearly, these are orthogonal, and once again we have an orthonormal basis. (Think of a geometric reason why this case is actually simpler than the case of 3 distinct non-zero eigenvalues.)

All of this seems to support our conjecture (but is, of course, not yet a proof). Perhaps our conjecture can be made more general? Will it not perhaps hold for arbitrary matrices? Let’s pick another integer matrix, generated for me by Python, and get its eigenvalues and eigenvectors (note that this matrix happens to be symmetric as well, so strictly speaking it only provides more evidence in the symmetric case; for a genuinely non-symmetric matrix the eigenvectors need not be orthogonal, or even real):

C = np.array([[1467, 418, -1015],
              [418, 1590, -1565],
              [-1015, -1565, -1560]])

D = np.linalg.eig(C)
print(D)

(array([-2369.06809548,  1115.59533457,  2750.47276091]), array([[ 0.20557476, -0.80791041,  0.55228596],
       [ 0.34091587,  0.58811029,  0.73341847],
       [ 0.91734148, -0.03751073, -0.39633011]]))

We check to see if we have an orthonormal basis:

E = [np.dot(D[1][i],D[1][j]) for i in range(3) for j in range(3)]
print(E)

[1.0000000000000002, 5.551115123125783e-17, 0.0, 5.551115123125783e-17, 0.9999999999999998, -2.220446049250313e-16, 0.0, -2.220446049250313e-16, 1.0000000000000002]

Within computational error, this is definitely what we want. One can have some more fun with this by generating a 1\,000\times 1\,000 matrix and solving for the eigenvectors, but for now, the conjecture looks pretty solid, and we can try to prove it properly (but not here). It is also valuable at this point to look at what the spectral theorem means for the diagonalisability of the matrix.
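For instance, here is a minimal sketch of the 1\,000\times 1\,000 experiment just mentioned (the size, and the use of np.linalg.eigh, which is tailored to symmetric matrices, are my own choices; np.linalg.eig would work too):

import numpy as np

n = 1000
B = np.random.rand(n, n)
S = B + B.T                                  # a random symmetric matrix
w, V = np.linalg.eigh(S)                     # eigenvalues and eigenvectors

# The columns of V should form an orthonormal basis: V^T V should be the identity
print(np.allclose(V.T @ V, np.eye(n)))
# ...and they really are eigenvectors: S V = V diag(w)
print(np.allclose(S @ V, V @ np.diag(w)))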

This has been fairly simple, but the goal is to study the spectral theorem in its full form. In an upcoming post, I will consider some more interesting operators on infinite dimensional spaces.

Why the cross product?

This is of course a ridiculously simple thing, that many encounter in their first year of university mathematics, and perhaps even in high school. But, like many things I encountered in those days, I kind of just accepted its existence and moved on. The calculations were easy enough, so I never really had to think about it.

These days I am having a lot of fun examining the undergraduate maths I haven’t touched in years, ever since being forced to do a lot of linear algebra for a research project. From a more mature perspective (I guess), there is so much I missed out on in my undergraduate studies by not questioning the things I was learning more. I have a simple question I ask myself now when encountering a mathematical concept: “What the hell is this and why should it be this way?” It might seem a bit belligerent, but I think a bit of aggression is not entirely unwarranted when encountering mathematics. To me, a mathematical concept or entity is not satisfying until I can say where it came from and what it is good for, and I am slightly ashamed of the years I spent just coasting on the definitions of others, never questioning why things should be this way. Perhaps we too easily accept the work that has gone before, in all its abstract glory. Sure, things were done that way for a reason, but you would be so much better off if you knew that reason…

Hence, the cross or vector product in Euclidean space. For now I am going to accept the existence of the dot/scalar product as is, although if you would like a really nice explanation of it, I would suggest 3Blue1Brown‘s very excellent video. (The entire series of videos is worth watching, and should make you appreciate the wonderful world of linear transformations.) The video on the cross product did not entirely satisfy me though, which is why I’ve written this post.

Firstly, why should there be such a thing as a vector product? In other words, how could such a concept arise naturally? I mean, the definition is certainly not obvious. For the moment, let us leave aside the size of the product and simply ask: when will I need to get a vector perpendicular to two other (linearly independent) vectors? As with so much of mathematics, a physical intuition is the best answer. Suppose we have some kind of flow (for now, we can suppose we are dealing with a liquid) through some surface. The flow is not completely uniform, and the surface is not necessarily flat. Supposing we have a function describing the flow at each point in space (in other words, a vector field), can we determine how much is flowing through the surface?

As with so much of calculus, we are going to approximate the surface linearly (assuming the surface is nice enough to do this, of course – let us assume that it is given by a smooth enough function). Therefore, we consider an approximation to the surface given by tiny parallelograms. By making these small enough, we can estimate the flow through the surface arbitrarily well. Thus, in order to find the total flow, we need some way of determining how much is flowing out of each of our parallelograms. Any part of the flow that is parallel to the plane created by our parallelogram is not relevant, because it is not flowing out. Therefore, we only need to consider the flow with components perpendicular to our parallelogram, in the direction of what we would call a normal vector.

Suppose we know how to determine our tiny parallelograms (for instance, by using partial derivatives of the surface at certain points), and that we know each parallelogram is defined by two vectors, say, \vec{v} = (v_1 ,v_2 ,v_3 ) and \vec{w} = (w_1 ,w_2 ,w_3 ) (supposing that these vectors are not parallel). To get a normal vector, we have to get a vector which is perpendicular to both of these. Fortunately, we can assume that we already know about the dot product, and can use it to test for perpendicularity. If there is a vector \vec{z} = (z_1 ,z_2 ,z_3 ) perpendicular to both of the above, it will have to satisfy the equations

\begin{array}{rcl} z_1 v_1 +z_2 v_2 +z_3 v_3 &=& 0\\ z_1 w_1 +z_2 w_2 +z_3 w_3& =&0. \end{array}

We now have two equations with three unknowns. But if we decide not to care about the length of the vector, we can set any of its components equal to 1, and solve for the other two. We therefore get the equations

\begin{array}{rcl}  v_1 +z_2 v_2 +z_3 v_3 &=& 0\\  w_1 +z_2 w_2 +z_3 w_3& =&0. \end{array}

Now it becomes a simple case of solving two equations in two unknowns, and we get

\begin{array}{rcl} z_1 &=& 1\\ z_2 & = & \frac{w_1 v_3 -v_1 w_3}{v_2 w_3 -w_2 v_3} \\ z_3 & = & -\frac{w_1 v_2 - v_1 w_2}{w_3 v_2 - v_3 w_2}.\end{array}

So far it’s not very pretty, but we can make it look nicer by multiplying the whole vector \vec{z} by v_2 w_3 -w_2 v_3:

\begin{array}{rcl} z_1 &=& v_2 w_3 - w_2 v_3 \\ z_2 &=& - (v_1 w_3 - w_1 v_3) \\ z_3 &=& v_1 w_2 - w_1 v_2. \end{array}

This is now starting to look very familiar. In order to get the unit normal vector that we require, we simply need to take

\vec{n} = \frac{\vec{z}}{|\vec{z}|}.

(Exercise: Show that the cross product corresponds to a unique linear map from \mathbb{R}^3 \otimes \mathbb{R}^3 to \mathbb{R}^3.)

We’re not quite done with the original form of \vec{z}, though. Suppose once again that we are trying to calculate the flow through some surface by approximating the surface with small parallelograms. To find the flow through the surface, we have to find the part of the flow perpendicular to the parallelogram and multiply it by the area. As if by magic though, we already have the surface area of the parallelogram! Specifically, it is given by the magnitude of the normal vector \vec{z} that we created earlier.

This is something that we are usually taught, and just assume. But why is that magnitude the area of the parallelogram? It is easy enough to get the surface area of a parallelogram in two dimensions; supposing that the parallelogram is determined by the vectors (a,b) and (c,d), the area is given by |ad-bc| (show this!). This looks familiar, at least, and conforms to the determinant method we usually use to get the area.

In three dimensions, the calculation is not very hard either, but let us forget about the cross product for a moment. To get the area of any parallelogram we, of course, only need to multiply the base times the height. In this case, we can take one of the vectors as the base, and find its length in the usual way. To find the height, we just need the dot product and Pythagoras’ theorem.

Specifically, suppose we have the vectors \vec{x} = (a,b,c) and \vec{y} = (d,e,f), since I’m tired of using subscripts. Taking \vec{y} as the base, we have the first term, \sqrt{d^2 + e^2 +f^2}. To get the height, we first take the length of \vec{x} projected onto \vec{y}:

\frac{\vec{x}\cdot \vec{y}}{\|\vec{y}\|} = \frac{ad+be+cf}{\|\vec{y}\|} = \frac{ad+be+cf}{\sqrt{d^2 + e^2 +f^2}}.

Using Pythagoras to evaluate the height h, we get

h^2 = \|\vec{x}\|^2 -  \frac{(ad+be+cf)^2}{d^2 + e^2 +f^2}.

The square of the area is then given by

\begin{array}{ll} &  (d^2 +e^2 +f^2) \left( a^2 +b^2 +c^2 - \frac{(ad+be+cf)^2}{d^2+e^2+f^2 }\right) \\ &= (a^2 +b^2 +c^2)(d^2 +e^2 +f^2) -  (ad+be+cf)^2\\ & = a^2 e^2 + a^2 f^2 +b^2d^2 + b^2 f^2 +c^2 d^2 +c^2 e^2 -2adbe -2adcf - 2becf \\ &= (bf-ec)^2 + (af-dc)^2 + (ae-bd)^2 \end{array}

We can see from this that the usual, determinant form of evaluating the area agrees with the more elementary version above. At least we know that the form of the cross product is justified. There is still, to my mind, one deeper issue that needs to be resolved. We now know, formally, that the area of the parallelogram formed by two independent vectors is given by the size of the cross product, but we can also notice that the determinant form of the cross product consists of evaluating the size of three two-dimensional parallelograms. Why should this be? Left to the reader…
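For the sceptical, here is a quick NumPy sanity check of the two facts derived above (orthogonality, and the size of the cross product equalling the area); the vectors are made up for illustration:

import numpy as np

v = np.array([1.0, 2.0, 3.0])
w = np.array([-2.0, 0.5, 4.0])

z = np.cross(v, w)

# z is perpendicular to both v and w
print(np.dot(z, v), np.dot(z, w))            # both should be (numerically) zero

# |z| equals the area of the parallelogram, computed as base times height
base = np.linalg.norm(w)
height = np.linalg.norm(v - (np.dot(v, w) / np.dot(w, w)) * w)
print(np.linalg.norm(z), base * height)      # the two numbers should agree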

Solving symbolic matrix equations in Python with SymPy

I am no Python expert, and have only recently encountered SymPy, for symbolic calculations. I have been trying to do some (relatively simple) matrix calculations, and it has taken me an embarrassingly long time to figure out how to do this. So, I am sharing what I have learned here to help someone else avoid the rather large number of internet searches I had to do to piece it together.

There is a lot out there on how to use SymPy to solve matrix equations of the form Ax = b. What I am interested in is taking a bunch of given matrices (with numerical values) and constants, performing some operations with an unknown matrix, and setting each entry of the final matrix equal to zero and solving. In other words, suppose we are given n \times n matrices A and B, which are determined beforehand. There is some constant t which can be varied (this forms part of an iterative scheme), and an unknown matrix C, which is represented purely symbolically, as such:

\begin{bmatrix} c_{0,0} & c_{0,1} & \cdots &c_{0, n-1} \\ \vdots & \vdots & \ddots & \vdots \\ c_{n-1,0} & c_{n-1,1} & \cdots & c_{n-1,n-1} \end{bmatrix}

There is a function f_{A,B,t}: \mathbb{R}^{n\times n}\to\mathbb{R}^{n\times n}, and we want to find the entries of C for which

f_{A,B,t}(C) = \mathbf{0},

where \mathbf{0} indicates the n\times n zero matrix.

In order to solve an equation in SymPy, you have to declare the “symbols” that you are solving for. Now, defining a matrix symbol in SymPy is easy, but this did not help me in solving for the matrix, and I kept getting an empty output. I needed a way to iteratively declare each entry of the matrix as a symbol, whilst putting them together as a single matrix. This turned out to be the key to the whole thing. First, let us state the preamble:

import sympy as sym
from sympy.interactive import printing
printing.init_printing(use_latex=True)

The last two lines have no effect on the calculations, but they do give the option of displaying your matrices very nicely. Next, let us define some function with which to work:

def matrixfunction(A,B,C,t):
    mf = A - t*C + B.inv()
    return mf

(The final part of the last line is simply how we compute the inverse of B.) Here is one of the first things that tripped me up. In SymPy, you should distinguish between operations involving symbolic matrices and usual operations between matrices. For instance, if I were to declare D and E to be two arbitrary 5\times 5 matrices and wanted, for instance, to multiply them, I would use

D = sym.MatrixSymbol('D', 5, 5)
E = sym.MatrixSymbol('E', 5, 5)
sym.MatMul(D,E)

The output to this would be

D*E

and we would be able to see the symbolic entries of this matrix by using

X = sym.MatMul(D,E)
X.as_explicit()

The same holds for MatAdd. However, if you have defined the matrix by declaring all of its entries to be symbols, there does not seem to be a need to use this method, and a simple * can be used for multiplication and + for addition.

Our objective is now to set each entry in the matrix obtained from the function “matrixfunction” equal to zero and solve for the unknown matrix C. To do so, we define

def solvefor(A,B,t):
    C = sym.Matrix(n, n, sym.symbols('C0:n(0:n)'))
    sol = sym.solve(matrixfunction(A,B,C,t))
    display(sol)

Of course, in the above, n needs to be replaced by an actual numerical value. Once the functions are defined, we can assign values to A, B and t and run “solvefor”. The output will be a set of values assigning the solution value to each entry of C. In this case, B has to be invertible. Note that SymPy automatically sets the argument of “sym.solve” equal to zero unless otherwise instructed – in this case, it is set equal to the zero matrix. We are also not specifying which symbols to solve for, since SymPy will automatically solve for the free variables here.
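To make this concrete, here is a minimal sketch with n = 2 and some made-up numerical matrices (building the symbolic matrix entrywise with a lambda is just another way of declaring one symbol per entry):

import sympy as sym

n = 2
A = sym.Matrix([[1, 2], [3, 4]])
B = sym.Matrix([[2, 0], [0, 2]])                 # invertible, so B.inv() exists
t = sym.Rational(1, 2)

# The unknown matrix C, one symbol per entry
C = sym.Matrix(n, n, lambda i, j: sym.Symbol('c{}{}'.format(i, j)))

# Set each entry of A - t*C + B.inv() equal to zero and solve for the entries of C
expr = A - t*C + B.inv()
sol = sym.solve(list(expr), dict=True)
print(sol)                                       # e.g. [{c00: 3, c01: 4, c10: 6, c11: 9}]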

Szemerédi’s regularity lemma

I have taken an interest in regularity lemmas recently, starting with the weak regularity lemma of Frieze and Kannan (in order to understand Szemerédi’s theorem on arithmetic progressions), and decided that I needed to understand the proof of the full regularity lemma. The reference I’m using is Reinhard Diestel‘s book Graph Theory. The regularity lemma seems almost magically powerful to me, and has many applications (and generalisations, but we’ll get to that), but the proof is not all that hard. This post won’t present the full proof, just an overview to get the feel of it. (For another perspective on the regularity lemma by someone who actually knows what he is doing, see Yufei Zhao’s blog entry https://yufeizhao.wordpress.com/2012/09/07/graph-regularity/.)

Much like certain proofs of Roth’s theorem, the idea behind the proof of the regularity lemma is to increase a bounded positive quantity by a constant in each iteration. There are only so many times you can do this, and when you stop, what is left must satisfy the conditions of the lemma. Firstly, some definitions.

Definition 1. Let G=(V,E) be a graph. We say that a pair of disjoint subsets U,W \subset V is \varepsilon-regular if

|d(U,W) - d(U_1 ,W_1)| < \varepsilon

for any subsets U_1 \subseteq U and W_1 \subseteq W for which |U_1 |\geq \varepsilon |U| and |W_1 |\geq \varepsilon |W|.

In the above, d(U,W) as usual denotes the edge density

\frac{\| U,W\|}{|U||W|}

between U and W, where \| U,W\| denotes the number of edges between U and W.

Definition. Let \varepsilon >0 be given and let G=(V,E), |V|=n, be a graph. We say that a partition \{ V_0 ,V_1 , \dots V_k \} of the vertices of the graph is \varepsilon-regular if

  1. |V_0| \leq \varepsilon n
  2. |V_1| = |V_2| = \cdots =|V_k|
  3. All but at most \varepsilon k^2 of the pairs (V_i ,V_j) with i < j are \varepsilon-regular.

The regularity lemma (in one form; there are several equivalent ones) can be stated as follows.

Regularity Lemma. Given some \varepsilon >0 and some m\geq 1, there exists an integer M such that any graph G = (V,E) of order at least m admits an \varepsilon-regular partition \{ V_0 , V_1 , \dots ,V_k \} where m\leq k \leq M.

The quantity that we will bound is a kind of L^2 norm on the graph. If A,B \subseteq V and disjoint, set

q(A,B) = \frac{|A| |B|}{n^2}d(A,B)^2 = \frac{\| A,B \|^2}{|A||B|n^2},

where n = |V|. It should be clear that q(A,B) \leq 1. In effect, we consider A\cup B as a bipartite graph. If the graph is complete, q will be |A||B|/(|A| +|B|)^2 <1/2, and 0 when empty. It does not seem to me that the quantity has a very intuitive meaning. However, it does satisfy the conditions of being bounded and increasing under successive partitions, which is what we’ll need. Suppose that \mathcal{C} is a partition of C and \mathcal{D} is a partition of D. We define

q(\mathcal{C} , \mathcal{D} ) = \sum_{A \in \mathcal{C}, B \in \mathcal{D}} q(A,B).

 The fundamental fact which we can prove (quite easily) is that, with all objects as above,

q(\mathcal{C},\mathcal{D}) \geq q(C,D).
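Here is a small numerical illustration of this fact (the random graph, the sets C and D and their partitions are all made up arbitrarily):

import numpy as np

rng = np.random.default_rng(0)
n = 40
adj = rng.integers(0, 2, size=(n, n))
adj = np.triu(adj, 1)
adj = adj + adj.T                              # a random simple graph on n vertices

C = list(range(0, 20))
D = list(range(20, 40))

def q(X, Y):
    edges = adj[np.ix_(X, Y)].sum()            # number of edges between X and Y
    return edges ** 2 / (len(X) * len(Y) * n ** 2)

# Refine C and D into two halves each
q_refined = sum(q(X, Y) for X in (C[:10], C[10:]) for Y in (D[:10], D[10:]))

print(q(C, D), q_refined)                      # q_refined should be at least q(C, D)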

If we have a partition \mathcal{P} = \{V_1 , \dots ,V_k \} of the graph G, we define

q(\mathcal{P}) = \sum_{i\neq j} q(V_i ,V_j).

We can now prove from the above that if \mathcal{P}' is a refinement of the partition \mathcal{P}, then

q(\mathcal{P}') \geq q(\mathcal{P}).

We can already start to spy the strategy of the proof here – we have a bounded quantity that increases with further partitions. What we have to prove is that such refinement of partitions will increase by a fixed amount, which means that it can be done only a bounded number of times, given that q is bounded by 1.

Suppose we are now given a partition of V, namely \{ V_0 ,V_1 , \dots ,V_k \}, where all the members except V_0 have the same size. We regard V_0 as a set of members of the partition itself, each a singleton. We set

\mathcal{P} = \{ V_1, \dots , V_k \} \cup \{ \{ v \} : v\in V_0 \}

and define the quantity q for our partition \{ V_0 ,V_1 , \dots ,V_k \} as q(\mathcal{P} ). With our definitions in place, we can start assembling the required lemmas.

Lemma 0. From the Cauchy-Schwarz inequality, we can conclude that

\sum \frac{e_{i}^{2} }{\mu_i} \geq \frac{(\sum e_i )^2}{\sum \mu_i}.

We won’t be using this lemma in this post (hence calling it Lemma 0), but it is essential in working out the details of the following lemmas.

Lemma 1. If C,D \subset V are disjoint and not \varepsilon -regular, there are partitions \mathcal{C} = \{ C_1 ,C_2 \} of C and \mathcal{D} = \{ D_1 , D_2 \} of D such that

q(\mathcal{C} , \mathcal{D} ) \geq q(C,D) + \frac{\varepsilon^4 |C| |D|}{n^2}.

For the proof, we pick the sets C_1 and D_1 to be two sets of sizes no less than \varepsilon |C| and \varepsilon |D|, respectively, witnessing the irregularity of (C,D), and C_2 and D_2 to be what is left over in C and D.

Using Lemma 1, we can prove the following crucial lemma.

Lemma 2. Let \varepsilon be such that 0<\varepsilon \leq 1/4. If \mathcal{P} =  \{ V_0 ,V_1, \dots ,V_k \} is an \varepsilon-irregular partition of V where |V_1 | = |V_2 | = \cdots = |V_k | and |V_0| \leq \varepsilon n, then there is a partition \mathcal{P}' = \{ W_0 , W_1 , \dots ,W_l \} of V, with k\leq l \leq k4^k, |W_1| = |W_2 | = \cdots = |W_l | and |W_0 | \leq |V_0 | + n/2^k, such that

q( \mathcal{P} ' ) \geq q( \mathcal{P} ) + \frac{ \varepsilon^5 }{2}.

The crucial part here is that the constant increased by does not depend on any of the parameters except \varepsilon. The idea behind the proof of Lemma 2 is to compare all pairs of sets V_i , V_j in the first partition, that might be irregular, then use Lemma 1 to partition them. We take a common refinement of all of these partitions, then partition further to obtain sets of equal cardinality, shunting all the vertices left over into V_0.

What astounds me about Lemma 2 is how wasteful it seems to be. We just partition according to Lemma 1 until we get the desired refinement, and yet this is still powerful enough to lead directly to the result we’re after.

We can now carefully pick our constants and iterate Lemma 2 until we obtain the regularity lemma. Pick \varepsilon such that 0< \varepsilon \leq 1/4. We will need to determine the constants m and M, depending on \varepsilon but not on G. These depend on how many times we have to implement Lemma 2, and how many further partitions each use of Lemma 2 creates. Because the q-value of a partition cannot be greater than 1, we know that the number of iterations has to be bounded by s = 2/ \varepsilon^5 . Now, at each stage the cardinality of the exceptional set may increase by n/2^k , if we start with a partition \{ C_0 , C_1 , \dots ,C_k \}. We want to prevent the size of the exceptional set at any stage from exceeding \varepsilon n, and since we start with some C_0 with |C_0 |\leq \frac{1}{2} \varepsilon n, we have to make sure that s iterations of adding n/2^k do not exceed \frac{1}{2}\varepsilon n either. (This is not precise, since the exceptional set will not actually increase by the same factor at each stage, but this will certainly do.) Now n must be chosen large enough so that for the initial exceptional set C_0, we have a large k so that |C_0| < k and |C_0 |\leq \frac{1}{2} \varepsilon n. The exceptional set should be allowed up to k members, to ensure we have equal cardinality for all the other sets. If k\geq m is large enough to allow 2^{k-1} \geq s/\varepsilon , we have s/2^k \leq \varepsilon /2 and

k+\frac{s}{2^k} n \leq \varepsilon n

for n\geq 2k/ \varepsilon .

What is left is to choose M. We choose M larger than the maximum of 2k/\varepsilon and the number of sets (non-exceptional) that can be generated by s applications of the previous lemma, that is, s iterations of r elements of the partition increasing to r4^r elements.

As long as our graph is relatively small, it is easy to show that a suitable partition exists. If the graph has order n, where m\leq n \leq M, we choose V_0 = \emptyset and let each of V_1 , \dots ,V_n be a singleton. For n> M, we want to start with a partition which satisfies |W_0| \leq \frac{1}{2} \varepsilon n for Lemma 2 to be valid. Let k be as we established above. Now, let W_0 be a minimal subset of V so that k divides |V\setminus W_0|. We then partition V\setminus W_0 into k sets of equal cardinality. Because |W_0| < k\leq \frac{1}{2} \varepsilon n, Lemma 2 is applicable, and we simply apply it repeatedly to obtain the partition we are after.

The most important thing I do every day. Almost (kind of).

This post started out in a more innocent time, but has become more relevant because of the global crisis. I have officially been on COVID-19 lockdown for four weeks now, and have to take my own advice here so as not to go stir-crazy.

The current post has its genesis in a relatively recent holiday. My wife and I spent nearly two weeks away from it all, or at least, most of it. For the most part we were in relative isolation, surrounded by nature. Scenes like this:

[photo]

This was one of those near-perfect experiences. We rested well, but were also active. We slept much and often, but also spent hours trekking through the mountains and the bush, and still had time to do a mountain of reading (because otherwise it doesn’t qualify as a holiday in my book, so to speak). I’ve had holidays that were fun while they lasted, but left me with a hollow feeling, as if I have neither rested properly nor done anything worthwhile. This was the opposite.

What set this one apart then, were two factors:

  1. We were surrounded by gorgeous natural scenes and very few people.
  2. We were active every day, going on some taxing hikes and swimming in rock pools.

I returned from my holiday refreshed and mentally prepared for the blizzard of things I have to do this year. Yet, only one day after my holiday, I was sitting at home catching up on Netflix when I was shocked to realise that I did, in fact, feel like shit. There was absolutely no reason for me to feel anything but wonderful. I was well-rested, I was healthy, I was financially stable, did not have great stress, and I had a weekend of relaxing left with my wife before going back to work (which I was looking forward to).

It didn’t take me long to figure out why, because I could contrast my circumstances with those of two days before, when I was feeling great. Firstly, I hadn’t been out of the house yet, despite it being a gorgeous, sunny day. Secondly, I hadn’t done anything physical. So, opposite circumstances yielded opposite results.

I know this does not sound like a major epiphany. But as someone who spends a lot of time thinking about very abstract stuff and not as much on external circumstances, it was an important one. The lesson comes down to this: to feel mentally excellent, the easiest thing to do is to involve the physical. I am no stranger to physical exercise and train regularly, but usually in the evenings. I realised that to have an excellent day, I needed to feel excellent first thing in the morning.

To implement this, I came up with a little exercise programme I do first thing in the morning. It is important for me to do this before having my coffee, before checking my e-mail, or anything else. It’s not a massive work-out. My aim is to get thoroughly warmed up, to get out of breath and to sweat. There is an easy metric to determine when I have done enough: it is when the exercise stops feeling like a chore and starts being fun, and I don’t really want to stop.

The difference in my life is clearly measurable and significant. During the lockdown, there have been times when I have slept later than normal and entered the day with a fuzzy mind and low physical energy. The solution is to get out of bed (even when I’ve overslept) and get straight into exercise. I make the requirements for these sessions very low, and only require ten goblet squats for the session to be considered complete. Of course, I never stop there, but knowing that I can finish this in a minute helps me get started. When I start my day’s work, my mind feels clear and energised. If I could quantify my happiness, I would say that this simple habit has increased it by at least 10%.

I do still skip it sometimes because of sheer laziness or time pressure (mostly the former). Every time I do though, I regret it.

What is a unitary operator?

I shall make no attempt at maintaining a uniform level of mathematics in this blog. For instance, the previous post showed a simple approach to a relatively advanced (or, at least, new-ish) concept, and the one before that I found to be quite difficult, and am still not sure I get it entirely. In this post, I’m going to try to explain a very basic concept, which most people who read blogs like this may consider to be trivial. This would be somewhat like a talk I would give to students who have recently learnt what Hilbert spaces and bounded linear operators are. What I’m feeling for is a kind of intuition at a very basic level, which can be used to build upon for a more refined intuition in more complex cases. Unless I approach new concepts this way myself, I soon find myself lost. In a way, this post approximates the way I would approach a new concept, and thus is more about process than content.

The reason for this post is that I was reading a bit of ergodic theory again, with an eye to understanding the number-theoretic arguments that rely on it (Furstenberg’s proof of Szemerédi’s theorem, and the like). When I saw the use of unitary operators in the formulation of von Neumann’s ergodic theorem, I wondered how I would explain the use of them to a student with only a scant knowledge of operator theory. The discussion will initially just focus on the kind of unitary transformation a student should already have encountered in linear algebra, but later on involve some more abstract notions.

Let’s first settle on a suitable Hilbert space to use. For now, since we’re going to discuss physical ideas involving lengths and angles, we’ll stick with Euclidean space, and see how the ideas generalise to other spaces later. We start with \mathbb{R}^3, with the usual inner product

(x_1 ,y_1 ,z_1 )\cdot (x_2 ,y_2 ,z_2 ) = x_1 x_2 +y_1 y_2 +z_1 z_2.

Any 3\times 3 real matrix can be seen as a bounded linear operator \mathbb{R}^3 \to \mathbb{R}^3.  (Exercise: Show this.) Since the way in which we “measure” elements of the Hilbert space is through the inner product (and the norm deriving from it), interesting operators would be ones which have some quantifiable relationship with this. For instance, a subspace could be seen as the set of vectors that all satisfy some angular/directional relationship (such as being contained in a plane), and the projection onto this subspace isolates the parts of a vector that satisfy this relationship. So it would seem that a fairly natural question to ask is, which operators will preserve the length and angular relationships between vectors?

The simplest operations on a space might be considered to be translation, rotation and dilation (and the identity mapping, which is not that interesting). Starting with dilation, suppose that D_{\lambda} is an operator that takes a vector x and transforms it to a vector of length \lambda \| x\| (for some \lambda > 0) in the same direction as x. Obviously, this operator acts as

D_{\lambda} (x_1 ,x_2 ,x_3 ) = \lambda (x_1 ,x_2 ,x_3 ).

The operator can of course be written as a diagonal 3\times 3 matrix with all diagonal entries equal to \lambda. It is easily seen that unless \lambda = 1, this operator does not preserve inner products, that is, \langle D_{\lambda }x, D_{\lambda}y \rangle = \lambda^2 \langle x, y \rangle \neq \langle x, y \rangle. The same goes for translation, that is, the map that takes any (x_1, x_2, x_3 ) to (x_1+ a, x_2+b, x_3 +c ) (which, strictly speaking, is not even a linear operator unless a=b=c=0).

You may now complain that these operators do indeed preserve length and angular relationships, and they do to a certain extent. However, they do not preserve those relationships with the origin, which is why the lengths differ. The third kind of transformation does this. Any rotation U on \mathbb{R}^3 can be seen to preserve inner products. When you think geometrically of the inner product in terms of the projection of one vector on another, it is clear that the absolute orientation of these vectors does not play a part. Furthermore, given that the rotation can be represented by a 3\times 3 matrix, we can now deduce the defining property of a unitary operator. Denoting the transpose of a matrix U by U^{\#}, we see that

\langle x, y \rangle =  \langle Ux ,Uy \rangle = \langle U^{\#} Ux, y \rangle = \langle x, U^{\#} Uy \rangle

for all x,y \in \mathbb{R}^3.

Since this must hold for all relevant x and y, it must be that U^{\#}U = UU^{\#} = 1. (Any matrix with this property necessarily has determinant 1 or -1, although the converse does not hold.)

(Another nice example to consider is that of a rotation in the complex plane, which is just multiplication by a complex number of modulus 1.)

With some idea of what a unitary operator in this space is, we can try to make sense of the von Neumann ergodic theorem.

Theorem. Let U be a unitary operator on a Hilbert space H. Let P be the orthogonal projection onto the subspace \{ x\in H: Ux = x\}. Then, for any x \in H,

\lim_{N\to \infty} \frac{1}{N}\sum_{n=0}^{N-1} U^n x = Px.

Suppose that we were to apply this theorem to the case of rotations in \mathbb{R}^2, which is not much different from considering rotations in three dimensions. We immediately run into a problem. If U is a rotation, what is an invariant subspace? Unless U is a full rotation, only the origin is invariant, and a full rotation is the same as the identity operator! However, this does not imply that the content of the theorem is entirely void. Since the zero operator is the projection onto the invariant subspace \{ 0 \}, we see that for any x \in \mathbb{R}^2 ,

\displaystyle  \lim_{N\to \infty} \frac{1}{N}\sum_{n=0}^{N-1} U^n x =0.

Hence, if we sum up the rotations (which are not full rotations) of any vector, we must get an average of 0, which makes intuitive sense. In the case of rational rotations (i.e. rational multiples of 2\pi ), this is not too hard to prove, and in the case of irrational ones it would seem sensible from the Weyl equidistribution theorem.
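To see this numerically, here is a minimal NumPy sketch (the rotation angle is an arbitrary irrational multiple of 2\pi):

import numpy as np

theta = np.sqrt(2) * np.pi                     # an irrational multiple of 2*pi
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

x = np.array([1.0, 0.0])
for N in (10, 100, 1000, 10000):
    avg = sum(np.linalg.matrix_power(U, n) @ x for n in range(N)) / N
    print(N, np.linalg.norm(avg))              # the norm of the average should shrink towards 0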

One of the first examples one usually encounters in ergodic theory is the shift operator. Since my introduction to ergodic theory came via Furstenberg’s proof of Szemeredi’s theorem, this felt natural. However, supposing that you are used to unitarity meaning rotation, is there a way to interpret the shift operator as such?

To make sense of this, we have to slightly expand our idea of unitarity. Consider the following operator acting on vectors in \mathbb{R}^2:

\displaystyle A = \begin{bmatrix}  0 & 1 \\  1 & 0  \end{bmatrix}.

It is easy to see that this matrix takes the unit vector [1,0] to the unit vector [0,1], and vice versa. Furthermore, the conjugate transpose of A is itself, and A^2 = 1. Thus A is unitary, and is in fact a permutation of the axes. Of course, in two dimensions there are only two permutations possible (it is the permutation group on two elements, after all!), so things will get more interesting in higher dimensions. (Exercise: Show that any permutation of axes in \mathbb{R}^3 is unitary. Can all of these permutations be seen as rotations?) In fact, any permutation of an orthonormal basis in a Hilbert space is unitary (think of how a permutation of the basis would affect the inner product). Thus, if we think of the shift operator as a permutation of a countable basis, we see that it fits into the general intuition of a unitary operator.

Clearly, there is still much to unpack here. What kind of unitary operators do we get on some common Hilbert spaces, such as l^2 or L^2 ? What does unitary equivalence mean? What does it mean when a unitary operator is self-adjoint? Posing questions like these is, I believe, more valuable than reading a single textbook’s introduction on the subject, because you’re trying to make sense of something in terms of what you already know, and you can allow curiosity to lead you. At some point, the concepts will take on a life of their own and your frame of reference will expand, ready to begin making sense of the next, more advanced concept.

Learning mathematics is a lot messier than most textbooks (and lectures) would have us believe. Embrace it.

Hypergraph complexity

There are occasional, serendipitous events that make one suspect that there may be some benevolent animating power behind the universe. (This only lasts until the next annoying thing happens.) In August 2017, I was leaving San Francisco for Calgary after attending a workshop at AIM. On the flight, across the aisle from me, I saw someone editing what was clearly some mathematical papers. I introduced myself, and the other party turned out to be none other than Jacob Fox, on his way to a meeting in Banff. After parting ways, I started to look at some of his papers, and one immediately caught my eye, namely A relative Szemerédi theorem (with Conlon and Zhao). I had a problem that I’ve been working (and been stuck) on for a while, and this paper gave me a crucial theorem I needed to continue.

This paper also introduced me to the idea of hypergraph complexity, which I don’t yet understand, hence this post. We’ll start with some basic definitions and concepts.

Definition 1. Let J be a set and let \binom {J}{r} denote the set of all r-element subsets of J. An r-uniform hypergraph on J is any subset H \subseteq \binom{J}{r}.

It should be clear that this just generalises the notion of an edge as a choice of two elements, to a choice of an arbitrary (but constant) number of elements. Hypergraphs give us some nice generalisations of Ramsey-type results, for example,

Theorem 1. [2] An infinite m-uniform hypergraph contains an infinite clique or an infinite anticlique.

The nonstandard proof of this (which can be found in [2]) involves a beautiful use of iterated nonstandard extensions, a somewhat tricky (but very useful) subject I will probably address in a later post.

In what follows, we restrict our attention to finite hypergraphs. We firstly need to define a hypergraph system:

Definition 2. [1] A hypergraph system is a quadruple V = (J, (V_j )_{j\in J}, r, H) where J is a finite set, (V_j )_{j\in J} is a collection of non-empty sets, r\geq 1 is a positive integer, and H \subseteq \binom{J}{r} is an r-uniform hypergraph. For any e\subseteq J, set V_e := \prod_{j\in e} V_j.

What would be a reasonable hypergraph system, and what would it be good for? Since the concept occurs in a paper concerning a Szemerédi-type theorem, it seems reasonable that it has something to do with arithmetic progressions. To make this clear, we’ll need the notion of a weighted hypergraph, which we’ll illustrate by means of an example. Set J = \{ 1,2,3\}, r=2, and V_j = \mathbb{Z}_N. The weights on V are functions g_e : V_e \to \mathbb{R}^{\geq 0}. For instance, for the edge (1,2) we have a function g_{(1,2)}: \mathbb{Z}_N \times \mathbb{Z}_N \to \mathbb{R}^{\geq 0}. Suppose that A \subseteq \mathbb{Z}_N, and let \chi_A denote its indicator function. We define the functions

g_{1,2}(x,y) = \chi_A (x-y)

g_{2,3}(x,y) = \chi_A (x+y)

g_{1,3}(x,y) = \chi_A (x).

The product g_{1,2}(x,y) g_{2,3}(x,y) g_{1,3}(x,y) will only be greater than 0 if there are some x,y \in \mathbb{Z}_N such that x-y, x, x+y are in A. Equivalently, if

\mathbb{E}_{x,y \in \mathbb{Z}_N} g_{1,2}(x,y) g_{2,3}(x,y) g_{1,3}(x,y) >0,

A will contain a 3-arithmetic progression.
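A minimal Python sketch of this average (the modulus N and the set A are made up; note that the count also includes the trivial progressions with y = 0):

import numpy as np

N = 17
A = {1, 5, 9, 13}                      # contains, for example, the progression 1, 5, 9

chi = np.zeros(N)
for a in A:
    chi[a] = 1.0

total = 0.0
for x in range(N):
    for y in range(N):
        # g_{1,2} * g_{2,3} * g_{1,3} evaluated at (x, y)
        total += chi[(x - y) % N] * chi[(x + y) % N] * chi[x]

print(total / N**2)                    # positive, so A contains some x-y, x, x+y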

This is a simplistic example, and not nearly sufficient to characterise the existence of 3APs. For instance, we make no mention of the issue of wrap-around progressions in the cyclic group \mathbb{Z}_N. What is more, one would ideally like to have a result which takes into account values of N of arbitrary size. For appropriate linear forms conditions that work for any size of AP, the paper [1] should be consulted. For now, we continue with trying to get a handle on the complexity.

Suppose we are given a hypergraph system as above.

Definition 3. [1] For any set e of size r and any E_e \subseteq V_e = \prod_{j\in e} V_j, define the complexity of E_e to be the smallest integer T such that there is a partition of E_e into T sets E_{e,1}, \dots ,E_{e,T} such that each E_{e,i} is the set of r-cliques of some (r-1)-uniform hypergraph.

Thus, in the above example we would let E_e be a subset of \mathbb{Z}_N \times \mathbb{Z}_N for some e \in H. Certainly, an (r-1)-uniform hypergraph for r=2 is not the most scintillating mathematical object, but let us persist with this simple case.

To make sense of the latter part of Definition 3, let us first discuss what the required cliques are (following [1]). If x = (x_j)_{j\in J} \in V_J and J' \subseteq J, write x_{J'} = (x_j)_{j\in J'} \in V_{J'} for the obvious projection of x onto the coordinates J'. For any e\subseteq J, define

\partial e = \{ f\subseteq e: |f| = |e|-1 \}.

In a 2-uniform hypergraph as above, for e = (1,2), we have \partial e = \{ \{1\} ,\{2\} \}. Simply put, the boundary is the set of edges of one dimension less that together uniquely determine the edge we are looking at.

If we now say that E_{e,i} is the set of r-cliques of some (r-1)-uniform hypergraph, we mean that there is some B_{f,i} \subseteq V_f for each f\in \partial e so that

\chi_{E_{e,i}} (x_e) = \prod_{f\in \partial e} \chi_{B_{f,i}}(x_f)

for all x_e \in V_e.

To be fair, this does not make anything very clear, so let’s unpack this notion further with our very simple example of a (complete) 2-uniform hypergraph, by trying to understand what a satisfying notion of complexity should be. Take the edge \{ 1,2\} as before, with V_1 = V_2 = \mathbb{N}, so that V_e = \mathbb{N}\times \mathbb{N}. Let E_e = \{1,2,\dots ,20 \} \times \{1,3,5, \dots ,19\}. We want sets E_{e,i} and B_{1,i} , B_{2,i} \subseteq \mathbb{N} so that

\chi_{E_{e,i}} (x_e) = \chi_{B_{1,i}}(x_1) \times \chi_{B_{2,i}}(x_2)

for all x_e \in V_{\{ 1,2\} }. The triviality of this example now becomes obvious – setting B_{1,1} = \{1,2,\dots ,20\} and B_{2,1} = \{1,3,\dots ,19\} gives us a complexity of 1. We could see this example as an instance of a complete bipartite graph between the two coordinate sets making up E_e, which is fully described by their Cartesian product.

Let’s see if the same reasoning applies to 3-uniform hypergraphs. Let e = \{ 1,2,3\} be an edge in some 3-uniform hypergraph, with the associated set V_e = \mathbb{N}\times \mathbb{N} \times \mathbb{N}. Let the subset E_e be \{1,2,\dots ,10\} \times \{1,3,5,7 ,9\} \times \{2,4,6,8\}. Now we need sets E_{e ,1}, \dots , E_{e, T} and subsets B_{f_1 ,i}, B_{f_2, i}, B_{f_3, i} of V_{f_1 }, V_{f_2 }, V_{f_3 } = \mathbb{N}\times \mathbb{N} such that

\chi_{E_{e,i}} (x_e) = \chi_{B_{f_1,i}}(x_{f_1}) \times \chi_{B_{f_2,i}}(x_{f_2})\times \chi_{B_{f_3,i}}(x_{f_3}).

A little consideration will show that the same trick employed above works here as well, because our set E_e can be considered to be a complete tripartite 3-uniform hypergraph on the parts \{1,\dots ,10\}, \{1,3,5,7,9\} and \{2,4,6,8\}. It seems clear that we will need to be slightly more creative in order to get higher complexity.

Consider now the following subset of the above set V_e:

E_e =\{ (1,3,2), (1,3,4), (2,5,7), (2, 5, 9), (4,7,6), (3,7,6), (3,7,5), (4, 7,5), (10, 9, 8), (10,9,1)\}.

This set can’t be written as a single Cartesian product of subsets of the sets we are considering, and seems slightly more random in character, so there is a hope that the complexity should be higher. Setting f_1 = (1,2), f_2 = (2,3) and f_3 = (1,3), we take

B_{1,1} = \{(1,3)\}, B_{2,1} = \{ (3,2), (3,4) \} , B_{3,1} = \{(1,2), (1,4)\}

B_{1,2} = \{(2,5)\}, B_{2,2} = \{ (5,7), (5,9) \} , B_{3,2} = \{(2,7), (2,9)\}

B_{1,3} = \{(3,7), (4,7)\}, B_{2,3} = \{ (7,5), (7,6) \} , B_{3,3} = \{(3,5), (3,6), (4,5),(4,6)\}

B_{1,4} = \{(10,9)\}, B_{2,4} = \{ (9,1), (9,8) \} , B_{3,4} = \{(10,1), (10,8)\}

(where the notation has been shortened in an obvious way). To avoid confusion, we note that there are elements x,y,z \in B_{1,i}, B_{2,i}, B_{3,i} such that \chi_{B_{1,i}}(x)\chi_{B_{2,i}}(y)\chi_{B_{3,i}}(z) = 1, but which are not the three projections of a single point x_e \in V_e, and these can therefore be ignored. We thus see that our set E_e has a complexity of at most 4.
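As a sanity check, here is a small Python sketch verifying that the four triples of B-sets above really do recover E_e (a triple (x_1, x_2, x_3) is a clique of part i when each of its three pairwise projections lies in the corresponding B-set):

E_e = {(1,3,2), (1,3,4), (2,5,7), (2,5,9), (4,7,6), (3,7,6), (3,7,5), (4,7,5), (10,9,8), (10,9,1)}

parts = [
    ({(1,3)},        {(3,2),(3,4)}, {(1,2),(1,4)}),
    ({(2,5)},        {(5,7),(5,9)}, {(2,7),(2,9)}),
    ({(3,7),(4,7)},  {(7,5),(7,6)}, {(3,5),(3,6),(4,5),(4,6)}),
    ({(10,9)},       {(9,1),(9,8)}, {(10,1),(10,8)}),
]

def cliques(B12, B23, B13):
    # Triples whose (1,2)-, (2,3)- and (1,3)-projections lie in B12, B23 and B13
    return {(x1, x2, x3)
            for (x1, x2) in B12
            for (y2, x3) in B23 if y2 == x2
            if (x1, x3) in B13}

recovered = set().union(*(cliques(*p) for p in parts))
print(recovered == E_e)                    # should print True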

Of course, the case of a 3-uniform hypergraph is quite easy to analyse this way, because two of the edges determine the third, and the difficulty of generating such examples will increase rapidly for higher r.

Intuitively, the example at least makes clear that the complexity of the hypergraph is somehow determined by how easily the set V_e can be described in terms of parts of a smaller “dimension”, and that highly random sets will have higher complexity (without specifying exactly what we mean by “random” here). The real application of this complexity is in hypergraph removal lemmas, which we’ll (possibly) discuss in another post.

Shameless self-promotion: If you enjoy Fourier series and Martin-Löf randomness (or Kolmogorov-Chaitin-Solomonoff randomness – take your pick), see my paper in APAL. For something similar, involving Carleson’s theorem and Schnorr randomness, see the paper by Franklin, McNicholl and Rute at https://arxiv.org/abs/1603.01778.

 

References:

  1. A relative Szemerédi theorem, David Conlon, Jacob Fox and Yufei Zhao
  2. Nonstandard Methods in Ramsey Theory and Combinatorial Number Theory, M Di Nasso, I Goldbring and M Lupini