High school deconvolution

[Animated image: the original and deblurred aurora photos, alternating]

One of my favorite formulas is the factorization of 1-x^n:

1 - x^n = (1 - x)(1 + x + x^2 + \dots + x^{n-1}).

This is a fairly ubiquitous formula that most people would have seen in high school, perhaps in various guises. A common special case is (1-x)(1+x) = 1 - x^2, which is itself a special case of the “difference of squares” formula (a-b)(a+b) = a^2 - b^2.
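
If you'd like to double-check the general formula, here's a quick SymPy verification for one concrete exponent (the choice n = 7 is arbitrary):

    # Check 1 - x^n = (1 - x)(1 + x + ... + x^(n-1)) for n = 7.
    from sympy import symbols, expand

    x = symbols('x')
    n = 7
    rhs = (1 - x) * sum(x**k for k in range(n))
    print(expand(rhs) - (1 - x**n))   # prints 0: the two sides agree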

The irreducible factors of 1 + x + x^2 + \dots + x^{n-1} are the cyclotomic polynomials, which have many interesting links to number theory.

In other areas, substituting something nice for x leads to interesting formulas/results. When x is some prime power p^e (usually written q), the quantity

\frac{1-q^n}{1-q} = 1 + q + \dots + q^{n-1}

is the q-analog of n. The reason for the name is that as we let q \to 1, we get

\lim_{q \to 1} \frac{1-q^n}{1-q} = \lim_{q \to 1} \left(1 + q + \dots + q^{n-1}\right) = n,

since each of the n terms tends to 1.

This observation forms the starting point of the study of combinatorial q-analogs. Toggling between q = 1 and q = p^e allows one to view sets as vector spaces over the “field with one element”.

However, I won’t be writing about that today. Instead, this post is about a curious connection between that simple formula above and deconvolution.

Geometric series and the Neumann series

For this purpose, let’s start by letting x be small (i.e. |x| < 1). Then x^n \approx 0 for large n, so

(1-x)(1+x+\dots + x^{n-1}) = 1 - x^n \approx 1.

Letting n \to \infty, we obtain the geometric series 1 + x + \dots + x^n + \dots, which is the Taylor series expansion for \frac{1}{1-x}.

Going one step further, replacing x with a matrix M, we get the Neumann series

I + M + M^2 + \dots + M^n + \dots

If M is suitably small in some sense (so that M^n \approx 0 for large enough n), then this infinite series converges to the inverse of I - M, where I is the identity matrix. By truncating the series, we get

(I - M)(I + M + M^2 + \dots + M^{n-1}) = I - M^n,

which approximates I for suitably large n. If, further, M is nilpotent (i.e. M^n = 0 for some n) then I + M + \dots + M^{n-1} is the exact inverse of I - M.
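
Here's a quick numerical illustration of the truncated series (the matrix is just a random example, shrunk so that its spectral radius is below 1, which guarantees M^n \to 0):

    import numpy as np

    rng = np.random.default_rng(0)
    M = rng.standard_normal((5, 5))
    M *= 0.5 / np.abs(np.linalg.eigvals(M)).max()  # force spectral radius 0.5

    I = np.eye(5)
    truncated = sum(np.linalg.matrix_power(M, k) for k in range(60))
    print(np.allclose(truncated, np.linalg.inv(I - M)))  # True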

The Neumann series thus provides an iterative way of solving linear systems whose matrix can be written as I - M for some suitably small M. If we have the equation

(I - M)x = b

and we’re trying to solve for x given b, we can define the recurrence:

x_0 = b

x_{i+1} = Mx_i + b.

A short induction shows that this is equivalent to the recursion

x_{i+1} = x_i + M^{i+1}b,

which makes it obvious that

x_n = (I + M + \dots + M^n) b = (I - M^{n+1})x,

where the second equality uses b = (I - M)x together with the truncated-series identity above.

When n is large enough, M^{n+1} will be small, and x_n will approximate the solution x.
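
In code, the whole solver is just a few lines (a minimal sketch under the same “M is small” assumption; the test system is again a random example):

    import numpy as np

    def neumann_solve(M, b, n_iter=50):
        """Solve (I - M) x = b via x_{i+1} = M x_i + b, starting from x_0 = b."""
        x = b.copy()
        for _ in range(n_iter):
            x = M @ x + b
        return x

    rng = np.random.default_rng(1)
    M = rng.standard_normal((4, 4))
    M *= 0.5 / np.abs(np.linalg.eigvals(M)).max()  # ensure M is "small"
    b = rng.standard_normal(4)
    x = neumann_solve(M, b)
    print(np.allclose((np.eye(4) - M) @ x, b))     # True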

What has all this got to do with deconvolution?

Now suppose I want to record a signal (e.g. a photo) with some device (e.g. a camera). For simplicity, let’s say my signal is just an array of values:

[Image: the original signal (green), a simple array of values]

Unfortunately, during my measurement process, my signal gets “blurred”, causing me to record this instead:

[Image: the recorded signal (blue), a blurred version of the original]

In technical terms, my recorded signal is the convolution of the original signal with some point-spread function. To recover the true green signal from the corrupted blue one, I’ll need to carry out deconvolution.

But it turns out that in this (grossly oversimplified) case, we already know how to deconvolve this signal!

Let’s say I have some additional information about the true signal: for example, I may expect the original signal to have only one peak, while the corrupted signal seems to have two. I may thus guess that my recorded signal is the sum of the original signal and a translated (and possibly scaled) copy of it, such as this:

[Image: the recorded signal (blue) decomposed as the original signal (green) plus a translated, scaled copy (red)]

As you can see, the blue signal is indeed the sum of the green and red signals, and the translation is 2 units to the right. Of course, if I know the red signal, then I can subtract it from the blue signal to get the green signal. But what if all I have is the blue signal?

If we treat the (unknown) green signal as a vector x, then translation to the right by 2 units is given by this matrix

T = \left(\begin{array}{rrrrrrr} 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 & 0 & 0 \end{array}\right).

This is a nilpotent matrix: T^4 = 0, as can be verified by repeated matrix multiplication, or by observing that translating a 7-entry signal 2 units to the right 4 times shifts it by 8 places, past the end of the array, leaving all zeros.
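
In NumPy, T is just an offset identity matrix, and the nilpotency is easy to confirm:

    import numpy as np

    T = np.eye(7, k=-2)   # ones two places below the diagonal: shift right by 2
    print(np.linalg.matrix_power(T, 3).any())   # True:  T^3 is not yet zero
    print(np.linalg.matrix_power(T, 4).any())   # False: T^4 = 0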

So we posit that our blue signal is given by b = (I + cT)x, where I is the identity matrix and c is some scaling constant that I have to guess. Letting M = -cT, we can recover our original signal via the iterative procedure above:

[Image: successive iterations of the deblurring procedure converging to the green signal]

At each iteration, we’re adding M^{i+1}b to x_i. Here’s what each M^i b looks like; they are essentially scaled, translated copies of the blue signal:

[Image: the individual terms M^i b, which are scaled, translated copies of the blue signal]
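
Here's the whole toy deconvolution, end to end (the signal values and the scaling constant c = 0.6 are made-up stand-ins for the plots above):

    import numpy as np

    T = np.eye(7, k=-2)                       # translate right by 2 units
    c = 0.6                                   # assumed known, for now

    x_true = np.array([0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0])  # "green" signal
    b = (np.eye(7) + c * T) @ x_true                        # recorded "blue" signal

    M = -c * T                                # so that b = (I - M) x_true
    x = b.copy()
    for _ in range(4):                        # T^4 = 0, so 4 steps are exact
        x = M @ x + b
    print(np.allclose(x, x_true))             # True: perfect reconstruction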

It might seem strange that to recover the original signal, we take even more corrupted copies of the corrupted signal. But this is precisely what’s going on in the formula

\frac{1}{1-x} = 1 + x + x^2 + x^3 + \dots

The formula says that to invert 1 - x, we just sum up powers of x (infinitely many of them, too!).

Back to reality
In this case, we obtained perfect reconstruction of our original green signal. This was only possible because I used the correct value of c. Here’s what would’ve happened if I didn’t know the true value of c:

[Image: deblurring iterations using an incorrect scaling constant c]

It’s pretty close, but not a perfect reconstruction of the original signal.
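
Rerunning the sketch above with a wrong guess for the constant (say 0.5 instead of the true 0.6) shows exactly this behavior:

    import numpy as np

    T = np.eye(7, k=-2)
    x_true = np.array([0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0])
    b = (np.eye(7) + 0.6 * T) @ x_true        # blurred with the true c = 0.6

    x = b.copy()
    for _ in range(4):
        x = -0.5 * T @ x + b                  # deconvolve with the wrong c = 0.5
    print(np.abs(x - x_true).max())           # small but nonzero residual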

“Naïve deconvolution” as I described above looks good on paper, but will seldom work with real data. Recall that to successfully get the green signal, I had to

  • Guess that the corrupted data was due to a simple translation
  • Guess that the translation was by 2 units to the right (by looking at the peaks)
  • Guess the correct scaling factor c

Essentially, I had to successfully guess that the point-spread function looked like this:

[Image: the point-spread function, a unit spike plus a scaled spike 2 units to its right]

This is about as simple as point-spread functions go. If signals were photos, the blue signal might look like this:
[Image: aurora photo in which the stars appear doubled]
If you look closely, you can see that the stars are “doubled”. This was caused by me accidentally knocking into my camera tripod while the photo was being taken.

The double stars give a pretty good gauge of what the corrupting transformation was: a very slight rotation, centered somewhere above the top-left corner of the photo. Even knowing this, I had to carry out many rounds of trial and error to find the rotation parameters, as well as the intensity scaling factor c (the double stars are about equally bright, so c \approx 1). Even then, as the residual streaks at the top-right corner of the next image indicate, my parameters are still off.

The result is an image that is sharper than the original, but worse in every other respect:
[Image: the deblurred aurora photo]

In the deconvolved image, the stars have been contracted (not quite perfectly) to points. But there are lots of “wrinkles” in the image. This is because instead of obtaining

b = (I - M)x,

what I got was a noisy version b + \epsilon. At each iteration of the deconvolution procedure, the noise \epsilon gets multiplied and propagated by M!
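
Even the toy example shows this: add a little noise to b before iterating, and the reconstruction error is exactly (I + M + M^2 + M^3)\epsilon, i.e. scaled, shifted copies of the noise (the noise level here is an arbitrary choice):

    import numpy as np

    rng = np.random.default_rng(2)
    T = np.eye(7, k=-2)
    x_true = np.array([0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0])
    b_noisy = (np.eye(7) + 0.6 * T) @ x_true + 0.05 * rng.standard_normal(7)

    x = b_noisy.copy()
    for _ in range(4):
        x = -0.6 * T @ x + b_noisy
    print(np.abs(x - x_true).max())   # error: noise propagated by powers of M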

There are also vertical bars at the bottom left of the image. These arise because of the artificial truncation that occurs at the boundary of the image. To remove these bars, one would have to guess what the image looks like outside the photo.

Even in the absence of noise or truncation, it is often the case that the point-spread function results in an M that is not “small enough”. Recall that for our method to work, we needed M^n \to 0 as n \to \infty. However, many point-spread functions (such as Gaussian blur) result in matrices M whose powers M^n blow up as n \to \infty. Using this iterative method will only make the image worse.
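
A quick way to test for this in advance: M^n \to 0 precisely when the spectral radius of M (the largest absolute value among its eigenvalues) is below 1. For example:

    import numpy as np

    def neumann_converges(M):
        # M^n -> 0 iff every eigenvalue of M has absolute value < 1
        return np.abs(np.linalg.eigvals(M)).max() < 1

    print(neumann_converges(-0.6 * np.eye(7, k=-2)))  # True: nilpotent shift
    print(neumann_converges(1.5 * np.eye(2)))         # False: powers blow up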

Difficulties like these are why people can spend their lives studying various methods of deconvolution. But if you can deconvolve well, you’re practically guaranteed a job in image processing, finance, microscopy, seismology and basically any field that requires signal processing!

Here are the two images again, for comparison:
[Animated image: the original and deblurred aurora photos, alternating]

Not great, but not too shabby for a humble high school formula!
