There was an unfortunate YouTube video released recently claiming that \(1 + 2 + 3 + 4 + ... = -\frac{1}{12}\). This is simply not true. Here’s the proof presented in the video:

$$ \begin{align} \text{Let }S_1 &=1-1+1-1+1-1+... \\ \text{Let }S_2 &=1-2+3-4+5-6+... \\ \text{Let }S_3 &=1+2+3+4+5+6+... \end{align} $$

First, calculate \(S_1\):

$$ \begin{align} S_1 + S_1 = 1&-1+1-1+1-... \\ &+1-1+1-1+... \\ 2S_1 &= 1 \\ S_1 &= \frac{1}{2} \end{align} $$

Next, \(S_2\):

$$ \begin{align} S_2 + S_2 = 1&-2+3-4+5-6+... \\ &+1-2+3-4+5-... \\ S_2 + S_2 &= 1-1+1-1+1-1+... = S_1 \\ S_2 &= \frac{1}{4} \end{align} $$

And finally, \(S_3\):

$$ \begin{align} S_3-S_2=&1+2+3+4+5+6+... \\ -&1+2-3+4-5+6-... \\ =&0+4+0+8+0+12+... \\ =&4S_3 \\ S_3-\frac{1}{4}=&4S_3 \\ S_3=&-\frac{1}{12} \end{align} $$

This looks nice and simple, but there’s a lot of trickery going on. Before dissecting it, it’s worth seeing what the partial sums of these three series actually do.
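Here’s a minimal Python sketch (my own illustration, not from the video) that prints the first few partial sums of each series:

```python
# Partial sums of the three series from the video.
def partial_sums(term, count):
    sums, total = [], 0
    for n in range(1, count + 1):
        total += term(n)
        sums.append(total)
    return sums

print(partial_sums(lambda n: (-1) ** (n + 1), 8))      # S1: [1, 0, 1, 0, 1, 0, 1, 0]
print(partial_sums(lambda n: (-1) ** (n + 1) * n, 8))  # S2: [1, -1, 2, -2, 3, -3, 4, -4]
print(partial_sums(lambda n: n, 8))                    # S3: [1, 3, 6, 10, 15, 21, 28, 36]
```

The first oscillates forever, the second oscillates with growing amplitude, and the third marches straight off to infinity; none of them settle anywhere near \(\frac{1}{2}\), \(\frac{1}{4}\), or \(-\frac{1}{12}\). To see precisely why the manipulations above are illegitimate, let’s take several steps back into real analysis.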

First of all, there are multiple ways to define the sum of an infinite series. The most common is:

$$ \sum_{n=1}^{\infty}a_n=\lim_{N \to \infty}\sum_{n=1}^{N}a_n $$

Basically, we define a sequence of partial sums, and whatever value the partial sums converge to is the value of the infinite sum. This is the “standard” definition in the sense that if we use another definition, it’s usually mentioned explicitly, as in Cesàro summation, or Abel summation.
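For a series that genuinely converges, this definition behaves exactly as you’d expect. A small sketch, using \(\sum_{n=1}^{\infty}\frac{1}{2^n}=1\) as the example:

```python
# Partial sums of sum(1/2^n for n >= 1), which converges to 1.
total = 0.0
for n in range(1, 21):
    total += 1 / 2 ** n
    if n % 5 == 0:
        print(n, total)
# 5  0.96875
# 10 0.9990234375
# 15 0.999969482421875
# 20 0.9999990463256836
```

The partial sums get arbitrarily close to 1 and stay there, which is exactly what the definition of a limit (below) makes precise.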

But how about that first sum, \(1-1+1-1+1-...\)? Its partial sums bounce between 1 and 0, so what is the limit of that sequence of partial sums? Is it 0, or 1, or anything in between? Does it even exist? In high school, they don’t teach you the actual definition of a limit, so it’s hard to say. Luckily, there is a very precise definition:

$$ \lim_{n \to \infty}x_n=x \iff \forall \epsilon > 0, \exists N \text{ s.t. } k > N \Rightarrow \left|x_k-x\right| < \epsilon $$

In more verbose English: “the limit of a sequence is x” means that, no matter how small an \(\epsilon\) we choose, we can find some point in the sequence, N, after which all values in the sequence are within \(\epsilon\) of x.

Clearly then, the sequence 1,0,1,0,1,0… does not converge to one half, nor to any other value. If we choose \(\epsilon=0.4999\), then no values in the sequence (\(x_k\) in the above definition) are within \(\epsilon\) of one half: every value sits at distance exactly 0.5 from it.
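To make that concrete, here’s a small sketch that checks the definition directly:

```python
# Check |x_k - 1/2| < epsilon for the partial sums of 1 - 1 + 1 - 1 + ...
epsilon = 0.4999
partial = 0
for k in range(1, 11):
    partial += (-1) ** (k + 1)   # k-th partial sum: 1, 0, 1, 0, ...
    print(k, partial, abs(partial - 0.5) < epsilon)
# Every line prints False: each partial sum is exactly 0.5 away
# from 1/2, so no choice of N can satisfy the definition.
```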

In cases like this, it is said that the limit does not exist, and so the infinite sum also does not exist. In order to even start to do math with the sum of an infinite series, you must first prove that it exists. \(S_1+S_1\) is meaningless until you have proven that \(S_1\) exists. In fact, it does not, so the proof is dead in the water.
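That said, the 1/2 isn’t pulled entirely out of thin air. Under Cesàro summation (mentioned earlier), where the sum is instead defined as the limit of the running averages of the partial sums, the series \(1-1+1-1+...\) genuinely sums to \(\frac{1}{2}\). A quick sketch for illustration:

```python
# Cesàro sum: the limit of the averages of the partial sums.
partial, total_of_partials = 0, 0
N = 10000
for k in range(1, N + 1):
    partial += (-1) ** (k + 1)    # partial sums: 1, 0, 1, 0, ...
    total_of_partials += partial
print(total_of_partials / N)      # 0.5
```

But even under Cesàro summation (or the stronger Abel summation), \(1+2+3+4+...\) has no sum: its partial sums grow without bound, and so do their averages. Only the zeta-function detour below assigns it \(-\frac{1}{12}\).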

However, this idea isn’t coming from nowhere. There is some nugget of truth in the original video. It has to do with the Riemann zeta function. Consider this function:

$$ f(s)=\sum_{n=1}^{\infty}\frac{1}{n^s} $$

Most high school calculus classes at some point prove that for real \(s \leq 1\), the infinite sum does not converge, and so this function does not exist at those points. Its domain is only \(s > 1\).
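To see that boundary numerically, compare the partial sums at \(s=2\) (which converge to \(\frac{\pi^2}{6} \approx 1.6449\)) with those at \(s=1\), the harmonic series:

```python
# Partial sums of sum(1/n^s): s = 2 converges, s = 1 does not.
for s in (2, 1):
    total = 0.0
    for n in range(1, 100001):
        total += 1 / n ** s
    print(s, round(total, 4))
# 2 1.6449   -- essentially pi^2 / 6 already
# 1 12.0901  -- still climbing; grows like ln(N) without bound
```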

However, there is a technique called analytic continuation that can be used to extend functions to larger domains, over which they would not normally be defined. First, the function is represented as a power series about some point \(x_0\):

$$ f(x)=\sum_{n=0}^{\infty}a_n(x-x_0)^n $$

This representation is typically valid only within some “radius of convergence” around the point \(x_0\). However, that radius can often extend past the boundary of the original function’s domain; i.e., the power series will still converge at some points that aren’t in the domain of the original function. In this way, we can “continue” the function to regions outside of its domain, and define a new function over this extension. We say that this new function is the “analytic continuation” of the original function.
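The classic textbook example makes this concrete. The geometric series converges only for \(|x|<1\), where it has a closed form:

$$ f(x)=\sum_{n=0}^{\infty}x^n=\frac{1}{1-x} \quad \text{for } |x|<1 $$

The function \(g(x)=\frac{1}{1-x}\), however, makes sense for every \(x \neq 1\) and agrees with \(f\) wherever \(f\) is defined, so \(g\) is the analytic continuation of \(f\). And note the trap: \(g(2)=-1\), but that does not mean \(1+2+4+8+...=-1\); the series simply diverges at \(x=2\). The continuation and the series agree only where both exist.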

The Riemann zeta function is defined as the analytic continuation of the above \(f(s)\):

$$ \zeta(s) = \text{ the analytic continuation of } f(s)\\ \text{where }f(s)=\sum_{n=1}^{\infty}\frac{1}{n^s} $$

The continuation can be pushed much further than a single disk of convergence: by chaining overlapping power series together (along with other, more direct formulas), the Riemann zeta function ends up defined everywhere in the complex plane except \(s=1\).

Through some much more difficult math (more than enough to fill another blog post), it is known that: \(\zeta(-1)=-\frac{1}{12}\)
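If you want to check this numerically, the mpmath Python library implements the analytically continued zeta function (a quick sketch, assuming mpmath is installed):

```python
from mpmath import zeta

print(zeta(2))    # 1.64493406684823   == pi^2/6, matching the series above
print(zeta(-1))   # -0.0833333333333333 == -1/12, where the series diverges
```

At \(s=2\) the continuation agrees with the honest infinite sum; at \(s=-1\) there is no honest infinite sum for it to agree with.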

And if you confuse a function with its analytic continuation, you might be tempted to conclude that: \(\zeta(-1)=f(-1)=1+2+3+4+...=-\frac{1}{12}\)

However, a function is only equal to its analytic continuation over their shared domain, and -1 is not in the domain of \(f\)! That’s the key issue here. Don’t confuse a function with its analytic continuation, be careful with domains, and don’t gloss over the rigorous definitions when dealing with infinity.

What’s disappointing is that I suspect the physicists in the video already know all this, but chose to ignore it just for the “flashy” outcome.