The Intelligent Design folk tell you that complexity requires a designer.
The Richard Dawkins crowd tell you that complexity must evolve from simplicity.
I claim they’re both wrong, because the natural numbers, together with the operations of arithmetic, are fantastically complex, but were neither created nor evolved.
I’ve made this argument multiple times, in The Big Questions, on this blog, and elsewhere. Today, I aim to explain a little more deeply why I say that the natural numbers are fantastically complex.
Here’s one way to think about simplicity versus complexity: Simple things have short descriptions; complex things have only long descriptions. A string of a million zeros is very simple, because I can describe it in six words: “A string of a million zeros”. A string of a million random digits is complex, because there is no way to describe it that is much shorter than listing the digits themselves.
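One way to make “length of description” concrete (a rough sketch of the Kolmogorov-complexity idea, not anything the argument depends on) is to measure the shortest computer program that prints the string:

```python
import random

# A string of a million zeros has a very short description:
# the few characters of code below reproduce it exactly.
def million_zeros() -> str:
    return "0" * 1_000_000

# A string of a million random digits has (almost surely) no
# description much shorter than the string itself: a program
# that reproduces it essentially has to quote it verbatim.
random.seed(0)  # arbitrary seed, for reproducibility
million_random = "".join(random.choice("0123456789") for _ in range(1_000_000))

# Same length, wildly different description lengths.
print(len(million_zeros()), len(million_random))  # 1000000 1000000
```

The two strings are the same length; what differs is the length of the shortest recipe that generates each one.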
Now what about the system of natural numbers? To first appearances, there’s a very simple description: Start with 0, then add 1, then add 1 again, keep doing this forever, and those are the natural numbers. Unfortunately, “keep doing this forever” is a little vague, and the complexity comes in when you try to make that precise.
So if you were setting out to give a complete description of the natural numbers, where would you start? Probably here:
- We have a number called zero, and then every number has a successor.
But that description fits a lot of things besides the natural numbers; it also fits, for example, the integers (the integers, unlike the natural numbers, include negatives). Here’s an attempt to fix that:
- We have a number called zero, and then every number has a successor, and zero is not the successor of any number.
Better, but still no good; this fails to rule out a system where 1 follows 0, 2 follows 1, 3 follows 2, and then 1 follows 3 again, so that the numbers run around in a four-number loop.
To fix that, we have to add a clause specifying that no two numbers (such as 0 and 3) have the same successor. But even now, we’ve only just begun.
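In logical notation (a sketch; the constant 0 and the function symbol S, for “successor”, already build in that zero exists and that every number has a successor), the two substantive clauses so far are:

```latex
\begin{align*}
&\forall x \;\; S(x) \neq 0
  && \text{(zero is not the successor of any number)} \\
&\forall x \, \forall y \;\; \bigl( S(x) = S(y) \rightarrow x = y \bigr)
  && \text{(no two numbers have the same successor)}
\end{align*}
```

In the four-number loop described above, S(0) = S(3) = 1, so the second axiom fails, which is exactly how adding it rules the loop out.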
We still haven’t ruled out the possibility of infinite gaps between numbers. For all we can tell from our description so far, the number system might start 0, 1, 2, 3, … and then, somewhere beyond an infinite gap, pick up again with …, N−2, N−1, N, N+1, …, with infinitely many numbers in between 3 and that very large number N. How can we rule that out?
This one is not so easy. We’d like to say that all gaps between numbers are finite. But how do we define “finite”? Usually we say a number is finite if it’s part of the set of natural numbers; and since the natural numbers are exactly what we’re trying to describe, that’s circular. Or to put this another way: We’d like to say that no matter where you start (say at N), you can’t count backward forever; eventually you’ve got to hit a stopping point. But what does it mean to count backward forever? It means counting back more than a natural number of steps. There’s that circularity again.
What we really really need, it turns out, is to add a clause like this to our description:
- Every non-empty subset of the natural numbers has a smallest element.
This will solve our problem, because it implies, for example, that the set of numbers you can reach by counting backwards from N has a smallest element—eliminating the possibility of that infinite gap.
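In symbols, the new clause reads (a sketch; note that the variable X ranges over sets of numbers, not over numbers, which is precisely what will cause trouble below):

```latex
\forall X \subseteq \mathbb{N} \;\; \Bigl(\, X \neq \varnothing \;\rightarrow\; \exists\, m \in X \;\, \forall\, n \in X \;\; m \le n \,\Bigr)
```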
But this assumption, stated in this way, opens a can of worms that almost nobody wants to open. Here’s why: For the first time, we’ve been forced to talk about sets of natural numbers, as opposed to natural numbers themselves, and even to talk about all of those sets at once. In technical jargon, we’ve left the world of first-order logic and entered the world of second-order logic. But that’s a very strange world indeed. In ordinary (first-order) logic, we have a small number of rules of inference that allow us to proceed, for example, from “Socrates is a man” and “All men are mortal” to “Socrates is mortal”. But in second-order logic, not only are the rules of inference not finite; they cannot even be printed out (even in an infinite amount of time) by any computer. That’s why the great logician Willard Van Orman Quine insisted that second-order logic is not logic at all, and why mathematicians usually prefer to avoid it.
All is not lost, though. Instead of adding one second-order axiom, we can add infinitely many first-order axioms, viz:
- The set of odd numbers has a smallest element.
- The set of numbers greater than 7 has a smallest element.
- The set of numbers that can be reached by counting backward from 100 has a smallest element.
And so on.
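All of these axioms instantiate a single first-order template, often called the least-number principle: for each first-order formula φ(x) we add one axiom saying that if some number satisfies φ, then a smallest such number exists (a sketch, writing y &lt; x for “y comes before x”):

```latex
\exists x \, \varphi(x) \;\rightarrow\; \exists x \, \Bigl( \varphi(x) \,\wedge\, \forall y \, \bigl( y < x \rightarrow \neg \varphi(y) \bigr) \Bigr)
```

Taking φ(x) to be “x is odd” yields the first axiom on the list, taking it to be “x &gt; 7” yields the second, and so on: one axiom per formula, infinitely many in all.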
Okay, our description of the natural numbers just got infinitely long, but at least it’s infinitely long in a simple sort of way. We’ve added an infinite number of axioms, but they all fit the same simple pattern—a pattern that you could easily train your computer to recognize.
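To illustrate what “a pattern your computer could recognize” might look like, here is a toy recognizer (my illustration, not the author’s; it handles only the “greater than N” family of axioms, whereas a real recognizer would accept one axiom for every first-order formula):

```python
import re

# Toy recognizer for one infinite family of axioms:
# "The set of numbers greater than N has a smallest element."
AXIOM_TEMPLATE = re.compile(
    r"^The set of numbers greater than \d+ has a smallest element\.$"
)

def is_axiom(sentence: str) -> bool:
    """Return True exactly when the sentence instantiates the template."""
    return AXIOM_TEMPLATE.match(sentence) is not None

print(is_axiom("The set of numbers greater than 7 has a smallest element."))  # True
print(is_axiom("The set of numbers greater than 7 has a largest element."))   # False
```

A finite program accepts infinitely many axioms: that is the sense in which an infinitely long description can still be simple.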
Unfortunately, though, we still have a long way to go to get to a full description of arithmetic. First, we have to add rules for addition and multiplication. (If we don’t do this, then we won’t be able to talk about interesting subjects like prime numbers.) Now we’ll want to add even more axioms. But here we come up against the content of Gödel’s incompleteness theorem: NO description suffices. No matter what axioms you add, your description will always fail to distinguish the natural numbers from infinitely many other structures. (Those other structures are usually called “non-standard models of arithmetic”.)
When I say that “NO description suffices”, you might reasonably ask what counts as a description. Here’s what counts: A description is some (possibly infinite) list of axioms that some computer program is capable of recognizing. So if, for example, you try to describe arithmetic by listing every true statement about it, I will cry foul, because no computer program is capable of recognizing every true statement about arithmetic. (This is not just an observation about the state of the art in computer programming. It is a theorem about all possible computer programs.)
That’s the sense in which arithmetic is fantastically complex. Not only do the natural numbers have no finite description; they have no description that is recognizable by any computer. If “simple” means “capable of a short description”, then the natural numbers are about as far from simple as you can get. Not only do they have no short description, they don’t even have an infinite description.
Other mathematical structures are simpler. Euclidean geometry, for example, can be fully described by a first-order theory, and there is a computer program that can distinguish true from false statements in that theory.
Likewise for the (first-order) theory of the real numbers. There are axioms for the real numbers that suffice to prove all true first-order statements about the real numbers, and there is a computer program that can distinguish the true statements from the false. In that sense, the real numbers are far simpler than the natural numbers. (There are still non-standard models of the real numbers, but through a first-order lens, they are indistinguishable from the real thing.)
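For a concrete instance (my example, resting only on Tarski’s classical theorem that the first-order theory of the real numbers is decidable): a decision procedure can mechanically certify first-order truths such as

```latex
\forall a \, \forall b \, \forall c \;\, \Bigl( a \neq 0 \;\rightarrow\; \bigl( \exists x \;\, a x^{2} + b x + c = 0 \;\leftrightarrow\; b^{2} - 4ac \ge 0 \bigr) \Bigr)
```

Over the natural numbers, by contrast, even the question of whether a given polynomial equation has a solution is algorithmically undecidable (Matiyasevich’s theorem, settling Hilbert’s tenth problem).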
(You might be tempted to think that because the natural numbers sit inside the real numbers, they must be simpler. But of course any sequence of arbitrary complexity sits inside the very simple sequence 010101010101….., if you can pick and choose what to keep and what to throw away. Complexity can reside quite comfortably inside simplicity.)
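The point can be checked mechanically (a sketch; the function name and the length bound are my own choices): every finite binary string, however random, appears as a subsequence of the very regular string 010101…, provided you may pick and choose which positions to keep.

```python
import random

def embeds_in_alternating(s: str) -> bool:
    """Check that s (a string of "0"s and "1"s) is a subsequence of "0101...01"."""
    host = "01" * (2 * len(s) + 1)  # generous bound: 2 host characters per character of s suffice
    it = iter(host)
    # `ch in it` scans the iterator forward until it finds ch, so the
    # all(...) succeeds exactly when s is a subsequence of host.
    return all(ch in it for ch in s)

random.seed(2)  # arbitrary seed, for reproducibility
arbitrary = "".join(random.choice("01") for _ in range(1000))
print(embeds_in_alternating(arbitrary))  # True
```

The host string has a one-line description, yet it contains every finite binary string you like; sitting inside a simple structure confers no simplicity of its own.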
Did that help?