A question from my Uncle Bill, who’s one of my great heroes and a philosopher of some standing:

How does one get from 1 to 2 other than saying by diktat that 2 is the name we give to 1+1?

Now, I’m far from the world’s greatest expert on the philosophy of number. The purer you get with maths, the shakier the ground I’m on: I distinctly remember comparing my year studying in France with the maths I was used to at St Andrews by saying “I’m used to questions like ‘How fast is the train going?’; now the questions are all ‘Prove there exists a railway.’”

So, philosophy of number is a minefield for me. I’m not an authority, and I expect to be more wrong than right in my answer. I look forward to being corrected in the comments!


For my purposes, mathematics is a language. It has its own grammar and vocabulary, which are much more stringent than English's; there are conventions and exceptions, and a good theorem plays much the same role as a work of literature. I tend to think of $1 + 1 = 2$ as just the plume de ma tante of mathematics as a foreign language. You can certainly drill down into the meaning and scrape out the origins of the grammar, but it probably makes more sense to go the other way and figure out how to order crêpes or locate a monkey in a forest.

Sadly, for a mathematical ideal, $1+1=2$ relies on a certain amount of diktat and convention. You need at least one of the Peano axioms (every natural number has a successor), a convention that the smallest whole number is called 1, a convention that the next whole number up is called 2, and an idea that adding the smallest whole number to itself gives the next whole number up.
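If you like to see the diktat laid bare, here's a rough sketch in Lean 4 - my choice of tool, not anything official, and the names `N`, `one`, `two` and `add` are all mine. It follows the usual modern convention of starting the count at 0 rather than 1, but the shape of the argument is the same: pick a successor axiom, define addition by recursion, give names to a couple of successors, and $1+1=2$ drops out by pure computation.

```lean
-- A sketch, not gospel: Peano-style naturals built from scratch.
-- Everything here (N, add, one, two) is my own naming.
inductive N where
  | zero : N
  | succ : N → N

open N

-- Addition defined by recursion on the second argument.
def add : N → N → N
  | a, zero   => a
  | a, succ b => succ (add a b)

-- "1" and "2" are just names for particular successors: the diktat bit.
def one : N := succ zero
def two : N := succ one

-- With those definitions in place, 1 + 1 = 2 reduces by computation:
-- add one one = succ (add one zero) = succ one = two.
theorem one_plus_one : add one one = two := rfl
```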

The statement $1 + 1 = 2$ isn't always true, though. It's true in many areas - complex, real, rational and natural numbers, for a few examples - but not in modulo-2 arithmetic (where $1 + 1 = 0$), or in structures where there's no element called 1 in the first place.
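The mod-2 case fits the same sort of sketch (again Lean 4, again with names - `Z2`, `add2` - invented for the purpose; treat it as a separate file from the sketch above, since the constructor names overlap): an arithmetic with only two elements, where adding 1 to 1 wraps back round to 0.

```lean
-- A sketch of the mod-2 counterexample: two elements, and 1 + 1 = 0.
inductive Z2 where
  | zero : Z2
  | one  : Z2

open Z2

def add2 : Z2 → Z2 → Z2
  | zero, b    => b       -- adding zero changes nothing
  | one,  zero => one
  | one,  one  => zero    -- the wrap-around: 1 + 1 = 0

example : add2 one one = zero := rfl
```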

Before the First World War, Bertrand Russell and Alfred North Whitehead embarked on a grand project: to put mathematics on a rigorous, logical foundation so you could (in principle) take any true statement and prove it from first principles. A brilliant idea, with just two flaws: one, it was impossible, and two, it was impossible. ((As Kryten says, technically that’s only one flaw, but it’s such a big one that it’s worth mentioning twice.)) Gödel’s incompleteness theorem knocked Russell and Whitehead’s Principia Mathematica out of the water.

Not before R&W had made some progress, though: starting from the idea of an empty set, they developed the idea of ‘one’ and, by page 379, had proved that $1 + 1 = 2$ in the natural numbers. ((In fact, it took longer; they hadn’t defined addition yet. That would have to wait until 90-odd pages into Volume II.)) Cards on the table: I’ve not read Principia. I don’t know how they proved it. I can’t read the notation on the Wikipedia page about it; if anyone can turn it into something I can understand, I’d love to run a guest post on it.

I’m not sure that’s shed much light on the matter. As with most questions about why things are the way they are in maths, I end up turning to von Neumann: “In mathematics, you don’t understand things; you just get used to them.”