“That’s going to be a quick one,” I thought. It did not turn out that way.

An email from friend and hero Rob Eastaway, asking if I can run a Monte Carlo simulation for him. The problem:

Imagine you’re playing a game like Snakes and Ladders but without snakes or ladders. Each move, you roll a six-sided die and move on that many squares. For each square, what’s the probability you land on it?

Of course I can! And I can check it against a couple of Rob’s observations:

  • Because you can only land on square 1 by rolling a 1 on your first go, the probability of landing on 1 is $\frac{1}{6}$.
  • In the long run, you move 3.5 squares each go, so the probability for high-numbered squares should be close to $\frac{2}{7}$.

So, I set up a simulation, ran it 10,000,000 times, and estimated the probability to 1dp as requested. Job done, right?
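Here's roughly what that simulation looks like (a sketch, not my actual code — the board length and trial count here are my own choices, and I've scaled the trials down from ten million):

```python
import random

def landing_probabilities(squares=100, trials=200_000):
    """Estimate, for each square up to `squares`, the probability
    that a walk rolling a fair d6 from square 0 ever lands on it."""
    hits = [0] * (squares + 1)
    for _ in range(trials):
        position = 0
        while position < squares:
            position += random.randint(1, 6)  # one die roll
            if position <= squares:
                hits[position] += 1
    return [h / trials for h in hits]

probs = landing_probabilities()
# Rob's sanity checks: square 1 should be near 1/6, high squares near 2/7
print(round(probs[1], 1), round(probs[90], 1))
```

With enough trials, both of Rob's observations drop out of the estimates.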

Well, it wouldn’t be worth writing up if I’d stopped there.

A Monte Carlo simulation? We can do better than that.

Rather than simulate it, we can calculate the probability for each square.

We can set up a recurrence relation:

$\begin{equation} P(n) = \begin{cases} 0 & n < 0 \\
1 & n = 0 \\
\frac{1}{6}\left( P(n-1) + P(n-2) + P(n-3) + P(n-4) + P(n-5) + P(n-6)\right) & n > 0 \end{cases} \end{equation}$

In principle, you can solve that exactly for as many values of $n$ as you want – although I took the lazy way out and trusted Python’s floats to be good enough for Rob.
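The lazy-float version is only a few lines (a sketch; the names are mine):

```python
def exact_probs(n_max):
    """P(n) from the recurrence: P(0) = 1, and each later P(n) is the
    mean of the previous six values (treating squares before 0 as 0)."""
    p = [0.0] * (n_max + 1)
    p[0] = 1.0
    for n in range(1, n_max + 1):
        p[n] = sum(p[max(0, n - 6):n]) / 6
    return p

p = exact_probs(100)
print(p[1])    # exactly 1/6, up to float rounding
print(p[100])  # very close to 2/7
```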

But did I catch something out of the corner of my eye?

Hello, sensei.

My powers are fading.

Well, fading is your power. What have you got for us?

When you see a recurrence relation, you should think “generating function.”

Of course!

Before your first throw, you’re at square 0 with probability 1, so we’ll get $G_0(x)=1$.

I know better than to question.

Now, each time we throw the die, we multiply the existing probability distribution by $g(x) = \frac{1}{6} \left(x + x^2 + x^3 + x^4 + x^5 + x^6\right)$.

Oh yes – the coefficient of $x^n$ is the probability of ending up on square $n$.

Indeed. And so $G_1(x)$ is exactly $g(x)$, $G_2(x)$ is $g(x)^2$ and so on.
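(That multiplication is just polynomial convolution, which is easy to check numerically — here's a quick sketch with coefficient lists, where the index is the square number:)

```python
def poly_mul(a, b):
    """Multiply two polynomials held as coefficient lists."""
    out = [0.0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

g = [0.0] + [1/6] * 6   # g(x) = (x + x^2 + ... + x^6)/6
g2 = poly_mul(g, g)     # G_2(x) = g(x)^2
# Coefficient of x^2: the only way to reach square 2 in two throws is 1+1
print(g2[2])  # 1/36
# Coefficient of x^7: six ways (1+6, 2+5, ..., 6+1), so 6/36 = 1/6
print(g2[7])
```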

So you have the probability distribution of where you will be after any number of throws – and you have to add them all up to get Rob’s distribution?

Correct! I shall – against all my principles – drop the $(x)$ and say $G = G_0 + G_1 + G_2 + \dots$, or $G= \sum_{n=0}^\infty g^n$.

That’s a geometric series!

It is. $G = \frac{1}{1-g}$. But we can do better: $g$ is itself a geometric series – it sums to $\frac{x\left(1-x^6\right)}{6(1-x)}$.

I can’t say that that looks pleasant.

It isn’t. $G$ simplifies to $\frac{6(1-x)}{6(1-x)-x(1-x^6)}$, or $\frac{6(1-x)}{6 - 7x + x^7}$.
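And that closed form really does reproduce the recurrence: expanding $\frac{6(1-x)}{6 - 7x + x^7}$ as a power series by polynomial long division (a quick sketch) gives back $P(0)=1$, $P(1)=\frac{1}{6}$, and the $\frac{2}{7}$ limit:

```python
def G_coeffs(n_terms):
    """Power-series coefficients of 6(1-x) / (6 - 7x + x^7),
    computed by polynomial long division."""
    num = [6.0, -6.0]                      # 6 - 6x
    den = [6.0, -7.0, 0, 0, 0, 0, 0, 1.0]  # 6 - 7x + x^7
    rem = num + [0.0] * (n_terms + len(den))
    coeffs = []
    for k in range(n_terms):
        c = rem[k] / den[0]    # next coefficient of the quotient
        coeffs.append(c)
        for i, d in enumerate(den):
            rem[k + i] -= c * d  # cancel that term of the remainder
    return coeffs

cs = G_coeffs(101)
print(cs[0], cs[1], cs[100])  # 1, 1/6, and nearly 2/7
```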

I would not put such a thing into Wolfram|Alpha. I have not seen any Wolfram|Alphas around here. Don’t ask me any more questions.