Sticky 13s is a game where you and your friends are each given 13 random playing cards. The quiz master then reads through a full deck of cards one by one, and each player removes that card from their 13-card hand (if they have it) until one person has no cards left, at which point they win the game. Now, what would you say if I asked you:

On this page, I will attempt to answer these questions (using a computer, since I'm not smart enough for the statistics).

I will also be checking my results against the analogous ones at qntm.org.

Things to think about:

In each game you are only one player, so in theory you should average only your own results. However, since your hand of 13 cards is chosen at random, does it matter if, over say 10,000 simulated games with your friends, you average not only your own remaining cards but all of your friends' remaining cards as well?

Does it matter if I average over only the losers (who are the only ones with cards remaining) or over all players, including the winner (zero cards remaining), and how would that affect the meaning of the result?

We will start by answering the first question: how many cards will you be left with on average at the end of the game, and how does this depend on the number of players? By simulating 200,000 Sticky 13s games for each data point, we can produce the following (fully interactive) graph:

FIGURE 1:

This shows that yes, the number of cards you would expect to have at the end of a game does depend on how many players are in the game. We can see that for 1 player the result is 0, which is to be expected, as that lone player always wins and so necessarily has 0 cards remaining.
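As a rough sketch of how each game could be simulated (assuming, as the rules imply once there are more than four players, that every player's 13-card hand is drawn independently from a full 52-card deck), a minimal Python version might look like:

```python
import random

def simulate_game(n_players, hand_size=13, deck_size=52):
    """Play one game of Sticky 13s; return each player's cards left
    when the first hand empties."""
    # Assumption: each player independently holds 13 of the 52 distinct
    # cards (necessary once there are more than 4 players).
    hands = [set(random.sample(range(deck_size), hand_size))
             for _ in range(n_players)]
    # The quiz master reads the full deck in a random order.
    for card in random.sample(range(deck_size), deck_size):
        for hand in hands:
            hand.discard(card)
        if any(not hand for hand in hands):
            break  # someone has emptied their hand and won
    return [len(hand) for hand in hands]

def average_cards_left(n_players, n_games):
    """Average cards left per player (winner included) over many games."""
    total = 0
    for _ in range(n_games):
        total += sum(simulate_game(n_players))
    return total / (n_games * n_players)
```

Note that `average_cards_left` here averages over all players, winner included; averaging over losers only is the alternative discussed above.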

You might be wondering whether I was justified in running 200,000 simulations for each setup. Is this enough to get close to the true average? Too many? By plotting the cumulative average of cards left against the number of simulations, we can produce the following for the case of 28 players in a game:

FIGURE 2:

As we can see in this graph, the average settles at what we assume to be its true value after about 25,000 simulations. An (x-axis) log plot of the same graph can be seen in the image carousel on the left.

With this knowledge, we can run future tests with only 25,000 simulations and still get results accurate to about 0.01 cards, which will speed up the process.
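For reference, the cumulative running average behind a plot like Figure 2 can be computed in a single pass. This sketch uses stand-in random per-game results; the real inputs would be the cards you're left with in each simulated game:

```python
import random

# Stand-in per-game results; in practice these are the cards left
# after each simulated game for a fixed number of players.
results = [random.randint(0, 13) for _ in range(25_000)]

cumulative_avg = []
running_total = 0
for n, cards_left in enumerate(results, start=1):
    running_total += cards_left
    cumulative_avg.append(running_total / n)

# cumulative_avg[n-1] is the average over the first n games; plotting it
# against n (optionally with a log x-axis) shows where it settles.
```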

Going back to our previous graph, which shows how the average number of cards a loser holds at the end of a game varies with the number of players, we can see a trend: this value might tend towards a constant at very high (infinite) player counts. My thinking: with infinite players, someone is guaranteed to win after the quiz master has drawn the first 13 cards, so I would hazard a guess that our value will tend towards 13/2 = 6.5 cards. This may be naive.
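As a quick check on that guess (under my assumption that a loser's hand is dealt independently of the order in which the quiz master reads the deck), the number of a loser's cards among the first 13 read follows a hypergeometric distribution, which would put the limit at 13 − 13·13/52 = 9.75 rather than 6.5 — though this reading of the limit may itself be naive:

```python
from math import comb

# In the infinite-player limit some hand matches the first 13 cards read,
# so the game ends after exactly 13 reads. A given loser's overlap with
# those 13 cards is hypergeometric: 13 cards read out of 52, with the
# loser holding 13 of them.
expected_overlap = sum(
    k * comb(13, k) * comb(39, 13 - k) / comb(52, 13) for k in range(14)
)
# The hypergeometric mean is 13*13/52 = 3.25, so expected_left is 9.75.
expected_left = 13 - expected_overlap
```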

Recreating figure 1 with a broader range of number of players could give us a hint as to whether the average number of cards will eventually tend towards a constant, whether that be 6.5 or otherwise.

10,000 players @ 25,000 simulations = 5.86633784749378 (simulation took 3 hours & 4 minutes)

100,000 players @ 10,000 simulations = 6.726244717847853 (simulation took 12 hours & 40 minutes)

1,000,000 players @ 10 simulations = 7.5 (simulation took 8 minutes)

FIGURE 3:

This plot looks very similar to Figure 1, just more 'zoomed out'. It still looks like it's tending towards something, but it's hard to tell even going up to 1000 players in a game. The next step is to fit a mathematical function to this line, which we can then extrapolate more easily as the number of players → ∞.

FIGURE 4:

Using a scientific program called Origin Pro, we can do exactly this, fitting the data on a logarithmic ln(x) axis, which looks like this. We must bear in mind that the average number of cards a loser has left at the end of a game can never exceed the number of cards you start with (13).

FIGURE 5:

By fitting to the equation y = a - b*ln(x+c), we can accurately model the data we have. The empirical data is in black and the fitted line is in red. As you can see, they match almost exactly. However, just because the model matches this data does not mean it can necessarily be used as a predictive model.
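For anyone without Origin Pro, a rough equivalent can be sketched in plain Python: with c fixed at 0 the model y = a - b*ln(x + c) is linear in ln(x), so ordinary least squares recovers a and b. The data points below are hypothetical placeholders, not my actual simulation results:

```python
import math

# Hypothetical (players, average cards left) pairs; the real values come
# from the simulations behind Figures 1 and 3.
data = [(2, 1.0), (5, 2.0), (10, 2.6), (50, 3.6),
        (100, 4.0), (500, 4.9), (1000, 5.3)]

# With c = 0, y = a - b*ln(x) is a straight line in ln(x), so we can
# estimate the slope (-b) and intercept (a) by least squares.
xs = [math.log(x) for x, _ in data]
ys = [y for _, y in data]
n = len(data)
mx = sum(xs) / n
my = sum(ys) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
a = my - slope * mx  # intercept of the line
b = -slope           # the model's b (negative, since y grows with x)
```

Fitting c as well would need a nonlinear routine (e.g. `scipy.optimize.curve_fit`), which is closer to what Origin Pro does internally.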

FIGURE 6:

Taking the formula and the fitted values from Figure 6, we can plot the line over an extended x-axis. Instead of stopping at 1000 players in a game, there's no reason we can't go up to a billion players. However, at around the one-billion-player mark the predicted number of cards left for a losing player exceeds 13, which is impossible, so we know that somewhere between 1000 players and 1 billion players our model breaks down. View the graph here.
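The breakdown point can also be found directly from the model by solving y = 13 for x. This sketch uses hypothetical parameter values (the real a, b, c come from the Figure 6 fit):

```python
import math

# Hypothetical fit parameters, NOT the Figure 6 values; b is negative
# because the average grows with the number of players.
a, b, c = 0.86, -0.66, 0.0

# Solve a - b*ln(x + c) = 13 for x: the player count at which the model
# first predicts more than the 13 cards a player starts with.
x_break = math.exp((a - 13) / b) - c
```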

I think the next step is to get some more data. Running 25,000 simulations for each of 10,000 players and 100,000 players would be a good starting point. We can then see whether these extreme values lie on our model line (within error). There is error both in the model and in the values we obtain from the computer.