Reverse Law of Large Numbers

Most people are familiar with the concept that if you toss a coin four times, you won’t necessarily get a 50-50 split between heads and tails: indeed, you could get 4 tails, which could suggest (wrongly) that the coin will always land on tails. But if you toss a coin a million times, the split will come out very close to 50-50. It is the “Law of Large Numbers” (LLN) that ensures that one million coin tosses will produce very nearly 50% heads and 50% tails.
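(If you want to see this for yourself, here is a minimal sketch in Python using the built-in random module; the exact counts will differ every run, but the million-toss figure will always land very close to 50%.)

```python
import random

rng = random.Random()  # a simulated fair coin

for n in (4, 1_000_000):
    heads = sum(rng.choice("HT") == "H" for _ in range(n))
    print(f"{n:>9} tosses: {heads} heads ({heads / n:.2%})")
```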

Weakening the LLN

The reason the LLN works is simply because the “sample size” is so much larger than the “number of options available” (i.e. sample size = 1,000,000 coin tosses, while the number of possible outcomes of each toss = 2).

The LLN effectively says that as the ratio of sample-size over number-of-options gets larger, the outcome of a sample event will converge on the expected result. This means that there are two things that can weaken the LLN (and the gravitational pull of the expected result), and they are as follows (see the short sketch after this list):

1.  The sample size being made smaller.

2.  The number of options being made larger.
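(To put rough numbers on that ratio, here is a tiny sketch; the sample size of 52 and the three systems are simply the examples used later in this piece.)

```python
# sample-size / number-of-options, for a fixed sample of 52 draws
sample_size = 52
systems = {"coin": 2, "die": 6, "deck of cards": 52}

for name, options in systems.items():
    print(f"{name:>13}: {sample_size} / {options} = {sample_size / options:.1f}")

# coin: 26.0, die: 8.7, deck of cards: 1.0 -- the smaller this ratio,
# the weaker the pull towards the expected distribution.
```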


1:  Decreasing Sample Size

In our coin-toss example above, we would find that as we reduce the sample size, the stability of the 50-50 split starts to crack (skewing the result in favour of one side or the other).

First we would start to see results like 49-51 or 51-49, then results like 45-55 or 55-45, and then 40-60 or 60-40, and so on. So essentially, the more we reduce the sample size, the greater the deviations we can expect to see from the expected 50-50 outcome. The most extreme version of this is when we have a sample size of 1, in which case there is no chance of a 50-50 split, because one coin toss obviously cannot result in ½ a head and ½ a tail!…

Obviously on a single coin toss we can only get either a head or a tail, and thus the outcome of a sample size of 1 is the most extreme deviation from the expected 50-50 split; that is to say, our sample produces either 100% heads and 0% tails, or, 0% heads and 100% tails…
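(A small simulation makes the widening cracks visible; this is only a sketch, the choice of 1,000 repeats per sample size is arbitrary, and the exact ranges will vary from run to run.)

```python
import random

rng = random.Random()

def head_percentages(sample_size, repeats=1_000):
    """Percentage of heads in each of `repeats` samples of `sample_size` tosses."""
    return [
        100 * sum(rng.random() < 0.5 for _ in range(sample_size)) / sample_size
        for _ in range(repeats)
    ]

for sample_size in (1_000, 100, 10, 1):
    pcts = head_percentages(sample_size)
    print(f"sample size {sample_size:>5}: heads ranged from "
          f"{min(pcts):.0f}% to {max(pcts):.0f}% across 1,000 samples")
```

With a sample size of 1 the output can only ever be 0% or 100% heads, the most extreme deviation possible, exactly as described above.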

2:  Increasing Number of Options

In our coin toss, there are only 2 options: heads or tails (both equally likely). We know that a sample size of 1,000,000 is all but certain to give a representative result, but we also know that a much smaller sample size, of say 52, is still likely to yield a similarly representative result, and will pretty much do so every time we sample 52 tosses (thus there is a strong degree of stability and consistency in the results).

But let’s say that rather than tossing a coin we were instead to roll a six-sided die, or, to pick a card from a deck of 52 playing cards; in each case there are more options available (all of which are equally likely).

And so while a sample size of 52 coin tosses will almost certainly give a relatively equal number of heads and tails, it is much less certain that 52 separate rolls of a die will produce a relatively even distribution of ones, twos, threes, fours, fives, and sixes, and you will certainly never be able to produce one of each of the 52 playing cards from 52 separate pulls from the deck, putting each card back and shuffling before the next pull (unless you live a very very long time indeed, have nothing better to do, and the universe doesn’t actually disappear up its own arse before you have a chance to do so…).

So while multiple samples of 52 coin tosses will likely produce fairly consistent results, multiple samples of 52 card pulls, it is safe to say, will have a very high degree of diversity…
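(Here is a sketch of that comparison in Python, drawing with replacement so that every pull is an independent, equally likely choice; the counts will be different every run, which is rather the point.)

```python
import random
from collections import Counter

rng = random.Random()

def sample_counts(options, draws=52):
    """Draw `draws` times (with replacement) from `options` and count each outcome."""
    return Counter(rng.choice(options) for _ in range(draws))

coin  = ["H", "T"]
die   = list(range(1, 7))
cards = [rank + suit for rank in "A23456789TJQK" for suit in "SHDC"]

for name, options in (("coin", coin), ("die", die), ("cards", cards)):
    counts = sample_counts(options)
    print(f"{name:>5}: expected {52 / len(options):.1f} of each option, "
          f"got between {min(counts.values())} and {max(counts.values())}, "
          f"with {len(counts)} of {len(options)} options seen at all")
```

The coin counts hover around 26 each, the die counts wander rather more widely around 8 or 9 each, and the 52 card pulls typically turn up only around 30-odd of the 52 cards, with the rest not appearing at all.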


The RLLN

The LLN ensures stability over a large sample size; but anything that weakens the LLN will weaken this stability. Thus either decreasing the sample size, or, increasing the number of options, will have the effect of “reversing” the stabilizing force of the LLN. This Reverse of the Law of Large Numbers (RLLN) has two primary effects.

1.  A single sample is skewed away from the expected distribution.

2.  Multiple samples produce diverse and volatile results.

Thus the Reverse Law of Large Numbers (RLLN) effectively tells us that, as the ratio of sample-size over number-of-options gets smaller, the outcome of a sample event will diverge from the expected result…
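(One way to see both levers at once is to sweep the ratio directly; a rough sketch follows, where the “deviation” measure is total variation distance from the flat, expected distribution. That particular measure is just one reasonable choice, and the exact figures vary from run to run.)

```python
import random
from collections import Counter

rng = random.Random()

def deviation(sample_size, num_options):
    """Total variation distance between one sample and the flat expected distribution."""
    counts = Counter(rng.randrange(num_options) for _ in range(sample_size))
    expected = 1 / num_options
    return 0.5 * sum(abs(counts.get(o, 0) / sample_size - expected)
                     for o in range(num_options))

for sample_size in (1_000, 100, 10):
    for num_options in (2, 6, 52):
        ratio = sample_size / num_options
        print(f"ratio {ratio:7.1f} (sample {sample_size:>5}, options {num_options:>2}): "
              f"deviation {deviation(sample_size, num_options):.2f}")
```

Large ratios sit close to a deviation of zero (the LLN at work); small ratios drift out towards large deviations (the RLLN at work).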

If we are dealing with a system where all the options are equally likely, then these effects (of the RLLN) become

1.  In a single instance, a balanced system becomes unbalanced (this is often referred to as “symmetry breaking”). 

2.  Over multiple instances, a stable system becomes unstable (in non-adaptive systems this is often referred to as “turbulence”, and in adaptive systems it is referred to as “chaos”).