The Law of Small Numbers
Amos and I called our first joint article “Belief in the Law of Small Numbers.” We explained, tongue-in-cheek, that “intuitions about random sampling appear to satisfy the law of small numbers, which asserts that the law of large numbers applies to small numbers as well.” We also included a strongly worded recommendation that researchers regard their “statistical intuitions with proper suspicion and replace impression formation by computation whenever possible.”
People tend to generalize. And they do it based on little evidence.
The law of small numbers, also known as hasty generalization, is a cognitive bias: the tendency to draw broad conclusions from small samples of data. The term was coined by Daniel Kahneman and Amos Tversky:
“We submit that people view a sample randomly drawn from a population as highly representative, that is, similar to the population in all essential characteristics. Consequently, they expect any two samples drawn from a particular population to be more similar to one another and to the population than sampling theory predicts, at least for small samples.”
For example, imagine rolling a die 5 times. If two of the rolls come up 3, then judging by this very small sample alone we would conclude there is a 2/5 = 40% probability of rolling a 3, which is far from the true probability of getting any particular number on a fair die: 1/6, or roughly 17%.
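We can see this directly in a quick simulation (a minimal sketch; the function name `frequency_of_three` and the fixed seed are illustrative choices, not part of the original text): 5 rolls can easily report a frequency far from 1/6, while a large sample settles close to it, as the law of large numbers promises.

```python
import random

def frequency_of_three(n_rolls, seed=42):
    """Roll a fair six-sided die n_rolls times and return
    the observed relative frequency of rolling a 3."""
    rng = random.Random(seed)
    rolls = [rng.randint(1, 6) for _ in range(n_rolls)]
    return rolls.count(3) / n_rolls

# A tiny sample can wander far from the true probability 1/6 ≈ 0.167 ...
small = frequency_of_three(5)
# ... while a large sample converges toward it.
large = frequency_of_three(100_000)
print(f"5 rolls:      {small:.3f}")
print(f"100000 rolls: {large:.3f}")
```

Rerunning with different seeds shows the 5-roll estimate bouncing around wildly while the 100,000-roll estimate barely moves.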
Similarly, assuming that early trends will persist and that the patterns seen so far will keep repeating is another instance of the law of small numbers. For instance, a striker who scores 3 goals in the first two matches of the season is expected to keep scoring at that rate for the whole season, which rarely happens.
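A small simulation illustrates why the hot start is poor evidence (an illustrative sketch, assuming a hypothetical striker whose goals per match follow a Poisson distribution with a made-up rate of 0.5): among simulated strikers who happen to score 3 or more goals in their first two matches, the rest of the season still averages close to the underlying 0.5 goals per match, not the early pace.

```python
import math
import random

def poisson(lam, rng):
    """Draw a Poisson-distributed count via Knuth's algorithm
    (suitable for small lambda)."""
    limit = math.exp(-lam)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def hot_start_rest_of_season_rate(rate=0.5, matches=38,
                                  n_strikers=20_000, seed=1):
    """Simulate many seasons; among strikers with >= 3 goals in the
    first two matches, return their mean goals per match afterwards."""
    rng = random.Random(seed)
    rest_rates = []
    for _ in range(n_strikers):
        goals = [poisson(rate, rng) for _ in range(matches)]
        if goals[0] + goals[1] >= 3:  # the "hot start" sample
            rest_rates.append(sum(goals[2:]) / (matches - 2))
    return sum(rest_rates) / len(rest_rates)

print(f"Rest-of-season rate after a hot start: "
      f"{hot_start_rest_of_season_rate():.3f}")
```

Because each match is an independent draw, the blazing two-match sample tells us nothing beyond the baseline rate; the early pace of 1.5+ goals per match is simply small-sample noise.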
The law of small numbers can also lead to the gambler’s fallacy. For instance, after flipping a coin and getting two heads in a row, people start assigning too much probability to the next flip being tails. This, too, is reasoning from a very small sample.
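A quick check confirms the flips are independent (a minimal sketch; the function name `tails_after_two_heads` and the sample size are illustrative): looking only at flips that immediately follow two consecutive heads, tails still shows up about half the time.

```python
import random

def tails_after_two_heads(n_flips=200_000, seed=7):
    """Flip a fair coin n_flips times and measure how often the flip
    immediately after two consecutive heads comes up tails."""
    rng = random.Random(seed)
    flips = [rng.choice("HT") for _ in range(n_flips)]
    # Collect every flip that directly follows an "HH" run.
    after_hh = [flips[i + 2] for i in range(n_flips - 2)
                if flips[i] == "H" and flips[i + 1] == "H"]
    return after_hh.count("T") / len(after_hh)

print(f"P(tails | two heads just occurred) ≈ {tails_after_two_heads():.3f}")
```

The coin has no memory: conditioning on the two previous heads leaves the next flip at 50/50, no matter how strongly intuition insists tails is “due.”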