Heuristics That Almost Always Work

By Astral Codex Ten

https://astralcodexten.substack.com/p/heuristics-that-almost-always-work

The Security Guard

He works in a very boring building. It basically never gets robbed. He sits in his security guard booth doing the crossword. Every so often, there’s a noise, and he checks to see if it’s robbers, or just the wind.

It’s the wind. It is always the wind. It’s never robbers. Nobody wants to rob the Pillow Mart in Topeka, Kansas. If a building on average gets robbed once every decade or two, he might go his entire career without ever encountering a real robber.

At some point, he develops a useful heuristic: if he hears a noise, he might as well ignore it and keep on crossing words: it’s just the wind, bro.

This heuristic is right 99.9% of the time, which is pretty good as heuristics go. It saves him a lot of trouble.

The only problem is: he now provides literally no value. He’s excluded by fiat the possibility of ever being useful in any way. He could be losslessly replaced by a rock with the words “THERE ARE NO ROBBERS” on it.

[…]

And sometimes the rare exceptions are so important to spot that we charge experts with the task. But the heuristics are so hard to beat that the experts themselves might be tempted to secretly rely on them, while publicly pretending to use more subtle forms of expertise. “My statistical model, accounting for chaos theory, barometric pressure, and the price of tea in China, says there won’t be a hurricane tomorrow. Rejoice!”

Maybe this is because the experts are stupid and lazy. Or maybe it’s social pressure: failure because you didn’t follow a well-known heuristic that even a rock can get right is more humiliating than failure because you didn’t predict a subtle phenomenon that nobody else predicted either. Or maybe it’s because false positives are more common (albeit less important) than false negatives, and so over any “reasonable” timescale the people who never give false positives look more accurate and get selected for.
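That last mechanism is easy to check with a toy simulation: score a rock that never cries robber against an expert who actually tries to detect the rare event, on raw accuracy over a finite window. All the rates below are made-up numbers for illustration, not figures from the post:

```python
# Toy simulation of the selection effect: over a finite window, the
# "rock" that never raises an alarm beats a genuine detector on raw
# accuracy. All rates here are made-up numbers for illustration.
import random

random.seed(0)
P_EVENT = 0.001   # the rare thing happens on 0.1% of days
DAYS = 3650       # a "reasonable" ten-year scoring window

def accuracy(predict):
    correct = 0
    for _ in range(DAYS):
        event = random.random() < P_EVENT
        correct += predict(event) == event
    return correct / DAYS

# The rock: always predicts "nothing happens".
def rock(event):
    return False

# A real expert: catches 90% of true events, at the cost of a 2%
# false-positive rate on quiet days.
def expert(event):
    if event:
        return random.random() < 0.90
    return random.random() < 0.02

print(f"rock:   {accuracy(rock):.4f}")    # ~0.999
print(f"expert: {accuracy(expert):.4f}")  # ~0.980 -- looks worse
```

On raw accuracy, the rock wins almost every decade, even though it is the only one of the two that can never catch a robber.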

This is bad for several reasons.

First, because it means everyone is wasting their time and money having experts at all.

But second, because it builds false confidence. Maybe the heuristic produces a prior of 99.9% that the thing won’t happen in general. But then you consult a bunch of experts, who all claim they have additional evidence that the thing won’t happen, and you raise your probability to 99.999%. But actually the experts were just using the same heuristic you were, and you should have stayed at 99.9%. False consensus via information cascade!
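The double-counting is easiest to see in odds form. A minimal sketch, assuming a 0.999 prior and a hypothetical 10:1 likelihood ratio per genuinely independent expert (both numbers illustrative):

```python
# A minimal sketch of the cascade in odds form. The 10:1 likelihood
# ratio per expert is a hypothetical, not a number from the post.
def posterior(prior, likelihood_ratios):
    """Bayesian update: multiply prior odds by each likelihood ratio,
    then convert back to a probability."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.999  # the heuristic alone: P(the thing won't happen)

# Five experts with genuinely independent evidence, each worth 10:1
# in favor of "won't happen" -- the jump to near-certainty is earned:
print(posterior(prior, [10] * 5))  # ~0.99999999

# Five experts secretly reading off the same heuristic you already
# used: each verdict carries no new information (likelihood ratio 1):
print(posterior(prior, [1] * 5))   # 0.999 -- you should have stayed here
```

Counting the same heuristic five times raises your confidence without adding any information; that gap is the cascade.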

This new invention won’t change everything. This emerging disease won’t become a global pandemic. This conspiracy theory is dumb. This outsider hasn’t disproven the experts. This new drug won’t work. This dark horse candidate won’t win the election. This potential threat won’t destroy the world.

All these things are almost always true. But Heuristics That Almost Always Work tempt us to be more certain than we should of each.

Whenever someone pooh-poohs rationality as unnecessary, or makes fun of rationalists for spending zillions of brain cycles on “obvious” questions, check how they’re making their decisions. 99.9% of the time, it’s Heuristics That Almost Always Work.

(but make sure to watch for the other 0.1%; those are the people you learn from!)