Monty Hall, The Black Swan, and a New Way of Thinking
Published: May 9, 2022
We’re getting a bit philosophical today: tech modernization introduces a new way of thinking, one you'll probably grasp intuitively and immediately apply in other areas of your life. Let's go!
The Monty Hall Problem
The Monty Hall problem goes like this: you're a game show contestant. You've made it to the final round and get to guess which of three doors hides the big prize. Unknown to you (but not to Monty), here's what's behind them:
- Door number one has a new Corvette behind it.
- Door number two has nothing behind it.
- Door number three has a goat.
Let’s say you pick door number two. Your host Monty Hall opens door number three, revealing the goat. Then Monty asks, “Would you like to switch doors?" What would you do? Stick with door number two, or switch to door number one?
You're probably thinking, "Why switch? I've got a 50/50 shot at a Corvette!" It certainly feels that way. But in reality, your original pick had a one-in-three chance of being right, which means the Corvette sits behind one of Monty's two doors two times out of three. By opening a losing door and offering the switch, Monty is effectively offering to trade odds with you.
To drive the point home, imagine there are 100 doors and Monty opens 98 losing ones, leaving only your door and one other closed. You had a 1-in-100 chance to begin with; the other closed door now carries the remaining 99-in-100. Would you switch? Absolutely!
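If the odds-swap argument still feels slippery, it's easy to check empirically. Here's a minimal Python simulation (my own sketch, not from any particular source) that plays the three-door game many times with each strategy and compares win rates.

```python
import random

def play(switch: bool) -> bool:
    """Play one round of Monty Hall; return True if we end up with the car."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither our pick nor the car.
    monty_opens = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Switch to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != monty_opens)
    return pick == car

def win_rate(switch: bool, trials: int = 100_000) -> float:
    """Fraction of rounds won over many simulated games."""
    return sum(play(switch) for _ in range(trials)) / trials
```

Run enough trials and `win_rate(switch=True)` settles near 2/3 while `win_rate(switch=False)` stays near 1/3, matching the argument above.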
Thomas Bayes and His Formula
In the above examples, you learned that when you uncover new information, it's smart to respond accordingly. A Presbyterian minister named Thomas Bayes was the first to formalize this idea as a mathematical formula for updating probabilities in light of new evidence. While I'd love to dive into the equation itself, I'll leave that for another time.
A fun story about Bayes is that he wrote the formula but never published it. The theorem was eventually published after his death by his friend Richard Price, and later developed much further by the French mathematician Pierre-Simon Laplace, who generalized it into the form we use today. Fun fact about Laplace: he was among the first to suggest that black holes could exist!
As always, you should be asking "What does this have to do with insurance?" And the answer is, surprisingly, "It applies in a bunch of places!"
Take, for instance, some of the machine learning models we use today. A model makes a prediction, we observe what actually happens, and the model is updated so it makes a better prediction next time. That's the same loop: a statistical model, a real observation, and an update to accommodate it.
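Our production models are far more involved, but the predict-observe-update loop can be sketched in a few lines. This toy example (entirely hypothetical, not any actual model of ours) tracks a belief about how often some event occurs using a simple Beta-Bernoulli update, where each observation just adds to a running evidence count.

```python
def update(yes: int, no: int, observed: bool) -> tuple:
    """Fold one new observation into our (yes, no) evidence counts."""
    return (yes + 1, no) if observed else (yes, no + 1)

def estimate(yes: int, no: int) -> float:
    """Current belief: the posterior mean of a Beta(yes, no) distribution."""
    return yes / (yes + no)

# Start with a weak prior equivalent to one "yes" and one "no" observation,
# i.e. an initial belief of 0.5.
yes, no = 1, 1
for observed in [True, False, False, False]:
    yes, no = update(yes, no, observed)
# After four observations the belief has shifted toward the observed rate:
# estimate(yes, no) == 2 / 6, roughly 0.33
```

Each pass through the loop is one round of "predict, observe, update": the estimate before the observation is the prediction, and the counts afterward are the updated model.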
Perhaps you could use this to test some Mark Twainish things, the things "people know that ain't so." Your prior assumption ("the world works this way") meets a real observation ("except in this case"), and you find it might be worth updating your assumption ("the world doesn't actually work this way").
In “The Black Swan,” Nassim Taleb highlights this with the example of a black swan changing how Europeans viewed the species altogether. If you'd only ever seen white swans and then came across a black one, you'd have to reevaluate your model of the swan population. (Note: there's plenty more to Taleb's example, but we'll stop here.)
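To make the swan example concrete, here's a small sketch of Bayes' rule with two competing hypotheses (the numbers are made up purely for illustration): "some swans are black" versus "all swans are white." A confirmed black-swan sighting is impossible under the second hypothesis, so a single observation collapses the posterior onto the first.

```python
def posterior(prior_h: float, likelihood_h: float, likelihood_not_h: float) -> float:
    """Bayes' rule: P(H | evidence) given the prior and the two likelihoods."""
    evidence = likelihood_h * prior_h + likelihood_not_h * (1 - prior_h)
    return likelihood_h * prior_h / evidence

# Before the sighting, a European might give "some swans are black" long odds.
prior = 0.001
# Seeing a black swan is possible under H, impossible under not-H.
p = posterior(prior, likelihood_h=0.01, likelihood_not_h=0.0)
print(p)  # 1.0 -- one decisive observation overturns the old model
```

Notice that no prior short of exactly zero can survive evidence that's impossible under the alternative; that's the mathematical version of "you'd have to reevaluate your model."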
Finally, in the insurance world, we might consider those things we know for a fact and begin to poke at them, making sure we update our mental model along the way. If we find a lot of "except in this case," we should probably rethink the validity of the model altogether.
As we implement more data science/machine learning models at Mutual of Omaha, and continue to run experiments as a company, we'll have plenty of opportunities to take our previous ideas about how the world works and update them with new information.