A Math Problem discussed in Mostly Walking

Subtitle: Why Bill Graner didn't make as complete a fool of himself as you might have thought.


I just watched Mostly Walking King's Quest P6. In it, Bill states a fallacy in probability: "If you have two chances to do something, that doubles your odds of success. If I have to make a D20 roll with a 10% chance of success, but I get two rolls, that doubles my odds of success to 20%." The Seans (P & B) rightly pointed out that this isn't right, since by that logic, having two chances at a 70% D&D roll would up your odds to 140%. Instead, you square the odds of failure and subtract from 1: two chances at a 70/30 roll means a 0.3 * 0.3 = 0.09 chance of failing both times, which is a 91% chance of success.
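
If you want to check that rule yourself, here's a tiny Python sketch of it (my own illustration, nothing from the episode):

    # Odds of at least one success in `tries` independent attempts:
    # 1 minus the odds of failing every attempt.
    def at_least_one_success(p, tries=2):
        return 1 - (1 - p) ** tries

    print(f"{at_least_one_success(0.7):.2f}")   # 0.91, not 1.40
    print(f"{at_least_one_success(0.1):.2f}")   # 0.19, not quite 0.20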


Bill made a good point afterward: while he believed the math, he didn't intuitively understand why you can't just double the odds of success. But then Bill started singing a song about peeing your pants, and the topic was dropped. I want to bring it up again, because it's kind of an interesting math problem.


What's interesting about it is that Bill's fallacy actually is approximately correct for small probabilities. If you have two chances at a 1% roll, your odds of making at least one are 1.99%, which is approximately double. If you have two shots at a 10% roll, your odds of success are 19%, again close to double. Poker players use this approximation, figuring that having two chances to hit an out doubles their odds of hitting it. However, the approximation starts to break down once you're dealing with larger probabilities.
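
To spell that out, here's a short comparison of Bill's "just double it" shortcut against the exact answer (again, my own sketch, not anything from the show):

    # Compare the "double the odds" shortcut to the exact two-attempt odds.
    for p in (0.01, 0.10, 0.50, 0.70):
        doubled = 2 * p
        exact = 1 - (1 - p) ** 2
        print(f"p = {p:.2f}: doubled = {doubled:.4f}, exact = {exact:.4f}")

    # p = 0.01: doubled = 0.0200, exact = 0.0199
    # p = 0.10: doubled = 0.2000, exact = 0.1900
    # p = 0.50: doubled = 1.0000, exact = 0.7500
    # p = 0.70: doubled = 1.4000, exact = 0.9100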


Why do your odds of success (roughly) double for small probabilities but not large ones? The easiest answer I can give is that if they doubled for large probabilities, two chances at a 70% D&D roll would result in a 140% chance of success, which is absurd.


A better (but more complicated) explanation: when you have two shots at success, your odds of getting at least one success don't double, but your "expected number of successes" does double. If you get two shots at a 0.7 chance of success, you'll get 1.4 successes "on average": you'll get two successes about half the time, and one or zero successes the rest of the time, and it works out to 1.4 successes on average. So Bill would actually be exactly correct if he said "getting two shots at a D&D roll doubles your average number of successes." What you can't say is "getting two shots at a D&D roll doubles your odds of getting at least one success." The formula for the expected number of successes is:

 2*(odds of two successes) + 1*(odds of one success) + 0*(odds of no successes)

This gives you 1.4 successes. But when you're asking what your odds of getting at least one success are, you don't get to count two successes as double. Instead the formula is:

 1*(odds of two successes) + 1*(odds of one success) + 0*(odds of no successes)

This gives you 0.91, the actual probability of succeeding if you get two shots at a 70% roll.
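
Here are both formulas evaluated for the 70% roll, as a small Python sketch of the arithmetic above (my own illustration):

    p = 0.7
    p_two  = p * p              # both rolls succeed: 0.49
    p_one  = 2 * p * (1 - p)    # exactly one roll succeeds: 0.42
    p_none = (1 - p) ** 2       # neither roll succeeds: 0.09

    expected_successes = 2 * p_two + 1 * p_one + 0 * p_none   # 1.4
    at_least_one       = 1 * p_two + 1 * p_one + 0 * p_none   # 0.91

    print(round(expected_successes, 2), round(at_least_one, 2))   # 1.4 0.91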


The reason Bill's fallacy works for small probabilities is that the odds of getting two successes are very small, so the "odds of two successes" part of the formula can be ignored (and when you ignore that part, the two formulas become the same). For small probabilities you can safely assume that "the odds of at least one success" double when you get two shots, just as "the expected number of successes" doubles.
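
One last sketch (mine, not the show's) to make that explicit: the gap between the doubled figure and the true odds of at least one success is exactly the odds of a double success, p*p, which is negligible when p is small and enormous when it isn't.

    # The doubled figure minus the true odds of at least one success equals p*p.
    for p in (0.01, 0.10, 0.70):
        doubled    = 2 * p                 # expected number of successes
        at_least_1 = 1 - (1 - p) ** 2      # equals 2*p - p*p
        print(f"p = {p:.2f}: gap = {doubled - at_least_1:.4f}  (p*p = {p * p:.4f})")

    # p = 0.01: gap = 0.0001  (p*p = 0.0001)
    # p = 0.10: gap = 0.0100  (p*p = 0.0100)
    # p = 0.70: gap = 0.4900  (p*p = 0.4900)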