The goat is not a car (Proof that 50/50 intuition about the Monty Hall problem is correct)

Edward G. Brown∗

Saturday 10th November, 2018

Abstract

This paper demonstrates that the intuition that the probability in the Monty Hall problem is .50 (50/50) is correct for any given trial, and that assertions that a probabilistic advantage is gained by always switching doors are incorrect for any given trial.1 Finally, entropy calculations demonstrate that an unequal probability assignment to the two remaining doors describes an alternate reality that is inconsistent with the reality the game begins with.

1 Preliminaries

What we know to be true:

1. At the start of the game we have a 1/3 chance of selecting the door with the Car and a 2/3 chance of selecting a door with a Goat.

2. Monty Hall will never open the door we have selected, as this would end the game immediately. As the door is not opened, we have no information about what is behind the door we choose. In other words, we don't know if we have initially picked the door with the Car or not.

3. Monty Hall will only open a door which shows a Goat, never a door which shows the Car, as this would end the game immediately. As this door is opened, we have new information (that there is a Goat behind the door Monty Hall opens).

4. Monty Hall never played the game as put forward in the problem. As he is immortalized by the problem, we might as well get our facts straight.2

[email protected] https://web.archive.org/web/20120411015313/marilynvossavant.com/game-show-problem/ 2 Monty Hall discusses the "Let's Make a Deal" "Monty Hall Math Problem" https://www.youtube.com/watch?v=c1BSkquWkDo ∗

2 Demonstration

To begin with, there are three possible permutations of Car, Goat, Goat, as determined by the density of states, N!/(m!(N − m)!).3 As we can see:
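The density-of-states count can be checked directly (a minimal Python sketch; the scripts included with this project are in R):

```python
from math import comb, factorial

# Number of distinct arrangements of 1 Car among 3 doors:
# N! / (m! (N - m)!) with N = 3 doors and m = 1 Car.
N, m = 3, 1
states = factorial(N) // (factorial(m) * factorial(N - m))
print(states)  # 3, matching math.comb(3, 1)
```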

Figure 1: Three possible permutations. In the leftmost permutation Door 1 is C for Car, Door 2 is G for Goat, Door 3 is G for Goat.

Suppose we choose Door 1. As mentioned previously, we have a 1/3 chance of having selected the door with the Car. And, as mentioned previously, Monty Hall is not going to open Door 1, so we have no information about Door 1. Door 1 is greyed below to illustrate.

Figure 2: Door 1 is greyed out, we have no information about it.

Furthermore, Monty Hall is not going to open a door with the Car behind it. For our demonstration Monty Hall opens Door 3, which is colored red below.

Figure 3: Door 3 is red, Monty Hall has opened it.

It should be obvious that the third permutation is not possible at this point, because Monty Hall will never open the door which reveals the Car. Therefore, permutation 3 is discarded.

3 See the density of states R script calculations included in this project.

Figure 4: Third permutation is discarded.

It should further be recognized that we have new information, namely that Door 3 has a Goat behind it. We must also recognize that we have no further information about Door 2. Door 2 is greyed below to illustrate.

Figure 5: Door 2 is greyed out, we have no information about it.

At this point it should be obvious that the only choice open to us is between Door 1 and Door 2.

Figure 6: Only Door 1 and Door 2 are left to choose from.

And it should be readily obvious that for any given trial there is a .50 probability (50/50) of selecting the Car, whether we stay with Door 1 or change to Door 2. This is the second choice to be made. Note: this is just one working-through of the various possibilities. Carefully working through all the possibilities will demonstrate that the end result is always the same: the second choice gives us exactly .50 probability of success or failure for any given trial.

3 Proof

For the first choice the probability of success, as has been noted, is 1/3 or 0.(3). And for the first choice the probability of failure is 2/3 or 0.(6). For the second choice, after the reveal, for any given trial of the game the probabilities of success for the three doors are listed below (following the demonstration above, where Door 1 is the initially chosen door and Door 3 revealed a Goat).

Door 1: P(Car | door is closed, so it could be the Car) = 0.5
Door 2: P(Car | door is closed, so it could be the Car) = 0.5
Door 3: P(Car | Goat is behind the door, or the Goat is a Car) = 0.0

4 Discussion

The probabilities that the game began with are no longer relevant once we reach the second choice stage. They are no longer relevant because the new information has changed the landscape of possibilities. To see otherwise is to see an illusion. Any perceived advantage gained by always switching (for any given trial) is an illusion because it ignores the fact that we are making two choices. Staying with the first door or switching is a second choice, and that is a choice between two doors, not three. This second choice is the choice that must be assessed for probability of success. The exposure of the Goat is not just new information, it is entirely relevant information. The only way the beginning probabilities would remain relevant would be if there were no second choice; if we had no new information from Door 3 (which we do); or if the Goat behind Door 3 could magically be transformed into a Car.

If you've read this far you may protest, "We've heard this argument before; the author doesn't understand that it is advantageous to always switch; this is yet another 50/50 argument and there's nothing of any substance to it." Bear with me. The argument for always switching says that because you have a 2/3 chance of picking a Goat when you make your first choice, you should always switch, thereby increasing your chances of success. You may have even heard it said that you will pick a Goat 2/3rds of the time. This is where the argument for always switching breaks down. Probability theory tells us nothing about what will happen during any individual trial. It doesn't tell us that 2/3rds of the time we will select a Goat on the first choice.

It merely says that in the very long run the proportion of Goats selected to Cars selected will be approximately 2/3 to 1/3. This is much like the probability of a coin flip being 1/2 telling us nothing about what the next flip will be (heads or tails). We could very well see 5 or more heads or tails in a row. And this is the crunch: for any given trial there is no way to know what we will choose initially.

Even for a handful of trials (say three) there is no reason one might not initially select the Car three times in a row, or a Goat only one time out of three. Believing otherwise is a well-known fallacy, but perhaps forgotten in this instance. There's nothing magical about the beginning probabilities of 1/3 success and 2/3 failure. They are determined by the density of states (the Car is in one of three locations). The probabilities at the second stage are also determined by the density of states, which, given that two doors remain, is two. Assigning 2/3 probability to one of the doors at the second choice stage is a guess based on long-run expectations, and a misunderstanding. The correct assignment is 1/2 of the total probability (of having the Car) to each remaining door, with P((Door 1 ∪ Door 2)^c) = 0.
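The distinction drawn above between long-run proportions and any individual short run can be illustrated with a quick simulation of the 1/3 first-pick probability (a Python sketch; the scripts included with this project are in R):

```python
import random

rng = random.Random(1)

def first_pick_hits_car():
    # The first pick lands on the Car with probability 1/3.
    return rng.randrange(3) == 0

# A handful of trials: anywhere from 0 to 3 hits is possible.
short_run = sum(first_pick_hits_car() for _ in range(3))

# The long run: the proportion settles near 1/3.
n = 100_000
long_run = sum(first_pick_hits_car() for _ in range(n)) / n
print(short_run, round(long_run, 3))
```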

You may protest, "What about the examples put forward where one has 100 doors, with 99 Goats and 1 Car?" The comparison between a 1/3 chance and a 1/100 chance is an absurd one; a little consideration will make this plain.4

∗∗∗

The Monty Hall game is a game of big payoffs. It's not a game one would expect to play a great number of times, or even more than a few times. A strategy which relies on the law of large numbers will not, for a single trial or even a handful of trials, give any advantage beyond normal chance. And since people are fond of running simulations and recording the results, let us examine what happens if we test the null hypothesis that there is a 1/2 chance of winning when employing the always-switch strategy. Get a partner and play the game, recording your results. Based on your results you can easily calculate the posterior probabilities. For example, my wife played Monty Hall with me (using playing cards) and with an always-switch strategy I won 6 out of 10 trials. Using the posterior probability calculation of Jeffreys5 (1 below), the probability that the chance of success is 1/2 is .69 (2 below).6

$$\frac{(n+1)!\,P_0^{x}(1-P_0)^{n-x}}{(n+1)!\,P_0^{x}(1-P_0)^{n-x} + x!\,(n-x)!} \qquad (1)$$

4 Such examples are akin to performing an experiment which compares 3 liters of helium gas to 100 liters of helium gas while assuming equivalent entropy in each volume.
5 From "Theory of Probability" by Sir Harold Jeffreys, 1939, Chapter 2, Oxford University Press.
6 Contrariwise, using an always-stay strategy I won 4 out of 10 trials, and testing the null hypothesis that the probability is 1/3 (as is often assumed) the calculation results in .037, which is hardly compelling. Your results may vary.

$$\frac{11!\left(\tfrac{1}{2}\right)^{10}}{11!\left(\tfrac{1}{2}\right)^{10} + 6!\,4!} = .69 \qquad (2)$$
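The calculation in equations 1 and 2 can be reproduced numerically (a Python sketch; `jeffreys_posterior` is a hypothetical helper name, and the scripts included with this project are in R):

```python
from math import factorial

def jeffreys_posterior(n, x, p0):
    """Posterior probability that the success chance equals p0,
    given x successes in n trials (Jeffreys-style test of the null
    hypothesis, with equal prior odds against a uniform alternative)."""
    numerator = factorial(n + 1) * p0**x * (1 - p0)**(n - x)
    return numerator / (numerator + factorial(x) * factorial(n - x))

# 6 wins in 10 always-switch trials, null hypothesis p0 = 1/2
print(round(jeffreys_posterior(10, 6, 0.5), 2))  # 0.69
```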

The point is that for any given trial, or even a handful of trials, one may observe results that break with the commonly accepted answer regarding whether or not one should always switch. This is because there is no advantage to switching or staying in any given trial unless you know what you have selected on the first choice (but that would involve some kind of cheating). For a given trial you might as well flip a coin to decide whether to stick or switch.
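For readers who want to play the game against a partner or a machine, a single trial can be sketched as follows (a Python sketch; the helper name is hypothetical, and over any short run the win count varies considerably from one run to the next):

```python
import random

def play_monty_hall(switch, rng=random):
    doors = [0, 1, 2]
    car = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty opens a door that hides a Goat and is not the contestant's pick.
    opened = rng.choice([d for d in doors if d != pick and d != car])
    if switch:
        # Move to the one remaining closed door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

# A short run of 10 trials with the always-switch strategy.
wins = sum(play_monty_hall(switch=True) for _ in range(10))
print(wins, "wins out of 10")
```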

∗∗∗

Lastly, I've stated that the probabilities that the game started with are no longer relevant once we reach the second choice stage. However, this isn't to say that the system has per se changed such that the third door (the opened door) no longer exists. The remaining doors are a subsystem of the three doors, and the density of states being two is in agreement with the fact that we now know one of the doors' contents; we just don't know the contents of the remaining two doors. It should be apparent that dividing the remaining probability up equally between the remaining two doors is just a redistribution of the randomness that we had in the first choice: equivalent probability for each door. We simply assign 0 probability to the open door for having the Car because it is an impossibility. So if one is to assign 2/3 probability to the door that one has not initially chosen, one is assigning a nominal probability value, and one can see from the entropy calculations that doing so describes a system which is less random than the initial choice state (equal probability and maximum randomness/entropy, as in equation 3 below). This creates a problem for anyone imagining that assigning 2/3 of the probability from the opened door to the closed door you didn't choose will not change the underlying reality of the probability distribution. The entropy for P(1/2) for each door is 0.69 (4 below) vs. 0.63 for P(1/3) and P(2/3) (5 below).7

$$S = -3k\left[\tfrac{1}{3}\ln\left(\tfrac{1}{3}\right)\right] = 1.09k \qquad (3)$$

$$S = -k\left(\tfrac{1}{2}\ln\left(\tfrac{1}{2}\right) + \tfrac{1}{2}\ln\left(\tfrac{1}{2}\right) + 0\ln(0)\right) = 0.69k \qquad (4)$$

$$S = -k\left(\tfrac{1}{3}\ln\left(\tfrac{1}{3}\right) + \tfrac{2}{3}\ln\left(\tfrac{2}{3}\right) + 0\ln(0)\right) = 0.63k \qquad (5)$$

7 See the entropy and randomness R script calculations included in this project.
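The entropy values in equations 3–5 can be reproduced with a short function (a Python sketch; the scripts included with this project are in R, and the 0 ln(0) term is taken as 0 by convention):

```python
from math import log

def entropy(probs, k=1.0):
    # Gibbs/Shannon entropy in units of k; p * ln(p) is 0 when p = 0.
    return -k * sum(p * log(p) for p in probs if p > 0)

print(round(entropy([1/3, 1/3, 1/3]), 4))  # 1.0986 (eq. 3's 1.09k, truncated)
print(round(entropy([1/2, 1/2, 0]), 4))    # 0.6931 (eq. 4's 0.69k)
print(round(entropy([1/3, 2/3, 0]), 4))    # 0.6365 (eq. 5's 0.63k, truncated)
```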

5 Conclusion

It has long been argued that always switching leads to a probabilistic advantage when playing the Monty Hall game. But for any given trial this is absurd. It is absurd because for any given trial one has no way of knowing whether one has chosen the Car or a Goat initially; absurd because the reveal of Door 3 has given new information and changed the available doors from three to two (and the realities of the sample space); and because there are two choices being made, separated in time, with one conditionally influenced by Monty Hall's opening of a door. Furthermore, a perceived advantage gained by switching is an illusion, given that the variability of successes and the proportion of successes change as n trials increases.

In other words, over the long run an always-switching strategy may lead to a greater proportion of successes, but in any given trial (or short run of trials) the contestant's expectation of an increased chance of success is a well-known fallacy. David Ruelle wrote in Chance and Chaos, "it is not enough just to be able to compute probabilities, we also have to be able to compare our results operationally with physical reality. If we do not pay sufficient attention to the problem of relation to physical reality, we can easily get trapped in paradoxes."8 I suspect that it is such a disconnection of probabilities from physical reality which has led to the so-called Monty Hall Paradox. In reality, no one would reasonably expect to have the chance to play the Monty Hall game (and potentially win a brand new car) with any regularity. In all likelihood it would be a once-in-a-lifetime opportunity. If there were the opportunity to play such a game a large number of times, then a strategy which took the long-run expected limit into account would be optimal. However, given that a player would likely face only one such opportunity, a single trial, there is no true probabilistic advantage in implementing the always-switch strategy. Finally, the 2/3 probability assignment to one of the remaining doors at the second choice stage describes a different reality than the 1/2 assignment to each.

The 1/2 assignment to both doors is consistent with the density of states and the maximum randomness/entropy of equal probability that the game begins with. The 2/3 assignment is not a description of the reality of the moment, but of some suddenly altered reality; an alternate reality where one door has magically been given a greater chance to conceal the Car than it had at the beginning of the game (simply because you chose another door), and where the future is seemingly talking to us through a clairvoyant who says, "Ahh, I know what you will be like in the future." Do you believe in magic?

8 From "Chance and Chaos" by David Ruelle, 1991, Chapter 3, Princeton University Press.