Several housekeeping items:

1) I received word from the bookstore today that the paperback edition of “Predictably Irrational” won’t come out until next year. Since I am unwilling to make you pay hardback prices, we will simply not read that book, and I will introduce its topics in my lectures instead.

2) New reading. You should have read Chapters 1-5 of “Why Flip a Coin” by now, along with Part I of “Calculated Risks.” Please continue with Chapters 6-9 of “Why Flip a Coin” and Chapters 5-6 of “Calculated Risks.”

Summary of today’s class:

I proposed a problem with three cards. One has two red sides, one has a red and a blue side, and one has two blue sides. You pick a card at random and lay it down so that a random side faces up. Suppose the “up” side that you see is red. What is the probability that the other side is red?

Everyone guessed that it would be 50%. But that turns out to be wrong. We drew a probability tree, and calculated that P(RR)=1/3, P(RB)=1/3, P(BB)=1/3. Also, P(see R|RR)=1 and P(see R|RB)=1/2. So by multiplying (conditional probability formula), we get P(see R, RR)=1/3, P(see R,RB)=1/6. (The other cases are when we counterfactually see a blue side, and you can always ignore cases that were not observed.) The total probability that you see red is the sum of these: P(see R)=P(see R,RR)+P(see R,RB)=1/3+1/6=1/2. That makes sense, because the problem is completely symmetrical between seeing red and seeing blue. But then the conditional probability formula says that P(RR|see R)=(1/3)/(1/2)=2/3, so the probability that the other side is red is 2/3.
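The tree calculation above is easy to check with a short script. This is just a sketch of the same arithmetic using exact fractions; the variable names (priors, joint, and so on) are my own, not anything from class:

```python
from fractions import Fraction

# Probability-tree calculation for the three-card problem.
# Cards: RR, RB, BB, each picked with prior probability 1/3.
priors = {"RR": Fraction(1, 3), "RB": Fraction(1, 3), "BB": Fraction(1, 3)}
# Likelihood that the "up" side is red, given each card.
p_see_red = {"RR": Fraction(1), "RB": Fraction(1, 2), "BB": Fraction(0)}

# Joint probabilities P(see R, card) = prior * likelihood.
joint = {card: priors[card] * p_see_red[card] for card in priors}
# Marginal probability of seeing red: sum over the cards.
marginal = sum(joint.values())           # 1/3 + 1/6 + 0 = 1/2
# Posterior P(RR | see R) by the conditional probability formula.
posterior_RR = joint["RR"] / marginal    # (1/3) / (1/2) = 2/3

print(marginal, posterior_RR)  # 1/2 2/3
```

Using `Fraction` rather than floating point keeps the answers exact, which is handy when you want to compare against a hand calculation.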

But there was still a concern: once you see a red side, only two cards are possible, one with red on the other side and one with blue. So it seems that the probability is 50% that the other side is red.

So I imagined a similar coin experiment, with an HH coin, a TT coin and an HT coin. If we chose a coin at random and tossed it, and did this 30 times, we’d expect to get the HH coin 10 times and see 10 heads; we’d expect to get the HT coin 10 times and see a head in 5 of those tosses; and we’d see no heads in the 10 times we’d expect to get the TT coin. So of the 15 times we see a head, in 10 of those cases we have the HH coin, and the other side is also a head. That means that in 10/15 = 2/3 of the cases where we see a head, the other side of the coin is a head.
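The natural-frequencies argument can also be checked by actually running the experiment many times. Here is a Monte Carlo sketch (the class used 30 imagined tosses; this uses 100,000 random ones, so the result will only be close to 2/3, not exact):

```python
import random

# Simulate: pick one of three coins at random, toss it, and when the
# up side is a head, record whether the down side is also a head.
coins = [("H", "H"), ("H", "T"), ("T", "T")]
rng = random.Random(0)  # fixed seed so the run is repeatable

heads_seen = 0
other_side_heads = 0
for _ in range(100_000):
    coin = rng.choice(coins)
    up, down = rng.sample(coin, 2)  # toss: a random side faces up
    if up == "H":
        heads_seen += 1
        if down == "H":
            other_side_heads += 1

print(other_side_heads / heads_seen)  # close to 2/3
```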

We then did it with a spreadsheet-style calculation. Column 1 lists the states of nature: RR, RB, BB. Column 2 gives the prior probabilities of the three states (in this case, the probability that we picked that card): 1/3, 1/3, 1/3. Column 3 has the likelihoods (the probability of observing R, given that the particular state of nature in Column 1 is the case): 1, 1/2, 0. Multiplying Column 2 by Column 3 gives the joint probabilities in Column 4, the probabilities of observing both R and a particular state of nature: 1/3, 1/6, 0. We then add the joint probabilities to get the marginal probability, the probability of observing R: 1/3+1/6=1/2. Dividing each line of Column 4 by this marginal gives Column 5, the posterior probability of each state of nature, given that we observed R: 2/3, 1/3, 0.
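The spreadsheet procedure is the same no matter what the states, priors, and likelihoods are, so it is natural to write it once as a function. This is a sketch; the function name `bayes_table` is my own:

```python
from fractions import Fraction

def bayes_table(priors, likelihoods):
    """Spreadsheet-style Bayesian update.

    priors      -- Column 2: prior probability of each state of nature
    likelihoods -- Column 3: P(observation | state) for each state
    Returns (joints, marginal, posteriors), i.e. Column 4, its sum,
    and Column 5.
    """
    joints = [p * l for p, l in zip(priors, likelihoods)]  # Column 4
    marginal = sum(joints)                                 # sum of Column 4
    posteriors = [j / marginal for j in joints]            # Column 5
    return joints, marginal, posteriors

# The three-card problem: states RR, RB, BB.
priors = [Fraction(1, 3)] * 3
likelihoods = [Fraction(1), Fraction(1, 2), Fraction(0)]
joints, marginal, posteriors = bayes_table(priors, likelihoods)
print(posteriors)  # [Fraction(2, 3), Fraction(1, 3), Fraction(0, 1)]
```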

I mentioned that all of these methods give the right answer, and in response to a question, on a test or on homework, everyone should explain in detail how they got their result, no matter what method is used. (Aside: If you do it two ways, e.g., Natural Frequencies as well as Spreadsheet, and get the same answer, you can be more confident that you’ve done it right).

I used a table format to put down a joint probability table for P(state of nature, color seen). I showed how we can sum down the table to get the probability that we observe a given color, regardless of the state of nature. Since we write these numbers in the bottom margin of the table, we call these “marginal probabilities.” Similarly, if we sum across the table, we get the probabilities of the states of nature, regardless of the color seen. These are also marginal probabilities.
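The joint probability table and both sets of marginals can be written out explicitly. This is a sketch for the three-card problem, with rows indexed by state of nature and columns by the color seen:

```python
from fractions import Fraction

# Joint probability table P(state of nature, color seen).
states = ["RR", "RB", "BB"]
colors = ["R", "B"]
joint = {
    ("RR", "R"): Fraction(1, 3), ("RR", "B"): Fraction(0),
    ("RB", "R"): Fraction(1, 6), ("RB", "B"): Fraction(1, 6),
    ("BB", "R"): Fraction(0),    ("BB", "B"): Fraction(1, 3),
}

# Summing down a column: marginal probability of each color seen,
# regardless of the state of nature.
color_marginals = {c: sum(joint[s, c] for s in states) for c in colors}
# Summing across a row: marginal probability of each state of nature,
# regardless of the color seen.
state_marginals = {s: sum(joint[s, c] for c in colors) for s in states}

print(color_marginals)  # {'R': Fraction(1, 2), 'B': Fraction(1, 2)}
print(state_marginals)  # each state: Fraction(1, 3)
```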

I then described the “Monty Hall Problem,” based loosely on the former TV game, “Let’s Make a Deal.” In this game, the contestant is faced with three doors, behind which are various prizes. One of them is a great prize, the others not so great. (Traditionally, a car and two goats). You pick a door, and Monty may (in the real show) give you the prize, or may offer you a chance to switch, or may open a door containing a lesser prize and offer you a chance to switch.

The “Problem” has different rules from the show. In this problem, Monty knows where the prize is, and after you choose a door, always opens another door with a goat, and always offers you a chance to switch. The question is, is it advantageous to switch, if your aim is to get the great prize (“car”)?

Most in the class thought it would be advantageous to switch, but the reasoning seemed not quite right. Some thought that Monty’s opening the door changed the probability that the prize is behind the door you chose from 1/3 to 1/2, with 1/2 also as the probability that the prize is behind the other door. If that were the case, it wouldn’t matter whether you switch or not: the probability of getting the car would be 1/2 either way. So I was a bit puzzled why switching was being recommended. Perhaps the thought was that the chosen door keeps its probability of 1/3 while the other door rises to 1/2, which is more than 1/3. But the probabilities have to add up to 1, and 1/3+1/2 isn’t equal to 1.

So I went to the million door problem. There are a million doors, one of which has a car, and the rest booby prizes. You choose a door. Everyone agrees that the probability that you got the right door is 1 in a million. I then (knowing where the prize is) open 999,998 doors, none of which has the car. Since I always open a door that I know doesn’t have the car, does this change the probability that you initially chose the door with the car? Evidently not. So, as I open door after door, the probability that one of the other doors has the car also remains at 999,999/1,000,000. All I am doing is eliminating doors that don’t have the prize, but that doesn’t affect the probability that you guessed right (or wrong) initially. So after I open all the doors, there is still a chance of 1 in a million that the door you chose has the prize, and a chance of 999,999 in a million that the other door has the prize.
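The many-door argument can be tested by simulation. This is a sketch with 1,000 doors rather than a million, just so it runs quickly; the function name and structure are my own, and it is a Monte Carlo check rather than one of the methods from class:

```python
import random

def play(n_doors, switch, rng):
    """One round: the host opens every door except your choice and one
    other, never revealing the car. Returns True if you win the car."""
    car = rng.randrange(n_doors)
    choice = rng.randrange(n_doors)
    # The one door the host leaves closed: the car's door if you chose
    # wrong, otherwise an arbitrary other door.
    remaining = car if car != choice else (choice + 1) % n_doors
    return (remaining if switch else choice) == car

rng = random.Random(1)
trials = 10_000
wins = sum(play(1000, switch=True, rng=rng) for _ in range(trials))
print(wins / trials)  # close to 999/1000
```

Switching wins exactly when your initial choice was wrong, which happens with probability 999/1000 here, or 999,999/1,000,000 in the million-door version.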

I left you with the problem of showing (in the original Monty Hall Problem) that the probability that the prize is behind the door you didn’t choose is 2/3 after Monty shows you the goat, using one of the methods we have discussed (natural frequencies, probability tree, spreadsheet calculation). We’ll discuss your solutions on Wednesday.
