HCOL 196, April 20, 2011

We discussed the second quiz.

The first question asked for the probability that the experimental drug is better than the standard drug. Some people thought that the 0.2, 0.3, 0.4, 0.5, 0.6 were probabilities; they are not, they are the states of nature for this problem. The probabilities for the standard and experimental drugs were the numbers in the corresponding (second and third) columns. In the chart, I have put them in parentheses to the left of and above the states of nature. Since the trials of the standard (control) and experimental drugs are independent, the probability for state (r,s) is just the product of the corresponding posterior probabilities from the table. For example, for the SON (0.2, 0.3), the corresponding probabilities are 0.4 and 0.2, so the number to put in the corresponding box is 0.08. You only need the boxes above the red diagonal line, since those represent the states for which the experimental drug is better. Then just add up those boxes and the result is 0.66. A shortcut is to do it row by row; the answer is the same (see the red numbers on the right of the picture).
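If you want to check the box-summing arithmetic yourself, here is a minimal Python sketch. The posterior values in it are placeholders rather than the actual quiz table (only the 0.4 and 0.2 from the example above are real), so it will only reproduce 0.66 if you substitute the true numbers.

```python
# Sketch of the quiz-1 calculation, using the independence of the two posteriors.
# The posterior values below are placeholders, NOT the quiz's table
# (only p_std[0.2] = 0.4 and p_exp[0.3] = 0.2 come from the example above).

states = [0.2, 0.3, 0.4, 0.5, 0.6]   # states of nature (possible cure rates)

# Hypothetical posterior probabilities over the states, one per drug.
p_std = {0.2: 0.4, 0.3: 0.3, 0.4: 0.2, 0.5: 0.1, 0.6: 0.0}   # standard drug (placeholder)
p_exp = {0.2: 0.1, 0.3: 0.2, 0.4: 0.3, 0.5: 0.2, 0.6: 0.2}   # experimental drug (placeholder)

# Independence means P(standard rate = r, experimental rate = s) = p_std[r] * p_exp[s].
# Add up every box in which the experimental rate exceeds the standard rate
# (the boxes above the red diagonal line in the chart).
prob_exp_better = sum(p_std[r] * p_exp[s]
                      for r in states for s in states if s > r)
print(prob_exp_better)   # with the quiz's actual posteriors this comes out to 0.66
```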

There was no need (and no way) to start with prior and likelihood to get posteriors for this problem. The posteriors were already given to you. This was to save you time.

For the polling problem, the critical thing is to recognize that H on the first coin and T on the second (HT) is a different outcome from TH. The probability tree shows that the probability of HH equals the probability of TT (each 1/4), and the probability of one head and one tail (in either order) is 1/2. This means we expect 25% of the answers to be “programmed” yeses, corresponding to HH, and likewise 25% to be programmed nos, corresponding to TT. Since 40% of the answers were yes and 25% of all answers are programmed yeses, the remaining 15% were genuine yeses. Similarly, 60%-25%=35% of the answers were genuine nos. Of the unprogrammed answers (half of the total), 15/50 = 30% were yes and 35/50 = 70% were no.
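In code, the unscrambling is just this arithmetic:

```python
# Unscrambling the randomized-response poll, using the numbers above:
# 40% of all answers were "yes"; 25% of answers are programmed yeses (HH)
# and 25% are programmed nos (TT).

p_yes_observed = 0.40      # fraction of "yes" answers in the poll
p_programmed_yes = 0.25    # P(HH): forced to answer yes
p_programmed_no = 0.25     # P(TT): forced to answer no

genuine_yes = p_yes_observed - p_programmed_yes          # 0.15 of all answers
genuine_no = (1 - p_yes_observed) - p_programmed_no      # 0.35 of all answers
genuine_total = genuine_yes + genuine_no                 # 0.50 answered truthfully

print(genuine_yes / genuine_total)   # 0.30 -> 30% of truthful answers were yes
print(genuine_no / genuine_total)    # 0.70 -> 70% were no
```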

Here, we approximated the stockbroker’s claim by assuming that his actual hit rate is 80%, which is what the data said. This is obviously an approximation, since the true rate could be some other number and still be consistent with the observed data. So the calculation is going to overestimate the probability that the stockbroker can actually do what he says. If we wanted to do better, we would put (say) half the prior probability on 65% (which says he can’t do it), and parcel out the remaining half amongst all the other possibilities. But that is too much for a 10-minute test question, hence the approximation. (Some people outlined the better scheme, and they got credit for that.)

So there are two states of nature: either the broker can’t really do better than the market (65%), or he can (represented by 80%). In the table I tentatively put 1/2 on each possibility. The likelihoods use the probabilities 0.65 (0.35) and 0.8 (0.2), raised to the power of the number of successes (failures). The calculation isn’t hard, but I gave credit if you got this far and then explained how to complete it. The result is approximately 0.4 for the posterior probability that he can’t do it and 0.6 that he can. However, experienced investors know that it is really hard to beat the market, and if this were taken into account by changing the prior appropriately, it is quite likely that we would conclude that the stockbroker can’t really do what he claims.
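Here is a sketch of that calculation. The quiz’s actual number of picks isn’t reproduced here, so the code assumes 8 hits out of 10 picks purely for illustration (consistent with the observed 80% hit rate).

```python
# Posterior for the stockbroker problem, following the table described above.
# The record of 8 hits out of 10 picks is an assumption for illustration only.

hits, misses = 8, 2                      # assumed record (hypothetical)

prior_cant, prior_can = 0.5, 0.5         # tentative 1/2 on each state of nature

like_cant = 0.65**hits * 0.35**misses    # "no better than the market" (65%)
like_can  = 0.80**hits * 0.20**misses    # "can really do it" (80%)

marginal = prior_cant * like_cant + prior_can * like_can
post_cant = prior_cant * like_cant / marginal
post_can  = prior_can  * like_can  / marginal

print(post_cant, post_can)   # roughly 0.37 and 0.63 with these assumed counts
```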

For Problem 4(a), I realized after class that I had miscalculated the numerical value of the likelihood for “guilty”. I had apparently raised the 0.99 to the 6th power in my calculator (which has pretty small buttons), and not the 9th power. I’ll say more in a moment.

The first step is to recognize that the juror cannot use the fact that the accused was arrested to set the prior probability. In particular, you can’t say that because the grand jury indicted, the probability is at least 1/2 that he is guilty: the grand jury would have used the evidence to draw that conclusion, you are going to see the same evidence, and a Bayesian can’t use the same evidence twice. About all you know is that there are a million people in the city and, as far as you know (since no evidence has been presented yet), the accused could have been picked at random from that population. So the prior on guilt is 1/1,000,000. You can safely write the prior on innocence as 1 (we are going to be dividing by the marginal anyway, and 1 is very close to the implied prior of 999,999/1,000,000; the difference is insignificant).

The probabilities to be used in the likelihoods are 0.1 (0.9) for innocence and 0.99 (0.01) for guilt. Note that again each pair of numbers adds to 1; several people made the mistake of using pairs that don’t. The corresponding numbers are raised to the 9th and 1st powers, respectively, to get the likelihoods.
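Putting these numbers together, a minimal sketch (the reading of 0.99 and 0.1 as the per-item probabilities, raised to the 9th and 1st powers, follows the description above; the corrected picture itself is not reproduced here):

```python
# Posterior probability of guilt for Problem 4(a), using the numbers above:
# prior 1/1,000,000 on guilt (innocence taken as 1), nine matching pieces of
# evidence and one non-match.

prior_guilty = 1e-6
prior_innocent = 1.0                     # effectively 999,999/1,000,000

like_guilty   = 0.99**9 * 0.01**1        # probabilities 0.99 (0.01) if guilty
like_innocent = 0.10**9 * 0.90**1        # probabilities 0.1 (0.9) if innocent

marginal = prior_guilty * like_guilty + prior_innocent * like_innocent
post_guilty = prior_guilty * like_guilty / marginal
print(post_guilty)    # about 0.91 with these inputs
```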

At the end I noted that you can do the calculation of 0.99^9 by using a binomial expansion and dropping all but the first and second terms (hopefully you remember this from high school). That is shown at the bottom of the page. It was when I looked at this line after class that I realized I had miscalculated it on my calculator when I prepared for class discussion. The picture below has been corrected.
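For comparison, the two-term shortcut and the exact value:

```python
# Binomial-expansion shortcut for 0.99^9: (1 - 0.01)^9 ~ 1 - 9*0.01,
# keeping only the first two terms of the expansion.
approx = 1 - 9 * 0.01        # 0.91
exact  = (1 - 0.01) ** 9     # 0.9135...
print(approx, exact)
```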

Finally, for 4(b), the decision in the jury case, the crucial thing is to get a decent loss function. For the correct decisions (AI and CG), the losses are zero. You can arbitrarily set the loss for AG (acquitting a guilty person) equal to 1. The important thing is to make the loss for CI (convicting an innocent person) big enough, since the smaller you make it (relative to 1), the more likely you are to convict someone who is actually innocent. For example, if you pick the loss for CI equal to 2, a posterior probability of 2/3 on guilt (and hence 1/3 on innocence) would be sufficient to convict. How would you feel if you were actually innocent and were sent to prison on evidence that still left you, in the jury’s eyes, with a probability of 1/3 of being innocent? Not very well, I think. In class we had decided that a loss of 100 would be appropriate in a burglary case, for example. We might choose an even larger loss in a murder case for which the penalty is life in prison, since the consequences of convicting an innocent person are even larger in that case.
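As a sketch of the general rule (the function name is just for illustration): with zero loss for correct decisions, you convict when the expected loss of convicting, L_CI times the probability of innocence, is smaller than the expected loss of acquitting, L_AG times the probability of guilt, i.e., when the posterior probability of guilt exceeds L_CI / (L_CI + L_AG).

```python
# Decision rule for 4(b): convict only when the expected loss of convicting is
# smaller than the expected loss of acquitting.
#   E[loss | convict] = L_CI * P(innocent)     (convicting an innocent person)
#   E[loss | acquit]  = L_AG * P(guilty)       (acquitting a guilty person)

def conviction_threshold(loss_CI, loss_AG=1.0):
    """Smallest posterior probability of guilt at which convicting has lower expected loss."""
    return loss_CI / (loss_CI + loss_AG)

print(conviction_threshold(2))     # 0.667 -> a loss of 2 convicts on 2/3 probability of guilt
print(conviction_threshold(100))   # 0.990 -> the burglary-case loss we chose in class
```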

Since some people are going home early this weekend, I will discuss the last question on Monday. Friday I will talk about how we are using Bayesian methods in our astronomical research.
