HCOL 195 9/30/09

Housekeeping: There will be a journal due this Friday, but none for the next two weeks because of the October 9 break and the quiz on the 14th.

I have set aside Wednesday the 7th and Monday the 12th to discuss the test review questions. Please get together with your group and discuss them in advance so that we can have a fruitful discussion in class.

I brought up the subject of the value of a human life in the context of making government decisions such as whether to require cars to have seat belts, and similar questions. Today we looked at decision-making in a more general context, and I introduced a different graphical notation for decision-making. On a tree, we used circles to denote probabilistic (chance) events that are not under our control. The new notation is to use a square for a decision-making node that is fully under our control. As an example, we constructed a tree for deciding whether to bring an umbrella with us, depending on the probability of rain. The final version of that tree is here (more discussion below):

Decision Tree for Umbrellas

The numbers on the right represent our losses. These are one student’s losses, a person who loves to go out in the rain and doesn’t mind getting wet; most people assigned different losses. The worse the outcome, the larger the number, with 0 being the best outcome. For this student, getting rained on was the best outcome; carrying an umbrella and not getting rained on was the worst, and not carrying the umbrella and not getting rained on was almost as bad. Once the losses are entered at the ends of the branches, the tree is evaluated this way: at each chance node (such as whether or not it will rain), we multiply the loss at the end of each branch by the corresponding probability and add these products across all the branches. The result is the expected loss, and it becomes the value of the branch leading up to the chance node. Then, at the square decision node, we choose the branch (decision) that has the smallest expected loss and cut off all the other branches. That way we make the decision that has the least expected loss.
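The arithmetic at the chance nodes can be sketched in a few lines of Python. The loss numbers and the 50% chance of rain below are illustrative assumptions, not the student's numbers from class; they describe a more typical person who dislikes getting wet.

```python
# Expected-loss evaluation of the umbrella decision tree.
# Losses and p_rain are assumed for illustration only.
p_rain = 0.5  # probability at the chance node

# losses[decision][outcome]: larger numbers are worse, 0 is best
losses = {
    "take umbrella":  {"rain": 1,  "no rain": 2},
    "leave umbrella": {"rain": 10, "no rain": 0},
}

# At each chance node: multiply each branch's loss by its probability
# and sum across branches to get the expected loss of that decision.
expected = {
    decision: p_rain * branch["rain"] + (1 - p_rain) * branch["no rain"]
    for decision, branch in losses.items()
}

# At the square decision node: keep the branch with the least expected loss.
best = min(expected, key=expected.get)
print(expected)  # {'take umbrella': 1.5, 'leave umbrella': 5.0}
print(best)      # take umbrella
```

With these particular losses, taking the umbrella has the smaller expected loss at a 50% chance of rain; lowering `p_rain` far enough would flip the decision.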

This is the general procedure for any decision problem: evaluate expected losses at chance nodes, and keep only the decisions that have the least expected loss at decision nodes. A tree can have any number of nodes of either type; the same procedure is applied backward all the way to the root of the tree. On the tree illustrated, the decision came out against carrying the umbrella, since the student doesn’t mind getting wet but prefers not to carry an umbrella when it doesn’t rain. Most people’s losses would come out differently; for them, there would be a threshold probability of rain, and if the probability were high enough, the person would elect to bring an umbrella.

The losses are up to the person making the decision. They measure how bad each outcome is, relative to the best outcome. The numbers don’t all have to be different; you just have to be able to make a decision.

We had a discussion about jury duty. Again, we know that no juror can be absolutely sure of innocence or guilt, so the probability of guilt will be greater than zero and less than one. We can rank the outcomes (convicting an innocent person, convicting a guilty person, acquitting an innocent person, acquitting a guilty person); some of them are much worse than others. We noted that in civil cases the criterion is “preponderance of the evidence,” which just means that the side that wins is the one that has over 50% of the probability in its favor. In criminal cases, the criterion is “beyond a reasonable doubt,” which is much stricter. However, the Constitution does not say precisely what this means, and it is up to each juror to decide (if they are being Bayesian about it) what probability of guilt satisfies the criterion. We’ll discuss this more later.
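A juror's version of the umbrella calculation makes the "reasonable doubt" threshold concrete. The loss numbers below are hypothetical, chosen only to illustrate; each juror would pick their own, and the two correct outcomes are assigned zero loss for simplicity.

```python
# "Beyond a reasonable doubt" as an expected-loss threshold (a sketch;
# the loss values are hypothetical, not from class).
L_convict_innocent = 10.0  # worst outcome: assumed ten times as bad...
L_acquit_guilty = 1.0      # ...as letting a guilty person go free
# Convicting the guilty and acquitting the innocent carry zero loss here.

# With p = probability of guilt:
#   expected loss of convicting = (1 - p) * L_convict_innocent
#   expected loss of acquitting =       p * L_acquit_guilty
# Convict only when the first is smaller, i.e. when p exceeds:
threshold = L_convict_innocent / (L_convict_innocent + L_acquit_guilty)
print(round(threshold, 3))  # 0.909
```

So a juror who regards a wrongful conviction as ten times worse than a wrongful acquittal would, on this analysis, vote to convict only when the probability of guilt exceeds about 91%, which is one way of making "beyond a reasonable doubt" precise for that juror.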