Archive for January, 2011

HCOL 196, Assignments

January 30, 2011

Here is a link to the next problem set, which is due on Monday, February 7.

Also, here’s some more reading for you. Please continue reading Flip, Chapters 3-5, and Calculated Risks, Chapters 3-4.

HCOL 196, January 28, 2011

January 28, 2011

We first discussed the remaining Fermi problems. On the parking lot problem, most teams just based their answer on personal experience with parking lots; one team actually tried to estimate the time for a space to come free from the fact that you could keep about 20 spaces in view. I commented that since one might arrive at a random point in that interval, dividing by two would be reasonable (though the other parameters are uncertain enough that this is probably not necessary). On the Grand Canyon, the most convincing calculation indicated that what I had heard is false. The population problem was fairly straightforward, and most teams correctly thought that the number of millionaire households would be a few percent (the actual figure is 5.5%). Small numbers were guessed for the piano tuners; I recall that the phone book lists something like ten (but they serve the entire county). And the value of the rice is much, much bigger than the U.S. National Debt.
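
On the rice problem (assuming it was the classic chessboard version, in which the grains double on each of the 64 squares), a few lines of Python show the scale; the grain weight and rice price are rough guesses of mine, not course data:

    # Total grains if one grain on the first square doubles across all 64 squares
    grains = 2**64 - 1                 # about 1.8e19 grains

    # Rough guesses: ~25 mg per grain, ~$1 per kg of rice
    kilograms = grains * 25e-6
    dollars = kilograms * 1.0

    print(f"{grains:.1e} grains, roughly ${dollars:.1e}")
    # ~ $5e14: hundreds of trillions, versus a 2011 U.S. national debt near $1.4e13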

We then looked at a problem where you pick one of three coins out of a hat; one has two heads, one has two tails, and one is a regular coin. If the coin is tossed and comes up heads, what is the probability that the other side of the coin is also heads (that is, that you picked the two-headed coin)? First we identified the states of nature, which are best described as which coin we have. Before tossing the coin, we have no data, so the prior on each is 1/3. (The likelihood will rule out that we have the two-tailed coin.) The likelihood of seeing heads if the coin is HH is 1, and of seeing heads if it is HT is 1/2. Thus the tree that pops out is as shown below. It is twice as likely that the coin has two heads as that it has a head and a tail.

This problem is equivalent to the “Bertrand’s box” paradox.

The problem can also be done as a “spreadsheet” calculation:
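
The spreadsheet itself appeared as an image in the original post; here is a minimal Python sketch of the same prior-times-likelihood arithmetic (the coin labels HH, HT, TT are mine):

    # States of nature: which coin was drawn from the hat
    coins = ["HH", "HT", "TT"]
    prior = {"HH": 1/3, "HT": 1/3, "TT": 1/3}

    # Likelihood of observing heads on the toss, given each coin
    likelihood = {"HH": 1.0, "HT": 0.5, "TT": 0.0}

    joint = {c: prior[c] * likelihood[c] for c in coins}
    marginal = sum(joint.values())               # P(heads) = 1/2
    posterior = {c: joint[c] / marginal for c in coins}

    print(posterior)                             # HH: 2/3, HT: 1/3, TT: 0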

Finally we looked at the “Monty Hall” problem, which is described here on Wikipedia. Another, completely equivalent version is the “Three Prisoners” problem. The important thing here is that Monty knows where the prize is, always opens a door with a goat, and always offers the contestant a chance to switch. Under those circumstances, it is an advantage to switch, as our probability tree shows:

I asked you to think about turning this into a spreadsheet calculation for Monday.
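
For anyone who wants to check the tree numerically before Monday (without giving away the spreadsheet exercise), here is a quick Monte Carlo sketch of my own; the rule that Monty always opens a goat door is built in:

    import random

    def monty_trial(switch):
        doors = [0, 1, 2]
        prize = random.choice(doors)
        pick = random.choice(doors)
        # Monty knows where the prize is and always opens a goat door
        opened = random.choice([d for d in doors if d != pick and d != prize])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == prize

    n = 100_000
    for switch in (False, True):
        wins = sum(monty_trial(switch) for _ in range(n))
        print(f"switch={switch}: win rate {wins / n:.3f}")   # ~0.333 vs ~0.667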

HCOL 196, January 26, 2011

January 27, 2011

We welcomed a new student into the class.

We discussed Question #2 on Friday’s little quiz. There are actually three different cases: a+c, either a+d or b+c (which turn out to be equivalent), and b+d. The payoffs are: for a+c, a sure loss of $520 (one student chose this, as I recall); for a+d or b+c, a 25% chance of gaining $240 and a 75% chance of losing $760 (I believe everyone else chose one of these two equivalent bets); and for b+d, a 3/8 chance of breaking even, a 1/16 chance of gaining $1000, and a 9/16 chance of losing $1000.
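
A few lines of Python reproduce these payoff distributions. The quiz sheet itself isn’t reproduced here, so the individual bets below are my reconstruction (a: a sure gain of $240; b: a 25% chance of gaining $1000; c: a sure loss of $760; d: a 75% chance of losing $1000), chosen because they are consistent with every combined payoff above:

    from itertools import product
    from collections import defaultdict

    # Each bet is a list of (probability, payoff) pairs (reconstructed, not from the quiz sheet)
    bets = {
        "a": [(1.0, 240)],
        "b": [(0.25, 1000), (0.75, 0)],
        "c": [(1.0, -760)],
        "d": [(0.75, -1000), (0.25, 0)],
    }

    def combine(x, y):
        """Payoff distribution from playing two independent bets together."""
        dist = defaultdict(float)
        for (p1, v1), (p2, v2) in product(bets[x], bets[y]):
            dist[v1 + v2] += p1 * p2
        return dict(dist)

    for pair in ("ac", "ad", "bc", "bd"):
        print(pair, combine(*pair))
    # ac: {-520: 1.0}; ad and bc: {240: 0.25, -760: 0.75}
    # bd: {1000: 1/16, 0: 3/8, -1000: 9/16}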

Viewed in terms of risk, a+c is the least risky, but offers no possibility of a gain; b+d is the most risky (and nobody liked it); and the other choices are intermediate in risk.

I then attempted to guess which coin-toss sequences were real and which were fake. I didn’t do very well, because quite a few of the ones I guessed were real were actually fake! But at least all the ones I identified as fake were indeed fake. I explained the method I used for guessing. The point of the experiment is that it’s really hard to make up random sequences.

We discussed the Fermi problems. A common method for estimating things (for example, in #1, #2, #3 and #4) was to guess the dimensions of the relevant objects, pretend they were just a big box, and compute volumes or radii as appropriate. Several biology majors in the class gave us informed data about the number of cells in the body. It is quite large (something like 10^13, as I recall; email me if my memory is incorrect on this). Estimating the number of students at UVM and how many books each needs was the method of choice for #5.
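
As a sanity check on the big-box method, here is a rough sketch of the cells-in-the-body estimate in Python; the round numbers are my own guesses, not the figures used in class:

    # Treat the body as a box of water and a cell as a tiny cube (rough guesses)
    body_volume = 70e-3                  # ~70 liters, in cubic meters
    cell_size = 10e-6                    # a typical cell is ~10 micrometers across
    cell_volume = cell_size ** 3         # ~1e-15 cubic meters

    print(f"{body_volume / cell_volume:.0e}")   # ~7e13, i.e. a few times 10^13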

We’ll finish this on Friday.

HCOL 196, Problem Set #2

January 25, 2011

I will be handing out a new problem set on Wednesday. It will be due on Monday. As usual, it is to be done in groups, and one paper to be handed in. It can be found here.

HCOL 196, January 24, 2011

January 25, 2011

I posed the problem of medical tests (e.g., mammography or prostate-cancer tests). These tests are not perfect. Sometimes they will report a problem when there is none; sometimes they will miss a problem that is there. How should we think about the results of such tests? As an example, mammography is about 90% accurate in both respects. If a woman has breast cancer, mammography will detect it correctly 90% of the time, but will report incorrectly 10% of the time. If a woman does not have breast cancer, mammography will correctly report no cancer 90% of the time, but will give a false positive 10% of the time. However, only 1% of the women in the population that gets routine mammography have an undetected cancer. If a woman gets a positive result, how worried should she be? The answer is: she should be worried, but not 90% worried. We worked this out in the following tree on the board.

Mammograms

Analysis of this tree led us to some basic rules that probabilities have to obey: every probability lies between 0 and 1; the probabilities of mutually exclusive, exhaustive alternatives add up to 1; and probabilities multiply along a branch of the tree, which is the conditional probability law spelled out below.

We can present the exact same calculation that came from the tree in “spreadsheet” form.

While working on the spreadsheet, we defined some new terms. The first column has the states of nature (SON) we are interested in learning about: here, whether the patient has the disease or not. These are mutually exclusive (at most one can be true) and exhaustive (at least one must be true). P(D) is the prior probability that the person has the disease; we call it “prior” because it represents our best information about this before looking at the data.

Data are always known. Here the data will be the result of the mammogram, either positive or negative. P(+|D) is the likelihood, that is, the probability of a positive mammogram given that the person has the disease. P(+,D) is the joint probability, the probability of both having the disease and getting a positive mammogram. The conditional probability law that we got by looking at the tree (chart 2 above) says that P(+,D)=P(+|D)P(D).

We add up all the joint probabilities to get P(+), the probability of the woman getting a positive mammogram, whether or not she has the disease. Some of these positives will be true positives (because the woman has the disease), and others will be false positives. We call P(+) the marginal.

Dividing each joint probability by the marginal gives us the posterior probability, for example P(D|+), the probability that the woman has the disease given that the mammogram was positive. This is just the conditional probability law again, in the form P(D|+)=P(D,+)/P(+). Note that P(D,+)=P(+,D); it doesn’t matter in which order you put things in a joint probability: the probability of having the disease and getting a positive mammogram is equal to the probability of getting a positive mammogram and having the disease.

The posterior probability is our goal, and the goal of every Bayesian analysis. It tells us everything that we can know about the states of nature after we consider the data. Spreadsheets, trees: both are correct, both are acceptable. Sometimes one is easier than the other for presenting the calculation. Use whichever method is best for you.
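
Here is that spreadsheet as a minimal Python sketch, using the numbers from the post (a 1% prior and 90% accuracy in both directions); the variable names follow the definitions above:

    # States of nature: disease (D) or no disease (ND)
    prior = {"D": 0.01, "ND": 0.99}

    # Likelihood of a positive mammogram given each state of nature
    likelihood = {"D": 0.90, "ND": 0.10}

    joint = {s: prior[s] * likelihood[s] for s in prior}     # P(+,D) = P(+|D) P(D)
    marginal = sum(joint.values())                           # P(+) = 0.108
    posterior = {s: joint[s] / marginal for s in prior}      # P(D|+), P(ND|+)

    print(posterior)   # P(D|+) is about 0.083: worried, but far from 90% worried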

HCOL 196, January 21, 2011

January 22, 2011

I assigned a problem set, due next Wednesday. The first part (Fermi problems) is to be done in your group, one paper from each group. The second is to be done individually, so I will get eleven sets of coin tosses, some fake and some real.

We had a little quiz and discussed the results. Some of the questions were designed (with varying success) to illustrate that the way logically identical questions are phrased (as a gain or a loss, as saving lives or losing lives, etc.) can change people’s answers. Some were on basic probability ideas. We will discuss the one remaining question on Monday, so be sure to bring your paper with you.

The blackboard had two pictures. One showed why it is less likely that the young bank teller would also be “socially concerned” in the ways the question asked: the probability that two things are both true can never exceed the probability of either one alone.

The other showed why in the taxicab problem, the probability that the witness correctly identified the cab as blue is around 41%. Some guessed in the 12-20% range, others in the 70-80% range, but no one got close to the correct answer. The tree diagram I drew will be one of our useful tools in this course.

I notice in this chart that I incorrectly put an “X” (meaning “ignore”) next to the third branch of the tree, which represents “Says blue, correct”. It should have been on the last branch (“Says green, mistake”).
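
For reference, here is the tree as an explicit calculation. The post doesn’t restate the problem’s numbers, so the values below are the standard ones for this problem (15% of the cabs are blue, 85% green, and the witness is right 80% of the time); they reproduce the roughly 41% answer:

    # Prior: base rates of cab colors in the city (assumed standard values)
    prior = {"blue": 0.15, "green": 0.85}

    # Likelihood that the witness says "blue", given the cab's true color
    says_blue = {"blue": 0.80, "green": 0.20}

    joint = {c: prior[c] * says_blue[c] for c in prior}
    p_says_blue = sum(joint.values())                 # 0.12 + 0.17 = 0.29
    print(joint["blue"] / p_says_blue)                # ~0.414, i.e. about 41%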

See you Monday.

HCOL 196, January 19, 2011

January 20, 2011

Today we introduced ourselves, and then discussed probability in the context of a little coin toss experiment. We distinguished between two ideas of probability, and in particular the Bayesian idea that probability can be used to encode your uncertainty about some proposition, even if the proposition is perfectly definite (as when a coin has already been tossed but not yet revealed, or when we ask about the length of the Nile River). We noted that people’s probability assessments can be different if they have different background information. We write

P(some proposition|background information)

which reads “the probability that the proposition is true, given the known background information.”

We saw that if background information changes, so will the probabilities. This is shown by our blackboard at the end of class:

I should have used a darker marker!

Reading: Start reading “Why Flip a Coin?” (“Flip”), Chapters 1-3. Also, start reading “Calculated Risks,” Chapters 1-2.