STAT 330 October 30, 2012

I am hoping and expecting that Sandy will not prevent me from making it to class on Tuesday, but it all depends on how bad the storm is. If I cannot make it, I will tweet at bayesrulez and send email (assuming that we have power!), and if that doesn’t work I will try to contact the department and get a message posted on the blackboard.

UPDATE: I am in Burlington and there will be class today.

Here is the next set of charts, on Hierarchical Bayes models.

Nate Silver, whom I have mentioned before as a Bayesian who tries to predict election outcomes (and more…), has a new book.

Andrew Gelman has an op-ed in the NY Times today on how to interpret the probabilities that Nate (and others) are calculating. And here is a similar discussion from today’s Salon.com.

Today we talked about Maximum Entropy priors; I explained how mathematical entropy can be used to quantify the amount of information that we stand to gain by learning which of a number of states happens to be the case, when all we have is a probability distribution on those states. We express maximum uncertainty, that is to say, a minimum of prior information, by maximizing the entropy of the distribution, subject to constraints that reflect what we do know. That maximization is accomplished using Lagrange multipliers; in the case of a continuous distribution we must also use the calculus of variations. I gave several examples of how to do this.
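
As a minimal worked sketch of the Lagrange-multiplier step, here is the simplest case: a discrete distribution on n states with only the normalization constraint. (This is just an illustration, not necessarily one of the in-class examples.)

```latex
% Maximize H(p) = -\sum_i p_i \ln p_i subject to \sum_i p_i = 1.
\[
  \mathcal{L}(p,\lambda)
    = -\sum_{i=1}^{n} p_i \ln p_i
      + \lambda\Bigl(\sum_{i=1}^{n} p_i - 1\Bigr)
\]
% Setting each partial derivative to zero:
\[
  \frac{\partial \mathcal{L}}{\partial p_i}
    = -\ln p_i - 1 + \lambda = 0
  \quad\Longrightarrow\quad
  p_i = e^{\lambda - 1} \quad \text{(the same value for every } i\text{)},
\]
% so the normalization constraint forces the uniform distribution:
\[
  p_i = \frac{1}{n}.
\]
```

With no information beyond the number of states, the maximum entropy prior is uniform. Adding moment constraints (for example, a fixed mean on $[0,\infty)$, a continuous-support problem where the calculus of variations comes in) yields other familiar maximum entropy distributions, such as the exponential.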

We then took up Jeffreys priors, introduced by the statistician Harold Jeffreys. The Jeffreys prior is proportional to the square root of the determinant of the Fisher information of the likelihood function. It has the advantage that if you transform the parameters of a problem, the Jeffreys prior computed in the new coordinates gives the same results as the Jeffreys prior in the original parameters would give. So you can decide which parameters are most convenient, and then just calculate and use the Jeffreys prior in those coordinates (if you have decided that the Jeffreys prior is the right one for the problem).
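
As an illustration (not necessarily one of the examples I will give next time), here is a sketch of the Jeffreys prior for a single Bernoulli observation with success probability $\theta$:

```latex
% Log-likelihood for one Bernoulli(\theta) observation x \in \{0,1\}:
% \ell(\theta) = x \ln\theta + (1-x)\ln(1-\theta).
\[
  \frac{\partial^2 \ell}{\partial \theta^2}
    = -\frac{x}{\theta^2} - \frac{1-x}{(1-\theta)^2},
  \qquad
  I(\theta)
    = -\mathbb{E}\!\left[\frac{\partial^2 \ell}{\partial \theta^2}\right]
    = \frac{\theta}{\theta^2} + \frac{1-\theta}{(1-\theta)^2}
    = \frac{1}{\theta(1-\theta)}.
\]
% The Jeffreys prior is proportional to the square root of the Fisher information:
\[
  \pi(\theta) \propto \sqrt{I(\theta)} = \theta^{-1/2}(1-\theta)^{-1/2},
\]
% which is the Beta(1/2, 1/2) distribution.
```

The invariance comes from the chain rule: under a reparametrization $\phi = h(\theta)$, the Fisher information transforms as $I_\phi(\phi) = I_\theta(\theta)\,(d\theta/d\phi)^2$, so $\sqrt{I_\phi}$ picks up exactly the Jacobian factor a density needs under a change of variables.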

Next time I will give you some examples; then we will proceed with the next chart set.
