In class I mentioned an article in Scientific American by Efron and Morris on the Stein problem. Here it is! There’s also a Wikipedia article on the Stein problem here.

At the start of class I mentioned an NPR story suggesting that more reliable answers to polling questions about elections could be obtained by asking people not who they are going to vote for, but who they think will win the election. Here is the story, and here is a link to a related article by the respected pollster Andrew Kohut, of Pew Research.

Here are the notes on Bayesian hypothesis testing, which we started on today.

And here is the next (and final) assignment, due after the holiday break.

I continued the discussion of the Stein problem; we saw how an estimator that dominates the obvious estimator shrinks the estimated batting averages toward the common mean (this was the Efron-Morris estimator). Such shrinkage is typical of shrinkage estimators, which are a common feature of hierarchical Bayes models. I mentioned that the Efron-Morris estimator (and the James-Stein estimator) are themselves inadmissible, although they are better than the naive estimator. I also noted that every admissible decision rule (with some exceptions concerning finiteness) is a Bayes rule. This is interesting because admissibility is a frequentist idea, so this observation unites frequentist and Bayesian ideas in the area of decision theory.
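To see the shrinkage concretely, here is a small simulation sketch (my own toy numbers, not the Efron-Morris data): it compares the naive estimator with a positive-part James-Stein estimator that shrinks toward the grand mean, averaging squared error over many replications. The true means, observation noise, and dimension are all hypothetical choices.

```python
import numpy as np

rng = np.random.default_rng(0)
p, sigma, reps = 18, 1.0, 2000
# Hypothetical true means, tightly clustered (as batting abilities are)
theta = rng.normal(0.27, 0.05, size=p)

naive_sse = js_sse = 0.0
for _ in range(reps):
    x = rng.normal(theta, sigma)          # naive estimator: x itself
    xbar = x.mean()
    s2 = ((x - xbar) ** 2).sum()
    # positive-part James-Stein shrinkage factor toward the grand mean
    c = max(0.0, 1.0 - (p - 3) * sigma**2 / s2)
    js = xbar + c * (x - xbar)
    naive_sse += ((x - theta) ** 2).sum()
    js_sse += ((js - theta) ** 2).sum()

print("naive risk:", naive_sse / reps)
print("James-Stein risk:", js_sse / reps)
```

Because the true means are much closer together than the noise scale, the shrinkage estimator's average squared error is far below the naive estimator's, illustrating the domination discussed above.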

I then discussed several examples; one was a normal model analogous to the binomial model that we discussed the other day for the baseball batting averages. I also discussed an oil well logging problem that one of my Texas students suggested some years ago. I went through writing down the posterior probability, but I did not discuss the sampling strategy (which is in the notes). There are two important points: first, enforcing the condition that by including a factor of in the likelihood; and second, using a hierarchical independence Jeffreys prior on and .
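For the normal analogue of the batting-average model, the shrinkage shows up directly in the conjugate posterior mean. Here is a minimal sketch assuming the standard normal-normal setup with known variances (the notes may parameterize the model differently, and the function name is my own):

```python
import numpy as np

def posterior_means(y, sigma2, mu, tau2):
    """Posterior means for y[i] ~ N(theta[i], sigma2), theta[i] ~ N(mu, tau2).

    The posterior mean is a precision-weighted average of each observation
    and the prior mean, i.e. each y[i] is shrunk toward mu.
    """
    w = (1.0 / sigma2) / (1.0 / sigma2 + 1.0 / tau2)  # weight on the data
    return w * y + (1.0 - w) * mu

y = np.array([0.9, -0.3, 1.7, 0.2])
# With sigma2 == tau2, the weight is 1/2: each y[i] moves halfway toward mu
print(posterior_means(y, sigma2=1.0, mu=0.0, tau2=1.0))
```

In a full hierarchical treatment mu and tau2 would themselves get priors (as in class), but the precision-weighted form above is the mechanism that produces the shrinkage.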

Between those two examples I discussed an example that shows how a bunch of independent MCMC calculations can be combined, after the fact, into a single hierarchical model, by using Peter Müller’s “slick trick” of using the samples from the individual calculations to provide the proposals for the hierarchical model.
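The following is a hedged sketch of the idea behind Müller's trick on a toy normal hierarchy (my own example, not the one from class). Stage 1 produces samples from each group's stand-alone posterior; stage 2 runs the hierarchical sampler, recycling those saved samples as independence Metropolis-Hastings proposals. Because each stage-1 posterior is proportional to its group's likelihood (flat prior), the acceptance ratio conveniently reduces to a ratio of hierarchical priors.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy hierarchical model: y[j] ~ N(theta[j], 1), theta[j] ~ N(mu, tau2),
# flat prior on mu. All numbers here are illustrative.
y = np.array([1.2, -0.4, 0.8, 2.1, 0.3])
J, tau2, n_iter = len(y), 0.5**2, 4000

# Stage 1: independent per-group runs. Under a flat prior each stand-alone
# posterior is N(y[j], 1), so we draw the "saved samples" directly.
saved = rng.normal(y[:, None], 1.0, size=(J, 5000))

def log_prior(theta_j, mu):
    return -0.5 * (theta_j - mu) ** 2 / tau2

# Stage 2: hierarchical sampler; proposals for theta[j] are recycled
# stage-1 samples. Since the proposal density cancels the likelihood,
# the MH ratio is just the prior ratio.
theta = y.copy()
mu = y.mean()
accept = 0
for _ in range(n_iter):
    for j in range(J):
        prop = saved[j, rng.integers(saved.shape[1])]
        if np.log(rng.random()) < log_prior(prop, mu) - log_prior(theta[j], mu):
            theta[j] = prop
            accept += 1
    # Gibbs update for mu given theta (flat prior on mu)
    mu = rng.normal(theta.mean(), np.sqrt(tau2 / J))

print("acceptance rate:", accept / (n_iter * J))
```

The appeal is that the expensive per-group computations are done once, in parallel, and the hierarchical coupling is added afterward with cheap accept/reject steps.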


This entry was posted on November 6, 2012 at 9:30 pm and is filed under STAT 330.