We started with a few words from Bill, who remarked that the idea of Gibbs sampling includes the idea of sampling in turn from various conditional distributions, so that each (vector) parameter $\theta_j$ is sampled from $p(\theta_j \mid \theta_{-j})$, conditional on the current values of all the other parameters, where the dash indicates that $\theta_j$ itself is omitted. When we did this, we assumed that we could actually sample easily and exactly from the conditional distributions. However, this is usually not possible, and the way around it is to use Metropolis-Hastings sampling on those parameters. This is called “Metropolis-within-Gibbs.”
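As a concrete illustration (not from the lecture), here is a minimal Python sketch of Metropolis-within-Gibbs on a made-up two-parameter target (a correlated bivariate normal): each sweep updates one coordinate at a time, but with a random-walk Metropolis step in place of an exact conditional draw.

```python
import math
import random

random.seed(1)

# Hypothetical target: unnormalized log posterior of a
# correlated bivariate normal (rho = 0.5).
def log_post(x, y):
    rho = 0.5
    return -(x * x - 2 * rho * x * y + y * y) / (2 * (1 - rho * rho))

def mh_step(log_cond, current, scale=1.0):
    """One random-walk Metropolis step on a single coordinate."""
    prop = current + random.gauss(0.0, scale)
    if math.log(random.random()) < log_cond(prop) - log_cond(current):
        return prop
    return current  # proposal rejected; keep the current value

x, y = 0.0, 0.0
draws = []
for _ in range(5000):
    # "Gibbs" sweep: visit each parameter in turn, replacing the
    # exact conditional draw with a Metropolis-Hastings update.
    x = mh_step(lambda v: log_post(v, y), x)
    y = mh_step(lambda v: log_post(x, v), y)
    draws.append((x, y))

mean_x = sum(d[0] for d in draws) / len(draws)
print(round(mean_x, 2))
```

Any proposal scale works in principle; in practice it is tuned so that a reasonable fraction of proposals are accepted.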

I remarked that you have to be careful when using Metropolis-Hastings on a parameter (like $\sigma$) that is strictly positive. One way to do this is just to define the posterior conditional distribution on this parameter as zero for $\sigma \le 0$, so that any attempt to propose in that region will be rejected; a simpler and probably better method is to log-transform the variable: $\tau = \log \sigma$, for example. This has many advantages. One is that the Jeffreys prior on $\sigma$ is singular at 0, whereas the corresponding prior on $\tau$ is flat.
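A sketch of the log-transform trick, with made-up data: sampling $\sigma$ on the $\tau = \log\sigma$ scale requires adding the log-Jacobian, $\log|d\sigma/d\tau| = \tau$, to the log posterior, and with the Jeffreys prior $1/\sigma$ this makes the prior exactly flat in $\tau$.

```python
import math
import random

random.seed(2)

# Hypothetical example: sample sigma > 0 by random-walk MH on
# tau = log(sigma).  Target: posterior of sigma from n iid
# N(0, sigma^2) observations with Jeffreys prior p(sigma) ∝ 1/sigma.
data = [1.2, -0.7, 0.4, 2.1, -1.5, 0.9, -0.3, 1.1]
n = len(data)
ss = sum(x * x for x in data)  # sum of squares

def log_post_sigma(sigma):
    # log likelihood + log Jeffreys prior (up to a constant)
    return -n * math.log(sigma) - ss / (2 * sigma ** 2) - math.log(sigma)

def log_post_tau(tau):
    # Change of variables: add the log-Jacobian tau, which turns the
    # 1/sigma prior into a flat prior on tau.
    return log_post_sigma(math.exp(tau)) + tau

tau = 0.0
sigmas = []
for _ in range(20000):
    prop = tau + random.gauss(0.0, 0.5)
    if math.log(random.random()) < log_post_tau(prop) - log_post_tau(tau):
        tau = prop
    sigmas.append(math.exp(tau))

post_mean = sum(sigmas) / len(sigmas)
print(post_mean)  # posterior mean of sigma
```

Every proposed $\tau$ maps to a positive $\sigma$, so no proposal is ever wasted on the forbidden region.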

Jeff took over to discuss linear models. He remarked that linear models like regression, multiple regression, ANOVA, … are still a huge part of the statistician’s toolkit, even though modern computational power enables us to fit models that previously would have been intractable.

Jeff started with the simplest case: Normal, one variable. The problem is estimation from a normal population with mean $\mu$ and variance $\sigma^2$. The data are *n* *iid* random variables $x_1, \dots, x_n$, each $\sim N(\mu, \sigma^2)$. The objective is to estimate $\mu$.

The likelihood is

$$L(\mu, \sigma^2) \propto \sigma^{-n} \exp\left(-\frac{n(\bar{x} - \mu)^2 + (n-1)s^2}{2\sigma^2}\right),$$

where $\bar{x} = \frac{1}{n}\sum_i x_i$ is the sample mean and $s^2 = \frac{1}{n-1}\sum_i (x_i - \bar{x})^2$ (you’ve seen this notation before).
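The decomposition hiding inside that exponent, $\sum_i (x_i - \mu)^2 = n(\bar{x} - \mu)^2 + (n-1)s^2$, holds for any $\mu$, and is easy to check numerically with made-up numbers:

```python
# Numerical check of: sum (x_i - mu)^2 = n*(xbar - mu)^2 + (n-1)*s^2
data = [4.1, 5.3, 3.8, 6.0, 5.1]  # hypothetical data
n = len(data)
xbar = sum(data) / n
s2 = sum((x - xbar) ** 2 for x in data) / (n - 1)

mu = 4.0  # arbitrary value of mu
lhs = sum((x - mu) ** 2 for x in data)
rhs = n * (xbar - mu) ** 2 + (n - 1) * s2
print(abs(lhs - rhs) < 1e-9)
```

This is why $(\bar{x}, s^2)$ are sufficient: the likelihood depends on the data only through them.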

We have several cases:

1. $\sigma^2$ known (this never happens, but it is pedagogically interesting).

We have two subcases:

1a) flat prior on $\mu$

1b) normal prior on $\mu$: $\mu \sim N(\mu_0, \sigma_0^2)$

(1a) is the limiting case of (1b) when the prior variance $\sigma_0^2 \to \infty$. So the only case Jeff discussed under (1) was (1b).

2. $\mu$, $\sigma^2$ both unknown.

Again, we have two cases:

2a. Flat prior on $\mu$, Jeffreys prior on $\sigma^2$.

2b. Conjugate priors, i.e.,

$$\mu \mid \sigma^2 \sim N(\mu_0, \sigma^2/n_0), \qquad \sigma^2 \sim \text{Inv-Gamma}(\alpha, \beta),$$

(2a) is the limiting case of (2b) when $n_0 \to 0$ and $\alpha, \beta \to 0$. So Jeff only discussed (2b).

So we really only have to look at two cases: (1b) and (2b).

Jeff then stated, but did not prove, the preliminary result that you are to prove and turn in on 3/26/09. We will find this result very useful.

Case 1b: $\sigma^2$ known, prior $\mu \sim N(\mu_0, \sigma_0^2)$.

Now expand the product of likelihood and prior, apply the useful theorem, and just read off the answer, which is:

$$\mu \mid x \sim N(\mu_1, \sigma_1^2),$$

where

$$\frac{1}{\sigma_1^2} = \frac{1}{\sigma_0^2} + \frac{n}{\sigma^2}, \qquad \mu_1 = \sigma_1^2\left(\frac{\mu_0}{\sigma_0^2} + \frac{n\bar{x}}{\sigma^2}\right).$$

As $\sigma_0^2 \to \infty$, $\mu_1 \to \bar{x}$ and the standard deviation $\sigma_1 \to \sigma/\sqrt{n}$.

Note that in this limit we recover the frequentist result (known $\sigma$, flat prior on $\mu$).

For large $n$, the data dominate the prior, and the posterior is again approximately $N(\bar{x}, \sigma^2/n)$.
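The Case 1b update is easy to turn into code. A sketch with hypothetical numbers, showing that a very diffuse prior reproduces the flat-prior answer $(\bar{x}, \sigma^2/n)$:

```python
# Posterior for mu with known sigma^2 and a normal prior on mu:
# precisions add, and the posterior mean is a precision-weighted average.
def posterior(mu0, sigma0_sq, xbar, sigma_sq, n):
    prec1 = 1.0 / sigma0_sq + n / sigma_sq          # posterior precision
    mu1 = (mu0 / sigma0_sq + n * xbar / sigma_sq) / prec1
    return mu1, 1.0 / prec1                          # posterior mean, variance

# Hypothetical numbers:
xbar, sigma_sq, n = 10.0, 4.0, 25

# Informative prior centered at 0 pulls the posterior mean toward 0:
print(posterior(0.0, 1.0, xbar, sigma_sq, n))

# Nearly flat prior (huge sigma0^2): recovers xbar and sigma^2/n.
mu1, var1 = posterior(0.0, 1e12, xbar, sigma_sq, n)
print(round(mu1, 6), round(var1, 6))
```

The second line of output is the frequentist answer: mean $\bar{x} = 10$ and variance $\sigma^2/n = 4/25 = 0.16$.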

Case 2b: $\mu$, $\sigma^2$ unknown, $\mu \mid \sigma^2 \sim N(\mu_0, \sigma^2/n_0)$, $\sigma^2 \sim \text{Inv-Gamma}(\alpha, \beta)$.

The calculation for $\mu \mid \sigma^2, x$ is very similar to case (1b). See the notes. The result is

$$\mu \mid \sigma^2, x \sim N(\mu_1, \sigma^2/n_1), \qquad \sigma^2 \mid x \sim \text{Inv-Gamma}(\alpha_1, \beta_1),$$

where

$$n_1 = n_0 + n, \quad \mu_1 = \frac{n_0 \mu_0 + n\bar{x}}{n_1}, \quad \alpha_1 = \alpha + \frac{n}{2}, \quad \beta_1 = \beta + \frac{(n-1)s^2}{2} + \frac{n_0 n (\bar{x} - \mu_0)^2}{2 n_1}.$$

Priors and posteriors are in the same family; they are conjugate.
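A sketch of the conjugate update in one common parameterization (the notes may parameterize differently); the helper name and numbers here are hypothetical:

```python
# Normal-inverse-gamma conjugate update:
#   mu | sigma^2 ~ N(mu0, sigma^2 / n0),  sigma^2 ~ Inv-Gamma(alpha, beta)
def nig_update(mu0, n0, alpha, beta, data):
    n = len(data)
    xbar = sum(data) / n
    s2 = sum((x - xbar) ** 2 for x in data) / (n - 1) if n > 1 else 0.0
    n1 = n0 + n                                  # prior "sample size" grows
    mu1 = (n0 * mu0 + n * xbar) / n1             # weighted average of means
    alpha1 = alpha + n / 2.0
    beta1 = (beta + 0.5 * (n - 1) * s2
             + n0 * n * (xbar - mu0) ** 2 / (2.0 * n1))
    return mu1, n1, alpha1, beta1

data = [4.1, 5.3, 3.8, 6.0, 5.1]  # hypothetical data
mu1, n1, alpha1, beta1 = nig_update(0.0, 1.0, 2.0, 2.0, data)
print(mu1, n1, alpha1, beta1)
```

Because the output has the same four-parameter form as the input, the update can be applied again as more data arrive: that is what conjugacy buys you.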

Specific example: two predictors.

Linear regression. We’ll start with a specific example and perhaps do some general theory later. The example comes from “Bayesian Modeling Using WinBUGS” by Ntzoufras.

Example: We are interested in understanding the relation between the total service time $y$ (minutes) for restocking vending machines and two predictors: $x_1$, the number of cases stocked, and $x_2$, the distance in feet from the truck to the machine.

Model: $y_i = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \epsilon_i$ for *i* = 1, …, 25.

Assume that the errors $\epsilon_i$ are $N(0, \sigma^2)$ and independent (*iid*). Let $\tau = 1/\sigma^2$ (the precision).

We’ll use INDEPENDENT priors on the $\beta$’s and $\tau$: specifically, normal priors with mean 0 and large variance on $\beta_0, \beta_1, \beta_2$, and gamma(0.01, 0.01) on $\tau$.
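The lecture fit this model in WinBUGS. As a rough stand-alone check (all numbers here are hypothetical and simulated, not the book’s data): with priors this vague, the posterior means of the $\beta$’s essentially match ordinary least squares, which we can compute directly from the normal equations.

```python
import random

random.seed(3)

# Simulate 25 observations from the model with hypothetical coefficients.
beta = [2.0, 1.6, 0.01]                    # hypothetical beta0, beta1, beta2
rows, ys = [], []
for _ in range(25):
    x1 = random.randint(2, 30)             # cases stocked
    x2 = random.uniform(10.0, 1400.0)      # distance in feet
    eps = random.gauss(0.0, 3.0)           # N(0, sigma^2) error, sigma = 3
    rows.append([1.0, x1, x2])
    ys.append(beta[0] + beta[1] * x1 + beta[2] * x2 + eps)

# Solve the 3x3 normal equations (X'X) b = X'y by Gauss-Jordan elimination.
XtX = [[sum(r[i] * r[j] for r in rows) for j in range(3)] for i in range(3)]
Xty = [sum(r[i] * y for r, y in zip(rows, ys)) for i in range(3)]
A = [XtX[i] + [Xty[i]] for i in range(3)]
for i in range(3):
    p = A[i][i]
    A[i] = [v / p for v in A[i]]
    for k in range(3):
        if k != i:
            A[k] = [vk - A[k][i] * vi for vk, vi in zip(A[k], A[i])]
b = [A[i][3] for i in range(3)]
print([round(v, 2) for v in b])   # should be near the true coefficients
```

In WinBUGS the same fit is specified declaratively (likelihood plus the priors above) and the machinery of MCMC does the rest; the point of the check is that vague priors should not move the answer far from OLS.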

We learned that Control-R, or right-clicking a selected region, runs it in R (at least on Windows).

Note that there is a WinBUGS listserv discussion list.

A package called “arm” allows you to call WinBUGS from R. You will also need WinBUGS itself; a patch must be installed, and you must install the immortality key.

To be continued…
