Comment: Last time we said that the acceptance rate is $1/c$ (for the smallest valid $c$). In the problem, $c = 1$ is optimal for $n = y = 5$, but it is not optimal for other data choices. We remarked again that the optimal proposal distribution is the target distribution. For this problem $y \mid p \sim \mathrm{Bin}(n, p)$, so that $c = \max_p f(y \mid p) = \binom{n}{y}\left(\tfrac{y}{n}\right)^{y}\left(1 - \tfrac{y}{n}\right)^{n-y}$.

For the prior N(0, .25) and $y = n = 5$, the acceptance rate is reasonable. If we choose $y = n = 50$ instead, the acceptance rate will go way down: the evidence is strong that the coin is not fair, the prior is not representative of the actual state of affairs, and the likelihood is concentrated far from the proposal. So while the method works in theory, in practice it is not efficient, and we need to think more carefully.
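As a check on this intuition, here is a small Python sketch. For simplicity it assumes a Uniform(0, 1) prior as the proposal (not the prior used in lecture) and takes $c$ to be the likelihood maximized at the MLE; it then compares the acceptance rate for $y = n = 5$ versus $y = n = 50$:

```python
import math
import numpy as np

def accept_rate(n, y, draws=200_000, seed=0):
    """Rejection-sample the binomial posterior using a Uniform(0, 1)
    prior as the proposal; c is the likelihood maximized at p = y/n."""
    rng = np.random.default_rng(seed)
    phat = y / n
    c = math.comb(n, y) * phat**y * (1 - phat)**(n - y)
    p = rng.uniform(size=draws)                       # proposals from the prior
    lik = math.comb(n, y) * p**y * (1 - p)**(n - y)   # likelihood at each proposal
    accept = rng.uniform(size=draws) < lik / c        # accept with prob f/(c*g)
    return accept.mean()

print(accept_rate(5, 5))    # roughly 1/6
print(accept_rate(50, 50))  # roughly 1/51 -- far lower
```

The acceptance rate collapses as the data grow because the likelihood concentrates while the proposal stays diffuse.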

One more comment about SIR (sampling importance resampling): it's easier to say than "weighted resampling."

Result: A sample from the posterior allows us to examine sensitivity to our assumptions, for example sensitivity to the prior. If we want to change from one prior to another, we can use SIR, using the ratio of the priors as weights.

Having a sample $\theta^{(1)}, \ldots, \theta^{(m)}$, we can obtain a sample from the posterior that results from using another prior, or from leaving some data out. Procedure: if $\theta^{(1)}, \ldots, \theta^{(m)}$ is a sample from the posterior $p_1(\theta \mid y) \propto \pi_1(\theta)\, p(y \mid \theta)$ and we wish to obtain a sample from $p_2(\theta \mid y) \propto \pi_2(\theta)\, p(y \mid \theta)$, we can resample the original sample using weights $w_i = \pi_2(\theta^{(i)}) / \pi_1(\theta^{(i)})$ and probabilities $w_i / \sum_{j=1}^{m} w_j$.

Leaving data out: the same idea, but now the weights are the ratio of the two likelihoods.
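To make the prior-swapping procedure concrete, here is a Python sketch on a toy conjugate problem (the binomial data and Beta priors are illustrative assumptions, not the lecture's numbers): we draw from the posterior under a Beta(1, 1) prior, then use SIR with prior-ratio weights to obtain a sample under a Beta(2, 2) prior, which can be checked against the exact Beta posterior.

```python
import numpy as np

rng = np.random.default_rng(0)
n, y = 10, 7   # made-up binomial data

# Sample from the posterior under prior 1 = Beta(1, 1): Beta(1 + y, 1 + n - y).
theta = rng.beta(1 + y, 1 + n - y, size=100_000)

# Weights: ratio of the new prior to the old prior at each draw.
# Beta(2, 2) density is proportional to theta*(1 - theta); Beta(1, 1) is constant.
w = theta * (1 - theta)
probs = w / w.sum()

# Resample the original sample with probability proportional to the weights.
resampled = rng.choice(theta, size=100_000, replace=True, p=probs)

# Exact posterior under prior 2 is Beta(2 + y, 2 + n - y), with mean 9/14.
print(resampled.mean())
```

The resampled mean should sit close to the exact posterior mean $9/14 \approx 0.643$ under the new prior.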

Onward. We talked about how, if the dimension of $\theta$ is large, these methods won't work, because it will be impossible to get an appropriate proposal density. This leads to MCMC (Markov chain Monte Carlo). [Note that although the acronym is MCMC, all caps, the second word is not capitalized. This is because 'Markov' and 'Monte Carlo' are proper names, whereas 'chain' is just a word.] We discussed some of the history of the methods, which were developed during the bomb projects in the 1950s.

Nicholas Metropolis invented a key idea in this regard, although it wasn't applied to image processing until the 1980s, and only in the late 1980s was it directed to statistical inference, in a seminal paper by Gelfand and Smith. Note also another paper by these two authors.

Jeff remarked that the name "Gibbs sampling" is very strange. It comes from sampling from a distribution invented (I think) by the Yale physicist J. Willard Gibbs, who had nothing to do with how this sampling scheme works (he flourished well before computers, having died in 1903).

Gibbs Sampling:

Outline:

- Gibbs sampling algorithm
- Discrete probability example
- Normal model example
- (Maybe) Hierarchical model

Gibbs Sampling Algorithm: Let $\theta = (\theta_1, \ldots, \theta_d)$ be the parameter vector of interest, where $\theta_j$ is a subvector of $\theta$. It is simplest if the $\theta_j$'s are all scalars, but actually they can be anything (e.g., vectors, matrices, …).

Suppose we know $p(\theta \mid y)$ and can reasonably easily sample from the full conditional densities $p(\theta_j \mid \theta_{-j}, y)$, where $\theta_{-j}$ is the vector $\theta$ excluding the $j$th component.

For example, $p(\theta_1 \mid \theta_2, \ldots, \theta_d, y)$ and $p(\theta_2 \mid \theta_1, \theta_3, \ldots, \theta_d, y)$.

- Let $\theta^{(0)} = (\theta_1^{(0)}, \ldots, \theta_d^{(0)})$ be a vector of starting values.
- Sequentially generate

$\theta_1^{(1)} \sim p(\theta_1 \mid \theta_2^{(0)}, \ldots, \theta_d^{(0)}, y)$,

$\theta_2^{(1)} \sim p(\theta_2 \mid \theta_1^{(1)}, \theta_3^{(0)}, \ldots, \theta_d^{(0)}, y)$,

…

$\theta_d^{(1)} \sim p(\theta_d \mid \theta_1^{(1)}, \ldots, \theta_{d-1}^{(1)}, y)$.

- Iterate step 2 to obtain a sequence of vectors $\theta^{(1)}, \theta^{(2)}, \ldots, \theta^{(m)}$.
- Discard the first $k$ iterations (burn-in) and use the remainder as a sample from $p(\theta \mid y)$.
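As a quick illustration of the steps above, here is a Python sketch for a target where the full conditionals are known in closed form: a bivariate normal with correlation $\rho$ (an assumed example, not one from the lecture), for which $x \mid y \sim N(\rho y,\, 1 - \rho^2)$ and symmetrically for $y \mid x$.

```python
import numpy as np

rng = np.random.default_rng(1)
rho = 0.8
m, burn_in = 20_000, 1_000

x, y = 0.0, 0.0            # starting values theta^(0)
draws = np.empty((m, 2))
for t in range(m):
    # Crucially, the freshly drawn x is used immediately when updating y.
    x = rng.normal(rho * y, np.sqrt(1 - rho**2))
    y = rng.normal(rho * x, np.sqrt(1 - rho**2))
    draws[t] = (x, y)

draws = draws[burn_in:]    # discard burn-in
print(np.corrcoef(draws.T)[0, 1])   # should be near rho = 0.8
```

Each sweep updates one coordinate from its full conditional given the most recent values of the others, exactly as in the algorithm above.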

See this paper by Casella and George for a good introduction to the Gibbs sampling algorithm.

Specifying all the full conditionals gives a lot of information, which allows us to overcome the curse of dimensionality. Note that replacing $\theta_j^{(t)}$ by $\theta_j^{(t+1)}$ as soon as it is generated is essential; otherwise this is not a Markov chain. In a Markov chain, the next state depends only on the immediately preceding state. So if we transition from a state depending on $\theta^{(t-1)}$ to a state depending on $\theta^{(t)}$, then the next transition must depend on $\theta^{(t)}$; otherwise we are no longer dealing with a Markov chain.

Example: Suppose $(X, Y)$ is discrete, where $X \in \{1, 2, 3\}$ and $Y \in \{1, 2, 3\}$. Suppose the conditionals $f_{X \mid Y}(x \mid y)$ are given by one table, and the conditionals $f_{Y \mid X}(y \mid x)$ by a second table.

Note that we can compute the marginal distributions in terms of the conditionals (which we know) and the marginals of the other variable (which we don't know…yet!). So, for example,

$f_X(x) = \sum_{y} f_{X \mid Y}(x \mid y)\, f_Y(y)$.

Also, $f_Y(1) = \sum_{x} f_{Y \mid X}(1 \mid x)\, f_X(x)$, and similarly for $f_Y(2)$ and $f_Y(3)$.

Plug the second set of three equations into the first equation to get an expression for $f_X(1)$ in terms of $f_X(1)$, $f_X(2)$, and $f_X(3)$. We also have $f_X(1) + f_X(2) + f_X(3) = 1$. Let $a = f_X(1)$ and $b = f_X(2)$, so $f_X(3) = 1 - a - b$. Plug these in (and do the same for $f_X(2)$); we get two linear equations in the two unknowns $a$ and $b$, which we can solve.

From these two we can get the remaining marginal $f_X(3)$, and then $f_Y(1)$, $f_Y(2)$, and $f_Y(3)$, by plugging the $f_X$'s into the three other equations above. Once we have the marginals, we need only multiply by the conditionals to recover the joint distribution. So we have shown that one can go from the full conditionals to the marginals and then to the full joint distribution. In general, a similar argument works for any discrete distribution; it's more complicated to prove this in the continuous case. The result is the exact joint distribution for this example.
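The algebra above can be mirrored numerically. In this Python sketch the $3 \times 3$ conditional tables are made up (the lecture's actual tables are not reproduced here); we substitute one set of conditionals into the other to get the system $f_X = A f_X$ with $A_{x,x'} = \sum_y f(x \mid y)\, f(y \mid x')$, solve it together with the sum-to-one constraint, and rebuild the joint.

```python
import numpy as np

# Made-up joint distribution used only to generate conditional tables;
# rows index x = 1..3, columns index y = 1..3.
joint = np.array([[0.10, 0.05, 0.05],
                  [0.10, 0.20, 0.10],
                  [0.05, 0.15, 0.20]])

fx_given_y = joint / joint.sum(axis=0)                 # f(x | y), column-normalized
fy_given_x = joint / joint.sum(axis=1, keepdims=True)  # f(y | x), row-normalized

# Substituting f_Y(y) = sum_x' f(y|x') f_X(x') into
# f_X(x) = sum_y f(x|y) f_Y(y) gives f_X = A f_X.
A = fx_given_y @ fy_given_x.T

# Solve (I - A) f_X = 0 together with sum(f_X) = 1 by least squares.
M = np.vstack([np.eye(3) - A, np.ones((1, 3))])
b = np.array([0.0, 0.0, 0.0, 1.0])
fx, *_ = np.linalg.lstsq(M, b, rcond=None)

# Multiply the marginal back through the conditionals to recover the joint.
joint_recovered = fy_given_x * fx[:, None]
print(fx)                 # matches joint.sum(axis=1)
print(joint_recovered)    # matches the original joint
```

This is exactly the conditionals-to-marginals-to-joint path described in the text, just written as one linear solve.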

We ran the Gibbs sampler for this example in R, with 1,000 and with 10,000 samples; 10,000 was better (closer to the exact value).
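A Python version of the same kind of experiment can be sketched as follows, again with made-up conditional tables (the lecture's tables are not reproduced here); the empirical marginal of $X$ from the Gibbs draws is compared to the exact marginal computed from the joint.

```python
import numpy as np

rng = np.random.default_rng(2)

# Made-up joint distribution; rows index x = 1..3, columns index y = 1..3.
joint = np.array([[0.10, 0.05, 0.05],
                  [0.10, 0.20, 0.10],
                  [0.05, 0.15, 0.20]])
fx_exact = joint.sum(axis=1)                           # exact marginal of X

fx_given_y = joint / joint.sum(axis=0)                 # f(x | y), column-normalized
fy_given_x = joint / joint.sum(axis=1, keepdims=True)  # f(y | x), row-normalized

m, burn_in = 50_000, 1_000
x, y = 0, 0                          # starting state (0-based indices)
xs = np.empty(m, dtype=int)
for t in range(m):
    x = rng.choice(3, p=fx_given_y[:, y])   # draw x from f(x | current y)
    y = rng.choice(3, p=fy_given_x[x, :])   # draw y from f(y | new x)
    xs[t] = x

fx_hat = np.bincount(xs[burn_in:], minlength=3) / (m - burn_in)
print(fx_hat, fx_exact)   # empirical vs. exact marginal of X
```

As in the R runs, the empirical marginal gets closer to the exact one as the number of samples grows.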
