We finished looking at fitting straight lines using centered variables. I showed that although the centered variables are uncorrelated, they are not independent. We then discussed several generalizations of linear regression:

- the heteroskedastic case, where the variances of different observations differ;
- errors in x instead of y, which we solved by introducing latent variables and eliminating them explicitly;
- the errors-in-variables case, where both x and y have errors. This is again solved by introducing latent variables, but this time we sample over the latent variables, since we can't use the trick of eliminating them explicitly.

We ran some code to do this; I'll be posting it later. I mentioned that you have to do something special to keep the parameters from being unidentified (and thus having a posterior distribution that can't be normalized). I gave an example for which the marginal distributions exist and are perfectly good, whereas there is no joint distribution.
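As a minimal sketch of the errors-in-variables approach, the following samples the intercept, slope, and the latent "true" x values jointly with a plain random-walk Metropolis sampler. This is not the code from class; the data, the known measurement standard deviations `sx` and `sy`, the flat priors on the slope and intercept, and the step size are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated data: both x and y are observed with noise (assumed setup).
n = 20
true_x = rng.uniform(0.0, 10.0, n)
a_true, b_true = 1.0, 2.0
sx, sy = 0.5, 0.5                      # measurement s.d.'s, assumed known
x_obs = true_x + rng.normal(0.0, sx, n)
y_obs = a_true + b_true * true_x + rng.normal(0.0, sy, n)

def log_post(theta):
    """Log posterior for (a, b, latent x's), with flat priors on a and b."""
    a, b, xs = theta[0], theta[1], theta[2:]
    ll_x = -0.5 * np.sum((x_obs - xs) ** 2) / sx**2      # x measurement model
    ll_y = -0.5 * np.sum((y_obs - a - b * xs) ** 2) / sy**2  # regression model
    return ll_x + ll_y

# Random-walk Metropolis over the full (2 + n)-dimensional state.
theta = np.concatenate(([0.0, 1.0], x_obs))   # start latent x's at x_obs
step = 0.05
chain = np.empty((20000, theta.size))
accepted = 0
lp = log_post(theta)
for i in range(chain.shape[0]):
    prop = theta + rng.normal(0.0, step, theta.size)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        theta, lp = prop, lp_prop
        accepted += 1
    chain[i] = theta

burn = chain[5000:]                            # discard burn-in
print("acceptance rate:", accepted / chain.shape[0])
print("posterior means: a =", burn[:, 0].mean(), " b =", burn[:, 1].mean())
```

Because the latent x's cannot be integrated out in closed form here, they simply become extra coordinates of the chain; each draw of (a, b) is then automatically averaged over the uncertainty in the true x values.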

We finished up by looking at a case where two variables are highly correlated, and noting that sampling in the simplistic way we have been doing doesn't mix very well. For some reason there were several errors in the programs on the charts, but with help from you all we were able to fix the code and get it to run.
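The poor mixing can be seen in a small experiment (mine, not the class code): run the same random-walk Metropolis sampler on an uncorrelated bivariate normal and on one with correlation 0.99, and compare the lag-1 autocorrelation of the chain. The target distributions, step size, and iteration count are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def metropolis(cov, n_iter=20000, step=0.3):
    """Random-walk Metropolis targeting a zero-mean bivariate normal."""
    prec = np.linalg.inv(cov)
    logp = lambda z: -0.5 * z @ prec @ z
    z = np.zeros(2)
    lp = logp(z)
    out = np.empty((n_iter, 2))
    for i in range(n_iter):
        prop = z + rng.normal(0.0, step, 2)
        lp_prop = logp(prop)
        if np.log(rng.uniform()) < lp_prop - lp:
            z, lp = prop, lp_prop
        out[i] = z
    return out

def lag1_autocorr(x):
    x = x - x.mean()
    return (x[:-1] @ x[1:]) / (x @ x)

easy = metropolis(np.array([[1.0, 0.0], [0.0, 1.0]]))
hard = metropolis(np.array([[1.0, 0.99], [0.99, 1.0]]))
print("lag-1 autocorrelation, uncorrelated target:", lag1_autocorr(easy[:, 0]))
print("lag-1 autocorrelation, rho = 0.99 target:  ", lag1_autocorr(hard[:, 0]))
```

With strong correlation the posterior is a narrow diagonal ridge, so a spherical proposal either steps off the ridge (and is rejected) or barely moves along it; the chain's autocorrelation is much higher, meaning far fewer effectively independent draws per iteration.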


This entry was posted on October 19, 2012 at 12:29 am and is filed under STAT 330.
