Thanks a lot for the class,

If you have more charts, code, or other useful material, feel free to upload them even if we didn’t get to them in class!

regards

http://informatics.indiana.edu/rocha/blackbox/BlackBox.html

Ahmed

I think Mark asked during the class whether Hierarchical Models are Bayesian Networks. There is certainly an overlap. Here is a tutorial that I have referred to a few times in the past: http://research.microsoft.com/pubs/69588/tr-95-06.pdf

If anyone is interested in discussing this with me, please feel free to approach me. Bayesian Networks are a cornerstone of research in this area, and I am personally interested in them.

Have a great weekend,

-Ahmed

http://www.thedailyshow.com/watch/wed-october-17-2012/exclusive—nate-silver-extended-interview-pt–1

Thank you, Ahmed. This looks very interesting and I am glad you found it. There are many methods illustrated here, and we don’t have time to explore most of them, so I hope that using this applet will help all of you understand the many approaches to MCMC that are available, and choose the one that best solves your problem.

I found an applet that simulates the Metropolis-Hastings (MH) sampling algorithm. It is very interesting, and I think it will help us visualize things from a more interactive point of view than static pictures alone. You will need the Java plugin installed in your browser to see it. I hope you find it useful:

http://www.lbreyer.com/classic.html
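For anyone who wants to peek under the hood of what the applet animates, a random-walk Metropolis-Hastings step is only a few lines. This is a minimal sketch of my own (a toy example, not code from the applet), sampling from a standard normal target whose log-density is known only up to a constant:

```python
import math
import random

def metropolis_hastings(log_target, x0, n_samples, step=1.0, seed=0):
    """Random-walk MH: propose x' = x + Normal(0, step); accept with
    probability min(1, target(x') / target(x))."""
    rng = random.Random(seed)
    x = x0
    log_p = log_target(x)
    samples = []
    for _ in range(n_samples):
        proposal = x + rng.gauss(0.0, step)
        log_p_prop = log_target(proposal)
        # Symmetric proposal, so no Hastings correction term is needed.
        if math.log(rng.random()) < log_p_prop - log_p:
            x, log_p = proposal, log_p_prop
        samples.append(x)
    return samples

# Target: standard normal, log-density up to an additive constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
kept = samples[2000:]  # discard burn-in
mean = sum(kept) / len(kept)
```

The chain’s sample mean and variance should settle near 0 and 1, which is exactly the kind of convergence behavior the applet lets you watch happen live.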

-Ahmed

One thing I noticed while watching both parts is that they had similarities; for a fraction of a second my brain was tricked and couldn’t tell whether I was watching space or the man’s hand.

I was actually wondering today, as you were discussing the light-year example, what you would see if you could see that far back in time. I need to watch this video a few more times, and perhaps I will notice more things. For now, I am good with one viewing and my head spinning 🙂

Thanks again for sharing!

-Ahmed

The point is that if you observe ONLY low-numbered taxis, over and over, then each time you notice a taxi you will become more and more convinced that there are only low-numbered taxis in the town. You do have to treat your observations as independent (that is, seeing a taxi, glancing to the left, then glancing back and seeing the same taxi again does not count as a separate observation).

Here’s another example, from the first day of class: I had a two-headed coin. Suppose I toss the coin 100 times, and every time it comes up heads. Sure, it might be a fair coin, but every time you toss it and it comes up heads again, you will believe more and more that the coin has two heads (or that I am cheating in some other way).
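To put a number on “believe more and more,” here is a back-of-the-envelope calculation (the 1% prior on a trick coin is my own assumption, purely for illustration) of the posterior probability that the coin is two-headed after n heads in a row:

```python
def p_two_headed(n_heads, prior=0.01):
    """Posterior P(two-headed | n heads in a row), with a small prior
    (assumed here: 1%) that the coin is a trick coin."""
    like_trick = 1.0            # a two-headed coin always shows heads
    like_fair = 0.5 ** n_heads  # a fair coin shows n heads with prob 2^-n
    numerator = prior * like_trick
    return numerator / (numerator + (1.0 - prior) * like_fair)

print(p_two_headed(5))    # ~0.24: growing suspicion
print(p_two_headed(100))  # ~1.0: near certainty
```

Even starting from a 1% prior, the fair-coin likelihood shrinks by a factor of 2 with every head, so after 100 heads the trick-coin hypothesis dominates overwhelmingly.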

So, it is reflected in the likelihood by the fact that each observation of the same taxi (or coin toss) multiplies in an additional factor, in the case of taxis a factor of 1/N, again assuming that the observation events are independent. So if you observe taxi #3, for example, 1000 times, that contributes a factor of (1/N)^1000 to the likelihood, which, as a function of N, peaks sharply at N = 3.

If you do sample with replacement and you use 1/N^3 as the likelihood, I don’t see how seeing, for example, car #3 1,000 times over and over will affect the likelihood. I understand that I’m more sure that there are 3 cars, for example, but how is that reflected in the likelihood? Is it reflected in the likelihood at all?