Model selection is a difficult process, particularly in high-dimensional settings, with dependent observations, or in sparse data regimes. In this post, I will discuss a common misconception about selecting models based on values of the objective function generated by optimization algorithms in sparse data settings. TL;DR: don't do it.
Do you really believe your variance parameter can be anywhere from zero to infinity?
In the past, I've often not included priors in my models. I felt daunted by having to pick sensible priors for my parameters, and I usually fell into the common trap of thinking that no priors, or uniform priors, are somehow the most objective choice because they "let the data do all the talking." Recent experiences have completely changed my thinking on this, though.
A friend asked me about how he should update his beliefs about correlation after seeing some data. In his words:
If I have two variables and I want to express that my prior is that the correlation could be anything between -1 and +1 how would I update this prior based on the observed correlation?
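One concrete way to answer this is a simple grid approximation: put a flat prior on the correlation over (-1, 1), compute the bivariate-normal likelihood of the data at each candidate value, and normalize the product. The sketch below assumes standardized data (zero means, unit variances) so the correlation is the only unknown; the simulated data and the true correlation of 0.6 are hypothetical, chosen just for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate standardized data with a (hypothetical) true correlation of 0.6.
n, true_rho = 50, 0.6
cov = np.array([[1.0, true_rho], [true_rho, 1.0]])
x, y = rng.multivariate_normal([0.0, 0.0], cov, size=n).T

# Grid of candidate correlations; flat prior on (-1, 1).
rho = np.linspace(-0.99, 0.99, 399)
log_prior = np.zeros_like(rho)  # uniform prior: constant, drops out after normalizing

# Bivariate-normal log-likelihood as a function of rho
# (zero means and unit variances assumed, so only rho is unknown).
sxx, syy, sxy = np.sum(x * x), np.sum(y * y), np.sum(x * y)
log_lik = (-0.5 * n * np.log(1.0 - rho**2)
           - (sxx - 2.0 * rho * sxy + syy) / (2.0 * (1.0 - rho**2)))

# Posterior ∝ prior × likelihood, normalized numerically on the grid.
log_post = log_prior + log_lik
post = np.exp(log_post - log_post.max())
post /= np.trapz(post, rho)

posterior_mean = np.trapz(rho * post, rho)
print(f"posterior mean correlation ≈ {posterior_mean:.2f}")
```

Even with a completely flat prior, the posterior concentrates around the sample correlation as data accumulate; the point of the exercise is that the flat prior is itself a modeling choice, not the absence of one.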