28 Apr 2017 - Rajesh Ranganath - Implicit Models and Posterior Approximations.

Just a reminder that this Friday (4/28) at 11:00am in the Z6 fishbowl, we'll have a guest, Rajesh Ranganath, presenting on "Implicit Models and Posterior Approximations" (abstract below). For those who are unfamiliar, Rajesh has done a lot of interesting work on probabilistic generative modeling and variational inference, so if you're interested in machine learning, definitely plan to attend; it should be a great talk.

Implicit Models and Posterior Approximations.

Probabilistic generative models tell stories about how data were generated. These stories uncover hidden patterns (latent states) and form the basis for predictions. Traditionally, probabilistic generative models provide a score for generated samples via a tractable likelihood function. This scoring requirement limits the flexibility of these models. For example, in many physical models we can generate samples but not compute their likelihood; such models, defined only by their sampling process, are called implicit models. In the first part of the talk I will present a family of implicit models that combine hierarchical Bayesian models with deep models.

The main computational task in working with probabilistic generative models is computing the distribution of the latent states given data: posterior inference. Posterior inference cast as optimization over an approximating family is variational inference. The accuracy of variational inference hinges on the expressivity of the approximating family. In the second part of this talk, I will present multiple types of implicit variational approximations for both traditional and implicit models. Along the way, we'll explore models for text and regression.
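To make the two ideas in the abstract concrete, here is a minimal, hypothetical sketch (not from the talk): an implicit model we can only sample from, because its generator is a black-box transformation with no tractable density, and a Monte Carlo ELBO estimate for a simple explicit Gaussian model, illustrating posterior inference cast as optimization over an approximating family. All names and the toy models are illustrative assumptions.

```python
import math
import random

# Implicit model (hypothetical): we can draw samples, but the marginal
# density of x is intractable because the transformation stands in for a
# black-box physical simulator.
def sample_implicit(n, seed=0):
    rng = random.Random(seed)
    xs = []
    for _ in range(n):
        z = rng.gauss(0.0, 1.0)                    # latent state
        x = math.tanh(3.0 * z) + 0.1 * rng.gauss(0.0, 1.0)  # simulator output
        xs.append(x)
    return xs

# Variational inference sketch for an explicit model:
#   p(z) = N(0, 1),  p(x | z) = N(z, 1),  q(z) = N(mu, sigma^2).
# The ELBO, E_q[log p(x, z) - log q(z)], is estimated by Monte Carlo with
# the reparameterization z = mu + sigma * eps; maximizing it over
# (mu, log_sigma) is variational inference.
def elbo_estimate(x, mu, log_sigma, n_mc=2000, seed=0):
    rng = random.Random(seed)
    sigma = math.exp(log_sigma)
    total = 0.0
    for _ in range(n_mc):
        eps = rng.gauss(0.0, 1.0)
        z = mu + sigma * eps                       # reparameterized sample
        log_p = -0.5 * z * z - 0.5 * (x - z) ** 2  # log p(z) + log p(x|z), up to constants
        log_q = -0.5 * eps * eps - log_sigma       # log q(z), up to the same constants
        total += log_p - log_q
    return total / n_mc

samples = sample_implicit(1000)
```

For this toy model the exact posterior is N(x/2, 1/2), so the ELBO estimate is highest when q matches it; for the implicit sampler no analogous likelihood-based comparison is available, which is the gap the talk's implicit variational approximations address.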