Schedule for December 2016

December 2: Theo Karaletsos, Geometric Intelligence

'Adversarial Message Passing For Graphical Models'.

A currently popular technique for learning generative models is the generative adversarial network (GAN). GANs learn a generative model by training a discriminator to distinguish true samples from model samples, thereby guiding the model towards solutions whose samples fool a strong discriminator into assigning them a high probability of being real. It has been shown that GAN training minimizes a well-defined f-divergence, the Jensen-Shannon divergence, between the model distribution and the data distribution.
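For reference, this divergence claim can be made precise; the following derivation is the standard one from Goodfellow et al. (2014), not anything specific to the talk. The GAN objective is

\min_G \max_D \; V(G, D), \qquad
V(G, D) = \mathbb{E}_{x \sim p_{\mathrm{data}}}[\log D(x)] + \mathbb{E}_{x \sim p_G}[\log(1 - D(x))].

For a fixed generator G, the optimal discriminator is

D^*(x) = \frac{p_{\mathrm{data}}(x)}{p_{\mathrm{data}}(x) + p_G(x)},

and substituting it back into the objective yields

V(G, D^*) = 2\,\mathrm{JS}(p_{\mathrm{data}} \,\|\, p_G) - \log 4,

so minimizing over G minimizes the Jensen-Shannon divergence between data and model.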

However, current best practices have a number of shortcomings.

Typically, GANs are treated as standalone models and are not understood in the context of inference. In addition, current techniques rely on a single global discriminator over joint distributions to drive learning, which can be ineffective for structured models.

We propose to alleviate these limitations by showing how to relate adversarial learning to distributed approximate Bayesian inference on factor graphs. We propose local learning rules based on message passing that minimize a global variational criterion, with adversaries used to score ratios of distributions in place of explicit likelihood evaluations.
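One standard way to make "adversaries scoring ratios" concrete is the density-ratio trick from classification; this is a general fact, not necessarily the exact criterion used in the talk. A discriminator D trained with the logistic loss to separate samples of p (label 1) from samples of q (label 0) has optimum

D^*(x) = \frac{p(x)}{p(x) + q(x)}, \qquad \operatorname{logit} D^*(x) = \log \frac{p(x)}{q(x)},

so a log-ratio term such as

\mathrm{KL}(q \,\|\, p) = \mathbb{E}_{x \sim q}\!\left[\log \frac{q(x)}{p(x)}\right] = -\,\mathbb{E}_{x \sim q}\!\left[\operatorname{logit} D^*(x)\right]

can be estimated from samples alone, without evaluating either density.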

This yields an inference and learning framework that combines ideas from message passing with adversarial inference and facilitates treating model specification and inference separately. It can be applied to arbitrary computational structures within the family of directed acyclic graphs, including models with intractable likelihoods, non-differentiable models, and generally cumbersome models.

We thus present adversarial learning from the viewpoint of approximate inference and modeling. We combine adversarial learning with nonparametric variational families to yield a learning framework that performs implicit Bayesian inference on graph structures by sampling particles, without the need to evaluate densities.
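To illustrate what density-free, particle-based inference of this kind can look like, here is a minimal PyTorch sketch on a toy two-variable model. The model p(x, z) and the approximate posterior q(z | x) are only ever sampled, never evaluated; a joint discriminator's logit stands in for the intractable log density ratio. All names (sample_model, Sampler) and architectural choices are illustrative assumptions, not the talk's method.

# Minimal sketch (an assumption-laden illustration, not the talk's algorithm):
# implicit inference where p(x, z) and q(z | x) are only sampled, and a
# discriminator logit replaces the intractable log density ratio.
import torch
import torch.nn as nn

dim_x, dim_z = 2, 2

def sample_model(n):
    # Forward-sample (x, z) from a toy joint p(z) p(x | z); densities never used.
    z = torch.randn(n, dim_z)
    x = z @ torch.tensor([[1.0, 0.5], [0.0, 1.0]]) + 0.1 * torch.randn(n, dim_x)
    return x, z

class Sampler(nn.Module):
    # Implicit q(z | x): maps (x, noise) to a posterior particle z.
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim_x + dim_z, 64), nn.ReLU(),
                                 nn.Linear(64, dim_z))

    def forward(self, x):
        eps = torch.randn(x.shape[0], dim_z)
        return self.net(torch.cat([x, eps], dim=1))

q = Sampler()
disc = nn.Sequential(nn.Linear(dim_x + dim_z, 64), nn.ReLU(), nn.Linear(64, 1))
opt_q = torch.optim.Adam(q.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    x, z_p = sample_model(128)   # pairs from the model joint p(x, z)
    z_q = q(x)                   # particles from the implicit posterior q(z | x)
    # Discriminator: model pairs get label 1, inference pairs label 0, so its
    # optimal logit approximates log p(z | x) - log q(z | x).
    d_loss = (bce(disc(torch.cat([x, z_p], dim=1)), torch.ones(128, 1)) +
              bce(disc(torch.cat([x, z_q.detach()], dim=1)), torch.zeros(128, 1)))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Sampler: maximizing the logit on its own particles pushes q(z | x)
    # toward the posterior p(z | x) without evaluating any density.
    q_loss = -disc(torch.cat([x, q(x)], dim=1)).mean()
    opt_q.zero_grad(); q_loss.backward(); opt_q.step()

The message-passing variant described in the abstract would replace this single joint discriminator with local discriminators attached to the factors of the graph; the sketch above shows only the global, density-free case.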

These approaches promise to be useful additions to the toolbox of probabilistic modelers and have the potential to enrich the range of flexible probabilistic programming applications beyond current practice.

To be presented at the NIPS 2016 Workshop on Advances in Approximate Bayesian Inference.