
Posterior Collapse in Deep Generative Models

Description

Generative models are powerful machine learning models for extracting information from high-dimensional data, but they sometimes suffer from a problem called "posterior collapse", which prevents them from learning representations of practical value. I am going to show why and when it happens, and how to deal with it.

Why

Deep generative models such as Variational AutoEncoders (VAEs) and Generative Adversarial Networks (GANs) have proved very successful in real-world applications of machine learning, including natural image modelling, data compression, audio synthesis and many more. Unfortunately, models belonging to the VAE family may, under some conditions, suffer from an undesired phenomenon called "posterior collapse", which causes them to learn poor data representations. The talk's purpose is to present this problem and its practical implications.
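
For concreteness, the standard VAE training objective (the evidence lower bound, a textbook formulation rather than material quoted from the talk) is

    \mathcal{L}(\theta, \phi; x) = \mathbb{E}_{q_\phi(z \mid x)}\big[\log p_\theta(x \mid z)\big] - \mathrm{KL}\big(q_\phi(z \mid x) \,\|\, p(z)\big)

Posterior collapse refers to the degenerate solution where q_\phi(z \mid x) \approx p(z) for (almost) all inputs x: the KL term is driven to zero and the latent code z carries essentially no information about x.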

What

The presentation will comprise the following elements:

  • A short introduction to the basic Variational AutoEncoder model
  • An introduction to the "posterior collapse" problem
  • How posterior collapse affects learning from data, with examples on natural images
  • An overview of research on dealing with posterior collapse (one common mitigation is sketched after this list)
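
As a taste of the last point, below is a minimal sketch of one widely used mitigation, KL annealing: the weight of the KL term is warmed up from zero so the decoder cannot ignore the latent code early in training. It assumes PyTorch; the model, names and hyperparameters are illustrative, not taken from the talk.

    import torch
    import torch.nn as nn

    class TinyVAE(nn.Module):
        # Illustrative Gaussian-encoder VAE for 784-dimensional inputs.
        def __init__(self, x_dim=784, z_dim=16, h_dim=256):
            super().__init__()
            self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
            self.mu = nn.Linear(h_dim, z_dim)
            self.logvar = nn.Linear(h_dim, z_dim)
            self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                     nn.Linear(h_dim, x_dim))

        def forward(self, x):
            h = self.enc(x)
            mu, logvar = self.mu(h), self.logvar(h)
            z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization trick
            return self.dec(z), mu, logvar

    def loss_fn(x, x_hat, mu, logvar, beta):
        # Reconstruction term plus beta-weighted KL(q(z|x) || N(0, I)).
        rec = nn.functional.binary_cross_entropy_with_logits(x_hat, x, reduction="sum")
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return rec + beta * kl, kl

    model = TinyVAE()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    warmup_steps = 10_000  # illustrative value

    for step in range(100):  # stand-in for a real training loop over data batches
        x = torch.rand(32, 784)  # placeholder batch; real data would go here
        beta = min(1.0, step / warmup_steps)  # linear KL annealing schedule
        x_hat, mu, logvar = model(x)
        loss, kl = loss_fn(x, x_hat, mu, logvar, beta)
        opt.zero_grad()
        loss.backward()
        opt.step()
        # A KL term that stays near zero long after the warm-up is the usual
        # symptom of posterior collapse.

Monitoring the KL term per training step (or per latent dimension) is the simplest way to see whether the latent code is actually being used.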

Audience

Familiarity with generative modelling will be helpful for anyone attending the talk, but it is not required. In fact, everyone with a basic understanding of neural networks, representation learning and probability can take away something useful. The presentation won't be overloaded with mathematical formulas; I will do my best to present the math-related aspects in an intuitive form.
