Info

Time: Thursday 19 Jan 14:00-16:00

Location: Lab42 L3.36

Zoom id: 89718225062

Speaker(s): Jakub Tomczak, Emiel Hoogeboom, Yuki Asano, Efstratios Gavves, David Zhang.

Abstract

What does it mean to model accurately with generative models? Is it about building informative representations of real-world data? Do such models allow us to investigate questions and ideas about the world that we couldn't before?

Researchers in machine learning often focus on constructing domain-specific methods, using available prior knowledge about the problem to smartly reduce the hypothesis space of function approximators.

At the same time, recent developments such as DALL-E, Imagen, ChatGPT, diffusion models, and many others have become incredibly popular by leveraging scale to achieve impressive results. The best-performing diffusion models use very simple building blocks and seem to owe their performance especially to their ability to exploit the large computational resources available (for this and many other controversial opinions, come to the round table for an in-depth discussion).

When will this stop? What are the limits of scale? Should researchers focus their attention on better scaling algorithms or on more specific models based on prior knowledge? Will research soon reach a plateau? What role does accurate mathematical modelling play in better-performing methods?

All this and more will be covered in this first edition of our panel discussion format by an invited panel of influential researchers.

This initiative is supported by ELLIS.