On the geometry of latent-variable models
Wednesday 27 November 2019
Aud. D3 (1531-215)
Latent variable models (LVMs) estimate (often low-dimensional) representations of data, which can aid both analysis and interpretation. When the relation between the latent representation and the data is affine, we recover classic models such as PCA, while in the more general nonlinear case LVMs correspond to autoencoders and related models. In this talk, I will discuss the geometry of the recovered latent space and argue that it should be viewed as being equipped with a random Riemannian metric. I'll discuss the limitations of classic Riemannian geometry for coping with this scenario, and present both theoretical and practical tools for data analysis under random Riemannian metrics.
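As a minimal sketch of the geometric viewpoint in the abstract (not the speaker's own method), the decoder of an LVM pulls back a Riemannian metric G(z) = J(z)ᵀJ(z) onto the latent space, where J is the decoder's Jacobian. The decoders and weights below are purely illustrative; the Jacobian is approximated by central finite differences.

```python
import numpy as np

def pullback_metric(decoder, z, eps=1e-6):
    """Approximate the metric G(z) = J(z)^T J(z) pulled back onto the
    latent space by a decoder, via a finite-difference Jacobian."""
    z = np.asarray(z, dtype=float)
    # Columns of J are directional derivatives along latent axes.
    J = np.stack(
        [(decoder(z + eps * e) - decoder(z - eps * e)) / (2 * eps)
         for e in np.eye(z.size)],
        axis=1,
    )  # shape: (data_dim, latent_dim)
    return J.T @ J  # symmetric positive semi-definite

# Affine decoder (illustrative weights): the metric is the same at every z,
# so the latent geometry is flat -- the PCA-like case from the abstract.
W = np.array([[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]])
affine = lambda z: W @ z + 0.5
G_affine = pullback_metric(affine, np.zeros(2))  # equals W.T @ W

# Nonlinear decoder: the metric varies with z, so latent distances
# depend on where in latent space you measure them.
nonlin = lambda z: np.array([np.sin(z[0]), z[0] * z[1], z[1] ** 2])
G0 = pullback_metric(nonlin, np.array([0.3, -0.2]))
G1 = pullback_metric(nonlin, np.array([1.0, 1.0]))
```

In the talk's setting the decoder is itself uncertain (e.g. a stochastic network), which is what makes the resulting metric random rather than a fixed tensor field.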
Contact: Andrew du Plessis