Conference Week (NeurIPS 2020)

Event

OSI Lab Conference Week (NeurIPS 2020)

Short summary

In this seminar, titled Topics not covered by anyone… (kind of), I will talk about four NeurIPS 2020 papers (all of them theory-heavy) that I found interesting and whose topics were not covered by anyone else in OSI Lab.

Abstract

The first two papers are directly related to my current research topic of uncovering the inner workings of deep learning. The first paper, by Daneshmand et al., gives a rigorous proof characterizing the effect of batch normalization (BN) in deep linear networks: BN preserves the rank of the hidden representations, with a lower bound that depends on the width of the network but not on its depth.

The second paper, by Maennel et al., confirms theoretically and empirically that when a neural network is trained with random labels, its first layer learns the first principal component of the input data.
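To make the first result concrete, here is a minimal numpy sketch of the phenomenon (my own illustration, not the paper's construction; the width, depth, and rank tolerance are arbitrary choices). It compares the numerical rank of the last hidden representation of a randomly initialized deep linear network with and without BN:

```python
import numpy as np

rng = np.random.default_rng(0)
width, depth, batch = 32, 100, 64

def batch_norm(h):
    # Standardize each feature (row) across the batch; no learned scale/shift.
    return (h - h.mean(axis=1, keepdims=True)) / (h.std(axis=1, keepdims=True) + 1e-8)

def numerical_rank(h, rtol=1e-6):
    # Number of singular values above a relative tolerance.
    s = np.linalg.svd(h, compute_uv=False)
    return int((s > rtol * s[0]).sum())

x = rng.standard_normal((width, batch))
h_plain, h_bn = x, x
for _ in range(depth):
    w = rng.standard_normal((width, width)) / np.sqrt(width)  # random init
    h_plain = w @ h_plain            # vanilla deep linear network
    h_bn = batch_norm(w @ h_bn)      # same weights, BN after every layer

print("numerical rank without BN:", numerical_rank(h_plain))  # collapses as depth grows
print("numerical rank with BN:   ", numerical_rank(h_bn))     # stays on the order of the width
```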
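And a toy experiment in the spirit of the second paper (again my own illustrative setup; the architecture, the anisotropic input distribution, and all hyperparameters are assumptions, not the paper's): train the first layer of a small ReLU network on random labels and measure how the learned change in the weights aligns with the top principal component of the inputs.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, h, lr, steps = 512, 20, 64, 1e-3, 1000

# Strongly anisotropic inputs: the first coordinate dominates the covariance,
# so the top principal component is close to the first basis vector.
scales = np.array([10.0] + [1.0] * (d - 1))
X = rng.standard_normal((n, d)) * scales
y = rng.choice([-1.0, 1.0], size=n)            # random labels

W0 = rng.standard_normal((h, d)) / np.sqrt(d)  # first layer (trained)
a = rng.standard_normal(h) / np.sqrt(h)        # second layer (kept fixed)

W = W0.copy()
for _ in range(steps):
    pre = X @ W.T                               # pre-activations, shape (n, h)
    err = np.maximum(pre, 0.0) @ a - y          # residual of the squared loss
    # Gradient of the mean squared error with respect to W only.
    grad_W = ((err[:, None] * (pre > 0)) * a).T @ X / n
    W -= lr * grad_W

# Compare the dominant direction of the learned change W - W0
# with the top principal component of the (centered) inputs.
pc1 = np.linalg.svd(X - X.mean(0), full_matrices=False)[2][0]
w1 = np.linalg.svd(W - W0, full_matrices=False)[2][0]
print("|cosine| with top principal component:", abs(w1 @ pc1))
```

Under this strongly anisotropic setup the reported cosine is typically close to 1, though the exact value depends on the run.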

The third paper, by Rubin-Delanchy, gives a first theoretical characterization of the so-called manifold hypothesis by proving that a manifold structure, whose Hausdorff dimension is given explicitly, necessarily arises in the spectral embedding of a latent position model.
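For intuition, here is a small simulation (a toy setup of mine, not the paper's proof technique) using a random dot product graph, a special case of a latent position model: the latent positions lie on a one-dimensional curve, and the adjacency spectral embedding recovers that curve up to an orthogonal transformation.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 800

# 1-D latent variables pushed onto a curve in R^2, so the connection
# probabilities P_ij = <x_i, x_j> come from a 1-D manifold.
t = rng.uniform(0.0, 1.0, n)
X = np.stack([t, t**2], axis=1) / np.sqrt(2.0)   # scaling keeps P_ij in [0, 1]
P = X @ X.T

# Sample a symmetric adjacency matrix without self-loops.
U = rng.uniform(size=(n, n))
A = np.triu(U < P, k=1).astype(float)
A = A + A.T

# Adjacency spectral embedding into 2 dimensions (P is PSD, so the top
# eigenvalues of A are positive with high probability).
vals, vecs = np.linalg.eigh(A)
emb = vecs[:, -2:] * np.sqrt(np.maximum(vals[-2:], 0.0))

# The embedding should match the latent curve up to an orthogonal matrix:
# solve the orthogonal Procrustes problem and report the relative error.
Us, _, Vt = np.linalg.svd(emb.T @ X)
Q = Us @ Vt
print("relative error:", np.linalg.norm(emb @ Q - X) / np.linalg.norm(X))
```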

The fourth paper, by Nadjahi et al., gives a full theoretical characterization of the recently introduced sliced probability divergences. The topological characterizations cover metric and convergence properties, while the statistical characterizations cover sample complexity and projection complexity.
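Since the slicing construction itself is simple, here is a minimal Monte Carlo implementation of the sliced Wasserstein distance, the prototypical sliced divergence (the function name and the equal-sample-size restriction are simplifications of mine):

```python
import numpy as np

def sliced_wasserstein(x, y, n_proj=200, p=2, rng=None):
    # Monte Carlo estimate of the sliced p-Wasserstein distance between two
    # empirical distributions given as (n, d) sample arrays of equal size.
    if rng is None:
        rng = np.random.default_rng()
    d = x.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.standard_normal(d)
        theta /= np.linalg.norm(theta)           # uniform direction on the sphere
        # In 1-D, the p-Wasserstein distance between equal-size empirical
        # measures is the L^p distance between sorted samples.
        px, py = np.sort(x @ theta), np.sort(y @ theta)
        total += np.mean(np.abs(px - py) ** p)
    return (total / n_proj) ** (1.0 / p)

rng = np.random.default_rng(0)
x = rng.standard_normal((1000, 10))
y = rng.standard_normal((1000, 10)) + 0.5        # mean-shifted copy
print(sliced_wasserstein(x, y, rng=rng))
```

As I read the paper, the projection-complexity results quantify the Monte Carlo error of such an estimate in the number of projections, while the sample-complexity results concern the error in the number of samples, which for sliced divergences does not degrade with the ambient dimension.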

Papers

Papers discussed in the seminar:

  • Hadi Daneshmand, Jonas Kohler, Francis Bach, Thomas Hofmann, and Aurelien Lucchi. Batch Normalization Provably Avoids Rank Collapse for Randomly Initialized Deep Networks. In NeurIPS 2020.
  • Hartmut Maennel, Ibrahim Alabdulmohsin, Ilya Tolstikhin, Robert J. N. Baldock, Olivier Bousquet, Sylvain Gelly, and Daniel Keysers. What Do Neural Networks Learn When Trained With Random Labels? In NeurIPS 2020. (Spotlight paper!)
  • Patrick Rubin-Delanchy. Manifold structure in graph embeddings. In NeurIPS 2020. (Spotlight paper!)
  • Kimia Nadjahi, Alain Durmus, Lénaïc Chizat, Soheil Kolouri, Shahin Shahrampour, and Umut Şimşekli. Statistical and Topological Properties of Sliced Probability Divergences. In NeurIPS 2020. (Spotlight paper!)