Gradient Descent on Infinitely Wide Neural Networks

Event

Weekly DL Theory & Stat Phy Seminar

Short summary

In this seminar, I will discuss the paper “Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization” (Bach and Chizat, 2021) and the references therein.

Papers

Papers discussed in the seminar:

  • Main: Bach, Francis and Chizat, Lénaïc. Gradient Descent on Infinitely Wide Neural Networks: Global Convergence and Generalization. arXiv preprint, 2021.
  • Chizat, Lénaïc and Bach, Francis. On the Global Convergence of Gradient Descent for Over-parametrized Models using Optimal Transport. In NeurIPS, 2018.