Gradient Descent with Polyak's Momentum Finds Flatter Minima via Large Catapults

Abstract

Although gradient descent with Polyak's momentum is widely used in modern machine and deep learning, a concrete understanding of its effects on the training trajectory remains elusive. In this work, we show empirically that, for linear diagonal networks and nonlinear neural networks, gradient descent with Polyak's momentum and a large learning rate displays large catapults, driving the iterates towards much flatter minima than those found by gradient descent alone. We hypothesize that the large catapult is caused by momentum "prolonging" the self-stabilization effect (Damian et al., 2023). We provide theoretical and empirical support for this hypothesis in a simple toy example, as well as empirical evidence for linear diagonal networks.
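As an illustrative sketch (not the paper's experimental setup), the snippet below runs gradient descent with Polyak's (heavy-ball) momentum on a two-parameter toy loss L(u, v) = ½(uv − 1)², tracking the loss and the top Hessian eigenvalue as a sharpness proxy. The toy loss, hyperparameters, and initialization are hypothetical choices for illustration only; depending on the learning rate and momentum, the heavy-ball trace may show large transient loss spikes reminiscent of the catapults described above, and its final sharpness can be compared against a plain gradient-descent run (β = 0).

```python
import numpy as np

# Toy "diagonal linear network" in one dimension: fit the target 1 with the
# product u * v, so the loss is L(u, v) = 0.5 * (u*v - 1)^2.
def loss(u, v):
    return 0.5 * (u * v - 1.0) ** 2

def grad(u, v):
    r = u * v - 1.0
    return r * v, r * u

def sharpness(u, v):
    # Largest eigenvalue of the 2x2 Hessian of L, used as a flatness proxy.
    H = np.array([[v * v,           2 * u * v - 1.0],
                  [2 * u * v - 1.0, u * u          ]])
    return np.linalg.eigvalsh(H)[-1]

def heavy_ball(eta=0.1, beta=0.9, steps=500, u0=3.0, v0=0.05):
    """Gradient descent with Polyak's (heavy-ball) momentum:
        m_{t+1} = beta * m_t + grad L(w_t),   w_{t+1} = w_t - eta * m_{t+1}.
    Setting beta = 0 recovers plain gradient descent."""
    u, v = u0, v0
    mu, mv = 0.0, 0.0                        # momentum buffers
    trace = []
    for _ in range(steps):
        gu, gv = grad(u, v)
        mu, mv = beta * mu + gu, beta * mv + gv
        u, v = u - eta * mu, v - eta * mv
        if not np.isfinite(loss(u, v)):
            break                            # guard against divergence for extreme settings
        trace.append((loss(u, v), sharpness(u, v)))
    return u, v, trace

if __name__ == "__main__":
    _, _, gd = heavy_ball(beta=0.0)          # plain gradient descent baseline
    _, _, hb = heavy_ball(beta=0.9)          # heavy-ball momentum
    print("final sharpness (GD):        ", gd[-1][1])
    print("final sharpness (heavy ball):", hb[-1][1])
    print("max loss along GD trace:        ", max(l for l, _ in gd))
    print("max loss along heavy-ball trace:", max(l for l, _ in hb))
```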

Publication
In the ICML 2024 2nd Workshop on High-dimensional Learning Dynamics (HiLD), and the NeurIPS 2023 Workshop on Mathematics of Modern Machine Learning (M3L), where it was an oral presentation.

Previous title: Large Catapults in Momentum Gradient Descent with Warmup: An Empirical Study

Junghyun Lee
PhD Student

Junghyun Lee is a PhD student at GSAI, KAIST, jointly advised by Profs. Se-Young Yun and Chulhee Yun. His research focuses on interactive machine learning, particularly at the intersection of RLHF and preference learning, and on statistical analyses of large networks, with an emphasis on community detection. He is broadly interested in mathematical and theoretical AI and related problems in mathematics.