Junghyun Lee
Optimization
Conference Week (NeurIPS 2020)
Event: OSI Lab Conference Week (NeurIPS 2020). Short summary: In this seminar, titled "Topics not covered by anyone… (kind of)", I will talk about four NeurIPS 2020 papers (all of which are theory-heavy) that I found interesting and whose topics were not covered by anyone else in OSI Lab.
Jan 7, 2021
Heavy-tail behaviour of SGD - Part 2
Event: Weekly OSI Lab Seminar. Short summary: This seminar continues from Part 1, focusing on the implications of the heavy-tailed theories of SGD for the generalization capability of neural nets, and on the origin of the heavy-tailedness.
Nov 6, 2020
Heavy-tail behaviour of SGD - Part 1
Event: Weekly OSI Lab Seminar. Short summary: In this seminar, I will talk about a recent line of work that proposes to analyze SGD under heavy-tail noise assumptions. Abstract: One popular way of analyzing the behavior of SGD and SGDm (SGD with momentum) is to view them as discretizations of a Langevin-type SDE; a minimal sketch of this discretization follows this entry.
Aug 14, 2020
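To make the SDE connection concrete, here is a minimal sketch (my own toy illustration, not the seminar's code) of how a noisy SGD step mirrors an Euler-Maruyama step of the Langevin-type SDE dθ = -∇f(θ) dt + σ dB_t. The quadratic loss and the Gaussian noise scale are assumptions; the heavy-tail line of work replaces the Gaussian increment with an α-stable one.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_f(theta):
    # Gradient of a toy quadratic loss f(theta) = 0.5 * ||theta||^2.
    return theta

def sgd_step(theta, lr, noise_scale):
    # SGD with noisy (e.g., minibatch) gradients: the gradient noise
    # plays the role of the diffusion term of the SDE.
    noisy_grad = grad_f(theta) + noise_scale * rng.standard_normal(theta.shape)
    return theta - lr * noisy_grad

def euler_maruyama_step(theta, dt, sigma):
    # One Euler-Maruyama step of d(theta) = -grad f(theta) dt + sigma dB_t.
    # Heavy-tail analyses swap this Gaussian increment for an alpha-stable one.
    return theta - dt * grad_f(theta) + sigma * np.sqrt(dt) * rng.standard_normal(theta.shape)

# With dt = lr and sigma = noise_scale * sqrt(lr), the two updates have
# matching drift and matching per-step noise variance.
theta_sgd = theta_sde = np.ones(2)
for _ in range(100):
    theta_sgd = sgd_step(theta_sgd, lr=0.1, noise_scale=0.5)
    theta_sde = euler_maruyama_step(theta_sde, dt=0.1, sigma=0.5 * np.sqrt(0.1))
```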
Gradient Descent with Polyak's Momentum Finds Flatter Minima via Large Catapults
Although gradient descent with Polyak's momentum is widely used in modern machine and deep learning, a concrete understanding of … (A minimal sketch of the heavy-ball update follows this entry.)
Prin Phunyaphibarn, Junghyun Lee, Bohan Wang, Huishuai Zhang, Chulhee Yun
PDF · Cite · Project · Poster · Slides
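As background for the title, here is a minimal sketch of the heavy-ball (Polyak's momentum) update; the toy quadratic objective and the hyperparameter values are assumptions for illustration, not the paper's setup.

```python
import numpy as np

def heavy_ball(grad_f, theta0, lr=0.01, beta=0.9, n_steps=500):
    # Polyak's momentum (heavy-ball) update:
    #   theta_{t+1} = theta_t - lr * grad_f(theta_t) + beta * (theta_t - theta_{t-1})
    theta_prev = theta = np.asarray(theta0, dtype=float)
    for _ in range(n_steps):
        theta, theta_prev = (theta - lr * grad_f(theta) + beta * (theta - theta_prev), theta)
    return theta

# Toy quadratic f(theta) = 0.5 * theta^T A theta, minimized at the origin.
A = np.diag([1.0, 10.0])
theta_min = heavy_ball(lambda th: A @ th, theta0=[1.0, 1.0])
```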
Fair Streaming Principal Component Analysis: Statistical and Algorithmic Viewpoint
Proposes a framework for performing fair PCA in the memory-limited, streaming setting. Sample complexity results and empirical discussions show the superiority of our approach over existing approaches. (A sketch of generic streaming PCA, for context, follows this entry.)
Junghyun Lee, Hanseul Cho, Se-Young Yun, Chulhee Yun
PDF · Cite · Code · Project · Poster · Slides
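For context on the streaming setting, here is a minimal sketch of plain (fairness-unaware) streaming PCA via Oja's rule, which keeps only a dim × k basis in memory; this generic illustration is not the paper's algorithm.

```python
import numpy as np

def oja_streaming_pca(stream, dim, k, lr=0.01, seed=0):
    # Maintain a dim x k orthonormal basis Q and update it per sample,
    # using O(dim * k) memory regardless of the stream's length.
    rng = np.random.default_rng(seed)
    Q, _ = np.linalg.qr(rng.standard_normal((dim, k)))
    for x in stream:
        Q += lr * np.outer(x, x @ Q)  # push Q toward the top eigenvectors
        Q, _ = np.linalg.qr(Q)        # re-orthonormalize the basis
    return Q

# Toy stream whose dominant variance lies along the first two coordinates.
rng = np.random.default_rng(1)
scales = np.array([3.0, 2.0, 0.5, 0.5, 0.5])
stream = (scales * rng.standard_normal(5) for _ in range(2000))
Q = oja_streaming_pca(stream, dim=5, k=2)
```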
Fast and Efficient MMD-based Fair PCA via Optimization over Stiefel Manifold
Proposes a new MMD-based definition of fairness for PCA, then formulates fair PCA as an optimization problem over the Stiefel manifold. Various theoretical and empirical discussions show the superiority of our approach over the existing approach (Olfat & Aswani, AAAI'19). (A sketch of Stiefel-manifold optimization, for context, follows this entry.)
Junghyun Lee, Gwangsu Kim, Matt Olfat, Mark Hasegawa-Johnson, Chang D. Yoo
PDF · Cite · Code · Project · Poster · Slides
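To illustrate what "optimization over the Stiefel manifold" means, here is a minimal sketch of Riemannian gradient descent with a QR retraction on a generic objective; the toy objective and step size are assumptions, and this is not the paper's algorithm.

```python
import numpy as np

def stiefel_gradient_descent(grad_f, X0, lr=0.1, n_steps=200):
    # Minimize f over the Stiefel manifold {X : X^T X = I} via
    # Riemannian gradient descent with a QR retraction.
    X = X0
    for _ in range(n_steps):
        G = grad_f(X)
        # Project the Euclidean gradient onto the tangent space at X:
        # rgrad = G - X * sym(X^T G).
        rgrad = G - X @ (X.T @ G + G.T @ X) / 2
        # Step in the tangent direction, then retract back onto the manifold.
        Q, R = np.linalg.qr(X - lr * rgrad)
        X = Q * np.sign(np.diag(R))  # fix column signs for a canonical factor
    return X

# Toy objective: minimize -tr(X^T A X), i.e., find top-k eigenvectors of A.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 6))
A = M @ M.T
X0, _ = np.linalg.qr(rng.standard_normal((6, 2)))
X = stiefel_gradient_descent(lambda X: -2 * A @ X, X0)
```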
Deep Learning Theory - Optimization
Specific topics TBD.
Junghyun Lee
Fair Dimensionality Reduction
Part of fair representation learning. Develops a theory of fairness in dimensionality reduction: a new definition, a new (efficient) algorithm, and new theoretical results. Currently focused on PCA.
Junghyun Lee