Junghyun Lee
Publications
Probability-Flow ODE in Infinite-Dimensional Function Spaces
Recent advances in infinite-dimensional diffusion models have demonstrated their effectiveness and scalability in function generation …
Kunwoo Na, Junghyun Lee, Se-Young Yun, Sungbin Lim
PDF · Cite
FlickerFusion: Intra-trajectory Domain Generalizing Multi-Agent RL
Multi-agent reinforcement learning has demonstrated significant potential in addressing complex cooperative tasks across various …
Woosung Koh, Wonbeen Oh, Siyeol Kim, Suhin Shin, Hyeongjin Kim, Jaein Jang, Junghyun Lee, Se-Young Yun
PDF · Cite · Project
A Unified Confidence Sequence for Generalized Linear Models, with Applications to Bandits
We present a unified likelihood ratio-based confidence sequence (CS) for any (self-concordant) generalized linear model (GLM) that is …
Junghyun Lee, Se-Young Yun, Kwang-Sung Jun
PDF · Cite · Code · Project · Poster · Slides
Gradient Descent with Polyak's Momentum Finds Flatter Minima via Large Catapults
Although gradient descent with Polyak’s momentum is widely used in modern machine and deep learning, a concrete understanding of …
Prin Phunyaphibarn, Junghyun Lee, Bohan Wang, Huishuai Zhang, Chulhee Yun
PDF · Cite · Project · Poster · Slides
Querying Easily Flip-flopped Samples for Deep Active Learning
Proposes a new active learning approach built on a new uncertainty measure, the least disagree metric, together with an efficient estimator that is proven to be asymptotically consistent. Combined with seeding, this yields a new active learning algorithm, LDM-S, which is shown to outperform existing approaches across various architectures and datasets.
Seong Jin Cho, Gwangsu Kim, Junghyun Lee, Jinwoo Shin, Chang D. Yoo
PDF · Cite · Project
Improved Regret Bounds of (Multinomial) Logistic Bandits via Regret-to-Confidence-Set Conversion
Logistic bandit is a ubiquitous framework of modeling users’ choices, e.g., click vs. no click for advertisement recommender …
Junghyun Lee, Se-Young Yun, Kwang-Sung Jun
PDF · Cite · Code · Project · Poster · Slides
Fair Streaming Principal Component Analysis: Statistical and Algorithmic Viewpoint
Proposes a framework for performing fair PCA in a memory-limited, streaming setting. Sample complexity results and empirical evaluations show the superiority of our approach compared to existing approaches.
Junghyun Lee, Hanseul Cho, Se-Young Yun, Chulhee Yun
PDF · Cite · Code · Project · Poster · Slides
Flooding with Absorption: An Efficient Protocol for Heterogeneous Bandits over Complex Networks
A novel problem setting in which heterogeneous multi-agent bandits collaborate over a network to minimize their group regret. To deal with the high communication complexity of the classic flooding protocol combined with UCB, a new network protocol called Flooding with Absorption (FwA) is proposed. Theoretical and empirical analyses are provided for both flooding and FwA, demonstrating the efficacy of the proposed FwA.
Junghyun Lee, Laura Schmid, Se-Young Yun
PDF · Cite · Code · Project · Poster · Slides
Nearly Optimal Latent State Decoding in Block MDPs
First theoretical analysis of model estimation and reward-free RL in block MDPs, without resorting to function approximation frameworks. Lower bounds and algorithms with near-optimal upper bounds are provided.
Yassir Jedra, Junghyun Lee, Alexandre Proutière, Se-Young Yun
PDF · Cite · Code · Project · Poster · Slides
Fast and Efficient MMD-based Fair PCA via Optimization over Stiefel Manifold
Proposes a new MMD-based definition of fairness for PCA, then formulates fair PCA as an optimization problem over the Stiefel manifold. Theoretical and empirical analyses show the superiority of our approach compared to the existing approach (Olfat & Aswani, AAAI'19).
Junghyun Lee, Gwangsu Kim, Matt Olfat, Mark Hasegawa-Johnson, Chang D. Yoo
PDF · Cite · Code · Project · Poster · Slides