Junghyun Lee
GL-LowPopArt: A Nearly Instance-Wise Minimax-Optimal Estimator for Generalized Low-Rank Trace Regression
We present GL-LowPopArt, a novel Catoni-style estimator for generalized low-rank trace regression. Building on LowPopArt (Jang et al., …
Junghyun Lee
,
Kyoungseok Jang
,
Kwang-Sung Jun
,
Milan Vojnović
,
Se-Young Yun
PDF
Cite
Near-Optimal Clustering in Mixture of Markov Chains
We study the problem of clustering T trajectories of length H, each generated by one of K unknown ergodic Markov chains over a finite …
Junghyun Lee
,
Yassir Jedra
,
Alexandre Proutière
,
Se-Young Yun
PDF
Cite
TESSAR: Geometry-Aware Active Regression via Dynamic Voronoi Tessellation
Active learning improves training efficiency by selectively querying the most informative samples for labeling. While it naturally fits …
Seong Jin Cho
,
Gwangsu Kim
,
Junghyun Lee
,
Hee Suk Yoon
,
Joshua Tian Jin Tee
,
Chang D. Yoo
PDF
Cite
Regularized Online RLHF with Generalized Bilinear Preferences
We consider the problem of contextual online RLHF with general preferences, where the goal is to identify the Nash Equilibrium. We …
Junghyun Lee
,
Minju Hong
,
Kwang-Sung Jun
,
Chulhee Yun
,
Se-Young Yun
Cite
A Jointly Efficient and Optimal Algorithm for Heteroskedastic Generalized Linear Bandits with Adversarial Corruptions
We consider the problem of heteroskedastic generalized linear bandits (GLBs) with adversarial corruptions, which subsumes various …
Sanghwa Kim
,
Junghyun Lee
,
Se-Young Yun
PDF
Cite
A Unified Confidence Sequence for Generalized Linear Models, with Applications to Bandits
We present a unified likelihood ratio-based confidence sequence (CS) for any (self-concordant) generalized linear model (GLM) that is …
Junghyun Lee
,
Se-Young Yun
,
Kwang-Sung Jun
PDF
Cite
Code
Poster
Slides
Querying Easily Flip-flopped Samples for Deep Active Learning
Proposes a new active learning approach based on a new uncertainty measure, the least disagree metric, together with an efficient estimator that is proven to be asymptotically consistent. Combined with seeding, this yields a new active learning algorithm, LDM-S, which is shown to outperform existing approaches across various architectures and datasets.
Seong Jin Cho
,
Gwangsu Kim
,
Junghyun Lee
,
Jinwoo Shin
,
Chang D. Yoo
PDF
Cite
Improved Regret Bounds of (Multinomial) Logistic Bandits via Regret-to-Confidence-Set Conversion
Logistic bandit is a ubiquitous framework of modeling users’ choices, e.g., click vs. no click for advertisement recommender …
Junghyun Lee
,
Se-Young Yun
,
Kwang-Sung Jun
PDF
Cite
Code
Poster
Slides
Fair Streaming Principal Component Analysis: Statistical and Algorithmic Viewpoint
Proposes a framework for performing fair PCA in the memory-limited, streaming setting. Sample complexity results and empirical discussions show the superiority of our approach over existing ones.
Junghyun Lee
,
Hanseul Cho
,
Se-Young Yun
,
Chulhee Yun
PDF
Cite
Code
Poster
Slides
Nearly Optimal Latent State Decoding in Block MDPs
First theoretical analysis of model estimation and reward-free RL in block MDPs, without resorting to function approximation frameworks. Lower bounds and algorithms with near-optimal upper bounds are provided.
Yassir Jedra
,
Junghyun Lee
,
Alexandre Proutière
,
Se-Young Yun
PDF
Cite
Code
Poster
Slides