Improved Sample Complexity for Reward-free Reinforcement Learning under Low-rank MDPs

Event

Weekly OSI Lab Seminar

Short summary

In this seminar, I will talk about the paper “Improved Sample Complexity for Reward-free Reinforcement Learning under Low-rank MDPs” (Cheng et al., ICLR 2023).

Abstract

(taken directly from the paper)

In reward-free reinforcement learning (RL), an agent explores the environment first without any reward information, in order to achieve certain learning goals afterwards for any given reward. In this paper we focus on reward-free RL under low-rank MDP models, in which both the representation and linear weight vectors are unknown. Although various algorithms have been proposed for reward-free low-rank MDPs, the corresponding sample complexity is still far from being satisfactory. In this work, we first provide the first known sample complexity lower bound that holds for any algorithm under low-rank MDPs. This lower bound implies it is strictly harder to find a near-optimal policy under low-rank MDPs than under linear MDPs. We then propose a novel model-based algorithm, coined RAFFLE, and show it can both find an epsilon-optimal policy and achieve an epsilon-accurate system identification via reward-free exploration, with a sample complexity significantly improving the previous results. Such a sample complexity matches our lower bound in the dependence on epsilon, as well as on K (in the large d regime), where d and K respectively denote the representation dimension and action space cardinality. Finally, we provide a planning algorithm (without further interaction with the true environment) for RAFFLE to learn a near-accurate representation, which is the first known representation learning guarantee under the same setting.
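Background (not part of the paper's abstract; a brief reminder of the standard model definitions, for attendees less familiar with the setting): a low-rank MDP assumes that each transition kernel factorizes through an unknown d-dimensional representation, so that for every timestep h

P_h(s' | s, a) = \langle \phi_h^*(s, a), \mu_h^*(s') \rangle,

where both the feature map \phi_h^* and the measure \mu_h^* are unknown to the learner. Linear MDPs are the special case in which \phi_h^* is known in advance, which is why a lower bound separating the two settings is meaningful.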

Papers

Papers discussed in the seminar:

  • Main: Yuan Cheng, Ruiquan Huang, Jing Yang, and Yingbin Liang. Improved Sample Complexity for Reward-free Reinforcement Learning under Low-rank MDPs. In ICLR 2023.