Sample-Efficient Exploration

GowU: Uncertainty-Guided Tree Search for Hard Exploration

We introduce Go-With-Uncertainty (GowU), a new approach to hard-exploration problems in reinforcement learning that casts exploration as a particle-based search guided by uncertainty, rather than as learning a policy to maximize an exploration objective. GowU achieves state-of-the-art results on Montezuma’s Revenge, Pitfall!, and Venture, and solves pixel-based MuJoCo Adroit and AntMaze tasks without expert demonstrations.
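The core idea, searching a frontier of visited states and preferentially expanding the most uncertain ones instead of training an exploration policy, can be illustrated with a minimal toy sketch. This is hypothetical code, not the authors' GowU implementation: the grid environment is invented for illustration, and an inverse visit count stands in for the learned uncertainty signal that GowU actually uses.

```python
import heapq

# Toy uncertainty-guided frontier search on a 10x10 grid (illustrative only).
# A priority queue of "particles" (states) is expanded greedily by an
# uncertainty score; here, a count-based bonus stands in for a learned one.

GRID = 10

def neighbors(state):
    """4-connected moves on the grid, clipped to the boundary."""
    x, y = state
    cand = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in cand if 0 <= a < GRID and 0 <= b < GRID]

def uncertainty(state, counts):
    """Stand-in uncertainty estimate: decays with the visit count."""
    return 1.0 / (1 + counts.get(state, 0))

def explore(start, goal, budget=500):
    """Expand the most-uncertain frontier state until the goal is reached
    or the interaction budget runs out."""
    counts = {}
    frontier = [(-uncertainty(start, counts), start)]  # max-heap via negation
    seen = {start}
    for _ in range(budget):
        if not frontier:
            break
        _, state = heapq.heappop(frontier)
        counts[state] = counts.get(state, 0) + 1
        if state == goal:
            return True
        for nxt in neighbors(state):
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-uncertainty(nxt, counts), nxt))
    return goal in seen

print(explore((0, 0), (9, 9)))  # reaches the far corner within the budget
```

The contrast with policy-based exploration is that no parametric policy is optimized: search effort is allocated directly to the states where the (stand-in) uncertainty is highest.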

End-to-End Efficient RL for Linear Bellman Complete MDPs with Deterministic Transitions

Pre-Print 2026

The Power of Resets in Online Reinforcement Learning

NeurIPS 2024 (Spotlight)

Sample and Oracle Efficient Reinforcement Learning for MDPs with Linearly-Realizable Value Functions

Pre-Print 2024

Efficient Model-Free Exploration in Low-Rank MDPs

Pre-Print 2023

Representation Learning with Multi-Step Inverse Kinematics: An Efficient and Optimal Approach to Rich-Observation RL

ICML 2023 (Oral Presentation)