Simulators are a pervasive tool in reinforcement learning, but existing algorithms cannot efficiently exploit simulator access in a theoretically principled fashion, particularly in high-dimensional domains that require general function approximation. We explore the power of simulators through the lens of local simulator access (or local planning), an online RL protocol in which the agent can reset to previously visited states during training. We show that MDPs with low coverability can be learned efficiently under Q*-realizability alone, i.e., realizability of the optimal state-action value function, a weaker representation condition than existing online RL algorithms require. We also show that the challenging Exogenous Block MDP problem is tractable under local simulator access.
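To make the protocol concrete, the following is a minimal sketch of what local simulator access could look like as a programming interface; the names (`LocalSimulator`, `reset_to`) and the tabular dynamics are illustrative assumptions, not the paper's algorithm. The key constraint it encodes is that the agent may only reset to states it has already encountered during the current run, in contrast to a global simulator that permits resets to arbitrary states.

```python
import random


class LocalSimulator:
    """A toy tabular MDP exposing the local-simulator-access protocol:
    resets are permitted only to previously visited states.
    (Hypothetical interface for illustration; not from the paper.)"""

    def __init__(self, n_states=5, n_actions=2, seed=0):
        self.rng = random.Random(seed)
        self.n_states, self.n_actions = n_states, n_actions
        # Random transition probabilities and rewards stand in for a real MDP.
        self.P = {}
        for s in range(n_states):
            for a in range(n_actions):
                w = [self.rng.random() for _ in range(n_states)]
                z = sum(w)
                self.P[(s, a)] = [p / z for p in w]
        self.R = {(s, a): self.rng.random()
                  for s in range(n_states) for a in range(n_actions)}
        self.visited = {0}  # the set of legal reset targets
        self.state = 0

    def reset(self):
        """Standard online-RL reset to the initial state."""
        self.state = 0
        return self.state

    def reset_to(self, state):
        """Local access: reset only to a state visited earlier in training."""
        if state not in self.visited:
            raise ValueError("local access permits resets only to visited states")
        self.state = state
        return self.state

    def step(self, action):
        """Sample a transition, recording the successor as a reset target."""
        probs = self.P[(self.state, action)]
        next_state = self.rng.choices(range(self.n_states), weights=probs)[0]
        reward = self.R[(self.state, action)]
        self.state = next_state
        self.visited.add(next_state)
        return next_state, reward


# Usage: visit a state online, then replay from it to gather repeated samples,
# e.g., for estimating Q-values at that state.
sim = LocalSimulator()
sim.reset()
s1, _ = sim.step(0)
for _ in range(3):
    sim.reset_to(s1)
    sim.step(1)
```

The ability to replay from `s1` is exactly what an online-only agent lacks: without local access, revisiting a particular state requires re-reaching it from the initial distribution, which can be exponentially costly in hard-to-explore MDPs.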