Speakers: Olya Mastikhina and Dhruv Sreenivas
Title: Optimistic critics can empower small actors
Abstract:
Actor-critic methods have been central to many of the recent advances in deep reinforcement learning. The most common approach is to use symmetric architectures, whereby both actor and critic have the same network topology and number of parameters. However, recent works have argued for the advantages of asymmetric setups, specifically with the use of smaller actors. We perform broad empirical investigations and analyses to better understand the implications of this and find that, in general, smaller actors result in performance degradation and overfit critics. Our analyses suggest poor data collection, due to value underestimation, as one of the main causes for this behavior, and further highlight the crucial role the critic can play in alleviating this pathology. We explore techniques to mitigate the observed value underestimation, which enables further research in asymmetric actor-critic methods.
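To make the "optimistic critic" idea concrete, here is a minimal, hypothetical sketch (not the authors' exact method) contrasting pessimistic and optimistic one-step TD targets computed from an ensemble of critics. Pessimistic bootstrapping (min over the ensemble) is a common design that can produce the kind of value underestimation the abstract describes; an optimistic aggregate (e.g. max or mean) is one way to counter it. The function name and ensemble values are illustrative assumptions.

```python
import numpy as np

def td_target(reward, next_q_ensemble, gamma=0.99, mode="pessimistic"):
    """One-step TD target bootstrapped from an ensemble of next-state Q values.

    mode="pessimistic" takes the min over the ensemble (prone to
    underestimation); mode="optimistic" takes the max; anything else
    falls back to the ensemble mean as a neutral baseline.
    """
    next_q = np.asarray(next_q_ensemble, dtype=float)
    if mode == "pessimistic":
        bootstrap = next_q.min()
    elif mode == "optimistic":
        bootstrap = next_q.max()
    else:
        bootstrap = next_q.mean()
    return reward + gamma * bootstrap

# Two critics that disagree about the value of the next state:
ensemble = [1.0, 1.5]
print(td_target(0.1, ensemble, mode="pessimistic"))  # bootstraps from 1.0
print(td_target(0.1, ensemble, mode="optimistic"))   # bootstraps from 1.5
```

The gap between the two targets is exactly the disagreement within the ensemble scaled by the discount factor; with a small actor collecting data greedily against underestimated values, that gap can compound over training.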
Links:
Paper
Bio:
Olya Mastikhina and Dhruv Sreenivas are PhD students at the University of Montreal and Mila - Quebec AI Institute, where they work with Pablo Samuel Castro. Olya’s research focuses on reinforcement learning (RL) and the study of agency, with a particular interest in how broader conceptions of agency can inform how we think about and design intelligent systems. Dhruv’s research focuses on sample-efficient, scalable RL, particularly on how to design and scale RL and imitation learning algorithms for complex control tasks using representation learning, exploration, and new model architectures.