
I'm a DPhil (PhD) candidate at the University of Oxford, supervised by Shimon Whiteson, funded by the Oxford-Google DeepMind Doctoral Scholarship, and studying deep reinforcement learning (RL). My main research area is meta-RL. (For an introduction, see this tutorial or this interview where I explain meta-RL!) I've worked on hypernetworks in meta-RL, bias-variance trade-offs in meta-gradients, and the consistency of meta-RL algorithms, in addition to a survey of meta-RL. Previously, I did my MS and BS at Brown University, completed a pre-doc at Microsoft Research on sequence models in RL, and researched autonomous vehicles in both academia and industry.

At Brown University, I published research with the self-driving car lab, took many graduate seminars on ML, and was a TA for deep learning. During my master's degree, I was advised by Michael Littman, and my research focused on human feedback, imitation learning, and multi-agent game theory. Some of our work gained publicity. Other projects included an RL agent that learns skills in Minecraft using emotion detection as feedback, and a GAN that reconstructs corrupted images. As a TA for the first iteration of Brown's graduate-level deep learning course, I designed a lab and gave a guest lecture on implementing sequence-to-sequence machine translation.


In industry, I worked at Microsoft, Lyft, and Adobe, in addition to several smaller companies. I completed a pre-doc at Microsoft Research with Katja Hofmann on long-term memory in RL. At DeepScale, later acquired by Tesla, I worked on perception for autonomous vehicles, developing novel methods for instance segmentation. At Lyft, I designed a framework for solving sequential decision-making problems with MDPs, including a special-case solver for an MDP specific to autonomous vehicles at stop intersections. At Adobe, I designed neural networks to forecast marketing data. I also worked at a robotics startup on both software and hardware, and co-created Food with Friends (an iOS app). Next, I will research large language models (LLMs) at InstaDeep.


Broadly, my interests include generalization, adaptation, and representation. Specific topics include learning to learn (in-context), sequence models, and few-shot learning, including intersections with multi-agent RL and offline learning. I am also interested in human feedback, vision-language models for environment design, and any challenging problem in ML. Please see my CV, Google Scholar, and GitHub for a sample of my work!
