I'm a DPhil (PhD) candidate at the University of Oxford supervised by Shimon Whiteson, funded by the Oxford-Google DeepMind Doctoral Scholarship, and studying deep reinforcement learning (RL). I completed my MS and BS at Brown University, where I focused on computer science and machine learning, and completed a pre-doc at Microsoft Research on long-term memory in RL. Previously, I researched autonomous vehicles in industry and academia.
In academia, I published research with Brown's self-driving car lab, took many graduate-level seminars focused on ML, and was a TA for deep learning. Some of my projects include: an RL agent that learns skills in Minecraft using emotion detection as feedback, a GAN that reconstructs images of faces with up to 80% of the pixels missing, and a state-of-the-art actor-critic network that plays Atari games using RL. As a TA for the first iteration of Brown's graduate-level deep learning course, I designed a lab and gave a guest lecture on implementing sequence-to-sequence machine translation. For research, I've worked on imitation learning, multi-agent game theory, and deep RL with Michael Littman's self-driving car lab. (Some of our work has gained publicity, and other work has been published at the International Conference on Social Robotics.)
In industry, I've worked at Microsoft, Lyft, and Adobe, in addition to several smaller companies. Most recently, I completed a pre-doc at Microsoft Research with Katja Hofmann on long-term memory in RL; I wrote a first-author paper on this work, published at ICLR 2020. At DeepScale (since acquired by Tesla), I worked on perception for autonomous vehicles, developing novel methods for instance segmentation. At Lyft, I designed a framework for solving sequential decision-making problems with MDPs, including writing a special-case solver for an MDP specific to autonomous vehicles at stop intersections. At Adobe, I designed neural nets to forecast marketing data. I also worked at a robotics startup on both software and hardware, and co-created Food with Friends (an iOS app) years ago.
Currently, my research interests include representations in meta-learning, memory models, multi-agent RL, and offline learning. I am also interested in human feedback and autonomous vehicles, but I'm open to any area of ML where I can work on important, challenging problems. When a problem grabs me, I can't stop thinking about it until it's solved. Please see my Portfolio, Google Scholar, and GitHub for a sample of my work and research!