I am a principal researcher at Microsoft Research, New England (and New York City), where I am a member of the Reinforcement Learning Group.
Previously, I was a postdoctoral fellow at the MIT Institute for Foundations of Data Science, and before that I received my PhD from the Department of Computer Science at Cornell University (2019), advised by Karthik Sridharan.
I received my BS and MS in electrical engineering from the University of Southern California in 2014. I work on theory for machine learning.
My research lies at the intersection of machine learning and decision making, including reinforcement learning and control, contextual bandits, and learning in causal/counterfactual settings. I am interested in uncovering new algorithmic principles and fundamental limits for interactive decision making.
More broadly, I am interested in all theoretical aspects of machine learning and AI.
Lecture Notes on Statistical Reinforcement Learning and Decision Making
Fall 2023 Course @ MIT: Statistical Reinforcement Learning and Decision Making
ICML 2022 Tutorial: Bridging Learning and Decision Making
Internships and postdocs. I have been fortunate to work with the following amazing interns and postdocs at MSR:
Phil Amortila, Adam Block, Noah Golowich, Audrey Huang, Qinghua Liu, Dhruv Rohatgi, Clayton Sanford, Yuda Song, Andrew Wagenmaker, Tengyang Xie, Yunzong Xu, and Yinglun Zhu.
For postdocs in AI/ML/RL at MSR starting in 2025, apply here (theory) and here (empirical).
If you are a PhD student interested in internships in ML/AI/RL at MSR for 2025, apply here (spring) or here (summer).
Is Behavior Cloning All You Need? Understanding Horizon in Imitation Learning
Dylan J. Foster, Adam Block, and Dipendra Misra.
NeurIPS 2024. Spotlight presentation. [talk]
Audrey Huang, Wenhao Zhan, Tengyang Xie, Jason D. Lee, Wen Sun, Akshay Krishnamurthy, and Dylan J. Foster.
2024. [Audrey's talk]
Exploratory Preference Optimization: Harnessing Implicit Q*-Approximation for Sample-Efficient RLHF
Tengyang Xie†, Dylan J. Foster†, Akshay Krishnamurthy, Corby Rosset, Ahmed Awadallah, and Alexander Rakhlin.
2024.
Scalable Online Exploration via Coverability
Philip Amortila, Dylan J. Foster, and Akshay Krishnamurthy.
ICML 2024. [Phil's talk]
The Role of Coverage in Online Reinforcement Learning
Tengyang Xie†, Dylan J. Foster†, Yu Bai, Nan Jiang, and Sham M. Kakade.
ICLR 2023. Oral presentation. [Tengyang's talk]
Tight Guarantees for Interactive Decision Making with the Decision-Estimation Coefficient
Dylan J. Foster, Noah Golowich, and Yanjun Han.
COLT 2023.
[talk]
Instance-Optimality in Interactive Decision Making: Toward a Non-Asymptotic Theory
Andrew Wagenmaker and Dylan J. Foster.
COLT 2023.
Zakaria Mhammedi, Dylan J. Foster, and Alexander Rakhlin.
ICML 2023. Oral presentation.
Offline Reinforcement Learning: Fundamental Barriers for Value Function Approximation
Dylan J. Foster, Akshay Krishnamurthy, David Simchi-Levi, and Yunzong Xu.
COLT 2022. Oral presentation at NeurIPS 2021 Offline RL workshop.
On the Complexity of Adversarial Decision Making
Dylan J. Foster, Alexander Rakhlin, Ayush Sekhari, and Karthik Sridharan.
NeurIPS 2022. Oral presentation.
The Statistical Complexity of Interactive Decision Making
Dylan J. Foster, Sham M. Kakade, Jian Qian, and Alexander Rakhlin.
2021. [talk]
Efficient First-Order Contextual Bandits: Prediction, Allocation, and Triangular Discrimination
Dylan J. Foster and Akshay Krishnamurthy.
NeurIPS 2021. Oral presentation.
Beyond UCB: Optimal and Efficient Contextual Bandits with Regression Oracles
Dylan J. Foster and Alexander Rakhlin.
ICML 2020.
Now in Vowpal Wabbit! Use the --squarecb option or see here for more information.
Naive Exploration is Optimal for Online LQR
Max Simchowitz and Dylan J. Foster.*
ICML 2020.
Orthogonal Statistical Learning
Dylan J. Foster and Vasilis Syrgkanis.
Annals of Statistics, 2023.
COLT 2019. Best Paper Award.
Lower Bounds for Non-Convex Stochastic Optimization
Yossi Arjevani, Yair Carmon, John C. Duchi, Dylan J. Foster, Nathan Srebro, and Blake Woodworth.
Mathematical Programming, Series A, 2022.
The Complexity of Making the Gradient Small in Stochastic Convex Optimization
Dylan J. Foster, Ayush Sekhari, Ohad Shamir, Nathan Srebro, Karthik Sridharan, and Blake Woodworth.
COLT 2019. Best Student Paper Award.
Logistic Regression: The Importance of Being Improper
Dylan J. Foster, Satyen Kale, Haipeng Luo, Mehryar Mohri, and Karthik Sridharan.
COLT 2018. Best Student Paper Award.
Online Learning: Sufficient Statistics and the Burkholder Method
Dylan J. Foster, Alexander Rakhlin, and Karthik Sridharan.
COLT 2018.
Spectrally-Normalized Margin Bounds for Neural Networks
Peter Bartlett, Dylan J. Foster, and Matus Telgarsky.
NeurIPS 2017. Spotlight presentation.
Adaptive Learning: Algorithms and Complexity
Dylan J. Foster
Ph.D. Thesis. Department of Computer Science, Cornell University, 2019.
Cornell CS Doctoral Dissertation Award.
Foundations of Reinforcement Learning and Interactive Decision Making
Dylan J. Foster and Alexander Rakhlin, 2023.
Lecture notes from 9.522: Statistical Reinforcement Learning and Decision Making.
Program Committee/Area Chair: COLT (Senior PC): 2020, 2021, 2022, 2023, 2024; NeurIPS (Area Chair): 2020, 2021, 2022, 2023, 2024; ICML (Area Chair): 2022; ALT: 2019, 2020, 2021, 2022, 2023, 2024; Learning for Dynamics and Control (L4DC): 2020, 2021, 2022.
Conference Reviewing: COLT, NeurIPS, ICML, STOC, FOCS, SODA, ALT, AISTATS, AAAI.
Journal Reviewing: JMLR, Journal of the ACM, Annals of Statistics, Mathematics of Operations Research, Operations Research, Biometrika.
Statistical Reinforcement Learning and Decision Making
MIT, Fall 2023.
Co-taught with Sasha Rakhlin.
Statistical Reinforcement Learning and Decision Making
MIT, Fall 2022.
Co-taught with Sasha Rakhlin.
Machine Learning Theory
Cornell University, Spring 2018.
Teaching assistant for Karthik Sridharan.
Introduction to Analysis of Algorithms
Cornell University, Spring 2015.
Teaching assistant for Éva Tardos and David Steurer.
Received outstanding teaching award.
Foundations of Artificial Intelligence
Cornell University, Fall 2014.
Teaching assistant for Bart Selman.
I can be reached at dylanfoster at microsoft dot com.