Selected for oral presentations at the Econometric Society
Interdisciplinary Frontiers: Economics and AI+ML
conference and the Conference on Digital Experimentation.
Adaptivity can significantly improve the efficiency of experimentation, but it is
challenging to implement even at large online platforms with mature experimentation
systems. As a result, many real-world experiments are deliberately run with large
batches and only a handful of opportunities to update the sampling allocation, as a
way to reduce the operational costs of experimentation.
In this work, we focus on adaptive experiments with limited adaptivity (short horizons,
T < 10). Bandit algorithms designed for long-horizon settings are tailored to provide
regret guarantees for each specific setting, and we find they often underperform
static A/B tests on practical problem instances with batched feedback,
non-stationarity, multiple objectives and constraints, and personalization.
In response, we develop a mathematical programming framework for
designing adaptive experimentation algorithms. Instead of following the
problem-specific research paradigm (akin to an optimization solver developed
for a particular linear program), we ask the modeler to write down a flexible
optimization formulation and use modern machine learning systems to
(heuristically) solve for adaptive designs.
Since a naive formulation of the adaptive
experimentation problem as a dynamic program is intractable,
we propose a batched view of the experimentation process. We model the uncertainty
around the batch-level sufficient statistics necessary to make allocation decisions,
instead of attempting to model unit-level outcomes, whose distributions are typically
unknown and which lead to intractable dynamic programs with combinatorial action spaces.
Sequential Gaussian approximations are the main intellectual vehicle
powering our mathematical programming framework. CLT-based normal approximations are
universal in statistical inference, and the sequential variant we prove yields a simple
optimization formulation that lends itself to modern computational tools. Through
extensive empirical evaluation, we observe that even a preliminary and heuristic
solution approach can provide major robustness benefits. Unlike bespoke methods (e.g.,
Thompson sampling variants), our mathematical programming framework provides
consistent gains over static randomized controlled trials and exhibits robust
performance across problem instances.
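To make the batched Gaussian view concrete, here is a minimal, hypothetical Python sketch (illustrative assumptions throughout; this is not our implementation): the state is each arm's posterior mean and variance, one batch moves the state through a CLT-justified Gaussian transition, and the allocation is chosen by optimizing a simulated planning objective. The random-search loop stands in for the gradient-based solvers the optimization formulation enables.

```python
import numpy as np

rng = np.random.default_rng(0)
K, noise_sd, n_batch = 4, 1.0, 500  # arms, outcome noise, units per batch

def transition(mean, var, p, z):
    """One batch in the Gaussian belief-state view: batch sample means are
    approximately normal (CLT), so the posterior update is conjugate.
    z ~ N(0, I) reparameterizes the randomness in the observed batch means."""
    n = np.maximum(p * n_batch, 1e-3)
    obs_var = noise_sd**2 / n                        # CLT variance of batch means
    batch_mean = mean + np.sqrt(var + obs_var) * z   # predictive draw of batch means
    post_var = 1.0 / (1.0 / var + 1.0 / obs_var)     # conjugate normal update
    post_mean = post_var * (mean / var + batch_mean / obs_var)
    return post_mean, post_var

def prob_correct_selection(p, mean, var, n_mc=500):
    """Monte Carlo planning objective: after one batch with allocation p,
    how often does a posterior draw agree with the posterior mean about
    which arm is best?"""
    wins = 0
    for _ in range(n_mc):
        m, v = transition(mean, var, p, rng.normal(size=K))
        theta = m + np.sqrt(v) * rng.normal(size=K)
        wins += np.argmax(theta) == np.argmax(m)
    return wins / n_mc

# Random search over the simplex as a stand-in for gradient-based solvers.
mean, var = np.zeros(K), np.ones(K)
best_p, best_val = None, -np.inf
for _ in range(100):
    p = rng.dirichlet(np.ones(K))
    val = prob_correct_selection(p, mean, var)
    if val > best_val:
        best_p, best_val = p, val
print("allocation:", np.round(best_p, 3), "objective:", round(best_val, 3))
```

Note how the combinatorial unit-level allocation problem disappears: the decision variable is just a vector of sampling fractions, and the transition is a smooth function of it.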
AI models are omnipresent yet extrapolate in unexpected ways,
posing a significant barrier to robust and fair systems.
Building AI systems that can articulate their own uncertainty has been
a longstanding challenge in ML; such probabilistic reasoning capability
is key to bounding downside risk (e.g., by delegating to human experts) and to
continually improving system performance by gathering data to resolve uncertainty.
Despite recent advances in large language models, uncertainty quantification remains a
challenge: methods that attempt to leverage deep neural networks, such as Bayesian
neural networks, frequently face scalability limitations.
This work takes an important conceptual step towards building large-scale
AI systems that can reason about uncertainty through natural language.
We revisit De Finetti’s view of uncertainty as arising from missing observations rather
than latent parameters, which allows us to pose learning to do statistical inference
as a prediction problem involving masked inputs. This formal connection between
autoregressive generation and probabilistic reasoning allows pre-trained sequence
models to express their epistemic uncertainty about underlying concepts, and to refine
their beliefs as they gather more information.
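As a toy illustration of this connection (our own sketch, not the paper's code), the one-step-ahead predictive of an exchangeable binary sequence fully determines a posterior: autoregressively imputing the missing future observations and recording their frequency yields posterior samples of the latent rate. Here Laplace's rule of succession stands in for a trained sequence model's next-token probability.

```python
import numpy as np

rng = np.random.default_rng(0)
observed = rng.binomial(1, 0.7, size=20)   # 20 observed coin flips, true rate 0.7

def next_flip_prob(ones, total):
    """Laplace's rule of succession, standing in for a trained sequence
    model's next-token probability p(x_{t+1} = 1 | x_1, ..., x_t)."""
    return (ones + 1) / (total + 2)

def posterior_sample(observed, horizon=2000):
    """Sample the latent rate by autoregressively imputing `horizon` missing
    flips and taking their long-run frequency, per De Finetti's view."""
    ones, total, imputed = observed.sum(), len(observed), 0
    for _ in range(horizon):
        x = rng.binomial(1, next_flip_prob(ones, total))
        ones, total, imputed = ones + x, total + 1, imputed + x
    return imputed / horizon

samples = np.array([posterior_sample(observed) for _ in range(500)])
print(f"imputed-frequency posterior: mean {samples.mean():.3f}, sd {samples.std():.3f}")
# These frequencies match draws from the Beta(1 + #heads, 1 + #tails) posterior
# implied by Laplace's rule: uncertainty about a parameter is expressed purely
# through predictions of missing observations.
```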
Our findings open a promising avenue for addressing uncertainty in complex,
data-rich settings in a scalable way. We are excited by how this work leverages
a timeless insight to inform a timely topic: guiding the next generation of AI systems.
1. As internet data depletes, the pace of progress in LLM capabilities has been widely
observed to slow down (even in the popular press). This suggests that the
paradigm of pre-training on passively scraped web data is approaching the limits of
its potential. To move forward, the authors believe that the next generation of AI
systems must be able to recognize the tasks on which they face high uncertainty, and
actively gather data in order to continually improve their performance.
2. Scalable uncertainty quantification poses a key intellectual bottleneck,
which we resolve by returning to De Finetti’s insight developed in the 1920s.
We believe the connection between Bayesian inference and autoregressive generation provides
the groundwork for building LLMs with probabilistic reasoning capabilities.
Taken together, our work showcases how principled scientific insights have the
potential to shape the design of even the largest-scale AI systems.
Causal inference provides the foundation for decision-making in the sciences and industry alike,
and our work addresses a longstanding gap between practical performance and theoretical guarantees in
causal inference. Machine learning-based methods provide a powerful way to control for confounding,
and the de facto standard approach is to use debiased estimators, which enjoy guarantees
such as statistical efficiency and double robustness; examples include one-step
estimation (i.e., augmented inverse propensity weighting (AIPW)) and targeted
maximum likelihood estimation (TMLE).
However, in practice, these estimators have been observed to be unstable when there is
limited overlap between treatment and control, necessitating ad hoc adjustments
such as truncating propensity scores. In contrast, naive plug-in estimators
using an ML model can be more stable but lack these desirable asymptotic properties.
This trade-off can make it difficult to choose an estimator and, ultimately,
to reach a conclusion regarding the treatment effect.
We propose a novel framework that combines the best of both worlds:
we derive the best plug-in estimator that is debiased,
retaining the stability of plug-ins while enjoying statistical efficiency and double robustness.
Our estimation framework is based on a constrained optimization problem and
can incorporate flexible modern ML techniques, including controlling for text-based confounders
using LLMs. Empirically, we demonstrate our approach over a range of examples,
and observe that it outperforms standard debiased methods when there is limited overlap.
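To illustrate the instability we address, here is a toy sketch of the two standard baselines (not our constrained-optimization estimator), comparing a plug-in estimate with AIPW on synthetic data with limited overlap; the data-generating process and nuisance models are our own illustrative choices.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 1))
propensity = 1 / (1 + np.exp(-4 * X[:, 0]))   # extreme scores: limited overlap
T = rng.binomial(1, propensity)
Y = X[:, 0] + T * 1.0 + rng.normal(size=n)    # true ATE = 1.0

# Nuisance estimates (cross-fitting omitted for brevity)
ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]
m1 = LinearRegression().fit(X[T == 1], Y[T == 1]).predict(X)
m0 = LinearRegression().fit(X[T == 0], Y[T == 0]).predict(X)

plug_in = np.mean(m1 - m0)                    # stable, but not debiased
aipw = np.mean(m1 - m0                        # debiased one-step correction
               + T * (Y - m1) / ps
               - (1 - T) * (Y - m0) / (1 - ps))
print(f"plug-in: {plug_in:.3f}   AIPW: {aipw:.3f}")
# When ps approaches 0 or 1, the inverse-propensity weights explode and the
# AIPW estimate becomes unstable; the usual fix is ad hoc truncation, e.g.
# np.clip(ps, 0.01, 0.99), which trades bias for stability.
```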
As low overlap settings are a persistent challenge in practice,
we expect these results will be of interest to a broad spectrum of researchers,
including practitioners in statistics, economics, and machine learning.
We are unusually excited by how our framework provides a novel and pragmatic approach
to a longstanding challenge in causal inference.
By introducing an entirely new constrained optimization framework for semiparametric
estimation, we hope to spur further progress in developing robust yet theoretically
grounded estimators.
Recent advances in AI present significant opportunities to rethink the design of
service systems with AI at the forefront. Even in the era of LLMs, managing a
workforce of human agents (“servers”) is a critical problem. Crowdsourcing workers
are vital for aligning LLMs with human values (e.g., ChatGPT), and in many domains
the cost of human annotation is a binding constraint (e.g., medical diagnosis from
radiologists). This work models and analyzes modern service systems involving human
reviewers and state-of-the-art AI models. A key intellectual challenge in managing
congestion within such service systems is endogeneity. Prediction is never the goal,
and the link between predictive performance and downstream decision-making
performance is not straightforward due to endogeneity. Our work crystallizes how
classical tools from queueing theory provide managerial insights into the design of
AI-based service systems.
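As a stylized illustration of this endogeneity (our own toy sketch, not the model in the paper), consider an AI model that auto-resolves items above a confidence threshold and defers the rest to a human-review queue: the threshold simultaneously sets the human arrival rate, the required staffing, and the difficulty mix of what humans see, so predictive accuracy alone does not pin down system performance. All rates and the staffing rule below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
lam, mu = 10.0, 4.0                 # item arrivals/hour, per-reviewer service rate
difficulty = rng.uniform(0, 1, 100_000)
confidence = 1 - difficulty + rng.normal(0, 0.1, difficulty.size)  # AI confidence

for tau in [0.5, 0.7, 0.9]:
    deferred = confidence < tau                # AI auto-resolves the rest
    lam_h = lam * deferred.mean()              # endogenous human arrival rate
    c = int(np.ceil(lam_h / mu)) + 1           # naive staffing: keep utilization < 1
    rho = lam_h / (c * mu)
    print(f"threshold {tau}: defer {deferred.mean():.0%}, "
          f"~{c} reviewers, utilization {rho:.2f}, "
          f"mean deferred difficulty {difficulty[deferred].mean():.2f}")
```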
Different distribution shifts require different interventions, and algorithms must be
grounded in the specific shifts they address. Advocating for an inductive approach to
research on distributional robustness, we build an empirical testbed, "WhyShift",
comprising natural shifts across 5 tabular datasets and 60,000 model configurations
encompassing imbalanced learning algorithms and distributionally robust optimization
(DRO) methods. We find Y|X-shifts are the most prevalent on our testbed, in stark
contrast to the heavy focus on X (covariate) shifts in the ML literature. We conduct
an in-depth empirical analysis of DRO methods and find that the underlying model class (e.g.,
neural networks, XGBoost) and hyperparameter selection have a first-order impact in practice
despite being overlooked by DRO researchers. To further bridge the gap between methodological
research and practice, we design case studies that illustrate how such a refined understanding of
distribution shifts can enhance both data-centric and algorithmic interventions.
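To give a flavor of how one might separate these shift types (a rough sketch under our own simplifying assumptions; the testbed's attribution methodology is more careful), reweight held-out source losses toward the target covariate distribution: the reweighted-vs-source gap reflects X-shift, and the remaining gap to the target error reflects Y|X-shift.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_domain(n, x_shift, label_noise):
    X = rng.normal(x_shift, 1, size=(n, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(int)
    y ^= rng.binomial(1, label_noise, n)       # extra label noise = Y|X change
    return X, y

Xs, ys = make_domain(8000, 0.0, 0.05)          # source domain
Xt, yt = make_domain(4000, 0.7, 0.25)          # target: both X and Y|X shift
Xtr, ytr, Xte, yte = Xs[:4000], ys[:4000], Xs[4000:], ys[4000:]

clf = GradientBoostingClassifier().fit(Xtr, ytr)
err_s = (clf.predict(Xte) != yte).astype(float)   # held-out source errors
err_t = (clf.predict(Xt) != yt).mean()

# Density-ratio weights w(x) ~ p_target(x) / p_source(x) via a domain classifier
dom = LogisticRegression().fit(np.vstack([Xte, Xt]),
                               np.r_[np.zeros(len(Xte)), np.ones(len(Xt))])
p_t = dom.predict_proba(Xte)[:, 1]
w = p_t / (1 - p_t)
err_s_covshift = np.average(err_s, weights=w)  # source error under target covariates

print(f"source {err_s.mean():.3f} -> +X-shift {err_s_covshift:.3f} "
      f"-> target {err_t:.3f} (remaining gap ~ Y|X-shift)")
```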
Starting with my one-year stint on Meta’s adaptive experimentation team, I’ve been
pondering how bandit algorithms are largely designed by theoreticians to achieve good
regret bounds and are rarely used in practice due to the difficulty of implementation
and poor empirical performance. In this work, we focus on the underpowered,
short-horizon, and large-batch problems that typically arise in practice. We use
large-batch normal approximations to derive an MDP formulation whose solution is the
optimal adaptive design. Our formulation allows the use of computational tools for
designing adaptive algorithms, a break from the existing theory-driven paradigm.
Our approach significantly improves statistical power over standard
methods, even when compared to Bayesian bandit algorithms
(e.g., Thompson sampling) that require full distributional knowledge
of individual rewards. Overall, we expand the scope of
adaptive experimentation to settings that are difficult
for standard methods, involving limited adaptivity,
low signal-to-noise ratio, and unknown reward distributions.
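For intuition about why even a single adaptation helps in this regime, here is a toy simulation (our illustrative stand-in, not the MDP solver itself) of a three-arm, two-batch experiment: the adaptive design drops the apparently worst arm after the first batch and measurably raises the probability of identifying the best arm. The arm means, noise level, and batch size are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)
means = np.array([0.0, 0.1, 0.2])   # low signal-to-noise: small gaps
sd, n_batch, n_mc = 1.0, 300, 20_000

def prob_correct(adaptive):
    correct = 0
    for _ in range(n_mc):
        n1 = np.full(3, n_batch // 3)
        m1 = rng.normal(means, sd / np.sqrt(n1))       # batch-1 sample means
        if adaptive:
            n2 = np.zeros(3, dtype=int)
            n2[np.argsort(m1)[-2:]] = n_batch // 2     # drop the apparent worst arm
        else:
            n2 = np.full(3, n_batch // 3)
        m2 = rng.normal(means, sd / np.sqrt(np.maximum(n2, 1)))
        pooled = (n1 * m1 + n2 * m2) / (n1 + n2)       # weight by sample sizes
        correct += np.argmax(pooled) == np.argmax(means)
    return correct / n_mc

print(f"static design : {prob_correct(False):.3f}")
print(f"one adaptation: {prob_correct(True):.3f}")
```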