Upcoming Events

Talking to Machines Seminar Series
Friday, 13th February 2026
14.00 – 15.00, hybrid format
Butler Room, Nuffield College & Zoom
Speaker: Dr. Brian Scholl
Dr. Brian Scholl is a leading economist at the intersection of regulation, financial markets, and artificial intelligence. He currently serves as a Staff Regulatory Researcher at Norm Ai, where he leads work on quantifying risk and designing data-driven, AI-enabled compliance programs for financial institutions.
Previously, Dr. Scholl was the founding Chief Economist of the U.S. Securities and Exchange Commission Office of Investor Research, where he led cutting-edge research on investor behavior and emerging technologies, including AI. He also served as Chief Economist of the U.S. Senate Budget Committee, advising policymakers on economic evidence and fiscal policy.
Recognized as the U.S. government’s top evidence innovator in 2022, Dr. Scholl pioneered rapid-cycle evidence generation systems that dramatically shortened research timelines and translated economic insights into real-world policy impact. Across roles, he has been known for building bridges between regulators, markets, and technology—turning complex economic and behavioral research into actionable frameworks.
Title: Experimental Evidence on Decision-Making: Implications for Governance in an AI-Enabled Future
Abstract
Rapid advances in artificial intelligence have renewed longstanding questions about human decision-making, institutional design, and governance. Is this technological moment fundamentally different? How do new systems interact with human cognitive limitations? And under what conditions do they enhance—or undermine—trust, legitimacy, and social welfare?
This talk draws on a series of large-scale behavioural experiments I conducted in the context of financial regulation to examine how individuals make consequential decisions in environments characterized by complexity, asymmetric information, and institutional mediation. Retail investors routinely face choices—whether to participate in markets, how to allocate assets, how to interpret disclosures, and whether to rely on advice—that exceed the capacities of unaided human cognition. Traditional regulatory tools, particularly disclosure, have struggled to address these challenges and, in important respects, have fallen short of their intended protective role.
I review evidence from multiple randomized experiments that investigate how features of choice architecture shape behaviour, including linguistic complexity and jargon, reference points and performance benchmarks, and reliance on expert advice. Across settings, the findings reveal both the promise and the fragility of behavioural interventions: modest changes in presentation can meaningfully improve decisions, yet the same mechanisms can also be exploited by intermediaries facing competing incentives. In one set of studies, simplifying language and carefully structuring choice environments substantially improves comprehension and decision quality; in others, benchmark performance framing strongly influences investment choices even when underlying fundamentals are unchanged. A further experiment shows that individuals exhibit limited ability to screen the quality of financial advice, accepting poor guidance nearly as often as good guidance.
Taken together, these results highlight persistent limits to individual self-protection in complex systems and raise broader questions for the governance of AI-enabled decision support. While AI systems hold the potential to augment human capital, triage information, and personalize guidance at scale—benefiting individuals, institutions, and regulators alike—they also risk amplifying manipulation, opacity, and power asymmetries if left unchecked.
The talk concludes by using these experimental findings as a lens to prompt discussion about the design and regulation of AI-enabled institutions. Rather than treating AI as a replacement for human judgment, the discussion emphasizes how evidence on human behaviour can inform regulatory and institutional frameworks that are more trustworthy, legitimate, and aligned with long-run systemic stability—while acknowledging the risks such systems pose and the possibility that fundamentally new governance approaches may be required.

Talking to Machines Synthetic Replication Games Workshop @ IMEBESS/EPSA
EPSA Prague 2026 Pre-Conference Event
Date: Wednesday, 8th July 2026
Venue: Prague Congress Centre, Prague, Czech Republic
Format: Presentations + round table + hands-on programming workshop
Talking to Machines (Oxford) is hosting a pre-IMEBESS/EPSA round table exploring the use of large language models as synthetic subjects in social science research.