Upcoming Events

Talking to Machines Seminar Series
Friday, 24th April 2026
10:00 – 11:00
Online: Zoom link
Professor Xun Pang
Professor, School of International Studies
Peking University | PKU Analytics Lab for Global Risk Politics
Professor Pang’s research spans global risk politics, the geopolitics of critical raw materials, and the application of LLMs to social science. She is the author of From Cold Politics to Hot Politics (Peking University Press, 2026) and a forthcoming textbook on Large Language Models and Social Science Research. She has published in Political Analysis, International Organization, and Political Science Research and Methods, among others.
Yang Wu
Ph.D. Candidate
Institute of Automation, Chinese Academy of Sciences
Yang Wu’s research focuses on large language models, with particular interests in LLM reasoning, social simulation, and computational sociology.
How Can Synthetic Experiments Deliver Credible Causal Inference in Social Science?
The rapid diffusion of large language models (LLMs) has spurred growing interest in ‘synthetic experiments’, in which LLMs or LLM-driven agents simulate human subjects for causal inference. While such approaches promise scalability, cost efficiency, and experimental flexibility, fundamental methodological challenges, both theoretical and technical, must be addressed before they can serve as credible causal engines.
Drawing on a pilot study that synthetically replicates the ‘hawkish bias’ experiment in foreign policy decision-making, this talk identifies key obstacles to credible causal inference in synthetic settings — including persona drift, ambiguous treatment assignment, underdeveloped benchmarking, and an unsettled research design — and discusses potential solutions from a learning perspective.
This session will cover:
- The rise of synthetic experiments using LLMs for causal inference in social science.
- Key methodological challenges: persona drift, treatment assignment, benchmarking, and sampling.
- How counterfactual data in fine-tuning promotes causal rather than correlational learning.
- Reasoning-oriented training for capturing intermediate causal mechanisms.
- Practical implications for designing credible AI-driven social science experiments.

Talking to Machines Seminar Series
Thursday, 7th May 2026
In person: Nuffield College, Butler Room – 14:00
Online: Zoom link
Carlos Scartascini
Principal Technical Leader
Inter-American Development Bank
Carlos Scartascini is Principal Technical Leader in the Research Department of the Inter-American Development Bank (IDB), where he leads the department’s Behavioral Economics Group. He has published eight books and about 90 articles in academic journals and edited volumes. He is a member of the Executive Committee of the IDB’s GDLab, a member of the Scientific Committee of the Elcano Royal Institute, a member of the Board of Advisors of the Master of Behavioral and Decision Sciences at the University of Pennsylvania, an Associate Editor of the academic journal Economía, and a Founding Member of LACEA’s BRAIN (Behavioral Insights Network).
Corruption and Political Accountability in Good and Bad Economic Times
While the literature extensively explores the structural enablers of corruption and its adverse effects on economic performance, less is known about how the state of the economy influences corruption and political accountability. To address this gap, we develop a theoretical model in which politicians may divert resources from public goods and citizens can respond by punishing corruption. In the model, economic booms increase corruption while weakening accountability. We validate these predictions in a laboratory experiment, finding that corruption rates rise significantly when economic conditions are good. Citizens’ willingness to punish corrupt politicians, however, remains stable across the business cycle. Punishment decisions are driven by observed public good allocations: low allocations prompt significantly higher punishment rates than high allocations, even leading to the punishment of honest politicians during bad economic times. We also assess the role of corruption expectations in shaping these responses: when public good provision is low, citizens who believe politicians are corrupt are less likely to punish than those who believe politicians are honest. Accountability thus becomes more challenging when citizens struggle to identify corruption clearly, and citizens are more forgiving of corruption during good economic times, especially if they already mistrust politicians. These findings highlight the importance of strong transparency and accountability mechanisms for upholding governance standards, particularly in the face of economic fluctuations and public mistrust.

Talking to Machines Seminar Series
Friday, 8th May 2026
In person: Nuffield College, Butler Room – 14:00
Online: Zoom link
Aaron R. Kaufman
Associate Professor of Political Science
NYU Abu Dhabi
Aaron Kaufman is Associate Professor of Political Science at NYU Abu Dhabi and Co-Director of the Center for Interdisciplinary Data Science and Artificial Intelligence. His work applies computational tools to measurement problems in political science, including ideology, discrimination, policy significance, and legislative district compactness. His research has appeared in Nature, Nature Scientific Data, Nature Scientific Reports, the APSR, AJPS, BJPS, JOP, Political Analysis, the British Medical Journal, and the Journal of Quantitative Analysis in Sports. He received his PhD in Political Science and an AM in Statistics from Harvard, and his BA in Political Science from the University of California, Berkeley.
Measuring the Political Biases of Large Language Models
Large language models (LLMs) are a transformational technology, fundamentally changing how people obtain information and interact with the world. As people become increasingly reliant on them for an enormous variety of tasks, a body of academic research has developed to examine these models for inherent biases, especially political biases, often finding them small. We challenge this prevailing wisdom. First, by comparing 31 LLMs to legislators, judges, and a nationally representative sample of U.S. voters, we show that LLMs’ apparently small overall partisan preference is the net result of offsetting extreme views on specific topics, much like that of moderate voters. Second, in a randomized experiment, we show that LLMs can translate their preferences into political persuasion even in information-seeking contexts: voters randomized to discuss political issues with an LLM chatbot are as much as 5 percentage points more likely to express the same preferences as that chatbot.

Talking to Machines Synthetic Replication Games Workshop @ IMEBESS/EPSA
EPSA Prague 2026 Pre-Conference Event
Date: Wednesday, 8th July 2026
Venue: Prague Congress Centre, Prague, Czech Republic
Format: Presentations + round table + hands-on programming session
Talking to Machines, Oxford is hosting this pre-IMEBESS/EPSA workshop exploring the use of large language models as synthetic subjects in social science research.

