Talking to machines
IMEBESS workshop
Riga, Latvia
22 May 2024
The Talking to Machines (T2M) project will organize another of its periodic workshops on themes related to Artificial Intelligence and social science research. This workshop will be held in conjunction with the IMEBESS 2024 conference in Riga, Latvia, scheduled for 23-25 May 2024. The T2M workshop will take place on 22 May 2024 and will follow the same format as previous pre-IMEBESS conference workshops: an invited group of leading researchers working in a particular area of the social sciences comes together for a one-day session, and registered participants in the IMEBESS conference are also encouraged to attend.
This year’s workshop is organized by the directors of the Talking to Machines project, Ray Duch (University of Oxford) and Sonja Vogt (University of Lausanne). The sessions are organized around the research and development activities of the T2M project and are scheduled for the morning and afternoon of 22 May. They serve two purposes. The first is to explore how advances in these areas of AI will shape the research activities planned for the T2M project. The second is more pedagogical: providing IMEBESS attendees with insights into cutting-edge developments in AI and social science research.
Enhancing Information Engagement and Human Agency
A core aim of the T2M project is to explore how AI can be leveraged in the design and execution of large-scale experiments that aim to identify optimal strategies for communicating information to average citizens. Many large RCTs – for example, in the domains of financial inclusion, health behaviors, voter turnout, or labor market decisions – require participants to engage with information.
Chris Bail (Duke University) has been working at the forefront of efforts to employ AI to enhance social and political engagement (“Simulating Social Media Using Large Language Models to Evaluate Alternative News Feed Algorithms” and “Leveraging AI for Democratic Discourse”). Chris will lead this session, providing an overview of his work and the potential of AI for enhancing this engagement and, ultimately, human agency.
LLMs and Homo Economicus
Experimental design and causal inference are the critical foundations of the T2M project. We envisage harnessing LLMs to help build the causal architecture of our AI-enhanced RCTs, and we are inspired by recent evidence regarding the decisions of synthetic AI subjects in classic lab experiments. Ben Manning and his colleagues at MIT have been making important contributions in this regard.
Ben Manning (MIT) will provide an assessment of how LLMs can help us better understand Homo Economicus. As an illustration of recent applied experiments, Reuben Kline and Ignacio Urbina (SUNY Stony Brook) will present “Communication, cooperation, and trust with human and artificial agents.”
Talking to machines
With the aid of LLMs such as ChatGPT, the T2M project aims to make considerable advances in how we design and implement information treatments. Ben Lira (University of Pennsylvania) has been working with T2M on several prototype experiments with this aim in mind. Ben will provide an overview of how ChatGPT can be deployed to enhance subject engagement and improve data collection. We will present the results of one of these recent prototype experiments on corruption belief updating in Chile.
AI in the Field
The T2M team has been exploring how to combine human corpora, synthetic subjects, and data collection with human subjects to optimize the design and implementation of large-scale RCTs. Initial results were recently published in a report to one of our funding bodies, Innovations for Poverty Action. Members of the team – Ray Duch, Ben Manning, and Piotr Kotlarz – will report on the LLM strategies they have been exploring and on recent results from an ongoing vaccine and TB screening RCT they have been conducting in Ghana.
AI Video Production
Of particular interest to the T2M project is the implementation of video treatments in large-scale RCTs. Our expectation is that both the design and production of these video treatments will be automated; LLMs will be critical not only in their conceptualization but also in their production. The final session will be led by Dr. Ye Zhu from the Princeton Visual AI Lab. Dr. Zhu will focus on multimodal generative models – multimodality in the context of visual, audio, and textual data – covering diverse generative network architectures such as GANs and diffusion models.