Binding Large Language Models to Virtual Personas for Human Simulation
Large language models (LLMs) show superhuman performance on math, coding, and other reasoning tasks, but they still struggle to behave like real people. Human attitudes and decisions depend on rich personal backgrounds, and without such grounding, LLMs often fall back on generic or stereotypical responses. This talk presents a unified framework for simulating distinct human participants at scale by conditioning LLMs on coherent virtual personas constructed through narrative backstories. I first introduce the Anthology method for generating backstories that capture demographic traits, experiences, beliefs, and values implicitly encoded in the pretraining corpus, enabling models to reproduce human survey response distributions with greater consistency. I then show how extending backstories to represent identity and group perception allows models to capture realistic in-group and out-group perceptions as well as meta-perceptions. Finally, I describe how temporal contextualization and consistency reinforcement enable LLMs to exhibit cooperative and strategic decision patterns in social-dilemma games. Together, these results outline a scalable and grounded approach to human simulation.
Attend in person or use the web link to connect remotely
Friday, 11/21/25
Cost: Free
