01

Political and Cultural Divides in Knowledge Systems

Even as technologies connect us more tightly, political and cultural divides deepen and communities increasingly consume separate information worlds. I study the social processes through which scientific findings and public information are bent, filtered, and reinterpreted as they move across these divides: on social media platforms, in policy discourse, and in the institutions that claim to represent evidence.

Two strands shape this line of work. The first asks how online systems structure exposure to disagreement: when a fact-check travels person-to-person, it often hardens echo chambers, but when it emerges from ideologically diverse peer review, as in Twitter's Community Notes, it widens the information diets of the people it targets. The second asks what happens when scientific knowledge enters politics: by tracing more than a million citations from U.S. government, think tank, and international reports back to the research they cite, I measure how selectively and how faithfully ideological institutions represent science, and how these distortions propagate through a network of mutually reinforcing citations.

Representative Work

02

AI as Mediators and Agents of Knowledge

Large language models are no longer only tools. They talk, persuade, disagree, and increasingly act as social participants, with humans and with one another. I study how these systems encode perspective, how their participation reshapes human judgment, and what emerges when many AI agents reason together.

On the persuasion side, my collaborators and I find that most frontier LLMs carry a left-leaning political bias that measurably shifts voter preferences. Yet when AI is explicitly designed to disagree, users become more accurate, more deliberative, and less trusting of the machine: a productive friction.

On the mechanistic side, I look inside the models themselves. Political perspective turns out to live as a single, adjustable direction in their internal representations, one we can inspect and steer directly. I also study reasoning itself as a social process: we find that today's most capable AI models solve problems by staging internal conversations among distinct “voices” with different perspectives and areas of expertise, a form of group deliberation that emerged on its own rather than by design.
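A minimal numpy sketch of what steering along such a direction can look like, with synthetic vectors standing in for real model activations (all names, dimensions, and data here are illustrative, not the actual method): the direction is estimated as a difference of mean activations between two contrasting prompt sets, normalized, and then added at adjustable strength to a hidden state.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hidden-state dimensionality (illustrative)

# Synthetic mean activations for prompts written from two contrasting
# perspectives; real work would record these from a model's hidden states.
acts_a = rng.normal(size=(100, d)) + 1.0  # perspective A
acts_b = rng.normal(size=(100, d)) - 1.0  # perspective B

# Candidate "perspective direction": difference of means, unit-normalized.
direction = acts_a.mean(axis=0) - acts_b.mean(axis=0)
direction /= np.linalg.norm(direction)

def steer(hidden, alpha):
    """Shift a hidden state along the perspective direction by strength alpha."""
    return hidden + alpha * direction

h = rng.normal(size=d)
proj_before = h @ direction
proj_after = steer(h, alpha=2.0) @ direction
# Steering moves the state's projection onto the direction by exactly alpha,
# leaving components orthogonal to the direction untouched.
```

Because the direction is one-dimensional, the intervention is both inspectable (project any state onto it) and adjustable (vary alpha continuously in either direction).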

Representative Work

03

AI for Social Science

If language models can simulate human perspectives and parse unstructured text at scale, can they extend the empirical reach of social science? My third line of work treats AI not as an object of study but as an instrument, one that, used carefully, recovers what surveys never asked, scales measurements that once required hand coding, and surfaces systematic biases in whose opinions are easy to infer.

Our AI-augmented survey models learn embeddings over questions, respondents, and historical periods, and use them to reconstruct long-run public opinion on issues before they entered the survey instrument: they recover, for example, the decades-long rise in support for same-sex marriage years before pollsters began to ask about it. These methods come with a warning as much as a promise: predictability is not uniform. Highly educated, high-income, and White respondents are systematically easier to model than lower-income and minority respondents, a finding that marks the boundaries of AI-augmented social science.
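As a toy illustration of the general idea, not the actual model: if opinions form an approximately low-rank respondent-by-question matrix, embeddings fitted only to the observed cells can impute the cells that were never asked. The sketch below uses synthetic data and illustrative names, fitting respondent and question embeddings by alternating least squares and then predicting the held-out entries.

```python
import numpy as np

rng = np.random.default_rng(1)
n_resp, n_q, k = 40, 12, 2  # respondents, questions, embedding rank

# Synthetic ground truth: low-rank "opinion" matrix from latent embeddings.
U_true = rng.normal(size=(n_resp, k))
V_true = rng.normal(size=(n_q, k))
M = U_true @ V_true.T

# Only ~70% of (respondent, question) cells are observed; the rest stand in
# for questions a respondent was never asked.
mask = rng.random((n_resp, n_q)) < 0.7

# Alternating least squares: fix question embeddings, solve for each
# respondent's embedding on its observed answers, then swap roles.
U = rng.normal(scale=0.1, size=(n_resp, k))
V = rng.normal(scale=0.1, size=(n_q, k))
for _ in range(25):
    for i in range(n_resp):
        obs = mask[i]
        U[i] = np.linalg.lstsq(V[obs], M[i, obs], rcond=None)[0]
    for j in range(n_q):
        obs = mask[:, j]
        V[j] = np.linalg.lstsq(U[obs], M[obs, j], rcond=None)[0]

# Impute the unobserved cells and score the reconstruction.
heldout = ~mask
pred = U @ V.T
rmse_heldout = np.sqrt(np.mean((pred[heldout] - M[heldout]) ** 2))
```

In this noiseless toy setting the held-out cells are recovered almost exactly; the unevenness in real-world predictability noted above appears only once respondents differ in how well the shared embedding space fits them.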

Representative Work