Aalto University, Kobe University
Abstract
We review the historical development and current trends of artificially
intelligent agents (agentic AI) in the social and behavioral sciences: from the
first programmable computers, and social simulations soon thereafter, to
today's experiments with large language models. This overview emphasizes the
role of AI in the scientific process and the changes brought about, both
through technological advancements and the broader evolution of science from
around 1950 to the present. Specific points we cover include the challenges of
presenting the first social simulation studies to a world unaware of computers,
the rise of social systems science, intelligent game-theoretic agents, the age
of big data and the epistemic upheaval in its wake, and the current enthusiasm
around applications of generative AI, among other topics. A pervasive theme is
how deeply entwined we are with the technologies we use to understand
ourselves.
AI Insights - LLMs now act as adaptive interviewers, tailoring survey questions in real time.
- Empirical studies show LLM prompts can shift spoken communication, revealing AI-mediated influence.
- The replication crisis in language-model behavior research drives new safeguards and transparent benchmarks.
- Computational social science uses LLM embeddings to map large-scale discourse networks with unprecedented detail.
- LLMs inherit systemic biases, demanding rigorous audit frameworks before policy use.
- Waldrop's "Complexity" and Wiener's "The Human Use of Human Beings" frame AI agents' socio-technical dynamics.
- Future research must balance LLMs' discovery speed with ethical risks, calling for interdisciplinary governance.
Université de Montréal
Abstract
Artificial intelligence systems increasingly mediate knowledge,
communication, and decision making. Development and governance remain
concentrated within a small set of firms and states, raising concerns that
technologies may encode narrow interests and limit public agency. Capability
benchmarks for language, vision, and coding are common, yet public, auditable
measures of pluralistic governance are rare. We define AI pluralism as the
degree to which affected stakeholders can shape objectives, data practices,
safeguards, and deployment. We present the AI Pluralism Index (AIPI), a
transparent, evidence-based instrument that evaluates producers and system
families across four pillars: participatory governance, inclusivity and
diversity, transparency, and accountability. AIPI codes verifiable practices
from public artifacts and independent evaluations, explicitly handling
"Unknown" evidence to report both lower-bound ("evidence") and known-only
scores with coverage. We formalize the measurement model; implement a
reproducible pipeline that integrates structured web and repository analysis,
external assessments, and expert interviews; and assess reliability with
inter-rater agreement, coverage reporting, cross-index correlations, and
sensitivity analysis. The protocol, codebook, scoring scripts, and evidence
graph are maintained openly with versioned releases and a public adjudication
process. We report pilot provider results and situate AIPI relative to adjacent
transparency, safety, and governance frameworks. The index aims to steer
incentives toward pluralistic practice and to equip policymakers, procurers,
and the public with comparable evidence.
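The abstract's handling of "Unknown" evidence implies a simple scoring scheme: a lower-bound ("evidence") score that treats unverified items as absent, a known-only score over items with evidence, and a coverage figure. A minimal sketch of one way such scoring could work, assuming binary practice codes per pillar with `None` marking "Unknown" (the function name and exact aggregation are illustrative, not taken from the AIPI codebook):

```python
# Hypothetical sketch of lower-bound ("evidence") vs known-only pillar scoring
# with coverage reporting; None marks an "Unknown" (unverifiable) practice code.
from typing import Optional, Sequence


def score_pillar(codes: Sequence[Optional[int]]) -> dict:
    """Score one pillar from binary practice codes (1 = verified, 0 = absent,
    None = Unknown)."""
    n = len(codes)
    known = [c for c in codes if c is not None]
    return {
        # Lower bound: Unknown counts as 0, so the score can only rise
        # as more evidence becomes public.
        "evidence": sum(known) / n,
        # Known-only: average over items with any evidence at all.
        "known_only": sum(known) / len(known) if known else float("nan"),
        # Coverage: share of items that could be coded from public artifacts.
        "coverage": len(known) / n,
    }


# Example: 3 practices verified, 1 verifiably absent, 2 Unknown.
# evidence = 3/6 = 0.5, known_only = 3/4 = 0.75, coverage = 4/6 ≈ 0.667
print(score_pillar([1, 1, 1, 0, None, None]))
```

Reporting both scores makes the incentive explicit: an opaque producer gets a low evidence score even if its known practices look good, so publishing artifacts raises coverage and closes the gap between the two numbers.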
AI Insights - Imagine model cards closing the AI accountability gap by transparently reporting model behavior.
- OECD AI Recommendation pushes for human-centered, explainable, and fair AI.
- UNESCO Ethics Recommendation embeds human values to turn AI into societal good.
- HELM from Stanford's CRFM holistically benchmarks language models on safety and impact.
- NIST AI RMF offers a risk-management cycle for responsible AI governance.
- WCAG 2.2 ensures AI interfaces are accessible to users with disabilities.
- Krippendorff's content-analysis method quantifies stakeholder participation in AI governance.