Université de Montréal
Abstract
Artificial intelligence systems increasingly mediate knowledge,
communication, and decision making. Development and governance remain
concentrated within a small set of firms and states, raising concerns that
technologies may encode narrow interests and limit public agency. Capability
benchmarks for language, vision, and coding are common, yet public, auditable
measures of pluralistic governance are rare. We define AI pluralism as the
degree to which affected stakeholders can shape objectives, data practices,
safeguards, and deployment. We present the AI Pluralism Index (AIPI), a
transparent, evidence-based instrument that evaluates producers and system
families across four pillars: participatory governance, inclusivity and
diversity, transparency, and accountability. AIPI codes verifiable practices
from public artifacts and independent evaluations, explicitly handling
"Unknown" evidence to report both lower-bound ("evidence") and known-only
scores with coverage. We formalize the measurement model; implement a
reproducible pipeline that integrates structured web and repository analysis,
external assessments, and expert interviews; and assess reliability with
inter-rater agreement, coverage reporting, cross-index correlations, and
sensitivity analysis. The protocol, codebook, scoring scripts, and evidence
graph are maintained openly with versioned releases and a public adjudication
process. We report pilot provider results and situate AIPI relative to adjacent
transparency, safety, and governance frameworks. The index aims to steer
incentives toward pluralistic practice and to equip policymakers, procurers,
and the public with comparable evidence.
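The abstract's dual scoring, a lower-bound "evidence" score that treats "Unknown" as unverified alongside a known-only score with coverage, can be sketched as follows. This is a minimal illustration under assumed conventions (binary indicator coding, flat averaging, the name `score_pillar`); it is not the paper's actual codebook or weighting.

```python
# Hypothetical sketch of AIPI-style dual scoring. Each indicator is coded
# 1 (practice verified), 0 (practice absent), or None ("Unknown" evidence).
# Flat averaging and all names are assumptions, not the published protocol.

def score_pillar(indicators):
    """Return (evidence, known_only, coverage) for one pillar.

    evidence:   lower-bound score; Unknown counts as 0 (not verified).
    known_only: mean over indicators with known evidence, or None if all Unknown.
    coverage:   fraction of indicators with known (non-Unknown) evidence.
    """
    n = len(indicators)
    known = [v for v in indicators if v is not None]
    evidence = sum(known) / n if n else 0.0
    known_only = sum(known) / len(known) if known else None
    coverage = len(known) / n if n else 0.0
    return evidence, known_only, coverage

# Five indicators for one pillar, two of them Unknown
e, k, c = score_pillar([1, 0, None, 1, None])
# e = 0.4 (lower bound), k = 2/3 (known-only), c = 0.6 (coverage)
```

Reporting both scores with coverage keeps providers from benefiting by withholding evidence: opacity lowers the evidence score and the coverage figure rather than being averaged away.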
AI Insights
- Model cards can help close the AI accountability gap by transparently reporting model behavior.
- OECD AI Recommendation pushes for human‑centered, explainable, and fair AI.
- UNESCO Ethics Recommendation embeds human values to turn AI into societal good.
- HELM from Stanford’s CRFM holistically benchmarks language models on safety and impact.
- NIST AI RMF offers a risk‑management cycle for responsible AI governance.
- WCAG 2.2 ensures AI interfaces are accessible to users with disabilities.
- Krippendorff’s content‑analysis method quantifies stakeholder participation in AI governance.
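Since the abstract cites inter-rater agreement and the insights list points to Krippendorff's content-analysis method, a compact sketch of Krippendorff's alpha for nominal codes may be useful. The implementation below follows the standard coincidence-matrix formulation; the function name and the toy two-rater data are illustrative assumptions, not the paper's pipeline.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha for nominal data.

    units: list of per-unit rating lists; None marks a missing rating.
    Units with fewer than two ratings are not pairable and are skipped.
    """
    units = [[v for v in u if v is not None] for u in units]
    units = [u for u in units if len(u) >= 2]
    o = Counter()  # coincidence matrix over ordered category pairs
    for u in units:
        m = len(u)
        for a, b in permutations(range(m), 2):
            o[(u[a], u[b])] += 1.0 / (m - 1)
    n = sum(o.values())  # total pairable values
    cats = {c for pair in o for c in pair}
    n_c = {c: sum(o[(c, k)] for k in cats) for c in cats}
    # Observed vs. expected disagreement (off-diagonal mass)
    d_o = sum(v for (c, k), v in o.items() if c != k) / n
    d_e = sum(n_c[c] * n_c[k] for c in cats for k in cats if c != k) / (n * (n - 1))
    return 1.0 - d_o / d_e

# Two hypothetical raters coding four evidence items (categories "a"/"b")
alpha = krippendorff_alpha_nominal([["a", "a"], ["a", "a"], ["b", "b"], ["b", "a"]])
# alpha = 8/15, i.e. about 0.533
```

Alpha of 1 indicates perfect agreement and 0 indicates agreement at chance level, which is why indices like AIPI report it alongside coverage when validating human coding.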
University of Oslo
Abstract
Digital technologies are transforming democratic life in conflicting ways.
This article bridges two perspectives to unpack these tensions. First, we
present an original survey of software developers in Silicon Valley,
interrogating how coder worldviews, ethics, and workplace cultures shape the
democratic potential and social impact of the technologies they build. Results
indicate that while most developers recognize the power of their products to
influence civil liberties and political discourse, they often face ethical
dilemmas and top-down pressures that can lead to design choices undermining
democratic ideals. Second, we critically investigate these findings in the
context of an emerging new digital divide, not of internet access but of
information quality. We examine the survey findings in the context of the
Slop Economy, in which billions of users unable to pay for high-quality content
experience an internet dominated by low-quality, AI-generated ad-driven
content. We find a reinforcing cycle between tech creator beliefs and the
digital ecosystems they spawn. We discuss implications for democratic
governance, arguing for ethically informed design and policy interventions
that help bridge this new divide, so that technological innovation supports
rather than subverts democratic values in the next chapter of the digital age.
AI Insights
- The Slop Economy shows billions consuming low‑quality AI ads, widening the information gap.
- Coders report top‑down pressures that push designs away from democratic ideals.
- A reinforcing loop links coder worldviews, platform design, and user beliefs.
- Link‑recommendation algorithms turn feeds into echo chambers, amplifying polarization.
- Responsible innovation must prioritize human values over profit, reshaping engineering ethics.
- Participatory democracy and civic literacy are key to countering AI‑generated misinformation.