Université de Montréal
Abstract
Artificial intelligence systems increasingly mediate knowledge,
communication, and decision making. Development and governance remain
concentrated within a small set of firms and states, raising concerns that
these technologies may encode narrow interests and limit public agency. Capability
benchmarks for language, vision, and coding are common, yet public, auditable
measures of pluralistic governance are rare. We define AI pluralism as the
degree to which affected stakeholders can shape objectives, data practices,
safeguards, and deployment. We present the AI Pluralism Index (AIPI), a
transparent, evidence-based instrument that evaluates producers and system
families across four pillars: participatory governance, inclusivity and
diversity, transparency, and accountability. AIPI codes verifiable practices
from public artifacts and independent evaluations, explicitly handling
"Unknown" evidence to report both lower-bound ("evidence") and known-only
scores with coverage. We formalize the measurement model; implement a
reproducible pipeline that integrates structured web and repository analysis,
external assessments, and expert interviews; and assess reliability with
inter-rater agreement, coverage reporting, cross-index correlations, and
sensitivity analysis. The protocol, codebook, scoring scripts, and evidence
graph are maintained openly with versioned releases and a public adjudication
process. We report pilot provider results and situate AIPI relative to adjacent
transparency, safety, and governance frameworks. The index aims to steer
incentives toward pluralistic practice and to equip policymakers, procurers,
and the public with comparable evidence.
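
To make the scoring rule concrete, here is a minimal Python sketch of how a pillar's lower-bound "evidence" score, known-only score, and coverage can be computed when some indicators are coded "Unknown". The code labels and function name are illustrative assumptions, not the released AIPI scoring scripts.

    # Minimal sketch of AIPI-style pillar scoring with "Unknown" handling.
    # MET/NOT_MET/UNKNOWN and pillar_scores are hypothetical names.
    MET, NOT_MET, UNKNOWN = "met", "not_met", "unknown"

    def pillar_scores(codes):
        """Return (evidence_score, known_only_score, coverage) for one pillar."""
        n = len(codes)
        met = sum(c == MET for c in codes)
        known = sum(c != UNKNOWN for c in codes)
        evidence_score = met / n            # lower bound: Unknown counts as not met
        known_only = met / known if known else float("nan")
        coverage = known / n                # share of indicators with known evidence
        return evidence_score, known_only, coverage

    # Example: four indicators for one pillar, one of them lacking evidence.
    print(pillar_scores([MET, UNKNOWN, NOT_MET, MET]))  # (0.5, 0.666..., 0.75)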
AI Insights
- Model cards aim to close the AI accountability gap by transparently reporting model behavior.
- The OECD AI Recommendation calls for human‑centered, explainable, and fair AI.
- UNESCO's Recommendation on the Ethics of Artificial Intelligence embeds human values so that AI serves as a societal good.
- HELM from Stanford’s CRFM holistically benchmarks language models on safety and impact.
- NIST AI RMF offers a risk‑management cycle for responsible AI governance.
- WCAG 2.2 helps ensure AI interfaces are accessible to users with disabilities.
- Krippendorff’s content‑analysis method quantifies inter‑rater agreement when coding stakeholder participation in AI governance (a minimal sketch follows this list).
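
As referenced in the last item above, here is a self-contained Python sketch of Krippendorff's alpha for nominal codes, the kind of inter-rater agreement statistic the abstract mentions. The data layout and function name are illustrative assumptions, not AIPI's released tooling.

    from collections import Counter
    from itertools import permutations

    def krippendorff_alpha_nominal(units):
        """Krippendorff's alpha for nominal data with missing codes.

        `units` maps each coded unit (e.g., a provider-indicator pair) to
        the list of labels assigned by whichever raters coded it.
        """
        # Coincidence counts over ordered label pairs within each unit.
        pairs = Counter()
        for labels in units.values():
            m = len(labels)
            if m < 2:
                continue  # a unit coded once carries no agreement information
            for a, b in permutations(labels, 2):
                pairs[(a, b)] += 1.0 / (m - 1)

        totals = Counter()
        for (a, _), w in pairs.items():
            totals[a] += w
        n = sum(totals.values())

        # Nominal metric: disagreement iff the paired labels differ.
        d_obs = sum(w for (a, b), w in pairs.items() if a != b)
        d_exp = sum(totals[a] * totals[b]
                    for a in totals for b in totals if a != b) / (n - 1)
        return 1.0 if d_exp == 0 else 1.0 - d_obs / d_exp

    # Two raters agree on two units and split on a third: alpha ~= 0.545.
    print(krippendorff_alpha_nominal({
        "u1": ["met", "met"],
        "u2": ["unknown", "unknown"],
        "u3": ["met", "not_met"],
    }))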
Abstract
This is a skeptical overview of the literature on AI consciousness. We will
soon create AI systems that are conscious according to some influential,
mainstream theories of consciousness but are not conscious according to other
influential, mainstream theories of consciousness. We will not be in a position
to know which theories are correct and whether we are surrounded by AI systems
as richly and meaningfully conscious as human beings or instead only by systems
as experientially blank as toasters. None of the standard arguments either for
or against AI consciousness takes us far.
Table of Contents
Chapter One: Hills and Fog
Chapter Two: What Is Consciousness? What Is AI?
Chapter Three: Ten Possibly Essential Features of Consciousness
Chapter Four: Against Introspective and Conceptual Arguments for Essential
Features
Chapter Five: Materialism and Functionalism
Chapter Six: The Turing Test and the Chinese Room
Chapter Seven: The Mimicry Argument Against AI Consciousness
Chapter Eight: Global Workspace Theories and Higher Order Theories
Chapter Nine: Integrated Information, Local Recurrence, Associative Learning,
and Iterative Natural Kinds
Chapter Ten: Does Biological Substrate Matter?
Chapter Eleven: The Problem of Strange Intelligence
Chapter Twelve: The Leapfrog Hypothesis and the Social Semi-Solution