Abstract
Under what conditions would an artificially intelligent system have
wellbeing? Despite its obvious bearing on the ethics of human interactions with
artificial systems, this question has received little attention. Because all
major theories of wellbeing hold that an individual's welfare level is
partially determined by their mental life, we begin by considering whether
artificial systems have mental states. We show that a wide range of theories of
mental states, when combined with leading theories of wellbeing, predict that
certain existing artificial systems have wellbeing. While we do not claim to
demonstrate conclusively that AI systems have wellbeing, we argue that our
metaphysical and moral uncertainty about AI wellbeing requires us to dramatically
reassess our relationship with the intelligent systems we create.
Abstract
Artificial intelligence promises to accelerate scientific discovery, yet its
benefits remain unevenly distributed. While technical obstacles such as scarce
data, fragmented standards, and unequal access to computation are significant,
we argue that the primary barriers are social and institutional. Narratives
that defer progress to speculative "AI scientists," the undervaluing of data
and infrastructure contributions, misaligned incentives, and gaps between
domain experts and machine learning researchers all constrain impact. We
highlight four interconnected challenges: community dysfunction, research
priorities misaligned with upstream needs, data fragmentation, and
infrastructure inequities. We argue that their roots lie in cultural and
organizational practices. Addressing them requires not only technical
innovation but also intentional community-building, cross-disciplinary
education, shared benchmarks, and accessible infrastructure. We call for
reframing AI for science as a collective social project, where sustainable
collaboration and equitable participation are treated as prerequisites for
technical progress.
AI Insights
- Democratizing advanced cyberinfrastructure unlocks responsible AI research across global labs.
- Only 5% of Africa's AI talent accesses sufficient compute, underscoring regional inequity.
- Pre-trained transformer models now generate multi-omics, multi-species, multi-tissue samples.
- Quantization-aware training yields efficient neural PDE solvers showcased at recent conferences.
- The FAIR Guiding Principles guide scientific data stewardship, enhancing reproducibility.
- MAGE-TAB's spreadsheet-based format standardizes microarray data for seamless sharing.
- Resources like The Human Cell Atlas and pymatgen empower interdisciplinary materials-genomics research.
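As an illustration of the accessible tooling the last bullet points to, here is a minimal sketch of using pymatgen (the Python Materials Genomics library) to build and inspect a crystal structure. The CsCl geometry, the 4.2 Å lattice parameter, and the printed properties are illustrative choices for this sketch, not anything specified by the papers above.

    # Minimal pymatgen sketch: build a CsCl-type cubic structure and inspect it.
    # Requires `pip install pymatgen`; values below are illustrative assumptions.
    from pymatgen.core import Lattice, Structure

    # Simple cubic lattice with an illustrative 4.2 Angstrom edge length.
    lattice = Lattice.cubic(4.2)

    # Two-atom basis in fractional coordinates: Cs at the corner, Cl at the body center.
    structure = Structure(lattice, ["Cs", "Cl"], [[0.0, 0.0, 0.0], [0.5, 0.5, 0.5]])

    print(structure.composition.reduced_formula)   # -> "CsCl"
    print(structure.volume)                        # cell volume in cubic Angstroms
    print(structure.get_space_group_info())        # (spacegroup symbol, number)

Sketches like this are the sort of low-barrier entry point the abstract argues should be paired with shared benchmarks and accessible infrastructure so that domain experts, not only machine learning researchers, can participate.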