Abstract
Artificial intelligence (AI) is a digital technology that will be of major
importance for the development of humanity in the near future. AI has raised
fundamental questions about what we should do with such systems, what the
systems themselves should do, what risks they involve, and how we can control
them.
After sketching the background of the field (1), this article introduces the
main debates (2), beginning with ethical issues that arise with AI systems as
objects, i.e. tools made and used by humans; the main sections here are
privacy (2.1), manipulation (2.2), opacity (2.3), bias (2.4), autonomy &
responsibility (2.6), and the singularity (2.7). We then look at AI systems as
subjects, i.e. cases where ethics is for the AI systems themselves, in machine
ethics (2.8) and artificial moral agency (2.9). Finally, we look at future
developments and the concept of AI (3). For each section within these themes,
we provide a general explanation of the ethical issues, outline existing
positions and arguments, analyse how these play out with current technologies,
and finally consider what policy consequences may be drawn.
Abstract
This paper argues that a techno-philosophical reading of the EU AI Act
provides insight into the long-term dynamics of data in AI systems,
specifically, how the lifecycle from ingestion to deployment generates
recursive value chains that challenge existing frameworks for Responsible AI.
We introduce a conceptual tool to frame the AI pipeline, spanning data,
training regimes, architectures, feature stores, and transfer learning. Using
cross-disciplinary methods, we develop a technically grounded and
philosophically coherent analysis of regulatory blind spots. Our central claim
is that what remains absent from policymaking is an account of the dynamic of
becoming that underpins both the technical operation and economic logic of AI.
To address this, we advance a formal reading of AI inspired by Simondonian
philosophy of technology, reworking his concept of individuation to model the
AI lifecycle, including the pre-individual milieu, individuation, and
individuated AI. To operationalise these ideas, we introduce the concept of
futurity: the self-reinforcing lifecycle of AI, in which more data enhances
performance, deepens personalisation, and expands application domains.
Futurity highlights the
recursively generative, non-rivalrous nature of data, underpinned by
infrastructures like feature stores that enable feedback, adaptation, and
temporal recursion. Our intervention foregrounds escalating power asymmetries,
particularly those of the tech oligarchy, whose infrastructures of capture,
training, and deployment concentrate value and decision-making. We argue that
effective
regulation must address these infrastructural and temporal dynamics, and
propose measures including lifecycle audits, temporal traceability, feedback
accountability, recursion transparency, and a right to contest recursive reuse.