Abstract
As AI agents become more widely deployed, we are likely to see an increasing
number of incidents: events involving AI agent use that directly or indirectly
cause harm. For example, agents could be prompt-injected to exfiltrate private
information or make unauthorized purchases. Structured information about such
incidents (e.g., user prompts) can help us understand their causes and prevent
future occurrences. However, existing incident reporting processes are not
sufficient for understanding agent incidents. In particular, such processes are
largely based on publicly available data, which excludes useful, but
potentially sensitive, information such as an agent's chain of thought or
browser history. To inform emerging incident reporting processes, we propose
an incident analysis framework for agents. Drawing on systems safety
approaches, the framework identifies three types of factors that can cause
incidents: system-related (e.g., CBRN training data), contextual
(e.g., prompt injections), and cognitive (e.g., misunderstanding a user
request). We also identify specific information that could help clarify which
factors are relevant to a given incident: activity logs, system documentation
and access, and information about the tools an agent uses. We provide
recommendations for 1) what information incident reports should include and 2)
what information developers and deployers should retain and make available to
incident investigators upon request. As we transition to a world with more
agents, understanding agent incidents will become increasingly crucial for
managing risks.