Hi j34nc4rl0+ai_ethics,

Here are our personalized paper recommendations for you, sorted by relevance.
AI Fairness
Abstract
Current AI systems minimize risk by enforcing ideological neutrality, yet this may introduce automation bias by suppressing cognitive engagement in human decision-making. We conducted randomized trials with 2,500 participants to test whether culturally biased AI enhances human decision-making. Participants interacted with politically diverse GPT-4o variants on information evaluation tasks. Partisan AI assistants enhanced human performance, increased engagement, and reduced evaluative bias compared to non-biased counterparts, with amplified benefits when participants encountered opposing views. These gains carried a trust penalty: participants underappreciated biased AI and overcredited neutral systems. Exposing participants to two AIs whose biases flanked human perspectives closed the perception-performance gap. These findings complicate conventional wisdom about AI neutrality, suggesting that strategic integration of diverse cultural biases may foster improved and resilient human decision-making.
Abstract
Uncertainty in artificial intelligence (AI) predictions poses urgent legal and ethical challenges for AI-assisted decision-making. We examine two algorithmic interventions that act as guardrails for human-AI collaboration: selective abstention, which withholds high-uncertainty predictions from human decision-makers, and selective friction, which delivers those predictions together with salient warnings or disclosures that slow the decision process. Research has shown that selective abstention based on uncertainty can inadvertently exacerbate disparities and disadvantage under-represented groups that disproportionately receive uncertain predictions. In this paper, we provide the first integrated socio-technical and legal analysis of uncertainty-based algorithmic interventions. Through two case studies, AI-assisted consumer credit decisions and AI-assisted content moderation, we demonstrate how the seemingly neutral use of uncertainty thresholds can trigger discriminatory impacts. We argue that, although both interventions pose risks of unlawful discrimination under UK law, selective friction offers a promising pathway toward fairer and more accountable AI-assisted decision-making by preserving transparency and encouraging more cautious human judgment.
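To make the two guardrails concrete, here is a minimal sketch of our own (not code from the paper) showing how an uncertainty threshold might route a single AI recommendation either to abstention or to a friction-laden disclosure; the threshold value, data fields, and warning text are illustrative assumptions.

```python
# Minimal sketch (illustration only, not code from the paper): routing a single
# AI recommendation through either guardrail based on an uncertainty threshold.
from dataclasses import dataclass

UNCERTAINTY_THRESHOLD = 0.2  # hypothetical cut-off; a real system would calibrate this


@dataclass
class Recommendation:
    label: str          # e.g. "approve" / "deny" in the consumer-credit case study
    uncertainty: float  # model's estimated predictive uncertainty in [0, 1]


def selective_abstention(rec: Recommendation):
    """Withhold high-uncertainty predictions from the human decision-maker."""
    if rec.uncertainty > UNCERTAINTY_THRESHOLD:
        return None  # the human decides without any AI recommendation
    return rec.label


def selective_friction(rec: Recommendation):
    """Always deliver the prediction, but attach a salient warning when uncertain."""
    if rec.uncertainty > UNCERTAINTY_THRESHOLD:
        return rec.label, "WARNING: low-confidence prediction, review carefully"
    return rec.label, None


uncertain_case = Recommendation(label="deny", uncertainty=0.35)
print(selective_abstention(uncertain_case))  # None: prediction withheld
print(selective_friction(uncertain_case))    # ('deny', 'WARNING: ...')
```

Even at this scale the contrast the paper draws is visible: abstention removes information from the human decision-maker, while friction preserves it but slows and flags the decision.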
AI Transparency
Abstract
As AI becomes more "agentic," it faces technical and socio-legal issues it must address if it is to fulfill its promise of increased economic productivity and efficiency. This paper uses technical and legal perspectives to explain how things change when AI systems start being able to directly execute tasks on behalf of a user. We show how technical conceptions of agents track some, but not all, socio-legal conceptions of agency. That is, both computer science and the law recognize the problems of under-specification for an agent, and both disciplines have robust conceptions of how to ensure an agent does what the programmer, or in the law, the principal, desires and no more. However, to date, computer science has under-theorized issues related to questions of loyalty and to third parties that interact with an agent, both of which are central parts of the law of agency. First, we examine the correlations between implied authority in agency law and the principle of value-alignment in AI, wherein AI systems must operate under imperfect objective specification. Second, we reveal gaps in the current computer science view of agents pertaining to the legal concepts of disclosure and loyalty, and how failure to account for them can result in unintended effects in AI e-commerce agents. In surfacing these gaps, we show a path forward for responsible AI agent development and deployment.
Abstract
Multimodal Large Language Models (MLLMs) hold immense promise as assistive technologies for the blind and visually impaired (BVI) community. However, we identify a critical failure mode that undermines their trustworthiness in real-world applications. We introduce the Escalator Problem -- the inability of state-of-the-art models to perceive an escalator's direction of travel -- as a canonical example of a deeper limitation we term Implicit Motion Blindness. This blindness stems from the dominant frame-sampling paradigm in video understanding, which, by treating videos as discrete sequences of static images, fundamentally struggles to perceive continuous, low-signal motion. As a position paper, our contribution is not a new model but rather to: (I) formally articulate this blind spot, (II) analyze its implications for user trust, and (III) issue a call to action. We advocate for a paradigm shift from purely semantic recognition towards robust physical perception and urge the development of new, human-centered benchmarks that prioritize safety, reliability, and the genuine needs of users in dynamic environments.
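To make the failure mode concrete, here is a deliberately simplified sketch of our own (not the paper's code or benchmark): once a clip is reduced to a handful of uniformly sampled still frames and the continuous inter-frame motion is set aside, an upward and a downward escalator can yield the same collection of frame appearances. The clip, labels, and sampling settings below are all hypothetical.

```python
# Deliberately simplified sketch (ours, not the paper's): a clip reduced to a few
# uniformly sampled still frames loses the direction of slow, periodic motion.

def sample_frames(video_frames, num_sampled=8):
    """Uniformly sample a handful of frames, as many MLLM pipelines do."""
    n = len(video_frames)
    idx = [round(i * (n - 1) / (num_sampled - 1)) for i in range(num_sampled)]
    return [video_frames[i] for i in idx]

# Synthetic "video": each frame is just a label for the visible step pattern.
# Playing the clip backwards is the same escalator travelling the other way.
escalator_up = [f"step_pattern_{i % 10}" for i in range(120)]  # hypothetical clip
escalator_down = list(reversed(escalator_up))

up_frames = sample_frames(escalator_up)
down_frames = sample_frames(escalator_down)

# As collections of static images the two clips are indistinguishable; only the
# ordering and the continuous inter-frame motion carry the direction signal.
print(sorted(up_frames) == sorted(down_frames))  # True
```

Real pipelines do retain frame order, but the paper's point is that sparse static sampling preserves little of the continuous, low-signal motion needed to recover direction reliably.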
AI Ethics
Abstract
As AI systems increasingly permeate everyday life, designers and developers face mounting pressure to balance innovation with ethical design choices. To date, the operationalisation of AI ethics has predominantly depended on frameworks that prescribe which ethical principles should be embedded within AI systems. However, the extent to which users value these principles remains largely unexplored in the existing literature. In a discrete choice experiment conducted in four countries, we quantify user preferences for 11 ethical principles. Our findings indicate that, while users generally prioritise privacy, justice & fairness, and transparency, their preferences exhibit significant variation based on culture and application context. Latent class analysis further revealed four distinct user cohorts, the largest of which is ethically disengaged and defers to regulatory oversight. Our findings offer (1) empirical evidence of uneven user prioritisation of AI ethics principles, (2) actionable guidance for operationalising ethics tailored to culture and context, (3) support for the development of robust regulatory mechanisms, and (4) a foundation for advancing a user-centred approach to AI ethics, motivated independently from abstract moral theory.
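As a rough illustration of how a discrete choice experiment quantifies such preferences, here is a sketch of our own under a standard multinomial logit assumption (not the study's actual model or data); the part-worth numbers and profiles are hypothetical.

```python
# Rough sketch (ours, not the study's model or data): a multinomial logit turns
# part-worth utilities for ethical principles into choice probabilities.
import numpy as np

# Hypothetical part-worths for three of the eleven principles.
part_worths = {"privacy": 0.9, "justice_fairness": 0.7, "transparency": 0.5}

# Two AI-system profiles shown in one choice task (1 = principle is guaranteed).
profile_a = {"privacy": 1, "justice_fairness": 0, "transparency": 1}
profile_b = {"privacy": 0, "justice_fairness": 1, "transparency": 1}


def utility(profile):
    return sum(part_worths[attr] * level for attr, level in profile.items())


u = np.array([utility(profile_a), utility(profile_b)])
probs = np.exp(u) / np.exp(u).sum()  # softmax over profile utilities
print(dict(zip(["profile_a", "profile_b"], probs.round(3).tolist())))
# In the experiment itself, part-worths are estimated from respondents' observed
# choices, and latent class analysis lets them differ across user cohorts.
```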
Abstract
Public-sector bureaucracies seek to reap the benefits of artificial intelligence (AI), but face important concerns about accountability and transparency when using AI systems. In particular, perception or actuality of AI agency might create ethics sinks - constructs that facilitate dissipation of responsibility when AI systems of disputed moral status interface with bureaucratic structures. Here, we reject the notion that ethics sinks are a necessary consequence of introducing AI systems into bureaucracies. Rather, where they appear, they are the product of structural design decisions across both the technology and the institution deploying it. We support this claim via a systematic application of conceptions of moral agency in AI ethics to Weberian bureaucracy. We establish that it is both desirable and feasible to render AI systems as tools for the generation of organizational transparency and legibility, which continue the processes of Weberian rationalization initiated by previous waves of digitalization. We present a three-point Moral Agency Framework for legitimate integration of AI in bureaucratic structures: (a) maintain clear and just human lines of accountability, (b) ensure humans whose work is augmented by AI systems can verify the systems are functioning correctly, and (c) introduce AI only where it doesn't inhibit the capacity of bureaucracies towards either of their twin aims of legitimacy and stewardship. We suggest that AI introduced within this framework can not only improve efficiency and productivity while avoiding ethics sinks, but also improve the transparency and even the legitimacy of a bureaucracy.
AI Bias
Abstract
The rapid development of AI tools and the implementation of LLMs within downstream tasks have been paralleled by a surge in research exploring how the outputs of such AI/LLM systems embed biases, a research topic which was already being extensively explored before the era of ChatGPT. Given the high volume of research around the biases within the outputs of AI systems and LLMs, it is imperative to conduct systematic literature reviews to document throughlines within such research. In this paper, we conduct such a review of research covering AI/LLM bias in four premier venues/organizations -- *ACL, FAccT, NeurIPS, and AAAI -- published over the past 10 years. Across 189 papers, we uncover patterns in bias research and the axes of human identity on which it commonly focuses. The first emergent pattern within the corpus was that 82% (155/189) of papers did not establish a working definition of "bias" for their purposes, opting instead to simply state that biases and stereotypes exist and can have harmful downstream effects, while offering only mathematical and technical definitions of bias. 94 of these 155 papers were published in the past 5 years, after Blodgett et al. (2020)'s literature review, which reported a similar finding for NLP research and recommended that researchers conceptualize bias beyond strictly technical definitions. Furthermore, we find that a large majority of papers -- 79.9%, or 151/189 -- focus on gender bias (mostly gender and occupation bias) within the outputs of AI systems and LLMs. By documenting the field's strong focus on gender, race/ethnicity (30.2%; 57/189), age (20.6%; 39/189), religion (19.1%; 36/189) and nationality (13.2%; 25/189) bias, we show that researchers adopt a fairly narrow conception of AI bias that overlooks several non-Western communities, and we advocate for stronger coverage of such populations in fairness research. Finally, we note that while our corpus contains several examples of innovative debiasing methods across the aforementioned aspects of human identity, only 10.6% (20/189) of papers include recommendations for how to implement their findings or contributions in real-world AI systems or design processes. This indicates a concerning academia-industry gap, especially since many of the biases for which our corpus contains successful mitigation methods still persist in the outputs of AI systems and LLMs commonly used today. We conclude with recommendations for future AI/LLM fairness research, with a stronger focus on diverse marginalized populations.
Unsubscribe from these updates