Papers from 08 to 12 September 2025

Here are your personalized paper recommendations, sorted by relevance.
Democratic Processes
Abstract
How can voters induce politicians to put forth more proximate (in terms of preference) as well as credible platforms (in terms of promise fulfillment) under repeated elections? Building on the work of Aragones et al. (2007), I study how reputation and re-election concerns affect candidate behavior and its resultant effect on voters' beliefs and their consequent electoral decisions. I present a formal model where, instead of assuming voters to be naive, I tackle the question by completely characterizing a set of subgame-perfect equilibria by introducing non-naive (or strategic) voting behavior into the mix. I find that non-naive voting behavior, by using the candidate's reputation as an instrument of policy discipline after the election, aids in successfully inducing candidates to put forth their maximal incentive-compatible promise (among a range of such credible promises) in equilibrium. Through the credible threat of punishment in the form of loss of reputation for all future elections, non-naive voters gain a unanimous increase in expected utility relative to when they behave naively. In fact, comparative statics show that candidates who are more likely to win are more likely to keep their promises. In this framework, voters are not only able to bargain for more credible promises but also end up raising their expected future payoffs in equilibrium. Including such forms of strategic behavior thus reduces cheap talk by creating a credible electoral system where candidates do as they say once elected. Later, I present an analysis that includes limited punishment as a political accountability mechanism.
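A quick way to see the mechanism the abstract describes (the notation and functional form below are my own sketch, not the paper's model): with non-naive voters, a promise p is incentive-compatible when the candidate's one-shot gain from reneging is outweighed by the discounted value of the reputation that reneging destroys.

```latex
% Hedged sketch of an incentive-compatibility condition for a credible
% promise p; notation is illustrative, not taken from the paper.
\[
  \underbrace{u(\text{renege}) - u(\text{keep } p)}_{\text{one-shot gain from cheap talk}}
  \;\le\;
  \frac{\delta}{1-\delta}\,\pi\,\bigl(V_{\text{in office}} - V_{\text{no reputation}}\bigr)
\]
% delta: discount factor; pi: probability of winning future elections.
% The "maximal incentive-compatible promise" is the largest p for which
% this holds, and a higher pi relaxes the constraint -- matching the
% comparative static that likelier winners keep their promises more often.
```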
King's College London
Abstract
Large Language Models (LLMs) alignment methods have been credited with the commercial success of products like ChatGPT, given their role in steering LLMs towards user-friendly outputs. However, current alignment techniques predominantly mirror the normative preferences of a narrow reference group, effectively imposing their values on a wide user base. Drawing on theories of the power/knowledge nexus, this work argues that current alignment practices centralise control over knowledge production and governance within already influential institutions. To counter this, we propose decentralising alignment through three characteristics: context, pluralism, and participation. Furthermore, this paper demonstrates the critical importance of delineating the context-of-use when shaping alignment practices by grounding each of these features in concrete use cases. This work makes the following contributions: (1) highlighting the role of context, pluralism, and participation in decentralising alignment; (2) providing concrete examples to illustrate these strategies; and (3) demonstrating the nuanced requirements associated with applying alignment across different contexts of use. Ultimately, this paper positions LLM alignment as a potential site of resistance against epistemic injustice and the erosion of democratic processes, while acknowledging that these strategies alone cannot substitute for broader societal changes.
AI Insights
  • Decentralising alignment shifts control from elite institutions to diverse stakeholders, curbing epistemic injustice.
  • The paper grounds context, pluralism, and participation in concrete use cases, showing alignment must adapt to each scenario.
  • Using the power/knowledge nexus, it critiques centralized authority and proposes participatory methods as a countermeasure.
  • Recommended reading: “Human‑Machine Reconfigurations” and the “Handbook of Ethics, Values, and Technological Design”.
  • Key references: Ouyang et al.’s instruction‑following with human feedback and Padhi et al.’s value alignment from unstructured text.
  • Participatory AI: actively involving diverse communities in design, deployment, and governance of AI systems.
  • The study concludes decentralised strategies resist epistemic injustice but need broader reforms for lasting democratic alignment.
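As a toy illustration of the contextual-pluralism point above (the contexts, panels, and numbers are invented for illustration, not from the paper), alignment feedback can be pooled per context-of-use rather than collapsed into one global reward signal:

```python
# Hedged sketch of "pluralistic" preference aggregation across contexts of
# use. Contexts, weights, and ratings are illustrative assumptions.
from statistics import mean

def contextual_reward(ratings: dict[str, list[float]], context: str) -> float:
    """Aggregate stakeholder ratings for one context-of-use.

    Instead of a single global reward model (one reference group's values),
    each context keeps its own pool of participatory judgments.
    """
    if context not in ratings:
        raise KeyError(f"no participatory ratings collected for {context!r}")
    return mean(ratings[context])

# Illustrative: the same model output can score differently per context.
ratings = {
    "medical_triage": [0.2, 0.3, 0.1],    # clinicians and patients
    "creative_writing": [0.9, 0.8, 0.7],  # writers' community panel
}
print(contextual_reward(ratings, "medical_triage"))    # ~0.2
print(contextual_reward(ratings, "creative_writing"))  # ~0.8
```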
Political Philosophy
University of Sussex
Abstract
How should well-being be prioritised in society, and what trade-offs are people willing to make between fairness and personal well-being? We investigate these questions using a stated preference experiment with a nationally representative UK sample (n = 300), in which participants evaluated life satisfaction outcomes for both themselves and others under conditions of uncertainty. Individual-level utility functions were estimated using an Expected Utility Maximisation (EUM) framework and tested for sensitivity to the overweighting of small probabilities, as characterised by Cumulative Prospect Theory (CPT). A majority of participants displayed concave (risk-averse) utility curves and showed stronger aversion to inequality in societal life satisfaction outcomes than to personal risk. These preferences were unrelated to political alignment, suggesting a shared normative stance on fairness in well-being that cuts across ideological boundaries. The results challenge the use of average life satisfaction as a policy metric and support the development of nonlinear, utility-based alternatives that more accurately reflect collective human values. Implications for public policy, well-being measurement, and the design of value-aligned AI systems are discussed.
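A minimal sketch of the two modelling ingredients the abstract names, with textbook parameter values rather than the paper's estimates: a concave power utility (risk aversion) and the Tversky-Kahneman probability-weighting function used in CPT to capture overweighting of small probabilities.

```python
# Hedged sketch: concave (risk-averse) utility plus Tversky-Kahneman
# probability weighting. Parameter values are illustrative defaults,
# not estimates from the paper.
import numpy as np

def utility(x: np.ndarray, alpha: float = 0.7) -> np.ndarray:
    """Power utility; alpha < 1 gives a concave, risk-averse curve."""
    return np.power(x, alpha)

def cpt_weight(p: np.ndarray, gamma: float = 0.61) -> np.ndarray:
    """Tversky-Kahneman (1992) weighting: overweights small probabilities."""
    return p**gamma / (p**gamma + (1 - p)**gamma) ** (1 / gamma)

p = np.array([0.01, 0.10, 0.50, 0.90])
print(cpt_weight(p))   # small p weighted up, e.g. w(0.01) ~ 0.055 > 0.01
print(utility(np.array([0.25, 1.0])))  # diminishing marginal utility
```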
AI Insights
  • Discrete choice tasks paired with probabilistic gambles let us map how people trade personal for societal life satisfaction.
  • Utility curves were fitted with Expected Utility Maximisation and then tested against Cumulative Prospect Theory for low‑probability bias.
  • Policy design should account for individual value heterogeneity, especially in health and education, to better align with public preferences.
  • The study links Rawlsian justice and prospect theory, showing that fairness concerns outweigh personal risk aversion across demographics.
  • These findings suggest AI systems should embed human inequality aversion, not just political bias, to achieve true value alignment.
Abstract
This study examines the structural dynamics of Truth Social, a politically aligned social media platform, during two major political events: the U.S. Supreme Court's overturning of Roe v. Wade and the FBI's search of Mar-a-Lago. Using a large-scale dataset of user interactions based on re-truths (platform-native reposts), we analyze how the network evolves in relation to fragmentation, polarization, and user influence. Our findings reveal a segmented and ideologically homogenous structure dominated by a small number of central figures. Political events prompt temporary consolidation around shared narratives, followed by rapid returns to fragmented, echo-chambered clusters. Centrality metrics highlight the disproportionate role of key influencers, particularly @realDonaldTrump, in shaping visibility and directing discourse. These results contribute to research on alternative platforms, political communication, and online network behavior, demonstrating how infrastructure and community dynamics together reinforce ideological boundaries and limit cross-cutting engagement.
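As a hedged sketch of the kind of centrality analysis described (the edge list below is a fabricated placeholder, not the study's data), a re-truth network can be modelled as a directed graph and key influencers ranked by in-degree and PageRank:

```python
# Hedged sketch: rank users in a directed re-truth (repost) graph.
# Edges and usernames other than realDonaldTrump are invented.
import networkx as nx

# Edge (u, v): user u re-truthed a post by user v.
retruths = [
    ("user_a", "realDonaldTrump"),
    ("user_b", "realDonaldTrump"),
    ("user_c", "user_a"),
    ("user_b", "user_c"),
]

G = nx.DiGraph(retruths)

# In-degree captures how often a user's posts are amplified;
# PageRank captures influence through chains of reposts.
indeg = nx.in_degree_centrality(G)
pr = nx.pagerank(G, alpha=0.85)

for user in sorted(pr, key=pr.get, reverse=True):
    print(f"{user:16s} in-deg={indeg[user]:.2f} pagerank={pr[user]:.3f}")
```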
Social Movements
Abstract
Migration has been a core topic in German political debate, from the millions of expellees after World War II, through labor migration, to the refugee movements of the recent past. Studying political speech on such wide-ranging phenomena in depth has traditionally required extensive manual annotation, limiting analysis to small subsets of the data. Large language models (LLMs) have the potential to partially automate even complex annotation tasks. We provide an extensive evaluation of multiple LLMs in annotating (anti-)solidarity subtypes in German parliamentary debates, compared against thousands of human reference annotations (gathered over a year). We evaluate the influence of model size, prompting differences, fine-tuning, and historical versus contemporary data, and we investigate systematic errors. Beyond methodological evaluation, we also interpret the resulting annotations through a social science lens, gaining deeper insight into (anti-)solidarity trends towards migrants in the German post-World War II period and the recent past. Our data reveals a high degree of migrant-directed solidarity in the postwar period, as well as a strong trend towards anti-solidarity in the German parliament since 2015, motivating further research. These findings highlight the promise of LLMs for political text analysis and the importance of migration debates in Germany, where demographic decline and labor shortages coexist with rising polarization.
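A minimal sketch of the evaluation step described above, using placeholder labels rather than the paper's actual (anti-)solidarity annotation scheme: compare LLM labels against human reference annotations via macro-F1 and Cohen's kappa.

```python
# Hedged sketch: agreement between LLM and human annotations.
# Label names and data are illustrative placeholders.
from sklearn.metrics import cohen_kappa_score, f1_score

human = ["solidarity", "anti-solidarity", "none", "solidarity", "none"]
llm   = ["solidarity", "anti-solidarity", "solidarity", "solidarity", "none"]

labels = ["solidarity", "anti-solidarity", "none"]
print("macro F1:", f1_score(human, llm, labels=labels, average="macro"))
print("kappa:   ", cohen_kappa_score(human, llm))
```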
Human Rights
University of Manitoba
Abstract
Contemporary robots are increasingly mimicking human social behaviours to facilitate interaction, such as smiling to signal approachability, or hesitating before taking an action to allow people time to react. Such techniques can activate a person's entrenched social instincts, triggering emotional responses as though they are interacting with a fellow human, and can prompt them to treat a robot as if it truly possesses the underlying life-like processes it outwardly presents, raising significant ethical questions. We engage these issues through the lens of informed consent: drawing upon prevailing legal principles and ethics, we examine how social robots can influence user behaviour in novel ways, and whether under those circumstances users can be appropriately informed to consent to these heightened interactions. We explore the complex circumstances of human-robot interaction and highlight how it differs from more familiar interaction contexts, and we apply legal principles relating to informed consent to social robots in order to reconceptualize the current ethical debates surrounding the field. From this investigation, we synthesize design goals for robot developers to achieve more ethical and informed human-robot interaction.
AI Insights
  • Anthropomorphism inflates expectations, turning a simple robot into a perceived sentient companion, complicating consent.
  • Deceptive cues—hesitation, smiling—activate human instincts, raising legal questions about manipulation.
  • The paper maps informed consent onto HRI using tort and privacy doctrines, exposing regulatory gaps.
  • Design guidelines: robots must disclose their artificial nature and limits before emotionally charged interactions.
  • Eldercare and f-commerce are high‑stakes arenas where consent lapses can have severe consequences.
  • Recommended reading: “Close Engagements with Artificial Companions” and a Harvard Law Review article on robotic agency.
  • Informed consent in HRI is dynamic; it must evolve with robot sophistication and user familiarity.
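As an illustration of the disclosure design goal above (the states and API are invented; the paper proposes design goals, not code), emotionally charged robot behaviours can be gated behind an explicit disclosure-and-consent step:

```python
# Hedged sketch: gate social-instinct cues behind informed consent.
# All class and method names are hypothetical.
from enum import Enum, auto

class Consent(Enum):
    NOT_ASKED = auto()
    GRANTED = auto()
    DECLINED = auto()

class SocialRobot:
    def __init__(self) -> None:
        self.consent = Consent.NOT_ASKED

    def disclose_and_ask(self, user_accepts: bool) -> None:
        print("I am a robot; my smiles and hesitation are programmed cues.")
        self.consent = Consent.GRANTED if user_accepts else Consent.DECLINED

    def emotional_behaviour(self, cue: str) -> None:
        # Consent must be explicit and current before human-like cues fire.
        if self.consent is not Consent.GRANTED:
            print(f"[suppressed social cue: {cue}]")
            return
        print(f"[performs {cue}]")

robot = SocialRobot()
robot.emotional_behaviour("smile")        # suppressed: no consent yet
robot.disclose_and_ask(user_accepts=True)
robot.emotional_behaviour("smile")        # now permitted
```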
Political Economy
Abstract
The rapid adoption of autonomous AI agents is giving rise to a new economic layer where agents transact and coordinate at scales and speeds beyond direct human oversight. We propose the "sandbox economy" as a framework for analyzing this emergent system, characterizing it along two key dimensions: its origins (emergent vs. intentional) and its degree of separateness from the established human economy (permeable vs. impermeable). Our current trajectory points toward a spontaneous emergence of a vast and highly permeable AI agent economy, presenting us with opportunities for an unprecedented degree of coordination as well as significant challenges, including systemic economic risk and exacerbated inequality. Here we discuss a number of possible design choices that may lead to safely steerable AI agent markets. In particular, we consider auction mechanisms for fair resource allocation and preference resolution, the design of AI "mission economies" to coordinate around achieving collective goals, and socio-technical infrastructure needed to ensure trust, safety, and accountability. By doing this, we argue for the proactive design of steerable agent markets to ensure the coming technological shift aligns with humanity's long-term collective flourishing.
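One design choice the abstract names, auction mechanisms for fair resource allocation, can be sketched concretely. The sealed-bid second-price (Vickrey) auction below is a standard example in which truthful bidding is a dominant strategy; the agents and bids are invented for illustration.

```python
# Hedged sketch: Vickrey auction for allocating a scarce resource
# (e.g. a compute slot) among AI agents. Bids are placeholders.
def vickrey_auction(bids: dict[str, float]) -> tuple[str, float]:
    """Return (winner, price): highest bidder pays the second-highest bid."""
    if len(bids) < 2:
        raise ValueError("need at least two bidders")
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]
    return winner, price

bids = {"agent_a": 9.0, "agent_b": 7.5, "agent_c": 4.0}
winner, price = vickrey_auction(bids)
print(f"{winner} wins the compute slot at clearing price {price}")  # agent_a, 7.5
```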

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Democratic Institutions
  • Political Movements
  • Activism
  • Political Science
  • Democratic Systems
  • Democracy
  • Political Theory
You can edit or add more interests any time.

Unsubscribe from these updates