Abstract
How can voters induce politicians, under repeated elections, to put forth
platforms that are both more proximate (in terms of preferences) and more
credible (in terms of promise fulfillment)? Building on Aragones et al. (2007),
I study how reputation and re-election concerns shape candidate behavior and,
in turn, voters' beliefs and electoral decisions. Rather than assuming voters
are naive, I present a formal model that introduces non-naive (or strategic)
voting behavior and completely characterizes a set of subgame-perfect
equilibria. I find
that non-naive voting behavior, by using the candidate's reputation as an
instrument of policy discipline after the election, aids in successfully
inducing candidates to put forth their maximal incentive-compatible promise
(among a range of such credible promises) in equilibrium. Through the credible
threat of punishing deviation with a loss of reputation in all future
elections, non-naive voters unanimously obtain higher expected utility than
when they behave naively. Moreover, comparative statics show that
candidates who are more likely to win are more likely to keep their promises.
In this framework, voters are not only able to bargain for more credible
promises but also end up raising their expected future payoffs in equilibrium.
Including such forms of strategic behavior thus reduces cheap talk by creating
a credible electoral system where candidates do as they say once elected.
Finally, I extend the analysis to include limited punishment as a political
accountability mechanism.
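As a hedged illustration (the notation below is mine, not the paper's), the maximal incentive-compatible promise can be read as the platform p closest to the voters' ideal point at which keeping the promise is still weakly preferred to reneging, given that non-naive voters punish deviation with a permanent loss of reputation:

\[
  u_c(p) + \delta\,\pi\,V^{\text{keep}} \;\ge\; u_c(x_c) + \delta\,\pi\,V^{\text{punish}},
\]

where \(u_c\) is the candidate's policy payoff, \(x_c\) the candidate's ideal point, \(\delta\) the discount factor, \(\pi\) the probability of winning future elections, and \(V^{\text{keep}} > V^{\text{punish}}\) the continuation values with and without reputation. Because the reputational loss \(\delta\,\pi\,(V^{\text{keep}} - V^{\text{punish}})\) grows with \(\pi\), the constraint relaxes for candidates who are more likely to win, consistent with the comparative static stated above.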
King's College London
Abstract
Large Language Model (LLM) alignment methods have been credited with the
commercial success of products like ChatGPT, given their role in steering LLMs
towards user-friendly outputs. However, current alignment techniques
predominantly mirror the normative preferences of a narrow reference group,
effectively imposing their values on a wide user base. Drawing on theories of
the power/knowledge nexus, this work argues that current alignment practices
centralise control over knowledge production and governance within already
influential institutions. To counter this, we propose decentralising alignment
through three characteristics: context, pluralism, and participation.
Furthermore, this paper demonstrates the critical importance of delineating the
context-of-use when shaping alignment practices by grounding each of these
features in concrete use cases. This work makes the following contributions:
(1) highlighting the role of context, pluralism, and participation in
decentralising alignment; (2) providing concrete examples to illustrate these
strategies; and (3) demonstrating the nuanced requirements associated with
applying alignment across different contexts of use. Ultimately, this paper
positions LLM alignment as a potential site of resistance against epistemic
injustice and the erosion of democratic processes, while acknowledging that
these strategies alone cannot substitute for broader societal changes.
AI Insights
- Decentralising alignment shifts control from elite institutions to diverse stakeholders, curbing epistemic injustice.
- The paper grounds context, pluralism, and participation in concrete use cases, showing alignment must adapt to each scenario.
- Using the power/knowledge nexus, it critiques centralized authority and proposes participatory methods as a countermeasure.
- Recommended reading: “Human‑Machine Reconfigurations” and the “Handbook of Ethics, Values, and Technological Design”.
- Key references: Ouyang et al.’s instruction‑following with human feedback and Padhi et al.’s value alignment from unstructured text.
- Participatory AI: actively involving diverse communities in design, deployment, and governance of AI systems.
- The study concludes decentralised strategies resist epistemic injustice but need broader reforms for lasting democratic alignment.
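As a toy sketch only (not the authors' method; every name below is hypothetical), one way to operationalise pluralism and context-of-use is to blend preference models elicited from several stakeholder communities, with per-context weights set through a participatory process rather than by a single provider:

```python
# Toy sketch: context-aware aggregation of community preference models,
# instead of a single centralised reward model. All names are illustrative.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class CommunityRewardModel:
    """A preference model elicited from one stakeholder community."""
    name: str
    score: Callable[[str, str], float]  # (prompt, response) -> preference score


# Per-context weights, ideally decided through a participatory process
# rather than fixed by the model provider (values here are made up).
CONTEXT_WEIGHTS: Dict[str, Dict[str, float]] = {
    "medical_triage": {"clinicians": 0.6, "patients": 0.3, "general_public": 0.1},
    "creative_writing": {"clinicians": 0.0, "patients": 0.0, "general_public": 1.0},
}


def pluralistic_reward(context: str, prompt: str, response: str,
                       communities: Dict[str, CommunityRewardModel]) -> float:
    """Blend community preference scores using the weights for this context-of-use."""
    weights = CONTEXT_WEIGHTS[context]
    return sum(w * communities[name].score(prompt, response)
               for name, w in weights.items() if w > 0)


if __name__ == "__main__":
    # Constant scorers stand in for learned reward models.
    communities = {
        "clinicians": CommunityRewardModel("clinicians", lambda p, r: 0.8),
        "patients": CommunityRewardModel("patients", lambda p, r: 0.5),
        "general_public": CommunityRewardModel("general_public", lambda p, r: 0.2),
    }
    print(pluralistic_reward("medical_triage", "triage advice?", "see a doctor", communities))
```

In this sketch the per-context weights, not the code, carry the normative choice; who gets to set them is exactly the governance question the paper raises.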