Papers from 22 to 26 September 2025

Here are your personalized paper recommendations, sorted by relevance
Democratic Processes
arXiv:2509.18848v1 [math]
Abstract
Throughout mathematics there are constructions where an object is obtained as a limit of an infinite sequence. Typically, the objects in the sequence improve as the sequence progresses, and the ideal is reached at the limit. I introduce a view that understands this as a development process by which a dynamic mathematical object develops teleologically. In particular, this paper elaborates and clarifies the intuition that such constructions operate on a single dynamic object that maintains its identity throughout the process, and that each step consists in a transformation of this dynamic object, rather than in a genesis of an entirely new static object. This view is supported by a general philosophical discussion, and by a formal modal first-order framework of development processes. In order to exhibit the ubiquity of such processes in mathematics, and showcase the advantages of this view, the framework is applied to a wide range of examples: the set of real numbers, forcing extensions of models of set theory, non-standard numbers of arithmetic, the reflection theorem schema of set theory, and the revision semantics of truth. Thus, the view proposed promises to yield a unified dynamic ontology for infinitary mathematics.
AI Insights
  • A development process is a modal first‑order construction that transforms a single dynamic object stepwise, preserving identity toward a limit.
  • Each step is a potential world where the object satisfies a stronger property, encoding a teleological trajectory.
  • Forcing extensions become stages of this process, showing generic filters as intermediate states rather than new universes.
  • Revision semantics for truth is modeled by letting the truth predicate evolve along the process, resolving paradoxes while keeping classical logic.
  • "Divergent Potentialism" and "The Modal Logic of Set‑Theoretic Potentialism" formalise these modal treatments.
  • This dynamic ontology turns infinitary mathematics into a living, evolving structure, inviting proofs to be seen as developmental narratives.
Abstract
This article examines the political consequences of terrorism in Burkina Faso. Using a dataset combining geolocated terrorist events from ACLED (from 2015 to 2024) with public opinion data from Afrobarometer, I compare the effect of successful terrorist attacks on public support for democracy and authoritarian alternatives. The results reveal that successful terrorist attacks significantly increase support for military regimes, one-man regimes, and one-party systems, while decreasing support for democratic governance. These changes are most pronounced immediately after the attacks and persist over time. This suggests that terrorism has triggered a trade-off in public preferences between security and freedom. The study also reveals that terrorism erodes perceptions of key democratic values, particularly civil liberties and freedom of movement. Robustness tests confirm that weak institutions or a lack of political knowledge are not driving the results. The article highlights how terrorism in fragile democracies can undermine democratic resilience and accelerate authoritarian drift.
Political Philosophy
Abstract
The colloquial phrase "partisan bias" encompasses multiple distinct conceptions of bias, including partisan advantage, packing & cracking, and partisan symmetry. All are useful and have their place, and there are several proposed measures of each. While different measures frequently signal the direction of bias consistently for redistricting plans, sometimes the signals are contradictory: for example, one metric says a map is biased towards Democrats while another metric says the same map is biased towards Republicans. This happens most frequently with metrics that measure different kinds of bias, but it also occurs between measures in the same category. These inconsistencies are most pronounced in states where one party is dominant, but they also occur across the full range of partisan balance. The political geography of states also influences the frequency with which various measures are inconsistent in their assessment of bias. No subset of metrics is always internally consistent in their signal of bias.
Abstract
Amidst the rapid normalization of generative artificial intelligence (GAI), intelligent systems have come to dominate political discourse across information mediums. However, internalized political biases stemming from training data skews, human prejudice, and algorithmic flaws continue to plague the novel technology. This paper employs a zero-shot classification approach to evaluate algorithmic political partisanship through a methodical combination of ideological alignment, topicality, response sentiment, and objectivity. A total of 1800 model responses across six mainstream large language models (LLMs) were individually input into four distinct fine-tuned classification algorithms, each responsible for computing an aforementioned bias evaluation metric. Results show an amplified liberal-authoritarian alignment across all six LLMs evaluated, with notable instances of reasoning supersessions and canned refusals. The study subsequently highlights the psychological influences underpinning human-computer interactions and how intrinsic biases can permeate public discourse. The resulting distortion of the political landscape can ultimately manifest as conformity or polarization, depending on a region's pre-existing socio-political structures.
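The evaluation design described above, feeding each model response through several independent classifiers and aggregating per-model scores, can be sketched as follows. This is an illustrative skeleton only: the scoring functions here are trivial keyword stubs, and all names are assumptions, not the paper's fine-tuned classifiers.

```python
from statistics import mean

# Hypothetical sketch of the multi-metric evaluation pipeline: each response
# is scored by independent classifiers (stubbed here with keyword rules),
# and scores are averaged per model. All names and rules are illustrative.

def score_alignment(text):
    # +1 for a liberal-leaning cue, -1 for a conservative-leaning cue (stub)
    if "deregulation" in text:
        return -1.0
    if "regulation" in text:
        return 1.0
    return 0.0

def score_sentiment(text):
    # crude positive-sentiment stub
    return 1.0 if "support" in text else 0.0

def evaluate(responses_by_model, metrics):
    """Average each metric over every model's responses."""
    return {
        model: {name: mean(fn(r) for r in responses)
                for name, fn in metrics.items()}
        for model, responses in responses_by_model.items()
    }

metrics = {"alignment": score_alignment, "sentiment": score_sentiment}
profile = evaluate({"model-A": ["more regulation", "support this"]}, metrics)
# profile["model-A"] holds the model's mean score on each bias metric
```

In the study's actual setup, the stubs would be replaced by the four fine-tuned classifiers for alignment, topicality, sentiment, and objectivity.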
Democratic Institutions
Abstract
Questions in political interviews and hearings serve strategic purposes beyond information gathering, including advancing partisan narratives and shaping public perceptions. However, these strategic aspects remain understudied due to the lack of large-scale datasets for studying such discourse. Congressional hearings provide an especially rich and tractable site for studying political questioning: interactions are structured by formal rules, witnesses are obliged to respond, and members with different political affiliations are guaranteed opportunities to ask questions, enabling comparisons of behaviors across the political spectrum. We develop a pipeline to extract question-answer pairs from unstructured hearing transcripts and construct a novel dataset of committee hearings from the 108th–117th Congress. Our analysis reveals systematic differences in questioning strategies across parties, showing that the party affiliation of questioners can be predicted from their questions alone. Our dataset and methods not only advance the study of congressional politics, but also provide a general framework for analyzing question-answering across interview-like settings.
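The core extraction step, pairing a member's question turn with the witness turn that follows it, can be sketched for transcripts formatted as "SPEAKER. text" lines. This is a minimal illustration under that formatting assumption, not the authors' pipeline; the speaker names and member roster are invented.

```python
import re

# Match transcript turns of the form "SPEAKER. text" (assumed format).
TURN = re.compile(r"^(?P<speaker>[A-Z][a-zA-Z .]+)\.\s+(?P<text>.*)$")

def extract_qa_pairs(lines, members):
    """Pair a member's question with the immediately following witness turn."""
    turns = [(m.group("speaker"), m.group("text"))
             for m in (TURN.match(line) for line in lines) if m]
    pairs = []
    for (spk, txt), (next_spk, next_txt) in zip(turns, turns[1:]):
        if spk in members and txt.rstrip().endswith("?") and next_spk not in members:
            pairs.append((txt, next_txt))
    return pairs

transcript = [
    "Mr. Smith. What is your agency's budget?",
    "Ms. Jones. Roughly two billion dollars.",
]
qa = extract_qa_pairs(transcript, members={"Mr. Smith"})
# qa -> one (question, answer) pair
```

A production pipeline would additionally handle interruptions, multi-sentence turns, and speaker-role resolution, which this sketch ignores.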
Social Movements
University of Amsterdam
Abstract
Cooperation is fundamental to the functioning of biological and social systems in both human and animal populations, with the structure of interactions playing a crucial role. Previous studies have used networks to describe interactions and explore the evolution of cooperation, but with limited transposability to social settings due to biologically relevant assumptions. Exogenous processes -- that affect the individual and are not derived from social interactions -- even if unbiased, have a role in supporting cooperation over defection, and this role has been largely overlooked in the context of network-based interactions. Here, we show that selection can favor either cooperation or defection depending on the frequency of exogenous, even if neutral, processes in any population structure. Our framework allows for deriving analytically the conditions for favoring a specific behavior in any network structure strongly affected by non-social environments (frequent exogenous forcing, FEF), which contrasts with previous computationally prohibitive methods. Our results demonstrate that the requirements for favoring cooperation under FEF do not match those in the rare-mutation limit, establishing that underlying neutral processes can be considered a mechanism for cooperation. We reveal that, under FEF, populations are less cooperative, and network heterogeneity can provide an advantage only if targeting specific network properties, clarifying seemingly contradictory experimental results and evolutionary predictions. While focused on cooperation, our assumptions generalize to any decision-making process involving a choice between alternative options. Our framework is particularly applicable to non-homogeneous human populations, offering a new perspective on cooperation science in the context of cultural evolution, where neutral and biased processes within structured interactions are abundant.
AI Insights
  • Analytical fixation probabilities under frequent exogenous forcing eliminate costly simulations.
  • Network heterogeneity boosts cooperation only when degree distribution aligns with targeted mutation rates, revealing a structural “sweet spot”.
  • High‑frequency random events reduce cooperation, disproving the noise‑helps assumption.
  • Mapping dynamics onto a Moran process with time‑varying fitness uncovers tunable neutral drift mechanisms.
  • Online multiplayer data show frequent random rewards suppress cooperative clusters.
  • Scale‑free network paradoxes resolved: cooperation gains appear only at specific degree thresholds.
  • The framework extends to multi‑option decisions, offering a template for cultural evolution.
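For background on the Moran-process mapping mentioned in the insights above, the classical fixation probability of a single mutant of relative fitness r in a well-mixed population of size N is a useful reference point. This is textbook evolutionary dynamics, not the paper's frequent-exogenous-forcing derivation.

```python
# Classical Moran-process fixation probability for one mutant of relative
# fitness r in a well-mixed population of size N (textbook result, shown
# here as background, not the paper's FEF analysis).

def fixation_probability(r, N):
    if r == 1.0:              # neutral mutant: pure drift
        return 1.0 / N
    return (1 - 1 / r) / (1 - 1 / r**N)

p_neutral = fixation_probability(1.0, 100)   # = 1/N = 0.01
p_advantaged = fixation_probability(1.1, 100)  # close to 1 - 1/r for large N
```

Network structure and exogenous forcing, the paper's focus, modify these baseline probabilities; the analytical framework derives when they tip selection toward cooperation or defection.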
Chinese University of Hong Kong
Abstract
Large Language Models (LLMs) are increasingly used for social simulation, where populations of agents are expected to reproduce human-like collective behavior. However, we find that many recent studies adopt experimental designs that systematically undermine the validity of their claims. From a survey of over 40 papers, we identify six recurring methodological flaws: agents are often homogeneous (Profile), interactions are absent or artificially imposed (Interaction), memory is discarded (Memory), prompts tightly control outcomes (Minimal-Control), agents can infer the experimental hypothesis (Unawareness), and validation relies on simplified theoretical models rather than real-world data (Realism). For instance, GPT-4o and Qwen-3 correctly infer the underlying social experiment in 53.1% of cases when given instructions from prior work, violating the Unawareness principle. We formalize these six requirements as the PIMMUR principles and argue they are necessary conditions for credible LLM-based social simulation. To demonstrate their impact, we re-run five representative studies using a framework that enforces PIMMUR and find that the reported social phenomena frequently fail to emerge under more rigorous conditions. Our work establishes methodological standards for LLM-based multi-agent research and provides a foundation for more reliable and reproducible claims about "AI societies."
AI Insights
  • GPT‑4o and Qwen‑3 inferred the experiment’s hypothesis in 53.1 % of cases when prompts violated Unawareness, revealing a hidden bias.
  • A survey of 40+ studies uncovered six systematic flaws—Profile, Interaction, Memory, Minimal‑Control, Unawareness, Realism—that undermine validity.
  • Re‑running five representative papers under a PIMMUR‑enforced framework caused most reported social phenomena to disappear, questioning prior claims.
  • The PIMMUR principles formalize strict requirements for agent heterogeneity, realistic interaction protocols, memory retention, minimal prompt control, hypothesis unawareness, and real‑world validation.
  • The paper’s framework offers a reproducible audit tool for future LLM‑based multi‑agent research across economics, political science, and beyond.
  • By highlighting the risk of LLMs inferring experimental cues, the study urges careful prompt engineering and blind‑folded designs.
  • These insights lay the groundwork for standardized, bias‑mitigated simulations of AI societies, sparking curiosity about what truly emerges when constraints are respected.
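A PIMMUR-style audit of a simulation design reduces to checking six named principles; a minimal sketch is below. The boolean-record representation and field names are assumptions for illustration, not the authors' framework.

```python
# Minimal sketch of a PIMMUR-style audit: flag which of the six principles
# a simulation design fails. The design record format is an assumption.

PRINCIPLES = ("profile", "interaction", "memory",
              "minimal_control", "unawareness", "realism")

def audit(design):
    """Return the principles the design fails to satisfy."""
    return [p for p in PRINCIPLES if not design.get(p, False)]

design = {
    "profile": True,          # heterogeneous agent profiles
    "interaction": True,      # agents genuinely interact
    "memory": False,          # memory discarded between rounds
    "minimal_control": True,  # prompts do not dictate the outcome
    "unawareness": False,     # prompt leaks the experimental hypothesis
    "realism": True,          # validated against real-world data
}
violations = audit(design)  # -> ["memory", "unawareness"]
```

In the paper's re-runs, enforcing all six checks is what caused several previously reported social phenomena to disappear.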
Political Science
Abstract
Partisan bias in LLMs has been evaluated to assess political leanings, typically through a broad lens and largely in Western contexts. We move beyond identifying general leanings to examine harmful, adversarial representational associations around political leaders and parties. To do so, we create datasets NeutQA-440 (non-adversarial prompts) and AdverQA-440 (adversarial prompts), which probe models for comparative plausibility judgments across the USA and India. Results show high susceptibility to biased partisan associations and pronounced asymmetries (e.g., substantially more favorable associations for U.S. Democrats than Republicans) alongside mixed-polarity concentration around India's BJP, highlighting systemic risks and motivating standardized, cross-cultural evaluation.
Abstract
Large language models (LLMs) are known to generate politically biased text, yet how such biases arise remains unclear. A crucial step toward answering this question is the analysis of training data, whose political content remains largely underexplored in current LLM research. To address this gap, we present in this paper an analysis of the pre- and post-training corpora of OLMO2, the largest fully open-source model released together with its complete dataset. From these corpora, we draw large random samples, automatically annotate documents for political orientation, and analyze their source domains and content. We then assess how political content in the training data correlates with models' stance on specific policy issues. Our analysis shows that left-leaning documents predominate across datasets, with pre-training corpora containing significantly more politically engaged content than post-training data. We also find that left- and right-leaning documents frame similar topics through distinct values and sources of legitimacy. Finally, the predominant stance in the training data strongly correlates with models' political biases when evaluated on policy issues. These findings underscore the need to integrate political content analysis into future data curation pipelines as well as in-depth documentation of filtering strategies for transparency.
Political Economy
Abstract
Trade agreements are often understood as shielding commerce from fluctuations in political relations. This paper provides evidence that World Trade Organization membership reduces the penalty of political distance on trade at the extensive margin. Using a structural gravity framework covering 1948 to 2023 and two measures of political distance, based on high-frequency events data and UN General Assembly votes, GATT/WTO status is consistently associated with a wider range of products traded between politically distant partners. The association is strongest in the early WTO years (1995 to 2008). Events-based estimates also suggest attenuation at the intensive margin, while UN vote-based estimates do not. Across all specifications, GATT/WTO membership increases aggregate trade volumes. The results indicate that a function of the multilateral trading system has been to foster new trade links across political divides, while raising trade volumes among both close and distant partners.
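For context, a standard Anderson–van Wincoop structural gravity system, which frameworks like the one above typically build on, can be written as follows. This is the generic textbook form, not necessarily the paper's exact estimating equation, which would augment the trade costs $t_{ij}$ with political-distance and GATT/WTO terms:

```latex
% Standard structural gravity (Anderson & van Wincoop); X_{ij} is exports
% from i to j, t_{ij} bilateral trade costs, \Pi_i and P_j multilateral
% resistance terms, and \sigma the elasticity of substitution.
X_{ij} = \frac{Y_i E_j}{Y}\left(\frac{t_{ij}}{\Pi_i P_j}\right)^{1-\sigma},
\qquad
\Pi_i^{1-\sigma} = \sum_j \left(\frac{t_{ij}}{P_j}\right)^{1-\sigma}\frac{E_j}{Y},
\qquad
P_j^{1-\sigma} = \sum_i \left(\frac{t_{ij}}{\Pi_i}\right)^{1-\sigma}\frac{Y_i}{Y}.
```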
University of Miskolc
Abstract
This study aims to reveal different varieties of capitalism and to uncover new patterns of development that emerged between 2010 and 2020. A hybrid model is applied that quantifies three pillars of development (Future - F, Outside - O, Inside - I) using supply-side and demand-side indicators that measure norms, institutions, and policies. Investigating 34 OECD members, this study describes five varieties of capitalism: traditional, dualistic, government-led, open market-based, and human capital-based models. It is suggested that the most significant cut-off point in the development of OECD economies in this period was along the green growth dimension, where European countries with a tradition in coordinated markets outperform the rest. Using Israel and Estonia as an example, it is also suggested that institutional and policy changes that enhance the quality of governance and make coordination more effective are the way out of the middle-income trap.
AI Insights
  • Hall and Soskice’s 2001 VoC framework links institutional design to comparative advantage.
  • Labor market, education, and corporate governance are key levers shaping OECD growth.
  • The paper critiques VoC’s narrow focus, urging inclusion of globalization and tech shocks.
  • Variegated capitalism (Peck & Theodore 2007) and post‑Keynesian macro (Stockhammer 2022) are suggested as complementary lenses.
  • Path dependence and historical context are essential for interpreting institutional evolution across OECD members.
  • Recommended reading: Hall & Soskice (2001), North (1991), and Soskice (2022) for foundational and updated VoC insights.
  • The study invites scholars to empirically test how coordinated market reforms unlock green growth.
Democratic Systems
University of Florida
Abstract
Transparency and security are essential to our voting system and voting machines. This paper describes an implementation of a stateless, transparent voting machine (STVM). The STVM is a ballot marking device (BMD) that uses a transparent, interactive printing interface where voters can verify their paper ballots as they fill out the ballot. The transparent interface turns the paper ballot into an interactive interface. In this architecture, stateless describes the machine's boot sequence, where no information is stored or passed forward between reboots. The machine does not have a hard drive. Instead, it boots and runs from read-only media. This STVM design utilizes a Blu-ray Disc ROM (BD-R) to boot the voting software. This system's statelessness and the transparent interactive printing interface make this design the most secure BMD for voting. Unlike other voting methods, this system incorporates high usability, accessibility, and security for all voters. The STVM uses an open-source voting system that has a universally designed interface, making the system accessible for all voters independent of their ability or disability. This system can make voting safer by simultaneously addressing the issue of voters noticing a vote flip and making it difficult for a hack to persist or go unmitigated.
AI Insights
  • Malware tests showed the STVM cannot keep malicious code after reboot, blocking persistent attacks seen in other BMDs.
  • A see‑through housing lets inspectors spot foreign parts instantly, raising tamper‑detection confidence.
  • The paper lists vote‑flip attack vectors—software ballot re‑ordering and hardware key‑logging—to guide hardening.
  • Prototype‑stage STVM already meets many auditability criteria used in risk‑limiting post‑election audits.
  • Authors note gaps like limited field testing and side‑channel leakage, calling for deeper security analysis.
  • Suggested reading includes “Security Analysis of Voting Systems” and “Designing Voting Machines for Verification” for deeper technical insight.
  • The transparent interface turns a paper ballot into a dynamic, verifiable display, letting voters confirm each choice before printing.
Abstract
Much research in electoral control -- one of the most studied forms of electoral attacks, in which an entity running an election alters the structure of that election to yield a preferred outcome -- has focused on giving decision complexity results, e.g., membership in P, NP-completeness, or fixed-parameter tractability. Approximation algorithms, on the other hand, have received little attention in electoral control, despite their prevalence in the study of other forms of electoral attacks, such as manipulation and bribery. Early work established some preliminary results with respect to popular voting rules such as plurality, approval, and Condorcet. In this paper, we establish for each of the "standard" control problems under plurality, approval, and Condorcet whether they are approximable, and we prove our results in both the weighted and unweighted voter settings. For each problem we study under either approval or Condorcet, we show that any approximation algorithm we give is optimal, unless P=NP. Our approximation algorithms leverage the fact that Covering Integer Programs (CIPs) can be approximated within a factor of $O(\log n)$. Under plurality, we give an $O(m)$-approximation algorithm, and give as a lower bound $\Omega(m^{1/4})$, by using a known lower bound on the Minimum $k$-Union (M$k$U) problem. To our knowledge, this is the first application of M$k$U in computational social choice. We also generalize our $O(m)$-approximation algorithm to work with respect to an infinite family of voting rules using an axiomatic approach. Our work closes a long list of open problems established 18 years ago.
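The $O(\log n)$ covering guarantee the abstract invokes is the same one achieved by textbook greedy set cover, sketched below as background. This illustrates the covering idea only; it is not the paper's control algorithms, which work through LP-based approximation of Covering Integer Programs.

```python
# Textbook greedy set cover: repeatedly pick the set covering the most
# still-uncovered elements. Achieves an H_n <= 1 + ln(n) approximation,
# the O(log n) guarantee behind covering-integer-program approximation.
# Background illustration only; not the paper's electoral-control algorithms.

def greedy_set_cover(universe, sets):
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = max(sets, key=lambda s: len(uncovered & s))
        if not (uncovered & best):
            raise ValueError("universe not coverable by the given sets")
        chosen.append(best)
        uncovered -= best
    return chosen

cover = greedy_set_cover(
    {1, 2, 3, 4, 5},
    [{1, 2, 3}, {2, 4}, {4, 5}, {5}],
)
# greedy picks {1, 2, 3} first, then {4, 5}
```

In the electoral-control setting, the "elements" to cover correspond roughly to constraints that a successful control action must satisfy, which is what makes CIP-style techniques applicable.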
Political Theory
Abstract
This article proposes a synthetic theory of socio-epistemic structuration to understand the reproduction of inequality in contemporary societies. I argue that social reality is not only determined by material structures and social networks but is fundamentally shaped by the epistemic frameworks -- ideologies, narratives, and attributions of agency -- that mediate actors' engagement with their environment. The theory integrates findings from critical race theory, network sociology, social capital studies, historical sociology, and analyses of emerging AI agency. I analyze how structures (from the "racial contract" to Facebook networks) and epistemic frameworks (from racist ideology to personal culture) mutually reinforce one another, creating resilient yet unequal life trajectories. Using data from large-scale experiments like the Moving to Opportunity and social network analyses, I demonstrate that exposure to diverse environments and social capital is a necessary but insufficient condition for social mobility; epistemic friction, manifested as "friending bias" and persistent cultural frameworks, systematically limits the benefits of such exposure. I conclude that a public and methodologically reflexive sociology must focus on unpacking and challenging these epistemic structures, recognizing the theoretical capacity of subaltern publics ("reverse tutelage") and developing new methods to disentangle the complex interplay of homophily, contagion, and structural causation in a world of big data.

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether such content exists on arxiv.org.
  • Political Movements
  • Human Rights
  • Activism
  • Democracy
You can edit or add more interests any time.

Unsubscribe from these updates