Papers from 22 to 26 September 2025

Here are your personalized paper recommendations, sorted by relevance.
Social Inequality
Abstract
This article proposes a synthetic theory of socio-epistemic structuration to understand the reproduction of inequality in contemporary societies. I argue that social reality is not only determined by material structures and social networks but is fundamentally shaped by the epistemic frameworks (ideologies, narratives, and attributions of agency) that mediate actors' engagement with their environment. The theory integrates findings from critical race theory, network sociology, social capital studies, historical sociology, and analyses of emerging AI agency. I analyze how structures (from the "racial contract" to Facebook networks) and epistemic frameworks (from racist ideology to personal culture) mutually reinforce one another, creating resilient yet unequal life trajectories. Using data from large-scale experiments like the Moving to Opportunity experiment and from social network analyses, I demonstrate that exposure to diverse environments and social capital is a necessary but insufficient condition for social mobility; epistemic friction, manifested as "friending bias" and persistent cultural frameworks, systematically limits the benefits of such exposure. I conclude that a public and methodologically reflexive sociology must focus on unpacking and challenging these epistemic structures, recognizing the theoretical capacity of subaltern publics ("reverse tutelage") and developing new methods to disentangle the complex interplay of homophily, contagion, and structural causation in a world of big data.
Abstract
As machine learning (ML) algorithms are increasingly used in social domains to make predictions about humans, there is a growing concern that these algorithms may exhibit biases against certain social groups. Numerous notions of fairness have been proposed in the literature to measure the unfairness of ML models. Among them, the class that receives the most attention is parity-based fairness, i.e., achieving fairness by equalizing treatment or outcomes for different social groups. However, achieving parity-based fairness often comes at the cost of lowering model accuracy, which is undesirable for many high-stakes domains like healthcare. To avoid inferior accuracy, a line of research focuses on preference-based fairness, under which any group of individuals would experience the highest accuracy and would collectively prefer the ML outcomes assigned to them if given the choice between various sets of outcomes. However, these works assume individual demographic information is known and fully accessible during training. In this paper, we relax this requirement and propose a novel demographic-agnostic fairness without harm (DAFH) optimization algorithm, which jointly learns a group classifier that partitions the population into multiple groups and a set of decoupled classifiers associated with these groups. Theoretically, we conduct sample complexity analysis and show that our method can outperform baselines in which demographic information is known and used to train decoupled classifiers. Experiments on both synthetic and real data validate the proposed method.
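The joint learning the abstract describes, a group classifier that partitions the population plus decoupled per-group classifiers, can be sketched end to end in a few lines. A minimal PyTorch illustration, where the soft-partition architecture and single joint loss are our own assumptions rather than the paper's DAFH objective:

  import torch
  import torch.nn as nn

  # Hypothetical sketch: a learned soft partition routes each example to one of
  # several decoupled heads; the paper's DAFH algorithm and losses may differ.
  class PartitionedClassifier(nn.Module):
      def __init__(self, dim, n_groups):
          super().__init__()
          self.grouper = nn.Linear(dim, n_groups)  # learned group assignment
          self.heads = nn.ModuleList(nn.Linear(dim, 1) for _ in range(n_groups))

      def forward(self, x):
          w = torch.softmax(self.grouper(x), dim=-1)             # (batch, n_groups)
          logits = torch.stack([h(x).squeeze(-1) for h in self.heads], dim=-1)
          return (w * logits).sum(-1)                            # group-weighted prediction

  model = PartitionedClassifier(dim=16, n_groups=3)
  opt = torch.optim.Adam(model.parameters(), lr=1e-3)
  x, y = torch.randn(64, 16), torch.randint(0, 2, (64,)).float()
  loss = nn.functional.binary_cross_entropy_with_logits(model(x), y)
  opt.zero_grad(); loss.backward(); opt.step()

No demographic labels enter the forward pass, which is the sense in which such a method is demographic-agnostic.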
Inequality
Hunan University
Abstract
The LYM inequality is a fundamental result concerning the sizes of subsets in a Sperner family. Subsequent studies on the LYM inequality have been generalized to families of $r$-decompositions, where all components are required to avoid chains of the same length. In this paper, we relax this constraint by allowing components of a family of $r$-decompositions to avoid chains of distinct lengths, and derive generalized LYM inequalities across all the relevant settings, including set-theoretic, $q$-analog, continuous analog, and arithmetic analog frameworks. Notably, the bound in our LYM inequalities does not depend on the maximal length of all forbidden chains. Moreover, we extend our approach beyond $r$-decompositions to $r$-multichains, and establish analogous LYM inequalities.
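For context, the classical LYM inequality that these results generalize: any antichain $\mathcal{F}$ (a family of subsets of an $n$-element set containing no 2-chain) satisfies

  $$\sum_{A \in \mathcal{F}} \binom{n}{|A|}^{-1} \le 1,$$

which immediately recovers Sperner's bound $|\mathcal{F}| \le \binom{n}{\lfloor n/2 \rfloor}$. The paper's setting replaces the single antichain with $r$-decompositions whose components may avoid chains of different lengths, with a bound that, notably, does not depend on the maximal forbidden chain length.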
AI Insights
  • The authors introduce a counting scheme that surpasses classical Sperner bounds by refining chain decompositions.
  • Combining Sperner’s theorem with Dilworth’s lemma yields a tight upper bound on r‑decompositions.
  • Extremal examples show the inequality is tight even for chains of disparate lengths.
  • The work links combinatorial limits to geometric probability, suggesting new research avenues.
  • Future directions include extending the method to infinite families and algorithmic matching applications.
  • For background, see “Matching Theory, an Introduction,” which contextualizes the paper’s techniques.
Economic Inequality
Abstract
Bilateral trade models the task of intermediating between two strategic agents, a seller and a buyer, willing to trade a good for which they hold private valuations. We study this problem from the perspective of a broker, in a regret minimization framework. At each time step, a new seller and buyer arrive, and the broker has to propose a mechanism that is incentive-compatible and individually rational, with the goal of maximizing profit. We propose a learning algorithm that guarantees a nearly tight $\tilde{O}(\sqrt{T})$ regret in the stochastic setting when seller and buyer valuations are drawn i.i.d. from a fixed and possibly correlated unknown distribution. We further show that it is impossible to achieve sublinear regret in the non-stationary scenario where valuations are generated upfront by an adversary. Our ambitious benchmark for these results is the best incentive-compatible and individually rational mechanism. This separates us from previous works on efficiency maximization in bilateral trade, where the benchmark is a single number: the best fixed price in hindsight. A particular challenge we face is that uniform convergence for all mechanisms' profits is impossible. We overcome this difficulty via a careful chaining analysis that proves convergence for a provably near-optimal mechanism at (essentially) optimal rate. We further showcase the broader applicability of our techniques by providing nearly optimal results for the joint ads problem.
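A concrete instance of an incentive-compatible, individually rational mechanism in this setting is the posted-price broker: quote the seller a price p and the buyer a price q with p <= q, trade only if both accept, and pocket q - p. The sketch below illustrates only this profit structure, not the paper's learning algorithm or its chaining analysis:

  import random

  def posted_price_round(p, q, seller_value, buyer_value):
      # Posted prices are incentive-compatible: each side simply accepts
      # exactly when the trade is profitable for them.
      trade = seller_value <= p <= q <= buyer_value
      return q - p if trade else 0.0

  # Stochastic setting: valuations drawn i.i.d. from a fixed unknown distribution.
  profits = [posted_price_round(0.4, 0.6, random.random(), random.random())
             for _ in range(10_000)]
  print(sum(profits) / len(profits))  # empirical expected profit of prices (0.4, 0.6)

Note that the paper's benchmark is richer than the best fixed price pair: it competes with the best incentive-compatible, individually rational mechanism overall.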

We did not find much content matching your interests, so we've included some additional topics that are popular. Also be aware that if a topic is not present on arXiv, we won't be able to recommend it.

AI Agents
Ixent Games
Abstract
Current Large Reasoning Models (LRMs) exhibit significant limitations in reliability and transparency, often showing a collapse in reasoning capabilities when faced with high-complexity, long-horizon tasks. This "illusion of thinking" is frequently an artifact of non-agentic, black-box evaluation paradigms that fail to cultivate robust problem-solving processes. In response, we introduce The STAR-XAI Protocol (Socratic, Transparent, Agentic, Reasoning - for eXplainable Artificial Intelligence), a novel methodology for training and operating verifiably reliable AI agents. Our method reframes the human-AI interaction as a structured, Socratic dialogue, governed by an explicit and evolving rulebook, the Consciousness Transfer Package (CTP). Through an interactive Gameplay Cycle that enforces ante-hoc strategic justification and a state-locking Checksum that prevents error accumulation, the protocol transforms a powerful but opaque LRM into a disciplined "Clear Box" agent. We demonstrate the efficacy of this method through an exhaustive 25-move case study in the complex strategic game "Caps i Caps". The agent not only solved the high-complexity puzzle but also demonstrated Second-Order Agency, identifying flaws in its own supervisor-approved plans and adapting its core integrity protocols mid-task. The STAR-XAI Protocol offers a practical pathway to creating AI agents that are not just high-performing, but also transparent, auditable, and trustworthy by design.
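The Gameplay Cycle's two safeguards, ante-hoc justification and a state-locking Checksum, can be pictured with a toy state machine. This is our illustrative reconstruction, not the paper's protocol code; hashing the serialized game state is an assumption about what the Checksum computes:

  import hashlib
  import json

  def checksum(state):
      # State-locking digest: agent and supervisor must agree on it after every move.
      return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

  def apply_move(state, cell):
      new = {"board": list(state["board"]), "turn": state["turn"] + 1}
      new["board"][cell] = 1
      return new

  state = {"board": [0, 0, 0], "turn": 0}

  # Ante-hoc: the agent commits to a move and to the checksum it expects afterwards.
  proposed = apply_move(state, 1)
  expected = checksum(proposed)

  # The move is locked in only if an independent recomputation matches, so silent
  # state divergence cannot accumulate over a long horizon such as a 25-move game.
  assert checksum(apply_move(state, 1)) == expected
  state = proposed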
AI Insights
  • The Consciousness Transfer Package (CTP) is a step‑by‑step manual for mastering gear‑based board games, covering placement, rotation, and vector mechanics.
  • CTP provides concrete examples of successful moves, letting Gems learn proven strategies instead of trial‑and‑error.
  • The package is designed for seamless handoff, so one Gem can train an agent that another can use without knowledge loss.
  • Recommended literature includes reasoning classics, game‑theory treatises, and studies on gear‑placement efficiency.
  • Online forums and simulation tools are highlighted as practical resources for testing and refining gear‑game tactics.
  • A caveat: the CTP’s depth may overwhelm novices and assumes baseline familiarity with gear‑game mechanics.
  • Core definitions—gear, placement, rotation, vector, base—ensure consistent terminology across training sessions.
Abstract
Modern socio-economic systems are undergoing deep integration with artificial intelligence technologies. This paper constructs a heterogeneous agent-based modeling framework that incorporates both human workers and autonomous AI agents, to study the impact of AI collaboration under resource constraints on aggregate social output. We build five progressively extended models: Model 1 serves as the baseline of pure human collaboration; Model 2 introduces AI as collaborators; Model 3 incorporates network effects among agents; Model 4 treats agents as independent producers; and Model 5 integrates both network effects and independent agent production. Through theoretical derivation and simulation analysis, we find that the introduction of AI agents can significantly increase aggregate social output. When considering network effects among agents, this increase exhibits nonlinear growth far exceeding the simple sum of individual contributions. Under the same resource inputs, treating agents as independent producers provides higher long-term growth potential; introducing network effects further demonstrates strong characteristics of increasing returns to scale.
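A toy version of the comparison between the baseline and the network-effects models can be written directly; the functional forms below are illustrative assumptions, not the paper's calibration:

  import random

  def aggregate_output(n_humans, n_ai, network_effect=False):
      # Illustrative production: each human or AI agent contributes individually;
      # network effects add a superadditive term over collaboration links.
      base = sum(random.uniform(0.8, 1.2) for _ in range(n_humans + n_ai))
      if network_effect:
          n = n_humans + n_ai
          base += 0.01 * n * (n - 1) / 2  # increasing returns to scale
      return base

  print(aggregate_output(50, 0))                        # Model 1: pure human collaboration
  print(aggregate_output(40, 10))                       # Model 2: AI as collaborators
  print(aggregate_output(40, 10, network_effect=True))  # Model 3: network effects

The quadratic link term is what makes the network variant grow nonlinearly, beyond the simple sum of individual contributions.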
AI and Society
University of Buenos Aires
Abstract
This paper develops a taxonomy of expert perspectives on the risks and likely consequences of artificial intelligence, with particular focus on Artificial General Intelligence (AGI) and Artificial Superintelligence (ASI). Drawing from primary sources, we identify three predominant doctrines: (1) The dominance doctrine, which predicts that the first actor to create sufficiently advanced AI will attain overwhelming strategic superiority sufficient to cheaply neutralize its opponents' defenses; (2) The extinction doctrine, which anticipates that humanity will likely lose control of ASI, leading to the extinction of the human species or its permanent disempowerment; (3) The replacement doctrine, which forecasts that AI will automate a large share of tasks currently performed by humans, but will not be so transformative as to fundamentally reshape or bring an end to human civilization. We examine the assumptions and arguments underlying each doctrine, including expectations around the pace of AI progress and the feasibility of maintaining advanced AI under human control. While the boundaries between doctrines are sometimes porous and many experts hedge across them, this taxonomy clarifies the core axes of disagreement over the anticipated scale and nature of the consequences of AI development.
Research Automation with AI
Abstract
Climate data science faces persistent barriers stemming from the fragmented nature of data sources, heterogeneous formats, and the steep technical expertise required to identify, acquire, and process datasets. These challenges limit participation, slow discovery, and reduce the reproducibility of scientific workflows. In this paper, we present a proof of concept for addressing these barriers through the integration of a curated knowledge graph (KG) with AI agents designed for cloud-native scientific workflows. The KG provides a unifying layer that organizes datasets, tools, and workflows, while AI agents, powered by generative AI services, enable natural language interaction, automated data access, and streamlined analysis. Together, these components drastically lower the technical threshold for engaging in climate data science, enabling non-specialist users to identify and analyze relevant datasets. By leveraging existing cloud-ready API data portals, we demonstrate that "a knowledge graph is all you need" to unlock scalable and agentic workflows for scientific inquiry. The open-source design of our system further supports community contributions, ensuring that the KG and associated tools can evolve as a shared commons. Our results illustrate a pathway toward democratizing access to climate data and establishing a reproducible, extensible framework for human-AI collaboration in scientific research.
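The "knowledge graph as unifying layer" pattern is straightforward to prototype with a standard RDF toolkit: datasets, variables, and access APIs become triples, and an agent resolves a natural-language request into a graph query. A minimal rdflib sketch; the schema namespace and the dataset entry are hypothetical stand-ins for the project's actual KG:

  from rdflib import Graph, Literal, Namespace, RDF

  CLIM = Namespace("http://example.org/climate-kg/")  # hypothetical schema
  g = Graph()
  g.add((CLIM.ERA5, RDF.type, CLIM.Dataset))
  g.add((CLIM.ERA5, CLIM.variable, Literal("2m_temperature")))
  g.add((CLIM.ERA5, CLIM.accessAPI, Literal("https://example.org/cds-api")))

  # An agent would turn "find temperature datasets" into a query like this one.
  q = """
  SELECT ?ds ?api WHERE {
    ?ds a <http://example.org/climate-kg/Dataset> ;
        <http://example.org/climate-kg/variable> ?v ;
        <http://example.org/climate-kg/accessAPI> ?api .
    FILTER(CONTAINS(STR(?v), "temperature"))
  }"""
  for ds, api in g.query(q):
      print(ds, api)

The generative-AI layer then only has to translate user intent into such queries and chain the returned API endpoints into a workflow.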
KT
Abstract
KT developed a Responsible AI (RAI) assessment methodology and risk mitigation technologies to ensure the safety and reliability of AI services. By analyzing the Basic Act on AI implementation and global AI governance trends, we established a unique approach to regulatory compliance that systematically identifies and manages all potential risk factors from AI development to operation. We present a reliable assessment methodology that systematically verifies model safety and robustness based on KT's AI risk taxonomy tailored to the domestic environment. We also provide practical tools for managing and mitigating identified AI risks. With the release of this report, we also release our proprietary guardrail, SafetyGuard, which blocks harmful responses from AI models in real time, supporting the enhancement of safety in the domestic AI development ecosystem. We believe these research outcomes provide valuable insights for organizations seeking to develop Responsible AI.
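Functionally, a guardrail such as SafetyGuard sits between the model and the user and suppresses responses that a safety classifier flags. KT's implementation is proprietary; the keyword stand-in below only illustrates the real-time interception pattern:

  def safety_score(text):
      # Stand-in for a learned safety classifier; the report describes a
      # lightweight transformer policy network, not this keyword check.
      return 1.0 if "harmful" in text.lower() else 0.0

  def guarded_reply(model_reply, threshold=0.5):
      if safety_score(model_reply) >= threshold:
          return "[blocked by guardrail: response violated safety policy]"
      return model_reply

  print(guarded_reply("Here is a safe answer."))
  print(guarded_reply("Here is a HARMFUL answer."))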
AI Insights
  • The risk taxonomy categorizes threats into data, model, deployment, and societal dimensions, each with measurable indicators.
  • A multi‑stage assessment pipeline integrates static code analysis, adversarial testing, and human‑in‑the‑loop audits to quantify robustness.
  • SafetyGuard employs a lightweight transformer‑based policy network that intercepts outputs in real time, achieving <5 ms latency on edge devices.
  • Compliance mapping aligns each risk factor with specific clauses of the Basic Act on AI, enabling automated audit reports.
  • Pilot deployments in Korean telecom and finance sectors demonstrated a 30 % reduction in policy‑violating incidents after Guardrail integration.
  • The report proposes a future research agenda on explainable mitigation strategies and cross‑border data‑sharing protocols.
AGI: Artificial General Intelligence
Abstract
Safety, trust, and Artificial General Intelligence (AGI) are aspirational goals in artificial intelligence (AI) systems, and there are several informal interpretations of these notions. In this paper, we propose strict, mathematical definitions of safety, trust, and AGI, and demonstrate a fundamental incompatibility between them. We define safety of a system as the property that it never makes any false claims, trust as the assumption that the system is safe, and AGI as the property of an AI system always matching or exceeding human capability. Our core finding is that -- for our formal definitions of these notions -- a safe and trusted AI system cannot be an AGI system: for such a safe, trusted system there are task instances which are easily and provably solvable by a human but not by the system. We note that we consider strict mathematical definitions of safety and trust, and it is possible for real-world deployments to instead rely on alternate, practical interpretations of these notions. We show our results for program verification, planning, and graph reachability. Our proofs draw parallels to Gödel's incompleteness theorems and Turing's proof of the undecidability of the halting problem, and can be regarded as interpretations of Gödel's and Turing's results.
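To convey the flavor of the incompatibility (our informal reconstruction, not the paper's formal proof), let $S$ be a safe system, so $S$ never asserts a false claim, and suppose $S$ is trusted, so its safety may be relied on in reasoning. Consider the self-referential claim

  $$G := \text{``$S$ never asserts $G$.''}$$

If $S$ asserted $G$, then $G$ would be false, contradicting safety; hence $S$ never asserts $G$, and $G$ is therefore true. A human who trusts $S$ can carry out exactly this reasoning and correctly assert $G$, yielding a task instance that the human solves but $S$ provably cannot, which is the shape of the claimed conflict with the AGI property.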
Deep Learning
University of Pennsylvania
Abstract
Given the widespread use of deep learning models in safety-critical applications, ensuring that the decisions of such models are robust against adversarial exploitation is of fundamental importance. In this thesis, we discuss recent progress toward designing algorithms that exhibit desirable robustness properties. First, we discuss the problem of adversarial examples in computer vision, for which we introduce new technical results, training paradigms, and certification algorithms. Next, we consider the problem of domain generalization, wherein the task is to train neural networks to generalize from a family of training distributions to unseen test distributions. We present new algorithms that achieve state-of-the-art generalization in medical imaging, molecular identification, and image classification. Finally, we study the setting of jailbreaking large language models (LLMs), wherein an adversarial user attempts to design prompts that elicit objectionable content from an LLM. We propose new attacks and defenses, which represent the frontier of progress toward designing robust language-based agents.
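The adversarial-examples setting the thesis opens with is easy to make concrete: the fast gradient sign method (FGSM) perturbs an input in the direction of the sign of the loss gradient. A minimal PyTorch sketch with a stand-in classifier, not one of the thesis's own certification algorithms:

  import torch
  import torch.nn as nn

  model = nn.Linear(10, 2)                      # stand-in classifier
  loss_fn = nn.CrossEntropyLoss()

  x = torch.randn(1, 10, requires_grad=True)
  y = torch.tensor([1])

  loss = loss_fn(model(x), y)
  loss.backward()

  eps = 0.1                                     # L-infinity perturbation budget
  x_adv = (x + eps * x.grad.sign()).detach()    # FGSM: one signed-gradient step
  print(loss_fn(model(x_adv), y).item())        # typically larger than loss.item()

Certified defenses, by contrast, bound the worst-case loss over the entire perturbation ball rather than testing one heuristic attack.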
AI Insights
  • Random erasing data augmentation injects stochastic occlusions during training, boosting pixel‑level robustness.
  • Stability training enforces Lipschitz continuity across layers, yielding provable robustness margins.
  • Robust prompt optimization tailors LLM inputs to shrink jailbreak‑induced decision space.
  • Universal adversarial attacks generate a single perturbation that transfers across many inputs, breaking input‑specific defenses.
  • Randomness in SGD can amplify or dampen adversarial vulnerability, depending on learning‑rate schedules.
  • Tooling—automated augmentation pipelines and reproducibility frameworks—drives consistent robustness across labs.
  • “Essentials of Robust Control” links classical control theory to deep learning, providing a rigorous basis for safe neural systems.
Istanbul Medeniyet University
Abstract
Deep learning optimizers are optimization algorithms that enable deep neural networks to learn. The effectiveness of learning is highly dependent on the optimizer employed in the training process. Alongside the rapid advancement of deep learning, a wide range of optimizers with different approaches have been developed. This study aims to provide a review of various optimizers that have been proposed and received attention in the literature. Optimizers are examined individually in chronological order, from stochastic gradient descent and Momentum to recent proposals such as AdamW, Sophia, and Muon, and their distinctive features are highlighted. The update rule of each optimizer is presented in detail, with an explanation of the associated concepts and variables. The techniques applied by these optimizers, their contributions to the optimization process, and their default hyperparameter settings are also discussed. In addition, insights are offered into the open challenges encountered in the optimization of deep learning models. Thus, a comprehensive resource is provided both for understanding the current state of optimizers and for identifying potential areas of future development.
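As an example of the update rules the review presents, AdamW's step at iteration $t$, with gradient $g_t$, learning rate $\eta$, and decoupled weight decay $\lambda$:

  $$m_t = \beta_1 m_{t-1} + (1-\beta_1) g_t, \qquad v_t = \beta_2 v_{t-1} + (1-\beta_2) g_t^2,$$
  $$\hat{m}_t = \frac{m_t}{1-\beta_1^t}, \qquad \hat{v}_t = \frac{v_t}{1-\beta_2^t}, \qquad \theta_t = \theta_{t-1} - \eta \left( \frac{\hat{m}_t}{\sqrt{\hat{v}_t} + \epsilon} + \lambda \theta_{t-1} \right),$$

with common defaults $\beta_1 = 0.9$, $\beta_2 = 0.999$, $\epsilon = 10^{-8}$. Decoupling the $\lambda \theta_{t-1}$ term from the adaptive rescaling is what distinguishes AdamW from Adam with $L_2$ regularization.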

Interests not found

We did not find any papers that match the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Poverty
You can edit or add more interests any time.

Unsubscribe from these updates