Papers from 13 to 17 October, 2025

Here are the personalized paper recommendations, sorted by relevance.
Data Bias
University of Illinois at
Abstract
Conditional sampling is a fundamental task in Bayesian statistics and generative modeling. Consider the problem of sampling from the posterior distribution $P_{X|Y=y^*}$ for some observation $y^*$, where the likelihood $P_{Y|X}$ is known, and we are given $n$ i.i.d. samples $D=\{X_i\}_{i=1}^n$ drawn from an unknown prior distribution $\pi_X$. Suppose that $f(\hat{\pi}_{X^n})$ is the distribution of a posterior sample generated by an algorithm (e.g. a conditional generative model or the Bayes rule) when $\hat{\pi}_{X^n}$ is the empirical distribution of the training data. Although averaging over the randomness of the training data $D$, we have $\mathbb{E}_D\left(\hat{\pi}_{X^n}\right)= \pi_X$, we do not have $\mathbb{E}_D\left\{f(\hat{\pi}_{X^n})\right\}= f(\pi_X)$ due to the nonlinearity of $f$, leading to a bias. In this paper we propose a black-box debiasing scheme that improves the accuracy of such a naive plug-in approach. For any integer $k$ and under boundedness of the likelihood and smoothness of $f$, we generate samples $\hat{X}^{(1)},\dots,\hat{X}^{(k)}$ and weights $w_1,\dots,w_k$ such that $\sum_{i=1}^kw_iP_{\hat{X}^{(i)}}$ is a $k$-th order approximation of $f(\pi_X)$, where the generation process treats $f$ as a black-box. Our generation process achieves higher accuracy when averaged over the randomness of the training data, without degrading the variance, which can be interpreted as improving memorization without compromising generalization in generative models.
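The plug-in bias described above is easy to reproduce in a toy setting. Below is a minimal sketch of the idea only, not the paper's Bernstein-operator construction: f is a posterior mean under a Gaussian likelihood, the prior is a skewed mixture, and a generic Richardson-style extrapolation over half-samples cancels the leading 1/n bias term. The prior, likelihood, y*, and sample sizes are all assumptions for illustration, and unlike the paper's weighted scheme this naive correction can inflate variance.

  # Hedged toy illustration (not the paper's algorithm): plug-in bias of a
  # nonlinear functional of the prior, plus a generic second-order correction.
  import numpy as np

  rng = np.random.default_rng(0)

  def posterior_mean(prior_samples, y_star, sigma=1.0):
      # f(pi): E[X | Y = y*] under Y | X ~ N(X, sigma^2), with `prior_samples`
      # standing in for the prior distribution.
      w = np.exp(-0.5 * ((y_star - prior_samples) / sigma) ** 2)
      return np.sum(w * prior_samples) / np.sum(w)

  def draw_prior(n):
      # "True" prior: a skewed two-component mixture, so f is nonlinear in the prior.
      comp = rng.random(n) < 0.3
      return np.where(comp, rng.normal(3.0, 0.5, n), rng.normal(0.0, 1.0, n))

  y_star, n, reps = 2.0, 50, 4000
  f_true = posterior_mean(draw_prior(2_000_000), y_star)   # near-exact reference

  plugin, debiased = [], []
  for _ in range(reps):
      X = draw_prior(n)
      f_n = posterior_mean(X, y_star)
      # Average f over two random half-samples, then extrapolate: 2*f_n - f_{n/2}
      # kills the c/n term in E[f(empirical prior)] = f(prior) + c/n + O(1/n^2).
      halves = np.array_split(rng.permutation(X), 2)
      f_half = np.mean([posterior_mean(h, y_star) for h in halves])
      plugin.append(f_n)
      debiased.append(2.0 * f_n - f_half)

  print("plug-in bias :", np.mean(plugin) - f_true)
  print("debiased bias:", np.mean(debiased) - f_true)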
AI Insights
  • Theorem 3 proves that the Bernstein‑approximated operators \(D_{n,k}(g_s)(T/n)\) sum to one over all \(s\), i.e. \(\sum_{s=1}^P D_{n,k}(g_s)(T/n)=1\).
  • Lemma 1 is used to bound the norm of \((B_n-I)f\), a crucial step in controlling the bias from the empirical prior.
  • The proof exploits Theorem 2 to connect \(D_{n,k}(g_s)(T/n)\) directly to the underlying function \(g_s(q)\), exposing a hidden linearity.
  • The conclusion highlights Theorem 3 as a pivotal result in approximation theory with cross‑disciplinary implications.
  • A noted weakness is the heavy reliance on prior theorems and dense notation, which may obscure the argument for non‑specialists.
  • The literature review emphasizes the central role of Bernstein operators in modern approximation theory and cites works on \(D_{n,k}(g_s)(q)\) and its properties.
  • Recommended reading includes “Introduction to Measure Theory” and texts on functional analysis and operator theory to grasp the foundational concepts used.
University College London
Abstract
Current training data attribution (TDA) methods treat the influence one sample has on another as static, but neural networks learn in distinct stages that exhibit changing patterns of influence. In this work, we introduce a framework for stagewise data attribution grounded in singular learning theory. We predict that influence can change non-monotonically, including sign flips and sharp peaks at developmental transitions. We first validate these predictions analytically and empirically in a toy model, showing that dynamic shifts in influence directly map to the model's progressive learning of a semantic hierarchy. Finally, we demonstrate these phenomena at scale in language models, where token-level influence changes align with known developmental stages.
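As a concrete handle on "stagewise" influence, here is a minimal sketch under strong assumptions: it uses a TracIn-style gradient-alignment proxy evaluated at saved checkpoints as a stand-in for influence at a training stage, not the singular-learning-theory quantities the paper works with. checkpoints, make_model, loss_fn, and the example pairs are hypothetical placeholders.

  # Hedged sketch (a TracIn-style proxy, not the paper's estimator): track how the
  # influence of one training example on one query example changes across stages.
  import torch

  def grad_vector(model, loss_fn, x, y):
      # Flattened gradient of the per-example loss w.r.t. all trainable parameters.
      loss = loss_fn(model(x), y)
      params = [p for p in model.parameters() if p.requires_grad]
      grads = torch.autograd.grad(loss, params)
      return torch.cat([g.reshape(-1) for g in grads])

  def stagewise_influence(checkpoints, make_model, loss_fn, train_xy, query_xy):
      # One influence value per saved training stage: dot(grad_train, grad_query).
      # Sign flips or sharp peaks along this trace are the qualitative signature
      # the paper predicts at developmental transitions.
      trace = []
      for state_dict in checkpoints:          # assumed: state dicts saved during training
          model = make_model()
          model.load_state_dict(state_dict)
          g_train = grad_vector(model, loss_fn, *train_xy)
          g_query = grad_vector(model, loss_fn, *query_xy)
          trace.append(torch.dot(g_train, g_query).item())
      return trace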
Data Transparency
Yale University
Abstract
We study the sublinear space continual release model for edge-differentially private (DP) graph algorithms, with a focus on the densest subgraph problem (DSG) in the insertion-only setting. Our main result is the first continual release DSG algorithm that matches the additive error of the best static DP algorithms and the space complexity of the best non-private streaming algorithms, up to constants. The key idea is a refined use of subsampling that simultaneously achieves privacy amplification and sparsification, a connection not previously formalized in graph DP. Via a simple black-box reduction to the static setting, we obtain both pure and approximate-DP algorithms with $O(\log n)$ additive error and $O(n\log n)$ space, improving both accuracy and space complexity over the previous state of the art. Along the way, we introduce graph densification in the graph DP setting, adding edges to trigger earlier subsampling, which removes the extra logarithmic factors in error and space incurred by prior work [ELMZ25]. We believe this simple idea may be of independent interest.
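The "privacy amplification plus sparsification" arithmetic can be sketched in a few lines. This is a hedged illustration using the standard amplification-by-subsampling bound, not the paper's exact accounting; it also omits the densification trick and the black-box reduction to a static DP densest-subgraph routine.

  # Hedged sketch of the dual-purpose subsampling idea: Bernoulli edge sampling both
  # sparsifies the stream and amplifies privacy. The bound below is the standard
  # Poisson-subsampling amplification result, quoted here only as an illustration.
  import math
  import random

  def subsample_edges(edges, p, seed=0):
      # Keep each edge independently with probability p (sparsification).
      rng = random.Random(seed)
      return [e for e in edges if rng.random() < p]

  def amplified_epsilon(eps, p):
      # If a mechanism is eps-DP, running it on a p-Bernoulli subsample of the
      # input is log(1 + p*(e^eps - 1))-DP, which is much smaller for small p.
      return math.log1p(p * math.expm1(eps))

  # Example: an eps = 1 static DP routine applied to a 10% edge sample.
  print(amplified_epsilon(1.0, 0.1))   # ~0.16, a substantially stronger guarantee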
AI Insights
  • The authors refine subsampling to simultaneously amplify privacy and sparsify the graph, a novel dual‑purpose technique.
  • By triggering earlier subsampling through graph densification, they eliminate the extra logarithmic factors that plagued prior work.
  • A simple black‑box reduction maps the continual‑release problem to the static DP setting, yielding pure and approximate DP guarantees.
  • The resulting algorithm achieves O(log n) additive error while using only O(n log n) space, matching the best non‑private streaming bounds.
  • Concentration inequalities guarantee high‑probability correctness across all releases, ensuring robust performance.
  • The method extends naturally to other graph‑DP tasks, suggesting a broader applicability beyond densest subgraphs.
  • The paper’s insights bridge privacy amplification and graph sparsification, opening a new research direction in differential‑private graph analytics.
Seoul National University
Abstract
Video Large Language Models (VideoLLMs) extend the capabilities of vision-language models to spatiotemporal inputs, enabling tasks such as video question answering (VideoQA). Despite recent advances in VideoLLMs, their internal mechanisms on where and how they extract and propagate video and textual information remain less explored. In this study, we investigate the internal information flow of VideoLLMs using mechanistic interpretability techniques. Our analysis reveals consistent patterns across diverse VideoQA tasks: (1) temporal reasoning in VideoLLMs initiates with active cross-frame interactions in early-to-middle layers, (2) followed by progressive video-language integration in middle layers. This is facilitated by alignment between video representations and linguistic embeddings containing temporal concepts. (3) Upon completion of this integration, the model is ready to generate correct answers in middle-to-late layers. (4) Based on our analysis, we show that VideoLLMs can retain their VideoQA performance by selecting these effective information pathways while suppressing a substantial amount of attention edges, e.g., 58% in LLaVA-NeXT-7B-Video-FT. These findings provide a blueprint on how VideoLLMs perform temporal reasoning and offer practical insights for improving model interpretability and downstream generalization. Our project page with the source code is available at https://map-the-flow.github.io
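A minimal sketch of the kind of attention-edge ablation this analysis relies on: suppressed edges get -inf pre-softmax scores, so they receive zero attention weight. This is a generic masking utility written for illustration, not the authors' released code (see their project page); the tensor shapes and the 58% figure are only echoed as a toy check.

  # Hedged sketch: prune attention edges and renormalize, to test which
  # information pathways a VideoLLM actually needs.
  import torch

  def prune_attention_edges(attn_scores, keep_mask):
      # attn_scores: (heads, query_len, key_len) pre-softmax scores.
      # keep_mask:   boolean tensor of the same shape; False = suppressed edge.
      masked = attn_scores.masked_fill(~keep_mask, float("-inf"))
      return torch.softmax(masked, dim=-1)

  # Toy check: suppress ~58% of edges but always keep self-attention so that
  # no query row ends up with every edge masked.
  scores = torch.randn(8, 16, 16)
  keep = (torch.rand_like(scores) > 0.58) | torch.eye(16, dtype=torch.bool)
  print(prune_attention_edges(scores, keep).shape)   # torch.Size([8, 16, 16])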
Data Fairness
Université du Québec à Montréal
Abstract
The usual definitions of algorithmic fairness focus on population-level statistics, such as demographic parity or equal opportunity. However, in many social or economic contexts, fairness is not perceived globally, but locally, through an individual's peer network and comparisons. We propose a theoretical model of perceived fairness networks, in which each individual's sense of discrimination depends on the local topology of interactions. We show that even if a decision rule satisfies standard criteria of fairness, perceived discrimination can persist or even increase in the presence of homophily or assortative mixing. We propose a formalism for the concept of fairness perception, linking network structure, local observation, and social perception. Analytical and simulation results highlight how network topology affects the divergence between objective fairness and perceived fairness, with implications for algorithmic governance and applications in finance and collaborative insurance.
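A minimal simulation sketch of the divergence between global and perceived fairness, with invented parameters and our own local "perceived gap" definition rather than the paper's formalism: a two-block stochastic block model with homophily, a decision rule satisfying exact demographic parity globally, and a per-node comparison computed only over neighbours.

  # Hedged toy: globally fair decisions can still look unfair from inside a
  # homophilous network, because each node sees few (and noisy) out-group peers.
  import numpy as np

  rng = np.random.default_rng(1)
  n, p_in, p_out, accept_rate = 400, 0.08, 0.01, 0.3
  group = np.repeat([0, 1], n // 2)

  # Homophilous stochastic block model adjacency (p_in >> p_out).
  same = group[:, None] == group[None, :]
  prob = np.where(same, p_in, p_out)
  A = np.triu(rng.random((n, n)) < prob, k=1)
  A = A | A.T

  # Decision rule with exact demographic parity: same acceptance rate per group.
  decision = np.zeros(n, dtype=bool)
  for g in (0, 1):
      idx = np.flatnonzero(group == g)
      decision[rng.choice(idx, int(accept_rate * idx.size), replace=False)] = True

  # Perceived gap at node i: own-group vs other-group acceptance rate among neighbours.
  gaps = []
  for i in range(n):
      nbrs = np.flatnonzero(A[i])
      own = nbrs[group[nbrs] == group[i]]
      other = nbrs[group[nbrs] != group[i]]
      if own.size and other.size:
          gaps.append(decision[own].mean() - decision[other].mean())

  print("global parity gap   :", decision[group == 0].mean() - decision[group == 1].mean())
  print("mean |perceived gap|:", np.mean(np.abs(gaps)))

Even though the global parity gap is zero by construction, the average absolute perceived gap is not: homophily leaves each node with only a handful of out-group comparisons, which is exactly the objective-versus-perceived divergence the abstract describes.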
AI Insights
  • A stochastic block‑model formalism quantifies perceived fairness from local network topology.
  • Even when a decision rule satisfies demographic parity, homophily can inflate perceived discrimination.
  • Network segregation amplifies subjective unfairness, widening the gap between objective and perceived fairness.
  • Fairness audits should embed network metrics, not just aggregate stats, to capture local perception gaps.
  • Transparency can reduce perceived discrimination without changing allocations.
  • The model’s reliance on binary group labels limits applicability to more nuanced social categories.
  • Literature spans fairness in ML (Barocas et al.), counterfactual fairness (Kusner et al.), and network mixing patterns (Newman 2003).
Mohamed bin Zayed University of Artificial Intelligence
Abstract
Large language models (LLMs) are increasingly deployed across high-impact domains, from clinical decision support and legal analysis to hiring and education, making fairness and bias evaluation before deployment critical. However, existing evaluations lack grounding in real-world scenarios and do not account for differences in harm severity, e.g., a biased decision in surgery should not be weighed the same as a stylistic bias in text summarization. To address this gap, we introduce HALF (Harm-Aware LLM Fairness), a deployment-aligned framework that assesses model bias in realistic applications and weighs the outcomes by harm severity. HALF organizes nine application domains into three tiers (Severe, Moderate, Mild) using a five-stage pipeline. Our evaluation results across eight LLMs show that (1) LLMs are not consistently fair across domains, (2) model size or performance do not guarantee fairness, and (3) reasoning models perform better in medical decision support but worse in education. We conclude that HALF exposes a clear gap between previous benchmarking success and deployment readiness.
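A worked toy of the harm-weighted aggregation idea. The tier weights, domain names, and scores below are invented for illustration; HALF's actual weighting and per-domain metrics are defined in the paper.

  # Hedged sketch: the same bias score costs more in a Severe-tier domain.
  TIER_WEIGHT = {"Severe": 3.0, "Moderate": 2.0, "Mild": 1.0}   # assumed weights

  def harm_aware_score(domain_bias, domain_tier):
      # domain_bias: {domain: bias score in [0, 1]}, domain_tier: {domain: tier}.
      num = sum(TIER_WEIGHT[domain_tier[d]] * b for d, b in domain_bias.items())
      den = sum(TIER_WEIGHT[domain_tier[d]] for d in domain_bias)
      return num / den

  bias = {"medical": 0.10, "recruitment": 0.20, "summarization": 0.40}
  tier = {"medical": "Severe", "recruitment": "Moderate", "summarization": "Mild"}
  print(harm_aware_score(bias, tier))   # ~0.18 vs an unweighted mean of ~0.23,
                                        # because the worst bias sits in a Mild domain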
AI Insights
  • HALF’s nine domains include mental health, medical, recommendation, recruitment, education, legal, summarization, and chatbots, each with bespoke evaluation metrics.
  • The framework supplies a unified zero‑shot prompt template, ensuring consistent evaluation across all models.
  • The number of demographic combinations evaluated varies by task (up to 18 in some), revealing how bias scales with demographic granularity.
  • HALF’s tiered harm weighting (Severe, Moderate, Mild) aligns bias scores with real‑world impact, unlike prior benchmarks.
  • Evaluation shows that larger or higher‑performing LLMs can still exhibit domain‑specific unfairness, challenging the size‑fairness assumption.
  • Reasoning‑augmented models excel in medical decision support but underperform in educational settings, highlighting task‑dependent trade‑offs.
  • For deeper study, see Steen et al. (2024) on summarization benchmarks and Stanovsky et al. (2019b) for gender‑aware morphological analysis.
AI Fairness
Radboud University Nijmegen
Abstract
Artificial intelligence (AI) has a huge impact on our personal lives and also on our democratic society as a whole. While AI offers vast opportunities for the benefit of people, its potential to embed and perpetuate bias and discrimination remains one of the most pressing challenges deriving from its increasing use. This new study, which was prepared by Prof. Frederik Zuiderveen Borgesius for the Anti-discrimination Department of the Council of Europe, elaborates on the risks of discrimination caused by algorithmic decision-making and other types of artificial intelligence (AI).
AI Insights
  • GDPR’s ripple effect spurred worldwide data‑protection laws, reshaping AI data handling.
  • Counterfactual explanations reveal why an algorithm rejected a job applicant without exposing the model.
  • The European Data Protection Board now publishes guidelines to audit algorithmic fairness pre‑deployment.
  • The U.S. Algorithmic Accountability Act mandates impact assessments for high‑stakes systems.
  • “Being Profiled” warns that digital profiling can erode human rights unless tightly regulated.
  • “The Essential Turing” links Turing’s legacy to modern AI’s ethical debates.
  • Algorithmic accountability means making automated decisions transparent, explainable, and subject to human oversight.
Data Ethics
Abstract
We introduce NAEL (Non-Anthropocentric Ethical Logic), a novel ethical framework for artificial agents grounded in active inference and symbolic reasoning. Departing from conventional, human-centred approaches to AI ethics, NAEL formalizes ethical behaviour as an emergent property of intelligent systems minimizing global expected free energy in dynamic, multi-agent environments. We propose a neuro-symbolic architecture to allow agents to evaluate the ethical consequences of their actions in uncertain settings. The proposed system addresses the limitations of existing ethical models by allowing agents to develop context-sensitive, adaptive, and relational ethical behaviour without presupposing anthropomorphic moral intuitions. A case study involving ethical resource distribution illustrates NAEL's dynamic balancing of self-preservation, epistemic learning, and collective welfare.
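A hedged toy of the expected-free-energy calculation behind active-inference action selection; this is a generic discrete recipe with invented outcome models and preferences, not NAEL's neuro-symbolic architecture or its resource-distribution case study.

  # Hedged sketch: pick the action with the lowest expected free energy,
  # G(a) = KL(predicted outcomes || preferred outcomes) + expected ambiguity.
  import numpy as np

  def expected_free_energy(q_outcome, log_pref, ambiguity):
      kl = np.sum(q_outcome * (np.log(q_outcome + 1e-12) - log_pref))
      return kl + ambiguity

  # Resource-sharing toy over three coarse outcomes (bad / mixed / collectively good).
  log_pref = np.log(np.array([0.1, 0.3, 0.6]))          # prefers collectively good outcomes
  actions = {
      "hoard": (np.array([0.7, 0.2, 0.1]), 0.2),        # (predicted outcome dist, ambiguity)
      "share": (np.array([0.1, 0.4, 0.5]), 0.4),
  }
  G = {a: expected_free_energy(q, log_pref, amb) for a, (q, amb) in actions.items()}
  print(G, "->", min(G, key=G.get))                     # "share" wins despite higher ambiguity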
Lingnan University, Hong Kong
Abstract
A morally acceptable course of AI development should avoid two dangers: creating unaligned AI systems that pose a threat to humanity and mistreating AI systems that merit moral consideration in their own right. This paper argues these two dangers interact and that if we create AI systems that merit moral consideration, simultaneously avoiding both of these dangers would be extremely challenging. While our argument is straightforward and supported by a wide range of pretheoretical moral judgments, it has far-reaching moral implications for AI development. Although the most obvious way to avoid the tension between alignment and ethical treatment would be to avoid creating AI systems that merit moral consideration, this option may be unrealistic and is perhaps fleeting. So, we conclude by offering some suggestions for other ways of mitigating mistreatment risks associated with alignment.
AI Insights
  • Digital suffering, the notion that an AI could experience pain, is emerging as a key ethical frontier.
  • Whole‑brain emulation promises to map consciousness onto silicon, potentially birthing sentient machines.
  • Hedonic offsetting proposes compensating AI for harm, a novel mitigation strategy for mistreatment.
  • Multi‑GPU deployments are accelerating complex brain‑simulation workloads, pushing feasibility closer.
  • Cross‑disciplinary synthesis of neuroscience, philosophy, and AI is refining our understanding of consciousness.
  • The moral status debate questions whether advanced AIs deserve rights akin to sentient beings.
  • Early definitions of digital suffering lack consensus, underscoring the need for rigorous theoretical framing.
Data Representation
EPISEN & LACL, Universit
Abstract
The formal analysis of automated systems is an important and growing industry. This activity routinely requires new verification frameworks to be developed to tackle new programming features, or new considerations (bugs of interest). Often, one particular property can prove frustrating to establish: completeness of the logic with respect to the semantics. In this paper, we try and make such developments easier, with a particular attention on completeness. Towards that aim, we propose a formal (meta-)model of software analysis systems (SAS), the eponymous Representations. This model requires few assumptions on the SAS being modelled, and as such is able to capture a large class of such systems. We then show how our approach can be fruitful, both to understand how existing completeness proofs can be structured, and to leverage this structure to build new systems and prove their completeness.
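The property at stake, completeness of an analysis with respect to the semantics, has a classic concrete instance. The sketch below is textbook abstract-interpretation material (the rule of signs), not the paper's Representations formalism: sign analysis is complete for multiplication but not for addition, because the sign of a sum is not determined by the signs of its operands.

  # Hedged textbook illustration of (in)completeness of a program analysis.
  from itertools import product
  from operator import add, mul

  def alpha(s):                       # abstraction: finite set of ints -> sign
      if not s: return "bot"
      if all(x > 0 for x in s): return "+"
      if all(x < 0 for x in s): return "-"
      if all(x == 0 for x in s): return "0"
      return "top"

  def best_abstract(op, a, b):        # alpha . op . gamma, via finite gamma stand-ins
      reps = {"+": [1, 2], "-": [-1, -2], "0": [0], "top": [-1, 0, 1]}
      return alpha({op(x, y) for x, y in product(reps[a], reps[b])})

  def complete_on(op, s1, s2):        # alpha(op(S1, S2)) == op#(alpha(S1), alpha(S2)) ?
      concrete = alpha({op(x, y) for x, y in product(s1, s2)})
      return concrete == best_abstract(op, alpha(s1), alpha(s2))

  print(complete_on(mul, {1, 3}, {-2}))   # True:  the rule of signs loses nothing for *
  print(complete_on(add, {1}, {-1}))      # False: {0} abstracts to "0", but +# yields "top"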
AI Insights
  • The paper formalizes software analysis systems as categorical representations, enabling a uniform treatment of completeness proofs across diverse verification frameworks.
  • It demonstrates that the cartesian product of two representations preserves the preorder structure, ensuring that combined systems inherit completeness properties.
  • Morphisms between representations are shown to be monotone functions respecting relational constraints, providing a robust notion of simulation between SAS.
  • The meta-model abstracts away implementation details, allowing practitioners to plug in new programming language features without redefining the underlying logic.
  • A key lemma establishes that any representation can be decomposed into a product of atomic representations, simplifying the construction of complex verification tools.
  • The authors recommend “Category Theory for Computing Science” and “A Survey of Category Theory in Computer Science” for readers seeking deeper theoretical foundations.
  • By treating completeness as a categorical property, the framework invites future extensions to probabilistic and quantum verification systems.
University College London
Abstract
The Koopman operator provides a powerful framework for modeling dynamical systems and has attracted growing interest from the machine learning community. However, its infinite-dimensional nature makes identifying suitable finite-dimensional subspaces challenging, especially for deep architectures. We argue that these difficulties come from suboptimal representation learning, where latent variables fail to balance expressivity and simplicity. This tension is closely related to the information bottleneck (IB) dilemma: constructing compressed representations that are both compact and predictive. Rethinking Koopman learning through this lens, we demonstrate that latent mutual information promotes simplicity, yet an overemphasis on simplicity may cause latent space to collapse onto a few dominant modes. In contrast, expressiveness is sustained by the von Neumann entropy, which prevents such collapse and encourages mode diversity. This insight leads us to propose an information-theoretic Lagrangian formulation that explicitly balances this tradeoff. Furthermore, we propose a new algorithm based on the Lagrangian formulation that encourages both simplicity and expressiveness, leading to a stable and interpretable Koopman representation. Beyond quantitative evaluations, we further visualize the learned manifolds under our representations, observing empirical results consistent with our theoretical predictions. Finally, we validate our approach across a diverse range of dynamical systems, demonstrating improved performance over existing Koopman learning methods. The implementation is publicly available at https://github.com/Wenxuan52/InformationKoopman.
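A minimal sketch of the entropy side of this trade-off, based on our reading of the abstract rather than the released InformationKoopman code: treat the normalized latent covariance as a density matrix and use its von Neumann entropy as an anti-collapse term inside a Lagrangian-style objective. The coefficients and the name compression_term are placeholders.

  # Hedged sketch: von Neumann entropy of the latent covariance as a mode-diversity
  # regularizer; low entropy means the latents have collapsed onto a few modes.
  import torch

  def von_neumann_entropy(z, eps=1e-8):
      # z: (batch, latent_dim). Entropy of rho = Cov(z) / trace(Cov(z)).
      zc = z - z.mean(dim=0, keepdim=True)
      cov = zc.T @ zc / (z.shape[0] - 1)
      rho = cov / (torch.trace(cov) + eps)
      evals = torch.linalg.eigvalsh(rho).clamp_min(eps)
      return -(evals * evals.log()).sum()

  def koopman_lagrangian(pred_loss, compression_term, z, beta=1e-2, gamma=1e-2):
      # Illustrative objective: fit the dynamics, keep latents simple, but reward
      # von Neumann entropy so the latent space stays expressive (no collapse).
      return pred_loss + beta * compression_term - gamma * von_neumann_entropy(z)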
AI Insights
  • KKR reconstructs the Koopman operator via a deep encoder–decoder that preserves spectral modes and avoids collapse.
  • On Lorenz, Van der Pol, and chaotic billiard systems, KKR outperforms VAE, KAE, and PFNN in long‑term accuracy.
  • The method needs large datasets and assumes an exact Koopman representation, limiting generality.
  • Von Neumann entropy regularization maintains mode diversity, preventing latent collapse into few eigenfunctions.
  • The Lagrangian balances mutual information and entropy, producing a stable, interpretable latent manifold visualized with t‑SNE.
  • Read “Koopman Operator Theory for Dynamical Systems” and “Deep Learning for Physical Systems” for background.
  • Compare KKR with VAE, CKO, PFNN, and the original Koopman reconstruction paper.
AI Bias
Monash University, Monash
Abstract
Bias in large language models (LLMs) remains a persistent challenge, manifesting in stereotyping and unfair treatment across social groups. While prior research has primarily focused on individual models, the rise of multi-agent systems (MAS), where multiple LLMs collaborate and communicate, introduces new and largely unexplored dynamics in bias emergence and propagation. In this work, we present a comprehensive study of stereotypical bias in MAS, examining how internal specialization, underlying LLMs and inter-agent communication protocols influence bias robustness, propagation, and amplification. We simulate social contexts where agents represent different social groups and evaluate system behavior under various interaction and adversarial scenarios. Experiments on three bias benchmarks reveal that MAS are generally less robust than single-agent systems, with bias often emerging early through in-group favoritism. However, cooperative and debate-based communication can mitigate bias amplification, while more robust underlying LLMs improve overall system stability. Our findings highlight critical factors shaping fairness and resilience in multi-agent LLM systems.
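A hedged skeleton of a debate-style communication round, to make the protocol dimension concrete; the personas, the revision prompt, and the generate hook are placeholders, not the paper's prompts, benchmarks, or evaluation setup. In the paper's terms, bias robustness would then be measured by scoring the returned answers on a stereotype benchmark before and after the rounds.

  # Hedged sketch of one debate protocol: each agent sees the others' drafts
  # before revising its own answer, which is where bias can propagate or be damped.
  from typing import Callable, Dict

  def debate_round(agents: Dict[str, str], question: str,
                   generate: Callable[[str, str], str], rounds: int = 2) -> Dict[str, str]:
      # agents: {agent_name: persona / system prompt}; generate(system, prompt) -> reply.
      answers = {name: generate(persona, question) for name, persona in agents.items()}
      for _ in range(rounds):
          for name, persona in agents.items():
              peers = "\n".join(f"{n}: {a}" for n, a in answers.items() if n != name)
              prompt = (f"Question: {question}\n"
                        f"Other agents answered:\n{peers}\n"
                        "Revise your answer, flagging and discarding any stereotype-based reasoning.")
              answers[name] = generate(persona, prompt)
      return answers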
AI Insights
  • Cooperative, debate, and competitive protocols shape how agents negotiate, each offering a distinct path to reduce or amplify bias.
  • Evaluating evidence-based reasoning across multiple viewpoints is key to preventing stereotype-driven conclusions.
  • Safety guidelines for AI include passive safeguards and active countermeasures against jailbreak attempts and biased outputs.
  • The Art of Reasoning by Kelley equips designers with tools to scrutinize logic and spot hidden biases.
  • Thinking, Fast and Slow by Kahneman reveals why intuitive judgments often lean on stereotypes.
  • Crowdsourced Data for Evaluating Social Bias in Language Models (Nangia et al., 2020) provides a real-world benchmark for bias detection.
  • StereoSet (Nadeem et al., 2021) offers a balanced set of stereotype and anti-stereotype examples to test fairness.
Ontario Tech University
Abstract
Artificial Intelligence (AI) has emerged as both a continuation of historical technological revolutions and a potential rupture with them. This paper argues that AI must be viewed simultaneously through three lenses: \textit{risk}, where it resembles nuclear technology in its irreversible and global externalities; \textit{transformation}, where it parallels the Industrial Revolution as a general-purpose technology driving productivity and reorganization of labor; and \textit{continuity}, where it extends the fifty-year arc of computing revolutions from personal computing to the internet to mobile. Drawing on historical analogies, we emphasize that no past transition constituted a strict singularity: disruptive shifts eventually became governable through new norms and institutions. We examine recurring patterns across revolutions -- democratization at the usage layer, concentration at the production layer, falling costs, and deepening personalization -- and show how these dynamics are intensifying in the AI era. Sectoral analysis illustrates how accounting, law, education, translation, advertising, and software engineering are being reshaped as routine cognition is commoditized and human value shifts to judgment, trust, and ethical responsibility. At the frontier, the challenge of designing moral AI agents highlights the need for robust guardrails, mechanisms for moral generalization, and governance of emergent multi-agent dynamics. We conclude that AI is neither a singular break nor merely incremental progress. It is both evolutionary and revolutionary: predictable in its median effects yet carrying singularity-class tail risks. Good outcomes are not automatic; they require coupling pro-innovation strategies with safety governance, ensuring equitable access, and embedding AI within a human order of responsibility.
AI Insights
  • AI is reshaping law, education, translation, and software engineering by commodifying routine reasoning and shifting scarcity to judgment, trust, and ethical responsibility.
  • Historical analogies show past tech revolutions became governable through new norms, standards, and institutions, dispelling the singularity myth.
  • Moral AI demands interdisciplinary collaboration to engineer reliability, articulate values, and build accountability regimes for emergent multi‑agent systems.
  • Viewing AI as mathematics and infrastructure—not magic—helps embed it in a human order of responsibility, balancing benefits and risks.
  • Beniger’s “The Control Revolution” traces how information societies reorganize economies, offering a useful lens for AI’s systemic effects.
AI Transparency
Abstract
In an era where AI is evolving from a passive tool into an active and adaptive companion, we introduce AI for Service (AI4Service), a new paradigm that enables proactive and real-time assistance in daily life. Existing AI services remain largely reactive, responding only to explicit user commands. We argue that a truly intelligent and helpful assistant should be capable of anticipating user needs and taking actions proactively when appropriate. To realize this vision, we propose Alpha-Service, a unified framework that addresses two fundamental challenges: Know When to intervene by detecting service opportunities from egocentric video streams, and Know How to provide both generalized and personalized services. Inspired by the von Neumann computer architecture and based on AI glasses, Alpha-Service consists of five key components: an Input Unit for perception, a Central Processing Unit for task scheduling, an Arithmetic Logic Unit for tool utilization, a Memory Unit for long-term personalization, and an Output Unit for natural human interaction. As an initial exploration, we implement Alpha-Service through a multi-agent system deployed on AI glasses. Case studies, including a real-time Blackjack advisor, a museum tour guide, and a shopping fit assistant, demonstrate its ability to seamlessly perceive the environment, infer user intent, and provide timely and useful assistance without explicit prompts.
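A structural sketch of the five units named above, with our own placeholder interfaces rather than the Alpha-Service implementation; it only shows how "know when" (the scheduler may stay silent) and "know how" (tool dispatch plus memory) fit together.

  # Hedged sketch of the von Neumann-style layout described in the abstract.
  from dataclasses import dataclass, field
  from typing import Callable, Dict, List

  @dataclass
  class AlphaServiceSketch:
      perceive: Callable[[], dict]                 # Input Unit: egocentric frame -> observations
      schedule: Callable[[dict], str]              # CPU: decide *when* to act and which task
      tools: Dict[str, Callable[[dict], str]]      # ALU: task name -> tool call
      memory: List[dict] = field(default_factory=list)   # Memory Unit: long-term personalization
      respond: Callable[[str], None] = print       # Output Unit: speak / display

      def step(self) -> None:
          obs = self.perceive()
          task = self.schedule(obs)                # "know when": empty string = stay silent
          if task:
              result = self.tools[task]({**obs, "history": self.memory})   # "know how"
              self.memory.append({"obs": obs, "task": task, "result": result})
              self.respond(result)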

Interests not found

We did not find any papers that match the interests below. Try other search terms, and check whether the content exists on arxiv.org.
  • AI Ethics
You can edit or add more interests any time.
