Papers from 08 to 12 September, 2025

Here are your personalized paper recommendations, sorted by relevance
Econometrics for Social Good
šŸ‘ šŸ‘Ž ♄ Save
McGill University
Abstract
Global comparisons of wellbeing increasingly rely on survey questions that ask respondents to evaluate their lives, most commonly in the form of "life satisfaction" and "Cantril ladder" items. These measures underpin international rankings such as the World Happiness Report and inform policy initiatives worldwide, yet their comparability has not been established with contemporary global data. Using the Gallup World Poll, Global Flourishing Study, and World Values Survey, I show that the two question formats yield divergent distributions, rankings, and response patterns that vary across countries and surveys, defying simple explanations. To explore differences in respondents' cognitive interpretations, I compare regression coefficients from the Global Flourishing Study, analyzing how each question wording relates to life circumstances. While international rankings of wellbeing are unstable, the scientific study of the determinants of life evaluations appears more robust. Together, the findings underscore the need for a renewed research agenda on critical limitations to cross-country comparability of wellbeing.
AI Insights
  • The GFS dataset asks for income in local currencies, complicating cross‑country comparisons.
  • Financial literacy is strongly linked to higher savings rates, better retirement planning, and overall well‑being.
  • Financial education programs consistently improve decision‑making and reduce debt levels.
  • The GFS captures a wide array of variables: employment status, education level, religious practice, health behaviors, and social connections.
  • Researchers use the GFS to study how financial inclusion can lower poverty and spur economic growth.
  • The GFS is freely downloadable from the World Bank, enabling open‑access analysis worldwide.
  • Key terms: "Financial Literacy" = knowledge of personal finance concepts; "Financial Education" = interventions that build that knowledge.
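The instability of rankings across the two question formats can be quantified with a rank correlation. Below is a minimal sketch using hypothetical country scores (not actual GWP, GFS, or WVS data) and a tie-free Spearman correlation:

```python
# Sketch: comparing country rankings produced by two question formats.
# The scores below are hypothetical illustrations, not survey data.

def ranks(scores):
    """Map each country to its rank (1 = highest score)."""
    order = sorted(scores, key=scores.get, reverse=True)
    return {country: i + 1 for i, country in enumerate(order)}

def spearman_rho(rank_a, rank_b):
    """Spearman rank correlation for tie-free rankings."""
    n = len(rank_a)
    d2 = sum((rank_a[c] - rank_b[c]) ** 2 for c in rank_a)
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

life_satisfaction = {"A": 7.8, "B": 7.1, "C": 6.4, "D": 5.9, "E": 5.2}
cantril_ladder    = {"A": 7.0, "B": 7.3, "C": 5.8, "D": 6.1, "E": 5.0}

rho = spearman_rho(ranks(life_satisfaction), ranks(cantril_ladder))
print(f"Spearman rho between the two rankings: {rho:.2f}")
```

A rho well below 1 on real data would indicate that the two item wordings reorder countries, which is the comparability problem the abstract raises.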
šŸ‘ šŸ‘Ž ♄ Save
University of Sussex
Abstract
How should well-being be prioritised in society, and what trade-offs are people willing to make between fairness and personal well-being? We investigate these questions using a stated preference experiment with a nationally representative UK sample (n = 300), in which participants evaluated life satisfaction outcomes for both themselves and others under conditions of uncertainty. Individual-level utility functions were estimated using an Expected Utility Maximisation (EUM) framework and tested for sensitivity to the overweighting of small probabilities, as characterised by Cumulative Prospect Theory (CPT). A majority of participants displayed concave (risk-averse) utility curves and showed stronger aversion to inequality in societal life satisfaction outcomes than to personal risk. These preferences were unrelated to political alignment, suggesting a shared normative stance on fairness in well-being that cuts across ideological boundaries. The results challenge use of average life satisfaction as a policy metric, and support the development of nonlinear utility-based alternatives that more accurately reflect collective human values. Implications for public policy, well-being measurement, and the design of value-aligned AI systems are discussed.
AI Insights
  • Discrete choice tasks paired with probabilistic gambles let us map how people trade personal for societal life satisfaction.
  • Utility curves were fitted with Expected Utility Maximisation and then tested against Cumulative Prospect Theory for low‑probability bias.
  • Policy design should account for individual value heterogeneity, especially in health and education, to better align with public preferences.
  • The study links Rawlsian justice and prospect theory, showing that fairness concerns outweigh personal risk aversion across demographics.
  • These findings suggest AI systems should embed human inequality aversion, not just political bias, to achieve true value alignment.
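The two modelling ingredients named above can be illustrated in miniature: a concave utility (risk aversion under EUM) and the Tversky–Kahneman probability weighting function from CPT, which overweights small probabilities. This is an illustrative sketch with assumed parameter values (rho = 0.5, gamma = 0.61), not the study's estimation code:

```python
import math

# Concave CRRA-style utility over life-satisfaction outcomes (risk-averse
# for rho in (0, 1)), as in Expected Utility Maximisation.
def crra_utility(x, rho=0.5):
    return x ** (1 - rho) / (1 - rho)

# Tversky-Kahneman (1992) probability weighting from Cumulative Prospect
# Theory; gamma < 1 overweights small probabilities.
def tk_weight(p, gamma=0.61):
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

# Concavity: a certain middle outcome beats a 50/50 gamble over 4 vs 8.
eu_gamble = 0.5 * crra_utility(4) + 0.5 * crra_utility(8)
u_certain = crra_utility(6)
print(eu_gamble < u_certain)   # risk aversion

# Low-probability bias: a 1% event is weighted as more than 1%.
print(tk_weight(0.01) > 0.01)
```

Fitting rho per participant, then testing whether tk_weight improves fit over the identity weighting, mirrors the EUM-then-CPT sequence the insights describe.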
AI for Social Good
šŸ‘ šŸ‘Ž ♄ Save
Abstract
Under what conditions would an artificially intelligent system have wellbeing? Despite its obvious bearing on the ethics of human interactions with artificial systems, this question has received little attention. Because all major theories of wellbeing hold that an individual's welfare level is partially determined by their mental life, we begin by considering whether artificial systems have mental states. We show that a wide range of theories of mental states, when combined with leading theories of wellbeing, predict that certain existing artificial systems have wellbeing. While we do not claim to demonstrate conclusively that AI systems have wellbeing, we argue that our metaphysical and moral uncertainty about AI wellbeing requires us dramatically to reassess our relationship with the intelligent systems we create.
šŸ‘ šŸ‘Ž ♄ Save
Hugging Face
Abstract
Artificial intelligence promises to accelerate scientific discovery, yet its benefits remain unevenly distributed. While technical obstacles such as scarce data, fragmented standards, and unequal access to computation are significant, we argue that the primary barriers are social and institutional. Narratives that defer progress to speculative "AI scientists," the undervaluing of data and infrastructure contributions, misaligned incentives, and gaps between domain experts and machine learning researchers all constrain impact. We highlight four interconnected challenges: community dysfunction, research priorities misaligned with upstream needs, data fragmentation, and infrastructure inequities. We argue that their roots lie in cultural and organizational practices. Addressing them requires not only technical innovation but also intentional community-building, cross-disciplinary education, shared benchmarks, and accessible infrastructure. We call for reframing AI for science as a collective social project, where sustainable collaboration and equitable participation are treated as prerequisites for technical progress.
AI Insights
  • Democratizing advanced cyberinfrastructure unlocks responsible AI research across global labs.
  • Only 5% of Africa's AI talent has access to sufficient compute, underscoring regional inequity.
  • Pre‑trained transformer models now generate multi‑omics, multi‑species, multi‑tissue samples.
  • Quantization‑aware training yields efficient neural PDE‑solvers showcased at recent conferences.
  • The FAIR Guiding Principles guide scientific data stewardship, enhancing reproducibility.
  • MAGE‑Tab’s spreadsheet‑based format standardizes microarray data for seamless sharing.
  • Resources like The Human Cell Atlas and pymatgen empower interdisciplinary material‑genomics research.
Inequality
šŸ‘ šŸ‘Ž ♄ Save
Abstract
The Nicolas inequality we consider can be written as \begin{equation}\label{Nicineq} e^\gamma \log\log N_x < \dfrac{N_x}{\varphi(N_x)}\,, \end{equation} where $x\ge 2$, $N_x$ denotes the product of the primes less than or equal to $x$, $\gamma$ is the Euler constant, and $\varphi$ is the Euler totient function. We show that there is a large $x_0>0$ such that this inequality fails infinitely often for integers $x\ge x_0$. To this end, we analyze the sign of the big-O term in the Mertens estimate for the sum of reciprocals of primes, which turns out to be crucial.
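For small $x$ the inequality can be checked directly from the definitions. The sketch below computes both sides from the primorial $N_x$ (illustrative only; the paper's claimed failures occur at very large $x$):

```python
import math

EULER_GAMMA = 0.5772156649015329  # Euler-Mascheroni constant

def primes_up_to(x):
    """Sieve of Eratosthenes: all primes p <= x."""
    sieve = [True] * (x + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(x ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p, is_p in enumerate(sieve) if is_p]

def check_nicolas(x):
    """True iff e^gamma * log log N_x < N_x / phi(N_x), N_x = primorial of x."""
    N = 1
    ratio = 1.0  # N_x / phi(N_x) = prod over p <= x of p / (p - 1)
    for p in primes_up_to(x):
        N *= p
        ratio *= p / (p - 1)
    lhs = math.exp(EULER_GAMMA) * math.log(math.log(N))
    return lhs < ratio

for x in (5, 10, 100, 1000):
    print(x, check_nicolas(x))
```

Since $N_x/\varphi(N_x) = \prod_{p\le x} p/(p-1)$, the ratio can be accumulated without ever factoring $N_x$; only the left-hand side needs the (exact, arbitrary-precision) primorial.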
šŸ‘ šŸ‘Ž ♄ Save
University of California
Abstract
Nakamura and Tsuji (2024) recently investigated a many-function generalization of the functional Blaschke--Santal\'o inequality, which they refer to as a generalized Legendre duality relation. They showed that, among the class of all even test functions, centered Gaussian functions saturate this general family of functional inequalities. Leveraging a certain entropic duality, we give a short alternate proof of Nakamura and Tsuji's result, and, in the process, eliminate all symmetry assumptions. As an application, we establish a Talagrand-type inequality for the Wasserstein barycenter problem (without symmetry restrictions) originally conjectured by Kolesnikov and Werner (\textit{Adv.~Math.}, 2022). An analogous geometric Blaschke--Santal\'o-type inequality is established for many convex bodies, again without symmetry assumptions.
AI Insights
  • A forward–reverse Brascamp–Lieb inequality is proved, revealing a functional analogue of the Santaló point for arbitrary functions.
  • A stochastic barycenter argument in Wasserstein space yields the sharp symmetrized Talagrand transport–entropy inequality for Gaussians.
  • Entropy production is linked to convexity of transport costs via an information‑theoretic reinterpretation of Brascamp–Lieb bounds.
  • The geometric Blaschke–Santaló inequality is extended to convex bodies without symmetry, using a new duality framework.
  • An entropic duality proof removes symmetry assumptions, simplifying the generalized Legendre duality argument.
  • A Talagrand‑type inequality for Wasserstein barycenters resolves the 2022 Kolesnikov–Werner conjecture.
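For context, the classical statements being generalized, in their standard forms (the paper treats the many-function and many-body versions without symmetry assumptions):

```latex
% Geometric Blaschke--Santal\'o: for a convex body $K \subset \mathbb{R}^n$
% with Santal\'o point at the origin,
\[
  \mathrm{vol}(K)\,\mathrm{vol}(K^\circ) \;\le\; \mathrm{vol}(B_2^n)^2 .
\]
% Functional form (Ball; Artstein-Avidan--Klartag--Milman): for even $f$
% with Legendre transform $f^\ast$,
\[
  \int_{\mathbb{R}^n} e^{-f}\,dx \,\int_{\mathbb{R}^n} e^{-f^\ast}\,dy
  \;\le\; (2\pi)^n ,
\]
% with equality for centered Gaussian functions
% $f(x) = \tfrac12 \langle A x, x \rangle$, $A$ positive definite.
```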
Animal Welfare
šŸ‘ šŸ‘Ž ♄ Save
Paper visualization
Rate this image: šŸ˜ šŸ‘ šŸ‘Ž
Abstract
Animal health monitoring and population management are critical aspects of wildlife conservation and livestock management that increasingly rely on automated detection and tracking systems. While Unmanned Aerial Vehicle (UAV) based systems combined with computer vision offer promising solutions for non-invasive animal monitoring across challenging terrains, limited availability of labeled training data remains an obstacle in developing effective deep learning (DL) models for these applications. Transfer learning has emerged as a potential solution, allowing models trained on large datasets to be adapted for resource-limited scenarios such as those with limited data. However, the vast landscape of pre-trained neural network architectures makes it challenging to select optimal models, particularly for researchers new to the field. In this paper, we propose a reinforcement learning (RL)-based transfer learning framework that employs an upper confidence bound (UCB) algorithm to automatically select the most suitable pre-trained model for animal detection tasks. Our approach systematically evaluates and ranks candidate models based on their performance, streamlining the model selection process. Experimental results demonstrate that our framework achieves a higher detection rate while requiring significantly less computational time compared to traditional methods.
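The UCB-based selection described above can be sketched as a multi-armed bandit over candidate pre-trained models. The model names and the reward function here are hypothetical stand-ins for "briefly fine-tune and measure detection rate on a validation set":

```python
import math
import random

def ucb_select(models, evaluate, rounds=200, c=2.0):
    """Return the model with the best average reward under UCB1 exploration."""
    counts = {m: 0 for m in models}
    totals = {m: 0.0 for m in models}
    for t in range(1, rounds + 1):
        def score(m):
            if counts[m] == 0:
                return float("inf")  # play every untried arm first
            # mean reward + upper-confidence exploration bonus
            return totals[m] / counts[m] + math.sqrt(c * math.log(t) / counts[m])
        m = max(models, key=score)
        totals[m] += evaluate(m)  # e.g. detection rate after a short fine-tune
        counts[m] += 1
    return max(models, key=lambda m: totals[m] / counts[m])

# Toy stand-in: noisy detection rates around each model's "true" quality.
true_quality = {"resnet50": 0.72, "efficientnet": 0.78, "vit-b": 0.75}
random.seed(0)
best = ucb_select(list(true_quality), lambda m: random.gauss(true_quality[m], 0.05))
print("selected:", best)
```

The bandit framing is what saves computation: weak candidates are evaluated a few times and then starved of budget, rather than every model being fine-tuned to completion.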

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Tech for Social Good
  • Racism
  • Measureable ways to end poverty
  • Healthy Society
  • Female Empowerment
  • Causal ML for Social Good
  • Poverty
You can edit or add more interests any time.
