Hi!

Your personalized paper recommendations for 26–30 January 2026.
Universidade Federal Fluminense
AI Insights
  • Global efforts to reduce susceptibility or enhance denunciation dynamics (e.g., content moderation, reporting tools) can shift the system toward safer states. (ML: 0.98)
  • Our model provides a framework to identify quantitative intervention strategies even in the absence of empirical calibration. (ML: 0.98)
  • Targeted interventions on influential nodes (e.g., users with high connectivity or reach) are likely to be most effective. (ML: 0.98)
  • The SID model captures the dynamics of online racism as a social disease, where interactions predominate in social media. (ML: 0.96)
  • SID model: A social disease model that captures the dynamics of online racism as a social contagion process. (ML: 0.95)
  • Community-level moderation or disruption of shortcut pathways can help suppress undesirable macroscopic outcomes. (ML: 0.95)
  • The persistence of absorbing states and critical thresholds suggests that timely intervention strategies can prevent the widespread dissemination of harmful ideologies. (ML: 0.95)
  • Watts-Strogatz small-world networks: Networks with high clustering and short average path lengths, features commonly observed in empirical social networks. (ML: 0.83)
  • BarabΓ‘si-Albert networks: A type of scale-free network where hubs (high-degree nodes) play a crucial role in information dissemination. (ML: 0.81)
  • Mean-field approximation: An analytical method used to approximate the behavior of complex systems by averaging over individual interactions. (ML: 0.67)
Abstract
Racism remains a persistent societal issue, increasingly amplified by the structure and dynamics of online social networks. In this work, we propose a three-state compartmental model to study the spreading and suppression of racist content, drawing from epidemic-like dynamics and interaction-driven transitions. We analyze the model on fully-connected (homogeneous mixing) networks using a set of coupled differential equations, and on BarabΓ‘si-Albert (BA) scale-free and Watts-Strogatz (WS) small-world networks through agent-based simulations. The system exhibits three distinct stationary regimes: two racism-free absorbing states and one active phase with persistent racist content. We identify and characterize the phase transitions between these regimes, discuss the role of network topology, and highlight the emergence of absorbing states. Our findings illustrate how statistical physics tools can help uncover the macroscopic consequences of microscopic social interactions in digital environments.
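The abstract describes a three-state compartmental model analyzed through coupled differential equations under homogeneous mixing, but does not give the transition rules. A minimal sketch of that setup follows; the state names (S, I, D) and every rate constant here are illustrative assumptions, not the paper's specification:

```python
def sid_mean_field(beta=0.4, gamma=0.2, delta=0.05,
                   s0=0.98, i0=0.02, d0=0.0, dt=0.01, steps=5000):
    """Euler-integrate a generic three-state compartmental model under
    homogeneous mixing (illustrative rates, not the paper's):
      S -> I at rate beta*S*I             (susceptible users adopt racist content)
      I -> D at rate gamma*I*D + delta*I  (denunciation: interaction-driven
                                           plus a spontaneous channel)
    Any state with I = 0 is absorbing: once racist content vanishes,
    contact terms proportional to I can no longer reintroduce it."""
    s, i, d = s0, i0, d0
    for _ in range(steps):
        ds = -beta * s * i
        di = beta * s * i - gamma * i * d - delta * i
        dd = gamma * i * d + delta * i
        s, i, d = s + dt * ds, i + dt * di, d + dt * dd
    return s, i, d
```

Sweeping a contagion rate like `beta` while holding the others fixed is the usual way to locate a phase transition between absorbing (racism-free) and active regimes in models of this family.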
Why are we recommending this paper?
Due to your Interest in Racism

This paper directly addresses concerns around systemic issues like racism, aligning with your interest in inequality and social justice. The model's focus on online dynamics offers a valuable framework for understanding how these issues spread and evolve.
University of Bordeaux
AI Insights
  • Fixed-effects estimation: A statistical method used to control for nuisance parameters by estimating their effects on the outcome variable. (ML: 0.97)
  • fixest views fixed effects as a set of nuisance parameters that need to be controlled for in order to derive valid results for the main parameters of interest. (ML: 0.93)
  • The package may not be suitable for all types of data or research questions. (ML: 0.91)
  • fixest supports bespoke customizations of stats methods, including coefficient tables with standard errors, t-stats, and p-values. (ML: 0.91)
  • fixest provides a range of methods for controlling for nuisance parameters, including fixed-effects estimation, which is useful when there are multiple levels of nesting. (ML: 0.90)
  • fixest is an R package for fixed-effects estimation that provides a range of methods for controlling for nuisance parameters. (ML: 0.90)
  • Nuisance parameters: Parameters that are not of primary interest but need to be controlled for in order to derive valid results for the main parameters of interest. (ML: 0.89)
  • The package supports bespoke customizations of stats methods, making it a powerful tool for data analysis and modeling. (ML: 0.87)
  • VCOV (Var-Cov matrix): A matrix that represents the variance and covariance between the estimated coefficients. (ML: 0.84)
Abstract
fixest is an R package for fast and flexible econometric estimation, providing a comprehensive toolkit for applied researchers. The package particularly excels at fixed-effects estimation, supported by a novel fixed-point acceleration algorithm implemented in C++. This algorithm achieves rapid convergence across a broad class of data contexts and further enables estimation of complex models, including those with varying slopes, in a highly efficient manner. Beyond computational speed, fixest provides a unified syntax for a wide variety of models: ordinary least squares, instrumental variables, generalized linear models, maximum likelihood, and difference-in-differences estimators. An expressive formula interface enables multiple estimations, stepwise regressions, and variable interpolation in a single call, while users can make on-the-fly inference adjustments using a variety of built-in robust standard errors. Finally, fixest provides methods for publication-ready regression tables and coefficient plots. Benchmarks against leading alternatives in R, Python, and Julia demonstrate best-in-class performance, and the paper includes many worked examples illustrating the core functionality.
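fixest itself is an R package, but the core idea behind its fixed-point algorithm, alternately demeaning the data within each fixed-effect dimension until the projections converge, can be sketched in any language. A language-neutral Python illustration on synthetic two-way panel data (the data-generating process and all names here are made up for the example):

```python
import numpy as np

rng = np.random.default_rng(0)
n_units, n_periods = 50, 20
unit = np.repeat(np.arange(n_units), n_periods)   # unit id per observation
time = np.tile(np.arange(n_periods), n_units)     # period id per observation
alpha = rng.normal(size=n_units)                  # unit fixed effects
tau = rng.normal(size=n_periods)                  # time fixed effects
x = rng.normal(size=unit.size) + 0.5 * alpha[unit]
y = 2.0 * x + alpha[unit] + tau[time] + 0.1 * rng.normal(size=unit.size)

def demean(v, groups_list, tol=1e-10, max_iter=1000):
    """Alternating within-group demeaning: a simple fixed-point iteration
    that projects out several fixed-effect dimensions at once."""
    v = v.astype(float).copy()
    for _ in range(max_iter):
        before = v.copy()
        for g in groups_list:
            means = np.bincount(g, weights=v) / np.bincount(g)
            v -= means[g]
        if np.max(np.abs(v - before)) < tol:
            break
    return v

xd = demean(x, [unit, time])
yd = demean(y, [unit, time])
beta_hat = (xd @ yd) / (xd @ xd)   # within (fixed-effects) slope estimate
```

After convergence the demeaned variables are orthogonal to both sets of dummies, so a plain OLS slope on them recovers the two-way fixed-effects estimate (here, close to the true coefficient of 2). fixest's contribution is making this iteration and the surrounding inference machinery fast and general.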
Why are we recommending this paper?
Due to your Interest in Econometrics for Social Good

Given your interest in econometrics and social good, this paper provides a powerful tool for analyzing data related to poverty and inequality. The package’s focus on fixed-effects estimation is particularly relevant to your interest in measurable ways to end poverty.
University of Padua
AI Insights
  • Semigroup theory: a branch of mathematics that studies the behavior of certain types of operators, known as semigroups. (ML: 0.90)
  • Operator theory: a branch of mathematics that studies the properties and behavior of linear operators. (ML: 0.89)
  • Functional analysis: a branch of mathematics that deals with the study of vector spaces and linear transformations. (ML: 0.89)
  • Throughout the proof, the author relies on various mathematical tools and techniques, including functional analysis, operator theory, and semigroup theory. (ML: 0.85)
  • They then derive several key inequalities, including the boundedness of P0t from L1(S) to L∞(S). (ML: 0.83)
  • They also provide several key observations and lemmas that are used to establish the desired inequalities. (ML: 0.83)
  • The text also discusses the positivity preserving property of the semigroup Pt, which is used to establish a lower bound for the function v. (ML: 0.82)
  • The author's use of the positivity preserving property and lower bounds for functions is crucial in establishing the sharp Caffarelli-Kohn-Nirenberg inequality. (ML: 0.82)
  • The proof relies on several key inequalities and observations, which are used to establish the desired bound for the function v. (ML: 0.81)
  • The sharp Caffarelli-Kohn-Nirenberg inequality is proved using a combination of mathematical techniques, including semigroup theory and functional analysis. (ML: 0.79)
  • The author presents a detailed proof of the inequality, starting with the definition of the operator TΞ» and its properties. (ML: 0.73)
  • The solution involves a series of mathematical derivations and inequalities, including the use of semigroup theory and functional analysis. (ML: 0.72)
  • Caffarelli-Kohn-Nirenberg inequality: a family of weighted interpolation inequalities generalizing the classical Sobolev and Hardy inequalities. (ML: 0.64)
  • The author uses this bound to prove the sharp Caffarelli-Kohn-Nirenberg inequality. (ML: 0.64)
  • The problem of proving the sharp Caffarelli-Kohn-Nirenberg inequality is addressed in this text. (ML: 0.59)
Abstract
We consider a monomial Caffarelli-Kohn-Nirenberg inequality, find the optimal constant and classify the optimizers under an integrated curvature dimension condition. We take advantage of the $Ξ“$-calculus to exploit geometrical techniques to tackle the problem and regularity results to justify some integration by parts. A symmetry-breaking result is also provided.
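The insights and abstract refer to the Caffarelli-Kohn-Nirenberg inequality without stating it. For orientation, one standard two-parameter member of the classical CKN family (which need not coincide with the monomial variant studied in this paper) reads:

```latex
% Classical CKN family (Caffarelli-Kohn-Nirenberg, 1984), stated for
% orientation only; the paper's monomial variant may differ.
\[
  \Bigl( \int_{\mathbb{R}^n} |x|^{-bp}\, |u|^p \, dx \Bigr)^{1/p}
  \le C_{a,b} \Bigl( \int_{\mathbb{R}^n} |x|^{-2a}\, |\nabla u|^2 \, dx \Bigr)^{1/2},
  \qquad p = \frac{2n}{n - 2 + 2(b-a)},
\]
% valid for a <= b <= a+1 and a < (n-2)/2, with u smooth and compactly
% supported. Sharp constants, and whether the optimizers are radially
% symmetric or exhibit symmetry breaking, depend on the parameters (a, b).
```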
Why are we recommending this paper?
Due to your Interest in Inequality

This paper’s exploration of mathematical inequalities, particularly within the context of curvature and dimension, could offer novel analytical techniques applicable to complex social and economic problems. The use of calculus aligns with your interest in measurable outcomes.
University of Helsinki
AI Insights
  • They also provide examples to demonstrate that the new inequality is sharper than the current best known inequality. (ML: 0.91)
  • The new inequality can be used to improve existing results and provide new insights into the properties of Lp spaces. (ML: 0.91)
  • Triangle inequality: An inequality that states that for any two vectors u and v in a normed vector space, ||u + v|| ≀ ||u|| + ||v||. (ML: 0.90)
  • Lp space: A normed vector space consisting of all measurable functions f such that ∫ |f(x)|^p dx < ∞. (ML: 0.89)
  • Theorem 1.2: A statement that provides a new inequality that sharpens the triangle inequality for sums of N functions in Lp spaces, given by βˆ‘_{i=1}^N ||f_i||_p ≀ C (βˆ‘_{i=1}^N ||f_i||_p^2)^{p/2}. (ML: 0.88)
  • HΓΆlder's inequality: A mathematical statement that provides an upper bound on the Lp norm of the product of two functions f and g, given by ∫ |f(x)g(x)| dx ≀ (∫ |f(x)|^p dx)^1/p (∫ |g(x)|^q dx)^1/q. (ML: 0.87)
  • The problem is to find a new inequality that sharpens the triangle inequality for sums of N functions in Lp spaces. (ML: 0.86)
  • The authors provide a new inequality that sharpens the triangle inequality for sums of N functions in Lp spaces, which is stated in Theorem 1.2. (ML: 0.86)
  • Conjecture 1.3: A statement that conjectures the existence of an inequality that provides a better bound for the sum of N functions in Lp spaces. (ML: 0.85)
  • The authors use a combination of mathematical techniques, including HΓΆlder's inequality, to prove the new inequality. (ML: 0.81)
  • This inequality is stronger than the current best known inequality (1.4) and provides a better bound for the sum of N functions in Lp spaces. (ML: 0.81)
  • The problem has important applications in various fields, such as functional analysis, operator theory, and harmonic analysis. (ML: 0.80)
  • The authors' work provides a significant contribution to the field of functional analysis and operator theory, and it is expected to have a lasting impact on the development of these fields. (ML: 0.75)
  • The current best known inequality is given by (1.4), and it is conjectured that there exists an inequality that provides a better bound, as stated in Conjecture 1.3. (ML: 0.57)
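The classical inequalities defined in the bullets above (Minkowski's triangle inequality and HΓΆlder's inequality on Lp) can be sanity-checked numerically on discretized functions. This sketch assumes nothing about the paper's Theorem 1.2; the example functions are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 10_001)
dx = x[1] - x[0]
f = np.abs(np.sin(3 * x) + 0.2 * rng.normal(size=x.size))
g = np.abs(np.cos(5 * x))

def lp_norm(h, p):
    """Discretized Lp([0, 1]) norm via a Riemann sum."""
    return (np.sum(np.abs(h) ** p) * dx) ** (1.0 / p)

p, q = 3.0, 1.5   # conjugate exponents: 1/p + 1/q = 1
# Minkowski (triangle) inequality in Lp: ||f + g||_p <= ||f||_p + ||g||_p
assert lp_norm(f + g, p) <= lp_norm(f, p) + lp_norm(g, p)
# Hoelder's inequality: ||f g||_1 <= ||f||_p * ||g||_q
assert np.sum(np.abs(f * g)) * dx <= lp_norm(f, p) * lp_norm(g, q) + 1e-12
```

Both inequalities hold exactly for the discrete (weighted-sum) measure, which is why the assertions are safe up to floating-point rounding; the paper's contribution is sharpening such bounds for sums of N functions.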
Abstract
Carbery (2006) proposed novel estimates for the $L^p$ norm of a sum of two nonnegative measurable functions. Subsequently, Carlen, Frank, Ivanisvili and Lieb (2018) provided stronger bounds, which Ivanisvili and Mooney (2020) further refined to achieve estimates that are, in a certain sense, optimal. Continuing this line of research, the present work establishes new upper and lower bounds for the range \(p\in(1,\infty)\). Carbery also asked under what conditions on a sequence \((f_j)\) of nonnegative measurable functions the inequality \(\sum \|f_j\|_p^p < \infty\) implies that \(\sum f_j \in L^p\). Ivanisvili and Mooney (2020) resolved this question for \(p\in[1,2]\), and the present work proposes an answer for \(p\in[2,\infty)\).
Why are we recommending this paper?
Due to your Interest in Inequality

This paper delves into advanced mathematical techniques for analyzing function sums, which can be applied to understanding complex data distributions related to your interests in inequality and social good. The work builds on established research, offering a solid foundation for further investigation.
University of Tartu
Paper visualization
AI Insights
  • It emphasizes the need for a more comprehensive understanding of the complex relationships between technological, social, and economic factors in AI development and deployment. (ML: 0.99)
  • It cites studies on AI bias, transparency, accountability, and the need for responsible AI development and deployment. (ML: 0.99)
  • The study acknowledges that it has limitations due to the complexity of the topic and the need for further research. (ML: 0.98)
  • It also recognizes that the development of AI governance frameworks is a dynamic process that requires ongoing evaluation and refinement. (ML: 0.98)
  • The study highlights the importance of considering paradoxes when developing AI governance frameworks. (ML: 0.97)
  • The paper explores these kinds of complexities in AI governance and why they need to be considered when developing frameworks for responsible AI development and deployment. (ML: 0.97)
  • The paper explores the concept of paradox in the context of artificial intelligence (AI) and its governance, highlighting the importance of considering these complexities when developing AI governance frameworks. (ML: 0.97)
  • The paper discusses the concept of paradox in management and organization theories, specifically focusing on artificial intelligence (AI) and its governance. (ML: 0.96)
  • The paper draws on existing literature in management, organization theory, and AI ethics to inform its discussion of paradoxes in AI governance. (ML: 0.95)
  • Imagine you're trying to create a system that can make decisions without being biased, but at the same time, you want it to be transparent so people understand how those decisions are made. (ML: 0.93)
  • That's a paradox! (ML: 0.91)
  • Paradox: a situation or condition that is contradictory or opposite to what would be expected. (ML: 0.86)
Abstract
The rapid proliferation of artificial intelligence across organizational contexts has generated profound strategic opportunities while introducing significant ethical and operational risks. Despite growing scholarly attention to responsible AI, extant literature remains fragmented and often adopts either an optimistic stance emphasizing value creation or an excessively cautious perspective fixated on potential harms. This paper addresses this gap by presenting a comprehensive examination of AI's dual nature through the lens of strategic information systems. Drawing upon a systematic synthesis of the responsible AI literature and grounded in paradox theory, we develop the Paradox-based Responsible AI Governance (PRAIG) framework that articulates: (1) the strategic benefits of AI adoption, (2) the inherent risks and unintended consequences, and (3) governance mechanisms that enable organizations to navigate these tensions. Our framework advances theoretical understanding by conceptualizing responsible AI governance as the dynamic management of paradoxical tensions between value creation and risk mitigation. We provide formal propositions demonstrating that trade-off approaches amplify rather than resolve these tensions, and we develop a taxonomy of paradox management strategies with specified contingency conditions. For practitioners, we offer actionable guidance for developing governance structures that neither stifle innovation nor expose organizations to unacceptable risks. The paper concludes with a research agenda for advancing responsible AI governance scholarship.
Why are we recommending this paper?
Due to your Interest in AI for Social Good

With your interest in AI for social good, this paper provides a critical overview of the ethical and operational challenges associated with AI deployment. Its timely exploration of responsible AI aligns with your broader focus on AI's impact on society.
University of Notre Dame
Paper visualization
AI Insights
  • The results suggest that while AI-powered learning tools can be beneficial, they require careful planning and adaptation to meet the unique needs of each family. (ML: 0.99)
  • The results show that while some families effectively utilized AI-powered learning tools for task distribution and tutoring assistance, others struggled to adapt to these new methods. (ML: 0.99)
  • The study emphasizes the need for more research on the effective implementation of AI-powered learning tools in family settings, considering factors such as caregiver roles, individual needs, and family dynamics. (ML: 0.98)
  • The study explores the use of AI-powered learning tools in family settings, focusing on task distribution and tutoring assistance. (ML: 0.98)
  • The study highlights the importance of considering family dynamics, caregiver roles, and individual needs when implementing AI-powered learning tools in family settings. (ML: 0.98)
  • Tutoring assistance: The provision of guidance and support by caregivers or AI-powered systems to help family members complete tasks. (ML: 0.97)
  • Task distribution: The process of assigning tasks to family members based on their abilities and availability. (ML: 0.97)
  • Limited sample size (ML: 0.96)
  • AI-powered learning tools: Technology-based platforms that provide personalized learning experiences and support for students. (ML: 0.95)
  • Eleven families participated in the study, with a total of 44 individuals involved. (ML: 0.94)
Abstract
Family learning takes place in everyday routines where children and caregivers read, practice, and develop new skills together. Despite growing interest in AI tutors, most existing systems are designed for single learners or classroom settings and do not address the distributed planning, coordination, and execution demands of learning at home. This paper introduces ParPal, a human-centred, LLM-powered system that supports multi-actor family learning by decomposing learning goals into actionable subtasks, allocating them across caregivers under realistic availability and expertise constraints, and providing caregiver-in-the-loop tutoring support with visibility into individual and collective contributions. Through expert evaluation of generated weekly learning plans and a one-week field deployment with 11 families, we identify systematic failure modes in current LLM-based planning, including misalignment with role expertise, unnecessary or costly collaboration, missing pedagogical learning trajectories, and physically or temporally infeasible tasks. While ParPal improves coordination clarity and recognition of caregiving effort, these findings expose fundamental limitations in how current LLMs operationalize pedagogical knowledge, reason about collaboration, and account for real-world, embodied constraints. We discuss implications for human-centred AI design and AI methodology, positioning multi-actor family learning as a critical testbed for advancing planning, adaptation, and pedagogical structure in next-generation AI systems.
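The abstract describes allocating subtasks across caregivers under availability and expertise constraints, but does not give ParPal's allocation logic. A hypothetical greedy allocator illustrates the shape of the problem; every name, data structure, and rule here is invented for the example and is not ParPal's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class Caregiver:
    name: str
    skills: set            # expertise areas this caregiver can cover
    hours_free: float      # remaining weekly availability
    assigned: list = field(default_factory=list)

def allocate(subtasks, caregivers):
    """Greedy allocation: give each subtask (name, required skill, hours)
    to the qualified caregiver with the most remaining free time; tasks
    nobody can take are returned unassigned."""
    unassigned = []
    for task, skill, hours in subtasks:
        candidates = [c for c in caregivers
                      if skill in c.skills and c.hours_free >= hours]
        if not candidates:
            unassigned.append(task)
            continue
        pick = max(candidates, key=lambda c: c.hours_free)
        pick.assigned.append(task)
        pick.hours_free -= hours
    return unassigned

parents = [Caregiver("A", {"reading", "math"}, 3.0),
           Caregiver("B", {"reading"}, 5.0)]
tasks = [("phonics drill", "reading", 1.0),
         ("fraction practice", "math", 1.5),
         ("robotics kit", "engineering", 2.0)]
leftover = allocate(tasks, parents)   # "robotics kit" has no qualified caregiver
```

The failure modes the paper identifies (misalignment with role expertise, infeasible tasks) correspond to exactly the constraints this kind of allocator must check; the unassigned list makes the infeasible cases explicit rather than silently forcing an assignment.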
Why are we recommending this paper?
Due to your Interest in AI for Social Good

Interests not found

We did not find any papers that match the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Poverty
  • Tech for Social Good
  • Casual ML for Social Good
  • Measureable ways to end poverty
  • Female Empowerment
  • Animal Welfare
  • Healthy Society
You can edit or add more interests any time.