Hi j34nc4rl0+social_good_topics,

Here are our personalized paper recommendations for you, sorted by relevance.
Inequality
Abstract
We present the first proof of the reverse isoperimetric inequality for black holes in arbitrary dimension using a two-pronged geometric-analytic approach. The proof holds for compact Riemannian hypersurfaces in AdS space and seems to be a generic property of black holes in the extended phase space formalism. Using Euclidean gravitational action, we show that, among all hypersurfaces of given volume, the round sphere in the $D$-dimensional Anti-de Sitter space maximizes the area (and hence the entropy). This analytic result is supported by a geometric argument in a $1+1+2$ decomposition of spacetime: gravitational focusing enforces a strictly negative conformal deformation, and the Sherif-Dunsby rigidity theorem then forces the deformed 3-sphere to be isometric to the round 3-sphere, establishing the round sphere as the extremal surface, in fact, a maximally entropic surface. Our work establishes that the reversal of the usual isoperimetric inequality occurs due to the structure of the curved background governed by Einstein's equation, underscoring the role of gravity in the reverse isoperimetric inequality for black hole horizons in AdS space.
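For context, the inequality in question is conventionally stated via the isoperimetric ratio of Cvetič, Gibbons, Kubizňák and Pope; the formula below is added here for reference and is not quoted from the abstract. With thermodynamic volume $V$, horizon area $A$, and $\omega_{D-2}$ the area of the unit $(D-2)$-sphere,

    $\mathcal{R} \equiv \left(\frac{(D-1)V}{\omega_{D-2}}\right)^{\frac{1}{D-1}}\left(\frac{\omega_{D-2}}{A}\right)^{\frac{1}{D-2}} \geq 1$, where $\omega_{D-2} = \frac{2\pi^{(D-1)/2}}{\Gamma\left(\frac{D-1}{2}\right)}$,

so that at fixed volume the horizon area, and hence the entropy $S = A/4$, is bounded above, with the bound saturated by the round sphere.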
Abstract
Scholars of social stratification often study exposures that shape life outcomes. But some outcomes (such as wage) only exist for some people (such as those who are employed). We show how a common practice -- dropping cases with non-existent outcomes -- can obscure causal effects when a treatment affects both outcome existence and outcome values. The effects of both beneficial and harmful treatments can be underestimated. Drawing on existing approaches for principal stratification, we show how to study (1) the average effect on whether an outcome exists and (2) the average effect on the outcome among the latent subgroup whose outcome would exist in either treatment condition. To extend our approach to the selection-on-observables settings common in applied research, we develop a framework involving regression and simulation to enable principal stratification estimates that adjust for measured confounders. We illustrate through an empirical example about the effects of parenthood on labor market outcomes.
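For a concrete sense of the two estimands, here is a minimal sketch (ours, not the authors' code), assuming a hypothetical dataset with a binary treatment, a binary indicator for whether the outcome exists (e.g. employment), measured confounders, and a wage-like outcome observed only when it exists; the stratum weighting used below is a simplifying assumption, not the paper's identification strategy.

    # Illustrative sketch, not the authors' code. Assumes a pandas DataFrame
    # with a binary treatment column `treat`, confounders listed in X_cols,
    # a binary indicator `exists` for whether the outcome is defined
    # (e.g. employed), and an outcome `y` observed only when exists == 1.
    import numpy as np
    import pandas as pd
    from sklearn.linear_model import LinearRegression, LogisticRegression

    X_cols = ["age", "education", "prior_earnings"]  # hypothetical confounders

    def principal_stratification_estimates(df: pd.DataFrame):
        X = df[X_cols]

        # Step 1: model whether the outcome exists, separately by treatment
        # arm, adjusting for measured confounders.
        exist_model = {
            t: LogisticRegression(max_iter=1000).fit(
                df.loc[df["treat"] == t, X_cols],
                df.loc[df["treat"] == t, "exists"])
            for t in (0, 1)
        }
        p_exist = {t: exist_model[t].predict_proba(X)[:, 1] for t in (0, 1)}

        # (1) Average effect of treatment on whether the outcome exists at all.
        effect_on_existence = float(np.mean(p_exist[1] - p_exist[0]))

        # Step 2: model the outcome among units whose outcome exists, per arm.
        y_model = {
            t: LinearRegression().fit(
                df.loc[(df["treat"] == t) & (df["exists"] == 1), X_cols],
                df.loc[(df["treat"] == t) & (df["exists"] == 1), "y"])
            for t in (0, 1)
        }

        # Step 3: simulate over the covariate distribution, weighting each unit
        # by an (assumed) probability of belonging to the latent stratum whose
        # outcome would exist under BOTH treatment conditions; the two potential
        # existence indicators are treated as independent given the confounders.
        stratum_weight = p_exist[1] * p_exist[0]
        effect_in_stratum = float(np.average(
            y_model[1].predict(X) - y_model[0].predict(X),
            weights=stratum_weight))

        return effect_on_existence, effect_in_stratum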
Tech for Social Good
Abstract
This study investigates how U.S. news media framed the use of ChatGPT in higher education from November 2022 to October 2024. Employing Framing Theory and combining temporal and sentiment analysis of 198 news articles, we trace the evolving narratives surrounding generative AI. We found that the media discourse largely centered on institutional responses; policy changes and teaching practices showed the most consistent presence and positive sentiment over time. Conversely, coverage of topics such as human-centered learning, the job market, and skill development appeared more sporadically, with initially uncertain portrayals gradually shifting toward cautious optimism. Importantly, media sentiment toward ChatGPT's role in college admissions remained predominantly negative. Our findings suggest that media narratives prioritize institutional responses to generative AI over long-term, broader ethical, social, and labor-related implications, shaping an emerging sociotechnical imaginary that frames generative AI in education primarily through the lens of adaptation and innovation.
Racism
Abstract
Machine learning models often preserve biases present in training data, leading to unfair treatment of certain minority groups. Despite an array of existing firm-side bias mitigation techniques, they typically incur utility costs and require organizational buy-in. Recognizing that many models rely on user-contributed data, end-users can induce fairness through the framework of Algorithmic Collective Action, where a coordinated minority group strategically relabels its own data to enhance fairness, without altering the firm's training process. We propose three practical, model-agnostic methods to approximate ideal relabeling and validate them on real-world datasets. Our findings show that a subgroup of the minority can substantially reduce unfairness with a small impact on the overall prediction error.
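To make the relabeling idea tangible, here is a toy sketch (ours, under simplifying assumptions; it is not one of the paper's three proposed methods) in which a coordinated fraction of the minority flips some of its own labels before the firm trains, leaving the firm's pipeline untouched.

    # Illustrative sketch of collective relabeling under simplifying assumptions
    # of our own; the paper's model-agnostic methods are not reproduced here.
    # `df` holds a binary label `y` and a boolean group column `minority`; the
    # firm later trains on the relabeled data with an unchanged pipeline.
    import numpy as np
    import pandas as pd

    def collective_relabel(df: pd.DataFrame, participation: float = 0.3,
                           seed: int = 0) -> pd.DataFrame:
        rng = np.random.default_rng(seed)
        out = df.copy()

        # Toy target: move the minority's positive-label rate toward the majority's.
        target_rate = out.loc[~out["minority"], "y"].mean()
        current_rate = out.loc[out["minority"], "y"].mean()
        if target_rate <= current_rate:
            return out  # nothing to correct in this toy setting

        # Only a coordinated fraction of the minority participates.
        members = out.index[out["minority"]].to_numpy()
        participants = rng.choice(members, size=int(participation * len(members)),
                                  replace=False)

        # Flip negatively labeled participants to positive until the gap closes
        # (or the participating subgroup runs out of labels to flip).
        n_flips = int((target_rate - current_rate) * len(members))
        candidates = [i for i in participants if out.at[i, "y"] == 0]
        for i in candidates[:n_flips]:
            out.at[i, "y"] = 1
        return out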
Animal Welfare
Abstract
Punishment as a mechanism for promoting cooperation has been studied extensively for more than two decades, but its effectiveness remains a matter of dispute. Here, we examine how punishment's impact varies across cooperative settings through a large-scale integrative experiment. We vary 14 parameters that characterize public goods games, sampling 360 experimental conditions and collecting 147,618 decisions from 7,100 participants. Our results reveal striking heterogeneity in punishment effectiveness: while punishment consistently increases contributions, its impact on payoffs (i.e., efficiency) ranges from dramatically enhancing welfare (up to 43% improvement) to severely undermining it (up to 44% reduction) depending on the cooperative context. To characterize these patterns, we developed models that outperformed human forecasters (laypeople and domain experts) in predicting punishment outcomes in new experiments. Communication emerged as the most predictive feature, followed by contribution framing (opt-out vs. opt-in), contribution type (variable vs. all-or-nothing), game length (number of rounds), peer outcome visibility (whether participants can see others' earnings), and the availability of a reward mechanism. Interestingly, however, most of these features interact to influence punishment effectiveness rather than operating independently. For example, the extent to which longer games increase the effectiveness of punishment depends on whether groups can communicate. Together, our results refocus the debate over punishment from whether or not it "works" to the specific conditions under which it does and does not work. More broadly, our study demonstrates how integrative experiments can be combined with machine learning to uncover generalizable patterns, potentially involving interactions between multiple features, and help generate novel explanations in complex social phenomena.
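As a schematic of the prediction task only (our illustration, with hypothetical feature names; the authors' models and data are not reproduced here), a tree-based regressor over condition-level design features can capture the kinds of feature interactions the abstract highlights.

    # Schematic illustration only. Assumes a DataFrame of experimental
    # conditions with (hypothetical) design-feature columns and the observed
    # effect of punishment on payoffs in each condition.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingRegressor
    from sklearn.model_selection import cross_val_score

    FEATURES = ["communication", "opt_out_framing", "all_or_nothing",
                "n_rounds", "peer_outcomes_visible", "reward_available"]

    def fit_punishment_model(conditions: pd.DataFrame) -> GradientBoostingRegressor:
        X, y = conditions[FEATURES], conditions["payoff_effect"]
        model = GradientBoostingRegressor()           # trees capture interactions
        print("CV R^2:", cross_val_score(model, X, y, cv=5).mean())
        return model.fit(X, y)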
Causal ML for Social Good
Abstract
This work continues a line of research commenced in a previous paper, where the key idea is that individual stances on a social matter can be modeled as positions of particles in physics. Here, we explore the aggregation of individual behavior as a microscopic model of social phenomena to obtain quantities characterizing a society as a whole, similar to the resulting thermodynamical quantities at the macroscopic scale. We follow the theoretical framework of statistical mechanics with a Boltzmann distribution. Notwithstanding the fact that the translation of physical concepts needs to be adequately motivated, a key distinction with respect to the physical case is that a social particle has a position-dependent mass. From such a generalization we obtain a simple example to illustrate the possibilities of such an approach, based on the ideal gas model in physics. As a result, we find that social phenomena can be modeled as a gas under the assumptions considered here. We discuss how several concepts and their relations reasonably translate from physical to social phenomena.
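As a rough illustration in notation of our own (not taken from the paper), a position-dependent mass $m(x)$ enters the Boltzmann weight only through the kinetic term:

    $P(x,p) \propto \exp\left[-\frac{1}{k_B T}\left(\frac{p^2}{2m(x)} + V(x)\right)\right]$, $\qquad Z = \int dx\, dp\, \exp\left[-\frac{1}{k_B T}\left(\frac{p^2}{2m(x)} + V(x)\right)\right]$.

For the ideal-gas analogue ($V(x)=0$) the momentum integral yields $\sqrt{2\pi\, m(x)\, k_B T}$, so, unlike the standard ideal gas, the configurational integral no longer decouples from the mass.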
Abstract
We consider mean field social optimization in nonlinear diffusion models. By dynamic programming with a representative agent employing cooperative optimizer selection, we derive a new Hamilton--Jacobi--Bellman (HJB) equation to be called the master equation of the value function. Under some regularity conditions, we establish $\epsilon$-person-by-person optimality of the master equation-based control laws, which may be viewed as a necessary condition for nearly attaining the social optimum. A major challenge in the analysis is to obtain tight estimates, within an error of $O(1/N)$, of the social cost having order $O(N)$. This will be accomplished by multi-scale analysis via constructing two auxiliary master equations. We illustrate explicit solutions of the master equations for the linear-quadratic (LQ) case, and give an application to systemic risk.
AI for Social Good
Abstract
The trustworthiness of AI is considered essential to the adoption and application of AI systems. However, the meaning of trust varies across industry, research and policy spaces. Studies suggest that professionals who develop and use AI regard an AI system as trustworthy based on their personal experiences and social relations at work. Existing studies typically focus on constructs that aim to operationalise trust in AI (e.g., consistency, reliability, explainability and accountability). However, the majority of existing studies about trust in AI are situated in Western, Educated, Industrialised, Rich and Democratic (WEIRD) societies. The few studies about trust and AI in Africa do not include the views of people who develop, study or use AI in their work. In this study, we surveyed 157 people with professional and/or educational interests in AI from 25 African countries, to explore how they conceptualised trust in AI. Most respondents had links with workshops about trust and AI in Africa in Namibia and Ghana. Respondents' educational background, transnational mobility, and country of origin influenced their concerns about AI systems. These factors also affected their levels of distrust in certain AI applications and their emphasis on specific principles designed to foster trust. Respondents often expressed that their values are guided by the communities in which they grew up and emphasised communal relations over individual freedoms. They described trust in many ways, including applying nuances of Afro-relationalism to constructs in international discourse, such as reliability and reliance. Thus, our exploratory study motivates more empirical research about the ways trust is practically enacted and experienced in African social realities of AI design, use and governance.
Abstract
The concepts of "human-centered AI" and "value-based decision" have gained significant attention in both research and industry. However, many critical aspects remain underexplored and require further investigation. In particular, there is a need to understand how systems incorporate human values, how humans can identify these values within systems, and how to minimize the risks of harm or unintended consequences. In this paper, we highlight the need to rethink how we frame value alignment and assert that value alignment should move beyond static and singular conceptions of values. We argue that AI systems should implement long-term reasoning and remain adaptable to evolving values. Furthermore, value alignment requires more theories to address the full spectrum of human values. Since values often vary among individuals or groups, multi-agent systems provide the right framework for navigating pluralism, conflict, and inter-agent reasoning about values. We identify the challenges associated with value alignment and indicate directions for advancing value alignment research. In addition, we broadly discuss diverse perspectives of value alignment, from design methodologies to practical applications.

Interests not found

We did not find any papers that match the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Measurable ways to end poverty
  • Poverty
  • Healthy Society
  • Econometrics for Social Good
  • Female Empowerment
You can edit or add more interests any time.

Unsubscribe from these updates