Hi j34nc4rl0+social_good_topics,

Here are your personalized paper recommendations, sorted by relevance
Inequality
arXiv:2509.01160v1 [math
Abstract
This note proves a version of Lubell-Yamamoto-Meshalkin inequality for general product measures.
September 01, 2025
Save to Reading List
Tech for Social Good
Oxford Internet Institute
Abstract
As belief around the potential of computational social science grows, fuelled by recent advances in machine learning, data scientists are ostensibly becoming the new experts in education. Scholars engaged in critical studies of education and technology have sought to interrogate the growing datafication of education yet tend not to use computational methods as part of this response. In this paper, we discuss the feasibility and desirability of the use of computational approaches as part of a critical research agenda. Presenting and reflecting upon two examples of projects that use computational methods in education to explore questions of equity and justice, we suggest that such approaches might help expand the capacity of critical researchers to highlight existing inequalities, make visible possible approaches for beginning to address such inequalities, and engage marginalised communities in designing and ultimately deploying these possibilities. Drawing upon work within the fields of Critical Data Studies and Science and Technology Studies, we further reflect on the two cases to discuss the possibilities and challenges of reimagining computational methods for critical research in education and technology, focusing on six areas of consideration: criticality, philosophy, inclusivity, context, classification, and responsibility.
AI Insights
  • Decolonial theory reframes AI governance in schools, spotlighting systemic inequities.
  • Unintended consequences of ML models can amplify bias unless audited with critical lenses.
  • Transparent data pipelines are essential for accountability, yet most educational analytics lack them.
  • Critical researchers can co‑design AI tools with marginalized learners, turning data into agency.
  • Frameworks from Suresh & Guttag provide a checklist for anticipating harmful side effects.
  • Books like Critical Perspectives on Technology and Education bridge theory and practice for curious scholars.
September 02, 2025
Save to Reading List
French National Research
Abstract
We study social learning from multiple experts whose precision is unknown and who care about reputation. The observer both learns a persistent state and ranks experts. In a binary baseline we characterize per-period equilibria: high types are truthful; low types distort one-sidedly with closed-form mixing around the prior. Aggregation is additive in log-likelihood ratios. Light-touch design -- evaluation windows scored by strictly proper rules or small convex deviation costs -- restores strict informativeness and delivers asymptotic efficiency under design (consistent state learning and reputation identification). A Gaussian extension yields a mimicry coefficient and linear filtering. With common shocks, GLS weights are optimal and correlation slows learning. The framework fits advisory panels, policy committees, and forecasting platforms, and yields transparent comparative statics and testable implications.
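The additive log-likelihood-ratio aggregation the abstract describes can be sketched in a few lines. This is an illustrative toy, not the paper's model: the function name and setup are ours, and it assumes a binary state with truthful reports of known precision.

```python
import math

def aggregate_log_likelihood_ratios(prior, signals, precisions):
    """Combine expert reports about a binary state by adding log-likelihood ratios.

    prior      : prior probability that the state is 1
    signals    : list of reports, each 1 (state is 1) or 0 (state is 0)
    precisions : p_i = probability that expert i's report matches the true state
    """
    log_odds = math.log(prior / (1 - prior))
    for s, p in zip(signals, precisions):
        llr = math.log(p / (1 - p))          # evidence weight of one truthful report
        log_odds += llr if s == 1 else -llr  # additive in log-likelihood ratios
    return 1 / (1 + math.exp(-log_odds))     # back to a probability

# Three experts of precision 0.6, 0.7, 0.8 all report state 1:
posterior = aggregate_log_likelihood_ratios(0.5, [1, 1, 1], [0.6, 0.7, 0.8])
```

Because the update is additive in log-odds, the order of reports is irrelevant, and three moderately precise unanimous experts already push a flat prior to roughly 93% confidence.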
AI Insights
  • The authors embed a machine‑learning pipeline to quantify expert precision from noisy reports.
  • A public replication package and open‑source code accompany every simulation and estimation result.
  • Comparative statics show how reputation incentives reshape the mixing distribution for low‑type experts.
  • The Gaussian extension introduces a mimicry coefficient that captures how experts imitate each other’s signals.
  • Linear filtering of continuous reports yields a closed‑form estimator that is asymptotically efficient.
  • With common shocks, GLS weights dominate and correlation is shown to slow learning predictably.
  • The framework applies to advisory panels, policy committees, and forecasting platforms, offering testable predictions.
September 01, 2025
Save to Reading List
Econometrics for Social Good
Toulouse School of Econom
Abstract
We embed observational learning (BHW) in a symmetric duopoly with random arrivals and search frictions. With fixed posted prices, a mixed-strategy pricing equilibrium exists and yields price dispersion even with ex-ante identical firms. We provide closed-form cascade bands and show wrong cascades occur with positive probability for interior parameters, vanishing as signals become precise or search costs fall; absorption probabilities are invariant to the arrival rate. In equilibrium, the support of mixed prices is connected and overlapping; its width shrinks with signal precision and expands with search costs, and mean prices comove accordingly. Under Calvo price resets (Poisson opportunities), stationary dispersion and mean prices fall; when signals are sufficiently informative, wrong-cascade risk also declines. On welfare, a state-contingent Pigouvian search subsidy implements the planner's cutoff. Prominence (biased first visits) softens competition and depresses welfare; neutral prominence is ex-ante optimal.
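The observational-learning (BHW) dynamic the abstract builds on can be sketched as a toy simulation. Everything here is an illustrative assumption of ours rather than the paper's duopoly model: the names, equal-precision binary signals, and tie-breaking by one's own signal.

```python
import math
import random

def simulate_cascade(true_state, precision, n_agents, seed=0):
    """Toy BHW sequence: each agent draws a private binary signal of the given
    precision, observes all earlier actions, and adopts (1) or rejects (0).
    Once public evidence outweighs any single signal, agents herd and their
    actions stop revealing information -- a cascade, possibly a wrong one."""
    rng = random.Random(seed)
    llr = math.log(precision / (1 - precision))  # evidence weight of one signal
    public_log_odds = 0.0                        # flat prior over the two states
    actions = []
    for _ in range(n_agents):
        if abs(public_log_odds) > llr + 1e-12:
            action = int(public_log_odds > 0)    # cascade: private signal ignored
        else:
            signal = true_state if rng.random() < precision else 1 - true_state
            action = signal                      # action reveals the signal
            public_log_odds += llr if signal == 1 else -llr
        actions.append(action)
    return actions

# With weak signals a wrong cascade (herding on 0 while the state is 1) can occur:
history = simulate_cascade(true_state=1, precision=0.6, n_agents=50)
```

Raising `precision` increases each signal's evidence weight relative to the herd, making wrong cascades rarer, consistent with the abstract's comparative static that wrong cascades vanish as signals become precise.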
AI Insights
  • Closed‑form cascade bands reveal that wrong cascades persist with positive probability unless signals are precise or search costs drop.
  • The mixed‑strategy price support is always connected and overlapping, its width shrinking as signal precision rises.
  • Calvo price resets lower both stationary dispersion and mean prices, while highly informative signals further curb wrong‑cascade risk.
  • A state‑contingent Pigouvian search subsidy can implement the planner’s optimal cutoff, improving welfare under search frictions.
  • Biased first‑visit prominence softens competition and depresses welfare, whereas neutral prominence is ex‑ante optimal.
  • “Social Learning” is defined as consumers updating beliefs from others’ choices, a key driver of herding in this duopoly.
  • For deeper insight, see “Pricing in a Duopoly with Observational Learning” by Sayedi and “Rational Herds” by Chamley.
September 01, 2025
Save to Reading List
University of Sussex, Fal
Abstract
This paper presents a refined country-level integrated assessment model, FUND 3.9n, that extends the regional FUND 3.9 framework by incorporating sector-specific climate impact functions and parametric uncertainty analysis for 198 individual countries. The model enables estimation of the national social cost of carbon (NSCC), capturing heterogeneity across nations from economic structure, climate sensitivity, and population exposure. Our results demonstrate that both the NSCC and the global sum estimates are highly sensitive to damage specifications and preference parameters, including the pure rate of time preference and relative risk aversion. Compared to aggregated single-sector approaches, the disaggregated model with uncertainty yields higher values of the NSCC for low- and middle-income countries. The paper contributes to the literature by quantifying how sector-specific vulnerabilities and stochastic variability amplify climate damages and reshape global equity in the distribution of the NSCC. The NSCCs derived from our model offer policy-relevant metrics for adaptation planning, mitigation target setting, and equitable burden-sharing in international climate negotiations. This approach bridges the gap between globally harmonized carbon pricing and nationally differentiated climate impacts, providing a theoretically grounded and empirically rich framework for future climate policy design.
AI Insights
  • The United States tops the climate‑damage score list, reflecting its large economy and exposure.
  • Russia and Uzbekistan rank high; Timor‑Leste, Turkmenistan, Tajikistan, Papua New Guinea, and Paraguay are the bottom five.
  • The model draws on 198 countries’ World Bank and IMF data, processed with statistical software.
  • The World Factbook and “Economic Indicators for Countries Around the World” help cross‑validate results.
  • Limited data for some nations may bias NSCC estimates, a key weakness.
  • Methodology reveals regional disparities, with some areas growing faster than others.
  • Sector‑specific damage functions support targeted adaptation and equitable burden‑sharing.
September 04, 2025
Save to Reading List
Causal ML for Social Good
University of Innsbruck
Abstract
Many constructs that characterize language, like its complexity or emotionality, have a naturally continuous semantic structure; a public speech is not just "simple" or "complex," but exists on a continuum between extremes. Although large language models (LLMs) are an attractive tool for measuring scalar constructs, their idiosyncratic treatment of numerical outputs raises questions of how to best apply them. We address these questions with a comprehensive evaluation of LLM-based approaches to scalar construct measurement in social science. Using multiple datasets sourced from the political science literature, we evaluate four approaches: unweighted direct pointwise scoring, aggregation of pairwise comparisons, token-probability-weighted pointwise scoring, and finetuning. Our study yields actionable findings for applied researchers. First, LLMs prompted to generate pointwise scores directly from texts produce discontinuous distributions with bunching at arbitrary numbers. The quality of the measurements improves with pairwise comparisons made by LLMs, but it improves even more by taking pointwise scores and weighting them by token probability. Finally, finetuning smaller models with as few as 1,000 training pairs can match or exceed the performance of prompted LLMs.
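The token-probability-weighted pointwise scoring that the abstract finds most reliable can be sketched as follows; the function name and the example distribution are illustrative, not taken from the paper.

```python
def probability_weighted_score(token_probs):
    """Collapse an LLM's next-token distribution over numeric labels into a
    single scalar: the probability-weighted mean of the candidate scores.

    token_probs : dict mapping a numeric label token (e.g. "1".."5") to the
                  model's probability of emitting that token.
    """
    total = sum(token_probs.values())  # renormalize over the label tokens only
    return sum(int(tok) * p for tok, p in token_probs.items()) / total

# Hypothetical next-token distribution over a 1-5 complexity scale:
score = probability_weighted_score({"1": 0.05, "2": 0.10, "3": 0.40, "4": 0.30, "5": 0.15})
```

Instead of keeping only the argmax label, the whole distribution over the scale is collapsed into its mean, which smooths the bunching at arbitrary numbers that direct pointwise scoring produces.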
AI Insights
  • Grandstanding is scored by criteria such as denouncing/praising institutions, policy stances, and probing witnesses.
  • Speech length is irrelevant to grandstanding scores, highlighting content‑over‑form analysis.
  • Finetuning a small model on 1,000 pairs can match or beat large LLMs.
  • Token‑probability weighting smooths LLM pointwise scores into a more reliable scalar.
  • The rubric demands careful reading, exposing raters to bias from prior beliefs.
  • Grandstanding can influence opinion yet is debated as wasteful procedural time.
  • These insights enable objective, scalable metrics for political speech analysis.
September 03, 2025
Save to Reading List
AI for Social Good
Ottawa, Canada
Abstract
Recent advances in AI raise the possibility that AI systems will one day be able to do anything humans can do, only better. If artificial general intelligence (AGI) is achieved, AI systems may be able to understand, reason, problem solve, create, and evolve at a level and speed that humans will increasingly be unable to match, or even understand. These possibilities raise a natural question as to whether AI will eventually become superior to humans, a successor "digital species", with a rightful claim to assume leadership of the universe. However, a deeper consideration suggests the overlooked differentiator between human beings and AI is not the brain, but the central nervous system (CNS), providing us with an immersive integration with physical reality. It is our CNS that enables us to experience emotion including pain, joy, suffering, and love, and therefore to fully appreciate the consequences of our actions on the world around us. And that emotional understanding of the consequences of our actions is what is required to be able to develop sustainable ethical systems, and so be fully qualified to be the leaders of the universe. A CNS cannot be manufactured or simulated; it must be grown as a biological construct. And so, even the development of consciousness will not be sufficient to make AI systems superior to humans. AI systems may become more capable than humans on almost every measure and transform our society. However, the best foundation for leadership of our universe will always be DNA, not silicon.
AI Insights
  • AI lacks genuine empathy; it cannot feel affective states, a gap neural nets cannot close.
  • Consciousness in machines would need more than symbolic reasoning—an emergent property tied to biology.
  • Treating AI as moral agents risks misaligned incentives, so we must embed human emotional context.
  • A nuanced strategy blends behavioral economics and affective neuroscience to guide ethical AI design.
  • The book Unto Others shows evolutionary roots of unselfishness, hinting at principles for AI alignment.
  • Recommended papers like The Scientific Case for Brain Simulations deepen insight into biological limits of AI.
  • The paper invites hybrid bio‑digital systems that preserve CNS‑mediated experience while harnessing silicon speed.
September 04, 2025
Save to Reading List
Johns Hopkins Department
Abstract
In the coming decade, artificially intelligent agents with the ability to plan and execute complex tasks over long time horizons with little direct oversight from humans may be deployed across the economy. This chapter surveys recent developments and highlights open questions for economists around how AI agents might interact with humans and with each other, shape markets and organizations, and what institutions might be required for well-functioning markets.
AI Insights
  • Generative AI agents can secretly collude, distorting prices and eroding competition.
  • Experiments show that large language models can be nudged toward more economically rational decisions.
  • Reputation markets emerge when AI agents maintain short‑term memory and community enforcement.
  • The revival of trade hinges on institutions like the law merchant and private judges, now re‑examined for AI economies.
  • Program equilibrium theory offers a framework to predict AI behavior in multi‑agent settings.
  • Endogenous growth models predict that AI adoption may increase variety but also create excess supply.
  • Classic texts such as Schelling’s “The Strategy of Conflict” and Scott’s “Seeing Like a State” illuminate the strategic and institutional dynamics of AI markets.
September 01, 2025
Save to Reading List
Female Empowerment
The University of Chicago
Abstract
As LLMs are increasingly applied in socially impactful settings, concerns about gender bias have prompted growing efforts both to measure and mitigate such bias. These efforts often rely on evaluation tasks that differ from natural language distributions, as they typically involve carefully constructed task prompts that overtly or covertly signal the presence of gender bias-related content. In this paper, we examine how signaling the evaluative purpose of a task impacts measured gender bias in LLMs. Concretely, we test models under prompt conditions that (1) make the testing context salient, and (2) make gender-focused content salient. We then assess prompt sensitivity across four task formats with both token-probability and discrete-choice metrics. We find that even minor prompt changes can substantially alter bias outcomes, sometimes reversing their direction entirely. Discrete-choice metrics further tend to amplify bias relative to probabilistic measures. These findings not only highlight the brittleness of LLM gender bias evaluations but also open a new puzzle for the NLP benchmarking and development community: To what extent can well-controlled testing designs trigger LLM "testing mode" performance, and what does this mean for the ecological validity of future benchmarks?
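The abstract's contrast between discrete-choice and probabilistic metrics can be made concrete with a toy two-completion example. The function names and numbers below are ours, purely illustrative of the amplification effect.

```python
def probabilistic_bias(p_he, p_she):
    """Soft metric: signed probability gap between two gendered completions,
    normalized over the two options."""
    return (p_he - p_she) / (p_he + p_she)

def discrete_choice_bias(p_he, p_she):
    """Hard metric: which completion the model would actually pick."""
    return 1.0 if p_he > p_she else -1.0

# A tiny two-point probability gap...
soft = probabilistic_bias(0.51, 0.49)        # small signed signal near 0.02
hard = discrete_choice_bias(0.51, 0.49)      # registers as maximal bias, 1.0
```

A near-indifferent model yields a tiny probabilistic bias but the maximal discrete-choice bias, which is one way metric choice can amplify, or with a small prompt perturbation even flip, the measured direction.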
AI Insights
  • A single word swap in a prompt can flip measured gender bias direction, exposing extreme prompt sensitivity in LLMs.
  • Discrete-choice metrics amplify bias signals more than token‑probability measures, making metric choice critical.
  • Testing four task formats revealed that “testing mode” activation can distort ecological validity, challenging benchmark realism.
  • Prompt sensitivity is defined as the degree to which subtle prompt edits alter model outputs, offering a quantitative bias lens.
  • Hardt et al.’s “Fairness and Machine Learning” and Barocas et al.’s “Bias in AI” are essential reads for contextualizing these findings.
September 04, 2025
Save to Reading List

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Measurable ways to end poverty
  • Racism
  • Poverty
  • Animal Welfare
  • Healthy Society
You can edit or add more interests any time.

Unsubscribe from these updates