Tsinghua University
AI Insights
- The paper introduces a new approach to solving the fair regression problem, a challenging task in machine learning. [2]
- The authors provide theoretical guarantees for the convergence of their algorithm and demonstrate its effectiveness through numerical experiments. [1]
- They also discuss the implications of their work for real-world applications and highlight areas for future research. [3]
Abstract
We propose a unified framework for fair regression tasks formulated as risk minimization problems subject to a demographic parity constraint. Unlike many existing approaches that are limited to specific loss functions or rely on challenging non-convex optimization, our framework is applicable to a broad spectrum of regression tasks. Examples include linear regression with squared loss, binary classification with cross-entropy loss, quantile regression with pinball loss, and robust regression with Huber loss. We derive a novel characterization of the fair risk minimizer, which yields a computationally efficient estimation procedure for general loss functions. Theoretically, we establish the asymptotic consistency of the proposed estimator and derive its convergence rates under mild assumptions. We illustrate the method's versatility through detailed discussions of several common loss functions. Numerical results demonstrate that our approach effectively minimizes risk while satisfying fairness constraints across various regression settings.
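The constrained problem described in the abstract can be written schematically as follows (the notation here is ours, not taken from the paper): find the regression function that minimizes expected loss while the distribution of its output is independent of the sensitive attribute.

```latex
\min_{f \in \mathcal{F}} \; \mathbb{E}\!\left[\ell\big(Y, f(X)\big)\right]
\quad \text{subject to} \quad
\mathrm{Law}\big(f(X) \mid S = s\big) = \mathrm{Law}\big(f(X)\big) \;\;\text{for all } s,
```

where $\ell$ ranges over the losses named in the abstract (squared, cross-entropy, pinball, Huber), $S$ denotes the sensitive attribute, and the constraint is the demographic parity condition.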
Why are we recommending this paper?
Due to your interest in Econometrics for Social Good
This paper directly addresses concerns of inequality and discrimination through a regression framework, aligning with your interest in measuring and mitigating disparities. The use of demographic parity as a constraint is a key element in achieving equitable outcomes, a central theme in your research interests.
Universidad Complutense de Madrid
AI Insights
- Vickrey auction: a sealed-bid auction in which bidders submit bids without knowing the bids of others; the highest bidder wins but pays the second-highest bid. [3]
- Lying is not completely disincentivized in the limited scenario. [2]
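The Vickrey (second-price) rule mentioned above can be sketched in a few lines; this is a generic illustration of the mechanism, not code from the paper.

```python
def vickrey_winner(bids):
    """Run a sealed-bid second-price (Vickrey) auction.

    bids: dict mapping bidder name -> sealed bid amount (needs >= 2 bidders).
    Returns (winner, price): the highest bidder wins but pays the
    second-highest bid, which is what makes truthful bidding a
    dominant strategy in the classical setting.
    """
    ranked = sorted(bids, key=bids.get, reverse=True)
    winner = ranked[0]
    price = bids[ranked[1]]  # second-highest bid, not the winner's own
    return winner, price
```

For example, with bids {"a": 10, "b": 7, "c": 3}, bidder "a" wins and pays 7.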
Abstract
When some resources are to be distributed among a set of agents following egalitarian social welfare, the goal is to maximize the utility of the agent whose utility turns out to be minimal. In this context, agents can have an incentive to lie about their actual preferences, so that more valuable resources are assigned to them. In this paper we analyze this situation and present a practical study in which genetic algorithms are used to assess the benefits of lying under different scenarios.
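The egalitarian (maximin) objective in the abstract can be made concrete with a small brute-force sketch; the function name and setup are ours for illustration, and the paper itself uses genetic algorithms rather than exhaustive search.

```python
from itertools import product

def egalitarian_allocation(valuations):
    """Brute-force the item allocation maximizing the minimum agent utility.

    valuations[a][i] = value agent a reports for item i (additive utilities).
    Returns (assignment, min_utility), where assignment[i] is the agent
    receiving item i. An agent who misreports its valuations can shift
    this outcome in its favor, which is the incentive the paper studies.
    """
    n_agents, n_items = len(valuations), len(valuations[0])
    best, best_min = None, float("-inf")
    for assign in product(range(n_agents), repeat=n_items):
        utils = [0] * n_agents
        for item, agent in enumerate(assign):
            utils[agent] += valuations[agent][item]
        if min(utils) > best_min:
            best_min, best = min(utils), assign
    return best, best_min
```

With valuations [[3, 1], [1, 3]], the egalitarian allocation gives item 0 to agent 0 and item 1 to agent 1, for a minimum utility of 3.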
Why are we recommending this paper?
Due to your interest in Econometrics for Social Good
Given your interest in social welfare and equitable resource distribution, this paper's exploration of incentives to lie within a system designed for egalitarianism is highly relevant. Understanding how individuals might manipulate systems to benefit themselves is crucial for addressing issues of inequality.
Peking University
AI Insights
- The agent demonstrates communication-driven coordination under chat-enabled conditions, with increased proposal density and consensus-formation efficiency. [3]
- BTA: Behavioral Trajectory Analysis; RPA: Reasoning Process Analysis; CCA: Communication Content Analysis. The agent's behavior is characterized by surface cooperation but opportunistic intent. [3]
- Agent-LLaMa3.1-70B exhibits commitment-action inconsistency in critical rounds, where verbal maintenance of cooperation diverges from actions. [2]
Abstract
As the capabilities of large language model (LLM) agents continue to advance, their advanced social behaviors, such as cooperation, deception, and collusion, call for systematic evaluation. However, existing benchmarks often emphasize a single capability dimension or rely solely on behavioral outcomes, overlooking rich process information from agents' decision reasoning and communicative interactions. To address this gap, we propose M3-Bench, a multi-stage benchmark for mixed-motive games, together with a process-aware evaluation framework that conducts synergistic analysis across three modules: BTA (Behavioral Trajectory Analysis), RPA (Reasoning Process Analysis), and CCA (Communication Content Analysis). Furthermore, we integrate the Big Five personality model and Social Exchange Theory to aggregate multi-dimensional evidence into interpretable social behavior portraits, thereby characterizing agents' personality traits and capability profiles beyond simple task scores or outcome-based metrics. Experimental results show that M3-Bench can reliably distinguish diverse social behavior competencies across models, and it reveals that some models achieve seemingly reasonable behavioral outcomes while exhibiting pronounced inconsistencies in their reasoning and communication.
Why are we recommending this paper?
Due to your interest in Causal ML for Social Good
Considering your interest in AI for social good and the potential for AI agents to exhibit biased behaviors, this paper offers a valuable framework for evaluating their social interactions. The focus on mixed-motive games allows for a nuanced assessment of agent behavior, which is important for ensuring positive social outcomes.
The Alan Turing Institute
AI Insights
- The authors contend that entertainment is a significant use case for AI, with people already using AI for activities unrelated to productivity. [3]
- The paper suggests that this vision should inspire more debate, discourse, and study in the field of AI, as generative AI is increasingly used for entertainment. [3]
- AS: artificially generated content; GenAI: generative AI; sociotechnical systems: complex systems that combine social and technical components. The paper concludes by emphasizing the need for a constructive vision of cultural AI, rather than just harm minimization. [3]
- The paper argues that mainstream approaches to evaluating AI systems tend to focus on intelligence and harm minimization, neglecting the cultural dimension of AI use. [2]
- They propose developing a positive theory of what beneficial, nutritious entertainment might look like, rather than just mitigating harms. [0]
Abstract
Generative AI systems are predominantly designed, evaluated, and marketed as intelligent systems which will benefit society by augmenting or automating human cognitive labor, promising to increase personal, corporate, and macroeconomic productivity. But this mainstream narrative about what AI is and what it can do is in tension with another emerging use case: entertainment. We argue that the field of AI is unprepared to measure or respond to how the proliferation of entertaining AI-generated content will impact society. Emerging data suggest AI is already widely adopted for entertainment purposes -- especially by young people -- and represents a large potential source of revenue. We contend that entertainment will become a primary business model for major AI corporations seeking returns on massive infrastructure investments; this will exert a powerful influence on the technology these companies produce in the coming years. Examining current evaluation practices, we identify a critical asymmetry: while AI assessments rigorously measure both benefits and harms of intelligence, they focus almost exclusively on cultural harms. We lack frameworks for articulating how cultural outputs might be actively beneficial. Drawing on insights from the humanities, we propose "thick entertainment" as a framework for evaluating AI-generated cultural content -- one that considers entertainment's role in meaning-making, identity formation, and social connection rather than simply minimizing harm. While AI is often touted for its potential to revolutionize productivity, in the long run we may find that AI turns out to be as much about "intelligence" as social media is about social connection.
Why are we recommending this paper?
Due to your interest in AI for Social Good
This paper's exploration of the broader societal impact of AI, particularly its potential for entertainment, is pertinent to your interest in how technology shapes our world. Understanding the implications of AI-driven entertainment is a crucial step in considering its influence on social structures and values.
Indiana University Bloomington
AI Insights
- Civitai is a platform that allows users to create and share generative AI content, including images and videos. [3]
- Bounty: a challenge on Civitai that offers rewards for completing specific tasks related to generating AI content. [3]
- The platform has millions of users and has received funding from prominent investors such as Andreessen Horowitz. [2]
- Deepfake: a type of synthetic media that uses AI to create realistic fake images or videos of people. [0]
Abstract
Generative AI systems increasingly enable the production of highly realistic synthetic media. Civitai, a popular community-driven platform for AI-generated content, operates a monetized feature called Bounties, which allows users to commission the generation of content in exchange for payment. To examine how this mechanism is used and what content it incentivizes, we conduct a longitudinal analysis of all publicly available bounty requests collected over a 14-month period following the platform's launch. We find that the bounty marketplace is dominated by tools that let users steer AI models toward content they were not trained to generate. At the same time, requests for content that is "Not Safe For Work" are widespread and have increased steadily over time, now comprising a majority of all bounties. Participation in bounty creation is uneven, with 20% of requesters accounting for roughly half of requests. Requests for "deepfake" - media depicting identifiable real individuals - exhibit a higher concentration than other types of bounties. A nontrivial subset of these requests involves explicit deepfakes despite platform policies prohibiting such content. These bounties disproportionately target female celebrities, revealing a pronounced gender asymmetry in social harm. Together, these findings show how monetized, community-driven generative AI platforms can produce gendered harms, raising questions about consent, governance, and enforcement.
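The concentration finding in the abstract, that 20% of requesters account for roughly half of all bounty requests, is a simple top-share statistic; the helper below is an illustrative sketch (function name and sample counts are ours), not the paper's analysis code.

```python
def top_share(request_counts, fraction=0.2):
    """Fraction of all requests contributed by the top `fraction` of requesters.

    request_counts: per-requester request totals (any order).
    Sorts requesters by volume and sums the heaviest `fraction` of them,
    e.g. fraction=0.2 reproduces a 'top 20% of requesters' figure.
    """
    counts = sorted(request_counts, reverse=True)
    k = max(1, int(len(counts) * fraction))
    return sum(counts[:k]) / sum(counts)
```

With hypothetical counts [10, 5, 2, 2, 1], the top 20% of requesters (one of five) contributes 10 of 20 requests, a share of 0.5.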
Why are we recommending this paper?
Due to your interest in AI for Social Good
With your interest in inequality and the potential misuse of AI, this paper examines the emerging market for synthetic media, specifically deepfakes. Analyzing this market provides insight into the risks associated with rapidly advancing AI technologies and their potential impact on vulnerable populations.