University of California, Merced
AI Insights
- Neural social choice theory studies how to aggregate individual preferences into a collective decision. (ML: 0.97)
- The design space of neural social choice is closer to ML model selection than to classical axiomatic deduction. (ML: 0.95)
- Classical voting rules are often computationally hard to compute exactly, making them difficult to deploy at scale. (ML: 0.94)
- Deep learning can be used to learn aggregation rules from data while steering them toward an intended axiomatic compromise. (ML: 0.92)
- Key concepts: neural social choice theory, classical voting rules, axiomatic compromise, permutation-invariant neural social choice, canonical embeddings from social choice theory. (ML: 0.89)
- Manipulation as a learning-and-security problem can be addressed by measuring attack surfaces and attack success rates under bounded coalitions or vote changes; a toy measurement sketch follows this list. (ML: 0.87)
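To make that last insight concrete, here is a minimal sketch of estimating an attack success rate under a bounded (single-voter) manipulation model. The plurality rule, the attack model, and all names are our illustrative assumptions, not the Review's setup.

```python
# Toy estimate of manipulability under bounded vote changes (one voter).
# Assumptions (ours, not the paper's): plurality rule, alphabetical
# tie-breaking, uniformly random sincere profiles.
import itertools
import random

CANDS = ["a", "b", "c"]

def plurality_winner(profile):
    # profile: list of rankings (tuples, best candidate first)
    tally = {c: 0 for c in CANDS}
    for r in profile:
        tally[r[0]] += 1
    # max over alphabetically sorted keys => ties go to the earlier name
    return max(sorted(tally), key=lambda c: tally[c])

def attack_success_rate(n_voters=7, trials=1000, seed=0):
    rng = random.Random(seed)
    rankings = list(itertools.permutations(CANDS))
    hits = 0
    for _ in range(trials):
        profile = [rng.choice(rankings) for _ in range(n_voters)]
        w = plurality_winner(profile)
        for i, sincere in enumerate(profile):
            better = sincere[:sincere.index(w)]  # candidates voter i prefers to w
            if any(plurality_winner(profile[:i] + [alt] + profile[i + 1:]) in better
                   for alt in rankings):
                hits += 1  # some voter has a profitable unilateral misreport
                break
    return hits / trials

print("single-voter manipulability ~", attack_success_rate())
```

The same loop generalizes to coalitions of size k by searching over joint misreports, which is the "bounded coalitions" attack surface the insight refers to.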
Abstract
Social choice is no longer a peripheral concern of political theory or economics; it has become a foundational component of modern machine learning systems. From auctions and resource allocation to federated learning, participatory governance, and the alignment of large language models, machine learning pipelines increasingly aggregate heterogeneous preferences, incentives, and judgments into collective decisions. In effect, many contemporary machine learning systems already implement social choice mechanisms, often implicitly and without explicit normative scrutiny.
This Review surveys differentiable social choice: an emerging paradigm that formulates voting rules, mechanisms, and aggregation procedures as learnable, differentiable models optimized from data. We synthesize work across auctions, voting, budgeting, liquid democracy, decentralized aggregation, and inverse mechanism learning, showing how classical axioms and impossibility results reappear as objectives, constraints, and optimization trade-offs. We conclude by identifying 36 open problems defining a new research agenda at the intersection of machine learning, economics, and democratic theory.
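To illustrate the paradigm, here is a minimal sketch of a differentiable, permutation-invariant (DeepSets-style) voting rule trained to imitate Borda winners. The architecture, the name NeuralVotingRule, and the training target are our assumptions for illustration, not the Review's method; axiomatic penalties could be added to the loss to steer the rule toward an intended compromise.

```python
# Minimal differentiable voting rule: each voter's ranking is embedded by a
# shared encoder, summed over voters (so the rule is anonymous, i.e.
# permutation-invariant over voters), and decoded into candidate scores.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CANDIDATES = 4

class NeuralVotingRule(nn.Module):  # hypothetical name
    def __init__(self, n_candidates, hidden=32):
        super().__init__()
        # phi: per-voter encoder over a ranking given as a permutation matrix
        self.phi = nn.Sequential(
            nn.Linear(n_candidates * n_candidates, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden),
        )
        # rho: decoder from the pooled embedding to candidate logits
        self.rho = nn.Sequential(
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, n_candidates),
        )

    def forward(self, profiles):
        # profiles: (batch, n_voters, n_candidates, n_candidates)
        b, v, c, _ = profiles.shape
        emb = self.phi(profiles.reshape(b, v, c * c))
        return self.rho(emb.sum(dim=1))  # sum over voters => anonymity

def borda_winner(positions):
    # positions: (batch, n_voters, n_candidates); entry = rank of candidate, 0 = top
    scores = (N_CANDIDATES - 1 - positions).sum(dim=1)
    return scores.argmax(dim=1)

rule = NeuralVotingRule(N_CANDIDATES)
opt = torch.optim.Adam(rule.parameters(), lr=1e-3)
for _ in range(200):
    # random profiles: 64 elections, 5 voters each
    pos = torch.stack([torch.stack([torch.randperm(N_CANDIDATES)
                                    for _ in range(5)]) for _ in range(64)])
    profiles = F.one_hot(pos, N_CANDIDATES).float()
    loss = F.cross_entropy(rule(profiles), borda_winner(pos))
    opt.zero_grad(); loss.backward(); opt.step()
```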
Why are we recommending this paper?
Due to your interest in Econometrics for Social Good
This paper directly addresses the intersection of machine learning and social choice, aligning with your interest in using AI for social good. Its focus on learned mechanisms and alignment is particularly relevant to addressing inequality.
Georgia Institute of Technology
AI Insights
- Potential for biased or discriminatory outcomes. (ML: 0.99)
- Participants acknowledged AI's potential to enhance social efficacy by making decisions and delegating administrative burdens, but also noted its limitations in providing nuanced human feedback. (ML: 0.99)
- AI's potential benefits in enhancing social efficacy must be balanced against its limitations in providing nuanced human feedback. (ML: 0.99)
- Risk of over-reliance on AI, leading to decreased human skills and abilities. (ML: 0.99)
- Lack of transparency in AI decision-making processes. (ML: 0.98)
- It is essential to address concerns about data privacy, consent, and fairness among all users. (ML: 0.97)
- The integration of AI into interpersonal interactions carries both transformative potential and profound threats to human agency. (ML: 0.95)
- The integration of AI into interpersonal interactions raises concerns about data privacy, consent, and the blurring of boundaries between humans and machines. (ML: 0.95)
- Non-primary users: individuals involved in social interactions with the primary user but without direct access to the AI. (ML: 0.94)
- AI as a social other: the introduction of artificial intelligence as a participant in interpersonal interactions, creating new roles and complexities. (ML: 0.94)
- Interpersonal interaction: the exchange of information, ideas, or feelings between individuals. (ML: 0.89)
Abstract
Recent advances are integrating AI into the fabric of human social life, creating transformative, co-shaping relationships between humans and AI. This trend makes it urgent to investigate how these systems, in turn, shape their users. We conducted a three-phase design study with 24 participants to explore this dynamic. Our findings reveal critical tensions: (1) social AI often exacerbates the very interpersonal problems it is designed to mitigate; (2) it introduces nuanced privacy harms for secondary users inadvertently involved in AI-mediated social interactions; and (3) it can threaten the primary user's personal agency and identity. We argue these tensions expose a problematic tendency in the user-centered paradigm, which often prioritizes immediate user experience at the expense of core human values like interpersonal ethics and self-efficacy. We call for a paradigm shift toward a more provocative and relational design perspective that foregrounds long-term social and personal consequences.
Why are we recommending this paper?
Due to your interest in Causal ML for Social Good
Given your interest in the societal impact of AI, this paper's exploration of how AI is reshaping social relationships is highly pertinent. The study's focus on understanding the challenges posed by AI integration into social life aligns well with your broader concerns about social change.
Maastricht University
AI Insights
- The paper discusses the relationship between groundedness and maximization of complete and transitive preference relations. (ML: 0.97)
- Key concepts explored include consistency, monotonicity, and the weak axiom of revealed preference (WARP); a toy checker for WARP follows this list. (ML: 0.97)
- The results have implications for understanding rationalizability and groundedness in choice theory. (ML: 0.95)
- GAIC: Grounded Axiom of Revealed Preference. (ML: 0.92)
- A choice function c is said to satisfy GMAIC if it maximizes a complete and transitive preference relation over non-empty subsets of X. (ML: 0.91)
- Groundedness: a choice function c satisfies groundedness if for all x ∈ X there exists a set S ⊆ X \ {x} such that I(S) = ∅. (ML: 0.89)
- GMAIC: Grounded Maximizing Axiom of Choice. (ML: 0.89)
- The paper provides a comprehensive proof of various theorems and propositions related to choice theory. (ML: 0.89)
- The proofs cover topics such as injectivity, surjectivity, and double union closure of interpretation functions. (ML: 0.88)
- A choice function c is said to satisfy GAIC if it satisfies groundedness and the corresponding interpretation I satisfies consistency, monotonicity, and WARP. (ML: 0.88)
- The proofs demonstrate the relationship between different axioms and properties of choice functions. (ML: 0.86)
- The text appears to be a set of proofs of theorems and propositions in choice theory, specifically in the context of rationalizability and groundedness. (ML: 0.86)
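To make the revealed-preference conditions above concrete, here is a toy finite-model checker for single-valued WARP over a three-element domain. The encoding of menus and the rationalizable example are our illustrative assumptions, not the paper's formal framework.

```python
# Hypothetical checker: does a single-valued choice function c over the
# non-empty menus of X satisfy WARP? (If x is ever chosen from a menu
# containing y, then y is never chosen from a menu that also contains x.)
from itertools import combinations

X = {"a", "b", "c"}

def all_menus(domain):
    for r in range(1, len(domain) + 1):
        for menu in combinations(sorted(domain), r):
            yield frozenset(menu)

def satisfies_warp(c):
    for S in all_menus(X):
        x = c(S)
        for y in S - {x}:
            # y is revealed inferior to x; WARP forbids choosing y
            # from any menu that still offers x
            for T in all_menus(X):
                if x in T and c(T) == y:
                    return False
    return True

# A choice function that maximizes the strict preference a > b > c is
# rationalizable, so it should pass the check.
order = ["a", "b", "c"]
c = lambda S: min(S, key=order.index)
print(satisfies_warp(c))  # True
```

A groundedness-style check would be analogous: enumerate menus and test the quantified condition directly, which is feasible whenever X is small and finite.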
Abstract
This paper proposes a model of choice via agentic artificial intelligence (AI). A key feature is that the AI may misinterpret a menu before recommending what to choose. A single acyclicity condition guarantees that there is a monotonic interpretation and a strict preference relation that together rationalize the AI's recommendations. Since this preference is in general not unique, there is no safeguard against it misaligning with that of a decision maker. What enables the verification of such AI alignment is interpretations that satisfy double monotonicity. Indeed, double monotonicity ensures full identifiability and internal consistency. But an additional idempotence property is required to guarantee that recommendations are fully rational and remain grounded within the original feasible set.
Why are we recommending this paper?
Due to your interest in AI for Social Good
This paper's exploration of AI influencing human choice offers a valuable perspective on how technology can impact decision-making processes. The concept of AI misinterpretation is directly relevant to your interest in measuring and understanding the effects of AI on social outcomes.
Duke University
AI Insights
- Without more context, it is challenging to determine the specific focus or goals of the collaborative effort. (ML: 0.98)
- The sheer length and complexity of the list may make it difficult to identify key contributors or understand the scope of the research. (ML: 0.98)
- The diversity of institutions and countries represented indicates a global scope to the research. (ML: 0.96)
- The authors are from a variety of institutions and countries, indicating a global collaboration. (ML: 0.96)
- This type of collaboration is becoming increasingly common in scientific research, as researchers recognize the benefits of working together to tackle complex problems. (ML: 0.93)
- The fact that many authors have worked together on multiple projects highlights the importance of collaboration in scientific research. (ML: 0.93)
- The sheer number of authors suggests that this is a large-scale collaborative effort. (ML: 0.90)
- Many of the authors have worked on multiple projects together, suggesting a strong research network. (ML: 0.89)
- Research network: a group of researchers who collaborate on multiple projects and share knowledge and resources. (ML: 0.84)
- The list of authors is extremely long, with over 900 names. (ML: 0.76)
Abstract
As internet access expands, so does exposure to harmful content, increasing the need for effective moderation. Research has demonstrated that large language models (LLMs) can be effectively utilized for social media moderation tasks, including harmful content detection. While proprietary LLMs have been shown to zero-shot outperform traditional machine learning models, the out-of-the-box capability of open-weight LLMs remains an open question.
Motivated by recent developments in reasoning LLMs, we evaluate seven state-of-the-art models: four proprietary and three open-weight. Testing with real-world posts on Bluesky, moderation decisions by the Bluesky Moderation Service, and annotations by two authors, we find a considerable degree of overlap between the sensitivity (81%--97%) and specificity (91%--100%) of the open-weight LLMs and those (72%--98% and 93%--99%) of the proprietary ones. Additionally, our analysis reveals that specificity exceeds sensitivity for rudeness detection, but the opposite holds for intolerance and threats. Lastly, we identify inter-rater agreement across human moderators and the LLMs, highlighting considerations for deploying LLMs in both platform-scale and personalized moderation contexts. These findings show open-weight LLMs can support privacy-preserving moderation on consumer-grade hardware and suggest new directions for designing moderation systems that balance community values with individual user preferences.
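The sensitivity, specificity, and inter-rater agreement figures above are standard quantities; the sketch below shows how they can be computed, on toy labels rather than the paper's Bluesky annotations (the data and label names are our assumptions).

```python
# Sensitivity, specificity, and Cohen's kappa for two binary raters.
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(a, b):
    a, b = np.asarray(a), np.asarray(b)
    po = np.mean(a == b)                        # observed agreement
    pe = sum(np.mean(a == k) * np.mean(b == k)  # chance agreement
             for k in np.unique(np.concatenate([a, b])))
    return (po - pe) / (1 - pe)

human = [1, 0, 1, 1, 0, 0, 1, 0]  # toy moderator labels (1 = harmful)
model = [1, 0, 1, 0, 0, 0, 1, 1]  # toy LLM decisions
sens, spec = sensitivity_specificity(human, model)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} "
      f"kappa={cohens_kappa(human, model):.2f}")
```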
Why are we recommending this paper?
Due to your interest in Causal ML for Social Good
With your interest in social media moderation and harmful content, this study's investigation into the use of LLMs for this purpose is a strong fit. The research directly addresses the need for technological solutions to combat online harms, aligning with your broader interest in a healthy society.
University of Texas at Austin
AI Insights
- Supporting learning online requires balancing social resources with tools that promote learning concepts and analytic thinking skills. (ML: 0.99)
- The study highlights the importance of creating spaces where people can overcome failure in pursuit of learning. (ML: 0.98)
- Topic porousness: the ability of commenting cultures to allow new topics to emerge and transform the conversation at hand. (ML: 0.97)
- The three factors that foster participatory debugging are a sustained community, identified problems, and porous conversations. (ML: 0.97)
- Papert (1980) observes that initial failures can reinforce negative beliefs about one's abilities. (ML: 0.97)
- The study found that this phenomenon is rare on Scratch, but it can be supported by creating environments that balance social interaction with learning opportunities. (ML: 0.96)
- The study only analyzed a subset of projects on Scratch, which may not be representative of the entire platform. (ML: 0.95)
- The analysis focused primarily on the social aspects of participatory debugging and did not explore other factors that may influence its emergence. (ML: 0.94)
- It's like a team effort where everyone contributes their skills and knowledge to overcome challenges. (ML: 0.94)
- Participatory debugging is a rare phenomenon on Scratch, but it can be supported by balancing interest-driven engagement with incentives for learning. (ML: 0.93)
- Participatory debugging: a process where users collaborate to solve problems and overcome challenges in programming. (ML: 0.93)
- The connected learning agenda seeks to link deep vertical expertise with practices recognized as a source of professional opportunity. (ML: 0.92)
- Participatory debugging is when users work together to solve problems and learn from each other. (ML: 0.88)
Abstract
Although socializing is a powerful driver of youth engagement online, platforms struggle to leverage engagement to promote learning. We seek to understand this dynamic using a multi-stage analysis of over 14,000 comments on Scratch, an online platform designed to support learning about programming. First, we inductively develop the concept of "participatory debugging" -- a practice through which users learn through collaborative technical troubleshooting. Second, we use a content analysis to establish how common the practice is on Scratch. Third, we conduct a qualitative analysis of user activity over time and identify three factors that serve as social antecedents of participatory debugging: (1) sustained community, (2) identifiable problems, and (3) what we call "topic porousness" to describe conversations that are able to span multiple topics. We integrate these findings in a theoretical framework that highlights a productive tension between the desire to promote learning and the interest-driven sub-communities that drive user engagement in many new media environments.
Why are we recommending this paper?
Due to your interest in Tech for Social Good
This paper's investigation into how online socializing can foster computational thinking is relevant to your interest in tech for social good. The focus on learning through platforms like Scratch aligns with your broader interest in accessible educational technologies.