🎯 Top Personalized Recommendations
Northeastern University
AI Summary - The RAMTN system is a human-machine collaborative cognitive-enhancement paradigm based on meta-interaction, which aims to provide intelligent assistance and knowledge sharing by extracting expert decision frameworks. Its core idea is to combine the cognitive processes of human experts with the information-processing capacity of computer systems, enabling efficient decision support and knowledge reasoning. Application domains include investment, healthcare, and education, where extracted expert decision frameworks are intended to improve decision accuracy and efficiency. Meta-interaction: a technique that couples human cognitive processes with machine information processing to support decision-making and knowledge reasoning. Human-Machine Collaborative Cognition Enhancement Paradigm: a meta-interaction-based framework for intelligent assistance and knowledge sharing via extracted expert decision frameworks. The system's development and application depend on large volumes of data and information resources, raising possible data-quality and reliability concerns, and its security and privacy protections require further study. Meta-interaction techniques are widely applied and studied in decision support and knowledge reasoning. [3]
Abstract
Currently, there exists a fundamental divide between the "cognitive black box" (implicit intuition) of human experts and the "computational black box" (untrustworthy decision-making) of artificial intelligence (AI). This paper proposes a new paradigm of "human-AI collaborative cognitive enhancement," aiming to transform the dual black boxes into a composable, auditable, and extensible "functional white-box" system through structured "meta-interaction." The core breakthrough lies in the "plug-and-play cognitive framework"--a computable knowledge package that can be extracted from expert dialogues and loaded into the Recursive Adversarial Meta-Thinking Network (RAMTN). This enables expert thinking, such as medical diagnostic logic and teaching intuition, to be converted into reusable and scalable public assets, realizing a paradigm shift from "AI as a tool" to "AI as a thinking partner." This work not only provides the first engineering proof for "cognitive equity" but also opens up a new path for AI governance: constructing a verifiable and intervenable governance paradigm through "transparency of interaction protocols" rather than prying into the internal mechanisms of models. The framework is open-sourced to promote technology for good and cognitive inclusion. This paper presents independent exploratory research conducted by the author. All content presented, including the theoretical framework (RAMTN), methodology (meta-interaction), system implementation, and case validation, constitutes the author's individual research achievements.
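To make the abstract's central object concrete: below is a hedged Python sketch of what a "plug-and-play cognitive framework" might look like as a computable, auditable knowledge package. The class names, fields, and rule format are illustrative assumptions, not RAMTN's actual schema.

```python
# Hypothetical sketch only: schema and names are illustrative assumptions,
# not RAMTN's actual format.
from dataclasses import dataclass, field


@dataclass
class DecisionRule:
    condition: str   # e.g. "fever AND elevated CRP"
    action: str      # e.g. "order blood culture"
    rationale: str   # the expert's stated justification, kept for auditing


@dataclass
class CognitiveFramework:
    domain: str                 # e.g. "clinical diagnosis"
    expert_id: str              # provenance, so the package stays auditable
    rules: list[DecisionRule] = field(default_factory=list)

    def match(self, observations: set[str]) -> list[DecisionRule]:
        """Return every rule whose condition terms all appear in the observations."""
        return [r for r in self.rules
                if all(term in observations for term in r.condition.split(" AND "))]


# Load an extracted framework and query it against a case.
fw = CognitiveFramework(
    domain="clinical diagnosis",
    expert_id="expert-042",
    rules=[DecisionRule("fever AND elevated CRP", "order blood culture",
                        "bacteremia must be ruled out early")],
)
print(fw.match({"fever", "elevated CRP", "cough"}))
```

The point of the sketch is that auditability comes from the package being plain, inspectable data with provenance, rather than opaque model weights, which is what the abstract's "functional white-box" claim amounts to in engineering terms.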
Why we think this paper is great for you:
This paper explores the crucial intersection of human and artificial intelligence, addressing the potential for collaborative enhancement – a key area for responsible AI development and governance.
WZB Berlin Social Science Center
AI Summary - Global AI governance is increasingly emphasizing the importance of gender equality and addressing gender-related AI harm. [2]
Abstract
This paper examines how international AI governance frameworks address gender issues and gender-based harms. The analysis covers binding regulations, such as the EU AI Act; soft law instruments, like the UNESCO Recommendations on AI Ethics; and global initiatives, such as the Global Partnership on AI (GPAI). These instruments reveal emerging trends, including the integration of gender concerns into broader human rights frameworks, a shift toward explicit gender-related provisions, and a growing emphasis on inclusivity and diversity. Yet, some critical gaps persist, including inconsistent treatment of gender across governance documents, limited engagement with intersectionality, and a lack of robust enforcement mechanisms. This paper therefore argues that effective AI governance must be intersectional, enforceable, and inclusive. This is key to moving beyond tokenism toward meaningful equity and preventing reinforcement of existing inequalities. The study contributes to ethical AI debates by highlighting the importance of gender-sensitive governance in building a just technological future.
Why we think this paper is great for you:
The paper directly investigates how AI governance frameworks address gender-related issues, aligning with the user’s interest in female empowerment and broader social justice concerns.
University of California
AI Summary - The preference for controllable and variable machines is stronger in work contexts than play contexts, with a shift towards preferring purely variable machines in play contexts. [3]
- Variability: The range of possible outcomes that can be produced by a machine or system. [3]
- Adults show a significant shift in machine preferences between the work and play contexts, while children's preferences also change but to a lesser extent. [3]
- Children and adults prefer machines that offer controllability over variability in designing causal interventions for novel outcomes. [2]
Abstract
Learning about the causal structure of the world is a fundamental problem for human cognition. Causal models and especially causal learning have proved to be difficult for large pretrained models using standard techniques of deep learning. In contrast, cognitive scientists have applied advances in our formal understanding of causation in computer science, particularly within the Causal Bayes Net formalism, to understand human causal learning. In the very different tradition of reinforcement learning, researchers have described an intrinsic reward signal called "empowerment" which maximizes mutual information between actions and their outcomes. "Empowerment" may be an important bridge between classical Bayesian causal learning and reinforcement learning and may help to characterize causal learning in humans and enable it in machines. If an agent learns an accurate causal world model, they will necessarily increase their empowerment, and increasing empowerment will lead to a more accurate causal world model. Empowerment may also explain distinctive features of children's causal learning, as well as providing a more tractable computational account of how that learning is possible. In an empirical study, we systematically test how children and adults use cues to empowerment to infer causal relations, and design effective causal interventions.
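The "empowerment" signal the abstract describes has a standard formalization as the channel capacity of the action-to-outcome channel, i.e. the maximum over action distributions p(a) of the mutual information I(A; S'). A minimal sketch, assuming the classical Blahut-Arimoto iteration and two invented 2x2 toy channels (not the paper's experimental stimuli):

```python
# Toy computation of empowerment as channel capacity, via the classical
# Blahut-Arimoto iteration. The 2x2 channels below are invented examples.
import numpy as np


def empowerment(channel: np.ndarray, iters: int = 500) -> float:
    """Capacity in bits of the action->outcome channel.

    channel[a, s] = p(next state s | action a); each row sums to 1.
    """
    n_actions = channel.shape[0]
    p_a = np.full(n_actions, 1.0 / n_actions)       # start from uniform actions
    safe = np.where(channel > 0, channel, 1.0)      # avoid log(0); masked below
    for _ in range(iters):
        p_s = p_a @ channel                         # outcome marginal under p_a
        p_s = np.where(p_s > 0, p_s, 1.0)           # unreachable outcomes contribute 0
        kl = (channel * np.log2(safe / p_s)).sum(axis=1)  # KL(p(s|a) || p(s))
        p_a *= np.exp2(kl)                          # Blahut-Arimoto reweighting
        p_a /= p_a.sum()
    p_s = np.where(p_a @ channel > 0, p_a @ channel, 1.0)
    kl = (channel * np.log2(safe / p_s)).sum(axis=1)
    return float((p_a * kl).sum())


controllable = np.eye(2)                # each action reliably forces one outcome
uncontrollable = np.full((2, 2), 0.5)   # outcomes ignore the action entirely
print(empowerment(controllable))        # ~1.0 bit
print(empowerment(uncontrollable))      # ~0.0 bits
```

A fully controllable machine yields one bit of empowerment here, while a machine whose outcomes ignore the action yields zero, matching the intuition behind the work/play preference results in the summary above.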
Why we think this paper is great for you:
This research delves into how individuals learn about causality, a fundamental aspect of understanding social systems and interventions, directly relevant to empowerment and social change.
University of Manchester
AI Summary - The paper introduces different scenarios (1A, 1B, 2) with distinct data structures and constraints. [3]
- $C_n$: the set of admissible measures over $Y$; $C_n^{(N_1)}$: the set of empirical measures with at most $N_1$ support points; $C_n^{(N_1,N_2)}$: the set of empirical measures with at most $N_1$ support points and all probabilities an integer multiple of $1/N_2$. [3]
- $\hat{V}_{\max}^{(n)}$: the value obtained by sample optimization; $V_{\max}^{(n)}$: the object of interest in the population. [3]
- The text discusses various optimization problems related to inequality indices and their asymptotic behavior. [2]
Abstract
We develop a unified, nonparametric framework for sharp partial identification and inference on inequality indices when income or wealth are only coarsely observed -- for example via grouped tables or individual interval reports -- possibly together with linear restrictions such as known means or subgroup totals. First, for a broad class of Schur-convex inequality measures, we characterize extremal allocations and show that sharp bounds are attained by distributions with simple, finite support, reducing the underlying infinite-dimensional problem to finite-dimensional optimization. Second, for indices that admit linear-fractional representations after suitable ordering of the data (including the Gini coefficient, quantile ratios, and the Hoover index), we recast the bound problems as linear or quadratic programs, yielding fast computation of numerically sharp bounds. Third, we establish $\sqrt{n}$ inference for bound endpoints using a uniform directional delta method and a bootstrap procedure for standard errors. In ELSA wealth data with mixed point and interval observations, we obtain sharp Gini bounds of 0.714--0.792 for liquid savings and 0.686--0.767 for a broad savings measure; historical U.S. income tables deliver time-series bounds for the Gini, quantile ratios, and Hoover index under grouped information.
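To see the partial-identification idea at toy scale: with interval-valued incomes, the Gini coefficient is only set-identified, and one can bracket it by searching over allocations consistent with the intervals. The brute-force endpoint enumeration below is a crude stand-in for the paper's sharp linear/quadratic programs (in particular, the lower bound from endpoints alone need not be sharp, since equalizing allocations can lie in interval interiors); the incomes are invented.

```python
# Crude illustration of set identification for the Gini under interval data:
# enumerate interval-endpoint assignments and take the min/max Gini. This is
# NOT the paper's sharp LP/QP method; it only makes the identified-set idea
# concrete on made-up data.
from itertools import product

import numpy as np


def gini(x: np.ndarray) -> float:
    """Gini coefficient via the mean-absolute-difference formula."""
    n = len(x)
    return float(np.abs(x[:, None] - x[None, :]).sum() / (2 * n * n * x.mean()))


intervals = [(10, 15), (20, 40), (5, 8), (100, 160), (30, 30)]  # [lo, hi] per person
values = [gini(np.array(pt, dtype=float)) for pt in product(*intervals)]
print(f"Gini bracketed in [{min(values):.3f}, {max(values):.3f}]")
```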
Why we think this paper is great for you:
The paper’s focus on inequality measurement and analysis aligns with the user’s interest in poverty, inequality, and potential solutions for addressing these issues.
Ho Chi Minh City University
AI Summary - Mathematical analysis and numerical methods are used to simulate examples of the interplay between minimizing cost, increasing cooperation, and maximizing social welfare. [3]
- Agent-based simulations are conducted on a square-lattice-structured population network to observe and analyze how the θ value differs between global and local interference strategies. [3]
- Well-mixed populations: Populations where individuals interact with each other randomly. [3]
- The study concludes that optimizing intervention cost does not necessarily lead to maximizing social welfare, and there exists a gap between them. [3]
- The findings have implications for understanding the evolution of cooperation in populations and designing effective interventions. [3]
- The study explores the relationship between optimizing intervention cost and social welfare in well-mixed and structured populations under external institutional investment. [2]
Abstract
Research on promoting cooperation among autonomous, self-regarding agents has often focused on the bi-objective optimisation problem: minimising the total incentive cost while maximising the frequency of cooperation. However, the optimal value of social welfare under such constraints remains largely unexplored. In this work, we hypothesise that achieving maximal social welfare is not guaranteed at the minimal incentive cost required to drive agents to a desired cooperative state. To address this gap, we adopt a single-objective approach focused on maximising social welfare, building upon foundational evolutionary game theory models that examined cost efficiency in finite populations, in both well-mixed and structured population settings. Our analytical model and agent-based simulations show how different interference strategies, including rewarding local versus global behavioural patterns, affect social welfare and dynamics of cooperation. Our results reveal a significant gap in the per-individual incentive cost between optimising for pure cost efficiency or cooperation frequency and optimising for maximal social welfare. Overall, our findings indicate that incentive design, policy, and benchmarking in multi-agent systems and human societies should prioritise welfare-centric objectives over proxy targets of cost or cooperation frequency.
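A minimal agent-based sketch of the cost-versus-welfare tension the abstract describes: an external institution pays a reward theta per cooperator in a well-mixed donation game, and we track both the incentive spent and the resulting net welfare. The payoff scheme, the Fermi imitation rule, and all parameter values below are illustrative assumptions, not the paper's model or calibration.

```python
# Illustrative well-mixed donation game with an institutional reward theta
# paid to each cooperator per round. Parameters are invented.
import numpy as np

rng = np.random.default_rng(0)
N, b, c, beta, rounds = 200, 4.0, 1.0, 1.0, 5000


def run(theta: float) -> tuple[float, float]:
    coop = rng.random(N) < 0.5                      # random initial strategies
    cost_total = welfare_total = 0.0
    for _ in range(rounds):
        x = coop.mean()
        # expected donation-game payoff vs a random co-player, plus the reward
        payoff = np.where(coop, b * x - c + theta, b * x)
        spent = theta * coop.sum()
        cost_total += spent
        welfare_total += payoff.sum() - spent       # welfare net of the subsidy
        # Fermi imitation: a random focal agent copies a random model
        i, j = rng.integers(N, size=2)
        if rng.random() < 1.0 / (1.0 + np.exp(-beta * (payoff[j] - payoff[i]))):
            coop[i] = coop[j]
    return cost_total, welfare_total


for theta in (0.5, 1.5, 3.0):
    cost, welfare = run(theta)
    print(f"theta={theta}: incentive cost={cost:.0f}, net welfare={welfare:.0f}")
```

Sweeping theta in a sketch like this is enough to see that the cheapest incentive that sustains cooperation need not be the one that maximizes net welfare, which is the gap the paper formalizes.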
Why we think this paper is great for you:
This research investigates optimizing social welfare, a core concern given the user’s interest in promoting a healthy society and addressing systemic issues.
University of California
Abstract
We prove a quantitative version of Zhang's fundamental inequality for heights attached to polarizable endomorphisms. As an application, we obtain a gap principle for the Néron-Tate height on abelian varieties over function fields of arbitrary transcendence degree and characteristic zero, extending the result of Gao-Ge-Kühne. We also establish instances of effective gap principles for regular polynomial endomorphisms of $\mathbb{P}^2$, in the sense that all constants are explicit. These yield effective instances of uniformity in the dynamical Bogomolov conjecture in both the arithmetic and geometric settings, including examples in prime characteristic.
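For readers unfamiliar with the inequality being quantified: Zhang's theorem on successive minima is usually stated roughly as follows (recalled here in its classical form and from memory; consult Zhang's 1995 work for the precise hypotheses that the paper's quantitative version refines).

```latex
% Zhang's inequality of the successive minima, classical form (stated from
% memory; hypotheses simplified): for an irreducible subvariety X equipped
% with an ample metrized line bundle L,
\[
  e_1(X) \;\geq\; \frac{h_L(X)}{(\dim X + 1)\,\deg_L(X)} \;\geq\; e_{\dim X + 1}(X),
\]
% where e_1(X) \geq \cdots \geq e_{\dim X + 1}(X) are the successive minima
% of the height function on X and h_L(X) is the height of X itself.
```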
Why we think this paper is great for you:
The paper’s exploration of fundamental inequalities and gap principles offers a sophisticated analytical approach to understanding complex systems – potentially relevant to measuring and addressing disparities.
Yale University
Abstract
We performed a large lab-in-the-field experiment (2,591 participants across 134 Honduran villages; ten rounds) and tracked how contribution behavior unfolds in fixed, anonymous groups of size five. Contribution behavior separates early into two durable paths, one low and one high, with rare convergence thereafter. High-path players can be identified with strong accuracy early on. Groups that begin with an early majority of above-norm contributors (about 60%) are very likely to finish high. The empirical finding of a bifurcation, consistent with the theory, shows that early, high contributions by socially central people steer groups onto, and help keep them on, a high-cooperation path.
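Operationally, the claim that high-path players "can be identified with strong accuracy early on" is an early-rounds classification problem. A hedged sketch on synthetic stand-in data (the data-generating process, features, and any accuracy it prints are fabricated for illustration, not the experiment's records or method):

```python
# Synthetic stand-in for the early-identification claim: predict a player's
# long-run path from their first three rounds of contributions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)
n = 1000
high_path = rng.random(n) < 0.4                    # True = ends on the high path
# early rounds: high-path players contribute more on average, with noise
early = rng.normal(loc=np.where(high_path[:, None], 7.0, 2.0),
                   scale=2.0, size=(n, 3))

X_tr, X_te, y_tr, y_te = train_test_split(early, high_path, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)
print(f"held-out accuracy from rounds 1-3 alone: {clf.score(X_te, y_te):.2f}")
```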
Why we think this paper is great for you:
The experiment's focus on group behavior and public goods provides insights into collective action and cooperation – a key element in building a healthy society and addressing social challenges.
AI for Social Good
Carnegie Mellon
Abstract
While much research in artificial intelligence (AI) has focused on scaling capabilities, the accelerating pace of development makes countervailing work on producing harmless, "aligned" systems increasingly urgent. Yet research on alignment has diverged along two largely parallel tracks: safety--centered on scaled intelligence, deceptive or scheming behaviors, and existential risk--and ethics--focused on present harms, the reproduction of social bias, and flaws in production pipelines. Although both communities warn of insufficient investment in alignment, they disagree on what alignment means or ought to mean. As a result, their efforts have evolved in relative isolation, shaped by distinct methodologies, institutional homes, and disciplinary genealogies.
We present a large-scale, quantitative study showing the structural split between AI safety and AI ethics. Using a bibliometric and co-authorship network analysis of 6,442 papers from twelve major ML and NLP conferences (2020-2025), we find that over 80% of collaborations occur within either the safety or ethics communities, and cross-field connectivity is highly concentrated: roughly 5% of papers account for more than 85% of bridging links. Removing a small number of these brokers sharply increases segregation, indicating that cross-disciplinary exchange depends on a handful of actors rather than broad, distributed collaboration. These results show that the safety-ethics divide is not only conceptual but institutional, with implications for research agendas, policy, and venues. We argue that integrating technical safety work with normative ethics--via shared benchmarks, cross-institutional venues, and mixed-method methodologies--is essential for building AI systems that are both robust and just.
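The broker-removal result is easy to reproduce in miniature: build two dense communities joined by a handful of high-betweenness nodes, delete the top brokers, and the cross-community links collapse. A toy networkx sketch, in which the synthetic graph stands in for the real 6,442-paper co-authorship network:

```python
# Miniature version of the broker-removal analysis on an invented graph:
# two dense communities joined by a few high-betweenness "broker" nodes.
import networkx as nx

G = nx.Graph()
G.add_edges_from(nx.complete_graph(range(0, 20)).edges)    # "safety" cluster
G.add_edges_from(nx.complete_graph(range(20, 40)).edges)   # "ethics" cluster
G.add_edges_from([(3, 25), (3, 31), (7, 25)])              # a handful of brokers


def cross_edges(g: nx.Graph) -> int:
    """Count edges that bridge the two communities."""
    return sum(1 for u, v in g.edges if (u < 20) != (v < 20))


bc = nx.betweenness_centrality(G)
brokers = sorted(bc, key=bc.get, reverse=True)[:2]
print("cross-community links before:", cross_edges(G))     # 3
G.remove_nodes_from(brokers)
print("after removing top-2 brokers:", cross_edges(G))     # collapses toward 0
```

The paper's finding is the real-network analogue: roughly 5% of papers carry over 85% of the bridging links, so removing a few brokers sharply segregates the two communities.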
AI Summary - The dataset consists of 49,725 papers from various conferences related to artificial intelligence (AI) safety and ethics. [3]
- Abstract enrichment coverage: The percentage of papers with abstracts. [3]
- Keywords were generated by analyzing foundational surveys and texts in each field, with a hierarchical strategy spanning technical, theoretical, and applied domains. [2]
- The abstract enrichment coverage is 97.7%, indicating that most papers have abstracts. [1]