Hi!

Your personalized paper recommendations for 17–21 November 2025.
AI Agents
Stanford University
Abstract
There is growing interest in using AI agents for scientific research, yet fundamental questions remain about their capabilities as scientists and reviewers. To explore these questions, we organized Agents4Science, the first conference in which AI agents serve as both primary authors and reviewers, with humans as co-authors and co-reviewers. Here, we discuss the key learnings from the conference and their implications for human-AI collaboration in science.
AI Summary
  • Despite the emergence of specialized research agents, the majority of AI-led scientific work still relies on general-purpose commercial LLMs (e.g., GPT, Gemini, Claude), indicating a need for more domain-specific AI tools. [3]
  • AI co-scientists: AI agents participating in a broader range of scientific activities beyond tool application, including hypothesis generation, experimental design, and paper writing. [3]
  • Agents4Science: The first scientific conference organized to empirically study AI agents serving as both primary authors and reviewers, with humans as co-authors and co-reviewers. [3]
  • AI involvement tiers (Category A, B, C, D): A four-tier system for authors to disclose the extent of AI contribution across different stages of the research process (A: ≥95% human, B: 50–95% human, C: 50–95% AI, D: ≥95% AI); a minimal encoding sketch follows this list. [3]
  • LLM reviewers can effectively identify specific technical errors, such as numerical discrepancies and inconsistencies between abstract and content, making them valuable for pre-submission checks or assisting human reviewers. [2]
  • A significant challenge in AI-authored research is reference hallucination, with approximately 44% of submissions containing at least one fabricated or loosely related reference, necessitating robust human oversight. [2]
  • Implementing transparent disclosure mechanisms, such as detailed checklists for human-AI collaboration across all research stages, is crucial for establishing best practices and ethical norms in AI-augmented science. [2]
  • LLM reviewers can exhibit sycophancy and varying alignment with human scores, highlighting the need for careful calibration and human expert oversight in the peer-review process. [2]
  • To enhance the quality of AI-authored scientific papers, human researchers should prioritize involvement in early stages like hypothesis development and experimental design, as this correlates with higher acceptance rates. [1]
  • AI agents: Autonomous systems built on top of large language models (LLMs) that can use existing tools, access external databases, and search through scientific literature. [1]
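As a rough illustration of how such a tiered disclosure might be recorded in practice, the sketch below encodes the four categories and a per-stage checklist in Python. The stage names and the example assignment are assumptions for illustration, not the conference's official checklist; only the A–D thresholds come from the summary above.

```python
from enum import Enum

class AIInvolvement(Enum):
    """Four-tier disclosure of AI contribution (thresholds from the Agents4Science summary)."""
    A = ">=95% human"
    B = "50-95% human"
    C = "50-95% AI"
    D = ">=95% AI"

# Hypothetical per-stage disclosure for one submission; the stage names are
# illustrative, not taken from the paper.
disclosure = {
    "hypothesis_generation": AIInvolvement.C,
    "experimental_design":   AIInvolvement.B,
    "data_analysis":         AIInvolvement.D,
    "paper_writing":         AIInvolvement.D,
}

for stage, tier in disclosure.items():
    print(f"{stage}: Category {tier.name} ({tier.value})")
```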
Meta
Abstract
AI research agents offer the promise to accelerate scientific progress by automating the design, implementation, and training of machine learning models. However, the field is still in its infancy, and the key factors driving the success or failure of agent trajectories are not fully understood. We examine the role that ideation diversity plays in agent performance. First, we analyse agent trajectories on MLE-bench, a well-known benchmark to evaluate AI research agents, across different models and agent scaffolds. Our analysis reveals that different models and agent scaffolds yield varying degrees of ideation diversity, and that higher-performing agents tend to have increased ideation diversity. Further, we run a controlled experiment where we modify the degree of ideation diversity, demonstrating that higher ideation diversity results in stronger performance. Finally, we strengthen our results by examining additional evaluation metrics beyond the standard medal-based scoring of MLE-bench, showing that our findings still hold across other agent performance metrics.
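The abstract does not specify how ideation diversity is quantified, so the sketch below uses a simple proxy: the mean pairwise Jaccard distance between the token sets of the ideas an agent proposes in a trajectory. An embedding-based distance could be substituted without changing the overall logic.

```python
from itertools import combinations

def ideation_diversity(ideas: list[str]) -> float:
    """Mean pairwise Jaccard distance between the token sets of proposed ideas.

    Illustrative proxy only; the paper does not state its diversity metric.
    """
    if len(ideas) < 2:
        return 0.0
    token_sets = [set(idea.lower().split()) for idea in ideas]
    distances = [
        1.0 - len(a & b) / len(a | b)
        for a, b in combinations(token_sets, 2)
    ]
    return sum(distances) / len(distances)

# Ideas drafted by an agent early in a hypothetical MLE-bench trajectory.
trajectory_ideas = [
    "gradient boosted trees on engineered tabular features",
    "fine-tune a pretrained CNN with heavy augmentation",
    "gradient boosted trees with minor feature tweaks",
]
print(f"diversity = {ideation_diversity(trajectory_ideas):.3f}")
```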
AI and Society
University of Copenhagen
Abstract
In this paper, we argue that current AI research operates on a spectrum between two different underlying conceptions of intelligence: Intelligence Realism, which holds that intelligence represents a single, universal capacity measurable across all systems, and Intelligence Pluralism, which views intelligence as diverse, context-dependent capacities that cannot be reduced to a single universal measure. Through an analysis of current debates in AI research, we demonstrate how the conceptions remain largely implicit yet fundamentally shape how empirical evidence gets interpreted across a wide range of areas. These underlying views generate fundamentally different research approaches across three areas. Methodologically, they produce different approaches to model selection, benchmark design, and experimental validation. Interpretively, they lead to contradictory readings of the same empirical phenomena, from capability emergence to system limitations. Regarding AI risk, they generate categorically different assessments: realists view superintelligence as the primary risk and search for unified alignment solutions, while pluralists see diverse threats across different domains requiring context-specific solutions. We argue that making explicit these underlying assumptions can contribute to a clearer understanding of disagreements in AI research.
Huawei
Abstract
As a capability arising from computation, how does AI differ fundamentally from the capabilities delivered by rule-based software programs? The paper examines the behavior of artificial intelligence (AI) from an engineering point of view to clarify its nature and limits. It argues that the rationality underlying humanity's impulse to pursue, articulate, and adhere to rules deserves to be valued and preserved, and that identifying where rule-based practical rationality ends is the first step toward bringing that boundary into awareness and, ultimately, into action. Although the rules governing AI behavior remain hidden or only weakly observable, the paper proposes a methodology that makes such discrimination possible and practical, distinguishing the behavior of AI models according to three types of decisions. This is a prerequisite for humans to exercise responsibility, with alternative possibilities in view, when deciding how and when to use AI, and a solid starting point for ensuring AI system soundness for the well-being of humans, society, and the environment.
Research Automation with AI
Princeton University
Abstract
Scientific discovery can be framed as a thermodynamic process in which an agent invests physical work to acquire information about an environment under a finite work budget. Using established results about the thermodynamics of computing, we derive finite-budget bounds on information gain over rounds of sequential Bayesian learning. We also propose a metric of information-work efficiency, and compare unpartitioned and federated learning strategies under matched work budgets. The presented results offer guidance in the form of bounds and an information efficiency metric for efforts in scientific automation at large.
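As a rough illustration of the kind of result the abstract alludes to (the paper's exact finite-budget bound may differ), a Landauer-style limit caps the total Bayesian information gain achievable with a work budget W_total at environment temperature T_env, and an information-work efficiency can be read off as the fraction of that cap actually achieved:

```latex
% Illustrative Landauer-style bound; \Delta I_t is the Bayesian information gain in round t.
\sum_{t=1}^{N} \Delta I_t \;\le\; \frac{W_{\mathrm{total}}}{k_B\, T_{\mathrm{env}} \ln 2}
\quad \text{[bits]},
\qquad
\eta \;=\; \frac{\sum_{t=1}^{N} \Delta I_t}{W_{\mathrm{total}} / (k_B\, T_{\mathrm{env}} \ln 2)} \in [0, 1].
```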
AGI: Artificial General Intelligence
ulamai
Abstract
We propose a Kardashev-inspired yet operational Autonomous AI (AAI) Scale that measures the progression from fixed robotic process automation (AAI-0) to full artificial general intelligence (AAI-4) and beyond. Unlike narrative ladders, our scale is multi-axis and testable. We define ten capability axes (Autonomy, Generality, Planning, Memory/Persistence, Tool Economy, Self-Revision, Sociality/Coordination, Embodiment, World-Model Fidelity, Economic Throughput) aggregated by a composite AAI-Index (a weighted geometric mean). We introduce a measurable Self-Improvement Coefficient κ (capability growth per unit of agent-initiated resources) and two closure properties (maintenance and expansion) that convert "self-improving AI" into falsifiable criteria. We specify OWA-Bench, an open-world agency benchmark suite that evaluates long-horizon, tool-using, persistent agents. We define level gates for AAI-0 through AAI-4 using thresholds on the axes, κ, and closure proofs. Synthetic experiments illustrate how present-day systems map onto the scale and how the delegability frontier (quality vs. autonomy) advances with self-improvement. We also prove a theorem that an AAI-3 agent becomes AAI-5 over time under sufficient conditions, formalizing the intuition that "baby AGI" becomes superintelligence.
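A minimal sketch of how the composite index and κ could be computed; the axis scores, uniform weights, and resource figures below are placeholders for illustration, not values from the paper.

```python
import math

# The ten capability axes named in the abstract; example scores in (0, 1].
axes = {
    "Autonomy": 0.6, "Generality": 0.4, "Planning": 0.5,
    "Memory/Persistence": 0.3, "Tool Economy": 0.7, "Self-Revision": 0.2,
    "Sociality/Coordination": 0.4, "Embodiment": 0.1,
    "World-Model Fidelity": 0.5, "Economic Throughput": 0.3,
}
weights = {axis: 1.0 / len(axes) for axis in axes}  # uniform weights as a placeholder

# Composite AAI-Index as a weighted geometric mean of the axis scores.
aai_index = math.exp(sum(w * math.log(axes[a]) for a, w in weights.items()))
print(f"AAI-Index = {aai_index:.3f}")

# Self-improvement coefficient kappa: capability growth per unit of
# agent-initiated resources (illustrative numbers).
delta_capability = 0.05          # growth of the composite index over an interval
agent_initiated_resources = 2.0  # e.g., normalized compute the agent allocated itself
kappa = delta_capability / agent_initiated_resources
print(f"kappa = {kappa:.3f}")
```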
The University of Edinburgh
Abstract
We introduce Terra Nova, a new comprehensive challenge environment (CCE) for reinforcement learning (RL) research inspired by Civilization V. A CCE is a single environment in which multiple canonical RL challenges (e.g., partial observability, credit assignment, representation learning, enormous action spaces) arise simultaneously. Mastery therefore demands integrated, long-horizon understanding across many interacting variables. We emphasize that this definition excludes challenges that only aggregate unrelated tasks in independent, parallel streams (e.g., learning to play all Atari games at once). These aggregated multitask benchmarks primarily assess whether an agent can catalog and switch among unrelated policies rather than test an agent's ability to perform deep reasoning across many interacting challenges.
Deep Learning
HFUT
Abstract
Weather forecasting is fundamentally challenged by the chaotic nature of the atmosphere, necessitating probabilistic approaches to quantify uncertainty. While traditional ensemble prediction systems (EPS) address this through computationally intensive simulations, recent advances in Bayesian Deep Learning (BDL) offer a promising but often disconnected alternative. We bridge these paradigms through a unified hybrid Bayesian Deep Learning framework for ensemble weather forecasting that explicitly decomposes predictive uncertainty into epistemic and aleatoric components, learned via variational inference and a physics-informed stochastic perturbation scheme modeling flow-dependent atmospheric dynamics, respectively. We further establish a unified theoretical framework that rigorously connects BDL and EPS, providing formal theorems that decompose total predictive uncertainty into epistemic and aleatoric components under the hybrid BDL framework. We validate our framework on the large-scale 40-year ERA5 reanalysis dataset (1979-2019) with 0.25° spatial resolution. Experimental results show that our method not only improves forecast accuracy and yields better-calibrated uncertainty quantification but also achieves superior computational efficiency compared to state-of-the-art probabilistic diffusion models. We commit to making our code open-source upon acceptance of this paper.
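A minimal numpy sketch of the uncertainty decomposition such theorems formalize, assuming each posterior sample of the network weights yields a predictive mean and variance per grid point; all numbers below are synthetic.

```python
import numpy as np

# Shapes: (n_posterior_samples, n_grid_points). Each posterior sample of the
# network weights produces a predictive mean and a predictive variance per
# grid point (the latter modeling the aleatoric, flow-dependent part).
rng = np.random.default_rng(0)
means = rng.normal(loc=280.0, scale=1.5, size=(16, 1000))   # e.g., 2m temperature [K]
variances = rng.uniform(0.5, 2.0, size=(16, 1000))

# Standard decomposition (law of total variance):
epistemic = means.var(axis=0)        # spread of means across posterior samples
aleatoric = variances.mean(axis=0)   # average predictive variance
total = epistemic + aleatoric

print(f"mean epistemic: {epistemic.mean():.3f}  mean aleatoric: {aleatoric.mean():.3f}")
print(f"mean total predictive variance: {total.mean():.3f}")
```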
Tel Aviv University
Abstract
We analyze an ensemble-based approach for uncertainty quantification (UQ) in atomistic neural networks. This method generates an epistemic uncertainty signal without requiring changes to the underlying multi-headed regression neural network architecture, making it suitable for sealed or black-box models. We apply this method to molecular systems, specifically sodium (Na) and aluminum (Al), under various temperature conditions. By scaling the uncertainty signal, we account for heteroscedasticity in the data. We demonstrate the robustness of the scaled UQ signal for detecting out-of-distribution (OOD) behavior in several scenarios. This UQ signal also correlates with model convergence during training, providing an additional tool for optimizing the training process.
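A minimal sketch of an ensemble-disagreement signal rescaled on a calibration set, in the spirit of the described approach; the linear rescaling and the synthetic numbers are assumptions for illustration, not the paper's exact procedure.

```python
import numpy as np

def scaled_uq_signal(ensemble_preds: np.ndarray, calib_preds: np.ndarray,
                     calib_errors: np.ndarray) -> np.ndarray:
    """Ensemble disagreement rescaled against a held-out calibration set.

    ensemble_preds: (n_models, n_samples) predictions on new configurations.
    calib_preds:    (n_models, n_calib) predictions on calibration data.
    calib_errors:   (n_calib,) absolute errors of the ensemble mean on that data.
    The linear rescaling is an illustrative way to account for heteroscedasticity.
    """
    raw = ensemble_preds.std(axis=0)
    calib_raw = calib_preds.std(axis=0)
    scale = calib_errors.mean() / max(calib_raw.mean(), 1e-12)
    return scale * raw

rng = np.random.default_rng(1)
in_dist = rng.normal(0.0, 0.05, size=(5, 200)) - 1.7   # tight ensemble agreement
ood = rng.normal(0.0, 0.40, size=(5, 200)) - 1.7       # large disagreement (OOD-like)
calib = rng.normal(0.0, 0.05, size=(5, 300)) - 1.7
calib_err = np.abs(rng.normal(0.0, 0.06, size=300))

sig_in = scaled_uq_signal(in_dist, calib, calib_err)
sig_ood = scaled_uq_signal(ood, calib, calib_err)
print(f"median UQ signal  in-dist: {np.median(sig_in):.3f}   OOD: {np.median(sig_ood):.3f}")
```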