Hi j34nc4rl0+agi,

Here are your personalized paper recommendations, sorted by relevance.
AGI Research
INAF-Osservatorio Astrofisico di Arcetri
Abstract
Cosmological models of hierarchical structure formation predict the existence of a widespread population of dual accreting supermassive black holes (SMBHs) on kpc-scale separations, corresponding to projected distances < 0".8 at redshifts higher than 0.5. However, close companions to known active galactic nuclei (AGN) or quasars (QSOs) can also be multiple images of the object itself, strongly lensed by a foreground galaxy, or foreground stars in chance superposition. Thanks to its large sky coverage, sensitivity, and high spatial resolution, Euclid offers a unique opportunity to obtain a large, homogeneous sample of dual/lensed AGN candidates with sub-arcsec projected separations. Here we present a machine learning approach, in particular a Convolutional Neural Network (CNN), to identify close companions to known QSOs down to separations of $\sim\,$0".15, comparable to the Euclid VIS point spread function (PSF). We studied the effectiveness of the CNN in identifying dual AGN and demonstrated that it outperforms traditional techniques. Applying our CNN to a sample of $\sim\,$6000 QSOs from the Q1 Euclid data release, we find a fraction of about 0.25% dual AGN candidates with separation $\sim\,$0".4 (corresponding to $\sim$3 kpc at z=1). Estimating the foreground contamination from stellar objects, we find that most of the pair candidates with separation greater than 0".5 are likely contaminants, while below this limit, contamination is expected to be less than 20%. For objects at larger separation (>0".5, i.e. 4 kpc at z=1), we performed PSF subtraction and used colour-colour diagrams to constrain their nature. We present a first set of dual/lensed AGN candidates detected in the Q1 Euclid data, providing a starting point for the analysis of future data releases.
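A minimal sketch of the kind of "traditional" morphology statistic such a CNN is compared against: a second-moment elongation measure that flags a marginally resolved pair as elongated along its separation axis. The pixel grid, PSF width, separation, and flux ratio below are illustrative assumptions, not Euclid VIS values.

```python
import numpy as np

def psf(shape, center, sigma=1.5):
    """Round Gaussian stand-in for the point spread function."""
    y, x = np.indices(shape)
    cy, cx = center
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

def elongation(img):
    """Ratio of flux-weighted second moments along x and y."""
    y, x = np.indices(img.shape)
    w = img / img.sum()
    mx, my = (w * x).sum(), (w * y).sum()
    vx = (w * (x - mx) ** 2).sum()
    vy = (w * (y - my) ** 2).sum()
    return vx / vy

shape = (32, 32)
single = psf(shape, (16, 16))                                 # isolated QSO
dual = psf(shape, (16, 14.5)) + 0.7 * psf(shape, (16, 17.5))  # blended pair

# A blended pair is elongated along the separation axis, so its x/y moment
# ratio exceeds the single source's ratio of ~1.
print(elongation(dual) > elongation(single))  # True
```

A CNN trained on labelled cutouts can pick up subtler, PSF-aware features than this single hand-crafted statistic, which is the sense in which the abstract reports it outperforming traditional techniques.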
Changes in the Labor Market
Abstract
Measured aggregate productivity and the income share of top earners are strongly and positively correlated in the Canadian data. The productivity slowdown since the early 2000s was accompanied by a flattening income share of top earners. Motivated by these facts, we study the role of firms' top-paid workers and worker matching in accounting for the slowdown of measured total factor productivity. We first estimate total factor productivity for Canadian firms over the period 2003-2015, taking into account the assortative matching between top workers and non-top workers. Measured total factor productivity consists of the Hicks-neutral technology and the quality of top workers. Our estimation suggests that measured aggregate total factor productivity declined from 2003 to 2015, in line with that estimated by the statistical agency. The decline of measured productivity is entirely accounted for by the declining quality of top workers, while the Hicks-neutral technology improved. Both the within-firm changes and the cross-firm reallocation of top-worker quality are important in contributing to the decline of overall top-worker quality. We also discuss possible causes of declines in the quality of top workers, e.g., the emigration of top talents as studied in recent literature.
Department of Mathematics, Florida State University
Abstract
The gig economy has grown significantly in recent years, driven by the emergence of various facilitating platforms. Triggering substantial shifts to labour markets across the world, the COVID-19 pandemic has accelerated this growth. To understand the crucial role of such an epidemic on the dynamics of labour markets of both formal and gig economies, we develop and investigate a model that couples disease transmission and a search and match framework of unemployment. We find that epidemics increase gig economy employment at the expense of formal economy employment, and can increase total long-term unemployment. In the short run, large sharp fluctuations in labour market tightness and unemployment can occur, while in the long run, employment is reduced under an endemic disease equilibrium. We analyze public policies that increase unemployment benefits or provide benefits to gig workers to mitigate these effects, and evaluate their trade-offs in mitigating disease burden and labour market disruptions.
AGI Development
Technion—Israel Institute of Technology
Abstract
The rise of Generative AI (GenAI) is reshaping how workers contribute to shared projects. While workers can use GenAI to boost productivity or reduce effort, managers may use it to replace some workers entirely. We present a theoretical framework to analyze how GenAI affects collaboration in such settings. In our model, the manager selects a team to work on a shared task, with GenAI substituting for unselected workers. Each worker selects how much effort to exert, and incurs a cost that increases with the level of effort. We show that GenAI can lead workers to exert no effort, even if GenAI is almost ineffective. We further show that the manager's optimization problem is NP-complete, and provide an efficient algorithm for the special class of (almost-) linear instances. Our analysis shows that even workers with low individual value may play a critical role in sustaining overall output, and excluding such workers can trigger a cascade. Finally, we conduct extensive simulations to illustrate our theoretical findings.
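A toy illustration of the cascade effect the abstract describes, using a weakest-link game rather than the paper's actual model: team output is the minimum effort exerted, so no worker gains by exerting more than the least productive teammate. Replacing one worker with a nearly ineffective GenAI stand-in then drags every remaining worker's best response down with it.

```python
def cascade(worker_efforts, genai_effort, rounds=10):
    """Iterated best responses in a weakest-link team game."""
    e = list(worker_efforts)
    for _ in range(rounds):
        for i in range(len(e)):
            others = e[:i] + e[i + 1:] + [genai_effort]
            e[i] = min(others)  # matching the team minimum is the best response
    return e

print(cascade([1.0, 1.0, 1.0], genai_effort=1.0))   # [1.0, 1.0, 1.0]: full effort persists
print(cascade([1.0, 1.0, 1.0], genai_effort=0.05))  # [0.05, 0.05, 0.05]: collapse
```

This mirrors the abstract's point that even an almost-ineffective GenAI substitute can push workers toward exerting no effort, though the paper's model (convex effort costs, a manager's NP-complete selection problem) is richer than this sketch.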

We did not find much content matching your interests, so we've included some additional popular topics. Also be aware that if a topic is not present on arXiv, we won't be able to recommend it.

AI Agents
AWorld Team
Abstract
The learning from practice paradigm is crucial for developing capable Agentic AI systems, yet it is severely hampered by inefficient experience generation, a bottleneck especially pronounced in complex benchmarks like GAIA. To address this, we introduce AWorld, an open-source system engineered for large-scale agent-environment interaction. By distributing tasks across a cluster, AWorld accelerates experience collection by 14.6x compared to standard single-node, sequential execution. This critical speedup makes extensive reinforcement learning practical and scalable. Leveraging this capability, we trained a Qwen3-32B-based agent that significantly outperforms its base model, increasing its overall GAIA accuracy from 21.59% to 32.23%. On the benchmark's most challenging levels, our agent achieves a score of 16.33%, surpassing the performance of leading proprietary models. Our open-source system and resulting agent provide a practical blueprint for a complete agentic AI training pipeline, from efficient interaction to demonstrable model improvement.
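An illustrative sketch of the parallelism idea behind the reported 14.6x speedup, not AWorld's actual cluster scheduler: here a thread pool on one machine stands in for the cluster, and a sleep stands in for a slow agent-environment episode.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def rollout(task_id):
    """Placeholder for one agent-environment episode (e.g., a GAIA task)."""
    time.sleep(0.05)
    return {"task": task_id, "reward": 1.0}

tasks = list(range(16))

t0 = time.perf_counter()
sequential = [rollout(t) for t in tasks]          # one episode at a time
t_seq = time.perf_counter() - t0

t0 = time.perf_counter()
with ThreadPoolExecutor(max_workers=16) as pool:  # all episodes concurrently
    parallel = list(pool.map(rollout, tasks))
t_par = time.perf_counter() - t0

print(f"parallel collection is ~{t_seq / t_par:.0f}x faster here")
```

Because real episodes are dominated by environment latency rather than CPU, concurrent rollouts deliver near-linear speedups, which is what makes the reinforcement learning loop described above practical at scale.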
Department of Machine Learning, MBZUAI, Abu Dhabi, UAE
Abstract
Imagine decision-makers uploading data and, within minutes, receiving clear, actionable insights delivered straight to their fingertips. That is the promise of the AI Data Scientist, an autonomous Agent powered by large language models (LLMs) that closes the gap between evidence and action. Rather than simply writing code or responding to prompts, it reasons through questions, tests ideas, and delivers end-to-end insights at a pace far beyond traditional workflows. Guided by the scientific tenet of the hypothesis, this Agent uncovers explanatory patterns in data, evaluates their statistical significance, and uses them to inform predictive modeling. It then translates these results into recommendations that are both rigorous and accessible. At the core of the AI Data Scientist is a team of specialized LLM Subagents, each responsible for a distinct task such as data cleaning, statistical testing, validation, and plain-language communication. These Subagents write their own code, reason about causality, and identify when additional data is needed to support sound conclusions. Together, they achieve in minutes what might otherwise take days or weeks, enabling a new kind of interaction that makes deep data science both accessible and actionable.
AI and Society
Nanyang Technological University
Abstract
Cognitive Science has profoundly shaped disciplines such as Artificial Intelligence (AI), Philosophy, Psychology, Neuroscience, Linguistics, and Culture. Many breakthroughs in AI trace their roots to cognitive theories, while AI itself has become an indispensable tool for advancing cognitive research. This reciprocal relationship motivates a comprehensive review of the intersections between AI and Cognitive Science. By synthesizing key contributions from both perspectives, we observe that AI progress has largely emphasized practical task performance, whereas its cognitive foundations remain conceptually fragmented. We argue that the future of AI within Cognitive Science lies not only in improving performance but also in constructing systems that deepen our understanding of the human mind. Promising directions include aligning AI behaviors with cognitive frameworks, situating AI in embodiment and culture, developing personalized cognitive models, and rethinking AI ethics through cognitive co-evaluation.
Universitat Oberta de Catalunya
Abstract
Artificial General Intelligence (AGI) is promoted by technology leaders and investors as a system capable of performing all human intellectual tasks, and potentially surpassing them. Despite its vague definition and uncertain feasibility, AGI has attracted major investment and political attention, fuelled by promises of civilisational transformation. This paper conceptualises AGI as sustained by deep hype: a long-term, overpromissory dynamic articulated through sociotechnical fictions that render not-yet-existing technologies desirable and urgent. The analysis highlights how uncertainty, fiction, and venture capital speculation interact to advance a cyberlibertarian and longtermist programme that sidelines democratic oversight and reframes regulation as obsolete, with critical implications for the governance of technological futures.
Research Automation with AI
Jonas Henkel
Abstract
The rapid development of artificial intelligence (AI), marked by breakthroughs like 'AlphaEvolve' and 'Gemini Deep Think', is beginning to offer powerful new tools that have the potential to significantly alter the research practice in many areas of mathematics. This paper explores the current landscape of publicly accessible large language models (LLMs) in a mathematical research context, based on developments up to August 2, 2025. Our analysis of recent benchmarks, such as MathArena and the Open Proof Corpus (Balunović et al., 2025; Dekoninck et al., 2025), reveals a complex duality: while state-of-the-art models demonstrate strong abilities in solving problems and evaluating proofs, they also exhibit systematic flaws, including a lack of self-critique and a model-dependent discrepancy between final-answer accuracy and full-proof validity. Based on these findings, we propose a durable framework for integrating AI into the research workflow, centered on the principle of the augmented mathematician. In this model, the AI functions as a copilot under the critical guidance of the human researcher, an approach distilled into five guiding principles for effective and responsible use. We then systematically explore seven fundamental ways AI can be applied across the research lifecycle, from creativity and ideation to the final writing process, demonstrating how these principles translate into concrete practice. We conclude that the primary role of AI is currently augmentation rather than automation. This requires a new skill set focused on strategic prompting, critical verification, and methodological rigor in order to effectively use these powerful tools.
CMU
Abstract
This paper revisits Ramon Llull's Ars combinatoria - a medieval framework for generating knowledge through symbolic recombination - as a conceptual foundation for building a modern Llull's thinking machine for research ideation. Our approach defines three compositional axes: Theme (e.g., efficiency, adaptivity), Domain (e.g., question answering, machine translation), and Method (e.g., adversarial training, linear attention). These elements represent high-level abstractions common in scientific work - motivations, problem settings, and technical approaches - and serve as building blocks for LLM-driven exploration. We mine elements from human experts or conference papers and show that prompting LLMs with curated combinations produces research ideas that are diverse, relevant, and grounded in current literature. This modern thinking machine offers a lightweight, interpretable tool for augmenting scientific creativity and suggests a path toward collaborative ideation between humans and AI.
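A toy rendering of the three compositional axes named above. The element lists are the examples given in the abstract; a real run would mine many more elements from expert input or conference papers, and the prompt template here is an illustrative stand-in for whatever the authors actually feed their LLMs.

```python
from itertools import product

themes = ["efficiency", "adaptivity"]
domains = ["question answering", "machine translation"]
methods = ["adversarial training", "linear attention"]

# Cross the three axes to enumerate every Theme x Domain x Method triple.
combinations = list(product(themes, domains, methods))

def to_prompt(theme, domain, method):
    """Render one triple as an ideation prompt stub for an LLM."""
    return (f"Propose a research idea that improves {theme} "
            f"in {domain} using {method}.")

print(len(combinations))  # 2 * 2 * 2 = 8 candidate triples
print(to_prompt(*combinations[0]))
```

The combinatorial blow-up is the point: even short curated lists yield a large, systematic space of grounded idea prompts, which is what makes the "thinking machine" lightweight and interpretable.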
Deep Learning
Institute of Informatics, University of Warsaw
Abstract
Deep learning has transformed computer vision (CV), achieving outstanding performance in classification, segmentation, and related tasks. Such AI-based CV systems are becoming prevalent, with applications spanning from medical imaging to surveillance. State of the art models such as convolutional neural networks (CNNs) and vision transformers (ViTs) are often regarded as ``black boxes,'' offering limited transparency into their decision-making processes. Despite a recent advancement in explainable AI (XAI), explainability remains underutilized in practical CV deployments. A primary obstacle is the absence of integrated software solutions that connect XAI techniques with robust knowledge management and monitoring frameworks. To close this gap, we have developed Obz AI, a comprehensive software ecosystem designed to facilitate state-of-the-art explainability and observability for vision AI systems. Obz AI provides a seamless integration pipeline, from a Python client library to a full-stack analytics dashboard. With Obz AI, a machine learning engineer can easily incorporate advanced XAI methodologies, extract and analyze features for outlier detection, and continuously monitor AI models in real time. By making the decision-making mechanisms of deep models interpretable, Obz AI promotes observability and responsible deployment of computer vision systems.
Monash Centre for Consciousness and Contemplative Studies
Abstract
I consider motivation and value-alignment in AI systems from the perspective of (constrained) entropy maximization. Though the structures encoding knowledge in any physical system can be understood as energetic constraints, only living agents harness entropy in the endogenous generation of actions. I argue that this exploitation of "mortal" or thermodynamic computation, in which cognitive and physical dynamics are inseparable, is of the essence of desire, motivation, and value, while the lack of true endogenous motivation in simulated "agents" predicts pathologies like reward hacking.

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Job Displacement
  • AGI Applications
  • AGI
You can edit or add more interests any time.

Unsubscribe from these updates