🎯 Top Personalized Recommendations
TU Berlin
Why we think this paper is great for you:
This paper directly explores the critical challenge of AI's growing energy footprint and the need for sustainable practices in its development. It will provide valuable insights into the long-term energy implications of advanced AI.
Abstract
AI research is increasingly moving toward complex problem solving, where models are optimized not only for pattern recognition but for multi-step reasoning. Historically, computing's global energy footprint has been stabilized by sustained efficiency gains and natural saturation thresholds in demand. But as efficiency improvements are approaching physical limits, emerging reasoning AI lacks comparable saturation points: performance is no longer limited by the amount of available training data but continues to scale with exponential compute investments in both training and inference. This paper argues that efficiency alone will not lead to sustainable reasoning AI and discusses research and policy directions to embed explicit limits into the optimization and governance of such systems.
AI Summary
- Efficiency gains alone are insufficient for sustainable reasoning AI, as these models exhibit virtually unbounded computational demand, leading to a pervasive rebound effect where improvements are reinvested rather than reducing overall energy use. [3]
- Reasoning AI, unlike prior computing paradigms, lacks natural saturation thresholds in performance, as models can continuously improve through additional compute in both training (via reinforcement learning) and inference (via multi-step reasoning like Tree-of-Thought). [3]
- Hardware efficiency improvements are approaching fundamental physical limits, and software optimizations are increasingly absorbed by rebound effects, diminishing their capacity to offset exponential growth in computational demand. [3]
- Prioritizing computationally intensive reasoning AI for applications that demonstrably advance human wellbeing and planetary stewardship (e.g., climate modeling, cancer research) is crucial, rather than allowing market forces alone to dictate deployment. [3]
- The current rapid, gigawatt-scale expansion of AI infrastructure, often relying on fossil fuels, poses significant risks to grid stability and global emissions targets, highlighting a critical mismatch between compute buildout and sustainable energy infrastructure investment. [2]
- To achieve sustainable reasoning AI, explicit limits must be embedded into optimization frameworks, such as incorporating environmental externalities into RL reward functions or enforcing compute governance APIs with verifiable resource budgets (see the sketch after this list). [2]
- Policy interventions, including caps on resource use (e.g., compute budgets for data centers) and Pigouvian taxes on resource-intensive AI, are necessary to internalize environmental costs and mitigate rebound effects. [2]
- Reasoning AI: Models optimized for multi-step reasoning and complex problem-solving, moving beyond simple pattern recognition. [1]
- Chain-of-Thought (CoT) prompting: A technique that conditions models to generate intermediate reasoning steps before producing a final answer, improving performance on logical tasks. [1]
- Tree-of-Thought (ToT) reasoning: A paradigm that models reasoning as a guided search, expanding, evaluating, and pruning multiple partial reasoning paths during inference. [1]
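The summary point above on embedding explicit limits into optimization can be made concrete with a small sketch. Below is a minimal Python illustration, ours rather than the paper's, of a reward function that internalizes an energy externality, together with a toy budget check in the spirit of a compute governance API; all names, the linear penalty form, and the constants are illustrative assumptions.

    # Minimal sketch (not from the paper): an RL reward that internalizes
    # energy cost. The linear penalty and all constants are assumptions.
    def energy_aware_reward(task_reward: float,
                            energy_kwh: float,
                            carbon_intensity: float = 0.4,  # kg CO2/kWh (assumed)
                            penalty_per_kg: float = 2.0) -> float:
        """Task performance minus a Pigouvian-style energy penalty."""
        externality = energy_kwh * carbon_intensity * penalty_per_kg
        return task_reward - externality

    # A verifiable resource budget, in the spirit of a compute governance API:
    def within_budget(used_flops: float, budget_flops: float) -> bool:
        return used_flops <= budget_flops

    # A rollout that scored 1.0 but consumed 0.5 kWh nets 1.0 - 0.4 = 0.6:
    print(energy_aware_reward(1.0, energy_kwh=0.5))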
Rome Tor Vergata
Why we think this paper is great for you:
You will find this highly relevant as it proposes innovative solutions for making AI training more environmentally friendly by leveraging renewable energy sources. It offers practical approaches to greening AI infrastructure.
Abstract
The accelerating expansion of AI workloads is colliding with an energy landscape increasingly dominated by intermittent renewable generation. While vast quantities of zero-carbon energy are routinely curtailed, today's centralized datacenter architectures remain poorly matched to this reality in both energy proportionality and geographic flexibility. This work envisions a shift toward a distributed fabric of renewable-powered micro-datacenters that dynamically follow the availability of surplus green energy through live workload migration.
At the core of this vision lies a formal feasibility-domain model that delineates when migratory AI computation is practically achievable. By explicitly linking checkpoint size, wide-area bandwidth, and renewable-window duration, the model reveals that migration is almost always energetically justified, and that time, not energy, is the dominant constraint shaping feasibility. This insight enables the design of a feasibility-aware orchestration framework that transforms migration from a best-effort heuristic into a principled control mechanism. Trace-driven evaluation shows that such orchestration can simultaneously reduce non-renewable energy use and improve performance stability, overcoming the tradeoffs of purely energy-driven strategies.
Beyond the immediate feasibility analysis, the extended version explores the architectural horizon of renewable-aware AI infrastructures. It examines the role of emerging ultra-efficient GPU-enabled edge platforms, anticipates integration with grid-level control and demand-response ecosystems, and outlines paths toward supporting partially migratable and distributed workloads. The work positions feasibility-aware migration as a foundational building block for a future computing paradigm in which AI execution becomes fluid, geographically adaptive, and aligned with renewable energy availability.
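The feasibility-domain argument lends itself to a back-of-the-envelope check. The sketch below is our illustrative reading, not the paper's exact model: migration is feasible when shipping the checkpoint, plus a fixed switchover overhead, fits inside the renewable window, which is why time rather than energy becomes the binding constraint.

    # Illustrative reading of the feasibility domain (not the paper's exact
    # model): a migration is practical when the checkpoint transfer, plus a
    # fixed switchover overhead, fits inside the renewable-energy window.
    def migration_feasible(checkpoint_gb: float,
                           bandwidth_gbps: float,
                           window_s: float,
                           overhead_s: float = 30.0) -> bool:  # overhead assumed
        transfer_s = checkpoint_gb * 8.0 / bandwidth_gbps  # GB -> gigabits
        return transfer_s + overhead_s <= window_s

    # A 100 GB checkpoint over a 10 Gbps WAN into a 15-minute surplus window:
    print(migration_feasible(100.0, 10.0, window_s=15 * 60))  # True (~80 s transfer)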
FIU
Why we think this paper is great for you:
This paper is a strong match for your interest in ensuring AI systems are equitable and do not perpetuate discrimination. It delves into crucial methods for mitigating bias in AI decision-making.
Abstract
Fairness in artificial intelligence (AI) has become a growing concern due to discriminatory outcomes in AI-based decision-making systems. While various methods have been proposed to mitigate bias, most rely on complete demographic information, an assumption often impractical due to legal constraints and the risk of reinforcing discrimination. This survey examines fairness in AI when demographics are incomplete, addressing the gap between traditional approaches and real-world challenges. We introduce a novel taxonomy of fairness notions in this setting, clarifying their relationships and distinctions. Additionally, we summarize existing techniques that promote fairness beyond complete demographics and highlight open research questions to encourage further progress in the field.
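A one-line computation shows why "complete demographic information" is such a strong assumption: standard group-fairness metrics simply cannot be evaluated on records whose group label is missing. The Python sketch below, with invented data, computes a demographic parity gap and has to discard every unlabeled record.

    # Why most fairness metrics presuppose demographics: the demographic
    # parity gap can only be computed on records with a known group label.
    # Data and names are invented for illustration.
    def demographic_parity_gap(preds, groups):
        """|P(pred=1 | group=a) - P(pred=1 | group=b)| over labeled records."""
        rates = {}
        for p, g in zip(preds, groups):
            if g is None:
                continue  # unlabeled records are silently dropped
            n, k = rates.get(g, (0, 0))
            rates[g] = (n + 1, k + p)
        r_a, r_b = (k / n for n, k in rates.values())  # assumes two known groups
        return abs(r_a - r_b)

    preds  = [1, 0, 1, 1, 0, 1]
    groups = ["a", "a", "b", None, "b", None]  # a third of the labels missing
    print(demographic_parity_gap(preds, groups))  # uses only 4 of 6 records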
University of Notre Dame
Why we think this paper is great for you:
This technical paper offers a concrete approach to embedding fairness directly into deep learning models, which aligns well with your focus on practical solutions for social equity in AI. It provides a method to achieve fair outcomes in AI applications.
Abstract
As deep learning (DL) techniques become integral to various applications, ensuring model fairness while maintaining high performance has become increasingly critical, particularly in sensitive fields such as medical diagnosis. Although a variety of bias-mitigation methods have been proposed, many rely on computationally expensive debiasing strategies or suffer substantial drops in model accuracy, which limits their practicality in real-world, resource-constrained settings. To address this issue, we propose a fairness-oriented low rank factorization (LRF) framework that leverages singular value decomposition (SVD) to improve DL model fairness. Unlike traditional SVD, which is mainly used for model compression by decomposing and reducing weight matrices, our work shows that SVD can also serve as an effective tool for fairness enhancement. Specifically, we observed that elements in the unitary matrices obtained from SVD contribute unequally to model bias across groups defined by sensitive attributes. Motivated by this observation, we propose a method, named FairLRF, that selectively removes bias-inducing elements from unitary matrices to reduce group disparities, thus enhancing model fairness. Extensive experiments show that our method outperforms conventional LRF methods as well as state-of-the-art fairness-enhancing techniques. Additionally, an ablation study examines how major hyper-parameters may influence the performance of processed models. To the best of our knowledge, this is the first work utilizing SVD not primarily for compression but for fairness enhancement.
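To fix intuitions, here is a toy numpy sketch of the general idea, not the authors' actual FairLRF criterion: decompose a weight matrix with SVD, then drop the rank-one component whose removal most reduces a simple disparity proxy between two group inputs. The disparity measure and the data are our illustrative assumptions.

    import numpy as np

    # Toy sketch of SVD-for-fairness (not the authors' FairLRF criterion):
    # drop the rank-1 component whose removal most shrinks the gap between a
    # layer's mean response to inputs from two groups.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(16, 16))                        # stand-in weight matrix
    x_a, x_b = rng.normal(size=16), rng.normal(size=16)  # inputs from two groups

    def disparity(M):
        return abs((M @ x_a).mean() - (M @ x_b).mean())

    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    best = min(range(len(s)),
               key=lambda i: disparity(W - s[i] * np.outer(U[:, i], Vt[i])))
    W_fair = W - s[best] * np.outer(U[:, best], Vt[best])
    print(f"disparity before: {disparity(W):.4f}, after: {disparity(W_fair):.4f}")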
Not specified
Why we think this paper is great for you:
This report directly addresses the transformative role of AI in educational settings, offering insights into new opportunities and challenges for science education research. It is highly relevant to your interest in AI's impact on learning.
Abstract
This report summarizes the outcomes of a two-day international scoping workshop on the role of artificial intelligence (AI) in science education research. As AI rapidly reshapes scientific practice, classroom learning, and research methods, the field faces both new opportunities and significant challenges. The report clarifies key AI concepts to reduce ambiguity and reviews evidence of how AI influences scientific work, teaching practices, and disciplinary learning. It identifies how AI intersects with major areas of science education research, including curriculum development, assessment, epistemic cognition, inclusion, and teacher professional development, highlighting cases where AI can support human reasoning and cases where it may introduce risks to equity or validity. The report also examines how AI is transforming methodological approaches across quantitative, qualitative, ethnographic, and design-based traditions, giving rise to hybrid forms of analysis that combine human and computational strengths. To guide responsible integration, a systems-thinking heuristic is introduced that helps researchers consider stakeholder needs, potential risks, and ethical constraints. The report concludes with actionable recommendations for training, infrastructure, and standards, along with guidance for funders, policymakers, professional organizations, and academic departments. The goal is to support principled and methodologically sound use of AI in science education research.
University of Southern
Why we think this paper is great for you:
This paper combines your interests in AI for transportation and energy efficiency by focusing on optimizing fuel consumption in public transport. It demonstrates how AI can lead to more sustainable urban mobility.
Abstract
Enhancing fuel efficiency in public transportation requires the integration of complex multimodal data into interpretable, decision-relevant insights. However, traditional analytics and visualization methods often yield fragmented outputs that demand extensive human interpretation, limiting scalability and consistency. This study presents a multi-agent framework that leverages multimodal large language models (LLMs) to automate data narration and energy insight generation. The framework coordinates three specialized agents, a data narration agent, an LLM-as-a-judge agent, and an optional human-in-the-loop evaluator, to iteratively transform analytical artifacts into coherent, stakeholder-oriented reports. The system is validated through a real-world case study on public bus transportation in Northern Jutland, Denmark, where fuel efficiency data from 4006 trips are analyzed using Gaussian Mixture Model clustering. Comparative experiments across five state-of-the-art LLMs and three prompting paradigms identify GPT-4.1 mini with Chain-of-Thought prompting as the optimal configuration, achieving 97.3% narrative accuracy while balancing interpretability and computational cost. The findings demonstrate that multi-agent orchestration significantly enhances factual precision, coherence, and scalability in LLM-based reporting. The proposed framework establishes a replicable and domain-adaptive methodology for AI-driven narrative generation and decision support in energy informatics.
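The clustering step in this pipeline is standard enough to sketch. Below is a minimal scikit-learn example of fitting a Gaussian Mixture Model to per-trip fuel-efficiency figures; the synthetic data, the km-per-litre scale, and the choice of three components are our illustrative assumptions, not the study's settings.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Minimal GMM clustering of per-trip fuel efficiency, as in the pipeline
    # described above. Synthetic data; all settings are illustrative.
    rng = np.random.default_rng(42)
    trips = np.concatenate([rng.normal(2.1, 0.20, 1500),   # e.g. urban trips
                            rng.normal(2.8, 0.25, 1500),   # mixed
                            rng.normal(3.6, 0.30, 1000)]   # highway
                           ).reshape(-1, 1)                # km per litre

    gmm = GaussianMixture(n_components=3, random_state=0).fit(trips)
    labels = gmm.predict(trips)
    for k in range(3):
        print(f"cluster {k}: mean={gmm.means_[k, 0]:.2f} km/L, "
              f"share={np.mean(labels == k):.1%}")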
Universidade da Coruña
Why we think this paper is great for you:
This paper is directly relevant to your interest in AI applications within the healthcare sector, specifically focusing on improving reasoning capabilities in medical AI systems. It highlights advancements in critical healthcare datasets.
Abstract
We introduce HEAD-QA v2, an expanded and updated version of a Spanish/English healthcare multiple-choice reasoning dataset originally released by Vilares and Gómez-Rodríguez (2019). The update responds to the growing need for high-quality datasets that capture the linguistic and conceptual complexity of healthcare reasoning. We extend the dataset to over 12,000 questions from ten years of Spanish professional exams, benchmark several open-source LLMs using prompting, RAG, and probability-based answer selection, and provide additional multilingual versions to support future work. Results indicate that performance is mainly driven by model scale and intrinsic reasoning ability, with complex inference strategies obtaining limited gains. Together, these results establish HEAD-QA v2 as a reliable resource for advancing research on biomedical reasoning and model improvement.
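Of the three evaluation strategies mentioned, probability-based answer selection is easy to illustrate: score each option by the total log-probability a causal LM assigns to its tokens after the question, then pick the highest. The sketch below uses GPT-2 via Hugging Face transformers purely as a stand-in for the open-source models benchmarked in the paper; the example question is invented.

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    # Probability-based multiple-choice selection: score each option by the
    # summed log-probability of its tokens given the question. GPT-2 is a
    # stand-in model; the example question is invented.
    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

    def option_logprob(question: str, option: str) -> float:
        n_prompt = tok(question, return_tensors="pt").input_ids.shape[1]
        full_ids = tok(question + " " + option, return_tensors="pt").input_ids
        with torch.no_grad():
            logp = model(full_ids).logits.log_softmax(-1)
        target = full_ids[0, n_prompt:]  # the option's tokens
        return logp[0, n_prompt - 1:-1].gather(-1, target.unsqueeze(-1)).sum().item()

    question = "Which organ produces insulin?"
    options = ["The pancreas.", "The liver.", "The spleen.", "The kidney."]
    print(max(options, key=lambda o: option_logprob(question, o)))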
AI for Social Justice
University of Montreal
Abstract
The law draws a sharp distinction between objects and persons, and between two kinds of persons, the "fictional" kind (i.e. corporations), and the "non-fictional" kind (individual or "natural" persons). This paper will assess whether we maximize overall long-term legal coherence by (A) maintaining an object classification for all future AI systems, (B) creating fictional legal persons associated with suitably advanced, individuated AI systems (giving these fictional legal persons derogable rights and duties associated with certified groups of existing persons, potentially including free speech, contract rights, and standing to sue "on behalf of" the AI system), or (C) recognizing non-fictional legal personhood through legal identity for suitably advanced, individuated AI systems (recognizing them as entities meriting legal standing with non-derogable rights, which for the human case include life, due process, habeas corpus, freedom from slavery, and freedom of conscience). We will clarify the meaning and implications of each option along the way, considering liability, copyright, family law, fundamental rights, civil rights, citizenship, and AI safety regulation. We will tentatively find that the non-fictional personhood approach may be best from a coherence perspective, for at least some advanced AI systems. An object approach may prove untenable for sufficiently humanoid advanced systems, though we suggest that it is adequate for currently existing systems as of 2025. While fictional personhood would resolve some coherence issues for future systems, it would create others and provide solutions that are neither durable nor fit for purpose. Finally, our review will suggest that "hybrid" approaches are likely to fail and lead to further incoherence: the choice between object, fictional person and non-fictional person is unavoidable.
AI on Labor Market
Carnegie Mellon
Abstract
The rapid advancement of Large Language Models (LLMs) has generated considerable speculation regarding their transformative potential for labor markets. However, existing approaches to measuring AI exposure in the workforce predominantly rely on concurrent market conditions, offering limited predictive capacity for anticipating future disruptions. This paper presents a predictive study examining whether online discussions about LLMs can function as early indicators of labor market shifts. We employ four distinct analytical approaches to identify the domains and timeframes in which public discourse serves as a leading signal for employment changes, thereby demonstrating its predictive validity for labor market dynamics. Drawing on a comprehensive dataset that integrates the REALM corpus of LLM discussions, LinkedIn job postings, Indeed employment indices, and over 4 million LinkedIn user profiles, we analyze the relationship between discussion intensity across news media and Reddit forums and subsequent variations in job posting volumes, occupational net change ratios, job tenure patterns, unemployment duration, and transitions to GenAI-related roles across thirteen occupational categories. Our findings reveal that discussion intensity predicts employment changes 1-7 months in advance across multiple indicators, including job postings, net hiring rates, tenure patterns, and unemployment duration. These findings suggest that monitoring online discourse can provide actionable intelligence for workers making reskilling decisions and organizations anticipating skill requirements, offering a real-time complement to traditional labor statistics in navigating technological disruption.
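The core lead-lag claim can be pictured with a few lines of numpy. The sketch below, on synthetic series, correlates a discussion-intensity signal with a labor indicator shifted 1 to 7 months ahead and reports where the association peaks; the study's four analytical approaches are of course richer than this.

    import numpy as np

    # Toy lead-lag check on synthetic data: does discussion intensity at
    # month t correlate with job postings at month t + lag, lag = 1..7?
    rng = np.random.default_rng(1)
    months = 48
    discussion = rng.normal(size=months).cumsum()         # e.g. Reddit/news volume
    postings = np.roll(discussion, 4) + rng.normal(0, 0.5, months)  # 4-month lag built in

    def lagged_corr(x, y, lag):
        """Correlation of x[t] with y[t + lag]."""
        return np.corrcoef(x[:-lag], y[lag:])[0, 1]

    for lag in range(1, 8):
        print(f"lag {lag} months: r = {lagged_corr(discussion, postings, lag):.2f}")
    # r should peak near lag = 4, the lead built into the synthetic data.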
Georgia Tech
Abstract
Artificial intelligence (AI) raises expectations of substantial increases in rates of technological and scientific progress, but such anticipations are often not connected to detailed ground-level studies of AI use in innovation processes. Accordingly, it remains unclear how and to what extent AI can accelerate innovation. To help to fill this gap, we report results from 32 interviews with U.S.-based academic manufacturing and materials sciences researchers experienced with AI and machine learning (ML) techniques. Interviewees primarily used AI for modeling of materials and manufacturing processes, facilitating cheaper and more rapid search of design spaces for materials and manufacturing processes alike. They report benefits including cost, time, and computation savings in technology development. However, interviewees also report that AI/ML tools are unreliable outside design spaces for which dense data are already available; that they require skilled and judicious application in tandem with older research techniques; and that AI/ML tools may detrimentally circumvent opportunities for disruptive theoretical advancement. Based on these results, we suggest there is reason for optimism about acceleration in sustaining innovations through the use of AI/ML, but that support for conventional empirical, computational, and theoretical research is required to maintain the likelihood of further major advances in manufacturing and materials science.
AI on Food
UC Davis
Abstract
Artificial intelligence is accelerating a new era of food innovation, connecting data from farm to consumer to improve formulation, processing, and health outcomes. Recent advances in deep learning, natural language processing, and multi-omics integration make it possible to understand and optimize food systems with unprecedented depth. However, AI adoption across the food sector remains uneven due to heterogeneous datasets, limited model and system interoperability, and a persistent skills gap between data scientists and food domain experts. To address these challenges and advance responsible innovation, the AI Institute for Next Generation Food Systems (AIFS) convened the inaugural AI for Food Product Development Symposium at University of California, Davis, in October 2025. This white paper synthesizes insights from the symposium, organized around five domains where AI can have the greatest near-term impact: supply chain; formulation and processing; consumer insights and sensory prediction; nutrition and health; and education and workforce development. Across the areas, participants emphasized the importance of interoperable data standards, transparent and interpretable models, and cross-sector collaboration to accelerate the translation of AI research into practice. The discussions further highlighted the need for robust digital infrastructure, privacy-preserving data-sharing mechanisms, and interdisciplinary training pathways that integrate AI literacy with domain expertise. Collectively, the priorities outline a roadmap for integrating AI into food manufacturing in ways that enhance innovation, sustainability, and human well-being while ensuring that technological progress remains grounded in ethics, scientific rigor, and societal benefit.
AI for Society
UC Berkeley
Abstract
Artificial intelligence (AI) is no longer futuristic; it is a daily companion shaping our private and work lives. While AI simplifies our lives, its rise also invites us to rethink who we are, and who we wish to remain, as humans. Even if AI does not think, feel, or desire, it learns from our behavior, mirroring our collective values, biases, and aspirations. The question, then, is not what AI is, but what we are allowing it to become through data, computing power, and other parameters "teaching" it, and, even more importantly, who we are becoming through our relationship with AI.
As the EU AI Act and the Vienna Manifesto on Digital Humanism emphasize, technology must serve human dignity, social well-being, and democratic accountability. In our opinion, responsible use of AI is not only a matter of code or law, but also of conscientious practice: how each of us engages with AI and teaches others to use it at home and at work. The Ten Commandments for the Wise and Responsible Use of AI that we propose are meant as a guideline for this very engagement. They closely align with Floridi and Cowls' five guiding principles for AI in society: beneficence, non-maleficence, autonomy, justice, and explicability.