Hi J34Nc4Rl0+Ai Impacts On Society,

Your personalized paper recommendations for 03 to 07 November 2025.

Dear user, this week we added the possibility to further personalize your results by adding a short description of yourself.

Log in to our website and head to the profile tab. There you can provide any details you like, such as your profession, age, or background. The language models then take these details into account to generate recommendations tailored to you.

🎯 Top Personalized Recommendations
RPTU Kaiserslautern-Landau
Why we think this paper is great for you:
This paper directly addresses the energy footprint of AI systems, which is a key area of interest for you. It provides valuable insights into benchmarking and estimating the energy consumption of powerful AI agents.
Rate paper: 👍 👎 ♥ Save
Abstract
Web agents, like OpenAI's Operator and Google's Project Mariner, are powerful agentic systems pushing the boundaries of Large Language Models (LLM). They can autonomously interact with the internet at the user's behest, such as navigating websites, filling search masks, and comparing price lists. Though web agent research is thriving, induced sustainability issues remain largely unexplored. To highlight the urgency of this issue, we provide an initial exploration of the energy and $CO_2$ cost associated with web agents from both a theoretical -via estimation- and an empirical perspective -by benchmarking. Our results show how different philosophies in web agent creation can severely impact the associated expended energy, and that more energy consumed does not necessarily equate to better results. We highlight a lack of transparency regarding disclosing model parameters and processes used for some web agents as a limiting factor when estimating energy consumption. Our work contributes towards a change in thinking of how we evaluate web agents, advocating for dedicated metrics measuring energy consumption in benchmarks.
AI Summary
  • Web agent design philosophies severely impact energy consumption, where more energy consumed does not necessarily equate to better performance, as demonstrated by AutoWebGLM being both the most energy-efficient and best-performing agent on Mind2Web. [3]
  • Theoretical estimation of energy consumption for proprietary LLM-driven web agents is highly unreliable, showing discrepancies up to a factor of 7 compared to empirical measurements, primarily due to a lack of transparency in model parameters and internal processes. [3]
  • The absence of energy consumption metrics in current web agent benchmarks creates a disincentive for developers to prioritize sustainable implementations, necessitating the adoption of energy per benchmark as a core evaluation metric. [3]
  • A critical problem is the lack of transparency regarding web agent energy consumption to end-users, suggesting that displaying estimated CO2 emissions per task could raise awareness and guide users towards more sustainable agents. [3]
  • Step Success Rate (SSR): The de facto standard performance metric for the Mind2Web benchmark, representing the ratio of successful steps taken towards completing a task to the total number of steps. [3]
  • Empirical energy benchmarking for open-source LLM-driven web agents is feasible and should be integrated into standard evaluation practices to provide a holistic assessment of performance and sustainability. [2]
  • Effective preprocessing and the strategic use of smaller, fine-tuned open-source LLMs (e.g., MindAct's approach) are critical for achieving significant energy efficiency compared to relying on large proprietary models like GPT-4. [2]
  • To enable comparability for proprietary LLM-driven agents where direct benchmarking is impossible, developers should report at least the energy consumption per token and the total number of tokens consumed for established benchmarks. [2]
  • Energy per token: A metric used to evaluate the trade-off between performance and sustainability, particularly for LLM inference, by quantifying the energy consumed per processed token. [2]
  • Web Agents: LLM-powered agentic systems capable of autonomously interacting with the internet, such as navigating websites and filling forms, mimicking human user behavior. [1]
Harvard University
Why we think this paper is great for you:
This paper directly tackles algorithmic fairness in criminal justice, a critical area aligning with your strong interest in AI for social justice and fairness. It explores different fairness concepts and their implications in high-stakes domains.
Rate paper: 👍 👎 ♥ Save
Abstract
Algorithmic fairness has grown rapidly as a research area, yet key concepts remain unsettled, especially in criminal justice. We review group, individual, and process fairness and map the conditions under which they conflict. We then develop a simple modification to standard group fairness. Rather than exact parity across protected groups, we minimize a weighted error loss while keeping differences in false negative rates within a small tolerance. This makes solutions easier to find, can raise predictive accuracy, and surfaces the ethical choice of error costs. We situate this proposal within three classes of critique: biased and incomplete data, latent affirmative action, and the explosion of subgroup constraints. Finally, we offer a practical framework for deployment in public decision systems built on three pillars: need-based decisions, transparency and accountability, and narrowly tailored definitions and solutions. Together, these elements link technical design to legitimacy and provide actionable guidance for agencies that use risk assessment and related tools.
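To make the relaxed group-fairness idea above concrete, here is a minimal sketch, not the paper's implementation: search per-group decision thresholds that minimize a weighted error loss while keeping the false-negative-rate gap within a tolerance. The variable names, cost weights, and grid search are illustrative assumptions.

import numpy as np

def fnr(y_true, y_pred):
    # False negative rate: share of true positives predicted as 0.
    pos = y_true == 1
    return float(np.mean(y_pred[pos] == 0)) if pos.any() else 0.0

def weighted_error(y_true, y_pred, c_fn=5.0, c_fp=1.0):
    # Weighted error loss; the cost ratio c_fn/c_fp is an ethical choice.
    fn = np.sum((y_true == 1) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    return (c_fn * fn + c_fp * fp) / len(y_true)

def fit_thresholds(scores, y_true, groups, eps=0.02):
    # Grid-search per-group thresholds; keep near-parity in FNR, not exact parity.
    best, grid = None, np.linspace(0.05, 0.95, 19)
    for ta in grid:
        for tb in grid:
            y_pred = np.where(groups == "a", scores >= ta, scores >= tb).astype(int)
            gap = abs(fnr(y_true[groups == "a"], y_pred[groups == "a"])
                      - fnr(y_true[groups == "b"], y_pred[groups == "b"]))
            if gap <= eps:
                loss = weighted_error(y_true, y_pred)
                if best is None or loss < best[0]:
                    best = (loss, ta, tb)
    return best  # (loss, threshold for group a, threshold for group b)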
INTELI Instituto de Tecnologia e Liderança
Why we think this paper is great for you:
This paper directly explores the evolving role of AI in education, a topic you've expressed significant interest in. It discusses how AI is transforming learning environments and human intellectual labor.
Rate paper: 👍 👎 ♥ Save
Abstract
The debate over whether "thinking machines" could replace human intellectual labor has existed in both public and expert discussions since the mid-twentieth century, when the concept and terminology of Artificial Intelligence (AI) first emerged. For decades, this idea remained largely theoretical. However, with the recent advent of Generative AI - particularly Large Language Models (LLMs) - and the widespread adoption of tools such as ChatGPT, the issue has become a practical reality. Many fields that rely on human intellectual effort are now being reshaped by AI tools that both expand human capabilities and challenge the necessity of certain forms of work once deemed uniquely human but now easily automated. Education, somewhat unexpectedly, faces a pivotal responsibility: to devise long-term strategies for cultivating human skills that will remain relevant in an era of pervasive AI in the intellectual domain. In this context, we identify the limitations of current AI systems - especially those rooted in LLM technology - argue that the fundamental causes of these weaknesses cannot be resolved through existing methods, and propose directions within the constructivist paradigm for transforming education to preserve the long-term advantages of human intelligence over AI tools.
German Cancer Research Center (DKFZ)
Why we think this paper is great for you:
This paper addresses critical challenges in the development and adoption of AI in surgery, which is highly relevant to your interest in AI's application in healthcare. It highlights the importance of robust validation practices for clinical AI.
Rate paper: 👍 👎 ♥ Save
Abstract
Surgical data science (SDS) is rapidly advancing, yet clinical adoption of artificial intelligence (AI) in surgery remains severely limited, with inadequate validation emerging as a key obstacle. In fact, existing validation practices often neglect the temporal and hierarchical structure of intraoperative videos, producing misleading, unstable, or clinically irrelevant results. In a pioneering, consensus-driven effort, we introduce the first comprehensive catalog of validation pitfalls in AI-based surgical video analysis that was derived from a multi-stage Delphi process with 91 international experts. The collected pitfalls span three categories: (1) data (e.g., incomplete annotation, spurious correlations), (2) metric selection and configuration (e.g., neglect of temporal stability, mismatch with clinical needs), and (3) aggregation and reporting (e.g., clinically uninformative aggregation, failure to account for frame dependencies in hierarchical data structures). A systematic review of surgical AI papers reveals that these pitfalls are widespread in current practice, with the majority of studies failing to account for temporal dynamics or hierarchical data structure, or relying on clinically uninformative metrics. Experiments on real surgical video datasets provide the first empirical evidence that ignoring temporal and hierarchical data structures can lead to drastic understatement of uncertainty, obscure critical failure modes, and even alter algorithm rankings. This work establishes a framework for the rigorous validation of surgical video analysis algorithms, providing a foundation for safe clinical translation, benchmarking, regulatory review, and future reporting standards in the field.
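As a rough illustration of one pitfall named above, the sketch below (synthetic data, not from the paper) compares a naive frame-level standard error with a video-level bootstrap; treating correlated frames as independent makes the uncertainty look much smaller than it is.

import numpy as np

rng = np.random.default_rng(0)
n_videos, frames_per_video = 20, 200
video_accuracy = rng.beta(8, 2, size=n_videos)             # per-video skill varies
frames = [rng.random(frames_per_video) < p for p in video_accuracy]
correct = np.concatenate(frames).astype(float)

# Naive: treat every frame as an independent observation.
naive_se = correct.std(ddof=1) / np.sqrt(correct.size)

# Hierarchical: resample whole videos, then average per-video accuracies.
boot_means = []
for _ in range(2000):
    sample = rng.integers(0, n_videos, size=n_videos)
    boot_means.append(np.mean([frames[i].mean() for i in sample]))
cluster_se = np.std(boot_means, ddof=1)

print(f"frame-level SE: {naive_se:.4f}, video-level SE: {cluster_se:.4f}")
# The video-level SE is typically several times larger on data like this.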
The University of Melbourne
Why we think this paper is great for you:
This paper directly addresses agentic AI applications within the transportation sector, aligning perfectly with your interest in AI's role in transportation. It proposes a novel approach for intention recognition in these systems.
Rate paper: 👍 👎 ♥ Save
Abstract
In this study, a modular, data-free pipeline for multi-label intention recognition is proposed for agentic AI applications in transportation. Unlike traditional intent recognition systems that depend on large, annotated corpora and often struggle with fine-grained, multi-label discrimination, our approach eliminates the need for costly data collection while enhancing the accuracy of multi-label intention understanding. Specifically, the overall pipeline, named DMTC, consists of three steps: 1) using prompt engineering to guide large language models (LLMs) to generate diverse synthetic queries in different transport scenarios; 2) encoding each textual query with a Sentence-T5 model to obtain compact semantic embeddings; 3) training a lightweight classifier using a novel online focal-contrastive (OFC) loss that emphasizes hard samples and maximizes inter-class separability. The applicability of the proposed pipeline is demonstrated in an agentic AI application in the maritime transportation context. Extensive experiments show that DMTC achieves a Hamming loss of 5.35% and an AUC of 95.92%, outperforming state-of-the-art multi-label classifiers and recent end-to-end SOTA LLM-based baselines. Further analysis reveals that Sentence-T5 embeddings improve subset accuracy by at least 3.29% over alternative encoders, and integrating the OFC loss yields an additional 0.98% gain compared to standard contrastive objectives. In conclusion, our system seamlessly routes user queries to task-specific modules (e.g., ETA information, traffic risk evaluation, and other typical scenarios in the transportation domain), laying the groundwork for fully autonomous, intention-aware agents without costly manual labelling.
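For readers who want to try the general recipe, here is a minimal skeleton of the three-step pipeline under stated assumptions: it uses the public sentence-t5-base checkpoint for step 2 and a plain multi-label logistic classifier in place of the paper's online focal-contrastive (OFC) loss, which is not reproduced here; the queries and intent labels are made up.

from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
import numpy as np

# Step 1 (stand-in): synthetic queries that an LLM would normally generate.
queries = ["When will vessel MSC-123 arrive at Rotterdam?",
           "Estimate collision risk near the strait tonight",
           "ETA and congestion for the port approach tomorrow"]
labels = np.array([[1, 0], [0, 1], [1, 1]])   # columns: [ETA, traffic-risk]

# Step 2: compact semantic embeddings.
encoder = SentenceTransformer("sentence-t5-base")
X = encoder.encode(queries)

# Step 3: lightweight multi-label classifier (standard loss, not OFC).
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000)).fit(X, labels)
print(clf.predict(encoder.encode(["How long until we dock?"])))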
University of Washington
Why we think this paper is great for you:
This paper presents an AI application for food logging, directly addressing your interest in AI's role in food and its connections to health outcomes. It offers practical insights into AI tools for dietary management.
Rate paper: 👍 👎 ♥ Save
Abstract
Food logging, both self-directed and prescribed, plays a critical role in uncovering correlations between diet, medical, fitness, and health outcomes. Through conversations with nutritional experts and individuals who practice dietary tracking, we find current logging methods, such as handwritten and app-based journaling, are inflexible and result in low adherence and potentially inaccurate nutritional summaries. These findings, corroborated by prior literature, emphasize the urgent need for improved food logging methods. In response, we propose SnappyMeal, an AI-powered dietary tracking system that leverages multimodal inputs to enable users to more flexibly log their food intake. SnappyMeal introduces goal-dependent follow-up questions to intelligently seek missing context from the user and information retrieval from user grocery receipts and nutritional databases to improve accuracy. We evaluate SnappyMeal through publicly available nutrition benchmarks and a multi-user, 3-week, in-the-wild deployment capturing over 500 logged food instances. Users strongly praised the multiple available input methods and reported a strong perceived accuracy. These insights suggest that multimodal AI systems can be leveraged to significantly improve dietary tracking flexibility and context-awareness, laying the groundwork for a new class of intelligent self-tracking applications.
KFUPM King Fahd University of Petroleum and Minerals
Why we think this paper is great for you:
This paper investigates the critical alignment of AI systems with human values, which is central to your interest in responsible AI and its societal implications. It explores how LLMs can be developed to uphold ethical principles.
Rate paper: 👍 👎 ♥ Save
Abstract
Large Language Models (LLMs) are increasingly employed in software engineering tasks such as requirements elicitation, design, and evaluation, raising critical questions regarding their alignment with human judgments on responsible AI values. This study investigates how closely LLMs' value preferences align with those of two human groups: a US-representative sample and AI practitioners. We evaluate 23 LLMs across four tasks: (T1) selecting key responsible AI values, (T2) rating their importance in specific contexts, (T3) resolving trade-offs between competing values, and (T4) prioritizing software requirements that embody those values. The results show that LLMs generally align more closely with AI practitioners than with the US-representative sample, emphasizing fairness, privacy, transparency, safety, and accountability. However, inconsistencies appear between the values that LLMs claim to uphold (Tasks 1-3) and the way they prioritize requirements (Task 4), revealing gaps in faithfulness between stated and applied behavior. These findings highlight the practical risk of relying on LLMs in requirements engineering without human oversight and motivate the need for systematic approaches to benchmark, interpret, and monitor value alignment in AI-assisted software development.
AI Energy Consumption
The Aula Fellowship
Rate paper: 👍 👎 ♥ Save
Abstract
This paper examines how decision makers in academia, government, business, and civil society navigate questions of power in implementations of artificial intelligence. The study explores how individuals experience and exercise levers of power, which are presented as social mechanisms that shape institutional responses to technological change. The study reports on the responses of personalized questionnaires designed to gather insight on a decision maker's institutional purview, based on an institutional governance framework developed from the work of Neo-institutionalists. Findings present the anonymized, real responses and circumstances of respondents in the form of twelve fictional personas of high-level decision makers from North America and Europe. These personas illustrate how personal agency, organizational logics, and institutional infrastructures may intersect in the governance of AI. The decision makers' responses to the questionnaires then inform a discussion of the field-level personal power of decision makers, methods of fostering institutional stability in times of change, and methods of influencing institutional change in the field of AI. The final section of the discussion presents a table of the dynamics of the levers of power in the field of AI for change makers and five testable hypotheses for institutional and social movement researchers. In summary, this study provides insight on the means for policymakers within institutions and their counterparts in civil society to personally engage with AI governance.
AI on Education
Hamdan Bin Mohammed Smart University
Rate paper: 👍 👎 ♥ Save
Abstract
The study explores the current state of artificial intelligence (AI) literacy levels among library professionals employing a quantitative approach consisting of 92 surveys of LIS professionals in the United Arab Emirates (UAE). Findings of the study revealed the presence of strong cognitive competencies, while there were gaps observed in behavioral and normative competencies, especially related to AI biases, AI-powered learning, and ethical considerations. There was a disconnect observed between the perceived importance of AI skills and the effectiveness of the current training programs.
AI for Social Justice
University of Cambridge
Rate paper: 👍 👎 ♥ Save
Abstract
The character of the "AI assistant" persona generated by modern chatbot large language models influences both surface-level behavior and apparent values, beliefs, and ethics. These all affect interaction quality, perceived intelligence, and alignment with both developer and user intentions. The shaping of this persona, known as character training, is a critical component of industry post-training, yet remains effectively unstudied in the academic literature. We introduce the first open implementation of character training, leveraging Constitutional AI and a new data pipeline using synthetic introspective data to shape the assistant persona in a more effective and controlled manner than alternatives such as constraining system prompts or activation steering. Specifically, we fine-tune three popular open-weights models using 11 example personas, such as humorous, deeply caring, or even malevolent. To track the effects of our approach, we introduce a method which analyzes revealed preferences, uncovering clear and holistic changes in character. We find these changes are more robust to adversarial prompting than the above two alternatives, while also leading to more coherent and realistic generations. Finally, we demonstrate this fine-tuning has little to no effect on general capabilities as measured by common benchmarks. We describe and open-source our full post-training method, the implementation of which can be found at https://github.com/maiush/OpenCharacterTraining.
AI on Energy
The University of Tokyo
Rate paper: 👍 👎 ♥ Save
Abstract
Understanding the current capabilities and risks of AI Scientist systems is essential for ensuring trustworthy and sustainable AI-driven scientific progress while preserving the integrity of the academic ecosystem. To this end, we develop Jr. AI Scientist, a state-of-the-art autonomous AI scientist system that mimics the core research workflow of a novice student researcher: Given the baseline paper from the human mentor, it analyzes its limitations, formulates novel hypotheses for improvement, validates them through rigorous experimentation, and writes a paper with the results. Unlike previous approaches that assume full automation or operate on small-scale code, Jr. AI Scientist follows a well-defined research workflow and leverages modern coding agents to handle complex, multi-file implementations, leading to scientifically valuable contributions. For evaluation, we conducted automated assessments using AI Reviewers, author-led evaluations, and submissions to Agents4Science, a venue dedicated to AI-driven scientific contributions. The findings demonstrate that Jr. AI Scientist generates papers receiving higher review scores than existing fully automated systems. Nevertheless, we identify important limitations from both the author evaluation and the Agents4Science reviews, indicating the potential risks of directly applying current AI Scientist systems and key challenges for future research. Finally, we comprehensively report various risks identified during development. We hope these insights will deepen understanding of current progress and risks in AI Scientist development.
AI on Healthcare
Georgia Institute of Technology
Rate paper: 👍 👎 ♥ Save
Abstract
Representation shapes public attitudes and behaviors. With the arrival and rapid adoption of LLMs, the way these systems are introduced will negotiate societal expectations for their role in high-stakes domains like health. Yet it remains unclear whether current narratives present a balanced view. We analyzed five prominent discourse channels (news, research press, YouTube, TikTok, and Reddit) over a two-year period on lexical style, informational content, and symbolic representation. Discussions were generally positive and episodic, with positivity increasing over time. Risk communication was unthorough and often reduced to information quality incidents, while explanations of LLMs' generative nature were rare. Compared with professional outlets, TikTok and Reddit highlighted wellbeing applications and showed greater variations in tone and anthropomorphism but little attention to risks. We discuss implications for public discourse as a diagnostic tool in identifying literacy and governance gaps, and for communication and design strategies to support more informed LLM engagement.
AI for Social Equality
University of Illinois
Rate paper: 👍 👎 ♥ Save
Abstract
In many real-life settings, algorithms play the role of assistants, while humans ultimately make the final decision. Often, algorithms specifically act as curators, narrowing down a wide range of options into a smaller subset that the human picks between: consider content recommendation or chatbot responses to questions with multiple valid answers. Crucially, humans may not know their own preferences perfectly either, but instead may only have access to a noisy sampling over preferences. Algorithms can assist humans by curating a smaller subset of items, but must also face the challenge of misalignment: humans may have different preferences from each other (and from the algorithm), and the algorithm may not know the exact preferences of the human they are facing at any point in time. In this paper, we model and theoretically study such a setting. Specifically, we show instances where humans benefit by collaborating with a misaligned algorithm. Surprisingly, we show that humans gain more utility from a misaligned algorithm (which makes different mistakes) than from an aligned algorithm. Next, we build on this result by studying what properties of algorithms maximize human welfare when the goals could be either utilitarian welfare or ensuring all humans benefit. We conclude by discussing implications for designers of algorithmic tools and policymakers.
AI on Labor Market
UT Dallas, CKGSB
Rate paper: 👍 👎 ♥ Save
Abstract
Labor mobility is a critical source of technology acquisition for firms. This paper examines how artificial intelligence (AI) knowledge is disseminated across firms through labor mobility and identifies the organizational conditions that facilitate productive spillovers. Using a comprehensive dataset of over 460 million job records from Revelio Labs (2010 to 2023), we construct an inter-firm mobility network of AI workers among over 16,000 U.S. companies. Estimating a Cobb Douglas production function, we find that firms benefit substantially from the AI investments of other firms from which they hire AI talents, with productivity spillovers two to three times larger than those associated with traditional IT after accounting for labor scale. Importantly, these spillovers are contingent on organizational context: hiring from flatter and more lean startup method intensive firms generates significant productivity gains, whereas hiring from firms lacking these traits yields little benefit. Mechanism tests indicate that "flat and lean" organizations cultivate more versatile AI generalists who transfer richer knowledge across firms. These findings reveal that AI spillovers differ fundamentally from traditional IT spillovers: while IT spillovers primarily arise from scale and process standardization, AI spillovers critically depend on the experimental and integrative environments in which AI knowledge is produced. Together, these results underscore the importance of considering both labor mobility and organizational context in understanding the full impact of AI-driven productivity spillovers.
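For orientation, a log-linearized Cobb-Douglas specification with an AI spillover term might look like the following; the variable names, weighting scheme, and fixed effects are assumptions for illustration, not the paper's exact estimating equation.

\begin{equation}
  \ln Y_{it} = \alpha \ln K_{it} + \beta \ln L_{it} + \gamma \ln A_{it}
             + \delta \ln \Big(\sum_{j \in H_{it}} w_{ijt}\, A_{jt}\Big)
             + \mu_i + \tau_t + \varepsilon_{it}
\end{equation}

Here Y is firm output, K capital, L labor, A a firm's own AI investment, H_{it} the set of firms from which firm i hired AI workers, w_{ijt} hiring-based weights, and mu_i, tau_t firm and year fixed effects; the coefficient delta captures the productivity spillover from the sending firms' AI investments.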
Microsoft AI for Good Research Lab
Rate paper: 👍 👎 ♥ Save
Abstract
Artificial intelligence (AI) is diffusing globally at unprecedented speed, but adoption remains uneven. Frontier Large Language Models (LLMs) are known to perform poorly on low-resource languages due to data scarcity. We hypothesize that this performance deficit reduces the utility of AI, thereby slowing adoption in Low-Resource Language Countries (LRLCs). To test this, we use a weighted regression model to isolate the language effect from socioeconomic and demographic factors, finding that LRLCs have a share of AI users that is approximately 20% lower relative to their baseline. These results indicate that linguistic accessibility is a significant, independent barrier to equitable AI diffusion.
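A minimal sketch of the kind of weighted regression described above, under assumptions: a low-resource-language indicator plus socioeconomic controls, weighted (for example) by working-age population. The data, variable names, and weights below are synthetic placeholders.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "ai_user_share": rng.random(50),
    "lrlc":          rng.integers(0, 2, 50),   # 1 = low-resource language country
    "log_gdp_pc":    rng.normal(9, 1, 50),
    "internet_pen":  rng.random(50),
    "weight":        rng.random(50) + 0.5,     # e.g. working-age population
})

X = sm.add_constant(df[["lrlc", "log_gdp_pc", "internet_pen"]])
model = sm.WLS(df["ai_user_share"], X, weights=df["weight"]).fit()
print(model.params["lrlc"])   # language effect net of the controls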
AI Water Consumption
Microsoft AI for Good Lab
Rate paper: 👍 👎 ♥ Save
Abstract
Measuring global AI diffusion remains challenging due to a lack of population-normalized, cross-country usage data. We introduce AI User Share, a novel indicator that estimates the share of each country's working-age population actively using AI tools. Built from anonymized Microsoft telemetry and adjusted for device access and mobile scaling, this metric spans 147 economies and provides consistent, real-time insight into global AI diffusion. We find wide variation in adoption, with a strong correlation between AI User Share and GDP. High uptake is concentrated in developed economies, though usage among internet-connected populations in lower-income countries reveals substantial latent demand. We also detect sharp increases in usage following major product launches, such as DeepSeek in early 2025. While the metric's reliance solely on Microsoft telemetry introduces potential biases related to this user base, it offers an important new lens into how AI is spreading globally. AI User Share enables timely benchmarking that can inform data-driven AI policy.
AI on Air
Edison Scientific Inc
Rate paper: 👍 👎 ♥ Save
Abstract
Data-driven scientific discovery requires iterative cycles of literature search, hypothesis generation, and data analysis. Substantial progress has been made towards AI agents that can automate scientific research, but all such agents remain limited in the number of actions they can take before losing coherence, thus limiting the depth of their findings. Here we present Kosmos, an AI scientist that automates data-driven discovery. Given an open-ended objective and a dataset, Kosmos runs for up to 12 hours performing cycles of parallel data analysis, literature search, and hypothesis generation before synthesizing discoveries into scientific reports. Unlike prior systems, Kosmos uses a structured world model to share information between a data analysis agent and a literature search agent. The world model enables Kosmos to coherently pursue the specified objective over 200 agent rollouts, collectively executing an average of 42,000 lines of code and reading 1,500 papers per run. Kosmos cites all statements in its reports with code or primary literature, ensuring its reasoning is traceable. Independent scientists found 79.4% of statements in Kosmos reports to be accurate, and collaborators reported that a single 20-cycle Kosmos run performed the equivalent of 6 months of their own research time on average. Furthermore, collaborators reported that the number of valuable scientific findings generated scales linearly with Kosmos cycles (tested up to 20 cycles). We highlight seven discoveries made by Kosmos that span metabolomics, materials science, neuroscience, and statistical genetics. Three discoveries independently reproduce findings from preprinted or unpublished manuscripts that were not accessed by Kosmos at runtime, while four make novel contributions to the scientific literature.
AI for Social Good
University of Texas at Dallas
Rate paper: 👍 👎 ♥ Save
Abstract
Effective human-AI collaboration requires humans to accurately gauge AI capabilities and calibrate their trust accordingly. Humans often have context-dependent private information, referred to as Unique Human Knowledge (UHK), that is crucial for deciding whether to accept or override AI's recommendations. We examine how displaying AI reasoning affects trust and UHK utilization through a pre-registered, incentive-compatible experiment (N = 752). We find that revealing AI reasoning, whether brief or extensive, acts as a powerful persuasive heuristic that significantly increases trust and agreement with AI recommendations. Rather than helping participants appropriately calibrate their trust, this transparency induces over-trust that crowds out UHK utilization. Our results highlight the need for careful consideration when revealing AI reasoning and call for better information design in human-AI collaboration systems.
Tsinghua University, Mon
Rate paper: 👍 👎 ♥ Save
Abstract
Generative AI is increasingly positioned as a peer in collaborative learning, yet its effects on ethical deliberation remain unclear. We report a between-subjects experiment with university students (N=217) who discussed an autonomous-vehicle dilemma in triads under three conditions: human-only control, supportive AI teammate, or contrarian AI teammate. Using moral foundations lexicons, argumentative coding from the augmentative knowledge construction framework, semantic trajectory modelling with BERTopic and dynamic time warping, and epistemic network analysis, we traced how AI personas reshape moral discourse. Supportive AIs increased grounded/qualified claims relative to control, consolidating integrative reasoning around care/fairness, while contrarian AIs modestly broadened moral framing and sustained value pluralism. Both AI conditions reduced thematic drift compared with human-only groups, indicating more stable topical focus. Post-discussion justification complexity was only weakly predicted by moral framing and reasoning quality, and shifts in final moral decisions were driven primarily by participants' initial stance rather than condition. Overall, AI teammates altered the process, the distribution and connection of moral frames and argument quality, more than the outcome of moral choice, highlighting the potential of generative AI agents as teammates for eliciting reflective, pluralistic moral reasoning in collaborative learning.
AI on Transportation
Institute of Computing I
Rate paper: 👍 👎 ♥ Save
Abstract
Traditional vehicle routing systems efficiently optimize single metrics such as time or distance, and require additional processing when multiple metrics must be considered. However, they lack the capability to interpret and integrate the complex, semantic, and dynamic contexts of human drivers, such as multi-step tasks, situational constraints, or urgent needs. This paper introduces and evaluates PAVe (Personalized Agentic Vehicular Routing), a hybrid agentic assistant designed to augment classical pathfinding algorithms with contextual reasoning. Our approach employs a Large Language Model (LLM) agent that operates on a candidate set of routes generated by a multi-objective (time, CO2) Dijkstra algorithm. The agent evaluates these options against user-provided tasks, preferences, and avoidance rules by leveraging a pre-processed geospatial cache of urban Points of Interest (POIs). In a benchmark of realistic urban scenarios, PAVe successfully translated complex user intent into appropriate route modifications, achieving over 88% accuracy in its initial route selections with a local model. We conclude that combining classical routing algorithms with an LLM-based semantic reasoning layer is a robust and effective approach for creating personalized, adaptive, and scalable solutions for urban mobility optimization.
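To illustrate the candidate-generation step, here is a small sketch (not PAVe's code) of a scalarized Dijkstra over edges carrying (time, CO2) costs, run with a few different weightings to produce route options an LLM agent could then reason over; the toy graph and weightings are assumptions.

import heapq

def dijkstra(graph, src, dst, w_time, w_co2):
    """graph: {node: [(neighbor, time, co2), ...]} -> (scalarized cost, path)."""
    pq, seen = [(0.0, src, [src])], set()
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, t, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(pq, (cost + w_time * t + w_co2 * c, nxt, path + [nxt]))
    return float("inf"), []

graph = {"A": [("B", 10, 2.0), ("C", 6, 3.5)],
         "B": [("D", 5, 1.0)],
         "C": [("D", 4, 2.5)]}
# Different scalarizations yield different candidate routes for the agent to compare.
candidates = [dijkstra(graph, "A", "D", wt, 1 - wt) for wt in (0.2, 0.5, 0.8)]
print(candidates)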
AI on Food
John A. Paulson School of Engineering and Applied Sciences
Rate paper: 👍 👎 ♥ Save
Abstract
Scientific experiment and manufacture rely on complex, multi-step procedures that demand continuous human expertise for precise execution and decision-making. Despite advances in machine learning and automation, conventional models remain confined to virtual domains, while real-world experiment and manufacture still rely on human supervision and expertise. This gap between machine intelligence and physical execution limits reproducibility, scalability, and accessibility across scientific and manufacture workflows. Here, we introduce human-AI co-embodied intelligence, a new form of physical AI that unites human users, agentic AI, and wearable hardware into an integrated system for real-world experiment and intelligent manufacture. In this paradigm, humans provide precise execution and control, while agentic AI contributes memory, contextual reasoning, adaptive planning, and real-time feedback. The wearable interface continuously captures the experimental and manufacture processes, facilitates seamless communication between humans and AI for corrective guidance and interpretable collaboration. As a demonstration, we present Agentic-Physical Experimentation (APEX) system, coupling agentic reasoning with physical execution through mixed-reality. APEX observes and interprets human actions, aligns them with standard operating procedures, provides 3D visual guidance, and analyzes every step. Implemented in a cleanroom for flexible electronics fabrication, APEX system achieves context-aware reasoning with accuracy exceeding general multimodal large language models, corrects errors in real time, and transfers expertise to beginners. These results establish a new class of agentic-physical-human intelligence that extends agentic reasoning beyond computation into the physical domain, transforming scientific research and manufacturing into autonomous, traceable, interpretable, and scalable processes.
AI Air Consumption
Scuola Internazionale di
Rate paper: 👍 👎 ♥ Save
Abstract
PLUMED is an open-source software package that is widely used for analyzing and enhancing molecular dynamics simulations that works in conjunction with most available molecular dynamics softwares. While the computational cost of PLUMED calculations is typically negligible compared to the molecular dynamics code's force evaluation, the software is increasingly being employed for more computationally demanding tasks where performance optimization becomes critical. In this tutorial, we describe a recently implemented tool that can be used to reliably measure code performance. We then use this tool to generate detailed performance benchmarks that show how calculations of large-numbers of distances, angles or torsions can be optimized by using vector-based commands rather than individual scalar operations. We then present benchmarks that illustrate how to optimize calculations of atomic order parameters and secondary structure variables. Throughout the tutorial and in our implementations we endeavor to explain the algorithmic tricks that are being used to optimize the calculations so others can make use of these prescriptions both when they are using PLUMED and when they are writing their own codes.
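The vectorization point generalizes beyond PLUMED. The sketch below (plain NumPy, not PLUMED input syntax) shows the same idea: computing a large batch of distances in one vectorized call instead of looping over scalar operations.

import numpy as np

rng = np.random.default_rng(1)
a = rng.random((10_000, 3))     # first atom of each pair
b = rng.random((10_000, 3))     # second atom of each pair

# Scalar-style: one distance at a time, in a Python-level loop.
d_scalar = np.array([np.linalg.norm(a[i] - b[i]) for i in range(len(a))])

# Vector-style: all distances in a single batched operation.
d_vector = np.linalg.norm(a - b, axis=1)

assert np.allclose(d_scalar, d_vector)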

Interests not found

We did not find any papers that match the interests below. Try other search terms, and consider whether such content exists on arxiv.org.
  • AI on Water
  • AI for Social Equity
  • AI for Society
  • AI Impacts on Society
You can edit or add more interests any time.