🎯 Top Personalized Recommendations
IIT-CNR, University of Pisa
Why we think this paper is great for you:
This paper directly addresses data storytelling, a critical skill for anyone in a data career. Mastering this will significantly enhance your ability to communicate insights and advance professionally.
Abstract
Accessible teaching has been extensively investigated in computer science,
yet its integration into other disciplines, such as data literacy, remains
limited. This paper examines the potential of data storytelling, defined as the
integration of data, visualizations, and narrative, as a possible strategy for
making complex information accessible to diverse learners in compliance with
Title II of the Americans with Disabilities Act (ADA). We propose six design
principles, derived from Title II's core obligations, to guide educators in
applying data storytelling within inclusive learning environments. A simulated
scenario shows the operationalization of these principles, illustrating how
narrative-driven data presentation can enhance comprehension, engagement, and
equitable access across different educational contexts.
AI Summary
- Accessible data storytelling is presented as a strategy to make complex information, particularly in data literacy, accessible to diverse learners, addressing a gap in systematic application beyond computer science. [3]
- Data storytelling, through its multimodal nature and narrative structure, can significantly reduce cognitive barriers and enhance comprehension, engagement, and equitable access to data for all students. [3]
- Data Storytelling: The integration of data, visualizations, and narrative to convey insights in a compelling and accessible way, embedding quantitative evidence within a coherent narrative arc. [3]
- The paper proposes six specific design principles (equitable access, effective communication, programmatic accessibility, integration, reasonable modifications, auxiliary aids and services) derived from ADA Title II to guide accessible data storytelling in education. [2]
- The paper demonstrates the operationalization of these six principles through a detailed simulated teaching scenario, providing a proof-of-concept for practical application. [2]
- Accessible Data Storytelling: The application of data storytelling principles, guided by ADA Title II requirements, to ensure complex data information is perceivable and usable by diverse learners, including those with disabilities. [2]
- Equitable Access (ADA Title II Principle): All learners, regardless of disability, must have access to instructional content and learning environments. [2]
- Effective Communication (ADA Title II Principle): Information must be conveyed in ways that are understandable and usable, with necessary aids and services provided. [2]
- The framework emphasizes decoupling the narrative's semantic structure from its technical delivery format, enabling educators to adapt content to various learner needs without compromising narrative coherence (see the sketch after this summary). [1]
- The proposed design principles align with Universal Design for Learning (UDL) guidelines, offering a scalable approach for implementing accessibility across various academic disciplines. [1]
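To make the decoupling idea concrete, here is a minimal, hypothetical sketch (not from the paper): a single DataStory structure holds the semantic layer, while independent render functions target different delivery formats.

```python
from dataclasses import dataclass

@dataclass
class DataStory:
    # Semantic layer: claim, evidence, and narrative, independent of delivery format.
    claim: str
    evidence: dict[str, float]
    narrative: str

def render_visual(story: DataStory) -> str:
    # One delivery format: a text bar chart for visual readers.
    bars = "\n".join(f"{k:<8} {'#' * round(v)}" for k, v in story.evidence.items())
    return f"{story.claim}\n{bars}"

def render_screen_reader(story: DataStory) -> str:
    # Another delivery format: a linear description for screen readers.
    facts = "; ".join(f"{k}: {v}" for k, v in story.evidence.items())
    return f"{story.claim}. {story.narrative} Data: {facts}."

story = DataStory(
    claim="Attendance rose after the schedule change",
    evidence={"Before": 6.0, "After": 9.0},
    narrative="The same students attended three more sessions per month on average.",
)
print(render_visual(story))
print(render_screen_reader(story))
```

Swapping render functions changes the format, never the story, which is exactly the adaptation the principle calls for.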
Edison Scientific Inc
Why we think this paper is great for you:
You'll find this highly relevant as it explores the emerging role of an "AI Scientist" and the automation of scientific research. This offers valuable insights into potential future career paths in data science.
Abstract
Data-driven scientific discovery requires iterative cycles of literature
search, hypothesis generation, and data analysis. Substantial progress has been
made towards AI agents that can automate scientific research, but all such
agents remain limited in the number of actions they can take before losing
coherence, thus limiting the depth of their findings. Here we present Kosmos,
an AI scientist that automates data-driven discovery. Given an open-ended
objective and a dataset, Kosmos runs for up to 12 hours performing cycles of
parallel data analysis, literature search, and hypothesis generation before
synthesizing discoveries into scientific reports. Unlike prior systems, Kosmos
uses a structured world model to share information between a data analysis
agent and a literature search agent. The world model enables Kosmos to
coherently pursue the specified objective over 200 agent rollouts, collectively
executing an average of 42,000 lines of code and reading 1,500 papers per run.
Kosmos cites all statements in its reports with code or primary literature,
ensuring its reasoning is traceable. Independent scientists found 79.4% of
statements in Kosmos reports to be accurate, and collaborators reported that a
single 20-cycle Kosmos run performed the equivalent of 6 months of their own
research time on average. Furthermore, collaborators reported that the number
of valuable scientific findings generated scales linearly with Kosmos cycles
(tested up to 20 cycles). We highlight seven discoveries made by Kosmos that
span metabolomics, materials science, neuroscience, and statistical genetics.
Three discoveries independently reproduce findings from preprinted or
unpublished manuscripts that were not accessed by Kosmos at runtime, while four
make novel contributions to the scientific literature.
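The abstract does not publish Kosmos's internals, but the core idea of a structured world model shared between a data-analysis agent and a literature agent can be sketched as below; every name and schema here is an illustrative assumption, not the paper's code.

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    # Shared structured state: both agents read and write it, which is what
    # lets a long run stay coherent with the original objective.
    objective: str
    hypotheses: list[str] = field(default_factory=list)
    analysis_results: list[str] = field(default_factory=list)
    literature_notes: list[str] = field(default_factory=list)

def data_analysis_agent(wm: WorldModel) -> None:
    # A real system would execute code against the dataset; here we just
    # record a finding tied to the most recent hypothesis.
    wm.analysis_results.append(f"analysis supporting: {wm.hypotheses[-1]}")

def literature_agent(wm: WorldModel) -> None:
    wm.literature_notes.append(f"papers relevant to: {wm.objective}")

def run_cycle(wm: WorldModel, cycle: int) -> None:
    wm.hypotheses.append(f"hypothesis {cycle} for {wm.objective}")
    literature_agent(wm)
    data_analysis_agent(wm)

wm = WorldModel(objective="identify metabolites associated with aging")
for cycle in range(3):
    run_cycle(wm, cycle)
print(len(wm.hypotheses), len(wm.analysis_results), len(wm.literature_notes))
```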
The University of Tokyo
Why we think this paper is great for you:
This paper provides a deeper understanding of AI Scientist systems, which is crucial for staying ahead in data career development. It helps you assess the capabilities and risks of these evolving roles.
Abstract
Understanding the current capabilities and risks of AI Scientist systems is
essential for ensuring trustworthy and sustainable AI-driven scientific
progress while preserving the integrity of the academic ecosystem. To this end,
we develop Jr. AI Scientist, a state-of-the-art autonomous AI scientist system
that mimics the core research workflow of a novice student researcher: Given
the baseline paper from the human mentor, it analyzes its limitations,
formulates novel hypotheses for improvement, validates them through rigorous
experimentation, and writes a paper with the results. Unlike previous
approaches that assume full automation or operate on small-scale code, Jr. AI
Scientist follows a well-defined research workflow and leverages modern coding
agents to handle complex, multi-file implementations, leading to scientifically
valuable contributions. For evaluation, we conducted automated assessments
using AI Reviewers, author-led evaluations, and submissions to Agents4Science,
a venue dedicated to AI-driven scientific contributions. The findings
demonstrate that Jr. AI Scientist generates papers receiving higher review
scores than existing fully automated systems. Nevertheless, we identify
important limitations from both the author evaluation and the Agents4Science
reviews, indicating the potential risks of directly applying current AI
Scientist systems and key challenges for future research. Finally, we
comprehensively report various risks identified during development. We hope
these insights will deepen understanding of current progress and risks in AI
Scientist development.
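As a rough illustration of the workflow the abstract describes (baseline paper in, draft paper out), here is a hedged sketch; each placeholder stage would be an LLM or coding-agent call in the real system, and all function names are ours.

```python
# Staged research workflow: analyze limitations -> hypothesize -> experiment -> write.
def analyze_limitations(baseline_paper: str) -> list[str]:
    return [f"limitation of {baseline_paper}: narrow evaluation"]

def formulate_hypotheses(limitations: list[str]) -> list[str]:
    return [f"hypothesis addressing '{lim}'" for lim in limitations]

def run_experiments(hypotheses: list[str]) -> dict[str, bool]:
    # A coding agent would implement and run these; assume the first validates.
    return {h: i == 0 for i, h in enumerate(hypotheses)}

def write_paper(results: dict[str, bool]) -> str:
    validated = [h for h, ok in results.items() if ok]
    return "Draft reporting: " + "; ".join(validated)

paper = write_paper(run_experiments(formulate_hypotheses(
    analyze_limitations("baseline.pdf"))))
print(paper)
```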
KFUPM, King Fahd University of Petroleum and Minerals
Why we think this paper is great for you:
Understanding responsible AI values is becoming increasingly vital for data professionals. This research will help you navigate ethical considerations and best practices in your data career.
Abstract
Large Language Models (LLMs) are increasingly employed in software
engineering tasks such as requirements elicitation, design, and evaluation,
raising critical questions regarding their alignment with human judgments on
responsible AI values. This study investigates how closely LLMs' value
preferences align with those of two human groups: a US-representative sample
and AI practitioners. We evaluate 23 LLMs across four tasks: (T1) selecting key
responsible AI values, (T2) rating their importance in specific contexts, (T3)
resolving trade-offs between competing values, and (T4) prioritizing software
requirements that embody those values. The results show that LLMs generally
align more closely with AI practitioners than with the US-representative
sample, emphasizing fairness, privacy, transparency, safety, and
accountability. However, inconsistencies appear between the values that LLMs
claim to uphold (Tasks 1-3) and the way they prioritize requirements (Task 4),
revealing gaps in faithfulness between stated and applied behavior. These
findings highlight the practical risk of relying on LLMs in requirements
engineering without human oversight and motivate the need for systematic
approaches to benchmark, interpret, and monitor value alignment in AI-assisted
software development.
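One simple way to quantify the kind of alignment the paper measures is rank correlation between an LLM's value ordering and a human group's; the paper's exact protocol may differ, and the rankings below are invented for illustration.

```python
from scipy.stats import kendalltau

values = ["fairness", "privacy", "transparency", "safety", "accountability"]
llm_ranking = ["privacy", "fairness", "safety", "transparency", "accountability"]
human_ranking = ["fairness", "privacy", "transparency", "accountability", "safety"]

def to_ranks(ranking: list[str]) -> list[int]:
    # Position of each canonical value in the given ranking.
    return [ranking.index(v) for v in values]

tau, p = kendalltau(to_ranks(llm_ranking), to_ranks(human_ranking))
print(f"Kendall tau = {tau:.2f} (p = {p:.2f})")  # closer to 1 = closer alignment
```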
Why we think this paper is great for you:
This paper demonstrates the practical application of analytics and intelligent agents in a specific industry. It offers a glimpse into how data science principles are applied in real-world scenarios, informing potential career specializations.
Abstract
Geothermal field development typically involves complex processes, each
requiring multi-disciplinary expertise. Thus, decision-making
often demands the integration of geological, geophysical, reservoir
engineering, and operational data under tight time constraints. We present
Geothermal Analytics and Intelligent Agent, or GAIA, an AI-based system for
automation and assistance in geothermal field development. GAIA consists of
three core components: GAIA Agent, GAIA Chat, and GAIA Digital Twin, or DT,
which together constitute an agentic retrieval-augmented generation (RAG)
workflow. Specifically, GAIA Agent, powered by a pre-trained large language
model (LLM), designs and manages task pipelines by autonomously querying
knowledge bases and orchestrating multi-step analyses. GAIA DT encapsulates
classical and surrogate physics models, which, combined with built-in
domain-specific subroutines and visualization tools, enable predictive modeling
of geothermal systems. Lastly, GAIA Chat serves as a web-based interface for
users, featuring a ChatGPT-like layout with additional functionalities such as
interactive visualizations, parameter controls, and in-context document
retrieval. To ensure GAIA's specialized capability for handling complex
geothermal-related tasks, we curate a benchmark test set comprising various
geothermal-related use cases, and we rigorously and continuously evaluate the
system's performance. We envision GAIA as a pioneering step toward intelligent
geothermal field development, capable of assisting human experts in
decision-making, accelerating project workflows, and ultimately enabling
automation of the development process.
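The abstract describes the agentic RAG workflow only in outline; the following is a minimal sketch of that pattern with stand-in components (keyword retrieval instead of a vector store, a linear function instead of a trained surrogate), not GAIA's actual interfaces.

```python
def retrieve(query: str, knowledge_base: dict[str, str]) -> str:
    # Naive keyword retrieval standing in for a real vector store.
    hits = [doc for key, doc in knowledge_base.items() if key in query]
    return " ".join(hits) or "no documents found"

def surrogate_model(injection_rate: float) -> float:
    # Placeholder for a trained surrogate of a reservoir simulator.
    return 150.0 - 0.8 * injection_rate  # predicted wellhead temperature, deg C

def agent(query: str, knowledge_base: dict[str, str]) -> str:
    # Agentic step: retrieve context, call a domain tool, compose an answer.
    context = retrieve(query, knowledge_base)
    prediction = surrogate_model(injection_rate=40.0)
    return f"Context: {context} | Predicted temperature: {prediction:.1f} C"

kb = {"reservoir": "Field X shows high permeability in the upper zone."}
print(agent("What does the reservoir data say?", kb))
```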
University of Washington
Why we think this paper is great for you:
This research delves into the technical aspects of training LLM agents in complex environments. It could be relevant if you are looking to specialize in advanced AI development within a data science career.
Abstract
LLM agents excel in compact environments requiring deep reasoning but remain
brittle when operating in broader, more complex contexts that demand robustness
across diverse tools and schemas. Building bespoke environments for training is
heavy, brittle, and limits progress. In this paper, we demonstrate that LLMs
can simulate realistic environment feedback without access to actual testbed
data or APIs. Inspired by this capability, we propose two frameworks:
Simia-SFT, a pipeline that synthesizes SFT data by amplifying small seed sets
into diverse trajectories in an environment-agnostic manner, and Simia-RL, a
framework that enables RL training without real environment implementations
through LLM-simulated feedback. Fine-tuning open models yields consistent
improvements across multiple benchmarks, surpassing GPT-4o and approaching
o4-mini on $\tau^2$-Bench. Together, Simia-SFT and Simia-RL enable scalable
agent training without environment engineering, replacing heavy and brittle
implementations with flexible LLM-based simulation.
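A bare-bones sketch of the simulation idea: the environment's step function is replaced by an LLM prompted with the interaction history and the agent's action. `call_llm` is a placeholder for any chat-completion client, not a real API.

```python
def call_llm(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned observation here.
    return "Observation: the booking tool returned confirmation #123."

def simulated_step(history: list[str], action: str) -> str:
    # The "environment" is just a prompt: no testbed data or APIs required.
    prompt = (
        "You are simulating a customer-support environment.\n"
        "Interaction so far:\n" + "\n".join(history) +
        f"\nAgent action: {action}\n"
        "Reply with the environment's next observation."
    )
    return call_llm(prompt)

history = ["User: please rebook my flight."]
observation = simulated_step(history, "rebook_flight(date='2025-06-01')")
print(observation)
```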
Shanghai Jiaotong University
Why we think this paper is great for you:
This paper explores the capabilities of LLM agents in problem-solving with diverse tools. It provides insight into advanced AI functionalities that could be part of a specialized data science role.
Abstract
Large language model (LLM) agents have exhibited strong problem-solving
competence across domains like research and coding. Yet, it remains
underexplored whether LLM agents can tackle compounding real-world problems
that require a diverse set of tools to complete. Given a broad, heterogeneous
tool repository, LLM agents must not only select appropriate tools based on
task planning analysis but also strategically schedule the execution order to
ensure efficiency. This paper introduces TPS-Bench to benchmark the ability of
LLM agents in solving such problems that demand Tool Planning and Scheduling.
TPS-Bench collects 200 compounding tasks of two difficulty levels, based on a
tool repository containing hundreds of model context protocol (MCP) tools. In
particular, each task is composed of multiple subtasks, such as web search, map
navigation, calendar checking, etc., and each subtask can be completed by a
basic tool. Our evaluation emphasizes both task completion rate and efficiency.
The empirical studies on popular closed-source and open-source LLMs indicate
that most models can perform reasonable tool planning, but differ in
scheduling. For example, GLM-4.5 achieves a leading task completion rate
of 64.72% through extensive sequential tool calls, but consequently suffers from
significantly longer execution times. By contrast, GPT-4o prioritizes parallel
tool calls but achieves only a 45.08% completion rate. Considering
reinforcement learning (RL) can be a viable way to improve the scheduling
efficiency without compromising performance, we perform an initial study on
Qwen3-1.7B and observe a 14% reduction in execution time alongside a 6% gain in
task completion rate from merely 100 RL training samples. Our code is
available at https://github.com/hanwenxu1/mcp-agent.
We did not find much content matching your interests, so we've included some additional topics that are popular.
Also be aware that if a topic is not present on arXiv, we won't be able to recommend it.
AI Agents
University of Washington
Abstract
LLM agents excel in compact environments requiring deep reasoning but remain
brittle when operating in broader, more complex contexts that demand robustness
across diverse tools and schemas. Building bespoke environments for training is
heavy, brittle, and limits progress. In this paper, we demonstrate that LLMs
can simulate realistic environment feedback without access to actual testbed
data or APIs. Inspired by this capability, we propose two frameworks:
Simia-SFT, a pipeline that synthesizes SFT data by amplifying small seed sets
into diverse trajectories in an environment-agnostic manner, and Simia-RL, a
framework that enables RL training without real environment implementations
through LLM-simulated feedback. Fine-tuning open models yields consistent
improvements across multiple benchmarks, surpassing GPT-4o and approaching
o4-mini on $\tau^2$-Bench. Together, Simia-SFT and Simia-RL enable scalable
agent training without environment engineering, replacing heavy and brittle
implementations with flexible LLM-based simulation.
Shanghai Jiaotong University
Abstract
Large language model (LLM) agents have exhibited strong problem-solving
competence across domains like research and coding. Yet, it remains
underexplored whether LLM agents can tackle compounding real-world problems
that require a diverse set of tools to complete. Given a broad, heterogeneous
tool repository, LLM agents must not only select appropriate tools based on
task planning analysis but also strategically schedule the execution order to
ensure efficiency. This paper introduces TPS-Bench to benchmark the ability of
LLM agents in solving such problems that demand Tool Planning and Scheduling.
TPS-Bench collects 200 compounding tasks of two difficulty levels, based on a
tool repository containing hundreds of model context protocol (MCP) tools. In
particular, each task is composed of multiple subtasks, such as web search, map
navigation, calendar checking, etc., and each subtask can be completed by a
basic tool. Our evaluation emphasizes both task completion rate and efficiency.
The empirical studies on popular closed-source and open-source LLMs indicate
that most models can perform reasonable tool planning, but differ in
scheduling. For example, GLM-4.5 achieves a leading task completion rate
of 64.72% through extensive sequential tool calls, but consequently suffers from
significantly longer execution times. By contrast, GPT-4o prioritizes parallel
tool calls but achieves only a 45.08% completion rate. Considering
reinforcement learning (RL) can be a viable way to improve the scheduling
efficiency without compromising performance, we perform an initial study on
Qwen3-1.7B and observe a 14% reduction in execution time alongside a 6% gain in
task completion rate from merely 100 RL training samples. Our code is
available at https://github.com/hanwenxu1/mcp-agent.
AI and Society
KFUPM, King Fahd University of Petroleum and Minerals
Abstract
Large Language Models (LLMs) are increasingly employed in software
engineering tasks such as requirements elicitation, design, and evaluation,
raising critical questions regarding their alignment with human judgments on
responsible AI values. This study investigates how closely LLMs' value
preferences align with those of two human groups: a US-representative sample
and AI practitioners. We evaluate 23 LLMs across four tasks: (T1) selecting key
responsible AI values, (T2) rating their importance in specific contexts, (T3)
resolving trade-offs between competing values, and (T4) prioritizing software
requirements that embody those values. The results show that LLMs generally
align more closely with AI practitioners than with the US-representative
sample, emphasizing fairness, privacy, transparency, safety, and
accountability. However, inconsistencies appear between the values that LLMs
claim to uphold (Tasks 1-3) and the way they prioritize requirements (Task 4),
revealing gaps in faithfulness between stated and applied behavior. These
findings highlight the practical risk of relying on LLMs in requirements
engineering without human oversight and motivate the need for systematic
approaches to benchmark, interpret, and monitor value alignment in AI-assisted
software development.
The University of Tokyo
Abstract
Understanding the current capabilities and risks of AI Scientist systems is
essential for ensuring trustworthy and sustainable AI-driven scientific
progress while preserving the integrity of the academic ecosystem. To this end,
we develop Jr. AI Scientist, a state-of-the-art autonomous AI scientist system
that mimics the core research workflow of a novice student researcher: Given
the baseline paper from the human mentor, it analyzes its limitations,
formulates novel hypotheses for improvement, validates them through rigorous
experimentation, and writes a paper with the results. Unlike previous
approaches that assume full automation or operate on small-scale code, Jr. AI
Scientist follows a well-defined research workflow and leverages modern coding
agents to handle complex, multi-file implementations, leading to scientifically
valuable contributions. For evaluation, we conducted automated assessments
using AI Reviewers, author-led evaluations, and submissions to Agents4Science,
a venue dedicated to AI-driven scientific contributions. The findings
demonstrate that Jr. AI Scientist generates papers receiving higher review
scores than existing fully automated systems. Nevertheless, we identify
important limitations from both the author evaluation and the Agents4Science
reviews, indicating the potential risks of directly applying current AI
Scientist systems and key challenges for future research. Finally, we
comprehensively report various risks identified during development. We hope
these insights will deepen understanding of current progress and risks in AI
Scientist development.
AGI: Artificial General Intelligence
Abstract
Geothermal field development typically involves complex processes, each
requiring multi-disciplinary expertise. Thus, decision-making
often demands the integration of geological, geophysical, reservoir
engineering, and operational data under tight time constraints. We present
Geothermal Analytics and Intelligent Agent, or GAIA, an AI-based system for
automation and assistance in geothermal field development. GAIA consists of
three core components: GAIA Agent, GAIA Chat, and GAIA Digital Twin, or DT,
which together constitute an agentic retrieval-augmented generation (RAG)
workflow. Specifically, GAIA Agent, powered by a pre-trained large language
model (LLM), designs and manages task pipelines by autonomously querying
knowledge bases and orchestrating multi-step analyses. GAIA DT encapsulates
classical and surrogate physics models, which, combined with built-in
domain-specific subroutines and visualization tools, enable predictive modeling
of geothermal systems. Lastly, GAIA Chat serves as a web-based interface for
users, featuring a ChatGPT-like layout with additional functionalities such as
interactive visualizations, parameter controls, and in-context document
retrieval. To ensure GAIA's specialized capability for handling complex
geothermal-related tasks, we curate a benchmark test set comprising various
geothermal-related use cases, and we rigorously and continuously evaluate the
system's performance. We envision GAIA as a pioneering step toward intelligent
geothermal field development, capable of assisting human experts in
decision-making, accelerating project workflows, and ultimately enabling
automation of the development process.
Deep Learning
City St George's, University of London
Abstract
Artificial Intelligence (AI) is a powerful new language of science as
evidenced by recent Nobel Prizes in chemistry and physics that recognized
contributions to AI applied to those areas. Yet, this new language lacks
semantics, which makes AI's scientific discoveries unsatisfactory at best. With
the purpose of uncovering new facts but also improving our understanding of the
world, AI-based science requires formalization through a framework capable of
translating insight into comprehensible scientific knowledge. In this paper, we
argue that logic offers an adequate framework. In particular, we use logic in a
neurosymbolic framework to offer a much-needed semantics for deep learning, the
neural network-based technology of current AI. Deep learning and neurosymbolic
AI lack a general set of conditions to ensure that desirable properties are
satisfied. Instead, there is a plethora of encoding and knowledge extraction
approaches designed for particular cases. To rectify this, we introduced a
framework for semantic encoding, making explicit the mapping between neural
networks and logic, and characterizing the common ingredients of the various
existing approaches. In this paper, we succinctly describe and exemplify how
logical semantics and neural networks are linked through this framework; we
review some of the most prominent approaches and techniques developed for
neural encoding and knowledge extraction, provide a formal definition of our
framework, and discuss some of the difficulties of identifying a semantic
encoding in practice in light of analogous problems in the philosophy of mind.
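For a flavor of what a semantic encoding means in practice, here is the textbook mapping of a propositional formula onto a single threshold unit, so that the network's input-output behavior coincides with the formula's truth table. It is a deliberately tiny instance of the general idea, not the paper's framework itself.

```python
def and_unit(a: int, b: int) -> int:
    # Weights (1, 1) and bias -1.5 encode A AND B: the unit fires
    # only when both inputs are true.
    return int(1 * a + 1 * b - 1.5 > 0)

# The unit's behavior matches the truth table of A AND B exactly,
# which is what makes the encoding "semantic".
for a in (0, 1):
    for b in (0, 1):
        assert and_unit(a, b) == (a and b)
print("the threshold unit reproduces the truth table of A AND B")
```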
VISTAMILK, Dublin City University
Abstract
Grasslands, constituting the world's second-largest terrestrial carbon sink,
play a crucial role in biodiversity and the regulation of the carbon cycle.
Currently, the Irish dairy sector, a significant economic contributor, grapples
with challenges related to profitability and sustainability. Presently, grass
growth forecasting relies on impractical mechanistic models. In response, we
propose deep learning models tailored for univariate datasets, presenting
cost-effective alternatives. Notably, a temporal convolutional network designed
for forecasting Perennial Ryegrass growth in Cork exhibits high performance,
leveraging historical grass height data with RMSE of 2.74 and MAE of 3.46.
Validation across a comprehensive dataset spanning 1,757 weeks over 34 years
provides insights into optimal model configurations. This study enhances our
understanding of model behavior, thereby improving reliability in grass growth
forecasting and contributing to the advancement of sustainable dairy farming
practices.
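For readers who want the two reported metrics precisely, here they are computed on invented numbers (the Cork dataset is not reproduced here); note that for any fixed set of predictions, MAE never exceeds RMSE.

```python
import math

def rmse(y_true: list[float], y_pred: list[float]) -> float:
    # Root mean squared error: penalizes large misses more heavily.
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def mae(y_true: list[float], y_pred: list[float]) -> float:
    # Mean absolute error: average size of the miss.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

heights_true = [52.0, 60.0, 55.0, 58.0]   # illustrative weekly grass heights
heights_pred = [50.0, 63.0, 54.0, 59.0]
print(f"RMSE = {rmse(heights_true, heights_pred):.2f}")
print(f"MAE  = {mae(heights_true, heights_pred):.2f}")
```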