🎯 Top Personalized Recommendations
LAMSADE, Univ ParisDaup
Why we think this paper is great for you:
This paper directly addresses in-memory indexing and querying within data preparation pipelines, which is highly relevant to your interest in efficient data management and data warehousing. It explores methods for understanding and utilizing data provenance, a key aspect of data quality and analysis.
Abstract
Data provenance has numerous applications in the context of data preparation
pipelines. It can be used for debugging faulty pipelines, interpreting results,
verifying fairness, and identifying data quality issues, which may affect the
sources feeding the pipeline execution. In this paper, we present an indexing
mechanism to efficiently capture and query pipeline provenance. Our solution
leverages tensors to capture fine-grained provenance of data processing
operations, using minimal memory. In addition to record-level lineage
relationships, we provide finer granularity at the attribute level. This is
achieved by augmenting tensors, which capture retrospective provenance, with
prospective provenance information, drawing connections between input and
output schemas of data processing operations. We demonstrate how these two
types of provenance (retrospective and prospective) can be combined to answer a
broad range of provenance queries efficiently, and show effectiveness through
evaluation exercises using both real and synthetic data.
AI Summary
- The paper introduces a novel in-memory mechanism leveraging sparse binary tensors to efficiently capture fine-grained provenance (record and attribute-value level) in data preparation pipelines, enabling dynamic exploration during development. [3]
- Attribute-level provenance is achieved by augmenting record-level tensors with minimal metadata (bitsets) that specify input/output column correspondences and data manipulation types, avoiding the high overhead of explicitly tracking every attribute value. [3]
- The system supports a broad range of record-based and attribute-value-based provenance queries (forward, backward, co-contributory, co-dependency, how-provenance) by leveraging tensor slicing, projection, and Einstein summation. [3]
- Provenance of a Data Processing Operation: Encoded using an order n+1 binary tensor T(DP), where T(DP)(o, i1, ..., in) = 1 if output record DO_o is provenance-wise derived from input records (DI1_i1, ..., DIn_in). [3]
- Sparse Binary Tensors: N-dimensional arrays mapping multi-indices to boolean values (0 or 1), used to encode derivation relationships between data records with minimal memory. [3]
- Optimized Tensor Representation: A rooted acyclic graph structure for sparse binary tensors, where nodes represent datasets, record indices, and full provenance triples, designed for efficient lineage traversal. [3]
- A specialized rooted acyclic graph structure is designed for sparse binary tensors, significantly reducing lineage query processing time from O(T) (scanning) to O(d) (tree height, typically 3 accesses) compared to standard sparse tensor formats. [2]
- The solution employs a hybrid provenance capture strategy, using observation-based methods for operations preserving data frame indices and active capture for operations that do not, balancing intrusiveness and computational cost. [2]
- Data processing operations are categorized into 'localized' (recomputable from provenance-related inputs) and 'contextual' (requiring full dataset context), allowing selective materialization of input datasets to balance memory usage and recomputation efficiency. [2]
- Empirical evaluations demonstrate the solution's effectiveness in minimizing storage space for provenance encoding, reducing processing overhead during capture, and accelerating provenance query execution. [2]
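The tensor encoding in the summary above can be made concrete with a small sketch. This is purely illustrative, not the paper's code: a binary tensor T[o, i] records that output record o derives from input record i, backward and forward lineage reduce to slicing, and end-to-end pipeline lineage composes via Einstein summation.

```python
# Illustrative sketch (not the paper's implementation) of record-level
# provenance as sparse binary tensors, queried by slicing and einsum.
import numpy as np

# Toy unary operation: 3 input records -> 2 output records.
# T1[o, i] = 1 iff output record o is derived from input record i.
T1 = np.zeros((2, 3), dtype=np.int8)
T1[0, 0] = 1   # output 0 derives from input 0
T1[0, 1] = 1   # ... and from input 1 (e.g. an aggregation)
T1[1, 2] = 1   # output 1 derives from input 2

# Backward lineage of output record 0: slice along the output axis.
backward_0 = np.nonzero(T1[0])[0]      # inputs contributing to output 0

# Forward lineage of input record 2: slice along the input axis.
forward_2 = np.nonzero(T1[:, 2])[0]    # outputs derived from input 2

# A second operation consuming the first one's output (2 -> 2 records).
T2 = np.array([[1, 0],
               [1, 1]], dtype=np.int8)

# End-to-end lineage across the two-step pipeline via Einstein summation:
# T12[o, i] > 0 iff final output o transitively derives from source input i.
T12 = np.einsum('oj,ji->oi', T2, T1)
```

Here `backward_0` is `[0, 1]` and `forward_2` is `[1]`; a real system would use a sparse tensor format (as the paper's rooted-acyclic-graph structure does) rather than dense arrays.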
University of Toronto
Why we think this paper is great for you:
You will find this paper highly relevant as it delves into applying relational concepts and analytical queries to unstructured data, a critical challenge in modern database design. It offers insights into structuring diverse data types for effective analysis.
Abstract
Unstructured data is pervasive, but analytical queries demand structured
representations, creating a significant extraction challenge. Existing methods
like RAG lack schema awareness and struggle with cross-document alignment,
leading to high error rates. We propose ReDD (Relational Deep Dive), a
framework that dynamically discovers query-specific schemas, populates
relational tables, and ensures error-aware extraction with provable guarantees.
ReDD features a two-stage pipeline: (1) Iterative Schema Discovery (ISD)
identifies minimal, joinable schemas tailored to each query, and (2) Tabular
Data Population (TDP) extracts and corrects data using lightweight classifiers
trained on LLM hidden states. A main contribution of ReDD is SCAPE, a
statistically calibrated method for error detection with coverage guarantees,
and SCAPE-HYB, a hybrid approach that optimizes the trade-off between accuracy
and human correction costs. Experiments across diverse datasets demonstrate
ReDD's effectiveness, reducing data extraction errors from up to 30% to below
1% while maintaining high schema completeness (100% recall) and precision.
ReDD's modular design enables fine-grained control over accuracy-cost
trade-offs, making it a robust solution for high-stakes analytical queries over
unstructured corpora.
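The abstract's "statistically calibrated method for error detection with coverage guarantees" is reminiscent of split-conformal calibration. The sketch below shows that general idea only; the function names, scores, and procedure are illustrative assumptions, not ReDD's actual SCAPE algorithm.

```python
# Hedged sketch of split-conformal-style calibration for error detection.
# All names here are invented for illustration, not ReDD's API.
import math

def calibrate_threshold(cal_scores, alpha=0.1):
    """Pick a score threshold from a held-out calibration set so that,
    under exchangeability, extractions scoring above it are flagged with
    roughly (1 - alpha) coverage of the true errors."""
    n = len(cal_scores)
    # Standard conformal quantile index: ceil((n + 1) * (1 - alpha)).
    k = min(math.ceil((n + 1) * (1 - alpha)), n)
    return sorted(cal_scores)[k - 1]

# Calibration set: nonconformity scores (e.g. 1 - classifier confidence)
# for held-out extractions whose correctness is known.
cal = [0.02, 0.05, 0.07, 0.11, 0.15, 0.22, 0.30, 0.41, 0.55, 0.80]
tau = calibrate_threshold(cal, alpha=0.2)

# At query time: route an extraction to human correction iff it scores
# above the calibrated threshold.
flag = lambda score: score > tau
```

A hybrid scheme in the spirit of SCAPE-HYB would then tune `alpha` to trade residual error rate against the volume of flagged items sent to humans.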
Centrum Wiskunde & Inform
Why we think this paper is great for you:
This paper explores natural language interfaces for tabular data analysis and the nuances of queries, directly aligning with your interest in SQL and interacting with relational databases. It provides valuable perspectives on improving query specification and understanding.
Abstract
Natural language interfaces to tabular data must handle ambiguities inherent
to queries. Instead of treating ambiguity as a deficiency, we reframe it as a
feature of cooperative interaction, where the responsibility of query
specification is shared among the user and the system. We develop a principled
framework distinguishing cooperative queries, i.e., queries that yield a
resolvable interpretation, from uncooperative queries that cannot be resolved.
Applying the framework to evaluations for tabular question answering and
analysis, we analyze the queries in 15 popular datasets and observe an
uncontrolled mixing of query types that is adequate neither for evaluating a
system's execution accuracy nor for evaluating its interpretation capabilities.
Our framework and analysis of queries shift the perspective from fixing ambiguity
to embracing cooperation in resolving queries. This reflection enables more
informed design and evaluation for natural language interfaces for tabular
data, for which we outline implications and directions for future research.
Why we think this paper is great for you:
This paper focuses on integrating diverse data for analytical purposes, which is central to effective data warehousing and database design practices. It showcases how different data sources can be unified to support complex decision-making.
Abstract
Geothermal field development typically involves complex processes that
require multi-disciplinary expertise in each process. Thus, decision-making
often demands the integration of geological, geophysical, reservoir
engineering, and operational data under tight time constraints. We present
Geothermal Analytics and Intelligent Agent, or GAIA, an AI-based system for
automation and assistance in geothermal field development. GAIA consists of
three core components: GAIA Agent, GAIA Chat, and GAIA Digital Twin, or DT,
which together constitute an agentic retrieval-augmented generation (RAG)
workflow. Specifically, GAIA Agent, powered by a pre-trained large language
model (LLM), designs and manages task pipelines by autonomously querying
knowledge bases and orchestrating multi-step analyses. GAIA DT encapsulates
classical and surrogate physics models, which, combined with built-in
domain-specific subroutines and visualization tools, enable predictive modeling
of geothermal systems. Lastly, GAIA Chat serves as a web-based interface for
users, featuring a ChatGPT-like layout with additional functionalities such as
interactive visualizations, parameter controls, and in-context document
retrieval. To ensure GAIA's specialized capability for handling complex
geothermal-related tasks, we curate a benchmark test set comprising various
geothermal-related use cases, and we rigorously and continuously evaluate the
system's performance. We envision GAIA as a pioneering step toward intelligent
geothermal field development, capable of assisting human experts in
decision-making, accelerating project workflows, and ultimately enabling
automation of the development process.
University of Illinois at
Why we think this paper is great for you:
Given your interest in various database types, this paper on analytical queries for unstructured data will be insightful. It addresses the significant challenge of extracting meaningful information from data that doesn't conform to traditional structures.
Abstract
Unstructured data, in the form of text, images, video, and audio, is produced
at exponentially higher rates. In tandem, machine learning (ML) methods have
become increasingly powerful at analyzing unstructured data. Modern ML methods
can now detect objects in images, understand actions in videos, and even
classify complex legal texts based on legal intent. Combined, these trends make
it increasingly feasible for analysts and researchers to automatically
understand the "real world." However, there are major challenges in deploying
these techniques: 1) executing queries efficiently given the expense of ML
methods, 2) expressing queries over bespoke forms of data, and 3) handling
errors in ML methods.
In this monograph, we discuss challenges and advances in data management
systems for unstructured data using ML, with a particular focus on video
analytics. Using ML to answer queries introduces new challenges. First, even
turning user intent into queries can be challenging: it is not obvious how to
express a query of the form "select instances of cars turning left." Second, ML
models can be orders of magnitude more expensive compared to processing
traditional structured data. Third, ML models and the methods to accelerate
analytics with ML models can be error-prone.
Recent work in the data management community has aimed to address all of
these challenges. Users can now express queries via user-defined functions,
opaquely through standard structured schemas, and even by providing examples.
Given a query, recent work focuses on optimizing queries by approximating
expensive "gold" methods with varying levels of guarantees. Finally, to handle
errors in ML models, recent work has focused on applying outlier and drift
detection to data analytics with ML.
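The monograph's point about "approximating expensive gold methods" is often realized as a proxy cascade: a cheap model filters confidently, and only uncertain items pay for the expensive gold model. The sketch below is a generic illustration of that pattern; the function names, thresholds, and toy models are invented, not any specific system's API.

```python
# Illustrative proxy-cascade pattern for expensive ML queries.
# All names and numbers are stand-ins for demonstration.

def cascade(items, proxy, oracle, lo=0.2, hi=0.8):
    """Return items judged positive, calling the expensive oracle only
    when the cheap proxy's score falls in the uncertain band (lo, hi)."""
    positives, oracle_calls = [], 0
    for item in items:
        p = proxy(item)
        if p >= hi:                 # proxy is confident: accept outright
            positives.append(item)
        elif p > lo:                # uncertain: pay for the gold model
            oracle_calls += 1
            if oracle(item):
                positives.append(item)
        # p <= lo: confidently reject without touching the oracle
    return positives, oracle_calls

# Toy usage: scores stand in for "probability this frame contains a car".
scores = {1: 0.95, 2: 0.50, 3: 0.05, 4: 0.85, 5: 0.30}
proxy = scores.get
oracle = lambda frame: frame == 2   # pretend gold model
found, calls = cascade(scores, proxy, oracle)
```

Here only frames 2 and 5 reach the oracle; the guarantees discussed in the monograph come from choosing the band statistically rather than by hand.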
University of Washington
Why we think this paper is great for you:
This paper discusses managing diverse schemas in complex environments, a topic that touches upon the challenges of database design and integration. It offers a perspective on how different data structures interact within a system.
Abstract
LLM agents excel in compact environments requiring deep reasoning but remain
brittle when operating in broader, more complex contexts that demand robustness
across diverse tools and schemas. Building bespoke environments for training is
heavy, brittle, and limits progress. In this paper, we demonstrate that LLMs
can simulate realistic environment feedback without access to actual testbed
data or APIs. Inspired by this capability, we propose two frameworks:
Simia-SFT, a pipeline that synthesizes SFT data by amplifying small seed sets
into diverse trajectories in an environment-agnostic manner, and Simia-RL, a
framework that enables RL training without real environment implementations
through LLM-simulated feedback. Fine-tuning open models yields consistent
improvements across multiple benchmarks, surpassing GPT-4o and approaching
o4-mini on $\tau^2$-Bench. Together, Simia-SFT and Simia-RL enable scalable
agent training without environment engineering, replacing heavy and brittle
implementations with flexible LLM-based simulation.
Edison Scientific Inc, 1
Why we think this paper is great for you:
This paper explores autonomous scientific discovery driven by data, highlighting the importance of robust data analysis and management in research. It provides a broader context for how data-centric approaches are applied to complex problems.
Abstract
Data-driven scientific discovery requires iterative cycles of literature
search, hypothesis generation, and data analysis. Substantial progress has been
made towards AI agents that can automate scientific research, but all such
agents remain limited in the number of actions they can take before losing
coherence, thus limiting the depth of their findings. Here we present Kosmos,
an AI scientist that automates data-driven discovery. Given an open-ended
objective and a dataset, Kosmos runs for up to 12 hours performing cycles of
parallel data analysis, literature search, and hypothesis generation before
synthesizing discoveries into scientific reports. Unlike prior systems, Kosmos
uses a structured world model to share information between a data analysis
agent and a literature search agent. The world model enables Kosmos to
coherently pursue the specified objective over 200 agent rollouts, collectively
executing an average of 42,000 lines of code and reading 1,500 papers per run.
Kosmos cites all statements in its reports with code or primary literature,
ensuring its reasoning is traceable. Independent scientists found 79.4% of
statements in Kosmos reports to be accurate, and collaborators reported that a
single 20-cycle Kosmos run performed the equivalent of 6 months of their own
research time on average. Furthermore, collaborators reported that the number
of valuable scientific findings generated scales linearly with Kosmos cycles
(tested up to 20 cycles). We highlight seven discoveries made by Kosmos that
span metabolomics, materials science, neuroscience, and statistical genetics.
Three discoveries independently reproduce findings from preprinted or
unpublished manuscripts that were not accessed by Kosmos at runtime, while four
make novel contributions to the scientific literature.
AI and Society
KFUPM King Fahd Univeris
Abstract
Large Language Models (LLMs) are increasingly employed in software
engineering tasks such as requirements elicitation, design, and evaluation,
raising critical questions regarding their alignment with human judgments on
responsible AI values. This study investigates how closely LLMs' value
preferences align with those of two human groups: a US-representative sample
and AI practitioners. We evaluate 23 LLMs across four tasks: (T1) selecting key
responsible AI values, (T2) rating their importance in specific contexts, (T3)
resolving trade-offs between competing values, and (T4) prioritizing software
requirements that embody those values. The results show that LLMs generally
align more closely with AI practitioners than with the US-representative
sample, emphasizing fairness, privacy, transparency, safety, and
accountability. However, inconsistencies appear between the values that LLMs
claim to uphold (Tasks 1-3) and the way they prioritize requirements (Task 4),
revealing gaps in faithfulness between stated and applied behavior. These
findings highlight the practical risk of relying on LLMs in requirements
engineering without human oversight and motivate the need for systematic
approaches to benchmark, interpret, and monitor value alignment in AI-assisted
software development.
The University of Tokyo
Abstract
Understanding the current capabilities and risks of AI Scientist systems is
essential for ensuring trustworthy and sustainable AI-driven scientific
progress while preserving the integrity of the academic ecosystem. To this end,
we develop Jr. AI Scientist, a state-of-the-art autonomous AI scientist system
that mimics the core research workflow of a novice student researcher: Given
the baseline paper from the human mentor, it analyzes its limitations,
formulates novel hypotheses for improvement, validates them through rigorous
experimentation, and writes a paper with the results. Unlike previous
approaches that assume full automation or operate on small-scale code, Jr. AI
Scientist follows a well-defined research workflow and leverages modern coding
agents to handle complex, multi-file implementations, leading to scientifically
valuable contributions. For evaluation, we conducted automated assessments
using AI Reviewers, author-led evaluations, and submissions to Agents4Science,
a venue dedicated to AI-driven scientific contributions. The findings
demonstrate that Jr. AI Scientist generates papers receiving higher review
scores than existing fully automated systems. Nevertheless, we identify
important limitations from both the author evaluation and the Agents4Science
reviews, indicating the potential risks of directly applying current AI
Scientist systems and key challenges for future research. Finally, we
comprehensively report various risks identified during development. We hope
these insights will deepen understanding of current progress and risks in AI
Scientist development.
Deep Learning
City St Georges, Univer
Abstract
Artificial Intelligence (AI) is a powerful new language of science as
evidenced by recent Nobel Prizes in chemistry and physics that recognized
contributions to AI applied to those areas. Yet, this new language lacks
semantics, which makes AI's scientific discoveries unsatisfactory at best. With
the purpose of uncovering new facts but also improving our understanding of the
world, AI-based science requires formalization through a framework capable of
translating insight into comprehensible scientific knowledge. In this paper, we
argue that logic offers an adequate framework. In particular, we use logic in a
neurosymbolic framework to offer a much needed semantics for deep learning, the
neural network-based technology of current AI. Deep learning and neurosymbolic
AI lack a general set of conditions to ensure that desirable properties are
satisfied. Instead, there is a plethora of encoding and knowledge extraction
approaches designed for particular cases. To rectify this, we introduced a
framework for semantic encoding, making explicit the mapping between neural
networks and logic, and characterizing the common ingredients of the various
existing approaches. In this paper, we describe succinctly and exemplify how
logical semantics and neural networks are linked through this framework, we
review some of the most prominent approaches and techniques developed for
neural encoding and knowledge extraction, provide a formal definition of our
framework, and discuss some of the difficulties of identifying a semantic
encoding in practice in light of analogous problems in the philosophy of mind.
VISTAMILK, Dublin City Un
Abstract
Grasslands, constituting the world's second-largest terrestrial carbon sink,
play a crucial role in biodiversity and the regulation of the carbon cycle.
Currently, the Irish dairy sector, a significant economic contributor, grapples
with challenges related to profitability and sustainability. Presently, grass
growth forecasting relies on impractical mechanistic models. In response, we
propose deep learning models tailored for univariate datasets, presenting
cost-effective alternatives. Notably, a temporal convolutional network designed
for forecasting Perennial Ryegrass growth in Cork exhibits high performance,
leveraging historical grass height data with RMSE of 2.74 and MAE of 3.46.
Validation across a comprehensive dataset spanning 1,757 weeks over 34 years
provides insights into optimal model configurations. This study enhances our
understanding of model behavior, thereby improving reliability in grass growth
forecasting and contributing to the advancement of sustainable dairy farming
practices.
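The temporal convolutional network mentioned above is built from causal convolutions, in which each output depends only on current and past inputs. The sketch below shows just that building block with synthetic numbers; the study's actual architecture, dilations, and training setup are not reproduced here.

```python
# Toy causal (temporal) convolution, the core operation of a TCN.
# Data and weights are synthetic, not the study's grass-height dataset.
import numpy as np

def causal_conv(x, w, dilation=1):
    """y[t] depends only on x[t], x[t-d], x[t-2d], ...: no future leakage."""
    k = len(w)
    pad = (k - 1) * dilation
    xp = np.concatenate([np.zeros(pad), x])   # left-pad so output is causal
    return np.array([sum(w[j] * xp[t + pad - j * dilation] for j in range(k))
                     for t in range(len(x))])

# Hypothetical weekly grass-height series and a 3-tap filter
# (w[0] weights the current step, w[1] the previous, w[2] the one before).
x = np.array([10., 12., 13., 15., 14., 16., 18.])
w = np.array([0.5, 0.3, 0.2])
y = causal_conv(x, w, dilation=1)
```

Stacking such layers with growing `dilation` lets the receptive field cover long histories cheaply, which is what makes TCNs attractive for multi-year univariate series like this one.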
We did not find much content matching your interests, so we've included some additional topics that are popular.
Also be aware that if a topic is not present on arXiv, we won't be able to recommend it.
AI Agents
Shanghai Jiaotong Univer
Abstract
Large language model (LLM) agents have exhibited strong problem-solving
competence across domains like research and coding. Yet, it remains
underexplored whether LLM agents can tackle compounding real-world problems
that require a diverse set of tools to complete. Given a broad, heterogeneous
tool repository, LLM agents must not only select appropriate tools based on
task planning analysis but also strategically schedule the execution order to
ensure efficiency. This paper introduces TPS-Bench to benchmark the ability of
LLM agents in solving such problems that demand Tool Planning and Scheduling.
TPS-Bench collects 200 compounding tasks of two difficulty levels, based on a
tool repository containing hundreds of model context protocol (MCP) tools. In
particular, each task is composed of multiple subtasks, such as web search, map
navigation, calendar checking, etc., and each subtask can be completed by a
basic tool. Our evaluation emphasizes both task completion rate and efficiency.
The empirical studies on popular closed-source and open-source LLMs indicate
that most models can perform reasonable tool planning, but differ in
scheduling. For example, GLM-4.5 achieves a leading task completion rate
of 64.72% through extensive sequential tool calls, but consequently suffers
from significantly longer execution times. By contrast, GPT-4o prioritizes parallel
tool calls but achieves only a 45.08% completion rate. Considering
reinforcement learning (RL) can be a viable way to improve the scheduling
efficiency without compromising performance, we perform an initial study on
Qwen3-1.7B and witness a 14% reduction in execution time alongside a 6% gain in
task completion rate based on merely 100 RL training samples. Our code is
available at https://github.com/hanwenxu1/mcp-agent.
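The sequential-versus-parallel scheduling trade-off that TPS-Bench measures can be sketched with `asyncio`: independent subtasks cost the maximum of their latencies when run concurrently, but the sum when run one after another. Tool names and delays below are made up for illustration.

```python
# Minimal sketch of tool scheduling: parallel vs. sequential execution.
# Tools and latencies are invented, not TPS-Bench's actual MCP tools.
import asyncio
import time

async def tool(name, seconds):
    """Stand-in for a basic tool call (web search, map, calendar check)."""
    await asyncio.sleep(seconds)
    return f"{name}: done"

async def sequential():
    # One call after another: total latency is the sum of the delays.
    out = []
    for name in ("search", "map", "calendar"):
        out.append(await tool(name, 0.1))
    return out

async def parallel():
    # Independent subtasks scheduled concurrently: latency ~ the max delay.
    return list(await asyncio.gather(
        tool("search", 0.1), tool("map", 0.1), tool("calendar", 0.1)))

start = time.perf_counter()
results = asyncio.run(parallel())
elapsed = time.perf_counter() - start   # roughly 0.1 s instead of ~0.3 s
```

An agent that recognizes which subtasks are independent can schedule them this way, which is exactly the capability on which the benchmarked models diverge.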