Abstract
Large language model (LLM) and agent techniques for data analysis (a.k.a.
LLM/Agent-as-Data-Analyst) have demonstrated substantial impact in both
academia and industry. Compared with traditional rule-based or small-model-based
approaches, (agentic) LLMs enable complex data understanding, natural
language interfaces, semantic analysis functions, and autonomous pipeline
orchestration. The technical evolution further distills five key design goals
for intelligent data analysis agents, namely semantic-aware design,
modality-hybrid integration, autonomous pipelines, tool-augmented workflows,
and support for open-world tasks. From a modality perspective, we review
LLM-based techniques for (i) structured data (e.g., table question answering
for relational data and NL2GQL for graph data), (ii) semi-structured data
(e.g., markup language understanding and semi-structured table modeling),
(iii) unstructured data (e.g., chart understanding, document understanding, and
programming language vulnerability detection), and (iv) heterogeneous data (e.g.,
data retrieval and modality alignment for data lakes). Finally, we outline the
remaining challenges and propose several insights and practical directions for
advancing LLM/Agent-powered data analysis.
Zhejiang University, ZTE
Abstract
In commercial systems, a pervasive requirement for automatic data preparation
(ADP) is to transfer relational data from disparate sources to targets with
standardized schema specifications. Previous methods rely on labor-intensive
supervision signals or target table data access permissions, limiting their
usage in real-world scenarios. To tackle these challenges, we propose an
effective end-to-end ADP framework MontePrep, which enables training-free
pipeline synthesis with zero target-instance requirements. MontePrep is
formulated as an open-source large language model (LLM) powered tree-structured
search problem. It consists of three pivotal components, i.e., a data preparation
action sandbox (DPAS), a fundamental pipeline generator (FPG), and an
execution-aware pipeline optimizer (EPO). We first introduce DPAS, a
lightweight action sandbox, to navigate the search-based pipeline generation.
The design of DPAS circumvents exploration of infeasible pipelines. Then, we
present FPG to build executable DP pipelines incrementally, which explores the
predefined action sandbox by the LLM-powered Monte Carlo Tree Search.
Furthermore, we propose EPO, which executes candidate pipelines from sources to
targets and uses the results to assess the reliability of the pipelines generated
by FPG. In this way, unreasonable pipelines are eliminated, thus facilitating the
search process from both efficiency and effectiveness perspectives. Extensive
experimental results demonstrate the superiority of MontePrep with significant
improvement against five state-of-the-art competitors.
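
Below is a minimal sketch of the LLM-guided Monte Carlo Tree Search loop described above. The action names, `propose_actions` (standing in for the LLM proposing sandbox actions), `evaluate` (standing in for EPO's execution-aware scoring), and the `uct` helper are illustrative assumptions, not MontePrep's actual interfaces.

```python
import math
import random
from dataclasses import dataclass, field

# Hypothetical action sandbox (stand-in for DPAS); the real action set is not given in the abstract.
ACTION_SANDBOX = ["select_columns", "rename_columns", "cast_types", "join_tables", "fill_nulls"]
MAX_DEPTH = 4  # maximum pipeline length explored in this toy example


@dataclass
class Node:
    pipeline: tuple                      # actions chosen so far
    parent: "Node | None" = None
    children: list = field(default_factory=list)
    untried: list = field(default_factory=list)
    visits: int = 0
    value: float = 0.0


def propose_actions(pipeline):
    """Stand-in for the LLM proposing feasible next actions from the sandbox."""
    return [] if len(pipeline) >= MAX_DEPTH else list(ACTION_SANDBOX)


def evaluate(pipeline):
    """Stand-in for EPO: execute the pipeline source-to-target and return a reliability score in [0, 1]."""
    return random.random()


def uct(child, parent, c=1.4):
    """UCB1 score balancing exploitation (mean value) and exploration (visit counts)."""
    return child.value / child.visits + c * math.sqrt(math.log(parent.visits) / child.visits)


def mcts(iterations=200):
    root = Node(pipeline=(), untried=propose_actions(()))
    for _ in range(iterations):
        node = root
        # Selection: descend through fully expanded nodes by UCT.
        while not node.untried and node.children:
            node = max(node.children, key=lambda ch: uct(ch, node))
        # Expansion: attach one untried action as a new child.
        if node.untried:
            action = node.untried.pop()
            new_pipeline = node.pipeline + (action,)
            child = Node(pipeline=new_pipeline, parent=node,
                         untried=propose_actions(new_pipeline))
            node.children.append(child)
            node = child
        # Simulation / execution-aware scoring (EPO stand-in).
        reward = evaluate(node.pipeline)
        # Backpropagation.
        while node is not None:
            node.visits += 1
            node.value += reward
            node = node.parent
    # Return the most-visited first-level pipeline prefix.
    best = max(root.children, key=lambda ch: ch.visits)
    return best.pipeline


if __name__ == "__main__":
    print(mcts())
```

In this sketch, infeasible branches are avoided by restricting expansion to actions returned by the sandbox proposer, and execution feedback feeds directly into the backed-up reward, mirroring the roles the abstract assigns to DPAS and EPO.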
AI Insights - DPAS sandbox prunes infeasible pipelines before LLM exploration, shrinking search space.
- FPG builds executable DP pipelines incrementally using LLM-guided Monte Carlo Tree Search.
- EPO evaluates source-to-target runs, filtering unreliable pipelines and accelerating convergence.
- Training-free synthesis with zero target-instance data makes it ideal for privacy-restricted settings.
- Monte Carlo Tree Search: a stochastic tree search that balances exploration and exploitation via random sampling.
- For deeper context, read "Chain-of-Thought Prompting Elicits Reasoning in Large Language Models" (NeurIPS 2022).