Zhejiang University, The
Abstract
The rise of LLM-powered agents is driving a fundamental transformation in
services computing: from static, request-response functions to dynamic,
goal-oriented, and autonomous multi-agent ecosystems. In response to this
shift, we introduce Agentic Service Computing (ASC), a new paradigm that
reimagines services as intelligent, self-adaptive, and socially embedded
entities. This comprehensive survey presents a lifecycle-driven framework for
ASC, structured around four core phases: Design, Deployment, Operation, and
Evolution. We systematically analyze ASC through four foundational research
dimensions: (1) Perception, Context, and Environment Modeling, (2) Autonomous
Decision-Making and Task Execution, (3) Multi-Agent Collaboration and
Organization, and (4) Evaluation, Value Alignment, and Trustworthiness. We
examine how these dimensions are instantiated, integrated, and continuously
adapted across the service lifecycle. Our synthesis reveals that agentic
services are not merely assembled but orchestrated: contextual awareness
enables robust deployment; autonomous reasoning supports real-time operation;
collaborative structures emerge and evolve through interaction; and
trustworthiness must be upheld as a cross-cutting, lifelong imperative. We
further identify and discuss emerging trends shaping the future of ASC. By
integrating classical principles of services computing with advances in
LLM-based multi-agent systems, this work establishes a holistic and
forward-looking foundation for ASC. It provides a unified reference for
researchers and practitioners aiming to develop adaptive, accountable, and
human-centered intelligent services.
AI Insights
- Federated learning enables privacy-preserving on-device updates for agentic services.
- Formal verification can guarantee safety of autonomous decision modules in multi‑agent ecosystems.
- Dynamic resource schedulers adapt to workload shifts, preserving QoS in agentic clusters.
- OpenAPI extensions for agentic interactions standardize cross‑domain collaboration.
- Benchmarks that score explainability, latency, and trust guide agentic framework comparison.
- Human‑in‑the‑loop UIs let users steer agentic goals while preserving autonomy.
- Edge‑centric deployments cut latency and boost resilience for distributed agentic services.
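The four ASC dimensions the survey names can be made concrete as a single request loop: perceive context, decide autonomously, collaborate with peers, and evaluate the outcome for trust. The sketch below is purely illustrative; all class and method names are hypothetical and not drawn from any framework discussed in the survey.

```python
# Minimal agentic-service loop covering the four ASC dimensions.
# Hypothetical sketch: names and logic are illustrative only.
from dataclasses import dataclass, field

@dataclass
class AgenticService:
    name: str
    log: list = field(default_factory=list)  # audit trail for trustworthiness

    def perceive(self, request: dict) -> dict:
        # Perception/context modeling: enrich the raw request with context.
        ctx = {**request, "context": request.get("context", "default")}
        self.log.append(("perceive", ctx))
        return ctx

    def decide(self, ctx: dict) -> str:
        # Autonomous decision-making: choose an action from the context.
        action = "delegate" if ctx.get("complex") else "answer"
        self.log.append(("decide", action))
        return action

    def collaborate(self, action: str, peers: list) -> str:
        # Multi-agent collaboration: hand complex work to a peer service.
        if action == "delegate" and peers:
            self.log.append(("collaborate", peers[0].name))
            return f"handled by {peers[0].name}"
        return f"handled by {self.name}"

    def evaluate(self, outcome: str) -> bool:
        # Evaluation/trust: audit the outcome against the logged trace.
        ok = outcome.startswith("handled by")
        self.log.append(("evaluate", ok))
        return ok

svc = AgenticService("planner")
peer = AgenticService("solver")
ctx = svc.perceive({"query": "book a trip", "complex": True})
outcome = svc.collaborate(svc.decide(ctx), [peer])
print(outcome)  # handled by solver
```

The `log` list stands in for the cross-cutting auditability the survey calls a "lifelong imperative": every phase appends to one trace that an evaluator can replay.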
AXA Group Operations, EPF
Abstract
When used in high-stakes settings, AI systems are expected to produce
decisions that are transparent, interpretable, and auditable, a requirement
increasingly imposed by regulation. Decision trees such as CART provide clear
and verifiable rules, but they are restricted to structured tabular data and
cannot operate directly on unstructured inputs such as text. In practice, large
language models (LLMs) are widely used for such data, yet prompting strategies
such as chain-of-thought or prompt optimization still rely on free-form
reasoning, limiting their ability to ensure trustworthy behavior. We present
the Agentic Classification Tree (ACT), which extends decision-tree methodology
to unstructured inputs by formulating each split as a natural-language
question, refined through impurity-based evaluation and LLM feedback via
TextGrad. Experiments on text benchmarks show that ACT matches or surpasses
prompting-based baselines while producing transparent and interpretable
decision paths.
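The core mechanism, scoring candidate natural-language split questions by the impurity reduction they achieve, can be sketched without an LLM. In this hedged sketch, `answer` is a hypothetical keyword-matching stand-in for the LLM's yes/no call, and the TextGrad-based question refinement from the paper is omitted.

```python
# Illustrative sketch of impurity-scored natural-language splits.
# `answer` is a stand-in for an LLM yes/no judgment; ACT itself uses
# an LLM and refines questions via TextGrad, which is not shown here.

def gini(labels):
    # Gini impurity of a binary label multiset.
    if not labels:
        return 0.0
    p = sum(labels) / len(labels)
    return 2 * p * (1 - p)

def answer(keyword, text):
    # Hypothetical stand-in for asking an LLM a yes/no question about `text`.
    return keyword in text.lower()

def split_score(keyword, texts, labels):
    # Weighted impurity of the two child nodes after a yes/no split;
    # lower means the question is more discriminative.
    yes = [y for t, y in zip(texts, labels) if answer(keyword, t)]
    no = [y for t, y in zip(texts, labels) if not answer(keyword, t)]
    n = len(labels)
    return (len(yes) / n) * gini(yes) + (len(no) / n) * gini(no)

texts = ["win a free prize now", "meeting moved to 3pm",
         "free gift card inside", "lunch tomorrow?"]
labels = [1, 0, 1, 0]  # 1 = spam
print(split_score("free", texts, labels))   # 0.0 (perfect split)
print(split_score("lunch", texts, labels))  # higher: weaker question
```

Choosing the lowest-scoring question at each node is exactly the CART greedy criterion, transplanted from threshold tests on tabular features to yes/no questions over raw text.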
AI Insights
- ACT uses iterative prompt refinement guided by impurity metrics to craft discriminative questions.
- Experiments on DIAGNO, SPAM, and JAILBREAK datasets demonstrate ACT's competitive accuracy.
- Qualitative inspection shows ACT-generated questions align with human intuition, enhancing trust.
- The tree structure is fully auditable, allowing end-users to trace every decision path.
- Performance hinges on LLM quality; biases in the underlying model can propagate to the tree.
- The refinement loop can be resource-intensive, suggesting a trade-off between accuracy and cost.
- For deeper understanding, consult the BERT and RoBERTa papers, which underpin many LLMs used in ACT.
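The auditability property noted above, that end-users can trace every decision path, follows directly from the tree structure: each classification emits an ordered list of question/answer pairs. The sketch below illustrates this; the node layout and the keyword-based `answer` stub are assumptions for demonstration, not the paper's implementation.

```python
# Sketch of an auditable decision path through an ACT-style tree.
# The tree contents and the keyword-based `answer` stub (in place of
# an LLM yes/no call) are illustrative assumptions.

def answer(keyword, text):
    return keyword in text.lower()

# Internal node: (question text, keyword, yes-child, no-child); leaf: label.
tree = ("Does the message offer something free?", "free",
        ("Does it mention an account?", "account", "spam", "spam"),
        "ham")

def classify(text, node, trace=None):
    trace = [] if trace is None else trace
    if isinstance(node, str):
        return node, trace            # leaf: label plus the full audit trace
    question, keyword, yes_child, no_child = node
    ans = answer(keyword, text)
    trace.append((question, "yes" if ans else "no"))
    return classify(text, yes_child if ans else no_child, trace)

label, trace = classify("claim your free account upgrade", tree)
print(label)  # spam
for q, a in trace:
    print(f"  {q} -> {a}")
```

Every prediction thus ships with its own explanation: the exact questions asked and the answers that routed the input to its leaf, which is what makes the structure verifiable in a way free-form chain-of-thought is not.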