Abstract
The rapid advancement of large language models (LLMs) has revolutionized
artificial intelligence, shifting from supporting objective tasks (e.g.,
recognition) to empowering subjective ones (e.g., planning and
decision-making). This marks the dawn of general-purpose, powerful AI, with applications
spanning a wide range of fields, including programming, education, healthcare,
finance, and law. However, their deployment introduces multifaceted risks. Due
to the black-box nature of LLMs and the human-like quality of their generated
content, issues such as hallucinations, bias, unfairness, and copyright
infringement become particularly significant. In this context, tracing the
provenance of generated content from multiple perspectives is essential.
This survey presents a systematic investigation into provenance tracking for
content generated by LLMs, organized around four interrelated dimensions that
together capture both model- and data-centric perspectives. From the model
perspective, Model Sourcing treats the model as a whole, aiming to distinguish
content generated by specific LLMs from content authored by humans. Model
Structure Sourcing delves into the internal generative mechanisms, analyzing
architectural components that shape a model's outputs. From the data
perspective, Training Data Sourcing focuses on internal attribution, tracing
the origins of generated content back to the model's training data. In
contrast, External Data Sourcing emphasizes external validation, identifying
external information used to support or influence the model's responses.
We further propose a dual-paradigm taxonomy that classifies existing
sourcing methods into prior-based (proactive traceability embedding) and
posterior-based (retrospective inference) approaches. Traceability across these
dimensions enhances the transparency, accountability, and trustworthiness of
LLM deployment in real-world applications.
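To make the dual-paradigm distinction concrete, consider a toy sketch (our illustration, not a method from the surveyed literature): a prior-based approach proactively embeds a keyed "green-list" signal while text is generated, whereas a posterior-based approach retrospectively scores how often that signal appears. The key, function names, and the simple green-fraction scoring rule below are hypothetical simplifications in the spirit of soft watermarking.

    import hashlib

    KEY = b"demo-key"  # hypothetical shared watermarking key

    def is_green(prev_token: str, token: str) -> bool:
        # Pseudo-randomly assign roughly half of all tokens to a
        # context-dependent "green list" via a keyed hash.
        digest = hashlib.sha256(KEY + prev_token.encode() + token.encode()).digest()
        return digest[0] % 2 == 0

    def pick_token(prev_token: str, candidates: list[str]) -> str:
        # Prior-based paradigm: at generation time, bias the choice
        # toward green-listed candidates, proactively embedding the trace.
        greens = [c for c in candidates if is_green(prev_token, c)]
        return greens[0] if greens else candidates[0]

    def green_fraction(tokens: list[str]) -> float:
        # Posterior-based paradigm: retrospectively infer provenance by
        # measuring how many adjacent token pairs fall on the green list.
        hits = sum(is_green(prev, tok) for prev, tok in zip(tokens, tokens[1:]))
        return hits / max(len(tokens) - 1, 1)

    # Unwatermarked text scores near 0.5; text produced with pick_token
    # scores well above it, the footprint a posterior detector tests for.
    print(green_fraction("the cat sat on the mat".split()))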
Abstract
Multimodal Large Language Models (MLLMs) have demonstrated extraordinary
progress in bridging textual and visual inputs. However, MLLMs still face
challenges with situated physical and social interaction in sensorily rich,
multimodal, real-world settings, where the embodied experience of a living
organism is essential. We posit that the next frontiers of MLLM development
require incorporating both internal and external embodiment -- modeling not
only external interactions with the world, but also internal states and drives.
Here, we describe mechanisms of internal and external embodiment in humans and
relate these to current advances in MLLMs, which are in the early stages of
aligning with human representations. Our dual-embodied framework proposes
modeling the interactions
between these forms of embodiment in MLLMs to bridge the gap between multimodal
data and world experience.