Dear user, this week we added the possibility to further personalize your results by providing a description of yourself.
Log in to our website and head to the Profile tab. There you can share any details you like, such as your profession, age, or background. The language models then take these details into account to generate results tailored to you.
🎯 Top Personalized Recommendations
Institute of Computing I
Why we think this paper is great for you:
This paper directly addresses vehicular routing and incorporates semantic context for human drivers, which is crucial for personalized travel planning and recommendations beyond simple optimization. It offers valuable insights into creating more nuanced and user-centric travel experiences for you.
Abstract
Traditional vehicle routing systems efficiently optimize singular metrics
such as time or distance, and require additional optimization processes when
multiple metrics are considered. However, they lack the capability to interpret and
integrate the complex, semantic, and dynamic contexts of human drivers, such as
multi-step tasks, situational constraints, or urgent needs. This paper
introduces and evaluates PAVe (Personalized Agentic Vehicular Routing), a
hybrid agentic assistant designed to augment classical pathfinding algorithms
with contextual reasoning. Our approach employs a Large Language Model (LLM)
agent that operates on a candidate set of routes generated by a multi-objective
(time, CO2) Dijkstra algorithm. The agent evaluates these options against
user-provided tasks, preferences, and avoidance rules by leveraging a
pre-processed geospatial cache of urban Points of Interest (POIs). In a
benchmark of realistic urban scenarios, PAVe successfully translated complex user
intent into appropriate route modifications, achieving over 88% accuracy in its
initial route selections with a local model. We conclude that combining
classical routing algorithms with an LLM-based semantic reasoning layer is a
robust and effective approach for creating personalized, adaptive, and scalable
solutions for urban mobility optimization.
AI Summary
- PAVe effectively integrates LLM-based semantic reasoning with classical multi-objective pathfinding to create personalized, context-aware vehicular routes. [2]
- A locally-hosted, smaller LLM (Qwen 3 - 4B) demonstrated superior overall accuracy (88.24%) and completeness (76.47%) compared to a larger API-based model (GPT-o3), particularly in avoidance scenarios. [2]
- The system successfully translates complex user requests, including urgency, preferences, and avoidance rules, into actionable route modifications by utilizing a pre-processed geospatial POI cache. [2]
- Current limitations include inconsistent action schema generation by the LLM and a feedback loop restricted to processing only a single waypoint addition. [2]
- Future development should focus on enhancing action reliability through advanced prompting and fine-tuning, integrating multi-stop route planning, and developing a user profile module for adaptive personalization. [2]
- Simply increasing the number of candidate routes (k) can be detrimental to LLM performance, suggesting a need for dynamic k-selection or pre-filtering mechanisms. [2]
- ReAct Agentic Assistant: An agent architecture (used by PAVe) that integrates reasoning and acting capabilities, allowing the LLM to interact with external tools and make decisions. [2]
- The hybrid agentic approach, leveraging LLMs for contextual understanding and classical algorithms for computational efficiency, significantly outperforms traditional routing systems in handling complex user intent. [1]
- PAVe (Personalized Agentic Vehicular Routing): A hybrid agentic assistant designed to augment classical pathfinding algorithms with contextual reasoning using an LLM agent. [1]
- Routing Engine Tool (RET): A component responsible for discovering a tuple of possible paths (origin, destination) on a graph-based representation of the urban road network. [1]
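To make the hybrid approach described above more concrete, here is a minimal sketch of its classical half: generating a small candidate set of routes by scalarizing the (time, CO2) objectives with different weights in Dijkstra's algorithm, then filtering candidates against a user avoidance rule. The toy graph, edge weights, and the hard-coded `avoid` rule are illustrative assumptions; in PAVe the evaluation and selection step is performed by the LLM agent over a geospatial POI cache, not by a hand-written filter.

```python
import heapq

# Toy road graph: node -> {neighbor: (time_min, co2_g)} (illustrative values)
GRAPH = {
    "A": {"B": (5, 120), "C": (7, 80)},
    "B": {"D": (6, 150), "C": (2, 40)},
    "C": {"D": (9, 60)},
    "D": {},
}

def dijkstra_scalarized(graph, src, dst, w_time, w_co2):
    """Single-objective Dijkstra on the weighted sum w_time*time + w_co2*co2."""
    pq = [(0.0, src, [src])]
    settled = {}
    while pq:
        cost, node, path = heapq.heappop(pq)
        if node == dst:
            return path
        if settled.get(node, float("inf")) <= cost:
            continue
        settled[node] = cost
        for nxt, (t, co2) in graph[node].items():
            heapq.heappush(pq, (cost + w_time * t + w_co2 * co2, nxt, path + [nxt]))
    return None

# Candidate generation: sweep the trade-off between time and CO2.
candidates = []
for lam in (0.0, 0.25, 0.5, 0.75, 1.0):
    route = dijkstra_scalarized(GRAPH, "A", "D", w_time=lam, w_co2=1.0 - lam)
    if route and route not in candidates:
        candidates.append(route)

# Stand-in for the LLM reasoning layer: drop routes through avoided areas.
avoid = {"B"}  # e.g. "avoid the stadium area during the match"
feasible = [r for r in candidates if not avoid & set(r)]
print("candidates:", candidates)
print("after avoidance rule:", feasible)
```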
Johns Hopkins University
Why we think this paper is great for you:
This research explicitly focuses on mobile personalization and delivering personalized experiences, aligning perfectly with your interest in enhancing travel personalization and recommendations. It explores mechanisms behind personalized mobile interactions, which is highly relevant to your work.
Abstract
Mobile applications increasingly rely on sensor data to infer user context
and deliver personalized experiences. Yet the mechanisms behind this
personalization remain opaque to users and researchers alike. This paper
presents a sandbox system that uses sensor spoofing and persona simulation to
audit and visualize how mobile apps respond to inferred behaviors. Rather than
treating spoofing as adversarial, we demonstrate its use as a tool for
behavioral transparency and user empowerment. Our system injects multi-sensor
profiles - generated from structured, lifestyle-based personas - into Android
devices in real time, enabling users to observe app responses to contexts such
as high activity, location shifts, or time-of-day changes. With automated
screenshot capture and GPT-4 Vision-based UI summarization, our pipeline helps
document subtle personalization cues. Preliminary findings show measurable app
adaptations across fitness, e-commerce, and everyday service apps such as
weather and navigation. We offer this toolkit as a foundation for
privacy-enhancing technologies and user-facing transparency interventions.
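As a rough illustration of the persona-to-sensor-profile idea (not the authors' pipeline, and with entirely made-up field names and values), the sketch below turns a lifestyle persona into a synthetic stream of accelerometer and location readings that a spoofing layer could replay on a device; actual injection into Android is out of scope here.

```python
import math
import random
from dataclasses import dataclass

@dataclass
class Persona:
    name: str
    activity_level: float    # 0 = sedentary, 1 = very active (assumed scale)
    home: tuple              # (lat, lon)
    commute_km: float

def synth_readings(persona: Persona, seconds: int = 60, hz: int = 5):
    """Generate a synthetic (timestamp, accel_magnitude, lat, lon) trace.

    Higher activity_level yields larger accelerometer magnitudes; location
    drifts away from home in proportion to the commute distance.
    """
    lat, lon = persona.home
    readings = []
    for i in range(seconds * hz):
        t = i / hz
        # Accelerometer magnitude around 1 g, modulated by activity level.
        accel = 9.81 + persona.activity_level * 3.0 * abs(math.sin(t)) \
                + random.gauss(0, 0.05)
        # Simple northbound drift standing in for a commute (~111 km per degree).
        lat_i = lat + (persona.commute_km / 111.0) * (i / (seconds * hz))
        readings.append((t, accel, lat_i, lon))
    return readings

runner = Persona("morning_runner", activity_level=0.9,
                 home=(39.329, -76.620), commute_km=2.0)
trace = synth_readings(runner, seconds=10)
print(trace[:3])
```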
Johns Hopkins University
Why we think this paper is great for you:
This paper delves into personalized decision modeling for individuals, offering fundamental insights into how users make choices. Understanding these processes is key to developing more effective personalized travel services and recommendations for you.
Abstract
Decision-making models for individuals, particularly in high-stakes scenarios
like vaccine uptake, often diverge from population optimal predictions. This
gap arises from the uniqueness of the individual decision-making process,
shaped by numerical attributes (e.g., cost, time) and linguistic influences
(e.g., personal preferences and constraints). Building upon Utility Theory
and leveraging the textual-reasoning capabilities of Large Language Models
(LLMs), this paper proposes an Adaptive Textual-symbolic Human-centric
Reasoning framework (ATHENA) to address optimal information integration.
ATHENA uniquely integrates two stages: First, it discovers robust, group-level
symbolic utility functions via LLM-augmented symbolic discovery; Second, it
implements individual-level semantic adaptation, creating personalized semantic
templates guided by the optimal utility to model personalized choices.
Validated on real-world travel mode and vaccine choice tasks, ATHENA
consistently outperforms utility-based, machine learning, and other LLM-based
models, lifting F1 score by at least 6.5% over the strongest cutting-edge
models. Further, ablation studies confirm that both stages of ATHENA are
critical and complementary, as removing either clearly degrades overall
predictive performance. By organically integrating symbolic utility modeling
and semantic adaptation, ATHENA provides a new scheme for modeling
human-centric decisions. The project page can be found at
https://yibozh.github.io/Athena.
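The two-stage idea can be pictured with a toy choice model: a numeric utility term of the kind symbolic discovery might produce, plus an individual-level adjustment standing in for semantic adaptation, combined in a softmax over alternatives. The utility form, attribute names, and adjustment values below are illustrative assumptions, not ATHENA's actual discovered functions.

```python
import math

def group_utility(alt):
    """Toy group-level symbolic utility over numeric attributes (assumed form)."""
    return -0.08 * alt["cost"] - 0.05 * alt["time_min"]

def choice_probabilities(alternatives, semantic_adjust):
    """Softmax over (group utility + individual semantic adjustment)."""
    utils = [group_utility(a) + semantic_adjust.get(a["mode"], 0.0)
             for a in alternatives]
    m = max(utils)
    exps = [math.exp(u - m) for u in utils]
    z = sum(exps)
    return {a["mode"]: e / z for a, e in zip(alternatives, exps)}

alternatives = [
    {"mode": "car",     "cost": 12.0, "time_min": 25},
    {"mode": "transit", "cost": 3.0,  "time_min": 45},
    {"mode": "bike",    "cost": 0.0,  "time_min": 40},
]

# Individual-level adjustment, e.g. derived from "I avoid driving downtown".
semantic_adjust = {"car": -1.5, "bike": +0.5}
print(choice_probabilities(alternatives, semantic_adjust))
```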
Beijing Institute of Technology
Why we think this paper is great for you:
This paper addresses location-routing problems, which are highly relevant to optimizing travel routes, logistics, and the creation of efficient travel itineraries. It provides a strong foundation for understanding complex planning challenges in the travel domain.
Abstract
The capacitated location-routing problems (CLRPs) are classical problems in
combinatorial optimization, which require simultaneously making location and
routing decisions. In CLRPs, the complex constraints and the intricate
relationships between various decisions make the problem challenging to solve.
With the emergence of deep reinforcement learning (DRL), it has been
extensively applied to address the vehicle routing problem and its variants,
while research related to CLRPs remains largely unexplored. In this paper,
we propose the DRL with heterogeneous query (DRLHQ) to solve CLRP and open CLRP
(OCLRP), respectively. We are the first to propose an end-to-end learning
approach for CLRPs, following the encoder-decoder structure. In particular, we
reformulate the CLRPs as a Markov decision process tailored to various
decisions, a general modeling framework that can be adapted to other DRL-based
methods. To better handle the interdependency across location and routing
decisions, we also introduce a novel heterogeneous querying attention mechanism
designed to adapt dynamically to various decision-making stages. Experimental
results on both synthetic and benchmark datasets demonstrate superior solution
quality and better generalization performance of our proposed approach over
representative traditional and DRL-based baselines in solving both CLRP and
OCLRP.
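To give a rough sense of the MDP view of a CLRP, decisions alternate between location choices (which depot serves a customer) and routing choices (which customer a vehicle visits next) under capacity constraints. The sketch below encodes that decision sequence with a simple greedy policy in place of the learned heterogeneous-query decoder; the instance, costs, and the policy itself are illustrative assumptions, not the paper's method.

```python
import math

# Toy instance (illustrative): candidate depots, customers as (position, demand).
DEPOTS = {"d1": (0.0, 0.0), "d2": (10.0, 10.0)}
CUSTOMERS = {"c1": ((1.0, 2.0), 3), "c2": ((2.0, 1.0), 4),
             "c3": ((9.0, 8.0), 5), "c4": ((8.0, 9.0), 2)}
VEHICLE_CAP = 7  # assumes every single demand fits in one vehicle

def dist(a, b):
    return math.hypot(a[0] - b[0], a[1] - b[1])

# Location decision: assign each customer to its nearest candidate depot.
assignment = {c: min(DEPOTS, key=lambda d: dist(DEPOTS[d], pos))
              for c, (pos, _) in CUSTOMERS.items()}

# Routing decisions: greedy nearest-neighbor routes under vehicle capacity.
routes = []
for depot, depot_pos in DEPOTS.items():
    pending = [c for c in CUSTOMERS if assignment[c] == depot]
    while pending:
        load, here, route = 0, depot_pos, [depot]
        while pending:
            feasible = [c for c in pending
                        if load + CUSTOMERS[c][1] <= VEHICLE_CAP]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist(here, CUSTOMERS[c][0]))
            route.append(nxt)
            load += CUSTOMERS[nxt][1]
            here = CUSTOMERS[nxt][0]
            pending.remove(nxt)
        routes.append(route + [depot])

print(routes)
```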
Princeton University
Why we think this paper is great for you:
While the core focus is on statistical methods for experimental design, the concept of 'site selection' could tangentially relate to choosing optimal travel destinations or service locations. However, its direct applicability to your primary interests is less pronounced than other papers.
Abstract
How should researchers select experimental sites when the deployment
population differs from observed data? I formulate the problem of experimental
site selection as an optimal transport problem, developing methods to minimize
downstream estimation error by choosing sites that minimize the Wasserstein
distance between population and sample covariate distributions. I develop new
theoretical upper bounds on PATE and CATE estimation errors, and show that
these different objectives lead to different site selection strategies. I
extend this approach by using Wasserstein Distributionally Robust Optimization
to develop a site selection procedure robust to adversarial perturbations of
covariate information: a specific model of distribution shift. I also propose a
novel data-driven procedure for selecting the uncertainty radius of the
Wasserstein DRO problem, which allows the user to benchmark robustness levels
against observed variation in their data. Simulation evidence, and a reanalysis
of a randomized microcredit experiment in Morocco (Crépon et al.), show that
these methods outperform random and stratified sampling of sites when
covariates have prognostic R-squared > .5, and outperform alternative
optimization methods (i) for moderate-to-large problem instances, (ii) when
covariates are moderately informative about treatment effects, and (iii) under
induced distribution shift.
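For the core site selection idea, the sketch below greedily picks the subset of candidate sites whose pooled covariate distribution is closest in Wasserstein distance to the deployment population. The synthetic data, the single-covariate simplification, and the greedy forward search are illustrative assumptions; the paper works with multivariate covariate distributions and optimal-transport and DRO formulations rather than this greedy 1-D version.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Deployment population covariate (illustrative: one prognostic covariate).
population = rng.normal(loc=0.0, scale=1.0, size=5000)

# Candidate experimental sites, each with its own covariate distribution.
sites = {f"site_{i}": rng.normal(loc=mu, scale=1.0, size=300)
         for i, mu in enumerate([-1.5, -0.5, 0.0, 0.4, 1.2, 2.0])}

def pooled_distance(selected):
    """Wasserstein distance between the pooled selected sites and the population."""
    pooled = np.concatenate([sites[s] for s in selected])
    return wasserstein_distance(pooled, population)

# Greedy forward selection of k sites minimizing the pooled Wasserstein distance.
k, chosen = 3, []
while len(chosen) < k:
    best = min((s for s in sites if s not in chosen),
               key=lambda s: pooled_distance(chosen + [s]))
    chosen.append(best)

print("selected sites:", chosen, "distance:", round(pooled_distance(chosen), 3))
```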