Hi!

Your personalized paper recommendations for 17–21 November 2025.
🎯 Top Personalized Recommendations
University of Science and
Why we think this paper is great for you:
This paper directly addresses the critical challenge of generating personalized outputs by leveraging user differences, which is essential for your work in personalization platforms. It offers valuable insights into advanced user modeling techniques for more effective tailored experiences.
Abstract
Large Language Models (LLMs) are increasingly integrated into users' daily lives, driving a growing demand for personalized outputs. Prior work has primarily leveraged a user's own history, often overlooking inter-user differences that are critical for effective personalization. While recent methods have attempted to model such differences, their feature extraction processes typically rely on fixed dimensions and quick, intuitive inference (System-1 thinking), limiting both the coverage and granularity of captured user differences. To address these limitations, we propose Difference-aware Reasoning Personalization (DRP), a framework that reconstructs the difference extraction mechanism by leveraging inference scaling to enhance LLM personalization. DRP autonomously identifies relevant difference feature dimensions and generates structured definitions and descriptions, enabling slow, deliberate reasoning (System-2 thinking) over user differences. Experiments on personalized review generation demonstrate that DRP consistently outperforms baseline methods across multiple metrics.
AI Summary
  • DRP consistently outperforms existing baseline methods, including other difference-aware approaches like DPL, achieving up to a 23.0% BLEU gain on personalized review generation tasks. [3]
  • The method demonstrates improved coverage of user differences, as quantified by the Unique Valid Quantity (UVQ) metric, which positively correlates with generation performance. [3]
  • DRP enhances the granularity of extracted differences by capturing deeper cognitive patterns (Semantics, Structure, Pragmatics) rather than just surface-level tendencies (Writing, Emotion). [3]
  • System-1 thinking: Refers to quick, intuitive inference learned at training time, typically relying on fixed dimensions and lacking deep analytical capability. [3]
  • DRP significantly enhances LLM personalization by reconstructing the difference extraction mechanism through inference scaling, enabling autonomous identification of relevant feature dimensions. [2]
  • The framework transitions from limited 'System-1' quick inference to 'System-2' deliberate reasoning for user difference analysis, leading to more granular and comprehensive feature extraction. [2]
  • The reflective validation step in DRP is crucial for filtering out invalid or spurious difference features, ensuring the quality of personalized inputs for the generation phase. [2]
  • Inference Scaling: The approach of incorporating additional test-time computation to enable flexible feature dimensions and extended reasoning chains, applied here to difference extraction in LLM personalization. [2]
  • Reasoning-enhanced LLMs (e.g., DeepSeek-R1-Distill-Qwen) are more effective as difference extractors within DRP compared to standard instruction-tuned models (e.g., Qwen2.5-Instruct), especially at larger parameter scales. [1]
  • Difference-aware Reasoning Personalization (DRP): A novel framework that leverages inference scaling to enhance LLM personalization by autonomously identifying difference feature dimensions and generating structured definitions and descriptions through deliberate reasoning. [1]
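For a concrete feel of the idea, here is a minimal Python sketch of a DRP-style pipeline as described in the abstract and summary above: deliberate (System-2) difference extraction, a reflective validation pass, then difference-conditioned generation. The `chat` callable, all prompts, and the feature format are illustrative assumptions, not the paper's actual implementation.

```python
from typing import Callable

def extract_differences(chat: Callable[[str], str],
                        target_reviews: list[str],
                        other_reviews: list[str]) -> list[str]:
    """Deliberate (System-2) extraction: ask the model to reason about how the
    target user differs from others and to name its own feature dimensions."""
    target = "\n".join(target_reviews)
    others = "\n".join(other_reviews)
    prompt = (
        "Compare the target user's reviews with other users' reviews. "
        "Think step by step, then list difference features, one per line, "
        "as '<dimension>: <definition> -- <description>'.\n\n"
        f"Target user's reviews:\n{target}\n\nOther users' reviews:\n{others}"
    )
    return [ln.strip() for ln in chat(prompt).splitlines() if ":" in ln]

def validate_differences(chat: Callable[[str], str],
                         features: list[str],
                         target_reviews: list[str]) -> list[str]:
    """Reflective validation: drop features the model cannot ground in the history."""
    target = "\n".join(target_reviews)
    kept = []
    for feat in features:
        verdict = chat(
            f"Feature: {feat}\nReviews:\n{target}\n"
            "Answer YES if the reviews clearly support this feature, otherwise NO."
        )
        if verdict.strip().upper().startswith("YES"):
            kept.append(feat)
    return kept

def generate_review(chat: Callable[[str], str],
                    item: str, features: list[str]) -> str:
    """Personalized generation conditioned on the validated difference features."""
    bullet_list = "\n- ".join(features)
    return chat(
        f"Write a review of '{item}' in this user's style.\n"
        f"Distinguishing features of this user:\n- {bullet_list}"
    )
```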
Wuhan University
Why we think this paper is great for you:
This research explores a novel approach to collaborative filtering using user behavior data, highly relevant to enhancing your personalization and data-driven CRM strategies. You will find its focus on local user similarities valuable for improving recommender systems.
Abstract
To leverage user behavior data from the Internet more effectively in recommender systems, this paper proposes a novel collaborative filtering (CF) method called Local Collaborative Filtering (LCF). LCF utilizes local similarities among users and integrates their data using the law of large numbers (LLN), thereby improving the utilization of user behavior data. Experiments are conducted on the Steam game dataset, and the results of LCF align with real-world needs.
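As a rough illustration of the idea in the abstract, the sketch below scores an item by averaging over the target user's most locally similar neighbors; averaging over many neighbors is where a law-of-large-numbers argument would apply. The notion of "local" similarity and the aggregation rule here are one plausible reading of the abstract, not the paper's exact formulation.

```python
import numpy as np

def local_similarity(target: np.ndarray, other: np.ndarray) -> float:
    """Cosine similarity restricted to items both users have interacted with
    (one plausible notion of 'local' similarity; the paper's may differ)."""
    both = (target > 0) & (other > 0)
    if not both.any():
        return 0.0
    a, b = target[both], other[both]
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def predict_score(ratings: np.ndarray, user: int, item: int, k: int = 50) -> float:
    """Average the item's ratings over the k most locally similar users.
    With many neighbors, noise in individual behaviors tends to cancel out."""
    rated = np.where(ratings[:, item] > 0)[0]
    rated = rated[rated != user]
    if rated.size == 0:
        return 0.0
    sims = np.array([local_similarity(ratings[user], ratings[u]) for u in rated])
    top = rated[np.argsort(sims)[-k:]]
    return float(ratings[top, item].mean())

# Toy usage: 4 users x 3 items, 0 = no interaction.
R = np.array([[5, 0, 3], [4, 2, 0], [0, 1, 4], [5, 1, 0]], dtype=float)
print(predict_score(R, user=0, item=1))
```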
University of Washington
Why we think this paper is great for you:
This framework, synergizing optimization with LLMs for intelligent decision-making, offers a powerful approach to refining your CRM optimization and data-driven strategies. It provides a blueprint for leveraging advanced models to make more informed choices.
Abstract
This paper introduces SOLID (Synergizing Optimization and Large Language Models for Intelligent Decision-Making), a novel framework that integrates mathematical optimization with the contextual capabilities of large language models (LLMs). SOLID facilitates iterative collaboration between optimization and LLM agents through dual prices and deviation penalties. This interaction improves decision quality while maintaining modularity and data privacy. The framework retains theoretical convergence guarantees under convexity assumptions, providing insight into the design of LLM prompts. To evaluate SOLID, we applied it to a stock portfolio investment case with historical prices and financial news as inputs. Empirical results demonstrate convergence under various scenarios and indicate improved annualized returns compared to a baseline optimizer-only method, validating the synergy of the two agents. SOLID offers a promising framework for advancing automated and intelligent decision-making across diverse domains.
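The sketch below shows a generic dual-price coordination loop in the spirit of the abstract: an optimizer solves a penalized subproblem, an agent (here a stub standing in for the LLM) proposes target weights, and a dual price is updated on their deviation. The closed-form subproblem, penalty parameter, and stub agent are all assumptions for illustration, not SOLID's actual algorithm.

```python
import numpy as np

def optimizer_step(mu, cov, z, lam, rho, risk_aversion=5.0):
    """Penalized mean-variance subproblem, solved in closed form:
    max_w  mu@w - (risk_aversion/2) w@cov@w - lam@(w - z) - (rho/2)||w - z||^2."""
    H = risk_aversion * cov + rho * np.eye(len(mu))
    return np.linalg.solve(H, mu - lam + rho * z)

def llm_agent_step(w, lam, news: list[str]):
    """Placeholder for the LLM agent: given dual prices and news context it would
    return adjusted target weights. Here it just nudges toward equal weight."""
    return 0.5 * w + 0.5 * np.full_like(w, 1.0 / len(w))

def coordinate(mu, cov, news, rho=1.0, iters=20):
    """Dual-price coordination loop: optimizer and agent exchange proposals;
    the dual price lam penalizes their deviation from consensus w = z."""
    n = len(mu)
    z = np.full(n, 1.0 / n)      # agent's current target weights
    lam = np.zeros(n)            # dual prices on the consensus constraint
    for _ in range(iters):
        w = optimizer_step(mu, cov, z, lam, rho)
        z = llm_agent_step(w, lam, news)
        lam = lam + rho * (w - z)   # price update on the deviation
    return w, z

mu = np.array([0.08, 0.05, 0.12])
cov = np.diag([0.04, 0.02, 0.09])
w, z = coordinate(mu, cov, news=["placeholder headline"])
print(np.round(w, 3), np.round(z, 3))
```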
Columbia University
Why we think this paper is great for you:
This paper discusses the critical aspects of human-data interaction systems, focusing on usability and user experience, which are fundamental to designing effective personalization platforms. It provides a broader context for ensuring your systems meet user expectations.
Abstract
Human-data interaction (HDI) presents fundamentally different challenges from traditional data management. HDI systems must meet latency, correctness, and consistency needs that stem from usability rather than query semantics; failing to meet these expectations breaks the user experience. Moreover, interfaces and systems are tightly coupled; neither can easily be optimized in isolation, and effective solutions demand their co-design. This dependence also presents a research opportunity: rather than adapt systems to interface demands, systems innovations and database theory can also inspire new interaction and visualization designs. We survey a decade of our lab's work that embraces this coupling and argue that HDI systems are the foundation for reliable, interactive, AI-driven applications.
National Cheng Kung University
Why we think this paper is great for you:
While more hardware-focused, this paper explores performance trade-offs in processing-in-memory, which could indirectly inform the underlying infrastructure considerations for high-performance data-driven platforms. It offers insights into optimizing the computational backbone of your systems.
Abstract
Processing-in-memory (PIM) reduces data movement by executing near memory, but our large-scale characterization on real PIM hardware shows that end-to-end performance is often limited by disjoint host and device address spaces that force explicit staging transfers. In contrast, CXL-PIM provides a unified address space and cache-coherent access at the cost of higher access latency. These opposing interface models create workload-dependent tradeoffs that are not captured by small-scale studies. This work presents a side-by-side, large-scale comparison of PIM and CXL-PIM using measurements from real PIM hardware and trace-driven CXL modeling. We identify when unified-address access amortizes link latency enough to overcome transfer bottlenecks, and when tightly coupled PIM remains preferable. Our results reveal phase- and dataset-size regimes in which the relative ranking between the two architectures reverses, offering practical guidance for future near-memory system design.
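A back-of-the-envelope cost model makes the trade-off concrete: explicit staging pays a one-time transfer cost, while a unified, cache-coherent address space pays extra latency on every remote access. All numbers below are illustrative assumptions, not measurements from the paper.

```python
def pim_time(bytes_staged: float, bw_gbs: float, kernel_s: float) -> float:
    """Explicit staging transfer into the device address space, then the kernel."""
    return bytes_staged / (bw_gbs * 1e9) + kernel_s

def cxl_pim_time(remote_accesses: float, extra_latency_s: float, kernel_s: float) -> float:
    """Unified address space: no staging copy, but each remote access pays extra latency."""
    return remote_accesses * extra_latency_s + kernel_s

kernel = 0.050                                   # 50 ms of on-device compute (assumed)
print(pim_time(4e9, 16, kernel))                 # ~0.30 s: 4 GB copy over 16 GB/s dominates
print(cxl_pim_time(1e5, 200e-9, kernel))         # ~0.07 s: few, coarse-grained remote accesses
print(cxl_pim_time(1e7, 200e-9, kernel))         # ~2.05 s: fine-grained accesses flip the ranking
```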

Interests not found

We did not find any papers matching the interests below. Try other search terms, and consider whether the content exists on arxiv.org.
  • MLOps
  • Email Marketing
  • Data Driven CRM
You can edit or add more interests any time.