Hi!

Your personalized paper recommendations for 17–21 November 2025.
🎯 Top Personalized Recommendations
University of Science and
Why we think this paper is great for you:
This paper directly addresses personalization using advanced user modeling, which is highly relevant for enhancing customer experiences and optimizing engagement across various channels. It offers insights into leveraging inter-user differences for more effective tailored outputs.
Abstract
Large Language Models (LLMs) are increasingly integrated into users' daily lives, driving a growing demand for personalized outputs. Prior work has primarily leveraged a user's own history, often overlooking inter-user differences that are critical for effective personalization. While recent methods have attempted to model such differences, their feature extraction processes typically rely on fixed dimensions and quick, intuitive inference (System-1 thinking), limiting both the coverage and granularity of captured user differences. To address these limitations, we propose Difference-aware Reasoning Personalization (DRP), a framework that reconstructs the difference extraction mechanism by leveraging inference scaling to enhance LLM personalization. DRP autonomously identifies relevant difference feature dimensions and generates structured definitions and descriptions, enabling slow, deliberate reasoning (System-2 thinking) over user differences. Experiments on personalized review generation demonstrate that DRP consistently outperforms baseline methods across multiple metrics.
AI Summary
  • DRP consistently outperforms existing baseline methods, including other difference-aware approaches like DPL, achieving up to a 23.0% BLEU gain on personalized review generation tasks. [3]
  • The method demonstrates improved coverage of user differences, as quantified by the Unique Valid Quantity (UVQ) metric, which positively correlates with generation performance. [3]
  • DRP enhances the granularity of extracted differences by capturing deeper cognitive patterns (Semantics, Structure, Pragmatics) rather than just surface-level tendencies (Writing, Emotion). [3]
  • System-1 thinking: Refers to quick, intuitive inference learned at training time, typically relying on fixed dimensions and lacking deep analytical capability. [3]
  • DRP significantly enhances LLM personalization by reconstructing the difference extraction mechanism through inference scaling, enabling autonomous identification of relevant feature dimensions. [2]
  • The framework transitions from limited 'System-1' quick inference to 'System-2' deliberate reasoning for user difference analysis, leading to more granular and comprehensive feature extraction. [2]
  • The reflective validation step in DRP is crucial for filtering out invalid or spurious difference features, ensuring the quality of personalized inputs for the generation phase. [2]
  • Inference Scaling: The approach of incorporating additional test-time computation to enable flexible feature dimensions and extended reasoning chains, applied here to difference extraction in LLM personalization. [2]
  • Reasoning-enhanced LLMs (e.g., DeepSeek-R1-Distill-Qwen) are more effective as difference extractors within DRP compared to standard instruction-tuned models (e.g., Qwen2.5-Instruct), especially at larger parameter scales. [1]
  • Difference-aware Reasoning Personalization (DRP): A novel framework that leverages inference scaling to enhance LLM personalization by autonomously identifying difference feature dimensions and generating structured definitions and descriptions through deliberate reasoning. [1]
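As a rough illustration of the three-stage pipeline the summary describes (extract difference dimensions, reflectively validate them, then generate), here is a minimal orchestration sketch. The `llm` callable and all prompts are hypothetical placeholders, not the authors' implementation:

```python
def drp_personalize(llm, target_history, peer_histories, query):
    """Sketch of the DRP stages described in the abstract and summary.
    The prompts and the `llm` callable are illustrative assumptions.
    1) a reasoning model proposes difference feature dimensions,
    2) a reflective validation step filters spurious features,
    3) the surviving differences condition the final generation."""
    dims = llm(f"Compare this user's reviews to peers and propose difference "
               f"dimensions with structured definitions:\n{target_history}\n{peer_histories}")
    valid = llm(f"Reflect on these difference features and keep only those "
                f"supported by the evidence:\n{dims}")
    return llm(f"Write the user's review of {query}, matching these "
               f"validated differences:\n{valid}")
```

The point of the sketch is the control flow: the reflective validation call sits between extraction and generation, so spurious features never reach the final prompt.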
University of Oslo
Why we think this paper is great for you:
This work on explainable AI and feature attribution is directly pertinent to understanding the impact of various factors, which is essential for optimizing your attribution models and ensuring transparency in decision-making. It helps in making sense of complex model outputs.
Abstract
Explainable AI (XAI) is increasingly essential as modern models become more complex and high-stakes applications demand transparency, trust, and regulatory compliance. Existing global attribution methods often incur high computational costs, lack stability under correlated inputs, and fail to scale efficiently to large or heterogeneous datasets. We address these gaps with ExCIR (Explainability through Correlation Impact Ratio), a correlation-aware attribution score equipped with a lightweight transfer protocol that reproduces full-model rankings using only a fraction of the data. ExCIR quantifies sign-aligned co-movement between features and model outputs after robust centering (subtracting a robust location estimate, e.g., median or mid-mean, from features and outputs). We further introduce BlockCIR, a groupwise extension of ExCIR that scores sets of correlated features as a single unit. By aggregating the same signed-co-movement numerators and magnitudes over predefined or data-driven groups, BlockCIR mitigates double-counting in collinear clusters (e.g., synonyms or duplicated sensors) and yields smoother, more stable rankings when strong dependencies are present. Across diverse text, tabular, signal, and image datasets, ExCIR shows trustworthy agreement with established global baselines and the full model, delivers consistent top-k rankings across settings, and reduces runtime via lightweight evaluation on a subset of rows. Overall, ExCIR provides computationally efficient, consistent, and scalable explainability for real-world deployment.
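To make "sign-aligned co-movement after robust centering" concrete, here is a toy estimator of our own devising, assuming median centering; the paper's exact formula may differ:

```python
import numpy as np

def excir_score(x, y):
    """Toy Correlation Impact Ratio: fraction of co-movement mass where a
    feature x and the model output y move in the same direction after
    robust (median) centering. An illustration, not the paper's estimator."""
    xc = x - np.median(x)           # robust centering of the feature
    yc = y - np.median(y)           # robust centering of the output
    prod = xc * yc
    aligned = prod[prod > 0].sum()  # mass where x and y move the same way
    total = np.abs(prod).sum()      # total co-movement magnitude
    return aligned / total if total > 0 else 0.0

# A feature that tracks the output scores 1.0; an anti-aligned one scores 0.0.
y = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
print(excir_score(y.copy(), y))  # -> 1.0
print(excir_score(-y, y))        # -> 0.0
```

A groupwise BlockCIR-style variant would pool the same numerators and magnitudes over a set of correlated columns before taking the ratio, which is what mitigates double-counting in collinear clusters.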
Wuhan University
Why we think this paper is great for you:
You will find this paper valuable for improving personalization and recommendation systems, which are key for CRM optimization and engaging customers through various marketing channels. It focuses on leveraging local user similarities to enhance recommendations.
Abstract
To leverage user behavior data from the Internet more effectively in recommender systems, this paper proposes a novel collaborative filtering (CF) method called Local Collaborative Filtering (LCF). LCF utilizes local similarities among users and integrates their data using the law of large numbers (LLN), thereby improving the utilization of user behavior data. Experiments are conducted on the Steam game dataset, and the results of LCF align with real-world needs.
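A minimal sketch of the neighborhood idea, under our own assumptions (cosine over co-rated items as the local similarity, and a plain neighbor average as the law-of-large-numbers-style aggregation; the paper's actual method may differ):

```python
import numpy as np

def lcf_predict(ratings, user, item, k=2):
    """Toy local collaborative filtering: score `item` for `user` by
    averaging the ratings of the k most locally similar users who rated it.
    Averaging over many similar users is where the LLN intuition enters.
    `ratings` is a user x item matrix where 0 means 'not rated'."""
    rated = [u for u in range(len(ratings)) if u != user and ratings[u, item] > 0]

    def sim(u):
        # local similarity: cosine restricted to items both users rated
        mask = (ratings[user] > 0) & (ratings[u] > 0)
        if not mask.any():
            return 0.0
        a, b = ratings[user, mask], ratings[u, mask]
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    neighbors = sorted(rated, key=sim, reverse=True)[:k]
    if not neighbors:
        return 0.0
    return float(np.mean([ratings[u, item] for u in neighbors]))

# User 0 has not rated item 2; predict it from the two most similar raters.
ratings = np.array([[5, 4, 0], [5, 4, 3], [4, 5, 4]], dtype=float)
print(lcf_predict(ratings, user=0, item=2))  # -> 3.5
```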
Sun Yat-sen University
Why we think this paper is great for you:
The auction-based approach presented here offers a novel perspective on cost-effective resource allocation, which could be highly relevant to your interests in bidding strategies and optimizing marketing channel spend. It explores efficient interaction within complex systems.
Abstract
Multi-agent systems (MAS) built on large language models (LLMs) often suffer from inefficient "free-for-all" communication, leading to exponential token costs and low signal-to-noise ratios that hinder their practical deployment. We challenge the notion that more communication is always beneficial, hypothesizing instead that the core issue is the absence of resource rationality. We argue that "free" communication, by ignoring the principle of scarcity, inherently breeds inefficiency and unnecessary expenses. To address this, we introduce the Dynamic Auction-based Language Agent (DALA), a novel framework that treats communication bandwidth as a scarce and tradable resource. Specifically, our DALA regards inter-agent communication as a centralized auction, where agents learn to bid for the opportunity to speak based on the predicted value density of their messages. Thus, our DALA intrinsically encourages agents to produce concise, informative messages while filtering out low-value communication. Extensive experiments demonstrate that our economically-driven DALA achieves new state-of-the-art performance across seven challenging reasoning benchmarks, including 84.32% on MMLU and a 91.21% pass@1 rate on HumanEval. Note that this is accomplished with remarkable efficiency, i.e., our DALA uses only 6.25 million tokens, a fraction of the resources consumed by current state-of-the-art methods on GSM8K. Further analysis reveals that our DALA cultivates the emergent skill of strategic silence, effectively adapting its communication strategies from verbosity to silence in a dynamic manner via resource constraints.
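The auction mechanic can be sketched as follows; the value-density bidding rule is our reading of the abstract, and the agent names and budget mechanics are illustrative, not the paper's exact protocol:

```python
def run_auction(bids, bandwidth=1):
    """Toy centralized speaking auction. Each bid is
    (agent_id, predicted_value, token_cost); agents with the highest
    value density (value per token) win the right to speak within the
    bandwidth budget, and the rest stay strategically silent."""
    ranked = sorted(bids, key=lambda b: b[1] / b[2], reverse=True)
    return [agent for agent, _, _ in ranked[:bandwidth]]

# B's short, dense message beats A's longer one despite A's higher raw value.
bids = [("A", 9.0, 30), ("B", 4.0, 10), ("C", 2.0, 40)]
print(run_auction(bids))  # densities: A=0.30, B=0.40, C=0.05 -> ['B']
```

Pricing speech by value per token is what pushes agents toward concise messages: padding a message raises its cost without raising its value, lowering its bid.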
Portland State University
Why we think this paper is great for you:
This paper addresses scalable data access control, a critical aspect for robust data science organizations and effective data management, especially when dealing with sensitive customer information. It provides insights into managing data security and compliance.
Abstract
The proliferation of smart technologies and evolving privacy regulations such as the GDPR and CPRA has increased the need to manage fine-grained access control (FGAC) policies in database management systems (DBMSs). Existing approaches to enforcing FGAC policies do not scale to thousands of policies, leading to degraded query performance and reduced system effectiveness. We present Sieve, a middleware for relational DBMSs that combines query rewriting and caching to optimize FGAC policy enforcement. Sieve rewrites a query with guarded expressions that group and filter policies and can efficiently use indexes in the DBMS. It also integrates a caching mechanism with an effective replacement strategy and a refresh mechanism to adapt to dynamic workloads. Experiments on two DBMSs with real and synthetic datasets show that Sieve scales to large datasets and policy corpora, maintaining low query latency and system load and improving policy evaluation performance by between 2x and 10x on workloads with 200 to 1,200 policies. The caching extension further improves query performance by between 6 and 22 percent under dynamic workloads, especially with larger cache sizes. These results highlight Sieve's applicability for real-time access control in smart environments and its support for efficient, scalable management of user preferences and privacy policies.
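The guarded-expression rewrite can be sketched as follows; the grouping heuristic here (group policies by a shared guard predicate) is a simplification for illustration, not Sieve's actual algorithm:

```python
def rewrite_with_guards(base_query, policies):
    """Toy guarded-expression query rewriting in the spirit of Sieve.
    Each policy is (guard, residual_predicate). Policies sharing a guard
    are grouped so the DBMS can use an index on the guard and skip the
    whole group's residual predicates when the guard fails."""
    groups = {}
    for guard, pred in policies:
        groups.setdefault(guard, []).append(pred)
    guarded = [f"({g} AND ({' OR '.join(ps)}))" for g, ps in groups.items()]
    return f"{base_query} WHERE {' OR '.join(guarded)}"

policies = [
    ("owner = 'alice'", "room = 'lab'"),
    ("owner = 'alice'", "room = 'office'"),
    ("owner = 'bob'",   "hour BETWEEN 9 AND 17"),
]
print(rewrite_with_guards("SELECT * FROM sensor_data", policies))
```

With thousands of policies, evaluating one indexed guard per group instead of every policy's full predicate is where the reported 2x to 10x improvement plausibly comes from.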
Memorial Sloan Kettering
Why we think this paper is great for you:
While focused on a specific domain, the principles of data stewardship and governance discussed in this paper are broadly applicable to establishing ethical frameworks within data science organizations and managing data effectively. It highlights the evolving landscape of data responsibility.
Abstract
Healthcare stands at a critical crossroads. Artificial Intelligence and modern computing are unlocking opportunities, yet their value lies in the data that fuels them. The value of healthcare data is no longer limited to individual patients. However, data stewardship and governance have not kept pace, and privacy-centric policies are hindering both innovation and patient protections. As healthcare moves toward a data-driven future, we must define reformed data stewardship that prioritizes patients' interests by proactively managing modern risks and opportunities while addressing key challenges in cost, efficacy, and accessibility. Current healthcare data policies are rooted in 20th-century legislation shaped by outdated understandings of data, prioritizing perceived privacy over innovation and inclusion. While other industries thrive in a data-driven era, the evolution of medicine remains constrained by regulations that impose social rather than scientific boundaries. Large-scale aggregation is happening, but within opaque, closed systems. As we continue to uphold foundational ethical principles (autonomy, beneficence, nonmaleficence, and justice), there is a growing imperative to acknowledge that they exist in evolving technological, social, and cultural realities. Ethical principles should facilitate, rather than obstruct, dialogue on adapting to meet opportunities and address constraints in medical practice and healthcare delivery. The new ethics of data stewardship places patients first by defining governance that adapts to changing landscapes. It also rejects the legacy of treating perceived privacy as an unquestionable, guiding principle. By proactively redefining data stewardship norms, we can drive an era of medicine that promotes innovation, protects patients, and advances equity, ensuring future generations advance medical discovery and care.

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Paid Search
  • customer relationship management (crm) optimization
  • Marketing Channels
You can edit or add more interests any time.