Hi!

Your personalized paper recommendations for 10–14 November 2025.
Personalization
Abstract
Personalized recommendation systems shape much of user choice online, yet their targeted nature makes it challenging to separate the value of recommendations from that of the underlying goods. We build a discrete choice model that embeds recommendation-induced utility, low-rank heterogeneity, and flexible state dependence, and apply it to viewership data at Netflix. We exploit idiosyncratic variation introduced by the recommendation algorithm to identify and separately value these components, and to recover model-free diversion ratios that we use to validate our structural model. We use the model to evaluate counterfactuals that quantify the incremental engagement generated by personalized recommendations. First, we show that replacing the current recommender system with a matrix factorization or popularity-based algorithm would lead to 4% and 12% reductions in engagement, respectively, as well as decreased consumption diversity. Second, most of the consumption increase from recommendations comes from effective targeting, not mechanical exposure, with the largest gains for mid-popularity goods (as opposed to broadly appealing or very niche goods).
AI Summary
  • The current Netflix RecSys significantly outperforms matrix factorization (4% engagement reduction, 37.5% HHI increase) and popularity-based (12% engagement reduction, 42.5% HHI increase) algorithms, demonstrating substantial value from modern algorithmic advances. [3]
  • Effective targeting accounts for 41.9% of the consumption increase from recommendations, roughly six times the contribution of mechanical exposure (6.8%), and primarily benefits mid-popularity goods. [3]
  • The developed discrete choice model, incorporating recommendation-induced utility and flexible state dependence, accurately reproduces model-free diversion ratios with an R² of 0.73 and a correlation of 0.86, validating its ability to capture substitution patterns. [3]
  • The current RecSys successfully balances high user engagement with maintaining consumption diversity, preventing the concentration of viewership seen with traditional matrix factorization and popularity-based approaches. [3]
  • The framework allows for credible quantification of incremental engagement for specific goods, including new content, by simulating counterfactual catalog changes and incorporating pre-consumption good embeddings for novel items. [3]
  • Recommendation "bonus": An additive utility boost for goods that are recommended, serving as a reduced-form characterization of the positive informational role recommendations play in guiding user choices. [3]
  • Transformer-style architecture for state dependence: An adaptation of machine learning's transformer concept to model user preferences that flexibly adapt over time based on their full consumption history, avoiding explicit user-specific embeddings. [3]
  • Incremental engagement: Defined as the difference in platform engagement when a particular good is available versus when it is removed, quantifiable through counterfactual simulations of choice probabilities. [3]
  • Low-rank discrete choice model: A model in which user preferences for goods are represented by a low-rank decomposition into good and user embeddings, allowing good-level 'characteristics' to be learned endogenously (see the sketch after this list). [2]
  • The model's architecture, which dynamically represents user preferences as a function of past watch history via a sequence model, enables scalable estimation for millions of users without requiring individual user embeddings. [1]
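To make the summary concrete, here is a minimal sketch of a low-rank choice model with an additive recommendation bonus and an outside option. The embeddings, the `beta_rec` coefficient, and the softmax choice rule are illustrative assumptions, not the paper's actual specification:

```python
import numpy as np

rng = np.random.default_rng(0)
n_goods, dim = 500, 16

# Placeholders: good embeddings g_j and a user state h (in the paper this
# state comes from a transformer-style sequence model over watch history).
G = rng.normal(size=(n_goods, dim))
h = rng.normal(size=dim)
recommended = rng.random(n_goods) < 0.05   # which goods the RecSys surfaces
beta_rec = 0.5                             # illustrative "recommendation bonus"

def engagement(G, h, recommended, beta_rec, u_outside=0.0):
    """P(user watches anything): softmax over goods plus an outside option."""
    u = np.concatenate(([u_outside], G @ h + beta_rec * recommended))
    e = np.exp(u - u.max())
    return 1.0 - e[0] / e.sum()            # 1 - P(outside option)

# Incremental engagement of good j: engagement with j in the catalog
# minus engagement with j removed (a counterfactual simulation).
j = 42
keep = np.arange(n_goods) != j
incremental = (engagement(G, h, recommended, beta_rec)
               - engagement(G[keep], h, recommended[keep], beta_rec))
print(incremental)
```

The counterfactual at the end mirrors the summary's definition of incremental engagement: platform engagement with the good available minus engagement with it removed.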
Abstract
People have different creative writing preferences, and large language models (LLMs) for these tasks can benefit from adapting to each user's preferences. However, these models are often trained on datasets that treat varying personal tastes as a monolith. To facilitate developing personalized creative writing LLMs, we introduce LiteraryTaste, a dataset of reading preferences from 60 people, where each person: 1) self-reported their reading habits and tastes (stated preference), and 2) annotated their preferences over 100 pairs of short creative writing texts (revealed preference). With our dataset, we found that: 1) people diverge on creative writing preferences, 2) fine-tuning a transformer encoder could achieve 75.8% and 67.7% accuracy when modeling personal and collective revealed preferences, respectively, and 3) stated preferences had limited utility in modeling revealed preferences. With an LLM-driven interpretability pipeline, we analyzed how people's preferences vary. We hope our work serves as a cornerstone for personalizing creative writing technologies.
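The abstract only says a transformer encoder was fine-tuned on the pairwise annotations; one plausible setup is a Bradley-Terry-style scorer over the two texts in a pair. A sketch under that assumption, using `bert-base-uncased` as a stand-in encoder and invented example texts:

```python
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class PreferenceScorer(nn.Module):
    """Scores a single text; trained so preferred texts score higher."""
    def __init__(self, name="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(name)
        self.head = nn.Linear(self.encoder.config.hidden_size, 1)

    def forward(self, **enc):
        h = self.encoder(**enc).last_hidden_state[:, 0]   # [CLS] token
        return self.head(h).squeeze(-1)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = PreferenceScorer()

# One annotated pair: this (invented) annotator preferred text_a to text_b.
text_a = "The rain kept its own ledger of the town's losses."
text_b = "It was a dark and stormy night in the town."
enc_a = tok(text_a, return_tensors="pt", truncation=True)
enc_b = tok(text_b, return_tensors="pt", truncation=True)

# Bradley-Terry objective: P(a preferred over b) = sigmoid(s_a - s_b).
s_a, s_b = model(**enc_a), model(**enc_b)
loss = -nn.functional.logsigmoid(s_a - s_b).mean()
loss.backward()
```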
Data Science Management
Humboldt University
Abstract
Provenance plays a crucial role in scientific workflow execution, for instance by providing data for failure analysis, real-time monitoring, or statistics on resource utilization for right-sizing allocations. The workflows themselves, however, become increasingly complex in terms of involved components. Furthermore, they are executed on distributed cluster infrastructures, which makes the real-time collection, integration, and analysis of provenance data challenging. Existing provenance systems struggle to balance scalability, real-time processing, online provenance analytics, and integration across different components and compute resources. Moreover, most provenance solutions are not workflow-aware; by focusing on arbitrary workloads, they miss opportunities for workflow systems where optimization and analysis can exploit the availability of a workflow specification that dictates, to some degree, task execution orders and provides abstractions for physical tasks at a logical level. In this paper, we present HyProv, a hybrid provenance management system that combines centralized and federated paradigms to offer scalable, online, and workflow-aware queries over workflow provenance traces. HyProv uses a centralized component for efficient management of the small and stable workflow-specification-specific provenance, and complements this with federated querying over different scalable monitoring and provenance databases for the large-scale execution logs. This enables low-latency access to current execution data. Furthermore, the design supports complex provenance queries, which we exemplify for the workflow system Airflow in combination with the resource manager Kubernetes. Our experiments indicate that HyProv scales to large workflows, answers provenance queries with sub-second latencies, and adds only modest CPU and memory overhead to the cluster.
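A rough sketch of the hybrid pattern HyProv describes: a small central store for the stable workflow-specification provenance, plus federated lookups across per-cluster execution-log backends. The table name, fields, and dict-based backends here are hypothetical placeholders, not HyProv's actual schema:

```python
import sqlite3

# Centralized store: small, stable workflow-specification provenance.
spec_db = sqlite3.connect(":memory:")
spec_db.execute("CREATE TABLE logical_tasks (task TEXT, upstream TEXT)")
spec_db.executemany("INSERT INTO logical_tasks VALUES (?, ?)",
                    [("train", "preprocess"), ("preprocess", "ingest")])

# Federated side: each backend (e.g., a monitoring DB per cluster) answers
# the same narrow query interface; dicts stand in for real connectors here.
backends = [
    {"train": {"node": "k8s-node-3", "cpu_s": 812.4}},
    {"ingest": {"node": "k8s-node-1", "cpu_s": 90.1}},
]

def lineage(task):
    """Walk upstream dependencies via the central spec store."""
    chain, cur = [task], task
    while True:
        row = spec_db.execute(
            "SELECT upstream FROM logical_tasks WHERE task=?", (cur,)).fetchone()
        if row is None:
            return chain
        cur = row[0]
        chain.append(cur)

def execution_record(task):
    """Federated lookup: ask every backend, return the first hit."""
    for b in backends:
        if task in b:
            return b[task]
    return None

# Hybrid query: lineage from the central store, metrics from federated logs.
for t in lineage("train"):
    print(t, execution_record(t))
```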
Presidency University
Abstract
We present an AI-powered data visualization platform that automates the entire data analysis process, from uploading a dataset to generating an interactive visualization. Machine learning algorithms are employed to clean and preprocess the data, analyze its features, and automatically select appropriate visualizations. The system automates AI-based analysis and visualization in data-driven environments, eliminating time-consuming manual data analysis. A Python Flask backend for dataset access, paired with a React frontend, provides a robust platform that interacts with Firebase Cloud Storage for data processing, data analysis, and real-time sources. Key contributions include automatic, intelligent data cleaning, with imputation for missing values and detection of outliers via analysis of the dataset; intelligent feature selection using four different algorithms; and intelligent title generation and visualization choices determined by the attributes of the dataset. These contributions were evaluated on two separate datasets to assess the platform's performance. In the evaluation, the initial analysis ran in real time on datasets as large as 100,000 rows, while the cloud-based platform scaled to serve multiple users and process their requests simultaneously. In conclusion, the cloud-based data visualization application significantly reduced the manual input required for data analysis while maintaining high-quality, impactful visual outputs and user experiences.
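The cleaning step described above (imputation plus outlier detection) can be illustrated in a few lines of pandas. The platform's actual algorithms are not specified, so median imputation and 1.5×IQR fences serve as stand-ins:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({"age": [23, 31, np.nan, 29, 120],
                   "income": [41000, 52000, 48000, np.nan, 47000]})

def auto_clean(df):
    out = df.copy()
    for col in out.select_dtypes(include="number"):
        # Imputation: fill missing values with the column median.
        out[col] = out[col].fillna(out[col].median())
        # Outlier detection: flag values outside the 1.5*IQR fences.
        q1, q3 = out[col].quantile([0.25, 0.75])
        iqr = q3 - q1
        out[f"{col}_outlier"] = ~out[col].between(q1 - 1.5 * iqr,
                                                  q3 + 1.5 * iqr)
    return out

print(auto_clean(df))   # age 120 gets flagged as an outlier
```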
Marketing Channels
Abstract
This paper investigates a key challenge faced by joint source-channel coding (JSCC) in digital semantic communication (SemCom): the incompatibility between existing JSCC schemes, which yield continuous encoded representations, and digital systems, which employ discrete variable-length codewords. This incompatibility makes it infeasible to achieve physical bit-level rate control with such JSCC approaches for efficient semantic transmission. We propose a novel end-to-end coding (E2EC) framework to tackle this problem. The semantic coding problem is formulated by extending information bottleneck (IB) theory to noisy channels, yielding a tradeoff between bit-level communication rate and semantic distortion. By structurally decomposing encoding to handle code length and content separately, we construct an end-to-end trainable encoder that supports direct compression of a data source into a finite codebook. To optimize E2EC across non-differentiable operations, e.g., sampling, we use policy gradients to support gradient-based updates. Experimental results show that E2EC achieves high inference quality at low bit rates, outperforming representative baselines compatible with digital SemCom systems.
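The key training trick, pushing gradients through the non-differentiable codeword sampling, corresponds to the standard score-function (REINFORCE) estimator. A toy PyTorch sketch with a made-up reward; E2EC's real reward would come from decoding the channel-corrupted codeword:

```python
import torch
from torch.distributions import Categorical

feat_dim, codebook_size = 8, 16
encoder = torch.nn.Linear(feat_dim, codebook_size)   # logits over codewords
opt = torch.optim.Adam(encoder.parameters(), lr=1e-2)

x = torch.randn(32, feat_dim)            # a batch of source features
dist = Categorical(logits=encoder(x))
codes = dist.sample()                    # non-differentiable sampling step

# Made-up reward standing in for negative semantic distortion; in E2EC it
# would come from decoding the (channel-corrupted) discrete codeword.
reward = -(codes.float() / codebook_size - x.mean(dim=1)) ** 2

# Score-function (REINFORCE) estimator: grad E[r] = E[r * grad log p(code)].
loss = -(reward.detach() * dist.log_prob(codes)).mean()
opt.zero_grad()
loss.backward()
opt.step()
```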
Bidding
University of Bergen
Abstract
In diffusion auctions, sellers can leverage an underlying social network to broaden participation, thereby increasing their potential revenue. Specifically, sellers can incentivise participants in their auction to diffuse information about the auction through the network. While numerous variants of such auctions have recently been studied in the literature, the formal verification and strategic reasoning perspectives have not been investigated yet. Our contribution is threefold. First, we introduce a logical formalism that captures the dynamics of diffusion and its strategic dimension. Second, for this logic, we provide model-checking procedures that allow one to verify properties such as Nash equilibrium, and that pave the way towards checking the existence of sellers' strategies. Third, we establish computational complexity results for the presented algorithms.
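The Nash-equilibrium property mentioned above can be illustrated by the brute-force check a model-checking procedure in effect performs on a small game. The payoff matrix below is a toy two-player example, not a diffusion auction:

```python
from itertools import product

# Toy two-player game: strategies 0/1 each; payoffs[profile] = (u1, u2).
payoffs = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}
strategies = [0, 1]

def is_nash(profile):
    """No player can improve by unilaterally deviating."""
    for i in (0, 1):
        for dev in strategies:
            alt = list(profile)
            alt[i] = dev
            if payoffs[tuple(alt)][i] > payoffs[profile][i]:
                return False
    return True

equilibria = [p for p in product(strategies, repeat=2) if is_nash(p)]
print(equilibria)   # [(1, 1)] for this prisoner's-dilemma-style game
```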
CSIC
Abstract
This book is an introductory textbook targeted at computer science students who are completely new to the topic of automated negotiation. It requires no prerequisite knowledge beyond elementary mathematics and basic programming skills. The book comes with a simple toy-world negotiation framework implemented in Python that readers can use to implement their own negotiation algorithms and perform experiments with them. This framework is small and simple enough that any reader who prefers not to work in Python should be able to re-implement it very quickly in any other programming language of their choice.
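The book's framework itself is not reproduced here, but a hypothetical minimal alternating-offers loop conveys the kind of toy world the description suggests (agent names, concession schedule, and reservation values are all invented):

```python
def alternating_offers(rounds=10):
    """Two agents split a pie of size 1.0 by alternating offers."""
    reservation = {"A": 0.35, "B": 0.40}   # private acceptance thresholds
    for r in range(rounds):
        proposer, responder = ("A", "B") if r % 2 == 0 else ("B", "A")
        demand = 0.9 - 0.4 * r / (rounds - 1)   # concede over time
        offer = 1.0 - demand
        if offer >= reservation[responder]:     # responder accepts
            return {proposer: demand, responder: offer}
    return None  # deadline reached, no deal

print(alternating_offers())   # B proposes in round 8; A accepts ~0.41
```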
Attribution
Fuzhou University
Abstract
Explanation-guided learning (EGL) has shown promise in aligning model predictions with interpretable reasoning, particularly in computer vision tasks. However, most approaches rely on external annotations or heuristic-based segmentation to supervise model explanations, which can be noisy, imprecise, and difficult to scale. In this work, we provide both empirical and theoretical evidence that low-quality supervision signals can degrade model performance rather than improve it. In response, we propose ALIGN, a novel framework that jointly trains a classifier and a masker in an iterative manner. The masker learns to produce soft, task-relevant masks that highlight informative regions, while the classifier is optimized for both prediction accuracy and alignment between its saliency maps and the learned masks. By leveraging high-quality masks as guidance, ALIGN improves both interpretability and generalizability, showing its superiority across various settings. Experiments on two domain generalization benchmarks, VLCS and Terra Incognita, show that ALIGN consistently outperforms six strong baselines in both in-distribution and out-of-distribution settings. In addition, ALIGN yields superior explanation quality in terms of sufficiency and comprehensiveness, highlighting its effectiveness in producing accurate and interpretable models.
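A compact sketch of the joint training idea: the masker emits a soft mask, and the classifier is trained on the masked input for accuracy plus agreement between its input-gradient saliency and the mask. The architectures and loss weights here are placeholders, not ALIGN's actual objective:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

masker = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
                       nn.Conv2d(8, 1, 3, padding=1), nn.Sigmoid())
classifier = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                           nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                           nn.Linear(16, 10))
opt = torch.optim.Adam(list(masker.parameters()) + list(classifier.parameters()))

x = torch.randn(4, 3, 32, 32, requires_grad=True)   # toy image batch
y = torch.randint(0, 10, (4,))

mask = masker(x)                      # soft, task-relevant mask in [0, 1]
logits = classifier(x * mask)         # classify the masked input
task_loss = F.cross_entropy(logits, y)

# Saliency map: gradient of the true-class score w.r.t. the input pixels.
score = logits.gather(1, y.unsqueeze(1)).sum()
saliency = torch.autograd.grad(score, x, create_graph=True)[0]
saliency = saliency.abs().mean(dim=1, keepdim=True)

# Alignment term: the classifier's saliency should agree with the mask.
align_loss = F.mse_loss(saliency / (saliency.amax() + 1e-8), mask)

opt.zero_grad()
(task_loss + 0.1 * align_loss).backward()
opt.step()
```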
Abstract
The proliferation of complex, black-box AI models has intensified the need for techniques that can explain their decisions. Feature attribution methods have become a popular solution for providing post-hoc explanations, yet the field has historically lacked a formal problem definition. This paper addresses this gap by introducing a formal definition for the problem of feature attribution, which stipulates that explanations be supported by an underlying probability distribution represented by the given dataset. Our analysis reveals that many existing model-agnostic methods fail to meet this criterion, while even those that do often possess other limitations. To overcome these challenges, we propose Distributional Feature Attribution eXplanations (DFAX), a novel, model-agnostic method for feature attribution. DFAX is the first feature attribution method to explain classifier predictions directly based on the data distribution. We show through extensive experiments that DFAX is more effective and efficient than state-of-the-art baselines.
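The abstract does not spell out DFAX's mechanics, so the sketch below shows only the general flavor of distribution-grounded attribution: score feature i by the prediction shift when x_i is replaced with draws from the dataset's empirical marginal. This is a generic baseline for illustration, not DFAX itself:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # the dataset stands in for the distribution

def model(X):
    """Toy classifier: feature 2 is irrelevant by construction."""
    return (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)

def marginal_attribution(x, X, model, n=200):
    """Score feature i by the mean prediction shift when x_i is replaced
    with draws from the dataset's empirical marginal (a generic
    distribution-grounded baseline, not DFAX's actual procedure)."""
    base = model(x[None, :])[0]
    attr = np.zeros(x.shape[0])
    for i in range(x.shape[0]):
        samples = np.tile(x, (n, 1))
        samples[:, i] = rng.choice(X[:, i], size=n)
        attr[i] = base - model(samples).mean()
    return attr

print(marginal_attribution(np.array([1.0, -0.2, 0.5]), X, model))
```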

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Paid Search
  • customer relationship management (crm) optimization
  • Direction on Data Science Organizations
You can edit or add more interests any time.