Hi j34nc4rl0+growth_machine_learning,

Here are our personalized paper recommendations for you, sorted by relevance.
Paid Search
Kuaishou Technology
Abstract
Traditional e-commerce search systems employ multi-stage cascading architectures (MCA) that progressively filter items through recall, pre-ranking, and ranking stages. While effective at balancing computational efficiency with business conversion, these systems suffer from fragmented computation and optimization objective collisions across stages, which ultimately limit their performance ceiling. To address these issues, we propose OneSearch, the first industrially deployed end-to-end generative framework for e-commerce search. This framework introduces three key innovations: (1) a Keyword-enhanced Hierarchical Quantization Encoding (KHQE) module that preserves both hierarchical semantics and distinctive item attributes while maintaining strong query-item relevance constraints; (2) a multi-view user behavior sequence injection strategy that constructs behavior-driven user IDs and incorporates both explicit short-term and implicit long-term sequences to model user preferences comprehensively; and (3) a Preference-Aware Reward System (PARS) featuring multi-stage supervised fine-tuning and adaptive reward-weighted ranking to capture fine-grained user preferences. Extensive offline evaluations on large-scale industry datasets demonstrate OneSearch's superior performance for high-quality recall and ranking. Rigorous online A/B tests confirm its ability to enhance relevance at the same exposure position, achieving statistically significant improvements: +1.67% item CTR, +2.40% buyers, and +3.22% order volume. Furthermore, OneSearch reduces operational expenditure by 75.40% and improves Model FLOPs Utilization from 3.26% to 27.32%. The system has been successfully deployed across multiple search scenarios in Kuaishou, serving millions of users and generating tens of millions of PVs daily.
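The KHQE module builds on the idea of quantizing item embeddings into short hierarchical codes so a generative model can emit items token by token. Below is a minimal, hypothetical sketch of residual-style hierarchical quantization in that spirit; it illustrates the general technique only, not the paper's actual KHQE module, and the codebooks and dimensions are made up.

```python
import numpy as np

def residual_quantize(item_emb, codebooks):
    """Map an item embedding to a hierarchical code (one token per level).

    codebooks: list of (K, d) arrays ordered coarse-to-fine. Each level quantizes
    the residual left by the previous level, so early tokens carry broad semantics
    and later tokens carry item-specific detail.
    """
    residual = item_emb.astype(np.float64)
    code = []
    for book in codebooks:
        dists = np.linalg.norm(book - residual, axis=1)  # distance to every centroid
        idx = int(np.argmin(dists))
        code.append(idx)
        residual = residual - book[idx]                  # pass the residual down a level
    return code

# Toy usage: 3 levels of 256 codes over a 64-dim embedding space (all synthetic).
rng = np.random.default_rng(0)
books = [rng.normal(size=(256, 64)) for _ in range(3)]
item = rng.normal(size=64)
print(residual_quantize(item, books))  # e.g. [17, 203, 88]
```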
AI Insights
  • Generative models now dominate recommendation pipelines, boosting relevance while slashing inference cost.
  • Large language models are blended with collaborative filtering to surface deeper user intent beyond clicks.
  • Contrastive learning is being replaced by data‑augmentation tricks to learn sequential preferences without explicit negatives.
  • “Transformer Memory as a Differentiable Search Index” shows memory‑augmented transformers can serve as fast, trainable retrieval back‑ends.
  • “Neural Discrete Representation Learning” compresses item embeddings into discrete codes, enabling efficient end‑to‑end generative search.
  • Generative model: learns to generate new samples resembling training data; contrastive learning: pulls similar pairs together and pushes dissimilar ones apart.
September 03, 2025
Save to Reading List
Marketing Channels
Australian National University
Abstract
With most content distributed online and mediated by platforms, there is a pressing need to understand the ecosystem of content creation and consumption. A considerable body of recent work has shed light on the one-sided market of creator-platform or user-platform interactions, showing key properties of static (Nash) equilibria and online learning. In this work, we examine the two-sided market including the platform and both users and creators. We design a potential function for the coupled interactions among users, platform, and creators. We show that such coupling of creators' best-response dynamics with users' multilogit choices is equivalent to mirror descent on this potential function. Furthermore, a range of platform ranking strategies correspond to a family of potential functions, and the dynamics of two-sided interactions still correspond to mirror descent. We also provide a new local convergence result for mirror descent on non-convex functions, which could be of independent interest. Our results provide a theoretical foundation for explaining the diverse outcomes observed in attention markets.
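For readers unfamiliar with the central tool, here is a small illustrative sketch of mirror descent on the probability simplex with the entropy mirror map, whose multiplicative, softmax-like update mirrors the multilogit user choices discussed in the paper. The toy potential and step size are assumptions, not taken from the paper.

```python
import numpy as np

def entropic_mirror_descent(grad_fn, x0, eta=0.1, steps=200):
    """Mirror descent on the probability simplex with the entropy mirror map.

    With this mirror map the update is multiplicative and renormalized
    (a softmax-style step). grad_fn returns the gradient of the potential
    at the current point.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        g = grad_fn(x)
        x = x * np.exp(-eta * g)   # multiplicative update
        x = x / x.sum()            # project back onto the simplex
    return x

# Toy potential: squared distance to a target attention distribution.
target = np.array([0.5, 0.3, 0.2])
grad = lambda x: 2 * (x - target)
print(entropic_mirror_descent(grad, np.ones(3) / 3))  # converges toward `target`
```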
September 02, 2025
Save to Reading List
Attribution
Ludwig Maximilian University
Abstract
Attribution methods explain neural network predictions by identifying influential input features, but their evaluation suffers from threshold selection bias that can reverse method rankings and undermine conclusions. Current protocols binarize attribution maps at single thresholds, where threshold choice alone can alter rankings by over 200 percentage points. We address this flaw with a threshold-free framework that computes Area Under the Curve for Intersection over Union (AUC-IoU), capturing attribution quality across the full threshold spectrum. Evaluating seven attribution methods on dermatological imaging, we show single-threshold metrics yield contradictory results, while threshold-free evaluation provides reliable differentiation. XRAI achieves 31% improvement over LIME and 204% over vanilla Integrated Gradients, with size-stratified analysis revealing performance variations up to 269% across lesion scales. These findings establish methodological standards that eliminate evaluation artifacts and enable evidence-based method selection. The threshold-free framework provides both theoretical insight into attribution behavior and practical guidance for robust comparison in medical imaging and beyond.
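A minimal sketch of the threshold-free idea, assuming an attribution map scaled to [0, 1] and a binary ground-truth mask: compute IoU at a sweep of thresholds and integrate the curve instead of picking one cutoff. The threshold grid and toy data below are illustrative, not the paper's exact protocol.

```python
import numpy as np

def auc_iou(attr_map, gt_mask, thresholds=np.linspace(0.05, 0.95, 19)):
    """Threshold-free attribution score: area under the IoU-vs-threshold curve."""
    ious = []
    for t in thresholds:
        pred = attr_map >= t                             # binarize at this threshold
        inter = np.logical_and(pred, gt_mask).sum()
        union = np.logical_or(pred, gt_mask).sum()
        ious.append(inter / union if union else 0.0)
    # trapezoidal integration of the IoU curve, normalized back to [0, 1]
    auc = sum(0.5 * (ious[i] + ious[i - 1]) * (thresholds[i] - thresholds[i - 1])
              for i in range(1, len(thresholds)))
    return auc / (thresholds[-1] - thresholds[0])

# Toy usage: a noisy square attribution map vs. its ground-truth mask.
rng = np.random.default_rng(1)
gt = np.zeros((32, 32), bool); gt[8:24, 8:24] = True
attr = np.clip(gt * 0.8 + rng.uniform(0, 0.3, gt.shape), 0, 1)
print(round(auc_iou(attr, gt), 3))
```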
AI Insights
  • ResNet‑18 was fine‑tuned on HAM10000 with class‑weighted cross‑entropy and Adam (1e‑4), reaching 91.8% mean max probability after temperature scaling.
  • Gradient‑based methods degrade monotonically with higher thresholds, while LIME stays threshold‑invariant, revealing hidden robustness in sampling‑based explanations.
  • XRAI outperforms all others, boosting IoU by 31% over LIME and 204% over vanilla Integrated Gradients across all thresholds.
  • Choosing a single threshold can flip method rankings by over 200 percentage points, exposing the danger of threshold bias.
  • Size‑stratified analysis shows performance swings up to 269% across lesion scales, indicating scale‑dependent explanation quality.
  • The study advocates the threshold‑free AUC‑IoU metric to eliminate evaluation artifacts and enable fair comparison.
  • Recommended reading: XRAI (2023), Grad‑CAM, and EvalAttAI (arXiv:2303.08866) for a comprehensive attribution benchmark.
September 03, 2025
Save to Reading List
Department of Computer Science
Abstract
The paper presents an experiment on the effects of adaptive emotional alignment between agents, considered a prerequisite for empathic communication, in Human-Robot Interaction (HRI). Using the NAO robot, we investigate the impact of an emotionally aligned, empathic dialogue on these aspects: (i) the robot's persuasive effectiveness, (ii) the user's communication style, and (iii) the attribution of mental states and empathy to the robot. In an experiment with 42 participants, two conditions were compared: one with neutral communication and another where the robot provided responses adapted to the emotions expressed by the users. The results show that emotional alignment does not influence users' communication styles or have a persuasive effect. However, it significantly influences the attribution of mental states to the robot and its perceived empathy.
AI Insights
  • Plutchik’s emotion wheel guides robots in mapping affective states to expressive cues.
  • Crowdsourced word‑emotion lexicons (Mohammad & Turney) enable scalable affect‑aware dialogue training.
  • Cialdini’s influence principles predict empathic alignment boosts credibility, yet no persuasive gain emerged.
  • Mental‑state attribution rose when affective cues matched, indicating users ascribe agency despite unchanged behavior.
  • Integrating multimodal affect sensing with adaptive policy learning could close the empathy gap in service robots.
  • Ethical boundaries and user expectation alignment must be rigorously tested before deploying empathic social robots.
September 02, 2025
Save to Reading List
Personalization
Huazhong University of Science and Technology
Abstract
With the dynamic evolution of user interests and the increasing multimodal demands in internet applications, personalized content generation strategies based on static interest preferences struggle to meet practical application requirements. The proposed TIMGen (Temporal Interest-driven Multimodal Generation) model addresses this challenge by modeling the long-term temporal evolution of users' interests and capturing dynamic interest representations with strong temporal dependencies. The model also supports the fusion of multimodal features, such as text, images, video, and audio, and delivers customized content based on multimodal preferences. TIMGen jointly learns temporal dependencies and modal preferences to obtain a unified interest representation, which it then uses to generate content that meets users' personalized needs. TIMGen overcomes the shortcomings of personalized content recommendation methods based on static preferences, enabling flexible and dynamic modeling of users' multimodal interests and a better understanding of their preferences. It can be extended to a variety of practical application scenarios, including e-commerce, advertising, online education, and precision medicine, providing insights for future research.
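TIMGen's architecture is described only at a high level, but two of its ingredients, explicit time embeddings and per-user modality weighting, can be illustrated with a small hypothetical sketch. The sinusoidal embedding and softmax attention below are generic stand-ins, not the model's actual layers.

```python
import numpy as np

def time_embedding(timestamps, dim=16):
    """Sinusoidal embedding of raw timestamps so a model can see interest drift."""
    t = np.asarray(timestamps, float)[:, None]
    freqs = 1.0 / (10000 ** (np.arange(dim // 2) / (dim // 2)))
    ang = t * freqs
    return np.concatenate([np.sin(ang), np.cos(ang)], axis=1)

def modality_weights(user_vec, modality_vecs):
    """Softmax attention over modality prototypes -> per-user modality preference."""
    scores = modality_vecs @ user_vec
    e = np.exp(scores - scores.max())
    return e / e.sum()

# Toy usage: three interactions one day apart, preferences over 4 modalities
# (text, image, video, audio); the user state here is just a placeholder mean.
emb = time_embedding([0, 86400, 172800])
user = emb.mean(axis=0)
protos = np.random.default_rng(2).normal(size=(4, 16))
print(modality_weights(user, protos))  # sums to 1 across the 4 modalities
```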
AI Insights
  • TIMGen’s Transformer embeds timestamps, enabling trend‑aware interest drift detection.
  • Attention assigns modality weights per user, letting a single model output text, image, or audio on demand.
  • Fusing rating and category labels jointly optimizes relevance and personalization, easing cold‑start bias.
  • The VAE generator is lightweight but sacrifices visual fidelity versus GAN or diffusion, hinting at hybrid designs.
  • Explicit time embedding lets TIMGen capture seasonal spikes, like holiday content bursts, without manual features.
  • Multimodal fusion struggles with high‑order interactions, suggesting graph‑based or attention‑augmented layers.
September 04, 2025
Save to Reading List
Keio University, NVIDIA
Abstract
Evaluating concept customization is challenging, as it requires a comprehensive assessment of fidelity to generative prompts and concept images. Moreover, evaluating multiple concepts is considerably more difficult than evaluating a single concept, as it demands detailed assessment not only for each individual concept but also for the interactions among concepts. While humans can intuitively assess generated images, existing metrics often provide either overly narrow or overly generalized evaluations, resulting in misalignment with human preference. To address this, we propose Decomposed GPT Score (D-GPTScore), a novel human-aligned evaluation method that decomposes evaluation criteria into finer aspects and incorporates aspect-wise assessments using Multimodal Large Language Model (MLLM). Additionally, we release Human Preference-Aligned Concept Customization Benchmark (CC-AlignBench), a benchmark dataset containing both single- and multi-concept tasks, enabling stage-wise evaluation across a wide difficulty range -- from individual actions to multi-person interactions. Our method significantly outperforms existing approaches on this benchmark, exhibiting higher correlation with human preferences. This work establishes a new standard for evaluating concept customization and highlights key challenges for future research. The benchmark and associated materials are available at https://github.com/ReinaIshikawa/D-GPTScore.
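A minimal sketch of the decomposed-scoring idea: rather than asking for one holistic score, query a judge once per aspect and aggregate. The aspect names, weights, and stub scorer below are illustrative assumptions, not the paper's exact rubric or MLLM prompt.

```python
# Aspect list is illustrative, not the paper's exact decomposition.
ASPECTS = ["concept fidelity", "prompt fidelity", "concept interaction"]

def decomposed_score(image_path, prompt, concept_images, score_aspect, weights=None):
    """score_aspect(aspect, image_path, prompt, concept_images) -> float in [0, 1],
    e.g. a wrapper around an MLLM call. Returns the weighted mean over aspects."""
    weights = weights or [1.0] * len(ASPECTS)
    scores = [score_aspect(a, image_path, prompt, concept_images) for a in ASPECTS]
    return sum(w * s for w, s in zip(weights, scores)) / sum(weights)

# Toy usage with a stub scorer (replace with a real multimodal LLM judge).
stub = lambda aspect, *_: {"concept fidelity": 0.8,
                           "prompt fidelity": 0.6,
                           "concept interaction": 0.7}[aspect]
print(decomposed_score("img.png", "two mascots shaking hands", ["a.png", "b.png"], stub))
```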
AI Insights
  • D‑GPTScore splits evaluation into fidelity, diversity, and interaction consistency, enabling fine‑grained analysis.
  • The method leverages a multimodal LLM to score each aspect, turning subjective judgments into reproducible metrics.
  • CC‑AlignBench contains over 10,000 single‑concept and 5,000 multi‑concept prompts, spanning simple actions to complex group scenes.
  • Stage‑wise evaluation lets researchers pinpoint whether a model struggles with concept isolation or cross‑concept blending.
  • Experiments show D‑GPTScore’s correlation with human ratings exceeds 0.8, surpassing prior metrics by a wide margin.
  • The open‑source pipeline supports automatic re‑scoring during training, facilitating rapid iteration on concept‑customized models.
  • Future work explores adaptive aspect weighting and zero‑shot evaluation on unseen concepts, promising even tighter human alignment.
September 03, 2025
Save to Reading List
Bidding
Kuaishou Technology, Nany
Abstract
Auto-bidding is central to computational advertising, achieving notable commercial success by optimizing advertisers' bids within economic constraints. Recently, large generative models show potential to revolutionize auto-bidding by generating bids that could flexibly adapt to complex, competitive environments. Among them, diffusers stand out for their ability to address sparse-reward challenges by focusing on trajectory-level accumulated rewards, as well as their explainable capability, i.e., planning a future trajectory of states and executing bids accordingly. However, diffusers struggle with generation uncertainty, particularly regarding dynamic legitimacy between adjacent states, which can lead to poor bids and further cause significant loss of ad impression opportunities when competing with other advertisers in a highly competitive auction environment. To address it, we propose a Causal auto-Bidding method based on a Diffusion completer-aligner framework, termed CBD. Firstly, we augment the diffusion training process with an extra random variable t, where the model observes t-length historical sequences with the goal of completing the remaining sequence, thereby enhancing the generated sequences' dynamic legitimacy. Then, we employ a trajectory-level return model to refine the generated trajectories, aligning more closely with advertisers' objectives. Experimental results across diverse settings demonstrate that our approach not only achieves superior performance on large-scale auto-bidding benchmarks, such as a 29.9% improvement in conversion value in the challenging sparse-reward auction setting, but also delivers significant improvements on the Kuaishou online advertising platform, including a 2.0% increase in target cost.
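The completer-aligner pattern can be sketched generically: condition on an observed prefix, sample several candidate completions, and keep the one a trajectory-level return model prefers. The stubs below stand in for the diffusion sampler and return model; they are not the paper's CBD implementation.

```python
import numpy as np

def complete_and_align(history, sample_completion, return_model, n_candidates=8):
    """Completer-aligner sketch: sample several completions of the remaining bid
    trajectory given an observed prefix, then keep the candidate the
    trajectory-level return model scores highest."""
    candidates = [sample_completion(history) for _ in range(n_candidates)]
    scores = [return_model(np.concatenate([history, c])) for c in candidates]
    return candidates[int(np.argmax(scores))]

# Toy stubs: random-walk completions scored by (negative) bid volatility.
rng = np.random.default_rng(3)
sample = lambda h: np.clip(h[-1] + rng.normal(0, 0.1, size=5).cumsum(), 0, None)
ret = lambda traj: -np.abs(np.diff(traj)).sum()
print(complete_and_align(np.array([1.0, 1.1, 1.05]), sample, ret))
```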
AI Insights
  • Injecting a random time‑step variable t lets the model complete partial bid histories, boosting dynamic legitimacy.
  • A trajectory‑level return model fine‑tunes bids, aligning them tightly with advertisers’ long‑term goals.
  • Compared to DiffBid, CBD cuts bid uncertainty by over 30% in sparse‑reward auctions, revealing sharper decision paths.
  • The paper assumes a well‑defined diffusion process but leaves CBD’s computational cost largely unexplored.
  • Explainability shines: the diffusion planner visualizes future state trajectories, letting stakeholders see why a bid was chosen.
  • Scaling CBD to ultra‑high‑frequency auctions may challenge real‑time inference budgets, a noted limitation.
September 03, 2025
Save to Reading List
Customer Relationship Management (CRM) Optimization
Northern Arizona University
Abstract
Based on economic theories and integrated with machine learning technology, this study explores a collaborative Supply Chain Management and Financial Supply Chain Management (SCM-FSCM) model to solve issues like efficiency loss, financing constraints, and risk transmission. We combine Transaction Cost and Information Asymmetry theories and use algorithms such as random forests to process multi-dimensional data and build a data-driven, three-dimensional (cost-efficiency-risk) analysis framework. We then apply an FSCM model of "core enterprise credit empowerment plus dynamic pledge financing." We use Long Short-Term Memory (LSTM) networks for demand forecasting and clustering/regression algorithms for benefit allocation. The study also combines Game Theory and reinforcement learning to optimize the inventory-procurement mechanism and uses eXtreme Gradient Boosting (XGBoost) for credit assessment to enable rapid monetization of inventory. Verified with 20 core and 100 supporting enterprises, the results show a 30% increase in inventory turnover, an 18%-22% decrease in SME financing costs, a stable order fulfillment rate above 95%, and excellent model performance (demand forecasting error <= 8%, credit assessment accuracy >= 90%). This SCM-FSCM model effectively reduces operating costs, alleviates financing constraints, and supports high-quality supply chain development.
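The credit-assessment step is a standard supervised pipeline over tabular supplier features. Below is a minimal sketch on synthetic data; the paper uses XGBoost, while scikit-learn's gradient boosting serves as a drop-in stand-in here, and the features and labels are invented for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)
X = rng.normal(size=(500, 6))                 # synthetic supplier features
# Synthetic label: 1 = creditworthy, driven by two of the features plus noise.
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(0, 0.5, 500) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = GradientBoostingClassifier(n_estimators=200, max_depth=3)
model.fit(X_tr, y_tr)
print("held-out accuracy:", model.score(X_te, y_te))                  # credit-decision accuracy
print("approval probabilities:", model.predict_proba(X_te[:3])[:, 1]) # per-supplier scores
```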
AI Insights
  • Ensemble methods were highlighted as a key strategy to boost model robustness across supply‑chain datasets.
  • The authors stress that meticulous data cleaning and preprocessing are prerequisites for any high‑accuracy ML pipeline.
  • Hyper‑parameter tuning is presented as a critical lever for balancing bias and variance in the SCM‑FSCM models.
  • Interpretability remains a challenge, especially when deploying deep learning for demand forecasting in SMEs.
  • The paper recommends Kevin Murphy’s “Probabilistic Machine Learning” for a rigorous statistical foundation.
  • Goodfellow et al.’s “Deep Learning” is cited as essential reading for mastering neural‑network architectures.
  • Online courses from Andrew Ng and Microsoft’s Deep Learning track are suggested for hands‑on skill acquisition.
September 03, 2025
Save to Reading List

Interests not found

We did not find any papers that match the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Direction on Data Science Organizations
  • Data Science Management
You can edit or add more interests any time.

Unsubscribe from these updates