Hi j34nc4rl0+growth_machine_learning,

Here are our personalized paper recommendations, sorted by relevance.
Data Science Management
Abstract
As data science emerges as a distinct academic discipline, introductory data science (IDS) courses have drawn attention for their role in providing students with foundational knowledge of the field. IDS courses not only help students transition to higher education but also expose them to data science, often for the first time. They are often taught by instructors without formal training in data science or pedagogy, creating a unique context for examining their pedagogical content knowledge (PCK). This study explores IDS instructors' PCK, particularly how instructors' varied backgrounds interact with their instructional practices. Employing empirical phenomenological methodology, we conducted semi-structured interviews to understand the nature of their PCK. Comparing instructors' PCK was inherently challenging due to their diverse backgrounds and teaching contexts. Prior experiences played a central role in shaping participants' instructional choices. Their perceptions of the goals and rationale for teaching data science reflected three distinct orientations. Instructors also acknowledged that students entering IDS courses often brought preconceived notions that shaped their learning experiences. Despite the absence of national guidelines, participants demonstrated notable overlap in foundational IDS content, though some instructors felt less confident with advanced or specialized topics. Additionally, instructors commonly employed formative and summative assessment approaches, though few explicitly labeled their practices using these terms. The findings highlight key components of PCK in IDS and offer insights into supporting instructor development through targeted training and curriculum design. This work contributes to ongoing efforts to build capacity in data science education and expand the scope of PCK research into new interdisciplinary domains.
Abstract
We present Datarus-R1-14B, a 14B-parameter open-weights language model fine-tuned from Qwen 2.5-14B-Instruct to act as a virtual data analyst and graduate-level problem solver. Datarus is trained not on isolated question-answer pairs but on full analytical trajectories, including reasoning steps, code execution, error traces, self-corrections, and final conclusions, all captured in a ReAct-style notebook format spanning finance, medicine, numerical analysis, and other quantitative domains. Our training pipeline combines (i) a trajectory-centric synthetic data generator that yielded 144,000 tagged notebook episodes, (ii) a dual-reward framework blending a lightweight tag-based structural signal with a Hierarchical Reward Model (HRM) that scores both single-step soundness and end-to-end coherence, and (iii) a memory-optimized implementation of Group Relative Policy Optimization (GRPO) featuring KV-cache reuse, sequential generation, and reference-model sharding. A cosine curriculum smoothly shifts emphasis from structural fidelity to semantic depth, reducing the format collapse and verbosity that often plague RL-aligned LLMs. A central design choice in Datarus is its dual reasoning interface. In agentic mode the model produces ReAct-tagged steps that invoke Python tools to execute real code; in reflection mode it outputs compact Chain-of-Thought (CoT) traces delimited by dedicated opening and closing tags. On demanding postgraduate-level problems, Datarus exhibits an "AHA-moment" pattern: it sketches hypotheses, revises them once or twice, and converges, avoiding the circular, token-inflating loops common to contemporary systems. Across standard public benchmarks, Datarus surpasses similarly sized models and even reaches the level of larger reasoning models such as QwQ-32B, achieving up to 30% higher accuracy on AIME 2024/2025 and LiveCodeBench while emitting 18-49% fewer tokens per solution.
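
To make the cosine curriculum concrete, here is a minimal Python sketch of how a blended reward might shift emphasis from structural fidelity to semantic depth over training. The callables structural_reward and hrm_score are illustrative placeholders, not the paper's implementation.

    import math

    def cosine_weight(step: int, total_steps: int) -> float:
        """Weight on the structural reward: decays from 1 to 0 on a cosine schedule."""
        return 0.5 * (1.0 + math.cos(math.pi * step / total_steps))

    def blended_reward(episode, step, total_steps, structural_reward, hrm_score):
        """Blend a tag-based structural signal with an HRM semantic score.

        Early in training the structural term dominates (format fidelity);
        later the HRM term dominates (semantic depth).
        """
        w = cosine_weight(step, total_steps)
        return w * structural_reward(episode) + (1.0 - w) * hrm_score(episode)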
Paid Search
Abstract
Test-time scaling (TTS) for large language models (LLMs) has thus far fallen into two largely separate paradigms: (1) reinforcement learning (RL) methods that optimize sparse outcome-based rewards, yet suffer from instability and low sample efficiency; and (2) search-based techniques guided by independently trained, static process reward models (PRMs), which require expensive human- or LLM-generated labels and often degrade under distribution shifts. In this paper, we introduce AIRL-S, the first natural unification of RL-based and search-based TTS. Central to AIRL-S is the insight that the reward function learned during RL training inherently represents the ideal PRM for guiding downstream search. Specifically, we leverage adversarial inverse reinforcement learning (AIRL) combined with group relative policy optimization (GRPO) to learn a dense, dynamic PRM directly from correct reasoning traces, entirely eliminating the need for labeled intermediate process data. At inference, the resulting PRM simultaneously serves as the critic for RL rollouts and as a heuristic to effectively guide search procedures, facilitating robust reasoning chain extension, mitigating reward hacking, and enhancing cross-task generalization. Experimental results across eight benchmarks, including mathematics, scientific reasoning, and code generation, demonstrate that our unified approach improves performance by 9% on average over the base model, matching GPT-4o. Furthermore, when integrated into multiple search algorithms, our PRM consistently outperforms all baseline PRMs trained with labeled data. These results underscore that, indeed, your reward function for RL is your best PRM for search, providing a robust and cost-effective solution to complex reasoning tasks in LLMs.
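
The idea that a learned PRM can double as a search heuristic can be sketched in a few lines. Below is a hypothetical step-level beam search guided by a PRM score; expand and prm_score are assumed callables standing in for the policy's step proposals and the learned reward model.

    import heapq

    def prm_guided_beam_search(expand, prm_score, question, beam_width=4, max_depth=8):
        """Step-level beam search where a learned PRM scores partial reasoning chains.

        expand(chain) -> candidate next reasoning steps (list of strings)
        prm_score(chain) -> float; higher means a more promising partial chain
        """
        beam = [(prm_score([question]), [question])]
        for _ in range(max_depth):
            candidates = []
            for _, chain in beam:
                for step in expand(chain):
                    new_chain = chain + [step]
                    candidates.append((prm_score(new_chain), new_chain))
            if not candidates:
                break
            # Keep only the top-scoring partial chains under the PRM heuristic.
            beam = heapq.nlargest(beam_width, candidates, key=lambda c: c[0])
        return max(beam, key=lambda c: c[0])[1]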
Abstract
Offline reinforcement learning refers to the process of learning policies from fixed datasets, without requiring additional environment interaction. However, it often relies on well-defined reward functions, which are difficult and expensive to design. Human feedback is an appealing alternative, but its two common forms, expert demonstrations and preferences, have complementary limitations. Demonstrations provide stepwise supervision, but they are costly to collect and often reflect limited expert behavior modes. In contrast, preferences are easier to collect, but it is unclear which parts of a trajectory segment contribute most to its preference label, leaving credit assignment unresolved. In this paper, we introduce a Search-Based Preference Weighting (SPW) scheme to unify these two feedback sources. For each transition in a preference-labeled trajectory, SPW searches for the most similar state-action pairs in the expert demonstrations and derives stepwise importance weights directly from their similarity scores. These weights are then used to guide standard preference learning, enabling the more accurate credit assignment that traditional approaches struggle to achieve. We demonstrate that SPW enables effective joint learning from preferences and demonstrations, outperforming prior methods that leverage both feedback types on challenging robot manipulation tasks.
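
A minimal sketch of the weighting step, assuming Euclidean distance as the similarity measure and a softmax normalization (both are illustrative choices; the abstract does not specify them):

    import numpy as np

    def spw_weights(trajectory, expert_pairs, k=5, temperature=1.0):
        """Derive stepwise importance weights for a preference-labeled trajectory.

        trajectory:   (T, d) array stacking [state, action] per transition
        expert_pairs: (N, d) array of expert state-action pairs
        Each transition is scored by its similarity to the k nearest expert
        pairs; scores are normalized into weights for preference learning.
        """
        scores = []
        for sa in trajectory:
            dists = np.linalg.norm(expert_pairs - sa, axis=1)
            scores.append(-np.sort(dists)[:k].mean())  # closer to experts = higher
        s = np.array(scores) / temperature
        s = np.exp(s - s.max())  # softmax for normalized stepwise weights
        return s / s.sum()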
Personalization
Abstract
The optimal signaling schemes in information design (Bayesian persuasion) problems often involve non-explainable randomization or disconnected partitions of the state space, which are too intricate to be audited or communicated. We propose explainable information design in the context of information design with a continuous state space, restricting the information designer to $K$-partitional signaling schemes defined by deterministic and monotone partitions of the state space, where a unique signal is sent for all states in each part. We first prove that the price of explainability (PoE) -- the ratio between the performances of the optimal explainable signaling scheme and the unrestricted signaling scheme -- is exactly $1/2$ in the worst case, meaning that partitional signaling schemes lose at most a factor of $2$ relative to arbitrary signaling schemes. We then study the complexity of computing optimal explainable signaling schemes. We show that the exact optimization problem is NP-hard in general, but for Lipschitz utility functions an $\varepsilon$-approximately optimal explainable signaling scheme can be computed in polynomial time. For piecewise constant utility functions, we provide an efficient algorithm that finds an explainable signaling scheme achieving a $1/2$ approximation to the optimal unrestricted signaling scheme, which matches the worst-case PoE bound. A technical tool we develop is a conversion from any optimal signaling scheme (which satisfies a bi-pooling property) to a partitional signaling scheme that achieves a $1/2$ fraction of the former's expected utility. We use this tool in the proofs of both our PoE result and our algorithmic result.
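
For intuition, a $K$-partitional scheme is easy to evaluate on a discretized prior. The sketch below assumes the sender's utility depends only on the induced posterior mean within each part, which holds in many persuasion settings but is an assumption here, not part of the paper's model statement:

    import numpy as np

    def partition_value(utility, thresholds, states, density):
        """Expected sender utility of a monotone K-partitional signaling scheme.

        utility(m):  sender utility when the receiver best-responds to posterior mean m
        thresholds:  sorted cut points defining the K parts of the state space
        states, density: discretized state grid and prior mass on each point
        """
        parts = np.digitize(states, thresholds)  # part index for each state
        total = 0.0
        for part in np.unique(parts):
            mask = parts == part
            mass = density[mask].sum()
            if mass > 0:
                posterior_mean = (states[mask] * density[mask]).sum() / mass
                total += mass * utility(posterior_mean)
        return total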
Abstract
Recent advances in NeRF and 3DGS have significantly enhanced the efficiency and quality of 3D content synthesis. However, efficient personalization of generated 3D content remains a critical challenge. Current 3D personalization approaches predominantly rely on knowledge distillation, which requires computationally expensive retraining. To address this challenge, we propose Invert3D, a novel framework for convenient 3D content personalization. Vision-language models such as CLIP enable direct image personalization through aligned vision-text embedding spaces, but the inherent structural differences between 3D content and 2D images preclude the direct application of these techniques to 3D personalization. Our approach bridges this gap by aligning 3D representations with the text embedding space. Specifically, we develop a camera-conditioned 3D-to-text inversion mechanism that projects 3D content into a 3D embedding aligned with text embeddings. This alignment enables efficient manipulation and personalization of 3D content through natural language prompts, eliminating the need for computationally expensive retraining. Extensive experiments demonstrate that Invert3D achieves effective personalization of 3D content. Our work is available at: https://github.com/qsong2001/Invert3D.
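
A schematic of the camera-conditioned inversion idea, written as a hypothetical PyTorch head (the layer sizes and alignment loss are illustrative assumptions, not the released implementation):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Inverse3DToText(nn.Module):
        """Project a 3D content feature plus a camera pose into a CLIP-style
        text embedding space, so content can be manipulated by prompts."""
        def __init__(self, feat_dim=256, pose_dim=12, text_dim=512):
            super().__init__()
            self.proj = nn.Sequential(
                nn.Linear(feat_dim + pose_dim, 512),
                nn.ReLU(),
                nn.Linear(512, text_dim),
            )

        def forward(self, feat3d, camera_pose):
            z = self.proj(torch.cat([feat3d, camera_pose], dim=-1))
            return F.normalize(z, dim=-1)  # unit-norm, as in CLIP

    def alignment_loss(z3d, z_text):
        """Pull the inverted 3D embedding toward a matching caption embedding."""
        return 1.0 - (z3d * z_text).sum(dim=-1).mean()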
Attribution
Abstract
Conversion rate (CVR) prediction is a core component of online advertising systems, where attribution mechanisms (the rules for allocating conversion credit across user touchpoints) fundamentally determine label generation and model optimization. While many industrial platforms support diverse attribution mechanisms (e.g., First-Click, Last-Click, Linear, and Data-Driven Multi-Touch Attribution), conventional approaches restrict model training to labels from a single production-critical attribution mechanism, discarding the complementary signals in alternative attribution perspectives. To address this limitation, we propose a novel Multi-Attribution Learning (MAL) framework for CVR prediction that integrates signals from multiple attribution perspectives to better capture the underlying patterns driving user conversions. Specifically, MAL is a joint learning framework consisting of two core components: the Attribution Knowledge Aggregator (AKA) and the Primary Target Predictor (PTP). AKA is implemented as a multi-task learner that integrates knowledge extracted from diverse attribution labels. PTP, in contrast, focuses on generating well-calibrated conversion probabilities that align with the system-optimized attribution metric (e.g., CVR under Last-Click attribution), ensuring direct compatibility with industrial deployment requirements. Additionally, we propose CAT, a novel training strategy that leverages the Cartesian product of all attribution label combinations to generate enriched supervision signals, substantially enhancing the performance of the attribution knowledge aggregator. Empirical evaluations demonstrate the superiority of MAL over single-attribution learning baselines, achieving a +0.51% GAUC improvement on offline metrics. Online experiments show that MAL achieved a +2.6% increase in ROI (Return on Investment).
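
One way to read the CAT strategy is as enumerating label combinations across mechanisms as auxiliary targets. The sketch below uses a logical AND as the combination rule; that rule is an assumption for illustration, as the abstract does not specify how combined labels are formed.

    from itertools import combinations

    def cat_supervision(labels):
        """Enriched supervision from attribution label combinations (sketch).

        labels: dict of mechanism -> binary conversion label, e.g.
            {"first_click": 1, "last_click": 0, "linear": 1, "data_driven": 1}
        Returns one auxiliary target per non-empty subset of mechanisms.
        """
        names = sorted(labels)
        targets = {}
        for r in range(1, len(names) + 1):
            for subset in combinations(names, r):
                targets["&".join(subset)] = int(all(labels[n] for n in subset))
        return targets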
Abstract
The increasing adoption of large language models (LLMs) has been accompanied by growing concerns regarding their reliability and trustworthiness. As a result, a growing body of research focuses on evidence-based text generation with LLMs, which aims to link model outputs to supporting evidence to ensure traceability and verifiability. However, the field is fragmented by inconsistent terminology, isolated evaluation practices, and a lack of unified benchmarks. To bridge this gap, we systematically analyze 134 papers, introduce a unified taxonomy of evidence-based text generation with LLMs, and investigate 300 evaluation metrics across seven key dimensions, focusing on approaches that use citations, attribution, or quotations for evidence-based text generation. Building on this, we examine the distinctive characteristics and representative methods in the field. Finally, we highlight open challenges and outline promising directions for future work.
Bidding
Abstract
The rise of auto-bidding has created challenges for ensuring advertiser incentive compatibility, particularly when advertisers delegate bidding to agents with high-level constraints. One challenge in defining incentive compatibility is the multiplicity of equilibria: after advertisers submit reports, it is unclear which outcome will arise, and one only knows a range of possible outcomes. Nevertheless, Alimohammadi et al. proposed a notion of Auto-bidding Incentive Compatibility (AIC) which serves to highlight that auctions may not incentivize truthful reporting of constraints. However, their definition of AIC is very stringent, as it requires that the worst-case outcome of an advertiser's truthful report be at least as good as the best-case outcome of any of the advertiser's possible deviations. Indeed, they show that both the First-Price Auction and the Second-Price Auction (SPA) are not AIC. Moreover, the AIC definition precludes having ordinal preferences over the possible constraints that the advertiser can report. In this paper, we introduce two refined and relaxed concepts: Risk-Averse Auto-bidding Incentive Compatibility (RAIC) and Optimistic Auto-bidding Incentive Compatibility (OAIC). RAIC (OAIC) stipulates that truthful reporting is preferred if its least (most) favorable equilibrium outcome is no worse than the least (most) favorable equilibrium outcome of any misreport. This distinction allows for a clearer modeling of ordinal preferences for advertisers with differing attitudes towards equilibrium uncertainty. We demonstrate that SPA satisfies both RAIC and OAIC. Furthermore, we show that SPA also meets these conditions for two advertisers when they are assumed to employ uniform bidding. These findings provide new insights into the incentive properties of SPA in auto-bidding environments, particularly when considering advertisers' perspectives on equilibrium selection.
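
The RAIC and OAIC definitions translate directly into a check over equilibrium outcome sets. The sketch below takes, for each possible report, the set of equilibrium utilities the advertiser might receive under that report; the data structure is an illustrative assumption.

    def is_raic(outcomes, truthful):
        """RAIC: the worst equilibrium under the truthful report is no worse
        than the worst equilibrium under any misreport.

        outcomes: dict mapping each report to an iterable of possible
        equilibrium utilities for the advertiser under that report.
        """
        worst_truthful = min(outcomes[truthful])
        return all(worst_truthful >= min(u)
                   for r, u in outcomes.items() if r != truthful)

    def is_oaic(outcomes, truthful):
        """OAIC: compare best-case equilibrium outcomes instead."""
        best_truthful = max(outcomes[truthful])
        return all(best_truthful >= max(u)
                   for r, u in outcomes.items() if r != truthful)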
Marketing Channels
Abstract
Personalized marketing has emerged as a pivotal strategy for enhancing customer engagement and driving business growth. Academic and industry efforts have predominantly focused on recommendation systems and personalized advertisements, yet personalized offer generation holds significant potential for increasing conversion rates and improving customer satisfaction. Prior studies suggest that well-executed personalization strategies can boost revenue by up to 40 percent, underscoring the strategic importance of developing intelligent, data-driven approaches for offer generation. This work introduces SLM4Offer, a generative AI model for personalized offer generation, developed by fine-tuning a pre-trained encoder-decoder language model, specifically Google's Text-to-Text Transfer Transformer (T5-Small, 60M parameters), using a contrastive learning approach. SLM4Offer employs the InfoNCE (Information Noise-Contrastive Estimation) loss to align customer personas with relevant offers in a shared embedding space. A key innovation in SLM4Offer lies in the adaptive learning behaviour introduced by the contrastive loss, which reshapes the latent space during training and enhances the model's generalizability. The model is fine-tuned and evaluated on a synthetic dataset designed to simulate customer behaviour and offer acceptance patterns. Experimental results demonstrate a 17 percent improvement in offer acceptance rate over a supervised fine-tuning baseline, highlighting the effectiveness of contrastive objectives in advancing personalized marketing.
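
The InfoNCE objective used to align personas and offers is standard; a minimal PyTorch version with in-batch negatives looks like this (the temperature value is a common default, not necessarily the paper's):

    import torch
    import torch.nn.functional as F

    def info_nce(persona_emb, offer_emb, temperature=0.07):
        """InfoNCE over a batch of (persona, offer) embedding pairs.

        Row i of persona_emb and row i of offer_emb form a positive pair;
        every other offer in the batch acts as an in-batch negative.
        """
        p = F.normalize(persona_emb, dim=-1)
        o = F.normalize(offer_emb, dim=-1)
        logits = p @ o.t() / temperature          # (B, B) similarity matrix
        targets = torch.arange(p.size(0), device=p.device)
        return F.cross_entropy(logits, targets)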

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • customer relationship management (crm) optimization
  • Direction on Data Science Organizations
You can edit or add interests at any time.

Unsubscribe from these updates