Hi J34Nc4Rl0+Crm Topics,

Your personalized paper recommendations for 03 to 07 November 2025.

Dear user, this week we added the possibility to further personalize your results by providing a short description of yourself.

Log in to our website and head to the profile tab. There you can provide any details you want, such as your profession, age, or background. The language models then take this into account to generate recommendations tailored to you.

🎯 Top Personalized Recommendations
Technical University of M
Why we think this paper is great for you:
This paper directly addresses MLOps, focusing on automated pipelines for model development and monitoring, which is highly relevant to your interests in operationalizing machine learning.
Abstract
The rapid expansion of artificial intelligence and machine learning (ML) applications has intensified the demand for integrated environments that unify model development, deployment, and monitoring. Traditional Integrated Development Environments (IDEs) focus primarily on code authoring, lacking intelligent support for the full ML lifecycle, while existing MLOps platforms remain detached from the coding workflow. To address this gap, this study proposes the design of an LLM-Integrated IDE with automated MLOps pipelines that enables continuous model development and monitoring within a single environment. The proposed system embeds a Large Language Model (LLM) assistant capable of code generation, debugging recommendation, and automatic pipeline configuration. The backend incorporates automated data validation, feature storage, drift detection, retraining triggers, and CI/CD deployment orchestration. This framework was implemented in a prototype named SmartMLOps Studio and evaluated using classification and forecasting tasks on the UCI Adult and M5 datasets. Experimental results demonstrate that SmartMLOps Studio reduces pipeline configuration time by 61%, improves experiment reproducibility by 45%, and increases drift detection accuracy by 14% compared to traditional workflows. By bridging intelligent code assistance and automated operational pipelines, this research establishes a novel paradigm for AI engineering - transforming the IDE from a static coding tool into a dynamic, lifecycle-aware intelligent platform for scalable and efficient model development.
AI Summary
  • The proposed system enhances experiment reproducibility by 45% and increases drift detection accuracy by 14% compared to traditional workflows, demonstrating improved reliability in dynamic ML environments. [3]
  • Experimental validation on UCI Adult and M5 Forecasting datasets shows superior model performance (e.g., 0.874 Accuracy, 0.685 RMSSE) alongside significant MLOps efficiency gains. [3]
  • Population Stability Index (PSI): A metric used to quantify data drift by comparing the binned distributions of observations between a reference and a current dataset (see the sketch after this list). [3]
  • The LLM-integrated IDE transforms traditional development by embedding intelligence throughout the ML lifecycle, providing code generation, debugging recommendations, and automatic pipeline configuration. [2]
  • The backend incorporates automated data validation using KL divergence, a centralized Feature Store, and CI/CD orchestration via Docker and Kubernetes, ensuring robust and consistent ML operations. [2]
  • A continuous monitoring and retraining engine utilizes Population Stability Index (PSI) and a Bayesian updating policy to automatically trigger retraining pipelines when model drift is detected, maintaining optimal performance in production. [2]
  • The framework democratizes MLOps by automating tasks that traditionally require specialized DevOps expertise, making advanced ML lifecycle management accessible to a broader range of data scientists and ML engineers. [2]
  • LLM-Integrated IDE: An Integrated Development Environment that embeds a Large Language Model assistant for intelligent code assistance and automated MLOps pipeline configuration. [2]
  • Automated MLOps Pipelines: Backend services that automate the machine learning lifecycle, including data validation, feature storage, model versioning, CI/CD orchestration, and continuous monitoring. [2]
  • SmartMLOps Studio significantly reduces ML pipeline configuration time by 61% by integrating an LLM assistant for automated pipeline generation, streamlining operational complexities. [1]
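For readers unfamiliar with the PSI-based drift check mentioned above, here is a minimal sketch. It is only an illustration under common rule-of-thumb choices (10 bins, a 0.2 alert threshold); it is not the implementation used in SmartMLOps Studio.

```python
import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare binned distributions of a feature between a reference
    (training) sample and a current (production) sample."""
    # Bin edges are derived from the reference distribution.
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_counts, _ = np.histogram(reference, bins=edges)
    cur_counts, _ = np.histogram(current, bins=edges)
    # Convert counts to proportions; a small epsilon avoids division by zero.
    eps = 1e-6
    ref_frac = ref_counts / max(ref_counts.sum(), 1) + eps
    cur_frac = cur_counts / max(cur_counts.sum(), 1) + eps
    return float(np.sum((cur_frac - ref_frac) * np.log(cur_frac / ref_frac)))

# Hypothetical usage: flag drift on a single feature and trigger retraining.
rng = np.random.default_rng(0)
reference = rng.normal(0.0, 1.0, 10_000)   # feature values at training time
current = rng.normal(0.4, 1.2, 10_000)     # the same feature in production
psi = population_stability_index(reference, current)
if psi > 0.2:  # 0.2 is a widely used alert threshold, not a value from the paper
    print(f"PSI={psi:.3f} -> drift detected, retraining pipeline would be triggered")
```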
Johns Hopkins University
Why we think this paper is great for you:
This research explores mobile personalization and delivering personalized experiences, directly aligning with your focus on personalization platforms and strategies.
Abstract
Mobile applications increasingly rely on sensor data to infer user context and deliver personalized experiences. Yet the mechanisms behind this personalization remain opaque to users and researchers alike. This paper presents a sandbox system that uses sensor spoofing and persona simulation to audit and visualize how mobile apps respond to inferred behaviors. Rather than treating spoofing as adversarial, we demonstrate its use as a tool for behavioral transparency and user empowerment. Our system injects multi-sensor profiles - generated from structured, lifestyle-based personas - into Android devices in real time, enabling users to observe app responses to contexts such as high activity, location shifts, or time-of-day changes. With automated screenshot capture and GPT-4 Vision-based UI summarization, our pipeline helps document subtle personalization cues. Preliminary findings show measurable app adaptations across fitness, e-commerce, and everyday service apps such as weather and navigation. We offer this toolkit as a foundation for privacy-enhancing technologies and user-facing transparency interventions.
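A single step of such an audit pipeline might look roughly like the sketch below: spoof a location fix on an Android emulator, capture a screenshot, and ask a vision-capable model to describe on-screen personalization. The adb commands, the model name (gpt-4o), and the prompt are assumptions made for illustration, not the paper's actual toolkit, which injects much richer multi-sensor persona profiles.

```python
import base64
import subprocess
from openai import OpenAI  # assumes the official openai Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def spoof_location(longitude: float, latitude: float) -> None:
    """Inject a fake GPS fix into a running Android *emulator* via adb."""
    subprocess.run(["adb", "emu", "geo", "fix", str(longitude), str(latitude)],
                   check=True)

def capture_screenshot(path: str = "screen.png") -> str:
    """Grab the current screen of the connected device."""
    with open(path, "wb") as f:
        subprocess.run(["adb", "exec-out", "screencap", "-p"], stdout=f, check=True)
    return path

def summarize_ui(path: str) -> str:
    """Ask a vision-capable model to describe personalization cues on screen."""
    with open(path, "rb") as img:
        b64 = base64.b64encode(img.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in for the paper's "GPT-4 Vision" summarizer
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Summarize any personalized content or recommendations "
                         "visible in this app screenshot."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content

# Hypothetical persona step: a user who has just "moved" to a new city.
spoof_location(-76.62, 39.33)
print(summarize_ui(capture_screenshot()))
```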
Johns Hopkins University
Why we think this paper is great for you:
This paper delves into personalized decision modeling, which is crucial for understanding individual behaviors and optimizing outcomes, a key aspect of personalization.
Abstract
Decision-making models for individuals, particularly in high-stakes scenarios like vaccine uptake, often diverge from population optimal predictions. This gap arises from the uniqueness of the individual decision-making process, shaped by numerical attributes (e.g., cost, time) and linguistic influences (e.g., personal preferences and constraints). Developing upon Utility Theory and leveraging the textual-reasoning capabilities of Large Language Models (LLMs), this paper proposes an Adaptive Textual-symbolic Human-centric Reasoning framework (ATHENA) to address the optimal information integration. ATHENA uniquely integrates two stages: First, it discovers robust, group-level symbolic utility functions via LLM-augmented symbolic discovery; Second, it implements individual-level semantic adaptation, creating personalized semantic templates guided by the optimal utility to model personalized choices. Validated on real-world travel mode and vaccine choice tasks, ATHENA consistently outperforms utility-based, machine learning, and other LLM-based models, lifting F1 score by at least 6.5% over the strongest cutting-edge models. Further, ablation studies confirm that both stages of ATHENA are critical and complementary, as removing either clearly degrades overall predictive performance. By organically integrating symbolic utility modeling and semantic adaptation, ATHENA provides a new scheme for modeling human-centric decisions. The project page can be found at https://yibozh.github.io/Athena.
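As a rough illustration of the utility-theoretic backbone ATHENA builds on, the sketch below scores alternatives with a toy linear utility and a softmax (logit) choice rule. The coefficients, and the per-alternative bias standing in for the LLM-driven semantic adaptation stage, are invented for illustration and are not the paper's discovered utility functions.

```python
import math

def utility(cost: float, time: float, personal_bias: float = 0.0) -> float:
    """Toy group-level utility as a weighted sum of numeric attributes.
    ATHENA instead discovers the symbolic form and weights via
    LLM-augmented symbolic search."""
    return -0.8 * cost - 0.5 * time + personal_bias

def choice_probabilities(alternatives: dict[str, dict],
                         biases: dict[str, float]) -> dict[str, float]:
    """Softmax (logit) choice model over the alternatives' utilities.
    `biases` stands in for individual-level semantic adaptation, where an
    LLM maps free-text preferences to per-alternative adjustments."""
    utils = {name: utility(a["cost"], a["time"], biases.get(name, 0.0))
             for name, a in alternatives.items()}
    z = sum(math.exp(u) for u in utils.values())
    return {name: math.exp(u) / z for name, u in utils.items()}

# Hypothetical travel-mode example: the traveler dislikes driving.
modes = {"car": {"cost": 5.0, "time": 0.5}, "bus": {"cost": 2.0, "time": 1.0}}
print(choice_probabilities(modes, biases={"car": -1.0}))
```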
Huawei Noah's Ark Lab, McG
Why we think this paper is great for you:
Focusing on e-commerce, this paper aims to improve product relevance and spark shopping behaviors, which is highly applicable to data-driven CRM and personalization strategies in marketing.
Abstract
Finding relevant products given a user query plays a pivotal role in an e-commerce platform, as it can spark shopping behaviors and result in revenue gains. The challenge lies in accurately predicting the correlation between queries and products. Recently, mining the cross-features between queries and products based on the commonsense reasoning capacity of Large Language Models (LLMs) has shown promising performance. However, such methods suffer from high costs due to intensive real-time LLM inference during serving, as well as human annotations and potential Supervised Fine Tuning (SFT). To boost efficiency while leveraging the commonsense reasoning capacity of LLMs for various e-commerce tasks, we propose the Efficient Commonsense-Augmented Recommendation Enhancer (E-CARE). During inference, models augmented with E-CARE can access commonsense reasoning with only a single LLM forward pass per query by utilizing a commonsense reasoning factor graph that encodes most of the reasoning schema from powerful LLMs. The experiments on 2 downstream tasks show an improvement of up to 12.1% on precision@5.
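The reported gain of up to 12.1% is measured in precision@5. For readers unfamiliar with the metric, here is a minimal sketch with made-up product IDs and relevance labels; it is not code from the paper.

```python
def precision_at_k(ranked_items: list[str], relevant: set[str], k: int = 5) -> float:
    """Fraction of the top-k retrieved products that are relevant to the query."""
    top_k = ranked_items[:k]
    return sum(item in relevant for item in top_k) / k

# Hypothetical query result: three of the top five products are relevant.
ranking = ["p7", "p2", "p9", "p1", "p4", "p8"]
relevant_products = {"p2", "p1", "p4", "p5"}
print(precision_at_k(ranking, relevant_products))  # 0.6
```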
RWTH Aachen University
Why we think this paper is great for you:
While this paper discusses automated workflows, its core focus on materials science and crystal defect states does not align with your areas of interest.
Abstract
Defect phase diagrams provide a unified description of crystal defect states for materials design and are central to the scientific objectives of the Collaborative Research Centre (CRC) 1394. Their construction requires the systematic integration of heterogeneous experimental and simulation data across research groups and locations. In this setting, research data management (RDM) is a key enabler of new scientific insight by linking distributed research activities and making complex data reproducible and reusable. To address the challenge of heterogeneous data sources and formats, a comprehensive RDM infrastructure has been established that links experiment, data, and analysis in a seamless workflow. The system combines: (1) a joint electronic laboratory notebook and laboratory information management system, (2) easy-to-use large-object data storage, (3) automatic metadata extraction from heterogeneous and proprietary file formats, (4) interactive provenance graphs for data exploration and reuse, and (5) automated reporting and analysis workflows. The two key technological elements are the openBIS electronic laboratory notebook and laboratory information management system, and a newly developed companion application that extends openBIS with large-scale data handling, automated metadata capture, and federated access to distributed research data. This integrated approach reduces friction in data capture and curation, enabling traceable and reusable datasets that accelerate the construction of defect phase diagrams across institutions.
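The automated metadata capture described in the abstract can be pictured with a minimal sketch; the field names, hashing choice, and directory below are illustrative assumptions, not details of the openBIS companion application.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def extract_metadata(path: Path) -> dict:
    """Capture basic, format-independent metadata for a raw data file.
    Real extractors would additionally parse format-specific headers
    (e.g. instrument settings) before registering the file in the ELN."""
    data = path.read_bytes()
    return {
        "filename": path.name,
        "size_bytes": len(data),
        "sha256": hashlib.sha256(data).hexdigest(),  # stable ID for provenance links
        "registered_at": datetime.now(timezone.utc).isoformat(),
        "format": path.suffix.lstrip(".").lower() or "unknown",
    }

# Hypothetical usage: scan a directory of raw files and emit one metadata record each.
for f in Path("raw_data").glob("*"):
    print(json.dumps(extract_metadata(f)))
```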

Interests not found

We did not find any papers that match the interests below. Try other search terms, and consider whether such content exists on arxiv.org.
  • Personalization
  • CRM Optimization
You can edit or add more interests any time.