Hi!

Your personalized paper recommendations for 26–30 January 2026.
Xidian University
AI Insights
  • Neural networks: a machine learning model composed of interconnected nodes (neurons) that process inputs and produce outputs. (ML: 0.96)
  • Dropout rates: the probability of dropping out individual elements during training, used to prevent overfitting in neural networks. (ML: 0.95)
  • Evolutionary algorithms: a class of optimization techniques inspired by natural selection, which use populations of individuals to search for optimal solutions. (ML: 0.93)
  • Black-box optimization: a type of optimization problem where the objective function is unknown and can only be evaluated through input-output interactions. (ML: 0.83)
  • The convergence analysis provides theoretical guarantees for ABOM's performance, making it a reliable choice for practitioners. (ML: 0.82)
  • The model is designed to handle high-dimensional search spaces and large evaluation budgets, making it suitable for complex optimization tasks. (ML: 0.80)
  • ABOM uses a combination of evolutionary algorithms and neural networks to adaptively select, crossover, and mutate individuals in the population. (ML: 0.79)
  • The Adaptive meta Black-box Optimization Model (ABOM) is a novel approach for solving black-box optimization problems. (ML: 0.74)
  • ABOM is a promising approach for solving complex optimization problems, offering adaptability and scalability. (ML: 0.70)
  • A convergence analysis of ABOM is provided, showing that the algorithm converges with probability 1 to the global optimum of the objective function under certain assumptions. (ML: 0.66)
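The selection, crossover, and mutation steps named in the insights above can be sketched in a few lines. This is a generic evolutionary loop for minimization, not ABOM's parameterized, self-updating operators; the population size, mutation scale, and sphere objective are illustrative choices:

```python
import random

def evolve(fitness, dim=5, pop_size=20, generations=50, seed=0):
    """Minimal evolutionary loop: truncation selection, uniform crossover,
    Gaussian mutation. A generic sketch, not ABOM's learned operators."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half (lower fitness is better).
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            # Crossover: mix genes uniformly from two parents.
            child = [x if rng.random() < 0.5 else y for x, y in zip(a, b)]
            # Mutation: small Gaussian perturbation per gene.
            child = [x + rng.gauss(0, 0.1) for x in child]
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

# Example: minimize the sphere function f(x) = sum(x_i^2).
best = evolve(lambda x: sum(v * v for v in x))
```

Because the fitter half survives each generation unchanged, the best solution never degrades; ABOM's contribution is learning the operator parameters online during optimization rather than fixing them up front as done here.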
Abstract
Handcrafted optimizers become prohibitively inefficient for complex black-box optimization (BBO) tasks. MetaBBO addresses this challenge by meta-learning to automatically configure optimizers for low-level BBO tasks, thereby eliminating heuristic dependencies. However, existing methods typically require extensive handcrafted training tasks to learn meta-strategies that generalize to target tasks, which poses a critical limitation for realistic applications with unknown task distributions. To overcome the issue, we propose the Adaptive meta Black-box Optimization Model (ABOM), which performs online parameter adaptation using solely optimization data from the target task, obviating the need for predefined task distributions. Unlike conventional metaBBO frameworks that decouple meta-training and optimization phases, ABOM introduces a closed-loop adaptive parameter learning mechanism, where parameterized evolutionary operators continuously self-update by leveraging generated populations during optimization. This paradigm shift enables zero-shot optimization: ABOM achieves competitive performance on synthetic BBO benchmarks and realistic unmanned aerial vehicle path planning problems without any handcrafted training tasks. Visualization studies reveal that parameterized evolutionary operators exhibit statistically significant search patterns, including natural selection and genetic recombination.
Why are we recommending this paper?
Due to your Interest in CRM Optimization

This paper’s focus on meta-learning for black-box optimization aligns with your interest in optimizing complex systems, a key area within CRM and personalization. The automated configuration approach directly addresses the need for efficient, data-driven optimization strategies.
South China University of Technology
AI Insights
  • EvoPrompting has several advantages over traditional methods, including improved scalability, flexibility, and interpretability. (ML: 0.96)
  • It can handle complex problems with multiple objectives and constraints, and it can adapt to changing problem settings in real-time. (ML: 0.93)
  • Large Language Models (LLMs): Deep learning models that can understand and generate human-like text, enabling them to perform various tasks such as language translation, text summarization, and question answering. (ML: 0.93)
  • The approach can be applied to a wide range of domains, including machine learning model selection, hyperparameter tuning, and other tasks that require optimized algorithms. (ML: 0.93)
  • The authors also propose a new framework for evaluating the performance of optimization algorithms, which takes into account both the quality of the solutions found and the computational resources required to achieve them. (ML: 0.92)
  • The authors also discuss potential applications of EvoPrompting beyond optimization, such as generating optimized algorithms for other tasks like machine learning model selection or hyperparameter tuning. (ML: 0.92)
  • Black-Box Optimization: The process of finding the optimal solution to a problem without prior knowledge of its internal workings or structure. (ML: 0.91)
  • EvoPrompting uses an LLM to generate a sequence of instructions that can be executed by a computer program, allowing it to adapt to different problem settings and optimize its performance accordingly. (ML: 0.88)
  • EvoPrompting: A novel approach to black-box optimization that leverages large language models (LLMs) to generate optimized algorithms for solving complex optimization problems. (ML: 0.88)
  • The authors demonstrate the effectiveness of their method on various benchmark problems and compare its performance with state-of-the-art methods. (ML: 0.83)
  • EvoPrompting has the potential to revolutionize the field of black-box optimization by providing a flexible and adaptable framework for solving complex problems. (ML: 0.79)
Abstract
Benchmark Design in Black-Box Optimization (BBO) is a fundamental yet open-ended topic. Early BBO benchmarks are predominantly human-crafted, introducing expert bias and constraining diversity. Automating this design process can relieve the human-in-the-loop burden while enhancing diversity and objectivity. We propose Evolution of Benchmark (EoB), an automated BBO benchmark designer empowered by the large language model (LLM) and its program evolution capability. Specifically, we formulate benchmark design as a bi-objective optimization problem towards maximizing (i) landscape diversity and (ii) algorithm-differentiation ability across a portfolio of BBO solvers. Under this paradigm, EoB iteratively prompts the LLM to evolve a population of benchmark programs and employs a reflection-based scheme to co-evolve the landscape and its corresponding program. Comprehensive experiments validate that EoB is a competitive candidate in multi-dimensional usages: 1) Benchmarking BBO algorithms; 2) Training and testing learning-assisted BBO algorithms; 3) Extending proxy for expensive real-world problems.
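The bi-objective formulation in the abstract (maximizing landscape diversity and algorithm differentiation) is the kind of problem where candidates are often compared by Pareto dominance. Below is a generic non-dominated filter over (diversity, differentiation) score pairs; it is a sketch of that standard idea, not EoB's actual selection scheme:

```python
def pareto_front(candidates):
    """Return the non-dominated (diversity, differentiation) pairs,
    maximizing both objectives. Generic sketch, not EoB's scheme."""
    def dominates(o, c):
        # o beats c when it is at least as good on both objectives
        # and strictly better on at least one.
        return o[0] >= c[0] and o[1] >= c[1] and (o[0] > c[0] or o[1] > c[1])
    return [c for c in candidates if not any(dominates(o, c) for o in candidates)]

# Hypothetical scores: (1, 1) is dominated by (2, 2) and drops out.
front = pareto_front([(1, 3), (2, 2), (3, 1), (1, 1)])
```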
Why are we recommending this paper?
Due to your Interest in CRM Optimization

Given your interest in personalization platforms and data-driven approaches, this work on automating BBO benchmark design is highly relevant. The focus on diversity and objectivity will be valuable for creating robust personalization models.
York University
AI Insights
  • AUPRC: Area Under the Precision-Recall Curve. Lack of spread: a measure of model performance that is not well-defined in this context. (ML: 0.97)
  • Calibration slope: A measure of model calibration. (ML: 0.96)
  • When using the second loss function (L∗∗), which includes the AUPRC as a measure of discrimination, the optimal Mis proportion is much higher than under L∗. (ML: 0.96)
  • CITL: Calibration-In-The-Large, a measure of model calibration. (ML: 0.96)
  • The best performing model under L∗ has an AUPRC value of 0.475 and a lack of spread value of 0.063 when α = 0.1. (ML: 0.95)
  • The results suggest that the choice of loss function and the value of α have a significant impact on the performance of the model. (ML: 0.95)
  • The proposed algorithm for tuning the size of subpopulation (Mis) is applied to a real-world dataset from the eICU cardiac database. (ML: 0.93)
  • When using the first loss function (L∗), the optimal Mis proportion is 0.29 under all values of α except α = 0.9, where it is 0.20. (ML: 0.90)
  • The results show that the optimal Mis value varies depending on the choice of alpha (α), which controls the emphasis on discrimination and calibration in the loss function. (ML: 0.89)
Abstract
Advances in precision medicine increasingly drive methodological innovation in health research. A key development is the use of personalized prediction models (PPMs), which are fit using a similar subpopulation tailored to a specific index patient, and have been shown to outperform one-size-fits-all models, particularly in terms of model discrimination performance. We propose a generalized loss function that enables tuning of the subpopulation size used to fit a PPM. This loss function allows joint optimization of discrimination and calibration, allowing both the performance measures and their relative weights to be specified by the user. To reduce computational burden, we conducted extensive simulation studies to identify practical bounds for the grid of subpopulation sizes. Based on these results, we recommend using a lower bound of 20% and an upper bound of 70% of the entire training dataset. We apply the proposed method to both simulated and real-world datasets and demonstrate that previously observed relationships between subpopulation size and model performance are robust. Furthermore, we show that the choice of performance measures in the loss function influences the optimal subpopulation size selected. These findings support the flexible and computationally efficient implementation of PPMs in precision health research.
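The abstract's generalized loss, a user-weighted mix of discrimination and calibration used to pick the subpopulation proportion, can be sketched as below. The function names, the toy evaluator, and the grid values are illustrative assumptions; the paper works with specific measures such as AUPRC and calibration slope:

```python
def mixture_loss(discrimination, calibration_error, alpha=0.5):
    """User-weighted mix: alpha stresses discrimination (higher is better),
    1 - alpha stresses calibration error (lower is better)."""
    return alpha * (1.0 - discrimination) + (1.0 - alpha) * calibration_error

def tune_subpopulation_size(candidates, evaluate, alpha=0.5):
    """Pick the subpopulation proportion minimizing the loss over a grid,
    clipped to the paper's recommended 20%-70% bounds. `evaluate(m)` is a
    hypothetical callable returning (discrimination, calibration_error)."""
    grid = [m for m in candidates if 0.2 <= m <= 0.7]
    return min(grid, key=lambda m: mixture_loss(*evaluate(m), alpha=alpha))

# Toy evaluator whose discrimination peaks at m = 0.4 (made-up numbers):
toy = lambda m: (1.0 - abs(m - 0.4), abs(m - 0.4))
best_m = tune_subpopulation_size([0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7], toy)
```

Varying `alpha` shifts the optimum toward whichever measure the user cares about, mirroring the paper's finding that the choice of performance measures influences the selected subpopulation size.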
Why are we recommending this paper?
Due to your Interest in Personalization

This paper’s exploration of personalized prediction models directly addresses the core of your interest in data-driven CRM optimization. The use of a mixture loss function is a sophisticated technique for tailoring models to individual needs.
Plaksha University
AI Insights
  • Further research is needed to refine the granularity of real-time difficulty calibration and explore the long-term effects of using such a system. (ML: 0.98)
  • The study suggests that GuideAI's use of physiological data to inform adaptive interventions can lead to better learning outcomes and increased user engagement. (ML: 0.98)
  • Previous studies have shown that personalized learning systems can improve learning outcomes, but few have explored the use of physiological data to inform adaptive interventions. (ML: 0.97)
  • The study found that GuideAI, a real-time personalized learning solution with adaptive interventions, significantly improved learning outcomes and user experience compared to a control group. (ML: 0.96)
  • The study demonstrates the potential of GuideAI's biosensor-driven approach to improve learning outcomes and user experience. (ML: 0.96)
  • The study's sample size was relatively small. (ML: 0.95)
  • GuideAI: A real-time personalized learning solution with adaptive interventions. (ML: 0.95)
  • GuideAI's biosensor-driven interventions were rated positively by participants, who appreciated the system's ability to detect and respond to cognitive-affective shifts in real time. (ML: 0.94)
  • Biosensor-driven interventions: Adaptive interventions informed by physiological data, such as heart rate or skin conductance, to adjust the learning experience in real time. (ML: 0.94)
Abstract
Large Language Models (LLMs) have emerged as powerful learning tools, but they lack awareness of learners' cognitive and physiological states, limiting their adaptability to the user's learning style. Contemporary learning techniques primarily focus on structured learning paths, knowledge tracing, and generic adaptive testing but fail to address real-time learning challenges driven by cognitive load, attention fluctuations, and engagement levels. Building on findings from a formative user study (N=66), we introduce GuideAI, a multi-modal framework that enhances LLM-driven learning by integrating real-time biosensory feedback including eye gaze tracking, heart rate variability, posture detection, and digital note-taking behavior. GuideAI dynamically adapts learning content and pacing through cognitive optimizations (adjusting complexity based on learning progress markers), physiological interventions (breathing guidance and posture correction), and attention-aware strategies (redirecting focus using gaze analysis). Additionally, GuideAI supports diverse learning modalities, including text-based, image-based, audio-based, and video-based instruction, across varied knowledge domains. A preliminary study (N = 25) assessed GuideAI's impact on knowledge retention and cognitive load through standardized assessments. The results show statistically significant improvements in both problem-solving capability and recall-based knowledge assessments. Participants also experienced notable reductions in key NASA-TLX measures including mental demand, frustration levels, and effort, while simultaneously reporting enhanced perceived performance. These findings demonstrate GuideAI's potential to bridge the gap between current LLM-based learning systems and individualized learner needs, paving the way for adaptive, cognition-aware education at scale.
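As a rough illustration of what biosensor-driven adaptation can look like, here is a toy rule that steps difficulty up or down from heart-rate-variability and gaze signals. The thresholds, signal names, and 1-10 difficulty scale are hypothetical assumptions, not GuideAI's actual calibration:

```python
def adapt_difficulty(level, hrv_ms, gaze_on_task,
                     hrv_floor=40.0, attention_floor=0.6):
    """Toy rule: step difficulty down when physiological stress (low HRV)
    or distraction (low on-task gaze fraction) is detected, else step up.
    All thresholds are illustrative, not GuideAI's calibration."""
    if hrv_ms < hrv_floor or gaze_on_task < attention_floor:
        return max(1, level - 1)  # ease off under stress or distraction
    return min(10, level + 1)     # engaged and calm: raise the challenge
```

A real system would smooth these signals over time and calibrate per learner; the abstract notes that refining this granularity is an open direction.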
Why are we recommending this paper?
Due to your Interest in Personalization

Considering your interest in personalization platforms, this research on GuideAI offers a novel approach to adaptive learning, potentially applicable to personalized learning experiences within your domain. The use of LLMs for real-time adaptation is a promising area.
Amazon
AI Insights
  • It's like having a personal assistant who helps you make data-driven decisions. (ML: 0.97)
  • The paper introduces Insight Agents (IA), a hierarchical multi-agent system leveraging Large Language Models (LLMs) built on a plan-and-execute paradigm to provide personalized, actionable insights for e-commerce sellers. (ML: 0.96)
  • IA sets the stage for future AI-driven decision-support systems, transforming data interaction and driving impactful outcomes in e-commerce and beyond. (ML: 0.96)
  • The system may not be able to handle complex or nuanced queries. (ML: 0.95)
  • Insight Agents (IA) is a system that uses Large Language Models (LLMs) to provide personalized and actionable insights. (ML: 0.94)
  • Imagine you're an e-commerce seller, and you want to get insights about your sales. (ML: 0.92)
  • The system significantly reduces cognitive load, achieving 89.5% accuracy with sub-15-second latency. (ML: 0.88)
  • The paper cites several studies on LLMs and their applications in various domains. (ML: 0.86)
Abstract
Today, E-commerce sellers face several key challenges, including difficulties in discovering and effectively utilizing available programs and tools, and struggling to understand and utilize rich data from various tools. We therefore aim to develop Insight Agents (IA), a conversational multi-agent Data Insight system, to provide E-commerce sellers with personalized data and business insights through automated information retrieval. Our hypothesis is that IA will serve as a force multiplier for sellers, thereby driving incremental seller adoption by reducing the effort required and increasing the speed at which sellers make good business decisions. In this paper, we introduce this novel LLM-backed end-to-end agentic system built on a plan-and-execute paradigm and designed for comprehensive coverage, high accuracy, and low latency. It features a hierarchical multi-agent structure, consisting of a manager agent and two worker agents, data presentation and insight generation, for efficient information retrieval and problem-solving. We design a simple yet effective ML solution for the manager agent that combines Out-of-Domain (OOD) detection using a lightweight encoder-decoder model and agent routing through a BERT-based classifier, optimizing both accuracy and latency. Within the two worker agents, strategic planning is designed for the API-based data model that breaks down queries into granular components to generate more accurate responses, and domain knowledge is dynamically injected to enhance the insight generator. IA has been launched for Amazon sellers in the US, where it has achieved high accuracy of 90% based on human evaluation, with latency of P90 below 15s.
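The manager agent's two-stage routing described in the abstract (OOD screening, then dispatch to a worker agent) can be sketched as below. The callables stand in for the paper's lightweight encoder-decoder OOD model and BERT-based classifier; the keyword rules are purely illustrative:

```python
def route(query, ood_score, classify, ood_threshold=0.5):
    """Toy manager agent: screen out-of-domain queries first, then
    dispatch to one of the two worker agents. The two callables are
    hypothetical stand-ins for the paper's learned models."""
    if ood_score(query) > ood_threshold:
        return "out_of_domain"
    return classify(query)

# Illustrative stand-ins (keyword rules, not real models):
ood_score = lambda q: 0.9 if "weather" in q else 0.1
classify = lambda q: "insight_generation" if "why" in q else "data_presentation"
```

Splitting routing into a cheap OOD gate plus a classifier is what lets the system keep both accuracy and latency in check, since most queries never touch the heavier worker agents unnecessarily.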
Why are we recommending this paper?
Due to your Interest in Data Driven CRM

This paper’s development of an LLM-based multi-agent system for data insights aligns with your interest in leveraging data for CRM optimization. The conversational agent approach could be particularly useful for understanding complex data patterns.
University of Notre Dame
AI Insights
  • Participants may have had prior experience with computer science concepts, which could influence their performance in the study. (ML: 0.99)
  • The study suggests that providing multiple representations can support the learning of data structures by BVI individuals, but it is essential to consider individual differences in learning styles and preferences. (ML: 0.98)
  • The study found that BVI individuals use multiple representations to understand and reason about data structures in a way that is consistent with sighted individuals, but they may require more time and practice to develop their skills. (ML: 0.98)
  • The study aimed to investigate how blind or visually impaired (BVI) individuals use multiple representations to understand and reason about data structures, specifically arrays and binary trees. (ML: 0.96)
  • The participants used the tabular representation most frequently for arrays, while the navigable representation was preferred for binary trees. (ML: 0.95)
  • Data Structure: A way of organizing and storing data in a computer so that it can be efficiently accessed and modified. (ML: 0.94)
  • Limited sample size of 8 participants. (ML: 0.93)
  • Binary Tree: A hierarchical structure where each node has at most two children (left child and right child). (ML: 0.85)
  • Array: A linear collection of elements, each identified by an index. (ML: 0.82)
  • Blind or Visually Impaired (BVI): Individuals who are unable to see or have limited vision. (ML: 0.77)
Abstract
Blind and visually impaired (BVI) computer science students face systematic barriers when learning data structures: current accessibility approaches typically translate diagrams into alternative text, focusing on visual appearance rather than preserving the underlying structure essential for conceptual understanding. More accessible alternatives often do not scale in complexity, cost to produce, or both. Motivated by a recent shift to tools for creating visual diagrams from code, we propose a solution that automatically creates accessible representations from structural information about diagrams. Based on a Wizard-of-Oz study, we derive design requirements for an automated system, Arboretum, that compiles text-based diagram specifications into three synchronized nonvisual formats – tabular, navigable, and tactile. Our evaluation with BVI users highlights the strength of tactile graphics for complex tasks such as binary search; the benefits of offering multiple, complementary nonvisual representations; and limitations of existing digital navigation patterns for structural reasoning. This work reframes access to data structures by preserving their structural properties. The solution is a practical system to advance accessible CS education.
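One of the three synchronized formats, the tabular view, can be illustrated with a minimal sketch that lists each node of a level-order binary tree together with its child indices. The plain-list input format here is an assumption for illustration, not Arboretum's actual diagram specification language:

```python
def array_tree_to_table(values):
    """Tabular view of a level-order binary tree: one row per node with
    its index, value, and child indices (None when a child is absent).
    Input format is a hypothetical stand-in for a diagram spec."""
    rows = []
    n = len(values)
    for i, v in enumerate(values):
        left, right = 2 * i + 1, 2 * i + 2  # level-order child positions
        rows.append({
            "index": i,
            "value": v,
            "left": left if left < n else None,
            "right": right if right < n else None,
        })
    return rows
```

Emitting explicit child indices rather than a drawing is the point the abstract makes: the structural relationships, not the visual layout, are what a screen reader or tactile printout needs to preserve.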
Why are we recommending this paper?
Due to your Interest in Data Driven CRM
University of the Basque Country (UPV/EHU)
AI Insights
  • MLOps: Machine Learning Operations, an emerging field that focuses on the operationalization of machine learning models in production environments. (ML: 0.95)
  • MLflow: An open-source platform for managing the end-to-end machine learning lifecycle, from model development to deployment. (ML: 0.93)
  • Kubeflow Pipelines: An open-source platform for building and deploying machine learning pipelines, based on the Kubeflow framework. (ML: 0.91)
  • MLflow is a strong contender for MLOps due to its comprehensive documentation, ease of installation, and flexibility in configuration. (ML: 0.88)
  • Metaflow: A Python library for building and managing data science workflows, with a focus on reproducibility and collaboration. (ML: 0.88)
  • Metaflow offers a natural instrumentation approach that minimizes intrusiveness and facilitates porting of existing models. (ML: 0.82)
  • Apache Airflow: A popular open-source workflow management system that can be used in MLOps scenarios. (ML: 0.82)
  • MLflow provides a comprehensive and well-structured official documentation, covering installation and basic use of the tracking server as well as model management in production. (ML: 0.80)
  • Kubeflow Pipelines requires defining each component as a Docker container or as a DSL-decorated function, which can be intrusive and add significant overhead. (ML: 0.66)
  • Metaflow offers a very natural instrumentation centered on the experiment's logical flow, with each step defined with the @step decorator. (ML: 0.62)
Abstract
Given the increasing adoption of AI solutions in professional environments, it is necessary for developers to be able to make informed decisions about the current tool landscape. This work empirically evaluates various MLOps (Machine Learning Operations) tools to facilitate the management of the ML model lifecycle: MLflow, Metaflow, Apache Airflow, and Kubeflow Pipelines. The tools are evaluated by assessing the criteria of Ease of installation, Configuration flexibility, Interoperability, Code instrumentation complexity, result interpretability, and Documentation when implementing two common ML scenarios: Digit classifier with MNIST and Sentiment classifier with IMDB and BERT. The evaluation is completed by providing weighted results that lead to practical conclusions on which tools are best suited for different scenarios.
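The weighted results the abstract mentions can be reproduced in spirit with a simple weighted average over criterion scores. The criterion names echo the paper's list, but the weights and ratings below are made-up numbers for illustration, not the paper's measurements:

```python
def weighted_score(scores, weights):
    """Weighted average of per-criterion ratings. Criteria mirror the
    paper's evaluation axes; the numbers used below are illustrative."""
    total = sum(weights.values())
    return sum(scores[c] * w for c, w in weights.items()) / total

# Hypothetical 1-5 ratings, weighting installation ease most heavily:
weights = {"installation": 2, "flexibility": 1, "documentation": 1}
mlflow = weighted_score(
    {"installation": 5, "flexibility": 4, "documentation": 5}, weights)
```

Shifting the weights changes which tool comes out ahead, which is why the paper reports weighted results per scenario rather than a single global ranking.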
Why we are recommending this paper?
Due to your Interest in MLOps

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Personalization Platform
  • Email Marketing
You can edit or add more interests any time.