Hi!

Your personalized paper recommendations for 24 to 28 November 2025.
🎯 Top Personalized Recommendations
Abstract
Personalized Large Language Models (PLLMs) aim to align model outputs with individual user preferences, a crucial capability for user-centric applications. However, the prevalent approach of fine-tuning a separate module for each user faces two major limitations: (1) storage costs scale linearly with the number of users, rendering the method unscalable; and (2) fine-tuning a static model from scratch often yields suboptimal performance for users with sparse data. To address these challenges, we propose MTA, a Merge-then-Adapt framework for PLLMs. MTA comprises three key stages. First, we construct a shared Meta-LoRA Bank by selecting anchor users and pre-training meta-personalization traits within meta-LoRA modules. Second, to ensure scalability and enable dynamic personalization combination beyond static models, we introduce an Adaptive LoRA Fusion stage. This stage retrieves and dynamically merges the most relevant anchor meta-LoRAs to synthesize a user-specific one, thereby eliminating the need for user-specific storage and supporting more flexible personalization. Third, we propose a LoRA Stacking for Few-Shot Personalization stage, which applies an additional ultra-low-rank, lightweight LoRA module on top of the merged LoRA. Fine-tuning this module enables effective personalization under few-shot settings. Extensive experiments on the LaMP benchmark demonstrate that our approach outperforms existing SOTA methods across multiple tasks.
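To make the three stages concrete, here is a minimal sketch of the Merge-then-Adapt idea. It is not the authors' code: the function name, the softmax fusion weights, and the dimensions are all illustrative assumptions based on the abstract.

```python
import numpy as np

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))

def merge_then_adapt(user_emb, anchors, k=3, stack_rank=2, d=768):
    """anchors: list of (anchor_emb, A, B) with A: (r, d), B: (d, r)."""
    # 1) Meta-LoRA Bank: retrieve the k anchor meta-LoRAs most relevant to the user.
    top = sorted(anchors, key=lambda a: -cosine(user_emb, a[0]))[:k]
    sims = np.array([cosine(user_emb, a[0]) for a in top])
    w = np.exp(sims) / np.exp(sims).sum()        # softmax fusion weights (assumed)
    # 2) Adaptive LoRA Fusion: merge into one user-specific LoRA; no per-user storage.
    A = sum(wi * a[1] for wi, a in zip(w, top))
    B = sum(wi * a[2] for wi, a in zip(w, top))
    # 3) LoRA Stacking: an ultra-low-rank pair on top; only this pair is
    #    fine-tuned on the user's few-shot data (standard LoRA init: B2 = 0).
    A2 = 0.01 * np.random.randn(stack_rank, d)
    B2 = np.zeros((d, stack_rank))
    delta_W = B @ A + B2 @ A2                    # combined low-rank weight update
    return delta_W, (A2, B2)
```

The design point the abstract emphasizes is that only the tiny stacked pair is fine-tuned per user, so storage stays constant in the number of users.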
Why we think this paper is great for you:
This paper directly addresses challenges in scaling personalized models, which is crucial for delivering tailored experiences efficiently. It offers insights into building robust, user-centric applications.
AI Summary
  • The dataset is called PVIT (Personalized Visual Instruction Tuning) and consists of 100,000 image-concept pairs with corresponding questions and answers. [3]
  • The paper also surveys recent advances in visual language understanding tasks such as image captioning, visual question answering, and visual reasoning. [3]
  • The method uses a large dataset of images containing multiple concepts, with corresponding questions, to evaluate a model's ability to identify and describe those concepts accurately. [3]
  • The paper proposes a new dataset and evaluation framework for visual language models (VLMs) that can understand and describe concepts in images. [2]
Abstract
Personalized Visual Language Models (VLMs) are gaining increasing attention for their ability to support interactions aligned with user-specific concepts (e.g., identifying a user's bike). Existing methods typically require learning a separate embedding for each new concept, which fails to support real-time adaptation during testing. This limitation becomes particularly pronounced in large-scale scenarios, where efficient retrieval of concept embeddings is not achievable. To bridge this gap, we propose Online-PVLM, a framework for online concept learning that leverages hyperbolic representations. Our approach enables a training-free paradigm for concept-embedding generation at test time, making personalized VLMs both scalable and efficient. In addition, we develop OP-Eval, a comprehensive and large-scale benchmark comprising 1,292 concepts and over 30K high-quality instances with diverse question types, designed to rigorously assess online concept learning in realistic scenarios. Extensive experiments demonstrate the state-of-the-art performance of our proposed framework. Our source code and dataset will be made available.
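As a rough illustration of the hyperbolic, training-free angle, here is a minimal sketch of Poincare-ball distances used for test-time concept registration and lookup. This is not the authors' implementation; OnlineConceptStore and all other names are assumptions.

```python
import numpy as np

def poincare_distance(u, v, eps=1e-7):
    """Geodesic distance between two points inside the unit Poincare ball."""
    uu, vv = np.sum(u * u), np.sum(v * v)
    duv = np.sum((u - v) ** 2)
    x = 1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv) + eps)
    return np.arccosh(x)

class OnlineConceptStore:
    """Registers concept embeddings at test time; no gradient updates."""
    def __init__(self):
        self.concepts = {}                      # name -> ball embedding

    def register(self, name, embedding):
        norm = np.linalg.norm(embedding)
        if norm >= 1.0:                         # project back into the open ball
            embedding = embedding / (norm + 1e-5)
        self.concepts[name] = embedding

    def nearest(self, query):
        return min(self.concepts,
                   key=lambda n: poincare_distance(query, self.concepts[n]))
```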
Why we think this paper is great for you:
This research explores advancing personalized models with dynamic, user-specific concept learning, which is highly relevant for evolving individual preferences. It provides a pathway for more adaptive and responsive systems.
AI Summary
  • It provides advanced pipeline techniques to streamline the execution of complex inference workflows. [3]
  • The system employs a dependency analysis algorithm based on the DAG to optimize the execution flow. [3]
  • The system is evaluated using nine widely-used open datasets and compared with several open-source AI-native databases. [3]
  • DAG (Directed Acyclic Graph): A graph data structure consisting of nodes and directed edges, used to represent dependencies between operators in MorphingDB. [3]
  • Dependency analysis algorithm: An algorithm used by MorphingDB to analyze the dependencies between operators in a DAG and optimize the execution flow. [3]
  • Batch size: The number of data points processed together in a single inference operation, which affects the trade-off between throughput and latency. [3]
  • MorphingDB is a task-centric AI-native DBMS that supports model management and inference. [2]
Abstract
The increasing demand for deep neural inference within database environments has driven the emergence of AI-native DBMSs. However, existing solutions either rely on model-centric designs requiring developers to manually select, configure, and maintain models, resulting in high development overhead, or adopt task-centric AutoML approaches with high computational costs and poor DBMS integration. We present MorphingDB, a task-centric AI-native DBMS that automates model storage, selection, and inference within PostgreSQL. To enable flexible, I/O-efficient storage of deep learning models, we first introduce specialized schemas and multi-dimensional tensor data types to support BLOB-based all-in-one and decoupled model storage. Then we design a transfer learning framework for model selection in two phases, which builds a transferability subspace via offline embedding of historical tasks and employs online projection through feature-aware mapping for real-time tasks. To further optimize inference throughput, we propose pre-embedding with vectoring sharing to eliminate redundant computations and DAG-based batch pipelines with cost-aware scheduling to minimize the inference time. Implemented as a PostgreSQL extension with LibTorch, MorphingDB outperforms AI-native DBMSs (EvaDB, Madlib, GaussML) and AutoML platforms (AutoGluon, AutoKeras, AutoSklearn) across nine public datasets, encompassing series, NLP, and image tasks. Our evaluation demonstrates a robust balance among accuracy, resource consumption, and time cost in model selection and significant gains in throughput and resource efficiency.
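The DAG-based batch pipeline idea can be illustrated with a toy scheduler. This is a sketch of the general technique (Kahn-style level scheduling), not MorphingDB's actual code, and all names are illustrative.

```python
from collections import defaultdict, deque

def batch_schedule(ops, edges):
    """ops: iterable of operator ids; edges: (upstream, downstream) pairs.
    Returns a list of 'waves'; operators in one wave share no dependencies
    and can be batched into a single inference pass."""
    indeg = {op: 0 for op in ops}
    children = defaultdict(list)
    for u, v in edges:
        children[u].append(v)
        indeg[v] += 1
    wave = deque(op for op, d in indeg.items() if d == 0)
    schedule = []
    while wave:
        current = list(wave)
        schedule.append(current)
        wave = deque()
        for op in current:
            for child in children[op]:
                indeg[child] -= 1
                if indeg[child] == 0:
                    wave.append(child)
    return schedule

# Example: embed -> {classify, summarize} -> report
print(batch_schedule(
    ["embed", "classify", "summarize", "report"],
    [("embed", "classify"), ("embed", "summarize"),
     ("classify", "report"), ("summarize", "report")]))
# [['embed'], ['classify', 'summarize'], ['report']]
```

Grouping each wave into one inference pass is where batch size trades off throughput against latency, as the summary above notes.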
Why we think this paper is great for you:
This paper focuses on the operational backbone for AI, specifically model management and inference within database environments. It's essential for efficiently deploying and maintaining your machine learning solutions.
Abstract
Recommender systems shape how people discover information, form opinions, and connect with society. Yet, as their influence grows, traditional metrics such as accuracy, clicks, and engagement no longer capture what truly matters to humans. The workshop on Human-Centered Recommender Systems (HCRS) calls for a paradigm shift from optimizing engagement toward designing systems that truly understand, involve, and benefit people. It brings together researchers in recommender systems, human-computer interaction, AI safety, and social computing to explore how human values such as trust, safety, fairness, transparency, and well-being can be integrated into recommendation processes. Centered on three thematic axes (Human Understanding, Human Involvement, and Human Impact), HCRS features keynotes, panels, and papers covering topics from LLM-based interactive recommenders to societal welfare optimization. By fostering interdisciplinary collaboration, HCRS aims to shape the next decade of responsible and human-aligned recommendation research.
Why we think this paper is great for you:
Recommender systems are fundamental to delivering personalized experiences, and this workshop emphasizes a human-centered approach. This perspective is vital for building systems that earn user trust and support long-term satisfaction and well-being.
AI Summary
  • The study demonstrates that targeted promotional strategies can significantly improve participation from underrepresented racial/ethnic backgrounds and low-income communities in extracurricular K-12 computing education programs. [3]
  • The association between the improved promotion strategy and ethnicity is large (Cramer's V = 0.53), while the association with income level is medium (V = 0.49); a short example of how Cramer's V is computed appears after this list. [3]
  • The study suggests that teachers and other community partners act as cultural brokers who communicate the value of CS programs to families who might otherwise see them as 'not for them'. [3]
  • Broadening Participation in Computing (BPC): efforts aimed at increasing diversity and inclusion in computing education. [3]
  • The findings support the hypothesis that limiting promotion in affluent neighborhoods, leveraging teacher and community-partner channels, and highlighting cost supports would improve the proportion of underrepresented students participating in the program. [3]
  • The study's results highlight the importance of active, intentional recruitment strategies for broadening participation in computing education. [3]
  • Reducing promotion to affluent areas, amplifying communication channels that reach lower-income communities, and providing details about cost supports upfront in marketing materials are effective tactics for broadening participation. [2]
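For readers unfamiliar with the effect sizes reported above, here is a hedged illustration of how Cramer's V is computed from a contingency table. The counts below are made up for demonstration and are not the paper's data.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Rows: before vs. after the promotion change; columns: demographic groups.
table = np.array([[40, 10, 5],
                  [25, 30, 20]])                 # hypothetical counts
chi2, p, dof, _ = chi2_contingency(table)
n = table.sum()
k = min(table.shape)                             # smaller table dimension
cramers_v = np.sqrt(chi2 / (n * (k - 1)))        # V = sqrt(chi2 / (n * (k - 1)))
print(f"chi2={chi2:.2f}, p={p:.4f}, V={cramers_v:.2f}")
```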
Abstract
Many studies have aimed to broaden participation in computing (BPC) through extracurricular educational initiatives. When these initiatives are structured as open-enrollment extracurricular programs, their success often depends on their marketing approach. However, there is little in the computing education research literature about how to conduct effective marketing for these initiatives. We describe the changes made to the marketing strategy of one such program, an educational hackathon for middle school and high school students in the Pacific Northwest. These included reducing promotion to affluent families, using targeted school-based communication, and emphasizing cost supports during initial promotion. We then compare attendance and self-reported demographics before and after the intervention. Results indicate a higher proportion of students from marginalized and low-income communities and no reduction in overall attendance.
Why we think this paper is great for you:
This paper's analysis of targeted marketing effectiveness offers valuable insights into communication strategy. It can inform how you optimize your own outreach and engagement efforts.
AI Summary
  • Data-Driven Methods (DDMs): approaches that utilize data to inform design decisions. [2]
  • The use of data-driven methods in mechanical engineering design is increasing, with a focus on system integration and validation. [1]
Abstract
The increasing availability of data and advancements in computational intelligence have accelerated the adoption of data-driven methods (DDMs) in product development. However, their integration into product development remains fragmented. This fragmentation stems from uncertainty, particularly the lack of clarity on what types of DDMs to use and when to employ them across the product development lifecycle. To address this, a necessary first step is to investigate the usage of DDMs in engineering design by identifying which methods are being used, at which development stages, and for what applications. This paper presents a PRISMA systematic literature review. The V-model as a product development framework was adopted and simplified into four stages: system design, system implementation, system integration, and validation. A structured search across Scopus, Web of Science, and IEEE Xplore (2014-2024) retrieved 1,689 records. After screening, 114 publications underwent full-text analysis. Findings show that machine learning (ML) and statistical methods dominate current practice, whereas deep learning (DL), though still less common, exhibits a clear upward trend in adoption. Additionally, supervised learning, clustering, regression analysis, and surrogate modeling are prevalent in the system design, implementation, and integration stages, but contributions to validation remain limited. Key challenges in existing applications include limited model interpretability, poor cross-stage traceability, and insufficient validation under real-world conditions. The review also highlights key limitations and opportunities, such as the need for interpretable hybrid models. This review is a first step toward design-stage guidelines; a follow-up synthesis should map computer science algorithms to engineering design problems and activities.
Why we think this paper is great for you:
This review highlights the integration of data-driven methods and AI, providing a broad perspective on their adoption and challenges. It offers foundational knowledge for leveraging data across your initiatives.
Abstract
In this paper, we propose a novel online optimization algorithm built by combining ideas from control theory and system identification. The foundation of our algorithm is a control-based design that makes use of the internal model of the online problem. Since such prior knowledge of this internal model might not be available in practice, we incorporate an identification routine that learns this model on the fly. The algorithm is designed starting from quadratic online problems but can be applied to general problems. For quadratic cases, we characterize the asymptotic convergence to the optimal solution trajectory. We compare the proposed algorithm with existing approaches, and demonstrate how the identification routine ensures its adaptability to changes in the underlying internal model. Numerical results also indicate strong performance beyond the quadratic setting.
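A minimal sketch of the prediction-correction idea may help, under strong simplifying assumptions (scalar quadratic cost, unknown scalar internal model). It is not the paper's algorithm, and every symbol here is illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
a_true, theta, x = 0.98, 5.0, 0.0    # true internal model, target, iterate
a_hat, num, den = 1.0, 0.0, 1e-6     # running least-squares estimate of a
step = 0.5
for t in range(200):
    grad = x - theta                  # gradient of f_t(x) = 0.5 * (x - theta)^2
    x = a_hat * (x - step * grad)     # correct with a gradient step, then
                                      # predict with the identified model
    # Observe the next target and update the internal-model identification.
    theta_next = a_true * theta + 0.01 * rng.standard_normal()
    num += theta * theta_next
    den += theta * theta
    a_hat = num / den                 # least squares for theta_{t+1} = a * theta_t
    theta = theta_next
print(f"a_hat={a_hat:.3f}, tracking error={abs(x - theta):.3f}")
```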
Why we think this paper is great for you:
This research on online optimization techniques is pertinent for continuously improving system performance and adaptability. It offers a sophisticated approach to enhancing operational efficiency.
Matched interest: CRM Optimization
Abstract
Adaptive optimizers are the de facto standard in non-private training as they often enable faster convergence and improved performance. In contrast, differentially private (DP) training is still predominantly performed with DP-SGD, typically requiring extensive compute and hyperparameter tuning. We propose DP-MicroAdam, a memory-efficient and sparsity-aware adaptive DP optimizer. We prove that DP-MicroAdam converges in stochastic non-convex optimization at the optimal $\mathcal{O}(1/\sqrt{T})$ rate, up to privacy-dependent constants. Empirically, DP-MicroAdam outperforms existing adaptive DP optimizers and achieves competitive or superior accuracy compared to DP-SGD across a range of benchmarks, including CIFAR-10, large-scale ImageNet training, and private fine-tuning of pretrained transformers. These results demonstrate that adaptive optimization can improve both performance and stability under differential privacy.
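For orientation, here is a generic sketch of an adaptive differentially private step, combining DP-SGD-style per-sample clipping and Gaussian noising with an Adam-style moment update. It is not DP-MicroAdam (which is memory-efficient and sparsity-aware in ways this sketch ignores), and all names and hyperparameters are illustrative.

```python
import numpy as np

def dp_adaptive_step(per_sample_grads, m, v, t, lr=1e-3, clip=1.0,
                     noise_mult=1.0, beta1=0.9, beta2=0.999, eps=1e-8):
    # Clip each example's gradient to bound its sensitivity, then add noise
    # calibrated to the clipping norm (the DP-SGD privatization recipe).
    clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
               for g in per_sample_grads]
    noisy = (np.sum(clipped, axis=0)
             + noise_mult * clip * np.random.standard_normal(clipped[0].shape))
    g = noisy / len(per_sample_grads)
    # Adam-style first/second moment estimates on the privatized gradient.
    m = beta1 * m + (1 - beta1) * g
    v = beta2 * v + (1 - beta2) * g * g
    m_hat = m / (1 - beta1 ** t)
    v_hat = v / (1 - beta2 ** t)
    update = -lr * m_hat / (np.sqrt(v_hat) + eps)
    return update, m, v
```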

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether such content exists on arxiv.org.
  • MLOps
You can edit or add more interests any time.