Papers from 15 to 19 September, 2025

Here are your personalized paper recommendations, sorted by relevance.
Personalization
University of Siegen
Abstract
As global warming soars, the need to assess and reduce the environmental impact of recommender systems is becoming increasingly urgent. Despite this, the recommender systems community hardly understands, addresses, and evaluates the environmental impact of their work. In this study, we examine the environmental impact of recommender systems research by reproducing typical experimental pipelines. Based on our results, we provide guidelines for researchers and practitioners on how to minimize the environmental footprint of their work and implement green recommender systems - recommender systems designed to minimize their energy consumption and carbon footprint. Our analysis covers 79 papers from the 2013 and 2023 ACM RecSys conferences, comparing traditional "good old-fashioned AI" models with modern deep learning models. We designed and reproduced representative experimental pipelines for both years, measuring energy consumption using a hardware energy meter and converting it into CO2 equivalents. Our results show that papers utilizing deep learning models emit approximately 42 times more CO2 equivalents than papers using traditional models. On average, a single deep learning-based paper generates 2,909 kilograms of CO2 equivalents - more than the carbon emissions of a person flying from New York City to Melbourne or the amount of CO2 sequestered by one tree over 260 years. This work underscores the urgent need for the recommender systems and wider machine learning communities to adopt green AI principles, balancing algorithmic advancements and environmental responsibility to build a sustainable future with AI-powered personalization.
AI Insights
  • The authors provide a reproducible pipeline that measures real hardware energy use, not just theoretical FLOPs.
  • A detailed checklist urges authors to disclose energy budgets, CO₂ equivalents, and hardware specs for each experiment.
  • Comparative tables show deep‑learning recommenders emit 42× more CO₂ than classic matrix‑factorization models; a conversion sketch follows this list.
  • The paper argues that environmental costs should be justified by linking them to tangible societal benefits.
  • It recommends low‑power hardware and algorithmic pruning to shrink the carbon footprint of future systems.
  • By framing sustainability as a research metric, the study invites curiosity about how green AI can coexist with high recommendation accuracy.
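
The headline numbers above come from a simple energy-to-emissions conversion. Below is a minimal Python sketch of that arithmetic, assuming a generic grid carbon-intensity factor; the paper itself measures energy with a hardware meter, and its exact conversion factors may differ.

```python
# Minimal sketch: convert measured energy draw into CO2 equivalents.
# The carbon intensity below (475 gCO2e/kWh, a rough global grid
# average) is an assumption, not the paper's exact factor.

GRID_INTENSITY_G_PER_KWH = 475.0  # assumed grid carbon intensity

def co2e_kg(energy_kwh: float,
            intensity_g_per_kwh: float = GRID_INTENSITY_G_PER_KWH) -> float:
    """Convert energy consumption in kWh to kilograms of CO2 equivalents."""
    return energy_kwh * intensity_g_per_kwh / 1000.0

if __name__ == "__main__":
    # e.g. a GPU node drawing 1.2 kW for 48 hours of experiments
    energy = 1.2 * 48                        # 57.6 kWh
    print(f"{co2e_kg(energy):.1f} kg CO2e")  # ~27.4 kg CO2e
```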
Tsinghua University
Abstract
Standardized, one-size-fits-all educational content often fails to connect with students' individual backgrounds and interests, leading to disengagement and a perceived lack of relevance. To address this challenge, we introduce PAGE, a novel framework that leverages large language models (LLMs) to automatically personalize educational materials by adapting them to each student's unique context, such as their major and personal interests. To validate our approach, we deployed PAGE in a semester-long intelligent tutoring system and conducted a user study to evaluate its impact in an authentic educational setting. Our findings show that students who received personalized content demonstrated significantly improved learning outcomes and reported higher levels of engagement, perceived relevance, and trust compared to those who used standardized materials. This work demonstrates the practical value of LLM-powered personalization and offers key design implications for creating more effective, engaging, and trustworthy educational experiences.
AI Insights
  • Structured prompts steer LLMs to extract user profiles, generate personalized search queries, and rank lecture scripts by Instructional Accuracy, Clarity, and Logical Coherence.
  • Six learning‑effectiveness dimensions—Learning new Concepts, Deepening, Attractiveness, Efficiency, Stimulation, Dependability—were scored 1‑5 for 40 students, linking Personalization Relevance to higher engagement.
  • Table 10 shows higher Personalization Relevance consistently boosts Student Engagement and Trust.
  • Prompt design with explicit output requirements and example templates markedly improves content accuracy and relevance.
  • Recommended reading: “Natural Language Processing with Python,” “Deep Learning for Natural Language Processing,” and the paper “Learning in Context: A Framework for Personalized Education using Large Language Models.”
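
A rough illustration of the structured-prompt idea described above: the template below steers an LLM to adapt a lecture script to a student's profile while preserving the listed quality criteria. The field names and wording are illustrative assumptions, not the authors' actual prompts.

```python
# Sketch of a structured personalization prompt in the spirit of PAGE.
# Template fields and rubric names follow the insights above; treat the
# exact wording as an assumption.

PROMPT_TEMPLATE = """You are an instructional content adapter.
Student profile:
- Major: {major}
- Interests: {interests}

Rewrite the lecture script below so its examples connect to the student's
background, while preserving Instructional Accuracy, Clarity, and Logical
Coherence. Output only the adapted script.

Lecture script:
{script}
"""

def build_prompt(major: str, interests: list[str], script: str) -> str:
    return PROMPT_TEMPLATE.format(
        major=major,
        interests=", ".join(interests),
        script=script,
    )

print(build_prompt("Mechanical Engineering", ["robotics", "cycling"],
                   "Gradient descent minimizes a loss step by step..."))
```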
Personalization Platform
University of Technology
Abstract
Federated foundation models represent a new paradigm to jointly fine-tune pre-trained foundation models across clients. It is still a challenge to fine-tune foundation models for a small group of new users or specialized scenarios, which typically involve limited data compared to the large-scale data used in pre-training. In this context, the trade-off between personalization and federation becomes more sensitive. To tackle these challenges, we propose a bi-level personalization framework for federated fine-tuning of foundation models. Specifically, we conduct personalized fine-tuning at the client level using each client's private data, and then perform personalized aggregation at the server level over similar users, as measured by client-specific task vectors. Given the personalization information gained from client-level fine-tuning, the server-level personalized aggregation can capture group-wise personalization information while mitigating the disturbance of irrelevant or interest-conflicting clients with non-IID data. The effectiveness of the proposed algorithm is demonstrated by extensive experimental analysis on benchmark datasets.
AI Insights
  • FedAWA uses client vectors to adaptively weight server aggregation, improving robustness to non‑IID data.
  • L-DAWA adds layer-wise divergence awareness, boosting performance in self-supervised vision tasks.
  • All fine‑tuning stays local; only lightweight task vectors are shared, preserving privacy.
  • Benchmarks show 3–5% accuracy gains over FedAvg/FedProx with similar communication overhead.
  • Open‑source code on GitHub lets researchers prototype the method quickly.
  • Grouping users by task vectors balances personalization and global convergence.
  • For more, read “Federated Learning: Algorithms, Systems, and Applications” and the FedAWA arXiv paper.
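
To make the server-level step concrete, here is a NumPy sketch of similarity-weighted aggregation over client task vectors. The softmax-over-cosine weighting and the temperature parameter are assumptions for illustration; the paper's exact grouping rule may differ.

```python
import numpy as np

# Sketch of server-level personalized aggregation guided by client task
# vectors (the difference between fine-tuned and pre-trained weights).

def cosine(u: np.ndarray, v: np.ndarray) -> float:
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-12))

def personalized_aggregate(task_vectors: list[np.ndarray],
                           client_updates: list[np.ndarray],
                           target: int,
                           temperature: float = 0.1) -> np.ndarray:
    """Aggregate updates for one client, upweighting similar clients."""
    sims = np.array([cosine(task_vectors[target], tv) for tv in task_vectors])
    weights = np.exp(sims / temperature)
    weights /= weights.sum()
    # Irrelevant or interest-conflicting clients (low similarity) receive
    # near-zero weight, mitigating non-IID disturbance.
    return sum(w * u for w, u in zip(weights, client_updates))
```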
Data Driven CRM
Abstract
Industrial monitoring systems, especially when deployed in Industry 4.0 environments, are undergoing a paradigm shift from traditional rule-based architectures to data-driven approaches leveraging machine learning and artificial intelligence. This study presents a comparison between these two methodologies, analyzing their respective strengths, limitations, and application scenarios, and proposes a basic framework to evaluate their key properties. Rule-based systems offer high interpretability, deterministic behavior, and ease of implementation in stable environments, making them ideal for regulated industries and safety-critical applications. However, they face challenges with scalability, adaptability, and performance in complex or evolving contexts. Conversely, data-driven systems excel at detecting hidden anomalies, enabling predictive maintenance and dynamic adaptation to new conditions. Despite their high accuracy, these models face challenges related to data availability, explainability, and integration complexity. The paper suggests hybrid solutions as a promising direction, combining the transparency of rule-based logic with the analytical power of machine learning. Our hypothesis is that the future of industrial monitoring lies in intelligent, synergic systems that leverage both expert knowledge and data-driven insights. This dual approach enhances resilience, operational efficiency, and trust, paving the way for smarter and more flexible industrial environments.
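
As a concrete illustration of the hybrid direction the paper advocates, the sketch below layers deterministic threshold rules over a learned anomaly detector (scikit-learn's IsolationForest). The thresholds, features, and training data are illustrative assumptions, not drawn from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

# Hybrid monitoring sketch: deterministic rules catch known,
# safety-critical violations; a learned detector flags hidden anomalies.

RULES = [
    ("overtemperature", lambda s: s["temp_c"] > 90.0),       # assumed limit
    ("overpressure",    lambda s: s["pressure_bar"] > 8.0),  # assumed limit
]

def check_rules(sample: dict) -> list[str]:
    return [name for name, rule in RULES if rule(sample)]

# Data-driven layer: IsolationForest trained on (synthetic) normal data.
normal_data = np.random.normal([60.0, 5.0], [5.0, 0.5], size=(1000, 2))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_data)

def monitor(sample: dict) -> dict:
    features = np.array([[sample["temp_c"], sample["pressure_bar"]]])
    return {
        "rule_violations": check_rules(sample),             # interpretable, deterministic
        "ml_anomaly": detector.predict(features)[0] == -1,  # catches novel patterns
    }

print(monitor({"temp_c": 95.0, "pressure_bar": 5.1}))
```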
Perfios Software Solution
Abstract
Personalized financial advice requires consideration of user goals, constraints, risk tolerance, and jurisdiction. Prior LLM work has focused on support systems for investors and financial planners. Simultaneously, numerous recent studies examine broader personal finance tasks, including budgeting, debt management, retirement, and estate planning, through agentic pipelines that incur high maintenance costs, yielding less than 25% of their expected financial returns. In this study, we introduce a novel and reproducible framework that integrates relevant financial context with behavioral finance studies to construct supervision data for end-to-end advisors. Using this framework, we create a 19k sample reasoning dataset and conduct a comprehensive fine-tuning of the Qwen-3-8B model on the dataset. Through a held-out test split and a blind LLM-jury study, we demonstrate that through careful data curation and behavioral integration, our 8B model achieves performance comparable to significantly larger baselines (14-32B parameters) across factual accuracy, fluency, and personalization metrics while incurring 80% lower costs than the larger counterparts.
AI Insights
  • The 8B model matched 14–32B baselines in factual accuracy, yet cut inference cost by 80%.
  • Case C2 failure traced to a material filing error, underscoring the need for robust data validation.
  • Baseline‑L’s multi‑step reasoning consistently outperformed the 8B model on nuanced financial scenarios.
  • The framework’s 19k reasoning samples were generated by blending behavioral finance experiments with domain context.
  • Efficiency gains were achieved through selective fine‑tuning rather than scaling parameters.
  • Literature suggests financial literacy directly improves investment decisions, aligning with the model’s personalization metrics.
  • Future work could integrate real‑time regulatory updates to mitigate filing‑related inaccuracies.
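
To illustrate the data-construction step, here is a hedged sketch of how a supervision sample blending user financial context with a behavioral-finance note might be assembled into a reasoning-style fine-tuning record. Field names and the example record are assumptions, not the authors' schema.

```python
import json

# Sketch of assembling one supervision sample for an end-to-end advisor:
# user context + behavioral-finance framing + reasoning-then-answer target.

def build_sample(profile: dict, behavioral_note: str, question: str,
                 reasoning: str, answer: str) -> dict:
    prompt = (
        f"User profile: goals={profile['goals']}; risk tolerance="
        f"{profile['risk']}; jurisdiction={profile['jurisdiction']}.\n"
        f"Behavioral context: {behavioral_note}\n"
        f"Question: {question}"
    )
    # A reasoning trace before the final answer makes this a "reasoning dataset".
    return {"prompt": prompt, "completion": f"{reasoning}\n\nAnswer: {answer}"}

sample = build_sample(
    profile={"goals": "retire at 60", "risk": "moderate", "jurisdiction": "IN"},
    behavioral_note="Users anchored on recent losses tend to over-weight cash.",
    question="Should I move my equity SIP into fixed deposits?",
    reasoning="Given a 20-year horizon and moderate risk tolerance, ...",
    answer="Keep the SIP; rebalance annually rather than exiting equities.",
)
with open("advisor_sft.jsonl", "a") as f:
    f.write(json.dumps(sample) + "\n")
```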
CRM Optimization
Peking University
Abstract
With the diminishing return from Moore's Law, system-technology co-optimization (STCO) has emerged as a promising approach to sustain the scaling trends in the VLSI industry. By bridging the gap between system requirements and technology innovations, STCO enables customized optimizations for application-driven system architectures. However, existing research lacks sufficient discussion on efficient STCO methodologies, particularly in addressing the information gap across design hierarchies and navigating the expansive cross-layer design space. To address these challenges, this paper presents Orthrus, a dual-loop automated framework that synergizes system-level and technology-level optimizations. At the system level, Orthrus employs a novel mechanism to prioritize the optimization of critical standard cells using system-level statistics. It also guides technology-level optimization via the normal directions of the Pareto frontier efficiently explored by Bayesian optimization. At the technology level, Orthrus leverages system-aware insights to optimize standard cell libraries. It employs a neural network-assisted enhanced differential evolution algorithm to efficiently optimize technology parameters. Experimental results on 7nm technology demonstrate that Orthrus achieves 12.5% delay reduction at iso-power and 61.4% power savings at iso-delay over the baseline approaches, establishing new Pareto frontiers in STCO.
AI Insights
  • Orthrus integrates graph neural networks to capture inter‑cell connectivity, enabling fine‑grained feature extraction beyond conventional statistical models.
  • The framework leverages a neural‑network‑assisted differential evolution algorithm, combining gradient‑free search with learned surrogate guidance for rapid convergence.
  • Experimental validation on 7 nm technology demonstrates that Orthrus pushes the Pareto frontier, achieving up to 61.4% power savings while maintaining delay targets.
  • Despite its performance gains, the method’s reliance on large‑scale training data exposes it to scalability bottlenecks in industrial‑scale designs.
  • The authors cite Openbox and Sequential Model‑Based Optimization as foundational tools for their black‑box Bayesian search strategy.
  • Design‑Technology Co‑Optimization (DTCO) is formally defined as a joint optimization of circuit layout and process parameters to meet system‑level constraints.
  • Graph Neural Networks (GNNs) are employed to model the circuit as a graph, allowing the optimizer to learn structural patterns that influence timing and power.
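
The sketch below illustrates surrogate-assisted differential evolution, the flavor of search the insights above attribute to Orthrus's technology level: a cheap surrogate pre-screens trial vectors so the expensive true objective is evaluated only when improvement is predicted. The toy objective, the stand-in surrogate, and the hyperparameters are assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def de_step(pop, fitness, objective, surrogate, F=0.5, CR=0.9):
    """One DE/rand/1/bin generation with a surrogate pre-screen."""
    n, d = pop.shape
    for i in range(n):
        a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
        mutant = a + F * (b - c)
        cross = rng.random(d) < CR
        trial = np.where(cross, mutant, pop[i])
        # Surrogate pre-screen: skip the expensive true evaluation
        # (e.g., standard-cell characterization) unless improvement is predicted.
        if surrogate(trial) < fitness[i]:
            f_trial = objective(trial)
            if f_trial < fitness[i]:
                pop[i], fitness[i] = trial, f_trial
    return pop, fitness

# Toy usage: sphere objective; a noisy stand-in plays the trained NN surrogate.
objective = lambda x: float(np.sum(x ** 2))
surrogate = lambda x: objective(x) + rng.normal(0, 0.1)
pop = rng.uniform(-1, 1, size=(20, 5))
fitness = np.array([objective(x) for x in pop])
for _ in range(50):
    pop, fitness = de_step(pop, fitness, objective, surrogate)
print(f"best fitness: {fitness.min():.4f}")
```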
LTCI, Télécom Paris
Abstract
We analyze a zeroth-order particle algorithm for the global optimization of a non-convex function, focusing on a variant of Consensus-Based Optimization (CBO) with small but fixed noise intensity. Unlike most previous studies restricted to finite horizons, we investigate its long-time behavior with fixed parameters. In the mean-field limit, a quantitative Laplace principle shows exponential convergence to a neighborhood of the minimizer x*. For finitely many particles, a block-wise analysis yields explicit error bounds: individual particles achieve long-time consistency near x*, and the global best particle converges to x*. The proof technique combines a quantitative Laplace principle with block-wise control of Wasserstein distances, avoiding the exponential blow-up typical of Grönwall-based estimates.
AI Insights
  • A quantitative Laplace principle yields exponential convergence to a minimizer neighborhood, beating classic Grönwall bounds.
  • Block‑wise Wasserstein control gives explicit finite‑particle error bounds, keeping each agent near the global optimum forever.
  • Remarkably, fixed non‑vanishing noise still guarantees long‑time consistency, avoiding the usual diminishing‑noise requirement.
  • Uniform‑in‑time propagation of chaos (Gerber et al., 2025) underpins the empirical measure’s convergence across all horizons.
  • The mean‑field limit marries probability theory and PDE tools, handling non‑convex objectives in a single framework.
  • Key references—Fournier & Guillin (2015) on Wasserstein rates and FKR24 on global CBO convergence—anchor the analysis.
  • Authors flag smoothness as an assumption, pointing to future work that will tackle truly rugged landscapes.
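
For readers unfamiliar with the dynamic being analyzed, here is a minimal NumPy sketch of a standard CBO discretization with fixed noise intensity, matching the setting above: particles drift toward a softmax-weighted consensus point and diffuse anisotropically. Parameter values and the toy objective are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def cbo(f, dim, n_particles=100, steps=2000,
        alpha=50.0, lam=1.0, sigma=0.2, dt=0.01):
    """Consensus-Based Optimization with fixed noise intensity sigma."""
    X = rng.uniform(-3, 3, size=(n_particles, dim))
    for _ in range(steps):
        fx = np.array([f(x) for x in X])
        # Consensus point: average weighted by exp(-alpha f(X_i)),
        # shifted by fx.min() for numerical stability.
        w = np.exp(-alpha * (fx - fx.min()))
        m = (w[:, None] * X).sum(axis=0) / w.sum()
        drift = -lam * (X - m) * dt
        # Anisotropic diffusion; sigma stays fixed, as in the paper's setting.
        noise = sigma * np.abs(X - m) * np.sqrt(dt) * rng.standard_normal(X.shape)
        X += drift + noise
    return m  # approximates the global minimizer x*

# Toy usage: a non-convex objective with global minimum at the origin.
f = lambda x: float(np.sum(x**2 + 1.0 - np.cos(2 * np.pi * x)))
print(cbo(f, dim=2))  # should land near [0, 0]
```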

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether such content exists on arxiv.org.
  • MLOps
  • Email Marketing
You can edit or add more interests at any time.
