Papers from 15 to 19 September 2025

Here are the personalized paper recommendations, sorted by relevance.
Meta
Abstract
Beyond general web-scale search, social network search uniquely enables users to retrieve information and discover potential connections within their social context. We introduce a framework that modernizes Facebook Group Scoped Search by blending traditional keyword-based retrieval with embedding-based retrieval (EBR) to improve the relevance and diversity of search results. Our system integrates semantic retrieval into the existing keyword search pipeline, enabling users to discover more contextually relevant group posts. To rigorously assess the impact of this blended approach, we introduce a novel evaluation framework that leverages large language models (LLMs) to perform offline relevance assessments, providing scalable and consistent quality benchmarks. Our results demonstrate that the blended retrieval system significantly enhances user engagement and search quality, as validated by both online metrics and LLM-based evaluation. This work offers practical insights for deploying and evaluating advanced retrieval systems in large-scale, real-world social platforms.
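As a rough illustration of the blending step, here is a minimal sketch that merges keyword and embedding retrieval results by normalized-score interpolation. The keyword_search/embedding_search stubs, the min-max normalization, and the 0.5/0.5 weights are illustrative assumptions, not the production blending formula.

```python
# Illustrative sketch only: the retrievers below are stand-in stubs, and the
# normalization and weights are assumptions, not the production formula.

def keyword_search(query, group_id):
    # Stand-in for the keyword retriever; returns (post_id, score) pairs.
    return [("post_1", 12.3), ("post_2", 8.7), ("post_3", 5.1)]

def embedding_search(query, group_id):
    # Stand-in for EBR: approximate nearest neighbors over post embeddings.
    return [("post_2", 0.91), ("post_4", 0.88), ("post_1", 0.65)]

def normalize(results):
    """Min-max normalize scores to [0, 1] so the two sources are comparable."""
    if not results:
        return {}
    scores = [s for _, s in results]
    lo, hi = min(scores), max(scores)
    span = (hi - lo) or 1.0
    return {pid: (s - lo) / span for pid, s in results}

def blended_search(query, group_id, k=20, w_kw=0.5, w_ebr=0.5):
    kw = normalize(keyword_search(query, group_id))
    ebr = normalize(embedding_search(query, group_id))
    merged = {pid: w_kw * kw.get(pid, 0.0) + w_ebr * ebr.get(pid, 0.0)
              for pid in set(kw) | set(ebr)}
    return sorted(merged.items(), key=lambda x: x[1], reverse=True)[:k]

print(blended_search("weekend hikes", group_id="outdoors_club"))
```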
UCLA
Abstract
Most research on query optimization has centered on binary join algorithms like hash join and sort-merge join. However, recent years have seen growing interest in theoretically optimal algorithms, notably Yannakakis' algorithm. These algorithms rely on join trees, which differ from the operator trees for binary joins and require new optimization techniques. We propose three approaches to constructing join trees for acyclic queries. First, we give an algorithm to enumerate all join trees of an alpha-acyclic query by edits with amortized constant delay, which forms the basis of a cost-based optimizer for acyclic joins. Second, we show that the Maximum Cardinality Search algorithm by Tarjan and Yannakakis constructs a unique shallowest join tree, rooted at any relation, for a Berge-acyclic query; this tree enables parallel execution of large join queries. Finally, we prove that any connected left-deep linear plan for a gamma-acyclic query can be converted into a join tree by a simple algorithm, allowing reuse of optimization infrastructure developed for binary joins.
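For reference, a minimal sketch of the Maximum Cardinality Search ordering that the abstract refers to, on a plain graph given as an adjacency dict. The paper's join-tree construction builds on such an ordering; this sketch computes only the ordering itself.

```python
# Maximum Cardinality Search (Tarjan & Yannakakis): at each step, visit the
# unvisited vertex with the most already-visited neighbors. On a chordal
# graph the reverse of this order is a perfect elimination ordering.

def mcs_order(graph):
    visited = []
    weight = {v: 0 for v in graph}   # count of already-visited neighbors
    unvisited = set(graph)
    while unvisited:
        v = max(unvisited, key=lambda u: weight[u])
        visited.append(v)
        unvisited.remove(v)
        for nbr in graph[v]:
            if nbr in unvisited:
                weight[nbr] += 1
    return visited

# Example: a chain a-b-c.
print(mcs_order({"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}))
```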
Personalization
University of Siegen, Go
Abstract
As global warming accelerates, the need to assess and reduce the environmental impact of recommender systems is becoming increasingly urgent. Despite this, the recommender systems community hardly understands, addresses, or evaluates the environmental impact of its work. In this study, we examine the environmental impact of recommender systems research by reproducing typical experimental pipelines. Based on our results, we provide guidelines for researchers and practitioners on how to minimize the environmental footprint of their work and implement green recommender systems - recommender systems designed to minimize their energy consumption and carbon footprint. Our analysis covers 79 papers from the 2013 and 2023 ACM RecSys conferences, comparing traditional "good old-fashioned AI" models with modern deep learning models. We designed and reproduced representative experimental pipelines for both years, measuring energy consumption with a hardware energy meter and converting it into CO2 equivalents. Our results show that papers utilizing deep learning models emit approximately 42 times more CO2 equivalents than papers using traditional models. On average, a single deep learning-based paper generates 2,909 kilograms of CO2 equivalents - more than the carbon emissions of a person flying from New York City to Melbourne, or the amount of CO2 sequestered by one tree over 260 years. This work underscores the urgent need for the recommender systems and wider machine learning communities to adopt green AI principles, balancing algorithmic advances with environmental responsibility to build a sustainable future with AI-powered personalization.
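For intuition, a back-of-the-envelope sketch of the energy-to-CO2e conversion described above. The 0.475 kg CO2e/kWh grid-intensity factor is an illustrative global-average assumption, not necessarily the paper's exact figure.

```python
# Convert measured energy to CO2 equivalents. The grid-intensity factor
# (kg CO2e per kWh) is an illustrative global-average assumption.
GRID_INTENSITY_KG_PER_KWH = 0.475

def co2e_kg(energy_kwh, intensity=GRID_INTENSITY_KG_PER_KWH):
    return energy_kwh * intensity

# E.g., a training pipeline drawing 300 W for 48 hours:
energy_kwh = 0.300 * 48                      # power (kW) x time (h)
print(f"{co2e_kg(energy_kwh):.1f} kg CO2e")  # ~6.8 kg
```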
AI Insights
  • The authors provide a reproducible pipeline that measures real hardware energy use, not just theoretical FLOPs.
  • A detailed checklist urges authors to disclose energy budgets, CO₂ equivalents, and hardware specs for each experiment.
  • Comparative tables show deep‑learning recommenders emit 42× more CO₂ than classic matrix‑factorization models.
  • The paper argues that environmental costs should be justified by tangible societal benefits, encouraging more deliberate research practices.
  • It recommends low‑power hardware and algorithmic pruning to shrink the carbon footprint of future systems.
  • By framing sustainability as a research metric, the study invites curiosity about how green AI can coexist with high recommendation accuracy.
Tsinghua University, Tsia
Abstract
Standardized, one-size-fits-all educational content often fails to connect with students' individual backgrounds and interests, leading to disengagement and a perceived lack of relevance. To address this challenge, we introduce PAGE, a novel framework that leverages large language models (LLMs) to automatically personalize educational materials by adapting them to each student's unique context, such as their major and personal interests. To validate our approach, we deployed PAGE in a semester-long intelligent tutoring system and conducted a user study to evaluate its impact in an authentic educational setting. Our findings show that students who received personalized content demonstrated significantly improved learning outcomes and reported higher levels of engagement, perceived relevance, and trust compared to those who used standardized materials. This work demonstrates the practical value of LLM-powered personalization and offers key design implications for creating more effective, engaging, and trustworthy educational experiences.
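As a rough illustration, here is a minimal sketch of the kind of structured personalization prompt such a framework might use. The profile fields and template wording are assumptions, not PAGE's actual prompts.

```python
# Illustrative personalization prompt in the spirit of PAGE. The profile
# fields and template are assumptions, not the paper's actual prompts.
PROMPT_TEMPLATE = """You are a teaching assistant.
Student profile:
- Major: {major}
- Interests: {interests}

Rewrite the following lecture material so its examples and analogies
connect to this student's background, without changing the underlying
concepts or their instructional accuracy:

{material}
"""

def build_prompt(profile, material):
    return PROMPT_TEMPLATE.format(
        major=profile["major"],
        interests=", ".join(profile["interests"]),
        material=material,
    )

prompt = build_prompt(
    {"major": "Mechanical Engineering", "interests": ["Formula 1", "drones"]},
    "Gradient descent iteratively moves parameters against the gradient...",
)
print(prompt)
```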
AI Insights
  • Structured prompts steer LLMs to extract user profiles, generate personalized search queries, and rank lecture scripts by Instructional Accuracy, Clarity, and Logical Coherence.
  • Six learning-effectiveness dimensions (Learning New Concepts, Deepening, Attractiveness, Efficiency, Stimulation, Dependability) were scored on a 1-5 scale by 40 students, linking personalization relevance to higher engagement.
  • Table 10 shows higher Personalization Relevance consistently boosts Student Engagement and Trust.
  • Prompt design with explicit output requirements and example templates markedly improves content accuracy and relevance.
  • Recommended reading: “Natural Language Processing with Python,” “Deep Learning for Natural Language Processing,” and the paper “Learning in Context: A Framework for Personalized Education using Large Language Models.”
Deep Learning
University of North Carol
Abstract
We develop a deep learning algorithm for approximating functional rational expectations equilibria of dynamic stochastic economies in the sequence space. We use deep neural networks to parameterize equilibrium objects of the economy as a function of truncated histories of exogenous shocks. We train the neural networks to fulfill all equilibrium conditions along simulated paths of the economy. To illustrate the performance of our method, we solve three economies of increasing complexity: the stochastic growth model, a high-dimensional overlapping generations economy with multiple sources of aggregate risk, and finally an economy where households and firms face uninsurable idiosyncratic risk, shocks to aggregate productivity, and shocks to idiosyncratic and aggregate volatility. Furthermore, we show how to design practical neural policy function architectures that guarantee monotonicity of the predicted policies, facilitating the use of the endogenous grid method to simplify parts of our algorithm.
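For intuition, a minimal sketch of one standard way to guarantee a monotone policy network: constrain weights to be non-negative (via softplus) and use increasing activations, so the composed map is non-decreasing in every input. The paper's exact architecture may differ.

```python
import numpy as np

# Monotone MLP sketch: softplus(raw weights) >= 0 and tanh is increasing,
# so outputs are non-decreasing in every input coordinate. A standard
# construction; the paper's exact architecture may differ.

def softplus(x):
    return np.log1p(np.exp(x))

class MonotoneMLP:
    def __init__(self, sizes, rng=np.random.default_rng(0)):
        # Unconstrained parameters; made non-negative at forward time.
        self.raw_weights = [rng.normal(size=(m, n)) for m, n in zip(sizes, sizes[1:])]
        self.biases = [np.zeros(n) for n in sizes[1:]]

    def __call__(self, x):
        h = x
        for w_raw, b in zip(self.raw_weights[:-1], self.biases[:-1]):
            h = np.tanh(h @ softplus(w_raw) + b)
        return h @ softplus(self.raw_weights[-1]) + self.biases[-1]

# E.g., mapping a length-4 truncated shock history to a policy value:
policy = MonotoneMLP([4, 16, 1])
print(policy(np.ones(4)))
```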
AI Insights
  • Equilibrium objects are parameterized as functions of truncated exogenous shock histories, enabling temporal learning.
  • Training enforces all equilibrium conditions along simulated paths, avoiding iterative solves.
  • Monotonicity is guaranteed by neural architectures, simplifying endogenous grid use.
  • The method scales to high‑dimensional overlapping‑generations models with multiple aggregate risks.
  • A test case adds idiosyncratic risk, productivity shocks, and stochastic volatility, showing robustness.
  • The paper reviews numerical, projection, and machine‑learning methods, positioning deep learning as a unifying framework.
  • Suggested readings: Judd’s Numerical Methods in Economics and Druedahl & Ropke’s endogenous grid papers.
Sun Yat-sen University, Sh
Abstract
Convolutional neural networks are built from massive numbers of operations of different types and are highly computationally intensive. Among these operations, multiplication has higher computational complexity and usually requires more energy and longer inference time than other operations, which hinders the deployment of convolutional neural networks on mobile devices. On many resource-limited edge devices, complicated operations can be computed via lookup tables to reduce computational cost. Motivated by this, we introduce a generic and efficient lookup operation that can serve as a basic building block for neural networks. Instead of multiplying weights and activation values, simple yet efficient lookup operations compute their responses. To enable end-to-end optimization of the lookup operation, we construct the lookup tables in a differentiable manner and propose several training strategies to promote their convergence. By replacing computationally expensive multiplications with our lookup operations, we develop lookup networks for image classification, image super-resolution, and point cloud classification. Our lookup networks benefit from the lookup operations to achieve higher efficiency in terms of energy consumption and inference speed while maintaining performance competitive with vanilla convolutional networks. Extensive experiments show that our lookup networks deliver state-of-the-art performance across tasks (both classification and regression) and data types (both images and point clouds).
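As a rough illustration of the idea, a forward-only sketch that replaces multiplications with table lookups over quantized weights and activations. The differentiable, end-to-end-trained construction in the paper is omitted here.

```python
import numpy as np

# Quantize weights to K levels and activations to B bins, then replace
# every w * a with a precomputed table entry. Training-time tricks
# (differentiable tables, straight-through gradients) are omitted.

K, B = 16, 16
weight_levels = np.linspace(-1.0, 1.0, K)
act_levels = np.linspace(0.0, 1.0, B)           # e.g., post-ReLU range
table = np.outer(weight_levels, act_levels)     # table[k, b] = w_k * a_b

def lookup_dot(weight_idx, act_idx):
    """Inner product computed purely from table lookups and additions."""
    return table[weight_idx, act_idx].sum()

w_idx = np.array([3, 15, 8])    # indices into weight_levels
a_idx = np.array([10, 2, 7])    # indices into act_levels
print(lookup_dot(w_idx, a_idx))
# Matches the multiply-based result for the quantized values:
print(np.dot(weight_levels[w_idx], act_levels[a_idx]))
```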
AI Insights
  • Deep Lookup Network replaces costly multiplications with differentiable lookup tables, enabling end‑to‑end training.
  • Adaptive table refinement and gradient updates accelerate lookup convergence.
  • State‑of‑the‑art accuracy on image classification, super‑resolution, and point‑cloud tasks is achieved while cutting energy use by up to 60 %.
  • Lookup tables are jointly optimized with convolutional kernels, yielding a compact model ideal for mobile inference.
  • The paper surveys quantization, pruning, and knowledge distillation as complementary efficiency techniques.
  • Recommended resources: the survey “Quantization of Neural Networks” and Goodfellow’s “Deep Learning.”
Information Retrieval
Kuaishou Technology, Peki
Abstract
Retrieval-Augmented Generation (RAG) has emerged as a promising approach to address key limitations of Large Language Models (LLMs), such as hallucination, outdated knowledge, and lack of references. However, current RAG frameworks often struggle to identify whether retrieved documents meaningfully contribute to answer generation. This shortcoming makes it difficult to filter out irrelevant or even misleading content, which notably impacts final performance. In this paper, we propose Document Information Gain (DIG), a novel metric designed to quantify the contribution of retrieved documents to correct answer generation. DIG measures a document's value by computing the difference in the LLM's generation confidence with and without the document in context. Further, we introduce InfoGain-RAG, a framework that leverages DIG scores to train a specialized reranker, which prioritizes retrieved documents from the perspectives of exact distinction and accurate sorting. This approach effectively filters out irrelevant documents and selects the most valuable ones for better answer generation. Extensive experiments across various models and benchmarks demonstrate that InfoGain-RAG significantly outperforms existing approaches under both single- and multi-retriever paradigms. Specifically, on NaturalQA it achieves improvements of 17.9%, 4.5%, and 12.5% in exact-match accuracy over naive RAG, self-reflective RAG, and modern ranking-based RAG respectively, and an average improvement of 15.3% on the advanced proprietary model GPT-4o across all datasets. These results demonstrate that InfoGain-RAG offers a reliable solution for RAG in a wide range of applications.
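A minimal sketch of how DIG could be computed from answer log-probabilities; answer_logprob is a hypothetical helper, and the paper's exact confidence measure may differ.

```python
# Document Information Gain (DIG) sketch: the change in the LLM's confidence
# in the correct answer when a document is added to the context. The labels
# can then supervise a reranker, as in the paper.

def answer_logprob(model, question, answer, context=None):
    """Hypothetical helper: length-normalized log P(answer | question, context),
    scored token by token with the LLM. Not a real API."""
    ...

def dig_score(model, question, answer, document):
    with_doc = answer_logprob(model, question, answer, context=document)
    without_doc = answer_logprob(model, question, answer, context=None)
    return with_doc - without_doc   # > 0: document helps; < 0: it misleads

def label_documents(model, question, gold_answer, documents):
    """Produce (document, DIG) training pairs for the reranker."""
    scored = [(d, dig_score(model, question, gold_answer, d)) for d in documents]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```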
Whoop, Boston, MA, USA
Abstract
We consider a streaming signal in which each sample is linked to a latent class. We assume that multiple classifiers are available, each providing class probabilities with varying degrees of accuracy. These classifiers are employed following a straightforward, fixed policy. In this setting, we consider the problem of fusing the classifiers' outputs while incorporating the temporal structure of the signal to improve classification accuracy. We propose a state-space model and develop a filter tailored for real-time execution. We demonstrate the effectiveness of the proposed filter in an activity classification application based on inertial measurement unit (IMU) data from a wearable device.
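For intuition, a simplified discrete Bayesian filter that fuses per-sample classifier probabilities over time. The transition matrix and the treatment of classifier outputs as independent likelihoods are illustrative assumptions; the paper's Dirichlet-based state-space model is more elaborate.

```python
import numpy as np

# Predict-update filter over a discrete latent class. Each classifier's
# probability vector is treated as an independent likelihood (a strong
# simplification of the paper's model).

def filter_step(belief, classifier_probs, transition):
    pred = transition.T @ belief          # predict: propagate class belief
    like = np.ones_like(pred)
    for p in classifier_probs:            # update: multiply in likelihoods
        like *= p
    post = pred * like
    return post / post.sum()

n_classes = 3
transition = np.full((n_classes, n_classes), 0.05)
np.fill_diagonal(transition, 0.90)        # activities tend to persist
belief = np.full(n_classes, 1.0 / n_classes)

stream = [  # two classifiers per sample
    [np.array([0.6, 0.3, 0.1]), np.array([0.5, 0.4, 0.1])],
    [np.array([0.7, 0.2, 0.1]), np.array([0.6, 0.3, 0.1])],
]
for probs in stream:
    belief = filter_step(belief, probs, transition)
    print(belief.argmax(), belief.round(3))
```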
AI Insights
  • The filter models class probabilities with a Dirichlet prior, enabling principled Bayesian updates on streaming data.
  • Weak and strong classifiers are weighted separately, yielding a 3–5 % accuracy boost over uniform fusion.
  • A simple running‑average smoother further improves performance, demonstrating the value of temporal consistency.
  • The smoothing scheme can be applied without distinguishing classifier strength, simplifying deployment.
  • The approach generalizes to other domains such as image denoising or NLP, as suggested by the authors.
  • Key references include “Bayesian Filtering and Smoothing” by S. Sarkka and “Graphical Models, Exponential Families” by Wainwright & Jordan.
  • Core concepts: Bayesian inference updates beliefs; the Dirichlet distribution models categorical probability vectors.
Ranking
University of Michigan
Abstract
While social media feed rankings are primarily driven by engagement signals rather than any explicit value system, the resulting algorithmic feeds are not value-neutral: engagement may prioritize specific individualistic values. This paper presents an approach for social media feed value alignment. We adopt Schwartz's theory of Basic Human Values -- a broad set of human values that articulates complementary and opposing values forming the building blocks of many cultures -- and we implement an algorithmic approach that models and then ranks feeds by expressions of Schwartz's values in social media posts. Our approach enables controls where users can express weights on their desired values, combining these weights and post value expressions into a ranking that respects users' articulated trade-offs. Through controlled experiments (N=141 and N=250), we demonstrate that users can use these controls to architect feeds reflecting their desired values. Across users, value-ranked feeds align with personal values, diverging substantially from existing engagement-driven feeds.
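As a rough illustration of the ranking step, a minimal sketch that scores posts by a user-weighted sum of per-value expression scores. The value subset (Schwartz's theory defines more) and the linear combination are illustrative assumptions.

```python
# Value-weighted feed ranking sketch: each post carries scores for expressed
# values (e.g., from a classifier), each user sets a weight per value, and
# posts are ranked by the weighted sum. Value names are illustrative.

SCHWARTZ_VALUES = ["benevolence", "achievement", "security", "stimulation"]

def rank_feed(posts, user_weights):
    def score(post):
        return sum(user_weights.get(v, 0.0) * post["values"].get(v, 0.0)
                   for v in SCHWARTZ_VALUES)
    return sorted(posts, key=score, reverse=True)

posts = [
    {"id": 1, "values": {"achievement": 0.9, "stimulation": 0.4}},
    {"id": 2, "values": {"benevolence": 0.8, "security": 0.6}},
]
# A user who prioritizes community over status:
weights = {"benevolence": 1.0, "security": 0.5, "achievement": -0.5}
print([p["id"] for p in rank_feed(posts, weights)])  # -> [2, 1]
```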
Ant Group
Abstract
Pre-ranking plays a crucial role in large-scale recommender systems, significantly improving efficiency and scalability under the constraint of providing high-quality candidate sets in real time. The two-tower model is widely used in pre-ranking systems because its decoupled architecture, which independently processes user and item inputs before computing their interaction (e.g., a dot product or similarity measure), strikes a good balance between efficiency and effectiveness. However, this independence also means the two towers do not exchange information, reducing effectiveness. In this paper, we propose a novel architecture, the learnable Fully Interacted Two-tower Model (FIT), which enables rich information interaction while preserving inference efficiency. FIT consists of two main parts: the Meta Query Module (MQM) and the Lightweight Similarity Scorer (LSS). Specifically, MQM introduces a learnable item meta matrix to achieve expressive early interaction between user and item features, while LSS is designed to obtain effective late interaction between the user and item towers. Experimental results on several public datasets show that our proposed FIT significantly outperforms state-of-the-art pre-ranking baselines.
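For intuition, a forward-only sketch of the FIT idea under illustrative assumptions: a learnable meta matrix gives the user tower early item-side interaction, and a lightweight scorer mixes the tower outputs late. Dimensions, the attention form, and the scorer are not taken from the paper.

```python
import numpy as np

# Forward-only sketch: the user tower attends over learnable item-like
# prototypes (early interaction), and a small linear scorer combines the
# tower outputs (late interaction). All specifics are assumptions.

rng = np.random.default_rng(0)
d, n_meta = 32, 8
meta = rng.normal(size=(n_meta, d))      # learnable item meta matrix (MQM-like)
w_scorer = rng.normal(size=3 * d)        # lightweight similarity scorer (LSS-like)

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def user_tower(u_feat):
    attn = softmax(meta @ u_feat)        # attend over meta item prototypes
    return u_feat + attn @ meta          # user repr enriched with item signal

def score(u_feat, i_emb):
    u = user_tower(u_feat)
    z = np.concatenate([u, i_emb, u * i_emb])  # late interaction features
    return float(w_scorer @ z)

print(score(rng.normal(size=d), rng.normal(size=d)))
```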