Hi!

Your personalized paper recommendations for 12–16 January 2026.
University of Cambridge
AI Insights
  • The system is designed as a multi-agent architecture with 7 agents working together to provide supply chain risk management and mitigation strategies. [2]
  • The system uses few-shot prompting to provide illustrative examples of inputs paired with structured outputs, demonstrating the desired format and reasoning style. [1]
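The few-shot prompting pattern described above can be sketched as follows. This is a minimal illustrative sketch, not the paper's actual prompts: the example news items, JSON fields, and instruction wording are all invented for illustration.

```python
# Hypothetical sketch of few-shot prompting: each example pairs an unstructured
# input with the structured output the model should imitate. All example text
# and field names below are illustrative, not taken from the paper.

EXAMPLES = [
    {
        "input": "Port strike halts container traffic in Hamburg.",
        "output": '{"event": "logistics_disruption", "region": "Hamburg", "severity": "high"}',
    },
    {
        "input": "Chip maker reports record quarterly output.",
        "output": '{"event": "none", "region": null, "severity": "none"}',
    },
]

def build_prompt(news_item: str) -> str:
    """Assemble a few-shot prompt: instruction, worked examples, then the query."""
    parts = ["Classify each news item as a structured disruption signal (JSON)."]
    for ex in EXAMPLES:
        parts.append(f"News: {ex['input']}\nSignal: {ex['output']}")
    parts.append(f"News: {news_item}\nSignal:")
    return "\n\n".join(parts)

prompt = build_prompt("Flooding closes a key rubber supplier in Thailand.")
print(prompt)
```

The trailing `Signal:` leaves the model to complete the structured output in the format the examples demonstrate.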
Abstract
Modern supply chains are increasingly exposed to disruptions ranging from geopolitical events and demand shocks to trade restrictions and natural disasters. While many of these disruptions originate deep in the supply network, most companies still lack visibility beyond Tier-1 suppliers, leaving upstream vulnerabilities undetected until the impact cascades downstream. To overcome this blind spot and move from reactive recovery to proactive resilience, we introduce a minimally supervised agentic AI framework that autonomously monitors, analyses, and responds to disruptions across extended supply networks. The architecture comprises seven specialised agents powered by large language models and deterministic tools that jointly detect disruption signals from unstructured news, map them to multi-tier supplier networks, evaluate exposure based on network structure, and recommend mitigations such as alternative sourcing options. We evaluate the framework across 30 synthesised scenarios covering three automotive manufacturers and five disruption classes. The system achieves high accuracy across core tasks, with F1 scores between 0.962 and 0.991, and performs full end-to-end analyses in a mean of 3.83 minutes at a cost of $0.0836 per disruption. Relative to industry benchmarks of multi-day, analyst-driven assessments, this represents a reduction of more than three orders of magnitude in response time. A real-world case study of the 2022 Russia-Ukraine conflict further demonstrates operational applicability. This work establishes a foundational step toward building resilient, proactive, and autonomous supply chains capable of managing disruptions across deep-tier networks.
Why are we recommending this paper?
Due to your interest in AI for Supply Chain

This paper addresses a critical issue in supply chains – disruption monitoring – aligning directly with your interest in supply chain resilience. The agentic AI approach offers a potentially powerful solution for proactive risk management within complex networks.
Muffakham Jah College of Engineering and Technology
AI Insights
  • The selective retraining strategy achieves up to 417× return on investment by focusing computational resources on the most affected models rather than retraining all 30,000+ time series. [3]
  • Concept drift: A change in the underlying distribution of data over time, which can affect the accuracy of forecasting models. [3]
  • Drift detection: The process of identifying when concept drift has occurred and adapting the forecasting model accordingly. [3]
  • SHAP (SHapley Additive exPlanations): An explainability technique that assigns a value to each feature in a prediction, indicating its contribution to the outcome. [3]
  • It combines multiple detection approaches with explainable diagnosis and targeted remediation, addressing the complete drift lifecycle. [3]
  • The DriftGuard framework is a comprehensive system for managing concept drift in supply chain forecasting. [2]
Abstract
Supply chain forecasting models degrade over time as real-world conditions change. Promotions shift, consumer preferences evolve, and supply disruptions alter demand patterns, causing what is known as concept drift. This silent degradation leads to stockouts or excess inventory without triggering any system warnings. Current industry practice relies on manual monitoring and scheduled retraining every 3-6 months, which wastes computational resources during stable periods while missing rapid drift events. Existing academic methods focus narrowly on drift detection without addressing diagnosis or remediation, and they ignore the hierarchical structure inherent in supply chain data. What retailers need is an end-to-end system that detects drift early, explains its root causes, and automatically corrects affected models. We propose DriftGuard, a five-module framework that addresses the complete drift lifecycle. The system combines an ensemble of four complementary detection methods, namely error-based monitoring, statistical tests, autoencoder anomaly detection, and Cumulative Sum (CUSUM) change-point analysis, with hierarchical propagation analysis to identify exactly where drift occurs across product lines. Once detected, Shapley Additive Explanations (SHAP) analysis diagnoses the root causes, and a cost-aware retraining strategy selectively updates only the most affected models. Evaluated on over 30,000 time series from the M5 retail dataset, DriftGuard achieves 97.8% detection recall within 4.2 days and delivers up to 417× return on investment through targeted remediation.
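The CUSUM change-point idea behind one of the four detectors can be sketched in a few lines. This is a generic one-sided CUSUM on forecast errors, assuming invented slack and threshold values; DriftGuard's actual preprocessing and parameters are not reproduced here.

```python
# Minimal one-sided CUSUM sketch: monitor forecast errors and flag drift when
# the cumulative excess over a reference level crosses a threshold. The slack
# and threshold values are illustrative, not DriftGuard's actual settings.

def cusum_detect(errors, target=0.0, slack=0.5, threshold=4.0):
    """Return the index where upward drift in `errors` is flagged, else None."""
    s = 0.0
    for t, e in enumerate(errors):
        # accumulate only deviations beyond the slack allowance
        s = max(0.0, s + (e - target - slack))
        if s > threshold:
            return t
    return None

# Stable forecast errors, then a sustained shift (e.g. a demand pattern change).
errors = [0.1, -0.2, 0.3, 0.0, -0.1] + [1.5] * 10
print(cusum_detect(errors))  # 9
```

The slack parameter lets small, noisy errors pass without accumulating, so only a sustained shift in the error distribution triggers the alarm.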
Why are we recommending this paper?
Due to your interest in Supply Chain

Given your interest in AI for supply chain forecasting, this paper’s focus on concept drift detection is highly relevant. The hierarchical framework provides a structured approach to adapting forecasting models to changing conditions.
Leiden University
AI Insights
  • Social Welfare Function: A function that captures economic efficiency by measuring the total value generated in the system rather than just monetary transfers. [3]
  • The paper builds on Little's Law, which relates the average number of customers in a system to the arrival rate and the average time spent in the system. [3]
  • The paper explores a multi-server queueing system with two products and derives the optimal inventory levels for each product that maximize social welfare. [2]
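Little's Law (L = λW) can be illustrated with the textbook M/M/1 queue, the simplest single-server model related to the make-to-stock system here. The parameters below are invented for illustration, not taken from the paper.

```python
# Little's Law in an M/M/1 queue sketch. For arrival rate lam < service rate mu:
#   W = 1 / (mu - lam)            mean time a customer spends in the system
#   L = lam * W = lam / (mu - lam)   mean number in system, via Little's Law

def mm1_metrics(lam: float, mu: float):
    assert lam < mu, "queue must be stable"
    w = 1.0 / (mu - lam)   # mean sojourn time W
    l = lam * w            # Little's Law: L = lam * W
    return l, w

L, W = mm1_metrics(lam=4.0, mu=6.0)
print(L, W)  # 2.0 0.5
```

With 4 arrivals per hour and a mean sojourn of half an hour, an average of 2 customers are in the system at any time.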
Abstract
This paper analyzes a two-product make-to-stock queueing system where a single production facility serves two customer classes with independent Poisson arrivals. Customers make strategic join-or-balk decisions without observing current inventory levels. The analysis establishes the existence and uniqueness of Nash equilibria in customer joining strategies for various inventory scenarios. Optimal base-stock levels are characterized from both profit-maximizing and welfare-maximizing perspectives, with closed-form expressions for key performance measures.
Why are we recommending this paper?
Due to your interest in Supply Chain

This paper directly tackles inventory optimization within a make-to-stock system, a core component of supply chain management. The strategic joining aspect offers a nuanced approach to demand forecasting and inventory control.
Servamind Inc
AI Insights
  • On Fashion-MNIST, SERVA achieved 88.39% accuracy in 1.41s consuming 150.2J, while the fastest baseline to match this accuracy was MLP-3L requiring 60 epochs, 165.03s, and 14,938.1J (99× energy overhead). [3]
  • On MNIST, SERVA reached 96.48% in 1.45s at 153.6J versus MLP-3L at 18 epochs, 50.21s, and 4,551.5J (30× energy overhead). [3]
  • This is measured by the compute payload metric, which captures what the model actually operates on during training and inference. [3]
  • The SERVA model is based on the concept of sparse neural networks, which have been shown to be effective in reducing energy consumption while maintaining accuracy. [3]
  • The Chimera pipeline is inspired by the idea of using a hierarchical representation of data to reduce computational complexity. [3]
  • The SERVA model is a novel approach to machine learning that achieves state-of-the-art results in terms of accuracy and energy efficiency, outperforming traditional models on Fashion-MNIST and MNIST benchmarks. [3]
  • The SERVA model can extract minimal computational representations from .serva-encoded data while preserving all information necessary for model performance. [2]
  • The SERVA model outperforms traditional models in terms of accuracy and energy efficiency. [1]
Abstract
Artificial Intelligence (AI) infrastructure faces two compounding crises. Compute payload - the unsustainable energy and capital costs of training and inference - threatens to outpace grid capacity and concentrate capability among a handful of organizations. Data chaos - the 80% of project effort consumed by preparation, conversion, and preprocessing - strangles development velocity and locks datasets to single model architectures. Current approaches treat these as separate problems, managing each with incremental optimization while increasing ecosystem complexity. This paper presents ServaStack: a universal data format (.serva) paired with a universal AI compute engine (Chimera). The .serva format achieves lossless compression by encoding information using laser holography principles, while Chimera converts compute operations into a representational space where computation occurs directly on .serva files without decompression. The result is automatic data preprocessing. The Chimera engine enables any existing model to operate on .serva data without retraining, preserving infrastructure investments while revamping efficiency. Internal benchmarks demonstrate 30-374x energy efficiency improvements (96-99% reduction), 4x-34x lossless storage compression, and 68x compute payload reduction without accuracy loss when compared to RNN, CNN, and MLP models on FashionMNIST and MNIST datasets. At hyperscale with one billion daily iterations, these gains translate to $4.85M savings per petabyte per training cycle. When any data flows to any model on any hardware, the AI development paradigm shifts. The bottleneck moves from infrastructure to imagination.
Why are we recommending this paper?
Due to your interest in AI for Pricing

Considering your interest in AI for supply chain optimization, this paper’s focus on reducing the computational costs of AI models is directly relevant. Addressing the energy and capital constraints of AI infrastructure aligns with the broader trend of efficient AI applications.
The Alan Turing Institute
AI Insights
  • The authors contend that entertainment is a significant use case for AI, with people already using AI for activities unrelated to productivity. [3]
  • The paper suggests that this vision should inspire more debates, discourse, and study in the field of AI, as generative AI is increasingly being used for entertainment. [3]
  • AS: Artificially generated content. [3]
  • GenAI: Generative AI. [3]
  • Sociotechnical systems: Complex systems that combine social and technical components. [3]
  • The paper concludes by emphasizing the need for a constructive vision of cultural AI, rather than just harm minimization. [3]
  • The paper argues that mainstream approaches to evaluating AI systems tend to focus on intelligence and harm minimization, but neglect the cultural dimension of AI use. [2]
  • They propose developing a positive theory of what beneficial, nutritious entertainment might look like, rather than just mitigating harms. [0]
Abstract
Generative AI systems are predominantly designed, evaluated, and marketed as intelligent systems which will benefit society by augmenting or automating human cognitive labor, promising to increase personal, corporate, and macroeconomic productivity. But this mainstream narrative about what AI is and what it can do is in tension with another emerging use case: entertainment. We argue that the field of AI is unprepared to measure or respond to how the proliferation of entertaining AI-generated content will impact society. Emerging data suggest AI is already widely adopted for entertainment purposes -- especially by young people -- and represents a large potential source of revenue. We contend that entertainment will become a primary business model for major AI corporations seeking returns on massive infrastructure investments; this will exert a powerful influence on the technology these companies produce in the coming years. Examining current evaluation practices, we identify a critical asymmetry: while AI assessments rigorously measure both benefits and harms of intelligence, they focus almost exclusively on cultural harms. We lack frameworks for articulating how cultural outputs might be actively beneficial. Drawing on insights from the humanities, we propose "thick entertainment" as a framework for evaluating AI-generated cultural content -- one that considers entertainment's role in meaning-making, identity formation, and social connection rather than simply minimizing harm. While AI is often touted for its potential to revolutionize productivity, in the long run we may find that AI turns out to be as much about "intelligence" as social media is about social connection.
Why are we recommending this paper?
Due to your interest in AI for Pricing

This paper explores the broader implications of AI, including its potential impact on productivity and economic value – a key consideration for optimizing supply chain operations. The discussion about AI's role in augmenting human cognitive labor is pertinent to strategic AI investments.
Virginia Tech
AI Insights
  • The paper discusses the challenges of dataset licensing and attribution in AI research, highlighting the need for more transparent and equitable practices. [3]
  • Attribution: The act of acknowledging the source of a dataset or model used in AI research. [3]
  • The paper assumes that all datasets are available for use, which may not be the case in practice. [3]
  • The authors propose a framework for optimal data selection from multiple sources, which can improve performance scaling and reduce computational costs. [2]
Abstract
We argue that the machine learning value chain is structurally unsustainable due to an economic data processing inequality: each state in the data cycle from inputs to model weights to synthetic outputs refines technical signal but strips economic equity from data generators. We show, by analyzing seventy-three public data deals, that the majority of value accrues to aggregators, with documented creator royalties rounding to zero and widespread opacity of deal terms. This is not just an economic welfare concern: as data and its derivatives become economic assets, the feedback loop that sustains current learning algorithms is at risk. We identify three structural faults - missing provenance, asymmetric bargaining power, and non-dynamic pricing - as the operational machinery of this inequality. In our analysis, we trace these problems along the machine learning value chain and propose an Equitable Data-Value Exchange (EDVEX) Framework to enable a minimal market that benefits all participants. Finally, we outline research directions where our community can make concrete contributions to data deals and contextualize our position with related and orthogonal viewpoints.
Why are we recommending this paper?
Due to your interest in AI for Supply Chain
University of Cambridge
AI Insights
  • They use the MOBO (Multi-fidelity Bayesian Optimization) algorithm to search for optimal hyperparameters. [3]
  • MOBO: Multi-fidelity Bayesian Optimization. [3]
  • CNN: Convolutional Neural Network. [3]
  • CIFAR-10: A dataset of images for image classification. [3]
  • SOTA: State-of-the-Art. [3]
  • MAC: Multiply-accumulate operation. [3]
  • The authors present an approach to optimizing machine learning models for both performance and energy efficiency. [2]
  • However, the energy consumption of the optimized model is only 0.39 mJ, making it more energy-efficient than the state-of-the-art Spike Aggregation Transformer (SAFormer). [1]
Abstract
The ubiquity of machine learning (ML) and the demand for ever-larger models bring an increase in energy consumption and environmental impact. However, little is known about the energy scaling laws in ML, and existing research focuses on training cost -- ignoring the larger cost of inference. Furthermore, tools for measuring the energy consumption of ML do not provide actionable feedback. To address these gaps, we developed Energy Consumption Optimiser (ECOpt): a hyperparameter tuner that optimises for energy efficiency and model performance. ECOpt quantifies the trade-off between these metrics as an interpretable Pareto frontier. This enables ML practitioners to make informed decisions about energy cost and environmental impact, while maximising the benefit of their models and complying with new regulations. Using ECOpt, we show that parameter and floating-point operation counts can be unreliable proxies for energy consumption, and observe that the energy efficiency of Transformer models for text generation is relatively consistent across hardware. These findings motivate measuring and publishing the energy metrics of ML models. We further show that ECOpt can have a net positive environmental impact and use it to uncover seven models for CIFAR-10 that improve upon the state of the art, when considering accuracy and energy efficiency together.
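The accuracy/energy Pareto frontier ECOpt reports can be illustrated with a small dominance check. The candidate models and measurements below are hypothetical, not the paper's data: a model is Pareto-optimal if no other model is at least as good on both metrics and strictly better on one.

```python
# Sketch of a Pareto frontier over (accuracy, energy) trade-offs. All model
# names and numbers are invented for illustration, not taken from the paper.

def pareto_frontier(models):
    """models: list of (name, accuracy, energy_mJ). Returns non-dominated names."""
    frontier = []
    for name, acc, energy in models:
        dominated = any(
            a >= acc and e <= energy and (a > acc or e < energy)
            for _, a, e in models
        )
        if not dominated:
            frontier.append(name)
    return frontier

candidates = [
    ("A", 0.91, 5.0),   # highest accuracy, highest energy
    ("B", 0.89, 2.0),   # good trade-off
    ("C", 0.88, 2.5),   # dominated by B (less accurate AND more energy)
    ("D", 0.80, 1.0),   # low accuracy, very cheap
]
print(pareto_frontier(candidates))  # ['A', 'B', 'D']
```

Presenting the frontier rather than a single "best" model is what lets practitioners pick their own operating point between accuracy and energy cost.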
Why are we recommending this paper?
Due to your interest in AI for Pricing Optimization
BIT
AI Insights
  • The expected number of exploration steps is approximately T·ε₀·exp(−β·E[max(0, r̄ₖ)]). [3]
  • The Sharpe ratio under loss-averse rewards is bounded by (µ/σ)·[1 + (λ−1)·P(r<0)] / [1 + (λ−1)·P(r<0)·σ_{r<0}/σ]. [3]
  • The optimal loss aversion parameter λ* that maximizes expected utility under return distribution P(r) satisfies λ* = 1 + 2α·E[r | r<0] / E[r² | r<0]. [3]
  • Loss aversion: A psychological bias where people tend to prefer avoiding losses over acquiring gains. [3]
  • Overconfidence modeling: A method of modeling an agent's confidence in its actions, which can affect exploration and exploitation trade-offs. [3]
  • The expected number of exploration steps under overconfidence modeling is approximately T·ε₀·exp(−β·E[max(0, r̄ₖ)]). [3]
  • Loss aversion can change the optimal policy. [2]
Abstract
Financial markets are influenced by human behavior that deviates from rationality due to cognitive biases. Traditional reinforcement learning (RL) models for financial decision-making assume rational agents, potentially overlooking the impact of psychological factors. This study integrates cognitive biases into RL frameworks for financial trading, hypothesizing that such models can exhibit human-like trading behavior and achieve better risk-adjusted returns than standard RL agents. We introduce biases, such as overconfidence and loss aversion, into reward structures and decision-making processes and evaluate their performance in simulated and real-world trading environments. Despite its inconclusive or negative results, this study provides insights into the challenges of incorporating human-like biases into RL, offering valuable lessons for developing robust financial AI systems.
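The insight that loss aversion can change the optimal policy can be shown with a toy example. This uses a generic prospect-theory-style transform (losses weighted λ times more than gains) with invented returns, not the paper's actual reward structure or trading environments.

```python
# Toy illustration: a loss-averse reward transform (lambda > 1 penalty on
# losses) can flip which of two trading "arms" a greedy agent prefers.
# The return series and lambda values are invented for illustration.

def loss_averse_utility(r: float, lam: float) -> float:
    """Weight losses lam times more heavily than gains."""
    return r if r >= 0 else lam * r

# Arm 1: volatile but higher mean return; Arm 2: steady small gains.
arm1 = [0.10, -0.06, 0.10, -0.06, 0.10]   # mean return 0.036
arm2 = [0.03, 0.03, 0.03, 0.03, 0.03]     # mean return 0.030

def mean_utility(returns, lam):
    return sum(loss_averse_utility(r, lam) for r in returns) / len(returns)

for lam in (1.0, 3.0):
    best = max(("arm1", arm1), ("arm2", arm2), key=lambda a: mean_utility(a[1], lam))
    print(lam, best[0])  # rational agent picks arm1; loss-averse agent picks arm2
```

At λ = 1 (rational agent) the volatile arm wins on expected return; at λ = 3 the same agent prefers the steady arm, trading mean return for downside protection.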
Why are we recommending this paper?
Due to your interest in AI for Pricing Optimization
Chinese Academy of Sciences
AI Insights
  • The problem of fair allocation of indivisible goods to asymmetric agents is a complex one, with various approximation algorithms proposed in recent years. [2]
Abstract
We study the problem of allocating $m$ indivisible goods among $n$ agents, where each agent's valuation is fractionally subadditive (XOS). With respect to AnyPrice Share (APS) fairness, Kulkarni et al. (2024) showed that, when agents have binary marginal values, a $0.1222$-APS allocation can be found in polynomial time, and there exists an instance where no allocation is better than $0.5$-approximate APS. Very recently, Feige and Grinberg (2025) extended the problem to the asymmetric case, where agents may have different entitlements, and improved the approximation ratio to $1/6$ for general XOS valuations. In this work, we focus on the asymmetric setting with binary XOS valuations, and further improve the approximation ratio to $1/2$, which matches the known upper bound. We also present a polynomial-time algorithm to compute such an allocation. Beyond APS fairness, we also study the weighted maximin share (WMMS) fairness. Farhadi et al. (2019) showed that, a $1/n$-WMMS allocation always exists for agents with general additive valuations, and that this approximation ratio is tight. We extend this result to general XOS valuations, where a $1/n$-WMMS allocation still exists, and this approximation ratio cannot be improved even when marginal values are binary. This shows a sharp contrast to binary additive valuations, where an exact WMMS allocation exists and can be found in polynomial time.
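The maximin share concept underlying APS and WMMS can be illustrated in the simplest symmetric, additive special case. The paper itself treats the much harder XOS and asymmetric settings; this brute-force sketch only illustrates the MMS definition: an agent's MMS is the best worst-bundle value it could guarantee by partitioning the goods into n bundles itself.

```python
# Brute-force maximin share (MMS) for one agent with additive valuations.
# Only feasible for tiny instances (n ** m assignments); illustrative only.
from itertools import product

def mms(values, n):
    """values: agent's additive values for each good; n: number of agents."""
    best = 0
    # assign each good to one of n bundles; take the best min-bundle value
    for assignment in product(range(n), repeat=len(values)):
        bundles = [0] * n
        for good, bundle in enumerate(assignment):
            bundles[bundle] += values[good]
        best = max(best, min(bundles))
    return best

print(mms([4, 3, 2, 1], n=2))  # 5: e.g. split as {4,1} vs {3,2}
```

Weighted MMS (WMMS) and APS generalize this guarantee to agents with different entitlements; the paper's contribution is the approximation ratios achievable for binary XOS valuations in that asymmetric setting.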
Why are we recommending this paper?
Due to your interest in Pricing

Interests not found

We did not find any papers that match the interests below. Try other terms, and also consider whether the content exists on arxiv.org.
  • Supply
  • Demand
  • AI for Supply Chain Optimization
You can edit or add more interests any time.