Hi!

Your personalized paper recommendations for 2 to 6 February 2026.
National Institute of Advanced Industrial Science and Technology
Paper visualization
AI Insights
  • However, the authors acknowledge some limitations of their approach, such as the need for high-quality data and the potential for bias in the LLMs. (ML: 0.99)
  • The paper discusses the application of large language models (LLMs) in supply chain management, specifically in inventory management. (ML: 0.95)
  • They suggest that future research should focus on addressing these limitations and exploring other applications of LLMs in supply chain management. (ML: 0.93)
  • The paper also discusses the potential benefits of using LLMs in supply chain management, including improved decision-making and reduced costs. (ML: 0.92)
  • They compare the performance of InvAgent with other existing methods and find that it outperforms them in terms of accuracy and efficiency. (ML: 0.90)
  • Large Language Models (LLMs): Deep learning models that are trained on vast amounts of text data to generate human-like language. (ML: 0.89)
  • Supply Chain Management: The management of the flow of goods, services, and information from raw materials to end customers. (ML: 0.85)
  • Multi-Agent System: A system composed of multiple agents that interact with each other and their environment to achieve a common goal. (ML: 0.83)
  • The authors propose a multi-agent system based on LLMs for inventory management, which they call InvAgent. (ML: 0.77)
  • Inventory Management: The process of managing the flow of goods, materials, and supplies into and out of an organization. (ML: 0.73)
Abstract
This study investigates large language model (LLM) -based multi-agent systems (MASs) as a promising approach to inventory management, which is a key component of supply chain management. Although these systems have gained considerable attention for their potential to address the challenges associated with typical inventory management methods, key uncertainties regarding their effectiveness persist. Specifically, it is unclear whether LLM-based MASs can consistently derive optimal ordering policies and adapt to diverse supply chain scenarios. To address these questions, we examine an LLM-based MAS with a fixed-ordering strategy prompt that encodes the stepwise processes of the problem setting and a safety-stock strategy commonly used in inventory management. Our empirical results demonstrate that, even without detailed prompt adjustments, an LLM-based MAS can determine optimal ordering decisions in a restricted scenario. To enhance adaptability, we propose a novel agent called AIM-RM, which leverages similar historical experiences through similarity matching. Our results show that AIM-RM outperforms benchmark methods across various supply chain scenarios, highlighting its robustness and adaptability.
Why we recommend this paper
Due to your Interest in Supply Chain

This paper explores the application of LLMs within multi-agent systems for supply chain management, directly aligning with the user's interest in AI for supply chain optimization. The focus on structured decision prompts and memory retrieval is a relevant approach to improving supply chain efficiency.
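The similarity matching that AIM-RM uses to retrieve relevant historical experiences can be sketched minimally. This is our illustrative reconstruction, not the paper's implementation: the state encoding, record fields, and data below are hypothetical.

```python
import math

def cosine(a, b):
    """Cosine similarity between two numeric vectors (0.0 if either is zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def retrieve_similar(history, state, k=2):
    """Return the k past records whose stored state is most similar to `state`."""
    return sorted(history, key=lambda rec: cosine(rec["state"], state), reverse=True)[:k]

# Hypothetical records: state = (inventory, backlog, recent demand)
history = [
    {"state": (40, 0, 12), "order": 10, "cost": 3.1},
    {"state": (5, 8, 20), "order": 25, "cost": 9.4},
    {"state": (38, 1, 11), "order": 12, "cost": 3.0},
]
best = retrieve_similar(history, (39, 0, 12), k=2)
```

In an agent of this kind, the retrieved cases would typically be injected into the ordering prompt as few-shot context before the LLM decides the next order quantity.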
University of Toronto
Paper visualization
AI Insights
  • NP-GMM: Normalized Price Generalized Method of Moments; ABLP: Average Brand-Level Price; GMM: Generalized Method of Moments. (ML: 0.91)
  • T(ℓ): total execution time using ℓ threads; T_parallel: time for parallelizable tasks; T_serial: time for inherently serial tasks; T_overhead(ℓ): overhead introduced by parallelization, increasing in ℓ. (ML: 0.91)
  • The NP-GMM algorithm is approximately five times faster than the ABLP algorithm in evaluating the GMM objective function and gradients. (ML: 0.91)
  • The study only considers a limited number of model settings and does not explore other scenarios. (ML: 0.90)
  • The NP-GMM algorithm outperforms the ABLP algorithm in terms of computational efficiency and speed. (ML: 0.84)
Abstract
We propose a fast algorithm for computing the GMM estimator in the BLP demand model (Berry, Levinsohn, and Pakes, 1995). Inspired by nested pseudo-likelihood methods for dynamic discrete choice models, our approach avoids repeatedly solving the inverse demand system by swapping the order of the GMM optimization and the fixed-point computation. We show that, by fixing consumer-level outside-option probabilities, BLP's market-share to mean-utility inversion becomes closed-form and, crucially, separable across products, yielding a nested pseudo-GMM algorithm with analytic gradients. The resulting estimator scales dramatically better with the number of products and is naturally suited for parallel and multithreaded implementation. In the inner loop, outside-option probabilities are treated as fixed objects while a pseudo-GMM criterion is minimized with respect to the structural parameters, substantially reducing computational cost. Monte Carlo simulations and an empirical application show that our method is significantly faster than the fastest existing alternatives, with efficiency gains that grow more than proportionally in the number of products. We provide MATLAB and Julia code to facilitate implementation.
Why we recommend this paper
Due to your Interest in Demand

This research utilizes a GMM estimator for demand modeling, a core technique within supply chain analysis and pricing. The algorithm’s focus on differentiated products directly addresses a key element of the user’s interests.
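The closed-form market-share inversion the abstract alludes to can be seen in its simplest instance, the plain logit model (Berry, 1994). This is an illustrative special case, not the paper's full consumer-level construction:

```latex
\[
s_j = \frac{\exp(\delta_j)}{1 + \sum_k \exp(\delta_k)}
\quad\Longrightarrow\quad
\delta_j = \ln s_j - \ln s_0 ,
\]
```

where s_0 is the outside-option share. With the outside-option probability held fixed, each mean utility δ_j depends only on its own share s_j, so the inversion is separable across products, which is exactly the property the nested pseudo-GMM approach exploits for scalability.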
University of Southern California
Paper visualization
AI Insights
  • The researchers emphasize the importance of incorporating diverse human judgments, cross-cultural perspectives, and participatory evaluation to ensure that AI systems align with human values. (ML: 0.99)
  • The researchers found that LLMs tend to homogenize values, favoring certain perspectives over others, and often mirroring human biases. (ML: 0.99)
  • The study examines the value preferences of large language models (LLMs) by analyzing their responses to a dataset of 100,000 choice dilemmas. (ML: 0.99)
  • The study highlights concerns about whose values are amplified or marginalized in AI value preferences, emphasizing the need for diverse human judgments, cross-cultural perspectives, and participatory evaluation when designing or deploying advice-giving AI systems. (ML: 0.99)
  • The study's findings have implications for the development and deployment of LLMs, highlighting the need for more nuanced understanding of value preferences and their potential impact on society. (ML: 0.99)
  • LLMs: Large Language Models. (ML: 0.99)
  • Value homogenization: the tendency of LLMs to favor certain values over others, often mirroring human biases. (ML: 0.99)
  • The study relies on a limited dataset of 100,000 choice dilemmas, which may not be representative of all possible scenarios. (ML: 0.98)
  • Participatory evaluation: Involving diverse stakeholders and experts in the design and deployment of AI systems to ensure they align with human values. (ML: 0.98)
  • The researchers acknowledge that their findings may not generalize to broader model families and prompting conditions. (ML: 0.97)
Abstract
People increasingly seek advice online from both human peers and large language model (LLM)-based chatbots. Such advice rarely involves identifying a single correct answer; instead, it typically requires navigating trade-offs among competing values. We aim to characterize how LLMs navigate value trade-offs across different advice-seeking contexts. First, we examine the value trade-off structure underlying advice seeking using a curated dataset from four advice-oriented subreddits. Using a bottom-up approach, we inductively construct a hierarchical value framework by aggregating fine-grained values extracted from individual advice options into higher-level value categories. We construct value co-occurrence networks to characterize how values co-occur within dilemmas and find substantial heterogeneity in value trade-off structures across advice-seeking contexts: a women-focused subreddit exhibits the highest network density, indicating more complex value conflicts; women's, men's, and friendship-related subreddits exhibit highly correlated value-conflict patterns centered on security-related tensions (security vs. respect/connection/commitment); by contrast, career advice forms a distinct structure where security frequently clashes with self-actualization and growth. We then evaluate LLM value preferences against these dilemmas and find that, across models and contexts, LLMs consistently prioritize values related to Exploration & Growth over Benevolence & Connection. This systemically skewed value orientation highlights a potential risk of value homogenization in AI-mediated advice, raising concerns about how such systems may shape decision-making and normative outcomes at scale.
Why we recommend this paper
Due to your Interest in Demand

This paper investigates how LLMs handle value trade-offs, a critical consideration in pricing and decision-making within supply chains. Understanding LLM behavior is essential for leveraging AI in pricing optimization.
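The value co-occurrence networks described in the abstract can be built by counting how often pairs of values appear in the same dilemma. A minimal sketch, with hypothetical value labels and data of our own choosing:

```python
from collections import Counter
from itertools import combinations

def cooccurrence_counts(dilemmas):
    """Count how often each unordered pair of values co-occurs in a dilemma."""
    pairs = Counter()
    for values in dilemmas:
        # sort so each pair has one canonical key, e.g. ("growth", "security")
        for a, b in combinations(sorted(set(values)), 2):
            pairs[(a, b)] += 1
    return pairs

# Hypothetical dilemmas, each tagged with higher-level value categories
dilemmas = [
    {"security", "growth"},
    {"security", "connection"},
    {"growth", "security"},
]
edges = cooccurrence_counts(dilemmas)
```

From these weighted edges one could then compute the network density that the abstract uses to compare advice-seeking contexts.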
NTT Research
AI Insights
  • It is mostly CPU-bound and focused on computational efficiency, scalability, and generalizability for tackling complex models with bilevel optimization. (ML: 0.83)
  • DAS: Dynamic Adaptive Sampling. (ML: 0.79)
  • IQTS focuses on knowledge about the problem structure to decompose it into smaller sub-problems, which can then be tackled using quantum optimization. (ML: 0.77)
  • ISF: Improved Solution Feasibility. (ML: 0.74)
  • It is largely I/O-bound and achieves fast convergence by keeping exploration close to the feasible domain. (ML: 0.73)
  • ISG: Initial Solution Generator. (ML: 0.68)
  • QUBO: Quadratic Unconstrained Binary Optimization problem. (ML: 0.65)
  • IQTS: Informed Quantum-Enhanced Tree Solver. (ML: 0.60)
  • HBS leverages multiple sub-solvers in a collaborative way, allowing for a combination of classical, quantum-inspired, and pure quantum optimizers. (ML: 0.54)
  • The problem of solving a Quadratic Unconstrained Binary Optimization (QUBO) problem is addressed using a hybrid quantum-classical approach. (ML: 0.52)
  • Two solvers are proposed: Informed Quantum-Enhanced Tree Solver (IQTS) and Hybrid Quantum-Classical Bilevel Solver (HBS). (ML: 0.51)
  • HBS: Hybrid Quantum-Classical Bilevel Solver. (ML: 0.47)
Abstract
A multi-objective logistics optimization problem from a real-world supply chain is formulated as a Quadratic Unconstrained Binary Optimization Problem (QUBO) that minimizes cost, emissions, and delivery time, while maintaining target distributions of supplier workshare. The model incorporates realistic constraints, including part dependencies, double sourcing, and multimodal transport. Two hybrid quantum-classical solvers are proposed: a structure-aware informed tree search (IQTS) and a modular bilevel framework (HBS), combining quantum subroutines with classical heuristics. Experimental results on IonQ's Aria-1 hardware demonstrate a methodology to map real-world logistics problems onto emerging combinatorial optimization-specialized hardware, yielding high-quality, Pareto-optimal solutions.
Why we recommend this paper
Due to your Interest in Supply Chain

The paper tackles multi-objective optimization within supply chain logistics, a complex area directly relevant to the user's interests in supply chain optimization. Utilizing QUBO provides a powerful framework for addressing these challenges.
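A QUBO encodes constraints as quadratic penalty terms so that the whole problem becomes an unconstrained minimization over binary variables. A toy sketch of the idea, with hypothetical costs and a one-hot supplier constraint (brute-forced classically; the paper instead targets quantum hardware):

```python
from itertools import product

def qubo_energy(Q, x):
    """Energy x^T Q x for binary vector x and an upper-triangular dict Q."""
    return sum(coef * x[i] * x[j] for (i, j), coef in Q.items())

def brute_force_min(Q, n):
    """Exhaustively find the lowest-energy assignment (feasible only for tiny n)."""
    return min(product((0, 1), repeat=n), key=lambda x: qubo_energy(Q, x))

# Hypothetical: pick exactly one of two suppliers (costs 3 and 5).
# Constraint penalty P*(x0 + x1 - 1)^2 expands to P*(2*x0*x1 - x0 - x1) + P;
# the constant P is dropped since it does not affect the argmin.
P = 10
Q = {
    (0, 0): 3 - P,   # cost of supplier 0 plus diagonal penalty term
    (1, 1): 5 - P,   # cost of supplier 1 plus diagonal penalty term
    (0, 1): 2 * P,   # cross penalty term discouraging picking both
}
best = brute_force_min(Q, 2)
```

Choosing the penalty weight P large enough relative to the objective coefficients is what forces the minimizer to respect the original constraint.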
Maastricht University
AI Insights
  • The text also discusses the relationship between groundedness and maximization of complete and transitive preference relations. (ML: 0.97)
  • Some of the key concepts explored include consistency, monotonicity, and weak axiom of revealed preference (WARP). (ML: 0.97)
  • The results have implications for understanding rationalizability and groundedness in choice theory. (ML: 0.95)
  • GAIC: Grounded Axiom of Revealed Preference. (ML: 0.92)
  • A choice function c is said to satisfy GMAIC if it maximizes a complete and transitive preference relation over non-empty subsets of X. (ML: 0.91)
  • Groundedness: A choice function c satisfies groundedness if for all x ∈ X, there exists a set S ⊆ X \ {x} such that I(S) = ∅. (ML: 0.89)
  • GMAIC: Grounded Maximizing Axiom of Choice. (ML: 0.89)
  • The provided text provides a comprehensive proof of various theorems and propositions related to choice theory. (ML: 0.89)
  • The proofs cover topics such as injectivity, surjectivity, and double union closure of interpretation functions. (ML: 0.88)
  • A choice function c is said to satisfy GAIC if it satisfies groundedness and the corresponding interpretation I satisfies consistency, monotonicity, and WARP. (ML: 0.88)
  • The proofs demonstrate the relationship between different axioms and properties of choice functions. (ML: 0.86)
  • The provided text appears to be a proof of various theorems and propositions related to choice theory, specifically in the context of rationalizability and groundedness. (ML: 0.86)
Abstract
This paper proposes a model of choice via agentic artificial intelligence (AI). A key feature is that the AI may misinterpret a menu before recommending what to choose. A single acyclicity condition guarantees that there is a monotonic interpretation and a strict preference relation that together rationalize the AI's recommendations. Since this preference is in general not unique, there is no safeguard against it misaligning with that of a decision maker. What enables the verification of such AI alignment is interpretations satisfying double monotonicity. Indeed, double monotonicity ensures full identifiability and internal consistency. But, an additional idempotence property is required to guarantee that recommendations are fully rational and remain grounded within the original feasible set.
Why we recommend this paper
Due to your Interest in AI for Supply Chain

This paper proposes a model of choice involving AI, exploring potential misinterpretations and preference relations – a fundamental area for understanding AI’s role in decision-making processes, particularly in pricing scenarios.
Chinese University of Hong Kong
AI Insights
  • Poisson regression: A type of generalized linear model used for analyzing count data. (ML: 0.99)
  • The paper highlights the importance of considering fairness in insurance pricing, particularly in long-term insurance products where age is an important factor. (ML: 0.98)
  • The paper also presents a case study using data from the Health and Retirement Study (HRS) to demonstrate the application of the fair pricing framework. (ML: 0.97)
  • Fairness: The concept of ensuring that individuals or groups are treated equally and without bias in decision-making processes, such as insurance pricing. (ML: 0.97)
  • The paper presents a fair pricing framework for long-term insurance products, which involves reformulating the multi-state modeling problem as a set of Poisson regression problems and applying fair pricing methods to each constituent problem. (ML: 0.96)
  • The authors argue that fair pricing methods can help to reduce disparities in access to insurance coverage and promote more equitable outcomes. (ML: 0.96)
  • The pre-processing approach modifies covariate values to ensure independence between the outcome variable and the sensitive variable, while the in-processing approach uses an adversarial learning procedure to obtain a predictor that is independent of the sensitive variable. (ML: 0.96)
  • The fair pricing framework presented in the paper provides a practical and effective way to ensure fairness in long-term insurance products. (ML: 0.96)
  • Adversarial debiasing: A machine learning approach that trains a predictor jointly with an adversary that tries to recover the sensitive attribute, penalizing the predictor whenever the adversary succeeds. (ML: 0.96)
  • The results show that the adjusted prices are more equitable across racial groups, with a reduced gap between the Black/African American and White/Caucasian groups. (ML: 0.96)
  • The framework is applied using two illustrations: pre-processing via optimal transport and in-processing via adversarial debiasing. (ML: 0.94)
  • Multi-state modeling: A statistical approach for modeling complex systems with multiple states or stages. (ML: 0.87)
  • Optimal transport: A mathematical framework for finding the most efficient way to transform one probability distribution into another while minimizing a given cost function. (ML: 0.87)
Abstract
Extant literature on fair pricing methods for actuarial contexts has primarily focused on the regression setting. While such approaches are well-suited to short-term products, it is unclear how they generalize to long-term products, whose pricing essentially relies on estimating transition rates in multi-state models. To address this gap, we propose a unified framework that recasts the estimation of any given multi-state transition model as a set of Poisson regression problems. This reformulation enables the direct application of existing fair pricing methods, which together constitute our proposed methodology. As an illustration, we apply the framework to a fair pricing exercise for a stylized long-term care insurance product using data from the University of Michigan Health and Retirement Study (HRS), focusing on a post-processing approach. We further explain how the framework readily accommodates pre-processing and in-processing fairness methods.
Why we recommend this paper
Due to your Interest in Pricing
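The Poisson reformulation of multi-state estimation rests on a standard equivalence for piecewise-constant transition intensities. Sketched here under that assumption, with our own notation (D, E, λ) rather than the paper's:

```latex
\[
D_{jk} \sim \mathrm{Poisson}\!\left(\lambda_{jk}\, E_j\right),
\qquad
\ln \mathbb{E}[D_{jk}] = \ln E_j + x^\top \beta_{jk},
\]
```

where D_{jk} counts observed transitions from state j to state k, E_j is the total exposure time spent in state j, and ln E_j enters the log-linear model as an offset. Each transition type then becomes its own Poisson regression, to which existing fair pricing methods can be applied directly.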
University of California, Berkeley
AI Insights
  • The composition law becomes d'Alembert's functional equation (2.5) on ℝ, and the calibration plays two distinct roles: it provides minimal regularity for classical classification results to apply, and it fixes the remaining scale parameter. (ML: 0.93)
  • Several directions remain for further study, including whether the composition law can be derived from weaker assumptions and alternative calibration conditions. (ML: 0.89)
  • The paper studies a rigidity problem for nonnegative functions F: ℝ_{>0} → ℝ_{≥0} normalized by F(1) = 0. (ML: 0.85)
  • The goal is to determine which structural assumptions force F to have a unique functional form. (ML: 0.85)
  • Section 3 establishes the minimality of the assumptions. (ML: 0.85)
  • Even under the normalization F(1) = 0, the composition law (2.6) admits non-measurable solutions. (ML: 0.77)
  • Without fixing the calibration, one gets a one-parameter family of continuous solutions F_λ(x) = cosh(λ ln x) − 1. (ML: 0.77)
  • Under the normalization F(1) = 0, the composition law (2.6) on ℝ_{>0} and the unit quadratic calibration (2.4) uniquely determine F. (ML: 0.76)
  • The normalization F(1) = 0 together with the calibration does not imply the composition law (2.6). (ML: 0.76)
  • The calibrated solution is forced onto the single branch H(t) = cosh(t), hence F = J. (ML: 0.74)
  • The composition law together with F(1) = 0 implies reciprocity F(x) = F(1/x). (ML: 0.69)
  • The resulting function is the canonical reciprocal cost J(x) = (x + 1/x)/2 − 1, equivalently J(e^t) = cosh(t) − 1 in logarithmic coordinates. (ML: 0.58)
Abstract
We study a rigidity problem for functions F: ℝ_{>0} → ℝ_{≥0} that penalize deviation of a positive ratio from equilibrium x = 1. Assuming (i) normalization F(1) = 0, (ii) a d'Alembert-type composition law on ℝ_{>0}, and (iii) a single quadratic calibration at the identity (in logarithmic coordinates), we prove that F is uniquely determined. The unique solution is called the canonical reciprocal cost, namely the difference between the arithmetic and geometric means of x and its reciprocal. Our proof uses the logarithmic coordinates H(t) = F(e^t) + 1, where the composition law becomes d'Alembert's functional equation on ℝ. The calibration provides the minimal regularity needed to invoke the classical classification of continuous solutions and fixes the remaining scaling freedom, selecting the hyperbolic-cosine branch. We also establish necessity of each assumption: without calibration the composition law admits a continuous one-parameter family, without the composition law the calibration does not determine the global form, and without regularity the composition law admits pathological non-measurable solutions. Finally, we establish a stability estimate for approximate solutions under bounded defect and characterize some properties of the canonical cost.
Why we recommend this paper
Due to your Interest in Pricing
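The hyperbolic-cosine branch can be checked directly. A verification sketch (not the paper's full proof): with H(t) = cosh t, the addition formulas for cosh give d'Alembert's equation, and transforming back yields the canonical cost,

```latex
\[
\cosh(s+t) + \cosh(s-t) = 2\cosh s\,\cosh t,
\qquad
J(x) = \cosh(\ln x) - 1 = \tfrac{1}{2}\left(x + x^{-1}\right) - 1,
\]
```

so J(1) = 0 and J(x) = J(1/x), matching the normalization and reciprocity properties of the canonical reciprocal cost.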

Interests not found

We did not find any papers that match the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Supply
  • AI for Pricing
  • AI for Pricing Optimization
  • AI for Supply Chain Optimization
You can edit or add more interests any time.