Papers from 8 to 12 September 2025

Here are your personalized paper recommendations, sorted by relevance.
Pricing
University of California
Abstract
Regulators and utilities have been exploring hourly retail electricity pricing, with several existing programs providing day-ahead hourly pricing schedules. At the same time, customers are deploying distributed energy resources and smart energy management systems that have significant flexibility and can optimally follow price signals. In aggregate, these optimally controlled loads can create congestion management issues for distribution system operators (DSOs). In this paper, we describe a new linear pricing mechanism for day-ahead retail electricity pricing that provides a signal for customers to follow to mitigate over-consumption while still consuming energy at hours that are preferential for system performance. We show that by broadcasting a linear price designed for price-signal control of cost-optimizing loads, we can shape customer load profiles to provide congestion management without the need for bi-directional communication or customer bidding programs.
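As a rough illustration of the price-following behavior the abstract targets, the sketch below (illustrative, not the paper's mechanism) solves the scheduling problem a cost-optimizing load faces when it receives a broadcast day-ahead hourly price; the price profile, energy requirement, and power cap are made-up placeholders.

```python
# Minimal sketch: a flexible load scheduling its consumption against a
# broadcast day-ahead hourly price. All numbers are illustrative placeholders.
import numpy as np
from scipy.optimize import linprog

hours = 24
rng = np.random.default_rng(0)
price = 0.15 + 0.10 * np.sin(np.linspace(0, 2 * np.pi, hours)) + 0.02 * rng.random(hours)  # $/kWh

energy_required = 30.0   # kWh the device must consume over the day
max_power = 5.0          # kW cap per hour (equals kWh per hour for 1-hour steps)

# Decision variable x[t] = energy drawn in hour t (kWh).
# Minimize total cost  sum_t price[t] * x[t]
# subject to           sum_t x[t] = energy_required,  0 <= x[t] <= max_power
res = linprog(
    c=price,
    A_eq=np.ones((1, hours)),
    b_eq=[energy_required],
    bounds=[(0.0, max_power)] * hours,
    method="highs",
)

schedule = res.x
print("total cost: $%.2f" % (price @ schedule))
print("hours used:", np.nonzero(schedule > 1e-6)[0])
```

Because the objective is linear in consumption, the optimal schedule piles demand into the cheapest hours up to the power cap, which is exactly the kind of aggregate over-consumption the paper's pricing mechanism is designed to mitigate.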
Abstract
For premium consumer products, pricing strategy is not about a single number, but about understanding the perceived monetary value of the features that justify a higher cost. This paper proposes a robust methodology to deconstruct a product's price into the tangible value of its constituent parts. We employ Bayesian Hierarchical Conjoint Analysis, a sophisticated statistical technique, to solve this high-stakes business problem using the Apple iPhone as a universally recognizable case study. We first simulate a realistic choice based conjoint survey where consumers choose between different hypothetical iPhone configurations. We then develop a Bayesian Hierarchical Logit Model to infer consumer preferences from this choice data. The core innovation of our model is its ability to directly estimate the Willingness-to-Pay (WTP) in dollars for specific feature upgrades, such as a "Pro" camera system or increased storage. Our results demonstrate that the model successfully recovers the true, underlying feature valuations from noisy data, providing not just a point estimate but a full posterior probability distribution for the dollar value of each feature. This work provides a powerful, practical framework for data-driven product design and pricing strategy, enabling businesses to make more intelligent decisions about which features to build and how to price them.
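To make the WTP mechanics concrete, here is a deliberately simplified, non-hierarchical stand-in for the paper's model: simulate choice-based conjoint data, fit a plain conditional logit by maximum likelihood, and convert part-worths into dollars as WTP = beta_feature / (-beta_price). The feature set, price range, and "true" coefficients are invented for illustration.

```python
# Simplified, non-hierarchical stand-in for the paper's Bayesian hierarchical
# conjoint model. All feature levels and coefficients are illustrative.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)
n_tasks, n_alts = 2000, 3

# Attributes per alternative: [pro_camera (0/1), extra_storage (0/1), price in $100s]
X = np.stack([
    rng.integers(0, 2, (n_tasks, n_alts)),        # pro camera
    rng.integers(0, 2, (n_tasks, n_alts)),        # extra storage
    rng.uniform(8.0, 15.0, (n_tasks, n_alts)),    # price, $800-$1500
], axis=-1)

beta_true = np.array([1.2, 0.6, -0.4])            # "ground truth" used to simulate choices
util = X @ beta_true + rng.gumbel(size=(n_tasks, n_alts))
choice = util.argmax(axis=1)

def neg_loglik(beta):
    v = X @ beta
    v = v - v.max(axis=1, keepdims=True)          # numerical stability
    return -(v[np.arange(n_tasks), choice] - np.log(np.exp(v).sum(axis=1))).sum()

beta_hat = minimize(neg_loglik, np.zeros(3), method="BFGS").x
wtp = beta_hat[:2] / (-beta_hat[2]) * 100         # dollars, since price is in $100s
print("estimated WTP: pro camera ~ $%.0f, extra storage ~ $%.0f" % tuple(wtp))
```

The paper's Bayesian hierarchical version goes further by giving each respondent their own coefficients and returning a full posterior distribution over WTP rather than a single point estimate.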
AI for Supply Chain
UPF, CREi, BSE & CEPRL
Abstract
Despite growing policy interest, the determinants of supply chain resilience are still not well understood. We propose a new theory of supply chain formation with compatibility frictions: only compatible inputs can be used in final good production. Intermediate producers choose the degree of specialization of their goods, trading off higher productivity against a lower share of compatible final producers. We model supply chains as complex production processes in which multiple complementary inputs must be sourced for final production to take place. Specialization choices, production complexity, and search frictions jointly determine supply chain resilience. Relative to the efficient allocation, the equilibrium is characterized by over-specialization due to a novel network externality arising from the interplay between frictional markets, endogenous specialization, and complex production. Over-specialization makes supply chains more productive in normal times but less resilient to disruptions than socially desirable. We show how a targeted transaction subsidy can decentralize efficient resilience in supply chains, and examine the implications of setting compatibility standards.
AI Insights
  • Efficient equilibrium arises only when N = 1/µ, linking producer count to search share.
  • Downstream cost K adds a µ N ln(1/f) term to the planner’s FOC, exposing a specialization‑search trade‑off.
  • Supply chains are under‑resilient when LS > 1 − µ, a threshold driven by network externalities.
  • With extra final‑producer costs, appropriability and business‑stealing externalities misalign, making specialization inefficient.
  • A small transaction subsidy decentralizes resilience, while compatibility standards can either aid or hinder depending on costs.
  • “Under‑robustness” is defined as E[x−κ]/E[A−κ] < (1−fN)µ, flagging low robustness investment.
  • For background, read “The Theory of Industrial Organization” and the authors’ dynamic entry‑exit series.
Imperial College London
Abstract
In supply chain management, decision-making often involves balancing multiple conflicting objectives, such as cost reduction, service level improvement, and environmental sustainability. Traditional multi-objective optimization methods, such as linear programming and evolutionary algorithms, struggle to adapt in real-time to the dynamic nature of supply chains. In this paper, we propose an approach that combines Reinforcement Learning (RL) and Multi-Objective Evolutionary Algorithms (MOEAs) to address these challenges for dynamic multi-objective optimization under uncertainty. Our method leverages MOEAs to search the parameter space of policy neural networks, generating a Pareto front of policies. This provides decision-makers with a diverse population of policies that can be dynamically switched based on the current system objectives, ensuring flexibility and adaptability in real-time decision-making. We also introduce Conditional Value-at-Risk (CVaR) to incorporate risk-sensitive decision-making, enhancing resilience in uncertain environments. We demonstrate the effectiveness of our approach through case studies, showcasing its ability to respond to supply chain dynamics and outperforming state-of-the-art methods in an inventory management case study. The proposed strategy not only improves decision-making efficiency but also offers a more robust framework for managing uncertainty and optimizing performance in supply chains.
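A toy version of the evolutionary policy search can be sketched as follows; the two-parameter ordering policy stands in for the paper's policy neural networks, and the inventory dynamics, cost weights, and CVaR level are illustrative placeholders rather than the paper's benchmark.

```python
# Toy sketch: search a small policy's parameter space with an evolutionary loop
# and keep a Pareto front over two objectives (mean cost, CVaR of cost).
import numpy as np

rng = np.random.default_rng(2)

def simulate(theta, episodes=20, horizon=52):
    """Base-stock-style policy: order up to theta[0] whenever inventory < theta[1]."""
    costs = []
    for _ in range(episodes):
        inv, cost = 0.0, 0.0
        for _ in range(horizon):
            order = max(0.0, theta[0] - inv) if inv < theta[1] else 0.0
            inv += order
            demand = rng.poisson(20)
            sales = min(inv, demand)
            inv -= sales
            cost += 2.0 * order + 1.0 * inv + 10.0 * (demand - sales)  # purchase + holding + stockout
        costs.append(cost)
    costs = np.array(costs)
    cvar = costs[costs >= np.quantile(costs, 0.9)].mean()  # CVaR_0.9: mean of the worst 10% of episodes
    return costs.mean(), cvar

def dominates(a, b):
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

# Evolutionary search: mutate parameters, keep the non-dominated set.
population = [rng.uniform(0, 60, size=2) for _ in range(20)]
for _ in range(20):
    population += [np.clip(p + rng.normal(0, 5, size=2), 0, 120) for p in population]
    scored = [(tuple(simulate(p)), p) for p in population]
    front = [(f, p) for f, p in scored if not any(dominates(g, f) for g, _ in scored if g != f)]
    population = [p for _, p in front][:20]

for f, p in sorted(front, key=lambda fp: fp[0])[:5]:
    print("mean cost %.0f | CVaR %.0f | params %s" % (f[0], f[1], np.round(p, 1)))
```

The surviving non-dominated set plays the role of the Pareto front of policies that decision-makers can switch between as system objectives shift.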
AI for Pricing
Google DeepMind, Harvard
Abstract
Coordination tasks traditionally performed by humans are increasingly being delegated to autonomous agents. As this pattern progresses, it becomes critical to evaluate not only these agents' performance but also the processes through which they negotiate in dynamic, multi-agent environments. Furthermore, different agents exhibit distinct advantages: traditional statistical agents, such as Bayesian models, may excel under well-specified conditions, whereas large language models (LLMs) can generalize across contexts. In this work, we compare humans (N = 216), LLMs (GPT-4o, Gemini 1.5 Pro), and Bayesian agents in a dynamic negotiation setting that enables direct, identical-condition comparisons across populations, capturing both outcomes and behavioral dynamics. Bayesian agents extract the highest surplus through aggressive optimization, at the cost of frequent trade rejections. Humans and LLMs can achieve similar overall surplus, but through distinct behaviors: LLMs favor conservative, concessionary trades with few rejections, while humans employ more strategic, risk-taking, and fairness-oriented behaviors. Thus, we find that performance parity -- a common benchmark in agent evaluation -- can conceal fundamental differences in process and alignment, which are critical for practical deployment in real-world coordination tasks.
AI Insights
  • LLMs' surplus jumps with richer game‑state and opponent data, showing a data‑driven learning curve.
  • LLMs prioritize short‑term gains, often hurting overall trade efficiency.
  • Bayesian agents maximize surplus aggressively but reject many trades, exposing a cost of pure efficiency.
  • Adding game‑theoretic modules to LLMs could curb myopia and improve alignment.
  • Surplus Value: net chips received minus chips given up in a trade.
  • Incentive Compatible processes make agents act on true preferences, a feature missing in current LLMs.
  • Key references: Aumann & Maschler (1995) on repeated games; Myerson (1978) on Nash bargaining refinements.
Shanghai University of F
Abstract
With the rapid advancement of large language models (LLMs), Multi-agent Systems (MAS) have achieved significant progress in various application scenarios. However, substantial challenges remain in designing versatile, robust, and efficient platforms for agent deployment. To address these limitations, we propose LightAgent, a lightweight yet powerful agentic framework, effectively resolving the trade-off between flexibility and simplicity found in existing frameworks. LightAgent integrates core functionalities such as Memory (mem0), Tools, and Tree of Thought (ToT), while maintaining an extremely lightweight structure. As a fully open-source solution, it seamlessly integrates with mainstream chat platforms, enabling developers to easily build self-learning agents. We have released LightAgent at https://github.com/wxai-space/LightAgent
AI Insights
  • LightAgent’s swarm design lets dozens of agents coordinate via one LightSwarm instance, boosting throughput.
  • Each agent carries a distinct instruction set, enabling domain‑specific roles such as code synthesis or data retrieval.
  • A built‑in text UI turns user prompts into executable code snippets, streamlining rapid prototyping.
  • Tree‑of‑Thought logic lets agents iteratively refine plans, cutting hallucinations and improving accuracy.
  • The lightweight core keeps memory usage under 200 MB on a single GPU while still supporting custom tool plugins.
  • Advanced features can be daunting for beginners, and highly specialized tasks may still need manual tuning.
  • LightAgent has been applied to robotics, finance, and healthcare, proving its versatility beyond chat‑bot demos.
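The Tree-of-Thought component mentioned in the abstract and insights can be sketched framework-agnostically; the snippet below is not LightAgent's actual API (see the linked repository for that), and propose() and score() are stubs standing in for LLM calls.

```python
# Framework-agnostic sketch of a Tree-of-Thought style search, in the spirit of
# the ToT component described above. NOT LightAgent's API; propose()/score()
# are placeholders for real LLM calls.
import heapq

def propose(state, k=3):
    """Stub: ask an LLM for k candidate next 'thoughts' given the partial plan."""
    return [state + [f"step-{len(state)}-option-{i}"] for i in range(k)]

def score(state):
    """Stub: ask an LLM (or heuristic) how promising a partial plan looks; higher is better."""
    return -len(state) + hash(tuple(state)) % 7 * 0.1   # placeholder heuristic

def tree_of_thought(initial, max_depth=4, beam=2):
    frontier = [(-score(initial), initial)]
    for _ in range(max_depth):
        candidates = []
        for _, state in frontier:
            for child in propose(state):
                heapq.heappush(candidates, (-score(child), child))
        frontier = heapq.nsmallest(beam, candidates)     # keep the `beam` best-scoring plans
    return frontier[0][1]

print(tree_of_thought(["task: draft a data-retrieval plan"]))
```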
AI for Pricing Optimization
UNSW Sydney, NSW 2052, Australia
Abstract
This paper introduces the Actuarial Neural Additive Model, an inherently interpretable deep learning model for general insurance pricing that offers fully transparent and interpretable results while retaining the strong predictive power of neural networks. This model assigns a dedicated neural network (or subnetwork) to each individual covariate and pairwise interaction term to independently learn its impact on the modeled output while implementing various architectural constraints to allow for essential interpretability (e.g. sparsity) and practical requirements (e.g. smoothness, monotonicity) in insurance applications. The development of our model is grounded in a solid foundation, where we establish a concrete definition of interpretability within the insurance context, complemented by a rigorous mathematical framework. Comparisons in terms of prediction accuracy are made with traditional actuarial and state-of-the-art machine learning methods using both synthetic and real insurance datasets. The results show that the proposed model outperforms other methods in most cases while offering complete transparency in its internal logic, underscoring the strong interpretability and predictive capability.
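A bare-bones version of the "one subnetwork per covariate" structure is sketched below; the architecture sizes, synthetic data, and Poisson-style loss are illustrative, and the paper's model adds pairwise-interaction subnetworks plus sparsity, smoothness, and monotonicity constraints on top of this skeleton.

```python
# Minimal sketch of a neural additive model: one small subnetwork per covariate,
# contributions summed. Sizes, data, and loss are illustrative only.
import torch
import torch.nn as nn

class FeatureNet(nn.Module):
    """Learns the contribution f_j(x_j) of a single covariate."""
    def __init__(self, hidden=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(1, hidden), nn.ReLU(), nn.Linear(hidden, 1))
    def forward(self, x):
        return self.net(x)

class NeuralAdditiveModel(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.feature_nets = nn.ModuleList(FeatureNet() for _ in range(n_features))
        self.bias = nn.Parameter(torch.zeros(1))
    def forward(self, x):
        # Sum of per-feature contributions -> fully decomposable, hence interpretable.
        contributions = [net(x[:, j:j+1]) for j, net in enumerate(self.feature_nets)]
        return torch.stack(contributions, dim=0).sum(dim=0) + self.bias

# Synthetic claim-frequency-style data (placeholder covariates and effects).
torch.manual_seed(0)
X = torch.rand(2048, 2)
rate = torch.exp(0.5 * torch.sin(3 * X[:, :1]) - 0.3 * X[:, 1:2])
y = torch.poisson(rate)

model = NeuralAdditiveModel(n_features=2)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for epoch in range(300):
    opt.zero_grad()
    log_rate = model(X)
    loss = (torch.exp(log_rate) - y * log_rate).mean()  # Poisson negative log-likelihood (up to a constant)
    loss.backward()
    opt.step()

# Each fitted subnetwork is a one-dimensional effect curve that can be read off directly.
print("f_0(0.5) =", model.feature_nets[0](torch.tensor([[0.5]])).item())
```

Interpretability comes from the additive decomposition: each subnetwork can be plotted as a one-dimensional effect curve, much like a GAM component, while the network still learns flexible shapes.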
Abstract
DemandLens demonstrates an innovative Prophet-based forecasting model for the mattress-in-a-box industry, incorporating COVID-19 metrics and SKU-specific hyperparameter optimization. The industry has seen significant growth in e-commerce players in recent years, whose business model relies largely on outsourcing mattress manufacturing and the related logistics and supply chain operations while focusing on marketing and driving conversions through direct-to-consumer sales channels. Within the United States, only a limited number of mattress contract manufacturers are available, so it is important that they manage their raw materials, supply chain, and inventory intelligently in order to serve as many mattress brands as possible. Our approach addresses the critical need for accurate sales forecasting in an industry that is heavily dependent on third-party contract manufacturing; accurate forecasts help contract manufacturers prepare, avoid bottleneck scenarios, and source raw materials at optimal rates. The model demonstrates strong predictive capability through SKU-specific hyperparameter optimization, offering contract manufacturers and mattress brands a reliable tool to streamline supply chain operations.
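In the spirit of the abstract, a per-SKU Prophet fit with a COVID-era regressor and a small hyperparameter grid might look like the sketch below; the column name covid_stringency, the grid values, and the holdout length are placeholders, not the paper's actual setup.

```python
# Sketch of a per-SKU Prophet fit with a COVID-era regressor and a small
# hyperparameter grid. Column names and grid values are placeholders.
import itertools
import pandas as pd
from prophet import Prophet

def fit_sku(df_sku: pd.DataFrame) -> Prophet:
    """df_sku needs columns: ds (date), y (units sold), covid_stringency (regressor)."""
    grid = {
        "changepoint_prior_scale": [0.01, 0.1, 0.5],
        "seasonality_mode": ["additive", "multiplicative"],
    }
    best_model, best_mae = None, float("inf")
    train, valid = df_sku.iloc[:-8], df_sku.iloc[-8:]   # hold out the last 8 periods
    for cps, mode in itertools.product(*grid.values()):
        m = Prophet(changepoint_prior_scale=cps, seasonality_mode=mode)
        m.add_regressor("covid_stringency")
        m.fit(train)
        pred = m.predict(valid.drop(columns="y"))
        mae = abs(pred["yhat"].values - valid["y"].values).mean()
        if mae < best_mae:
            best_model, best_mae = m, mae
    return best_model

# Usage: one independently tuned model per SKU.
# models = {sku: fit_sku(df) for sku, df in sales.groupby("sku")}
```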
Supply Chain
North Carolina State University
Abstract
Software supply chain attacks have increased exponentially since 2020. The primary attack vectors for supply chain attacks are: (1) software components; (2) the build infrastructure; and (3) humans (i.e., software practitioners). Software supply chain risk management frameworks provide a list of tasks that an organization can adopt to reduce software supply chain risk. Exhaustively adopting all the tasks in these frameworks is infeasible, necessitating prioritized adoption. Software organizations can benefit from being guided in this prioritization by learning which tasks other teams have adopted. The goal of this study is to aid software development organizations in understanding the adoption of security tasks that reduce software supply chain risk, through an interview study of software practitioners engaged in software supply chain risk management efforts. Interviews were conducted with 61 practitioners at nine software development organizations that have focused efforts on reducing software supply chain risk. The results indicate that the most widely adopted tasks had already been implemented before the focus on software supply chain security, so their implementation in organizations is more mature. The tasks that mitigate the newer attack vectors, through software components and the build infrastructure, are in the early stages of adoption; adoption of these tasks should be prioritized.
Demand
Abstract
This paper presents the design, implementation, and validation of a smart, low-cost Energy Management System (EMS) and Demand Charge Management (DCM) prototype, developed as part of an undergraduate senior design project. The system serves as both a practical solution for reducing electricity costs and a pedagogical tool for teaching real-time energy control concepts in power and embedded systems courses. Unlike conventional EMS/DCM solutions that rely on high-cost commercial hardware or purely theoretical models, the proposed system integrates grid power, lithium-iron phosphate (LiFePO4) battery storage, and real-time control into a unified, scalable platform constructed at a fraction of the cost, approximately $1,800 compared to over $16,000 for leading commercial options. The controller dynamically optimizes energy usage by switching between grid and battery sources based on real-time measurements of electricity prices, load power, and battery state of charge (SoC). This enables peak shaving, energy arbitrage, and backup power functionality, thereby enhancing cost efficiency and grid resilience for both residential and small commercial users. The architecture features a modular three-layer design comprising a sensing layer for electrical data acquisition, a control layer executing Python-based logic on a Raspberry Pi, and an actuator layer for seamless energy switching. Data is communicated via MQTT and visualized through the Blynk IoT platform, providing an intuitive and remotely accessible user interface. The prototype's effectiveness was validated through real-world testing, confirming its capability to reduce demand charges and ensure reliable energy delivery under varying operational conditions. Its affordability, open-source control logic, and educational versatility make it an ideal candidate for both deployment and instructional use.
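The control-layer switching logic can be sketched in a few lines of Python; the thresholds and the read_measurements() stub below are placeholders, not the prototype's actual configuration.

```python
# Simplified sketch of the grid/battery switching logic described in the
# abstract. Thresholds and read_measurements() are placeholders; the real
# prototype reads sensors on a Raspberry Pi, drives relays in the actuator
# layer, and publishes telemetry over MQTT to a Blynk dashboard.
import time

PRICE_HIGH = 0.30              # $/kWh above which battery discharge is preferred
SOC_MIN, SOC_MAX = 0.20, 0.95  # keep the LiFePO4 pack inside a safe state-of-charge band

def read_measurements():
    """Stub for the sensing layer: real-time price ($/kWh), load (kW), battery SoC (0-1)."""
    return {"price": 0.32, "load_kw": 1.8, "soc": 0.70}

def choose_source(price, load_kw, soc):
    if price >= PRICE_HIGH and soc > SOC_MIN:
        return "battery"          # peak shaving / arbitrage: discharge during expensive hours
    if price < PRICE_HIGH and soc < SOC_MAX:
        return "grid_and_charge"  # cheap hours: serve the load from the grid and recharge
    return "grid"

if __name__ == "__main__":
    for _ in range(3):            # the prototype runs this loop continuously
        m = read_measurements()
        decision = choose_source(m["price"], m["load_kw"], m["soc"])
        print(m, "->", decision)  # the real controller switches relays and publishes the decision here
        time.sleep(1)
```

In the real system this loop runs on the Raspberry Pi, with the sensing layer supplying measurements, the actuator layer switching between grid and battery, and the decision published over MQTT for visualization in Blynk.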

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Supply
  • AI for Supply Chain Optimization
You can edit or add more interests any time.
