Hi j34nc4rl0+marketplace_supply,

Here are our personalized paper recommendations for you, sorted by relevance.
AI for Pricing Optimization
Abstract
Passive liquidity providers (LPs) in automated market makers (AMMs) face losses due to adverse selection (LVR), which static trading fees often fail to offset in practice. We study the key determinants of LP profitability in a dynamic reduced-form model where an AMM operates in parallel with a centralized exchange (CEX), traders route their orders optimally to the venue offering the better price, and arbitrageurs exploit price discrepancies. Using large-scale simulations and real market data, we analyze how LP profits vary with market conditions such as volatility and trading volume, and characterize the optimal AMM fee as a function of these conditions. We highlight the mechanisms driving these relationships through extensive comparative statics, and confirm the model's relevance through market data calibration. A key trade-off emerges: fees must be low enough to attract volume, yet high enough to earn sufficient revenues and mitigate arbitrage losses. We find that under normal market conditions, the optimal AMM fee is competitive with the trading cost on the CEX and remarkably stable, whereas in periods of very high volatility, a high fee protects passive LPs from severe losses. These findings suggest that a threshold-type dynamic fee schedule is robust to changing market conditions and improves LP outcomes.
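The threshold-type fee schedule lends itself to a very small illustration. Below is a minimal sketch of such a rule, assuming a low fee competitive with CEX trading costs under normal conditions and a protective high fee above a volatility threshold; the function name and all numeric levels are illustrative assumptions, not values from the paper.

def amm_fee(realized_volatility: float,
            base_fee: float = 0.0005,    # assumed: competitive with CEX trading cost
            crisis_fee: float = 0.003,   # assumed: protective fee for turbulent markets
            vol_threshold: float = 0.05) -> float:
    """Threshold-type dynamic fee: stable and low under normal conditions,
    switching to a high protective fee when volatility spikes."""
    return crisis_fee if realized_volatility > vol_threshold else base_fee

For example, amm_fee(0.01) returns the low base fee, while amm_fee(0.12) switches to the protective fee, matching the abstract's finding that a high fee helps passive LPs only in very volatile periods.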
Abstract
Auto-bidding is extensively applied in advertising systems, serving a multitude of advertisers. Generative bidding is gradually gaining traction due to its robust planning capabilities and generalizability. In contrast to traditional reinforcement learning-based bidding, generative bidding does not rely on the Markov Decision Process (MDP), exhibiting superior planning capabilities in long-horizon scenarios. Conditional diffusion modeling approaches have demonstrated significant potential in the realm of auto-bidding. However, relying solely on return as the optimality condition is too weak to guarantee the generation of genuinely optimal decision sequences, as it lacks personalized structural information. Moreover, diffusion models' t-step autoregressive generation mechanism inherently carries timeliness risks. To address these issues, we propose a novel conditional diffusion modeling method based on expert trajectory guidance, combined with a skip-step sampling strategy to enhance generation efficiency. We have validated the effectiveness of this approach through extensive offline experiments and achieved statistically significant results in online A/B testing, with an 11.29% increase in conversions and a 12.35% increase in revenue compared with the baseline.
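Since the abstract leans on skip-step sampling to cut generation latency, here is a minimal sketch of one common way to realize it: strided, DDIM-style deterministic sampling that visits only every stride-th diffusion step. The denoiser, noise schedule, and stride below are placeholders, not the paper's actual model.

import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)  # cumulative signal-retention schedule

def denoiser(x, t):
    # Placeholder for a trained noise-prediction network.
    return np.zeros_like(x)

def skip_step_sample(shape, stride=50, rng=np.random.default_rng(0)):
    """Visit only every `stride`-th timestep, shrinking the t-step
    autoregressive loop from T iterations to T // stride."""
    x = rng.standard_normal(shape)
    ts = list(range(T - 1, -1, -stride))
    for t, t_prev in zip(ts, ts[1:] + [-1]):
        eps = denoiser(x, t)
        a_t = alphas_bar[t]
        a_prev = alphas_bar[t_prev] if t_prev >= 0 else 1.0
        x0_hat = (x - np.sqrt(1.0 - a_t) * eps) / np.sqrt(a_t)  # predicted clean sample
        x = np.sqrt(a_prev) * x0_hat + np.sqrt(1.0 - a_prev) * eps
    return x

With stride=50 the loop runs 20 times instead of 1000, which is the timeliness gain the abstract is after.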
Supply Chain
Abstract
In supply chain networks, firms dynamically form or dissolve partnerships to adapt to market fluctuations, posing a challenge for predicting future supply relationships. We model the occurrence of supply edges (firm i to firm j) as a non-homogeneous Poisson process (NHPP), using historical event counts to estimate the Poisson intensity function up to time t. However, forecasting future intensities is hindered by the limitations of historical data alone. To overcome this, we propose a novel Graph Double Exponential Smoothing (GDES) model, which integrates graph neural networks (GNNs) with a nonparametric double exponential smoothing approach to predict the probability of future supply edge formations. Recognizing the interdependent economic dynamics between upstream and downstream firms, we assume that the Poisson intensity functions of supply edges are correlated, aligning with the non-homogeneous nature of the process. Our model is interpretable, decomposing intensity increments into contributions from the current edge's historical data and influences from neighboring edges in the supply chain network. Evaluated on a large-scale supply chain dataset with 87,969 firms, our approach achieves an AUC of 93.84% in dynamic link prediction, demonstrating its effectiveness in capturing complex supply chain interactions for accurate forecasting.
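To make the smoothing half of the model concrete, here is a minimal sketch of the two non-graph ingredients the abstract combines: Holt's double exponential smoothing of an edge's historical event counts into a next-period intensity, and the NHPP assumption turning that intensity into an edge probability via P = 1 - exp(-lambda). The GNN term that adds contributions from neighboring edges is omitted, and the smoothing parameters are illustrative.

import math

def double_exp_smoothing(counts, alpha=0.5, beta=0.3):
    """Forecast next-period event intensity with Holt's linear method."""
    level, trend = counts[0], counts[1] - counts[0]
    for y in counts[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return max(level + trend, 0.0)  # an intensity must be non-negative

def edge_probability(counts):
    """P(at least one supply event next period) under a Poisson intensity."""
    lam = double_exp_smoothing(counts)
    return 1.0 - math.exp(-lam)

print(edge_probability([0, 1, 1, 2, 3]))  # rising event history -> probability near 1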
Abstract
Critical for policy-making and business operations, the study of global supply chains has been severely hampered by a lack of detailed data. Here we harness global firm-level transaction data covering 20m global firms and 1 billion cross-border transactions to infer key inputs for over 1200 products. Transforming this data into a directed network, we find that products are clustered into three large groups: textiles, chemicals and food, and machinery and metals. European industrial nations and China dominate critical intermediate products in the network such as metals, common components and tools, while industrial complexity is correlated with embeddedness in densely connected supply chains. To validate the network, we find structural similarities with two alternative product networks, one generated via LLM queries and the other derived from NAFTA to track product origins. We further detect linkages between products identified in manually mapped single-sector supply chains, including electric vehicle batteries and semiconductors. Finally, metrics derived from network structure capturing both forward and backward linkages are able to predict country-product diversification patterns with high accuracy.
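As a toy illustration of the network object the paper builds, the sketch below encodes "u is a key input to v" as a directed edge and uses in- and out-degree as the simplest stand-ins for the backward- and forward-linkage metrics mentioned above; the products and edges are invented for illustration.

import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("lithium", "battery_cell"), ("cobalt", "battery_cell"),
    ("battery_cell", "ev_battery_pack"), ("steel", "machine_tool"),
    ("machine_tool", "battery_cell"),
])

for product in G.nodes:
    backward = G.in_degree(product)   # how many key inputs feed this product
    forward = G.out_degree(product)   # how many products it feeds downstream
    print(f"{product}: backward={backward}, forward={forward}")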
AI for Pricing
Abstract
As AI becomes more "agentic," it faces technical and socio-legal issues it must address if it is to fulfill its promise of increased economic productivity and efficiency. This paper uses technical and legal perspectives to explain how things change when AI systems start being able to directly execute tasks on behalf of a user. We show how technical conceptions of agents track some, but not all, socio-legal conceptions of agency. That is, both computer science and the law recognize the problems of under-specification for an agent, and both disciplines have robust conceptions of how to ensure an agent does what the programmer (or, in the law, the principal) desires and no more. However, to date, computer science has under-theorized issues related to questions of loyalty and to third parties that interact with an agent, both of which are central parts of the law of agency. First, we examine the correlations between implied authority in agency law and the principle of value-alignment in AI, wherein AI systems must operate under imperfect objective specification. Second, we reveal gaps in the current computer science view of agents pertaining to the legal concepts of disclosure and loyalty, and how failure to account for them can result in unintended effects in AI e-commerce agents. In surfacing these gaps, we show a path forward for responsible AI agent development and deployment.
Abstract
The rapid growth of shops using artificial intelligence (AI) techniques has transformed digital marketing and changed how businesses reach and interact with their consumers. AI techniques are reshaping how shops and consumers interact digitally, providing a more efficient and customized experience and fostering deeper engagement and more informed decision-making. This study investigates how AI techniques affect consumer interaction and purchase decision-making with shops that use digital marketing. The partial least squares method was used to evaluate data from a survey with 300 respondents. AI techniques have a more favorable impact on purchase decision-making when consumer engagement mediates this relationship; decision-making is thus positively impacted through consumer engagement. The findings emphasize that for AI techniques to have a bigger impact on decision-making, the consumer must first interact with them. This research unveils a contemporary pathway in the field of AI-supported shop engagement and illustrates the distinct impact of AI techniques on consumer satisfaction, trust, and loyalty, revolutionizing traditional models of customer purchase decision-making and shop engagement processes. This study provides previously unavailable insight into the revolutionary potential of AI techniques in influencing customer behavior and shop relationships.
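The study itself uses PLS-SEM, which has no canonical Python implementation; as a loose stand-in, the sketch below runs a Baron-Kenny-style OLS mediation check on simulated data to illustrate the mediation claim (AI techniques -> engagement -> purchase decision). All data below is synthetic and the technique is deliberately simpler than the paper's.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300                                       # matches the survey's sample size
ai = rng.normal(size=n)                       # exposure to AI techniques
engagement = 0.6 * ai + rng.normal(size=n)    # mediator
decision = 0.5 * engagement + 0.1 * ai + rng.normal(size=n)

# If engagement mediates, the direct effect of AI shrinks once the mediator
# is controlled for.
total = sm.OLS(decision, sm.add_constant(ai)).fit()
direct = sm.OLS(decision, sm.add_constant(np.column_stack([ai, engagement]))).fit()
print("total effect of AI:", round(total.params[1], 3))
print("direct effect (engagement controlled):", round(direct.params[1], 3))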
Pricing
Abstract
We consider a resource allocation problem with agents that have additive ternary valuations for a set of indivisible items, and bound the price of envy-free up to one item (EF1) allocations. For a large number $n$ of agents, we show a lower bound of $\Omega(\sqrt{n})$, implying that the price of EF1 is no better than when the agents have general subadditive valuations. We then focus on instances with few agents and show that the price of EF1 is $12/11$ for $n=2$, and between $1.2$ and $1.256$ for $n=3$.
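For context, the price of EF1 is standardly defined as the worst-case ratio between the optimal social welfare and the best welfare achievable by an EF1 allocation (the paper may phrase this slightly differently):

$$\text{price of EF1} = \sup_{I}\; \frac{\max_{A \in \mathcal{A}(I)} \mathrm{SW}(A)}{\max_{A \in \mathrm{EF1}(I)} \mathrm{SW}(A)},$$

where $\mathcal{A}(I)$ is the set of allocations of instance $I$, $\mathrm{EF1}(I)$ its EF1 allocations, and $\mathrm{SW}$ the utilitarian social welfare. Read this way, the $\Omega(\sqrt{n})$ lower bound says that even ternary valuations can force a $\sqrt{n}$-factor welfare loss when EF1 is required.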
Abstract
Traffic is a significant source of global carbon emissions. In this paper, we study how carbon pricing can be used to guide traffic towards equilibria that respect given emission budgets. In particular, we consider a general multi-commodity flow model with flow-dependent externalities. These externalities may represent carbon emissions, entering a priced area, or the traversal of paths regulated by tradable credit schemes. We provide a complete characterization of all flows that can be attained as Wardrop equilibria when assigning a single price to each externality. More precisely, we show that every externality budget achievable by any feasible flow in the network can also be achieved as a Wardrop equilibrium by setting appropriate prices. For extremal and Pareto-minimal budgets, we show that there are prices such that all equilibria respect the budgets. Although the proofs of existence of these particular prices rely on fixed-point arguments and are non-constructive, we show that in the case where the equilibrium minimizes a convex potential, the prices can be obtained as Lagrange multipliers of a suitable convex program. In the case of a single externality, we prove that the total externality caused by the traffic flow is decreasing in the price. For increasing, continuous, and piecewise affine travel time functions with a single externality, we give an output-polynomial algorithm that computes all equilibria implementable by pricing the externality. Even though there are networks where the output size is exponential in the input size, we show that the minimal price obeying a given budget can be computed in polynomial time. This allows the efficient computation of the market price of tradable credit schemes. Overall, our results show that carbon pricing is a viable and (under mild assumptions) tractable approach to achieve all feasible emission goals in traffic networks.
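The abstract notes that, when the equilibrium minimizes a convex potential, prices fall out as Lagrange multipliers of a convex program. The sketch below instantiates that on a toy two-road network (all numbers invented): one unit of traffic splits between two parallel roads with affine travel times, road 1 emits more carbon, and the dual variable of the emission budget is the carbon price that implements the budget as a Wardrop equilibrium.

import cvxpy as cp

x1, x2 = cp.Variable(nonneg=True), cp.Variable(nonneg=True)

# Beckmann-style potential: the integral of each affine travel time,
# t1(x) = 1 + x and t2(x) = 2 + 0.5x.
potential = (1.0 * x1 + 0.5 * cp.square(x1)) + (2.0 * x2 + 0.25 * cp.square(x2))
emissions = 3.0 * x1 + 1.0 * x2              # road 1 is the dirtier road

budget = emissions <= 2.0                    # the externality budget
prob = cp.Problem(cp.Minimize(potential), [x1 + x2 == 1.0, budget])
prob.solve()
print("flows:", float(x1.value), float(x2.value))               # 0.5 on each road
print("carbon price (budget dual):", float(budget.dual_value))  # ~0.375 here

At the price of about 0.375 per unit of emissions, both roads have equal priced travel time (1.5 + 3(0.375) = 2.25 + 0.375 = 2.625), so no driver wants to deviate: the budget-respecting flow is an equilibrium.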
AI for Supply Chain
Abstract
Recent advances in mathematical reasoning and the long-term planning capabilities of large language models (LLMs) have precipitated the development of agents, which are increasingly leveraged in business operations processes. Decision models to optimize inventory levels are one of the core elements of operations management. However, the capabilities of LLM agents in making inventory decisions in uncertain contexts, as well as their decision-making biases (e.g., the framing effect), remain largely unexplored. This prompts concerns regarding the capacity of LLM agents to effectively address real-world problems, as well as the potential implications of any biases that may be present. To address this gap, we introduce AIM-Bench, a novel benchmark designed to assess the decision-making behaviour of LLM agents in uncertain supply chain management scenarios through a diverse series of inventory replenishment experiments. Our results reveal that different LLMs typically exhibit varying degrees of decision bias similar to those observed in human beings. In addition, we explore strategies to mitigate the pull-to-centre effect and the bullwhip effect, namely cognitive reflection and information sharing. These findings underscore the need for careful consideration of potential biases when deploying LLMs in inventory decision-making scenarios. We hope that these insights will pave the way for mitigating human decision bias and developing human-centred decision support systems for supply chains.
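For readers unfamiliar with the pull-to-centre effect the paper probes, the usual benchmark is the newsvendor model's critical-ratio order quantity; biased agents order closer to mean demand instead. A minimal sketch of that benchmark follows, with illustrative numbers; the paper's exact experimental setup is not specified in the abstract.

from scipy.stats import norm

price, cost = 12.0, 3.0                       # assumed: a high-margin product
cu, co = price - cost, cost                   # underage vs. overage cost
critical_ratio = cu / (cu + co)               # 0.75 here
q_star = norm.ppf(critical_ratio, loc=100, scale=20)  # demand ~ N(100, 20)
print(f"rational order: {q_star:.1f}")        # ~113.5; pull-to-centre drifts toward 100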
Demand
Abstract
Time series forecasting is a critical first step in generating demand plans for supply chains. Experiments on time series models typically focus on demonstrating improvements in forecast accuracy over existing/baseline solutions, quantified according to some accuracy metric. There is no doubt that forecast accuracy is important; however, in production systems, demand planners often value consistency and stability over incremental accuracy improvements. Assuming that the inputs have not changed significantly, forecasts that vary drastically from one planning cycle to the next require high amounts of human intervention, which frustrates demand planners and can even cause them to lose trust in ML forecasting models. We study model-induced stochasticity, which quantifies the variance of a set of forecasts produced by a single model when the set of inputs is fixed. Models with lower variance are more stable. Recently, the forecasting community has seen significant advances in forecast accuracy through the development of deep machine learning models for time series forecasting. We perform a case study measuring the stability and accuracy of state-of-the-art forecasting models (Chronos, DeepAR, PatchTST, Temporal Fusion Transformer, TiDE, and the AutoGluon best-quality ensemble) on public data sets from the M5 competition and Favorita grocery sales. We show that ensemble models improve stability without significantly deteriorating (and sometimes even improving) forecast accuracy. While these results may not be surprising, the main point of this paper is to propose the need for further study of forecast stability for models that are being deployed in production systems.
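Model-induced stochasticity, as defined above, is straightforward to measure: hold the inputs fixed, refit or re-run the model under different random seeds, and look at the spread of the resulting forecasts. The sketch below does exactly that with a placeholder model whose seed-dependence is simulated; the paper's actual models and metric may differ.

import numpy as np

def fit_and_forecast(history, seed):
    rng = np.random.default_rng(seed)
    # Placeholder: seed-dependent training noise is simulated by jittering
    # a naive mean forecast.
    return history.mean() + rng.normal(scale=0.5)

history = np.array([100.0, 103.0, 98.0, 105.0, 101.0])
forecasts = np.array([fit_and_forecast(history, s) for s in range(30)])
print("variance across seeds (lower = more stable):", forecasts.var())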

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Supply
  • AI for Supply Chain Optimization
You can edit or add more interests any time.

Unsubscribe from these updates