Hi j34nc4rl0+marketplace_supply,

Here are your personalized paper recommendations, sorted by relevance.
AI for Pricing
Abstract
As artificial intelligence (AI) becomes foundational to enterprise infrastructure, organizations face growing challenges in accurately assessing the full economic implications of AI deployment. Existing metrics such as API token costs, GPU-hour billing, or Total Cost of Ownership (TCO) fail to capture the complete lifecycle costs of AI systems and provide limited comparability across deployment models. This paper introduces the Levelized Cost of Artificial Intelligence (LCOAI), a standardized economic metric designed to quantify the total capital (CAPEX) and operational (OPEX) expenditures per unit of productive AI output, normalized by valid inference volume. Analogous to established metrics like LCOE (levelized cost of electricity) and LCOH (levelized cost of hydrogen) in the energy sector, LCOAI offers a rigorous, transparent framework to evaluate and compare the cost-efficiency of vendor API deployments versus self-hosted, fine-tuned models. We define the LCOAI methodology in detail and apply it to three representative scenarios (OpenAI GPT-4.1 API, Anthropic Claude Haiku API, and a self-hosted LLaMA-2-13B deployment), demonstrating how LCOAI captures critical trade-offs in scalability, investment planning, and cost optimization. Extensive sensitivity analyses further explore the impact of inference volume, CAPEX, and OPEX variability on lifecycle economics. The results illustrate the practical utility of LCOAI in procurement, infrastructure planning, and automation strategy, and establish it as a foundational benchmark for AI economic analysis. Policy implications and areas for future refinement, including environmental and performance-adjusted cost metrics, are also discussed.
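The abstract does not spell out the LCOAI formula, but a levelized cost metric of this kind typically divides discounted lifetime CAPEX and OPEX by discounted lifetime productive output. A minimal sketch of what such a calculation could look like follows; all figures and parameter names are illustrative assumptions, not values from the paper.

```python
# Illustrative sketch of a levelized-cost-of-AI style calculation.
# All numbers and parameter names are hypothetical assumptions for
# demonstration; the paper defines LCOAI's exact terms itself.

def lcoai(capex, annual_opex, annual_valid_inferences, years, discount_rate=0.08):
    """Levelized cost per valid inference: discounted lifetime cost
    divided by discounted lifetime valid inference volume."""
    cost = float(capex)
    volume = 0.0
    for t in range(1, years + 1):
        disc = (1 + discount_rate) ** t
        cost += annual_opex / disc
        volume += annual_valid_inferences / disc
    return cost / volume

# Hypothetical comparison: self-hosted deployment vs. a vendor API.
self_hosted = lcoai(capex=250_000, annual_opex=120_000,
                    annual_valid_inferences=50_000_000, years=3)
api_based = lcoai(capex=0, annual_opex=400_000,
                  annual_valid_inferences=50_000_000, years=3)
print(f"self-hosted: ${self_hosted:.5f}/inference, API: ${api_based:.5f}/inference")
```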
Department of Machine Learning, MBZUAI, Abu Dhabi, UAE
Abstract
Imagine decision-makers uploading data and, within minutes, receiving clear, actionable insights delivered straight to their fingertips. That is the promise of the AI Data Scientist, an autonomous Agent powered by large language models (LLMs) that closes the gap between evidence and action. Rather than simply writing code or responding to prompts, it reasons through questions, tests ideas, and delivers end-to-end insights at a pace far beyond traditional workflows. Guided by the scientific tenet of the hypothesis, this Agent uncovers explanatory patterns in data, evaluates their statistical significance, and uses them to inform predictive modeling. It then translates these results into recommendations that are both rigorous and accessible. At the core of the AI Data Scientist is a team of specialized LLM Subagents, each responsible for a distinct task such as data cleaning, statistical testing, validation, and plain-language communication. These Subagents write their own code, reason about causality, and identify when additional data is needed to support sound conclusions. Together, they achieve in minutes what might otherwise take days or weeks, enabling a new kind of interaction that makes deep data science both accessible and actionable.
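As a rough sketch of the subagent pipeline the abstract describes (cleaning, statistical testing, validation, communication), here is a hypothetical orchestration loop; the subagent names and interfaces are assumptions for illustration, not the Agent's actual API.

```python
# Hypothetical orchestration of specialized LLM subagents, loosely following
# the pipeline described in the abstract. Names and interfaces are illustrative.
from dataclasses import dataclass

@dataclass
class SubagentResult:
    artifact: object          # e.g. cleaned table, test statistics, report text
    needs_more_data: bool = False

def run_ai_data_scientist(raw_data, subagents):
    """Run subagents in sequence, stopping early if one of them decides
    that additional data is needed to support sound conclusions."""
    artifact = raw_data
    for name, agent in subagents:
        result = agent(artifact)      # each subagent reasons and writes its own code
        if result.needs_more_data:
            return f"{name}: additional data required before proceeding"
        artifact = result.artifact
    return artifact                   # final plain-language recommendations

# subagents = [("cleaning", clean_agent), ("hypothesis_testing", test_agent),
#              ("validation", validate_agent), ("communication", report_agent)]
```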
Supply Chain
Abstract
This paper presents an integrated framework that combines traditional network optimization models with large language models (LLMs) to deliver interactive, explainable, and role-aware decision support for supply chain planning. The proposed system bridges the gap between complex operations research outputs and business stakeholder understanding by generating natural language summaries, contextual visualizations, and tailored key performance indicators (KPIs). The core optimization model addresses tactical inventory redistribution across a network of distribution centers in a multi-period, multi-item setting, using a mixed-integer formulation. The technical architecture incorporates AI agents, RESTful APIs, and a dynamic user interface to support real-time interaction, configuration updates, and simulation-based insights. A case study demonstrates how the system improves planning outcomes by preventing stockouts, reducing costs, and maintaining service levels. Future extensions include integrating private LLMs, transfer learning, reinforcement learning, and Bayesian neural networks to enhance explainability, adaptability, and real-time decision-making.
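For readers unfamiliar with this model class, a toy multi-period, multi-item redistribution MIP in PuLP gives a feel for the structure; the sets, costs, demand, and fixed-charge lane constraints below are illustrative assumptions, not the paper's formulation.

```python
# Toy multi-period, multi-item inventory redistribution MIP in PuLP.
# All data and the fixed-charge lane structure are illustrative assumptions.
import pulp

DCS, ITEMS, PERIODS = ["DC1", "DC2", "DC3"], ["A", "B"], [1, 2, 3]
ship_cost, hold_cost, lane_fixed_cost, big_m = 2.0, 0.5, 20.0, 1000
demand = {(d, i, t): 10 for d in DCS for i in ITEMS for t in PERIODS}   # stub demand
init_inv = {(d, i): 40 for d in DCS for i in ITEMS}

m = pulp.LpProblem("inventory_redistribution", pulp.LpMinimize)
x = pulp.LpVariable.dicts("ship", (DCS, DCS, ITEMS, PERIODS), lowBound=0)   # units moved
inv = pulp.LpVariable.dicts("inv", (DCS, ITEMS, PERIODS), lowBound=0)       # ending stock
y = pulp.LpVariable.dicts("lane_open", (DCS, DCS, PERIODS), cat="Binary")   # lane used?

# Objective: transfer cost + holding cost + fixed cost per activated lane
m += (pulp.lpSum(ship_cost * x[o][d][i][t]
                 for o in DCS for d in DCS for i in ITEMS for t in PERIODS)
      + pulp.lpSum(hold_cost * inv[d][i][t] for d in DCS for i in ITEMS for t in PERIODS)
      + pulp.lpSum(lane_fixed_cost * y[o][d][t] for o in DCS for d in DCS for t in PERIODS))

# Flow balance at each DC, item, and period (non-negative inventory prevents stockouts)
for d in DCS:
    for i in ITEMS:
        for t in PERIODS:
            prev = init_inv[d, i] if t == 1 else inv[d][i][t - 1]
            inflow = pulp.lpSum(x[o][d][i][t] for o in DCS if o != d)
            outflow = pulp.lpSum(x[d][o][i][t] for o in DCS if o != d)
            m += inv[d][i][t] == prev + inflow - outflow - demand[d, i, t]

# A lane can only carry flow in a period if its binary variable is switched on
for o in DCS:
    for d in DCS:
        for t in PERIODS:
            m += pulp.lpSum(x[o][d][i][t] for i in ITEMS) <= big_m * y[o][d][t]

m.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[m.status], pulp.value(m.objective))
```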
Institute of Information Engineering, Chinese Academy of Sciences
Abstract
Pickle deserialization vulnerabilities have persisted throughout Python's history, remaining widely recognized yet unresolved. Due to its ability to transparently save and restore complex objects into byte streams, many AI/ML frameworks continue to adopt pickle as the model serialization protocol despite its inherent risks. As the open-source model ecosystem grows, model-sharing platforms such as Hugging Face have attracted massive participation, significantly amplifying the real-world risks of pickle exploitation and opening new avenues for model supply chain poisoning. Although several state-of-the-art scanners have been developed to detect poisoned models, their incomplete understanding of the poisoning surface leaves the detection logic fragile and allows attackers to bypass them. In this work, we present the first systematic disclosure of the pickle-based model poisoning surface from both model loading and risky function perspectives. Our research demonstrates how pickle-based model poisoning can remain stealthy and highlights critical gaps in current scanning solutions. On the model loading surface, we identify 22 distinct pickle-based model loading paths across five foundational AI/ML frameworks, 19 of which are entirely missed by existing scanners. We further develop a bypass technique named Exception-Oriented Programming (EOP) and discover 9 EOP instances, 7 of which can bypass all scanners. On the risky function surface, we discover 133 exploitable gadgets, achieving almost a 100% bypass rate. Even against the best-performing scanner, these gadgets maintain an 89% bypass rate. By systematically revealing the pickle-based model poisoning surface, we achieve practical and robust bypasses against real-world scanners. We responsibly disclose our findings to corresponding vendors, receiving acknowledgments and a $6000 bug bounty.
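The underlying risk the paper builds on is the well-known one: unpickling can invoke attacker-chosen callables via __reduce__, so merely loading a model file can execute code. A minimal, deliberately benign illustration of that textbook pattern (not one of the paper's 133 gadgets or its EOP bypasses):

```python
# Classic illustration of why pickle deserialization is unsafe: __reduce__
# lets a serialized object specify an arbitrary callable to run on load.
# The payload here is deliberately harmless (it only prints); real attacks
# substitute something like os.system. Not one of the paper's gadgets.
import pickle

class Poisoned:
    def __reduce__(self):
        return (print, ("code executed during pickle.loads()",))

blob = pickle.dumps(Poisoned())
pickle.loads(blob)   # loading alone triggers execution of the chosen callable
```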
Pricing
HSBC
Abstract
Over the past decade, many dealers have implemented algorithmic models to automatically respond to RFQs and manage flows originating from their electronic platforms. In parallel, building on the foundational work of Ho and Stoll, and later Avellaneda and Stoikov, the academic literature on market making has expanded to address trade size distributions, client tiering, complex price dynamics, alpha signals, and the internalization versus externalization dilemma in markets with dealer-to-client and interdealer-broker segments. In this paper, we tackle two critical dimensions: adverse selection, arising from the presence of informed traders, and price reading, whereby the market maker's own quotes inadvertently reveal the direction of their inventory. These risks are well known to practitioners, who routinely face informed flows and algorithms capable of extracting signals from quoting behavior. Yet they have received limited attention in the quantitative finance literature, beyond stylized toy models with limited actionability. Extending the existing literature, we propose a tractable and implementable framework that enables market makers to adjust their quotes with greater awareness of informational risk.
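For context, the Avellaneda-Stoikov baseline that this line of work extends sets a reservation price and an optimal spread from inventory q, risk aversion gamma, volatility sigma, and order-arrival decay k; the classical expressions are recalled below (notation assumed, and the paper's adverse-selection and price-reading adjustments are not reproduced here).

```latex
% Classical Avellaneda-Stoikov quotes (baseline only; notation assumed)
r(s,t) = s - q\,\gamma\,\sigma^{2}\,(T-t), \qquad
\delta^{a} + \delta^{b} = \gamma\,\sigma^{2}\,(T-t) + \frac{2}{\gamma}\ln\!\left(1 + \frac{\gamma}{k}\right),
% with bid and ask quoted around the reservation price r rather than the mid s.
```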
Abstract
We study an American option pricing problem with liquidity risks and transaction fees. As endogenous transaction costs, liquidity risks of the underlying asset are modeled by a mean-reverting process. Transaction fees are exogenous transaction costs and are assumed to be proportional to the trading amount, with the long-run liquidity level depending on the proportional transaction costs rate. Two nonlinear partial differential equations are established to characterize the option values for the holder and the writer, respectively. To illustrate the impact of these transaction costs on option prices and optimal exercise prices, we apply the alternating direction implicit method to solve the linear complementarity problem numerically. Finally, we conduct model calibration from market data via maximum likelihood estimation, and find that our model incorporating liquidity risks outperforms the Leland model significantly.
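As a reminder of the structure being solved numerically, an American option value V with payoff Phi satisfies a complementarity system of the generic form below; the operator L here is a stand-in for the paper's (nonlinear) pricing operator, which is not reproduced.

```latex
% Generic linear complementarity form for an American option value V with payoff \Phi
% (\mathcal{L} stands in for the paper's pricing operator, not reproduced here)
\frac{\partial V}{\partial t} + \mathcal{L}V \le 0, \qquad
V \ge \Phi, \qquad
\left(\frac{\partial V}{\partial t} + \mathcal{L}V\right)\left(V - \Phi\right) = 0 .
```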
AI for Pricing Optimization
Abstract
Critic-free reinforcement learning methods, particularly group policies, have attracted considerable attention for their efficiency in complex tasks. However, these methods rely heavily on multiple sampling and comparisons within the policy to estimate advantage, which may cause the policy to fall into local optimum and increase computational cost. To address these issues, we propose PVPO, an efficient reinforcement learning method enhanced by an advantage reference anchor and data pre-sampling. Specifically, we use the reference model to rollout in advance and employ the calculated reward score as a reference anchor. Our approach effectively corrects the cumulative bias introduced by intra-group comparisons and significantly reduces reliance on the number of rollouts. Meanwhile, the reference model can assess sample difficulty during data pre-sampling, enabling effective selection of high-gain data to improve training efficiency. Experiments conducted on nine datasets across two domains demonstrate that PVPO achieves State-Of-The-Art (SOTA) performance. Our approach not only demonstrates robust generalization across multiple tasks, but also exhibits scalable performance across models of varying scales.
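A rough sketch of the anchoring idea described in the abstract: replace the within-group baseline used by critic-free group methods with a reward anchor pre-computed from reference-model rollouts. Function names and normalization choices below are assumptions, not PVPO's actual equations.

```python
# Sketch of a reference-anchored advantage, per the idea in the abstract:
# instead of centering rewards on the within-group mean, center them on a
# reward anchor pre-computed from reference-model rollouts. Normalization
# details are illustrative assumptions.
import numpy as np

def group_relative_advantage(rewards):
    """Critic-free group baseline: compare each sample to its own group."""
    r = np.asarray(rewards, dtype=float)
    return (r - r.mean()) / (r.std() + 1e-8)

def reference_anchored_advantage(rewards, anchor_reward):
    """Anchor-based variant: compare each sample to a fixed reference score
    obtained by rolling out a reference model ahead of time."""
    r = np.asarray(rewards, dtype=float)
    return r - anchor_reward

# anchor_reward = mean reward of pre-sampled reference-model rollouts on this prompt
adv = reference_anchored_advantage([0.2, 0.9, 0.4], anchor_reward=0.5)
print(adv)
```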
AI for Supply Chain Optimization
Fisher College of Business, The Ohio State University
Abstract
Generative Agent-Based Models (GABMs) powered by large language models (LLMs) offer promising potential for empirical logistics and supply chain management (LSCM) research by enabling realistic simulation of complex human behaviors. Unlike traditional agent-based models, GABMs generate human-like responses through natural language reasoning, which creates potential for new perspectives on emergent LSCM phenomena. However, the validity of LLMs as proxies for human behavior in LSCM simulations is unknown. This study evaluates LLM equivalence of human behavior through a controlled experiment examining dyadic customer-worker engagements in food delivery scenarios. I test six state-of-the-art LLMs against 957 human participants (477 dyads) using a moderated mediation design. This study reveals a need to validate GABMs on two levels: (1) human equivalence testing, and (2) decision process validation. Results reveal GABMs can effectively simulate human behaviors in LSCM; however, an equivalence-versus-process paradox emerges. While a series of Two One-Sided Tests (TOST) for equivalence reveals some LLMs demonstrate surface-level equivalence to humans, structural equation modeling (SEM) reveals artificial decision processes not present in human participants for some LLMs. These findings show GABMs as a potentially viable methodological instrument in LSCM with proper validation checks. The dual-validation framework also provides LSCM researchers with a guide to rigorous GABM development. For practitioners, this study offers evidence-based assessment for LLM selection for operational tasks.
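The equivalence-testing step mentioned (TOST) is easy to reproduce; a minimal example with statsmodels on made-up scores follows, where the +/-0.5 equivalence margin and the data are illustrative assumptions, not the paper's measures or bounds.

```python
# Minimal two-one-sided-tests (TOST) equivalence check between hypothetical
# LLM-agent scores and human-participant scores. Data and the +/-0.5
# equivalence margin are illustrative assumptions.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(0)
human_scores = rng.normal(loc=5.0, scale=1.0, size=200)
llm_scores = rng.normal(loc=5.1, scale=1.0, size=200)

# Null hypothesis: the mean difference lies OUTSIDE [-0.5, +0.5].
p_overall, lower_test, upper_test = ttost_ind(llm_scores, human_scores, low=-0.5, upp=0.5)
print(f"TOST p-value: {p_overall:.4f} (p < 0.05 -> equivalence within the margin)")
```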

Interests not found

We did not find any papers matching the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Demand
  • Supply
  • AI for Supply Chain
You can edit or add more interests any time.

Unsubscribe from these updates