Papers from 06 to 10 October, 2025

Here are the personalized paper recommendations, sorted by relevance
Pricing
Abstract
We build a mechanism design framework in which a platform designs GenAI models to screen users who obtain instrumental value from the generated conversation and privately differ in their preference for latency. We show that the revenue-optimal mechanism is simple: deploy a single aligned (user-optimal) model and use a token cap as the only instrument to screen users. The design decouples model training from pricing, is readily implemented with token metering, and mitigates misalignment pressures.
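The screening logic can be sketched as a tiny menu search. Everything below (two user types, a square-root utility for tokens, the coarse grids) is an invented toy, not the paper's model; it only illustrates how a token cap by itself can separate users with private preferences.

```python
import itertools
import math

# Illustrative second-degree price discrimination where a token cap is the
# only screening instrument. Types, utilities, and grids are assumptions.
THETAS = {"low": 1.0, "high": 2.0}   # private taste for generated tokens
SHARES = {"low": 0.5, "high": 0.5}   # population shares

def utility(theta, cap):
    return theta * math.sqrt(cap)    # concave value of a longer conversation

caps = range(0, 101, 10)
prices = [p / 2 for p in range(0, 41)]   # 0.0, 0.5, ..., 20.0

best = (float("-inf"), 0, 0, 0.0, 0.0)
for cL, cH, pL, pH in itertools.product(caps, caps, prices, prices):
    uLL = utility(THETAS["low"], cL) - pL    # low type on the low plan
    uHH = utility(THETAS["high"], cH) - pH   # high type on the high plan
    uLH = utility(THETAS["low"], cH) - pH    # payoffs from deviating
    uHL = utility(THETAS["high"], cL) - pL
    # participation (IR) and self-selection (IC) constraints
    if uLL >= 0 and uHH >= 0 and uLL >= uLH and uHH >= uHL:
        revenue = SHARES["low"] * pL + SHARES["high"] * pH
        if revenue > best[0]:
            best = (revenue, cL, cH, pL, pH)

revenue, cL, cH, pL, pH = best
print(f"caps (low, high) = ({cL}, {cH}); prices = ({pL}, {pH}); revenue = {revenue}")
```

The grid search enumerates cap/price menus and keeps the revenue-best one satisfying IR and IC, which is the textbook screening computation the abstract's single-model result simplifies away.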
Abstract
Unlike Business-to-Consumer e-commerce platforms (e.g., Amazon), inexperienced individual sellers on Consumer-to-Consumer platforms (e.g., eBay) often face significant challenges in setting prices for their second-hand products efficiently. Therefore, numerous studies have been proposed for automating price prediction. However, most of them are based on static regression models, which suffer from poor generalization performance and fail to capture market dynamics (e.g., the price of a used iPhone decreases over time). Inspired by recent breakthroughs in Large Language Models (LLMs), we introduce LLP, the first LLM-based generative framework for second-hand product pricing. LLP first retrieves similar products to better align with the dynamic market change. Afterwards, it leverages the LLMs' nuanced understanding of key pricing information in free-form text to generate accurate price suggestions. To strengthen the LLMs' domain reasoning over retrieved products, we apply a two-stage optimization, supervised fine-tuning (SFT) followed by group relative policy optimization (GRPO), on a dataset built via bidirectional reasoning. Moreover, LLP employs a confidence-based filtering mechanism to reject unreliable price suggestions. Extensive experiments demonstrate that LLP substantially surpasses existing methods while generalizing well to unseen categories. We have successfully deployed LLP on Xianyu (China's largest second-hand e-commerce platform), significantly outperforming the previous pricing method. Under the same 30% product coverage, it raises the static adoption rate (SAR) from 40% to 72%, and maintains a strong SAR of 47% even at 90% recall.
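A heavily simplified sketch of the retrieve-then-suggest flow with confidence filtering. The catalog, the token-overlap similarity, and the deterministic recency-weighted pricing rule (standing in for the fine-tuned LLM) are all invented for illustration:

```python
from statistics import median, pstdev

# Toy stand-in for LLP's pipeline: retrieve similar sold listings, generate
# a price suggestion, and reject it when retrieved evidence disagrees.
CATALOG = [
    {"title": "iphone 12 128gb blue", "days_ago": 3,  "price": 310.0},
    {"title": "iphone 12 64gb black", "days_ago": 10, "price": 295.0},
    {"title": "iphone 12 128gb red",  "days_ago": 45, "price": 340.0},
    {"title": "galaxy s21 128gb",     "days_ago": 7,  "price": 260.0},
]

def retrieve(query, k=3):
    """Rank catalog items by token overlap with the query title."""
    q = set(query.split())
    scored = sorted(CATALOG, key=lambda it: -len(q & set(it["title"].split())))
    return scored[:k]

def suggest_price(query, max_rel_spread=0.15):
    """Recency-weighted estimate with a confidence-based rejection rule."""
    hits = retrieve(query)
    prices = [it["price"] for it in hits]
    # reject unreliable suggestions when retrieved prices disagree too much
    if pstdev(prices) / median(prices) > max_rel_spread:
        return None
    weights = [1.0 / (1 + it["days_ago"]) for it in hits]  # fresher = heavier
    est = sum(w * it["price"] for w, it in zip(weights, hits)) / sum(weights)
    return round(est, 2)

print(suggest_price("iphone 12 128gb"))
```

Weighting recent sales more heavily is a crude proxy for the market-dynamics alignment the abstract attributes to retrieval.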
AI for Supply Chain
Abdelmalek Essaadi Univer
Abstract
The paper explores the transformation of port logistics operations with artificial intelligence during the transition to a smart port. The research integrates capabilities-based resource analysis and dynamic capabilities with sociotechnical implementations of technology and resilience approaches for complex systems under disruption. Robust data infrastructures propel analytical and AI modules that become effective once integrated with sufficient governance systems, trained personnel, and operational processes to transform planning, safety, and sustainability operations. The study applies Scopus bibliometric research to analyze 123 articles using a systematic approach with a search protocol, document screening, and duplication verification. It combines annual publication patterns and author- and country-performance analysis with science mapping techniques (keyword relations, co-citation, and bibliographic coupling) and conceptual structuring tools that construct thematic maps and multiple correspondence analysis with community detection, while applying explicit thresholds and robustness tests. The research connects AI applications to smart port domains through specific data-to-impact pathways and provides a bibliometric method that enables future updates. It presents a step-by-step approach: data readiness, then predictive and optimization implementation, then organizational integration. The paper supports public policy through recommendations for data-sharing standards and complete environmental benefit assessments, and proposes a future study plan that combines field-based testing with multi-port assessments to strengthen both causal understanding and research applicability.
AI Insights
  • China, USA, and South Korea comprise over 60% of the 123 AI‑in‑maritime logistics papers.
  • Themes evolved from basic automation to complex decision‑support and sustainability‑driven models.
  • Empirical studies are scarce in low‑income regions, revealing a geographic gap.
  • Ethical and governance aspects remain under‑represented in AI applications.
  • Bibliometrix maps trends, key contributors, and frontiers via co‑citation and clustering.
  • Recommended reading: “Port Economics, Management and Policy” (Notteboom & Rodrigue, 2021) and “Digital Technologies and Logistics” (Parola & Satta, 2020).
  • Sole reliance on Scopus and exclusion of non‑English papers may bias the global AI‑maritime logistics view.
Abstract
Capacity management is critical for software organizations to allocate resources effectively and meet operational demands. An important step in capacity management is predicting future resource needs, which often relies on data-driven analytics and machine learning (ML) forecasting models that require frequent retraining to stay relevant as data evolves. Continuously retraining the forecasting models can be expensive and difficult to scale, posing a challenge for engineering teams tasked with balancing accuracy and efficiency. Retraining only when the data changes appears to be a more computationally efficient alternative, but its impact on accuracy requires further investigation. In this work, we investigate the effects of retraining capacity forecasting models for time series based on detected changes in the data compared to periodic retraining. Our results show that drift-based retraining achieves comparable forecasting accuracy to periodic retraining in most cases, making it a cost-effective strategy. However, in cases where data is changing rapidly, periodic retraining is still preferred to maximize the forecasting accuracy. These findings offer actionable insights for software teams to enhance forecasting systems, reducing retraining overhead while maintaining robust performance.
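The drift-based alternative can be sketched in a few lines. The level-shift series, the window-mean "model", and the fixed deviation threshold are assumptions chosen for illustration, not the study's actual detector or forecaster:

```python
import random

random.seed(7)

# Synthetic demand series with an abrupt level shift halfway through; the
# "model" is just a window mean, standing in for a real forecaster.
stream = [random.gauss(100, 5) for _ in range(250)] + \
         [random.gauss(140, 5) for _ in range(250)]

def fit(history):
    return sum(history) / len(history)

model = fit(stream[:50])
retrains, recent = 0, []
for x in stream[50:]:
    recent.append(x)
    if len(recent) > 30:
        recent.pop(0)
    # drift check: rolling mean far from the current model's level
    if len(recent) == 30 and abs(sum(recent) / 30 - model) > 15:
        model = fit(recent)   # retrain only on drift, not on a fixed schedule
        retrains += 1
        recent = []           # reset the detector after retraining

print(f"drift-triggered retrains: {retrains}; final forecast: {model:.1f}")
```

With one genuine regime change, the detector retrains a handful of times instead of on every period, which is the cost saving the abstract describes.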
AI for Pricing
Project Management Instit
Abstract
Generative AI does more than cut costs. It pulls products toward a shared template, making offerings look and feel more alike while making true originality disproportionately expensive. We capture this centripetal force in a standard two-stage differentiated-competition framework and show how a single capability shift simultaneously compresses perceived differences, lowers marginal cost and raises fixed access costs. The intuition is straightforward. When buyers see smaller differences across products, the payoff to standing apart shrinks just as the effort to do so rises, so firms cluster around the template. Prices fall and customers become more willing to switch. But the same homogenization also squeezes operating margins, and rising fixed outlays deepen the squeeze. The combination yields a structural prediction. There is a capability threshold at which even two firms cannot both cover fixed costs, and in a many-firm extension the sustainable number of firms falls as capability grows. Concentration increases, and prices still fall. Our results hold under broader preference shapes, non-uniform consumer densities, outside options, capability-dependent curvatures, and modest asymmetries. We translate the theory into two sufficient statistics for enforcement: a conduct statistic and a viability statistic. Transactions or platform rules that strengthen template pull or raise fixed access and originality costs can lower prices today yet push the market toward monoculture. Remedies that broaden access and promote template plurality and interoperability preserve the price benefits of AI while protecting entry and variety. The paper thus reconciles a live policy paradox. AI can make prices lower and entry harder at the same time. It prescribes what to measure to tell which force is dominant in practice.
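The viability threshold can be illustrated with a textbook Hotelling duopoly as a stand-in for the paper's framework; the functional forms t(g), c(g), F(g) below are assumptions, not the paper's model.

```latex
% Capability g compresses perceived differences, t'(g) < 0, lowers
% marginal cost, c'(g) < 0, and raises fixed access costs, F'(g) > 0.
% In the symmetric Hotelling equilibrium with unit transport cost t(g):
p^*(g) = c(g) + t(g), \qquad \pi(g) = \frac{t(g)}{2} - F(g).
% Both firms cover fixed costs only while t(g) \ge 2 F(g), so the
% capability threshold g^* solves
t(g^*) = 2\,F(g^*),
% beyond which duopoly is unsustainable even though the equilibrium
% markup p^*(g) - c(g) = t(g) keeps falling as capability grows.
```

This reproduces the abstract's paradox in miniature: prices fall monotonically in g while the number of viable firms drops.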
Abstract
This study evaluates Artificial Intelligence (AI) agents for Dhumbal, a culturally significant multiplayer card game with imperfect information, through a systematic comparison of rule-based, search-based, and learning-based strategies. We formalize Dhumbal's mechanics and implement diverse agents, including heuristic approaches (Aggressive, Conservative, Balanced, Opportunistic), search-based methods such as Monte Carlo Tree Search (MCTS) and Information Set Monte Carlo Tree Search (ISMCTS), reinforcement learning approaches including Deep Q-Network (DQN) and Proximal Policy Optimization (PPO), and a random baseline. Evaluation involves within-category tournaments followed by a cross-category championship. Performance is measured via win rate, economic outcome, Jhyap success, cards discarded per round, risk assessment, and decision efficiency. Statistical significance is assessed using Welch's t-test with Bonferroni correction, effect sizes via Cohen's d, and 95% confidence intervals (CI). Across 1024 simulated rounds, the rule-based Aggressive agent achieves the highest win rate (88.3%, 95% CI: [86.3, 90.3]), outperforming ISMCTS (9.0%) and PPO (1.5%) through effective exploitation of Jhyap declarations. The study contributes a reproducible AI framework, insights into heuristic efficacy under partial information, and open-source code, thereby advancing AI research and supporting digital preservation of cultural games.
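The reported interval for the Aggressive agent is consistent with a normal-approximation (Wald) binomial CI, which can be checked directly; the Cohen's d helper below is the standard pooled-SD formula, included since the abstract cites that effect-size measure:

```python
from math import sqrt
from statistics import NormalDist

# Check the abstract's 95% CI for the Aggressive agent's win rate
# (88.3% over 1024 rounds) with a Wald binomial interval.
def wald_ci(p_hat, n, level=0.95):
    z = NormalDist().inv_cdf(0.5 + level / 2)
    half = z * sqrt(p_hat * (1 - p_hat) / n)
    return p_hat - half, p_hat + half

lo, hi = wald_ci(0.883, 1024)
print(f"95% CI: [{100*lo:.1f}, {100*hi:.1f}]")  # matches the reported [86.3, 90.3]

# Cohen's d for two independent samples, using the pooled standard deviation.
def cohens_d(m1, s1, n1, m2, s2, n2):
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled
```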
AI for Pricing Optimization
University of California
Abstract
Reinsurance treaty pricing must satisfy stringent regulatory standards, yet current quoting practices remain opaque and difficult to audit. We introduce ClauseLens, a clause-grounded reinforcement learning framework that produces transparent, regulation-compliant, and risk-aware treaty quotes. ClauseLens models the quoting task as a Risk-Aware Constrained Markov Decision Process (RA-CMDP). Statutory and policy clauses are retrieved from legal and underwriting corpora, embedded into the agent's observations, and used both to constrain feasible actions and to generate clause-grounded natural language justifications. Evaluated in a multi-agent treaty simulator calibrated to industry data, ClauseLens reduces solvency violations by 51%, improves tail-risk performance by 27.9% (CVaR_0.10), and achieves 88.2% accuracy in clause-grounded explanations with retrieval precision of 87.4% and recall of 91.1%. These findings demonstrate that embedding legal context into both decision and explanation pathways yields interpretable, auditable, and regulation-aligned quoting behavior consistent with Solvency II, NAIC RBC, and the EU AI Act.
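A toy sketch of clause-grounded action masking in the spirit of ClauseLens. The clauses, quote actions, and profit proxy are all invented here; the real system retrieves statutory text from legal corpora and learns a constrained policy:

```python
# Retrieved clauses (invented) both mask infeasible quote actions and
# ground the natural-language justification.
CLAUSES = [
    {"id": "SII-101", "text": "Premium rate shall not fall below 2% of ceded limit.",
     "check": lambda q: q["rate"] >= 0.02},
    {"id": "RBC-7",   "text": "Accepted share may not exceed 40% of the treaty.",
     "check": lambda q: q["share"] <= 0.40},
]

ACTIONS = [
    {"rate": 0.015, "share": 0.50},
    {"rate": 0.025, "share": 0.35},
    {"rate": 0.030, "share": 0.45},
    {"rate": 0.040, "share": 0.30},
]

def feasible(q):
    """An action is allowed only if every retrieved clause permits it."""
    return all(c["check"](q) for c in CLAUSES)

def quote(actions, expected_profit):
    allowed = [a for a in actions if feasible(a)]
    best = max(allowed, key=expected_profit)
    cited = "; ".join(f'{c["id"]}: {c["text"]}' for c in CLAUSES)
    return best, f"Quoted rate {best['rate']:.1%} on {best['share']:.0%} share. Complies with {cited}"

best, justification = quote(ACTIONS, lambda a: a["rate"] * a["share"])
print(justification)
```

Masking before maximizing is what makes the quote compliant by construction; the citation string is the minimal analogue of the clause-grounded justification.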
Supply Chain
Abstract
Maritime transportation systems (MTS) play a crucial role in ensuring the uninterrupted supply of essential goods and services, impacting the economy, border security and general welfare. However, MTS operations face disruptions from natural disasters, man-made disturbances, or cascading combinations of these events. These threats can disrupt trade routes, pose serious risks to personnel, and damage critical port infrastructure. In today's advanced technological world, MTS also face growing threats from cyberattacks and terrorist attacks. Given these evolving risks, efforts to ensure resilience in port operations must be more inclusive, reflecting the full spectrum of potential threats. This article focuses on two key aspects of maritime transportation supply chain resilience (MTSCR): operational excellence and system technology. It presents a structured review framework of relevant literature including research papers and government documents, to explore strategies for strengthening MTSCR. The article classifies the review findings and highlights vulnerable areas requiring future research.
Purdue University
Abstract
Software signing provides a formal mechanism for provenance by ensuring artifact integrity and verifying producer identity. It also imposes tooling and operational costs to implement in practice. In an era of centralized registries such as PyPI, npm, Maven Central, and Hugging Face, it is reasonable to ask whether hardening registry security controls obviates the need for end-to-end artifact signing. In this work, we posit that the core guarantees of signing (provenance, integrity, and accountability) are not automatically carried across different software distribution boundaries. These boundaries include mirrors, corporate proxies, re-hosting, and air-gapped transfers, where registry security controls alone cannot provide sufficient assurance. We synthesize historical practice and present a trust model for modern distribution modes to identify when signing is necessary to extend trust beyond registry control. Treating signing as a baseline layer of defense strengthens software supply chain assurance even when registries are secure.
AI Insights
  • AI‑powered scanners can detect slopsquatting, flagging malicious package name hijacks before deployment.
  • Mirrors, corporate proxies, re‑hosting, and air‑gapped transfers can silently alter artifacts, breaking provenance.
  • The paper’s trust model maps when end‑to‑end signing is required beyond registry controls.
  • Even a fully hardened registry cannot guarantee integrity once artifacts traverse external boundaries.
  • Historical case studies of package‑manager breaches inform the design of modern defense layers.
  • Developers must routinely audit dependency chains for hidden vulnerabilities that static analysis may miss.
  • Signing is positioned as a baseline defense that extends assurance across all distribution modes.
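The boundary argument can be made concrete: a mirror can re-publish a fresh checksum for altered bytes, but it cannot forge the producer's signature. HMAC below is a stdlib stand-in for real asymmetric signing (e.g., Sigstore or GPG keys, which the standard library does not provide):

```python
import hashlib
import hmac
import secrets

# The producer signs the artifact end-to-end; the registry publishes a checksum.
producer_key = secrets.token_bytes(32)

artifact = b"package-1.0.0 contents"
checksum = hashlib.sha256(artifact).hexdigest()                   # registry control
signature = hmac.new(producer_key, artifact, "sha256").digest()   # end-to-end signature

# A mirror silently alters the artifact and re-publishes a fresh checksum.
tampered = b"package-1.0.0 contents + injected payload"
tampered_checksum = hashlib.sha256(tampered).hexdigest()

# Checksum verification against the mirror's own metadata still passes...
assert hashlib.sha256(tampered).hexdigest() == tampered_checksum
# ...but signature verification against the producer's key does not.
ok = hmac.compare_digest(hmac.new(producer_key, tampered, "sha256").digest(), signature)
print("signature valid after mirror:", ok)
```

The checksum only attests to what the mirror serves; the signature attests to what the producer released, which is exactly the guarantee that crosses distribution boundaries.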
Demand
Gwinnett Technical Collge
Abstract
Effective supply chain management under high-variance demand requires models that jointly address demand uncertainty and digital contracting adoption. Existing research often simplifies demand variability or treats adoption as an exogenous decision, limiting relevance in e-commerce and humanitarian logistics. This study develops an optimization framework combining dynamic Negative Binomial (NB) demand modeling with endogenous smart contract adoption. The NB process incorporates autoregressive dynamics in success probability to capture overdispersion and temporal correlation. Simulation experiments using four real-world datasets, including Delhivery Logistics and the SCMS Global Health Delivery system, apply maximum likelihood estimation and grid search to optimize adoption intensity and order quantity. Across all datasets, the NB specification outperforms Poisson and Gaussian benchmarks, with overdispersion indices exceeding 1.5. Forecasting comparisons show that while ARIMA and Exponential Smoothing achieve similar point accuracy, the NB model provides superior stability under high variance. Scenario analysis reveals that when dispersion exceeds a critical threshold (r > 6), increasing smart contract adoption above 70% significantly enhances profitability and service levels. This framework offers actionable guidance for balancing inventory costs, service levels, and implementation expenses, highlighting the importance of aligning digital adoption strategies with empirically observed demand volatility.
AI Insights
  • Negative Binomial regression achieved the lowest RMSE and MAE, proving its edge for overdispersed demand.
  • 10,000‑run simulations show smart‑contract adoption from 0% to 100% raises expected profit by $20–$25 per order, regardless of quantity.
  • Larger order quantities amplify profit variability, highlighting overdispersion risk at scale.
  • Adoption shrinks profit interquartile ranges, reducing downside volatility across scenarios.
  • When dispersion r > 6, adopting smart contracts above 70% markedly boosts profitability and service levels.
  • The SCMS dataset used r = 4.5, p = 0.3, showing the model’s fit to real humanitarian logistics.
  • These results suggest tuning smart‑contract adoption balances inventory costs against high‑variance demand.
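A stylized version of the experiment: overdispersed Negative Binomial demand plus a grid search over order quantity and adoption intensity. All cost and benefit numbers are invented, and the adoption effect is an assumed stand-in for the paper's contract mechanics:

```python
import random

random.seed(42)

# NB shape and success probability; dispersion index 1/P = 4 exceeds the
# paper's 1.5 benchmark for overdispersion.
R, P = 6, 0.25

def nb_sample():
    """NB(r, p) drawn as the sum of r geometric failure counts (integer r)."""
    total = 0
    for _ in range(R):
        while random.random() > P:
            total += 1
    return total

def profit(order_q, adoption, demand, price=10.0, cost=4.0,
           adopt_cost=3.0, adopt_gain=0.08):
    sold = min(order_q, demand)
    # assumed effect: adoption adds a service margin at a fixed per-order cost
    revenue = price * sold * (1 + adopt_gain * adoption)
    return revenue - cost * order_q - adopt_cost * adoption

demands = [nb_sample() for _ in range(5000)]
mean = sum(demands) / len(demands)
var = sum((d - mean) ** 2 for d in demands) / len(demands)
print(f"overdispersion index: {var / mean:.2f}")  # > 1 confirms NB-like demand

best = max(
    ((sum(profit(q, a, d) for d in demands) / len(demands), q, a)
     for q in range(5, 41, 5) for a in (0.0, 0.25, 0.5, 0.75, 1.0)),
    key=lambda t: t[0],
)
print(f"best expected profit {best[0]:.1f} at order={best[1]}, adoption={best[2]}")
```

With these invented parameters the adoption benefit outweighs its cost, so the search pushes adoption to its maximum, mirroring the direction of the paper's high-dispersion finding.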
The Hong Kong Polytechnic
Abstract
The spatial-temporal imbalance between supply and demand in shared micro-mobility services often leads to observed demand being censored, resulting in incomplete records of the underlying real demand. This phenomenon undermines the reliability of the collected demand data and hampers downstream applications such as demand forecasting, fleet management, and micro-mobility planning. How to accurately estimate the real demand is challenging and has not been well explored in existing studies. In view of this, we contribute to real demand estimation for shared micro-mobility services by proposing an analytical method that rigorously derives the real demand under appropriate assumptions. Rather than directly modeling the intractable relationship between observed demand and real demand, we propose a novel random variable, Generalized Vehicle Survival Time (GVST), which is observable from trip records. The relationship between GVST and real demand is characterized by introducing a flipped queueing model (FQM) that captures the operational dynamics of shared micro-mobility services. Specifically, the distribution of GVST is derived within the FQM, which allows the real demand estimation problem to be transformed into an inverse queueing problem. We analytically derive the real demand in closed form using a one-sided estimation method, and solve the problem by a system of equations in a two-sided estimation method. We validate the proposed methods using synthetic data and conduct empirical analyses using real-world datasets from bike-sharing and shared e-scooter systems. The experimental results show that both the two-sided and one-sided methods outperform benchmark models. In particular, the one-sided approach provides a closed-form solution that delivers acceptable accuracy, constituting a practical rule of thumb for demand-related analytics and decision-making processes.
AI Insights
  • FQM turns real‑demand estimation into an inverse queueing problem, enabling analytic solutions.
  • GVST, observable from trip records, links queue dynamics to hidden demand.
  • One‑sided estimator gives a closed‑form demand estimate; two‑sided solves a system of equations.
  • Both methods beat benchmarks on bike‑sharing and e‑scooter data, with one‑sided achieving acceptable accuracy.
  • The framework explicitly handles censoring from spatial‑temporal imbalance, a common shared‑mobility pitfall.
  • Read “Fundamentals of Queueing Theory” and “Locally balanced inductive matrix completion” for complementary insights.
  • Ready for fleet‑management and demand‑forecasting pipelines, it bridges queue theory and urban mobility.
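The censoring phenomenon itself is easy to reproduce (the paper's GVST/flipped-queue estimator is not attempted here): users who arrive when no vehicle is free go unrecorded, so observed demand understates real demand. Arrival rate, fleet size, and trip duration below are arbitrary:

```python
import random

random.seed(1)

def simulate(hours=10_000, arrival_rate=4.0, fleet=3, trip_hours=0.5):
    """Loss-system sketch: arrivals with no free vehicle leave unrecorded."""
    vehicles_free_at = [0.0] * fleet          # time each vehicle becomes free
    t, real, observed = 0.0, 0, 0
    while t < hours:
        t += random.expovariate(arrival_rate)   # Poisson user arrivals
        real += 1
        idle = [i for i, free in enumerate(vehicles_free_at) if free <= t]
        if idle:                                # vehicle available: trip recorded
            observed += 1
            vehicles_free_at[idle[0]] = t + random.expovariate(1 / trip_hours)
        # else: the user balks and the demand event is censored

    return real, observed

real, observed = simulate()
print(f"real rate ≈ {real / 10_000:.2f}/h, observed rate ≈ {observed / 10_000:.2f}/h")
```

Fitting a forecaster to the observed series alone would systematically underestimate demand at busy stations, which is the gap the paper's estimators are designed to close.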

Interests not found

We did not find any papers that match the interests below. Try other terms, and consider whether the content exists on arxiv.org.
  • Supply
  • AI for Supply Chain Optimization
You can edit or add more interests any time.
