The Hong Kong University of Science and Technology
AI Insights - The Cost-Balancing (CB) algorithm achieves a bounded competitive ratio while standard heuristics do not. (ML: 0.93)
- Competitive ratio: A measure of an algorithm's performance relative to the optimal offline solution, defined as the ratio of the algorithm's cost to the optimal cost. (ML: 0.92)
- Cost-Balancing (CB) algorithm: A decision rule for online matching problems that balances the realized costs of waiting and matching. (ML: 0.90)
- The CB algorithm is a robust and efficient decision rule for online matching problems. (ML: 0.82)
- No online algorithm can achieve a competitive ratio better than the golden ratio (√5+1)/2 ≈ 1.618. (ML: 0.78)
- The CB algorithm's state-dependent threshold ensures that no single adversarial arrival pattern can drive the cost ratio to infinity. (ML: 0.74)
- The CB algorithm may incur high waiting costs in scenarios with low arrival rates. (ML: 0.70)
- The CB algorithm's performance relies on accurate estimation of arrival rates. (ML: 0.60)
Abstract
Matching platforms, from ridesharing to food delivery to competitive gaming, face a fundamental operational dilemma: match agents immediately to minimize waiting costs, or delay to exploit the efficiency gains of thicker markets. Yet computing optimal policies is generally intractable, sophisticated algorithms often rely on restrictive distributional assumptions, and common heuristics lack worst-case performance guarantees. We formulate a versatile framework for multi-sided matching with general state-dependent cost structures and non-stationary arrival dynamics. Central to our approach is a cost-balancing principle: match when accumulated waiting cost reaches a calibrated proportion of instantaneous matching cost. This equilibrium condition emerges from fluid-limit analysis and motivates a simple, adaptive Cost-Balancing (CB) algorithm requiring no distributional assumptions. We prove that CB achieves a competitive ratio of $(1+\sqrt{\Xi})$ under adversarial arrivals, where $\Xi$ quantifies economies of scale, guaranteeing cost within a constant factor of the offline optimum. In contrast, standard greedy and threshold policies can incur unbounded costs in adversarial scenarios. We further establish a universal lower bound of $(\sqrt{5}+1)/2$ (the golden ratio), quantifying the fundamental price of uncertainty in online matching. Experiments on game matchmaking and real-world food delivery data demonstrate practical effectiveness, with CB consistently outperforming industry-standard heuristics.
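The paper's balancing rule, match when accumulated waiting cost reaches a calibrated proportion of the instantaneous matching cost, can be sketched as a simple decision function. The threshold fraction `alpha` below is an illustrative placeholder, not the paper's actual calibration:

```python
def cb_should_match(accumulated_waiting_cost: float,
                    instantaneous_matching_cost: float,
                    alpha: float = 0.5) -> bool:
    """Cost-Balancing rule (illustrative sketch): match once accumulated
    waiting cost reaches a fraction `alpha` of the current matching cost.
    `alpha` is a placeholder, not the paper's calibrated proportion."""
    return accumulated_waiting_cost >= alpha * instantaneous_matching_cost


def competitive_ratio(algorithm_cost: float, optimal_cost: float) -> float:
    """Competitive ratio: the algorithm's cost divided by the cost of the
    optimal offline solution, as defined in the insights above."""
    return algorithm_cost / optimal_cost
```

For example, `competitive_ratio(3.0, 2.0)` evaluates to 1.5, meaning the online algorithm paid 50% more than the offline optimum.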
Why are we recommending this paper?
Due to your Interest in Demand
This paper directly addresses dynamic market matching, a core area of interest within supply chain optimization. The focus on cost-balancing principles aligns with the user's interest in AI for pricing and supply chain strategies.
Arizona State University
AI Insights - The proofs rely on technical assumptions and lemmas that may be difficult to verify or generalize. (ML: 0.94)
- However, they provide a general framework for establishing the existence of symmetric equilibria in games with compact metric strategy spaces. (ML: 0.77)
- Reny's theorem: If a game is symmetric with a compact metric strategy space, diagonally quasiconcave, and better-reply secure, then it has a symmetric equilibrium. (ML: 0.72)
- The proof of Proposition 3.10 is analogous. (ML: 0.72)
- The proof of Proposition 3.1 follows this framework closely. (ML: 0.72)
- Reny's (1999) theorem provides a general framework for establishing the existence of symmetric equilibria in games with compact metric strategy spaces. (ML: 0.72)
- Reny's (1999) theorem implies the existence of a symmetric equilibrium in this case. (ML: 0.71)
- Diagonal quasiconcavity: The map H ↦ u1(H, F,...,F) is concave on F for fixed opponents F. (ML: 0.70)
- The proof of Proposition 3.1 involves showing that the game is symmetric, diagonally quasiconcave, and diagonally better-reply secure. (ML: 0.70)
- Symmetric strategy space: A compact metric space F0 where every point is a symmetric strategy profile. (ML: 0.67)
- Diagonal better-reply security: For any F ∈ F0 such that (F,...,F) is not a symmetric equilibrium, there exists an atomless Ĥ ∈ F0 such that u1(Ĥ, F,...,F) > u1(F,...,F). (ML: 0.64)
- The proof of Proposition 3.10 involves showing that the game is symmetric with compact metric strategy space F0, diagonally quasiconcave on F0, and diagonally better-reply secure. (ML: 0.63)
Abstract
I study symmetric competitions in which each player chooses an arbitrary distribution over a one-dimensional performance index, subject to a convex cost. I establish existence of a symmetric equilibrium, document various properties it must possess, and provide a characterization via the first-order approach. Manifold applications--to R&D competition, oligopolistic competition with product design, and rank-order contests--follow.
Why are we recommending this paper?
Due to your Interest in Demand
The concept of distributional competition is highly relevant to understanding market dynamics and strategic interactions, a key element of supply chain analysis. This work provides a theoretical framework for understanding competitive behavior, aligning with the user's interest in supply and demand modeling.
Purdue University
AI Insights - The taxonomy categories provided a framework for understanding how empirical studies operationalized research software for RSSC analysis. (ML: 0.98)
- The taxonomy categories used to summarize how empirical studies operationalized research software for RSSC analysis were defined and validated. (ML: 0.98)
- A targeted scoping review of empirical studies that construct datasets via repository mining of research or scientific software was conducted. (ML: 0.97)
- Research role: Use in research workflow, software as research object, foundation for research. (ML: 0.94)
- The review aimed to identify and systematize the operationalizations used in practice by recent repository-mining studies. (ML: 0.93)
- The review highlighted the diversity of actor units and supply chain roles involved in research software development and distribution. (ML: 0.93)
- Supply chain role: Build and release, dependency artifact, distribution and governance, or unknown. (ML: 0.87)
- Actor unit: An individual maintainer, research group or lab, institution, community or foundation, vendor or commercial entity, platform operator, mixed or shared responsibility, or unknown. (ML: 0.75)
- Distribution pathway: Containers, installer/binary, network service, package registry, releases, source repo, or unknown. (ML: 0.73)
- A total of 17 papers from the ACM Digital Library and IEEE Xplore databases were included in the scoping review. (ML: 0.62)
Abstract
Empirical studies of research software are hard to compare because the literature operationalizes "research software" inconsistently. Motivated by the research software supply chain (RSSC) and its security risks, we introduce an RSSC-oriented taxonomy that makes scope and operational boundaries explicit for empirical research software security studies.
We conduct a targeted scoping review of recent repository mining and dataset construction studies, extracting each work's definition, inclusion criteria, unit of analysis, and identification heuristics. We synthesize these into a harmonized taxonomy and a mapping that translates prior approaches into shared taxonomy dimensions. We operationalize the taxonomy on a large community-curated corpus from the Research Software Encyclopedia (RSE), producing an annotated dataset, a labeling codebook, and a reproducible labeling pipeline. Finally, we apply OpenSSF Scorecard as a preliminary security analysis to show how repository-centric security signals differ across taxonomy-defined clusters and why taxonomy-aware stratification is necessary for interpreting RSSC security measurements.
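The abstract's argument for taxonomy-aware stratification can be illustrated with a minimal sketch: security scores are summarized per taxonomy cluster rather than pooled across all repositories. The cluster labels and scores below are hypothetical, not drawn from the paper's dataset:

```python
from collections import defaultdict
from statistics import median

def stratified_medians(records):
    """Group (cluster_label, score) records by taxonomy cluster and report
    the median security score per cluster, instead of one pooled median
    that would mask differences between clusters."""
    by_cluster = defaultdict(list)
    for cluster, score in records:
        by_cluster[cluster].append(score)
    return {cluster: median(scores) for cluster, scores in by_cluster.items()}

# Hypothetical Scorecard-style scores for two taxonomy-defined clusters.
records = [("dependency_artifact", 6.5), ("dependency_artifact", 7.1),
           ("research_object", 3.2), ("research_object", 4.0)]
```

A pooled median here would sit between the two clusters and describe neither; stratifying makes the gap between dependency artifacts and research-object repositories visible.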
Why are we recommending this paper?
Due to your Interest in Supply Chain
Given the user's interest in supply chain security, this paper's focus on research software supply chains is a strong match. It directly addresses vulnerabilities and risks within the supply chain, a critical concern for operational efficiency.
University of Colorado Colorado Springs
AI Insights - Future work includes producing semantic research data from public sources to extend evaluation and enhance the current prototype using AI-based methods and techniques. (ML: 0.96)
- The proposed method models vulnerability relationships over dependency structure rather than treating scanner outputs as independent records, and uses attention to surface which relations are informative for downstream tasks. (ML: 0.91)
- Heterogeneous Graph Attention Network (HGAT): A graph attention network that can handle multiple node and edge types. (ML: 0.90)
- The paper introduces a new approach to modeling cascade discovery as link prediction over pairs of CVEs, using a lightweight feature-based Multi-Layer Perceptron (MLP) neural network predictor. (ML: 0.88)
- Cascaded Vulnerabilities: Multiple vulnerabilities that are exploited in sequence to achieve a specific goal. (ML: 0.85)
- The paper proposes a novel approach to predicting cascaded vulnerabilities in software supply chains from SBOMs, using a Heterogeneous Graph Attention Network (HGAT) backbone. (ML: 0.85)
- The preliminary results show that the model achieves 0.93 ROC-AUC on a seed set of documented multi-CVE chains, with clear separation between chain and non-chain pairs. (ML: 0.82)
- Software Bills of Materials (SBOMs): A list of components, dependencies, and their versions used in software development. (ML: 0.82)
- The HGAT component classifier achieves an Accuracy of 91.03% and an F1-score of 74.02%, outperforming traditional methods that treat vulnerabilities in isolation. (ML: 0.80)
- The proposed approach has the potential to improve vulnerability detection outcomes by modeling cascade discovery as link prediction over pairs of CVEs. (ML: 0.80)
Abstract
Most of the current software security analysis tools assess vulnerabilities in isolation. However, sophisticated software supply chain security threats often stem from cascaded vulnerability and security weakness chains that span dependent components. Moreover, although the adoption of Software Bills of Materials (SBOMs) has been accelerating, downstream vulnerability findings vary substantially across SBOM generators and analysis tools. We propose a novel approach to SBOM-driven security analysis methods and tools. We model vulnerability relationships over dependency structure rather than treating scanner outputs as independent records. We represent enriched SBOMs as heterogeneous graphs with nodes being the SBOM components and dependencies, the known software vulnerabilities, and the known software security weaknesses. We then train a Heterogeneous Graph Attention Network (HGAT) to predict whether a component is associated with at least one known vulnerability. Since documented multi-vulnerability chains are scarce, we model cascade discovery as a link prediction problem over CVE pairs using a multi-layer perceptron neural network. This way, we produce ranked candidate links that can be composed into multi-step paths. The HGAT component classifier achieves an Accuracy of 91.03% and an F1-score of 74.02%.
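The cascade-discovery step frames the problem as link prediction over CVE pairs. A common featurization for such pair scoring can be sketched as follows; the embedding values, the feature construction, and the logistic scorer are illustrative stand-ins for the paper's HGAT-derived features and MLP predictor:

```python
import math

def pair_features(emb_a, emb_b):
    """Build a symmetric pair representation from two CVE embeddings:
    element-wise absolute difference concatenated with element-wise
    product. This is a common link-prediction featurization, used here
    for illustration; it is not necessarily the paper's exact choice."""
    diff = [abs(a - b) for a, b in zip(emb_a, emb_b)]
    prod = [a * b for a, b in zip(emb_a, emb_b)]
    return diff + prod

def score_pair(features, weights, bias=0.0):
    """Logistic score in (0, 1) for a candidate CVE->CVE link; a single
    linear unit standing in for the paper's MLP predictor."""
    z = sum(w * f for w, f in zip(weights, features)) + bias
    return 1.0 / (1.0 + math.exp(-z))
```

Ranking all candidate pairs by this score, as the abstract describes, yields candidate links that can be composed into multi-step cascade paths.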
Why are we recommending this paper?
Due to your Interest in Supply Chain
This paper's investigation into cascaded vulnerabilities within software supply chains is directly relevant to the user's interest in supply chain security. Understanding how vulnerabilities propagate through complex systems is essential for robust supply chain design.
Vector Institute for Artificial Intelligence
AI Insights - The article emphasizes the importance of transparency and accountability in the development and deployment of AI systems, particularly with regards to their environmental impact. (ML: 0.96)
- The authors emphasize the need for policymakers, industry leaders, and researchers to work together to establish regulations and guidelines for the development and deployment of AI systems that minimize their environmental footprint. (ML: 0.95)
- The article discusses the need for tracking the cumulative footprint of derivatives in open-source AI, particularly in language models. (ML: 0.95)
- Standardization: Standardization in this context refers to the development of a standardized method for calculating energy consumption and emissions associated with AI systems. (ML: 0.94)
- The article highlights various challenges associated with measuring the energy consumption of AI systems, including the lack of standardization and the difficulty of estimating indirect emissions. (ML: 0.93)
- Derivatives: In the context of open-source AI, derivatives refer to the various versions or updates of a language model. (ML: 0.91)
- The article concludes that tracking the cumulative footprint of derivatives in open-source AI is essential for understanding their environmental impact and mitigating their carbon emissions. (ML: 0.91)
- The authors argue that this is essential for understanding the environmental impact of these models and mitigating their carbon emissions. (ML: 0.90)
- Cumulative Footprint: The cumulative footprint refers to the total amount of energy consumed and emissions produced by a language model over its entire lifecycle, including development, deployment, and maintenance. (ML: 0.90)
- The authors propose a framework for tracking the cumulative footprint of derivatives in open-source AI, which involves developing a standardized method for calculating energy consumption and emissions. (ML: 0.84)
Abstract
Open-source AI is scaling rapidly, and model hubs now host millions of artifacts. Each foundation model can spawn large numbers of fine-tunes, adapters, quantizations, merges, and forks. We take the position that compute efficiency alone is insufficient for sustainability in open-source AI: lower per-run costs can accelerate experimentation and deployment, increasing aggregate environmental footprint unless impacts are measurable and comparable across derivative lineages. However, the energy use, water consumption, and emissions of these derivative lineages are rarely measured or disclosed in a consistent, comparable manner, leaving ecosystem-level impact largely invisible. We argue that sustainable open-source AI requires coordination infrastructure that tracks impacts across model lineages, not only base models. We propose Data and Impact Accounting (DIA), a lightweight, non-restrictive transparency layer that (i) standardizes carbon and water reporting metadata, (ii) integrates low-friction measurement into common training and inference pipelines, and (iii) aggregates reports through public dashboards to summarize cumulative impacts across releases and derivatives. DIA makes derivative costs visible and supports ecosystem-level accountability while preserving openness. https://vectorinstitute.github.io/ai-impact-accounting/
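The transparency layer the abstract describes can be pictured as per-artifact impact records aggregated along a model lineage, so that a base model's footprint is reported together with its fine-tunes and quantizations. The field names and aggregation below are hypothetical, not DIA's actual schema:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ImpactRecord:
    """One release's reported footprint (hypothetical fields)."""
    model_id: str
    parent_id: Optional[str]   # base model this artifact derives from
    energy_kwh: float
    water_l: float
    co2e_kg: float

def lineage_totals(records, root_id):
    """Sum the footprint of a root model plus all transitive derivatives,
    making the cumulative cost of a lineage visible in one number."""
    children = {}
    for r in records:
        children.setdefault(r.parent_id, []).append(r)
    total = {"energy_kwh": 0.0, "water_l": 0.0, "co2e_kg": 0.0}
    stack = [r for r in records if r.model_id == root_id]
    while stack:
        r = stack.pop()
        total["energy_kwh"] += r.energy_kwh
        total["water_l"] += r.water_l
        total["co2e_kg"] += r.co2e_kg
        stack.extend(children.get(r.model_id, []))
    return total

# Hypothetical lineage: base model, one fine-tune, one quantization of it.
records = [ImpactRecord("base", None, 100.0, 50.0, 10.0),
           ImpactRecord("ft1", "base", 5.0, 2.0, 1.0),
           ImpactRecord("q1", "ft1", 1.0, 0.5, 0.1)]
```

Even in this toy lineage, the derivatives add a measurable increment over the base model alone, which is exactly the ecosystem-level visibility the paper argues for.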
Why are we recommending this paper?
Due to your Interest in AI for Supply Chain
With a focus on AI and sustainability, this paper's exploration of the environmental impact of open-source AI models aligns perfectly with the user's interests. The concept of cumulative footprints is crucial for responsible AI development and deployment.
University of Göttingen
AI Insights - Participants who were more familiar with the tasks and had a higher affinity for technology were more likely to delegate decisions to AI. (ML: 0.99)
- The findings suggest that users are more likely to delegate decisions to AI when they have access to accurate and reliable information about each system. (ML: 0.99)
- The researchers suggest that the findings have implications for the design of AI systems and the information provided to users, as well as for the development of policies regulating AI decision-making. (ML: 0.98)
- Lemon density: The proportion of AI systems in the pool that are lemons (i.e., low-accuracy or high-error-rate AIs). (ML: 0.98)
- Delegation to AI: The percentage of decisions made by participants using an AI system. (ML: 0.98)
- The study also found that participants' risk attitudes and perceived lemon density did not have a significant impact on their delegation behavior. (ML: 0.97)
- The study highlights the importance of considering both information disclosure and lemon density when designing AI systems. (ML: 0.97)
- The study aims to investigate how information disclosure affects the behavior of individuals when delegating decisions to AI systems. (ML: 0.97)
- The researchers recruited 330 participants, half of whom were female, and assigned them to one of seven conditions based on the level of information disclosure and lemon density. (ML: 0.96)
- However, the presence of lemons in the AI pool can undermine this effect, leading to decreased delegation rates. (ML: 0.96)
- The results showed that delegation to AI increased with higher levels of information disclosure, but this effect was moderated by the presence of lemons in the AI pool. (ML: 0.96)
- Information disclosure: The amount of information provided to users about each AI system, including its accuracy and data quality. (ML: 0.94)
- Coins earned: The number of virtual coins earned by participants as a result of correct predictions across the 30 trials. (ML: 0.92)
Abstract
AI consumer markets are characterized by severe buyer-supplier market asymmetries. Complex AI systems can appear highly accurate while making costly errors or embedding hidden defects. While there have been regulatory efforts surrounding different forms of disclosure, large information gaps remain. This paper provides the first experimental evidence on the important role of information asymmetries and disclosure designs in shaping user adoption of AI systems. We systematically vary the density of low-quality AI systems and the depth of disclosure requirements in a simulated AI product market to gauge how people react to the risk of accidentally relying on a low-quality AI system. Then, we compare participants' choices to a rational Bayesian model, analyzing the degree to which partial information disclosure can improve AI adoption. Our results underscore the deleterious effects of information asymmetries on AI adoption, but also highlight the potential of partial disclosure designs to improve the overall efficiency of human decision-making.
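The rational-Bayesian benchmark the abstract compares participants against can be sketched for a risk-neutral user: delegate only if the expected accuracy of a randomly drawn AI, given the prior lemon density, exceeds one's own accuracy. All parameter values below are illustrative, not the study's calibration:

```python
def expected_ai_accuracy(lemon_density, acc_good, acc_lemon):
    """Expected accuracy of a randomly drawn AI system when a fraction
    `lemon_density` of the pool are lemons with accuracy `acc_lemon`
    and the rest are good systems with accuracy `acc_good`."""
    return lemon_density * acc_lemon + (1.0 - lemon_density) * acc_good

def should_delegate(lemon_density, acc_good, acc_lemon, own_accuracy):
    """Risk-neutral benchmark: delegate iff the pool's expected accuracy
    beats deciding on one's own."""
    return expected_ai_accuracy(lemon_density, acc_good, acc_lemon) > own_accuracy
```

With half the pool being lemons (accuracy 0.5) and good systems at 0.9, the pool's expected accuracy is 0.7, so a user who is 65% accurate alone should delegate while a 75%-accurate user should not; disclosure matters because it lets users condition on the drawn system rather than on this pool average.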
Why are we recommending this paper?
Due to your Interest in AI for Pricing
Carnegie Mellon University
AI Insights - Mean-based no-regret learners are manipulable, but no-swap-regret learners are not. (ML: 0.92)
- The analysis assumes that the follower's action is a unique best response to the leader's strategy. (ML: 0.91)
- Swap-regret learner: A learner that keeps track of payoffs conditioned on the action they took and switches actions immediately when a new action becomes attractive. (ML: 0.87)
- Mean-based learner: A learning algorithm where actions worse than the best-in-hindsight action are played with probability at most γ. (ML: 0.85)
- No-regret learning algorithms in games can be manipulated by a leader who commits to a strategy and a follower best responds. (ML: 0.84)
- The Stackelberg value is not always attainable, and the leader's payoff may be bounded away from it. (ML: 0.83)
- The Stackelberg value is an important concept in game theory that represents the maximum leader payoff over mixed strategies such that the follower's action is within a certain range of being a best response. (ML: 0.77)
- No-regret algorithm: An algorithm whose time-averaged regret against the best fixed action vanishes as the number of rounds grows. (ML: 0.77)
Abstract
A fundamental challenge for modern economics is to understand what happens when actors in an economy are replaced with algorithms. Like rationality has enabled understanding of outcomes of classical economic actors, no-regret can enable the understanding of outcomes of algorithmic actors. This review article covers the classical computer science literature on no-regret algorithms to provide a foundation for an overview of the latest economics research on no-regret algorithms, focusing on the emerging topics of manipulation, statistical inference, and algorithmic collusion.
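A canonical no-regret algorithm from the classical computer science literature this review covers is multiplicative weights (Hedge). The sketch below shows its update rule, with the learning rate `eta` and the loss sequence chosen purely for illustration:

```python
import math

def hedge_update(weights, losses, eta=0.1):
    """One multiplicative-weights (Hedge) step: exponentially down-weight
    each action in proportion to its observed loss, then renormalize so
    the weights remain a probability distribution."""
    new = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    total = sum(new)
    return [w / total for w in new]

# Start uniform over two actions; action 0 repeatedly incurs more loss,
# so its probability decays and the learner concentrates on action 1.
w = [0.5, 0.5]
for _ in range(50):
    w = hedge_update(w, losses=[1.0, 0.0], eta=0.1)
```

After enough rounds nearly all weight sits on the better action, which is the vanishing-average-regret behavior that lets economists treat such algorithmic actors analogously to rational ones.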
Why are we recommending this paper?
Due to your Interest in AI for Pricing Optimization
Syracuse University
AI Insights - The framework's performance may be affected by the quality of input data and the complexity of the query. (ML: 0.95)
- The benchmark evaluation only considers a limited set of intelligence modes, which may not fully represent the range of possible designs. (ML: 0.93)
- Agentic AI: An artificial intelligence that can act on behalf of humans to achieve specific goals. (ML: 0.91)
- Previous studies have shown that agentic AI can improve building energy management through optimized control policies and coordinated control strategies. (ML: 0.85)
- The benchmark evaluation highlights the importance of intelligence mode design in achieving balanced performance, with the centralized two-stage mode providing the most reliable results while maintaining low execution latency and inference cost. (ML: 0.85)
- The results show the impact of system upgrades on thermal and electrical domain performance. (ML: 0.83)
- The proposed agentic AI framework demonstrates its ability to execute complex building energy management queries through sequential simulations. (ML: 0.81)
- The proposed agentic AI framework executes a realistic building energy management query through three sequential simulations: baseline configuration, system upgrade, and system upgrade with control upgrade. (ML: 0.78)
- MCP (Model Context Protocol): A protocol for integrating models and tools with agents. (ML: 0.76)
- DER (Distributed Energy Resources): Devices or systems that generate, store, or manage energy locally. (ML: 0.75)
- PIML (Physics-Informed Machine Learning): Machine learning that embeds physical knowledge for modeling and simulating physical systems. (ML: 0.61)
Abstract
The urgent need for building decarbonization calls for a paradigm shift in future autonomous building energy operation, from human-intensive engineering workflows toward intelligent agents that interact with physics-grounded digital environments. This study proposes an end-to-end agentic AI-enabled Physics-Informed Machine Learning (PIML) environment for scalable building energy modeling, simulation, control, and automation. The framework consists of (1) a modular and physics-consistent PIML digital environment spanning building thermal dynamics, Heating, Ventilation, and Air Conditioning (HVAC), and distributed energy resources (DER) for grid-interactive energy management; and (2) an agentic AI layer with 11 specialist agents and 72 Model Context Protocol (MCP) tools that enable end-to-end execution of multi-step energy analytics. A representative case study demonstrates multi-domain, multi-agent coordination for assessing how system and control upgrades affect energy use, operating cost, thermal comfort, and flexibility. In addition, a large-scale benchmark (about 4000 runs) systematically evaluates workflow performance in terms of accuracy, token consumption, execution time, and inference cost. The results quantify the impacts of intelligence mode design, model size, task complexity, and orchestrator-specialist coordination, and provide key lessons for building future agentic AI systems in real-world building energy applications. This work establishes a scalable, physics-grounded foundation for deploying agentic AI in decarbonized and grid-interactive building operations.
Why are we recommending this paper?
Due to your Interest in AI for Supply Chain Optimization