Abstract
Generative AI does more than cut costs. It pulls products toward a shared
template, making offerings look and feel more alike while making true
originality disproportionately expensive. We capture this centripetal force in
a standard two-stage differentiated-competition framework and show how a single
capability shift simultaneously compresses perceived differences, lowers marginal cost, and raises fixed access costs. The intuition is straightforward.
When buyers see smaller differences across products, the payoff to standing
apart shrinks just as the effort to do so rises, so firms cluster around the
template. Prices fall and customers become more willing to switch. But the same
homogenization also squeezes operating margins, and rising fixed outlays deepen
the squeeze. The combination yields a structural prediction. There is a
capability threshold at which even two firms cannot both cover fixed costs, and
in a many-firm extension the sustainable number of firms falls as capability
grows. Concentration increases, and prices still fall. Our results hold under
broader preference shapes, non-uniform consumer densities, outside options,
capability-dependent curvatures, and modest asymmetries. We translate the
theory into two sufficient statistics for enforcement. On the one hand, a
conduct statistic and a viability statistic. Transactions or platform rules
that strengthen template pull or raise fixed access and originality costs can
lower prices today yet push the market toward monoculture. Remedies that
broaden access and promote template plurality and interoperability preserve the
price benefits of AI while protecting entry and variety. The paper thus
reconciles a live policy paradox: AI can make prices lower and entry harder at the same time. It also prescribes what to measure to tell which force is dominant in practice.
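A minimal sketch of the threshold logic, under textbook Hotelling-style assumptions rather than the paper's actual framework (the symbols below are illustrative, not the paper's notation): let t(a) be the perceived-difference (transport) parameter, c(a) marginal cost, and F(a) the fixed access cost, with capability a lowering t and c and raising F. With two symmetric firms serving a unit mass of consumers, the familiar equilibrium is
\[
  p^{*}(a) = c(a) + t(a), \qquad \pi_i(a) = \tfrac{1}{2}\, t(a) - F(a),
\]
so the equilibrium price falls as capability grows while per-firm profit shrinks, and the capability level \(\bar{a}\) solving \(\tfrac{1}{2} t(\bar{a}) = F(\bar{a})\) marks the threshold beyond which even two firms cannot both cover fixed costs.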
Abstract
This study evaluates Artificial Intelligence (AI) agents for Dhumbal, a
culturally significant multiplayer card game with imperfect information,
through a systematic comparison of rule-based, search-based, and learning-based
strategies. We formalize Dhumbal's mechanics and implement diverse agents,
including heuristic approaches (Aggressive, Conservative, Balanced,
Opportunistic), search-based methods such as Monte Carlo Tree Search (MCTS) and
Information Set Monte Carlo Tree Search (ISMCTS), and reinforcement learning
approaches including Deep Q-Network (DQN) and Proximal Policy Optimization
(PPO), as well as a random baseline. Evaluation involves within-category tournaments
followed by a cross-category championship. Performance is measured via win
rate, economic outcome, Jhyap success, cards discarded per round, risk
assessment, and decision efficiency. Statistical significance is assessed using Welch's t-test with Bonferroni correction, effect sizes are reported via Cohen's d, and uncertainty via 95% confidence intervals (CI). Across 1024 simulated rounds, the rule-based
Aggressive agent achieves the highest win rate (88.3%, 95% CI: [86.3, 90.3]),
outperforming ISMCTS (9.0%) and PPO (1.5%) through effective exploitation of
Jhyap declarations. The study contributes a reproducible AI framework, insights
into heuristic efficacy under partial information, and open-source code,
thereby advancing AI research and supporting digital preservation of cultural
games.
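A minimal sketch of the reported statistical protocol, not the authors' released code: the per-round payoffs below are placeholders, the number of pairwise comparisons used for the Bonferroni correction is assumed to be eight, and the win-rate interval uses a normal approximation.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical per-round payoffs for two agents (placeholder data only).
aggressive = rng.normal(loc=1.0, scale=0.5, size=1024)
ismcts = rng.normal(loc=0.2, scale=0.5, size=1024)

# Welch's t-test (unequal variances), Bonferroni-corrected over m comparisons.
m_comparisons = 8  # assumed number of pairwise agent comparisons
t_stat, p_value = stats.ttest_ind(aggressive, ismcts, equal_var=False)
p_bonferroni = min(p_value * m_comparisons, 1.0)

# Cohen's d with a pooled standard deviation.
pooled_sd = np.sqrt((aggressive.var(ddof=1) + ismcts.var(ddof=1)) / 2)
cohens_d = (aggressive.mean() - ismcts.mean()) / pooled_sd

# Normal-approximation 95% CI for a win rate, e.g. 904 wins in 1024 rounds.
wins, n = 904, 1024
p_hat = wins / n
half_width = 1.96 * np.sqrt(p_hat * (1 - p_hat) / n)

print(f"t = {t_stat:.2f}, Bonferroni p = {p_bonferroni:.3g}, d = {cohens_d:.2f}")
print(f"win rate {p_hat:.3f}, 95% CI [{p_hat - half_width:.3f}, {p_hat + half_width:.3f}]")

With 904 wins in 1024 rounds this interval is roughly [0.863, 0.903], consistent with the 88.3% win rate and [86.3, 90.3] CI quoted in the abstract.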