Abstract
Online marketplaces will be transformed by autonomous AI agents acting on
behalf of consumers. Rather than humans browsing and clicking,
vision-language-model (VLM) agents can parse webpages, evaluate products, and
transact. This raises a fundamental question: what do AI agents buy, and why?
We develop ACES, a sandbox environment that pairs a platform-agnostic VLM agent
with a fully programmable mock marketplace to study this question. We first
conduct basic rationality checks on simple tasks, and then, by
randomizing product positions, prices, ratings, reviews, sponsored tags, and
platform endorsements, we obtain causal estimates of how frontier VLMs actually
shop. Models show strong but heterogeneous position effects: all favor the top
row, yet different models prefer different columns, undermining the assumption
of a universal "top" rank. They penalize sponsored tags and reward
endorsements. Sensitivities to price, ratings, and reviews are directionally
human-like but vary sharply in magnitude across models. Motivated by scenarios
where sellers use AI agents to optimize product listings, we show that a
seller-side agent that makes minor tweaks to product descriptions, targeting AI
buyer preferences, can deliver substantial market-share gains if AI-mediated
shopping dominates. We also find that modal product choices can differ across
models and, in some cases, demand may concentrate on a few select products,
raising competition concerns. Together, our results illuminate how AI agents
may behave in e-commerce settings and surface concrete questions about seller
strategy, platform design, and regulation in an AI-mediated ecosystem.
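
To make the abstract's randomization-based design concrete, the sketch below
shows how such an experiment could be wired up. Everything here is
illustrative rather than ACES's actual interface: query_agent is a stub
standing in for the VLM buyer, and the attribute names, grid size, and
probabilities are invented. The point is only that independently randomizing
listing attributes and positions makes simple choice shares causally
interpretable.

```python
import random
from collections import Counter

GRID_ROWS, GRID_COLS = 2, 4  # hypothetical 2x4 results grid
N_TRIALS = 1000

def make_listing():
    """One product card with independently randomized attributes."""
    return {
        "price": round(random.uniform(10, 50), 2),
        "rating": round(random.uniform(3.0, 5.0), 1),
        "n_reviews": random.randint(0, 5000),
        "sponsored": random.random() < 0.25,
        "endorsed": random.random() < 0.10,  # e.g. a platform badge
    }

def query_agent(listings):
    """Stub for the VLM buyer: in a real run, render `listings` as a
    mock results page, show it to the agent, and parse which grid cell
    it chose. A uniform random pick keeps this sketch runnable."""
    return random.randrange(len(listings))

position_counts = Counter()
sponsored_chosen = 0

for _ in range(N_TRIALS):
    listings = [make_listing() for _ in range(GRID_ROWS * GRID_COLS)]
    choice = query_agent(listings)
    row, col = divmod(choice, GRID_COLS)
    position_counts[(row, col)] += 1
    sponsored_chosen += listings[choice]["sponsored"]

# Attributes and positions are assigned independently at random, so raw
# choice shares per cell are unbiased estimates of the position effect,
# and the chosen-vs-shown sponsored gap estimates the sponsored-tag penalty.
for (row, col), n in sorted(position_counts.items()):
    print(f"row {row}, col {col}: choice share {n / N_TRIALS:.3f}")
print(f"sponsored share among choices: {sponsored_chosen / N_TRIALS:.3f} "
      f"(shown share is 0.25 by construction)")
```

With a real agent behind query_agent, the same tallies would surface the
row and column preferences and the sponsored-tag penalty described above.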
Abstract
The study addresses the paradigm shift in corporate management in which AI is
moving from a decision-support tool to an autonomous decision-maker, with some
AI systems already appointed to leadership roles in companies. The central
problem identified is that the development of AI technologies is far outpacing
the creation of adequate legal and ethical guidelines.
The research proposes a "reference model" for the development and
implementation of autonomous AI systems in corporate management, built from a
synthesis of several key components intended to ensure legitimate and ethical
decision-making. The model introduces the concept of "computational law" or
"algorithmic law": a separate legal framework for AI systems in which rules
and regulations are translated into a machine-readable, algorithmic format,
avoiding the ambiguity of natural language. It also emphasises the need for a
"dedicated operational context" for autonomous AI systems, analogous to the
"operational design domain" of autonomous vehicles: a specific, clearly
defined environment and set of rules within which the AI can operate safely
and effectively. The model further advocates training AI systems on
controlled, synthetically generated data, so that fairness and ethical
considerations are embedded from the start, and proposes game theory as a
method for calculating the optimal strategy by which the AI can achieve its
goals within these ethical and legal constraints. Finally, the analysis
highlights the importance of explainable AI (XAI) for ensuring the
transparency and accountability of decisions made by autonomous systems,
which is crucial for building trust and for complying with the "right to
explanation".