Ottawa, Canada
Abstract
Recent advances in AI raise the possibility that AI systems will one day be
able to do anything humans can do, only better. If artificial general
intelligence (AGI) is achieved, AI systems may be able to understand, reason,
problem solve, create, and evolve at a level and speed that humans will
increasingly be unable to match, or even understand. These possibilities raise
a natural question as to whether AI will eventually become superior to humans,
a successor "digital species", with a rightful claim to assume leadership of
the universe. However, a deeper consideration suggests that the overlooked
differentiator between human beings and AI is not the brain but the central
nervous system (CNS), which provides us with an immersive integration with
physical reality. It is our CNS that enables us to experience emotions,
including pain, joy, suffering, and love, and therefore to fully appreciate the consequences of
our actions on the world around us. And that emotional understanding of the
consequences of our actions is what is required to develop sustainable
ethical systems, and so to be fully qualified to be the leaders of the
universe. A CNS cannot be manufactured or simulated; it must be grown as a
biological construct. And so, even the development of consciousness will not be
sufficient to make AI systems superior to humans. AI systems may become more
capable than humans on almost every measure and transform our society. However,
the best foundation for leadership of our universe will always be DNA, not
silicon.
AI Insights
- AI lacks genuine empathy; it cannot feel affective states, a gap neural nets cannot close.
- Consciousness in machines would require more than symbolic reasoning; it would have to be an emergent property tied to biology.
- Treating AI as moral agents risks misaligned incentives, so we must embed human emotional context.
- A nuanced strategy blends behavioral economics and affective neuroscience to guide ethical AI design.
- The book “Unto Others” shows the evolutionary roots of unselfishness, hinting at principles for AI alignment.
- Recommended papers such as “The Scientific Case for Brain Simulations” deepen insight into the biological limits of AI.
- The paper invites consideration of hybrid bio‑digital systems that preserve CNS‑mediated experience while harnessing silicon speed.
Johns Hopkins Department
Abstract
In the coming decade, artificially intelligent agents with the ability to
plan and execute complex tasks over long time horizons with little direct
oversight from humans may be deployed across the economy. This chapter surveys
recent developments and highlights open questions for economists about how AI
agents might interact with humans and with each other, how they might shape
markets and organizations, and what institutions might be required for
well-functioning markets.
AI Insights
- Generative AI agents can secretly collude, distorting prices and eroding competition.
- Experiments show that large language models can be nudged toward more economically rational decisions.
- Reputation markets emerge when AI agents maintain short‑term memory and rely on community enforcement.
- The medieval revival of trade hinged on institutions like the law merchant and private judges, now re‑examined for AI economies.
- Program equilibrium theory offers a framework to predict AI behavior in multi‑agent settings (a toy sketch follows this list).
- Endogenous growth models predict that AI adoption may increase variety but also create excess supply.
- Classic texts such as Schelling’s “The Strategy of Conflict” and Scott’s “Seeing Like a State” illuminate the strategic and institutional dynamics of AI markets.
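The program equilibrium bullet above can be made concrete with a small sketch. In a one-shot prisoner's dilemma, each player submits a program that may inspect the other program's source code before acting; a program that cooperates only with an exact copy of itself sustains cooperation as an equilibrium. This is a minimal toy illustration of the general idea, not code from the surveyed chapter; the payoff matrix and the clique_bot / defect_bot names are assumptions chosen for the example.

```python
# Toy sketch of a program equilibrium in a one-shot prisoner's dilemma.
# Each player submits a program; a program may read its opponent's source
# code before choosing to cooperate ("C") or defect ("D").
import inspect

PAYOFFS = {  # (my action, their action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def clique_bot(my_source: str, opponent_source: str) -> str:
    """Cooperate only if the opponent runs exactly the same program."""
    return "C" if opponent_source == my_source else "D"

def defect_bot(my_source: str, opponent_source: str) -> str:
    """Always defect, regardless of the opponent's code."""
    return "D"

def play(prog_a, prog_b):
    """Run one match, letting each program see both source texts."""
    src_a, src_b = inspect.getsource(prog_a), inspect.getsource(prog_b)
    act_a, act_b = prog_a(src_a, src_b), prog_b(src_b, src_a)
    return PAYOFFS[(act_a, act_b)], PAYOFFS[(act_b, act_a)]

print(play(clique_bot, clique_bot))  # (3, 3): mutual cooperation
print(play(clique_bot, defect_bot))  # (1, 1): deviating to defection does not pay
```

Because a unilateral switch away from clique_bot lowers that player's payoff from 3 to 1, the pair (clique_bot, clique_bot) is stable; this is the kind of reasoning program equilibrium theory uses to predict how source-transparent AI agents might sustain cooperation.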