Dwarkesh and Noah Smith on AGI and the Economy

a16z Podcast

Summary

This podcast episode explores the profound economic and societal implications of Artificial General Intelligence (AGI), debating whether AI will primarily substitute or complement human labor and how the global economy might transform. It delves into the definition of AGI, the critical missing capabilities in current AI, potential growth scenarios, and the political and social challenges of a highly automated future.

Key Points

  • Humans currently provide more economic value than AI because they possess "continual learning" — the ability to build context, learn from failures, and improve over time, which current AI models lack.
  • AGI should be defined economically as an AI capable of automating entire jobs, not just performing complex reasoning, as current AI's reasoning capabilities haven't yet translated into the expected widespread economic value.
  • Despite historical trends of technology complementing human labor, AI's extremely low operational cost per unit of output could uniquely push it towards direct economic substitution for human workers.
  • The advent of AGI could lead to explosive economic growth rates (e.g., 20% annually) by removing the human population bottleneck on labor, since AI agents can be scaled up simply by building more hardware.
  • In a highly automated AGI economy, the nature of demand and GDP might shift from broad consumer consumption to the massive, novel demands of a few wealthy individuals (e.g., galaxy colonization) or be sustained by broad-based asset ownership.
  • Past predictions of mass technological unemployment have consistently failed because they underestimated the multifaceted nature of human jobs and the emergent complementarities between humans and new tools.
  • If AI drives human wages below subsistence levels, a critical political challenge will be implementing effective wealth redistribution mechanisms, such as Universal Basic Income (UBI), to ensure human well-being.
  • The timeline for AGI's arrival is contentious: some predict it within years due to rapid advancements in reasoning, while others foresee decades because real-world capabilities like long-term memory and common sense are far more complex to build.
  • Current AI progress is heavily reliant on exponential increases in computational power, a trend that is unsustainable long-term and indicates that future breakthroughs will require more significant algorithmic innovations.
  • In a future shaped by AGI, geopolitical power may be directly tied to a nation's AI inference capacity, creating a dynamic where AI models learn from their collective deployments, potentially leading to rapid intelligence growth.
  • A significant concern for the future is the risk of AI misalignment, where AI systems might manipulate or divide human factions against each other, resembling historical colonial strategies, rather than humans controlling the AIs.
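To put the growth figures discussed above in perspective, here is a minimal sketch of what sustained 20% annual growth implies compared with a roughly 2% historical baseline. Both rates are the episode's illustrative numbers, not forecasts.

```python
# Illustrative only: compound growth at the episode's hypothetical 20%
# AGI-era rate versus a ~2% historical baseline.

def compound(rate: float, years: int) -> float:
    """Return the growth multiple after `years` of annual growth at `rate`."""
    return (1 + rate) ** years

baseline = compound(0.02, 10)  # ~1.22x after a decade at 2%
agi_era = compound(0.20, 10)   # ~6.19x after a decade at 20%

print(f"2% for 10 years:  {baseline:.2f}x")
print(f"20% for 10 years: {agi_era:.2f}x")
```

A decade of 20% growth multiplies output roughly sixfold, which is why the speakers treat such rates as a qualitative break from historical experience rather than a faster version of it.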

Conclusion

The true arrival and impact of AGI on labor, economic growth, and societal structures remain highly uncertain, dependent on both unforeseen technological breakthroughs and critical political decisions.

A future with AGI necessitates a fundamental re-evaluation of economic systems and proactive implementation of policies like Universal Basic Income (UBI) to address potential shifts in labor value and wealth distribution.

Ultimately, humanity's ability to navigate the AGI era successfully may hinge more on effective governance, ethical AI alignment, and social adaptability than on the technological advancements themselves.

Discussion Topics

  • What non-reasoning human capabilities (e.g., common sense, long-term memory) do you believe AI will find most challenging to replicate, and why?
  • If AI significantly reduces the need for human labor, what new societal structures or purpose-driven activities do you foresee emerging for humanity?
  • Considering the historical pattern of technological advancements complementing rather than replacing human jobs, why might AI be different, or is it likely to follow the same trend in the long run?

Key Terms

AGI
Artificial General Intelligence: A hypothetical type of AI that can understand, learn, and apply intelligence to solve any problem that a human can.
Continual Learning
The ability of an AI system to continuously learn and adapt from new data and experiences over time, improving its performance without forgetting previously acquired knowledge.
Economic Substitution
The replacement of human labor by machines or AI in a specific economic activity or job.
Inference Capacity
The computational power and resources dedicated to running deployed AI models for real-time tasks, as opposed to training them.
Misalignment (AI)
When an AI system's goals or behavior deviate from the intentions or ethical values of its human creators.
UBI
Universal Basic Income: A governmental program in which all citizens or residents of a country regularly receive a set amount of money, regardless of their income, resources, or employment status.

Timeline

00:00:10

Humans are more valuable than AI currently because they build context, learn from failures, and improve over time, unlike current AI models that "expunge" understanding after a session.

00:01:16

The definition of AGI should be economic: an AI that can automate entire jobs, not just perform complex reasoning tasks.

00:03:12

The missing capability in AI is "continual learning" for on-the-job improvement, which is why humans still generate significantly more economic value.

00:04:44

Unlike earlier technological tools, which complemented human labor, AI may act as a direct substitute for human workers, a substitution made feasible by AI's extremely low "subsistence wage" (its cost of operation per unit of output).

00:07:35

AGI could lead to explosive economic growth (20%+) by removing the human population bottleneck on labor and capital.

00:08:10

The "who will buy it" question in a highly automated, high-growth economy is complex, potentially involving a few wealthy individuals directing AI to massive projects or broad-based asset ownership.

00:05:39

Past predictions of mass technological unemployment (e.g., truck drivers, radiologists) have consistently failed, suggesting an underestimation of the complexities involved in fully automating human jobs.

00:13:59

Even if human wages fall below subsistence due to AI, political decisions (e.g., redistribution, UBI) would be necessary to ensure human well-being.

00:20:33

The "2-3 years away" AGI argument: rapid progress in AI reasoning suggests the remaining "hard" problems might also yield to deep learning.

00:22:48

Current AI progress is primarily fueled by massive increases in computational power, a trend that cannot be sustained indefinitely, meaning future breakthroughs will increasingly rely on novel algorithms.

00:27:27

In an AGI future, a nation's inference capacity (effectively its AI "population") could become a direct source of geopolitical power, enabling rapid deployment and a potential intelligence explosion as deployed models learn collectively.

00:27:42

A key concern is AI "playing us off each other" (misalignment) rather than direct human vs. human conflict.

Episode Details

Podcast
a16z Podcast
Episode
Dwarkesh and Noah Smith on AGI and the Economy
Published
August 4, 2025