How to Build the Future: Demis Hassabis
Y Combinator Startup Podcast
Summary
This episode features Demis Hassabis discussing the current state and future trajectory of Artificial General Intelligence (AGI), focusing on the advancements made by Google DeepMind.
The conversation highlights the remaining challenges in achieving AGI, the impact of AI on scientific discovery, and the strategic importance of open-source models and deep technology.
Key Points
- Achieving AGI requires breakthroughs in continual learning, long-term reasoning, and memory, which are currently unsolved aspects of AI development.
- Current AI paradigms like large-scale pre-training and reinforcement learning from human feedback are foundational but may need further innovation or entirely new ideas for AGI.
- DeepMind's historical focus on agent-based systems, exemplified by AlphaGo and AlphaZero, continues to inform their development of models like Gemini, emphasizing active problem-solving and planning.
- The development of smaller, efficient AI models through distillation is crucial for deploying AI across products used by billions of people and on edge devices, balancing capability with speed, cost, and privacy.
- While current large language models demonstrate impressive reasoning in some areas, they still exhibit "jagged intelligence," making elementary errors that indicate a gap in true introspection and robust reasoning.
- Agents are seen as the path to AGI, but their full potential is limited by the lack of continual learning, hindering their ability to adapt to context and complete complex tasks autonomously.
- AI is revolutionizing scientific discovery by tackling complex problems with massive combinatorial search spaces and clear objective functions, as seen with AlphaFold's impact on biology and drug discovery.
- Developing a "virtual cell" simulation is a long-term goal in biology, estimated to be around 10 years away, and requires advancements in data acquisition and modeling dynamic systems.
- The creation of truly novel scientific hypotheses, beyond pattern matching or extrapolation, is a frontier in AI reasoning that DeepMind is actively exploring with systems like CoScientist.
- Startups aiming to advance the AI frontier should combine AI advancements with deep technology areas and interdisciplinary expertise, creating defensible niches not easily replicated by simply wrapping APIs around existing models.
- Open-sourcing capable models like Gemma promotes accessibility and allows for broader experimentation, putting AI development in users' hands rather than keeping it solely in the cloud.
- Multimodality, inherent in Gemini's design, provides a significant advantage for understanding the physical world, robotics, and building advanced digital assistants.
- The development of AI for science is seen as a Promethean endeavor, requiring careful consideration of its use and potential misuse, with the ultimate goal of solving fundamental scientific challenges.
Conclusion
Current AI architectures are powerful but require significant advancements in areas like continual learning and robust reasoning to achieve Artificial General Intelligence (AGI).
AI is a transformative tool for scientific discovery, poised to unlock new avenues in fields ranging from biology and medicine to materials science and mathematics.
Building defensible AI companies involves combining AI's rapid progress with deep technological challenges and interdisciplinary expertise, rather than just leveraging existing foundation models.
Discussion Topics
- What are the most critical ethical considerations we must address as AI capabilities advance towards AGI?
- How can we best foster interdisciplinary collaboration between AI researchers and domain experts in fields like biology or materials science to accelerate discovery?
- What is the ideal balance between proprietary AI development and open-source initiatives for the future of AI innovation and accessibility?
Key Terms
- AGI
- Artificial General Intelligence; AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a human-like level.
- RLHF
- Reinforcement Learning from Human Feedback; a training technique that uses human preferences to guide AI model behavior.
- Chain of Thought
- A method for improving reasoning in large language models by prompting them to generate intermediate steps in their thinking process (see the prompt sketch after this list).
- Continual Learning
- The ability of an AI system to learn new information and adapt over time without forgetting previously acquired knowledge.
- Agents
- AI systems designed to perceive their environment, make decisions, and take actions to achieve specific goals autonomously.
- Distillation
- A technique in machine learning where a smaller, more efficient model is trained to mimic the behavior of a larger, more complex model (see the loss sketch after this list).
- Context Window
- The amount of input text that an AI language model can consider at any given time to generate a response.
- Multimodal AI
- AI systems capable of processing and understanding information from multiple types of data, such as text, images, audio, and video.
- Open Source
- Software whose source code is made available with a license in which the copyright holder grants users the rights to study, change, and distribute the software to anyone and for any purpose.
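To make the Chain of Thought entry above concrete, here is a minimal, model-agnostic prompting sketch. The `call_model` function is a hypothetical placeholder for whatever LLM API you use; it is not part of any specific library or product discussed in the episode.

```python
# Minimal chain-of-thought prompting sketch (illustrative only).
# `call_model` is a hypothetical stand-in for an LLM API call.

def call_model(prompt: str) -> str:
    """Placeholder: send `prompt` to a language model and return its text response."""
    raise NotImplementedError("Wire this up to your model provider of choice.")

question = "A train leaves at 3:40 pm and arrives at 6:15 pm. How long is the trip?"

# Plain prompt: the model answers directly, with no visible reasoning.
direct_prompt = question

# Chain-of-thought prompt: ask the model to lay out intermediate steps
# before committing to an answer, which tends to help on multi-step problems.
cot_prompt = (
    f"{question}\n\n"
    "Think through the problem step by step, showing each intermediate "
    "calculation, then state the final answer on its own line."
)

# answer = call_model(cot_prompt)
```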
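The Distillation entry can likewise be illustrated with a short PyTorch sketch of the standard softened-logits loss. The teacher and student here are arbitrary classifiers over the same label set, not Gemini or Gemma, and the temperature value is just a common default, not anything stated in the episode.

```python
# Minimal knowledge-distillation loss sketch in PyTorch (illustrative only).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits: torch.Tensor,
                      teacher_logits: torch.Tensor,
                      temperature: float = 2.0) -> torch.Tensor:
    """KL divergence between softened teacher and student output distributions."""
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    # Scale by T^2 so gradient magnitudes stay comparable across temperatures.
    return F.kl_div(student_log_probs, teacher_probs,
                    reduction="batchmean") * temperature ** 2

# Toy usage: random logits stand in for real model outputs.
teacher_logits = torch.randn(8, 10)                       # batch of 8, 10 classes
student_logits = torch.randn(8, 10, requires_grad=True)   # student is being trained
loss = distillation_loss(student_logits, teacher_logits)
loss.backward()                                           # gradients flow to the student only
```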
Timeline
Discussion on the missing components for AGI, including continual learning, long-term reasoning, and memory.
Hosts discuss the current AI architecture and what might be missing for AGI.
The challenge of continual learning and how it's currently being addressed with "duct tape" methods, drawing parallels to biological memory consolidation.
Exploration of how DeepMind's philosophy of reinforcement learning and search from AlphaGo is embedded in Gemini's development.
Conversation around the trade-offs and advancements in building both large frontier models and smaller, distilled models for efficiency.
Discussion on the potential and limitations of distillation for creating smaller, highly capable AI models.
The role of continual learning in enabling agents to perform full tasks and adapt to context.
Analysis of current AI reasoning capabilities, their limitations, and potential areas for improvement.
The current state and future of AI agents, which are described as "just getting started."
Examination of how individuals and companies are experimenting with AI agents and the current stage of value realization.
Discussion on creativity in AI, referencing AlphaGo's "move 37" and the aspiration for AI to invent entirely new games.
The significance of open-source and open-weight models like Gemma for local execution and user-driven AI development.
The benefits and strategic advantages of Gemini being built as a multimodal model from the start.
The implications of decreasing inference costs and what Google DeepMind is optimizing for.
Discussion on the progress and challenges in modeling complex biological systems beyond proteins, like full cellular systems.
Ranking scientific domains poised for dramatic transformation by AI in the next five years, highlighting AI's role as a tool for scientific discovery.
Advice for startups on advancing the AI frontier versus simply wrapping APIs, emphasizing deep technology and interdisciplinary approaches.
Identifying the characteristics of scientific domains ripe for AlphaFold-style breakthroughs, focusing on massive search spaces and clear objective functions.
The progress and potential for AI systems to perform genuine scientific reasoning beyond pattern matching.
Final advice for aspiring AI builders, emphasizing tackling hard problems, interdisciplinary work, and considering the impact of AGI.
Episode Details
- Podcast
- Y Combinator Startup Podcast
- Episode
- How to Build the Future: Demis Hassabis
- Official Link
- https://www.ycombinator.com/
- Published
- April 29, 2026