How François Chollet Is Building A New Path To AGI
Y Combinator Startup Podcast
Summary
François Chollet discusses his new lab, Ndea, which is exploring a paradigm shift in AI research away from deep learning towards symbolic models and "symbolic descent" for greater efficiency and optimality.
The conversation also covers the evolution of the ARC Prize benchmarks, the limitations of current LLM approaches, and the definition and measurement of general intelligence, with a prediction of AGI by 2030.
Key Points
- Ndea is developing a new branch of machine learning that aims to be more optimal than deep learning by using concise symbolic models instead of parametric curves, moving beyond gradient descent to a "symbolic descent" approach.
- Current AI progress, particularly with LLMs, is heavily reliant on the existing deep learning stack, which Chollet believes will not be the foundation of AI in 50 years; he advocates for exploring alternative, more optimal approaches.
- The success of coding agents is attributed to the verifiable reward signal provided by code, enabling full automation in domains with formal verification, unlike domains like natural language essay writing.
- Chollet defines Artificial General Intelligence (AGI) as a system that can approach any new problem with the same efficiency as a human, requiring minimal data and compute, differentiating it from the common definition of automating economically valuable tasks.
- The ARC Prize benchmarks (V1, V2, V3) have served as indicators of AI progress, with V1 and V2 highlighting the limitations of base LLMs and the emergence of reasoning capabilities, while V3 focuses on measuring "agentic intelligence" through interactive exploration.
- The saturation of ARC V2 was achieved not necessarily through smarter models, but through a new paradigm of post-training and "agent harness" strategies that enable brute-force mining of problem spaces in verifiable domains.
- Chollet predicts AGI will arrive around 2030, coinciding with potential ARC V6 or V7 benchmarks, and believes future AI will be built on more fundamental, optimal principles rather than simply scaling current LLM stacks.
- The Ndea approach emphasizes creating extremely concise symbolic models, potentially leading to AI systems that are much smaller and more efficient than current LLMs, with a fluid intelligence engine that could fit into megabytes.
- Chollet suggests that while current LLM scaling is impressive, a more fundamental shift in approach is needed for true AGI, and that future AI might be built on codebases as small as 10,000 lines, focusing on learning and compounding improvements without human bottlenecks.
- The evolution of ARC benchmarks reflects a moving target, with V3 specifically designed to measure agentic intelligence in interactive environments, making it more resistant to the targeted strategies used for V2.
- Chollet emphasizes that AI progress is inevitable and advises individuals to leverage AI tools and expertise rather than fearing job displacement, framing AI as an empowering force for those who understand and utilize it effectively.
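The episode leaves "symbolic descent" abstract. As a loose, hypothetical sketch (the expression grammar, depth limit, and shortest-consistent-model criterion below are my illustrative assumptions, not the lab's actual algorithm), replacing gradient descent over numeric parameters with a search for the most concise symbolic model of the data might look like:

```python
# Toy contrast with deep learning's parametric curves: instead of nudging
# weights along a gradient, enumerate small symbolic expressions and keep
# the shortest one that explains the data exactly.

OPS = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}

def candidates(depth):
    """Yield (description, function) pairs for expressions in x up to `depth`."""
    yield "x", lambda x: x
    for c in range(-2, 3):
        # Bind c immediately so each lambda captures its own constant.
        yield str(c), (lambda c: lambda x: c)(c)
    if depth == 0:
        return
    for name, op in OPS.items():
        for dl, fl in candidates(depth - 1):
            for dr, fr in candidates(depth - 1):
                yield (f"({name} {dl} {dr})",
                       (lambda op, fl, fr: lambda x: op(fl(x), fr(x)))(op, fl, fr))

def symbolic_fit(examples, max_depth=2):
    """Return the shortest expression reproducing every example exactly,
    i.e. the most concise symbolic model of the data."""
    best = None
    for desc, f in candidates(max_depth):
        if all(f(x) == y for x, y in examples):
            if best is None or len(desc) < len(best[0]):
                best = (desc, f)
    return best

# Recover f(x) = x*x + 1 from seven input/output pairs.
examples = [(x, x * x + 1) for x in range(-3, 4)]
model = symbolic_fit(examples)
```

In this framing, the continuous parameter updates of gradient descent are replaced by a discrete search whose objective rewards both fitting the data and conciseness of the resulting program.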
Conclusion
The current AI landscape is dominated by scaling existing paradigms, but true progress towards AGI may require a fundamental shift towards more optimal, symbolic approaches, as championed by Ndea.
Benchmarks like the ARC Prize are crucial for identifying and measuring the capabilities that distinguish general intelligence from specialized task performance, guiding future research.
AI progress is inevitable and accelerating; individuals and organizations should focus on understanding and leveraging these advancements as tools for empowerment and innovation.
Discussion Topics
- How can the AI community balance the pursuit of incremental improvements within existing paradigms with the exploration of fundamentally new research directions?
- What are the most promising alternative AI architectures or methodologies that could lead to breakthroughs beyond current LLM capabilities?
- As AI becomes more capable, how can we ensure its development is guided by principles of optimality, efficiency, and human benefit, rather than just raw scaling?
Key Terms
- AGI
- Artificial General Intelligence, a hypothetical AI with human-level cognitive abilities across a wide range of tasks.
- ARC Prize
- A global competition designed to benchmark AI's ability to solve abstract reasoning and program synthesis problems.
- Deep Learning
- A subfield of machine learning that uses artificial neural networks with multiple layers to learn from data.
- Gradient Descent
- An optimization algorithm used to find the minimum of a function, commonly used in training deep learning models.
- LLM
- Large Language Model, a type of AI model trained on vast amounts of text data to understand and generate human language.
- Ndea
- An AI research lab founded by François Chollet to develop a new paradigm in frontier AI research, built around symbolic models and program synthesis.
- Symbolic Descent
- An optimization approach proposed by Chollet that aims to find the simplest possible symbolic model to explain data, as an alternative to gradient descent.
- Symbolic Model
- A representation of data or relationships using symbols and logical rules, as opposed to numerical parameters.
Timeline
Ndea aims to build a new branch of machine learning that is much closer to optimal than deep learning.
Ndea's research focuses on building a new learning substrate, an alternative to deep learning, rather than just coding agents.
The core idea is to replace deep learning's parametric curves with the smallest possible symbolic models, using "symbolic descent."
The industry's focus on the current LLM stack is questioned, with Chollet advocating for exploring more optimal, alternative approaches.
The success of coding agents is attributed to verifiable reward signals in domains like code, which LLMs can exploit.
Chollet defines AGI as a system capable of approaching any new task with human-level efficiency in skill acquisition.
Chollet recounts his shift away from the belief that deep learning alone could solve all problems, stemming from research into reasoning tasks.
The origin and purpose of the ARC Prize benchmark as a measure of program synthesis and intelligence are explained.
The evolution of ARC benchmarks (V1, V2, V3) reflects progress in AI, from modeling to reasoning and now agentic intelligence.
ARC V2's saturation was driven by a new paradigm of post-training and agent harnesses, not necessarily a fundamental increase in model intelligence.
ARC V3 shifts focus from passive modeling to measuring "agentic intelligence" in interactive environments where agents must explore and acquire goals.
Ndea's approach aims for highly efficient AI solutions, potentially orders of magnitude cheaper than LLM-based solutions.
The future of AGI may involve small, elegant codebases for the core intelligence engine, supported by larger knowledge bases.
The Ndea approach is described as "science incarnate," replicating the scientific method in algorithmic form.
The potential link between human learning, program synthesis, and the underlying principles of intelligence is explored.
Ndea was founded around a clear vision: symbolic learning and program synthesis, guided by deep learning.
The ARC benchmark is a moving target designed to measure residual gaps in AI capabilities compared to human abilities, with future versions planned.
Chollet predicts AGI by 2030, with significant progress expected from both current stacks and alternative approaches like Ndea's.
The potential for other approaches beyond LLMs, such as scaled genetic algorithms and alternative architectures, is discussed.
The key to scaling AI is to remove human bottlenecks in the improvement loop, allowing for compounding capabilities.
Advice for starting open-source projects in AI focuses on usability, community building, and creating a compounding stack.
The future of AI is framed as an opportunity for empowerment through leveraging these tools, rather than a cause for pessimism.
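The "verifiable reward" point above can be made concrete with a small sketch. The helper names below are hypothetical; the point is only that code admits a mechanical pass/fail check, which is also what enables the brute-force mining of candidate solutions described for ARC V2:

```python
def verifiable_reward(candidate_fn, test_cases):
    """Binary reward: 1.0 only if the candidate passes every test case.
    Unlike grading an essay, this check is fully mechanical, so it can
    drive automated training or agent loops without a human judge."""
    try:
        return 1.0 if all(candidate_fn(x) == y for x, y in test_cases) else 0.0
    except Exception:
        return 0.0  # a crashing candidate simply fails verification

def mine_solutions(candidate_pool, test_cases):
    """Brute-force mining of a verifiable domain: generate many candidates
    however you like and keep whatever the verifier accepts."""
    return [f for f in candidate_pool if verifiable_reward(f, test_cases) == 1.0]

# Example: only the squaring function survives verification.
tests = [(2, 4), (3, 9), (4, 16)]
pool = [lambda x: x * x, lambda x: x + 2, lambda x: 2 * x]
winners = mine_solutions(pool, tests)
```

The reward signal here requires no judgment of quality, only execution, which is why fully automated improvement loops appear first in formally verifiable domains like code rather than in open-ended ones like essay writing.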
Episode Details
- Podcast
- Y Combinator Startup Podcast
- Episode
- How François Chollet Is Building A New Path To AGI
- Official Link
- https://www.ycombinator.com/
- Published
- March 27, 2026