François Chollet predicts AGI will likely be achieved in the early 2030s, but argues the current deep learning stack is inefficient and will not be the long-term foundation for AI.
Chollet's new lab, Ndea, is developing an alternative AI substrate based on program synthesis and "symbolic descent," aiming for more data-efficient and generalizable models.
The recent explosion in AI capabilities, especially in coding, is attributed to the presence of formally verifiable reward signals, a principle expected to extend next to other formally checkable domains such as mathematics.
The ARC-AGI benchmark has evolved to V3, which measures "agentic intelligence" in novel, game-like environments to better test for true fluid intelligence and prevent "teaching to the test."
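The program-synthesis direction mentioned above can be illustrated with a toy sketch. This is not Ndea's actual method (which is unpublished); it is a minimal, hypothetical example of enumerative search over a tiny DSL, showing how a discrete program, rather than continuous weights, can be fit to a handful of input-output examples. The primitive set and `synthesize` helper are invented for illustration.

```python
# Toy program synthesis: enumerate chains of DSL primitives until one
# matches all input-output examples. A hedged sketch, not Ndea's approach.
from itertools import product

# Hypothetical DSL of integer-to-integer primitives.
PRIMITIVES = {
    "inc": lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def synthesize(examples, max_depth=3):
    """Return the shortest primitive chain consistent with all examples."""
    for depth in range(1, max_depth + 1):
        for chain in product(PRIMITIVES, repeat=depth):
            def run(x, chain=chain):
                for name in chain:
                    x = PRIMITIVES[name](x)
                return x
            if all(run(i) == o for i, o in examples):
                return chain  # found a program explaining every example
    return None

# Examples consistent with f(x) = 2x + 1:
print(synthesize([(1, 3), (2, 5), (5, 11)]))  # -> ('double', 'inc')
```

Note how three examples suffice to pin down the program exactly, which is the data-efficiency argument for synthesis over gradient-based fitting.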
Concerns Raised
The current deep learning/LLM stack is inefficient and not the optimal long-term path to AGI.
Scaling existing models alone is insufficient for achieving general intelligence, as demonstrated by early ARC benchmark results.
AI progress in many domains is limited by the lack of formally verifiable reward signals, unlike in coding or mathematics.
Opportunities Identified
Developing alternative AI substrates like program synthesis could lead to breakthroughs in efficiency and generalization.
Any problem domain with a formally verifiable reward signal can be fully automated with current technology.
Individuals can leverage AI as a powerful tool for empowerment by combining it with deep domain expertise.