The primary bottleneck preventing current AI from achieving true AGI and broad economic impact is its inability to perform 'continual learning', that is, to learn on the job the way a human does.
The current strategy of pre-baking specific skills into models via reinforcement learning (RL) is inefficient and doesn't scale to the complexity and context-specific nature of real-world knowledge work.
While AI model capabilities will continue to improve at an impressive rate, their economic utility will grow more slowly: expect annual AI revenues in the hundreds of billions of dollars by 2030, but not the full automation of knowledge work.
The path to AGI will be an incremental and competitive race, with breakthroughs in areas like continual learning being quickly replicated, preventing a single company from achieving a runaway, winner-take-all advantage.
Concerns Raised
Current AI models lack the crucial capability of continual, on-the-job learning, limiting their economic impact.
The industry's reliance on pre-baking skills via reinforcement learning is an inefficient and unscalable approach to AGI.
Scaling laws for pre-training do not apply to RL, which may require orders of magnitude more compute for meaningful gains.
The hype around AI capabilities is outpacing their actual economic utility and deployment in real-world jobs.
Opportunities Identified
Solving continual learning represents the most significant opportunity to unlock true AGI and trillions of dollars in economic value.
A future AI architecture of specialized, continually learning agents that contribute what they learn to a central 'hive mind' could be a powerful paradigm, since a lesson learned by one agent would immediately benefit all of them.
The massive market for knowledge work (tens of trillions in wages) provides a clear benchmark and incentive for developing more capable AI.
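The 'hive mind' paradigm above can be illustrated with a toy sketch. Everything here is hypothetical and invented for illustration (the `HiveMind` store, the `SpecialistAgent` class, and the `publish`/`recall` interface are assumptions, not a real system): each agent checks shared knowledge first, learns from on-the-job feedback when it has none, and publishes what it learned so every other agent benefits at once.

```python
from typing import Optional

class HiveMind:
    """Central store of skills learned by individual agents (illustrative)."""
    def __init__(self) -> None:
        self.skills: dict[str, str] = {}

    def publish(self, task: str, solution: str) -> None:
        # Any agent's learned solution becomes visible to all agents.
        self.skills[task] = solution

    def recall(self, task: str) -> Optional[str]:
        return self.skills.get(task)


class SpecialistAgent:
    """An agent that reuses shared knowledge, else learns on the job."""
    def __init__(self, name: str, hive: HiveMind) -> None:
        self.name = name
        self.hive = hive

    def solve(self, task: str, feedback: Optional[str] = None) -> Optional[str]:
        known = self.hive.recall(task)
        if known is not None:
            return known  # reuse what any agent already learned
        if feedback is not None:
            # Continual learning: keep the correction for everyone.
            self.hive.publish(task, feedback)
            return feedback
        return None  # no shared knowledge yet and no feedback to learn from


hive = HiveMind()
a = SpecialistAgent("invoices", hive)
b = SpecialistAgent("contracts", hive)

print(b.solve("file Q3 invoice"))                    # None: nobody knows yet
a.solve("file Q3 invoice", feedback="use portal X")  # agent A learns on the job
print(b.solve("file Q3 invoice"))                    # agent B benefits immediately
```

The design choice the sketch highlights is the contrast with pre-baked RL skills: knowledge enters the system during deployment, through individual agents' experience, rather than being frozen in at training time.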