The current AI paradigm of scaling large language models (LLMs) has hit fundamental limits due to a scarcity of high-quality training data, unsolvable issues like hallucination, and an inability to learn continuously.
The hundreds of billions of dollars being invested in data centers for this generation of AI represent a 'catastrophic misallocation of capital', with market signals such as the pricing of Oracle's credit default swaps (CDS) reflecting this risk.
Top AI researchers (LeCun, Silver, Sutskever) have left major tech companies to found startups focused on next-generation, brain-inspired AI that is vastly more power-efficient and capable of continual learning.
A new wave of AI is emerging from companies like Fractal Brain AI and Innate AI, which use novel architectures to solve the core problems of current models and are already outperforming them on benchmarks.
Concerns Raised
The current approach of scaling LLMs is a technological dead end with unsolvable flaws like hallucination and catastrophic forgetting.
Massive investment in data centers for current-gen AI is a 'catastrophic misallocation of capital' that will lead to significant financial losses.
The automation of entry-level software roles by AI could hollow out the pipeline of junior developers, creating a long-term skills gap at the senior-engineer level.
Next-generation, continually adapting AI systems pose new and 'spooky' safety risks, since a system that keeps learning cannot be permanently outsmarted.
Opportunities Identified
Investing in startups developing next-generation, brain-inspired AI architectures that can learn continuously.
A potential arbitrage strategy: shorting companies over-leveraged in data center expansion (e.g., Oracle) and going long on capital-light tech firms.
Investing in 'picks and shovels' companies that improve data center efficiency (cooling, algorithms) to profit from the current build-out.
The rise of powerful, efficient AI models that run locally on laptops, creating new markets and disrupting the cloud-centric model.