▶ Maretski consistently argues that the current AI paradigm of scaling large language models has reached a fundamental limit, citing the exhaustion of high-quality training data, a three-year plateau in benchmark performance, and the problem of 'model collapse' from AI-generated data. (Apr 2026)
▶ He repeatedly describes the hundreds of billions of dollars being invested in data centers for current-generation AI as a 'catastrophic misallocation of capital,' believing the underlying technology is flawed and will soon be obsolete. (Apr 2026)
▶ A core tenet of his view is the imminent rise of a new generation of brain-inspired AI systems (from companies like Fractal Brain and Innate AI) that are capable of continual learning and are orders of magnitude more power-efficient. (Apr 2026)
▶ He points to the exodus of top AI researchers (such as Ilya Sutskever, David Silver, and Yann LeCun) from major labs as a key indicator that the industry's intellectual leaders are abandoning the current LLM approach to pursue fundamentally new ideas. (Apr 2026)
▶ While the prevailing market narrative fuels massive investment in AI data centers, Maretski argues this is a 'false start' and that companies avoiding this capex, like Apple, are positioned to be the long-term winners. (Apr 2026)
▶ In contrast to the common perception of rapid, ongoing AI progress, Maretski asserts that performance on standard benchmarks has actually been stagnant for the past three years. (Apr 2026)
▶ While the industry is focused on large, cloud-based models, Maretski's contrarian view is that the future of powerful AI is decentralized, with the most potent models running locally on consumer laptops within one to three years. (Apr 2026)
▶ Maretski challenges the idea that AI can be improved by training on its own output, claiming instead that this practice leads to 'model collapse,' where models progressively degrade in quality and become 'dumber.' (Apr 2026)
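The dynamic behind that last claim can be reproduced in a toy setting. The sketch below is a hypothetical illustration, not drawn from Maretski's commentary: it treats each generation's "model" as the empirical distribution of its training data and trains the next generation solely on samples from that model, which amounts to a bootstrap resample. The number of distinct values shrinks every round, a minimal analogue of the diversity loss that the model-collapse literature describes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: 10,000 distinct "real" data points.
data = np.arange(10_000)

for gen in range(1, 16):
    # Each new generation is "trained" only on output sampled from the
    # previous generation's model (here, its empirical distribution),
    # which is equivalent to resampling with replacement.
    data = rng.choice(data, size=data.size, replace=True)
    print(f"gen {gen:2d}: distinct values remaining = {np.unique(data).size}")
```

The first resample already drops roughly 37% of the distinct values (the classic 1 - 1/e bootstrap survival rate), and the count keeps falling from there. In the continuous Gaussian version of the same experiment, it is the distribution's rare tail events that vanish first, which is the sense in which such models are said to become progressively "dumber."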