▶ Kokotajlo consistently assigns a high probability to existential risk from AI, citing a personal P(Doom) of 70% and warning that the current trajectory risks 'the death of every human' [5, 13].
▶ He forecasts extremely short timelines for transformative AI, predicting superintelligence before the end of the 2020s and setting his personal median for AGI at the end of 2028 [10, 17, 20].
▶ He is deeply skeptical that leading AI labs such as OpenAI and Anthropic will prioritize safety, believing they will not pause development even with a competitive lead and that they allocate 'wildly inadequate' resources to alignment [1, 4, 22].
▶ He advocates for greater transparency and external oversight in AI development, arguing that a larger community is needed to solve alignment and that government intervention is necessary [2, 7, 33].
▶ Kokotajlo's personal AGI forecast (end of 2028) is more aggressive than the medians of the other researchers on his own 'AI 2027' report team, which fall between 2029 and 2031 [12, 20].
▶ His stance on nationalizing AI labs has evolved from opposition to support, indicating a significant shift in his views on the feasibility of corporate self-regulation [4].
▶ He holds seemingly conflicting views on the US-China AI race, stating the US has an 80-90% chance of leading while also claiming the gap is 'effectively zero' because poor security at US labs allows technology theft by the CCP [14, 23].
▶ His public dispute with OpenAI over its non-disparagement agreements, reported by journalist Kelsey Piper, sets his account of the company's safety culture against its official stance and policies [3, 26].