Current AI benchmarks like MMLU are saturating, necessitating more challenging evaluations like "Humanity's Last Exam" and "Enigma Eval" to accurately track progress towards superintelligence.
The proposal for a US-led "Manhattan Project" for AGI is fraught with peril, as it would be viewed as highly escalatory by China, vulnerable to sabotage, and would exclude key international talent.
AI should be treated as a dual-use technology analogous to nuclear or biological weapons, shifting the strategic focus from a simple race to a more nuanced approach of supply chain security and nonproliferation of advanced chips.
Unchecked economic and military competition will likely lead to an irreversible "loss of control," where critical decision-making is ceded to AI systems, a concept explored in the paper "Natural Selection Favors AIs Over Humans".
Concerns Raised
A state-led AGI 'Manhattan Project' would be highly escalatory and vulnerable to sabotage.
The US is critically vulnerable to disruptions in its semiconductor and robotics supply chains, particularly from a conflict over Taiwan.
Competitive economic and military pressures are driving an irreversible loss of human control to AI systems.
Saturating benchmarks may obscure the true pace and nature of AI capability advancements, leading to strategic surprise.
Opportunities Identified
Developing more robust benchmarks like 'Humanity's Last Exam' can provide a clearer signal of AI progress.
Shifting geopolitical strategy from a 'race to AGI' to securing supply chains and expanding the market share of allied AI systems would reduce escalation risk.
Implementing a nonproliferation strategy for advanced AI chips, treating them like fissile material, would prevent access by rogue states.
Using AI's own forecasting capabilities could help predict and mitigate long-term risks like loss of control.