Benjamin Mann, co-founder of Anthropic, discusses the accelerating pace of AI development, predicting a 50% chance of superintelligence by 2028.
He explains why he and other founders left OpenAI over safety concerns and details Anthropic's safety-first approach, including its Constitutional AI framework.
Mann also addresses the intense AI talent war, predicts significant AI-driven unemployment as the technology reshapes the job market, and estimates the existential risk from AI at 0-10%.
He emphasizes that while the future will get "much weirder very soon," the primary bottlenecks remain physical infrastructure like compute and power, and that overall progress is not slowing down.
Concerns Raised
Existential risk from misaligned superintelligence
Rapid job displacement and societal disruption
Potential for AI to be used for malicious purposes (e.g., bioweapons)
Risk of 'deceptive alignment' where AI hides its true motives
Opportunities Identified
Massive acceleration in science and technology
A future of abundance where labor is nearly free
Using AI to solve its own alignment problems (RLAIF, reinforcement learning from AI feedback)
Significant productivity gains in fields like software engineering and customer service