Waymo leverages large, cloud-based foundation models as 'teacher models' to distill knowledge and 'world understanding' into the smaller, efficient models running on its vehicles.
For safety-critical functions, Waymo uses a deterministic verification system that operates outside the core AI model to enforce strict safety and regulatory rules, creating a crucial guardrail.
The primary challenge in scaling autonomous driving is solving the 'long tail' of rare events, which Waymo addresses through extensive simulation and data collection.
In generalist robotics, key unsolved challenges include generalizing motion and manipulation skills, not just visual understanding.
The speaker advocates a 'software-first' approach and warns of a potential 'humanoid winter' if hype outpaces commercial viability.
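The teacher-student setup described above is standard knowledge distillation: a large cloud model produces softened output distributions that a smaller on-device model is trained to match. A minimal sketch of the core loss, assuming a temperature-scaled softmax over class logits (the specific models and temperature value are illustrative, not Waymo's actual configuration):

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax; a higher temperature yields softer targets."""
    z = np.asarray(logits, dtype=float) / temperature
    z -= z.max()  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL divergence between the teacher's softened distribution and the
    student's: the classic knowledge-distillation objective."""
    p = softmax(teacher_logits, temperature)  # soft targets from the teacher
    q = softmax(student_logits, temperature)
    return float(np.sum(p * (np.log(p) - np.log(q))))

# A confident teacher and a less-trained student over three classes.
teacher = [5.0, 1.0, -2.0]
student = [2.0, 1.5, 0.0]
loss = distillation_loss(teacher, student)
assert loss > 0  # KL is non-negative, zero only when the distributions match
assert distillation_loss(teacher, teacher) < 1e-12
```

Training the student to minimize this loss transfers the teacher's relative confidence across classes ("world understanding"), not just its hard labels.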
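The deterministic verification layer can be pictured as a hand-written rule checker that sits outside the learned planner and vetoes any proposed plan violating hard constraints. The rules, field names, and thresholds below are hypothetical placeholders; Waymo's actual system is not public:

```python
from dataclasses import dataclass

@dataclass
class Plan:
    speed_mps: float        # proposed speed of the plan
    min_clearance_m: float  # closest predicted distance to any object
    crosses_red_light: bool

# Hard, hand-written limits checked outside the learned model (example values).
SPEED_LIMIT_MPS = 13.4   # roughly 30 mph
MIN_CLEARANCE_M = 1.0

def verify(plan: Plan) -> list[str]:
    """Return the list of violated rules; an empty list means the plan may execute."""
    violations = []
    if plan.speed_mps > SPEED_LIMIT_MPS:
        violations.append("over speed limit")
    if plan.min_clearance_m < MIN_CLEARANCE_M:
        violations.append("insufficient clearance")
    if plan.crosses_red_light:
        violations.append("red light")
    return violations

safe = Plan(speed_mps=10.0, min_clearance_m=2.5, crosses_red_light=False)
risky = Plan(speed_mps=16.0, min_clearance_m=0.4, crosses_red_light=False)
assert verify(safe) == []
assert verify(risky) == ["over speed limit", "insufficient clearance"]
```

Because the checker is deterministic and auditable, it provides a guarantee the stochastic model alone cannot: no plan that breaks an encoded rule ever reaches the actuators.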
Concerns Raised
Solving the 'long tail' of rare driving scenarios remains the primary obstacle to scaling autonomous vehicles.
A potential 'humanoid winter' could occur if the current hype around humanoid robots fails to deliver commercially viable products, harming the entire field.
Generalizing motion and skills, not just visual understanding, is a key unsolved problem in generalist robotics.
Simulation is less effective for complex robot manipulation tasks compared to locomotion, making real-world data collection critical but difficult to scale.
Opportunities Identified
Using large foundation models as 'teachers' can significantly improve the capabilities of on-device AI systems.
Reframing robot actions as a language allows for leveraging powerful, pre-existing LLM architectures and scaling laws.
The performance bar for autonomous vehicles is superhuman safety, a standard that Waymo's data suggests it is already meeting.
AI can be applied to unconventional fields like food science to design novel products, such as plant-based cheese.
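Reframing robot actions as a language typically means discretizing each continuous action dimension into a fixed vocabulary of integer tokens, so an LLM-style model can predict actions via next-token prediction. A minimal sketch of that discretization, assuming normalized actions in [-1, 1] and a 256-token-per-dimension vocabulary (both assumptions for illustration):

```python
import numpy as np

N_BINS = 256            # tokens per action dimension (assumed vocabulary size)
LOW, HIGH = -1.0, 1.0   # assumed normalized action range

def actions_to_tokens(actions):
    """Discretize continuous actions in [LOW, HIGH] into integer tokens,
    so a next-token-prediction model can emit them like words."""
    a = np.clip(np.asarray(actions, dtype=float), LOW, HIGH)
    scaled = (a - LOW) / (HIGH - LOW)                      # map to [0, 1]
    return np.minimum((scaled * N_BINS).astype(int), N_BINS - 1)

def tokens_to_actions(tokens):
    """Invert the discretization by mapping each token to its bin center."""
    centers = (np.asarray(tokens) + 0.5) / N_BINS
    return centers * (HIGH - LOW) + LOW

acts = [-1.0, 0.0, 0.73, 1.0]
toks = actions_to_tokens(acts)
recovered = tokens_to_actions(toks)
assert np.all((toks >= 0) & (toks < N_BINS))
# Round-tripping loses at most half a bin width per dimension.
assert np.allclose(recovered, acts, atol=(HIGH - LOW) / N_BINS)
```

Once actions are tokens, the same transformer architectures, training recipes, and scaling laws developed for text apply directly to action sequences.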