Both podcast appearances establish Schulhoff as a pioneer in AI red teaming, specifically through his partnership with OpenAI to create the "Hack a Prompt" competition, whose dataset is now widely used by frontier labs and Fortune 500 companies.
Across both sources, Schulhoff consistently argues that current AI security measures, particularly AI guardrails and prompt-based defenses, are fundamentally ineffective against adversarial attacks such as prompt injection and jailbreaking.
He is presented in both podcasts as a leading expert on the technical specifics of AI vulnerabilities, citing numerous recent examples of successful attacks against systems like ChatGPT, ServiceNow's Assist AI, and the Comet AI browser.
Schulhoff presents a conflict between AI labs' internal priorities and external risks: he claims frontier labs do not prioritize solving adversarial robustness because models are not yet dangerous (Claim 6), yet he personally predicts these models will cause tangible real-world harm within a year (Claim 25).
There is a tension in his discourse between the theoretical impossibility and the practical mitigation of AI vulnerabilities. He asserts that top researchers cannot solve the core problem (Claim 23), while also citing OpenAI CEO Sam Altman's belief that 95-99% of attacks can be mitigated (Claim 39).
He highlights a trade-off between security and market viability, noting that human-in-the-loop verification is a good security measure for AI agents but is not a viable long-term solution because the market demands full autonomy (Claim 11).
Schulhoff points to a disagreement on the imminence of a major AI-driven attack. He cites Alex Komorosky's view that the only reason a massive attack has not happened is the early stage of AI adoption, not the effectiveness of security measures (Claim 8), contrasting with the implicit confidence of companies deploying these systems.