Episode 28 — Adversarial ML

Adversarial machine learning focuses on how attackers manipulate AI models and how defenders respond. This episode introduces four major categories of adversarial attack: evasion, where crafted inputs mislead a deployed model; poisoning, where malicious data corrupts training; extraction, where repeated queries are used to replicate a model; and inference, where attackers uncover sensitive information about the training data. Learners gain an overview of why AI is uniquely vulnerable, especially in high-dimensional models such as neural networks, where small, carefully chosen perturbations can be enough to push an input across a decision boundary.
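To make the evasion category concrete, here is a minimal sketch of a gradient-based (FGSM-style) evasion attack against a toy logistic model. The model, weights, and epsilon are illustrative assumptions for the sketch, not anything from the episode.

import numpy as np

# Hypothetical toy model: logistic regression with fixed random weights.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
b = 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(x):
    # Probability the model assigns to class 1.
    return sigmoid(x @ w + b)

# A benign input with true label y = 1.
x = rng.normal(size=16)
y = 1.0

# For logistic loss, the gradient with respect to the input is (p - y) * w.
p = predict(x)
grad_x = (p - y) * w

# FGSM step: nudge every feature by epsilon in the gradient's sign
# direction, which increases the loss and drags the score away from y.
epsilon = 0.25
x_adv = x + epsilon * np.sign(grad_x)

print(f"clean score:       {predict(x):.3f}")
print(f"adversarial score: {predict(x_adv):.3f}")

The point of the sketch is the scale mismatch: each feature moves by only epsilon, yet because every dimension moves in the loss-increasing direction at once, the cumulative effect on the score is large, which is exactly the high-dimensional vulnerability described above.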
The discussion then expands into defense strategies. Adversarial training, input preprocessing, and detection tools provide partial resilience, while governance practices such as red teaming and incident response integrate technical and organizational safeguards (a sketch of adversarial training follows below). Case examples include adversarial stickers that confuse image recognition in autonomous driving and prompt manipulations that subvert generative models. The episode emphasizes the arms-race nature of adversarial ML: attackers innovate, defenders adapt, and resilience requires continuous investment. Learners finish with a practical understanding of why adversarial ML is central to responsible AI security practices.
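As a rough illustration of adversarial training, here is a minimal sketch that trains the same kind of toy logistic model on each clean batch plus an FGSM-perturbed copy of it. The data, learning rate, epsilon, and step count are all assumptions made for the sketch.

import numpy as np

# Hypothetical linearly separable data; all hyperparameters are illustrative.
rng = np.random.default_rng(1)
n, d = 200, 16
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)

w = np.zeros(d)
b = 0.0
lr, epsilon = 0.1, 0.1

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(200):
    # Craft FGSM-style adversarial copies of the batch against the
    # current model (the input gradient of logistic loss is (p - y) * w).
    p = sigmoid(X @ w + b)
    X_adv = X + epsilon * np.sign((p - y)[:, None] * w)

    # Take a gradient step on clean and adversarial examples together.
    X_all = np.vstack([X, X_adv])
    y_all = np.concatenate([y, y])
    p_all = sigmoid(X_all @ w + b)
    w -= lr * (X_all.T @ (p_all - y_all)) / len(y_all)
    b -= lr * np.mean(p_all - y_all)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after adversarial training: {acc:.2f}")

The design choice worth noticing is that the adversarial copies are regenerated against the current weights on every step, so the defense tracks the attacker as the model changes; this is also why adversarial training embodies the arms-race dynamic rather than ending it.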
Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.