Episode 27 — Threat Modeling for AI Systems

Threat modeling is the process of systematically identifying and prioritizing the risks that could compromise an AI system. This episode introduces the core components of threat modeling: defining assets, identifying adversaries, mapping attack surfaces, and assessing likelihood and impact. Learners see how existing frameworks like STRIDE (spoofing, tampering, repudiation, information disclosure, denial of service, elevation of privilege) can be adapted to AI contexts, particularly given the vulnerabilities of data pipelines, APIs, and model deployment environments.
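To make the adaptation concrete, here is a minimal sketch of how the six STRIDE categories might map onto AI-specific attack surfaces. The mapping and the example threats are illustrative assumptions for this sketch, not an authoritative taxonomy.

```python
# Illustrative mapping of STRIDE categories to AI-specific threats.
# The specific threat examples are assumptions chosen for this sketch.
STRIDE_AI_THREATS = {
    "Spoofing": "forged credentials used against the model-serving API",
    "Tampering": "data poisoning injected into the training pipeline",
    "Repudiation": "inference requests made without audit logging",
    "Information Disclosure": "model extraction through repeated queries",
    "Denial of Service": "resource-exhausting inputs sent to the endpoint",
    "Elevation of Privilege": "a compromised pipeline gaining deploy rights",
}

for category, example in STRIDE_AI_THREATS.items():
    print(f"{category}: {example}")
```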
The episode explores AI-specific threats such as data poisoning, adversarial examples, model extraction, and misuse scenarios. Case examples include diagnostic healthcare systems exposed to malicious inputs and fraud detection models targeted by extraction attacks. Learners are guided through how to document findings in risk registers, maintain living threat models, and prioritize mitigations. By applying structured threat modeling, organizations strengthen resilience and ensure AI systems are not only technically robust but also protected against ethically and socially harmful misuse.
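As one way to picture that documentation step, here is a minimal sketch of a risk register entry scored by likelihood and impact. The RiskEntry class, the 1-to-5 scales, and the example rows are assumptions for illustration, not a prescribed register format.

```python
from dataclasses import dataclass

@dataclass
class RiskEntry:
    """One row of a risk register for an AI system."""
    threat: str
    asset: str
    likelihood: int  # 1 (rare) to 5 (near-certain); scale is an assumption
    impact: int      # 1 (negligible) to 5 (severe)
    mitigation: str

    @property
    def score(self) -> int:
        # Simple likelihood-times-impact scoring; real registers may
        # weight dimensions differently or add exposure factors.
        return self.likelihood * self.impact

register = [
    RiskEntry("data poisoning", "training pipeline", 3, 5,
              "provenance checks on training data"),
    RiskEntry("model extraction", "fraud detection model", 4, 4,
              "rate limiting and query monitoring"),
    RiskEntry("adversarial examples", "diagnostic classifier", 2, 5,
              "input validation and adversarial training"),
]

# Prioritize mitigations by descending risk score.
for entry in sorted(register, key=lambda e: e.score, reverse=True):
    print(f"{entry.score:2d}  {entry.threat:20} -> {entry.mitigation}")
```

Sorting by score gives a simple first pass at prioritization; the register stays "living" by re-scoring entries as the system and the threat landscape change.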
Produced by BareMetalCyber.com, where you’ll find more cyber audio courses, books, and information to strengthen your certification path.