Episode 02: Foundation

Threat-Modeling an AI Deployment in 60 Minutes

Threat Model Template (Download PDF)

Video coming soon


Security teams know how to threat-model traditional applications using STRIDE or attack trees. But AI systems have fundamentally different attack surfaces — data poisoning, model extraction, prompt injection, adversarial inputs — that existing threat modeling frameworks don't cover.

If someone asked you to threat-model an AI deployment right now, could you? Not a web app. Not an API. An actual AI system with a training pipeline, a model registry, and inference endpoints. I built a template that lets you do it in 60 minutes.

Architecture Diagrams

  • RAG-based LLM data flow diagram with trust boundaries marked
  • MITRE ATLAS tactic-to-technique mapping table
  • Risk assessment heat map (Likelihood vs. Impact grid)
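The tactic-to-technique mapping table can also be kept as data, so coverage per component can be checked with a script. A minimal sketch is below; the component names and technique IDs are illustrative examples, not the template's actual contents, so verify every ID against the published MITRE ATLAS matrix before relying on it.

```python
# Hedged sketch: ATLAS-style technique mapping per architecture component.
# IDs/names are illustrative placeholders -- confirm against atlas.mitre.org.
ATTACK_SURFACE = {
    "training_pipeline":  ["AML.T0020 Poison Training Data"],
    "model_registry":     ["AML.T0010 ML Supply Chain Compromise"],
    "inference_endpoint": [
        "AML.T0051 LLM Prompt Injection",
        "AML.T0043 Craft Adversarial Data",
        "AML.T0024 Exfiltration via ML Inference API",
    ],
}

def techniques_for(component: str) -> list[str]:
    """Return the candidate techniques to assess for one component."""
    return ATTACK_SURFACE.get(component, [])
```

Keeping the mapping in a plain dictionary makes it easy to diff threat-model revisions and to flag components with no techniques assigned.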

Build Notes

  • Walks through the AI Threat Model Template — Part A (blank template) and Part B (completed example)
  • Uses MITRE ATLAS-based threat identification for structured coverage
  • Risk scoring with Likelihood × Impact creates a prioritization framework
  • Control recommendations map directly to the Reference Architecture security layers
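The Likelihood × Impact scoring above can be sketched in a few lines. This is a generic illustration, assuming 1–5 ordinal scales and illustrative band cutoffs; the template's actual scales and thresholds may differ.

```python
# Minimal sketch of Likelihood x Impact prioritization (assumed 1-5 scales).
def risk_score(likelihood: int, impact: int) -> int:
    """Score is the product of two 1-5 ordinal ratings (range 1-25)."""
    assert 1 <= likelihood <= 5 and 1 <= impact <= 5
    return likelihood * impact

def risk_band(score: int) -> str:
    """Bucket a score into heat-map bands (cutoffs are illustrative)."""
    if score >= 15:
        return "high"
    if score >= 8:
        return "medium"
    return "low"

# Example threats with (likelihood, impact) ratings, ranked for triage.
threats = [
    ("prompt injection", 4, 4),
    ("data poisoning",   2, 5),
    ("model extraction", 3, 3),
]
ranked = sorted(threats, key=lambda t: risk_score(t[1], t[2]), reverse=True)
```

Sorting by the product gives the prioritized work queue; the bands map each threat onto a cell of the Likelihood vs. Impact heat map.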

Lessons Learned

  • Most AI threat models fail because they start with tools, not with data flows and trust boundaries
  • The MITRE ATLAS taxonomy is the single most useful resource for structured AI threat identification
  • A completed threat model is the foundation document that every other security control depends on
  • Teams that skip threat modeling discover their gaps during incidents, not during planning

Discussion

Has your security team done a formal threat model on any AI system in your organization? If so, what framework did you use? If not, what’s blocking it?