Video Series · Episode 04: Risk

What CISOs Get Wrong About AI Risk

NIST AI RMF Bridge Guide

Video coming soon


CISOs are being asked to incorporate AI risk into their programs, but the NIST AI RMF speaks a different language than traditional cybersecurity frameworks. Most CISOs either ignore AI risk entirely or treat it as a separate compliance exercise disconnected from their existing security program.

I spent 20 years building cybersecurity programs. When I started studying AI risk, I realized most CISOs are making the same mistake: they think AI risk is someone else’s problem. It’s not. And I built the bridge document to prove it.

Architecture Diagrams

NIST AI RMF function-to-cybersecurity program mapping
AI-specific extensions overlay on existing security program
Quick-start implementation timeline (Week 1–4, Month 2–3)

Build Notes

  • Shows how AI risk management integrates into existing cybersecurity programs
  • Mapping: Asset Inventory → add models and datasets as tracked asset classes; Vulnerability Management → add adversarial robustness testing
  • IR extension: add AI-specific playbooks, model quarantine procedures
  • Boards and regulators will hold CISOs accountable for AI risk whether or not the charter includes it
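The mapping in the build notes can be sketched as a simple lookup table. This is an illustrative sketch, not an excerpt from the bridge document; the function names and extension text are assumptions drawn from the bullets above.

```python
# Illustrative only: the function-to-program mapping from the build notes,
# expressed as a lookup from an existing security function to its AI extension.
AI_RMF_BRIDGE = {
    "Asset Inventory": "Add AI models and training datasets as tracked asset classes",
    "Vulnerability Management": "Add adversarial robustness testing to the assessment cadence",
    "Incident Response": "Add AI-specific playbooks and model quarantine procedures",
}

def ai_extension(program_function: str) -> str:
    """Return the AI-specific extension for an existing security function."""
    return AI_RMF_BRIDGE.get(program_function, "No AI extension defined yet")
```

The point of the table shape: AI risk work hangs off functions the program already runs, rather than standing up a parallel structure.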

Lessons Learned

  • CISOs who wait for a formal mandate to own AI risk will be behind when the mandate arrives
  • The gap is not capability — it’s translation. CISOs have the skills; they need the mapping
  • AI incident response requires model quarantine procedures that don’t exist in traditional IR plans
  • The biggest win is Week 1: inventory all AI models and datasets
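The Week 1 inventory step above can start as a flat list before any tooling exists. A minimal sketch, assuming a simple record per asset; the field names and example entries are hypothetical, not from the guide.

```python
from dataclasses import dataclass, field

@dataclass
class AIAsset:
    """One row in a Week 1 AI inventory: a model or a dataset."""
    name: str
    kind: str                # "model" or "dataset"
    owner: str               # accountable team, same convention as the existing asset inventory
    data_sources: list = field(default_factory=list)

# Hypothetical starting inventory
inventory = [
    AIAsset("fraud-scoring-v2", "model", "risk-eng", ["transactions"]),
    AIAsset("transactions", "dataset", "data-platform"),
]

models = [a.name for a in inventory if a.kind == "model"]
```

Even a list this small answers the first board question ("what AI do we run, and who owns it?") and gives vulnerability management and IR something concrete to attach to.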

Discussion

CISOs — does your program formally include AI risk today? If not, what’s the single biggest obstacle preventing it?