
I recently reviewed a security team’s quarterly report to their board. Twelve pages of charts. Vulnerabilities remediated (trending up, great!). Phishing simulations conducted (four this quarter!). Security awareness training completion (97%!). Mean time to detect (reduced by 3 hours!).
The board nodded politely. The CISO felt good about the presentation. Two months later, a phishing email led to a credential compromise that cost the company $2 million in incident response and customer notification.
None of those metrics predicted—or would have prevented—the breach.
Most security metrics measure activity, not outcomes. They tell you the security team is busy. They don’t tell you whether the organization is actually more secure.
Let’s autopsy the usual suspects:
“Vulnerabilities remediated” counts effort, not risk reduction. Remediating 500 low-severity findings while 5 critical CVEs age in production makes the number go up while security goes down. Without severity weighting and context, this metric is misleading at best.
“Mean time to detect” (MTTD) sounds important but usually measures noise. If your SIEM fires 10,000 alerts and 9,900 are false positives, a fast MTTD means you’re quickly looking at garbage. And what gets detected matters more than how fast—detecting failed logins quickly is less valuable than detecting lateral movement slowly.
“Phishing simulation click rates” measure performance on simulated phishing, which users learn to recognize. Sophisticated phishing doesn’t look like training examples. A 2% click rate on obvious simulations says nothing about resilience to targeted attacks.
“Training completion rate” measures compliance with mandatory training, not behavioral change. I’ve seen 100% training completion at organizations where employees still shared passwords and clicked suspicious links daily.
“Security incidents” as a standalone metric is perverse. Does a rising count mean better detection or worse security? Does a falling count mean improved defenses or blind spots? Without context, the number is meaningless.
Useful security metrics share three characteristics:
They tie to business outcomes. What does the business actually care about? Availability. Customer trust. Regulatory standing. Cost. Metrics should connect to these, even if indirectly.
They measure capability, not activity. Not “we did security things” but “we can stop/detect/recover from specific threats.”
They’re actionable. A metric that can’t drive a decision is a vanity metric. Every measurement should answer “what would we do differently if this number changed?”
Here’s a framework that works. It comes down to four kinds of metrics: coverage, efficacy, exposure, and business outcomes.
Coverage answers: “For each control we claim to have, what percentage of our environment actually has it?”
This sounds basic. It isn’t. Most organizations can’t accurately answer this question for any control.
Example: EDR Coverage
To calculate this, you need accurate asset inventory (already hard) and EDR deployment reports that match reality (also hard). The gap between “we have EDR” and “we have EDR deployed with 98% coverage” is where attackers live.
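As a minimal sketch of that calculation, assuming you can export the asset inventory and the EDR console's agent list as sets of hostnames (all data here is made up):

```python
# Sketch: EDR coverage = assets reporting to the EDR console / assets in inventory.
# Assumes two exports you control: the asset inventory and the EDR agent list.

def edr_coverage(inventory_hosts: set[str], edr_hosts: set[str]) -> dict:
    """Return the coverage percentage plus the gap lists that actually matter."""
    inventory = {h.lower().strip() for h in inventory_hosts}
    edr = {h.lower().strip() for h in edr_hosts}

    covered = inventory & edr
    missing = inventory - edr      # in inventory, no agent
    unknown = edr - inventory      # agent reporting, but not in inventory

    coverage_pct = 100 * len(covered) / len(inventory) if inventory else 0.0
    return {
        "coverage_pct": round(coverage_pct, 1),
        "missing_agent": sorted(missing),    # the gap attackers live in
        "unknown_assets": sorted(unknown),   # the inventory is wrong, which is its own finding
    }

if __name__ == "__main__":
    inventory = {"web-01", "web-02", "db-01", "legacy-fileserver"}
    edr = {"WEB-01", "web-02", "db-01"}
    print(edr_coverage(inventory, edr))
    # {'coverage_pct': 75.0, 'missing_agent': ['legacy-fileserver'], 'unknown_assets': []}
```

The `unknown_assets` bucket is worth keeping: agents reporting from hosts your inventory doesn't know about is an asset-management finding in its own right.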
Example: MFA Coverage
Calculate separately for different account types. Service accounts and break-glass accounts complicate the math—exclude them with explicit documentation, don’t let them silently drag down coverage.
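A sketch of that split, with exclusions carried as explicit, documented records rather than silent filters (the account fields and ticket references are hypothetical):

```python
# Sketch: MFA coverage split by account type, with documented exclusions.
# The account records and their fields are hypothetical; map them to whatever
# your identity provider actually exports.
from collections import defaultdict

accounts = [
    {"name": "alice",        "type": "user",    "mfa": True},
    {"name": "bob",          "type": "user",    "mfa": False},
    {"name": "ci-deployer",  "type": "service", "mfa": False},
    {"name": "breakglass-1", "type": "user",    "mfa": False},
]

# Exclusions must be explicit and documented, not silent.
exclusions = {
    "ci-deployer":  "service account, key-based auth, ticket SEC-1234 (hypothetical)",
    "breakglass-1": "break-glass account, stored offline, ticket SEC-1235 (hypothetical)",
}

def mfa_coverage(accounts, exclusions):
    totals, covered = defaultdict(int), defaultdict(int)
    for acct in accounts:
        if acct["name"] in exclusions:
            continue  # excluded with a written justification, not silently dropped
        totals[acct["type"]] += 1
        covered[acct["type"]] += int(acct["mfa"])
    return {t: round(100 * covered[t] / totals[t], 1) for t in totals}

print(mfa_coverage(accounts, exclusions))           # {'user': 50.0}
print(f"{len(exclusions)} documented exclusions")   # 2 documented exclusions
```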
Example: Vulnerability Remediation Coverage
This is different from raw counts. A team that remediates 50 of 50 criticals within SLA is outperforming a team that remediates 500 lows while criticals age.
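One way to compute that within-SLA rate, sketched per severity with illustrative SLA windows and findings (use whatever SLAs your policy actually defines):

```python
# Sketch: percent of findings remediated within SLA, computed per severity,
# never as one blended number. SLA windows and findings are illustrative.
from datetime import date

SLA_DAYS = {"critical": 7, "high": 30, "medium": 90, "low": 180}

findings = [
    {"severity": "critical", "opened": date(2024, 5, 1), "closed": date(2024, 5, 5)},
    {"severity": "critical", "opened": date(2024, 5, 1), "closed": None},           # still open
    {"severity": "low",      "opened": date(2024, 1, 1), "closed": date(2024, 2, 1)},
]

def within_sla_rate(findings, today=date(2024, 6, 1)):
    stats = {}
    for sev, sla in SLA_DAYS.items():
        relevant = [f for f in findings if f["severity"] == sev]
        if not relevant:
            continue
        ok = 0
        for f in relevant:
            closed = f["closed"]
            age_days = ((closed or today) - f["opened"]).days
            ok += int(closed is not None and age_days <= sla)
        stats[sev] = round(100 * ok / len(relevant), 1)
    return stats

print(within_sla_rate(findings))   # {'critical': 50.0, 'low': 100.0}
```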
Coverage metrics tell you whether your security investments are actually deployed. They expose the gap between “we bought it” and “it’s protecting us.”
Coverage tells you controls exist. Efficacy tells you they work.
Example: Detection Efficacy
Run regular atomic tests, scripted simulations of specific attack techniques, and track whether each technique generates an alert and what happens to that alert once it fires.
If you simulate Kerberoasting and no alert fires, you've found a detection gap. If alerts fire but are auto-suppressed or deprioritized, you know your tuning is wrong.
This requires ongoing testing, not annual pentests. Monthly or quarterly runs of automated attack simulation give you trend data. Did last month’s detection rule changes actually improve detection?
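A sketch of that tracking, which distinguishes "an alert fired" from "an alert was actually actioned"; the test records are hand-entered stand-ins for whatever your simulation tooling reports:

```python
# Sketch: detection rate from atomic test runs, tracked per month so rule
# changes show up as a trend. Records are illustrative.
from collections import defaultdict

test_runs = [
    # (month, technique, alert_fired, alert_actioned)
    ("2024-04", "T1558.003 Kerberoasting",    False, False),
    ("2024-04", "T1021.002 SMB lateral mvmt", True,  True),
    ("2024-05", "T1558.003 Kerberoasting",    True,  False),  # fired but auto-suppressed
    ("2024-05", "T1021.002 SMB lateral mvmt", True,  True),
]

def detection_trend(runs):
    by_month = defaultdict(lambda: {"total": 0, "detected": 0, "actioned": 0})
    for month, _technique, fired, actioned in runs:
        m = by_month[month]
        m["total"] += 1
        m["detected"] += int(fired)
        m["actioned"] += int(fired and actioned)
    return {
        month: {
            "detected_pct": round(100 * m["detected"] / m["total"], 1),
            "actioned_pct": round(100 * m["actioned"] / m["total"], 1),
        }
        for month, m in sorted(by_month.items())
    }

print(detection_trend(test_runs))
# {'2024-04': {'detected_pct': 50.0, 'actioned_pct': 50.0},
#  '2024-05': {'detected_pct': 100.0, 'actioned_pct': 50.0}}
```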
Example: Response Efficacy
Time metrics matter here, but only specific ones. Don't average across all incidents; segment by severity. Response time for a low-priority alert doesn't matter. Response time for a confirmed ransomware deployment matters a lot.
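Sketched with illustrative incident data, reporting median and 90th percentile per severity bucket instead of one blended average:

```python
# Sketch: response times segmented by severity, reported as median and p90
# per bucket rather than a single blended average. Incident data is illustrative.
from statistics import median, quantiles

# minutes from confirmed detection to containment action
incidents = {
    "critical": [18, 42, 25],
    "high":     [110, 95, 240, 60],
    "low":      [1440, 2900, 700],
}

def response_times(incidents):
    report = {}
    for severity, minutes in incidents.items():
        report[severity] = {
            "count": len(minutes),
            "median_min": median(minutes),
            "p90_min": quantiles(minutes, n=10)[-1] if len(minutes) > 1 else minutes[0],
        }
    return report

for sev, stats in response_times(incidents).items():
    print(sev, stats)
```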
Example: Recovery Efficacy
Test backup restoration. Actually test it, and track how long a full restore really takes against the recovery time you've promised.
Most organizations claim they can recover. Few have tested it. The ones who have often discover their “4-hour RTO” is actually 72 hours once they account for real conditions.
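A sketch of how restore drills might be recorded against the stated RTO; the numbers are illustrative:

```python
# Sketch: recovery efficacy as "measured restore time vs. stated RTO",
# recorded from actual restoration drills. Figures are illustrative.
from dataclasses import dataclass

@dataclass
class RestoreDrill:
    system: str
    stated_rto_hours: float
    measured_hours: float   # wall-clock time of a real test restore

drills = [
    RestoreDrill("customer-db", stated_rto_hours=4, measured_hours=72),
    RestoreDrill("web-tier",    stated_rto_hours=2, measured_hours=1.5),
]

for d in drills:
    status = "MEETS RTO" if d.measured_hours <= d.stated_rto_hours else "MISSES RTO"
    print(f"{d.system}: stated {d.stated_rto_hours}h, measured {d.measured_hours}h -> {status}")
# customer-db: stated 4h, measured 72h -> MISSES RTO
# web-tier: stated 2h, measured 1.5h -> MEETS RTO
```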
Exposure metrics quantify your attack surface—how much opportunity exists for attackers?
Example: External Attack Surface
Continuous attack surface monitoring is now table stakes for organizations with any complexity. Track what’s exposed, trend it monthly, hold teams accountable when exposure grows without justification.
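A sketch of the month-over-month comparison, with counts hard-coded where a real version would pull from your scanning or attack-surface tooling:

```python
# Sketch: month-over-month external exposure trend. The counts are illustrative;
# the point is that any growth gets flagged and has to be justified.

monthly_exposure = {
    # month -> {metric: count}
    "2024-04": {"internet_facing_assets": 41, "exposed_admin_panels": 3, "expired_certs": 5},
    "2024-05": {"internet_facing_assets": 47, "exposed_admin_panels": 5, "expired_certs": 2},
}

def exposure_delta(monthly, prev, curr):
    """Flag every metric that grew so someone has to justify the growth."""
    report = []
    for metric, current_value in monthly[curr].items():
        previous_value = monthly[prev].get(metric, 0)
        change = current_value - previous_value
        flag = "JUSTIFY" if change > 0 else "ok"
        report.append((metric, previous_value, current_value, change, flag))
    return report

for row in exposure_delta(monthly_exposure, "2024-04", "2024-05"):
    print(row)
# ('internet_facing_assets', 41, 47, 6, 'JUSTIFY') ...
```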
Example: Credential Exposure
Credential exposure predicts breaches better than almost any other metric. Track it obsessively.
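One illustration of a number you might track here, with entirely hypothetical inputs: active accounts whose credentials have appeared in breach data and haven't been rotated since the exposure was published.

```python
# Sketch (illustrative): active accounts with credentials seen in known breach
# data that have not been rotated since the sighting. All inputs are hypothetical.
from datetime import date

accounts = {
    # account -> last password rotation
    "alice": date(2024, 5, 20),
    "bob":   date(2023, 1, 10),
}

breach_sightings = [
    # (account, date the exposed credential was published)
    ("bob",   date(2024, 2, 1)),
    ("alice", date(2024, 1, 15)),
]

still_exposed = [
    acct for acct, seen in breach_sightings
    if acct in accounts and accounts[acct] < seen   # not rotated since the sighting
]
print(f"exposed-and-unrotated accounts: {len(still_exposed)} {still_exposed}")
# exposed-and-unrotated accounts: 1 ['bob']
```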
Example: Third-Party Exposure
Your security posture includes your vendors’ security. A supply chain compromise bypasses your controls entirely.
Business-outcome metrics connect security to what executives actually care about.
Example: Security’s Impact on Sales
If security enables faster sales cycles and more deal wins, that’s quantifiable business value.
Example: Regulatory Standing
Regulatory failure has direct business impact—fines, lost business, operational restrictions.
Example: Cost Metrics
These aren’t “less is better” metrics. Underspending on security creates false savings. But understanding cost structure helps with budgeting and comparison.
Most executives don’t want twelve pages of charts. They want to know whether the organization is more or less secure than it was last quarter, and where the biggest remaining gaps are.
Build a one-page dashboard with the top metric from each category, plus trends. Color code red/yellow/green if your executives need that simplification, but be honest about thresholds. A 94% against a 98% target is yellow, not green.
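The threshold logic is trivial, which is exactly why it's worth writing down so nobody fudges it; the targets here are illustrative:

```python
# Sketch: honest red/yellow/green status for a dashboard metric.
# Thresholds are illustrative; the point is that "close to target" is yellow, not green.

def rag_status(value: float, target: float, red_below: float) -> str:
    """Green only at or above target; yellow between; red below the floor."""
    if value >= target:
        return "green"
    if value >= red_below:
        return "yellow"
    return "red"

print(rag_status(94.0, target=98.0, red_below=90.0))   # 'yellow', not green
print(rag_status(98.5, target=98.0, red_below=90.0))   # 'green'
print(rag_status(82.0, target=98.0, red_below=90.0))   # 'red'
```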
Reserve the detailed metrics for security team operations. Executives don’t need to know your atomic test results by MITRE ATT&CK technique. They need to know whether detection is improving or degrading.
Meaningful metrics change behavior.
When you track coverage accurately, you find the gaps. That server nobody owns with no EDR, no patching, and an admin account from 2019. The coverage metric forces uncomfortable conversations about asset management and accountability.
When you track efficacy, you stop assuming controls work. Most organizations have never tested whether their SIEM detects common attack patterns. Testing reveals the answer is usually “not reliably.”
When you track exposure, you quantify risk in terms executives understand. “We have 47 internet-facing assets with critical vulnerabilities” is more compelling than “we need to prioritize patching.”
When you track business outcomes, security becomes a business function instead of a cost center. Revenue impact gets attention that vulnerability counts never will.
Here’s one that’s rarely tracked but highly predictive:
Security decision velocity. How long does it take to make and implement a security decision?
Track time from “we identified a problem” to “the fix is deployed.” When this number is high—weeks or months—your security program is bureaucratically constrained regardless of what other metrics say. When it’s low—days or hours for non-critical issues, hours or less for critical ones—you can actually respond to threats.
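A sketch of tracking it, segmented by criticality, with made-up decision records:

```python
# Sketch: security decision velocity, measured as time from "problem identified"
# to "fix deployed", segmented by criticality. Records are illustrative.
from collections import defaultdict
from datetime import datetime
from statistics import median

decisions = [
    # (criticality, identified, deployed)
    ("critical", datetime(2024, 5, 1, 9, 0),  datetime(2024, 5, 1, 15, 0)),   # 6 hours
    ("critical", datetime(2024, 5, 7, 10, 0), datetime(2024, 5, 9, 10, 0)),   # 2 days
    ("routine",  datetime(2024, 4, 2, 9, 0),  datetime(2024, 5, 28, 9, 0)),   # 8 weeks
]

durations_hours = defaultdict(list)
for criticality, identified, deployed in decisions:
    durations_hours[criticality].append((deployed - identified).total_seconds() / 3600)

for criticality, hours in durations_hours.items():
    print(f"{criticality}: median {median(hours):.1f}h (n={len(hours)})")
# critical: median 27.0h (n=2)
# routine: median 1344.0h (n=1)
```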
Fast is secure. Slow is vulnerable. Most organizations are slow.