
Dwell time—the period between initial compromise and detection—is a key security metric. The longer attackers operate undetected, the more damage they cause. Mandiant’s M-Trends reports, Verizon DBIR, and CrowdStrike threat reports all track median dwell time across their incident response cases.
The conventional wisdom is that better detection reduces dwell time. Deploy a SIEM. Write more rules. Buy a UEBA tool. Watch the dwell time drop.
Reality is more complicated. Some controls have significant impact on dwell time. Others are security theater—expensive investments that don’t move the needle. And some of what works isn’t about detection at all.
Let’s start with what incident data tells us.
Detection source matters enormously. The majority of breaches are still detected by external parties—customers reporting fraud, law enforcement notifications, threat researchers disclosing findings. Internal detection is associated with much shorter dwell time than external notification, often by a factor of 3-5.
Ransomware changed the calculus. Ransomware attacks are self-revealing—the attackers announce themselves. This has dramatically reduced average dwell time in aggregate statistics. But it hasn’t improved detection capability; it’s just that attackers aren’t hiding anymore.
Industry variation is huge. Financial services and tech companies detect faster than healthcare and retail. This isn’t just about spending—it’s about security maturity, talent availability, and organizational structure.
Initial access vector affects dwell time. Compromises via phishing are often detected faster than compromises via exposed services. The attack chain from phishing is more visible—email security, endpoint detection, user reporting—while external exploitation may go directly to server-side components with less monitoring.
Based on incident analysis, here’s what consistently correlates with faster detection:
Why it works: EDR operates where attackers eventually work—on endpoints and servers. Good EDR catches known malware, suspicious behaviors, and post-exploitation activity. It generates alerts for things like credential dumping, lateral movement tools, and persistence mechanisms.
The caveat: “EDR” varies wildly in capability. A well-tuned enterprise EDR with behavioral detection is different from antivirus with EDR branding. Many organizations have EDR deployed but not configured to alert on the behaviors that matter.
What makes EDR effective:
- Near-universal deployment, since coverage gaps are where attackers operate unchallenged
- Behavioral detections enabled and tuned, not just default signature matching
- Alerts routed to people who actually investigate them
Controversial take: Many organizations would be better served by basic EDR with near-100% deployment than by advanced EDR with 80% deployment. Coverage beats capability for detection.
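To make "behaviors that matter" concrete, here is a minimal sketch of the kind of behavioral logic involved: flagging processes that touch LSASS or match common lateral-movement tooling. The event schema and indicator lists are illustrative assumptions, not any vendor's telemetry format.

```python
# Minimal sketch of behavioral endpoint detection logic.
# Event schema and indicator lists are illustrative assumptions,
# not any vendor's actual telemetry format.
from dataclasses import dataclass

@dataclass
class ProcessEvent:
    host: str
    process: str         # image name
    command_line: str
    target_process: str  # for cross-process access events, else ""

# Behaviors worth alerting on regardless of file hash:
LATERAL_MOVEMENT_TOOLS = {"psexec.exe", "wmic.exe", "winrs.exe"}
CREDENTIAL_DUMP_TARGET = "lsass.exe"

def classify(event: ProcessEvent) -> str | None:
    """Return an alert label for suspicious behavior, or None."""
    if event.target_process.lower() == CREDENTIAL_DUMP_TARGET:
        return "possible-credential-dumping"
    if event.process.lower() in LATERAL_MOVEMENT_TOOLS:
        return "possible-lateral-movement"
    if "sekurlsa" in event.command_line.lower():  # mimikatz module name
        return "credential-theft-tooling"
    return None

events = [
    ProcessEvent("ws-042", "procdump.exe", "-ma lsass.exe out.dmp", "lsass.exe"),
    ProcessEvent("ws-042", "psexec.exe", r"\\srv-01 cmd.exe", ""),
]
for e in events:
    if (label := classify(e)):
        print(f"{e.host}: {label} via {e.process}")
```

The point of the sketch is that none of these rules depend on recognizing a specific malware sample; they fire on what attackers do, which is why tuning and deployment coverage matter more than the vendor logo.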
Why it works: Users are often the first to notice something wrong. A suspicious email. An unexpected prompt. Something that doesn’t feel right. Organizations with strong phishing reporting culture catch attacks earlier because employees escalate concerns.
What makes reporting culture effective:
- A low-friction way to report, ideally one click in the mail client
- No blame for users who clicked before reporting, since blame suppresses reporting
- Fast feedback, so reporters see that their report triggered action
Controversial take: User security awareness training has questionable impact on click rates. But building a reporting culture—where users who click something suspicious immediately report it—has measurable impact on dwell time. The training focus should shift from “don’t click” to “click and report.”
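If the goal is reporting rather than click avoidance, measure reporting. Below is a toy sketch of scoring a phishing simulation on report rate and time-to-first-report; the result records and field layout are hypothetical.

```python
# Toy scoring of a phishing simulation that optimizes for reporting,
# not click avoidance. Records are hypothetical: (user, clicked,
# reported, minutes_to_report or None).
results = [
    ("alice", True,  True,  4),
    ("bob",   True,  False, None),
    ("carol", False, True,  11),
]

clicked = sum(1 for _, c, _, _ in results if c)
reported = sum(1 for _, _, r, _ in results if r)
report_times = [m for _, _, r, m in results if r and m is not None]

print(f"click rate:  {clicked / len(results):.0%}")
print(f"report rate: {reported / len(results):.0%}")
# Time-to-first-report approximates how quickly the SOC would hear
# about a real campaign -- the number that actually bounds dwell time.
print(f"first report after {min(report_times)} min")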
Why it works: Attackers need credentials to move laterally. Identity systems (Active Directory, Entra ID, Okta) see this activity. Organizations with strong identity monitoring catch credential abuse faster.
What makes identity monitoring effective:
- A maintained baseline of normal authentication behavior per user and system
- Alerting tuned to credential abuse: unusual logins, privilege changes, lateral movement patterns
- Coverage of the identity systems that matter (Active Directory, Entra ID, Okta)
The caveat: Identity-based detection requires baseline knowledge. You can’t detect “unusual” without knowing “usual.” This takes time to build and maintain.
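A minimal sketch of what knowing "usual" means in practice: build a per-user baseline from historical login events, then flag deviations. The event shape is an assumption; real systems also track devices, ASNs, and impossible travel, and age out stale observations.

```python
# Minimal per-user login baseline, assuming events of the form
# (user, country, hour_of_day). A sketch, not a production detector.
from collections import defaultdict

history = [
    ("alice", "US", 9), ("alice", "US", 14), ("alice", "US", 10),
    ("bob",   "DE", 8), ("bob",   "DE", 17),
]

baseline = defaultdict(lambda: {"countries": set(), "hours": set()})
for user, country, hour in history:
    baseline[user]["countries"].add(country)
    baseline[user]["hours"].add(hour)

def is_anomalous(user: str, country: str, hour: int) -> bool:
    b = baseline.get(user)
    if b is None:
        return True  # no baseline yet: unusual by definition
    new_country = country not in b["countries"]
    # crude "odd hour" check; ignores midnight wraparound for brevity
    odd_hour = all(abs(hour - h) > 3 for h in b["hours"])
    return new_country or odd_hour

print(is_anomalous("alice", "RU", 3))   # True: new country, odd hour
print(is_anomalous("alice", "US", 11))  # False: within baseline
```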
Why it works: Detection is useless without response. Organizations with defined IR processes move from alert to investigation faster. Those without spend time figuring out who does what, what tools to use, and what authority they have.
What makes an IR process effective:
- Defined roles and authority, so nobody has to ask who decides
- Playbooks for common scenarios, so the first hour isn't spent improvising
- Automation for the repetitive early steps of triage and containment
Controversial take: Response speed matters as much as detection speed. An organization that detects at hour 3 but responds at hour 24 has effectively 24-hour dwell time. Investing in response capacity—automation, playbooks, authority—often has more impact than investing in another detection tool.
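The arithmetic is worth making explicit: effective dwell time runs from compromise to containment, not from compromise to alert. A sketch with made-up timestamps, matching the hour-3/hour-24 example above:

```python
# Effective dwell time = compromise -> containment, not compromise -> alert.
# Timestamps are made up to illustrate the arithmetic.
from datetime import datetime

compromise  = datetime(2024, 3, 1, 2, 15)
first_alert = datetime(2024, 3, 1, 5, 15)   # detected at hour 3
contained   = datetime(2024, 3, 2, 2, 15)   # contained at hour 24

detection_lag = first_alert - compromise
response_lag  = contained - first_alert
effective_dwell = contained - compromise

print(f"detection lag: {detection_lag}")        # 3:00:00
print(f"response lag:  {response_lag}")         # 21:00:00
print(f"effective dwell: {effective_dwell}")    # 1 day, 0:00:00
```

Run over a quarter's incidents, numbers like these usually show response lag, not detection lag, dominating the total.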
Why it works: Segmentation doesn’t prevent initial access, but it forces attackers to make more moves—and each move is an opportunity for detection. Flat networks allow single-hop access to crown jewels. Segmented networks create multiple detection opportunities along the path.
What makes segmentation effective:
- Segments designed around real threat models rather than compliance checklists
- Monitoring at segment boundaries, where forced attacker moves become visible
- Isolation of crown jewels so no single hop reaches them
The caveat: Segmentation is expensive and operationally disruptive. Poorly implemented segmentation creates friction without security benefit. It works best when designed around real threat models rather than compliance checklists.
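One reason segmentation helps detection: boundaries give you choke points where cross-segment traffic can be alerted on. Here is a sketch assuming simplified flow records and hypothetical segment ranges.

```python
# Sketch: alert on flows crossing from a user segment into a
# restricted segment. Ranges and flow format are assumptions.
import ipaddress

RESTRICTED = ipaddress.ip_network("10.50.0.0/16")    # crown-jewel segment
USER_NETS = [ipaddress.ip_network("10.10.0.0/16")]   # workstation segment

flows = [
    ("10.10.4.7", "10.50.1.20", 445),   # workstation -> restricted SMB
    ("10.10.4.7", "10.10.9.9", 443),    # intra-segment, ignore
]

for src, dst, port in flows:
    s, d = ipaddress.ip_address(src), ipaddress.ip_address(dst)
    if d in RESTRICTED and any(s in n for n in USER_NETS):
        # Each hop an attacker is forced to make shows up as a line
        # like this in flow logs -- a detection opportunity.
        print(f"ALERT: cross-segment access {src} -> {dst}:{port}")
```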
Having a SIEM doesn’t reduce dwell time. Having a SIEM with well-tuned detection rules, active monitoring, and rapid response reduces dwell time. The correlation is with the people and process around the SIEM, not the SIEM itself.
Many organizations buy SIEMs, collect logs, write some rules, and then… nothing changes. The SIEM generates alerts that nobody investigates. The log data exists but isn’t analyzed.
The honest assessment: A SIEM is infrastructure, not a solution. Its impact depends entirely on what you do with it.
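One honest test of whether a SIEM is operationalized: what fraction of its alerts are ever triaged, and how quickly. A toy sketch over hypothetical alert records:

```python
# Toy check of SIEM operationalization: are alerts actually triaged?
# Alert records and rule names are hypothetical.
alerts = [
    {"rule": "lsass-access",       "triaged": True,  "minutes_to_triage": 12},
    {"rule": "impossible-travel",  "triaged": True,  "minutes_to_triage": 95},
    {"rule": "new-admin-account",  "triaged": True,  "minutes_to_triage": 30},
    {"rule": "dns-tunnel",         "triaged": False, "minutes_to_triage": None},
]

triaged = [a for a in alerts if a["triaged"]]
rate = len(triaged) / len(alerts)
median = sorted(a["minutes_to_triage"] for a in triaged)[len(triaged) // 2]

print(f"triage rate: {rate:.0%}")           # untouched alerts = shelfware
print(f"median time to triage: {median} min")
```

If the triage rate is low, the next investment belongs in people and process, not in another data source.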
Consuming threat intelligence feeds doesn't automatically reduce dwell time. Most threat intel is stale by the time it reaches you: indicators of compromise (IOCs) tied to infrastructure the attacker rotated out weeks ago.
What works: actionable threat intelligence integrated into detection and response. What doesn’t work: a dashboard of threat feeds that nobody uses to drive action.
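In practice, "actionable" mostly means fresh and wired into matching. Here is a sketch that drops stale indicators before matching them against connection logs; the feed format and the 14-day freshness window are assumptions.

```python
# Sketch: only match IOCs that are still fresh enough to matter.
# Feed format and the 14-day freshness window are assumptions.
from datetime import datetime, timedelta

NOW = datetime(2024, 6, 1)
MAX_AGE = timedelta(days=14)

feed = [
    {"ioc": "198.51.100.7", "last_seen": datetime(2024, 5, 28)},  # fresh
    {"ioc": "203.0.113.9",  "last_seen": datetime(2024, 2, 1)},   # stale
]
fresh = {e["ioc"] for e in feed if NOW - e["last_seen"] <= MAX_AGE}

connection_log = ["10.1.2.3 -> 198.51.100.7", "10.1.2.3 -> 203.0.113.9"]
for line in connection_log:
    dst = line.split(" -> ")[1]
    if dst in fresh:
        # Stale indicators are filtered out rather than paged on.
        print(f"ALERT: traffic to fresh IOC {dst}")
```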
Vulnerability scanners find vulnerabilities. They don’t detect active attacks. Organizations sometimes conflate vulnerability management with threat detection—they’re different capabilities.
Fixing vulnerabilities reduces attack surface. It doesn’t reduce dwell time for attacks that succeed via other vectors.
Training teaches users what not to do. It doesn't teach them to detect ongoing attacks, and it doesn't reliably prevent initial compromise either. The impact on dwell time is indirect at best.
(Phishing reporting is different—that’s about behavior after something suspicious happens, not preventing the click.)
Log aggregation has diminishing returns. Collecting every possible data source creates noise, not signal. Most breaches are detectable in a handful of data sources—endpoint telemetry, identity logs, network metadata—if you’re looking correctly.
Adding data sources without corresponding detection logic and analyst capacity is waste.
“We generate 10,000 alerts per day” is not evidence of good detection. It’s evidence of something, but probably not security effectiveness.
Alert volume often correlates inversely with detection quality. High-noise environments train analysts to ignore alerts, creating blind spots for the alerts that matter.
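A quick way to quantify the noise problem is per-rule precision. The sketch below flags rules whose true-positive rate is low enough to train analysts into ignoring them; the counts and the 5% threshold are illustrative.

```python
# Per-rule alert precision: the metric that exposes noisy rules.
# Counts and the 5% threshold are illustrative.
rules = {
    "lsass-access":   {"alerts": 40,   "true_positives": 12},
    "any-powershell": {"alerts": 9400, "true_positives": 3},
}

for name, r in rules.items():
    precision = r["true_positives"] / r["alerts"]
    verdict = "KEEP" if precision >= 0.05 else "TUNE OR RETIRE"
    print(f"{name:>16}: {precision:.1%} precision -> {verdict}")
# High-volume, low-precision rules don't just waste analyst time; they
# teach analysts that alerts are ignorable, which is how real ones get missed.
```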
The industry obsesses over detection speed. But if you detect in 10 minutes and respond in 10 hours, you haven't achieved 10-minute dwell time; you've achieved 10 hours of effective dwell time, and 10 hours of damage.
Response is often the bottleneck. Improving response capacity and speed may be more valuable than yet another detection investment.
UEBA, AI-powered detection, behavioral analytics—these have value in mature environments. In immature environments, they often generate more noise than signal without the baseline data and analyst expertise to interpret outputs.
Basic detection done well beats advanced detection done poorly.
Mapping controls to dwell time impact:
| Control | Dwell Time Impact | Implementation Difficulty | Common Failure Mode |
|---|---|---|---|
| EDR (well-tuned) | High | Medium | Partial deployment, alerts ignored |
| Phishing reporting culture | High | Low | Blame culture suppresses reporting |
| Identity monitoring | High | Medium | No baseline, too many alerts |
| Incident response process | High | Medium | Process exists but isn’t followed |
| Network segmentation | Medium | High | Incomplete, easily bypassed |
| SIEM | Medium (depends on people and process) | High | Deployed but not operationalized |
| Threat intel feeds | Low-Medium | Low | Consumed but not actioned |
| Vulnerability scanning | Low (indirect) | Low | Confused with detection |
If reducing dwell time is a priority:
First, ensure response capability. Can you actually respond to detection? If not, faster detection doesn’t help.
Second, maximize EDR coverage. Get it on everything, tune it for real threats, ensure alerts are investigated.
Third, build identity monitoring. Authentication events are high-signal, low-noise for detecting compromise.
Fourth, build reporting culture. Make it easy and safe for employees to report concerns.
Fifth, assess your detection operationally. Are alerts investigated? How quickly? What percentage are true positives?
Then consider additional detection investments. But only after the fundamentals are working.
Dwell time improves when detection is effective and response is fast. Both matter. Neither is sufficient alone.