January 2, 2026

How to Intelligently Monitor for Attacker Recon and Anomalous Behavior

Your SIEM generates noise. Here's how to build detection that catches actual attackers—signal over noise, resource-appropriate approaches for teams without a 50-person SOC.

“Monitor everything” is the conventional wisdom. Collect all the logs. Build all the alerts. Correlate all the events.

In practice, this produces a SIEM that generates hundreds of alerts daily, most of which are noise. Your security team spends hours triaging false positives. Actual attacks hide in the flood.

Smart monitoring isn’t about more alerts—it’s about better alerts. Detecting attacker behavior rather than just logging everything and hoping you’ll notice.

The Detection Philosophy Problem

Most organizations approach detection backwards. They start with data sources—“we have Windows event logs, let’s write alerts against them”—and end up with detection based on what’s available rather than what matters.

Flip the approach. Start with attacker behavior. What do attackers actually do? Then ask: how would we detect that behavior? What data do we need?

This sounds obvious but is rarely practiced. The result of data-first approaches is alert fatigue, missed attacks, and coverage skewed toward behaviors that are easy to detect rather than those that matter.

What Attackers Actually Do: The Reconnaissance Phase

Before exploitation comes reconnaissance. Attackers map your environment to find opportunities. Detecting this phase provides early warning—but it’s also where most monitoring fails.

External Reconnaissance

What happens: Port scanning, service enumeration, vulnerability scanning, banner grabbing, DNS reconnaissance.

Why detection is hard: Internet background noise is massive. Automated scanners, researchers, bots—everyone is scanning everyone. Your attack surface receives constant probing. Distinguishing targeted reconnaissance from noise is nearly impossible at scale.

What to detect instead:

  • Unusual patterns in failed authentication attempts (not just counts—patterns across accounts, timing, source distribution)
  • Probes of specific internal services that shouldn’t be known externally
  • Reconnaissance that correlates with later targeted activity
  • Concentration of reconnaissance from single sources on high-value targets

Resource-appropriate approach: Don’t try to alert on all scanning. Focus on scanning that targets your crown jewels specifically, or scanning followed by authentication attempts. Correlation is key.
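The scan-then-authenticate correlation can be sketched as a simple join across two log sources. This is an illustrative sketch, not a production rule; the event dictionaries and field names ("src", "time") are assumptions standing in for whatever your firewall and authentication log schemas actually use.

```python
from datetime import timedelta

def correlate_scan_then_auth(scan_events, auth_events, window_minutes=60):
    """Return source IPs seen scanning that then attempted authentication
    within `window_minutes`. Field names are placeholders for your schema."""
    scans_by_src = {}
    for ev in scan_events:
        scans_by_src.setdefault(ev["src"], []).append(ev["time"])
    flagged = set()
    for ev in auth_events:
        # A source that scanned and then tried to log in is far more
        # interesting than either event alone.
        for scan_time in scans_by_src.get(ev["src"], []):
            if timedelta(0) <= ev["time"] - scan_time <= timedelta(minutes=window_minutes):
                flagged.add(ev["src"])
                break
    return flagged
```

The payoff is the intersection: background scanners that never authenticate drop out, and authentication noise with no preceding scan drops out.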

Internal Reconnaissance

Once inside (via phishing, compromised credentials, etc.), attackers map the internal network.

What happens: Network scanning, LDAP/AD enumeration, share discovery, internal DNS queries, service discovery.

What to detect:

  • Internal port scanning (especially from user workstations—users don’t typically run port scans)
  • Mass LDAP queries for group membership, user lists
  • SMB share enumeration across multiple hosts
  • Unusual DNS query patterns (high volume of internal lookups, queries for sensitive systems)

Example detections:


Source IP X attempted connections to > 20 unique internal IPs on port 445 in 10 minutes

Account X queried >100 user or group objects from Active Directory in 5 minutes

Host X connected to >50 unique ports across internal network in 1 hour
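The first of these rules (SMB fan-out from one source) can be sketched as a sliding-window count over connection logs. This is a minimal sketch assuming pre-sorted events with hypothetical "time", "src", "dst", and "dport" fields; a real implementation would read from your SIEM or flow collector.

```python
from collections import defaultdict
from datetime import timedelta

def detect_smb_fanout(conn_events, threshold=20, window=timedelta(minutes=10)):
    """Flag sources contacting more than `threshold` unique internal IPs
    on port 445 within a sliding time window. `conn_events` must be
    sorted by time; field names are placeholders for your schema."""
    by_src = defaultdict(list)  # src -> [(time, dst), ...] within the window
    flagged = set()
    for ev in conn_events:
        if ev["dport"] != 445:
            continue
        hits = by_src[ev["src"]]
        hits.append((ev["time"], ev["dst"]))
        # Expire entries older than the window, then count unique targets.
        while hits and ev["time"] - hits[0][0] > window:
            hits.pop(0)
        if len({dst for _, dst in hits}) > threshold:
            flagged.add(ev["src"])
    return flagged
```

The same window-and-count shape applies to the LDAP and port-spread rules; only the event filter and threshold change.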

Detecting Lateral Movement

After reconnaissance, attackers move. This is often the most detectable phase—movement creates logs.

Credential-Based Movement

What happens: Attackers use stolen credentials to authenticate to other systems—RDP, SSH, WinRM, SMB.

What to detect:

  • First-ever login from an account to a particular system
  • Login patterns inconsistent with user’s normal behavior
  • Service account authentication from unexpected sources
  • Successful authentication after credential access indicators (e.g., Mimikatz detection followed by new logins from that host)

Example detections:


Account X logged into Host Y for the first time (never seen in 90-day baseline)

Account X normally authenticates from Workstation A; now authenticating from Workstation B

Service account S authenticated interactively (service accounts shouldn’t log in interactively)
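The first-ever-login rule reduces to a set lookup against a baseline of previously seen (account, host) pairs. A minimal sketch, with assumed field names; building and refreshing the baseline set from 90 days of successful-logon events is the real work.

```python
def first_seen_logins(baseline_pairs, new_events):
    """Flag logins whose (account, host) pair never appears in the baseline.
    `baseline_pairs` is a set of (account, host) tuples built from e.g.
    90 days of successful-authentication logs; field names are placeholders."""
    return [ev for ev in new_events
            if (ev["account"], ev["host"]) not in baseline_pairs]
```

Note the deliberate asymmetry: a brand-new pair is worth an analyst's time precisely because the baseline makes it rare.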

Pass-the-Hash / Pass-the-Ticket

What happens: Attackers use credential hashes or Kerberos tickets without knowing passwords.

What to detect:

  • NTLM authentication from accounts that typically use Kerberos
  • Kerberos ticket requests with unusual encryption types
  • Authentication events without corresponding password logon events
  • Overpass-the-hash patterns (Kerberos request immediately following NTLM event from same host)

Resource-appropriate approach: These detections require baseline knowledge of normal authentication patterns. Start with service accounts and privileged accounts—they have more predictable patterns.
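The NTLM-where-Kerberos-is-normal detection can be sketched as a per-account protocol baseline. This is an assumption-laden sketch: the "account"/"proto" fields and the 95% ratio are illustrative choices, and real logs distinguish protocols via event fields rather than a clean string.

```python
from collections import Counter, defaultdict

def ntlm_anomalies(baseline_events, new_events, kerberos_ratio=0.95):
    """Flag NTLM authentications by accounts that used Kerberos for at
    least `kerberos_ratio` of their baseline authentications."""
    proto_counts = defaultdict(Counter)
    for ev in baseline_events:
        proto_counts[ev["account"]][ev["proto"]] += 1
    alerts = []
    for ev in new_events:
        if ev["proto"] != "NTLM":
            continue
        counts = proto_counts[ev["account"]]
        total = sum(counts.values())
        # Only alert when the account has history and that history is
        # overwhelmingly Kerberos -- an NTLM event is then the anomaly.
        if total and counts["Kerberos"] / total >= kerberos_ratio:
            alerts.append(ev)
    return alerts
```

Starting this with service and privileged accounts, as suggested above, keeps the baseline small and the ratios stable.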

Remote Execution

What happens: PsExec, WMI, WinRM, SSH—attackers execute commands on remote systems.

What to detect:

  • PsExec or equivalent service installations
  • WMI process creation on remote hosts
  • Remote PowerShell sessions (especially from unexpected sources)
  • SSH sessions from user workstations to servers

Example detections:


A service was created on Host Y via remote call from Host X

PowerShell remoting session initiated from Host X to Host Y (not typical admin activity)
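The remote service-creation rule can be sketched as a filter over service-install records (on Windows, System log event ID 7045). The field names here ("event_id", "service_name", "source_host") are placeholders for your log schema, and the expected-installer set is something you would maintain per environment.

```python
def flag_remote_service_installs(events, expected_installers):
    """Flag service-creation records whose service name matches common
    remote-execution tooling, or whose installing host is not a known
    software-deployment source. Field names are schema placeholders."""
    remote_exec_names = {"PSEXESVC"}  # PsExec's default service name
    alerts = []
    for ev in events:
        if ev.get("event_id") != 7045:  # Windows: "a service was installed"
            continue
        if (ev.get("service_name", "").upper() in remote_exec_names
                or ev.get("source_host") not in expected_installers):
            alerts.append(ev)
    return alerts
```

Legitimate deployment tools install services constantly, which is why the allowlist of expected installers does most of the noise reduction here.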

Detecting Data Access and Exfiltration

Attackers access data. Often, this is the objective.

Unusual Data Access

What to detect:

  • Bulk file access (user accesses 1,000 files in an hour vs. normal baseline of 50)
  • Access to sensitive file shares from unusual accounts or systems
  • Database queries retrieving more data than typical
  • Access outside normal working hours for that user

Example detections:


User X accessed >500 files on SharePoint in 1 hour (baseline: <100)

Account X accessed HR share for the first time

Privileged account activity at 3 AM (account has no prior 3 AM activity)
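The bulk-access rule is a baseline comparison, not a fixed threshold. A sketch under stated assumptions: per-user hourly access counts are already aggregated, and the floor and multiplier are illustrative starting points you would tune.

```python
import statistics

def bulk_access_alerts(baseline_hourly, current_hourly, multiplier=5, floor=200):
    """Flag users whose file accesses this hour exceed both an absolute
    floor and `multiplier` times their baseline median hourly rate.
    `baseline_hourly` maps user -> list of historical hourly counts."""
    alerts = []
    for user, count in current_hourly.items():
        history = baseline_hourly.get(user, [])
        typical = statistics.median(history) if history else 0
        # Requiring both conditions avoids alerting on light users whose
        # "5x normal" is still a trivially small number.
        if count >= floor and count > multiplier * typical:
            alerts.append((user, count, typical))
    return alerts
```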

Exfiltration Indicators

What to detect:

  • Large outbound data transfers (especially to unusual destinations)
  • HTTP/S uploads to file sharing services
  • DNS tunneling (high-volume DNS requests, long subdomain strings)
  • Connections to known malicious infrastructure

Resource-appropriate approach: You can’t inspect all traffic. Focus on:

  • Unusual volume from specific hosts (statistical anomaly vs. baseline)
  • Connections to uncategorized or newly-registered domains
  • Data transfers during credential or access anomalies
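One piece of this that is cheap to compute from DNS logs alone is the tunneling heuristic: long or high-entropy leftmost labels. This is a sketch of that single heuristic, with illustrative thresholds; it is noisy on its own (CDNs and telemetry domains trip it) and belongs behind the correlation logic described above.

```python
import math
from collections import Counter

def shannon_entropy(s):
    """Bits of entropy per character in the string."""
    counts = Counter(s)
    return -sum(c / len(s) * math.log2(c / len(s)) for c in counts.values())

def suspicious_dns_queries(queries, max_label_len=40, entropy_threshold=4.0):
    """Flag queried names with unusually long or high-entropy leftmost
    labels, a common (though noisy) DNS-tunneling indicator.
    Thresholds are illustrative starting points, not calibrated values."""
    flagged = []
    for name in queries:
        label = name.split(".")[0]
        if len(label) > max_label_len or (
                len(label) >= 16 and shannon_entropy(label) > entropy_threshold):
            flagged.append(name)
    return flagged
```

Pairing this with query volume per source host, rather than alerting on single lookups, is what makes it usable.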

Building Detection Without a 50-Person SOC

Enterprise detection strategies don’t translate to smaller teams. Here’s a scaled approach:

Tier 1: Highest-Value, Lowest-Noise Detections

These generate few false positives and catch high-impact activity. Start here.

  • New admin account created (especially outside change windows)
  • Service account interactive logon (should never happen)
  • First-ever login to domain controller from non-admin account
  • Security tool tampering (EDR disabled, logs cleared)
  • Malware detection (your EDR already does this—make sure alerts are actionable)
  • Known-bad indicators (threat intel matches in logs)

These detections are low-tuning, high-value. Get these working first.

Tier 2: Behavioral Baselines for High-Value Accounts

Focus on accounts where compromise would be catastrophic.

  • Domain admins
  • Cloud administrator accounts
  • Service accounts with broad access
  • Executives (high-value phishing targets)

Build behavioral baselines for these accounts:

  • Normal login times
  • Normal source systems
  • Normal activity patterns

Alert on deviation. The limited scope makes tuning manageable.
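For a handful of high-value accounts, the deviation check can be this simple. A minimal sketch assuming each account's profile (typical login hours, typical source hosts) has already been derived from historical logs; the field names are placeholders.

```python
def baseline_deviation_alerts(baselines, events):
    """Flag events for tracked accounts that fall outside the account's
    normal hours or normal source hosts. `baselines` maps account ->
    {"hours": set of typical login hours, "sources": set of hosts}."""
    alerts = []
    for ev in events:
        profile = baselines.get(ev["account"])
        if profile is None:
            continue  # only high-value accounts are baselined (Tier 2 scope)
        if ev["hour"] not in profile["hours"] or ev["source"] not in profile["sources"]:
            alerts.append(ev)
    return alerts
```

The explicit skip for unbaselined accounts is the point of this tier: narrow scope is what keeps the tuning burden manageable.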

Tier 3: Network Anomaly Detection

Harder to tune, but valuable:

  • Internal scanning patterns
  • Unusual east-west traffic
  • Large data movements between segments
  • Connections to internet destinations with bad reputation

Modern NDR (Network Detection and Response) tools help here, but they’re expensive. Proxy and firewall logs with custom analysis can provide some of the same visibility.

Tier 4: Expand Coverage

Once Tiers 1-3 are solid and tuned, expand:

  • Baseline all privileged account activity
  • Detection for specific MITRE ATT&CK techniques relevant to your threat model
  • Application-layer detection in critical applications
  • User and entity behavior analytics (UEBA) across broader population

The Tuning Reality

Every detection requires tuning. Alerts that work in lab conditions generate noise in production.

Expect this process:

  1. Deploy detection
  2. Observe what it triggers
  3. Identify false positive patterns
  4. Add exclusions for legitimate behavior
  5. Monitor for exclusion abuse (attackers mimicking excluded behavior)
  6. Repeat

This isn’t failure—it’s how detection engineering works. Budget time for tuning. A detection that alerts 100 times a day and gets ignored is worse than no detection.

Tuning heuristics:

  • If >50% of alerts are false positives, the detection needs work
  • If analysts ignore a detection, it’s not providing value
  • Exclusions should be specific and documented
  • Review exclusions periodically—legitimate behavior changes

Correlation Over Volume

Single events are often ambiguous. The authentication from a new device might be an attacker—or an employee got a new laptop. The large file access might be exfiltration—or someone preparing a presentation.

Correlation provides confidence:

Example chain:

  1. Phishing email delivered (email security log)
  2. User clicked link (proxy log)
  3. Suspicious process execution (EDR)
  4. Unusual authentication (Windows event log)
  5. Internal reconnaissance (network traffic)

Any single event might be noise. The chain is clearly malicious.

Build correlation rules that connect related events:

  • Source/destination IP correlation
  • User/account correlation across data sources
  • Time-window correlation (events within N minutes)
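The phishing chain above can be expressed as an ordered, time-windowed match per user. A sketch under stated assumptions: events from all sources are normalized to a common schema with "user", "stage", and "time" fields and sorted by time; real pipelines also correlate on host and IP, not just account.

```python
from datetime import timedelta

def correlate_chain(events, stages, window=timedelta(minutes=30)):
    """Return users whose events contain every stage in `stages`, in order,
    with consecutive stages no more than `window` apart. `events` must be
    sorted by time; field names are schema placeholders."""
    by_user = {}
    for ev in events:
        by_user.setdefault(ev["user"], []).append(ev)
    hits = set()
    for user, evs in by_user.items():
        idx, last_time = 0, None
        for ev in evs:
            # Advance through the chain only on the next expected stage,
            # and only if it arrives within the window of the previous one.
            if ev["stage"] == stages[idx] and (
                    last_time is None or ev["time"] - last_time <= window):
                last_time = ev["time"]
                idx += 1
                if idx == len(stages):
                    hits.add(user)
                    break
    return hits
```

Each stage alone would be a noisy alert; requiring the ordered chain is what turns five ambiguous events into one confident one.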

Metrics for Detection

Measure your detection capability:

Coverage: What percentage of MITRE ATT&CK techniques relevant to your threat model do you have detection for?

Efficacy: When you test detection (via red team, purple team, or atomic tests), what percentage fires correctly?

Signal-to-noise: What percentage of alerts are true positives vs. false positives?

Time to detect: For true positives, how long between malicious activity and alert?

Time to triage: How long does it take analysts to determine if an alert is real?

Track these over time. Improvement should be visible.
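Two of these metrics fall straight out of triaged alert data. A sketch assuming each alert record carries an analyst verdict and, for true positives, a measured detection delay; the field names are illustrative.

```python
def detection_metrics(alerts):
    """Compute signal-to-noise and mean time-to-detect from triaged alerts.
    Each alert has "verdict" ("tp"/"fp") and, for true positives, a
    "detect_delay_minutes" value (malicious activity to alert)."""
    total = len(alerts)
    tps = [a for a in alerts if a["verdict"] == "tp"]
    signal_ratio = len(tps) / total if total else 0.0
    delays = [a["detect_delay_minutes"] for a in tps]
    mean_ttd = sum(delays) / len(delays) if delays else None
    return {"signal_ratio": signal_ratio,
            "mean_time_to_detect_minutes": mean_ttd}
```

Coverage and efficacy need test data (red team exercises or atomic tests) rather than production alerts, which is why they are worth measuring separately.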

The Honest Assessment

You will not detect everything. Sophisticated attackers with time and resources can evade detection.

The goal isn’t perfect detection—it’s making detection reliable enough that attackers can’t operate freely, and fast enough that damage is limited when they do get in.

Focus on high-value, low-noise detections first. Build behavioral baselines for your most critical accounts. Invest in correlation and context. Tune relentlessly.

A few detections that work beat hundreds of alerts nobody investigates.
