Penetration Testing

Why Detection Gaps Are More Valuable Than Vulnerabilities

> grep -c 'ALERT' /var/log/siem/pentest_window.log || echo '...zero'

Peter Bassill · 22 July 2025 · 16 min read
detection engineering · SOC · SIEM · detection gaps · blue team · MITRE ATT&CK · incident response

The vulnerability tells you what broke. The detection gap tells you why nobody noticed.

A penetration test against a logistics company produces twelve findings. The critical finding is a Kerberoasting attack that escalated to Domain Admin through a service account with a weak password. The remediation is straightforward: change the service account password to a 25+ character random string, migrate to a Group Managed Service Account, and disable RC4 encryption for Kerberos. The vulnerability is closed. The risk is mitigated.

But the pen test also produced a thirteenth finding — one that doesn't map neatly to a CVSS score or a remediation ticket. Across three hours of active compromise, the tester performed LLMNR poisoning, credential cracking, LDAP enumeration, Kerberos ticket requests for every SPN in the domain, lateral movement via PsExec to three servers, privilege escalation through Backup Operator abuse, NTDS.dit extraction from the domain controller, and bulk file share access on the finance server. The SOC — staffed 24/7, running CrowdStrike on every endpoint, ingesting logs from Active Directory, the firewalls, and the VPN — generated zero alerts.

The Kerberoasting vulnerability is one finding with one remediation. The detection gap is a systemic failure across the entire defensive stack — and fixing it improves the organisation's ability to detect not just Kerberoasting, but every attack that uses similar techniques: credential capture, lateral movement, privilege escalation, and data access. The vulnerability fix closes one door. The detection gap fix installs a security camera on every corridor.

The Core Argument

Vulnerabilities are infinite. New misconfigurations appear every time the environment changes. New CVEs are published daily. You will never achieve a state where all vulnerabilities are remediated — the attack surface regenerates faster than remediation can close it. Detection capability, by contrast, is cumulative. Every detection rule you write persists. Every log source you onboard remains. Every analyst skill you develop compounds. Investing in detection produces returns that survive the next vulnerability.


Why detection scales and remediation doesn't.

Organisations intuitively prioritise vulnerability remediation — finding and fixing the specific weaknesses the pen test identified. This is necessary but insufficient, because remediation addresses individual findings while detection addresses categories of attack. The economics are fundamentally different.

| | Vulnerability Remediation | Detection Engineering |
| --- | --- | --- |
| Scope of protection | Fixes the specific vulnerability that was found. The Kerberoastable service account is remediated. If another Kerberoastable account appears next month, it's a new vulnerability requiring new remediation. | Detects the technique, regardless of which specific vulnerability enables it. A Kerberoasting detection rule alerts whether the target is svc_backup, svc_sql, or a service account that doesn't exist yet. |
| Durability | Temporary. The fix persists only until the environment changes — a new service account is created, a misconfiguration is reintroduced, a patch is missed. The next pen test may find the same class of vulnerability in a different location. | Cumulative. A well-written detection rule persists indefinitely. It continues to detect the technique across environmental changes, new systems, and future attack campaigns. Detection capability grows; vulnerability posture fluctuates. |
| Coverage breadth | One fix, one vulnerability. Remediating 12 pen test findings closes 12 specific gaps. | One detection rule covers an entire technique. A lateral movement detection rule that alerts on anomalous PsExec usage covers every lateral movement attempt via PsExec — whether it originates from the Kerberoasting chain, a phished credential, or a compromised VPN. |
| Time to value | Variable. Some fixes take minutes (password change). Others take months (application redesign, infrastructure migration, vendor dependency). | Often faster. A SIEM detection rule can be written, tested, and deployed within hours of receiving the pen test's detection gap analysis. The organisation's detection capability improves the same week the report is delivered. |
| Return on investment | Linear. Each pound spent fixes one finding. The next pen test produces new findings requiring new spending. | Compounding. Each detection rule protects against the technique permanently. Over successive pen tests, the detection gap shrinks — each engagement finds fewer undetected actions, and the investment in prior detection engineering pays dividends across every subsequent assessment. |

This is not an argument against remediation — vulnerabilities must be fixed. It's an argument that detection gaps deserve equal priority, equal budget, and equal board-level attention. An organisation that remediates every pen test finding but ignores every detection gap is an organisation that can be compromised by the next finding — and won't know until it's too late.


Anatomy of the finding nobody reports.

A detection gap exists when an attacker action that should generate an alert doesn't. It's the absence of a response — which makes it invisible unless someone specifically looks for it. Detection gaps have four components, each of which points to a different root cause and a different remediation.

| Component | The Question | Common Root Causes |
| --- | --- | --- |
| Telemetry | Did the attacker action generate any log data at all? Is the relevant log source enabled, collected, and ingested into the SIEM? | Kerberos service ticket requests (Event ID 4769) not audited. PowerShell script block logging not enabled. DNS query logging disabled. SMB access auditing not configured. CloudTrail not enabled in all regions. The log source exists — it's just not turned on. |
| Parsing | Is the log data correctly parsed so that the SIEM can query the relevant fields? Can the rule distinguish a Kerberos TGS request for RC4 from one for AES? | Log data ingested but stored as unparsed raw text. Custom application logs without a parser. Windows event logs ingested but key fields (encryption type, service name, source IP) not extracted into searchable fields. |
| Logic | Does a detection rule exist that matches the attacker's specific behaviour pattern? Is the rule correctly written — right field, right operator, right threshold? | No rule exists for this technique. Rule exists but uses the wrong event ID. Threshold set too high (alerts on 100+ TGS requests but the attacker only made 3). Rule logic is correct but references a field that's named differently in this log source. |
| Response | If an alert did fire, was it triaged, investigated, and escalated appropriately? Or was it dismissed as a false positive, lost in noise, or ignored? | Alert fired but auto-closed by a suppression rule. Alert triaged by Tier 1 analyst who didn't recognise the significance. Alert escalated but response playbook didn't cover this scenario. Alert generated at 3am and wasn't reviewed until 9am. |

A thorough detection gap analysis examines all four components for every significant attacker action. The output isn't just "SOC didn't detect Kerberoasting" — it's "Event ID 4769 is audited and ingested, but the SIEM parser doesn't extract the encryption type field, so the existing rule that checks for RC4 ticket requests never matches." That level of specificity turns a vague finding into a 15-minute configuration fix.
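That level of specificity can be expressed in code. As an illustration only, in Python rather than any particular SIEM's rule language, and with a made-up raw-event format, the parser fix and the rule it unblocks look roughly like this:

```python
import re

# Hypothetical raw text of a Windows Event ID 4769 record, sitting unparsed
# in a SIEM. RC4-HMAC tickets carry encryption type 0x17.
RAW_4769 = (
    "EventID=4769 ServiceName=svc_backup "
    "ClientAddress=10.0.4.17 TicketEncryptionType=0x17"
)

def parse_4769(raw: str) -> dict:
    """Parser fix: extract key=value pairs into searchable fields."""
    return dict(re.findall(r"(\w+)=(\S+)", raw))

def is_rc4_tgs_request(event: dict) -> bool:
    """Rule logic: alert on any TGS request that negotiated RC4 (0x17)."""
    return (
        event.get("EventID") == "4769"
        and event.get("TicketEncryptionType", "").lower() == "0x17"
    )

event = parse_4769(RAW_4769)
print(is_rc4_tgs_request(event))  # prints True: this request now alerts
```

The same event with AES (encryption type 0x12) would not match, which is exactly the distinction the unparsed raw text could not support.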


A detection gap analysis that changed everything.

To illustrate the difference between a standard pen test report and one that includes detection gap analysis, here's a side-by-side from the same engagement — the logistics company from the introduction.

Standard Pen Test Finding — Kerberoasting
# Finding F-004: Kerberoastable Service Account
severity = Critical
technique = Kerberoasting (MITRE T1558.003)
target = svc_backup@logistics.local
password = Backup2019! (cracked in 11 seconds)
impact = DA via Backup Operators → NTDS.dit extraction

# Remediation
action_1 = change password to 25+ char random
action_2 = migrate to gMSA
action_3 = disable RC4 for Kerberos (AES-256 only)

That's the standard finding. Three remediations. All correct. All necessary. Now here's what the detection gap analysis adds:

Detection Gap Analysis — Same Finding, Different Value
# Detection Timeline for F-004 Attack Chain

14:23:07 = LLMNR poisoning (Responder) # Telemetry: NONE
gap = no network sensor capturing broadcast traffic
fix = deploy network tap or IDS on VLAN; or disable LLMNR

14:26:44 = NTLMv2 hash captured (j.smith) # Telemetry: NONE
gap = NTLM auth events not forwarded to SIEM
fix = enable Event ID 4624 (type 3) forwarding from all DCs

14:41:12 = BloodHound LDAP enumeration # Telemetry: EXISTS
gap = LDAP queries logged but no rule for bulk enumeration
fix = create rule: >500 LDAP queries from single source in 5 min

14:58:33 = Kerberoast TGS request (svc_backup, RC4) # Telemetry: EXISTS
gap = Event ID 4769 ingested but encryption_type not parsed
fix = update parser to extract encryption_type; rule: RC4 TGS requests

15:34:18 = PsExec lateral movement to FILESRV01 # Telemetry: EXISTS
gap = CrowdStrike logged but no custom rule for PsExec from non-admin WS
fix = EDR rule: PsExec execution from workstation tier to server tier

16:12:45 = secretsdump NTDS.dit extraction from DC01 # Telemetry: EXISTS
gap = volume shadow copy creation logged but no alert rule
fix = rule: vssadmin/ntdsutil execution on domain controller

16:47:02 = Bulk read: \\FILESRV01\Finance (4,200 files) # Telemetry: PARTIAL
gap = file access auditing enabled but only on write, not read
fix = enable read auditing on sensitive shares; rule: >100 reads in 10 min

# Summary
attacker_actions = 7 significant actions over 2h 24m
telemetry_present = 4 of 7 (57%) # Data existed for 4 actions
alerts_generated = 0 of 7 (0%) # No rule matched any of them
detection_fixes = 7 (all implementable within 2 weeks)

Seven detection gaps. Four of them had telemetry already flowing into the SIEM — the data was there but nobody had written a rule to ask the right question. Two had no telemetry at all, and one had only partial telemetry (writes audited, reads not). All seven fixes are implementable within a fortnight: two log source additions, one parser update, and four new detection rules.
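Most of the rule fixes in the timeline reduce to a count over a sliding window. A generic sketch in Python, with an illustrative event shape and toy threshold values rather than any SIEM-specific syntax:

```python
from collections import defaultdict, deque

def threshold_detector(events, window_seconds, max_count):
    """Yield (source, timestamp) whenever one source exceeds max_count
    events inside a sliding window — e.g. >500 LDAP queries in 5 minutes,
    or >100 file reads in 10 minutes."""
    windows = defaultdict(deque)  # source -> timestamps inside the window
    for ts, source in sorted(events):
        win = windows[source]
        win.append(ts)
        # Drop timestamps that have fallen out of the window.
        while win and ts - win[0] > window_seconds:
            win.popleft()
        if len(win) > max_count:
            yield source, ts

# Illustrative: 6 queries from one host in under a minute, against a toy
# threshold of 5 per 300-second window.
queries = [(t, "10.0.4.17") for t in range(0, 60, 10)]
print(list(threshold_detector(queries, window_seconds=300, max_count=5)))
# prints [('10.0.4.17', 50)]
```

The same function covers both the LDAP enumeration rule and the bulk file-read rule; only the threshold and window change.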

The standard finding produces three remediation actions that close one vulnerability. The detection gap analysis produces seven detection improvements that will catch this technique — and every technique in the same family — regardless of which specific vulnerability enables it next time.


How detection gaps drive long-term improvement.

The real value of detection gap analysis emerges over successive engagements. Each pen test generates a set of detection improvements. Each improvement persists. The detection capability compounds — and the gap shrinks with every cycle.

| Engagement | Attacker Actions | Detection Rate | Key Improvements Made After |
| --- | --- | --- | --- |
| Year 1 — Q1 | 10 significant actions. DA achieved in 3h 40m. | 0% — zero alerts across the entire engagement. | 7 new SIEM rules. 2 new log sources onboarded. 1 parser fix. EDR custom rule for PsExec. |
| Year 1 — Q3 | 12 significant actions (tester adapted techniques). DA achieved in 4h 10m. | 42% — 5 of 12 actions detected. Kerberoasting detected at minute 38. PsExec blocked by EDR. | 4 new rules for techniques that evaded prior detections. Analyst training on AD attack patterns. New playbook for credential-based alerts. |
| Year 2 — Q1 | 14 significant actions (tester used more advanced techniques). DA achieved in 6h 20m. | 64% — 9 of 14 detected. SOC initiated incident response at hour 2. Tester had to evade actively. | 3 new rules for WMI-based movement and DCSync. Behavioural analytics for anomalous authentication patterns. Honeypot service accounts deployed. |
| Year 2 — Q3 | 11 significant actions. DA achieved in 8h 15m — tester spent significant time evading detection. | 82% — 9 of 11 detected. SOC contained 2 of 3 lateral movement attempts. IR triggered at hour 1. | 2 remaining gaps in token manipulation and DCOM-based movement. Detection engineering now a continuous programme rather than a post-pentest activity. |

In two years: detection rate from 0% to 82%. Time to DA from 3 hours 40 minutes to 8 hours 15 minutes — not because the vulnerabilities were all fixed (the tester still achieved DA) but because the detection capability forced the attacker to work harder, move slower, and get caught more often. The vulnerabilities persisted (new ones always appear). The detection improved permanently.


Mapping detection gaps to a common language.

The MITRE ATT&CK framework provides the taxonomy that makes detection gap analysis actionable. Each attacker action maps to an ATT&CK technique ID, and each technique can be assessed for detection coverage. The result is a heatmap that shows the organisation's detection posture across the entire attack lifecycle — not as an abstract model, but grounded in the specific techniques that were actually used against this specific environment.

| ATT&CK Tactic | Technique Used in Engagement | Detection Status | Gap Severity |
| --- | --- | --- | --- |
| Credential Access | T1557.001 — LLMNR/NBT-NS Poisoning | No telemetry | Critical — no visibility into the primary credential capture vector |
| Credential Access | T1558.003 — Kerberoasting | Telemetry exists, no rule | High — data present, 15-minute rule deployment |
| Discovery | T1087 — Account Discovery (LDAP) | Telemetry exists, no rule | Medium — enumeration is noisy and detectable |
| Lateral Movement | T1569.002 — Service Execution (PsExec) | EDR logged, no custom rule | High — EDR has data, needs workstation-to-server rule |
| Credential Access | T1003.003 — NTDS.dit extraction | Telemetry exists, no rule | Critical — indicates full domain compromise in progress |
| Collection | T1039 — Data from Network Shared Drive | Partial telemetry (write only) | High — enable read auditing on sensitive shares |

Mapping detection gaps to ATT&CK transforms the pen test report from a list of findings into a detection coverage assessment. The SOC team can see exactly which tactics and techniques they're blind to, prioritise detection engineering by gap severity, and track coverage improvement across the ATT&CK matrix over successive engagements.
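The same mapping can be made machine-readable so coverage is trackable between engagements. A small sketch (the status labels are ours; the rows mirror the table above):

```python
from collections import Counter

# (tactic, technique_id, detection_status), one tuple per table row
engagement = [
    ("Credential Access", "T1557.001", "no_telemetry"),
    ("Credential Access", "T1558.003", "telemetry_no_rule"),
    ("Discovery",         "T1087",     "telemetry_no_rule"),
    ("Lateral Movement",  "T1569.002", "telemetry_no_rule"),
    ("Credential Access", "T1003.003", "telemetry_no_rule"),
    ("Collection",        "T1039",     "partial_telemetry"),
]

# Tally statuses and measure how many techniques have any telemetry at all.
status_counts = Counter(status for _, _, status in engagement)
telemetry_present = sum(1 for _, _, s in engagement if s != "no_telemetry")
print(f"telemetry present for {telemetry_present} of {len(engagement)} techniques")
# prints: telemetry present for 5 of 6 techniques
```

Re-running the same tally after each engagement gives the coverage trend the next section's table illustrates.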


What to demand from your pen test provider.

Not every pen test includes detection gap analysis — and most don't. The standard deliverable is a vulnerability report: what was found, what was exploited, what should be remediated. To get the detection intelligence that this article argues is more valuable, you need to ask for it explicitly.

Require Timestamped Action Logs
Ask the provider to log every significant attacker action with a precise timestamp: when they poisoned LLMNR, when the hash was captured, when the Kerberoast request was sent, when lateral movement occurred, when data was accessed. Without timestamps, the SOC can't correlate attacker actions with its own telemetry.
Request a Detection Gap Appendix
Ask for a dedicated section in the report that maps each attacker action to the SOC's response (or lack thereof). For each gap, the report should specify: was telemetry present? If yes, why didn't the rule fire? If no, which log source is missing? What specific detection rule or configuration change would close the gap?
Share SOC Access (Read-Only)
Give the pen test team read-only access to the SIEM during or after the engagement so they can verify whether their actions generated telemetry. Alternatively, have the SOC team pull the relevant logs and share them. The detection gap analysis is only as accurate as the telemetry review.
Hold a Joint Debrief
After the engagement, bring the pen test team and the SOC team into the same room. The tester walks through the attack timeline. The SOC reviews what they saw (or didn't). The gaps are diagnosed collaboratively. This single meeting is often more valuable than the written report — because the SOC team hears firsthand how the attacker operated and can ask questions in real time.
Request ATT&CK Mapping
Ask the provider to map every attacker action and every detection gap to MITRE ATT&CK technique IDs. This creates a common language between the pen test report and the SOC's detection engineering backlog — and allows you to track detection coverage against the ATT&CK matrix over time.
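Given the timestamped action log from the tester and alert timestamps from the SIEM, the cross-reference is mechanical. A sketch, assuming a simple ten-minute correlation window and using illustrative timestamps from this article's timeline:

```python
from datetime import datetime, timedelta

def detection_rate(actions, alerts, window=timedelta(minutes=10)):
    """For each timestamped attacker action, check whether any SOC alert
    fired within `window` after it; return the detected subset and rate."""
    detected = [
        (ts, name)
        for ts, name in actions
        if any(ts <= alert <= ts + window for alert in alerts)
    ]
    return detected, len(detected) / len(actions)

day = "2025-07-22 "
actions = [
    (datetime.fromisoformat(day + "14:23:07"), "LLMNR poisoning"),
    (datetime.fromisoformat(day + "14:58:33"), "Kerberoast TGS request"),
    (datetime.fromisoformat(day + "16:12:45"), "NTDS.dit extraction"),
]
alerts = []  # this engagement's SIEM alert log: empty
detected, rate = detection_rate(actions, alerts)
print(f"{len(detected)} of {len(actions)} actions detected ({rate:.0%})")
# prints: 0 of 3 actions detected (0%)
```

The choice of correlation window is a judgment call; too wide and unrelated alerts count as detections, too narrow and slow-but-real escalations are missed.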

Turning every pen test into a detection engineering programme.

| Step | Action | Outcome |
| --- | --- | --- |
| 1 | Add detection gap analysis to your pen test scope. Budget for the additional time and specify it in the statement of work. | Every engagement produces both a vulnerability report and a detection improvement plan. |
| 2 | Hold a joint debrief within one week of report delivery. Pen test team presents the attack timeline. SOC team reviews telemetry and identifies root causes for each gap. | Gaps are diagnosed with precision: telemetry missing, parser broken, rule absent, or analyst missed. Each root cause has a different fix. |
| 3 | Treat detection fixes as critical remediations — not as nice-to-have improvement tickets. A detection gap that allows DA to go unnoticed for three hours is at least as critical as the vulnerability that enabled it. | Detection improvements are tracked, prioritised, and completed alongside vulnerability remediations. |
| 4 | Validate detection fixes before the next engagement. Run the specific techniques that went undetected and confirm the new rules fire. Purple team the gap closures. | Confidence that the fixes work — not just that they exist on paper. |
| 5 | Track detection rate as a board-level metric across successive pen tests. Present the trend: 0% → 42% → 64% → 82%. This is the metric that demonstrates defensive security maturity. | The board sees measurable improvement. Security investment has a quantifiable return. The CISO can demonstrate progress, not just risk. |
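Step 4 can itself be scripted: replay a synthetic event for the previously missed technique through the rule logic and assert that it fires, and that benign traffic does not. A sketch with a hypothetical rule function for the workstation-to-server PsExec detection:

```python
def psexec_tier_rule(event: dict) -> bool:
    """Hypothetical deployed rule: PsExec execution where the source host
    is workstation-tier and the target is server-tier."""
    return (
        event.get("process") == "psexec.exe"
        and event.get("source_tier") == "workstation"
        and event.get("target_tier") == "server"
    )

def validate_rule(rule, should_fire, should_not_fire):
    """Purple-team check: the rule must fire on the replayed attack
    events and stay quiet on the benign ones."""
    assert all(rule(e) for e in should_fire), "rule missed the attack replay"
    assert not any(rule(e) for e in should_not_fire), "rule fires on benign traffic"
    return True

# Replay of the engagement's lateral movement step, plus benign admin use.
attack_replay = [
    {"process": "psexec.exe", "source_tier": "workstation", "target_tier": "server"},
]
benign = [
    {"process": "psexec.exe", "source_tier": "server", "target_tier": "server"},
]
print(validate_rule(psexec_tier_rule, attack_replay, benign))  # prints True
```

In practice the replay would go through the real pipeline end to end (event generated on an endpoint, ingested, parsed, matched), since a rule that passes in isolation can still fail on a parsing or ingestion break upstream.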

The bottom line.

A vulnerability is a single point of failure. A detection gap is a systemic blindness. Fixing a vulnerability closes one door. Closing a detection gap installs a sensor that monitors every door — including the ones that don't exist yet.

Vulnerabilities regenerate. New misconfigurations appear. New CVEs are published. The environment changes, and new attack paths materialise. The organisation that relies solely on remediation is running on a treadmill — fixing today's findings while tomorrow's accumulate. The organisation that invests equally in detection builds a capability that compounds: every rule persists, every log source endures, every analyst skill develops, and the detection rate climbs from 0% to 82% across four engagements.

The most valuable output of a penetration test isn't the list of vulnerabilities. It's the timestamped record of what the attacker did and whether anyone noticed. That record — the detection gap analysis — is the blueprint for a detection engineering programme that makes the organisation measurably harder to attack, measurably faster to respond, and measurably more resilient with every iteration.

Ask your pen test provider for it. Pay for it. Act on it. Track it. It's worth more than the vulnerabilities.


Find out what your SOC sees — and what it misses.

Our penetration tests include detection gap analysis as standard: timestamped attacker actions cross-referenced against your SIEM telemetry, with specific detection rules and configuration fixes for every gap identified.