> grep -c 'ALERT' /var/log/siem/pentest_window.log || echo '...zero'
A penetration test against a logistics company produces twelve findings. The critical finding is a Kerberoasting attack that escalated to Domain Admin through a service account with a weak password. The remediation is straightforward: change the service account password to a 25+ character random string, migrate to a Group Managed Service Account, and disable RC4 encryption for Kerberos. The vulnerability is closed. The risk is mitigated.
But the pen test also produced a thirteenth finding — one that doesn't map neatly to a CVSS score or a remediation ticket. Across three hours of active compromise, the tester performed LLMNR poisoning, credential cracking, LDAP enumeration, Kerberos ticket requests for every SPN in the domain, lateral movement via PsExec to three servers, privilege escalation through Backup Operator abuse, NTDS.dit extraction from the domain controller, and bulk file share access on the finance server. The SOC — staffed 24/7, running CrowdStrike on every endpoint, ingesting logs from Active Directory, the firewalls, and the VPN — generated zero alerts.
The Kerberoasting vulnerability is one finding with one remediation. The detection gap is a systemic failure across the entire defensive stack — and fixing it improves the organisation's ability to detect not just Kerberoasting, but every attack that uses similar techniques: credential capture, lateral movement, privilege escalation, and data access. The vulnerability fix closes one door. The detection gap fix installs a security camera in every corridor.
Vulnerabilities are infinite. New misconfigurations appear every time the environment changes. New CVEs are published daily. You will never achieve a state where all vulnerabilities are remediated — the attack surface regenerates faster than remediation can close it. Detection capability, by contrast, is cumulative. Every detection rule you write persists. Every log source you onboard remains. Every analyst skill you develop compounds. Investing in detection produces returns that survive the next vulnerability.
Organisations intuitively prioritise vulnerability remediation — finding and fixing the specific weaknesses the pen test identified. This is necessary but insufficient, because remediation addresses individual findings while detection addresses categories of attack. The economics are fundamentally different.
| Dimension | Vulnerability Remediation | Detection Engineering |
|---|---|---|
| Scope of protection | Fixes the specific vulnerability that was found. The Kerberoastable service account is remediated. If another Kerberoastable account appears next month, it's a new vulnerability requiring new remediation. | Detects the technique, regardless of which specific vulnerability enables it. A Kerberoasting detection rule alerts whether the target is svc_backup, svc_sql, or a service account that doesn't exist yet. |
| Durability | Temporary. The fix persists only until the environment changes — a new service account is created, a misconfiguration is reintroduced, a patch is missed. The next pen test may find the same class of vulnerability in a different location. | Cumulative. A well-written detection rule persists indefinitely. It continues to detect the technique across environmental changes, new systems, and future attack campaigns. Detection capability grows; vulnerability posture fluctuates. |
| Coverage breadth | One fix, one vulnerability. Remediating 12 pen test findings closes 12 specific gaps. | One detection rule covers an entire technique. A lateral movement detection rule that alerts on anomalous PsExec usage covers every lateral movement attempt via PsExec — whether it originates from the Kerberoasting chain, a phished credential, or a compromised VPN. |
| Time to value | Variable. Some fixes take minutes (password change). Others take months (application redesign, infrastructure migration, vendor dependency). | Often faster. A SIEM detection rule can be written, tested, and deployed within hours of receiving the pen test's detection gap analysis. The organisation's detection capability improves the same week the report is delivered. |
| Return on investment | Linear. Each pound spent fixes one finding. The next pen test produces new findings requiring new spending. | Compounding. Each detection rule protects against the technique permanently. Over successive pen tests, the detection gap shrinks — each engagement finds fewer undetected actions, and the investment in prior detection engineering pays dividends across every subsequent assessment. |
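The compounding claim is easier to see with numbers attached. The following is a toy model, not data from any engagement: every rate is an invented assumption, chosen only to show the shape of the two curves, linear and perishable for remediation, cumulative for detection.

```python
# Toy model, purely illustrative: every rate below is an invented assumption.
QUARTERS = 8
NEW_VULNS_PER_QUARTER = 12       # assumption: new findings introduced per quarter
REMEDIATION_CAPACITY = 10        # assumption: findings the team can close per quarter
TECHNIQUES_IN_SCOPE = 50         # assumption: ATT&CK techniques relevant to the estate
NEW_DETECTIONS_PER_QUARTER = 7   # assumption: rules written after each engagement

open_vulns = 0
covered_techniques = 0

for q in range(1, QUARTERS + 1):
    # Remediation is linear and perishable: the backlog regenerates every quarter.
    open_vulns = max(0, open_vulns + NEW_VULNS_PER_QUARTER - REMEDIATION_CAPACITY)
    # Detection is cumulative: every rule written in earlier quarters still counts.
    covered_techniques = min(TECHNIQUES_IN_SCOPE,
                             covered_techniques + NEW_DETECTIONS_PER_QUARTER)
    print(f"Q{q}: open vulnerabilities = {open_vulns:2d} (keeps regenerating), "
          f"technique coverage = {covered_techniques / TECHNIQUES_IN_SCOPE:.0%} (only grows)")
```

Change the assumptions however you like; as long as the environment keeps producing new vulnerabilities and detection rules persist, the two curves diverge.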
This is not an argument against remediation — vulnerabilities must be fixed. It's an argument that detection gaps deserve equal priority, equal budget, and equal board-level attention. An organisation that remediates every pen test finding but ignores every detection gap is an organisation that can still be compromised through the next vulnerability — and won't know until it's too late.
A detection gap exists when an attacker action that should generate an alert doesn't. It's the absence of a response — which makes it invisible unless someone specifically looks for it. Detection gaps have four components, each of which points to a different root cause and a different remediation.
| Component | The Question | Common Root Causes |
|---|---|---|
| Telemetry | Did the attacker action generate any log data at all? Is the relevant log source enabled, collected, and ingested into the SIEM? | Kerberos service ticket requests (Event ID 4769) not audited. PowerShell script block logging not enabled. DNS query logging disabled. SMB access auditing not configured. CloudTrail not enabled in all regions. The log source exists — it's just not turned on. |
| Parsing | Is the log data correctly parsed so that the SIEM can query the relevant fields? Can the rule distinguish a Kerberos TGS request for RC4 from one for AES? | Log data ingested but stored as unparsed raw text. Custom application logs without a parser. Windows event logs ingested but key fields (encryption type, service name, source IP) not extracted into searchable fields. |
| Logic | Does a detection rule exist that matches the attacker's specific behaviour pattern? Is the rule correctly written — right field, right operator, right threshold? | No rule exists for this technique. Rule exists but uses the wrong event ID. Threshold set too high (alerts on 100+ TGS requests but the attacker only made 3). Rule logic is correct but references a field that's named differently in this log source. |
| Response | If an alert did fire, was it triaged, investigated, and escalated appropriately? Or was it dismissed as a false positive, lost in noise, or ignored? | Alert fired but auto-closed by a suppression rule. Alert triaged by Tier 1 analyst who didn't recognise the significance. Alert escalated but response playbook didn't cover this scenario. Alert generated at 3am and wasn't reviewed until 9am. |
A thorough detection gap analysis examines all four components for every significant attacker action. The output isn't just "SOC didn't detect Kerberoasting" — it's "Event ID 4769 is audited and ingested, but the SIEM parser doesn't extract the encryption type field, so the existing rule that checks for RC4 ticket requests never matches." That level of specificity turns a vague finding into a 15-minute configuration fix.
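To make that concrete, here is a minimal sketch of what the missing Kerberoasting rule could look like once the parser extracts the encryption type. It assumes parsed 4769 events arrive as dictionaries with `event_id`, `ticket_encryption_type`, `service_name`, and `source_ip` keys; those field names, the threshold, and the exclusions are illustrative assumptions, and in production this logic would live in your SIEM's own rule language rather than in Python.

```python
from collections import defaultdict

# Encryption type 0x17 on Event ID 4769 means the service ticket was issued with
# RC4-HMAC, the downgrade Kerberoasting tools request so the ticket cracks offline.
RC4_HMAC = 0x17
SPN_THRESHOLD = 3  # assumption: >= 3 distinct SPNs from one host in the window is suspicious

def kerberoasting_candidates(events):
    """Group RC4 service-ticket requests by source IP and flag hosts that
    requested tickets for several distinct SPNs."""
    spns_by_source = defaultdict(set)
    for e in events:
        if e.get("event_id") != 4769:
            continue
        if e.get("ticket_encryption_type") != RC4_HMAC:
            continue
        service = e.get("service_name", "")
        # Skip krbtgt and machine accounts; Kerberoasting targets user-owned SPNs.
        if service.lower() == "krbtgt" or service.endswith("$"):
            continue
        spns_by_source[e.get("source_ip")].add(service)
    return {ip: spns for ip, spns in spns_by_source.items() if len(spns) >= SPN_THRESHOLD}

# Three RC4 ticket requests from one workstation are enough to trip the threshold.
sample = [{"event_id": 4769, "ticket_encryption_type": 0x17,
           "service_name": svc, "source_ip": "10.20.30.40"}
          for svc in ("svc_backup", "svc_sql", "svc_web")]
print(kerberoasting_candidates(sample))
```

The threshold is the part that needs tuning against your own baseline; the structure of the rule (filter on event ID, filter on encryption type, aggregate by source) stays the same.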
To illustrate the difference between a standard pen test report and one that includes detection gap analysis, compare the two for the same engagement: the logistics company from the introduction.
The standard finding is the Kerberoasting chain described at the start: one critical vulnerability, three remediations (rotate the service account password, migrate to a gMSA, disable RC4). All correct. All necessary. Here's what the detection gap analysis adds:
Seven detection gaps. Four of them had telemetry already flowing into the SIEM — the data was there but nobody had written a rule to ask the right question. Three had no telemetry at all. All seven fixes are implementable within a fortnight: two log source additions, one parser update, and four new detection rules.
The standard finding produces three remediation actions that close one vulnerability. The detection gap analysis produces seven detection improvements that will catch this technique — and every technique in the same family — regardless of which specific vulnerability enables it next time.
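The parser update from that list is worth spelling out, because it is the fix that turns already-collected telemetry into something a rule can query. The snippet below is a sketch only: real 4769 events carry many more fields and every SIEM has its own parsing pipeline, but the principle is the same: pull `ServiceName`, `TicketEncryptionType`, and `IpAddress` out of the raw event XML into typed, searchable fields.

```python
import xml.etree.ElementTree as ET

NS = {"e": "http://schemas.microsoft.com/win/2004/08/events/event"}

def parse_4769(event_xml: str) -> dict:
    """Extract the fields the Kerberoasting rule needs from a raw 4769 event.
    A production parser would map every Data element, for every event ID."""
    root = ET.fromstring(event_xml)
    data = {d.get("Name"): d.text for d in root.findall(".//e:EventData/e:Data", NS)}
    return {
        "event_id": int(root.findtext(".//e:System/e:EventID", namespaces=NS)),
        "service_name": data.get("ServiceName", ""),
        # Raw value is a hex string such as "0x17"; store it as an int so rules
        # can compare against encryption-type constants directly.
        "ticket_encryption_type": int(data.get("TicketEncryptionType", "0x0"), 16),
        "source_ip": (data.get("IpAddress") or "").removeprefix("::ffff:"),
    }

raw = """<Event xmlns="http://schemas.microsoft.com/win/2004/08/events/event">
  <System><EventID>4769</EventID></System>
  <EventData>
    <Data Name="ServiceName">svc_sql</Data>
    <Data Name="TicketEncryptionType">0x17</Data>
    <Data Name="IpAddress">::ffff:10.20.30.40</Data>
  </EventData>
</Event>"""
print(parse_4769(raw))
```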
The real value of detection gap analysis emerges over successive engagements. Each pen test generates a set of detection improvements. Each improvement persists. The detection capability compounds — and the gap shrinks with every cycle.
| Engagement | Attacker Actions | Detection Rate | Key Improvements Made After |
|---|---|---|---|
| Year 1 — Q1 | 10 significant actions. DA achieved in 3h 40m. | 0% — zero alerts across the entire engagement. | 7 new SIEM rules. 2 new log sources onboarded. 1 parser fix. EDR custom rule for PsExec. |
| Year 1 — Q3 | 12 significant actions (tester adapted techniques). DA achieved in 4h 10m. | 42% — 5 of 12 actions detected. Kerberoasting detected at minute 38. PsExec blocked by EDR. | 4 new rules for techniques that evaded prior detections. Analyst training on AD attack patterns. New playbook for credential-based alerts. |
| Year 2 — Q1 | 14 significant actions (tester used more advanced techniques). DA achieved in 6h 20m. | 64% — 9 of 14 detected. SOC initiated incident response at hour 2. Tester had to evade actively. | 3 new rules for WMI-based movement and DCSync. Behavioural analytics for anomalous authentication patterns. Honeypot service accounts deployed. |
| Year 2 — Q3 | 11 significant actions. DA achieved in 8h 15m — tester spent significant time evading detection. | 82% — 9 of 11 detected. SOC contained 2 of 3 lateral movement attempts. IR triggered at hour 1. | 2 remaining gaps in token manipulation and DCOM-based movement. Detection engineering now a continuous programme rather than post-pentest activity. |
In two years: detection rate from 0% to 82%. Time to DA from 3 hours 40 minutes to 8 hours 15 minutes — not because the vulnerabilities were all fixed (the tester still achieved DA) but because the detection capability forced the attacker to work harder, move slower, and get caught more often. The vulnerabilities persisted (new ones always appear). The detection improved permanently.
The MITRE ATT&CK framework provides the taxonomy that makes detection gap analysis actionable. Each attacker action maps to an ATT&CK technique ID, and each technique can be assessed for detection coverage. The result is a heatmap that shows the organisation's detection posture across the entire attack lifecycle — not as an abstract model, but grounded in the specific techniques that were actually used against this specific environment.
| ATT&CK Tactic | Technique Used in Engagement | Detection Status | Gap Severity |
|---|---|---|---|
| Credential Access | T1557.001 — LLMNR/NBT-NS Poisoning | No telemetry | Critical — no visibility into the primary credential capture vector |
| Credential Access | T1558.003 — Kerberoasting | Telemetry exists, no rule | High — data present, 15-minute rule deployment |
| Discovery | T1087 — Account Discovery (LDAP) | Telemetry exists, no rule | Medium — enumeration is noisy and detectable |
| Lateral Movement | T1569.002 — Service Execution (PsExec) | EDR logged, no custom rule | High — EDR has data, needs workstation-to-server rule |
| Credential Access | T1003.003 — NTDS.dit extraction | Telemetry exists, no rule | Critical — indicates full domain compromise in progress |
| Collection | T1039 — Data from Network Shared Drive | Partial telemetry (write only) | High — enable read auditing on sensitive shares |
Mapping detection gaps to ATT&CK transforms the pen test report from a list of findings into a detection coverage assessment. The SOC team can see exactly which tactics and techniques they're blind to, prioritise detection engineering by gap severity, and track coverage improvement across the ATT&CK matrix over successive engagements.
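Getting started on that tracking does not require specialist tooling. The sketch below is illustrative only: the statuses come from the table above, the effort labels are assumptions, and in practice you would export this into ATT&CK Navigator layers or your SIEM's coverage dashboard rather than a script. It shows how prioritisation falls out of a simple technique-to-status mapping.

```python
# Detection status per technique observed in the engagement (from the table above).
engagement_coverage = {
    "T1557.001": ("Credential Access", "no telemetry"),
    "T1558.003": ("Credential Access", "telemetry, no rule"),
    "T1087":     ("Discovery",         "telemetry, no rule"),
    "T1569.002": ("Lateral Movement",  "edr logged, no rule"),
    "T1003.003": ("Credential Access", "telemetry, no rule"),
    "T1039":     ("Collection",        "partial telemetry"),
}

# Rough effort mapping (assumption): missing telemetry means log-source onboarding,
# which is slower than writing a rule over data that is already in the SIEM.
FIX_EFFORT = {
    "no telemetry":        "onboard log source, then write rule",
    "partial telemetry":   "extend auditing, then write rule",
    "telemetry, no rule":  "write and test rule this week",
    "edr logged, no rule": "add custom EDR/SIEM rule",
}

by_tactic: dict[str, list] = {}
for technique, (tactic, status) in engagement_coverage.items():
    by_tactic.setdefault(tactic, []).append((technique, status, FIX_EFFORT[status]))

for tactic, gaps in sorted(by_tactic.items()):
    print(f"{tactic}:")
    for technique, status, fix in gaps:
        print(f"  {technique:<10} {status:<22} -> {fix}")
```

Re-run the same mapping after each engagement and the board-level detection-rate trend described below becomes a by-product rather than a separate reporting exercise.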
Not every pen test includes detection gap analysis — and most don't. The standard deliverable is a vulnerability report: what was found, what was exploited, what should be remediated. To get the detection intelligence that this article argues is more valuable, you need to ask for it explicitly.
| Step | Action | Outcome |
|---|---|---|
| 1 | Add detection gap analysis to your pen test scope. Budget for the additional time and specify it in the statement of work. | Every engagement produces both a vulnerability report and a detection improvement plan. |
| 2 | Hold a joint debrief within one week of report delivery. Pen test team presents the attack timeline. SOC team reviews telemetry and identifies root causes for each gap. | Gaps are diagnosed with precision: telemetry missing, parser broken, rule absent, or analyst missed. Each root cause has a different fix. |
| 3 | Treat detection fixes as critical remediations — not as nice-to-have improvement tickets. A detection gap that allows DA to go unnoticed for three hours is at least as critical as the vulnerability that enabled it. | Detection improvements are tracked, prioritised, and completed alongside vulnerability remediations. |
| 4 | Validate detection fixes before the next engagement. Run the specific techniques that went undetected and confirm the new rules fire. Purple team the gap closures (a minimal validation sketch follows this table). | Confidence that the fixes work — not just that they exist on paper. |
| 5 | Track detection rate as a board-level metric across successive pen tests. Present the trend: 0% → 42% → 64% → 82%. This is the metric that demonstrates defensive security maturity. | The board sees measurable improvement. Security investment has a quantifiable return. The CISO can demonstrate progress, not just risk. |
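Step 4 is the one most often skipped, so here is a minimal sketch of what validating gap closures might look like. Everything in it is an assumption for illustration: `run_technique` stands in for your purple-team tooling or a documented manual test, and `query_alerts` stands in for your SIEM's search API. The shape of the check is the same on any stack: execute the technique, wait for ingestion, and assert that the expected rule fired.

```python
import time
from dataclasses import dataclass

@dataclass
class ValidationCase:
    technique_id: str   # ATT&CK technique being replayed
    description: str    # what the tester will actually do
    expected_rule: str  # the detection rule that should fire

# Hypothetical hooks: wire these to your purple-team tooling and SIEM search API.
def run_technique(case: ValidationCase) -> None:
    raise NotImplementedError("execute the agreed test procedure for this technique")

def query_alerts(rule_name: str, since_epoch: float) -> list[dict]:
    raise NotImplementedError("search the SIEM for alerts from this rule since the timestamp")

def validate(cases: list[ValidationCase], wait_seconds: int = 300) -> dict[str, bool]:
    """Replay each previously undetected technique and confirm the new rule fires."""
    results = {}
    for case in cases:
        started = time.time()
        run_technique(case)
        time.sleep(wait_seconds)  # allow for log ingestion and rule evaluation latency
        results[case.technique_id] = len(query_alerts(case.expected_rule, started)) > 0
    return results

cases = [
    ValidationCase("T1558.003", "Request RC4 TGS tickets for 3 SPNs", "Kerberoasting - RC4 TGS burst"),
    ValidationCase("T1569.002", "PsExec from workstation to server", "Anomalous PsExec service install"),
]
# results = validate(cases)  # run only during an agreed purple-team window
```

A failed validation is itself a finding: either the rule is wrong, the parser still is not extracting the field it needs, or the alert is being suppressed downstream, and each of those maps back to one of the four gap components.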
A vulnerability is a single point of failure. A detection gap is a systemic blindness. Fixing a vulnerability closes one door. Closing a detection gap installs a sensor that monitors every door — including the ones that don't exist yet.
Vulnerabilities regenerate. New misconfigurations appear. New CVEs are published. The environment changes, and new attack paths materialise. The organisation that relies solely on remediation is running on a treadmill — fixing today's findings while tomorrow's accumulate. The organisation that invests equally in detection builds a capability that compounds: every rule persists, every log source endures, every analyst skill develops, and the detection rate climbs from 0% to 82% across four engagements.
The most valuable output of a penetration test isn't the list of vulnerabilities. It's the timestamped record of what the attacker did and whether anyone noticed. That record — the detection gap analysis — is the blueprint for a detection engineering programme that makes the organisation measurably harder to attack, measurably faster to respond, and measurably more resilient with every iteration.
Ask your pen test provider for it. Pay for it. Act on it. Track it. It's worth more than the vulnerabilities.
Our penetration tests include detection gap analysis as standard: timestamped attacker actions cross-referenced against your SIEM telemetry, with specific detection rules and configuration fixes for every gap identified.