Penetration Testing

Writing Findings That Teams Can Actually Remediate

> grep 'implement best practices' report.pdf && echo 'nobody knows what this means'_

Peter Bassill · 9 September 2025 · 16 min read
reporting · remediation · actionable findings · development · infrastructure · security teams

The finding was clear. The fix wasn't.

A penetration test report lands on the IT manager's desk. It contains 34 findings, clearly described, well-evidenced, with an attack narrative and a Real World Risk Score for each. The IT manager understands the risk. They agree with the priorities. They assign the findings to their team and set a 30-day remediation target for critical and high findings.

Six weeks later, eleven findings remain open. Two of the open findings are critical. The IT manager checks with the team. The infrastructure engineer says: "The report says 'harden the Active Directory configuration.' I don't know which settings they mean. I've spent three hours researching it and I'm not confident I've found the right GPO." The developer says: "The report says 'implement parameterised queries.' The finding is in a legacy PHP application with 40,000 lines of code and no test suite. I don't know where to start." The security analyst says: "The report says 'implement network segmentation between the office and production VLANs.' That's a six-month project involving procurement, network redesign, and change management. It wasn't scoped as a 30-day fix."

The findings were correct. The risk assessment was sound. The remediation guidance was useless — not because it was wrong, but because it wasn't specific enough for the people who actually have to implement it. "Harden Active Directory" is an intention, not an instruction. "Implement parameterised queries" is a principle, not a plan. "Implement network segmentation" is a project, not a remediation step.

The Expensive Sentence

"Implement best practices" is the most expensive sentence in cybersecurity reporting. It transfers the cost of research from the tester — who already understands the vulnerability — to the engineer — who doesn't. Every hour the engineer spends researching what "best practices" means for this specific finding is an hour the tester could have saved them with two additional sentences in the report.


The same finding needs different remediation for different people.

Penetration test findings are remediated by three distinct teams, each with different tools, different vocabularies, and different constraints. A finding that affects application code goes to the development team. A finding that affects servers, networking, or Active Directory goes to the infrastructure team. A finding that requires policy changes, monitoring configuration, or detection rules goes to the security team. Writing remediation that works for all three requires understanding how each team operates.

Development
What they control: Application code, APIs, authentication logic, input validation, session management, data handling, CI/CD pipelines.
What they need from the finding: The vulnerable code path or endpoint. The specific input that triggers the vulnerability. A code example showing the fix — in the language and framework the application uses. Guidance on testing the fix.
What blocks them: Vague remediation that doesn't reference the application's technology stack. "Use parameterised queries" without showing the specific code pattern for their framework. No indication of which endpoints are affected. No test case to verify the fix.

Infrastructure
What they control: Servers, operating systems, Active Directory, Group Policy, network devices, firewalls, DNS, DHCP, virtualisation, cloud infrastructure.
What they need from the finding: The specific setting, policy, or configuration to change. The exact path — Group Policy Object, registry key, configuration file, CLI command. The expected impact on other systems. A verification step to confirm the change worked.
What blocks them: "Harden the configuration" without specifying which configuration. "Disable legacy protocols" without naming them or providing the GPO path. No impact assessment — the engineer doesn't know if the change will break a production service.

Security / SOC
What they control: SIEM rules, EDR policies, detection logic, monitoring configuration, incident response playbooks, access control policies.
What they need from the finding: The specific detection gap — which technique wasn't detected and why. The log source and event ID needed. A sample detection rule or the logic for one. The expected alert volume and false positive rate.
What blocks them: "Improve monitoring" without specifying what to monitor. "Detect lateral movement" without providing the event IDs, log sources, or rule logic. No guidance on tuning to avoid false positives.

The eight components that make a finding actionable.

Every finding in a pen test report should contain enough information for the responsible team to understand the problem, reproduce it, fix it, and verify the fix — without needing to phone the tester, research the vulnerability, or guess at the remediation.

1. Affected asset(s)
What it contains: The specific hostname, IP address, URL, endpoint, application, or service affected. Not "multiple servers" — the actual list.
Why it's essential: The team needs to know where to apply the fix. "SMB signing not enforced" affecting 12 unnamed hosts is unactionable. The same finding with a table listing all 12 hostnames is a work order.

2. Reproduction steps
What it contains: A step-by-step guide to reproducing the vulnerability — the tool, the command, the input, and the expected output.
Why it's essential: The engineer needs to see the vulnerability before they can fix it. If they can't reproduce it, they can't confirm it exists, can't test the fix, and will often deprioritise it as "possibly a false positive."

3. Evidence
What it contains: Screenshots, request/response pairs, command output, or tool output that proves the vulnerability exists. Timestamped where relevant.
Why it's essential: Evidence removes ambiguity. It proves the finding is real, shows exactly what the tester observed, and provides a baseline for comparison after remediation.

4. Root cause
What it contains: Why the vulnerability exists — not just what it is. A misconfigured GPO, a missing input validation check, a default credential that was never changed, a service account created during initial setup and forgotten.
Why it's essential: Understanding the root cause prevents recurrence. Fixing the symptom without addressing the cause means the same class of vulnerability will reappear. Root cause also identifies whether the finding is an isolated mistake or a systemic pattern.

5. Specific remediation
What it contains: The exact change required: the GPO path, the configuration parameter, the code change, the CLI command, the permission to remove. Written in the language of the team that will implement it.
Why it's essential: This is where most reports fail. "Enforce SMB signing" is a principle. "Computer Configuration → Policies → Windows Settings → Security Settings → Local Policies → Security Options → Microsoft network server: Digitally sign communications (always) → Enabled" is a remediation.

6. Impact assessment
What it contains: What the fix might break. Will enforcing SMB signing cause a 1–3% performance impact on file server throughput? Will disabling TLS 1.0 break connectivity for legacy clients? Will parameterising this query change the application's behaviour?
Why it's essential: Engineers won't implement a change if they don't know what it will break. An impact assessment gives them the confidence to proceed — and the information to schedule the change appropriately.

7. Verification step
What it contains: How to confirm the fix worked. The command to run, the test to perform, the expected output after remediation. "After enabling SMB signing, attempt nmap --script smb2-security-mode -p 445 target — output should show 'Message signing enabled and required.'"
Why it's essential: Without a verification step, the engineer implements the fix and hopes it worked. With one, they implement the fix and prove it worked. That is the difference between a finding marked "remediated" and a finding confirmed as remediated.

8. Effort and dependency
What it contains: An honest estimate of the effort required (15 minutes, 2 hours, 3 days, 6-month project) and any dependencies on other fixes, procurement, change windows, or third parties.
Why it's essential: A 30-day remediation target for a finding that requires a 6-month network redesign sets the team up for failure. Honest effort estimates let the IT manager plan realistically and set appropriate deadlines.
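
To make the checklist concrete, here is a minimal sketch, in Python, of how a findings tracker might represent these eight components and flag incomplete findings before a report ships. The class and field names are illustrative, not taken from any particular tool.

# Minimal sketch: the eight components as a structured record, so gaps can be
# spotted before the report is issued. Field names are illustrative only.
from dataclasses import dataclass, field, fields

@dataclass
class Finding:
    title: str
    affected_assets: list = field(default_factory=list)  # 1. specific hosts/URLs/endpoints
    reproduction_steps: str = ""                          # 2. tool, command, input, expected output
    evidence: str = ""                                    # 3. screenshots, request/response pairs
    root_cause: str = ""                                  # 4. why the vulnerability exists
    remediation: str = ""                                 # 5. exact GPO path / code change / command
    impact_assessment: str = ""                           # 6. what the fix might break
    verification_step: str = ""                           # 7. how to prove the fix worked
    effort_and_dependencies: str = ""                     # 8. realistic effort and blockers

    def missing_components(self) -> list:
        """Return the names of any of the eight components left empty."""
        return [f.name for f in fields(self)
                if f.name != "title" and not getattr(self, f.name)]

finding = Finding(title="SMB signing not enforced",
                  affected_assets=["fs01.corp.local", "fs02.corp.local"])
print(finding.missing_components())  # everything except the asset list still needs writing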

Writing for the team that manages servers, AD, and networks.

Infrastructure teams think in Group Policy paths, registry keys, CLI commands, and configuration files. The remediation needs to speak their language — with the exact settings to change, the exact values to set, and the exact command to verify.

Bad Infrastructure Remediation
Finding: LLMNR/NBT-NS Broadcast Protocols Enabled
Remediation: Disable LLMNR and NBT-NS across the domain.

# Problem: The engineer doesn't know HOW to disable these.
# They need to research GPO paths, DHCP options, and registry keys.
# Estimated research time: 1-2 hours before they can even start.
Good Infrastructure Remediation
Finding: LLMNR/NBT-NS Broadcast Protocols Enabled
Remediation:

1. Disable LLMNR:
GPO: Computer Configuration → Policies → Administrative Templates → Network →
DNS Client → Turn off multicast name resolution → Enabled
Configure in the Default Domain Policy or a dedicated security GPO linked at the domain level.

2. Disable NBT-NS:
Via DHCP: Server or Scope Options → Advanced → vendor class "Microsoft Windows
2000 Options" → 001 Microsoft Disable Netbios Option → value 0x2.
Or per-interface: NIC → TCP/IP → Advanced → WINS tab →
Disable NetBIOS over TCP/IP.
For domain-wide enforcement: deploy via PowerShell startup script in GPO:
# 2 = disable NetBIOS over TCP/IP on every IP-enabled adapter
$adapters = Get-WmiObject Win32_NetworkAdapterConfiguration -Filter "IPEnabled=true"
$adapters | ForEach-Object { $_.SetTcpipNetbios(2) }

Impact: Minimal. LLMNR/NBT-NS are fallback name resolution protocols.
Modern environments rely on DNS. If DNS is functioning correctly,
disabling these protocols has no operational impact.
Test on a pilot OU first if concerned about legacy applications.

Verify: Run Responder on the VLAN for 15 minutes after GPO propagation.
No hashes should be captured. If hashes are still captured, check
GPO application with gpresult /h report.html on affected hosts.

Effort: 15 minutes to configure. 24-48 hours for GPO propagation.
Depends: None. This fix is standalone and can be implemented immediately.

The first remediation requires research. The second requires only implementation. The engineer reads it, opens Group Policy Management, navigates to the path, enables the setting, and runs the verification command. The finding is fixed within the hour — because the tester invested five minutes writing the specific guidance that saved the engineer two hours of research.
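
For teams that want an additional per-host spot-check, the sketch below (Python, Windows-only) reads the registry values that typically back these two settings: EnableMulticast under the DNS client policy key for LLMNR, and NetbiosOptions on each NetBT interface for NBT-NS. It is a convenience check to run alongside the Responder and gpresult verification above, not a replacement for it, and the key paths should be confirmed against your own environment.

# Sketch: local spot-check on a Windows host after GPO propagation.
# EnableMulticast = 0 under the DNSClient policy key means LLMNR is disabled;
# NetbiosOptions = 2 on each interface means NetBIOS over TCP/IP is disabled.
import winreg

def llmnr_disabled() -> bool:
    try:
        key = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE,
                             r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient")
        value, _ = winreg.QueryValueEx(key, "EnableMulticast")
        return value == 0
    except FileNotFoundError:
        return False  # policy key absent: LLMNR still enabled by default

def netbios_disabled_everywhere() -> bool:
    base = r"SYSTEM\CurrentControlSet\Services\NetBT\Parameters\Interfaces"
    parent = winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, base)
    for i in range(winreg.QueryInfoKey(parent)[0]):
        iface = winreg.OpenKey(parent, winreg.EnumKey(parent, i))
        try:
            value, _ = winreg.QueryValueEx(iface, "NetbiosOptions")
        except FileNotFoundError:
            continue
        if value != 2:
            return False
    return True

print("LLMNR disabled:", llmnr_disabled())
print("NetBIOS over TCP/IP disabled on all interfaces:", netbios_disabled_everywhere())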


Writing for the team that manages application code.

Development teams think in code paths, endpoints, frameworks, and test cases. They need to know which endpoint is vulnerable, what input triggers the vulnerability, what the fix looks like in their technology stack, and how to write a test that confirms the fix works.

Bad Development Remediation
Finding: SQL Injection in Customer Search
Remediation: Use parameterised queries to prevent SQL injection.

# Problem: The developer knows what parameterised queries ARE.
# They don't know which endpoint, which parameter, or which
# code file contains the vulnerable query. The app has 400 endpoints.
Good Development Remediation
Finding: SQL Injection in Customer Search
Endpoint: POST /api/v2/customers/search
Parameter: "surname" (body, JSON)
Payload: {"surname": "Smith' OR '1'='1"}
Response: 200 OK — returned all 12,847 customer records

Root cause:
The surname parameter is concatenated directly into the SQL query
in /src/controllers/CustomerController.php line 142:
$sql = "SELECT * FROM customers WHERE surname = '" . $surname . "'";

Remediation:
Replace the concatenated query with a PDO prepared statement:
$stmt = $pdo->prepare('SELECT * FROM customers WHERE surname = :surname');
$stmt->execute(['surname' => $surname]);

Audit all other queries in CustomerController.php and related
controllers for the same pattern. grep -rn '\$sql.*\.' src/

Test: Resubmit the payload {"surname": "Smith' OR '1'='1"} to the
endpoint. Expected: 0 results or an error. Not 12,847 records.
Add to CI/CD: automated SQLi test for this endpoint.

Effort: 30 minutes for this endpoint. 2-4 hours to audit related code.
Depends: None. Fix is backward-compatible. No schema change required.

The developer reads this finding and knows exactly what to do: open CustomerController.php, go to line 142, replace the concatenated query with the prepared statement shown, run the test payload, and add a CI/CD check. They don't need to research PDO syntax, search the codebase for the vulnerable endpoint, or guess which parameter is injectable. The tester did that work already.
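
The CI/CD test the finding calls for can be a few lines. The sketch below, using Python and the requests library, resubmits the injection payload from the finding and fails if the endpoint ever returns a bulk result set again. The base URL, the absence of authentication, and the response being a JSON array are assumptions made for illustration.

# Sketch of the CI/CD regression test mentioned in the finding: resubmit the
# injection payload and fail the build if the endpoint ever returns the full
# customer table again. The base URL and lack of auth are placeholders.
import requests

BASE_URL = "https://staging.example.internal"   # placeholder
PAYLOAD = {"surname": "Smith' OR '1'='1"}

def test_customer_search_rejects_sql_injection():
    resp = requests.post(f"{BASE_URL}/api/v2/customers/search",
                         json=PAYLOAD, timeout=10)
    # A fixed endpoint should return an error or zero/near-zero matches,
    # never the whole table (12,847 rows in the original finding).
    assert resp.status_code in (200, 400, 422)
    if resp.status_code == 200:
        results = resp.json()  # assumes the endpoint returns a JSON array
        assert len(results) < 10, f"injection payload returned {len(results)} records"

if __name__ == "__main__":
    test_customer_search_rejects_sql_injection()
    print("SQL injection regression test passed")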


Writing for the team that manages detection and policy.

Security teams think in detection rules, log sources, event IDs, and policy configurations. They need to know which technique wasn't detected, which log source provides the telemetry, what the detection logic looks like, and how to tune it to avoid drowning in false positives.

Bad Security Team Remediation
Finding: Kerberoasting Not Detected by SOC
Remediation: Implement detection for Kerberoasting attacks.

# Problem: The analyst knows what Kerberoasting IS.
# They don't know which event ID to monitor, what fields to parse,
# or what threshold to set to avoid false positives from legitimate
# service ticket requests.
Good Security Team Remediation
Finding: Kerberoasting Not Detected by SOC
Gap: Event ID 4769 ingested but encryption_type field not parsed.
No rule exists for RC4 (0x17) TGS requests.

Remediation:
1. Update SIEM parser for Event ID 4769 to extract:
- Ticket_Encryption_Type (field name varies by SIEM)
- Service_Name, Client_Address, Account_Name

2. Create detection rule:
IF Event_ID = 4769
AND Ticket_Encryption_Type = 0x17 (RC4-HMAC)
AND Service_Name NOT IN (krbtgt, $known_rc4_services)
THEN alert: 'Potential Kerberoasting — RC4 TGS Request'

3. Tuning guidance:
Some legacy services legitimately request RC4 tickets.
Run the rule in audit mode for 7 days. Add confirmed
legitimate services to the exclusion list. Expected
steady-state: <5 alerts/day after tuning.

4. Longer term: disable RC4 for Kerberos entirely:
GPO: Computer Configuration → Policies → Windows Settings → Security Settings
→ Local Policies → Security Options → Network security: Configure
encryption types allowed for Kerberos → AES256 only.
WARNING: audit RC4 dependency first (Event IDs 4768/4769).

Verify: Request a TGS ticket with RC4 encryption using Rubeus:
Rubeus.exe kerberoast /tgtdeleg
Confirm the SIEM alert fires within 5 minutes.

Effort: Parser update: 1 hour. Rule creation: 30 min. Tuning: 7 days.
Depends: Event ID 4769 must already be forwarded from DCs to SIEM.

The security analyst reads this and knows: the parser needs updating (specific field), the rule logic is provided (specific conditions and exclusions), the tuning approach is defined (7-day audit mode, expected steady-state alert volume), and the verification step uses the exact tool and command that tests the detection. They can implement and validate the rule within a day.
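
During the seven-day audit-mode period it can also help to replay the same logic offline against a log export. The Python sketch below applies the rule conditions from the finding to a CSV export of Event ID 4769 records and summarises volume per requesting account; the column names and export filename are assumptions, since field naming varies by SIEM and export format.

# Sketch: replay the rule logic from the finding against a CSV export of
# Event ID 4769 records, e.g. during the 7-day audit-mode tuning pass.
# Column names are illustrative only.
import csv
from collections import Counter

KNOWN_RC4_SERVICES = {"krbtgt", "legacy-app-svc"}   # populate during tuning

def kerberoast_candidates(path: str):
    hits = []
    with open(path, newline="") as fh:
        for row in csv.DictReader(fh):
            if row.get("EventID") != "4769":
                continue
            rc4 = row.get("TicketEncryptionType", "").lower() == "0x17"
            service = row.get("ServiceName", "").rstrip("$").lower()
            if rc4 and service not in KNOWN_RC4_SERVICES:
                hits.append((row.get("AccountName"), service, row.get("ClientAddress")))
    return hits

hits = kerberoast_candidates("dc_4769_export.csv")
print(f"{len(hits)} RC4 TGS requests outside the exclusion list")
# Volume per requesting account helps judge the <5 alerts/day steady-state target.
print(Counter(account for account, _, _ in hits).most_common(10))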


Not every finding is a 30-day fix.

One of the most damaging patterns in pen test reporting is treating all findings as equal-effort remediations. "Disable LLMNR" takes 15 minutes. "Implement network segmentation between office and production" takes six months and a procurement cycle. Setting a blanket 30-day remediation deadline for both guarantees that the network segmentation finding will miss the deadline — and the team's credibility suffers for a failure that was inevitable from the start.

Quick win
Examples: Disable LLMNR. Change a password. Enable a GPO setting. Remove an unnecessary share permission. Revoke an orphaned OAuth consent.
Realistic timeline: Hours to days.
Report guidance: "This fix takes approximately 15 minutes to implement and has no dependencies. Implement immediately." The report should flag quick wins explicitly — these are the findings that show fast progress.

Standard remediation
Examples: Deploy SMB signing across the domain. Migrate a service account to gMSA. Update a web application's authentication flow. Add detection rules and tune for false positives.
Realistic timeline: Days to weeks.
Report guidance: "Estimated effort: 2 days including testing and change management. Requires a change window for SMB signing rollout. Test on a pilot group of 10 servers before domain-wide deployment."

Project-level change
Examples: Implement network segmentation. Deploy NAC (802.1X). Migrate from on-premises AD to Entra ID. Rewrite a legacy application's authentication layer.
Realistic timeline: Months to quarters.
Report guidance: "This is a strategic remediation requiring procurement, design, and staged implementation. Recommend scoping as a dedicated project with a 3–6 month timeline. In the interim, apply compensating control: restrict VLAN access to authorised MAC addresses via port security."

Accepted risk
Examples: A legacy system that cannot be patched because the vendor no longer supports it. A business process that requires a configuration the tester recommends changing.
Realistic timeline: N/A — managed, not fixed.
Report guidance: "If this finding cannot be remediated due to business or technical constraints, document the accepted risk and implement compensating controls: isolate the system on a dedicated VLAN, monitor all traffic to and from it, and restrict access to named administrators."

Honest effort estimates — included in the finding itself — let the IT manager build a realistic remediation plan rather than a fantasy one. Quick wins get done immediately. Standard remediations get scheduled. Project-level changes get scoped and funded. And accepted risks get documented with compensating controls rather than ignored.


Getting findings you can actually fix.

Evaluate Sample Reports for Remediation Specificity
When reviewing a prospective provider's sample report, go straight to the remediation sections. Do they contain GPO paths, code examples, and CLI commands? Or do they say "implement best practices" and "harden the configuration"? The specificity of the remediation is the single best indicator of whether the findings will actually get fixed.
Tell Your Provider Your Technology Stack
During scoping, tell the provider what technologies the team uses: Active Directory version, web application framework (PHP/Laravel, .NET, Node.js), SIEM platform (Sentinel, Splunk, QRadar), EDR vendor (CrowdStrike, Defender, SentinelOne). This lets the tester write remediation in the team's language — GPO paths for AD, framework-specific code for the app, SIEM-specific query syntax for detection rules.
Demand Verification Steps
Every finding should include a step the team can use to confirm the fix worked. Without it, findings are marked "remediated" on trust. With it, findings are marked "remediated" on evidence. Ask: "After we implement your recommended fix, how do we verify it's effective?" A sketch of scripting that kind of re-check across multiple hosts follows this list.
Ask for Effort Estimates and Dependencies
A finding that says "implement immediately" should also say "estimated effort: 15 minutes, no dependencies." A finding that requires procurement or a project should say so. Your remediation plan is only as realistic as the effort estimates it's built on.
Use Post-Report Support
If a finding's remediation is unclear despite the provider's best efforts, phone them. Good providers offer post-report support — answering questions, clarifying remediations, and helping the team implement fixes correctly. If your provider considers the engagement finished when the report is emailed, the report is the last thing they invested in.
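
As an example of the re-check mentioned under "Demand Verification Steps", the Python sketch below wraps the SMB signing verification command from the components table and runs it against every affected host, recording pass or fail. The host list is a placeholder, and it assumes nmap is installed wherever the script runs.

# Sketch: re-run the verification step from the SMB signing example across every
# affected host, so "remediated" is backed by evidence rather than trust.
import subprocess

HOSTS = ["fs01.corp.local", "fs02.corp.local"]   # the affected-asset list from the finding

def smb_signing_required(host: str) -> bool:
    out = subprocess.run(
        ["nmap", "--script", "smb2-security-mode", "-p", "445", host],
        capture_output=True, text=True, timeout=120,
    ).stdout
    return "Message signing enabled and required" in out

for host in HOSTS:
    status = "PASS" if smb_signing_required(host) else "FAIL"
    print(f"{host}: {status}")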

The bottom line.

A finding that doesn't get fixed is a finding that didn't matter. And the most common reason findings don't get fixed isn't disagreement with the severity, lack of budget, or technical impossibility. It's that the remediation guidance transferred the cost of research from the tester to the engineer — and the engineer, facing a queue of 34 findings, 200 other tasks, and a remediation that says "implement best practices," puts it in the backlog.

Specific remediation closes the gap. A GPO path instead of "harden Active Directory." A prepared statement in the application's framework instead of "use parameterised queries." A SIEM rule with event IDs, field names, and tuning guidance instead of "improve monitoring." An effort estimate that distinguishes a 15-minute quick win from a 6-month project. A verification step that lets the engineer prove the fix worked.

The five minutes the tester spends writing specific remediation saves the engineer two hours of research — and makes the difference between a finding that gets fixed in a day and a finding that sits in the backlog for six months. Specificity is the cheapest investment in cybersecurity — and the one with the highest return.


Remediation guidance specific enough to implement without further research.

Our reports include GPO paths, code examples, SIEM rule logic, effort estimates, impact assessments, and verification steps for every finding — because a finding that doesn't get fixed is a finding that didn't matter.