> echo 'honest findings, honest limitations, honest communication' > trust.txt_
The IT security engineer opens the pen test report. The first thing they see is a finding that describes their GPO configuration as "inadequate." The second finding highlights a detection gap in the SIEM rules they spent three months developing. The third documents the tester accessing the finance server — the one the engineer specifically hardened last quarter after the previous engagement.
The engineer's reaction determines whether the pen test produces improvement or resentment. If they feel the test was conducted transparently — with clear communication, honest scoping, and findings presented as systemic issues rather than personal failures — they'll engage constructively. They'll read the remediation guidance, ask questions, and fix the issues. If they feel ambushed — blindsided by findings they weren't given the context to understand, or criticised for decisions they made with limited resources — they'll become defensive. They'll dispute findings, resist remediation, and view the next pen test as a threat rather than a tool.
The difference is transparency. Not in what the tester finds — findings are findings, and they must be reported honestly — but in how the engagement is communicated, conducted, and presented. Transparency doesn't soften findings. It creates the conditions where findings are received constructively.
Trust begins before the engagement starts. The scoping process determines who knows what, who expects what, and who will be surprised by the results. Opaque scoping — where the engagement is arranged between the CISO and the provider without informing the teams who will be affected — creates exactly the adversarial dynamic that undermines the test's value.
| Opaque Scoping | Transparent Scoping |
|---|---|
| The engagement is arranged between the CISO and the provider. The IT team learns about the test when they discover unfamiliar activity in their logs — or when the report arrives. | The IT team is informed that a pen test is being conducted, the general scope is communicated, and point-of-contact arrangements are established. The team knows to expect activity and has escalation paths for genuine concerns. |
| The SOC isn't told. They detect the tester's activity, initiate an incident response, and waste hours investigating before being told it's a pen test. The SOC is frustrated. The incident response costs real money. | The SOC is informed at the appropriate level, typically the SOC manager, with a decision about whether to tell the analysts. If the engagement is testing detection, the analysts aren't told, but the manager can step in and de-escalate once the team commits to a genuine incident response. |
| The scope excludes production systems because the CISO is nervous about disruption, but this isn't communicated clearly to the tester. The tester touches a production database. The relationship is damaged. | Scope boundaries are documented explicitly. Systems that must not be tested are listed. Escalation procedures for accidental scope breaches are agreed. Both sides understand the boundaries, and the consequences of crossing them. (A minimal sketch of a machine-readable scope definition follows this table.) |
| The objectives of the test are unclear. The tester assumes they should demonstrate maximum impact. The CISO assumed a controls review. The report doesn't match expectations. | The objectives are agreed in writing: "Demonstrate whether an attacker with internal network access can reach the financial database." Both sides measure success against the same criteria. |
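The scope-boundaries row is concrete enough to sketch. Here is a minimal example of what a machine-readable scope definition might look like, in Python; the structure, field names, and all addresses are illustrative assumptions, not a standard format:

```python
from dataclasses import dataclass
from ipaddress import ip_address, ip_network

@dataclass
class EngagementScope:
    """A hypothetical machine-readable scope definition for an engagement."""
    objective: str                # the agreed success criterion, in writing
    in_scope_networks: list[str]  # CIDR ranges the tester may touch
    excluded_hosts: list[str]     # systems that must not be tested
    escalation_contact: str       # who to call after an accidental breach

    def is_in_scope(self, host: str) -> bool:
        """True only if the host sits in an in-scope range and isn't excluded."""
        if host in self.excluded_hosts:
            return False
        addr = ip_address(host)
        return any(addr in ip_network(net) for net in self.in_scope_networks)

# Illustrative values; the objective is the one agreed in writing above.
scope = EngagementScope(
    objective=("Demonstrate whether an attacker with internal network access "
               "can reach the financial database."),
    in_scope_networks=["10.20.0.0/16"],
    excluded_hosts=["10.20.5.10"],  # e.g. a fragile production database
    escalation_contact="soc-manager@example.org",
)

assert scope.is_in_scope("10.20.1.50")       # ordinary in-scope host
assert not scope.is_in_scope("10.20.5.10")   # explicitly excluded
assert not scope.is_in_scope("192.168.1.1")  # outside every in-scope range
```

The point is not the code. It's that an explicit artefact like this gives both sides the same mechanical answer to "was this host in scope?", which is exactly what an opaque verbal agreement cannot do.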
The testing phase is where transparency is most practically important, and most frequently neglected. A tester who disappears for ten days and reappears with a 150-page report has missed every opportunity to build trust with the team they're trying to help. In practice, transparency here means brief daily status updates and immediate notification when a critical finding is confirmed, so the client is never learning about a live, exploitable issue for the first time from a finished report.
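What a critical-finding notification needs to carry can also be made concrete. A hypothetical sketch, assuming a simple structure; the field names, IDs, and host are illustrative, not a template from any standard:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class CriticalFindingNotice:
    """A same-day notice sent mid-engagement, rather than letting a
    critical finding wait for the final report. Illustrative only."""
    finding_id: str
    summary: str             # one plain-language sentence
    affected_system: str
    interim_mitigation: str  # what the client can do before the report lands
    raised_at: datetime

notice = CriticalFindingNotice(
    finding_id="F-001",           # hypothetical ID
    summary="Credentials for a privileged account are exposed on an open share.",
    affected_system="FILESRV02",  # hypothetical host
    interim_mitigation="Restrict the share and rotate the exposed account now.",
    raised_at=datetime.now(timezone.utc),
)

print(f"[{notice.raised_at:%Y-%m-%d %H:%MZ}] "
      f"{notice.finding_id} on {notice.affected_system}: {notice.summary}")
```

The substance matters more than the format: the client hears about a live issue within hours, with something actionable attached, instead of discovering it weeks later in the report.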
The report is where transparency matters most — because the report is the permanent record. It's read by the IT team, the CISO, the board, the auditor, and potentially the insurer and the regulator. A report that overstates findings, omits limitations, or presents results with false certainty damages trust with every audience.
| Opaque Reporting | Transparent Reporting |
|---|---|
| "The tester achieved full domain compromise, demonstrating that the organisation's security controls are insufficient to prevent a sophisticated attack." — Dramatic but imprecise. Doesn't communicate the specific path, the time, or the chain. | "Domain Admin was achieved in 2 hours 15 minutes from a standard workstation via a chain of three findings (F-003, F-007, F-011). The chain is detailed in the attack narrative, Section 4." |
| No mention of scope limitations or time constraints. The reader assumes the report represents a comprehensive assessment of every system — which it doesn't. | "The testing window was 5 days. Given the environment size (1,247 hosts), full-depth testing of every system was not possible. The tester prioritised systems identified as high-value targets during discovery. Systems not individually assessed may contain vulnerabilities not reflected in this report." |
| No mention of what was excluded. The reader doesn't understand the boundaries of the assessment or the residual risk from untested areas. | "Social engineering, physical access testing, and denial-of-service testing were excluded from the scope. A real attacker would not operate under these constraints." |
| "No critical findings on WEBSRV01." — Presented as a clean result when the reality is inconclusive. | "The tester was unable to escalate beyond local admin on WEBSRV01. This may indicate effective hardening — or it may indicate that the escalation path requires more time than the engagement allowed. This system warrants further assessment." |
| "Finding F-014: Medium severity." — The chain context that makes this finding critical is invisible. The reader deprioritises it based on the isolated score. | "Finding F-014 is rated Medium. In isolation, the risk is moderate. However, in combination with F-003 and F-007, this finding forms part of the chain that reaches the financial database. The chain risk is Critical." (A sketch of chain-aware severity follows this table.) |
Transparent reporting doesn't mean softening findings. It means providing the context that allows every reader — the engineer, the CISO, the board member, the auditor — to understand what was found, what it means, what wasn't tested, and where uncertainty exists. A report that says "we couldn't determine whether this system is vulnerable" is more trustworthy than one that says "no findings" when the reality is inconclusive.
The IT team, the SOC analysts, and the security engineers are the people who live with the findings. They're the ones who remediate them, who explain them to their managers, and who are implicitly judged by their existence. How findings are presented to these teams determines whether the pen test is a tool or a weapon.
| What Destroys Trust | What Builds Trust |
|---|---|
| Findings that read as criticism of specific individuals: "The system administrator failed to apply the GPO correctly." | Findings that identify systemic issues: "The GPO was applied to the Servers OU rather than the domain root, resulting in workstations being unaffected. This is consistent with a configuration management gap rather than an individual error." (A drift-check sketch follows this table.) |
| A report delivered to the CISO without the IT team seeing it first. The team learns about findings from their manager's questions — defensive and caught off guard. | A debrief session where the tester walks the IT team through the findings before the report goes to leadership. The team hears the findings directly, asks questions, and understands the context before anyone else reads the report. |
| Findings presented without acknowledgement of what's working: 34 findings, no mention that the EDR caught the initial payload, that the password policy blocked the first five brute-force attempts, or that the SOC detected two of seven actions. | A balanced report that acknowledges effective controls alongside the findings: "The organisation's EDR product detected and blocked the initial payload. The tester was required to develop a custom encoded payload to bypass detection — indicating that the endpoint protection is functioning above baseline." |
| A tester who treats the engagement as a competition — bragging about how quickly they achieved DA or how many systems they compromised. | A tester who treats the engagement as a collaboration — explaining what they found, why it matters, and how to fix it, with respect for the team that built and maintains the environment under real-world resource constraints. |
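The GPO row also hints at what the systemic fix looks like. A minimal sketch of configuration-drift detection, assuming the intended link targets live in version control and the actual links come from a directory export; every name here is illustrative:

```python
# Intended GPO link targets, kept in version control (illustrative names).
EXPECTED_LINKS = {
    "Workstation Hardening GPO": "DC=corp,DC=example",  # domain root
}

# Actual link targets, as exported from the directory. Hard-coded here to
# stand in for an export; in practice this would be read from a file.
ACTUAL_LINKS = {
    "Workstation Hardening GPO": "OU=Servers,DC=corp,DC=example",  # drifted
}

for gpo, intended in EXPECTED_LINKS.items():
    actual = ACTUAL_LINKS.get(gpo, "<not linked>")
    if actual != intended:
        print(f"DRIFT: {gpo!r} linked at {actual!r}, intended {intended!r}")
```

A check like this is what turns "the administrator failed" into "nothing was verifying intent against reality", which is the systemic framing the finding should carry.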
Leadership — the CISO, the risk committee, the board — needs transparency of a different kind. They don't need technical detail. They need honest communication about what the test revealed, what it means for the organisation, and what the limitations are.
Transparency doesn't change what the pen test finds. It changes how those findings are received, understood, and acted upon. An opaque engagement — where the IT team is surprised, the scope is unclear, the report omits limitations, and findings are presented as criticisms — produces defensiveness, dispute, and resistance to remediation. A transparent engagement — where expectations are set, communication is maintained, reporting is honest, and findings are presented as systemic issues with actionable solutions — produces engagement, understanding, and constructive action.
Trust between testers, defenders, and leadership is the mechanism that turns a pen test report into security improvement. Without trust, findings are disputed. With trust, findings are fixed. Transparency at every stage — scoping, testing, reporting, and debrief — builds that trust.
The most effective pen test engagements are not the ones where the tester finds the most vulnerabilities. They're the ones where every stakeholder — the engineer who will fix the findings, the CISO who will fund the remediation, and the board who will approve the investment — trusts the process, understands the results, and acts on them.
Our engagements are structured around transparency: pre-test briefings, daily status updates, critical finding notifications, honest reporting that includes limitations and effective controls, and technical debriefs before the report reaches leadership — because a pen test that builds trust produces more security improvement than one that doesn't.