Penetration Testing

What Questions to Ask Before Commissioning a Penetration Test

> cat pre-engagement-checklist.txt | wc -l && echo 'ask these before you sign'_

Peter Bassill · 10 February 2026 · 14 min read
Tags: scoping, pre-engagement questions, objectives, planning, meaningful results, testing quality

Most pen tests fail before they start — because the right questions weren't asked.

The procurement email reads: "We need our annual penetration test. Internal network. Five days. Please quote." The provider quotes. The purchase order is raised. The tester arrives, runs the engagement, and delivers a report. The report contains findings — some useful, some generic, none connected to what the organisation actually needs to know.

The engagement wasn't bad. The tester was competent. The report was professionally written. But the results don't address the CISO's actual concerns: whether the AWS migration introduced new risk, whether the SOC can detect lateral movement, whether the acquired company's infrastructure is safe to integrate. These questions were never asked — so the engagement wasn't designed to answer them.

The questions an organisation asks before commissioning a pen test determine whether the engagement produces meaningful results or a routine compliance artefact. They fall into four groups, taken in turn below: questions to ask yourselves before contacting any provider, questions to ask the provider during scoping, practical questions about timing and logistics, and questions that connect this engagement to the previous one.


Before contacting any provider — ask yourselves first.

"What do we actually want to know?"
Why it matters: "We need a pen test" is not an objective. "Can an attacker reach the financial database from a compromised workstation?" is. The objective determines whether the engagement provides useful answers or generic findings.
How the answer shapes the engagement: A specific objective produces a focused engagement. "Test whether the AWS migration introduced exploitable weaknesses" is a different test from "test the internal network." The provider can design the engagement to answer the question rather than testing broadly and hoping the answer emerges.

"What assumptions are we operating on that haven't been tested?"
Why it matters: Every organisation has security beliefs: the EDR works, the segmentation holds, MFA covers everything, the SOC detects intrusions. These assumptions persist because nothing has contradicted them, but they may be wrong.
How the answer shapes the engagement: The assumptions define what the test should challenge. If the organisation believes the network is segmented, the test should specifically attempt to cross the segment boundaries. If the assumption survives the test, it's validated. If it doesn't, the organisation knows before an attacker proves it.

"What changed since the last test?"
Why it matters: Cloud migrations, acquisitions, new applications, workforce changes, and supply chain additions all shift the attack surface. Testing the same scope as last year while the business has changed produces diminishing returns on hardened ground and zero coverage of new risk.
How the answer shapes the engagement: Changes map directly to scope additions. The AWS migration becomes a cloud assessment. The acquisition becomes an infrastructure review of the acquired environment. The new application becomes a web application test. Each change is a scope item, and each produces more value than re-testing the same internal network.

"Who needs to see the results?"
Why it matters: The IT team needs technical detail. The CISO needs risk context. The board needs business impact. The auditor needs evidence of testing. The insurer needs evidence of remediation. Each audience requires different information.
How the answer shapes the engagement: The answer determines reporting requirements. If the board will see the results, the report needs a non-technical executive summary. If the insurer needs evidence, the report needs scope documentation and remediation tracking. Communicate these requirements to the provider before the engagement.

"What will we do with the findings?"
Why it matters: If findings enter the risk register, drive investment decisions, and feed the IR plan, the engagement should produce findings with business context, chain analysis, and effort estimates. If findings are filed and forgotten, the engagement budget is wasted regardless of quality.
How the answer shapes the engagement: The intended use shapes the deliverable. If findings will be presented to the board for investment decisions, the report needs phased remediation costs and business impact. If findings will inform the IR plan, the report needs MITRE ATT&CK mapping and detection gap analysis.

During the scoping conversation — before signing anything.

For each question below, here is what you're looking for in the provider's answer.

"Given our objectives, what scope do you recommend?"
A good provider will challenge your scope assumptions. If you've asked for an internal infrastructure test but mention that you migrated to AWS last year, the provider should recommend including the cloud environment. If they simply accept the scope you've defined without question, they may be prioritising agreement over value.

"How many days of testing does our environment need?"
An honest answer that's proportionate to the scope. A 1,200-host internal network with Active Directory, multiple VLANs, and cloud integration needs more than three days. If the provider's day estimate seems low for the environment size, the engagement will be shallow. Ask what will be covered, and what won't, in the proposed window.

"What won't you be able to test in this engagement?"
Every engagement has limitations: time, scope, methodology. A transparent provider will tell you what the engagement won't cover and what residual risk will remain after testing. A provider who claims the engagement will be "comprehensive" within an unrealistic window is overselling.

"Can you walk me through what the tester will do each day?"
A provider who can describe the daily rhythm of the engagement (discovery on day one, manual exploitation on days two through four, chain analysis and reporting on day five) understands what a pen test involves. A provider who can't articulate what happens beyond "we'll run our tools" may be delivering automated scanning rather than manual testing.

"What do you need from us to deliver the best results?"
A good provider will ask for: previous pen test reports, the remediation tracker, network diagrams, a list of high-value targets, information about recent changes, and details about the security stack. This information doesn't give the tester an unfair advantage; it ensures the engagement is focused on what matters rather than spending two days on discovery that could have been briefed in 30 minutes.

"How will you communicate during the engagement?"
Defined communication protocols: daily status updates, critical finding notification procedures, scope boundary escalation, and a post-engagement debrief. If the provider's answer is "we'll send the report when we're done," the engagement is missing the communication that makes findings actionable.

"Will the report include an attack narrative and chain analysis?"
The answer you want is an unqualified yes. If the report will be a list of findings sorted by CVSS score, the engagement won't reveal how findings combine to produce compromise, where the cheapest chain break points are, or how the findings relate to the organisation's actual risk.

The practical considerations that affect engagement quality.

"When should we schedule relative to our budget cycle?"
If the results will inform the security budget, schedule the engagement so the report arrives before budget decisions are made. A pen test in Q4 that informs the Q1 budget produces maximum investment impact. A pen test in Q2 whose findings compete with an already-allocated budget produces delayed remediation.
"Are there business activities that should be avoided during testing?"
Month-end processing, annual audits, major deployments, and customer-facing events may conflict with testing activity. Identify these windows before scheduling. A pen test that triggers an outage during month-end close creates more problems than it solves.
"Should we tell the SOC?"
If the objective includes testing detection capability, the SOC analysts should not be informed — but the SOC manager should know, with authority to de-escalate if the team launches a genuine incident response. If the objective is vulnerability identification only, informing the SOC avoids wasted incident response effort.
"When should we schedule the retest?"
Budget for a retest 8–12 weeks after the report is delivered. This gives the engineering team time to remediate the critical and high findings, and the retest validates that the fixes work. Commissioning the pen test without budgeting for the retest is commissioning a diagnosis without funding the treatment verification.
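
The timing arithmetic is simple enough to sanity-check when booking; a trivial sketch, assuming a placeholder report delivery date:

```python
from datetime import date, timedelta

# Placeholder delivery date -- substitute the real one.
report_delivered = date(2026, 3, 2)

# Per the guidance above: retest 8-12 weeks after the report lands.
earliest = report_delivered + timedelta(weeks=8)
latest = report_delivered + timedelta(weeks=12)
print(f"Book the retest between {earliest} and {latest}")
```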

Building on what came before — not starting from scratch.

If this isn't the organisation's first pen test, the previous engagement's results should inform the current one. The questions below ensure continuity between engagements — so each test builds on the last rather than repeating the same ground.

"Which findings from the last test were remediated?"
The provider should receive the remediation tracker showing which findings were addressed and which remain open. The current engagement can validate remediations rather than rediscovering the same issues.

"Which findings recurred from the engagement before that?"
Recurring findings indicate systemic issues, not just configuration gaps. If LLMNR has recurred for three consecutive years, the current engagement should investigate why remediation isn't persisting, not simply report it again.

"Were the previous attack chains broken?"
The current tester should specifically validate whether the chains from the previous engagement are still viable. If the chain break points were remediated, the chain should be confirmed as broken. If not, the finding is more urgent than a new discovery: it's a known risk that persists.

"What did the previous engagement not cover?"
Scope gaps from the previous engagement become scope priorities for the current one. If the previous test excluded the cloud environment, this engagement should include it. If social engineering wasn't tested, this is the year to add it.

A pre-engagement checklist for meaningful results.

Define Objectives, Not Just Scope
Before contacting any provider, write down the specific questions you want the pen test to answer. "Can an attacker reach the financial database?" "Does the SOC detect lateral movement?" "Is the acquired infrastructure safe to integrate?" These questions become the engagement objectives — and the provider designs the test to answer them.
Prepare the Briefing Pack
Compile: the previous pen test report, the remediation tracker, network diagrams, a list of high-value targets, details of changes since the last test, and information about the security stack. Provide this to the provider before the engagement. The 30 minutes spent briefing saves two days of discovery — and produces deeper results.
Involve the Right People in Scoping
The scoping conversation should include the CISO (objectives and risk context), the IT team (technical environment details), and the business (what assets matter most). A scoping conversation with only procurement produces an engagement optimised for price. A scoping conversation with security, IT, and business produces an engagement optimised for value.
Document the Engagement Agreement
Before the test begins, document in writing: the objectives, the scope (including explicit exclusions), the testing window, communication protocols, critical finding notification procedures, the debrief schedule, and the retest timeline. This document is the contract for a meaningful engagement, not just a purchase order for "pen test — 5 days." A sketch of what it might capture follows this checklist.
Ask What You're Afraid Of
The most valuable question is often the one the organisation is reluctant to ask: "What if the EDR doesn't work?" "What if the segmentation has gaps?" "What if the SOC can't detect the attacker?" These questions identify the assumptions the test should challenge — and challenging assumptions is where penetration testing produces its highest value.
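
As noted above, here is a minimal sketch of the fields an engagement agreement might capture, with every value a hypothetical placeholder:

```python
# engagement_agreement.py -- illustrative only; every value is a placeholder.
engagement_agreement = {
    "objectives": [
        "Validate that segmentation blocks workstation-to-finance movement",
    ],
    "scope": {
        "included": ["internal network 10.0.0.0/16", "AWS production account"],
        "excluded": ["OT segment", "third-party-hosted payroll platform"],
    },
    "testing_window": {"start": "2026-03-09", "end": "2026-03-13"},
    "communication": {
        "daily_status": "end-of-day summary to the security team",
        "critical_finding": "phone notification on confirmation",
        "scope_boundary": "pause and escalate before crossing",
    },
    "debrief": "scheduled within a week of report delivery",
    "retest": "8-12 weeks after report delivery",
}
```

Whatever form it takes, the value is that every field is agreed in writing before day one rather than left implicit in a purchase order.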

The bottom line.

The questions asked before a pen test determine the quality of the results more than any other factor — more than the provider's skill, more than the testing window, more than the budget. An engagement with clear objectives, defined assumptions to challenge, a briefing pack for the tester, and a plan for how the results will be used produces meaningful security improvement. An engagement with no objectives beyond "annual pen test" and no preparation beyond a network range produces a compliance artefact.

Thematically, the questions cluster around four concerns: what do we want to know (objectives), what should the provider understand (scope and context), what changed since last time (the evolving attack surface), and what will we do with the results (intended use). Each concern shapes a different aspect of the engagement. Together, they transform a routine procurement into a targeted assessment designed to produce the specific answers the organisation needs.

The best pen test starts with the best questions. Ask them before you sign — because once the engagement starts, the scoping is done.


Pre-engagement scoping designed to produce meaningful results from day one.

Our scoping process begins with your objectives — not a network range. We work with the CISO, the IT team, and the business to design an engagement that answers the questions that matter, challenges the assumptions that need testing, and produces results that drive genuine improvement.