Penetration Testing

Why Penetration Testing Should Evolve as the Business Evolves

> diff scope_2024.txt scope_2025.txt | wc -l && echo 'if this is 0, something is wrong'

Peter Bassill · 16 December 2025 · 14 min read

Tags: evolving scope, business change, attack surface, cloud migration, M&A, testing strategy, adaptive testing

Same test, different business — diminishing returns.

An organisation commissions its first internal infrastructure pen test in 2022. The tester identifies 34 findings, including a chain to Domain Admin via LLMNR poisoning and a Kerberoastable service account. The findings are remediated. The following year, the same test is commissioned — same scope, same methodology. The tester validates the fixes, finds some new issues, and the cycle continues.

By 2025, the internal infrastructure has been tested three times. The easy wins are fixed. The recurring findings are down to two. The tester reaches Domain Admin in two days instead of two hours — genuine improvement. But in those three years, the business migrated its ERP system to AWS, launched a customer-facing API platform, acquired a competitor with its own Active Directory forest, shifted 60% of the workforce to permanent hybrid working, and onboarded four new SaaS providers that process customer data.

None of these changes were in the pen test scope. The internal infrastructure test continues to show improvement — because it's testing a surface that's been hardened three times. The attack surface that actually grew — the cloud environment, the API, the acquired infrastructure, the remote access architecture, the supply chain — has never been tested. The organisation's reported security posture is improving. Its actual risk profile may be deteriorating.


When the scope should change — because the business did.

Cloud migration
How it changes the attack surface: New infrastructure in AWS, Azure, or GCP. IAM policies, storage permissions, network controls, serverless functions, and container configurations create an entirely new attack surface that doesn't exist on-premises. Misconfigured S3 buckets, overly permissive IAM roles, and publicly exposed management interfaces are cloud-specific risks.
Testing implication: Add cloud-specific testing: IAM policy review, storage permission assessment, network control testing, serverless function security, and container escape testing. The internal infrastructure test doesn't cover any of this.

Mergers and acquisitions
How it changes the attack surface: The acquired entity brings its own Active Directory, its own infrastructure, its own technical debt, and its own security posture — which is typically unknown until assessed. Trust relationships between forests, merged identity systems, and inherited legacy infrastructure create immediate risk.
Testing implication: Test the acquired infrastructure as a priority — before full integration. Assess trust relationships between the existing and acquired AD environments. The acquisition's security posture is an unknown risk until it's tested.

New customer-facing applications
How it changes the attack surface: A new portal, API, or mobile application creates internet-facing attack surface. Authentication, session management, authorisation, input validation, and API security become critical. Application-layer vulnerabilities are fundamentally different from infrastructure vulnerabilities.
Testing implication: Commission web application and API testing before or immediately after launch. Application testing requires different skills and methodology from infrastructure testing — ensure the provider has OWASP expertise.

Hybrid and remote working
How it changes the attack surface: VPN concentrators, remote desktop services, cloud identity providers, endpoint security on unmanaged networks, and split-tunnel configurations create new entry points. The perimeter has dissolved — the attack surface now includes every employee's home network.
Testing implication: Test the remote access architecture: VPN configuration, MFA implementation, conditional access policies, endpoint security effectiveness on remote devices, and cloud identity provider configuration (Azure AD/Entra ID, Okta).

Supply chain changes
How it changes the attack surface: New SaaS providers processing customer data, new managed service providers with admin access, new integration partners with API connections. Each third-party relationship is a potential attack vector — as demonstrated by supply chain compromises from SolarWinds to MOVEit.
Testing implication: Assess the security of third-party integrations: API authentication, data sharing mechanisms, access controls, and the scope of admin access granted to managed service providers. Consider supply chain-focused testing scenarios.

Regulatory changes
How it changes the attack surface: New regulations (DORA, NIS2, UK GDPR enforcement actions) may bring previously untested systems into scope. Payment systems, customer data stores, and critical infrastructure that were "future testing priorities" become mandatory testing targets.
Testing implication: Review the testing programme against current regulatory requirements. Ensure the scope covers all systems that regulations require to be tested — not just the systems that were in scope when the programme was designed.

Business growth
How it changes the attack surface: More employees, more systems, more data, more customers, more locations. The attack surface scales with the business. A test scoped for a 200-person organisation doesn't adequately cover a 600-person organisation — even if the core infrastructure is the same.
Testing implication: Scale the testing programme with the business. Increase the testing window, expand the scope, and consider whether the engagement duration is sufficient for the current environment size.

New technology adoption
How it changes the attack surface: AI/ML systems, IoT deployments, operational technology, blockchain integrations, and other emerging technologies introduce attack surfaces that traditional testing methodologies don't cover.
Testing implication: Ensure the provider has expertise in the specific technology. AI model security, IoT device testing, and OT assessments require specialist knowledge that a general infrastructure tester may not possess.

Why the same test gets less valuable every year you repeat it.

The first time you test a network, everything is new. The tester discovers the environment, identifies the attack paths, and produces a comprehensive set of findings. The second test validates the fixes and goes deeper — finding issues the first test didn't reach. By the third and fourth test of the same scope, the tester is re-covering familiar ground. The easy findings are fixed. The remaining findings are either accepted risks or systemic issues that require major investment to resolve.

This isn't a failure of the testing — it's the natural consequence of hardening an environment through repeated assessment. The diminishing returns are a signal that the scope has been adequately tested and it's time to redirect effort to untested areas. An organisation that responds to diminishing returns by continuing to test the same scope is spending money to confirm what it already knows. An organisation that responds by expanding the scope into untested areas is spending the same money to discover what it doesn't know.

Year 1
Same scope repeated: Internal infrastructure: 34 findings, DA in 2 hours. High value — first assessment, full discovery.
Evolving scope: Internal infrastructure: 34 findings, DA in 2 hours. Same — both approaches start at the same place.

Year 2
Same scope repeated: Internal infrastructure: 12 findings, DA in 2 days. Good — validates fixes, goes deeper.
Evolving scope: Internal infrastructure retest (2 days) + web application test (3 days): validates fixes AND discovers 18 application-layer findings in the new customer portal.

Year 3
Same scope repeated: Internal infrastructure: 6 findings, DA in 4 days. Diminishing returns — most issues already known or accepted.
Evolving scope: Cloud environment assessment (3 days) + acquired company infrastructure (2 days): discovers 22 cloud misconfigurations and 14 critical findings in the acquired AD forest.

Year 4
Same scope repeated: Internal infrastructure: 4 findings, DA not achieved. The test confirms what's already known — the internal infrastructure is well-hardened.
Evolving scope: Red team exercise (10 days) covering the full estate: tests detection and response across internal, cloud, application, and remote access. Discovers that the SOC detects 3 of 9 actions.

Both programmes spend roughly the same number of testing days per year. The first programme has tested one environment four times and can confirm that the internal infrastructure is well-hardened. The second programme has tested four environments once each, validated the internal infrastructure fixes, and has a comprehensive view of risk across the organisation's actual attack surface. The second programme knows more, has found more, and has driven more improvement.


Business events that should trigger a scope review.

Rather than reviewing the pen test scope once a year when the engagement is due, organisations should treat specific business events as triggers for a scope reassessment. When the trigger occurs, the CISO or security team should evaluate whether the existing testing scope still covers the organisation's actual risk profile — or whether it needs to change.

Infrastructure Change
Any significant migration — to cloud, to a new data centre, to a new managed service provider. Any new network segment, VLAN, or trust relationship. Any new internet-facing service. If the infrastructure changed, the testing scope should be reviewed.
Corporate Transaction
Mergers, acquisitions, divestitures, and joint ventures. Each introduces or removes systems, networks, and trust relationships. The acquired entity's security posture is unknown until assessed. Test before you integrate.
Product Launch
Any new customer-facing application, API, or platform. Any significant change to an existing application's authentication, authorisation, or data handling. The application is the attack surface — test it before or immediately after launch.
Workforce Change
Significant growth, remote working adoption, BYOD policy changes, or new office locations. Each changes the access patterns, the endpoint landscape, and the perimeter. If how people work changed, test whether the security architecture still fits.
Regulatory Change
New regulations, updated compliance requirements, or regulatory enforcement actions in the sector. DORA, NIS2, and updated ICO guidance all potentially expand the systems and functions that must be tested. If the regulatory landscape changed, review the scope against the new requirements.
Threat Landscape Change
A significant breach at a peer organisation, a new vulnerability class affecting your technology stack, or a threat intelligence report identifying your sector as a target. If the threat changed, test whether your defences address it.

A testing programme that follows the business.

An adaptive testing programme starts with the business, not the testing methodology. Instead of asking "what should we pen test this year?" the question becomes "what changed in the business since the last test, and how does that change our risk profile?" The answer drives the scope.

Scope definition
Static approach: Same scope every year, defined when the programme was established.
Adaptive approach: Scope reviewed against business changes before each engagement. A pre-engagement scoping session with the provider maps current risk to current testing priorities.

Testing calendar
Static approach: One annual engagement, fixed duration, fixed timing.
Adaptive approach: Multiple shorter engagements throughout the year, triggered by business events. An annual comprehensive test supplemented by targeted assessments when significant changes occur.

Provider briefing
Static approach: Provider receives a network range and a start date.
Adaptive approach: Provider receives the business context: what changed since the last test, what the organisation's current strategic priorities are, where management perceives the greatest risk. The provider tailors the engagement to the current reality.

Methodology
Static approach: Same testing methodology annually. Internal infrastructure test with standard techniques.
Adaptive approach: Methodology adapts to the scope: infrastructure testing for core networks, OWASP methodology for web applications, CIS benchmarking for cloud environments, a TIBER-aligned approach for detection testing. Different scopes require different expertise.

Reporting
Static approach: Standalone report compared loosely to last year's findings.
Adaptive approach: Report explicitly maps new findings to business changes: "The AWS migration introduced 12 findings that didn't exist in the previous engagement." Longitudinal tracking shows which changes introduced risk and which remediations resolved it.

Making your testing programme as dynamic as your business.

Map Your Attack Surface Annually
Before scoping the next pen test, map the current attack surface: all internet-facing services, all cloud environments, all third-party integrations, all remote access mechanisms, all acquired infrastructure. Compare it to last year's map. The difference between the two maps is where the testing scope should change.
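In practice, the year-on-year comparison can be as simple as diffing two asset inventories. A minimal sketch follows, assuming a hypothetical plain-text format (one asset per line, '#' starting a comment) and invented asset names — a real attack-surface map would carry far more attributes per asset, but the principle is the same: the delta between the two maps is the candidate scope change.

```python
# Sketch: diff last year's attack-surface inventory against this year's.
# The inventory format and every asset name below are illustrative
# assumptions, not a real standard or a real estate.

def load_assets(text):
    """Parse an inventory: one asset per line, '#' starts a comment."""
    assets = set()
    for line in text.splitlines():
        entry = line.split("#", 1)[0].strip()  # drop comments and padding
        if entry:
            assets.add(entry)
    return assets

def scope_delta(last_year, this_year):
    """Return (new, retired) assets between two inventories."""
    old, new = load_assets(last_year), load_assets(this_year)
    return sorted(new - old), sorted(old - new)

inventory_2024 = """
# internal estate only
dc01.corp.local
fileserver.corp.local
vpn.example.com
"""

inventory_2025 = """
dc01.corp.local
fileserver.corp.local
vpn.example.com
api.example.com        # new customer-facing API platform
erp.eu-west-1.aws      # ERP migrated to AWS
ad.acquiredco.local    # acquired company's AD forest
"""

added, retired = scope_delta(inventory_2024, inventory_2025)
print("Bring into scope:", added)
print("Retired since last test:", retired)
```

Everything in the "bring into scope" list is attack surface the previous engagement never touched — exactly the gap the narrative above describes.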
Rotate Scope Deliberately
Don't test the same environment every year simply because it's familiar. Rotate: internal infrastructure one year, web applications the next, cloud the next, red team exercise the next. Each engagement covers different ground. Over a three- to four-year cycle, the entire attack surface has been tested — not just the original scope.
Implement Trigger-Based Testing
Integrate testing triggers into business processes: any cloud migration, any acquisition, any new internet-facing application, any significant regulatory change triggers a scope review and potentially a targeted test. Don't wait for the annual cycle — test when the risk changes.
Brief the Provider on Business Context
Before every engagement, brief the provider on what changed: the cloud migration, the acquisition, the new product launch, the workforce changes. A provider who understands the business context can focus the engagement on the highest-risk areas — rather than re-testing well-hardened ground.
Report Risk by Business Area, Not Just Finding Count
Present pen test results to the board mapped to business areas: "The core infrastructure is well-hardened after three years of testing. The cloud environment — migrated last year — has 12 findings including 3 critical. The acquired subsidiary has not yet been tested." This framing connects testing to business decisions and investment priorities.

The bottom line.

A penetration test that doesn't evolve with the business is testing yesterday's risk profile. The internal network may be well-hardened after three annual assessments — but the cloud environment migrated eighteen months ago has never been tested, the acquired company's infrastructure was inherited without assessment, and the new API platform launched last quarter was deployed without a security review. The organisation's testing programme says it's improving. Its actual attack surface says it's expanding.

Every significant business change — cloud migration, acquisition, product launch, workforce shift, supply chain change, regulatory development — shifts the attack surface. A testing programme that ignores these shifts produces diminishing returns on a hardened scope while leaving new, untested surfaces exposed. An adaptive programme treats business changes as testing triggers, rotates scope deliberately, and briefs the provider on business context so the engagement targets the current risk, not the historical one.

The question isn't "should we pen test this year?" The question is "what changed since the last test, and does our testing scope still reflect the business we are today?" If the answer to the second question is no, the scope needs to change — because the attackers targeting your organisation have already noticed.


Penetration testing programmes that adapt to your evolving attack surface.

We work with organisations to build adaptive testing programmes — scoping each engagement to the current business reality, not last year's network diagram. Cloud migrations, acquisitions, product launches, and regulatory changes all shift the risk. Your testing should shift with them.