Penetration Testing

Attack Surface Mapping

> nmap -sn 0.0.0.0/0 | grep 'yours' | wc -l

Peter Bassill · 8 April 2025 · 13 min read
attack surface · penetration testing · asset discovery · risk prioritisation · shadow IT

You don't know how big your front door is.

Every organisation has a mental model of its attack surface — the collection of systems, services, and entry points that an attacker could target. The problem is that the mental model is almost always smaller than reality. Usually significantly smaller.

When we conduct attack surface mapping as the opening phase of a penetration test, we typically discover between 30% and 60% more externally reachable assets than the client's own inventory accounts for. Forgotten subdomains, undocumented cloud instances, development environments left running, legacy systems that were decommissioned on paper but never switched off, third-party services with trust relationships back into the corporate network.

This isn't a minor accounting issue. Every unknown asset is an untested asset — and untested assets are, by definition, the ones most likely to be vulnerable. They sit outside the patching cycle, outside the monitoring scope, and outside the pen test scope. They are exactly where an attacker looks first.

Attack surface mapping is the process of systematically discovering, cataloguing, and analysing every point at which an attacker could interact with your organisation. Done properly, it transforms a pen test from "assess the things we know about" into "assess the things an attacker would actually target" — and the difference between those two statements is where most breaches live.

The Discovery Paradox

You cannot secure what you don't know exists. And you cannot test what you haven't mapped. If your pen test scope is based on your asset inventory, and your asset inventory is incomplete, then your pen test has blind spots — blind spots that correspond exactly to the assets an attacker is most likely to exploit.


What is an attack surface?

Your attack surface is the sum total of all points where an unauthorised user could attempt to interact with your organisation's systems, data, or people. It isn't just your servers and web applications — it extends far beyond technology into processes, people, and third-party relationships.

It helps to think of the attack surface in layers. Each layer presents different risks, requires different testing approaches, and is maintained by different teams — which is precisely why gaps form between them.

Layer | What It Includes | Who Typically Manages It
External network | Public IP addresses, DNS records, internet-facing services, VPN gateways, mail servers, web servers, cloud management consoles, API endpoints | IT infrastructure / network team
Web applications | Customer portals, SaaS platforms, e-commerce sites, intranets exposed to the internet, APIs (REST, GraphQL, SOAP), webhooks, OAuth endpoints | Development / product teams
Cloud estate | IaaS instances, PaaS services, serverless functions, container registries, storage buckets, IAM policies, cross-account trust relationships | DevOps / cloud engineering / sometimes shadow IT
Internal network | Active Directory, file shares, database servers, print servers, management interfaces, VLAN architecture, broadcast protocols, legacy systems | IT infrastructure / systems administration
Wireless | Corporate SSIDs, guest networks, IoT device networks, Bluetooth-enabled systems, wireless printers, building management systems | IT infrastructure / facilities
Human | Email addresses, social media profiles, phone numbers, physical office access points, helpdesk procedures, third-party supplier contacts | HR / office management / everyone
Supply chain | Third-party software dependencies, managed service providers with VPN access, SaaS vendors with API integrations, outsourced development teams | Procurement / vendor management / IT
Information | Public documents, code repositories, DNS records, certificate transparency logs, job adverts, Companies House filings — the OSINT layer | No single owner — which is part of the problem

An attacker doesn't see these layers as separate domains with separate owners. They see one continuous surface with seams between the layers — and the seams are where the weaknesses concentrate.


How we map an attack surface in practice.

Attack surface mapping combines automated discovery, manual OSINT, and structured analysis. The goal is not just a list of assets but a prioritised understanding of where risk concentrates — which systems are most exposed, most valuable, and most likely to be targeted.

Our process follows four phases, each building on the last:

Phase | What We Do | Tools and Techniques
1. Passive discovery | Enumerate every externally discoverable asset without sending a single packet to the client's infrastructure. DNS enumeration, certificate transparency, Shodan/Censys queries, WHOIS, BGP route analysis, Google dorking, GitHub searches. | Amass, Subfinder, crt.sh, Shodan, Censys, theHarvester, SpiderFoot, Google advanced operators, GitHub/GitLab search
2. Active enumeration | Probe discovered assets to confirm their existence and identify running services. Port scanning, service fingerprinting, web application discovery, API endpoint enumeration, technology stack identification. | Nmap, Masscan, Nuclei, httpx, WhatWeb, Wappalyzer, custom scripts
3. Relationship mapping | Map how assets connect to each other and to the internal network. Identify trust relationships, shared infrastructure, authentication chains, and lateral movement paths. This is where the map becomes three-dimensional. | Manual analysis, BloodHound (internal), DNS chain analysis, cloud IAM review, network topology inference
4. Risk-based prioritisation | Classify every discovered asset by exposure level (internet-facing, authenticated, internal-only), business criticality (crown jewels vs commodity), and exploitability (known CVEs, weak authentication, misconfigurations). The output is a prioritised attack surface map. | Tester expertise, CVSS data, asset criticality from client input, MITRE ATT&CK mapping
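
Phase 1 is easy to illustrate. The sketch below pulls candidate subdomains from certificate transparency logs via crt.sh, one of the sources named above, without sending a single packet to the target. The domain is a placeholder; a real engagement merges this with Amass, Subfinder, Shodan, and the rest.

Passive Discovery Sketch (Python)
import json
import urllib.request

def ct_subdomains(domain: str) -> set[str]:
    # crt.sh aggregates public CT logs; %25 is a URL-encoded wildcard
    url = f"https://crt.sh/?q=%25.{domain}&output=json"
    with urllib.request.urlopen(url, timeout=30) as resp:
        records = json.load(resp)
    names = set()
    for record in records:
        # name_value can hold several newline-separated names per certificate
        for name in record.get("name_value", "").splitlines():
            name = name.strip().lower().removeprefix("*.")
            if name == domain or name.endswith("." + domain):
                names.add(name)
    return names

found = ct_subdomains("example.com")  # placeholder domain
print(f"{len(found)} candidate subdomains found in CT logs")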
Attack Surface Discovery — Real Output
# Passive Discovery Results
subdomains_found = 67 # Client inventory listed 23
unique_ips = 41 # Client inventory listed 16
cloud_instances = 12 # 4 unknown to IT team
exposed_services = 94 # Across all discovered hosts

# Notable Unknowns
unknown[0] = staging.client.co.uk # Full app mirror, no WAF, default creds
unknown[1] = old-vpn.client.co.uk # Previous Pulse Secure — still running
unknown[2] = dev-jenkins.client.co.uk # Jenkins dashboard, no authentication
unknown[3] = s3://client-backups-2023 # Public read — contains DB dumps

# Delta
assets_in_client_inventory = 23
assets_actually_discoverable = 67
gap = 66% # Two-thirds of the surface was unknown

Two-thirds of the attack surface was unknown to the client. An unauthenticated Jenkins dashboard. A previous-generation VPN appliance still running. A public S3 bucket containing database backups. A staging environment with default credentials. None of these were in the pen test scope provided by the client — because the client didn't know they existed.
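
Mechanically, the delta is a set difference between what discovery found and what the inventory claims. A minimal sketch, assuming one hostname per line in each file (the file names are illustrative):

Inventory Delta Sketch (Python)
def load_assets(path: str) -> set[str]:
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

inventory = load_assets("client_inventory.txt")    # what the client believes exists
discovered = load_assets("discovered_assets.txt")  # what mapping actually found

shadow = discovered - inventory  # reachable, but unknown to the client
gap = len(shadow) / len(discovered) if discovered else 0.0

print(f"shadow assets: {len(shadow)} ({gap:.0%} of the discoverable surface)")
for host in sorted(shadow):
    print(f"  UNKNOWN {host}")

This is the computation behind the gap = 66% figure above.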

The Inventory Gap Is Normal

This isn't a reflection of poor IT management. It's a natural consequence of how organisations grow: systems are deployed for projects that end, cloud instances are spun up for testing that finishes, acquisitions bring unknown infrastructure, and nobody's role description includes "maintain a perfect inventory of everything we've ever put on the internet." The gap is normal. Ignoring it isn't.


How the map decides where we test deeply.

A pen test has finite time. Five days. Ten days. Whatever the engagement allows. The question is: how do you allocate that time to generate the most valuable findings? Test everything shallowly and you find what a scanner finds. Test the wrong things deeply and you waste effort on low-risk targets. Test the right things deeply and you find the attack paths that actually endanger the business.

Attack surface mapping provides the intelligence to make that allocation decision well. It answers three questions that determine where testing time is best spent:

Question | What the Map Tells Us | How It Changes the Test
What's most exposed? | Which assets are directly reachable from the internet, with no authentication barrier? Which ones have known vulnerabilities in their service banners? Which ones are running outdated software? | Heavily exposed assets with known weaknesses get priority testing time. A Fortinet VPN running a vulnerable firmware version gets tested before a patched nginx web server behind a WAF.
What's most valuable? | Which assets hold or provide access to the organisation's crown jewels? The customer database. The finance system. The partner document store. The domain controller. | Assets on the critical path to crown jewels get deep testing — even if they don't appear externally vulnerable. A well-configured web server that's the only route to the customer database deserves more attention than an isolated marketing microsite.
What's most connected? | Which assets, if compromised, provide a path to other systems? The VPN gateway that bridges external and internal. The jump box with routes to production. The CI/CD server that deploys to every environment. | Highly connected assets are force multipliers for an attacker. Compromising one gives access to many. These get priority because the blast radius of a successful exploitation is highest.

The intersection of these three dimensions — high exposure, high value, high connectivity — is where critical risk concentrates. A pen test informed by a complete attack surface map allocates the majority of its time to this intersection.
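
One way to make that intersection operational is to score every asset on the three dimensions during mapping and rank by the weakest score first, so only assets that are high on all three rise to the top. The scale, scores, and ranking rule below are illustrative rather than a fixed formula; the hostnames echo earlier examples.

Intersection Ranking Sketch (Python)
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    exposure: int      # 0 = internal-only .. 3 = internet-facing, unauthenticated
    value: int         # 0 = commodity .. 3 = crown jewels, or a direct route to them
    connectivity: int  # 0 = isolated .. 3 = bridges many other systems

def priority(a: Asset) -> tuple[int, int]:
    # Rank by the weakest dimension first: critical risk lives where
    # all three are high, not where one is high and the others are low.
    return (min(a.exposure, a.value, a.connectivity),
            a.exposure + a.value + a.connectivity)

assets = [
    Asset("old-vpn.client.co.uk", exposure=3, value=2, connectivity=3),
    Asset("marketing-microsite", exposure=3, value=0, connectivity=0),
    Asset("internal-print-server", exposure=0, value=0, connectivity=1),
]

for a in sorted(assets, key=priority, reverse=True):
    print(a.name, (a.exposure, a.value, a.connectivity))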


Turning a map into a test plan.

Once the surface is mapped, every discovered asset is classified into a priority tier. The classification is straightforward — but without the mapping phase, it's impossible to do accurately because you're working from an incomplete inventory.

Tier | Criteria | Test Approach | Example
Critical | Internet-facing, holds or routes to crown jewels, has known vulnerabilities or weak authentication, high connectivity to internal systems | Deep manual testing. Full exploitation. Attack chain development. Business impact analysis. Detection testing. | VPN gateway running vulnerable firmware with a route to the internal AD domain. Customer portal with IDOR vulnerabilities and access to the full customer database.
High | Internet-facing, serves a business function, may have vulnerabilities but behind some defensive controls, moderate connectivity | Thorough manual testing with exploitation of confirmed vulnerabilities. Attack path analysis to determine if compromise leads to higher-value assets. | Corporate website with a CMS admin panel. Mail server with user enumeration via timing differences. Cloud management console behind MFA.
Medium | Internet-facing but low-value, or internal-only with limited connectivity. No immediately obvious vulnerabilities from enumeration. | Automated scanning supplemented by targeted manual checks. Included in the report but not the primary focus of tester time. | Static marketing microsite on a separate hosting provider. Internal print server accessible only within the office VLAN.
Low | Minimal exposure, minimal value, no connectivity to critical systems. Presents limited risk even if compromised. | Automated scanning only. Noted in the report for completeness but not allocated manual testing time. | Development sandbox isolated from all other networks. Public DNS records that reveal no sensitive information.

This tiering isn't a rigid formula — tester expertise and client context refine it. But the principle is clear: allocate time in proportion to risk, not in proportion to asset count. A single critical-tier asset may warrant more testing time than twenty low-tier assets combined.
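
That allocation principle is easy to express as a toy model. A sketch, with illustrative tier weights rather than a fixed formula:

Time Allocation Sketch (Python)
TIER_WEIGHT = {"critical": 8, "high": 4, "medium": 1, "low": 0}

def allocate_days(tier_counts: dict[str, int], total_days: float) -> dict[str, float]:
    # Weight each tier by risk, then split the engagement's days
    # proportionally. Low-tier assets get automated coverage only.
    weighted = {tier: count * TIER_WEIGHT[tier] for tier, count in tier_counts.items()}
    total = sum(weighted.values()) or 1
    return {tier: round(total_days * w / total, 1) for tier, w in weighted.items()}

# A ten-day engagement over a mapped surface of 49 assets:
print(allocate_days({"critical": 2, "high": 5, "medium": 12, "low": 30}, 10))
# {'critical': 3.3, 'high': 4.2, 'medium': 2.5, 'low': 0.0}
# Two critical assets (4% of the count) draw a third of the testing time.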


How mapping reveals realistic attack chains.

The most valuable output of attack surface mapping isn't the asset list — it's the path analysis. By understanding how assets connect, trust each other, and share credentials, we can identify realistic exploitation paths before testing even begins.

An exploitation path is a chain of steps an attacker would take to move from an initial entry point to a specific objective. Each step requires a weakness — a vulnerability, a misconfiguration, a credential — and the chain only works if all the steps connect. Mapping reveals these connections.

Path 1: VPN to Domain Admin
Entry: exploit vulnerable VPN appliance (discovered via Shodan during mapping). Pivot: VPN drops the attacker on the internal network with no NAC. Escalate: LLMNR poisoning captures NTLMv2 hash. Crack offline. Account is a service account with DA-equivalent privileges via unconstrained delegation. Result: Domain Admin in under 3 hours. Every step identified during mapping before exploitation began.
Path 2: Web App to Customer Data
Entry: IDOR vulnerability in customer portal (discovered during app enumeration). Escalate: horizontal privilege escalation exposes other customers' data. Pivot: admin panel found at predictable URL with default credentials (discovered via directory enumeration). Result: full administrative access to the application and the underlying database. The path was visible in the mapping output before a single exploit was run.
Path 3: Public Bucket to Cloud Estate
Entry: public S3 bucket containing database backups (discovered during passive recon). Extract: backup contains application credentials including an AWS access key. Pivot: access key has an overly permissive IAM policy — can list and access other S3 buckets, EC2 instances, and Lambda functions. Result: full read access to the cloud estate via a single leaked credential. Mapping identified the bucket; path analysis identified the chain.
Path 4: Jenkins to Production
Entry: unauthenticated Jenkins instance (discovered during subdomain enumeration). Examine: Jenkins build configurations contain production deployment credentials in plaintext. Pivot: credentials grant SSH access to production web servers. Result: code execution on production infrastructure via a CI/CD tool that nobody realised was internet-facing. The Jenkins instance wasn't in the client's inventory.

Each of these paths was identifiable — at least in outline — from the mapping phase alone. The testing phase confirms, demonstrates, and documents them. But the mapping is what makes the testing targeted rather than random.
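
Mechanically, path analysis is graph search: assets become nodes, and an edge exists wherever a weakness lets an attacker step from one asset to the next. A minimal sketch below encodes chains shaped like Paths 1 and 4 above; the internal node names are illustrative.

Path Analysis Sketch (Python)
from collections import deque

# Edges discovered during mapping, annotated with the enabling weakness
edges = {
    "internet": ["dev-jenkins.client.co.uk", "old-vpn.client.co.uk"],
    "dev-jenkins.client.co.uk": ["prod-web"],  # plaintext deploy creds in build configs
    "old-vpn.client.co.uk": ["internal-lan"],  # unpatched appliance, no NAC behind it
    "internal-lan": ["domain-controller"],     # LLMNR poisoning -> privileged service account
}

def attack_paths(graph: dict[str, list[str]], entry: str, objective: str):
    # Breadth-first enumeration of simple paths from entry to objective
    queue = deque([[entry]])
    while queue:
        path = queue.popleft()
        if path[-1] == objective:
            yield path
            continue
        for nxt in graph.get(path[-1], []):
            if nxt not in path:  # no revisiting: simple paths only
                queue.append(path + [nxt])

for target in ("prod-web", "domain-controller"):
    for path in attack_paths(edges, "internet", target):
        print(" -> ".join(path))

The testing phase then walks the highest-value paths by hand, confirming that each edge actually holds.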


Why the things you've forgotten matter most.

There is a direct, observable relationship between how long an asset has been forgotten and how vulnerable it is. Assets that fall off the inventory stop getting patched, stop getting monitored, and stop being included in security assessments. They accumulate vulnerabilities at the same rate as everything else — they just never get remediated.

Type of Forgotten Asset | How It Gets Forgotten | Why Attackers Love It
Staging / development environments | Spun up for a project, left running after delivery. Often a full mirror of production — same code, same data, weaker controls. | Same vulnerabilities as production but without the WAF, rate limiting, or monitoring. Often has default or shared credentials. May contain a recent copy of production data.
Legacy applications | Replaced by a new system but never decommissioned because "a few users still need it" or because nobody signed off the shutdown. | Running ancient frameworks and libraries with years of accumulated CVEs. Authentication model predates MFA. Often has database connectivity to current production systems.
Previous-generation infrastructure | Old VPN concentrator replaced but not removed. Previous mail server still accepting connections. Former firewall management interface still accessible. | Old infrastructure is old software with old vulnerabilities. The Pulse Secure VPN that was replaced by Fortinet two years ago still has a public IP and hasn't been patched since the day it was taken out of active use.
Shadow cloud resources | A developer spun up an EC2 instance for a demo. A marketing team created an S3 bucket for a campaign. A contractor provisioned a test environment in their personal AWS account with a trust relationship back to the corporate account. | Outside the corporate patching, monitoring, and access control policies. Often provisioned with overly permissive IAM roles because "it's just for testing." The testing ended. The permissions didn't.
Acquired infrastructure | A company acquisition brought IP ranges, domains, and systems that were never fully integrated into the acquiring organisation's inventory or security programme. | Acquired assets often run different technology stacks, different security controls, and different patch schedules. They may have trust relationships that the acquiring organisation doesn't know about — including VPN tunnels back to the acquired company's former partners.
Third-party managed services | A managed service provider hosts a system on your behalf. It appears on your domain but is managed by a third party whose patching schedule and security standards may differ from yours. | You're responsible for the risk; the MSP is responsible for the maintenance. If those responsibilities aren't clearly defined and monitored, the system exists in a governance vacuum — visible to attackers, invisible to your security programme.

Attack surface mapping finds these assets. Without it, they persist indefinitely — accumulating risk until either someone notices or an attacker exploits them. The discovery phase of a pen test is often the most immediately valuable deliverable, because every unknown asset found is an asset that can be remediated, decommissioned, or brought into the management fold.


Mapping is not a one-off exercise.

An attack surface map is a snapshot. The moment it's completed, it starts going stale. New systems are deployed. Cloud instances are spun up. Certificates are issued. Employees join and leave. Third-party integrations are added. The surface changes continuously — and a map from six months ago may bear little resemblance to today's reality.

Approach | Frequency | Best For | Limitation
Point-in-time mapping | Once per engagement (annual or as-needed) | Organisations commissioning periodic pen tests. The mapping phase feeds directly into the test scope. | Stale between engagements. Changes occurring after the map is complete aren't captured until the next test.
Continuous monitoring | Automated, ongoing — daily or weekly scans | Organisations with rapidly changing environments: cloud-native, frequent deployments, multiple development teams. | Automated tools find what's technically discoverable but miss the cross-referencing and contextual analysis that human OSINT provides.
Hybrid approach | Continuous automated monitoring supplemented by periodic human-led mapping exercises | Most organisations. Automated monitoring catches new assets as they appear; periodic human assessment provides depth, context, and path analysis. | Requires investment in both tooling and periodic expert assessment. But the combination covers both breadth and depth.
A Practical Monitoring Programme
continuous: attack_surface_monitor # Automated — runs daily
monitor --subdomains --certificates --shodan # New assets detected within 24h
alert --on=new_asset --on=new_service --on=expired_cert

periodic: human_osint_review # Manual — quarterly or pre-engagement
review --github --linkedin --google_dorking --breach_db
analyse --paths --relationships --risk_tiers

pre_engagement: full_mapping # Deep — before every pen test
discover --passive --active --cloud --supply_chain
prioritise --by_exposure --by_value --by_connectivity
output = updated_scope + prioritised_test_plan
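
Under the hood, the continuous layer is little more than a daily diff against a stored snapshot. A sketch, assuming the day's asset set is fed in from the automated discovery tooling above (the snapshot path is illustrative):

Snapshot Diff Sketch (Python)
import json
from pathlib import Path

SNAPSHOT = Path("attack_surface_snapshot.json")  # illustrative location

def diff_and_alert(current: set[str]) -> set[str]:
    # Compare today's discovered assets with the stored baseline,
    # alert on anything new, then persist the updated baseline.
    previous = set(json.loads(SNAPSHOT.read_text())) if SNAPSHOT.exists() else set()
    new_assets = current - previous
    for asset in sorted(new_assets):
        print(f"ALERT: new externally visible asset: {asset}")
    SNAPSHOT.write_text(json.dumps(sorted(current)))
    return new_assets

Fed daily from subdomain, certificate, and Shodan discovery, this is enough to meet the 24-hour detection window the programme above targets.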

What you receive from the mapping phase.

The attack surface mapping output is delivered as a standalone section of the pen test report — and many clients tell us it's the most immediately actionable part of the entire deliverable. Here's what it includes:

Complete Asset Inventory
Every externally discoverable asset: subdomains, IP addresses, services, cloud instances, third-party hosted systems. Compared against your provided inventory to highlight the delta — the gap between what you know about and what an attacker can find.
Risk-Tiered Classification
Every asset classified into critical, high, medium, and low tiers based on exposure, business value, and connectivity. This directly informs the pen test scope — and continues to be useful long after the test as a basis for your own vulnerability management prioritisation.
Exploitation Path Analysis
Identified attack chains — from internet-facing entry point through to business-critical objective. Each path annotated with the weaknesses that make it viable and the controls that would break it. These are the paths the pen test then validates.
Shadow Asset Register
A specific list of assets that weren't in your inventory — with recommendations for each: decommission, bring into management, add to monitoring, or investigate further. Many of these can be actioned immediately without waiting for the full pen test to complete.
Surface Metrics
Quantified attack surface dimensions: total discoverable assets, percentage unknown to the client, services by type and version, assets with known CVEs, expired certificates, missing security headers. Provides a baseline for measuring improvement over time.
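
Some of those metrics need nothing beyond the standard library to collect. For example, a sketch of the certificate check, flagging expired or failing TLS certificates across discovered hosts (the host names are placeholders):

Certificate Expiry Sketch (Python)
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> int | None:
    # The default context verifies the chain; expired or invalid certs
    # fail the handshake, which is itself a finding worth recording.
    ctx = ssl.create_default_context()
    try:
        with socket.create_connection((host, port), timeout=10) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = tls.getpeercert()["notAfter"]
    except (ssl.SSLError, OSError):
        return None
    expires = datetime.fromtimestamp(ssl.cert_time_to_seconds(not_after), tz=timezone.utc)
    return (expires - datetime.now(tz=timezone.utc)).days

for host in ("portal.example.com", "old-vpn.example.com"):  # placeholders
    days = days_until_expiry(host)
    print(host, "handshake failed, investigate" if days is None else f"{days} days remaining")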

What happens when you skip this step.

Organisations that skip attack surface mapping and proceed directly to testing based on their own asset inventory consistently experience the same pattern of failures.

What Happens | Why It Happens | The Cost
The pen test scope covers 40% of the actual attack surface | The scope was based on an incomplete asset inventory. Unknown assets were excluded by default — not by decision. | 60% of the attack surface is untested. The test report provides assurance about a minority of the exposure.
Testing time is misallocated | Without mapping, every asset in scope receives equal attention. Time is spent on low-risk, well-defended systems while high-risk unknowns go untested. | Tester-days consumed on assets that present minimal real risk. Findings are technically accurate but strategically irrelevant.
Attack chains are missed | Without relationship mapping, the tester assesses each system in isolation. The chain from staging server → shared credentials → production database is invisible because the staging server wasn't in scope. | Individual findings are reported. The chains that turn them into critical compromises are not — because the connecting assets were never discovered.
The real breach comes from outside the scope | An attacker exploits a forgotten development environment, a shadow cloud instance, or a legacy VPN that the pen test didn't touch. | The pen test report said "no critical findings." The breach came from an asset the report didn't cover. The test provided false assurance.

The bottom line.

Your attack surface is larger than you think. It includes assets you've forgotten, cloud instances nobody documented, staging environments that were never decommissioned, and information leakage that's been sitting in public view for months or years. An attacker will find all of it. The question is whether your pen test does too.

Attack surface mapping transforms a pen test from a generic assessment of known assets into a targeted investigation of real risk. It identifies where to test deeply, what to prioritise, and which exploitation paths are realistic — ensuring that every hour of testing time is spent on the things that matter most.

Map the surface first. Then test what matters. Anything else is testing with your eyes half closed.


See the full picture before you test.

Every engagement begins with comprehensive attack surface mapping — discovering what an attacker would find, prioritising by real risk, and building a test plan that targets what matters most.