> ssh internal@10.0.0.1 && echo 'now what?'_
An engineering firm commissions an external penetration test. The results are reassuring: the perimeter firewall is well-configured, internet-facing services are patched and hardened, and the only findings are a few informational issues around HTTP security headers and an outdated TLS cipher suite. The CISO presents the results to the board. Confidence is high.
Three months later, the same firm commissions an internal test. The tester plugs a laptop into a network port in a meeting room — the same port a visitor, a contractor, or anyone with five minutes alone in the building could access. Within 90 minutes, the tester has captured domain credentials from broadcast traffic, escalated to Domain Admin through a Kerberoastable service account, accessed the finance file share containing payroll data for every employee, read the managing director's email, and demonstrated the ability to deploy a payload to every domain-joined machine via Group Policy.
The external perimeter was solid. The internal environment was comprehensively compromised in under two hours. And the gap between those two realities — a hardened shell around a soft interior — is the single most common finding pattern across every sector we assess.
In approximately 85% of internal penetration tests we conduct, the tester achieves Domain Admin. The median time to DA from a standard network connection is under 4 hours. These aren't poorly managed organisations — they're firms with firewalls, EDR, and annual external testing. They've invested in keeping attackers out. They haven't tested what happens when one gets in.
External and internal penetration tests aren't two variants of the same exercise. They simulate fundamentally different attack scenarios, test different controls, and reveal different categories of risk. Understanding what each does — and doesn't — assess is essential to building a testing programme that covers your full risk profile.
| | External Test | Internal Test |
|---|---|---|
| Simulates | An attacker on the internet attempting to breach the perimeter — through exploiting internet-facing services, web applications, VPN gateways, or email-borne attacks. | An attacker who already has internal network access — through a phished employee, a compromised VPN credential, a rogue device on the network, a malicious insider, or physical access to the building. |
| Starting position | No access. The tester begins from the internet with no credentials, no internal knowledge, and no network connectivity beyond what's publicly reachable. | Network access. The tester has a connection to the internal network — typically a standard employee-level network port or VPN connection, with or without domain credentials depending on the scenario. |
| Primary targets | Firewalls, web applications, email gateways, VPN portals, DNS, cloud services, any internet-facing infrastructure. | Active Directory, file shares, internal web applications, database servers, management interfaces, network segmentation, broadcast protocols, internal DNS. |
| Key question answered | Can an attacker get in? | Once they're in, how far can they go? |
| Typical critical findings | Exploitable vulnerabilities in internet-facing services, web application flaws (SQLi, IDOR, auth bypass), weak VPN authentication, exposed management interfaces. | Privilege escalation to Domain Admin, unrestricted lateral movement, credential harvesting from broadcast protocols, access to sensitive data through weak file share permissions, absent network segmentation. |
| Controls tested | Perimeter firewall rules, WAF effectiveness, patching of internet-facing services, email security (SPF/DKIM/DMARC), external attack surface management. | Network segmentation, AD hardening, credential hygiene, internal access controls, broadcast protocol security, endpoint detection, SIEM alerting, privilege management. |
| Detection tested | Rarely — external attacks either succeed or they don't. If the perimeter blocks the attack, the test is over for that vector. | Extensively — every phase of the internal attack generates activity that should trigger alerts: authentication anomalies, LDAP enumeration, lateral movement, privilege escalation. The internal test reveals whether your monitoring sees any of it. |
A trust boundary is a point in your architecture where the level of trust changes — where the system's assumptions about who is making a request, what they're allowed to do, and whether they should be verified shift from one model to another.
The most obvious trust boundary is the perimeter: outside is untrusted, inside is trusted. But real environments have dozens of trust boundaries — between network segments, between user roles, between services, between cloud accounts — and each one represents a point where a control either enforces the boundary or fails to.
Internal penetration testing is fundamentally about testing trust boundaries. Every finding is, at its core, a boundary that failed to hold.
| Trust Boundary | What It Should Enforce | How It Typically Fails |
|---|---|---|
| Network segmentation — separation between VLANs, subnets, and security zones | A device on the corporate VLAN shouldn't reach the server VLAN directly. The guest Wi-Fi shouldn't route to the domain controller. The OT network shouldn't be accessible from IT. | Flat networks with no segmentation. Firewall rules that allow "any-to-any" between VLANs. Guest Wi-Fi on the same broadcast domain as the corporate network. "Temporary" rules that became permanent. |
| Authentication boundaries — the point where identity is verified before access is granted | Accessing the finance system requires authenticating with finance-level credentials. Accessing the domain controller requires DA-level credentials. Each system verifies identity independently. | Single sign-on that grants too-broad access. Service accounts with passwords that never change. Cached credentials on workstations that allow pass-the-hash. Kerberos delegation that trusts too widely. |
| Privilege boundaries — the separation between standard user, local admin, and domain admin privileges | A standard user cannot install software, modify system configuration, or access other users' data. Local admin cannot manage the domain. Only designated accounts have DA. | Users with unnecessary local admin. Service accounts with DA membership. Overly permissive Group Policy granting admin rights to broad groups. Unprotected LAPS or no LAPS at all. |
| Data access boundaries — controls determining who can read, write, or modify specific data | HR data is accessible only to HR. Finance data is accessible only to finance. Board papers are accessible only to directors. Client data is restricted to the relevant engagement team. | File shares with "Domain Users — Full Control." SharePoint sites with inherited permissions that nobody has reviewed. Database servers with shared application credentials. |
| IT/OT boundary — the separation between information technology and operational technology networks | IT systems (email, file shares, AD) cannot reach OT systems (PLCs, SCADA, building management). The boundary is enforced by a firewall, air gap, or data diode. | Shared credentials between IT and OT. Jump boxes that bridge both networks without additional authentication. Historical VPN tunnels installed by vendors for remote support. |
| Cloud/on-premises boundary — the trust relationship between on-premises AD and cloud identity (Azure AD / Entra ID) | Compromise of an on-premises account shouldn't automatically grant access to cloud resources. Azure AD Connect should be hardened. Conditional access should restrict cloud access by device compliance. | Azure AD Connect sync account with excessive permissions. No conditional access policies. Password hash sync enabling cloud access with on-premises credentials. Seamless SSO without MFA. |
An internal pen test systematically probes each of these boundaries. The findings tell you not just which boundaries failed, but what an attacker could achieve by crossing them — and which combinations of boundary failures chain together into complete compromise.
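In practice, much of that probing starts as a simple reachability question: from a foothold in one zone, which services in another zone will answer? The sketch below shows the shape of that check. The subnets, hosts, and ports are illustrative placeholders, to be replaced with the boundaries your own segmentation policy claims to enforce.

```python
import socket

# Illustrative targets only: substitute the zones and services your own
# segmentation policy says should NOT be reachable from this foothold.
BOUNDARY_CHECKS = {
    "server VLAN from corporate VLAN": [("10.20.0.10", 445), ("10.20.0.10", 3389)],
    "OT network from IT network":      [("172.16.50.5", 502)],    # Modbus/TCP
    "DC from guest Wi-Fi":             [("10.0.0.1", 389), ("10.0.0.1", 88)],
}

def is_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds from this position."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for boundary, targets in BOUNDARY_CHECKS.items():
        for host, port in targets:
            verdict = "REACHABLE (boundary not enforced)" if is_reachable(host, port) else "blocked"
            print(f"{boundary}: {host}:{port} -> {verdict}")
```

A real engagement goes further than TCP connect checks (UDP, ICMP, application-layer access, and verification from multiple source zones), but even this level of testing exposes a flat network within minutes.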
Lateral movement is the process of moving from one compromised system to another within the internal network. It's the mechanism by which a single compromised workstation becomes a fully compromised domain — and it's the phase that most internal defences are worst at detecting and preventing.
An attacker doesn't need to exploit a new vulnerability for each system they reach. Lateral movement typically relies on legitimate protocols, valid credentials, and normal administrative tools — which is precisely why it's so difficult for security monitoring to distinguish from ordinary IT operations.
| Technique | How It Works | Why It's Hard to Detect |
|---|---|---|
| Pass-the-hash | The attacker captures an NTLM password hash from memory on a compromised machine and uses it to authenticate to other systems — without ever knowing the plaintext password. | The authentication looks identical to a normal NTLM logon. No failed login attempts. No brute-force patterns. The credentials are valid — they're just being used by the wrong person. |
| Kerberoasting | The attacker requests Kerberos service tickets for service accounts with SPNs registered in AD, exports them, and cracks the passwords offline. Any domain user can do this — it's how Kerberos is designed to work. | Requesting a TGS ticket is a normal domain operation. The cracking happens offline, outside the network. The only detectable event is the ticket request itself — which looks like any other service authentication. |
| SMB relay | The attacker poisons a broadcast response (LLMNR/NBT-NS) to capture authentication attempts, then relays those credentials to another system in real time — authenticating as the victim without any password cracking. | The relay uses the victim's own credentials in a legitimate authentication exchange. The target system sees a valid logon. No unusual protocols, no malicious payloads — just redirected authentication. |
| RDP / PsExec / WMI | With valid credentials (captured, cracked, or relayed), the attacker uses standard remote administration tools to connect to other systems — the same tools IT uses daily. | RDP sessions, PsExec connections, and WMI queries are normal administrative activity. If your monitoring doesn't correlate who is running these tools with whether they should be, the attacker looks like a sysadmin. |
| Token manipulation | The attacker impersonates another user's security token on a compromised system — effectively becoming that user without their password. Particularly powerful when a privileged user is logged into a compromised machine. | Token operations happen in memory and use legitimate Windows APIs. If a Domain Admin has an active session on a workstation the attacker compromises, the attacker can act as the DA immediately. |
| Group Policy abuse | With DA access, the attacker deploys a payload to every domain-joined machine through Group Policy — the same mechanism used to deploy legitimate software updates. | GPO deployment is an expected enterprise function. If your monitoring doesn't alert on new or modified GPOs, the attacker can distribute a payload to thousands of machines in minutes using your own infrastructure. |
The internal pen test demonstrates which of these techniques work in your specific environment, how quickly they succeed, and — critically — whether any of them generate an alert. The gap between "technique succeeds" and "SOC detects it" is the most revealing metric in the entire engagement.
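To make that gap concrete on the detection side, here is a minimal hunting sketch for the Kerberoasting technique above. It assumes Windows Security event 4769 (Kerberos service ticket requests) has been exported to a JSON file, and it flags accounts that request RC4 tickets for many distinct services in a short window, a common but not definitive indicator. The field names reflect the standard 4769 fields; adjust them to whatever your export or SIEM actually emits.

```python
import json
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=10)   # look for bursts of service ticket requests
THRESHOLD = 10                   # distinct services per window worth a closer look

def load_events(path: str) -> list[dict]:
    """Events exported as a JSON list; timestamps assumed ISO 8601. Adjust to your export."""
    with open(path) as f:
        return json.load(f)

def flag_kerberoast_candidates(events: list[dict]) -> dict[str, set[str]]:
    requests = defaultdict(list)   # requesting account -> [(time, service name)]
    for e in events:
        if e.get("EventID") != 4769:
            continue
        # RC4-HMAC (0x17) ticket requests are the classic Kerberoasting tell on modern domains.
        if e.get("TicketEncryptionType") != "0x17":
            continue
        ts = datetime.fromisoformat(e["TimeCreated"])
        requests[e["TargetUserName"]].append((ts, e["ServiceName"]))

    flagged = {}
    for account, items in requests.items():
        items.sort()
        for i, (start, _) in enumerate(items):
            services = {svc for ts, svc in items[i:] if ts - start <= WINDOW}
            if len(services) >= THRESHOLD:
                flagged[account] = services
                break
    return flagged

if __name__ == "__main__":
    for account, services in flag_kerberoast_candidates(load_events("4769-export.json")).items():
        print(f"{account}: requested RC4 tickets for {len(services)} distinct services within {WINDOW}")
```

A tuned SIEM does this in near real time; the point of the sketch is that the raw signal usually already exists in logs the organisation is collecting but not alerting on.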
To illustrate how these techniques chain together in practice, here's a composite of the most common internal attack path we encounter — the one that produces DA in under two hours from a standard network port.
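Timings and tooling below are representative rather than drawn from any single engagement; the chain itself is the one we see most often.

```
00:00  Laptop connected to a meeting-room network port; DHCP lease issued, no NAC challenge
00:04  LLMNR/NBT-NS poisoning begins; NTLMv2 authentication captured from broadcast traffic
00:25  Captured credential cracked; tester now holds a standard domain user account
00:40  Kerberoast: service tickets requested for SPN accounts; weak service account password cracked offline
01:05  Cracked account sits in Backup Operators; file-level access to the domain controller used to copy NTDS.dit and the SYSTEM hive
01:20  Domain Admin hashes extracted; pass-the-hash authentication to the domain controller succeeds
01:30  Domain Admin confirmed; GPO payload deployment to every domain-joined machine demonstrated, not executed

alerts = 0
```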
Ninety minutes from a network port in a meeting room to complete domain compromise. No sophisticated exploits. No zero-days. Just broadcast protocols that shouldn't be enabled, a service account with a weak password, and a backup operator with excessive file-level access to the domain controller. Three individually moderate misconfigurations that chain into catastrophic compromise.
And the final line — alerts = 0 — is the finding that often generates the most urgent conversation.
Organisations that test externally but not internally are checking the lock while ignoring everything behind the door. The external test evaluates whether an attacker can get in. The internal test evaluates what happens once they do — through any of the countless entry points that bypass the perimeter entirely:
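- A phishing email that lands a payload on an employee's workstation
- A stolen or reused VPN credential
- A contractor's or supplier's compromised laptop connecting to the office network
- A rogue device plugged into an unattended network port
- A malicious insider with legitimate access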
Every one of these scenarios gives the attacker internal network access without exploiting a single perimeter vulnerability. The external pen test would report a clean bill of health. The internal pen test would reveal the domain compromise.
Internal pen tests consistently surface entire categories of vulnerability that external testing cannot assess — because these weaknesses only exist in the context of internal network access.
| Finding Category | What We Typically Find | Why It Matters |
|---|---|---|
| Active Directory misconfigurations | Kerberoastable accounts with weak passwords, AS-REP roastable accounts, unconstrained delegation, GPP passwords in SYSVOL, excessive DA membership, unprotected LAPS, stale privileged accounts | AD is the keys to the kingdom. Every misconfiguration is a potential escalation path. We find exploitable AD weaknesses in virtually every internal test we conduct. |
| Broadcast protocol poisoning | LLMNR, NBT-NS, and mDNS enabled on all subnets. Responses poisoned within seconds. Credentials captured without any user interaction. | These protocols exist for legacy compatibility and are enabled by default. They hand credentials to anyone listening on the same broadcast domain — which includes the attacker. A remediation spot-check sketch follows this table. |
| Network segmentation failures | Flat networks where any device can reach any other device. Missing firewall rules between VLANs. Guest Wi-Fi routing to the server VLAN. OT accessible from IT. | Without segmentation, one compromised device becomes a launchpad for the entire network. Segmentation is the control that limits blast radius — and its absence is the reason 90-minute domain compromises happen. |
| Excessive file share permissions | "Domain Users — Full Control" on finance, HR, legal, and board document shares. Inheritance granting access to sensitive data through nested group membership nobody has reviewed. | Data access is the attacker's objective. If every domain user can read the payroll spreadsheet, the MD's contract negotiations, and the M&A documents, then compromising any user account is a data breach. |
| Weak credential hygiene | Service accounts with passwords that haven't changed in years. Shared local admin passwords across all workstations. Passwords stored in Group Policy Preferences. Credentials in plaintext on internal wiki pages or SharePoint. | Credentials are the currency of internal attacks. Every weak, shared, or exposed credential is a shortcut through the kill chain. |
| Detection and monitoring gaps | No alerting on LDAP enumeration, Kerberoasting, lateral movement, or anomalous authentication patterns. SIEM configured but not tuned. EDR deployed but not monitored. | The internal test reveals whether your detection capability works in practice — not in theory. The gap between the controls you've deployed and the alerts they actually generate is often the most actionable finding in the report. |
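On the broadcast-poisoning finding, the usual first remediation step is to disable LLMNR through the "Turn off multicast name resolution" Group Policy setting. The sketch below spot-checks whether that policy has actually landed on a single Windows host; it assumes the standard DNSClient policy key is the enforcement mechanism, so verify the path against your own GPO design.

```python
import winreg

# "Turn off multicast name resolution" lands in this policy key when set via GPO.
# Key path and value name are the standard ones, but verify against your own estate.
KEY_PATH = r"SOFTWARE\Policies\Microsoft\Windows NT\DNSClient"
VALUE_NAME = "EnableMulticast"

def llmnr_disabled() -> bool:
    """Return True if the LLMNR-disable policy is present and set to 0 on this host."""
    try:
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH) as key:
            value, _ = winreg.QueryValueEx(key, VALUE_NAME)
            return value == 0
    except FileNotFoundError:
        # Key or value absent: no policy applied, so LLMNR is in its default (enabled) state.
        return False

if __name__ == "__main__":
    print("LLMNR disabled by policy" if llmnr_disabled() else
          "LLMNR NOT disabled: this host will participate in multicast name resolution")
```

NBT-NS is configured per network adapter rather than through a single policy value, so it needs a separate check; both findings are cheap to verify and cheap to fix relative to the credential exposure they create.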
Internal pen tests can be configured with different starting positions depending on the question the organisation wants to answer. Each variant simulates a different real-world scenario.
| Variant | Starting Position | Simulates | Best For |
|---|---|---|---|
| Network access, no credentials | Physical network port or rogue device. No domain credentials provided. | An attacker who has gained physical network access — through tailgating, a compromised IoT device, or an unsecured network port. | Testing whether your network controls (NAC, 802.1X, broadcast protocol security) prevent an unauthenticated device from gaining a foothold. |
| Authenticated standard user | A standard domain user account with no special privileges. The most common variant. | A phished employee, a compromised contractor, or a low-privilege insider threat. The most realistic post-breach starting point. | Testing escalation paths, lateral movement, data access, and detection — from the position an attacker is most likely to achieve. |
| Assume breach — workstation compromise | A compromised workstation with the tester operating as a local admin on a single machine. | An employee who has been fully compromised — their machine is under attacker control. Post-malware, post-RAT, post-initial exploitation. | Testing what the attacker can achieve after endpoint compromise. Focuses on lateral movement, credential access, and detection of post-compromise activity. |
| Specific scenario | Custom starting position aligned to a threat model — e.g. "compromised finance user" or "rogue administrator in the Edinburgh office." | A targeted threat scenario identified during threat modelling or risk assessment. | Organisations with mature security programmes that want to test specific attack hypotheses rather than broad-spectrum internal assessment. |
External and internal testing aren't alternatives — they're complementary. An organisation that tests only externally has validated its perimeter but knows nothing about its internal resilience. An organisation that tests only internally has skipped the question of whether the perimeter holds in the first place.
| Programme Stage | Recommended Approach |
|---|---|
| Starting out | Commission an external test first — validate the perimeter. Follow with an internal test within 6 months. The combined results provide a complete picture of your risk exposure and establish a baseline for future testing. |
| Annual programme | Alternate annually: external in Year 1, internal in Year 2, or run both in the same year with different timings. If budget permits only one, prioritise the internal test — it almost always reveals more critical findings. |
| Mature programme | Run both annually. Add scenario-based variants (assume-breach, specific threat models) as the organisation's security posture matures and standard findings are remediated. Integrate with red team exercises for the most realistic adversary simulation. |
| Post-incident | After a breach or significant security event, commission an internal test focused on the attack path that was exploited — validating that remediation has closed the specific chain and that no alternative paths exist. |
External pen tests evaluate your perimeter — the locks on the door. Internal pen tests evaluate everything behind it — the trust boundaries, the credential hygiene, the network segmentation, the access controls, and the detection capabilities that determine whether a breach stays contained or becomes catastrophic.
In approximately 85% of internal tests, we achieve Domain Admin. The perimeter was intact. The VPN was patched. The firewall rules were sensible. None of that mattered once the attacker had a network connection and the internal environment defaulted to trust.
Attackers get in. Through phishing, stolen credentials, physical access, supply chain compromise, or insider threat — the perimeter is bypassed constantly. The question isn't whether it will happen. The question is what the attacker finds when they're inside. The internal pen test answers that question before a real attacker does.
Our internal pen tests simulate realistic post-breach scenarios — testing trust boundaries, lateral movement, privilege escalation, and detection capability across your entire internal environment.