> C:\Users\intern.summer>whoami /priv — SeDebugPrivilege: Enabled
The organisation had done nearly everything right. Four hundred workstations, all running current builds of Windows 11, all enrolled in Microsoft Intune, all protected by a leading EDR platform, all receiving patches within seventy-two hours of release. Group Policy was tightly configured. Local administrator rights had been removed from standard users. Application whitelisting was in pilot across two departments. The security team had spent three years building this posture, and they were justifiably proud of it.
Then, in June, the HR department onboarded six summer interns. IT was given two days' notice. The asset pool was empty — every managed laptop was allocated. Procurement of new devices would take three weeks. The interns were starting on Monday.
Someone found a solution. In a storage cupboard on the second floor, behind a stack of old monitors, sat a shelf of decommissioned laptops. They had been retired from service eighteen months earlier during a hardware refresh cycle but had never been collected for disposal. One of them still powered on. It was wiped, rebuilt from a base Windows 10 image that a technician had on a USB drive, domain-joined, and handed to an intern named — for the purposes of this article — Alex.
Alex's laptop was the only device on the network that we needed.
The client — a professional services firm in the financial sector — had engaged us for an internal penetration test with a focus on lateral movement and privilege escalation. The scope was their primary office, encompassing all user VLANs, server VLANs, and supporting infrastructure. The engagement was scheduled for late July, approximately six weeks after the interns had started.
We were given an assumed-insider position on the user VLAN with no credentials — a standard starting point that simulates a compromised or rogue device on the corporate network. The objective was to determine how far an attacker could progress through an environment that had received substantial security investment.
The IT team expected a difficult engagement. Their EDR deployment had blocked common attack techniques on the previous year's assessment. They had implemented additional hardening since then. They were looking forward to seeing the results.
They had forgotten about the laptop in the cupboard.
Our initial reconnaissance followed standard practice — passive traffic capture, ARP scanning, and targeted service enumeration across the user VLAN. The environment was well-maintained: workstations responded with consistent service profiles, current Windows builds, and the EDR agent's signature service running on each host.
Consistency is the hallmark of good endpoint management. It is also what makes anomalies visible. During our scan of the 10.10.2.0/24 user VLAN, one device stood out.
One endpoint out of four hundred. Windows 10 version 2004 — a build that had reached end of life in December 2021, over two and a half years prior. No EDR agent. SMB signing enabled but not required (the Group Policy enforcing SMB signing had not been applied to this machine). RDP enabled — disabled on all other workstations via Group Policy. And a hostname that suggested it was a temporary allocation: YOURCOMPANY-INT06.
In a sea of uniformly hardened endpoints, this device was a lighthouse.
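This kind of triage can be automated. The sketch below shows one way to flag fleet outliers: establish a per-attribute baseline from scan results and report any host that deviates from it. The host records here are hypothetical placeholders — in practice they would be parsed from a service scan (nmap, NetExec, or similar).

```python
# Illustrative sketch: flag endpoints that deviate from the fleet baseline.
# The scan records below are hypothetical; real data would be parsed from
# service-scan output.
from collections import Counter

hosts = [
    {"ip": "10.10.2.14",  "os": "Windows 11 23H2", "edr": True,  "rdp": False, "smb_signing_required": True},
    {"ip": "10.10.2.15",  "os": "Windows 11 23H2", "edr": True,  "rdp": False, "smb_signing_required": True},
    {"ip": "10.10.2.217", "os": "Windows 10 2004", "edr": False, "rdp": True,  "smb_signing_required": False},
]

def baseline(hosts, key):
    """Most common value for an attribute across the fleet."""
    return Counter(h[key] for h in hosts).most_common(1)[0][0]

keys = ["os", "edr", "rdp", "smb_signing_required"]
norms = {k: baseline(hosts, k) for k in keys}

# Any host differing from the baseline on any attribute is an anomaly.
anomalies = [
    (h["ip"], [k for k in keys if h[k] != norms[k]])
    for h in hosts
    if any(h[k] != norms[k] for k in keys)
]
print(anomalies)
```

Against a uniform fleet, the one unmanaged device lights up on every attribute at once — exactly what we saw on the scan.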
A single workstation was identified running an end-of-life operating system build, without EDR protection, with RDP enabled, and with SMB signing not enforced. The device's hostname and configuration profile indicated it had been provisioned outside the standard endpoint management process.
Before attacking the device, we wanted to understand why it existed in its current state. A device this far outside the security baseline on an otherwise well-managed network typically has a story behind it.
We queried Active Directory for the computer object YOURCOMPANY-INT06. The object had been created six weeks earlier. Its organizational unit (OU) placement was telling — it sat in the default Computers container at the root of the domain, not in the Managed Workstations OU where every other workstation resided. This meant it was not receiving the Group Policy Objects applied to managed workstations — the GPOs that enforced SMB signing, disabled RDP, deployed the EDR agent, configured the Windows firewall, and applied the security baseline.
The device had been domain-joined manually, dropped into the default container, and never moved to the correct OU. Every security control that the organisation enforced via Group Policy — and there were many — simply did not apply to this machine.
The laptop was not enrolled in Intune. It was not managed by LAPS. It was not reporting to WSUS for patch management. It was not running the EDR agent. It had received only two Group Policy Objects — the Default Domain Policy and the Default Domain Controllers Policy — neither of which contained the security hardening applied to managed workstations.
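The OU placement check is trivial to script. A minimal sketch, assuming distinguished names exported from an LDAP query (the DNs below are hypothetical): a computer object whose parent is the `CN=Computers` container, rather than an `OU=...` path, has never been moved into a managed OU.

```python
# Sketch: flag computer objects left in the default Computers container.
# DNs are hypothetical examples; real ones would come from an LDAP query
# (ldapsearch, PowerShell Get-ADComputer, etc.).

def in_default_computers_container(dn: str) -> bool:
    """The default container is CN=Computers directly under the domain root;
    managed machines live under an OU=... path instead."""
    parts = [p.strip().upper() for p in dn.split(",")]
    return len(parts) > 1 and parts[1] == "CN=COMPUTERS"

dns = [
    "CN=WS-0412,OU=Managed Workstations,DC=yourcompany,DC=local",
    "CN=YOURCOMPANY-INT06,CN=Computers,DC=yourcompany,DC=local",
]

unmanaged = [dn for dn in dns if in_default_computers_container(dn)]
print(unmanaged)
```

One line of filtering separates every machine receiving the hardening GPOs from the one that is not.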
The device was domain-joined. It authenticated users against Active Directory. It accessed file shares and email. To the user sitting in front of it, it worked identically to every other laptop in the office. But from a security perspective, it existed in an entirely different world.
The laptop's lack of hardening presented multiple attack vectors. We chose the most straightforward: SMB relay.
On every other workstation in the environment, SMB signing was required — enforced by a Group Policy that the intern's laptop had never received. SMB signing prevents relay attacks by ensuring that each SMB message is cryptographically signed, preventing an attacker from modifying or replaying authentication exchanges. Without enforced signing, an attacker can intercept an SMB authentication request and relay it to another host, authenticating as the original user.
We configured ntlmrelayx from Impacket to listen for SMB authentication attempts and relay them to the intern's laptop at 10.10.2.217. We then needed to generate an authentication event that we could intercept.
LLMNR and NBT-NS had been disabled — the client's hardening work had removed these easy poisoning targets. However, we identified that IPv6 was enabled across the network but not actively managed. No DHCPv6 server was deployed. This meant that workstations were sending DHCPv6 solicitations that went unanswered — a condition we could exploit using mitm6, which responds to DHCPv6 solicitations with a rogue configuration, setting our laptop as the default DNS server for IPv6.
The mitm6 tool answered DHCPv6 solicitations from workstations on the VLAN, assigning our machine as their IPv6 DNS server. When those workstations attempted to resolve WPAD (Web Proxy Auto-Discovery) names, the queries came to us. We answered with our own address, demanded authentication for the proxy configuration request, and relayed the resulting NTLM exchange to the intern's laptop.
The relay succeeded against the intern's laptop because it did not require SMB signing. It failed against every other workstation on the network because they did. The single unhardened endpoint was the only device in the environment that accepted the relayed authentication.
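Selecting viable relay targets follows directly from the signing posture observed during the scan. The sketch below builds a target file of the kind ntlmrelayx accepts, keeping only hosts where SMB signing is not required (the per-host data is hypothetical; tools such as NetExec can generate this list directly):

```python
# Sketch: derive an NTLM relay target list from SMB signing posture.
# Hosts that require signing are useless as relay targets; only the
# unmanaged laptop qualifies. The signing data here is hypothetical.

signing_required = {
    "10.10.2.14": True,    # managed workstation — signing enforced by GPO
    "10.10.2.15": True,
    "10.10.2.217": False,  # intern's laptop — GPO never applied
}

targets = sorted(ip for ip, required in signing_required.items() if not required)

# Write one target per line, in the smb:// form a relay tool can consume.
with open("relay_targets.txt", "w") as f:
    f.write("\n".join(f"smb://{ip}" for ip in targets) + "\n")

print(targets)
```

On this network the list contained exactly one entry — the relay could only ever land on the device the process had missed.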
The relayed session belonged to j.morrison — a user who was not an administrator but was a member of the Finance group. Through the SOCKS proxy established by ntlmrelayx, we had authenticated SMB access to the intern's laptop as j.morrison.
Our relayed session gave us standard user access to the intern's laptop. We needed local administrator access to extract credentials and establish a more persistent foothold.
Because the laptop was not managed by LAPS, the local administrator account had the password set during the manual rebuild — the technician's default build password. We did not know this password, but the laptop's lack of patching provided an alternative route.
The machine was running Windows 10 version 2004 with no patches applied since the base image was created — over three years of missing security updates. One candidate escalation route was CVE-2022-26923 (Certifried) — a vulnerability in Active Directory Certificate Services that allows an attacker who controls a low-privileged computer account to escalate to domain administrator by manipulating its dNSHostName attribute and requesting a certificate that impersonates a domain controller.
However, we chose a simpler local escalation path. The machine was also vulnerable to PrintNightmare (CVE-2021-34527) — a critical vulnerability in the Windows Print Spooler service that allows remote code execution with SYSTEM privileges. This vulnerability had been patched across the managed estate within days of disclosure in July 2021. The intern's laptop, rebuilt from a pre-patch image and never updated, remained vulnerable three years later.
PrintNightmare executed flawlessly. The Print Spooler service loaded our payload DLL and executed it as NT AUTHORITY\SYSTEM. On any managed workstation, the EDR would have detected the exploitation attempt, blocked the DLL injection, and alerted the security team. On the intern's laptop, there was no EDR. Nothing detected the attack. Nothing blocked it. Nothing logged it.
With SYSTEM-level access on the intern's laptop and no EDR to interfere, we had free rein to extract credentials from the system. We used Mimikatz without obstruction — there was no EDR to detect it, no application whitelisting to prevent it, and no Credential Guard to protect LSASS.
The intern's own NTLM hash was present in LSASS — useful but limited, as a summer intern typically has minimal domain permissions. More interesting was the cached credential for t.chen — the IT technician who had rebuilt the laptop. Chen's credentials were cached because they had logged on interactively during the setup process, and the default Windows cached logon count of ten had preserved the credential.
We queried Active Directory for t.chen's group memberships.
The IT technician's account was a member of Workstation Admins — granting local administrator rights on every workstation in the domain — and SCCM Admins, which provided administrative access to the organisation's System Center Configuration Manager deployment. SCCM deploys software, manages patches, and configures endpoints across the estate; administrative access to SCCM is effectively administrative access to every endpoint it manages.
The cached credential was a domain cached credential (an MS-Cache v2 hash), not a plaintext password — and unlike an NTLM hash, a DCC2 hash cannot be passed or relayed; it can only be cracked offline. Cracking it recovered the technician's password, from which the NT hash follows directly — giving us credentials usable for NTLM authentication against systems that accepted it, including the SCCM server.
SCCM (now Microsoft Endpoint Configuration Manager) is one of the most powerful platforms in a Windows enterprise. It can execute arbitrary code on every managed endpoint, deploy scripts, modify configurations, and access detailed hardware and software inventories. In the hands of an attacker, it is the ultimate force multiplier.
We used t.chen's NTLM hash to authenticate to the SCCM server at 10.10.1.30 via the administration console's API. With SCCM Admin rights, we had several options for escalation. We chose SCCM's Network Access Account (NAA) — a domain account, configured within SCCM, that clients use to retrieve content from distribution points. The NAA credentials are stored in SCCM's database and are retrievable by any SCCM administrator.
The Network Access Account was a standard domain user. On its own, this would have been a limited finding. However, when we tested the NAA password against other accounts — a standard operational check on every engagement — we discovered that the Domain Administrator account used the same password as the SCCM NAA: password reuse between a service account and the most privileged account in the domain.
We authenticated to the primary domain controller as Domain Administrator and performed a DCSync, extracting all domain credential hashes. Complete domain compromise — originating from a laptop that was fished out of a storage cupboard and handed to a summer intern.
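The reuse check we ran is simple to reproduce: group the accounts in a DCSync dump by their NT hash and flag any hash shared by more than one account — identical NT hashes mean identical passwords. The usernames and hash values below are fabricated placeholders standing in for secretsdump-style output.

```python
# Sketch: detect password reuse in DCSync output by grouping accounts
# that share an NT hash. All values below are fabricated placeholders.
from collections import defaultdict

# username -> NT hash, as parsed from a secretsdump-style dump (hypothetical)
nt_hashes = {
    "Administrator": "a1b2c3d4e5f60718293a4b5c6d7e8f90",
    "svc_sccm_naa":  "a1b2c3d4e5f60718293a4b5c6d7e8f90",  # same hash => same password
    "j.morrison":    "0f1e2d3c4b5a69788796a5b4c3d2e1f0",
}

by_hash = defaultdict(list)
for user, h in nt_hashes.items():
    by_hash[h].append(user)

# Any hash mapping to more than one account is a reuse finding.
reused = {h: users for h, users in by_hash.items() if len(users) > 1}
print(reused)
```

Running this across a full domain dump routinely surfaces exactly the finding that ended this engagement: a service account sharing its password with a privileged one.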
| Step | Action | Weakness Exploited |
|---|---|---|
| 01 | Identified anomalous endpoint via service scan (no EDR, old OS) | Inconsistent endpoint hardening; device outside standard management |
| 02 | Confirmed device in default AD Computers container (no GPOs) | Manual domain join without placement in correct OU |
| 03 | IPv6 DNS takeover via mitm6; NTLM relay to intern laptop | IPv6 enabled but unmanaged; SMB signing not required on target |
| 04 | PrintNightmare exploitation for SYSTEM access | Three years of missing patches; no EDR to detect exploitation |
| 05 | Mimikatz credential extraction (intern + IT technician) | No Credential Guard; no EDR; cached logon credentials from setup |
| 06 | Pass-the-hash to SCCM server using IT technician's NTLM hash | IT technician account had SCCM Admin rights |
| 07 | Extracted SCCM Network Access Account credentials | NAA credentials retrievable by any SCCM administrator |
| 08 | Password reuse — NAA password identical to Domain Admin password | Credential reuse between service account and privileged admin account |
This engagement reveals a truth that security teams find deeply uncomfortable: a security posture is not defined by the four hundred endpoints you hardened. It is defined by the one you missed.
The organisation's endpoint security programme was genuinely mature. Three years of investment had produced a hardened, monitored, and well-managed fleet. If the intern's laptop had not existed, our assessment would have been substantially more difficult. EDR would have detected our tools, enforced SMB signing would have blocked our relay, current patching would have closed our exploitation vectors, and Credential Guard would have protected LSASS.
But the laptop did exist. It existed because of a process failure — a human decision to solve an immediate problem (interns starting Monday, no laptops available) with an improvised solution (old hardware from a cupboard, manual rebuild, skip the proper onboarding process). The decision was understandable. It was made with good intentions by a technician under time pressure. It was also catastrophic from a security perspective.
The most impactful immediate action was auditing the default Computers container in Active Directory. Any computer object in this container has not been placed into a managed OU and is therefore not receiving the organisation's Group Policy hardening. This audit should be automated — a scheduled script or a SIEM rule that alerts when a new object appears in the default container.
Continuous compliance monitoring is essential. The organisation had the ability to query every endpoint for EDR status, OS version, and patch level — but this information was not actively monitored. An automated compliance check that flags any device missing EDR, running an unsupported OS, or more than thirty days behind on patches would have identified the intern's laptop within hours of its deployment.
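A minimal sketch of such a check, assuming inventory records pulled from a management platform's API (the hostnames, dates, and supported-build list below are hypothetical):

```python
# Sketch of an automated endpoint compliance check. The inventory records,
# supported-build list, and threshold are hypothetical; real data would come
# from Intune, SCCM, or the EDR console's API.
from datetime import date

MAX_PATCH_AGE_DAYS = 30
SUPPORTED_BUILDS = {"Windows 11 23H2", "Windows 11 22H2"}
TODAY = date(2024, 7, 22)  # fixed "assessment date" for reproducibility

inventory = [
    {"host": "WS-0412",           "os": "Windows 11 23H2", "edr": True,  "last_patch": date(2024, 7, 10)},
    {"host": "YOURCOMPANY-INT06", "os": "Windows 10 2004", "edr": False, "last_patch": date(2021, 5, 18)},
]

def compliance_issues(rec):
    """Return the list of policy violations for one endpoint record."""
    issues = []
    if not rec["edr"]:
        issues.append("no EDR agent")
    if rec["os"] not in SUPPORTED_BUILDS:
        issues.append("unsupported OS build")
    if (TODAY - rec["last_patch"]).days > MAX_PATCH_AGE_DAYS:
        issues.append("patches out of date")
    return issues

flagged = {r["host"]: compliance_issues(r) for r in inventory if compliance_issues(r)}
print(flagged)
```

Run on a schedule and wired to an alert, a check like this would have surfaced the intern's laptop within hours of it joining the domain, not six weeks later during a penetration test.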
The SCCM Network Access Account should be eliminated where possible. Modern SCCM deployments can use Enhanced HTTP or a Cloud Management Gateway instead of the NAA, removing the need to store a domain credential in a location accessible to SCCM administrators. Where the NAA must be retained, its password must be unique and managed by a privileged access management solution.
Finally, password reuse between service accounts and privileged accounts must be treated as a critical finding. A unique, complex, randomly generated password for every account is not a recommendation — it is a requirement. Password managers, Privileged Access Management solutions, and Group Managed Service Accounts all exist to make this achievable at scale.
Every organisation has a process for deploying endpoints. And every organisation has a story about the time they had to work around that process. The laptop from the cupboard. The contractor's personal device. The demo unit that was only supposed to be on the network for a week. The test machine that was never decommissioned. The device that was rebuilt from an old image because it was faster than ordering a new one.
Each of these exceptions is reasonable in isolation. Each solves an immediate operational problem. And each introduces a device onto the network that does not receive the security controls applied to everything else — creating a gap that an attacker will find, target, and exploit.
The intern did nothing wrong. The technician did nothing malicious. The IT team had built an excellent security programme. But a single device, provisioned outside the process, in a moment of operational pressure, undid three years of security investment in five hours.
Until next time — stay sharp, stay curious, and remember: the exception you make today is the finding in next year's pentest report.
This article describes a penetration test conducted under formal engagement with full written authorisation from the client. All identifying details have been altered or omitted to preserve client confidentiality. The techniques described were performed within the scope of a legal agreement. Unauthorised access to computer systems is a criminal offence under the Computer Misuse Act 1990 and equivalent legislation worldwide. Do not attempt to replicate these techniques without proper authorisation.
Hedgehog Security does not just test your best-defended systems — we find the exceptions, the outliers, and the forgotten devices that your processes missed. If your endpoint security looks good on a dashboard but has not been verified against every device on the network, let us show you what an attacker would find.