Case Study

The Forgotten Test Server That Held the Crown Jewels

> root@dev-legacy:~# ls /backups/prod/ | wc -l && echo 'backup files found'

Peter Bassill 3 September 2024 15 min read
penetration-testing asset-management shadow-it from-the-hacker-desk data-exposure development-environments credential-reuse database-security

Decommissioned on paper. Live on the network.

Every organisation has ghosts on its network. Servers that were built for a project that ended two years ago. Virtual machines spun up for a proof of concept that was never approved. Development environments created by a contractor who left the company before documenting what they had built. Test databases populated with production data because it was easier than generating synthetic records.

These systems accumulate like sediment. They are not in the asset register because they were never formally commissioned. They are not in the patching cycle because nobody knows they exist. They are not monitored because they were never enrolled in the SIEM. They are not backed up because they are not production systems. They are not decommissioned because nobody remembers they are there.

And then, one Tuesday afternoon, a penetration tester finds one. It is running an unpatched operating system, it is accessible from the user VLAN, and it contains a complete copy of the production database — customer names, addresses, financial records, and hashed passwords. All of them.

This is that story.


The Engagement Brief

The client was an e-commerce company with a significant online presence — processing several million transactions per year across multiple brands. They operated a hybrid infrastructure: customer-facing applications hosted in a public cloud environment, with back-office systems, databases, and internal tooling running on-premises in a colocated data centre. The IT team comprised approximately forty people, including a dedicated development team of twelve.

We had been engaged to conduct an internal network penetration test focused on the on-premises environment. The scope covered all internal VLANs reachable from the user network, including server, development, and management segments. The client was particularly interested in the security of their database infrastructure, as they were preparing for a PCI DSS reassessment and wanted assurance that their cardholder data environment was properly isolated.

They provided us with a network diagram and an asset register. Both documents would prove to be incomplete.


Initial Reconnaissance — Counting the Hosts

Our initial scan of the internal network produced a host count that did not match the asset register. The register listed one hundred and forty-seven servers across all VLANs. Our scan identified one hundred and sixty-three. Sixteen unregistered hosts — a discrepancy of approximately eleven per cent.

Discrepancies between asset registers and reality are common on penetration tests. They typically represent a mix of temporary systems, forgotten test infrastructure, and devices added outside the change management process. But the scale of this discrepancy — sixteen hosts — suggested a more systemic issue.

We catalogued the unregistered hosts by VLAN.

Unregistered Hosts — VLAN Distribution
Asset Register: 147 servers
Scan Results: 163 live hosts
Discrepancy: +16 unregistered hosts

Distribution:
Server VLAN (10.0.1.0/24): +3 unregistered
Development VLAN (10.0.5.0/24): +9 unregistered
Management VLAN (10.0.10.0/24): +2 unregistered
DMZ (172.16.0.0/24): +2 unregistered

# 9 of 16 unregistered hosts are on the development VLAN

Nine of the sixteen unregistered hosts were on the development VLAN. This VLAN was described in the client's documentation as a 'sandbox environment for application development and testing'. It was routable from the user VLAN — developers needed access to their test systems from their workstations. Firewall rules between the user VLAN and the development VLAN were permissive: all TCP and UDP ports were allowed in both directions.

The development VLAN was, in the client's own words, 'not production' and therefore subject to less rigorous controls. This distinction — between production and non-production — would prove to be the central weakness of the entire engagement.


The Development VLAN — The Wild West

We turned our attention to the development VLAN at 10.0.5.0/24, which contained thirty-one live hosts: twenty-two registered and nine unregistered. The registered systems ran a mix of Linux and Windows, hosting development instances of the company's web applications, CI/CD pipeline tools, and staging databases.

The nine unregistered hosts presented a very different profile.

Unregistered Hosts — Development VLAN Enumeration
$ nmap -sV -O 10.0.5.200-210

10.0.5.200 Ubuntu 18.04 (EOL) Apache, MySQL 5.7, SSH
10.0.5.201 Ubuntu 18.04 (EOL) Nginx, PostgreSQL 11, SSH
10.0.5.202 Ubuntu 16.04 (EOL) Apache, MySQL 5.6, PHP 7.0, SSH
10.0.5.203 CentOS 7 (EOL) Jenkins 2.249, SSH, Docker
10.0.5.204 Windows Server 2012 R2 (EOL) IIS 8.5, MSSQL 2014, RDP
10.0.5.205 Ubuntu 18.04 (EOL) Redis 5.0, SSH
10.0.5.206 Ubuntu 20.04 Grafana 7.5, SSH
10.0.5.207 CentOS 7 (EOL) GitLab CE 12.10, SSH
10.0.5.208 Ubuntu 18.04 (EOL) MySQL 5.7, SSH, NFS

# 7 of 9 hosts running end-of-life operating systems
# All running outdated application versions
# None present in asset register, patching, or monitoring systems

Seven of nine unregistered hosts were running end-of-life operating systems — Ubuntu 16.04, Ubuntu 18.04, CentOS 7, and Windows Server 2012 R2. None were receiving security patches. All were running outdated application software. Several were running services with known critical vulnerabilities.

These were the ghosts. Development servers built for projects that had ended, proofs of concept that had served their purpose, and test environments that had been superseded by newer infrastructure. Nobody had decommissioned them because nobody owned them. They had been built by developers who had moved on, by contractors whose engagements had ended, and by IT staff who had changed roles. The machines ran on, quietly consuming IP addresses and electricity, invisible to every management process the organisation operated.


The Server at 10.0.5.208

We examined each of the unregistered hosts systematically, but one device commanded our attention more than the others: 10.0.5.208 — an Ubuntu 18.04 server running MySQL 5.7, SSH, and NFS.

The NFS service was the immediate point of interest. NFS (Network File System) is a protocol for sharing filesystems across a network. When misconfigured — particularly when exports are shared with overly broad access permissions — NFS can expose sensitive data to any host on the network.

NFS — Export Enumeration
$ showmount -e 10.0.5.208

Export list for 10.0.5.208:
/backups (everyone)
/var/www (everyone)

# Both exports accessible to any host — no IP restrictions

$ sudo mount -t nfs 10.0.5.208:/backups /mnt/nfs_backups
$ ls -la /mnt/nfs_backups/

drwxr-xr-x prod/
drwxr-xr-x staging/
drwxr-xr-x scripts/
-rw-r--r-- backup_cron.sh

Two NFS exports, both shared with everyone — no IP-based access restrictions, no authentication, no encryption. We mounted the /backups share and discovered a directory structure that immediately raised the severity of the finding.

A directory named prod. On a development server. In an unregistered, unmanaged, unmonitored system.


Production Data in a Development Graveyard

Backup Directory Contents — /backups/prod/
$ ls -lh /mnt/nfs_backups/prod/

-rw-r--r-- ecommerce_db_20230815.sql.gz 4.2 GB
-rw-r--r-- ecommerce_db_20230901.sql.gz 4.3 GB
-rw-r--r-- ecommerce_db_20231001.sql.gz 4.4 GB
-rw-r--r-- ecommerce_db_20231101.sql.gz 4.5 GB
-rw-r--r-- ecommerce_db_20231201.sql.gz 4.6 GB
-rw-r--r-- ecommerce_db_20240101.sql.gz 4.7 GB
-rw-r--r-- crm_full_20231015.sql.gz 1.8 GB
-rw-r--r-- crm_full_20240115.sql.gz 1.9 GB
-rw-r--r-- user_auth_db_20240201.sql.gz 890 MB
-rw-r--r-- analytics_warehouse_20240101.sql.gz 12 GB

Total: 43.8 GB of database backups
Date range: August 2023 — February 2024

$ cat /mnt/nfs_backups/backup_cron.sh
#!/bin/bash
# Monthly production backup — copies from prod DB server
# Created by d.kumar for dev data refresh — 2023-08-10
mysqldump -h prod-db-01.corp.local -u backup_svc ...

# NOTE: This script last ran 2024-02-01 (cron still active)

Forty-three gigabytes of production database backups. Monthly exports of the e-commerce database, the CRM system, the user authentication database, and the analytics data warehouse. The files spanned six months, from August 2023 to February 2024. A cron job script on the server revealed the origin: a developer named D. Kumar had created an automated process that connected to the production database server each month, performed a full export, and stored the result on this development server.

The purpose, as noted in the script's comments, was a 'dev data refresh' — populating the development environment with realistic data for testing. This is an extremely common practice in software development. It is also one of the most dangerous, because it places production data — with all its sensitivity, all its regulatory obligations, and all its commercial value — on infrastructure that was never designed to protect it.

Critical Finding — Production Database Backups on Unmanaged Server

An unregistered, unpatched development server contained 43.8 GB of production database backups including customer PII, financial transaction records, authentication credentials, and CRM data. The data was accessible without authentication via NFS exports shared with all hosts on the network.


What Was in the Backups

We decompressed and examined the most recent backup of each database — not to exfiltrate or copy the data, but to characterise its contents and sensitivity for accurate reporting. We confirmed the scope of the exposure by examining table structures and sampling a limited number of records.

Database: ecommerce_db (4.7 GB)
Contents: Customer orders, delivery addresses, transaction history, product catalogue, pricing
Sensitivity: PII, commercial — PCI DSS scope if card data present

Database: crm_full (1.9 GB)
Contents: Customer contact records, support tickets, account notes, communication preferences
Sensitivity: PII — GDPR Article 5 obligations apply

Database: user_auth_db (890 MB)
Contents: Usernames, email addresses, bcrypt password hashes, MFA seeds, session tokens, API keys
Sensitivity: Authentication credentials — critical security impact

Database: analytics_warehouse (12 GB)
Contents: Aggregated transaction data, customer segmentation, marketing analytics, revenue reporting
Sensitivity: Commercial — competitive intelligence value

The user_auth_db backup was the most immediately dangerous. It contained bcrypt-hashed passwords for every customer account on the platform — over two million records. Bcrypt is a strong hashing algorithm, and the hashes would resist bulk cracking. However, the database also contained MFA seeds — the shared secrets used to generate time-based one-time passwords for customers who had enabled two-factor authentication. With the password hash and the MFA seed, an attacker who cracked a customer's password would also be able to generate valid MFA codes, completely defeating the two-factor protection.

The database also contained active API keys for integrations with payment processors, logistics providers, and marketing platforms. These keys were stored alongside their associated service account credentials. Several of the API keys, when tested against the providers' public endpoints, were still valid.

The e-commerce database contained customer names, email addresses, delivery addresses, phone numbers, and full order histories. A sampling of the data confirmed that it did not contain primary account numbers (PANs) or full card data — the payment processing was handled by a third-party tokenisation service, and only token references were stored. This was a positive finding from a PCI DSS perspective, but the volume of PII remained a significant GDPR concern.


SSH and the Credential Chain

The NFS share had given us access to the backup data directly, but we also needed to assess the server itself as a potential pivot point. The SSH service on 10.0.5.208 was running on the default port. We examined the backup script for credentials.

Credential Extraction — Backup Script Analysis
$ cat /mnt/nfs_backups/backup_cron.sh

#!/bin/bash
# Production backup script — d.kumar — 2023-08-10

PROD_HOST="prod-db-01.corp.local"
PROD_USER="backup_svc"
PROD_PASS="Bkup_Pr0d_2023!"

CRM_HOST="crm-db-01.corp.local"
CRM_USER="backup_svc"
CRM_PASS="Bkup_CRM_2023!"

AUTH_HOST="auth-db-01.corp.local"
AUTH_USER="backup_svc"
AUTH_PASS="Bkup_Auth_2023!"

# ... mysqldump commands using these credentials ...

Hardcoded plaintext credentials for three production database servers — the e-commerce database, the CRM database, and the authentication database. The credentials followed a predictable pattern: Bkup_[Service]_2023! — the kind of password that is generated once, pasted into a script, and never changed.
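Finding these before an attacker does can be partly automated. A minimal sketch — the variable-name pattern is illustrative and deliberately simple — that flags credential-like literal assignments in shell scripts under a directory:

```python
import re
from pathlib import Path

# Matches a literal assigned to a suspicious variable name, e.g. PROD_PASS="..."
CRED_PATTERN = re.compile(
    r'^\s*(?P<name>\w*(PASS(WORD)?|SECRET|TOKEN|API_?KEY)\w*)\s*=\s*["\']?(?P<value>[^"\'\s]+)',
    re.IGNORECASE | re.MULTILINE,
)

def find_hardcoded_credentials(root: str) -> list[tuple[str, str]]:
    """Return (file, variable) pairs for credential-like literals in *.sh files."""
    hits = []
    for path in Path(root).rglob("*.sh"):
        text = path.read_text(errors="ignore")
        for match in CRED_PATTERN.finditer(text):
            # Skip values that are themselves variable references, e.g. PASS="$VAULT_PASS"
            if not match.group("value").startswith("$"):
                hits.append((str(path), match.group("name")))
    return hits
```

Run periodically against script repositories and shared filesystems, even a crude scanner like this would have surfaced backup_cron.sh long before a penetration test did.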

We tested these credentials against the production database servers. The backup_svc accounts were still active on all three servers. The passwords had not been rotated since creation. The accounts had SELECT permissions on all databases — sufficient to read every table, every row, and every column in the production environment.

The forgotten development server had not only stored a copy of the crown jewels — it had stored the keys to the vault as well.


From Development to Domain

We also explored access to the development server itself. Its /etc/shadow file turned out to be readable over NFS: the exports were configured with no_root_squash, so a remote client connecting as root retained root privileges, and they were served without subtree checking, so NFS file handles were not confined to the exported directories. In combination, these two misconfigurations allowed our NFS client to read files outside the exports, including /etc/shadow.

NFS Root Squash — Disabled
$ cat /etc/exports (on 10.0.5.208 via NFS)

/backups *(rw,sync,no_root_squash)
/var/www *(rw,sync,no_root_squash)

# no_root_squash: a remote root user keeps root privileges on the export
# no subtree_check (the default): file handles are not confined to the
# exported directory, allowing reads elsewhere on the same filesystem

$ cat /etc/shadow (retrieved from 10.0.5.208 via NFS, outside the export)
root:$6$[REDACTED]:19200:0:99999:7:::
d.kumar:$6$[REDACTED]:19579:0:99999:7:::

$ hashcat -m 1800 hashes.txt wordlist.txt -r rules/best64.rule
d.kumar:$6$[REDACTED]:Summer2023!
Status: Cracked (47 minutes)

The combination of no_root_squash and the absence of subtree checking meant that a remote client connecting as root could read files across the server's filesystem — including /etc/shadow, despite only /backups and /var/www being exported. We cracked the password for the d.kumar account: Summer2023!.

D. Kumar was a developer who had left the company four months prior. His Active Directory account had been disabled as part of the standard leaver process. However, his local Linux account on the development server remained active — because the server was not in the asset register, it was not included in the leaver process, and nobody knew it existed.

We tested the cracked password against other services. Password reuse delivered the expected result: D. Kumar had used the same password for his local Linux account, his GitLab CE account on 10.0.5.207, and — critically — his Jenkins account on 10.0.5.203.

Jenkins on 10.0.5.203 was running version 2.249 — over three years behind the current release, with multiple known critical vulnerabilities. D. Kumar's account had administrative access. Jenkins' credential store contained stored SSH keys and passwords used by the CI/CD pipeline to deploy code to staging and production servers. Among these was an SSH private key for a service account with sudo access on production web servers.

From a forgotten development server to production infrastructure access. The credentials had been sitting in Jenkins for over a year, waiting for someone to find them.


From Ghost Server to Crown Jewels

01. Identified 16 unregistered hosts across internal VLANs
    Weakness: incomplete asset register; no automated discovery
02. Enumerated 9 unmanaged servers on development VLAN
    Weakness: development VLAN treated as low-risk; permissive firewall rules
03. Mounted NFS exports from 10.0.5.208 with no authentication
    Weakness: NFS exports shared with everyone; no IP restrictions
04. Discovered 43.8 GB of production database backups
    Weakness: production data copied to unmanaged dev server for testing
05. Extracted production database credentials from backup script
    Weakness: hardcoded plaintext credentials in shell script on NFS share
06. Confirmed credentials still valid on production database servers
    Weakness: no password rotation; service accounts unchanged since creation
07. Extracted /etc/shadow via NFS misconfiguration; cracked d.kumar
    Weakness: no_root_squash; weak password; departed employee account active
08. Reused password on Jenkins; extracted SSH keys for production servers
    Weakness: password reuse; Jenkins credential store with production deployment keys

Shadow Infrastructure and the Production Data Problem

This engagement exposed two interconnected problems that are endemic in organisations with active development teams: shadow infrastructure and the proliferation of production data into non-production environments.

Shadow Infrastructure
Development teams create infrastructure to solve immediate problems — test servers, CI/CD pipelines, staging databases. These systems are built outside the change management process, do not appear in asset registers, and persist indefinitely after their original purpose has been served. They become unowned, unpatched, and invisible.
Production Data in Dev
Developers need realistic data for testing. The path of least resistance is to copy production data. This places customer PII, financial records, and authentication credentials on systems that lack the security controls applied to production infrastructure — unencrypted storage, permissive access, no monitoring.
Leaver Process Gaps
When an employee leaves, their Active Directory account is disabled. But local accounts on unregistered servers, credentials in CI/CD systems, and SSH keys stored in build pipelines are not touched — because nobody knows they exist. The leaver process only covers assets in the asset register.
Regulatory Exposure
Production database backups on unmanaged infrastructure represent a GDPR breach waiting to happen. The data controller's obligations under Articles 5, 25, and 32 apply regardless of whether the data resides on a production server or a forgotten development machine. The ICO will not distinguish between them.

The client's production database servers were well-managed. They were patched, monitored, backed up, and protected by network segmentation and access controls. The PCI DSS cardholder data environment was properly isolated. If the engagement had been scoped only to production systems, the findings would have been minimal.

But the copy of the production data on the development server was subject to none of these controls. The NFS share had no authentication. The server had no patches. The backup script had plaintext credentials. And the data — customer records, authentication hashes, API keys — was identical to what resided in the protected production environment.

Protecting the production database is necessary. It is not sufficient if a complete copy exists on an unmanaged server accessible from the entire network.


Technique Mapping

T1046 — Network Service Discovery
Identification of unregistered hosts and exposed services including NFS, MySQL, Jenkins, and GitLab across the development VLAN.
T1039 — Data from Network Shared Drive
Access to production database backups and backup scripts via unauthenticated NFS exports with no_root_squash enabled.
T1552.001 — Credentials in Files
Extraction of production database credentials hardcoded in plaintext within the backup shell script.
T1110.002 — Password Cracking
Offline cracking of the departed developer's sha512crypt password hash, extracted from /etc/shadow over NFS.
T1021.004 — SSH
Lateral movement to Jenkins and production servers using cracked credentials and SSH keys from the CI/CD credential store.
T1072 — Software Deployment Tools
Exploitation of Jenkins CI/CD administrative access to extract stored deployment credentials for production infrastructure.

Recommendations and Hardening

Remediation Roadmap
Phase 1 — Immediate (0–7 days) Cost: Low
✓ Power off and isolate 10.0.5.208 and all unregistered hosts
✓ Rotate ALL backup_svc credentials on production databases
✓ Revoke and regenerate all API keys found in auth_db backup
✓ Rotate SSH keys stored in Jenkins credential store
✓ Securely destroy all production backup files on dev servers
✓ Disable d.kumar accounts on all non-AD systems

Phase 2 — Short Term (7–60 days) Cost: Medium
○ Conduct full network scan; reconcile against asset register
○ Decommission or formally register all discovered ghost servers
○ Implement automated asset discovery (weekly scan + alerting)
○ Restrict development VLAN firewall rules (least privilege)
○ Implement NFS access controls (IP restrictions, root_squash)
○ Remove production data from all dev environments
○ Implement data masking/anonymisation for dev data refresh
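The NFS items above amount to a small configuration change. One plausible shape for a corrected /etc/exports — the client addresses are illustrative, and root_squash and subtree_check are stated explicitly even where they are defaults:

```
# /etc/exports: named clients only, root squashed, read-only where possible
/backups  10.0.5.10(ro,sync,root_squash,subtree_check)
/var/www  10.0.5.0/24(ro,sync,root_squash,subtree_check)
```

Running exportfs -ra applies the revised table without restarting the NFS server.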

Phase 3 — Strategic (60–180 days) Cost: Medium–High
○ Implement secrets management (HashiCorp Vault or equivalent)
○ Remove hardcoded credentials from all scripts and CI/CD configs
○ Enforce automated password rotation for service accounts
○ Include development infrastructure in vulnerability management
○ Extend leaver process to cover non-AD systems and CI/CD tools
○ Establish mandatory decommission process for dev infrastructure
○ Conduct GDPR DPIA for all environments containing personal data

The most urgent action was isolating the unregistered servers and destroying the production data. The backup files on 10.0.5.208 constituted a data breach waiting to happen — two million customer records, accessible without authentication, on an unpatched server reachable from the user network.

Data masking and anonymisation must replace production data copies for development use. Tools and frameworks exist that can generate realistic test data from production schemas without exposing actual customer records. Where production data must be used in testing — for example, to reproduce specific bugs — it should be accessed in a controlled, time-limited, audited manner within a hardened staging environment, not copied to unmanaged servers via cron jobs.
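A masking pass does not need heavyweight tooling to be better than a raw copy. A minimal sketch of deterministic pseudonymisation — the column names and salt handling are illustrative, and real masking must also cover free-text fields and referential integrity:

```python
import hashlib

SALT = b"rotate-me-per-refresh"  # illustrative; manage outside the script in practice

def pseudonymise(value: str, keep_domain: bool = False) -> str:
    """Replace a PII value with a stable, irreversible token."""
    token = hashlib.sha256(SALT + value.encode()).hexdigest()[:12]
    if keep_domain and "@" in value:
        return f"user_{token}@example.invalid"  # keeps the 'looks like an email' shape
    return f"anon_{token}"

def mask_row(row: dict) -> dict:
    """Mask the PII columns of a customer record; leave the rest untouched."""
    masked = dict(row)
    for column in ("name", "address", "phone"):
        if column in masked:
            masked[column] = pseudonymise(str(masked[column]))
    if "email" in masked:
        masked["email"] = pseudonymise(masked["email"], keep_domain=True)
    return masked

row = {"id": 42, "name": "Jane Doe", "email": "jane@example.com", "order_total": "59.99"}
print(mask_row(row)["order_total"])  # → 59.99 (non-PII columns pass through unchanged)
```

Because the tokens are deterministic, joins across masked tables still work, which is usually the property developers actually need from a data refresh.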

Secrets management platforms such as HashiCorp Vault eliminate the need for hardcoded credentials in scripts and CI/CD configurations. Credentials are retrieved at runtime from a centralised vault, with short-lived leases and automatic rotation. This removes the risk of plaintext credentials persisting on filesystems indefinitely.
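The pattern can be adopted even before a vault platform is deployed: scripts request secrets at runtime rather than embedding them. A minimal sketch using environment variables as a stand-in for a vault client call — the names are illustrative:

```python
import os

class SecretNotConfigured(RuntimeError):
    """Raised when a required secret was not supplied at runtime."""

def get_secret(name: str) -> str:
    # In production this lookup would be a vault client call returning a
    # short-lived credential; the environment is a minimal stand-in here.
    value = os.environ.get(name)
    if value is None:
        raise SecretNotConfigured(f"secret {name!r} not provided at runtime")
    return value

# A backup job would then read, never embed, its credentials:
# prod_pass = get_secret("PROD_DB_PASSWORD")
```

The point is the failure mode: a script missing its secret refuses to run, instead of carrying a plaintext password on disk for years.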

The leaver process must be extended beyond Active Directory. When an employee departs, their access must be revoked across all systems — including local Linux accounts, CI/CD platforms, source code repositories, cloud consoles, and any other system where they hold credentials. This requires a comprehensive access register that goes beyond the corporate directory, which in turn requires that those systems are known and documented.

Finally, automated asset discovery must run continuously. A weekly network scan that compares live hosts against the asset register and alerts on discrepancies would have identified these servers within days of their creation. The technology is straightforward — the challenge is operational: ensuring that alerts are investigated, unregistered hosts are assessed, and ghost infrastructure is either formally adopted or decommissioned.
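The reconciliation step itself is a few lines of scripting once scan output is in hand. A sketch assuming simple one-address-per-line files — parsing real nmap output would need an extra step:

```python
from pathlib import Path

def load_hosts(path: str) -> set[str]:
    """Read one IP address per line, ignoring blanks and # comments."""
    hosts = set()
    for line in Path(path).read_text().splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            hosts.add(line)
    return hosts

def reconcile(register_file: str, scan_file: str) -> dict[str, set[str]]:
    """Compare the asset register against live scan results."""
    registered = load_hosts(register_file)
    live = load_hosts(scan_file)
    return {
        "unregistered": live - registered,  # ghost candidates: live but unknown
        "missing": registered - live,       # registered but not responding
    }
```

Alerting then reduces to raising a ticket whenever the unregistered set is non-empty.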


You cannot protect what you do not know you have.

The client's production environment was well-defended. Their databases were patched, monitored, and access-controlled. Their PCI DSS controls were effective. Their GDPR compliance programme was active and resourced. On paper, the data was protected.

But a copy of that data — every customer record, every password hash, every API key — sat on an unmanaged server in a forgotten corner of the development VLAN, accessible to anyone who could mount an NFS share. The data did not care that the server was labelled 'development'. The data was real. The exposure was real. The regulatory consequences, had this been discovered by an attacker rather than a penetration tester, would have been very real indeed.

Asset management is not an exciting topic. It does not make headlines. It does not feature in conference keynotes. But it is the foundation upon which every other security control depends. You cannot patch a server you do not know exists. You cannot monitor a system that is not in your SIEM. You cannot protect data that has been copied to a place you have never looked.

Until next time — stay sharp, stay curious, and count your servers. Then count them again.

Legal Disclaimer

This article describes a penetration test conducted under formal engagement with full written authorisation from the client. All identifying details have been altered or omitted to preserve client confidentiality. No customer data was exfiltrated or copied from the environment. Data sensitivity was assessed through schema analysis and limited record sampling only. Unauthorised access to computer systems is a criminal offence under the Computer Misuse Act 1990 and equivalent legislation worldwide. Do not attempt to replicate these techniques without proper authorisation.



Do you know everything that is live on your network? If you have not compared your register to a live network scan, the answer is probably no.

Hedgehog Security finds the servers your asset register missed — the ghost infrastructure, the forgotten development environments, and the production data that has migrated to places it should never be. If your development environment has not been assessed alongside your production systems, you are only seeing half the picture.