> root@lift-ctrl:~# cat /var/log/floor_requests | grep --colour=always diagnostic_
Most people step into a lift and think about which floor they need. We step into a lift and think about what processor is behind the control panel, what protocol the floor indicators are using to receive position data, and whether the maintenance port behind the certificate of inspection is running Telnet or SSH.
Professional deformation, perhaps. But it is precisely this way of seeing — of recognising the computer inside the machine — that turns a routine building assessment into something far more revealing. This is the story of how a lift car in a commercial office building became our mobile reconnaissance platform, carrying us floor by floor through a network that nobody thought to defend because nobody thought it existed.
The building had eight floors. By the time we were finished, we had mapped every one of them — not by walking the corridors, but by riding the lift.
The client was a property management firm responsible for a multi-tenanted commercial office building in a major metropolitan centre. They had recently completed a significant modernisation programme that included upgrading the building management system (BMS), replacing the lift controllers, and installing new HVAC monitoring. As part of this modernisation, much of the building's Operational Technology had been connected to IP networks — some purposefully, some as a consequence of the equipment's default configuration.
The engagement brief was unusual by penetration testing standards. We were not assessing a conventional corporate IT environment. We were assessing the building itself — its control systems, its OT infrastructure, and the degree to which these systems were isolated from the tenant IT networks that shared the same physical risers and patch panels.
Our scope included the building management system, HVAC controllers, access control systems, CCTV, and — critically — the lift control system. The rules of engagement were clear: no interference with life-safety systems, no disruption to lift service, no actions that could endanger occupants. We could observe, enumerate, and interact with diagnostic and administrative interfaces, but we could not send commands to motor controllers, brake systems, or door actuators.
The client expected us to confirm that their OT networks were properly segmented from the tenant IT networks. What we found was considerably more interesting.
Before touching a single cable, we spent the first morning understanding the building's physical and logical architecture. The building management company provided us with riser diagrams, network topology drawings, and documentation from the lift modernisation project. This is typical for OT assessments — unlike assumed-breach IT tests, OT engagements often begin with documentation review because the consequences of blind probing can be severe.
The building had three distinct network domains on paper. The first was the tenant network, with each floor having its own VLAN trunked down fibre from a core switch in the basement communications room. The second was the BMS network, intended to carry traffic for HVAC, lighting, and environmental monitoring. The third was the lift control network, described in the modernisation documentation as an isolated system connecting the lift controllers in the machine room to diagnostic panels on each landing and a centralised monitoring workstation in the facilities office.
Three separate networks. Three separate purposes. On paper, the architecture was sound. The problem, as is so often the case, was that the implementation did not match the design.
We began our assessment in the basement communications room. This is the nerve centre of any modern building — the room where fibre, copper, and coaxial cables converge from every floor, every riser, and every plant room. The room contained two full-height server racks, a core network switch, several smaller distribution switches, a rack-mounted BMS controller, and a dedicated lift control server.
The first observation was physical. The tenant network, BMS network, and lift control network all terminated in the same racks. Patch cables from all three networks ran to the same patch panels. While the core switch had VLANs configured, the patch panel labelling was inconsistent, and several cables were unlabelled entirely.
We connected our testing laptop to a designated assessment port on the core switch, configured for the BMS VLAN per our engagement scope, and began passive reconnaissance. Within minutes, we observed traffic that should not have been present on this segment.
Three different IP ranges. Three different protocol families. BACnet/IP traffic from the BMS controllers on 172.16.10.0/24 was expected. The Modbus/TCP traffic on 192.168.100.0/24 was the lift control system — supposedly on its own isolated network. And the mDNS and NetBIOS traffic on 10.50.1.0/24 was tenant IT — Windows workstations announcing themselves on what should have been a completely separate network.
All three networks were visible from our single assessment port. The VLANs existed in the switch configuration, but somewhere between the core switch and the distribution layer, the segmentation had broken down. A subsequent trace revealed the cause: during the lift modernisation, the installation contractor had patched the new lift controllers into a distribution switch that was trunking all VLANs rather than only the lift VLAN. The misconfiguration had been present for seven months. Nobody had noticed.
A trunk port misconfiguration on a distribution switch had collapsed the segmentation between the BMS, lift control, and tenant IT networks. All three network domains were reachable from any port on the affected switch. This single misconfiguration invalidated the entire network isolation model.
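For readers who want to reproduce this class of check, the shape of it is simple: a purely passive capture that tallies source addresses against the subnets that should never share a broadcast domain. The sketch below assumes Python with scapy; the interface name and capture window are illustrative choices, not our actual tooling.

```python
# Passive sniff: tally observed source subnets to reveal cross-VLAN leakage.
# Assumes scapy is installed and eth0 is patched into the assessment port.
from collections import Counter
from ipaddress import ip_address, ip_network

from scapy.all import IP, sniff

# The three subnets that should never share a broadcast domain.
DOMAINS = {
    "BMS (BACnet/IP)": ip_network("172.16.10.0/24"),
    "Lift control (Modbus)": ip_network("192.168.100.0/24"),
    "Tenant IT": ip_network("10.50.1.0/24"),
}

seen = Counter()

def classify(pkt):
    if not pkt.haslayer(IP):
        return
    src = ip_address(pkt[IP].src)
    for name, net in DOMAINS.items():
        if src in net:
            seen[name] += 1

# Five minutes of purely passive capture; nothing is transmitted.
sniff(iface="eth0", prn=classify, store=False, timeout=300)

for domain, count in seen.items():
    print(f"{domain}: {count} packets observed")
```

If more than one domain shows a non-zero count from a single access port, the segmentation has already failed; everything that follows in this article flows from exactly that observation.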
With unexpected access to the 192.168.100.0/24 lift control network, we turned our attention to enumerating the system. We conducted a low-rate scan, mindful that OT devices can be sensitive to aggressive network probing — a SYN flood that would merely annoy a web server could crash a PLC.
The lift control system comprised a central server running Windows Embedded, three PLCs (one per lift car), eight diagnostic panels (one per landing), and three in-car display units. Fifteen devices on a network that was supposed to be air-gapped from everything else in the building.
We performed a targeted service scan against the central server at 192.168.100.5.
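Our actual tooling was conventional, but the principle matters more than the tool: probe one port at a time, with deliberate pacing, rather than letting a scanner loose at full speed. A minimal stand-in sketch, with pacing and timeout values chosen for illustration rather than taken from our methodology:

```python
# Low-rate TCP connect scan: one port at a time with a pause between
# probes, to avoid stressing fragile OT devices.
import socket
import time

TARGET = "192.168.100.5"
PORTS = [23, 80, 502, 1883, 3389, 5900, 8080]

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2.0)
        result = s.connect_ex((TARGET, port))  # 0 means the port accepted
    state = "open" if result == 0 else "closed/filtered"
    print(f"{TARGET}:{port} {state}")
    time.sleep(1.0)  # deliberate pacing between probes
```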
The attack surface on this single device was remarkable. A web-based management interface on port 80. Modbus/TCP for PLC communication on port 502 — entirely unencrypted and unauthenticated by design. MQTT on port 1883 — a lightweight messaging protocol commonly used in IoT and OT environments, running without TLS and, as we would soon discover, without authentication. RDP on port 3389. VNC on port 5900. Telnet on port 23. And a secondary web interface on port 8080 running Jetty, which typically indicates a Java-based management application.
We focused on the MQTT broker first. MQTT (Message Queuing Telemetry Transport) is a publish-subscribe messaging protocol designed for low-bandwidth, high-latency environments. It is lightweight, efficient, and ubiquitous in OT and IoT deployments. It is also, when misconfigured, one of the most generous sources of operational intelligence an attacker could hope for.
We connected to the broker on 192.168.100.5:1883 using the mosquitto_sub client and subscribed to the wildcard topic # — which returns every message published to every topic on the broker.
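For those who prefer a programmatic client to mosquitto_sub, the equivalent in Python looks roughly like this (paho-mqtt 1.x callback API assumed):

```python
# Wildcard subscription against the unauthenticated broker, roughly
# equivalent to `mosquitto_sub -h 192.168.100.5 -t '#' -v`.
import paho.mqtt.client as mqtt

def on_connect(client, userdata, flags, rc):
    client.subscribe("#")  # every topic on the broker

def on_message(client, userdata, msg):
    print(f"{msg.topic}: {msg.payload.decode(errors='replace')}")

client = mqtt.Client()
client.on_connect = on_connect
client.on_message = on_message
client.connect("192.168.100.5", 1883)  # no credentials required
client.loop_forever()
```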
No authentication. No encryption. No access control lists on topics. Every message from every lift car, every diagnostic panel, and every sensor was being broadcast in cleartext JSON to anyone who connected to the broker. We could see each car's real-time position, direction, speed, door state, motor temperature, load weight, and estimated passenger count. We could see maintenance schedules, error logs, and energy consumption data.
And then we saw the line that changed the direction of the entire assessment: the diagnostics/car-a/network topic, which disclosed that the in-car display unit was connected to a wireless network called LIFT-DIAG with an RSSI of -42 dBm — a strong signal.
The in-car display units in each lift car were connected to a dedicated wireless network named LIFT-DIAG. This network was not documented in any of the building's network architecture diagrams. Its presence was disclosed through unauthenticated MQTT telemetry data.
We relocated from the basement to the ground floor lift lobby with a wireless assessment kit — a laptop with an external wireless adapter capable of monitor mode, running Aircrack-ng and Kismet. We needed to find the LIFT-DIAG network.
The network was not broadcasting its SSID. It was a hidden network — but hidden SSIDs provide no real security. When a client device probes for a hidden network, it transmits the SSID in its probe request. The in-car display units were probing constantly, and we captured the SSID within seconds.
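Capturing a hidden SSID from probe requests requires nothing more than a monitor-mode interface and a packet filter. A minimal sketch, assuming scapy and an illustrative interface name:

```python
# Hidden networks disclose their name the moment a client probes for them.
# Assumes wlan0mon is a wireless interface already in monitor mode.
from scapy.all import Dot11Elt, Dot11ProbeReq, sniff

def show_probe(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        elt = pkt.getlayer(Dot11Elt)
        if elt and elt.ID == 0 and elt.info:  # element ID 0 = SSID
            print(f"Probe for SSID: {elt.info.decode(errors='replace')}")

sniff(iface="wlan0mon", prn=show_probe, store=False)
```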
The network was using WPA2-PSK. We captured a four-way handshake by waiting for one of the in-car units to reassociate as its lift car moved between floors and experienced momentary signal fluctuations. With the handshake captured, we ran it through Hashcat against a targeted wordlist.
The pre-shared key was the lift manufacturer's name followed by the year of installation. It cracked in three seconds. This is a pattern we see repeatedly in OT environments — installers choose passwords that are memorable and relevant to the equipment, creating credentials that are trivially guessable by anyone with knowledge of the installation.
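A targeted wordlist for this pattern is trivial to generate. The sketch below uses well-known lift manufacturer names purely as placeholders, not the actual vendor, and illustrates the shape of such a list rather than its contents:

```python
# Generate equipment-relevant PSK candidates: manufacturer name plus a
# plausible installation year. Names below are illustrative placeholders.
manufacturers = ["otis", "kone", "schindler", "thyssenkrupp"]
years = range(2010, 2025)

with open("lift-psk-candidates.txt", "w") as wordlist:
    for name in manufacturers:
        for year in years:
            for candidate in (f"{name}{year}", f"{name.capitalize()}{year}"):
                if len(candidate) >= 8:  # WPA2-PSK minimum length
                    wordlist.write(candidate + "\n")
```

A list this small falls to Hashcat almost instantly, which is exactly what happened here.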
We associated with the LIFT-DIAG network and received a DHCP lease on the 192.168.200.0/24 range. A quick scan confirmed that this wireless network was bridged to the wired lift control network on 192.168.100.0/24. We now had wireless access to the entire lift control infrastructure — from anywhere in the building where the LIFT-DIAG signal was receivable.
This is where the engagement took an unconventional turn. We had wireless access to the lift control network. The wireless access points serving the LIFT-DIAG network were installed in the lift shafts — one per shaft — to maintain connectivity with the in-car display units as the cars moved between floors. The signal was strongest inside the lift car itself.
We realised that the lift car, with its strong wireless connectivity to the OT network and its ability to physically traverse every floor of the building, was an ideal mobile reconnaissance platform. By riding the lift with our assessment laptop connected to LIFT-DIAG, we could conduct wireless surveys on every floor whilst simultaneously maintaining our connection to the lift control and — via the trunk misconfiguration — the BMS and tenant networks.
We wrote a simple script that correlated wireless scan data with lift position telemetry from the MQTT broker, automatically tagging each discovered network and access point with the floor on which it was detected and the signal strength at that location.
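The script itself was unremarkable. A simplified sketch follows, assuming paho-mqtt (1.x API) and an `iw`-capable wireless interface; the position topic name and telemetry field are assumptions based on the broker's observed schema, and parsing of the scan output is elided:

```python
# Tag each wireless scan pass with the lift car's current floor, taken
# from the car's own MQTT telemetry.
import json
import subprocess
import time

import paho.mqtt.client as mqtt

current_floor = None  # updated by the MQTT callback as the car moves

def on_message(client, userdata, msg):
    global current_floor
    try:
        current_floor = json.loads(msg.payload).get("floor")
    except ValueError:
        pass  # ignore malformed telemetry

client = mqtt.Client()
client.on_message = on_message
client.connect("192.168.100.5", 1883)
client.subscribe("diagnostics/car-a/position")  # topic name is an assumption
client.loop_start()  # run the MQTT network loop in a background thread

tagged_scans = []
while True:
    # One survey pass per ride segment; parsing of `iw` output elided.
    scan = subprocess.run(["iw", "dev", "wlan1", "scan"],
                          capture_output=True, text=True)
    tagged_scans.append({"floor": current_floor, "raw": scan.stdout})
    time.sleep(10)
```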
The survey revealed fifteen wireless networks across eight floors. The tenant networks were using WPA2-Enterprise with 802.1X — properly configured. But alongside them sat a collection of ancillary networks that told a far more concerning story.
An open CCTV network on the first floor. An open IoT sensor network on the fifth floor. A WPA2-PSK network labelled EXEC-PRIVATE on the third floor — the executive suite. A facilities management network on the seventh floor. And several WPA2-PSK printer networks scattered throughout the building. None of these ancillary networks appeared in the building's official network documentation.
Returning to the lift control server at 192.168.100.5, we explored the web interface on port 80. The management application presented a login page, but the default credentials from the manufacturer's publicly available installation manual had never been changed.
The management interface provided full visibility of all three lift cars in real time — position, direction, speed, load, door state, motor diagnostics, and maintenance history. More significantly, it exposed a configuration panel that allowed modification of operational parameters including floor mapping, priority scheduling, and diagnostic mode activation. The firmware had not been updated since August 2019 — over four years prior to our assessment.
We also confirmed access to the VNC service on port 5900 — which connected directly to the Windows Embedded desktop of the lift control server with no password. The Telnet service on port 23 dropped into a BusyBox shell on an embedded diagnostics module with root access and no authentication.
The lift control server was accessible via default credentials on the web interface, no password on VNC, and unauthenticated root Telnet. The system controlled three lift cars serving eight floors in a building occupied by hundreds of people daily. These access vectors were reachable from the tenant IT network due to the VLAN misconfiguration.
We must address the presence of Modbus/TCP on this network directly, because it represents the most serious dimension of this finding.
Modbus is a protocol designed in 1979. It has no authentication. It has no encryption. It has no concept of authorisation. Any device that can establish a TCP connection to port 502 can read holding registers, write coils, and modify parameters on any Modbus device on the network. This is not a vulnerability — it is the protocol's design. In 1979, Modbus networks were physically isolated serial buses connecting a PLC to a handful of sensors. They were never intended to exist on IP networks accessible from Windows workstations and wireless networks.
The lift PLCs at 192.168.100.10, .11, and .12 were communicating with the control server via Modbus/TCP. From our assessment position, we could read the registers on every PLC. We could see motor states, door positions, floor encoder values, brake status, and safety circuit conditions in real time. Our scope explicitly prohibited writing to these registers — and we would not have done so regardless, as the consequences of sending incorrect values to a lift motor controller are potentially lethal.
But the capability existed. Any device on this network — including, due to the VLAN misconfiguration, any device on the tenant IT network — could have written arbitrary values to these PLCs. No authentication was required. No authorisation was checked. The protocol simply does what it is asked to do.
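To make the point concrete, the sketch below shows how little is involved in reading a lift PLC's registers over Modbus/TCP. It assumes the pymodbus 3.x API; the register addresses are illustrative, and it performs reads only, consistent with our rules of engagement:

```python
# Read-only register enumeration against one lift PLC. No credentials,
# no session, no authorisation check: connect and read.
from pymodbus.client import ModbusTcpClient

client = ModbusTcpClient("192.168.100.10", port=502)
client.connect()

# Register addresses here are illustrative; no write calls are made.
result = client.read_holding_registers(address=0, count=16, slave=1)
if not result.isError():
    print("Registers 0-15:", result.registers)

client.close()
```

Swapping the read call for a write is a one-line change, which is precisely why the exposure was so serious.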
| Protocol | Port | Authentication | Encryption | Risk |
|---|---|---|---|---|
| Modbus/TCP | 502 | None (by design) | None (by design) | Read/write access to lift PLCs from any network host |
| MQTT | 1883 | None (misconfigured) | None (no TLS) | Full telemetry disclosure; potential command injection via published messages |
| VNC | 5900 | None (no password set) | None | Full desktop access to lift control server |
| Telnet | 23 | None | None | Root shell on embedded diagnostics module |
| HTTP | 80 | Default credentials | None | Administrative access to lift management interface |
| RDP | 3389 | Windows Embedded auth | TLS | Accessible — not tested further under scope |
The VLAN misconfiguration that gave us access to the lift control network also worked in reverse. From the lift control network, we could reach the tenant IT VLANs. We confirmed this by performing ARP scans against the 10.50.1.0/24 range we had observed in the initial traffic capture.
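An ARP sweep is the definitive test here, because ARP operates at layer 2: a reply proves the segments share a broadcast domain rather than merely being routed together. A minimal sketch, assuming scapy and an illustrative interface name:

```python
# ARP sweep of the tenant range from the lift control VLAN. Replies at
# layer 2 prove the segments share a broadcast domain.
from scapy.all import ARP, Ether, srp

broadcast = Ether(dst="ff:ff:ff:ff:ff:ff")
answered, _ = srp(broadcast / ARP(pdst="10.50.1.0/24"),
                  timeout=3, iface="eth0", verbose=False)

for _, reply in answered:
    print(f"{reply.psrc} is alive at {reply.hwsrc}")
```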
We discovered thirty-seven active hosts on the tenant network visible from the lift control VLAN — Windows workstations, network printers, a file server, and what appeared to be a small Active Directory domain serving one of the tenanted floors. The implications were severe. An attacker who compromised the lift control system — via the unauthenticated Modbus, the passwordless VNC, the default-credential web interface, or the open MQTT broker — would have a direct pivot into tenant corporate networks.
The building management company was responsible for the lift infrastructure. The tenants were responsible for their own IT security. Neither party knew that their networks were connected. Neither party's threat model accounted for the other's presence.
| Step | Action | Weakness Exploited |
|---|---|---|
| 01 | Connected to BMS VLAN; observed cross-VLAN traffic | Trunk port misconfiguration collapsed network segmentation |
| 02 | Enumerated lift control network (192.168.100.0/24) | Lift network reachable from BMS VLAN due to trunk leak |
| 03 | Subscribed to MQTT broker with wildcard topic | No authentication or ACLs on MQTT broker |
| 04 | Discovered hidden LIFT-DIAG wireless network via telemetry | Sensitive network information leaked through MQTT |
| 05 | Cracked WPA2-PSK for LIFT-DIAG in 3 seconds | Weak pre-shared key (manufacturer name + year) |
| 06 | Used lift car as mobile wireless survey platform | Shaft-mounted APs extended OT wireless coverage to every floor |
| 07 | Accessed lift control server via default credentials / VNC / Telnet | Default credentials; passwordless remote access services |
| 08 | Read Modbus registers on lift PLCs | Modbus/TCP — unauthenticated by protocol design |
| 09 | Confirmed bidirectional access to tenant IT networks | VLAN leak enabled pivot from OT to tenant corporate networks |
This engagement exposed a problem that is endemic across the built environment. The modernisation of building systems — lifts, HVAC, access control, lighting, fire suppression — has moved these systems from proprietary serial networks to IP-based communication. This migration brings enormous operational benefits: centralised monitoring, remote diagnostics, predictive maintenance, and energy optimisation.
It also brings these systems into the same threat landscape as corporate IT networks, with none of the defensive controls that IT environments take for granted.
The lift manufacturer's installation guide contained a single paragraph about cybersecurity. It stated that the system should be installed on an isolated network. The installer followed this guidance — but the subsequent cabling and switch configuration by a different contractor undid the isolation entirely. Seven months of undetected exposure followed.
Nobody tested the segmentation after commissioning. Nobody monitored for cross-VLAN traffic. Nobody included the lift system in the building's vulnerability management programme. The lift was a machine. Machines do not get penetration tested.
Until they do.
The remediation programme for this engagement was necessarily broader than a typical IT penetration test, as it involved coordination between the building management company, the lift maintenance contractor, the network cabling provider, and the tenants.
The most urgent action was correcting the trunk port misconfiguration. This single change restored the intended segmentation between all three network domains and eliminated the cross-network pivot. It was completed within hours of our initial report.
For the Modbus/TCP exposure, the long-term solution involves deploying a protocol-aware firewall capable of Modbus Deep Packet Inspection. This allows the firewall to enforce rules not just at the network layer but at the application layer — permitting read operations from the control server while blocking write operations from any other source. This compensating control addresses the protocol's inherent lack of authentication without requiring a protocol replacement.
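The distinction such a firewall enforces is visible in the protocol itself: the function code, the byte immediately after the 7-byte MBAP header, cleanly separates reads from writes. A toy classifier makes the point; a production DPI appliance applies exactly this logic per source address:

```python
# The function code is the eighth byte of a Modbus/TCP frame (after the
# 7-byte MBAP header: transaction ID, protocol ID, length, unit ID).
WRITE_FUNCTION_CODES = {5, 6, 15, 16, 22, 23}  # write coil(s)/register(s)

def is_write_request(frame: bytes) -> bool:
    """Return True if a raw Modbus/TCP frame carries a write operation."""
    if len(frame) < 8:
        return False
    function_code = frame[7]  # byte after the MBAP header
    return function_code in WRITE_FUNCTION_CODES

# Example: a Write Single Register request (function code 0x06).
sample = bytes.fromhex("000100000006" "01" "06" "0000" "1234")
print(is_write_request(sample))  # True -> blocked unless from the control server
```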
The responsibility gap between the building management company, contractors, and tenants required formal documentation. A Shared Responsibility Matrix was developed that clearly defined which party was responsible for each aspect of the building's network security — from physical cabling to VLAN configuration to endpoint patching. Without this clarity, the same blind spots will re-emerge with every contractor visit, every system upgrade, and every new tenant fit-out.
Finally, post-commissioning verification was established as a mandatory step for all future OT installations. Every new device connected to the building's network must be verified against the intended network architecture before it is placed into service. The seven-month gap between the VLAN misconfiguration and our discovery was seven months too long.
We have spent decades thinking about cybersecurity as something that happens to computers — to servers, to workstations, to firewalls. But the modern commercial building is a computer. It has processors, memory, network interfaces, and software. It runs protocols, serves APIs, and stores data. It just happens to also have a lobby, a car park, and a rather good view from the seventh floor.
The lift in this engagement was never considered a cybersecurity risk by anyone involved in its specification, procurement, installation, or maintenance. It was a mechanical system that had been upgraded with modern controls. The controls happened to run on IP networks. The IP networks happened to be misconfigured. And the misconfiguration happened to expose every tenant in the building to an attack vector that bypassed every security control they had invested in.
Somewhere in your building, there is a system that nobody considers a computer. It is connected to a network that nobody documented. It is running firmware that nobody has updated. And it is one misconfigured switch port away from everything you are trying to protect.
We took the lift to find it. You might not have to go that far — but you do need to look.
Until next time — stay sharp, stay curious, and remember: the building is the network.
This article describes a penetration test conducted under formal engagement with full written authorisation from the client. All identifying details have been altered or omitted to preserve client confidentiality. The techniques described were performed within the scope of a legal agreement and subject to strict rules of engagement that prohibited any interaction with life-safety systems. Unauthorised access to computer systems is a criminal offence under the Computer Misuse Act 1990 and equivalent legislation worldwide. Do not attempt to replicate these techniques without proper authorisation.
Hedgehog Security specialises in assessments that bridge the gap between Operational Technology and IT. We understand the protocols, the equipment, and the operational constraints that make OT testing different from conventional penetration testing. If your building has been modernised, let us verify that the segmentation works as designed.