Case Study

The USB That Opened the Door

> operator@c2:~# while true; do echo "$(date) — callback received from $RANDOM_HOST"; sleep $((RANDOM%600)); done

Peter Bassill · 1 April 2025 · 16 min read
penetration-testing social-engineering usb-security from-the-hacker-desk device-control endpoint-security user-awareness physical-security

Twenty USB drives. Fifteen callbacks. Forty-seven minutes.

There is a question that has been asked at security conferences for twenty years: if you drop a USB drive in a car park, will someone plug it in?

The answer, confirmed repeatedly by academic research and our own experience, is yes. Reliably, predictably, and quickly. Not because people are foolish. Not because security awareness training has failed. But because human curiosity is a deeper, more persistent drive than any policy document or e-learning module can override.

On this engagement, we distributed twenty prepared USB drives across the client's premises. Fifteen produced callbacks to our command and control infrastructure. The first callback arrived forty-seven minutes after the drives were placed. The last arrived three days later. In between, we watched as curiosity, helpfulness, and simple habit systematically bypassed the organisation's security controls.


The Engagement Brief

The client was a legal services firm — a mid-sized practice with approximately four hundred staff across two offices. They handled commercially sensitive and legally privileged material daily. They had completed a security awareness programme six months prior, which included a module on removable media risks. They wanted to test whether the training had been effective.

We were engaged to conduct a controlled USB drop exercise as part of a broader physical and social engineering assessment. The scope authorised us to place prepared USB devices in common areas of the client's primary office — the reception, kitchen areas, meeting rooms, printer rooms, and the building lobby — and to monitor whether the devices were connected to corporate workstations.

The USB drives did not contain malware. They did not exploit any software vulnerability. They contained files designed to generate a network callback to our infrastructure when opened — allowing us to record when and where each drive was connected, without compromising any system or accessing any data. The exercise was designed to measure behaviour, not to demonstrate exploitation.
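The measurement side of such an exercise can be surprisingly simple. The sketch below — illustrative only, not the infrastructure used on this engagement — shows a minimal callback listener: each prepared drive requests a one-pixel image from the server, passing a drive serial as a query parameter, so the operator can log when each drive was opened and from where. The hostnames, paths, and serial format are invented for this example.

```python
"""Minimal callback listener for a USB drop exercise (illustrative sketch)."""
import datetime
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.parse import urlparse, parse_qs

CALLBACKS = []  # (UTC timestamp, drive serial, source IP)

# A 1x1 transparent GIF — the classic tracking-pixel response body.
PIXEL = (b"GIF89a\x01\x00\x01\x00\x80\x00\x00\x00\x00\x00\xff\xff\xff"
         b"!\xf9\x04\x01\x00\x00\x00\x00,\x00\x00\x00\x00\x01\x00\x01"
         b"\x00\x00\x02\x02D\x01\x00;")

class CallbackHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /pixel?serial=HS-A-03 — record which drive phoned home.
        qs = parse_qs(urlparse(self.path).query)
        serial = qs.get("serial", ["unknown"])[0]
        CALLBACKS.append((datetime.datetime.utcnow(), serial,
                          self.client_address[0]))
        self.send_response(200)
        self.send_header("Content-Type", "image/gif")
        self.send_header("Content-Length", str(len(PIXEL)))
        self.end_headers()
        self.wfile.write(PIXEL)

    def log_message(self, fmt, *args):
        pass  # keep stderr quiet; CALLBACKS is the record of truth

def serve(port=0):
    """Return a server bound to `port` (0 = any free port); caller runs serve_forever()."""
    return ThreadingHTTPServer(("127.0.0.1", port), CallbackHandler)
```

In practice a listener like this would sit behind HTTPS on infrastructure dedicated to the engagement, with the log keyed back to the physical placement map.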

The client's IT security team and the managing partner were aware of the exercise. No other staff had knowledge of it.


Preparing the Drives

The effectiveness of a USB drop exercise depends on the plausibility of the drives and the files they contain. A blank USB drive with a single executable named 'malware.exe' tests nothing except whether people are willing to run obvious malware. A carefully prepared drive that looks like it was lost by a colleague tests something far more realistic — the intersection of curiosity, helpfulness, and trust.

We prepared twenty USB drives across four configurations, each designed to test a different aspect of human behaviour and a different technical control.

Type A — 'Lost Personal' (5 drives)
Appearance: branded consumer USB with a keyring, labelled 'Holiday Photos'. Placed in the car park and reception.
Payload: HTML file disguised as a photo index; opening it launches the browser and loads a tracking pixel from an external server.
Tests: will users open files from an unknown USB?
Type B — 'Corporate Document' (5 drives)
Appearance: plain black USB, labelled 'Q4 Salary Review — CONFIDENTIAL'. Placed in printer rooms and kitchen areas.
Payload: Word document with an embedded macro; on enable, the macro calls home to our C2 server.
Tests: will users enable macros on documents from an unknown USB?
Type C — 'IT Department' (5 drives)
Appearance: USB with a printed label: 'IT Dept — Windows Update Tool'. Placed near the IT service desk and meeting rooms.
Payload: Windows shortcut (.lnk) file disguised as an application; executes a PowerShell callback on click.
Tests: will users run executables from USB on the strength of trusted branding?
Type D — 'Rubber Ducky' (5 drives)
Appearance: USB HID keystroke-injection device, visually identical to a standard USB drive. Placed in hot-desking areas.
Payload: on insertion, types pre-programmed keystrokes that open PowerShell and execute an encoded callback. No user interaction required.
Tests: device control and USB class filtering.

Each drive was uniquely identified with an embedded serial number, allowing us to track which specific drive called back, from which network location, and at what time. The drives were placed during early morning — between 06:30 and 07:00 — before the majority of staff arrived, using the building access we had obtained during the social engineering phase of the assessment.
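The per-drive serial can be embedded directly in the decoy content. As a hedged sketch of the Type A approach — domain and serial scheme invented for illustration — the 'photo index' is a plain HTML page whose only active content is an image tag pointing at the exercise's tracking endpoint:

```python
"""Stamp each drive's decoy HTML file with a unique serial (illustrative)."""

TEMPLATE = """<!DOCTYPE html>
<html><head><title>Holiday Photos — index</title></head>
<body>
  <h1>Holiday Photos</h1>
  <p>Loading thumbnails…</p>
  <!-- Tracking pixel: fetched the moment the file is opened -->
  <img src="https://cb.example-engagement.test/pixel?serial={serial}"
       width="1" height="1" alt="">
</body></html>
"""

def build_decoy(serial: str) -> str:
    """Render the decoy page for one drive, embedding its serial."""
    return TEMPLATE.format(serial=serial)

def build_batch(prefix: str, count: int) -> dict:
    """One decoy per drive, keyed by serial: {'HS-A-01': '<html…>', …}."""
    return {f"{prefix}-{i:02d}": build_decoy(f"{prefix}-{i:02d}")
            for i in range(1, count + 1)}
```

Because the page contains no script and no exploit, it compromises nothing — it simply tells the listener which physical drive was opened, and when.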


Placement — Twenty Drives, Twenty Stories

The placement of each drive was deliberate. USB drop research consistently shows that location significantly affects pickup rates. Drives placed in high-traffic areas with a plausible 'lost' narrative are picked up more frequently than drives placed in obviously staged locations.

Car Park (3 drives — Types A & B)
Placed on the ground near the building entrance, partially visible, as though dropped from a pocket or bag. The car park narrative is powerful: someone walking to their car dropped this. It belongs to a colleague. Picking it up is the helpful thing to do.
Kitchen Areas (5 drives — Types A, B & C)
Left on worktops, near the coffee machine, and on the communal table. Kitchens are social spaces where people's guard is down. A USB drive left next to the kettle looks like someone left it behind after lunch.
Printer Rooms (4 drives — Types B & C)
Placed on the printer output tray and on the desk beside the scanner. The printer room narrative: someone printed something, put it on a USB, and forgot the drive. The 'Q4 Salary Review — CONFIDENTIAL' label exploits both curiosity and the assumption that confidential documents belong to someone important.
Hot-Desking and Meeting Rooms (5 drives — Types C & D)
Left on desks, beside monitors, and in the cable tray of meeting room tables. The Rubber Ducky devices were placed here specifically — hot-desking areas where people routinely plug in unfamiliar peripherals.
Reception and Lobby (3 drives — Types A & D)
Placed on the visitor seating area, near the sign-in book, and on the reception desk itself. These drives test whether staff will connect devices found in areas accessible to the public.

The Results

We monitored our callback infrastructure continuously for five business days following the drop. The results were recorded in real time.

Callback Timeline — First 72 Hours
USB Drop Callback Log — Day 1

07:47 Type D (Rubber Ducky) Hot-desk area, 3rd floor AUTO-EXEC
08:12 Type A (Holiday Photos) Kitchen, 2nd floor HTML opened
08:34 Type A (Holiday Photos) Car park entrance HTML opened
08:51 Type B (Salary Review) Printer room, 1st floor Macro enabled
09:15 Type D (Rubber Ducky) Meeting room B AUTO-EXEC
09:23 Type B (Salary Review) Kitchen, 3rd floor Macro enabled
10:05 Type C (IT Update Tool) Printer room, 2nd floor LNK executed
10:41 Type A (Holiday Photos) Reception seating HTML opened
11:30 Type D (Rubber Ducky) Hot-desk area, 2nd floor AUTO-EXEC
14:17 Type B (Salary Review) Kitchen, 1st floor Macro enabled
15:02 Type C (IT Update Tool) Meeting room D LNK executed

Day 1 total: 11 callbacks from 20 drives (55%)

Day 2: +3 callbacks (Types A, B, C)
Day 3: +1 callback (Type B)
Days 4-5: No additional callbacks

Final: 15 callbacks from 20 drives (75%)
Drives not connected: 5 (3x Type C, 1x Type D, 1x Type A)

Fifteen of twenty drives produced callbacks — a seventy-five per cent success rate. Eleven callbacks occurred on the first day. The first callback arrived forty-seven minutes after placement. The fastest human-initiated callback (excluding the Rubber Ducky auto-executions) arrived at 08:12 — seventy-two minutes after placement concluded.

Results by drive type:
Type A — Holiday Photos: 5 placed, 4 callbacks (80%). Highest human interaction rate — curiosity about personal content.
Type B — Salary Review: 5 placed, 4 callbacks (80%). Users enabled macros despite the security warning; the confidential label increased engagement.
Type C — IT Update Tool: 5 placed, 2 callbacks (40%). Lowest rate — the technical branding may have created suspicion among non-IT staff.
Type D — Rubber Ducky: 5 placed, 5 inserted (100%); 3 auto-executed on connection, 2 blocked by a USB device class policy — 3 successful.

The results by type revealed distinct behavioural patterns. The 'personal' and 'confidential' drives (Types A and B) had the highest engagement — eighty per cent. The 'IT Department' drives (Type C) had the lowest human interaction — forty per cent — suggesting that the technical framing may have triggered more caution. The Rubber Ducky devices (Type D) achieved one hundred per cent insertion but only sixty per cent execution — two were blocked by a USB device class policy that filtered HID devices on some workstations.
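Turning a raw callback log into per-type rates is a small exercise in deduplication — a drive that beacons repeatedly should still count once. A minimal sketch, using invented records rather than the engagement's data:

```python
"""Aggregate USB drop callbacks into per-type rates (illustrative sketch)."""

def callback_rates(placed: dict, callbacks: list) -> dict:
    """placed: drives placed per type, e.g. {'A': 5, 'C': 5}.
    callbacks: (serial, type) pairs from the listener log.
    Returns {type: (unique drives called back, placed, rate)} —
    a drive counts once however many times it beacons."""
    hits = {}
    for serial, dtype in callbacks:
        hits.setdefault(dtype, set()).add(serial)
    return {t: (len(hits.get(t, set())), n, len(hits.get(t, set())) / n)
            for t, n in placed.items()}
```

Keying on serial rather than callback count matters: a single opened drive can generate many beacons, and reporting beacons as drives would overstate the result.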

Finding — 75% USB Drive Insertion Rate Despite Security Awareness Training

Fifteen of twenty USB drives placed across the client's premises were connected to corporate workstations within three business days. The security awareness programme completed six months prior included specific guidance on removable media risks. The training did not measurably reduce the rate of USB insertion.


What the Technical Controls Did and Didn't Do

The human behaviour results tell one story. The technical control assessment tells another — and in many ways, a more actionable one. Because whilst training people not to plug in USB drives has limited effectiveness, configuring systems to safely handle USB drives when they are inevitably plugged in is entirely achievable.

We assessed the client's endpoint controls against each payload type.

Endpoint Control Assessment — By Payload Type
Type A — HTML tracking pixel:
USB storage: ALLOWED (no device control policy)
AutoRun: Disabled (Group Policy) ✓
File execution: User manually opened HTML file
Browser callback: ALLOWED (no URL filtering on tracking domain)
EDR detection: Not triggered (HTML is not malicious)

Type B — Word macro:
USB storage: ALLOWED
Macro policy: Macros BLOCKED BY DEFAULT in Office ✓
User override: Users clicked 'Enable Content' on 4 of 4 occasions
Macro execution: PowerShell callback executed successfully
EDR detection: DETECTED on 2 of 4 — blocked PowerShell execution
Net result: 2 successful callbacks, 2 blocked by EDR

Type C — LNK shortcut:
USB storage: ALLOWED
LNK execution: User manually clicked shortcut file
PowerShell: Execution policy set to 'Restricted' ✓
Bypass: -ExecutionPolicy Bypass flag in LNK target
EDR detection: DETECTED on 1 of 2 — blocked on newer endpoint
Net result: 1 successful callback, 1 blocked by EDR

Type D — Rubber Ducky (HID injection):
USB HID policy: BLOCKED on 2 of 5 workstations ✓
USB HID policy: ALLOWED on 3 of 5 workstations ✗
Keystroke exec: PowerShell callback on 3 endpoints
EDR detection: DETECTED on 1 of 3 — blocked
Net result: 2 successful callbacks, 1 blocked by EDR

The assessment revealed a layered but inconsistent control environment. AutoRun was disabled — a positive baseline control. Office macro execution was blocked by default — but users overrode this by clicking 'Enable Content' on every occasion. PowerShell execution policy was set to Restricted — but was trivially bypassed using the -ExecutionPolicy Bypass flag. USB HID device filtering was deployed — but only on some workstations, not consistently across the fleet.

The EDR was the most effective control, detecting and blocking several payloads. However, its coverage was inconsistent — it caught the PowerShell callbacks on newer endpoints with updated signatures but missed them on older endpoints. And it could not detect the HTML tracking pixel at all, because opening an HTML file that loads an external image is not, by any technical definition, malicious.

The most significant gap was the absence of a USB device control policy. There was no restriction on connecting USB mass storage devices to corporate workstations. Any USB drive could be inserted, mounted, and its contents accessed without restriction. The organisation relied entirely on user behaviour — backed by awareness training — to prevent USB-based attacks.


The Macro Paradox

The Type B (Salary Review) results deserve particular attention, because they illustrate a paradox that undermines one of the most widely deployed technical controls in modern endpoint security.

Microsoft Office's default macro policy blocks macros in documents from untrusted sources and displays a prominent yellow warning bar: 'Macros have been disabled.' followed by an 'Enable Content' button. This control exists specifically to prevent macro-based attacks. It is enabled by default. It works.

And on every single occasion in this exercise — four out of four — the user clicked 'Enable Content'.

This is not a training failure. It is a design failure. The security warning competes with the user's objective — I want to read this document — and the warning provides a one-click mechanism to dismiss it. The 'Enable Content' button is a self-destruct mechanism disguised as a convenience feature. Users have been conditioned, through years of encountering this warning on legitimate documents shared by colleagues, to click it reflexively. The warning has become wallpaper.

Microsoft has recognised this problem. Recent versions of Office block macros entirely in documents downloaded from the internet (Mark of the Web), with no user override option. However, this protection does not apply to documents opened from USB drives on some configurations — the document is treated as a local file, not an internet download, and the legacy 'Enable Content' workflow applies.
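The mechanism behind this gap is the `Zone.Identifier` alternate data stream that Windows attaches to downloaded files on NTFS volumes; FAT-formatted USB drives cannot carry alternate data streams at all, so documents copied from them arrive with no origin marking. A hedged sketch of the zone logic — parsing stream contents handed in as text, since reading the stream itself is Windows-specific:

```python
"""Decide whether a Zone.Identifier stream marks a file as internet-origin.

Sketch only: on Windows the stream lives at 'file.docx:Zone.Identifier'.
FAT32 USB drives cannot store alternate data streams, which is one reason
USB-borne documents escape Mark-of-the-Web enforcement.
"""
URLZONE_INTERNET = 3  # Windows URL security zone for 'Internet'

def zone_id(stream_text):
    """Parse 'ZoneId=N' from Zone.Identifier contents; None if absent."""
    if stream_text is None:  # no stream at all: no Mark of the Web
        return None
    for line in stream_text.splitlines():
        if line.strip().lower().startswith("zoneid="):
            try:
                return int(line.split("=", 1)[1])
            except ValueError:
                return None
    return None

def is_internet_origin(stream_text):
    """True only when the stream exists and records an internet-or-worse zone."""
    z = zone_id(stream_text)
    return z is not None and z >= URLZONE_INTERNET
```

A document from a USB stick hits the `None` branch: from the policy engine's perspective it is indistinguishable from a file the user created locally, and the legacy 'Enable Content' workflow applies.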


Why People Plug In Unknown Drives

Understanding why the USB drop succeeded at seventy-five per cent is as important as understanding that it did. The motivations are not uniform — different people connected the drives for different reasons.

Curiosity
The dominant motivation. 'Holiday Photos' triggers a simple, powerful question: whose photos are these? The drive offers a costless answer — just plug it in and look. The risk assessment ('this might contain malware') competes with the certainty of satisfying curiosity. Curiosity wins.
Helpfulness
Several drives were found in locations that suggested someone had lost them. Plugging in the drive to identify the owner — by looking at the files for a name, a department, a photo — is an act of helpfulness. The person is not being careless; they are trying to return lost property.
Confidentiality as Bait
The 'Q4 Salary Review — CONFIDENTIAL' label exploited a specific psychological trigger. Confidential information carries social currency. Knowing what colleagues earn is compelling. The label simultaneously signalled that the content was important and that reading it was transgressive — a combination that increases, rather than decreases, engagement.
Authority and Trust
The 'IT Department — Windows Update Tool' label exploited trust in internal authority. If the IT department has provided a tool, using it is the compliant thing to do. The label reframes the USB drive from an unknown risk to an expected action — compliance, not curiosity.

Awareness Training Is Necessary but Not Sufficient

The client had invested in a comprehensive security awareness programme. The programme included a specific module on removable media threats. Staff had completed the training six months prior. The training materials explicitly stated: do not connect unknown USB drives to corporate devices.

Seventy-five per cent of the drives were connected anyway.

This does not mean awareness training is worthless. It means awareness training alone is insufficient. Training creates knowledge — people know that unknown USB drives are risky. But knowledge does not reliably change behaviour, because behaviour is governed by context, habit, and motivation in the moment, not by information retained from a training module completed six months ago.

The most effective defence against USB-based attacks is not training people to resist their curiosity. It is removing the technical conditions that allow curiosity to be exploited.


Technique Mapping

T1091 — Replication Through Removable Media
Delivery of payloads via USB drives placed in common areas of the target premises, exploiting human curiosity and helpfulness.
T1200 — Hardware Additions
Rubber Ducky devices presenting themselves to the host as keyboards and typing pre-programmed keystroke sequences on insertion.
T1204.002 — User Execution: Malicious File
Users opening Word documents and enabling macros, clicking LNK shortcut files, and opening HTML files from untrusted USB drives.
T1059.001 — Command and Scripting Interpreter: PowerShell
PowerShell callbacks executed via Word macros, LNK files, and Rubber Ducky keystroke injection.
T1071.001 — Application Layer Protocol: Web Protocols
Command and control callbacks over HTTPS, blending with normal web traffic to avoid network-level detection.

Recommendations and Hardening

Remediation Roadmap
Phase 1 — Immediate (0–14 days) Cost: Low
✓ Deploy USB device control policy via Group Policy/Intune
— Block USB mass storage by default on all workstations
— Allow only approved/encrypted USB devices by hardware ID
✓ Block USB HID devices on all endpoints (prevent Rubber Ducky)
✓ Block macros in documents opened from USB, with no user override (Mark of the Web does not apply to removable media)
✓ Update EDR signatures and ensure consistent deployment

Phase 2 — Short Term (14–60 days) Cost: Medium
○ Implement application whitelisting on endpoints
○ Block PowerShell for standard users (or constrained language mode)
○ Deploy DLP to monitor/block sensitive data to removable media
○ Implement secure USB kiosk for scanning found drives
○ Update awareness training with USB drop exercise results
○ Establish 'found USB' process — hand to IT, do not connect

Phase 3 — Strategic (60–180 days) Cost: Medium
○ Migrate to FIDO2 security keys (reduces USB port usage expectation)
○ Evaluate USB port disablement for roles that do not require it
○ Implement continuous security awareness (micro-training, simulations)
○ Conduct annual USB drop exercises as measurement of control effectiveness
○ Include USB drop in red team exercise scope

The most impactful single control is a USB device control policy that blocks USB mass storage devices by default. This can be deployed via Group Policy (Windows) or Intune (managed endpoints) and takes effect immediately across the fleet. Approved USB devices — encrypted corporate drives issued by IT — can be whitelisted by hardware ID. All other USB storage devices are blocked at the operating system level, before any file can be accessed.
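For Windows estates, the commonly used setting is the Removable Storage Access policy; the fragment below is a sketch of the typical configuration, not a copy of this client's deployment:

```text
Group Policy path:

  Computer Configuration
    > Administrative Templates
      > System
        > Removable Storage Access
          > "All Removable Storage classes: Deny all access" = Enabled

Registry value the policy writes:

  HKLM\SOFTWARE\Policies\Microsoft\Windows\RemovableStorageDevices
    Deny_All (REG_DWORD) = 1

Approved corporate drives are then re-enabled by hardware ID via the
device installation restriction policies ("Allow installation of
devices that match any of these device IDs").
```

The same outcome can be achieved through Intune device configuration profiles for cloud-managed endpoints; the essential property is that the block applies fleet-wide by default, with exceptions as an explicit allowlist.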

USB HID device filtering must be deployed consistently across all endpoints. The Rubber Ducky devices succeeded because HID filtering was applied to only a subset of workstations. A consistent policy that blocks unknown HID devices — or that requires administrator approval before a new HID device is accepted — eliminates the keystroke injection vector entirely.

A secure USB scanning kiosk provides a safe alternative for staff who find USB drives. Rather than connecting the drive to their workstation, staff are directed to hand it to IT or scan it on a dedicated, isolated kiosk that examines the contents in a sandboxed environment. This accommodates the helpful impulse — the desire to identify the owner and return the drive — without exposing the corporate network.

PowerShell access for standard users should be restricted. The majority of the payloads in this exercise relied on PowerShell for their callback mechanism. Restricting PowerShell to Constrained Language Mode for non-administrative users — or blocking it entirely for roles that do not require it — eliminates the execution mechanism used by the macro, LNK, and Rubber Ducky payloads.

Finally, awareness training should be updated to include the results of this exercise — anonymised and presented constructively. Telling people 'don't plug in USB drives' is abstract. Showing them that seventy-five per cent of their colleagues did exactly that, despite the same training, makes the risk concrete and personal. The most effective training is not a module — it is a story.


You cannot train away curiosity. You can control what happens when it wins.

USB drop exercises produce uncomfortable results. A seventy-five per cent success rate, six months after targeted awareness training, is difficult to present to a board as evidence that the security programme is working. But the discomfort is the point. The exercise does not measure whether the training was good — it measures whether the training changed behaviour. And the answer, consistently, is: not enough.

This is not a failure of the people. It is a failure of the strategy. A strategy that relies on four hundred individuals making the correct security decision every time, in every situation, under every emotional condition, is a strategy that will fail. A strategy that deploys technical controls to ensure that the incorrect decision has no consequences is a strategy that succeeds regardless of human behaviour.

Block USB storage by default. Filter HID devices. Restrict PowerShell. Deploy application whitelisting. Then train people as well — not because training is the primary control, but because it is a valuable layer in a defence that does not depend on it.

Until next time — stay sharp, stay curious, and if you find a USB drive labelled 'Holiday Photos' in your car park, hand it to IT. They are expecting it.

Legal Disclaimer

This article describes a controlled USB drop exercise conducted under formal engagement with full written authorisation from the client. No malware was deployed. No data was accessed, exfiltrated, or compromised. The USB payloads generated network callbacks to attacker-controlled infrastructure for measurement purposes only. All identifying details have been altered or omitted to preserve client confidentiality. No individual employees are identified in this article or in the assessment report. Unauthorised deployment of USB devices intended to compromise computer systems is a criminal offence under the Computer Misuse Act 1990. Do not attempt to replicate these techniques without proper authorisation.



The answer is probably yes. The question is: what happens next?

Hedgehog Security conducts controlled USB drop exercises that measure both human behaviour and technical control effectiveness. We test whether your staff will connect unknown devices — and whether your endpoint controls will contain the impact when they do. The results are reported constructively, without naming individuals, and with actionable recommendations that focus on technical controls rather than blaming people for being human.