The Human Element of Penetration Testing

> echo 'the firewall approved this' | mail -s 'Urgent: Action Required' all@acme.co.uk

Peter Bassill · 1 July 2025 · 17 min read
Tags: social engineering · phishing · human factors · security awareness · physical security · vishing

Every security investment protects the perimeter. People walk through it.

A law firm spends £180,000 on next-generation firewalls, endpoint detection and response, email security gateways, and a managed SOC. The external pen test comes back clean. The internal pen test finds only two medium-severity Active Directory misconfigurations, both remediated within a week. The CISO presents the results to the partnership. Security posture: strong.

Two weeks later, a tester posing as a fire safety inspector walks into the London office unchallenged. The receptionist asks for a name but not for identification. The facilities manager escorts the "inspector" to the server room to "check the suppression system," then leaves to make tea. The tester plugs in a rogue network device behind the server rack and walks out through the front door fourteen minutes after walking in.

The same week, a phishing campaign targeting 120 fee earners achieves a 34% click rate and an 18% credential submission rate. Four accounts are fully compromised — including a senior partner with access to active M&A files, client account details, and privileged communications. The email security gateway caught two of the 120 emails. The SOC detected nothing. Three people reported the email as suspicious. The first report arrived 47 minutes after the campaign began.

The technology worked exactly as designed. The people did what people do: they trusted a confident stranger, they clicked a convincing email, and they entered their passwords on a page that looked right. No vulnerability was exploited. No software was compromised. The attack worked because it targeted the one component of the security architecture that no vendor sells a patch for.

The Number That Won't Move

Verizon's 2024 Data Breach Investigations Report attributes 68% of breaches to a human element — through social engineering, credential misuse, misdelivery, or error. The figure has remained stubbornly consistent for years despite unprecedented investment in security awareness training. The problem isn't that people aren't trained. It's that attackers are better at exploiting human nature than annual e-learning modules are at overriding it.


What your people believe — and what attackers know they believe.

Every social engineering attack exploits a human assumption — a mental shortcut, a default trust, a social convention that we rely on to function in everyday life but that becomes a vulnerability when an adversary understands it. These assumptions aren't stupidity; they're efficiency mechanisms that evolution spent millions of years optimising. Attackers have spent decades learning to abuse them.

| The Assumption | Why It Exists | How Attackers Exploit It |
|---|---|---|
| "If they're in the building, they belong here" | Physical presence implies authorisation. We don't challenge people who look like they belong — wearing a lanyard, carrying equipment, walking with purpose. Challenging a stranger feels socially aggressive. | Tailgating through secure doors. Impersonating contractors, cleaners, auditors, fire inspectors. Walking into server rooms, plugging into network ports, photographing whiteboards, reading unlocked screens. Physical access bypasses every digital control the organisation has purchased. |
| "This email is from my boss" | We comply with requests from authority figures quickly and without question — especially when the request is urgent. We trust the display name in the email header far more than we inspect the actual sender address. | CEO fraud and business email compromise. An email appearing to come from the managing director requests an urgent wire transfer, a password reset, or access to a confidential document. The display name says "James Mitchell, CEO." The actual sender is a free webmail account. The £65,000 transfer is processed before anyone thinks to phone James directly. (A detection sketch for this display-name mismatch follows the table.) |
| "IT support wouldn't ask unless they needed it" | IT is a trusted internal authority. People expect IT to ask for credentials during troubleshooting, migrations, or security incidents. The request matches their mental model of how IT operates. | Vishing — voice phishing. The attacker calls pretending to be the helpdesk during a "system migration" or "security incident." They ask the user to "verify" their credentials on a page the attacker controls, or to read out an MFA code. The user complies because the request sounds exactly like something IT would do. |
| "This login page looks right" | We judge website legitimacy by visual appearance. Correct logo, correct colours, plausible URL — that's enough. We rarely inspect the domain character by character or check the TLS certificate. | Credential harvesting via cloned login pages. The phishing email links to login.m1crosoft365.com — a pixel-perfect replica hosted on attacker infrastructure. Modern tooling (evilginx, modlishka) operates as a reverse proxy, capturing the session token in real time and bypassing MFA entirely. The user sees a successful login. The attacker sees a valid session. |
| "I'm too senior / too junior to be a target" | Senior staff assume attackers target the IT department. Junior staff assume attackers only target the C-suite. Both assumptions create a false sense of safety. | Attackers target whoever has the access they need. A PA with calendar access to the CEO. A finance clerk who can approve payments up to £10,000. A developer with production database credentials. A receptionist who controls physical access. A marketing intern who manages the company's social media accounts. Job title is irrelevant; access is everything. |
| "The security tools would have caught it" | If the email arrived in the inbox, it must have passed through the gateway, the spam filter, and the URL scanner. If the tools didn't block it, it must be legitimate. | Attackers specifically design campaigns to bypass security tools. They test payloads against email gateways before deployment. They host malicious links on trusted platforms — SharePoint, Google Drive, Dropbox — that the gateway whitelists. They embed QR codes that bypass URL scanning entirely. A clean inbox arrival lowers the user's guard precisely when it should be at its highest. |
| "We're not important enough to be targeted" | Small and mid-sized organisations, charities, schools, local government, and professional services firms all share this assumption. Targeted attacks are for banks and governments. | Ransomware operators target any organisation that will pay. BEC attackers target any organisation that transfers money. Supply chain attackers target any organisation that provides access to a larger target. Data is data — whether it's held by a FTSE 100 or a 15-person accountancy firm. |
| "I need to do this quickly so I'm not the bottleneck" | Urgency and social pressure override caution. When a request comes from a superior with a deadline, people bypass their normal verification process to avoid being the person who held things up. | Every effective phishing email manufactures urgency: "Your account will be locked in 2 hours." "The CEO needs this before the 3pm board meeting." "Invoice overdue — legal action will follow." Urgency is the single most reliable social engineering lever because it exploits the gap between knowing the right thing to do and having the time to do it. |
| "I just logged in, so this MFA prompt must be mine" | Users who have recently entered a password expect an MFA prompt. If one appears — even unsolicited — they approve it reflexively. The prompt looks normal. It feels normal. | MFA fatigue. The attacker has the user's password (from a phish, a breach database, or credential stuffing). They trigger repeated push notifications — 5, 10, 20 — until the user approves one to make them stop. Or they time a single push to arrive when the user might plausibly expect it: first thing Monday morning, just after a password-expiry email. |
| "I can spot a phishing email" | Security awareness training builds confidence: "Check for these red flags." People who believe they can always identify a phish scrutinise less carefully than people who accept they might miss one. | Modern phishing has no red flags. Grammar is flawless — often AI-generated and native-quality. Branding is pixel-identical. The sending domain is plausible or compromised. The pretext is researched from LinkedIn, company announcements, and public filings. The payload bypasses the gateway. The login page captures the MFA token. Overconfidence is the vulnerability. |

Cognitive biases that training cannot override.

The assumptions above aren't random — they're symptoms of well-documented cognitive biases that evolved to help humans make fast decisions in uncertain environments. These biases are not bugs in human cognition. They're features — and they're exploitable precisely because they're reliable.

| Cognitive Bias | How It Manifests | The Social Engineering Application |
|---|---|---|
| Authority bias | We comply with requests from perceived authority figures without questioning them. The higher the perceived authority, the less we challenge — even when the request is unusual. | CEO impersonation, IT support impersonation, "legal department" threats, police impersonation. Any pretext that positions the attacker above the target in a perceived hierarchy. In physical tests, a high-visibility vest and clipboard create more authority than a suit. |
| Urgency bias | Under time pressure, our decision-making shortcuts intensify. We skip verification steps, accept claims at face value, and prioritise action over analysis. | Every effective phishing campaign manufactures a deadline. "Account locked in 2 hours." "Board meeting in 30 minutes." "Payment overdue — legal proceedings commencing." The urgency doesn't need to be real — it only needs to feel real long enough for the user to act. |
| Social proof | We do what we see others doing. If nobody is challenging the visitor, they must be authorised. If the email was sent to the whole team, it must be legitimate. | Group phishing emails feel safer than individual ones. In physical attacks, tailgating succeeds because if the person ahead didn't challenge the follower, the next person won't either. Social proof creates cascading permission. |
| Reciprocity | When someone does something for us — however small — we feel socially obligated to return the favour. The debt is emotional, not rational. | The attacker holds the door, carries a box, brings coffee, fixes a "broken" printer. When they then ask to borrow a badge, use a workstation, or get buzzed through a locked door, the social debt makes refusal feel rude. |
| Optimism bias | We believe bad things happen to other people. "I would never fall for a phishing email." "Our company isn't a target." The more confident the individual, the stronger the bias. | Overconfidence reduces vigilance. The person most certain they'd spot a phish is often the easiest to phish — because they don't examine the email as carefully as someone who accepts fallibility. |
| Habituation | Repeated exposure to a stimulus reduces our response to it. MFA prompts become automatic approvals. Security banners become invisible. "This file may be dangerous" warnings become reflexive clicks on "Enable." | MFA fatigue is pure habituation exploitation. So is banner blindness — if every downloaded file triggers a warning, users stop reading warnings. The boy cried wolf, and the attacker is the wolf. (A detection sketch for MFA fatigue follows this table.) |
| Commitment and consistency | Once we've started a process, we tend to complete it — even if warning signs emerge partway through. Stopping feels like admitting we made a mistake by starting. | Multi-step phishing attacks exploit this: the user clicks a link, sees a loading screen, enters their username — they're now committed. The password prompt feels like a natural next step, not a suspicious request. Stopping at step 3 means admitting steps 1 and 2 were errors. |
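Habituation has one redeeming quality for defenders: MFA fatigue leaves hard telemetry behind. A burst of push prompts against one account is visible in the authentication log long before the user gives in. A minimal detection sketch, assuming a hypothetical event feed of (username, timestamp) pairs, one per push sent:

```python
# Hypothetical MFA-fatigue detector: alert when one account receives many
# push prompts inside a short window. The event shape is an assumption.
from collections import defaultdict
from datetime import datetime, timedelta

PROMPT_THRESHOLD = 5            # prompts that constitute a burst
WINDOW = timedelta(minutes=10)  # sliding window length

def fatigue_suspects(events):
    """events: iterable of (username, timestamp), one per MFA push sent."""
    by_user = defaultdict(list)
    for user, ts in sorted(events, key=lambda e: e[1]):
        by_user[user].append(ts)
    suspects = set()
    for user, times in by_user.items():
        for i, start in enumerate(times):
            # count prompts inside the window starting at each prompt
            if sum(1 for t in times[i:] if t - start <= WINDOW) >= PROMPT_THRESHOLD:
                suspects.add(user)
                break
    return suspects

now = datetime(2025, 7, 1, 7, 0)
events = [("jsmith", now + timedelta(seconds=30 * i)) for i in range(8)]
print(fatigue_suspects(events))  # {'jsmith'}: 8 prompts in under 4 minutes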

Security awareness training can teach people the names of these biases. It cannot switch them off. The biases operate below conscious decision-making — which is why the same employee who scores 100% on the annual phishing quiz will submit their credentials on a well-crafted phishing page three weeks later. Knowing about a bias and being immune to it are entirely different things.


The attack that scales and never stops working.

Phishing is the most common initial access vector in real-world breaches and a core component of any human-element penetration test. But effective phishing assessment goes far beyond sending fake emails and counting clicks — it evaluates the entire chain: which pretexts work against which populations, whether the email gateway catches the campaign, whether users report it, and how quickly the SOC responds when they do.

| Phishing Variant | How It Works | What It Tests |
|---|---|---|
| Credential harvesting | A phishing email links to a cloned login page. The user enters their username and password. In advanced assessments, a reverse proxy (evilginx) captures the session token in real time — bypassing MFA and granting the tester a fully-authenticated session. (A lookalike-domain sketch follows this table.) | Whether users verify URLs before entering credentials. Whether the email gateway blocked the email or flagged the domain. Whether the SOC detected the credential submission or the anomalous login from a new IP. Whether the cloned domain triggered any DNS or threat intelligence alert. |
| Payload delivery | The email contains or links to a malicious document — a macro-enabled Office file, an ISO image, a OneNote document with an embedded script — disguised as a legitimate business attachment: an invoice, a purchase order, a CV. | Whether users open attachments from unfamiliar senders. Whether endpoint protection detects the payload on execution. Whether the email gateway sandboxes or strips the attachment. Whether macro execution is blocked by Group Policy. |
| Business email compromise | No payload. No link. No malware. Just a convincing email impersonating a senior executive requesting a wire transfer, a bank detail change, or access to a sensitive document. The attack is pure social engineering — and it's the most financially damaging form of phishing globally. | Whether finance staff follow verification procedures for payment requests. Whether employees challenge requests that appear to come from authority figures. Whether the email gateway detects sender spoofing or display-name impersonation. |
| Spear phishing | Highly targeted phishing using specific OSINT about the target: their role, current projects, colleagues' names, recent company announcements. The email references context that only a legitimate contact would know — making it nearly indistinguishable from genuine communication. | Whether high-value targets (finance directors, IT admins, executive assistants, developers with production access) are more resistant than the general population. Whether security awareness training has prepared people for pretexts tailored to their specific role. |
| Smishing and QR phishing | Phishing via SMS or QR codes — bypassing email security entirely. The QR code on a poster in the kitchen, an SMS "from IT" about a password reset, a QR code in a PDF attached to a legitimate-looking email. | Whether mobile device management catches malicious URLs opened on phones. Whether users are trained to treat SMS and QR codes with the same suspicion as email links. Whether the organisation's phishing defences are email-centric and blind to other channels. |
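The cloned-domain pattern in the credential-harvesting row can be caught mechanically, before any user clicks. A minimal lookalike check is sketched below; the protected-domain list and substitution map are deliberate simplifications, and production tooling would use the public-suffix list and much richer homoglyph tables:

```python
# Naive lookalike-domain detector: normalise common digit-for-letter swaps,
# then compare against protected brands by equality or edit distance 1.
PROTECTED = {"microsoft365.com"}              # illustrative protected domain
SWAPS = str.maketrans({"1": "i", "0": "o"})   # small subset of real homoglyphs

def edit_distance(a: str, b: str) -> int:
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[-1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def is_lookalike(fqdn: str) -> bool:
    base = ".".join(fqdn.lower().split(".")[-2:])   # crude eTLD+1 extraction
    normalised = base.translate(SWAPS)
    return any(base != target and
               (normalised == target or edit_distance(normalised, target) <= 1)
               for target in PROTECTED)

print(is_lookalike("login.m1crosoft365.com"))   # True:  '1' stands in for 'i'
print(is_lookalike("login.microsoft365.com"))   # False: the genuine domain
```

Fed with newly registered domains or certificate-transparency logs, a check like this surfaces phishing infrastructure at registration time rather than at click time.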

The phone call that bypasses every email control.

Voice phishing — vishing — is dramatically underrepresented in most organisations' security testing programmes. It shouldn't be. A well-executed vishing call exploits authority bias and urgency bias in real time, with the added pressure of a live conversation where pausing to think feels socially awkward. Unlike email, there's no "Report Phish" button and no gateway to block the call.

In our vishing assessments, the success rate for obtaining credentials, MFA codes, or sensitive information via a phone call consistently exceeds the success rate of email phishing campaigns targeting the same population. People who would never enter their password on a suspicious webpage will read their MFA code to a confident caller who says the right words.

Vishing Assessment — Composite Results

```
# Pretext: IT helpdesk — 'emergency security patch requires verification'
calls_made            = 30   # mixed departments
calls_answered        = 24   # 80% of calls made; remaining percentages are of answered calls
callers_suspicious    = 6    # 25%, questioned the caller but continued anyway
callers_refused       = 4    # 17%, ended the call
credentials_obtained  = 9    # 38%, gave username + password
mfa_codes_obtained    = 5    # 21%, read out a live MFA code
remote_access_granted = 3    # 13%, installed a remote tool as instructed

# Key observations
avg_time_to_capture   = "4m 20s"   # average call duration to full credential capture
longest_resistance    = "8s"       # of questioning before compliance
reported_to_IT        = 1          # of 24 answered calls, after the call ended
```

Nine credentials, five live MFA codes, three remote access installations — from 24 answered phone calls. Average time to full compromise: four minutes twenty seconds. The longest any caller questioned the tester before complying: eight seconds. One person — out of 24 — reported the call to IT afterwards. The other 23, including the nine who gave their credentials, said nothing.


The test most organisations forget to commission.

Physical security testing is the most visceral form of human-element assessment — and the one that consistently produces the most alarming results. It tests whether an unauthorised person can gain physical access to the organisation's premises, its sensitive areas, and its IT infrastructure through social engineering alone.

The results are almost always the same: they can. Not because the locks are weak, but because the people are polite.

Tailgating
Following an authorised person through a secure door before it closes. The most effective technique is carrying something in both hands — a box of printer paper, two coffee cups, a stack of folders — so the target holds the door out of courtesy. We achieve successful tailgating entry in over 90% of physical assessments. Success rates increase at lunchtime, with a high-visibility vest, and on rainy days when people rush through doors.
Impersonation
Arriving as a fire inspector, a health and safety auditor, a photocopier engineer, or a new starter on their first day. The pretext determines the access level: fire inspectors are escorted to server rooms, IT contractors are left alone at desks, and photocopier engineers are given hours of unsupervised access to network-connected equipment in every department. A printed lanyard, a clipboard, and body language that communicates belonging are usually sufficient.
Rogue Device Deployment
Planting a small network device — a Raspberry Pi, a LAN Turtle, a Wi-Fi Pineapple — behind a printer, under a desk, or in a network cupboard. The device provides persistent remote access to the internal network from outside the building. If undiscovered, it operates for as long as it has power. We routinely deploy devices that remain active for the full duration of the engagement — weeks — without detection. (A detection sketch follows this list.)
Shoulder Surfing and Observation
Photographing whiteboards with project plans and architecture diagrams. Reading passwords from sticky notes on monitors. Watching PIN entries on door keypads. Noting screen content on unlocked workstations. Collecting printed documents from shared printers and recycling bins. The volume of sensitive information casually visible in an average open-plan office is consistently staggering.
USB Drop
Leaving USB drives labelled "Salary Review 2025" or "Redundancy Plan — Confidential" in car parks, kitchens, bathrooms, and meeting rooms. When a curious employee plugs one in, the device executes a payload that phones home — giving the tester a remote shell on the workstation. Drop-to-execution rates vary between 15% and 45% depending on the label, the location, and whether the organisation's endpoint protection blocks autorun.
Rogue Wi-Fi
Setting up a Wi-Fi access point with the same SSID as the corporate guest network — or a plausible variant like "ACME-Guest-5G." Employee devices that have previously connected to the real network may auto-connect to the rogue AP. From there, the tester can intercept traffic, serve phishing pages, or capture credentials from applications that don't enforce certificate pinning.
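The defensive counterpart to rogue-device deployment is unglamorous but effective: continuously diff what is on the network against what should be there. A minimal sketch follows; the asset inventory and observed MAC addresses are stubbed assumptions, where a real implementation would pull them from the switch, the NAC, or the DHCP server:

```python
# Flag devices on the network that are absent from the asset inventory.
# Inventory contents and observed MACs below are hypothetical sample data.
KNOWN_ASSETS = {
    "aa:bb:cc:dd:ee:01": "third-floor MFD printer",
    "aa:bb:cc:dd:ee:02": "reception workstation",
}

def unknown_devices(observed_macs):
    return sorted(m for m in {m.lower() for m in observed_macs}
                  if m not in KNOWN_ASSETS)

observed = ["aa:bb:cc:dd:ee:01", "B8:27:EB:12:34:56"]
for mac in unknown_devices(observed):
    # b8:27:eb is a Raspberry Pi Foundation OUI, exactly the class of
    # implant described above.
    print(f"ALERT: unrecognised device {mac}, investigate physically")
```

Pair this with 802.1X so an unknown device never gets a usable port in the first place, and the weeks of undetected dwell time described above shrink to minutes.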

Physical testing reveals a truth that digital-only assessments miss entirely: the most expensive firewall in the world is irrelevant if someone can walk into the building, plug a device into the network, and walk out again. In our experience, they almost always can — and the device is almost never discovered.


Metrics that matter — beyond the click rate.

A phishing simulation that produces only a click rate is a wasted exercise. The click rate tells you how many people fell for this specific pretext on this specific day. It doesn't tell you whether the organisation can detect, report, and respond to a social engineering attack before it causes damage. The metrics that drive real improvement measure the entire response chain.

| Metric | What It Measures | Why It Matters More Than Click Rate |
|---|---|---|
| Report rate | The percentage of recipients who reported the phishing email through official channels ("Report Phish" button, email to security team, call to IT). | A 30% click rate with a 25% report rate is a healthier organisation than a 15% click rate with a 2% report rate. Reports trigger response. Unreported phishing triggers nothing. The report rate is the organisation's immune response. |
| Time to first report | How many minutes between the first phishing email being delivered and the first report being submitted. (The sketch after this table shows how these figures are computed.) | Speed determines damage. A report at 3 minutes triggers containment before most users have opened the email. A report at 47 minutes means the campaign has been running undetected for nearly an hour. Industry benchmark for a well-trained organisation: under 15 minutes. |
| SOC response time | How quickly the SOC or IT security team acts on the first report — blocking the phishing domain, pulling the email from inboxes, disabling compromised accounts. | A fast report with a slow response is almost as bad as no report. If the SOC takes 3 hours to act on a phishing report, the attacker has 3 hours of uninterrupted access from every account that submitted credentials. |
| Credential compromise rate | The percentage of recipients who not only clicked but actually submitted valid credentials on the harvesting page. | Clicking a link is a mistake. Submitting credentials is a breach. The gap between click rate and credential rate reveals whether intermediate controls (browser warnings, URL reputation, muscle-memory hesitation) provide any last-line defence. |
| MFA bypass rate | Of those who submitted credentials, how many also completed the MFA challenge — either through a reverse proxy capture or by approving a push notification. | If MFA is bypassable through real-time phishing or fatigue attacks, the organisation's single most relied-upon compensating control is ineffective against a motivated attacker. This metric drives the case for phishing-resistant MFA (FIDO2). |
| Departmental variance | Which departments, roles, or office locations showed the highest and lowest click, report, and credential rates. | Aggregate statistics hide the outliers. If the finance department has a 5% click rate but the facilities team has a 60% click rate, the organisation-wide 18% average is misleading. Targeted training and controls should follow the risk, not the average. |
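Every figure in this table falls out of the same raw material: a timestamped event log from the simulation platform. A minimal sketch of the computation, with hypothetical event tuples standing in for the platform's export:

```python
# Derive response-chain metrics from a phishing-simulation event log.
# The (timestamp, user, action) tuples are hypothetical sample data.
from datetime import datetime

POPULATION = 120  # recipients in the campaign
events = [
    (datetime(2025, 7, 1, 9, 0),  "user01", "delivered"),
    (datetime(2025, 7, 1, 9, 4),  "user01", "clicked"),
    (datetime(2025, 7, 1, 9, 5),  "user01", "submitted_credentials"),
    (datetime(2025, 7, 1, 9, 47), "user07", "reported"),
]

def rate(action: str) -> float:
    """Fraction of the population with at least one event of this type."""
    return len({u for _, u, a in events if a == action}) / POPULATION

first_delivery = min(t for t, _, a in events if a == "delivered")
reports = sorted(t for t, _, a in events if a == "reported")

print(f"click rate:           {rate('clicked'):.1%}")
print(f"credential rate:      {rate('submitted_credentials'):.1%}")
print(f"report rate:          {rate('reported'):.1%}")
print(f"time to first report: {reports[0] - first_delivery if reports else 'never'}")
```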
Human-Element Assessment — Full Metrics Dashboard

```
# Phishing campaign
emails_sent           = 120   # all fee earners
emails_delivered      = 118   # 2 blocked by gateway
emails_opened         = 94    # 80% of delivered
links_clicked         = 40    # 34%
credentials_submitted = 21    # 18%, incl. 4 with privileged access
mfa_bypassed          = 8     # 7%, via reverse proxy: full session capture
reported_as_phish     = 3     # 2.5%; first report at 47 minutes
soc_response_time     = "3h 12m"   # after first report
accounts_disabled     = 0     # SOC did not disable compromised accounts

# Vishing campaign
calls_made            = 30
credentials_obtained  = 9     # 38% of answered calls
mfa_codes_obtained    = 5     # 21%
reported_to_IT        = 1     # of 24 answered calls

# Physical assessment
tailgate_attempts     = 4
tailgate_success      = 4     # 100%, zero challenges
server_room_access    = "achieved"   # escorted by facilities
rogue_device          = "deployed"   # floor 3, behind a printer
device_discovered     = False        # still active at engagement close

# Critical finding
detection_gap         = "no SOC alerts generated"   # across phishing, vishing, and physical testing
```

The click rate (34%) tells part of the story. The vishing success rate (38%) tells another. But the most critical finding spans all three channels: across phishing, vishing, and physical testing, the SOC generated zero alerts. The organisation's detection capability — the £180,000 investment in technology — was not tuned to detect the attacks that actually work. The investment protected against automated threats. The human-element attacks walked past it.


Building defences that survive human error.

If cognitive biases can't be trained away, the security model must change. Instead of trying to make humans infallible, build an architecture that limits the damage when they're fallible — which they will be, because they're human. This means layering technical controls, organisational processes, and cultural norms that collectively reduce the probability of a successful attack and contain the blast radius when one succeeds.

| Defence Layer | What It Does | Why It Works |
|---|---|---|
| Phishing-resistant MFA (FIDO2 / passkeys) | Replace push-notification and OTP-based MFA with hardware-bound cryptographic keys. FIDO2 keys are domain-bound — they physically cannot authenticate to a cloned site because the domain doesn't match. (A sketch of the binding check follows this table.) | Eliminates MFA fatigue, reverse-proxy credential capture, and real-time session theft. The user cannot authenticate to a fake site regardless of how convincing it looks. This single control neutralises the most sophisticated phishing attacks in existence. |
| Payment verification procedures | Require verbal confirmation via phone — to a known number, not one from the email — for any payment instruction, bank detail change, or financial authorisation received electronically. | Defeats BEC entirely. The attacker's email requests an urgent wire transfer. Finance phones the supposed sender on their known number. The fraud is exposed in 30 seconds. This single procedural control prevents the most financially devastating social engineering attack. |
| Reporting culture, not blame culture | Make reporting a suspicious email, call, or visitor frictionless and consequence-free. A one-click "Report Phish" button. A direct line to security for suspicious visitors. No punitive follow-up for false alarms. Visible recognition for good reports. | The organisation's immune system depends on its sensors — people — actually reporting what they see. Punishing reporters trains silence. Rewarding reporters — including for false positives — trains vigilance. A 25% report rate is worth more than a 5% click rate. |
| Challenge culture | Explicitly empower employees to challenge anyone — regardless of seniority or apparent authority — who requests credentials, access, or sensitive information through unusual channels. Model the behaviour from leadership. | Directly addresses authority bias. If the organisation's culture rewards polite but firm challenges — "I need to verify this through our normal process" — employees will challenge the "CEO's" email, the "fire inspector's" badge, and the "IT helpdesk" caller. |
| Physical access controls | Visitor registration with photo ID verification. Mandatory escort for all non-employees. Network ports disabled by default with 802.1X NAC. Mantrap doors on sensitive areas. No unaccompanied server room access. | Technical controls that enforce physical security regardless of social pressure. The impersonator may convince reception they're a fire inspector — but the visitor process requires government-issued ID, a logged escort, and badge-restricted access that the escort's badge doesn't open. |
| Assume-breach architecture | Design internal controls on the assumption that a phishing attack will eventually succeed. Segment the network. Implement least-privilege access. Monitor for post-compromise indicators. Limit what any single compromised account can reach. | Accepts the inevitability of human error and limits the blast radius. When — not if — a user submits credentials on a phishing page, the attacker's access should be constrained to the minimum: no lateral movement, no privilege escalation, no access to data outside the user's role. |
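The FIDO2 row merits a closer look, because the domain binding is enforced by cryptography rather than by user vigilance. During authentication the browser writes the page origin into clientDataJSON, and the authenticator signs over a hash of the relying party ID; the server rejects a mismatch on either. A simplified sketch of those two checks follows; the domains are illustrative, and a real verifier also validates the signature, challenge, and authenticator flags:

```python
# Why a FIDO2 login cannot be proxied to a cloned site: the browser, not the
# user, fills in the origin, and the server checks it. Domains are examples.
import hashlib
import json

EXPECTED_ORIGIN = "https://login.acme.co.uk"   # hypothetical genuine site
EXPECTED_RP_ID = "acme.co.uk"

def binding_checks_pass(client_data_json: bytes, rp_id_hash: bytes) -> bool:
    client_data = json.loads(client_data_json)
    origin_ok = client_data.get("origin") == EXPECTED_ORIGIN
    rp_ok = rp_id_hash == hashlib.sha256(EXPECTED_RP_ID.encode()).digest()
    return origin_ok and rp_ok

# A reverse proxy on a lookalike domain yields clientDataJSON like this;
# the attacker cannot rewrite it without the browser's cooperation:
phished = json.dumps({"type": "webauthn.get",
                      "origin": "https://login.m1crosoft365.com"}).encode()
print(binding_checks_pass(
    phished, hashlib.sha256(b"acme.co.uk").digest()))   # False: login fails
```

This is why a reverse proxy that captures OTP codes and session tokens in real time gets nothing useful from a passkey login: there is no secret the user can be tricked into handing over.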

Ethical and effective human-element testing.

Human-element testing is uniquely sensitive. Unlike testing a server or an application, you're testing people — and people have feelings, professional reputations, and anxiety about being judged. The way an assessment is designed, communicated, and reported determines whether it strengthens the organisation's security culture or poisons it.

Test the Organisation, Not the Individual
Report results as aggregate statistics and departmental breakdowns, never as named individuals. "18% of the finance department submitted credentials" is actionable intelligence. "John Smith submitted his password in 4 seconds" is public humiliation — and it guarantees that John never reports a suspicious email again, because he's terrified of being singled out a second time.
Educate, Don't Punish
Users who clicked should receive an immediate, supportive learning experience — a brief explanation of what happened, why the pretext was effective, and what to look for next time. Not a disciplinary meeting. Not a mandatory retraining module as punishment. Punishing clicks teaches people to hide mistakes. Educating them teaches people to report them.
Brief Leadership Before, Not After
Senior leadership must know the assessment is happening, agree on the scope and ethical boundaries, and understand that the purpose is to identify systemic weaknesses — not to prove that employees are stupid. If the board sees the results as evidence of staff incompetence rather than organisational risk, the assessment has failed regardless of its technical findings.
Respect Ethical Boundaries
Never exploit genuine emergencies, health fears, bereavement, or personal circumstances. Never use pretexts that cause real distress — fake redundancy notices, false security alerts about personal accounts, fabricated disciplinary proceedings. The pretext must be realistic enough to test genuine behaviour without crossing the line into causing genuine harm.
Measure Progress, Not Perfection
A single assessment is a snapshot. Quarterly or biannual assessments — with varied pretexts, different target groups, escalating sophistication — reveal whether the organisation's human resilience is improving. Track click rate, report rate, time-to-report, SOC response time, and MFA bypass rate across each campaign. The trend matters more than any individual number.
Vary the Channel
An organisation that only tests email phishing has only tested one channel. Real attackers use phone calls, SMS, QR codes, social media messages, physical access, and combinations of all of them. A comprehensive human-element programme rotates across channels, testing the organisation's resilience to social engineering regardless of the medium.

The bottom line.

The human element is the attack surface that no firewall protects, no scanner detects, and no patch fixes. It's the receptionist who holds the door for the "fire inspector." It's the finance manager who processes the urgent wire transfer without phoning to verify. It's the developer who approves the MFA prompt at 7am without thinking. It's the senior partner who enters credentials on a cloned login page that's pixel-identical to the real one.

These aren't failures of intelligence or professionalism. They're features of human cognition — authority bias, urgency bias, social proof, reciprocity, habituation, commitment — that evolved to help us make fast decisions and that attackers have methodically weaponised. Annual e-learning cannot override millions of years of cognitive evolution. The 68% figure isn't stuck because training is too infrequent; it's stuck because the approach itself is wrong.

The answer is architectural, not educational. Training raises awareness. Phishing-resistant MFA eliminates credential theft. Payment verification procedures prevent BEC fraud. Challenge culture addresses authority bias. Physical controls enforce access policy when social pressure fails. Assume-breach design limits blast radius when — not if — a human makes a mistake. And a reporting culture that rewards rather than punishes creates the early-warning system that no technology can replace.

Human-element penetration testing — phishing, vishing, physical access, social engineering — reveals the gap between what the organisation assumes its people will do and what they actually do under realistic adversarial pressure. That gap is where 68% of breaches begin. Closing it requires understanding it first — and understanding it requires testing it.


Find out what happens when the attack targets your people.

Our human-element assessments combine phishing simulation, vishing, physical security testing, and social engineering into a realistic, multi-channel evaluation of your organisation's resilience — with ethical testing practices that strengthen your security culture rather than undermine it.