> curl -H 'Role: admin' https://app/api/users && echo 'oh.'_
In 2021, the OWASP Top 10 — the industry's most widely referenced catalogue of web application security risks — moved Broken Access Control from position five to position one. Not because it's a new problem, but because it's become the dominant one: 94% of the applications in OWASP's dataset were tested for some form of broken access control, and its 34 mapped CWEs recorded more occurrences than any other category, over 318,000 in total.
It's not hard to understand why. Every modern application is fundamentally an access control system: it decides who can see what, who can do what, and who can change what. When those decisions are wrong — when a user can access another user's data, when a customer can reach the admin panel, when an API accepts a role parameter from the client — the consequence isn't a theoretical vulnerability. It's a direct path to data breach, fraud, or complete system compromise.
And the tools that most organisations rely on to find vulnerabilities — automated scanners — are fundamentally incapable of testing access control. Because testing access control requires understanding what a user should be allowed to do, and no scanner has that understanding.
A scanner sends a request and receives a valid response. It sees a 200 OK with data in the body. It moves on. It has no way to know that the data it received belongs to a different user, that the endpoint should require admin privileges, or that the action it performed should have been denied. Every access control test requires context that automation doesn't possess.
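To make that concrete, here is a minimal sketch of the kind of check that needs context. The URL, the session token, and the `owner_id` field are illustrative assumptions, not details from any particular application; the point is that the scanner's evaluation effectively ends at the status code, while the meaningful assertion needs to know who the data should belong to.

```python
# Hypothetical example: why a 200 OK tells a scanner nothing about authorisation.
# The URL, the session token and the "owner_id" field are illustrative assumptions.
import requests

SESSION_COOKIE = {"session": "<user-a-session-token>"}  # the tester's own account
USER_A_ID = 1041                                        # the account that token belongs to

resp = requests.get(
    "https://app.example.com/api/orders/99812",
    cookies=SESSION_COOKIE,
    timeout=10,
)

# A scanner's view stops roughly here: valid status, well-formed body, move on.
assert resp.status_code == 200

# The access control question needs context the scanner doesn't have:
# should this user have received this record at all?
order = resp.json()
if order.get("owner_id") != USER_A_ID:
    print(f"Possible IDOR: order 99812 belongs to user {order.get('owner_id')}, "
          f"not to user {USER_A_ID} who requested it")
```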
Identity, authentication, and access control are often used interchangeably, but they describe three distinct layers of security — each with its own failure modes, its own testing requirements, and its own consequences when it breaks.
| Layer | What It Does | The Question It Answers | When It Fails |
|---|---|---|---|
| Identity | Establishes who the user claims to be. The username, the email address, the certificate, the API key. Identity is the claim. | "Who are you?" | Usernames are predictable or enumerable. API keys are embedded in client-side code. Identity is assumed from client-supplied data (a hidden form field, a cookie, a JWT claim) without server-side verification. |
| Authentication | Verifies that the identity claim is genuine. The password, the MFA token, the biometric, the OAuth flow. Authentication is the proof. | "Can you prove it?" | Weak passwords accepted. MFA absent or bypassable. Password reset flows that leak tokens. Session tokens that are predictable, never expire, or survive password changes. OAuth state parameters not validated. |
| Access control | Determines what the authenticated user is allowed to do. Which resources, which actions, which data. Access control is the permission. | "Are you allowed to do that?" | Horizontal escalation (accessing another user's data). Vertical escalation (performing admin actions as a standard user). Missing server-side checks on API endpoints. Reliance on client-side enforcement. |
A penetration test examines all three layers — and critically, it examines the gaps between them. An application might authenticate users perfectly but then forget to check whether the authenticated user is authorised to access the specific resource they're requesting. The authentication layer is sound; the access control layer is absent. The gap between them is where the breach lives.
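As an illustration of that gap, here is a minimal Flask-style sketch (invented route and data, not any client's code) in which authentication is enforced but the per-resource check is the easy line to forget:

```python
# A minimal Flask sketch of the gap between the layers: the route requires a
# valid session (authentication is sound), but the ownership check is the part
# that gets forgotten (access control is absent). The endpoint and the in-memory
# data store are illustrative assumptions.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"

# Stand-in data store: document id -> owning user id and contents
DOCUMENTS = {
    1: {"owner_id": 1041, "body": "User A's statement"},
    2: {"owner_id": 2077, "body": "User B's statement"},
}

@app.route("/api/documents/<int:doc_id>")
def get_document(doc_id):
    user_id = session.get("user_id")
    if user_id is None:
        abort(401)                       # authentication: enforced

    doc = DOCUMENTS.get(doc_id)
    if doc is None:
        abort(404)

    # This is the line that's missing when the access control layer is absent:
    if doc["owner_id"] != user_id:
        abort(403)                       # authorisation: enforced per resource

    return jsonify(doc["body"])
```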
Authentication is the front door to every application. When it fails, everything behind it is exposed — regardless of how well the access control layer is implemented. Here are the authentication weaknesses we find most frequently, and why each one matters more than its CVSS score suggests.
| Weakness | How We Test It | Real-World Consequence |
|---|---|---|
| No MFA on critical systems | Attempt login with valid credentials (provided or from breached databases). Verify whether a second factor is required for VPN, webmail, admin panels, and cloud management consoles. | A single compromised password — from phishing, credential stuffing, or a third-party breach — grants complete access. Without MFA, there is no second chance to stop the attacker. |
| Bypassable MFA | Test whether MFA can be skipped by accessing the application via its API rather than the front-end. Test whether the MFA step can be bypassed by navigating directly to the post-MFA URL. Test for MFA fatigue (repeated push notifications). | MFA is deployed but the implementation has a gap. The API doesn't enforce MFA. The application trusts the browser session after a single MFA challenge and never re-validates. The user approves a push notification out of frustration. |
| Weak password policy | Attempt to set passwords that are common, short, or appear in breach dictionaries. Test whether the policy enforcement matches the stated policy. Spray common passwords against the login endpoint. | If Company2025! satisfies the password policy, it satisfies the attacker too. Password spraying — trying one common password against every account — routinely achieves a 3–5% success rate against organisations with technically compliant but practically weak policies. |
| Broken password reset | Request a password reset and examine the token. Is it predictable? Is it returned in the response body as well as the email? Does it expire? Can it be reused? Does the reset invalidate existing sessions? | A reset token that's returned in the API response — not just in the email — allows any user to reset any other user's password. We find this pattern approximately once in every 20 web application tests. |
| Session management flaws | Examine session tokens for predictability, entropy, and secure attributes. Test whether sessions expire after inactivity. Test whether sessions are invalidated after password change. Test for session fixation. | A session that survives a password change means the attacker retains access even after the victim resets their password. A predictable session token means the attacker doesn't need credentials at all — they can generate valid sessions mathematically. |
| User enumeration | Attempt login with valid and invalid usernames. Compare responses: do error messages, response times, or HTTP status codes differ? Test registration, password reset, and any endpoint that accepts a username. | Enumeration seems minor — it "only" reveals whether an account exists. But it's the prerequisite for targeted credential attacks. The attacker narrows 10,000 possible usernames to 3,000 confirmed accounts, then sprays one password against all of them. |
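The enumeration row above is straightforward to demonstrate in code. A hedged sketch, assuming a placeholder login endpoint and JSON field names: submit one username you know exists and one you know doesn't, then compare status code, body length, and timing.

```python
# Sketch of a user enumeration probe: diff the responses for a known-valid and a
# known-invalid username. The URL, field names and accounts are placeholders.
import requests

LOGIN_URL = "https://app.example.com/api/login"

def probe(username: str) -> dict:
    resp = requests.post(
        LOGIN_URL,
        json={"username": username, "password": "definitely-wrong-password"},
        timeout=10,
    )
    return {
        "username": username,
        "status": resp.status_code,
        "length": len(resp.content),
        "seconds": resp.elapsed.total_seconds(),
    }

known_valid = probe("alice@example.com")            # an account we control
known_invalid = probe("no-such-user@example.com")   # an account that shouldn't exist

for key in ("status", "length", "seconds"):
    if known_valid[key] != known_invalid[key]:
        print(f"Response {key} differs for valid vs. invalid usernames "
              f"({known_valid[key]} vs {known_invalid[key]}): possible enumeration")
```

In practice each probe is repeated and the timings averaged, since a single sample is noisy; status code and body length differences are usually the more reliable signal.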
Access control testing is where human testers deliver the most value — because every test requires understanding the application's permission model and deliberately violating it. There is no signature to match, no pattern to detect, no payload to inject. There is only the question: does this application correctly enforce who can do what?
Whether the failure is horizontal escalation, vertical escalation, or a missing check on an admin-only endpoint, every access control vulnerability shares a common trait: the server returns a valid, successful response. There is no error, no unusual status code, no malformed output. The scanner sees normal application behaviour. The human tester sees a critical security failure. That gap is why access control testing requires human intelligence.
Access control failures aren't limited to web applications. The identity infrastructure that underpins the entire organisation — Active Directory, Azure AD / Entra ID, OAuth providers, SAML federations — presents its own category of weaknesses that penetration testing systematically exposes.
| Identity System | Common Weaknesses We Find | Impact |
|---|---|---|
| Active Directory | Excessive group membership (200 users in Domain Admins). Service accounts with DA-equivalent privileges and passwords that haven't changed since 2018. Unconstrained delegation allowing any compromised service to impersonate any user. GPP passwords in SYSVOL. No tiered administration model. | AD is the central identity store. Every misconfiguration is a privilege escalation path. When the service account that runs the print spooler is a Domain Admin, the print spooler is a Domain Admin — and so is anyone who compromises it. |
| Azure AD / Entra ID | No conditional access policies — any device, any location, any risk level can authenticate. Global Admin assigned to regular user accounts rather than dedicated admin accounts. Legacy authentication protocols enabled (basic auth bypasses MFA). Overly permissive application consent policies. | Cloud identity compromise grants access to every SaaS application federated through Entra ID: email, SharePoint, Teams, the Azure portal itself. A single compromised cloud account can be more damaging than Domain Admin on-premises. |
| OAuth / OpenID Connect | State parameter not validated (CSRF in the OAuth flow). Redirect URI not strictly matched (allowing token theft via open redirect). Overly-broad scope requests. Refresh tokens that never expire. Client secrets embedded in front-end code. | OAuth flaws allow attackers to hijack the authentication flow — stealing tokens, impersonating users, or gaining access to APIs with permissions the user never intended to grant. |
| SAML federations | XML signature wrapping attacks. SAML assertion replay. Missing audience restriction validation. Trust relationships that accept assertions from any identity provider without verification. | SAML is the federation protocol for enterprise SSO. A flaw in the SAML implementation can allow the attacker to forge authentication assertions — logging in as any user without their credentials. |
| API authentication | API keys in client-side code. Bearer tokens that never expire. No rate limiting on authentication endpoints. Shared API keys across all clients. JWT tokens with alg: none accepted or signed with a weak secret. | APIs are the backbone of modern applications. When API authentication is weak, the attacker doesn't need to exploit the application — they interact with the API directly, bypassing every front-end security control. |
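The last row is worth a concrete illustration. The sketch below crafts an unsigned alg: none token by hand, as an attacker would, and shows server-side verification that rejects it, using the PyJWT library; the secret and claims are placeholders, and a real deployment would load the key from a secrets manager rather than hard-coding it.

```python
# Sketch: why pinning the accepted algorithm list matters when validating JWTs.
# Uses PyJWT (pip install pyjwt); the secret and claims are placeholders.
import base64
import json

import jwt  # PyJWT

SECRET = "load-a-long-random-value-from-a-secrets-manager"

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

# What an attacker submits: a token with alg set to "none" and no signature at all.
header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
payload = b64url(json.dumps({"sub": "1041", "role": "admin"}).encode())
forged = f"{header}.{payload}."

def verify(token: str) -> dict:
    # Pinning algorithms=["HS256"] rejects alg: none (and algorithm-confusion
    # tricks) instead of trusting whatever the token's header claims.
    return jwt.decode(token, SECRET, algorithms=["HS256"])

try:
    verify(forged)
    print("Accepted: this is the vulnerable behaviour")
except jwt.InvalidTokenError as exc:
    print(f"Rejected as expected: {exc}")
```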
Testing identity, authentication, and access control isn't a single check — it's a structured process that examines every layer, every role, and every transition between privileges. Here's how we approach it.
| Phase | What We Do | What We're Looking For |
|---|---|---|
| 1. Permission mapping | Identify every role in the application (anonymous, standard user, privileged user, admin, super-admin). Document what each role should and shouldn't be able to do. Map every endpoint and function to its expected access level. | A complete picture of the intended access control model — the baseline against which every test is measured. If the application doesn't have a documented permission model, the mapping itself is a valuable deliverable. |
| 2. Horizontal testing | Authenticate as User A. Access User B's resources by changing identifiers (IDs, filenames, account numbers) in every request. Test across every endpoint that returns user-specific data or performs user-specific actions. | Any case where User A can access User B's data or perform actions on User B's behalf. IDORs, cross-account access, shared resource leakage. |
| 3. Vertical testing | Authenticate as a standard user. Attempt every admin function by calling admin endpoints directly, modifying role parameters, or replaying admin requests captured in a higher-privilege session. | Any case where a lower-privilege user can perform higher-privilege actions. Admin panel access, user creation/deletion, configuration changes, data export. |
| 4. Authentication attack surface | Test every authentication mechanism: login, registration, password reset, MFA, session management, OAuth flows, API key handling, token lifecycle. Test each for weakness, bypass, and abuse. | Authentication bypasses, MFA circumvention, session fixation, token prediction, credential leakage, brute-force resilience. |
| 5. Privilege transition testing | Test what happens at privilege boundaries: when a user's role changes, when a session is elevated, when MFA is completed. Verify that the transition is atomic, that old sessions are invalidated, and that the new privilege level is correctly enforced. | Race conditions during role changes. Sessions that retain old privileges after demotion. MFA challenges that can be completed once and then bypassed for subsequent actions. |
This structured approach ensures that testing isn't random or opportunistic. Every role is tested against every other role. Every endpoint is tested at every privilege level. The matrix is systematic — which is the only way to achieve confident coverage of access control.
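A hedged sketch of what phases 1 to 3 look like in practice, assuming placeholder endpoints, roles, and tokens: the documented matrix drives the requests, and anything that disagrees with the model is flagged for a human to review.

```python
# Sketch of the matrix approach: a documented expectation per endpoint and role,
# exercised with one authenticated session per role. Endpoints, roles and tokens
# are placeholders.
import requests

BASE = "https://app.example.com"

# Phase 1 output: the intended access model (role -> allowed?) for each endpoint.
PERMISSION_MATRIX = {
    "/api/users/1041/orders": {"user_a": True,  "user_b": False, "admin": True},
    "/api/admin/users":       {"user_a": False, "user_b": False, "admin": True},
}

# One authenticated session per role (tokens obtained out of band).
SESSIONS = {
    "user_a": {"Authorization": "Bearer <user-a-token>"},
    "user_b": {"Authorization": "Bearer <user-b-token>"},
    "admin":  {"Authorization": "Bearer <admin-token>"},
}

for path, expectations in PERMISSION_MATRIX.items():
    for role, should_be_allowed in expectations.items():
        resp = requests.get(BASE + path, headers=SESSIONS[role], timeout=10)
        actually_allowed = resp.status_code == 200
        if actually_allowed and not should_be_allowed:
            print(f"FINDING: {role} received 200 from {path} "
                  f"(expected 401/403): possible horizontal or vertical escalation")
        elif not actually_allowed and should_be_allowed:
            print(f"NOTE: {role} was denied {path} (expected access): "
                  f"functional break or a stale matrix")
```

A real engagement covers every method as well as GET, and every flagged response is reviewed by hand: a 200 with an empty list is not the same finding as a 200 containing another user's data.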
To illustrate the depth of identity and access control testing, here's a composite from a real engagement: a customer portal for a financial services firm, serving 15,000 registered users.
Seven findings. Three independent chains to complete compromise. Zero detected by scanners. The portal had been scanned quarterly for two years — every scan returned a clean bill of health on these endpoints because every response was a valid 200 OK with correctly-formatted data. The data just happened to belong to someone else.
Access control failures aren't caused by incompetent developers. They're caused by systemic factors in how applications are designed, built, and tested — factors that create the conditions for these vulnerabilities to appear and persist.
| Root Cause | How It Creates Access Control Failures |
|---|---|
| Front-end-driven design | The application is designed around what the UI shows: if the button isn't visible, the user can't perform the action. But the API behind the button doesn't enforce the same restriction. Security is in the UI layer, not the business logic layer — and attackers don't use the UI. |
| No centralised access control framework | Each endpoint implements its own access checks. Developer A checks the role. Developer B checks the session. Developer C forgets to check anything. Without a centralised, framework-level enforcement mechanism, consistency is impossible at scale. |
| Untested permission model | The application has roles and permissions, but nobody has documented the intended access matrix — which roles should access which endpoints with which methods. Without a documented model, there's nothing to test against, and nobody notices when the implementation diverges from the intent. |
| Additive development | New features are added to existing applications over time. Each new endpoint is built by a different developer, in a different sprint, with different assumptions about access control. The original permission model (if one existed) isn't updated. Gaps accumulate silently. |
| Overreliance on scanning | The application is scanned regularly and passes every time. The organisation develops confidence that the application is secure. But the scanner never tested access control — it tested for injection, XSS, and configuration issues. The most dangerous vulnerability class was never assessed. |
Fixing access control isn't a single patch — it requires architectural thinking, development practices, and testing that specifically targets the identity layer.
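One architectural pattern that addresses the "no centralised framework" root cause above is to route every resource access through a single enforcement point. A minimal sketch, assuming a Flask application with an invented account store:

```python
# Sketch of framework-level enforcement: a decorator every resource route passes
# through, so the ownership/role check can't be forgotten endpoint by endpoint.
# Route names, the account store and the loader function are illustrative.
from functools import wraps

from flask import Flask, abort, session

app = Flask(__name__)
app.secret_key = "dev-only"

def require_access(load_owner):
    """Central check: the caller must be authenticated and must own the resource,
    or hold the admin role. load_owner maps route kwargs to the owning user id."""
    def decorator(view):
        @wraps(view)
        def wrapped(*args, **kwargs):
            user_id = session.get("user_id")
            if user_id is None:
                abort(401)
            owner_id = load_owner(**kwargs)
            if owner_id is None:
                abort(404)
            if user_id != owner_id and session.get("role") != "admin":
                abort(403)
            return view(*args, **kwargs)
        return wrapped
    return decorator

ACCOUNTS = {7: {"owner_id": 1041, "balance": "12,400"}}

@app.route("/api/accounts/<int:account_id>")
@require_access(lambda account_id: ACCOUNTS.get(account_id, {}).get("owner_id"))
def get_account(account_id):
    # By the time we get here, the central layer has already decided the caller
    # is allowed to see this account.
    return ACCOUNTS[account_id]
```

The point is not this particular decorator; it is that the check lives in one place, so an endpoint added in a later sprint by a different developer cannot simply omit it.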
Identity, authentication, and access control are the foundation of application security. They determine who can access what, and every failure in these layers is a direct path to data exposure, privilege escalation, or complete system compromise.
These are also the vulnerability classes that automated scanners are worst at finding. A scanner sees a successful response and moves on. A human tester sees a successful response and asks: should this user have received this data? That question — the question of authorisation, not just authentication — is the one that uncovers the findings which matter most.
Broken access control is the number one web application vulnerability class for a reason. It's common, it's impactful, it's invisible to automation, and it persists across quarterly scans that never test for it. The only reliable way to find it is a human tester with a permission matrix, a proxy, and the time to test every endpoint at every privilege level.
Our web application and infrastructure assessments include systematic testing of identity, authentication, and access control — the vulnerability classes that cause the most damage and receive the least automated coverage.