Breach Analysis

The Cambridge Analytica Scandal: When the Platform Is the Vulnerability

> platform.failure —— target: Facebook / Cambridge Analytica —— date: 2018-03-17 —— profiles_harvested: 87,000,000 —— consent: ABSENT —— democracy: COMPROMISED_

Hedgehog Security | 17 June 2018 | 32 min read

When the platform itself is the vulnerability.

Every breach we have examined in this series has involved attackers penetrating the defences of an organisation to steal data — through unpatched software, compromised credentials, insider access, or social engineering. The Cambridge Analytica scandal is fundamentally different. No servers were hacked. No vulnerabilities were exploited. No malware was deployed. Instead, Facebook's own platform — its deliberately designed API, its intentional data-sharing architecture, its business model built on the monetisation of personal information — was used exactly as designed to harvest the personal data of 87 million people without their meaningful consent. The platform itself was the vulnerability.

On the 17th of March 2018, The Guardian and The New York Times simultaneously published stories based on the testimony of Christopher Wylie, a former Cambridge Analytica employee turned whistleblower. Wylie revealed that Cambridge Analytica — a political consulting firm working for the 2016 Trump presidential campaign and the UK's Leave.EU Brexit campaign — had used a Facebook personality quiz app to harvest the personal data of tens of millions of users. That data was then used to build psychographic profiles for political microtargeting — delivering tailored political messages designed to influence individual voters based on their personality traits, fears, and vulnerabilities.

The revelations wiped over $100 billion from Facebook's market capitalisation, forced CEO Mark Zuckerberg to testify before both houses of the United States Congress, would ultimately result in the largest fine in the history of the Federal Trade Commission, and catalysed a global reckoning with the power of social media platforms and the value, and vulnerability, of personal data in the digital age.

Three months on, we at Hedgehog Security examine this scandal through the lens of information security — not the political dimensions, important as they are, but the specific security and governance failures that allowed a single app developer to harvest the data of 87 million people and hand it to a political consulting firm for weaponisation.



From personality quiz to political weapon.

April 2010: Facebook launches its Open Graph API, allowing third-party apps to access not only the data of users who installed them but also the data of those users' Facebook friends, without the friends' explicit consent. This architectural decision is the root cause of everything that follows.
2013: Cambridge Analytica is founded as a subsidiary of SCL Group, a British political consulting firm. Aleksandr Kogan, a psychology researcher at Cambridge University, develops a Facebook app called 'thisisyourdigitallife', a personality quiz. Approximately 270,000 users download the app and consent to share their data for 'academic research.'
2013–2014: Through Facebook's Graph API, the app harvests data not only from the 270,000 users who installed it but from all of their Facebook friends. The result: data on up to 87 million people, covering public profiles, page likes, birthdays, locations, and in some cases news feeds, timelines, and messages. This data is passed to Cambridge Analytica. None of the 87 million friends consented to this.
2014: Facebook changes its API rules to prevent apps from accessing friends' data without explicit permission. However, the changes are not applied retroactively. Cambridge Analytica retains the data already harvested. Facebook does not audit or enforce the deletion of data already collected under the old rules.
December 2015: The Guardian reports that Cambridge Analytica is working for the Ted Cruz presidential campaign using Facebook-harvested data. Facebook learns of the data misuse. Its response: ask Cambridge Analytica to delete the data. Facebook does not verify that deletion occurs. It does not notify the 87 million affected users. It does not inform regulators.
2016: Cambridge Analytica works for the Trump presidential campaign, using psychographic profiles built from the harvested data to deliver targeted political advertising. The data is also reportedly used in connection with the UK's Leave.EU Brexit campaign.
17 March 2018: Christopher Wylie blows the whistle. The Guardian and The New York Times publish simultaneously. Facebook's stock drops, erasing over $37 billion in market capitalisation in days. The total market cap loss eventually exceeds $100 billion. Days later, Cambridge Analytica's London offices are raided by the UK Information Commissioner's Office.
April 2018: Mark Zuckerberg testifies before both the US Senate and House of Representatives. 'It was my mistake, and I'm sorry,' he tells Congress. Facebook reveals the true number of affected users: 87 million, not the 50 million initially reported.
May 2018: Cambridge Analytica files for Chapter 7 bankruptcy. The EU's General Data Protection Regulation (GDPR) comes into force on the 25th of May; its timing, just weeks after the scandal broke, lends it enormous political momentum.

The Architecture Was the Vulnerability

The Cambridge Analytica scandal was not caused by a bug, a misconfiguration, or a security flaw in the traditional sense. It was caused by a deliberate design decision: Facebook's Graph API intentionally allowed third-party apps to access the data of users' friends without those friends' explicit consent. This was not a defect — it was a feature. The entire Facebook platform was designed to maximise data sharing, because data sharing was the foundation of Facebook's advertising business model. Cambridge Analytica simply used that architecture for a purpose Facebook found inconvenient.


270,000 downloads, 87 million victims.

The mechanics of the data harvesting are deceptively simple and illustrate a fundamental principle of platform security: the blast radius of a single compromised or malicious app is determined by the platform's data-sharing architecture, not by the app itself.

The Data Harvesting Chain
── Step 1: The App ─────────────────────────────────────────────────────
App: 'thisisyourdigitallife' — a personality quiz
Developer: Aleksandr Kogan (Cambridge University researcher)
Users: ~270,000 people downloaded the app
Consent: Users consented to share data 'for academic research'

── Step 2: The API Amplification ──────────────────────────────────────
Facebook's Graph API allowed the app to access:
- The installing user's profile, likes, birthday, location
- ALL of the installing user's friends' data (same categories)
Average Facebook user has ~338 friends
270,000 users × friends = up to 87 MILLION profiles harvested
Friends did NOT consent. Friends did NOT install the app.

── Step 3: The Handoff ────────────────────────────────────────────────
Kogan's company (GSR) passed the data to Cambridge Analytica
This violated Facebook's terms — but Facebook did not enforce them
CA built psychographic profiles: personality, fears, persuadability
Profiles matched to voter rolls for political microtargeting

── Step 4: The Weaponisation ──────────────────────────────────────────
CA delivered tailored political ads to individual voters
Messages crafted to exploit personality-specific vulnerabilities
Used in 2016 Trump campaign and UK Leave.EU Brexit campaign
87 million people's data weaponised without their knowledge
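The amplification in Step 2 is simple arithmetic. A minimal sketch, using the figures from the chain above; the overlap factor is our illustrative assumption, since the friend networks of different installers overlap heavily:

```python
# Blast radius of a single app under a friends-data-sharing API.
# Install and friend counts are from the article; overlap_factor is an
# illustrative assumption to account for overlapping friend networks.

def blast_radius(installs: int, avg_friends: int, overlap_factor: float) -> int:
    """Estimate distinct profiles exposed when an app can read every
    installer's friends as well as the installer's own profile."""
    raw = installs * (1 + avg_friends)   # installer + friends, with double counting
    return round(raw * overlap_factor)   # deduplicate overlapping friend networks

raw_reads = 270_000 * (1 + 338)
print(raw_reads)                          # 91530000 raw profile reads
print(blast_radius(270_000, 338, 0.95))  # ~87 million distinct profiles
```

Even with heavy deduplication, 270,000 installs expand to tens of millions of exposed profiles: the app contributes the installs, but the platform's architecture supplies the multiplier.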

Six failures that enabled the weaponisation of personal data.

1. Friends' Data Shared Without Consent
Detail: Facebook's API allowed apps to access the personal data of users' friends without those friends' explicit consent. A single user installing an app could expose the data of hundreds of friends who had no knowledge of the app's existence.
Consequence: This was the force multiplier. 270,000 app installs became 87 million profiles harvested. The architectural decision to share friends' data by default, rather than requiring explicit opt-in, is the single most consequential design failure in the history of social media.

2. No Meaningful Enforcement of Terms
Detail: Facebook's developer terms prohibited the sale of user data to third parties. Kogan violated these terms by passing data to Cambridge Analytica. Facebook's enforcement consisted of asking CA to delete the data, without verification, audit, or follow-up.
Consequence: Terms of service are meaningless without enforcement. Facebook knew of the violation in December 2015 and took no effective action for over two years. A simple audit or technical verification would have confirmed that the data had not been deleted.

3. No Retroactive Application of API Restrictions
Detail: When Facebook changed its API rules in 2014 to prevent apps from accessing friends' data, it did not apply the changes retroactively. Data already harvested under the old rules was not recalled, audited, or deleted.
Consequence: Changing the rules going forward without addressing the data already extracted is like changing the locks after the burglary but not recovering the stolen goods. The data Cambridge Analytica had already obtained remained in their possession.

4. Failure to Notify Users
Detail: Facebook learned of the data misuse in December 2015. It did not notify the 87 million affected users until April 2018, over two years later, and only after the story was broken by journalists.
Consequence: Users had no opportunity to take protective action. They could not change the information that had been harvested (likes, location, relationships) in the way they might change a compromised password. The delay in notification compounded the harm.

5. No Meaningful App Review Process
Detail: At the time Kogan's app was approved, Facebook did not conduct meaningful security or privacy reviews of third-party applications. Any developer could create an app that accessed broad categories of user data with minimal oversight.
Consequence: The absence of app review meant that Facebook had no visibility into what developers were doing with the data their platform provided. The platform was, in effect, a self-service data harvesting tool for anyone who created a developer account.

6. The Business Model Incentivised Oversharing
Detail: Facebook's advertising revenue depended on the breadth and depth of data it collected and shared. The API's permissive data-sharing architecture was not a bug; it was the engine of Facebook's business model. Restricting data access would have reduced the platform's value to developers and advertisers.
Consequence: This is the most uncomfortable failure because it implicates the platform's fundamental business model. The same data-sharing architecture that enabled Cambridge Analytica's harvesting was the architecture that made Facebook profitable. Security and revenue were in direct tension, and revenue won.
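The fix for the first failure is conceptually small: flip the default. A minimal sketch contrasting the pre-2014 default-allow model with an explicit opt-in model; all names here are our own illustration, not Facebook's actual data model:

```python
# Hypothetical sketch of two permission models for friends' data.
# Default-allow mirrors the pre-2014 Graph API behaviour described above;
# opt-in is the least-privilege alternative. All names are illustrative.

from dataclasses import dataclass, field

@dataclass
class User:
    name: str
    friends: list = field(default_factory=list)
    # Apps this user has explicitly authorised to read their data via a friend.
    friend_data_opt_ins: set = field(default_factory=set)

def readable_profiles_default_allow(installer: User, app_id: str) -> list:
    """Pre-2014 model: installing an app exposes every friend by default."""
    return [installer] + installer.friends

def readable_profiles_opt_in(installer: User, app_id: str) -> list:
    """Least-privilege model: a friend's data is exposed only if that
    friend explicitly opted in to this specific app."""
    return [installer] + [f for f in installer.friends
                          if app_id in f.friend_data_opt_ins]

alice = User("alice")
bob = User("bob")
carol = User("carol", friend_data_opt_ins={"quiz_app"})
alice.friends = [bob, carol]

print(len(readable_profiles_default_allow(alice, "quiz_app")))  # 3: alice, bob, carol
print(len(readable_profiles_opt_in(alice, "quiz_app")))         # 2: alice, carol
```

Under the opt-in model, Bob's data never reaches the app, whatever Alice installs; the blast radius collapses from the installer's entire network to the set of users who actually consented.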

Why this breach is different from every other.

The Cambridge Analytica scandal represents a fundamentally new category of data breach — one that demands a corresponding expansion of how we think about information security. In every previous breach we have analysed, the organisation holding the data was the victim of an external attack or an insider threat. In the Cambridge Analytica case, the platform holding the data was the enabler — its own deliberately designed features were the mechanism of the breach.

This has profound implications for every organisation that operates a platform with third-party integrations, that relies on third-party platforms for data processing, or that shares data with external partners through APIs. The security of your data is not determined solely by the security of your own systems — it is determined by the security, governance, and enforcement practices of every platform and partner with whom that data is shared. The Cambridge Analytica scandal demonstrated that a platform's business incentives, governance structures, and enforcement culture can be as much of a vulnerability as an unpatched server.

For organisations that provide APIs or platform integrations to third parties, the lesson is equally clear: you are responsible for what third parties do with the data your platform provides. Terms of service without enforcement are worthless. Access permissions without auditing are dangerous. And business models that incentivise the over-collection and over-sharing of personal data are, ultimately, security vulnerabilities waiting to be exploited.


Testing the platform, not just the perimeter.

The Cambridge Analytica scandal challenges the traditional scope of penetration testing — but it does not render testing irrelevant. Rather, it demands that organisations expand their testing to encompass platform security, API governance, and third-party data handling.

API Security Testing
A thorough API security assessment of Facebook's Graph API would have identified the excessive data permissions granted to third-party apps — particularly the ability to access friends' data without explicit consent. API testing should evaluate not only whether the API functions correctly, but whether the data it exposes is proportionate, necessary, and appropriately consented.
Third-Party App Review
A security review of the app ecosystem would have assessed whether third-party developers were complying with data handling policies. The complete absence of meaningful app review at Facebook meant that no one was checking whether developers were doing what they said they would with user data.
Data Governance Assessment
A data governance assessment would have identified the lack of mechanisms for verifying third-party data deletion, the absence of retroactive enforcement when API rules changed, and the failure to notify users of known data misuse. These are governance gaps that testing can identify and flag.
Privacy Impact Assessment
A privacy impact assessment — which should be a standard component of any platform security review — would have identified the disproportionate data access enabled by the friends' data sharing feature and the inadequate consent mechanisms for non-installing users.

Estimated Risk Reduction: Penetration Testing

We estimate that a comprehensive security assessment programme — encompassing API security testing, third-party app review, and data governance assessment — would have reduced the likelihood of a scandal of this nature by approximately 50–60%. This is lower than our estimates for breaches involving traditional vulnerabilities, reflecting the fact that the Cambridge Analytica scandal was primarily a governance and business model failure rather than a technical vulnerability. However, API testing and data governance assessments would have identified the excessive permissions and absent enforcement that were the proximate causes of the data harvesting.


When the framework meets the platform economy.

Cyber Essentials Plus was designed primarily for organisations protecting their own networks and systems against external internet-based threats. The Cambridge Analytica scandal — a platform governance failure rather than a technical intrusion — tests the boundaries of the framework. However, the principles underlying CE+ remain relevant, particularly when applied to the governance of third-party access and data handling.

User Access Control: The principle that access should be limited to the minimum necessary applies directly to API permissions. Facebook's API granted third-party apps access to friends' data, data that was not necessary for the app's stated function. Applying least-privilege principles to API design would have restricted data exposure.
Secure Configuration: The principle of configuring systems to minimise attack surface applies to platform configuration. Facebook's default API settings maximised data sharing rather than minimising it. Secure-by-default configuration would have required explicit opt-in for friends' data access rather than default access.
Firewalls & Boundaries: The principle of controlling data flows at boundaries applies to API boundaries. Facebook had no effective controls at the boundary between its platform and third-party developers: no auditing, no verification, no enforcement of data handling terms.
Malware Protection & Monitoring: The principle of monitoring for abuse applies to platform abuse. Facebook had no effective monitoring for anomalous data access patterns by third-party apps. The harvesting of data on 87 million users through a single app should have triggered detection and investigation.
Patch Management: The principle of keeping systems current applies to API governance. When Facebook changed its API rules in 2014, it did not 'patch' existing data exposures retroactively. Effective patch management requires addressing existing vulnerabilities, not merely preventing new ones.
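The monitoring gap is also concrete and testable: even a crude volume threshold on per-app data reads would have flagged an app reading hundreds of profiles for every one of its installers. A minimal sketch, with illustrative thresholds and scaled-down numbers; a real baseline would be statistical rather than a fixed multiplier:

```python
# Sketch of volume-based anomaly detection for per-app data access.
# The multiplier threshold and all identifiers are illustrative.

from collections import Counter

profile_reads = Counter()

def record_read(app_id: str, profile_id: str) -> None:
    """Log one profile read by an app (counts reads, not distinct profiles)."""
    profile_reads[app_id] += 1

def flag_anomalous(install_counts: dict, multiplier: int = 10) -> list:
    """Flag apps whose profile reads far exceed their installer count."""
    return [app for app, reads in profile_reads.items()
            if reads > install_counts.get(app, 0) * multiplier]

install_counts = {"quiz_app": 270}       # scaled-down illustration
for i in range(87_000):                   # the app reads 87,000 profiles
    record_read("quiz_app", f"profile_{i}")

print(flag_anomalous(install_counts))     # ['quiz_app']
```

An app with 270 installers reading 87,000 profiles exceeds any sane per-installer baseline by orders of magnitude; the same ratio at full scale (270,000 installs, 87 million profiles) is exactly the pattern that went undetected.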

Estimated Risk Reduction: CE+ Principles

We estimate that rigorous application of CE+ principles to platform governance — particularly user access control (API permissions) and secure configuration (default data sharing settings) — would have reduced the risk by approximately 30–40%. This is our lowest CE+ estimate in the series, reflecting the fact that the scandal was driven primarily by business model and governance decisions that sit outside the traditional scope of technical security controls.

Combined Estimated Risk Reduction: 55–70%

The combined effect of comprehensive platform security testing and CE+ principles would have reduced the likelihood by approximately 55–70%. This is the lowest combined estimate in our series — reflecting the fundamental truth that the Cambridge Analytica scandal was, at its core, a business model and governance failure that technical controls alone could not fully address. The remaining 30–45% of risk reflects the deliberate design choices that prioritised data sharing over data protection, and the business incentives that made those choices profitable.


The scandal that changed the internet.

GDPR's Perfect Timing
The GDPR came into force on the 25th of May 2018 — just weeks after the Cambridge Analytica scandal broke. The scandal gave the regulation enormous political momentum and public support, transforming it from a technical compliance exercise into a cultural moment. GDPR's requirements for explicit consent, data minimisation, and the right to erasure directly address the failures that enabled Cambridge Analytica's data harvesting.
The $5 Billion Fine
In July 2019, the FTC imposed a $5 billion fine on Facebook — the largest privacy penalty in history, 20 times larger than the previous record. The fine was accompanied by a 20-year settlement order requiring an independent privacy committee, compliance officers, and personal certification by the CEO. Critics argued it represented just 23 days of Facebook's profit.
API Access Revolution
Facebook drastically restricted third-party API access, removing the ability for apps to access friends' data and implementing a more rigorous app review process. Other platforms followed suit. The era of permissive API access — which had powered a generation of social media innovation — came to an abrupt end.
The #DeleteFacebook Movement
The scandal triggered the #DeleteFacebook movement — a public expression of lost trust that, whilst it did not materially reduce Facebook's user base, signalled a fundamental shift in public attitudes toward social media platforms and data privacy. Only 41% of Facebook users said they trusted the company after the scandal.
Congressional Testimony and Regulatory Attention
Zuckerberg's testimony before Congress marked the first time a major tech CEO was called to account for data practices at this level. The hearings revealed Congress's limited understanding of technology but catalysed bipartisan interest in data privacy regulation — interest that continues to shape legislative proposals today.
Cambridge Analytica Destroyed
Cambridge Analytica filed for bankruptcy in May 2018, just two months after the story broke. The firm that had claimed to possess a 'psychological warfare tool' was itself destroyed by the public exposure of its methods — a reminder that companies built on ethically questionable practices are inherently fragile.

Lessons for every organisation in the platform economy.

Platform Operators: Apply least privilege to API design. Third-party apps should have access only to the data necessary for their stated function. Friends' data, network data, and data from non-consenting users should never be accessible by default. Audit third-party data access regularly and enforce terms of service with real consequences.
Platform Operators: Implement meaningful app review. Every app accessing user data must be reviewed for compliance with data handling policies before and during its operation. Anomalous data access patterns must trigger investigation. Self-certification by developers is insufficient.
Organisations Using Platforms: Understand your platform risk. If your organisation relies on third-party platforms for data processing, marketing, or customer engagement, you must understand what data those platforms expose to third parties and what controls are in place. Your customers' data is only as secure as the platform's governance.
Organisations Using Platforms: Audit your third-party integrations. Review every third-party app, plugin, and integration connected to your platforms. Verify that each has the minimum necessary permissions. Remove integrations that are no longer needed. Monitor for anomalous data access.
All Organisations: Implement explicit, granular consent. The GDPR requires explicit consent for data processing. Go beyond the legal minimum — ensure that your users understand what data is collected, how it is used, and with whom it is shared. Consent must be meaningful, informed, and freely given.
All Organisations: Prepare for GDPR enforcement. The GDPR is now in force. Organisations handling the personal data of EU citizens must comply with its requirements for consent, data minimisation, the right to erasure, breach notification, and proportionate security. The Cambridge Analytica scandal has given regulators both the mandate and the public support for aggressive enforcement.
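Explicit, granular consent maps naturally onto purpose-scoped consent records: one record per user per processing purpose, never a blanket grant. A minimal sketch of what such storage might look like; the field and purpose names are our own, not a GDPR-mandated schema:

```python
# Sketch of purpose-scoped consent records in the spirit of the GDPR's
# explicit-consent requirement. Field and purpose names are illustrative.

from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class ConsentRecord:
    user_id: str
    purpose: str                  # one specific purpose, not a blanket grant
    granted_at: datetime
    withdrawn_at: Optional[datetime] = None

def may_process(records: list, user_id: str, purpose: str) -> bool:
    """Processing is lawful only under an active, purpose-specific consent."""
    return any(r.user_id == user_id and r.purpose == purpose
               and r.withdrawn_at is None for r in records)

records = [ConsentRecord("u1", "academic_research", datetime.now(timezone.utc))]
print(may_process(records, "u1", "academic_research"))   # True
print(may_process(records, "u1", "political_profiling")) # False: never consented
```

Under this model, the quiz-takers' consent 'for academic research' would never authorise political profiling, and the 87 million friends, with no consent record at all, could not be processed for any purpose.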

Data is not a product — data is people.

The Cambridge Analytica scandal is not, in the traditional sense, a data breach. No servers were hacked. No encryption was broken. No perimeter was penetrated. Instead, a platform's own features were used — as designed — to harvest the personal data of 87 million people and weaponise it for political manipulation. The implications of this are more profound, and more disturbing, than those of any conventional breach.

The scandal revealed that the architecture of the platforms we use every day — the APIs, the data-sharing defaults, the permission models — can be as much of a threat to our privacy as any hacker. It demonstrated that business models built on the monetisation of personal data create inherent tensions with the duty to protect that data. And it showed that governance, enforcement, and accountability matter as much as firewalls and encryption.

At its heart, the Cambridge Analytica scandal is a reminder that personal data is not an abstract commodity to be traded, shared, and monetised. It is an extension of the people it describes — their identities, their relationships, their vulnerabilities, their autonomy. When we treat data as a product, we dehumanise the people it represents. And when we fail to protect it, we fail them.

This article is the first in a two-part series examining the Cambridge Analytica scandal. An update examining subsequent developments — including the $5 billion FTC fine, the GDPR's impact, and the long-term consequences for platform governance — will be published in December 2018.


Do you know what data your APIs expose? Do you know what third parties are doing with it?

Our security assessments now encompass API security testing, third-party integration reviews, and data governance assessments — the controls that the Cambridge Analytica scandal proved are essential. We help organisations operating in the platform economy understand and manage the risks that traditional perimeter security cannot address.

Next Step

Not sure where to start?

We'll scope your test for free and tell you exactly what you need. No obligation, no hard sell.

Free Scoping Call
