> scandal.update —— target: Facebook / Cambridge Analytica —— months_elapsed: 9 —— ftc_fine: $5,000,000,000 —— platform_model: UNDER_SCRUTINY_
Six months ago, we published our initial deep-dive analysis of the Cambridge Analytica scandal — the incident that redefined public understanding of data privacy, social media power, and the weaponisation of personal information. In that article, we examined how Facebook's own platform architecture enabled the harvesting of 87 million users' data through a single personality quiz app, and we assessed how expanded approaches to penetration testing and Cyber Essentials Plus principles could have reduced the risk.
Nine months since the story broke, the consequences continue to accumulate. Facebook faces the largest fine in FTC history. Cambridge Analytica has ceased to exist. The GDPR has come into force and is reshaping data practices globally. Mark Zuckerberg has testified before Congress and the European Parliament. And the fundamental questions raised by the scandal — about consent, platform governance, and the tension between business models and privacy — remain unresolved.
| Regulator | Action | Amount/Outcome |
|---|---|---|
| US Federal Trade Commission | Investigation into whether Facebook violated a 2012 consent decree requiring it to protect user data. The FTC found continued violations including sharing friends' data with apps, enabling facial recognition by default, and using phone numbers collected for security purposes for advertising. | $5 billion fine — the largest privacy penalty in history. 20-year settlement order requiring an independent privacy committee on Facebook's board, compliance officers, and personal quarterly certifications from CEO Zuckerberg. False certification carries individual civil and criminal penalties. |
| UK Information Commissioner's Office | Investigation into whether Facebook breached UK data protection law by failing to safeguard user data and failing to be transparent about how data was harvested by third parties. | £500,000 fine — the maximum permitted under the pre-GDPR Data Protection Act 1998. The ICO noted that had the GDPR been in force at the time of the breach, the fine could have been up to 4% of global turnover — potentially exceeding $2 billion. |
| US Securities and Exchange Commission | Investigation into whether Facebook misled investors about the risk of data misuse. The SEC alleged that Facebook stated user data 'may' be improperly accessed when it knew of actual misuse since 2015. | $100 million penalty for misleading disclosures. Facebook did not admit or deny the allegations. |
| FTC vs Cambridge Analytica | Administrative complaint against Cambridge Analytica, CEO Alexander Nix, and Aleksandr Kogan for deceptive practices in harvesting personal data. | Nix and Kogan agreed to settlements restricting their future business practices. Cambridge Analytica, having filed for bankruptcy, did not respond to the complaint. |
| User Class Action | US class-action lawsuit on behalf of affected Facebook users alleging privacy violations. | Settled in 2022 for $725 million — one of the largest data privacy settlements in history. An additional $8 billion shareholder lawsuit against Zuckerberg and Meta's board is proceeding through the courts as of 2025. |
The combined financial penalties and settlements stemming from the Cambridge Analytica scandal now exceed $5.8 billion — comprising the $5 billion FTC fine, the $725 million user settlement, the $100 million SEC penalty, and the £500,000 ICO fine. The ongoing $8 billion shareholder lawsuit could push the total far higher. These figures do not include the estimated $100 billion in market capitalisation lost in the immediate aftermath of the story, nor the incalculable cost to Facebook's reputation and user trust.
The EU's General Data Protection Regulation came into force on the 25th of May 2018 — just weeks after the Cambridge Analytica scandal had dominated global headlines. The timing was fortuitous for the regulation's proponents and devastating for its opponents. The scandal provided a vivid, real-world illustration of precisely the harms that the GDPR was designed to prevent.
In the wake of the scandal, Facebook implemented a series of platform reforms designed to prevent future data harvesting of this nature. The most significant was the restriction of third-party API access: apps can no longer access friends' data, and a more rigorous app review process has been implemented. Facebook also introduced tools allowing users to see and manage which apps have access to their data.
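To illustrate the shape of that reform, here is a minimal sketch of a least-privilege permission gate, in which scopes touching other people's data are denied unless an app review has explicitly approved them. The scope names, function, and `App` review model are our own hypothetical illustration, not Facebook's actual API:

```python
# Hypothetical sketch of a least-privilege API permission gate.
# Scope names and the review model are illustrative, not Facebook's real API.

ALLOWED_DEFAULT_SCOPES = {"public_profile", "email"}
REVIEW_REQUIRED_SCOPES = {"user_friends", "user_posts", "user_likes"}

def grant_scopes(requested: set[str], approved_by_review: set[str]) -> set[str]:
    """Grant only default scopes plus those explicitly approved in app review.

    Anything not on either list is silently denied, so new data categories
    remain unavailable until someone consciously opens them up.
    """
    granted = requested & ALLOWED_DEFAULT_SCOPES
    granted |= requested & REVIEW_REQUIRED_SCOPES & approved_by_review
    return granted

# A quiz app asking for friends' data without review approval gets only
# the minimal default scope:
print(grant_scopes({"public_profile", "user_friends"}, approved_by_review=set()))
# -> {'public_profile'}
```

The design choice worth noting is the default-deny posture: the pre-2015 Graph API did the opposite, granting friends' data by default and relying on developers to behave.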
However, critics have argued that these reforms address the symptom rather than the cause. The fundamental business model — collecting vast quantities of personal data and monetising it through targeted advertising — remains unchanged. Facebook's 2018 revenue was $55.8 billion; by 2023, Meta's annual revenue had reached $134.9 billion. The $5 billion FTC fine amounted to roughly a month of revenue. The financial incentive to collect and monetise personal data continues to dwarf the cost of regulatory penalties.
For organisations assessing their own platform and data practices, the lesson is clear: reforms imposed after a scandal are always more costly, more disruptive, and less credible than controls built in from the start. Facebook's post-scandal reforms were necessary, but they cannot undo the harm already inflicted on 87 million users whose data was weaponised without their consent.
The Cambridge Analytica scandal carries the lowest risk reduction estimates in our series — not because the security measures are ineffective, but because this scandal was driven primarily by deliberate business and governance decisions rather than accidental technical failures. Technical security testing and CE+ controls can identify and address many of the symptoms, but they cannot fundamentally alter a business model that incentivises the over-collection and over-sharing of personal data. That requires governance reform, regulatory enforcement, and cultural change.
The Cambridge Analytica scandal carries lessons that extend far beyond Facebook and far beyond social media. Any organisation that operates APIs, integrates with third-party platforms, or shares data with external partners must confront the governance questions that this scandal raised.
| Recommendation | Detail |
|---|---|
| Audit Every Third-Party Integration | Review every third-party app, API connection, and data-sharing arrangement. For each, verify: what data is shared, under what terms, with what consent, and what happens to the data after it leaves your systems. The Cambridge Analytica scandal was enabled by a failure to audit what developers were doing with the data Facebook's API provided. |
| Apply Least Privilege to APIs | Design your APIs to expose the minimum data necessary for each integration. Default to restrictive permissions. Require explicit justification for broad data access. Facebook's default of sharing friends' data was the root cause — a least-privilege API design would have prevented the entire scandal. |
| Enforce Data Handling Terms | Terms of service that prohibit data misuse are meaningless without enforcement. Implement technical controls (auditing, monitoring, access logging) and contractual provisions (audit rights, penalties) that enable you to verify compliance and take action when violations are detected. Facebook asked Cambridge Analytica to delete data and took their word for it — this is not enforcement. |
| Implement Anomaly Detection for Data Access | A single app harvesting the data of 87 million users should have triggered automated alerts. Implement monitoring that detects unusual patterns of data access — bulk downloads, disproportionate data requests, access to data categories beyond the app's stated purpose — and investigate anomalies promptly. |
| Apply GDPR Principles Even Outside the EU | The principles embodied in the GDPR — explicit consent, data minimisation, purpose limitation, accountability, and data protection by design — represent best practice for data governance regardless of your jurisdiction. Organisations that adopt these principles proactively will be better positioned for regulatory compliance globally. |
| Conduct Regular Privacy Impact Assessments | Before launching any new feature, integration, or data-sharing arrangement, conduct a privacy impact assessment that evaluates the potential harm to users. If the Cambridge Analytica personality quiz had been subject to a meaningful privacy assessment, the disproportionate data access it required would have been flagged. |
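As a concrete illustration of the anomaly-detection recommendation above, here is a minimal sketch that flags any app whose daily record pulls dwarf the population-wide median. The threshold multiplier, record format, and function name are our own illustrative assumptions, not a production baseline:

```python
# Minimal sketch of volume-based anomaly detection for API data access.
# The 10x-median threshold and the input format are illustrative assumptions.

def flag_bulk_access(daily_pulls: dict[str, int], multiplier: int = 10) -> list[str]:
    """Return apps whose daily record pulls exceed `multiplier` times the
    median across all apps. A single quiz app touching millions of profiles
    should stand out immediately against this baseline."""
    if not daily_pulls:
        return []
    counts = sorted(daily_pulls.values())
    median = counts[len(counts) // 2]
    threshold = max(median * multiplier, 1)
    return [app for app, n in daily_pulls.items() if n > threshold]

pulls = {"quiz_app": 5_000_000, "weather_app": 1_200, "news_app": 900}
print(flag_bulk_access(pulls))  # -> ['quiz_app']
```

In practice this would run against access logs and feed an investigation queue; the point is that even a crude statistical baseline would have surfaced an app pulling 87 million profiles.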
The Cambridge Analytica scandal exposed a fundamental problem with the consent model that underpins most digital data practices. The 270,000 users who downloaded the personality quiz app clicked 'I agree' to a set of permissions — but they did not understand that they were consenting to the harvesting of their friends' data, to the sale of that data to a political consulting firm, or to the creation of psychographic profiles for political manipulation. Their consent was technically present but practically meaningless.
This is the consent problem that the GDPR attempts to address. Under the GDPR, consent must be freely given, specific, informed, and unambiguous. It must be a clear affirmative action. Pre-ticked boxes, bundled consent, and buried terms of service do not constitute valid consent. And consent must be as easy to withdraw as it was to give.
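Those conditions are concrete enough to encode as a checklist. The following sketch of a consent-record check uses field names that are our own illustration, not regulatory text; the four conditions mirror the GDPR's requirement that consent be freely given, specific, informed, and unambiguous:

```python
# Sketch of a GDPR-style consent record check. Field names are
# illustrative; each maps to one of the GDPR's consent conditions.
from dataclasses import dataclass

@dataclass
class ConsentRecord:
    purpose: str                  # specific: one purpose per record, never bundled
    plain_language_notice: bool   # informed: user saw an intelligible explanation
    affirmative_action: bool      # unambiguous: clicked/ticked, never pre-ticked
    service_conditional: bool     # freely given fails if service is withheld otherwise
    withdrawable: bool            # must be as easy to withdraw as to give

def is_valid_consent(c: ConsentRecord) -> bool:
    return (bool(c.purpose)
            and c.plain_language_notice
            and c.affirmative_action
            and not c.service_conditional
            and c.withdrawable)

# A bundled, pre-ticked box with buried terms fails immediately:
bundled = ConsentRecord("analytics+ads+sharing", plain_language_notice=False,
                        affirmative_action=False, service_conditional=True,
                        withdrawable=False)
print(is_valid_consent(bundled))  # -> False
```

A real compliance assessment involves far more than a boolean check, but modelling consent per purpose, rather than as one blanket flag, is exactly the discipline that bundled terms-of-service consent lacks.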
For organisations that collect personal data through any channel — websites, apps, forms, integrations — the Cambridge Analytica scandal demands a fundamental reassessment of consent practices. Do your users actually understand what they are consenting to? Could they explain, in their own words, what data you collect, how you use it, and with whom you share it? If the answer is no, your consent is not meaningful — and under the GDPR, it may not be legally valid.
The uncomfortable truth is that much of the digital economy is built on consent that is neither informed nor meaningful. Users click 'I agree' because they want to use the service, not because they understand or accept the data practices described in the terms. The Cambridge Analytica scandal demonstrated the consequences of building an entire industry on this fiction of consent — and the GDPR represents the beginning of an attempt to make consent real.
The Cambridge Analytica scandal's lasting legacy extends far beyond any single fine, lawsuit, or regulatory action. It fundamentally altered public consciousness about data privacy, social media power, and the relationship between technology platforms and democratic institutions.
The Cambridge Analytica scandal is the breach that proved information security cannot be reduced to firewalls, encryption, and patching. It demonstrated that the governance of data — who can access it, under what terms, with what consent, subject to what oversight, and in service of what business model — is as critical to protecting people's privacy and autonomy as any technical control.
For the security industry, the lesson is humbling. We can test systems, identify vulnerabilities, and recommend controls. But if the business model incentivises over-collection, if the governance culture prioritises growth over protection, and if enforcement of data handling terms is treated as a cost rather than a responsibility, then the most sophisticated security programme in the world will not prevent the next Cambridge Analytica.
At Hedgehog Security, we have expanded our assessment methodology to encompass API security, platform governance, third-party integration reviews, and data handling audits — because the Cambridge Analytica scandal taught us, and the entire security industry, that protecting data requires protecting the systems, the governance, and the culture through which data flows.
This article concludes our two-part deep dive into the Cambridge Analytica scandal. Our next Breach Deep Dive will examine a different incident. To suggest breaches for future analysis, or to discuss any of the issues raised in this series, please contact us.
From API security testing and third-party integration reviews to GDPR compliance assessments and data governance audits, Hedgehog Security helps organisations protect the data that people entrust to them — not just against hackers, but against the governance failures that can be equally devastating.
We'll scope your test for free and tell you exactly what you need. No obligation, no hard sell.
Free Scoping Call