Back in 2013, CISPA (the Cyber Intelligence Sharing and Protection Act) triggered one of the largest online protests since the SOPA blackout. The bill proposed giving private companies legal immunity to share user data with the U.S. government in the name of cybersecurity. Critics saw it as a surveillance blank check. Congress shelved it after massive public backlash, but the underlying tension between national security and digital privacy never went away.
That tension has only intensified since then. Over the past decade and a half, governments worldwide have passed sweeping surveillance and content regulation laws that affect billions of internet users. Some of these laws address legitimate security threats. Others hand authorities broad powers with minimal oversight. Understanding this legislative landscape is no longer optional for anyone who uses the internet, which at this point means virtually everyone on the planet.
SOPA and PIPA: The Legislation That Sparked a Movement
The Stop Online Piracy Act (SOPA) and its Senate companion, the PROTECT IP Act (PIPA), were introduced in late 2011 with the stated goal of combating foreign websites hosting pirated content. The entertainment industry backed both bills aggressively, arguing they needed stronger enforcement tools against overseas piracy hubs that operated outside U.S. jurisdiction.
The problem was how these bills proposed to achieve that goal. SOPA would have allowed the Department of Justice to obtain court orders forcing ISPs to block DNS resolution for targeted websites, payment processors to cut off funding, and search engines to delist entire domains. Security researchers warned that DNS blocking would undermine the integrity of the internet’s naming system and conflict with ongoing efforts to deploy DNSSEC, the security extension that cryptographically signs DNS records so clients can detect forged responses.
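The conflict the researchers described can be sketched in a few lines. This is a toy model, not any real resolver's code; the domain names, addresses, and blocklist are illustrative. The point it shows: a resolver that forges answers for court-blocked domains produces responses that cannot carry the zone's valid DNSSEC signature, so a validating client sees the block as indistinguishable from an attack.

```python
from __future__ import annotations

# Hypothetical court-ordered blocklist and signed zone data (illustrative).
BLOCKLIST = {"piracy-site.example"}
AUTHORITATIVE = {
    "piracy-site.example": "203.0.113.7",
    "news-site.example": "198.51.100.10",
}

def censoring_resolve(name: str) -> tuple[str | None, bool]:
    """Return (address, signature_valid) as a DNSSEC-validating client sees it."""
    if name in BLOCKLIST:
        # Forged answer: the censoring resolver cannot produce the zone's
        # RRSIG, so validation fails and the client treats it as bogus.
        return ("0.0.0.0", False)
    # Genuine answer, accompanied by a verifiable signature.
    return (AUTHORITATIVE.get(name), True)

print(censoring_resolve("piracy-site.example"))   # ('0.0.0.0', False)
print(censoring_resolve("news-site.example"))     # ('198.51.100.10', True)
```

A client that enforces validation would refuse the forged answer entirely, while a client that tolerates validation failures loses DNSSEC's protection against real attackers, which is exactly the trade-off SOPA's critics warned about.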
On January 18, 2012, Wikipedia went dark. Reddit shut down. Google blacked out its logo. Over 115,000 websites participated in the protest, and more than 10 million people signed petitions. Congressional offices received so many calls that phone lines crashed. Within 48 hours, dozens of co-sponsors withdrew their support. Both bills were effectively dead by the end of January.
The SOPA/PIPA fight proved something important: coordinated internet activism could defeat well-funded lobbying campaigns. That lesson shaped every surveillance and censorship debate that followed.
CISPA: Cybersecurity as a Surveillance Gateway
CISPA took a different approach than SOPA. Instead of targeting piracy, it focused on cybersecurity threats. The bill would have allowed companies like Google, Facebook, and AT&T to share “cyber threat information” with the National Security Agency and other government bodies without any court order or warrant requirement. Companies sharing data under CISPA would receive complete legal immunity from privacy lawsuits.
The bill’s language was deliberately vague about what qualified as a “cyber threat.” Privacy advocates pointed out that this vagueness meant companies could share virtually any user data and claim cybersecurity justification. The bill also lacked any requirement for companies to strip personally identifiable information before handing data to the government. Your emails, browsing history, and private messages could all flow to intelligence agencies without your knowledge.
CISPA passed the House in April 2013 but never reached a Senate vote. The White House issued a veto threat, and privacy organizations mounted campaigns that generated millions of petition signatures. The bill’s failure, combined with the Snowden revelations just two months later, permanently changed public attitudes toward government surveillance programs.
The Snowden Revelations and Their Legislative Fallout
In June 2013, Edward Snowden leaked thousands of classified documents from the NSA to journalists at The Guardian and The Washington Post. The documents revealed surveillance programs of staggering scope. PRISM let the NSA collect user data from Google, Apple, Facebook, Microsoft, and Yahoo, with the leaked slides describing “direct access” to the companies’ servers, a characterization the companies disputed. XKeyscore could search virtually everything a user did on the internet. Upstream collection tapped directly into fiber-optic cables carrying internet traffic.
The revelations showed that the NSA had been collecting bulk phone metadata on millions of Americans under Section 215 of the Patriot Act, a provision that was never intended for mass surveillance when Congress passed it after 9/11. The agency had also been collecting internet communications in bulk under Section 702 of the Foreign Intelligence Surveillance Act (FISA), which was supposed to target only foreign nationals.
Public outrage forced Congress to act. The USA FREEDOM Act, signed into law in June 2015, ended the NSA’s bulk collection of phone metadata and required the government to use specific selection terms when requesting records from phone companies. It was the first law to impose meaningful limits on NSA surveillance since the original FISA legislation in 1978. Many privacy advocates considered it a modest but insufficient reform, since the law left Section 702 collection largely untouched.
Section 702 and FISA Reauthorization Battles
Section 702 of FISA has become the most contested surveillance authority in American law. The provision authorizes the government to collect communications of foreign targets located outside the United States without individual warrants. The catch is that this collection inevitably sweeps up enormous quantities of communications involving Americans who are in contact with foreign targets.
The FBI has repeatedly searched this database for information about Americans, a practice known as “backdoor searches” that critics say amounts to warrantless surveillance of U.S. citizens. Court documents have revealed that the FBI conducted hundreds of thousands of queries using American identifiers, including searches related to domestic political protests and journalists.
Congress reauthorized Section 702 in April 2024 after months of heated debate, extending it through 2026. The reauthorization expanded the definition of “electronic communications service providers” who can be compelled to assist with surveillance, which civil liberties groups warn could force a wider range of businesses to serve as surveillance intermediaries. A proposed amendment requiring warrants for searches of Americans’ communications failed by a narrow margin in the House.
Europe’s Regulatory Revolution: GDPR, DSA, and the AI Act
While the United States has approached internet regulation through narrow, sector-specific laws, the European Union has pursued comprehensive frameworks that reshaped how technology companies operate globally. The General Data Protection Regulation (GDPR), which took effect in May 2018, established the principle that individuals have fundamental rights over their personal data. Companies must obtain clear consent before collecting data, provide users access to their stored information, and delete data upon request.
GDPR’s enforcement has teeth. Meta received a 1.2 billion euro fine in 2023 for transferring European user data to the United States without adequate privacy protections. Amazon, Google, and TikTok have all faced nine-figure penalties. The regulation’s extraterritorial reach means any company serving European users must comply, regardless of where the company is headquartered. This has effectively turned GDPR into a de facto global standard, since most tech companies find it easier to apply its protections universally rather than maintain separate data practices for different regions.
The Digital Services Act (DSA), which became fully enforceable in February 2024, goes beyond data protection to regulate platform content and algorithmic practices. Very large platforms with more than 45 million EU users must conduct annual risk assessments of their algorithmic systems, provide researchers with data access, and give users the option to opt out of recommendation algorithms based on profiling. The DSA also bans targeted advertising to minors and prohibits using sensitive personal data for ad targeting.
The EU AI Act, which entered into force in August 2024, is the world’s first comprehensive law governing artificial intelligence. It classifies AI systems by risk level and bans certain applications outright, including real-time biometric surveillance in public spaces (with narrow law enforcement exceptions), social scoring systems, and AI that manipulates human behavior. High-risk AI systems used in hiring, credit scoring, and law enforcement face strict transparency and auditing requirements.
The UK Online Safety Act and Content Regulation
The United Kingdom’s Online Safety Act, which received Royal Assent in October 2023, represents one of the most ambitious attempts to regulate online content outside of authoritarian regimes. The law requires platforms to proactively remove illegal content, protect children from harmful material, and enforce their own terms of service consistently. Ofcom, the communications regulator, can fine companies up to 10% of their global annual turnover for non-compliance.
The most contentious provision involves end-to-end encryption. The Act gives Ofcom the power to require platforms to use “accredited technology” to scan for child sexual abuse material, even in encrypted messages. Signal and WhatsApp both threatened to withdraw from the UK market rather than compromise their encryption. The government has since indicated it will not immediately enforce this provision against encrypted services, but the legal authority remains on the books and could be activated at any time.
This encryption debate is not unique to the UK. Australia passed the Telecommunications and Other Legislation Amendment (Assistance and Access) Act in 2018, which can compel companies to build capabilities to decrypt communications. The FBI has repeatedly called for “responsible encryption” that allows law enforcement access. Cryptographers consistently maintain that there is no way to create a backdoor that only authorized parties can use. Any vulnerability built for law enforcement will eventually be discovered and exploited by malicious actors.
Section 230 and the Platform Liability Debate
Section 230 of the Communications Decency Act, passed in 1996, has been called “the twenty-six words that created the internet.” The provision shields platforms from liability for content posted by their users while allowing them to moderate content without being treated as publishers. Without Section 230, platforms would face an impossible choice: either moderate nothing and host every lawsuit-worthy post, or moderate everything and accept publisher liability for anything they miss.
Both political parties in the United States have called for reforming or repealing Section 230, though for different reasons. Conservatives argue that platforms use their moderation powers to suppress right-leaning viewpoints. Progressives argue that platforms profit from harmful content while hiding behind legal immunity. Several reform proposals have been introduced in Congress, including the EARN IT Act, which would condition Section 230 protections on platforms following “best practices” for preventing child exploitation.
Privacy advocates warn that the EARN IT Act’s best practices would effectively mandate client-side scanning of all messages before encryption, destroying the privacy guarantees of end-to-end encrypted services. The bill has been introduced multiple times since 2020 and continues to gain bipartisan support despite opposition from security researchers and civil liberties organizations.
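The "client-side scanning" model at issue can be made concrete with a short sketch. This is illustrative only, not any proposed system's design: real deployments are described as using perceptual image hashes such as PhotoDNA rather than the plain SHA-256 used here, and the blocklist below is invented. What the sketch shows is the core objection: content is inspected in plaintext on the device before encryption, so "end-to-end encrypted" no longer means only the endpoints see the message.

```python
import hashlib

# Hypothetical blocklist of fingerprints of known prohibited content.
KNOWN_BAD_HASHES = {hashlib.sha256(b"known-bad-bytes").hexdigest()}

def permitted_to_encrypt(payload: bytes) -> bool:
    """Scan a payload before encryption; True means it may be sent.

    The scan necessarily runs on the plaintext, which is why critics
    argue this model hollows out end-to-end encryption guarantees.
    """
    return hashlib.sha256(payload).hexdigest() not in KNOWN_BAD_HASHES

print(permitted_to_encrypt(b"dinner at 7?"))      # True
print(permitted_to_encrypt(b"known-bad-bytes"))   # False
```

Exact-hash matching like this is trivially evaded by changing one byte, which is why real proposals lean on fuzzier perceptual matching; that fuzziness, in turn, is the source of the false-positive concerns security researchers raise.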
The TikTok Ban and Foreign App Restrictions
In April 2024, President Biden signed legislation giving ByteDance, TikTok’s Chinese parent company, roughly nine months to divest its U.S. operations or face a nationwide ban. The law was upheld by the Supreme Court in January 2025 on national security grounds, with the Court accepting the government’s argument that Chinese ownership of a platform with 170 million American users posed an unacceptable intelligence risk.
TikTok briefly went dark in the U.S. before President Trump issued an executive order pausing enforcement. As of early 2026, the app remains available while negotiations over a potential sale continue, though the legal authority to ban it remains in effect. The situation has set a precedent that Congress can restrict access to foreign-owned communication platforms based on national security concerns, a power that some legal scholars worry could be applied far more broadly in the future.
India banned TikTok along with dozens of other Chinese apps in 2020, citing sovereignty and security concerns. The European Commission banned TikTok from staff devices in 2023. These actions reflect a broader fragmentation of the global internet along geopolitical lines, sometimes called the “splinternet,” where national borders increasingly determine which services citizens can access.
VPN Crackdowns, Encryption Wars, and Digital Resistance
Governments that restrict internet access have increasingly targeted the tools people use to bypass those restrictions. China’s Great Firewall has become progressively more sophisticated at detecting and blocking VPN traffic. Russia passed a law in 2017 banning VPNs that don’t comply with government censorship requirements, then escalated enforcement significantly after 2022. Iran blocked VPN protocols during the Mahsa Amini protests in 2022, cutting off millions from the outside internet during a critical period of civil unrest.
The encryption debate extends beyond messaging apps. The European Commission proposed the “Chat Control” regulation in 2022, which would require platforms to scan all private messages, including encrypted ones, for child sexual abuse material. The proposal faced fierce opposition from privacy advocates and was significantly watered down, but various forms of the legislation continue to be debated in the European Parliament. The fundamental tension remains unresolved: governments want the ability to read any communication when they deem it necessary, while security experts insist that any mechanism enabling this creates vulnerabilities that endanger everyone.
Decentralized technologies have emerged as a response to increased centralized control. The Tor network, Signal protocol, and decentralized social networks like Mastodon and Bluesky represent technical approaches to preserving privacy and free expression. However, these tools remain niche compared to mainstream platforms, and governments have begun targeting their infrastructure as well. Russia blocked Tor’s main domain in 2021, forcing the project to develop new bridge distribution methods.
The State of Digital Rights in 2026
The surveillance and privacy landscape in 2026 looks fundamentally different from what existed when CISPA was debated in 2013. The European Union has established itself as the world’s primary technology regulator, with GDPR, the DSA, and the AI Act creating a framework that companies worldwide must navigate. The United States still lacks a comprehensive federal privacy law, though several states have passed their own data protection statutes, creating a patchwork that increases compliance costs without providing consistent protections.
Artificial intelligence has introduced new dimensions to surveillance that legislation has barely begun to address. Facial recognition systems are deployed in cities worldwide. AI-powered content analysis can flag “suspicious” communications at scale. Predictive policing algorithms target specific neighborhoods based on historical data that reflects decades of biased enforcement. The EU’s AI Act is the only major law attempting to regulate these systems comprehensively, and its enforcement mechanisms are still being developed.
The fight over internet surveillance has never been a simple contest between privacy and security. Each piece of legislation involves tradeoffs between national security needs, law enforcement capabilities, corporate interests, individual rights, and the technical architecture of the internet itself. The SOPA blackout demonstrated that public pressure can defeat bad legislation. The Snowden revelations proved that secret surveillance programs will eventually be exposed. GDPR showed that strong privacy regulation can reshape global corporate behavior.
What has changed most dramatically since 2013 is the scale. Every person with a smartphone generates a continuous stream of data that governments and corporations can collect, analyze, and act upon. The laws governing that collection determine the boundaries of digital freedom for billions of people. Staying informed about these laws is not just an exercise in civic awareness. It is a practical necessity for protecting your own digital life.
Frequently Asked Questions
What was the difference between CISPA and SOPA?
SOPA targeted online piracy by allowing DNS blocking and delisting of websites hosting copyrighted content. CISPA focused on cybersecurity by allowing private companies to share user data with government agencies without warrants. Both faced massive public opposition but for different privacy and censorship concerns.
Why is Section 702 of FISA controversial?
Section 702 authorizes surveillance of foreign targets but inevitably collects communications involving Americans. The FBI has used this database to search for information about U.S. citizens without warrants, a practice critics call backdoor surveillance that violates Fourth Amendment protections.
How does GDPR affect companies outside Europe?
Any company that serves European users must comply with GDPR regardless of where the company is headquartered. Most major tech companies apply GDPR protections globally rather than maintaining separate data practices, making it a de facto worldwide privacy standard.
Can governments create encryption backdoors that only law enforcement can use?
Cryptographers consistently maintain that no such backdoor is possible. Any vulnerability built into an encryption system will eventually be discovered and exploited by malicious actors. This is why security experts oppose legislation like the EARN IT Act and the UK Online Safety Act’s scanning provisions.
Are VPNs still legal in most countries?
VPNs remain legal in most Western democracies, though some countries restrict their use. China, Russia, and Iran have all passed laws limiting or banning VPNs that do not comply with government surveillance requirements. Even in countries where VPNs are legal, some proposed legislation could affect encrypted communications.