Privacy comic strip from the website XKCD

Your Privacy Policy Doesn’t Mean A Thing: Regulating Privacy on the Web

In the beginning, the web had no memory. When you followed a link to a new page, everything you did on the last page was erased. There was a fresh start with every click.

It was Netscape that gave the web a memory. Pretty early on, actually, when they realized there were a few issues with a forgetful version of the web. Let’s say you added something to your shopping cart on an e-commerce site. As soon as you clicked to the next page to continue shopping, your shopping cart was empty. That was a bad user experience. So Netscape built a persistent state store: tiny, reusable pieces of data stored in the browser that let websites hand off information between page loads. That way, the shopping cart would stay full.

In computer programming, there’s a word for identifying bits of information passed between machines. They’re called magic cookies, named for the message embedded in fortune cookies. Or maybe it’s for the cookie trail they leave. In either case, when Netscape added a persistent state object to their browser in 1994, they called them cookies as a tongue-in-cheek reference.

Cookies. Like that name suggests, they were meant to be a mostly harmless way for websites to hold onto a little bit of information between page loads. To keep their capabilities, and risks, low, a number of limitations were built right in. Cookie storage was intentionally small. Data could never be passed between websites; each cookie was tied to the site that set it. And there was no identifiable information embedded in a cookie, so sites could relate data to a particular session, but couldn’t really figure out who that session belonged to. If you wanted personal information from a site visitor, you had to actually ask them.
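In modern terms, that handoff happens through HTTP headers: the server sends a `Set-Cookie` header, and the browser echoes the value back on every subsequent request to that site. A minimal sketch using Python’s standard-library `http.cookies` module (the `cart_id` name and value here are purely illustrative):

```python
from http.cookies import SimpleCookie

# Server side: stash a small piece of state in the visitor's browser.
# "cart_id" and its value are illustrative, not from any real site.
outgoing = SimpleCookie()
outgoing["cart_id"] = "abc123"
outgoing["cart_id"]["path"] = "/"        # valid across the whole site
outgoing["cart_id"]["max-age"] = 3600    # expire after an hour

# The header the server would attach to its HTTP response,
# e.g. "Set-Cookie: cart_id=abc123; Path=/; Max-Age=3600"
print(outgoing.output())

# On the next page load, the browser sends the value back verbatim
# in a "Cookie" header; the server parses it to recover the session.
incoming = SimpleCookie()
incoming.load("cart_id=abc123")
print(incoming["cart_id"].value)
```

Note that nothing in the exchange identifies the visitor personally; the cookie is just an opaque token, which is exactly why sites had to ask for personal details separately.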

Which is exactly what sites did.

For consumers, cookies were hard to find and even harder to understand. No one was paying attention to what they did or why they mattered. But what sites learned, as early as ’94, was that if you offered users something, say a $1 coupon, they’d be willing to enter their email into a form. Or their age. Or current zip code. Once you had that information, you could use cookies to track users across your site, and tie their behaviors and habits directly to personally identifiable and demographic information.

This was obviously concerning, and community groups formed to try to work the problem. Those efforts, though, were largely ineffective. On one such mailing list in December of 1995, programmer Koen Holtman pointed out that if sites gathered identifying information, like emails, and then worked together with other sites doing the same thing, they could actually track a user’s behavior across multiple sites and build up an ever-expanding pool of personal information.

His warning was almost immediately prophetic.

By the mid-to-late ’90s, sites had started creating federated networks to share data about users with one another. Worse than that, marketing agencies began compiling databases of tracked user data and selling them wholesale so sites could link up their own data with an external set. Consumers still didn’t fully understand the repercussions of the innocuous-sounding cookies embedded in their browsers, and though there were some regulatory efforts in the wings, there was nothing substantial enough to stop sites from invading the privacy of their visitors.

In December of 1995, a reporter at CNN was able to retrieve the locations of hundreds of children by purchasing data from one such source. A few months later, in May of 1996, a journalist was able to do the same thing, this time under the assumed name of a notorious killer. That same month, a Los Angeles television station bought the names of over 5,000 children and their user data for just $227.

Arguably, adults could assume some responsibility for behavior tracking and ownership of their own data. When they entered their email into a form and clicked “Submit,” they knew it was going somewhere. That argument might be a bit shaky, given the tricks of the trade used to gather that kind of data, but it at least holds water. Children, on the other hand, have zero culpability. When they’re being tracked, it is always, always, without their knowledge and without a full understanding of the implications. It’s manipulative, deceptive, and toxic, and in the mid-’90s it was, unfortunately, untethered.

This all came to a head in 1996, when Kathryn Montgomery and Shelley Pasnik issued a report from the Center for Media Education that studied 38 different sites targeted at children. They found that over 90% of them were collecting information about their visitors. Just under half were offering free gifts or prizes, often using online games as a hook, to coax extra information out of children, and tracked kids across their site (or multiple sites) using cookies. What’s more, Montgomery and Pasnik proved what is painfully obvious: children responding to questions online, or filling out some details to get a free prize or bonus in a game, have little to no understanding of what they’re actually giving up. These sites were stealing children’s privacy before they were old enough to even know what that meant.

At best, the early days of the web saw a few efforts at self-regulation. The most widespread of these was the multitude of Privacy Policies that blanketed just about every site on the web. The policies detailed what kind of data might be collected, and made assurances about anonymity, but they typically (if not always) provided no way of removing that data, and carried no enforcement mechanism. It was all based, more or less, on the honor system, and when it comes to user privacy, that will never be enough.

After rampant bad press, and with additional pressure from organizations like the Center for Media Education and the Electronic Privacy Information Center, there was a series of more concerted efforts from the web community to adhere to a level of privacy standards that would be suitable for advertisers, site owners, and users.

The TRUSTe Certification badge

The largest of these was the creation of the non-profit TRUSTe, started in 1997 by Lori Fena, who had been the executive director of the Electronic Frontier Foundation (EFF), and Charles Jennings. TRUSTe offered a path towards self-regulation by issuing certifications based on its own requirements. Sites that met its privacy criteria, which included things like the aforementioned Privacy Policy and limitations on exactly what kind of data could be collected, would be officially certified. If a site was found to no longer meet the requirements, that certification could be revoked and the site could, in theory, be held to some level of accountability. TRUSTe was meant to demonstrate that the industry had good intentions and that, given proper time, it could hold itself to a higher standard without interference.

Unfortunately, only a small portion of the web actually turned to organizations like TRUSTe for official certification. When larger automated advertising networks like DoubleClick entered the game, data collection only became more prevalent. And a lot of those users were still children.

In July of 1998, the Federal Trade Commission issued a report called Privacy Online: A Report to Congress. In it, the FTC surveyed 1,400 sites and found that almost none of them had made a serious effort to self-regulate. Though professional certification programs like TRUSTe existed, most sites had not signed up, and continued to use cookies to track users’ data across the web. In other words, the option to change existed, but they had simply chosen to ignore it.

The FTC agreed to give the industry another chance to self-regulate. There were some big, public displays to indicate that they were willing to do so, like the formation of the Online Privacy Alliance, a largely ceremonial organization of 80 major web companies (DoubleClick, Microsoft, eBay, etc.) with a vow to focus on users’ privacy. In the end, it mostly just put a cap on the type of data that could be sold and exchanged, and resulted in more thorough Privacy Policies.

But the media reports and congressional testimony were enough to at least educate the public about the importance of privacy. Privacy is a fluid concept, and in most cases the issue isn’t necessarily what’s being collected, but how it’s being used. After all, plenty of people were comfortable listing their names in the phone book, and some may not even mind being tracked in exchange for a promotional benefit. It’s the lack of understanding that’s so frustrating.

And even after some fairly big changes in just a couple of years, that lack of understanding continued to be particularly acute with children. In 1999, the FTC issued a follow-up report, a year-later look at what the industry had done to regulate itself. They found some pretty commendable efforts, but they also noticed that close to 90% of the surveyed sites directed at children continued to collect data through cookies. Of those, only 24% actually bothered to post privacy policies, and a negligible 1% required parental consent.

Somehow, we had let children slip through the cracks. Children that had very little to do with the development and trajectory of the web and were subject to its basest deceptions and manipulations.

Thankfully, pressure had begun to come down in another form entirely: government legislation. In late 1995, the first shot across the bow was the Data Protection Directive, issued by the European Union (its successor, GDPR, was just recently passed). It put a major cap on the type of data that sites could collect, and perhaps more importantly, banned all data collected “unnecessarily”. The EU recognized that the problem was that sites were tracking all kinds of data just to have it, with no intent or purpose behind it. They aimed to put a stop to that, at least in EU countries.

The Privacy and Electronic Communications Directive, more commonly known as the “Cookie Law”, is an update to the EU’s Data Protection Directive

By 1998, with the EU’s legislation leading the way, and thanks to the advocacy of privacy organizations and pressure from the FTC, the U.S. Congress finally began to put things in motion.

In fact, given the oft-stagnant movement of legislation in the U.S., things actually moved fairly quickly. The legislation on the table dealt with the collection of data from minors, and thankfully, the protection of our children was a bipartisan priority. In July of 1998, Senators Richard Bryan and John McCain introduced a new bill known as the Children’s Online Privacy Protection Act, or COPPA. It was officially passed into law in October of 1998.

COPPA issued governmental regulations for any sites that were specifically built for, or targeted at, children under the age of 13. Chief among its criteria were restrictions on the kinds of data that could be collected from minors, and a requirement of verifiable parental consent before any data collection could take place.

The FTC was put in charge of enforcing the new law, and any sites found in violation of it would be subject to hefty fines until its regulations were met.

There was some criticism, as there always is when sweeping regulatory practices are involved. Some thought the bill didn’t go far enough to limit data collection entirely. Others predicted that the weight of regulation would bring down everything the web held sacred. In practice, it has been used by the FTC in a few higher-profile incidents to bring legal action against a number of sites, including Hershey, the American Pop Corn Company, and Xanga.

More importantly, COPPA sparked an international conversation about data and privacy, and it’s the reason a lot of us are even aware of just how much data is mined from users online. Still, as we all know, the web goes through rapid shifts. And in the two decades since COPPA was passed, the entire landscape of the web has been remade several times over. It’s possible that a new digitally native generation requires a different set of rules.

One of the biggest criticisms of COPPA concerns the ways in which children can get parental consent. The easiest method for sites to implement is basically credit card verification. But collecting credit cards presupposes that every parent has one, which of course isn’t true. And even the ones that do might not be comfortable using it. And shouldn’t children be subject to the same freedom of expression as anyone else on the web? Parental consent has excluded a significant portion of the population who may be unable to get consent for any number of reasons.

It also had another unintended effect. Hoping to avoid the issue altogether, sites like Facebook and Twitter require users to click a box affirming they are over 13 before signing up. That lets them wash their hands of any COPPA violations. Here’s the thing, and it’s something we all implicitly know: that doesn’t stop kids from signing up. In 2011, researcher danah boyd, along with a few colleagues, surveyed a large sample group of minors for insight into their online habits. What they found was that by the time children were 14 years old, 80% of them had signed up for Facebook. Meanwhile, about 90% of parents were aware their children had a Facebook account despite their age, and over half of them had helped their kid sign up. Parents know what’s going on, but they don’t want their children to be excluded from the ubiquitous experiences that can be found online. But by bypassing the proper channels altogether, we are once again leaving children unprotected.

danah boyd’s book “It’s Complicated”, which dives into the habits of teenagers in a socially connected world, and their implications

Legislation will undoubtedly need to change. If you want a recent example of how this could work, just look to the General Data Protection Regulation (GDPR), passed in the EU just this year as the successor to the Data Protection Directive originally passed into law in 1995. GDPR redefines privacy for the modern age and places much stricter definitions around what kind of data can be collected, and perhaps more importantly, provides a path for users to remove their personal data from online platforms. GDPR was a reaction to the evolving definition of privacy online, and its specific definitions account for the ever-increasing number of edge cases that seem to sprout up.

Like COPPA did for the world two decades ago, GDPR has once again sparked a conversation about how data collection and user privacy should be regulated. One need not look too far to find evidence of user privacy being violated, and the devastating effects it can have on our political and cultural landscape. The urgency for change feels once again renewed, and awareness on the part of web users is at an all-time high.


Added to the Timeline

April 16, 2016 - General Data Protection Regulation (GDPR) The EU adopts GDPR as a successor to the Data Protection Directive with further restrictions to the types of personally identifiable information that can be collected online. It specifically requires that websites disclose any data that has been collected and limits how long data can be held. It also enforces a way for users to request their personal data to be completely erased.
October 21, 1998 - Children's Online Privacy Protection Act (COPPA) Passed into law in 1998, effective as of April of 2000, and enforced by the Federal Trade Commission, COPPA provides data protections for children under the age of 13. Specifically, it restricts what kind of data can be collected from minors, as well as requiring parental consent before any data can be collected. Over the years, it has come under fire for sidestepping some of the more complex issues facing children online.
October 24, 1995 - Data Protection Directive One of the first pieces of legislation passed internationally regarding privacy online, the Data Protection Directive provides protections for individuals with regard to data collection online. It restricts the unnecessary collection of personal data, and requires sites to make clear exactly what data will be tracked. It was superseded in 2018 by the General Data Protection Regulation.
View the Full Timeline

Sources