The right to privacy has been a value shared through much of the world for generations, particularly in America. Americans rebelled in the 1970s when they learned the government was intercepting their phone calls. In the 1990s, growing awareness of what medical providers were doing with our most personal data generated similar outrage and led to sweeping legislation making it clear that each of us owns our own medical history. Then came the internet, with its cat memes, real-time traffic reports, and automatically curated playlists. Our thinking shifted as we consented to giving up our data in exchange for direct benefits. In many ways, we did so without fully understanding the gap between the value of the data we gave and the value of what we got in return.

We’re starting to have that conversation now. Facebook got the ball rolling with this year’s disclosure that third-party partners had been able to get more data than they should have had access to, then shared it inappropriately with a company whose mission was to pit Americans against each other during the presidential election. That tapped into anger already simmering over last year’s breach at Equifax, a company with which few of us ever explicitly consented to do business. Both episodes highlight the complex business of writing privacy policies that give companies maximum flexibility without requiring them to disclose any meaningful details about what is being collected and shared.

We’ve been ignoring — and therefore consenting to — privacy policies for 20 years now, first thanks to required HIPAA notices and later due to state and federal laws clarifying our “rights.” The name “privacy policy” gives us confidence that the company does something — anything — to protect us. If they were instead named “clarification of the huge amount of data we collect, with minor limitations on who we’ll sell it to” policies, we might actually read them. To be sure, that’s exactly what they are.

The conversation is going to keep heating up. The General Data Protection Regulation (GDPR), a new privacy law in the European Union, went into effect at the end of May. GDPR crystallizes in law that each of us owns our personal data, and that we must explicitly consent to the collection, use and sharing of it. GDPR is imperfect, to be sure, but it gives EU citizens new rights and protections they didn’t have before. Most notably, it provides the right to see all of the data a company has collected about you, how they use it, and with whom they share it. While the U.S. is unlikely to match the full scope of GDPR, significant U.S. legislation already is being proposed and likely will pass in the next 18 months.

There are things each of us can do while we wait for better privacy rights. Services such as Facebook and Google are platforms, ecosystems where thousands of other companies operate under those well-known logos. When we fill out a survey or click on a link on one of those platforms, we are often sharing data with third parties. So, maybe pass on that one cool trick to save money on your mortgage. Be suspicious of highly profitable companies that don’t collect money from their customers — or that collect far less than it costs to deliver the service. We’re talking about you, Google. Remember the best advice on the internet: If you’re not paying for it, you are the product.

This issue explores privacy from a few different perspectives. We talk about what this GDPR thing really means, dig a little deeper into what you probably didn’t hear about the Facebook “scandal,” and take a look at how your personal data is used for good and evil.

-Cris

GDPR Is a Good First Step, But There Are Still Leaps to Be Made

 

We were all fortunate enough to spend May being bombarded with privacy policy updates from dozens of companies, some of which we probably don’t remember ever working with. The typical update told you how much the company values your privacy, briefly mentioned this new GDPR thing in Europe, then rattled off a series of significant changes to its policy. Many of those changes included a “but only if you live in the EU” clause. If these companies valued your privacy already, why did legislation result in so many changes? The full answer is a little complicated, but the short version is that most companies did value your privacy to some degree; GDPR says they have to value it much more or face big fines.

GDPR is a new law in the European Union, passed in 2016 and put into effect on May 25, 2018. It gives all EU citizens the right to reasonable consent before their data is collected, to understand how their data will be used, to see what data has been collected directly and indirectly about them, and to correct that data or even have it deleted.



In many ways, GDPR is similar to the protections given by HIPAA, the Health Insurance Portability and Accountability Act, in the United States. However, HIPAA only applies to our healthcare data. GDPR extends to our demographic and personal information and to the information companies acquire about our behavior while we use their platforms. For example, EU citizens can request the detailed behavior and usage data that Google has acquired across tens of thousands of partner websites to target ads. Citizens also can request to have that targeting data deleted, effectively starting from a blank slate.

GDPR is a positive start in forcing tech companies to properly value and protect our personal data, which they’ve used to create the greatest accumulation of wealth in history. It is far from perfect, though. What was summarized in a paragraph above is actually implemented through 99 articles and 173 recitals. Intended initially as a way to curb Google and Facebook specifically, it falls into the trap that most complex regulations do: It squeezes out innovative young companies and helps cement the dominance of the largest players. Indeed, Google and Facebook have navigated GDPR primarily by changing legal policies, with little change to their overall data handling (and exploitation) practices.

Both companies have turned themselves into platforms for those who want to deal in personal data, augmenting their existing tools for advertisers to track and target users with “GDPR in a box” features that manage compliance. Many smaller players simply packed up shop and rebuilt on top of these platforms, further tightening the tech giants’ stranglehold on our data.

Still, GDPR is not a failure. We are in uncharted territory, with no legislative or regulatory frameworks to use as a model. GDPR is the world’s first attempt at this, and it gets a lot right. Our hope is that it is improved over time to more effectively curb the behavior of data-hoovering tech giants while still promoting innovative new ideas that will add value to our data. After all, we give our data up in the hopes that what we get will be worth it.

The Facebook Data Breach Outraged Users, But We’re to Blame Too

 

Since disclosing in February that user data had been improperly shared with Cambridge Analytica, Facebook has faced significant public outrage. There was even a #deletefacebook campaign, an apology tour and (obviously) Congressional testimony. The media has struggled to report on exactly what happened because the issue is far more complex and nuanced than it initially seemed. In fact, Facebook had realized that its own data-sharing policies could be exploited in exactly this way and made significant changes in 2014, but the damage already had been done.


The timeline of events went something like this:

  • 2004: Facebook was founded as a free platform that required users to provide personal information to participate in the “full experience.”
  • 2007: Facebook allowed partners to develop apps on top of its network of personal information and the connections between people — what Facebook calls its social graph.
  • 2010: Facebook launched the Open Graph API for third-party developers, allowing them to ask users for permission to access their personal data, including items such as their friends lists (a rough sketch of how such a request worked appears after this timeline).
  • 2013: Aleksandr Kogan created thisisyourdigitallife, an app that asked users to answer questions for a psychological profile. Users accepted the terms and conditions, granting permission for the app to access their personal data and the data of their friends. At some point, Kogan sold the data to Cambridge Analytica even though he did not have the license or permission to do so.
  • 2014: Facebook changed the rules, limiting access to user data and preventing access to friends’ data.
  • 2015: Facebook learned of the misuse of the data (Kogan had shared what he collected with another party) and pressured Cambridge Analytica to delete it.
  • 2018: Facebook learned that Cambridge Analytica likely had not deleted the data and made a public disclosure.
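
For readers who want to picture the mechanism, here is a minimal sketch in Python (using the requests library) of what a pre-2014 app’s data pull could have looked like once a user accepted the permission dialog. The base URL, endpoint paths, and permission names mentioned in the comments are our assumptions modeled on the v1.0-era Graph API, shown purely for illustration; none of this works today, since Facebook closed off friend data in the changes noted above.

# Illustrative sketch only: endpoints and permission names ('user_likes',
# 'friends_likes') are assumptions based on the v1.0-era Graph API.
import requests

GRAPH = "https://graph.facebook.com"
TOKEN = "USER_ACCESS_TOKEN"  # stand-in for the token issued after the user approves the app's permission dialog

def get(path, **params):
    # Helper: GET a Graph API path with the user's token and return parsed JSON.
    params["access_token"] = TOKEN
    resp = requests.get(f"{GRAPH}/{path}", params=params, timeout=10)
    resp.raise_for_status()
    return resp.json()

# Data the user explicitly agreed to share (e.g. under a 'user_likes'-style permission).
me = get("me", fields="id,name,likes")

# The user's friend list and, under v1.0-era permissions such as 'friends_likes',
# pieces of those friends' profiles -- people who never installed the app at all.
friends = get("me/friends")
for friend in friends.get("data", []):
    friend_likes = get(f"{friend['id']}/likes")

The asymmetry is the point: one person’s click consents on behalf of an entire friend list.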

Zuckerberg has admitted, “… It’s clear now that we didn’t do enough. We didn’t focus enough on preventing abuse and thinking through how people could use these tools to do harm. … We didn’t take a broad enough view of what our responsibility is, and that was a huge mistake. It was my mistake.” While much of what happened was the result of users not carefully reviewing the data they were agreeing to share in exchange for participating in the app, Facebook also shared “friend of friend” information without consent, either as a result of a bug in its Open Graph API or an unintended use of its functionality. The vast majority of the data in the Cambridge Analytica data set came from Facebook users who never used the thisisyourdigitallife app, including users whose profiles were set to private.

The good news in this mess is that there is little incentive to repeat this mistake in the future. Facebook handled a significant threat to its future with relative success, but only after making a serious commitment to going well beyond its existing measures to protect its users — even though the measures it took in 2015 already should have prevented this issue. Meanwhile, Cambridge Analytica is out of business. It was never clear that its voter profile database did anything to turn votes, and its future was already in doubt before Facebook’s disclosure.

We all have a role to play in protecting our personal privacy. Had users taken the time to read the relatively simple screen that indicated what data would be shared in exchange for using the app, they might have questioned why answering a few brief questions meant handing over all their likes and their entire friend list. These things matter, and by making careless choices, we become complicit. Facebook, for its part, needs to remember that we give it permission to gently use our data in exchange for a free platform. We expect the company to protect us from being exploited. The #deletefacebook movement is a reflection of its falling short on this front.

We should be mindful of the value of our data to any free service, particularly when we provide any type of data beyond our name and email address. Once simple attributes come into the equation — such as age, gender, location or income — the value of the data goes up exponentially. Spotify combines some of those data points with what you listen to (and what you skip) to build rich profiles it sells to advertisers as well as music labels. And don’t even get us started on Google. If the Facebook story got you fired up and angry, just remember that Google intentionally does everything we’ve just described, every day. And it tells you it does, right in its privacy policy.

What Are We Giving Away, and Is It Worth What We Get in Return?

 

Do you know every video you’ve watched on YouTube, which ads you’re most interested in, every place you’ve been to, and everything for which you’ve ever searched? Google does. Do you remember every Facebook message and file you’ve ever sent or received, every contact in your phone, every time you’ve logged in and where you were when you did? Facebook does. Cortana, Siri, and Google Now all combine data from your calendar, email, contacts, text messages, and apps to give you incredibly targeted information, such as the fastest route to work when you leave the house in the morning.

As technology advances and modern conveniences become even more convenient, it’s easy to ignore what we’re giving up. Dylan Curran summarized it perfectly when he said, “This is one of the craziest things about the modern age. We would never let the government or a corporation put cameras/microphones in our homes or location trackers on us. But we just went ahead and did it ourselves because — to hell with it! — I want to watch cute dog videos.”



Many companies — including the most prolific data collectors — now share with you, for free, the data they store “on your behalf,” if you know where to look for it. The guides below will help you learn what these companies know about you.

 

Who?      What?
Cortana   Your location, contacts, messages, calendar and more
Facebook  Advertisements you’ve clicked on
Facebook  Your location history
Facebook  Your posts, events, search history, login history and more
Google    A timeline of where you’ve been since you started using Google Maps on your phone
Google    Everything you’ve ever searched for
Google    Apps you use
Google    Your YouTube history

You’re Giving Genetic Service Companies More Than a DNA Sample

 

According to industry estimates, more than 12 million people have used direct-to-consumer genetic testing services like Ancestry.com and 23andMe, most of them in the U.S. — roughly 1 in 25 Americans. Many of these services provide at-home test kits and genetic analysis below cost, which should be a red flag to all of us. What actually happens with your data varies from provider to provider and is, of course, buried in the Terms and Conditions most people don’t take the time to read.

In December 2017, the FTC issued an advisory to consumers looking to purchase an at-home testing kit, suggesting that they comparison shop for privacy — not price. Privacy questions to consider include: What happens to your data? Will your personal information be shared or sold? How will your data be protected? According to all the major providers, you own your data but grant them the rights and privileges to analyze and share it for research purposes.



Even as awareness grows about how our most personal and immutable data — our DNA — can be monetized by these companies, it is not clear that people will avoid them. Lee Rainie of the Pew Research Center says:

“Most Americans see privacy issues in commercial settings as contingent and context-dependent. …  There are a variety of circumstances under which many Americans would share personal information or permit surveillance in return for getting something of perceived value. … Nearly half (47 percent) say the basic bargain offered by retail loyalty cards — namely, that stores track their purchases in exchange for occasional discounts — is acceptable to them, even as a third (32 percent) call it unacceptable.”

Still, we’d encourage people to think twice about being too early to this party. Protections against genetic discrimination are not fully fleshed out in U.S. laws and judicial decisions, and it is only a matter of time before health and life insurance companies start trying to use these data sets to better target their premiums.

We talked to Ashleigh Daniluk, our technical communications manager, for her take:

“I understand the interest and the emotional inclination to want to participate in the at-home test experience, especially for individuals who may not know their full family medical histories and don’t want to pay thousands of dollars for a medical genome sequence. What I find alarming is that 23andMe has said over 80 percent of its members have chosen to share their ‘anonymized’ data for research, while at the same time admitting they aren’t trying to make money selling test kits. Patrick Chung, a 23andMe board member, stated, ‘Once you have the data, (the company) does actually become the Google of personalized health care.’  We know Google makes its money selling our data, and it’s no secret that 23andMe is doing it too. When I first heard ‘research,’ I assumed the data was being provided to universities and research scientists to cure diseases. When I learned that they also were selling data to pharmaceutical companies like Pfizer and Genentech, it made me wonder if the 80 percent of participants made the same incorrect assumption I did.”