The Rise of the Benevolent Data Pillage
While alerting the next-of-kin and helping make funeral arrangements are thought to be the primary responsibilities of a person who has recently lost a friend, the deletion of the deceased’s online browsing history has lately earned a place among them. It is one of the kindest final gestures one can make, both to the dead friend and to the family that might have otherwise happened upon the unseemly stuff. People are naturally protective of such information, especially when there is no opportunity to explain or defend it.
The irony, of course, is that browsing histories are being viewed, reviewed, and analyzed all day while we’re alive, by corporations using that information for their own advantage. Even as digital surveillance has become the new normal, consumers still feel uneasy about it. Most passively, if grudgingly, accept that having their browser data collected is just a condition of being a digital consumer. Companies take this passivity for granted and generally relegate disclosures about data collection to “Terms & Conditions” agreements they know users won’t read.
But there is now an emerging trend for companies not only to be more transparent about the data that they’re collecting, but to market that data collection as a value-add to their users. Facebook and Verizon are currently leading the charge to market their use of your data as a nice favor they’re doing for you. Not since miniature browser spies were named “cookies” has a digital surveillance mechanism sounded so benign!
As one of the most prolific platforms contributing to the rise of Digital TMI Syndrome, Facebook is a natural leader of the movement to get users to take what they share to the next level. In June, Facebook announced an expansion of how it collects data on users to target ads to them, in a post titled “Making Ads Better and Giving People More Control Over the Ads They See.” The accompanying video features a harmless but hip product manager narrating an animated production that appears to have been made by an ambitious amateur in the first year of online animation classes. It tells the story of how advertisers tell Facebook whom they want to reach, and how Facebook then uses its users’ data to direct those efforts appropriately.
The business depicted in the video is a quaint shop that wants nothing more than to let 18-to-35-year-olds who ride bicycles learn about its products. By allowing your data from outside of Facebook to be shared, you can help this small business reach you, the video explains (assuming you are a millennial cycling enthusiast in search of new things to spend money on with that stagnant wage you may or may not be earning). What the promotional video does not share is precisely how difficult it is to opt out of these tracking systems entirely.
In their article last year, “A Theory of Creepy: Technology, Privacy, and Shifting Social Norms,” privacy law experts Jules Polonetsky and Omer Tene explained how unprepared most users are to proactively protect themselves from unwanted collection of their data (PDF). “Typically, individuals get privacy defaults wrong, disseminate content more broadly than they intended or is advisable for their own good, forget passwords or keep them listed on unencrypted files on their laptop, and generally struggle to keep up with the astounding surge in digital economy and culture,” they wrote.
When contacted by phone, Polonetsky, who serves as executive director of the Future of Privacy Forum, expressed doubts about the usefulness of educating users about the myriad potential uses of their data. “In many cases we can offer users options and explanations, but at other times the data has already been collected long ago or users don’t want to be interrupted,” he said. “We need either law, or ethical standards, because it’s not going to be feasible to interact with all users about their data privacy.” At present, nothing even resembling such standards exists.
Verizon Wireless takes the marketing ploy that the largely uninhibited use of your data is good for you to the next level by connecting it to literal cash and prizes. In late July, Verizon rolled out its Smart Rewards loyalty program the way all great corporations do: with a commercial in which a man is brutally attacked by birds. The program gives users access to standard loyalty-program rewards, like concert tickets and discounted travel, for things like paying bills on time and upgrading to smartphones. But to enroll in the rewards program, Verizon users must also be enrolled in Verizon Selects, a program that tracks mobile browsing data, location, and app usage. And while users are free to un-enroll afterward, doing so costs them 2,500 “loyalty points,” and opting out is considerably more challenging than hitting an “unsubscribe” button.
On its surface, this strategic shift to transparency about data collection looks like a good-faith effort by businesses to let consumers know how and why their data is being used. But findings from a Carnegie Mellon report titled “Misplaced Confidences: Privacy and the Control Paradox” reveal how perceived control over data dissemination paradoxically facilitates more data sharing (PDF):
More than lack of awareness, it seems that this is at least in part due to the particular sense of control that new technologies transmit to users, making them feel endowed with the power of managing the flow of information about them that stems from their voluntary willingness to reveal….
On the one hand, the feeling of power conveyed by detailed controls in the privacy settings of several social media and the saliency of information publication generate confusion between control over publication of private information and control over accessibility and availability/usability of that information by others. On the other hand, the voluntary nature of the disclosure makes people perceive it as non privacy threatening relative to the situation where disclosure is solicited or required, in which case reactive devaluation might instill suspicion in people and prevent them from revealing private information.
In other words, when users feel more in control over what they share, they end up sharing even more.
Many regard the barrage of increasingly personalized ads as a mere annoyance that is the cost of living in the digital age. Those who are overly suspicious of the tracking are treated as either paranoid or self-important. When Instagram changed its Terms & Conditions to include the potential commercial use of user content, those who were outraged were soon mocked for their deluded belief that Instagram had any interest in capitalizing on their brunch photos. But the collection and use of individual browsing data have major implications for privacy and security that go well beyond the icky feeling of having an advertisement follow you across networks.
Beyond the vulnerability of having a data stockpile about you available to potentially nefarious hackers and identity thieves, the algorithms used to target marketing are also susceptible to inadvertent discrimination. Latanya Sweeney, a computer scientist at Harvard, found that web searches for names traditionally given to African Americans were more likely to return ads suggesting an arrest record than searches for names more likely given to white people.
Polonetsky noted the ambiguity of the existing legal framework for such issues: “Are companies doing something that is actually illegal? Are they using new methods that are not captured by current law? These practices might not be illegal but maybe should be.” Complicating the issue is that bigotry-by-algorithm might best be detected by yet another algorithm. Polonetsky explained, “In fact, the best way to detect some of the new kinds of discrimination may be by using Big Data analysis to discover what is going on!”
The potential for data to be used to selectively market to people in discriminatory ways will go largely undetected if purely profit-motivated companies have no incentive to monitor themselves with other forms of data analysis. Facebook may make surrendering personal data appealing with its vision of young cyclists frequenting mom-and-pop shops. But the reality is that the youth and relative poverty of such users will more likely make them targets of unscrupulous lenders than of ads for the nice bike shop around the corner. Likewise, Verizon can lure users into sharing their data with deals on airline tickets, but what would prevent it from using that same data to inflate the prices of those very tickets?
And those are just the well-documented uses of data discrimination. When reached by email, Justin Brookman, director of consumer privacy at the Center for Democracy & Technology, listed a number of ways a person’s data could expose them to discrimination:
You could imagine a health plan could reject you if they knew you were researching certain diseases, or an auto insurance provider denying you service if they knew that you drove around certain high-risk parts of town. Some of that may be prevented by the Affordable Care Act or the Fair Credit Reporting Act, but it’s not always clear what sorts of activities are covered (especially by FCRA). Could a dating site refuse to let you on if they knew you had looked at pornography recently? Do they have to tell you that’s why you were rejected for membership? Those are open questions under the law.
With these examples in mind, one doesn’t need an especially gifted mind for speculative fiction to imagine how many ways our benevolently collected data can be used against us. It is difficult to see these companies’ gestures toward transparency as anything less than substantial privacy invasions hiding in plain sight.
With the Federal Trade Commission planning a workshop this month called “Big Data: A Tool for Inclusion or Exclusion?” to explore the potential value and pitfalls of data use, the data-plundering adventure may be short-lived, as regulators work to establish a framework for addressing it. In the meantime, these companies remain poised to make a killing with the unprecedented trove of user data they’ve pillaged under the guise of good customer service.