Data Privacy Is Becoming A Luxury Good
Last Thursday, Amazon CEO Jeff Bezos stopped the presses with an astonishing Medium blog post, in which he revealed he was being blackmailed by AMI, owner of the tabloid publication National Enquirer. Long story short, AMI threatened to release private photos of Mr. Bezos unless he publicly denied that the Enquirer had been “politically motivated or influenced by political forces” in publishing an exposé about an affair he had had, a theory that had circulated in the Bezos-owned Washington Post. Putting aside the political extortion that potentially reaches the highest levels of the U.S. government, this unexpected move by Mr. Bezos to defend his privacy shows that in today’s digital age, no one — not even the richest man in the world — is safe from privacy violations.
Privacy is widely recognized as a basic human right that is, in many cases, legally protected. So why is it that our privacy feels increasingly vulnerable and nearly unattainable in the digital age?
Personal Privacy vs. Data Privacy
To understand what is happening to our right to privacy, it is important to first make a distinction between the traditional notion of “privacy” and the newer kind of “data privacy” that is becoming a subject of debate. In the pre-digital age, privacy was a concept tied to physical presence and private communications. Everyone got to keep certain things to themselves, and could expect others to respect that. Thanks to its tangible nature, privacy was, for the most part, easily enforceable and controllable. You could dictate whom you shared your personal information with, since you were its lawful owner. And if you chose to share your private information with an institution, public or private, that institution was expected to honor your privacy and keep your information safe.
In today’s digital age, however, with more and more aspects of everyday life becoming digitized, everything we do now is easily quantifiable and trackable. Data is what happens when personal information gets digitized and aggregated, and if “data is the new oil in the digital economy,” all data must be collected. This kind of data-ism and worship of algorithmic decision-making is what led to the tech industry’s troubling tendency to treat consumers as mere data entry points to be collected, analyzed, and fed back into the machine. Most recently, researchers from Medill’s Spiegel Research Center analyzed 13 terabytes of anonymous reader and subscriber data from three regional newspapers, all for the purpose of “tracing individual behaviors of people who kept and canceled digital subscriptions.”
Part of this sea change is due to the inherent nature of digital tools. When you read a physical newspaper, there is no way to track which pages you read and how long you spend skimming through a particular article; when you consume news online, however, all those activities are easily quantified and logged and filed away, with the promise of providing more relevant content recommendations and advertising, as well as optimizing internal operations. TechCrunch reports that a number of iOS apps capture users’ screen activity without having to get explicit permission. Apps from Abercrombie & Fitch, Hotels.com, Singapore Airlines, and others were using “session replay” tools from analytics firm Glassbox, allowing the recording of user activity; some experts say the service may inadvertently be capturing private info. Following the report, the apps have all been updated to remove Glassbox or removed from the app store.
Therefore, some may argue that our data is never meant to be private in the first place, at least not in the traditional sense. Research has shown a stark generational gap in terms of attitudes towards digital privacy, with the younger generations who grew up with digital devices being far more comfortable with the value exchange than the older ones.
If sharing our data is the price of admission to the 21st-century digital economy — a whole ecosystem that essentially runs on data — then sharing our data is but one side of a necessary exchange of personal data for user value, usually delivered in the form of personalized services or targeted ads. Companies get data to mine for insights and the consumer gets value and, in some cases, free access to services. Everybody gets something out of this win-win exchange and everyone is happy.
Except that this value exchange rarely works the way it is supposed to. Worse, it often works to the detriment of consumer benefits and brand experience.
The Unbalanced Data-Value Exchange
The irony of the current situation is that, while almost every company is gathering all the data it can get, often at the expense of user privacy, very few companies are actually good at mining the vast amount of collected data to gain insights and develop products and services that truly benefit their customers.
As former Google Fiber engineer Avery Pennarun explained in a recent post on his personal blog Apenwarr, most AI-based recommendation algorithms today barely work because they are trained on “examples of what humans did while following a very dumb heuristic.” All the assumptions and unconscious biases of human heuristics are thus encoded into the machine learning algorithms and become inscrutable, amounting to what is essentially “statistical astrology.” The most useful examples of ad targeting we have today are the ones that usually utilize past purchase data and don’t require compiling a comprehensive user profile.
In addition, consider how much location data is being tracked on our mobile devices, supposedly in service of providing more relevant local information. In the U.S. alone, there are over twenty companies that offer “location intelligence” in one form or another. But there’s strong evidence to suggest that a majority of programmatic bid stream location data, which some providers use as their primary source, is either fraudulent or of questionable quality.
Granted, for brands there is a real competitive advantage in collecting all the data you can possibly get from users and analyzing their behavior to stay ahead of your competitors. Facebook infamously leveraged the traffic data it collected from its Onavo VPN app to inform its decision to acquire WhatsApp just as the messaging app was blowing up, while also keeping tabs on Snapchat’s growth. Amazon’s recent acquisition of Eero, a mesh router maker, provides a similar data-oriented advantage for its growing smart home ecosystem led by Alexa-enabled devices.
However, from the perspective of the customers, this type of excessive data collection offers no tangible benefit to the end users whose data is being exploited. Worse still, many of these value exchanges are outlined in EULAs written in dense legalese that most people can’t and don’t read, and thus never consciously agree to in the first place. In such instances, the supposed exchange of data privacy for better service is severely tilted in favor of the tech giants.
Most of the time, this unbalanced exchange manifests in a subpar user experience, where consumers receive irrelevant recommendations, creepy ads that follow them from one site to another, or persistently targeted ads for products they have already purchased. While these may be mere nuisances, for the most part, they do significantly undermine the online experience brands offer. In some extreme cases, they can even cause real emotional harm, as in the case of a woman who kept receiving unwanted ads for baby products after a stillbirth.
And this is where the data privacy debate gets interesting. Some would argue that it is precisely her privacy — in this case, her personal medical records kept private by law — that prevented the ad platforms from knowing she was no longer in the market for baby products. For the data-value exchange to work, the argument goes, all data points, once properly anonymized and encrypted, need to be free and accessible. Partially available data privacy would only break the equation.
Of course, this problematic line of thinking completely ignores the fact that at the end of the day, consumers should be the owners of their own data, and consumers should have the final say in what personal data they choose to offer up, and how they are being categorized. According to a recent Pew Research survey, 74% of Facebook users don’t even realize the company collects their interests for targeted ads, and about half of them say they are not comfortable with how they are categorized by Facebook.
Jon Evans from TechCrunch made a strong argument for privacy being not only an individual right but also a commons, a set of “cultural and natural resources accessible to all members of a society.” The fact that we implicitly allow this kind of unbalanced data-value exchange is leading to some troubling socio-political consequences. The loss of data privacy for all would be a tragedy of the commons.
Many Facebook users say they do not know the platform classifies their interests; Source: Pew Research Center
Nevertheless, this notion of trading privacy for free access and personalized services has become ingrained in our digital economy as the go-to justification for tech companies gathering personal data and tracking users’ every move. To get away from this broken data-value exchange, one would have to disregard the prevailing assumption that everything on the internet should be free, and pay up to keep their personal data private.
Data Privacy As A Luxury Good
Due to the economics of most digital services, privacy is no longer the expectation, especially among the younger generations. Instead, data privacy is quickly becoming a luxury good that people have to pay to have. Only people who can afford premium, ad-free services can escape the invasive tentacles of data collection, leaving the vast majority who can’t (or won’t) to trade their personal information for free access to digital services and content. As a key symptom of the “media haves and have-nots” trend that we highlighted in our Outlook 2019 report, this increasing gap in how online audiences are segmented in the digital realm will have lasting consequences for our entire society.
Out of the major technology platforms in the U.S., Facebook and Google clearly share an ad-based business model that hinges on their ability to harvest and process user data. Consumers do receive some value in return for offering up their data, usually in the form of the many free services these two companies offer, but the value exchange is severely tilted to the benefit of those two platform owners. Amazon and Netflix also collect a lot of user data and leverage it to gain various competitive advantages, but at least the two are not advertising-dependent, although Amazon’s fast-growing ad business may change that. Whenever you use any of their services, you partake in an unbalanced data-value exchange that is rigged for the tech giants. And it is nearly impossible for people to not use any of those digital services provided by the major tech giants.
Apple might be the outlier here, being the only big tech company that is not particularly interested in harvesting user data. In fact, data privacy has become a main selling point for the iPhone maker in recent years, as the company implicitly positions privacy and data security as a differentiator to partially justify its premium pricing.
Case in point, Apple brought down the hammer on January 30, revoking Facebook’s Enterprise Developer Certificate for misusing it to distribute data-collection apps to consumers. This effectively threw a wrench into Facebook’s entire enterprise operation, shutting down all of its internal apps, including the iOS versions of Messenger, Instagram, and WhatsApp the company was beta testing. As the New York Times’ Kevin Roose puts it, Apple has become the de facto privacy regulator for Facebook. Later that day, Apple did the same to Google, on the same grounds of misusing its developer certificate and violating privacy. Although Apple quickly restored both companies’ access to enterprise certificates and brought their internal apps back online, this move shows that Apple is not pulling punches in its quest to solidify its brand narrative as the premium, ad-free service provider, which it uses to differentiate itself from the other major platforms.
Before Apple shut it down, Facebook was paying $20 a month to users ages 13–35 to install and run its “Facebook Research” app, a VPN designed to monitor all data passing through a user’s phone, including private messages, internet searches and browsing, and, in some cases, continuously updated location data. And some users were reportedly perfectly fine with that exchange, seemingly resigned to the idea that most of their personal data will be collected by the tech giants anyway. Given the unbalanced data-value exchange prevalent in our digital economy today, however, this hardly seems like a good deal. Not to mention the fact that the price of data privacy should be directly tied to the value that data generates, not arbitrarily set by a tech company.
Of course, Apple is not making a show of vigilante justice for no reason. Some have suggested that, given the plateauing smartphone market, Apple is eagerly ramping up data privacy protection as a key marketing angle for its devices, citing the surprise billboard Apple placed at this year’s CES as supporting evidence. However, the primary reason Apple was able to pull off this power move was the eroding public trust in Facebook and, to a lesser extent, Google. In the past few years, public perception of social platforms has soured, as they have become prime targets for abuse and political manipulation. Although the souring public sentiment has not been strong enough to hurt Facebook’s bottom line yet, it does hinder the company’s ability to enter new markets, as is the case with Facebook Portal.
How Brands Can Restore The Balance
In order to avoid the same scrutiny that Facebook is facing and help restore the balance of this data-value exchange, brands should consider adhering to the following practices.
First of all, brands can adopt a more consumer-oriented data collection policy. As a brand, you don’t need to collect data on everything about every single customer. Gathering huge amounts of unstructured data will not yield insights if you don’t have the right algorithms to analyze it, and most companies today don’t. Collecting only the data you need, and tying every data point you gather to a tangible benefit you can give back to consumers, is the best way to pare down your data collection and focus on what really matters.
More importantly, brands have an opportunity to utilize social listening on owned channels to gather non-personal data that, when aggregated, directly reflects what consumers want. This is a tactic many direct-to-consumer brands have successfully deployed to learn about their customers through social channels, establish two-way communication, and feed what they learn directly into product development. Glossier, a D2C beauty brand, for example, aims to build a social platform of its own to gather insights and drive sales. And the best part of this strategy is that it allows these D2C brands to develop a limited product portfolio that also requires less hyper-targeting, since most of their products already carry a certain level of universal appeal among the target demo.
Transparency is also an important factor that could help brands build trust with consumers and facilitate the data-value exchange on a more level playing field. Brands should be clear with the language used in privacy policies and ToS and avoid confusing legalese. Clearly communicate why you are collecting what you are collecting, and ask for explicit consent from your customers. Offering consumers the tools to see their own data profile can also grant them the agency to manage their data privacy accordingly. The main point here is that the exchange can only be fair to consumers when brands actually do something positive with the data they collect, and are transparent about why they’re collecting it and how they’re using it.
At the end of the day, there will always be a place in our digital economy for advertising and ad-supported models, and the data-value exchange is certainly here to stay. Regulators will likely soon start to find ways to curb the questionable data practices of the tech giants, although there is no simple solution here and it will take some trial and error before we get it right. What we can do as brand marketers is to fully understand the challenges surrounding this complicated issue and help guide our clients to collect and secure consumer data in a respectful and ethical way, lest we fall into the dystopian traps of surveillance capitalism.