The Privacy Control Paradox

Is privacy dead? Or do people really care about it? And if they do, what drives them to voluntarily hand over their personal information for small rewards such as free pizza? How do we go about solving this privacy paradox? Are the compelling benefits offered by online services to blame? If so, were Facebook (or Google or Twitter or... you get the idea) to switch its business model tomorrow to a monthly subscription, would the 2 billion people on the platform still use it?

The truth, however, is a lot more complicated. Free services are enticing, sure, and at first glance the benefits do seem to outweigh the risks: they are convenient, widely accessible, and, what's more, both useful and engaging. But convenience isn't everything. It also ultimately boils down to answering the question: "privacy from whom?"

And unfortunately the answer to that question won't always be the same from person to person. For some, the fear is government surveillance. For others, it's the constant tracking of their online lives for commercial purposes. Thus, depending on the kind of information an individual wants to keep secret, a given piece of technology may be privacy-enhancing, privacy-eroding, or both at the same time.

That personal privacy is eroding as a consequence of technological development is a premise widely accepted today, and it's not entirely false. In a report titled Deceived by Design, published in late June by the Norwegian Consumer Council, it emerged that Facebook, Google and Microsoft (with Windows 10) employ a series of psychological dark patterns to push users towards selecting privacy-intrusive options, resulting in unintentional loss of user privacy while giving them an illusion of control.

By "threatening" users with loss of functionality or deletion of their accounts if certain settings are not chosen, companies have time and again resorted to a rewards-like system in which the right (privacy-intrusive) choice is rewarded with a better service, while turning those settings off amounts to a punishment. Even worse, it's always opt-out by default, never opt-in.

Google, for example, prevents users from enjoying a fully personalised Google Assistant experience unless Web & App Activity, Device Information and Voice & Audio Activity settings are turned on. Facebook, likewise, cautions anyone who wishes to disable facial recognition that doing so means that the firm "won't be able to use this technology if a stranger uses your photo to impersonate you."

Frustratingly, key settings in many an online service are often buried deep in menus, at times bundled with other options (making them exhausting to sift through), and even deceptively enabled by default, preventing users from making informed decisions and leading to serious mismatches in privacy expectations.

It's not just that. As early as 2012, Alessandro Acquisti, Laura Brandimarte and George Loewenstein published research on behavioural tactics employed by Facebook, going so far as to conclude that "providing users of modern information-sharing technologies with more granular privacy controls may lead them to share more sensitive information with larger, and possibly riskier, audiences."

Facebook provides a strong feeling of control, because users can change every detail of their default privacy settings, including what type of information will be available to whom. However, users have very little control over the way in which information, once posted, will be used by a third-party application or by their friends. The third-party application could, for instance, use that information to send invasive targeted advertising to the user, or perhaps for price discrimination; a friend could post the information somewhere else, making it accessible to unintended third parties.

Pushing emotional buttons to elicit reactions from its users is nothing new for Facebook, and it's this deliberately addictive, habit-forming design, coupled with its advertising model and infamous News Feed algorithm, that made the platform so easy to weaponise for spreading misinformation and fraudulent content. It's also these very confusing privacy settings, which give users the impression that they are in control, that led to the Cambridge Analytica data scandal. (Or, it's also possible that some people just didn't care.)

Providing a greater "sense" of control over their personal data to encourage users to share more is not the only disingenuous tactic exploited by Mark Zuckerberg & Co. Infinite scroll, a seemingly bottomless stream of status updates, is another ploy it uses effectively to drive engagement, but the agenda behind such a simple user interface change is far more insidious: the endless feed of posts and photos not only induces users to spend more time on the site, it gives platforms like Facebook (and Instagram, Twitter, Pinterest etc.) further incentive to vacuum up as much information as possible, gathering their likes, dislikes and preferences.

But even more troubling is the issue of default privacy settings. Research in behavioural economics has repeatedly found that people tend to stick with the default setting of whatever is offered to them, even when they could change it easily. And Facebook, fully aware of this quirk, changed users' default visibility from friends of friends (at most) to public in 2010.
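
To make the asymmetry concrete, here is a minimal, purely illustrative sketch of what an opt-out design looks like in code; the setting names and default values are hypothetical, not Facebook's actual schema. Unless a user actively overrides each value, the privacy-intrusive choice wins.

```python
# Hypothetical sketch of an opt-out settings design: the field names and
# default values below are illustrative, not Facebook's actual schema.
from dataclasses import dataclass

@dataclass
class PrivacySettings:
    # "Opt-out" design: the privacy-intrusive value is the default,
    # and the user must act to change it.
    post_visibility: str = "public"   # rather than "friends"
    ad_personalisation: bool = True   # rather than False
    face_recognition: bool = True     # rather than False

def effective_settings(user_changes: dict) -> PrivacySettings:
    """Apply whatever the user explicitly changed on top of the defaults."""
    settings = PrivacySettings()
    for key, value in user_changes.items():
        setattr(settings, key, value)
    return settings

# Default-effect research suggests most users change nothing, so the
# typical outcome is simply the defaults:
print(effective_settings({}))                               # all intrusive defaults
print(effective_settings({"post_visibility": "friends"}))   # the rarer override
```

An opt-in design would simply flip those defaults, and the behavioural research above suggests that flip alone would change what most people end up sharing.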

What then is the solution? While increased user control and greater transparency through elaborate dashboard tools and privacy policies (however self-defeating and ambiguous they may be) are beneficial to some extent, figuring out exactly what we stand to lose if the data goes uncollected is of prime importance in order to bring about any meaningful regulation of the digital world.

In fact, research undertaken by Leonard Nakamura and two colleagues at the Bureau of Economic Analysis found that Facebook and Google's advertising business added 0.07 percentage points to the U.S. economy's annual growth rate from 1995 to 2005, and 0.11 percentage points from 2005 to 2015.

This is without counting the strides these companies are making in artificial intelligence (on-device learning or otherwise) and in bringing internet connectivity to underserved parts of the world, in addition to giving millions of people a much-needed platform to communicate and interact on a scale that was previously unimaginable. Regulations to curb the unchecked power of these digital behemoths, then, should combine giving people more control over their data with holding companies accountable when they play fast and loose with user privacy, betraying the trust of users who handed over their personal information in return for a valuable service.

Another alternative is to collect all personal information in one place, so that a central authority can manage it. That central authority could be a government or a neutral regulatory body, or, for better or worse, another tech giant with the resources required to support such a mammoth undertaking.

"Think of a PDS (Personal Data Service) as a single place for users to manage their privacy settings, giving organizations access to the information that users are willing to share, but limiting access to information users themselves consider more sensitive. PDSs provide security, user-controlled sharing, and a robust access control system so that only authorized third parties have access to the data," wrote security researcher Isaac Potoczny-Jones in Network Computing back in 2016. But the legalities of such an initiative are doubtless complex.

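As a rough, hypothetical sketch of that idea (the class and method names below are invented for illustration and do not come from Potoczny-Jones' article or any real PDS product), the core of a PDS is a consent registry that every data access must pass through:

```python
# Rough, hypothetical sketch of the PDS idea: users record which third
# parties may read which categories of their data, and every read must
# pass that consent check. All names are invented for illustration.
class PersonalDataService:
    def __init__(self):
        self._data = {}    # user_id -> {category: value}
        self._grants = {}  # user_id -> {third_party: set of categories}

    def store(self, user_id, category, value):
        self._data.setdefault(user_id, {})[category] = value

    def grant(self, user_id, third_party, categories):
        """User-controlled sharing: only the user widens access."""
        user_grants = self._grants.setdefault(user_id, {})
        user_grants.setdefault(third_party, set()).update(categories)

    def revoke(self, user_id, third_party):
        self._grants.get(user_id, {}).pop(third_party, None)

    def read(self, user_id, third_party, category):
        """Access control: unauthorised parties are refused outright."""
        allowed = self._grants.get(user_id, {}).get(third_party, set())
        if category not in allowed:
            raise PermissionError(f"{third_party} is not authorised to read {category}")
        return self._data.get(user_id, {}).get(category)

# Usage: the user shares only an email address with one service.
pds = PersonalDataService()
pds.store("alice", "email", "alice@example.com")
pds.store("alice", "location", "Oslo")
pds.grant("alice", "newsletter_app", {"email"})
print(pds.read("alice", "newsletter_app", "email"))   # permitted
# pds.read("alice", "newsletter_app", "location")     # raises PermissionError
```

The hard part, as noted above, is not the data structure but the legal and governance questions around whoever operates it.
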
Brandimarte summed it all up convincingly in a recent interview with the Chicago Tribune: "The problem is that while the benefits [of online behaviours] are always very visible to us, the costs are very much hidden. What does it mean in terms of our data privacy to actually do an action online? We don't know. The benefits are immediate and certain; the risks are only in the future and they are very much uncertain, so that makes our decision-making very, very hard."