TikTok Is the New Privacy Battleground
The clock is ticking for TikTok. Much like Facebook, which has turned out to be a controversy magnet in recent years, the viral short-form video-sharing app is the latest social media service to find itself embroiled in a seemingly endless loop of regulatory and security scandals, prompted in part by its Chinese roots.
The platform, which started life as the lip-syncing app Musical.ly before being acquired by ByteDance for US$1 billion in 2017, first grew wildly popular with teens, and in the span of just two years it went on to gain mainstream attention, offering users an entertaining distraction from Facebook and becoming one of the first non-U.S. apps to turn into a global hit.
But fame comes with a price. Just as Facebook has drawn increasing scrutiny for amplifying hate and misinformation, algorithmically enabling age, gender and racial discrimination in hiring and housing ads, and fuelling the rise of extremism (not to mention a long list of privacy missteps), TikTok's soaring success has made it a ripe target for rivals and lawmakers, landing it in the maelstrom of a geopolitical war that could effectively curtail its ambitions to expand beyond its home country.
Scrutiny over TikTok reached a fever pitch a few months ago over its wide-ranging data collection practices, with privacy and security researchers accusing the app of amassing vast swaths of user information, such as hardware IDs, memory usage, installed apps, IP addresses, Wi-Fi access points, and GPS pings every 30 seconds, from the devices on which it was installed.
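To give a sense of how little effort that kind of collection takes, here is a minimal Kotlin sketch, assuming an ordinary Android app with run-of-the-mill permissions, that gathers a few of the signal categories researchers flagged. The function name and structure are hypothetical; it is an illustration of the data involved, not TikTok's actual code.

```kotlin
// Hypothetical Kotlin/Android sketch of the categories of data described above,
// using only documented platform APIs. This is NOT TikTok's code; it merely shows
// how readily an ordinary app can assemble such a snapshot.
import android.content.Context
import android.net.wifi.WifiManager
import android.os.Build
import android.provider.Settings

fun collectDeviceSnapshot(context: Context): Map<String, Any?> {
    val wifi = context.applicationContext
        .getSystemService(Context.WIFI_SERVICE) as WifiManager
    return mapOf(
        // Hardware and device identifiers
        "manufacturer" to Build.MANUFACTURER,
        "model" to Build.MODEL,
        "androidId" to Settings.Secure.getString(
            context.contentResolver, Settings.Secure.ANDROID_ID
        ),
        // Installed applications (package names only; needs QUERY_ALL_PACKAGES on Android 11+)
        "installedApps" to context.packageManager
            .getInstalledApplications(0)
            .map { it.packageName },
        // Current Wi-Fi access point (needs location permission on modern Android)
        "wifiSsid" to wifi.connectionInfo?.ssid,
        // Memory usage of this process
        "usedMemoryBytes" to (Runtime.getRuntime().totalMemory()
                - Runtime.getRuntime().freeMemory())
    )
}
```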
"There's also a few snippets of code on the Android version that allows for the downloading of a remote .ZIP file, unzipping it, and executing said binary," a reverse engineer posted on Reddit.
More troubling still, it was alleged that TikTok is not only engineered to hook new users on the platform, but also designed so that any attempt to reverse engineer it causes the app's behaviour to change slightly to cover its tracks.
Nor did it help that TikTok's iOS app was caught reading clipboard data without explicit consent, potentially exposing passwords or other sensitive information. The issue came to light thanks to a new feature in iOS 14 that notifies users whenever an app accesses the device's clipboard. Although the company said the feature was designed to "identify repetitive, spammy behaviour," it has since been removed. (TikTok is hardly alone here; LinkedIn, Reddit, and scores of other apps were also called out for similar clipboard-reading behaviour.)
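Clipboard snooping is no harder on Android, for what it's worth. As a rough, hypothetical analogue of the behaviour Apple's new banner surfaces (again a sketch, not TikTok's code), any app holding input focus can read the clipboard in a handful of lines of Kotlin, though Android 10 and later block background apps from doing so.

```kotlin
// Hypothetical sketch: reading the clipboard from an Android app that holds
// input focus. Android 10+ blocks background apps from doing this, a restriction
// in the same spirit as the iOS 14 paste notification described above.
import android.content.ClipboardManager
import android.content.Context

fun readClipboardText(context: Context): String? {
    val clipboard = context
        .getSystemService(Context.CLIPBOARD_SERVICE) as ClipboardManager
    val clip = clipboard.primaryClip ?: return null
    if (clip.itemCount == 0) return null
    // coerceToText() flattens URIs and intents in the clip into plain text.
    return clip.getItemAt(0).coerceToText(context).toString()
}
```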
But despite ByteDance's assurances that it sends no user data to China, these issues, along with its heavy-handed content moderation policies, contributed to a ban in India, which late last month outlawed 59 apps and services, including Helo, Alibaba Group's UC Browser and UC News, and Tencent's WeChat and QQ, citing privacy and security concerns over their connections to the Chinese government.
Damning as these accusations are, TikTok isn't doing anything its peers haven't done. Facebook alone has racked up scores of privacy scandals over the years, from the infamous emotional contagion experiment, in which users' News Feeds were manipulated to toy with their emotional states, to the Cambridge Analytica fiasco, in which a third-party app harvested sensitive user information without authorisation.
That's to say nothing of its role in cratering the news industry, harming the mental health of its content moderators, compromising elections, attempting to remake the global monetary system with itself as a key player, spreading hate speech and propaganda, and more.
What's more, this is not the first time TikTok has come under fire for privacy issues. In February 2019, the company paid US$5.7 million to settle U.S. Federal Trade Commission allegations that it had violated the Children's Online Privacy Protection Act (COPPA) by allowing kids under 13 to sign up for the app without parental consent.
But interference from China isn't to be shrugged off either. Tencent-owned instant messaging service WeChat has previously been found to surveil images and files shared on the platform, to keep tabs even on accounts registered outside China, and to use content from those accounts, without users' informed consent, to train the censorship algorithms it applies to China-registered accounts.
It's no surprise, then, that even as the battle to keep personal data private rages on, netizens are expected to relinquish control over their data as the cost of admission to the conveniences of the digital world.
When you install an app on your phone and use it, the data you share with it (your personal information, your preferences, where you go, and what you buy) is shared in real time with third-party data brokers, who in turn sell it to advertisers to target you with relevant ads, while the app maker gets a cut of the money. This is how the data brokerage industry works, and it's all legal because it's spelled out in the privacy policy and terms of service that most of us never bother to read.
The larger problem, then, rests with the business model of these companies, which is built on tracking users' online habits to profile their behaviour and harvesting that data to feed highly sophisticated, black-box algorithms, while deliberately employing psychological dark patterns that nudge users towards privacy-intrusive options in exchange for a supposedly better service.
The notion that ad-supported networks use addictive, habit-forming interface elements (endless scrolling and pop-up notifications) and automated personalisation (recommended posts and videos) to boost engagement with harmful but compelling content (like conspiracy theories and misinformation) is not new. In that respect, social media is no different from television or radio. But the internet scales up, speeds up, and automates older media in ways that pose unique problems.
What's required is a meaningful framework that puts an end to the uninhibited collection and analysis of personal information for profit maximisation, while at the same time introducing new digital governance reforms to advance consumer privacy, market competition, and algorithmic transparency.