December 22, 2023

Paul Matzko

A new study, “A Tik-Tok-ing Timebomb,” that compares the use of political hashtags on TikTok and Instagram is being widely shared by those calling for a ban or forced sale of TikTok. The report, from the Network Contagion Research Institute, shows that political posts that run counter to Chinese national interests underperform on TikTok (owned by the Chinese company ByteDance) relative to Instagram (owned by US-based Meta). For the authors, this suggests that Chinese authorities are algorithmically throttling content on TikTok that harms their international interests, such as posts about Tibet, the Uyghurs, and Tiananmen Square.

However, the authors of the study made two remarkably basic errors that call into question the fundamental utility of the report.

First, the authors chose a flawed methodology that failed to account for how long each platform has existed. Instagram (launch 2010) is roughly twice as old as TikTok (international launch 2017). Thus, topics that were the subject of intense public discourse in the early 2010s, but which have not been heavily featured in the decade since, will naturally underperform on TikTok versus Instagram. Yet the authors did not adjust their data collection window to reflect this fact.

Second, the authors assumed that the same people use both platforms, which led them to miss the potential for generational cohort effects. In short, the median user of Instagram is older than the median user of TikTok. Compare the largest age segment of users on each platform: 25% of TikTok users in the US are ages 10–19, while 27.4% of Instagram users are 25–34. That roughly decade-wide gap in modal age will skew the ratio of topics covered on the two platforms. Simply put, older and younger users have different interests.

To see how these errors compounded to distort the study’s findings, let’s consider just one of the ten examples proffered by the authors. The study compares the frequency of three common hashtags related to Tibet and the Dalai Lama, finding a highly skewed ratio of 37.7 posts on Instagram for every one post on TikTok. That certainly appears suspicious given the longstanding Chinese censorship of domestic activists advocating for Tibetan independence.

But bear in mind that the study’s authors were not assessing only those posts uploaded in a discrete period, like, say, the fall of 2023. No, they were comparing the total number of posts made since each platform was created (Instagram: 2010; TikTok: 2017).

Also, a quick peek at Google Trends data shows that public discourse about Tibet in the US has been in general decline throughout the 2000s and 2010s, albeit punctuated by sharp spikes (as much as 15 times the baseline) in April 2008 and December 2016, which corresponded to moments of intense Tibetan activism.

Both spikes predated the international rollout of TikTok, and the December 2016 spike came when Instagram already had hundreds of millions of users. It isn’t surprising that six years’ worth of TikTok posts, all from a period of relatively low US interest in Tibet, would be swamped by 13 years of Instagram posts, much of it from years of far greater US interest in Tibet.
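To make the arithmetic concrete, here is a toy back-of-the-envelope calculation in Python. Every number in it is invented purely for illustration; the point is simply that window length and the timing of public interest can, on their own, manufacture a lopsided cumulative ratio without any throttling.

```python
# Toy illustration only: all posting rates below are made up and are not drawn
# from the study or from either platform.

years_instagram = 13      # Instagram: 2010 launch through 2023
years_tiktok = 6          # TikTok: 2017 international launch through 2023

posts_per_year_high_interest = 100_000   # hypothetical rate during peak Tibet discourse
posts_per_year_low_interest = 10_000     # hypothetical rate after interest declined

# Assume roughly the first 7 of Instagram's 13 years overlapped the high-interest era.
high_interest_years = 7
instagram_total = (high_interest_years * posts_per_year_high_interest
                   + (years_instagram - high_interest_years) * posts_per_year_low_interest)

# TikTok's entire lifetime falls within the low-interest era.
tiktok_total = years_tiktok * posts_per_year_low_interest

print(f"Instagram-to-TikTok ratio: {instagram_total / tiktok_total:.1f}")
# -> roughly 12.7x, produced purely by window length and timing, with no throttling
```

Real posting volumes are obviously messier, but the direction of the distortion is the same: the platform that existed during the high-interest years accumulates a large head start before the comparison even begins.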

Furthermore, even if one were hypothetically able to restrict the data on the Tibet hashtag to only posts made after the creation of TikTok, one would still expect to run into generational cohort effects. To put it simply, knowledge of and interest in a topic tends to persist. Millennials who were in college in 2008 during the Tibetan uprising are going to be more likely to be interested in — and thus post about — Tibet today than are members of Generation Z, some of whom were still in diapers in 2008.

None of what I’ve written here proves that TikTok did not or could not manipulate its algorithm to downgrade content unfavorable to the CCP. But it strongly suggests that this particular study is poorly designed and should not be used as serious evidence of algorithmic manipulation by TikTok. Frankly, I’m surprised by just how sloppy it is.

A better study would choose a discrete time frame during which both platforms actually existed and compare the ratio of posts on controversial issues of importance to the Chinese government within that window. If someone does that study, I’d like to see a copy.
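For what it’s worth, here is a minimal sketch of what that comparison might look like, assuming a researcher could somehow obtain per-hashtag post counts restricted to a shared date range. The fetch_hashtag_count function, the date window, and the example hashtags are hypothetical placeholders, not real platform APIs.

```python
from datetime import date

# Hypothetical study sketch: compare hashtag volumes only within a window in
# which both platforms existed, and normalize against apolitical baselines.

SHARED_WINDOW = (date(2018, 1, 1), date(2023, 12, 31))  # both platforms live

def fetch_hashtag_count(platform: str, hashtag: str,
                        window: tuple[date, date]) -> int:
    """Placeholder: return the number of posts tagged `hashtag` on `platform`
    created within `window`. Neither platform exposes such an endpoint
    publicly, so a real study would need researcher API access or sampling."""
    raise NotImplementedError

def windowed_ratio(hashtag: str) -> float:
    """Instagram-to-TikTok post ratio for one hashtag within the shared window."""
    instagram = fetch_hashtag_count("instagram", hashtag, SHARED_WINDOW)
    tiktok = fetch_hashtag_count("tiktok", hashtag, SHARED_WINDOW)
    return instagram / tiktok

# Politically sensitive hashtags would then be compared against apolitical
# baseline hashtags, since overall posting volume and user demographics differ
# between the two platforms regardless of any alleged manipulation.
POLITICAL = ["#tibet", "#uyghur", "#tiananmen"]
BASELINE = ["#travel", "#food", "#music"]
```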

Even then, however, it’s important to remember that these platforms aren’t interchangeable, and any outcomes might be skewed by platform-specific differences. For example, why should we expect the ratio of non-political to political content to be similar? That assumes a constant level of political interest among the users of the two platforms. But that is hardly guaranteed given the generational differences between their user bases, the downstream effects of a photo-first versus a short-form-video-first format, and other platform-specific divergences.

Regardless, the fact that many major news organizations missed these basic flaws and ran credulous coverage of the report is an indictment of mood affiliation in journalism, especially among outlets tasked with covering social media platforms with which they compete for the public’s attention.

Crossposted from the author’s Substack newsletter. Click through and subscribe for more content at the intersection of policy, media, and history.