Tech This Week | Now is the time for misinformation reform
One of the most evident changes brought about by the pandemic has been the accelerated shift of our interactions online. This means turning to the web not just for engagements with friends or colleagues, but also for questions and comments about the virus and the developments around it: running Google searches on whether the virus can spread through water, for instance, or engaging on Twitter about the latest numbers and how they can be controlled.
Like many such transitions, increased user interaction on platforms comes with multiple anticipated second-order effects. Firstly, the increased number of searches around COVID-19 gives advertisers an incentive to exploit the trend. This includes advertising false cures for the virus, masks, or immunity boosters. It also includes using controversial targeting options (such as anti-vaccine groups) to sell products.
Secondly, it allows bad or ignorant actors to spread misinformation about the virus itself. If you have heard of candles or heat killing the coronavirus, or of hydroxychloroquine as a proven cure, you have been subject to it. Because most dominant platforms have permissive rules grounded in principles of free speech, misinformation is often allowed to stay up.
Thirdly, on the back-end, this presents a fresh set of challenges for content moderation. We are not yet sure what does and does not belong on the internet when it comes to the coronavirus. As a result, platforms are still updating their policies and guidelines, a process that is likely to remain dynamic and evolve with time.
Given these second-order effects, and the rise of misinformation related to the virus, there have been plenty of calls for platforms to step up and act as ‘arbiters of truth’. As a side note, no one is presently happy with the amount of moderation platforms engage in: depending on where you stand, platforms are either moderated too heavily or not moderated enough.
Platforms have reacted to this challenge on two levels: in policy and in action. Prateek Waghre and I wrote a paper analysing, at a granular level, how platforms have reacted to the misinformation challenge. We found that direct, action-driven responses can be classified into three broad categories: allocating funds, making changes to the user interface, and modifying information flows.
Responses by platforms to the spread of misinformation have been swift and varied. Google, Facebook, and TikTok have all announced grants to deal with the problem. They have also vowed to prioritise ads by local and international public health authorities, even providing them with ad credits in some cases.
On the policy front, responses have been just as varied but not as swift. While Google came up with its own policy around COVID-19 misinformation, Facebook pledged to use existing policies to take down content. Here is a snapshot of how the policy landscape has changed, by platform:
Type of Policy Intervention

| Company | Created New Policies | Modified Existing Policies | Applied Existing Policies |
| --- | --- | --- | --- |
| Facebook / Instagram | | | ✔ |
| TikTok | | | ✔ |
| | ✔ | ✔ | |
| YouTube | ✔ | ✔ | |
| | ✔ | ✔ | |
| ShareChat | | | ✔ |
As is evident, there is variance in how platforms are dealing with updates to their misinformation policies. The table also does not do justice to the nuance involved. For instance, Facebook claimed that it would be taking down misinformation related to COVID-19. At the same time, most of the company's underlying policies (its Advertising Policy on Misinformation, Community Standards on False News, and Community Standards on Manipulated Media) only talk about content being downranked or not being surfaced in News Feed.
We often talk about how a crisis can be turned into opportunity, though it can be hard to spot the exact moment when the pivot happens. As things stand, this discrepancy is that moment of opportunity, not just for Facebook but for other platforms as well.
There has never been more collective pressure on platforms to act as arbiters of truth. Yet the policy positions of most platforms remain unchanged and are merely being repurposed to tackle information disorder around COVID-19. That is not ideal, and under current circumstances it is highly subject to change.
The changes brought about by the current situation are going to last, and they do not have to be COVID-specific. This is an opportunity to take stock and redefine the underlying mechanics that platforms use to classify and deal with misinformation in a post-COVID world. For instance, ‘harm’ could be redefined to cover content and ads that contradict not just local and international health authorities, but also established scientific facts such as global warming and climate change. Twitter has already done significant work in this area.
Reform around false news has been overdue for a while. As it turns out, the misinformation crisis brought on by the pandemic may be the perfect opportunity to make it happen.