Abuse thrives as social media giants watch on
Even death and rape threats are not taken seriously by many social media giants.
Social media platforms were once known for funny memes and hilarious videos, but today they have become notorious for threats and harassment. It’s a toxic environment where people are singled out and threatened for their beliefs.
Adding to the problem is the lack of interest shown by social media giants in handling harassment. When you report a threat on platforms like Instagram, Twitter or Facebook, a polite message often follows: ‘Thank you for reporting. We carefully review reports of threats and consider many things when determining whether a threat is credible’. However, they soon revert with, ‘We reviewed your report carefully and found no violation of our rules’.
On Sunday, journalist Nidhi Razdan received a death threat in a private message saying, “I will hang you, I will execute you”. When she reported it, Facebook initially said it did not violate their rules. According to them, “In determining whether a threat is credible, we may also consider additional information such as a targeted person’s public visibility and vulnerability”.
On the face of it, the threat did seem serious. In this case, naming and shaming the social media platform on another one helped. No sooner had the journalist tweeted about it than the issue garnered a lot of attention, and eventually Facebook admitted its mistake and suspended the detractor’s account.
Actress Richa Chadha, who reported a rape and death threat, was at the receiving end of a similar attitude. Recalling Twitter’s response, she said, “They said ‘no violation within context’.”
Many people wonder why threats get a free pass, among them actor Faran Akhtar, who asked Instagram, “How is a death threat not a violation of your guidelines?”
The problem seems to lie with “algorithmic and human reviewers” who label comments or posts as offensive or non-offensive without considering the context.
Careful consideration is given only when a user complains a second or third time. Threats go unnoticed because these platforms don’t have enough properly trained moderators to act on them. Hiring people for content moderation incurs a huge cost, which is why these platforms are slow to react to abuse.
While all of these platforms say they have strict norms, it’s the implementation that often suffers.
A Twitter spokesperson said, “We start from a position of assuming that people do not intend to violate our rules. Unless a violation is so egregious that we must immediately suspend an account, we first try to educate people about our rules and give them a chance to correct their behavior. We show the violator the offensive Tweet(s), explain which rule was broken, and require them to delete the content before they can Tweet again.”
But when it comes to death threats, the logic of waiting for repeated violations is baffling.
While the European Union has made it mandatory for social media platforms to act on all kinds of content, Section 79 of the Indian IT Act makes it optional for platforms to take down content. As a result, reacting to threats isn’t a top priority for them, and the only way a victim can seek redress is by approaching the police and invoking the provisions of the IPC. Since the way interactions take place differs across the world, activists are now asking Facebook to develop guidelines specific to each region, instead of following a universal policy.