Facebook's AI system reports more offensive content than humans


Update: 2016-06-01 12:53 GMT

Washington: In an attempt to combat hate speech, Facebook has said its artificial intelligence (AI) systems now report more offensive photos than humans do, which can help remove such content before it harms the people who would otherwise see it.

When users upload something offensive to disturb people, it normally has to be seen and flagged by at least one person. Offensive posts include content that is hate speech, threatening or pornographic; incites violence; or contains nudity or graphic or gratuitous violence.

For example, a bully, jilted ex-lover, stalker, terrorist or troll could post offensive photos to someone's wall, a group, an event or the news feed, TechCrunch reported. By the time such content is flagged as offensive and taken down by Facebook, the damage may already be done.

Now, AI is helping Facebook unlock active moderation at scale by having computers scan every uploaded image before anyone sees it. "Today we have more offensive photos being reported by AI algorithms than by people," said Joaquin Candela, Facebook's Director of Engineering for Applied Machine Learning.
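Facebook has not published the details of this pipeline, but the basic idea is to score every image at upload time and route likely policy violations to human reviewers before other users can see them. The sketch below is a minimal illustration of that flow, assuming a trained classifier and an arbitrary reporting threshold; all names, values and functions here are hypothetical, not Facebook's actual system.

```python
# A minimal sketch of proactive image moderation. The classifier is a
# stub and the threshold is an assumption; a real system would run a
# trained neural network here.

from dataclasses import dataclass

REPORT_THRESHOLD = 0.9  # assumed confidence cut-off for auto-reporting


@dataclass
class Upload:
    image_id: str
    image_bytes: bytes


def offensiveness_score(image_bytes: bytes) -> float:
    """Stand-in for a trained image classifier that returns the
    estimated probability that the image violates content policy."""
    return 0.0  # stub: treats every image as benign


def moderate(upload: Upload) -> str:
    """Score an upload before it is shown to anyone, so the AI
    (rather than a human viewer) files the first report."""
    score = offensiveness_score(upload.image_bytes)
    if score >= REPORT_THRESHOLD:
        return "held_for_review"  # routed to human moderators
    return "published"


if __name__ == "__main__":
    post = Upload(image_id="example", image_bytes=b"\x89PNG...")
    print(moderate(post))  # prints "published" with the stub model
```

The key design point this illustrates is ordering: the classifier runs before publication, so the first "report" can come from software rather than from a user who has already been exposed to the content.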

As many as 25 per cent of Facebook's engineers now regularly use the company's internal AI platform to build features and do business, Facebook said. This AI helps rank news feed stories, describe the content of photos aloud for visually impaired users and automatically write closed captions for video ads, which increases view time by 12 per cent.

AI could eventually help social networking sites combat hate speech more broadly. Facebook, along with Twitter, YouTube and Microsoft, yesterday agreed to a new European Union code of conduct on countering illegal hate speech online.
