‘Too Slow’ in Preventing Hate Speech That Led To Myanmar Violence: Facebook

Aug. 16, 2018



Facebook acknowledged on Thursday that the ethnic violence in Myanmar is horrific and that it was “too slow” to prevent the spread of misinformation and hate speech on its platform.

The admission came after a Reuters investigation on Wednesday revealed that Facebook has struggled to address hate posts about the minority Rohingya. The social media giant said the rate at which bad content is reported in Burmese, whether it’s hate speech or misinformation, is low.

“This is due to challenges with our reporting tools, technical issues with font display and a lack of familiarity with our policies. We’re investing heavily in Artificial Intelligence that can proactively flag posts that break our rules,” Sara Su, Product Manager at Facebook, said in a statement.

Facebook said that as recently as last week it had proactively identified posts indicating a credible threat of violence in Myanmar.

“We removed the posts and flagged them to civil society groups to ensure that they were aware of potential violence,” said the blog post.

In May, a coalition of activists from eight countries, including India and Myanmar, called on Facebook to put in place a transparent and consistent approach to moderation.

The coalition demanded civil rights and political bias audits into Facebook’s role in abetting human rights abuses, spreading misinformation and manipulation of democratic processes in their respective countries.

Besides India and Myanmar, the other countries that the activists represented were Bangladesh, Sri Lanka, Vietnam, the Philippines, Syria and Ethiopia.

Facebook said that as of June, it had over 60 Myanmar-language experts reviewing content and would have at least 100 by the end of the year.

“But it’s not enough to add more reviewers because we can’t rely on reports alone to catch bad content. Engineers across the company are building AI tools that help us identify abusive posts,” said the social media giant.

Beyond Myanmar, activists in Sri Lanka have argued that a lack of local moderators, specifically moderators fluent in the Sinhalese language spoken by the country’s Buddhist majority, allowed hate speech to run wild on the platform.

Facebook said it is working with a network of independent organisations to identify hate posts.

“We are initially focusing our work on countries where false news has had life or death consequences. These include Sri Lanka, India, Cameroon, and the Central African Republic as well as Myanmar,” said the company.