Facebook tests hate speech reporting


Facebook is drawing attention again, this time for a hate speech glitch.

The options that appeared were labelled "hate speech", "test p1", "test p2" and "test p3".

"We define hate speech as a direct attack on people based on what we call protected characteristics - race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease", Facebook said. They've currently begun testing it in worldwide markets, such as New Zealand and Australia for now, according to TechCrunch. A bug caused it to launch publicly.

According to several first-hand reports, Facebook users saw a button asking "Does this post contain hate speech?" on all of their News Feed posts, along with options to select "Yes" or "No".

This image shows the "hate speech" button accidentally set live by Facebook on Tuesday, May 1, 2018.

If users clicked "no", the message - which appeared twice on adverts - disappeared.

Facebook's ability to police the content on its platform has been under scrutiny since the 2016 election season, when misinformation and divisive messages marred its News Feed and advertising.

Facebook VP of product Guy Rosen said that the feature was a "bug" and a "test" that had been incorrectly applied to all posts, including Mark Zuckerberg's.

Zuckerberg addressed the difficulty of detecting hate speech during testimony before a joint hearing of the Senate judiciary and commerce committees.

"Hate speech - I am optimistic that over a five to ten year period we'll have AI tools that can get into some of the nuances, the linguistic nuances of different types of content, to be more accurate in flagging things for our systems, but today we're just not there on that," he said.

Facebook has also said it is "starting with a feature that addresses feedback we've heard consistently from people who use Facebook, privacy advocates and regulators: everyone should have more information and control over the data Facebook receives from other websites and apps that use our services".
