Facebook Moderators Claim Zuckerberg is Asking Them to ‘Risk Our Lives’


Facebook moderators demand Covid-19 safety protections

The firm's new Community Standards Enforcement Report showed that millions of posts were taken down because they included misleading claims, such as fake preventative measures and exaggerated cures, that could lead to imminent physical harm.

The latest Community Standards Enforcement Report covers how Facebook enforced its rules from July to September, with metrics across 12 policies on Facebook and 10 policies on Instagram.

On Thursday, Facebook said its AI systems detected 94.7% of the 22.1 million pieces of hate-speech content it removed from the social site in the third quarter of 2020, up from 80.5% of the 6.9 million pieces it took down in the same quarter a year earlier. Separately, the company reported that for every 10,000 views of content on the platform, 10 to 11 were views of hate speech.

In other words, roughly 0.10 to 0.11 percent of all content views on the platform were identified as views of hate speech.
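
That prevalence figure is simple arithmetic: the share of sampled content views that turn out to be hate speech. Below is a minimal illustration in Python; the counts are made up, and Facebook's actual sampling and labeling methodology is far more involved and not public in detail.

```python
# Illustrative only: how "10-11 views of hate speech per 10,000 content views"
# maps to a percentage. The numbers below are hypothetical.
hate_speech_views = 11    # hypothetical hate-speech views found in a sample
total_views = 10_000      # hypothetical total content views in that sample

prevalence = hate_speech_views / total_views
print(f"Prevalence: {prevalence:.2%}")  # prints "Prevalence: 0.11%"
```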

As The Verge reported in October, Facebook content moderators are being "forced" back to the office in Dublin, Ireland. Those who do come into the office, the moderators said, should be given hazard pay. Removing objectionable user posts and ads is a tricky, never-ending task, in part because people are uniquely good at understanding what differentiates, say, an artistic nude painting from an exploitative photo, or how words and images that seem innocent on their own can be hurtful when paired. Users can also manually report a post they think violates Facebook's or Instagram's rules.

A group of moderators who review content for Facebook published an open letter on Wednesday.

"To cover the pressing need to moderate the masses of violence, hate, terrorism, child abuse, and other horrors that we fight for you every day, you sought to substitute our work with the work of a machine", reads the letter. Today, Facebook proactively detects about 95% of hate speech content that is being removed.

This is not the first time Facebook has faced internal problems with its moderators. To measure the prevalence of hate speech, the company takes representative samples of the content seen on the platform; because hate speech depends on language and cultural context, it sends those samples to reviewers across different languages and regions.
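
As a rough sketch of that sampling step, the snippet below groups hypothetical content views by language and draws a small random sample from each group for human review. The data structures and function names here are assumptions made purely for illustration, not Facebook's actual pipeline.

```python
# Hypothetical sketch: stratify content views by language so reviewers with the
# right language and cultural context label each sample. Illustrative only.
import random
from collections import defaultdict

content_views = [
    {"post_id": 1, "language": "en", "text": "..."},
    {"post_id": 2, "language": "de", "text": "..."},
    {"post_id": 3, "language": "pl", "text": "..."},
    # ...millions more in practice
]

def sample_for_review(views, per_language=2, seed=0):
    """Draw a small random sample of views per language for human review."""
    random.seed(seed)
    by_language = defaultdict(list)
    for view in views:
        by_language[view["language"]].append(view)
    return {
        lang: random.sample(items, min(per_language, len(items)))
        for lang, items in by_language.items()
    }

for lang, items in sample_for_review(content_views).items():
    print(lang, [v["post_id"] for v in items])
```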

Facebook CEO Mark Zuckerberg testifies remotely via videoconference in a screengrab from a Senate Judiciary Committee hearing.

Even with a clear definition of hate speech, building AI that can reliably detect it is a much harder challenge.

What this letter suggests is that the AI is simply not working as well as Facebook executives would hope. The human review of those samples, in turn, helps Facebook develop AI with greater language-understanding capabilities.

This post has been updated to reflect that full-time Facebook employees are also demanding these changes in solidarity with content moderators.

Facebook has long struggled to deal with misinformation on a range of topics.

As it is, there have already been Covid-19 outbreaks in several of Facebook's offices, with workers in Ireland, Germany, Poland and the United States testing positive for the virus. Misinformation appears to be Facebook's most widespread problem, though the company removed or labeled millions of posts in other categories as well, including bullying, harassment and spam. Misinformation the company considers less risky is sent to its fact checkers for debunking rather than being removed.

And while strides have been made in proactive detection of hate speech, the platform still has a lot of work to do.

The company has acknowledged that its AI technology is still not flawless and requires further development.
