
Is Facebook Doing Enough to Stop Bad Content? You Be the Judge

The report also covers fake accounts, an issue that has drawn more attention in recent months after it was revealed that Russian agents used fake accounts to buy ads in an attempt to influence the 2016 US elections.

Along with fake accounts, Facebook said in its transparency report that it had removed 21 million pieces of content featuring nudity or sexual activity, 2.5 million pieces of hate speech and nearly 2 million items related to terrorism by al-Qaeda and ISIS in the first quarter of 2018.

The company has a policy of removing content that glorifies the suffering of others.

Facebook said that for every 10,000 content views, an average of 22 to 27 contained graphic violence, up from 16 to 19 in the previous quarter, an increase it attributed to the growing volume of graphic content being shared on the platform.
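As a rough illustration of what those figures mean, the sketch below converts the report's views-per-10,000 prevalence metric into plain percentages. This is our own arithmetic, not code or methodology from Facebook's report, and the function name is hypothetical.

```python
# Illustrative only: converts "violating views per 10,000 content views"
# (the prevalence metric the report uses) into a percentage.
# prevalence_pct is our own name, not Facebook's methodology.

def prevalence_pct(views_per_10k: float) -> float:
    """Share of all content views that contained violating content."""
    return views_per_10k / 10_000 * 100

# Q1 2018: 22 to 27 views of graphic violence per 10,000 content views
print(f"Q1 2018: {prevalence_pct(22):.2f}% to {prevalence_pct(27):.2f}%")  # 0.22% to 0.27%

# Q4 2017: 16 to 19 per 10,000
print(f"Q4 2017: {prevalence_pct(16):.2f}% to {prevalence_pct(19):.2f}%")  # 0.16% to 0.19%
```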

"We have a lot of work still to do to prevent abuse", Facebook Product Management vice president Guy Rosen said.

In the latest stop on its post-Cambridge Analytica transparency tour, Facebook today unveiled its first-ever Community Standards Enforcement Report, an 81-page tome that spells out how much objectionable content is removed from the site in six key areas.

Facebook's handling of extreme content is particularly important given that the platform has come under intense scrutiny amid reports of governments and private organizations using it for disinformation campaigns and propaganda.

Mr Rosen added: "We believe that increased transparency tends to lead to increased accountability and responsibility over time, and publishing this information will push us to improve more quickly too."

Facebook does not fully know why people are posting more graphic violence but believes continued fighting in Syria may have been one reason, said Alex Schultz, Facebook's vice president of data analytics.

On Tuesday, Facebook said it took action on some 2.5 million pieces of hate speech in the first three months of 2018, up from 1.6 million in the last three months of 2017.

The report covers Facebook's enforcement efforts in Q4 2017 and Q1 2018, and shows an uptick in the prevalence of nudity and graphic violence on the platform.

"This is especially true where we've been able to build artificial intelligence technology that automatically identifies content that might violate our standards," the company said. That improved detection led to old as well as new content of this type being taken down. Facebook estimates that overall, 3 to 4 percent of its monthly active users are fake.

Graphic violence: During Q1, Facebook took action against 3.4 million pieces of content for graphic violence, up 183 percent from 1.2 million during Q4. The inaugural report was meant to "help our teams understand what is happening" on the site, he said.

During Q1, the social network flagged 96 percent of all nudity before users reported it, and it says it found and flagged almost 100 percent of spam content in both Q1 and Q4.

Tuesday's report said Facebook disabled 583 million fake accounts during the first three months of this year, down from 694 million during the previous quarter. "Our metrics can vary widely for fake accounts acted on," the report notes, "driven by new cyberattacks and the variability of our detection technology's ability to find and flag them."
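For readers checking the percentage changes cited above, here is a short sketch of the arithmetic behind them. It is our own illustration using the counts reported in the article, not code from the report itself.

```python
# Illustrative only: derives the quarter-over-quarter changes
# from the raw counts cited in the report.

def pct_change(old: float, new: float) -> float:
    """Percentage change from an old value to a new one."""
    return (new - old) / old * 100

# Graphic-violence takedowns: 1.2 million in Q4 2017 -> 3.4 million in Q1 2018
print(f"Graphic violence: {pct_change(1.2e6, 3.4e6):+.0f}%")  # roughly +183%

# Hate-speech takedowns: 1.6 million -> 2.5 million
print(f"Hate speech: {pct_change(1.6e6, 2.5e6):+.0f}%")       # roughly +56%

# Fake accounts disabled: 694 million -> 583 million
print(f"Fake accounts: {pct_change(694e6, 583e6):+.0f}%")     # roughly -16%
```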