Facebook releases a report to strengthen community standards.
Facebook has decided to implement a new playbook for flagging offensive content posted by users. The company has also decided to release the outcomes of its internal audits twice a year. Earlier this week, Facebook issued its Community Standards Enforcement Preliminary Report. The report analyzes how the social media platform can keep track of offensive content being uploaded and how it can respond. A few weeks earlier, Facebook had unveiled a set of dos and don’ts for its platform, and the report is a follow-up to that: Facebook executives walked reporters through how they intend to track violations. The report drew some criticism, chiefly over the idea that Facebook plans to control the content shown to users. Facebook, however, clarified earlier that the report highlights newly developed methods that are not yet in practice. None of these methods is set in stone, but the need of the hour is for social media platforms to stay alert and ensure smooth functionality.
Facebook CEO Mark Zuckerberg, who unveiled the transparency report in a Facebook post, says:
“AI still needs to get better before we can use it to effectively remove more linguistically nuanced issues like hate speech in different languages, but we’re working on it.”
After Cambridge Analytica
Facebook has divided offensive content into distinct categories, which can be broadly listed as follows:
- Graphic violence
- Adult content
- Terrorist propaganda
- Fake accounts
- Hate speech
Previously, the company relied heavily on users reporting violations, but now it is using AI to weed them out. The idea is to remove offensive content even before it is displayed to the public. Users are still asked to report offensive content, but that is no longer the primary means of tackling the problem.
How often do content violations occur on Facebook?
After incorporating artificial intelligence, more than a million pieces of offensive graphic content were flagged and taken down, and the company hopes to remove more in the near future. The critical question that arises here is: how many content violations actually occur on Facebook? Schultz and Rosen gave their insights based on data from the first quarter of 2018 and the last quarter of 2017.
An estimate is that around 0.25 percent of content violated the standards set by Facebook. This is a sharp rise from 0.16 percent in the last quarter of 2017. To put it more plainly, approximately 25 out of every 10,000 pieces of content contained offensive graphics. Executives believe this number has increased since last year because of growing terrorist activity around the world and events such as the Syrian war. Experts are now debating how to control this growing number.
“We use a combination of technology, reviews by our teams and reports from our community to identify content that might violate our standards,” the report says.
Rosen also stated that Facebook was able to disable around 600 million fake accounts in the first quarter of this year, within minutes of registration. The social media network hopes to get even better at preventing violations.
Amid the darkness, there’s light
The report also contained a section on how Facebook plans to safeguard users’ news feeds in the future, though most of it is a work in progress at the moment. Schultz made this clear: “All of this is under development.”
Facebook says that the reason for releasing the report was to spotlight the initiatives the company is planning for the future. The social media network is working on upgrading its internal metrics to provide a comfortable user experience free of suspect content and violations. The company has scheduled more such summits in the coming days, with events planned in Oxford and Berlin to push these developments forward. These steps show that Facebook is determined to combat violations and uphold community standards in the near future. We can hope that users won’t have to go through another Cambridge Analytica anytime soon!