Facebook is using Artificial Intelligence to combat offensive content

Facebook releases a report to ensure better community standards.

Facebook has decided to implement a new playbook for flagging offensive content posted by its users. The company has also decided to release the outcomes of its internal audit twice a year.

Earlier this week, Facebook issued its Community Standards Enforcement Preliminary Report.

The report contained an analysis of how social media platforms can keep track of the offensive content being uploaded and how they can respond to it. A few weeks earlier, Facebook had already unveiled a set of dos and don'ts for its social media channels. The report turned out to be a follow-up to that, and Facebook's executives walked reporters through the entire process of how they intend to keep track of violations.

Facebook Against Offensive Content

The report drew some criticism, chief among it the concern that Facebook plans to control the content shown to users. Facebook, however, had clarified earlier that the report highlights newly developed methods that are not yet in practice. None of these methods is set in stone, but the times demand that social media platforms stay alert to ensure smooth functioning.

Facebook CEO Mark Zuckerberg, who unveiled the transparency report in a Facebook post, said:

“AI still needs to get better before we can use it to effectively remove more linguistically nuanced issues like hate speech in different languages, but we’re working on it.”

After Cambridge Analytica

The Cambridge Analytica scandal, in which the data of 87 million users was accessed, has placed Facebook in the spotlight. The report is a response to the rising controversies around Facebook's privacy and user-safety policies. The response to hateful content on Facebook is particularly important given that the network is under intense scrutiny from governments as well as private institutions. Facebook has been examined for serving as a platform for deception-based campaigns and the propaganda of private parties. No matter how mammoth the social network is, it needs to answer the allegations made against its privacy policy.
Facebook has segregated offensive content into distinct categories, which can be broadly listed as follows:

  • Graphic violence
  • Adult content
  • Terrorist propaganda
  • Fake accounts
  • Hate speech
  • Spam

Previously, the company relied largely on users reporting violations, but now it is using AI to weed the offences out. The idea is to remove offensive content even before it is displayed to the public. Users are still asked to report offensive content, but that is no longer the primary means of tackling the problem.
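To make the shift concrete, here is a minimal sketch of what such a proactive, pre-publication check might look like. Everything in it (the classifier stub, the threshold, the function names) is a hypothetical illustration under assumed names, not Facebook's actual system:

```python
# Hypothetical sketch of proactive moderation: score content with a
# classifier BEFORE it is shown publicly, and keep user reports only
# as a fallback. All names here are invented for illustration.

CATEGORIES = [
    "graphic_violence", "adult_content", "terrorist_propaganda",
    "fake_accounts", "hate_speech", "spam",
]

REMOVAL_THRESHOLD = 0.9  # assumed confidence cutoff for auto-removal


def score_content(post_text: str) -> dict:
    """Stand-in for a trained model: returns one score per category."""
    # A real system would run an ML model here; we return zeros for demo.
    return {category: 0.0 for category in CATEGORIES}


def moderate(post_text: str) -> str:
    """Decide whether a post is removed before anyone sees it."""
    scores = score_content(post_text)
    worst = max(scores, key=scores.get)
    if scores[worst] >= REMOVAL_THRESHOLD:
        return f"removed before display ({worst})"
    return "published; user reports remain as a fallback"


print(moderate("example post"))
```

The key design point the report describes is the ordering: the check runs before publication, with user reporting demoted from the primary mechanism to a safety net.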

How often do content violations occur on Facebook?

After incorporating artificial intelligence, Facebook flagged and took down more than a million offensive graphics, and the company hopes to remove even more in the near future. The critical question that arises here is: how many content violations actually occur on Facebook? Schultz and Rosen shared their insights on the basis of data from the first quarter of 2018 and the last quarter of 2017.
An estimate is that around 0.25 percent of content violated the standards set by Facebook, a drastic rise from 0.16 percent in the last quarter of 2017. To make it easier to comprehend, approximately 25 out of every 10,000 pieces of content contain offensive graphics. Executives are of the view that this number has risen since last year because of increasing terrorist activity around the world and events such as the Syrian war. Experts are speculating on how to control this growing number.
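As a quick back-of-the-envelope check, the percentages quoted above can be converted into a per-10,000 rate and a relative change. The figures come from the report; the calculation itself is only illustrative:

```python
# Back-of-the-envelope check of the prevalence figures quoted above.
q4_2017 = 0.16  # percent of content violating standards, Q4 2017
q1_2018 = 0.25  # percent of content violating standards, Q1 2018

per_10k = q1_2018 / 100 * 10_000                     # percent -> per-10,000 rate
relative_rise = (q1_2018 - q4_2017) / q4_2017 * 100  # relative change

print(f"about {per_10k:.0f} in every 10,000 pieces of content")  # -> 25
print(f"roughly a {relative_rise:.0f}% relative increase")       # -> 56%
```

So 0.25 percent does indeed correspond to 25 in 10,000, and the jump from 0.16 percent is a relative increase of more than half.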

“We use a combination of technology, reviews by our teams and reports from our community to identify content that might violate our standards,” the report says.

Rosen also stated that Facebook was able to disable around 600 million fake accounts in the first quarter of this year, within minutes of registration. The social network hopes to build a stronger footing for preventing violations.

Amid the darkness, there's light

The report also contained a section on how Facebook plans to safeguard users' news feeds in the future, though most of it is a work in progress at the moment. Schultz made it clear that "All of this is under development."

Facebook says the reason for releasing the report was to spotlight the initiatives the company is planning for the future. The social network is working on upgrading its internal metrics to provide a comfortable user experience rather than one marred by suspicion and violations. The company has scheduled more such summits in the coming days, with events in Oxford and Berlin to push these developments forward.

These steps show that Facebook is determined to combat violations and uphold community standards in the near future. We can hope that users won't have to go through another Cambridge Analytica anytime soon!

Nur ul ain Chaudhry is a LUMS Economics and Politics graduate who takes a keen interest in tech blogging and tech news. Her forte is inter-brand comparisons and reviews, and startup/Kickstarter stories and ideas.
