Facebook Introduces New AI System to Tackle Harmful Content

One of Facebook's major challenges is removing harmful content, and the platform continuously innovates to counter it. It periodically publishes a Community Standards Enforcement Report, and it has enabled warning prompts when users try to share possible misinformation or links they have not opened. As part of its push for online content governance, Facebook has introduced a new AI system to tackle harmful content.


Facebook’s new AI system is called Few-Shot Learner (FSL). It can detect harmful content within weeks instead of months. FSL learns from different kinds of data, such as images and text, and it understands more than 100 languages. It works across three scenarios:

  • Few-shot, for a Facebook policy with a few examples
  • Low-shot, for a Facebook policy with a low number of training examples
  • Zero-shot, for a Facebook policy with no examples

The new AI system starts with a general understanding of many different topics. FSL then adapts that understanding to specific policy violations using only a handful of labelled examples, or even none at all. As a result, FSL can quickly identify misleading content or content that incites violence.
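To make the few-shot idea concrete, here is a minimal sketch of few-shot classification by nearest-neighbour matching. This is not Facebook's FSL: a real system uses large learned multilingual embeddings, while this sketch substitutes a toy bag-of-words vector purely for illustration. The example texts and labels are invented.

```python
# Minimal few-shot classification sketch: label new text by its most
# similar labelled example. The bag-of-words "embedding" below is a toy
# stand-in for a learned embedding model (an assumption, not FSL itself).
from collections import Counter
import math

def embed(text):
    """Toy 'embedding': a term-frequency vector over whitespace tokens."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def few_shot_classify(text, labelled_examples):
    """Assign the label of the most similar labelled example."""
    vec = embed(text)
    best_label, best_score = None, -1.0
    for example, label in labelled_examples:
        score = cosine(vec, embed(example))
        if score > best_score:
            best_label, best_score = label, score
    return best_label

# Only two labelled examples ("few-shot") are enough to classify new text.
examples = [
    ("buy cheap followers now limited offer", "spam"),
    ("great photo from our trip last weekend", "benign"),
]
print(few_shot_classify("cheap likes and followers on offer", examples))
# → spam
```

The key point the sketch illustrates is that classification quality depends on the embedding, not on the amount of labelled data per policy, which is why a strong pretrained model can adapt to a new policy from very few examples.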

Facebook announced the new AI system on 08 December 2021.

Implications for Marketers: 

Facebook’s new AI system is a promising way to reduce harmful content on the platform. For marketers, it means a safer place to promote products and services through accurate ads and campaigns that consumers can trust.

Reference: https://about.fb.com/news/2021/12/metas-new-ai-system-tackles-harmful-content/

If you find this post useful, please share it with your friends.
