
Facebook is using AI to combat 500,000 monthly reports of revenge porn


Facebook has been working for years to make its content-processing operations more efficient. It has deployed algorithm after algorithm for a range of uses: everything from auto-tagging people in pictures to identifying videos containing violence is being mapped onto machine learning and artificial intelligence tools. AI has been critical in helping the social media giant manage its userbase of over 2.5 billion people worldwide.

One of the more serious issues that has been harder to control on the platform is the spread of revenge porn. A recent report by NBC says that Facebook has to assess up to 500,000 reports of revenge porn every month. The report details how Facebook controls the nonconsensual spread of intimate photos and uses AI tools to identify and remove such content before it is even reported by other users.

Revenge porn is a form of sexual-privacy invasion and online harassment in which the perpetrator shares, or threatens to share, intimate photos and videos of a partner publicly on social media platforms. To fight this issue and help take down such photos and videos as soon as they are uploaded, Facebook has built a dedicated team of 25 people in addition to deploying AI tools. Notably, this team of 25 is separate from the content moderators Facebook employs for other types of content moderation.

Facebook also owns Instagram, WhatsApp, and Messenger, all of which are popular apps worldwide. NBC reports that the mix of human reviewers and AI tools Facebook has deployed is responsible for tracking down instances of revenge porn across all of its platforms. The issue is obviously complex, and it is greatly affected by cultural differences between regions as well as by time constraints. However, only through AI and machine learning does a solution that scales to a userbase of Facebook's size seem remotely possible.

While the company's efforts to bring AI and machine learning into the mix to enable timely removal of such objectionable content are admirable, they also stand as an example of how hard it can be to get something like this done. Revenge porn is just one of the issues social media platforms face. Terrorist propaganda, white supremacy, anti-vaccination movements, and fake news are all large-scale problems that are proving very difficult to control at this scale.

While intimate images can be posted anywhere on the internet, the act is especially damaging on social media platforms, where everyone in the victim's family and social circle can view the pictures and videos. There have been many cases of honor killings, disownment by family, and physical abuse as a direct result of such images being shared on Facebook, Twitter, Instagram, etc., and even cases of suicide and self-harm by victims. Because the consequences can be so severe, the responsibility then falls on Facebook, as they are the only ones in a position to do something effective about it.

Getting AI involved

Back in March, Facebook announced that it would be introducing an AI based detection mechanism to identify “near nude images or videos that are shared without permission on Facebook and Instagram.” Prior to that, Facebook had already been using artificial intelligence to detect images of child nudity and stop exploitation of minors on the platform.

AI can be helpful here because of how quickly it can process the hundreds of millions of images shared on the platform every minute. No human workforce could possibly keep up with such a massive task. AI, however, can be trained to keep a lookout for easy identifiers that a post may be an instance of revenge porn. According to the company, the AI can be trained to scan posts for certain clues and quickly flag images and videos that could be someone's intimate photos posted without permission.

Clues that the AI can be trained on include captions like "look at this" paired with laughing emojis, along with some form of nudity or near-nudity in the shared picture. Once the algorithms flag an image, it is sent to human moderators for review. "Our goal is to find the vengeful context, and the nude or near-nude, and however many signals we need to look at, we'll use that," said Mike Masland, Facebook's product manager for fighting revenge porn.
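To make the idea concrete, the kind of signal-combining described above can be sketched roughly as follows. This is purely illustrative: the phrase lists, weights, threshold, and function names are assumptions for the sake of the example, not Facebook's actual system, and the nudity score is assumed to come from a separate image classifier.

```python
# Illustrative sketch of combining weak signals (caption text, emojis,
# an image-model nudity score) into a single flag-for-review decision.
# All names, weights, and thresholds here are hypothetical.

SUSPICIOUS_PHRASES = {"look at this", "check this out"}
LAUGHING_EMOJIS = {"\U0001F602", "\U0001F923"}  # 😂, 🤣

def flag_for_review(caption: str, nudity_score: float) -> bool:
    """Return True if the combined signal score crosses the review threshold."""
    caption_lower = caption.lower()
    score = 0.0
    if any(phrase in caption_lower for phrase in SUSPICIOUS_PHRASES):
        score += 0.4  # vengeful-context caption signal
    if any(emoji in caption for emoji in LAUGHING_EMOJIS):
        score += 0.2  # mocking-emoji signal
    score += 0.4 * nudity_score  # signal from a separate image model
    return score >= 0.6  # above threshold: route to a human moderator

# A mocking caption plus a high nudity score is flagged; an ordinary
# post with low nudity score is not.
print(flag_for_review("look at this \U0001F602", 0.9))  # True
print(flag_for_review("beach day with friends", 0.2))   # False
```

The key design point the article describes is that the model does not decide on its own: anything the scoring flags still goes to a human moderator, so the automated step only has to be good at triage, not at final judgment.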

The training data problem

AI needs large amounts of data in order to learn to identify patterns. In the case of revenge porn, this training data consisted of nude images similar to the kind typically shared between partners. Facebook says that it turned to nude or near-nude images and videos that were already posted on the social media platform and had been flagged by its human moderators.
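In other words, the moderators' past decisions become the labels for the training set. A minimal sketch of that labeling step, with entirely hypothetical data structures and field names, might look like this:

```python
# Hypothetical sketch: turning human moderator verdicts on flagged posts
# into a labeled training set for a binary classifier.
from dataclasses import dataclass

@dataclass
class ReviewedPost:
    image_id: str
    moderator_verdict: str  # "violation" or "benign" (assumed labels)

def build_training_set(reviewed: list) -> list:
    """Label each reviewed image 1 (violation) or 0 (benign)."""
    return [
        (post.image_id, 1 if post.moderator_verdict == "violation" else 0)
        for post in reviewed
    ]

posts = [
    ReviewedPost("img_001", "violation"),
    ReviewedPost("img_002", "benign"),
]
print(build_training_set(posts))  # [('img_001', 1), ('img_002', 0)]
```

This also illustrates the feedback loop the company describes: every newly reported and reviewed case adds another labeled example, which is why the model should improve as more incidents are identified.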

Even though the AI may misjudge or entirely miss some cases, the company says the technology is still evolving, adding that the more instances of the crime are reported and identified, the better the AI will become at spotting them in the future, since each one serves as a new training example.

Of course, the privacy problems that arise from using AI to identify such images, and from keeping any kind of record of them, remain. Will Facebook be able to handle this sensitive issue responsibly and solve content moderation, which has proven to be one of the most difficult problems for social media platforms in recent times? We are certainly rooting for them.
