
Facebook Audit Deletes 1.5 Billion Fake Accounts And Content


Facebook has deleted 1.5 billion fake accounts and pieces of content uncovered through its app audit. Since the suspension of 200 apps, the company has discovered millions of fake accounts and posts.

From January 1 to March 31, Facebook reports roughly 6.5 million attempts to create fake accounts every day among its 2.2 billion monthly active users. Facebook is working to develop more advanced AI to better detect fake accounts.

Deleting fake accounts is essential to keeping Facebook’s promise to create a better user experience through stronger security protocols. During the first quarter, Facebook’s AI took down 837 million spam posts, removing nearly all of them before any user had to report them.

These actions are meant to prevent further misuse of data while preserving the experience users expect. The deleted spam (837 million posts) and fake accounts (583 million) together make up the roughly 1.5 billion removed posts and accounts. Most of these accounts were directed by bots to influence the election.

Facebook’s Community Standards Enforcement Report reveals that in the first quarter the company took down 21 million posts containing sexual nudity or activity and removed 2.5 million posts involving hate speech. More than three million pieces of violent content were deleted or censored by Facebook.

Facebook’s detection technology is being trained to better identify hate speech so it can be removed before users report the post or account. Reviewing hate speech requires detailed scrutiny of context to determine whether material violates the standards.

The detection rate for hate speech is lower: only 38 percent of such posts and accounts were taken down before users reported them. By contrast, the detection technology automatically removed 99.5 percent of terrorist propaganda and 95.8 percent of nudity before users reported the content.

The report was published to show that Facebook is doing its part to better protect its users. Facebook plans to release similar reports every six months as part of its commitment to transparency in protecting users from harmful content and accounts.
