Internal Facebook documents reportedly reveal that the company's tools for detecting rule-violating content are inadequate for the task.
This matters because, for years, Mark Zuckerberg has justified relying on algorithms to fight illegal and abusive content on his platform; before 2020, the Facebook CEO went so far as to assure that its artificial intelligence would be able to remove “most of the problematic content.” That way, Facebook does not have to hire as many human moderators and can leave moderation of its platform on “automatic.”
In practice, these algorithms miss most of the content that violates Facebook’s rules, and the company knows it, according to internal documents published by the Wall Street Journal. The internal figures do not paint the AI in a good light: it removed posts accounting for only 3% to 5% of hate speech on the platform, and only 0.6% of the posts that violated its rules against violence.
Those figures bear no resemblance to the ones Facebook itself publishes in its reports; according to the latest one, released in February to deny any role in the assault on the US Capitol, its supposedly “super efficient” artificial intelligence detected and deleted 97% of hateful posts, even before any human had flagged them.
The gap does not surprise researchers and organizations that study Facebook, who have long warned that the company’s figures do not match those of third-party studies; they have also denounced the obstacles Facebook puts in the way of obtaining this kind of data, and its lack of transparency about how it reaches its conclusions.
Facebook has responded to the Wall Street Journal’s reporting, arguing that the documents are old and outdated, and that they show its work is “a journey of several years.”
Its defense centers on the claim that it is “more important” to look at how hate speech overall is being reduced on Facebook than at how much hateful content is removed. That is why it considers prevalence the most “objective” metric, since it represents the content that has “escaped” its filters; by its own numbers, prevalence has fallen by 50% over the last three quarters.
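For context, Facebook typically reports prevalence as the number of views of violating content per 10,000 content views, which measures how much rule-breaking material users actually see rather than how much of it gets removed. The sketch below is only an illustration of that kind of calculation, with made-up numbers that are not Facebook’s:

```python
# Illustrative sketch of a "prevalence"-style metric: views of violating
# content per 10,000 total content views. All figures here are hypothetical.

def prevalence_per_10k(violating_views: int, total_views: int) -> float:
    """Return views of violating content per 10,000 content views."""
    return 10_000 * violating_views / total_views

# Hypothetical example: if hate speech drew 12 views out of every 100,000
# content views last quarter and 6 this quarter, prevalence would have
# fallen by 50%, even if little of that content was actually removed.
last_quarter = prevalence_per_10k(violating_views=12, total_views=100_000)  # 1.2 per 10,000
this_quarter = prevalence_per_10k(violating_views=6, total_views=100_000)   # 0.6 per 10,000
print(last_quarter, this_quarter)  # 1.2 0.6 -> a 50% drop in prevalence
```

As the example suggests, a falling prevalence figure says how often users encounter violating content, not how much of it the AI actually removes, which is precisely the distinction the internal documents highlight.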
In other words, Facebook is tacitly admitting that it cannot remove hateful content from its platform, and arguing instead that what matters is that very few people see it.
This is by no means the first time this month alone that Facebook has had to defend itself against its own internal studies; a former Facebook employee leaked documents suggesting that the company fails to act against hateful content because doing so would hurt it financially.