The human rights organization Global Witness conducted a test revealing that Meta approved ads inciting violence and attacks on democracy in Brazil. The experiment aimed to check whether the company’s stated measure to block this type of content on Facebook is actually effective.
To reach this conclusion, the organization submitted 16 ads to Facebook that called on people to invade public buildings, alleged election fraud, and even called for the death of children whose parents voted for the candidate elected president. The social network reportedly approved 14 of them, which would indicate that the announced measure was not enough to prevent the spread of violent content.
Global Witness digital threats campaigner Rosie Sharpe accused the social network of failing to deliver on its promise. “There is no way the kind of violent content we tested should ever be approved for publication by a major social media company like Facebook,” she said in a press release.
Unlike Facebook, YouTube approved none of the similar ads. Besides rejecting the paid promotion, the video platform also suspended the accounts that tried to spread the hateful messages. This suggests that such content can be detected by a platform’s algorithms, provided they are calibrated for that purpose.
As soon as the test ads were approved, the organization says it deleted them to prevent their dissemination. Even so, much of this content does not need paid promotion to go viral, since users themselves can amplify it by sharing it in groups and generating fake interactions.
Meta reportedly downplayed the study
Speaking to FreeGameGuide, a Meta spokesperson said that the sample size (16 ads) was too small to call into question Facebook’s ability to block harmful content:
“As previously disclosed, ahead of last year’s Brazilian elections, we removed hundreds of thousands of pieces of content that violated our violence and incitement policies and rejected tens of thousands of ad submissions before they ran. We use technology and teams to help keep our platforms safe from abuse, and we’re constantly improving our processes to enforce these policies broadly,” he explained.
In its press release, Global Witness suggests that Facebook is not taking the attacks in Brazil as seriously as it took the attacks in the United States, when the company implemented measures to prevent civil unrest from escalating and spreading.
The international organization recommended that the platform and other social media companies commit to intensifying content moderation efforts to avoid similar situations. After the attack on the Capitol in 2021, Facebook took tough measures, including suspending former President Donald Trump’s account for two years.
Unfortunately, Meta’s platforms are currently the main means of distributing this type of content in Brazil. Facebook, Instagram and WhatsApp (especially the latter) are channels that agitators use to mobilize crowds toward violence.