afaqs! news bureau

Meta fails hate test from ICWI and Ekō, approves political ads inciting violence: Report

The advertisements were created and submitted to Meta’s ad library by India Civil Watch International (ICWI) and Ekō.

Meta, the owner of Facebook and Instagram, approved a series of AI-manipulated political advertisements during India's ongoing election that disseminated disinformation and incited religious violence, according to a report shared exclusively with The Guardian.

The advertisements were conceptualised and submitted to Meta’s ad library (the database of all ads on Facebook and Instagram) by India Civil Watch International (ICWI) and Ekō, a corporate accountability organisation. Their aim was to test Meta’s mechanisms for detecting and blocking political content that could be inflammatory or harmful during India’s six-week 2024 election.

ICWI is committed to upholding the democratic rights of all peoples in India and the Indian communities in North America. Ekō is a community of people from around the world committed to curbing the growing power of corporations.

As mentioned in the report, Facebook approved advertisements in India containing known slurs against Muslims, including phrases like “let’s burn this vermin” and “Hindu blood is spilling, these invaders must be burned.” These ads also featured Hindu supremacist language and spread disinformation about political leaders. One of the approved ads called for the execution of an opposition leader, falsely claiming they wanted to “erase Hindus from India”, and included a photo of a Pakistani flag.

The report mentions that the adverts were based on real hate speech and disinformation prevalent in India, underscoring the capacity of social media platforms to amplify existing harmful narratives. They were submitted midway through voting, which began in April and continues in phases until 1 June.

The researchers submitted 22 adverts in English, Hindi, Bengali, Gujarati, and Kannada to Meta, of which 14 were approved. A further three were approved after small tweaks that did not alter the overall provocative messaging. The researchers removed all approved adverts immediately, before any could be published.

Meta’s systems failed to detect that all of the approved adverts featured AI-manipulated images, despite a public pledge by the company that it was “dedicated” to preventing AI-generated or manipulated content being spread on its platforms during the Indian election.

Five of the adverts were rejected for breaking Meta’s community standards policy on hate speech and violence, including one that featured misinformation about India’s current Prime Minister, Narendra Modi. But the 14 that were approved, which largely targeted Muslims, also “broke Meta’s own policies on hate speech, bullying and harassment, misinformation, and violence and incitement”, according to the report. Meta also failed to recognise that the 14 approved adverts were political or election-related, even though many took aim at political parties and candidates opposing the BJP.

As mentioned in The Guardian story, a Meta spokesperson said people who wanted to run ads about elections or politics “must go through the authorisation process required on our platforms and are responsible for complying with all applicable laws”.

The company further added, “When we find content, including ads, that violates our community standards or community guidelines, we remove it, regardless of its creation mechanism. AI-generated content is also eligible to be reviewed and rated by our network of independent factcheckers – once a content is labeled as ‘altered’ we reduce the content’s distribution. We also require advertisers globally to disclose when they use AI or digital methods to create or alter a political or social issue ad in certain cases.”

Meta has faced accusations of inadequately curbing the dissemination of Islamophobic hate speech, incitement to violence, and anti-Muslim conspiracy theories on its platforms in India. In certain instances, such posts have resulted in real-life incidents of riots and lynchings.

Nick Clegg, Meta’s president of global affairs, characterised India's election as "a massive, massive test for us" and highlighted the extensive preparation the company had undertaken in India over several months. The company stated that it had broadened its network of local and third-party fact-checkers across all platforms and was operating in 20 Indian languages.
