1. Factual background

On 18 March 2026, Liverpool FC defeated Galatasaray SK 4–0 in the second leg of the UEFA Champions League Round of 16 at Anfield. During the first half, Liverpool centre-back Ibrahima Konaté was involved in a collision with Galatasaray striker Victor Osimhen, who was subsequently diagnosed with a fractured forearm and substituted at half-time.

Post-match comments from Galatasaray’s coaching staff, criticising Konaté’s physical play, triggered a wave of racist abuse directed at the player on Instagram. The dominant motif was the comparison of Konaté—a Black man born in Paris to a family of Malian descent—to a monkey.

On 20 March 2026, Liverpool FC issued a statement describing the abuse as “vile and abhorrent” and “dehumanising, cowardly and rooted in hate.” The club called on social media companies to act, stating: “These platforms have the power, the technology and the resources to prevent this abuse, yet too often they fail to do so.”

2. What Vigilia found and reported

Vigilia identified and reported 285 comments on Konaté’s Instagram page that explicitly compared him to a monkey. The comments were posted on publicly accessible content. They were not ambiguous. They were not “edge cases” requiring sophisticated contextual analysis. A person comparing a Black footballer to a monkey is engaging in one of the oldest and most well-documented forms of racial dehumanisation.

All 285 comments were reported to Instagram through its in-app reporting mechanism. As of the time of writing—more than 48 hours after the reports were submitted—Instagram has not taken action on any of the reported comments.

[Figure: Screenshot of Konaté’s Instagram page showing unmoderated racist comments.]

3. Instagram’s 75-report cap

In the course of reporting the 285 identified comments, Vigilia encountered a hard limit imposed by Instagram on the number of reports a single account can submit within a 24-hour window. After 75 reports, Instagram stopped registering further submissions from the reporting account. No error message was displayed; the platform simply ceased to add the additional reports to the support centre that aggregates a user’s reporting activity.

The practical consequence is that 210 of the 285 comments Vigilia identified and attempted to report will not receive an official moderation decision from Instagram. Vigilia has no visibility into whether those 210 comments were even logged in Instagram’s moderation queue. As far as we can determine, from the platform’s perspective, those reports do not exist.
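The arithmetic behind these figures can be sketched as follows. This is an illustrative model only, assuming the cap is a fixed 75 reports per account per rolling 24-hour window, as Vigilia observed; Instagram has not documented the limit publicly.

```python
from math import ceil

# Figures from Vigilia's report:
total_comments = 285  # racist comments identified
daily_cap = 75        # observed per-account report limit per 24-hour window

# Reports a single account can actually register on the first day,
# and the remainder that silently fails to register.
registered = min(total_comments, daily_cap)
unregistered = total_comments - registered
print(f"registered on day 1: {registered}")   # 75
print(f"never registered:    {unregistered}") # 210

# Hypothetical workaround: if the cap resets daily and one account
# re-submits the remainder each day, clearing the backlog takes:
days_needed = ceil(total_comments / daily_cap)
print(f"days to report everything from one account: {days_needed}")  # 4
```

Even under the charitable assumption that the cap resets cleanly every 24 hours, a single reporting account would need four days to notify Instagram of content concentrated on one profile, during which the abuse remains visible.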

This cap has significant implications. The DSA’s notice-and-action regime (Article 16) is predicated on the assumption that users and organisations can effectively notify platforms of violative content. When a platform limits the number of reports that can be submitted by a single account to 75 per day—and the volume of violative content on a single post or profile routinely exceeds that number—the reporting mechanism ceases to function as designed. It becomes, in effect, a bottleneck that shields the platform from the very knowledge that would require it to act.

Whether or not this is Instagram’s intent, it is Instagram’s outcome: a reporting architecture that structurally limits the platform’s exposure to liability by capping how much illegal content it can be made aware of.

This is particularly concerning in cases like this one, where the content is not scattered across the platform but concentrated on a single, highly visible profile—and where volume is itself part of the harm.

4. Referral to out-of-court dispute settlement

Given Instagram’s failure to act, Vigilia has referred all 285 cases to a certified out-of-court dispute settlement (ODS) body under Article 21 of the Digital Services Act (Regulation (EU) 2022/2065). 

5. Content violation analysis

5.1 Instagram’s Community Guidelines

Instagram’s Community Guidelines (as stated in Meta’s Hate Speech policy) prohibit content that targets people on the basis of race or ethnicity to dehumanise them, including through comparisons to animals. Comparing a Black person to a monkey is listed as a Tier 1 violation—the most severe category—under Meta’s own classification system. There is nothing ambiguous about the content reported. Instagram’s own rules call for its removal. Instagram did not remove it.

5.2 Hate speech under EU and French law

We chose to report the comments as violations of Instagram’s Community Guidelines, on the assumption that the platform is most familiar with its own rules and would therefore act fastest. It bears noting, however, that these comments are also very likely to violate the law of most EU Member States.

In France, for instance, courts have unambiguously held that comparing a Black person to a monkey constitutes illegal racist abuse.

6. Commentary

6.1 On the nature of the content

The comments in question are not contextually dependent, satirical, or open to charitable interpretation. They compare a Black man to a monkey. This is the kind of content that has been universally recognised as racist hate speech for longer than Instagram has existed. The fact that it requires stating in 2026 is, in itself, the problem.

6.2 On Meta’s “More speech, fewer mistakes” policy

In January 2025, Meta CEO Mark Zuckerberg announced a sweeping overhaul of the company’s content moderation approach under the banner “More speech, fewer mistakes.” The initiative involved dismantling third-party fact-checking programmes, rolling back automated enforcement except for “high-severity violations,” and relaxing restrictions on various categories of speech. Meta’s own internal classification system designates dehumanising comparisons based on race—including specifically comparing “Black people and apes”—as Tier 1 violations: the highest severity category. In other words, this is precisely the kind of content that Meta has explicitly committed to continuing to enforce against, even under its new regime.

And yet, 285 Tier 1 violations sat unmoderated for more than 48 hours after being individually reported. The “More speech” part of the policy appears to be working as advertised; the “fewer mistakes” part is more doubtful.

7. Next steps

Vigilia has referred the 285 comments to a certified ODS body for independent review. We expect the outcome to be straightforward: comparing a Black person to a monkey violates Instagram’s own Community Guidelines (not to mention applicable EU Member State criminal law).