Algospeak is the use of coded expressions to evade automated moderation algorithms on social media platforms such as TikTok and YouTube. It is used to discuss topics deemed sensitive by moderation algorithms while avoiding penalties such as shadow banning. It is a type of internet slang[1] and a form of linguistic self-censorship.[2][3]
The term algospeak is a blend of algorithm and -speak;[4] it is also known as slang replacement or Voldemorting,[3] referencing the fictional character known as "He-Who-Must-Not-Be-Named".[5] Algospeak differs from other types of netspeak in that its primary purpose is to avoid moderation rather than to create communal identity, though it may still be used in online communities.[1]
A 2022 poll showed that nearly a third of American social media users reported using "emojis or alternative phrases" to subvert content moderation.[6]
Causes and motivations
Many social media platforms rely on automated content moderation systems to enforce their guidelines, which are often not determined by users themselves.[2] TikTok in particular uses artificial intelligence (AI) to proactively moderate content, in addition to responding to user reports and using human moderators. In colloquial usage, such AIs are called "algorithms" or "bots". TikTok has faced criticism for unequal enforcement on topics such as LGBT issues and obesity, which has led to a perception that social media moderation is contradictory and inconsistent.[1] Between July and September 2024, TikTok reported removing 150 million videos, 120 million of which were flagged by automated systems.[7] In addition, AI may miss important context; for example, communities that aid people who struggle with self-harm or suicidal thoughts may inadvertently get caught in automated moderation.[8][1] TikTok users have used algospeak to discuss and provide support to those who self-harm.[9] An interview with 19 TikTok creators revealed that they felt TikTok's moderation lacked contextual understanding, appeared random, was often inaccurate, and exhibited bias against marginalized communities.[1]
Algospeak is also used in communities promoting harmful behaviors. Anti-vaccination Facebook groups began renaming themselves to "dance party" or "dinner party" to avoid being flagged for misinformation. Likewise, communities that encourage the eating disorder anorexia have been employing algospeak.[10] Euphemisms like "cheese pizza" and "touch the ceiling" are used to promote child sexual abuse material (CSAM).[6]
On TikTok, moderation decisions can result in consequences such as account bans, video removals, or delisting videos from the For You page. A TikTok spokeswoman told The New York Times that users' fears are misplaced, saying that many popular videos discuss sex-adjacent topics.[11]
Methods
Algospeak uses techniques akin to those used in Aesopian language to conceal the intended meaning from automated content filters while remaining understandable to human readers. One such method draws from leetspeak, in which letters are replaced with lookalike characters (e.g., $3X for sex).[3] Earlier forms of obfuscated speech include Cockney rhyming slang and Polari, which were formerly used by London gangs and British gay men respectively.[8]
In another method, certain words are partially censored, or, in the case of auditory media, cut off or bleeped, e.g., s*icide instead of suicide. A third method involves "pseudo-substitution", where an item is censored in one form while simultaneously present in another form, as used in videos.[12]
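The first two written techniques can be sketched as simple string transformations. The following is an illustrative example only, not any platform's or community's actual scheme; the character map and vowel-masking rule are invented for the sketch:

```python
# Illustrative sketch of two written algospeak techniques.
# The character map and masking rule below are invented examples,
# not an actual scheme used by any community or platform.

# Leetspeak-style substitution: replace letters with lookalike characters.
LOOKALIKE_MAP = str.maketrans({"s": "$", "e": "3", "a": "4", "o": "0"})

def leet_substitute(word: str) -> str:
    """Replace letters with lookalike characters (e.g. 'sex' -> '$3x')."""
    return word.lower().translate(LOOKALIKE_MAP)

# Partial censoring: mask the first interior vowel with an asterisk.
def asterisk_censor(word: str) -> str:
    """Mask the first interior vowel (e.g. 'suicide' -> 's*icide')."""
    for i, ch in enumerate(word[1:], start=1):
        if ch in "aeiou":
            return word[:i] + "*" + word[i + 1:]
    return word  # no vowel to mask; return unchanged

print(leet_substitute("sex"))      # $3x
print(asterisk_censor("suicide"))  # s*icide
```

Both transformations keep the word recognizable to human readers while changing the exact character sequence a naive keyword filter would match.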
A similar phenomenon applies to the names of famous individuals, such as politicians, businesspeople, and celebrities, whose names may be partially censored (e.g., J*an de la Cr*z for Juan de la Cruz), typically when criticizing or defaming them, to avoid detection by bots, trolls, or the individuals' supporters, or in some cases to protect their identities. In its typical use, this practice may mark such individuals as taboo.[13][14]
In an interview study, most of the creators interviewed suspected that TikTok's automated moderation was scanning the audio as well, leading them to also use algospeak terms in speech. Some also label sensitive images with innocuous captions using algospeak, such as captioning a scantily dressed body as "fake body".[1]
The use of gestures and emojis is also common in algospeak, showing that algospeak is not limited to written communication.[15]
Impact
Algospeak can lead to misunderstandings. A high-profile incident occurred when Italian actress Julia Fox made a seemingly unsympathetic comment on a TikTok post mentioning "mascara", not knowing its obfuscated meaning of sexual assault. Fox later apologized for her comment.[8][16] In interviews, creators shared that the evolving nature of content moderation pressures them to constantly innovate their use of algospeak, and that using algospeak makes them feel less authentic.[15]