An image of Jesus fused with a shrimp, a SpaceX UFO craft, tiny children baking impossibly perfect cakes, and seemingly well-written articles that outline problems and then conjure up magical solutions by the end.


No, you don’t need to imagine them; these images, texts, and media are already all over the internet. Welcome to the world of “slop”: the AI (artificial intelligence) equivalent of spam, a catch-all for scammy, low-quality, dubious machine-generated garbage. The term is broad, covering unwanted, shoddy AI content in books, art, social media, and, increasingly, internet search results. It’s not exactly disinformation, but rather vague text, inert prose, and weird narratives. And yes, it’s everywhere, but is it really that big a problem?

An originally-generated example of the “Shrimp Jesus” AI imagery that appeared on Facebook in 2024. The generation prompt was: “A person who looks like Jesus but who is made completely out of live shrimp, swimming underwater in the turquoise ocean, with a shrimp halo, and his hands dissolving into a shoal of shrimp.”

What is slop?

For most people, the word slop conjures images of unpalatable food being shovelled into livestock troughs. Online, however, the term was first applied to AI-generated content on message boards. It mirrors earlier labels for the internet’s floods of low-quality information, like ‘pink slime,’ which describes poor-quality news reports dressed up as local news.

Slop can take any form: videos, images, text, even complete websites. It can also trickle into real life. On Halloween night in 2024, for instance, thousands of people gathered in Dublin’s city centre for a non-existent parade promoted by a website using fake images, fake reviews, and a few real photos lifted from unrelated events. Then there was ‘real-life slop’ in the form of the now-infamous Willy Wonka experience in Glasgow: promoted as an interactive family experience using dreamy AI imagery, the actual event was so shoddy that it was cancelled halfway through.

That’s not all. Did Google suggest adding non-toxic glue to make cheese stick to a pizza? Did a Facebook post or a cheap digital book that isn’t quite the one you were looking for pop up out of nowhere in your feed or search results? All slop.


Why Is Slop A Problem?

Ultimately, AI-generated slop feeds websites that want to game search rankings as cheaply as possible. That could affect the traffic news outlets receive and how we engage with information on the web. On the other hand, slop has been compared to email spam, which platforms eventually became quite efficient at filtering out. Even AI hallucinations, rampant in the earliest versions of AI chatbots, have decreased in later releases, yet slop has stayed. The question is: will slop fizzle out, or could it degrade the entire information ecosystem?

One valid concern is that because AI-generated text is cheaper, faster, and easier to produce than human writing, it will proliferate across the web. If that output is then fed back as training data into machine-learning systems and large language models (LLMs), the value and quality of online information could be badly eroded.

Then there’s the problem of ‘careless speech,’ a set of risks defined in a paper by academics Sandra Wachter, Chris Russell, and Brent Mittelstadt. In their telling, careless speech is AI-generated output featuring oversimplified information, subtle inaccuracies, or biased responses, passed off in a confident tone as the truth. It isn’t disinformation; the aim isn’t to mislead but to sound confident and convincing. After all, the most dangerous thing to society isn’t a liar; it’s a bullshitter. The problem? Careless speech can easily slip under the radar.

A man lying in a hospital bed, wearing a sinister mind-control helmet. His hands are clenched into fists and he is grimacing. Through a hole in the wall we see a prancing vaudevillian whose head has been replaced with the head of Mark Zuckerberg’s Metaverse avatar. Behind this figure is the giant red eye of HAL 9000 from Stanley Kubrick’s ‘2001: A Space Odyssey.’ At the end of the bed stands a trio, Mom, Dad, and daughter, in Sunday-best clothes, their backs to us, staring at the mind-controlled man’s face.

Finally, there’s the issue of AI slop masquerading as ‘news’ on AI-generated websites, optimised for search engines to maximise advertising revenue: think Buzzfeed or The Onion on steroids, stuffed with repeated search terms and clickbait headlines. Most are low-quality clickbait sites publishing content about entertainment and celebrities, but others cover issues like politics, obituaries, and cryptocurrencies.

In February 2024, for instance, a Wired article reported on Serbian entrepreneur Nebojša Vujinović Vujo, who bought up abandoned news sites, filled them with AI-generated slop, and pocketed substantial ad revenue.

Is Slop Here To Stay – and Slay?

Slop stepped into the limelight, if we can call it that, when Google began folding its Gemini AI model into its US search results in 2024. Instead of pointing users toward useful links, the feature attempted to answer queries directly with an “AI Overview”: a chunk of text atop the results page offering its best guess at what users were looking for. Microsoft, too, has incorporated its own AI into Bing search results. Not surprisingly, Google ended up rolling back some features until the problems could be ironed out.

Slop might not seem harmful at first glance, but it’s all over the internet, and beyond. Digital platforms have been accused of leaning heavily into slop, with some announcing features designed specifically to let users create AI-generated content. AI can help with efficiency, flexibility, and scalability, but it shouldn’t come at the cost of content quality, newsroom values, or editorial judgement. Whether platforms end up fostering engaging, original posts or simply filling feeds with digital slop, only time will tell.

Malavika Madgula is a writer and coffee lover from Mumbai, India, with a post-graduate degree in finance and an interest in the world. She can usually be found reading dystopian fiction cover to cover. Currently, she works as a travel content writer and hopes to write her own dystopian novel one day.
