With bots on the rise, algorithms controlling what we see and machine-written content filling in the cracks, is there any space left on the internet for humanity?
While scrolling through social media this week, I was confronted with a number of sponsored ads from a certain local used car marketplace. Although the campaign featured a variety of images (I saw at least two different ads, each with a unique image), the gist was the same: each featured a “candid” shot of a car that had been scrawled on with marker, as well as the doe-eyed child or children responsible.
I wish I could tell you what the tagline for the campaign was, but I honestly don’t think I read it – I was too distracted by the eerie quality of the images, which had the uncanny “too perfect” giveaway of something AI-generated. In one of the images, two little redheaded girls stand in front of a white sedan they have supposedly drawn on, their expressionless yet perfectly beautiful faces looking straight back at me. To their left, a retriever-type dog with shiny golden fur sits and stares with the same blank expression.
Gentle reader, I am not for a moment suggesting that this is the first time I’ve encountered AI-generated images in ads. I’ve recognised AI images (and writing!) in ads for quite some time now – just never this obviously, and never by such a big-name company. This particular occurrence stopped me in my tracks because of how blatantly fake it was, as if no effort at all had been made to hide its origins. Surely, I thought, the comments on this ad must be full of people who are just as shocked by this creative choice as I am.
Instead, I found comment after generic comment – “love this!”, “so cute!” and the like – as well as a flurry of stickers and emojis. The ad had been shared three times and had amassed over 87 likes.
At first I was confused by this – could people not recognise that this wasn’t a real image they were responding to, or did they not care? But then I realised that none of the responses in the comments felt particularly human either. They were all generically positive, and none of them spoke to the image or what it contained directly.
A computer-generated image of fake humans, being responded to by a crowd of fake humans, in the hope that real humans will buy something. Welcome to the internet in 2024.
Is anybody out there?
The Dead Internet Theory is an online conspiracy theory that claims that the internet as we know it has largely been taken over by bots and algorithm-driven content, leaving only a shell of human interaction behind. According to believers of this theory, this shift wasn’t accidental but part of a calculated effort to fill the web with automated content, quietly nudging all of us along pre-determined thoughts and ideals while crowding out genuine human voices and opinions.
Some suggest that these bots – crafted to influence algorithms and pump up search rankings – are being wielded by government agencies to shape public opinion, making our online world a little less real and a lot more controlled. Others believe that big corporates are in charge, and that every step we take online is on a guided path towards an eventual purchase. The supposed “death” of the internet is believed to have happened in either 2016 or 2017. If that’s true, that would mean that we’ve been interacting mostly with bots and curated content for years.
Of course, it’s vital to acknowledge that this is a conspiracy theory – with extra emphasis on the “theory” part. In a way, the fact that you are reading this article right now, which was researched and written by a flesh-and-blood human being, kind of dispels the idea that the internet is dead. Here I am, adding living content to it as you read. So maybe the internet isn’t completely kaput – but that doesn’t mean that it’s flourishing, either.
What makes the Dead Internet Theory so compelling is that there definitely are some measurable changes in online behaviour, like a rise in bot traffic and fake profiles. Here’s one of my favourite examples: earlier this year, a video was posted on X of a Kazakhstani anchorwoman reading a news report. The poster jokingly compared the sound of the Kazakh language to “a diesel engine trying to start in winter”. The post garnered 24,000 likes and more than 2,000 reshares. This would have been unremarkable if not for the fact that the video was mistakenly uploaded with no audio, rendering the joke completely inaccessible. Does that seem like the kind of thing that 24,000 human beings would click the like button for, or are we seeing evidence of bot-driven traffic right in front of our eyes?
What fascinates me the most is that the predicted “death” of the internet occurred half a decade before the mainstream adoption of AI text and image generators. A recent Europol report estimated that by 2026, as much as 90% of the content on the internet may be AI-generated. So is the Dead Internet Theory really a conspiracy theory, or rather the ringing of a warning bell?
They do not come in peace
So a few ad agencies are using bots to boost engagement on their social media posts. So what, right?
Actually, what I saw on social media is just a small symptom of a much larger sickness. One potential outcome of an overabundance of bot behaviour is what’s called an inversion – a term coined by YouTube engineers to describe a scenario in which their traffic monitoring systems would begin to mistake bots for real users, and vice versa. In an inversion scenario, bot content would be labelled as real, while human content would be marked as suspicious and ultimately blocked.
While the likelihood of such an inversion happening might be a bit exaggerated, the underlying concern points to a larger truth about how online interactions have become so distorted. It’s a bit unsettling to think that we could soon be navigating a digital world where we can’t even tell who’s real anymore.
The bots aren’t all of the friendly, social-media-commenting variety either. An Imperva report from 2021 found that nearly 25% of all online traffic that year was generated by “bad bots.” These are the bots that aren’t just harmlessly lurking in the background but are actively working to manipulate and undermine the internet. They’re responsible for everything from scraping websites for content, harvesting personal and financial data, and running fraud schemes, to creating fake accounts and generating spam. They’re essentially eating the internet – and their hunger never stops.
Humanity, beware
I’ve been wandering around the internet like it’s my personal library since I was a teenager. Even within that decade and a half, I can confirm that the place has come to feel distinctly different. It’s not just the eerie presence of AI and bots online that’s unsettling; it’s the fact that they might be subtly altering how we, the humans, behave. When every interaction feels curated by algorithms or influenced by faceless bots, how much of what we do online is genuinely our own?
Charlie Warzel, a journalist for The New York Times, highlighted the phenomenon of “context collapse”, which happens when random events or fleeting moments are intentionally made to seem like huge cultural moments online, sparking mass conversation, all while masking their real significance – or lack thereof. Everyone’s talking about it, but does anyone really know why?
The big digital platforms don’t just create space for these cycles of emotion and conversation; they actively encourage them. They prompt us to react on impulse, to respond the same way every time to the same types of content. And in doing so, it’s almost as if we’re becoming part of the machine: a cog in the ever-turning wheel of predictable, click-driven responses. Which raises the question: are we truly still in control, or have we become just another predictable part of the system?
About the author: Dominique Olivier
Dominique Olivier is the founder of human.writer, where she uses her love of storytelling and ideation to help brands solve problems.
She is a weekly columnist in Ghost Mail and collaborates with The Finance Ghost on Ghost Mail Weekender, a Sunday publication designed to help you be more interesting.
Dominique can be reached on LinkedIn here.