Do vegan businesses and campaigners still need PR in a social-first world?


When Donald Trump posted an AI-generated video of himself flying a plane and dumping sewage over the No Kings protesters this October, it was grotesque, attention-grabbing, and, in its way, the perfect symbol of what’s now being called AI slop.
The post was created to provoke rather than to provide any value, and it could quite literally have been generated and shared within seconds of being a passing thought.
For those of us old enough to remember when television channels went dark at midnight and the national news was delivered in the morning paper or an evening broadcast, the world used to move at a different pace.
Each day had a beginning and an end, at least in broadcasting terms. There were pauses between stories and moments for conversation, for thinking critically before the next big headline. (If you don’t pre-date the internet, we know it’s hard to imagine!)
Then came the internet, mobile technology, social media, and now artificial intelligence, each one adding to the demand for 24/7 content.
Once that demand took hold, speed became everything. But when new content is hitting multiple platforms, day in, day out, it creates a lot of “noise”, and that poses a dilemma: how do you cut through the noise and be heard?
Clickbait was born, using division, outrage, curiosity, and spectacle to grab attention because it turns out that deeper, considered, and highly researched content just doesn’t have the same hook when people are endlessly scrolling.
It was probably inevitable that once algorithms were trained to reward speed and novelty over substance, the next step would be to automate the churn.
Now, with generative AI able to produce an article, an image, a song, or even an opinion in seconds, the internet is filling with almost-instant content that pretends to be something it’s not.
This hollow but easy-to-produce content has earned the name AI slop. It’s essentially another form of spam, designed to be created and shared at volume for a profit.
Like the amorphous gruel slapped onto Oliver Twist’s plate before he plaintively asks, “Please, sir, I want some more”, it’s the slurry of context-free material generated at an industrial scale for us to eat up as entertainment.
Look closer, and AI slop is words without thought, stories without roots, and images without truth. We gobble it up, but it leaves us empty.
We’ve chosen not to link to examples in this article because we don’t want to amplify the noise. However, we strongly recommend the recent episode of Last Week Tonight with John Oliver for an unflinching look at the spread of AI slop, with current examples.
John Oliver highlights how AI slop is stealing from talented human artists, making it harder for first responders in emergency situations, and polluting political discourse, as just a few examples of how problematic it’s becoming.
In fact, AI slop is spilling into every corner of the digital world, from manipulated product reviews and hyper-realistic fake user profiles to bots commenting on social media with the sole purpose of causing division or furthering a specific political message.
And just so we understand the scale of this issue: according to Imperva’s 2025 Bad Bot Report, 51 percent of all global web traffic in 2024 came from bots, with malicious bots alone accounting for 37 percent of all traffic, designed to scrape data, manipulate engagement, or spread disinformation.
More than half of the web’s activity, in other words, might not be human at all. And while today’s slop might have originated with humans, will the creators be able to keep up? What will happen when machines start feeding the machines?
AI doesn’t create in a vacuum. It learns by absorbing the labour of others (artists, photographers, journalists, authors, marketers, and other creators) whose work has been scraped into datasets, usually without consent.
Talented artists who have honed their craft over years are now seeing AI rip-offs of their work across every social media channel without any financial compensation. While people may quote Oscar Wilde and say that “imitation is the sincerest form of flattery”, they usually miss the end of the quote, “that mediocrity can pay to greatness”. In other words, if you don’t have any originality, the best you can do is copy.
This is what AI slop does: it detracts from and devalues the work of the original artists, threatening their livelihoods.
As a content creator for your vegan business, you face the same risks.
Every time you publish something thoughtful online, it risks being swallowed by the next AI training cycle, stripped of context, and repurposed to generate content that competes with your own.
Trust and transparency are vital when you run an ethically focused business, so AI slop poses a genuine risk that the words you carefully wrote about your sanctuary, or your vegan product, could end up training a model that produces imitation content for a less principled company. The irony is cruel: your integrity becomes the raw material for someone else’s shortcut.
It’s the digital echo of the exploitative systems our movement rejects, i.e. extraction and exploitation disguised as progress. In its way, it’s no different to fast fashion or industrial animal agriculture.
Recognising this, we must ask a deeper question: even if we can create with AI, should we?
As we discussed in our recent article about AI veganism, each generated image, video, or article comes with a high cost, from the vast energy use of data centres to the low-paid workers in the Global South tasked with filtering violent or traumatic content to make these tools “safe”.
If that technology were being used to replace laboratory testing on our fellow animals or to tackle environmental destruction, perhaps the trade-off could be defended. But generating deep-fake videos, filler blogs, or novelty content that’s forgotten as soon as it’s shared at the same human and ecological expense? That’s harder to justify.
Using an energy-hungry, exploitative system to produce digital waste isn’t progress; it’s just another way to commodify the planet’s resources.
And that’s just the tip of the iceberg…
The same systems that learn from human creativity are now crowding it out, reshaping how information appears and who gets seen.
AI models have no genuine understanding of truth. Instead, they rely on pattern-matching and probability to predict a plausible response to a prompt. An unfortunate side effect is that AI models are known to hallucinate, inventing facts to fill gaps, like a journalist leaping to conclusions to make a story sound juicier.
The inaccuracies created by AI hallucinations don’t just disappear; instead, they’re circulated (a content creator publishes AI slop containing incorrect information), copied (other people quote the slop in their own work), and then fed back into the next generation of AI models. The result is a kind of digital composting of misinformation – a self-reinforcing cycle in which what’s false is repeated until it becomes “true” to the machine.
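To make that feedback loop concrete, here’s a minimal, entirely hypothetical sketch in Python. It assumes a toy “training pool” of statements, 95 true and 5 false, in which false-but-catchy items are recirculated twice as often as accurate ones; each generation samples from the pool and publishes straight back into it. The numbers are illustrative, not measurements.

```python
import random

# Toy model of the "digital composting" loop described above.
# All figures are made up: 95 true statements, 5 false ones, and
# a virality boost that makes false items twice as shareable.
random.seed(42)

pool = ["true"] * 95 + ["false"] * 5
VIRALITY_BOOST = 2.0  # false items are twice as likely to be recirculated

for generation in range(1, 6):
    weights = [VIRALITY_BOOST if s == "false" else 1.0 for s in pool]
    # Each generation, 100 new machine-written items are sampled
    # from the existing pool and published back into it.
    new_items = random.choices(pool, weights=weights, k=100)
    pool.extend(new_items)
    false_share = pool.count("false") / len(pool)
    print(f"Generation {generation}: {false_share:.0%} of the pool is false")
```

Even with these tiny, made-up numbers, the false share only ever climbs, because each generation learns from the previous generation’s output rather than from reality.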
If AI slop continues unchecked, it could eventually (if it hasn’t already) make it almost impossible to know what’s real (more about this in a moment).
Adding insult to injury, search engines themselves are becoming part of the slop cycle. Google’s AI Overviews now summarise web content directly on the results page, often without sending visitors to the original source.
If you represent a small vegan or ethical business that’s been working hard on its content creation and search engine optimisation, this can be devastating. Instead of new audience members discovering your work via an online search, they see a machine-written précis (sometimes accurate, sometimes confidently wrong) without ever needing to visit your website.
In July 2025, The Guardian reported on a study which found that sites previously ranked first on Google can lose up to 79% of their traffic if their listings appear below an AI Overview. A Google spokesperson described the study’s methodology as “flawed”, but even cautious figures suggest a 30-35% drop in organic click-through rates.
The irony is that many of these summaries are built from your content in the first place. You do the work, but the machine gets the views.
The examples we’ve touched on above relate to how AI slop may impact a business, but it has far more sinister and far-reaching implications.
As the boundaries between real and artificial blur, there’s something that researchers call the Liar’s Dividend. It’s the advantage that bad actors gain once people know deepfakes and synthetic media exist. If anything can be faked, then anything uncomfortable or incriminating can be dismissed as fake too.
The result is a kind of collective exhaustion. When you can no longer trust your eyes or ears, you stop believing altogether. A genuine piece of undercover footage, an authentic protest image, or a rescue video from a sanctuary can be waved away with a shrug: “It’s probably AI”.
AI slop feeds this dividend. Every time a synthetic clip goes viral, or an AI-generated “news” story circulates, it reinforces the idea that nothing online can be trusted. The cumulative effect is corrosive, not just to journalism, but to activism, democracy, and even compassion. After all, if suffering itself can be simulated, some people will use that uncertainty as an excuse not to care.
For vegan and ethical movements, which rely on truth-telling and emotional connection, the implications are profound. It becomes harder to show the reality of animal exploitation or environmental damage when the public has learned to question the reality of everything. The Liar’s Dividend doesn’t just protect liars; it silences truth.
Once you know what to look for, you’ll notice AI slop everywhere. This makes it a truly daunting issue to tackle. Is there anything we can do to stem the tide, or is it hopeless?
People often say that the first step to combating AI slop is developing our digital literacy, so we know what to look for. However, even when people can tell the difference between reality and AI slop, they often don’t want to, perhaps because it’s easier to believe that nothing is real or trustworthy, or to cherry-pick content that suits a narrative.
Somehow, though, we have to resist this sense of hopelessness. We must keep caring about the truth and fighting for a shared reality where people are accountable for what they say and do.
The best way to resist numbness is to face the issue head-on. Once you start really seeing how AI slop presents itself, the patterns become unmistakable: the words without thought, the stories without roots, and the images without truth we described earlier.
Above all, though, AI slop is bland. It has a sort of emptiness or beigeness, probably caused by a lack of lived experience. Real people write from somewhere; AI writes from everywhere and nowhere.
If you run a business, you can train your eye to spot this by slowing down and asking: Who created this, and why? Does it come from lived experience? Do the facts check out, and are sources and creators credited?
You can apply these questions to your own content, too.
Whether or not you personally use AI, it’s sensible to develop your digital literacy to recognise it. As we’ve seen, AI can be either deliberately or accidentally misleading, which can damage your reputation if you share the wrong thing.
Protecting your presence and integrity
Even if you never touch an AI tool, you’re already shaped by the systems around it. It’s not something that’s going to go away. Indeed, the technology is only going to become more sophisticated.
There’s something to be said for embracing our humanity, warts and all, and for protecting your vegan organisation by focusing on authenticity and accountability in how you show up online.
Be transparent about your creative process. If AI supports your work (perhaps for translation, editing, or idea generation), treat it as a tool, not a ghostwriter, and be open about that distinction. Transparency builds trust, and trust is the only real antidote to AI slop.
Reassert your humanity wherever you can: use real names and photos, share your story and that of your team, your products, and your suppliers, and respond personally to feedback. When people see and hear the humans behind a business, they’re far less likely to confuse it with a machine.
Also, we recommend keeping an eye on your digital footprint. Search for your brand name alongside words like “review” or “AI” to spot impersonations or fake content before your audience does.
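As a starting point, here’s a minimal sketch of what that monitoring could look like in Python, using only the standard library. The brand name and keyword list are placeholders; swap in your own and add whatever terms matter to your audience.

```python
import webbrowser
from urllib.parse import quote_plus

# Hypothetical brand name and keywords -- replace with your own.
BRAND = "Your Brand Name"
KEYWORDS = ["review", "AI", "fake", "scam"]

for keyword in KEYWORDS:
    # Quote the brand name so results match the exact phrase.
    query = quote_plus(f'"{BRAND}" {keyword}')
    # Open one search tab per query in your default browser.
    webbrowser.open(f"https://www.google.com/search?q={query}")
```

Run something like this weekly (or wire it into a scheduled task) and you stand a better chance of spotting impersonations or AI-generated knock-offs while they’re still easy to challenge.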
Most importantly, nurture direct relationships through email lists, community networks, events, and conversations. Every genuine connection loosens the grip of the algorithms that now filter and summarise your work for you.
At Ethical Globe, we often think about stewardship, i.e. the belief that we are caretakers, not owners. The same principle applies to technology. Using AI responsibly means recognising both its potential and its cost: the energy it consumes, the people it relies on, and the ideas it quietly steals.
Ethical stewardship doesn’t necessarily mean rejecting tools altogether, but it does mean refusing to exploit them or let them exploit others.
Use AI as scaffolding if you must, but remember to keep the heartbeat of your message human. Credit and pay real creators. Link to high-authority sources. Double-check facts, especially when AI presents them with confidence.
Build space for reflection into your creative process; slowness itself can be an ethical stance. What do we mean by this? In a culture driven by speed, output, and constant visibility, it’s an act of resistance to slow down and consider the consequences of what we’re planning to do. By refusing to move at machine pace, we can be gloriously and defiantly human!
Audit your own content from time to time. Ask whether it still feels alive, or whether it’s begun to sound like everything else. Protect your creative labour with visible authorship and a distinctive style. Stay alert to misinformation about your work and correct it openly.
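If you want a rough, mechanical starting point for that audit, here’s a small, hypothetical Python sketch that compares the vocabulary of your published posts (assumed to live as .txt files in a local posts/ folder) and flags pairs that have started to sound alike. It’s no substitute for reading your own work; think of it as a prompt for where to look first.

```python
from itertools import combinations
from pathlib import Path

def word_set(text: str) -> set[str]:
    # Crude tokenisation: lowercase words with punctuation stripped.
    return {w.lower().strip(".,!?\"'():;") for w in text.split()}

# Assumes your posts are plain-text files in a local "posts" folder.
posts = {p.name: word_set(p.read_text()) for p in Path("posts").glob("*.txt")}

for (name_a, words_a), (name_b, words_b) in combinations(posts.items(), 2):
    union = words_a | words_b
    if not union:
        continue
    # Jaccard similarity: shared vocabulary over combined vocabulary.
    overlap = len(words_a & words_b) / len(union)
    if overlap > 0.5:  # the threshold is a guess; tune it to your writing
        print(f"{name_a} and {name_b} share {overlap:.0%} of their vocabulary")
```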
Above all, keep the human loop open. Encourage dialogue, corrections, and feedback. Ask yourself what resonated, what felt off, and what felt real. Also, pay attention to the topics that your audience cares about. What gets them talking? Real conversation is the most reliable safeguard against slop.
The age of automation will keep tempting us to speak faster and louder, but ethical communication has always asked a different question: not “How much can I make?” but “What do I mean?” In a digital landscape where machines speak fluently but without conscience, choosing sincerity, again and again, is an act of quiet rebellion.
Because the antidote to AI slop isn’t cleverer technology. It’s consciousness. And consciousness is what separates stewardship from exploitation.