AI veganism: should we reject AI altogether, or engage with it ethically?

Too long to read right now? Here’s the abridged version:
Artificial intelligence (AI) is everywhere, shaping the emails you see, the ads that follow you online, and even conservation tools that track endangered species. But its rise brings deep ethical concerns about exploitative labour, environmental damage, bias, erosion of human creativity, and more.
Some people are responding with “AI veganism”, a refusal to use AI (much like food veganism avoids animal exploitation). The challenge is that, unlike animal products, AI is hardwired into our digital lives, making complete abstention almost impossible.
So, where does that leave us as people who care about justice for our fellow animals? This blog explores both sides: the serious risks of AI and its paradoxical potential to replace animal testing, aid conservation, expose cruelty, and even help address the climate crisis.
Rather than a simple yes or no, we explore a path of critical engagement using AI sparingly, consciously, and ethically.
Where did AI come from?
Artificial intelligence might feel like it came from nowhere, but it didn’t. The first working AI program, the Logic Theorist, was created back in 1955, but the idea of building machines that could think for themselves goes back to early civilisations.
From Greek myths of automatons, to Descartes’ musings on whether machines could reason, to Alan Turing’s famous 1950 question, ‘Can machines think?’, what was once philosophy is now technology woven into everyday life.
Once the first programs showed machines could reason, the question was no longer if AI would enter our world, but when and how deeply it would root itself there.
Over the decades that followed, it slipped quietly into everyday systems until most of us were using it without even noticing.
Today, AI filters the emails in your inbox, curates the posts you see on social media, decides which adverts follow you around the internet, answers your queries as a customer, guides you through unfamiliar streets via your sat nav, and even helps detect fraud in your bank account.
If you’ve typed a query into a search engine lately or used predictive text on your phone, you’ve interacted with AI.
AI exploded into public consciousness in 2022 with the launch of generative platforms like ChatGPT and Midjourney, soon followed by Claude. Suddenly, people who had barely thought about AI before were using it as a thought partner, research assistant, or creative companion.
This explosion was accompanied by unease.
Some of our own Ethical Globe readers have told us they don’t want to hear about it. A few have unsubscribed from our mailing list because we discuss AI at all. Others have emailed to express their worries that AI is unethical and potentially harmful to humans, our fellow animals, and the planet.
These concerns are valid. There are major ethical red flags with AI, from exploitative labour practices to environmental damage (more about these issues below).
It’s understandable that some people are now practising what’s been called AI veganism, an attempt to abstain from using AI in protest of its harms, much like food veganism abstains from the exploitation of our fellow animals.
But here’s the tension and the reason we’ve decided to write about this topic: AI is already everywhere. It is, in some sense, unavoidable. It may also offer tools that can accelerate the fight for animal freedom.
So, we face a difficult question: if we want a kinder, more just world for our animal kin, should we reject AI altogether, or is there a way to engage with it ethically?
We don’t have a definitive answer right now.
We’re watching, learning, and reviewing our understanding of this issue every day, and we decided to write this blog because we think many of you may be wrestling with the same question.
What is AI veganism?
The phrase AI veganism deliberately draws on the philosophy of veganism.
We vegans avoid using our fellow animals for food, clothing, or entertainment because we recognise their inherent worth and want to reduce suffering. The idea is simple but radical: we do our best, as far as possible and practicable, to step away from exploitation.
AI veganism takes a similar approach.
Those who adopt it want to avoid AI systems because of their human, environmental, and social harms.
The challenge is that AI is not like animal products. With animal exploitation, you can decide not to buy leather shoes or eat cheese, and you’re largely free from direct participation.
With AI, avoidance is trickier. Even if you never log into ChatGPT, you’re still using AI every time you open Instagram, buy something from an online store, or stream a film. AI is no longer a tool you can pick up or put down; it’s infrastructure.
Is abstention from AI even possible in any meaningful sense? And if it isn’t, what do we do instead?
The ethical concerns about AI
There are now entire degrees, particularly postgraduate qualifications, focused on the ethics, risks, and development of AI. It’s a complex field, so our intention is to provide a simple overview of some of the key issues.
1. Human creativity and skill erosion
One of the most immediate worries is what AI is doing to human creativity. When we let a machine draft our ideas, are we outsourcing not just the labour, but the thinking itself?
A study by the Media Lab at MIT suggests that could be the case. A cohort of 54 students was split into three groups and tasked with writing SAT essays using ChatGPT, Google Search, or no tools at all. The ChatGPT group “consistently underperformed at neural, linguistic, and behavioural levels”, suggesting that their reliance on AI to do the thinking had direct cognitive consequences.
Concern about skill erosion is growing
Joe McKay, the founder of a LinkedIn content agency called Great Chat, recently announced that his organisation would be going AI vegan.
In his video about this, he expressed his concerns that AI cannot be a reliable thought partner when, by definition, it doesn’t think. He also highlighted that, in interacting with AI, we “mistake language for meaning” when really, we’re seeing words strung together by probability, not by understanding. Additionally, he explained his worry that AI stifles the unique human perspectives at the heart of creative work.
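McKay’s “probability, not meaning” point is easy to see in miniature. Here’s a deliberately tiny sketch of our own (nothing like how production models are actually built) in which a “model” strings words together purely according to how often one word followed another in its training text. No understanding is involved at any point:

```python
import random

# Toy illustration only: a "language model" that knows nothing except
# how often one word followed another in a tiny training text.
corpus = ("animals deserve freedom . animals deserve respect . "
          "we deserve honesty .").split()

# Count which words follow which (a bigram table).
follows = {}
for current, nxt in zip(corpus, corpus[1:]):
    follows.setdefault(current, []).append(nxt)

# Generate text by repeatedly sampling a statistically likely next word.
word = "animals"
output = [word]
for _ in range(4):
    word = random.choice(follows.get(word, ["."]))
    output.append(word)

print(" ".join(output))  # e.g. "animals deserve respect . we"
```

Real LLMs are incomparably more sophisticated, but the underlying move is the same: predict a plausible next word, not a true or intended one.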
McKay’s thoughts reflect the possibility that if we begin to accept AI content as “good enough”, we may start to distrust our own instincts and dilute our voices.
This isn’t a small matter.
Movements for justice thrive on originality and lived experience. If our words all start to sound the same, where will those sharp edges of truth come from, the ones that jolt societies into change?
2. Intellectual property and plagiarism
Another contentious issue surrounding AI is the source of the training data. Large language models (LLMs) like ChatGPT are built by sweeping up vast amounts of text from the internet, spanning articles, books, forums, and code repositories.
This information is “open” internet data: publicly accessible, but not necessarily free of copyright or other rights.
Who owns the data?
AI companies say this is legally acceptable because the model is “learning patterns” rather than storing copies, but many authors, journalists, and artists disagree.
Indeed, whether or not LLMs use copyrighted data is a topic of fierce legal debate.
The New York Times, for example, is currently suing OpenAI and Microsoft, claiming that millions of its articles were used to train AI models without consent.
Getty Images has also taken Stability AI to court for allegedly scraping its copyright-protected stock photos. The outcomes of these cases could have huge repercussions, reshaping how people develop and use AI in the future.
These concerns aren’t just about copyright law, although that matters. They’re also about fairness.
If you pour your heart into an article or a painting, should a corporation be allowed to use it to train a machine that may one day put you out of work?
Even if the courts eventually rule in favour of the tech companies, the ethical question remains. Is it right that human creativity, built through years of training, skill development, and imagination, is being extracted to power systems owned by a handful of corporations?
From an animal rights perspective, it echoes a familiar problem: the assumption that if something/someone is there, it/they can be used and exploited.
3. The data has bias and prejudice built into it
AI systems aren’t neutral; they inherit the values and prejudices of the data that trains them. If the world’s records are full of racism, sexism, and other injustices – and they are – then AI systems inevitably learn and reproduce those same patterns.
Take two thoroughly documented instances. Researchers at LSE found that Google’s widely used AI model, Gemma, downplayed women’s health concerns in social care case notes compared to men’s, potentially leading to unequal treatment.
In the US, ProPublica’s investigation into the COMPAS algorithm (used by judges to predict how likely someone is to reoffend) revealed that Black defendants were nearly twice as likely as white defendants to be falsely labelled “high risk”.
These aren’t isolated glitches. They reveal how easily existing prejudices get coded into AI and then automated at scale, with real consequences for people’s lives.
And because most training data is overwhelmingly human-centred, our fellow animals are either ignored completely or described through speciesist norms like “livestock productivity” or “pest control”.
In this way, AI doesn’t just mirror injustice; it risks reinforcing it.
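To see how easily that happens, here’s a deliberately simple sketch of our own (invented numbers, and not the code of any real system like COMPAS or Gemma). A “model” trained only on skewed historical records faithfully automates the skew:

```python
from collections import Counter

# Toy illustration: invented historical hiring records in which men
# were hired more often than equally qualified women.
records = ([("man", "hired")] * 80 + [("man", "rejected")] * 20
           + [("woman", "hired")] * 40 + [("woman", "rejected")] * 60)

counts = Counter(records)

def predict(applicant_group: str) -> str:
    # "Learn" by choosing whichever outcome was most common for this
    # group in the training data, and nothing more.
    if counts[(applicant_group, "hired")] > counts[(applicant_group, "rejected")]:
        return "hired"
    return "rejected"

print(predict("man"))    # hired
print(predict("woman"))  # rejected: yesterday's prejudice, automated today
```

Nothing in the code is malicious; the bias arrives entirely through the data, which is exactly why it so often goes unnoticed.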
4. Labour and power dynamics
AI can do a lot of things, but it didn’t think itself into existence.
Behind the polished interfaces, there are thousands of human workers in low-paid, precarious jobs teaching it how to “think”. These workers are often located in what some have called “digital sweatshops” in the Global South.
Countries such as Kenya, India, the Philippines, and Madagascar have digitally literate English- or French-speaking workers but minimal labour protection laws, making them ideal locations for companies wanting skilled but inexpensive employees.
The human cost of AI
AI workers carry out numerous micro-tasks to teach AI, performing repetitive, relentless, and largely invisible labour, increasingly referred to by critics as “ghost work”.
Ghost work is often dehumanising. In 2023, a TIME investigation and subsequent report by The Guardian revealed that content moderators in Kenya had been tasked with filtering violent or traumatic material to “teach” AI systems what’s acceptable and what isn’t. They were paid less than $2 an hour, received little support, and experienced significant psychological distress due to the content they viewed.
Meanwhile, the power over AI development lies with a small number of giant corporations. These companies decide what data gets used, what guardrails are set, and whose values are embedded into the system. Commentators repeatedly highlight this in reference to Elon Musk’s Grok, which has been known to defer to and repeat its creator’s opinions as fact.
5. Jobs and livelihoods
While the risk of exploitation grows, AI threatens to displace workers such as copywriters, designers, translators, customer service staff, and even some programmers.
Given the adage that “time is money” and that AI can churn out content in seconds, there’s a very real risk that businesses will opt for “an inexpensive but adequate product compared to more expensive processes which require deep human creativity”, especially when there’s a need to fill a 24/7/365 demand for content.
Corporations might frame this as “efficiency”, but for those who lose their income, it’s devastating.
Automation has always raised questions of fairness: who benefits and who pays the price? With AI, the benefits are accruing to a small elite of tech giants and investors, while the risks fall on ordinary workers.
There is a counterweight, however. A new report published by MIT has found that 95% of company-based generative AI pilot schemes fail due to a “learning gap” between the tools and the organisations using them. It’s likely that there will be a wave of new careers for people who can help organisations use AI effectively.
6. Environmental costs
It takes vast amounts of water and electricity to train and run AI models. This is at a time when UNICEF warns that half of the world’s population is living in areas facing water scarcity, and that some 700 million people could be displaced by intense water scarcity by 2030.
Data centres have been built in drought-stricken regions, where water is diverted to cool servers, while their energy demands contribute to carbon emissions.
Given that there are warnings of ecological collapse within this century, can we justify building ever larger models?
For those of us already concerned about our planet and our fellow animals, this is a serious ethical dilemma. If AI worsens the ecological crisis, then it worsens the conditions for all life.
7. Beyond humans
Most discussions about AI ethics don’t mean to leave out our fellow animals, but they usually do. That omission is speciesist because it assumes that only humans are affected by the possible harms of AI, while the lives of other beings are unimportant or unaffected.
AI is already being tested in farming to improve slaughter lines, keep an eye on pigs’ stress levels, or increase the amount of milk cows can produce. This “precision” farming increases the extent to which sentient beings are treated as “units of production” managed for profit and efficiency.
Even in conservation, many tools see fellow animals as just data points, entries on a spreadsheet, instead of individuals with their own agency.
The risk is obvious: AI may make exploitation more effective and more difficult to detect.
This could strengthen speciesism in future systems rather than dismantling it, meaning our fellow animals may become even more vulnerable if we don’t push back.
How AI could help our fellow animals
While AI comes with undeniable harms, here’s the paradox: some of the very tools we’re wary of might also be used to save the lives of our fellow animals by:
1. Ending animal testing
Biomedical research may be one of the most promising places to use this technology. AI models can now predict how drugs will work in the human body, challenging the idea that we have to test new drugs on our fellow animals. Some scientists think that AI-driven toxicology might one day make animal testing unnecessary.
The Virtual Liver Initiative and other projects are already using simulations of human liver function to test how the body responds to specific drugs. Companies like Insilico Medicine are using AI to create and test new compounds. Instead of animal test subjects, they use computer-simulated experiments (in silico) to model safety and effectiveness.
For those who have campaigned against animal testing for years, this is a glimpse of what freedom could look like.
2. Supporting conservation
In the field, AI is already changing the way conservation looks, sometimes in inspiring ways.
AI tools enable researchers to detect and react to changes in ecosystems much more quickly than humans could on their own, from monitoring migration patterns to analysing satellite images for indications of deforestation.
Wildbook, for example, uses AI-powered photo recognition to identify individual whale sharks, giraffes, and other species from photographs taken by tourists and researchers. This has transformed population monitoring, helping scientists protect endangered species with unprecedented accuracy.
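The core idea behind tools like this is similarity matching: turn each photo into a list of numbers (an “embedding”) describing the animal’s unique markings, then find the closest match among known individuals. Here’s a simplified sketch of our own, not Wildbook’s actual pipeline, with invented embeddings so the example stands alone:

```python
import numpy as np

# Toy illustration of photo-identification by similarity matching.
# A real system would use a neural network to turn each photo into an
# embedding capturing the animal's spot pattern; we invent vectors here.
rng = np.random.default_rng(0)
gallery = {name: rng.random(8) for name in ["shark_A", "shark_B", "shark_C"]}

# A new tourist photo of shark_B: close to the embedding on file,
# but not identical (different lighting, angle, water clarity).
new_sighting = gallery["shark_B"] + rng.normal(scale=0.05, size=8)

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Identify the individual by finding the most similar known embedding.
match = max(gallery, key=lambda name: cosine(gallery[name], new_sighting))
print(match)  # shark_B
```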
By combining AI with acoustic sensors in Central Africa, the Elephant Listening Project can detect the sound of gunshots and chainsaws, giving rangers the chance to intervene against poaching and illegal logging.
These projects show how AI, when placed in the service of conservation, can become a powerful ally to our animal kin and their habitats.
3. Exposing exploitation
Imagine a hidden camera that never sleeps, tirelessly recording what happens behind closed doors in factory farms.
AI makes this a possibility, which could help campaigners to identify patterns of suffering at a scale that would be impossible for a single investigator, exposing covert cruelty in factory farms, slaughterhouses, and during transportation.
Systems are currently being trialled to detect lameness in cows, stress in pigs taken for slaughter, or illness in chickens. Of course, this isn’t about animal freedom; it’s about trying to make the journey to the slaughterhouse a little less stressful.
These projects, despite their welfare objectives, highlight a risk. If corporations control these tools, they could just as easily be used to conceal and perpetuate exploitation as to reveal and end it.
Used ethically, however, AI could strengthen the work of whistleblowers and campaigners, shining light where industries would prefer darkness.
4. Empowering small movements
If you run a sanctuary or a grassroots group, you’ll know the frustration of big dreams and tiny budgets. A few carefully chosen AI tools might give small organisations a fighting chance to punch above their weight.
AI tools can provide affordable access to design, data analysis, translation, and more. In this sense, AI can level the playing field.
5. Tackling the ecological crisis
Given its heavy water and energy use, it may seem contradictory to mention AI in the context of addressing ecological change. However, many people believe that AI could become a powerful tool for reducing emissions and helping societies slow, or in the worst-case scenario adapt to, environmental breakdown. Here are some of the ways it’s already being used:
Think about the devastation of an unexpected storm or drought. AI is making weather forecasting and climate modelling more accurate, which means communities can prepare earlier, conservationists can protect fragile habitats, and renewable energy providers can balance supply and demand more effectively.
What’s exciting is that these improvements don’t just help us respond to extreme weather; they can also reduce overall energy use by helping grids run more efficiently. In other words, smarter forecasting not only saves lives and habitats; it can cut emissions at the same time.
Most of us don’t give much thought to the energy our buildings quietly swallow up, but it adds up to around a third of global electricity use. AI is helping design ‘smart buildings’ that learn when to heat, cool, or light a space. This means less energy waste, smoother integration with renewable power, and ultimately, less strain on the planet.
Every product on a shelf has a hidden carbon story, from transport miles to packaging choices. AI tools are starting to help businesses trace those footprints with far more precision. For ethical companies, that means a chance to identify areas for improvement and to prove to their community that sustainability is more than a buzzword.
In the Arctic and Antarctic, ice is vanishing faster than most of us can comprehend. AI is helping scientists track that loss in real time, offering clearer warnings about what’s at stake. For polar bears, penguins, and countless others whose lives depend on fragile ice ecosystems, this isn’t abstract data; it’s survival.
Farming has long been a driver of both human survival and ecological harm. AI could tip the balance in a better direction by helping farmers reduce pesticides, protect the soil, and even move away from destructive monocultures and animal exploitation. Done well, this could support a food system rooted in respect for the land, for biodiversity, and for our animal kin.
Is abstaining from AI really possible?
Given the potential harms, it’s not surprising that some people want to abstain from AI altogether. But what does “abstention” really mean?
For some, it means refusing to use overtly AI-driven platforms such as ChatGPT, Midjourney, or the AI design tools in Canva. For others, it might mean avoiding anything with AI integrations in the background.
The difficulty is that AI now underpins so much of the digital world, from social media algorithms to search engines, email filters, and online payments. Unless you leave the internet entirely, you’re interacting with AI whether you choose to or not.
That doesn’t make abstention meaningless; it can be a powerful statement and form of activism. It’s quite likely that we’ll see a swing back to quirky, unique, unfiltered voices, the very antithesis of AI.
But at the level of a movement, boycotting AI may be more symbolic than practical, limiting our reach without significantly reducing harm.
So, the real question becomes: where do we draw the line?
Some people feel that any use of AI amounts to complicity in harm. Others argue that if AI can help reduce the suffering of our fellow animals, as some projects already suggest, we may even have a responsibility to use it.
There’s unlikely to be a single right answer.
Just as veganism is guided by the principle of avoiding exploitation “as far as is possible and practicable”, perhaps our approach to AI needs to be the same: using it sparingly, consciously, and ethically, and only where it genuinely serves our values.
What might this look like in practice? Rather than blanket acceptance or outright rejection, perhaps what’s needed is critical, conscious engagement. For activists and organisations, this could mean weighing each use of AI against the harm it causes and the good it might do for our fellow animals.
AI isn’t going away. The real question is: how do we meet it with open eyes, with care, and with integrity?
Perhaps the key lies in intention. If we treat AI as just another shiny tool, it will almost certainly deepen existing harms. But if we approach it with the same critical lens we bring to food, clothing, and culture, it might become one more way to push for change. Used sparingly, transparently, and always in service of equity rather than convenience alone, AI could become a tool that serves justice for all instead of undermining it.
You still have the freedom to choose whether to abstain or to use it selectively. What matters is that we keep holding ourselves accountable, asking hard questions, and making sure our actions reflect our values.
Because in the end, whether we’re talking about food, technology, or activism, the principle stays the same: as far as possible and practicable, choose the path that reduces harm and builds a kinder future for all beings.