AI and the terminology land grab

We’re in the middle of a terminology land grab: words are being co-opted and freighted with new meaning. While "AI" can mean a lot of things, it now typically means “whatever helps us sell our product.” I explain why pushing back on the land grab is politically important.

A model of an eye, made of ivory and horn. It sits on a dark surface and casts a shadow.
It's an admittedly weak pun: a eye. Model of an eye, from the Wellcome collection via Wikimedia Commons. Licensed under a Creative Commons Attribution 4.0 International license. https://commons.wikimedia.org/wiki/File:Ivory_and_horn_model_of_an_eye,_Europe,_1601-1700_Wellcome_L0058730.jpg

A trivial thing happened to me a year or so ago. A discussion was taking place (online and asynchronously, of course) about the need to have a number of different images, all with the same template, but with different text in each image. Being someone who has both a background as a graphic designer and an interest in good, old-fashioned automation of boring tasks, I suggested that I could look into how we could use a certain set of tools to – and this is where I went wrong – generate the images based on texts contained in separate files. Essentially, give a spreadsheet or a plaintext document to a simple tool and have it put the text into the right place in the image template. In offering this suggestion, I mentioned a couple of tools. I named Inkscape, the wonderful, F/LOSS vector graphics editor. And I named Python, an incredibly popular (and also F/LOSS) programming language that, among its many applications, sometimes gets used for scripting in Inkscape.
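The kind of pipeline I had in mind can be sketched in a few lines of Python. To be clear, this is an illustration rather than the script I actually proposed; the SVG template, the placeholder token ({{TEXT}}), and the file names are all invented for the example:

```python
from pathlib import Path

def fill_template(template_svg: str, texts: list[str],
                  placeholder: str = "{{TEXT}}") -> list[str]:
    """Return one SVG document per input text, with the
    placeholder swapped for that text."""
    return [template_svg.replace(placeholder, text) for text in texts]

# A trivial SVG template with a placeholder where the text should go.
template = """<svg xmlns="http://www.w3.org/2000/svg" width="400" height="100">
  <text x="20" y="55" font-size="24">{{TEXT}}</text>
</svg>"""

# In practice these lines would come from a spreadsheet or plaintext file.
texts = ["First slide", "Second slide", "Third slide"]

for i, svg in enumerate(fill_template(template, texts), start=1):
    Path(f"image_{i:02d}.svg").write_text(svg)
```

Each resulting SVG could then be exported to a raster format with Inkscape's command line (for example, `inkscape image_01.svg --export-filename=image_01.png`), which is the kind of scripted Inkscape use I was suggesting. No chatbot required.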

You might have predicted the punchline to this joke when I mentioned earlier that I went wrong by using the word “generate.” Yes, my offer to look into automating our workflow with Inkscape and Python was interpreted as a suggestion that we should use Generative AI for the task. People read the word “generate,” perhaps didn’t know the tools I was mentioning, or just quit reading after that word, and decided that I must be talking about AI. Once one person starts the ball rolling, it takes extra effort for someone to look back and verify the validity of the first response. This dynamic resulted in a small pile-on, and made me feel temporarily sad. As I say, fairly trivial. It’s not nice to be the subject of a pile-on, especially if it’s a result of a misunderstanding. But in the end, no harm, no foul. Everyone moved on, and each image was made manually, without my intervention, and without any automation.

If the situation I’ve described above is so trivial, why am I writing about it the better part of a year later? The pile-on was ultimately not a big deal from a personal perspective (okay, I'll cop to it feeling bad for maybe a week or two), but it shows something more distressing about where we are with terminology. The conflation of the word “generate” with the idea “Generative AI” is really unfortunate. It strips us of a handy word that’s been in the English language in one way or another for 500 years and a bit. It’ll be a shame if it dies like this.

In the specific context of computer graphics, the new meaning attributed to “generative” is especially unfortunate for the people who have been puttering away in the world of generative art for the last half century. Cool artistic things you can do with computers have now been pushed out of the popular language space by images and videos made with a handful of text-to-image and text-to-video models (and artists in my circles are suffering very real harm from the shift, with all digital art now apparently assumed to be about “AI”). Again, this may seem like a bit of an edge case. Why does it matter, in the grand scheme of things, if the generative art people have to live with a new meaning and if “generate” and all of its linguistic companions come to be associated with AI?

The land grab

We’re in the middle of a terminology land grab, in which certain words are being co-opted and freighted with new meaning. Generative/generate is only one of these. Terms that until recently had other meanings are being grabbed and put to uses that are narrow or specific to particular technologies, and worse, that are deployed in the service of selling a specific class of product. While "AI" can mean a lot of things, including useful ones like certain aspects of computer vision, it now typically means “whatever helps us sell our product” or “tools that make us feel like we’re not falling behind.” And the product or tool is more often than not some variety of GenAI. Because “GenAI” is such big news, “generative” becomes a casualty of this move, losing its other meanings through its co-optation. This is a problem, because it turns “generate” into a dog whistle that can shut down conversation, as it did in my case. And worse than merely triggering a knee-jerk response, this narrowing costs us diversity of meaning: when “generate” is synonymous with GenAI, other forms of generation are rhetorically folded into that category. If "generate" were the only word this was happening to, we'd have less of a problem.

Language is a wonderful and ever-changing thing, so why am I distressed that it’s doing what it does best? Am I being that pedant who says that the English I learned when I was young is the correct English, and everything else is wrong? No, of course not. Or at least, I hope not. This is not about changes to English in an abstract sense, this is about the co-optation of general words into a specific and dangerous use. There is always a tension between technical language (or any kind of specialized language, really) and popular language. The translation of technical terms (or jargon, or terms of art in any field) into words that are easier to use casually is a process that loses nuance, and opens the use of language up to new risks.

This has been especially true of the “AI” boom of the last few years. There’s this whole class of technology that gets lumped under the heading of “Artificial Intelligence” and which comprises a lot of different things. One day I’ll do an essay about how we define the suite of technologies encompassed by AI, but today is not that day. For now, I’ll draw some broad strokes. Over the years, the term “AI” has risen and fallen in popularity, as it has over-promised and under-delivered (this is the cycle of "AI Winters" and "AI Springs" you sometimes hear about), as other terms have become better at monetizing themselves, or as the general ebb and flow of terminological popularity has buffeted it. Because the idea of artificial intelligence (having computers somehow replicate what we think of as thought) is conceptual, not technical, it is a basket capable of carrying a huge variety of different things. And this is a risk in the current period of hype and hyperactivity.

Companies invested in making money from products predominantly based on large language models (one of the main technologies sitting behind the friendly chat window, or underlying the push by companies like Microsoft and Google, which want to take away the hassle of humans having to read or write or synthesize) are generally invested in the sub-class known as Generative AI – systems which probabilistically make stuff (instead of, say, interpreting stuff, a different use case for other things also called "AI"). The enormous groundswell of interest in the apparently frictionless production of text, images, and videos has led to a belief among many decision-makers that adoption of these technologies is a necessary and important component of future-readiness. It is obviously also in the interests of the companies selling these tools for large swathes of the public to believe in their inevitability.

The risks of the land grab

All of this means that we’re sitting in a moment during which all discourse seems to somehow involve “AI” and all public and private organizations (at least at the decision-making level) seem to think that they need this “AI” thing, whatever it may be. Because the race to sell “AI” and “GenAI” in particular is so needlessly wasteful, resulting in hyperscale data centres, rapacious resource use, and the exploitation of armies of outsourced workers in global majority countries, a contingent of principled people are fighting back against the widespread adoption of these tools. But this poses a problem if there’s no generally-agreed definition of what “AI” means. It’s fine and good to say “no AI” but the boundaries of this rejection are unclear. We can engage in all kinds of what-aboutism on the definition of “AI” as a term or a field, and argue that we shouldn’t discard the useful, older things that also fall under this banner while trying to reject the wasteful, new things (but few people are really advocating for ditching autocomplete and all forms of machine translation when they say "No AI"). Or we could not even trouble the definition and just say that “AI” means anything being marketed under that banner. Both methods are lacking.

Here's the problem I'm trying to point out: “AI” has always been a kind of marketing term (famously, it was coined in a funding application for a summer project on a topic which did not yet exist under the artificial intelligence banner, but had been floating around under other names for some time), and it is now a marketing term which stands in for hubris, rapacious extraction, and a very problematic emphasis on the future of humanity at the expense of present-day humans and the planet. “AI” has only the meaning attached to it by the people trying to sell it, and by saying “No AI!” without being clear on what we’re rejecting, we are placing our possibilities for change partially at the mercy of the people doing the harm.

Remediation

What kind of remedy is there to this problem? “AI” has always been a nebulous term, that basket capable of carrying any number of different technologies, as long as they’re all serving the same broad goal of creating synthetic thought. But the problem now is threefold: First, the Generative AI systems being pushed down our collective throats are hugely wasteful and are based on all kinds of unethical extraction. Second, the ill-defined nature of “AI” means that we are punching at clouds when we try to fight back, and this limits the effectiveness of efforts to curtail the documented ill-effects of the headlong rush to make “AI” happen as a technological and societal paradigm shift. Third, by accepting “AI” as the relevant term, we are playing in a field owned and managed by others – they control the narrative around the term, and those who don’t like what’s happening are placed in a reactive position.

My two small remediations, then, are these: specificity and reframing. Instead of talking about “AI” – a term which means anything a handful of rich assholes want it to mean – we need to talk about specific technologies and activities. It’s less sexy, but more actionable. We can reject hyperscale data centres, we can reject the ridiculous financialization of certain computing components, and we can reject ChatGPT or Grok or Claude in specific, targeted ways. We can reject the de-skilling of work, and we can reject re-organization in companies. We can reject surveillance and casualization. Each of these is more tangible than “AI,” and each has a clearer group of stakeholders to mobilize. We can reject all kinds of things that fall into the current “AI” basket without ceding control over the terminology to the people trying to sell it to us.

My second remediation is reframing. Because rejection is a reactive response, I’d argue that we also need some positive actions or imaginaries (I've written about this before). It’s one thing to reject, and another to imagine what we’d like instead. There is a disempowerment in only rejecting and not simultaneously imagining something better; the rejection needs to be accompanied by some better dreams.

Alternatives to the paradigm currently being sold to us do exist. They existed five years ago before all this hype started, and they can exist again. I could previously, and still can, code myself a little script that puts some text into some images, without the intervention of a chatbot. And if you'll pardon my swearing, how the ever-loving fuck did so many people come to believe that doing things the way we did them a few years ago had suddenly stopped being possible or desirable? Living inside an over-active hype machine run by a bunch of rich bros and their acolytes seems to have warped the shape of reality, and along with it, the modes of thought and speech we believe to be possible. We can start by taking the words back, by denaturalizing and questioning. And then we can get on with thinking about what we actually want, not what someone else wants us to want.

P.S. In the end, I did come up with a little pipeline for simply putting text into a visual template. Instead of Inkscape and Python, I wrote a little bash script that gets ImageMagick to do it. This is how I now make my slide decks. You can read about it in this post.