The Semantic Ambiguity of “AI”

Gamer_152
7 min read · Mar 17, 2024


[Image: A pair of glasses placed on the edge of a laptop offers a clear view of an otherwise blurry screen. On the screen is an IDE full of computer code. Image by Kevin Ku.]

Having played video games for more than two decades, I find the way people talk about “AI” bananas. Artificial Intelligence is treated as a recently birthed, cutting-edge concept in software. Its implementations are discussed as inherently genius or at least pregnant with revolutionary results. It’s bizarre because AI, at least the AI that gamers were talking about, has been around since the 1970s. I would argue since the 1950s, even. It was also common as sin to hear people talk about how bone stupid this software component could be. “The AI can’t drive”, “The AI are morons”, “The AI will run into walls”: these are the sorts of complaints you’d read on forums constantly, even as recently as the late 00s. It’s a far cry from the present day, when I keep seeing companies use “AI-powered” or “AI-driven” as badges of quality.
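To be concrete about what gamers meant by “AI”: it was often nothing more exotic than a handful of if-statements or a little state machine deciding whether an enemy patrols, chases, or attacks. Here’s a minimal, hypothetical sketch in Python; GuardAI and its states are illustrative, not taken from any real game:

```python
# A hypothetical enemy "AI" of the kind games have shipped for decades:
# a tiny finite-state machine with no learning and no planning.

class GuardAI:
    def __init__(self):
        self.state = "patrol"

    def update(self, can_see_player, distance_to_player):
        # The entire "intelligence" is a few branching rules.
        if can_see_player:
            self.state = "attack" if distance_to_player < 5 else "chase"
        elif self.state in ("attack", "chase"):
            # Lost sight of the player: wander around looking for them.
            self.state = "search"
        return self.state

guard = GuardAI()
print(guard.update(can_see_player=True, distance_to_player=12))   # "chase"
print(guard.update(can_see_player=False, distance_to_player=12))  # "search"
```

Nothing in there learns, plans, or understands anything. It’s a handful of branches, and for decades that was exactly the sort of thing the word “AI” covered.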

At the most extreme end, AI is taken to be the solution. That is, it is not one solution to one problem; it’s the only solution you’d ever require, no matter the problem. Whether it’s replacing all workers in all jobs or creating bespoke media libraries for everyone on Earth, AI is the answer. This omnicompetent conception of AI mirrors the gods in the machines that we’ve long heard about from sci-fi and futurology. You’ll also notice that these definitions of AI lack any shared statement about the mechanisms through which it works, which seems like a pretty essential aspect of a technology.

So, you’ve got a definition of AI that is so broad it’s nearly meaningless. AI is some sort of software entity that performs any of a variety of jobs, from piloting video game spaceships to directing films to running production lines. It may fulfil these roles with a degree of cognitive power anywhere from that of a deity to that of your allies in Aliens: Colonial Marines. As best I can tell, AI is whenever a computer does something. Sure, the implication in many deployments of the term is that AI is some expert servant gloved in cyberspace. Yet, when people making lofty claims about AI grasp for real-world examples of those computerised savants, they gesture towards software of dubious intelligence like ChatGPT and Midjourney. In a more rational political ecosystem, language that does not communicate a solid idea would be discarded, but when politicians, CEOs, and excitable technophiles talk about “AI”, the ambiguity is an asset.

Back when “cancel culture” was the consuming bugaboo of the right, a lot of people pointed out that the ambiguity of that term provided political leverage. Conservative commentators wanted to create the impression that there was a McCarthy-style purging of right-wing beliefs in the US and, to some extent, the UK. That you could lose your job, have your business repossessed, or be societally exiled for siding against the left on social issues. But that kind of material pushback or complete alienation from a community almost never happened. However, there were a lot of instances of people being told not to use hateful and discriminatory speech. So, talking heads classified everything from a kid telling their dad not to be racist to a small business owner having their shop taken from them as “cancel culture”.

Now, you could say “cancel culture” is rampant and technically be correct because you’ve seen people advocating for a more socially just society. Yet, because you’ve defined the term so broadly, when you say, “cancel culture is rampant”, what you imply is that conservatives are a persecuted underclass. The ambiguity of the term “cancel culture” made it a shapeshifter, able to fill whatever container you needed it to at the time, allowing you to exaggerate both the scale and potency of pushback to right-wing beliefs in our society. The blobbiness of the term “AI” similarly functions to conflate and exaggerate.

For example, let’s say you’re an institute representing a former UK Labour Prime Minister. Your country’s national healthcare system is hideously overburdened, but relieving that burden would mean investing more money in it. And to invest more money, the government would have to raise more money through taxes. Your party won’t tax the poor because that would make them less popular at the ballot box, and the working class are pretty skint as it is. Your party also isn’t going to tax the rich because that’s effectively who runs the country. They hanged, drew, and quartered the last head of your party who suggested that.

So, how do you diagnose and treat all of these people without paying for hospitals, medical equipment, or doctors’ salaries? What about having patients talk to a chatbot instead of a GP? Unfortunately, the leading chatbot we have has a banner below it that tells you that what it says might be entirely wrong, and it has been found giving unwitting interlocutors the recipe for chlorine gas or failing to do basic arithmetic. If you put it like that, no one will want your computer program as their doctor, but here’s where the long bridge of “AI” comes in. That term is used for the toxic gas chatbot, but it’s also used for HAL 9000 or the intelligences that beat chess grandmasters. Even if all you have is the equivalent of the bot that drives into walls, maybe if Labour constituents hear “AI”, they’ll think they’re going to get consultations with a genius. You’d want to be treated by a genius, wouldn’t you?

Or maybe you’re one of the Silicon Valley overseers, and you’re aware that after years of broken promises, widening wealth disparities, and privacy breaches, you could be subject to regulation or general opposition from non-billionaires. You need to convince the world that if they clip your wings, they’ll be destroying their future. So, you spin a fantastical tale about minds in boxes that will colonise the stars and eliminate world hunger. But everyone has to be nice to you because it’s your hand on the power button of the blessed machine. As proof that the AI utopia is imminent, you point to the same generative AI that can’t figure out what clapping should look like or that designed the Glasgow Willy Wonka Experience. They and your hypothetical future AI aren’t the same thing, but because the initialism “AI” is used for both of them, it kinda sounds like they are.

You’ll notice in both my chatbot doctor and Silicon Valley examples that the speakers are ignoring not just the skill level of the technologies but also their field of application. They’re telling you that they can take photo editing tech or a search engine and have it be an ENT specialist or a data scientist. But the term “AI” erases any differences in use case between these technologies.

Along with the above two examples, I’d be remiss if I didn’t mention that AI is invoked in the oppression of labour movements. It’s beaten into us that we have to tolerate any indignity or stripping of financial security in our jobs because we could be automated out of them tomorrow. Fictional AI like Skynet would be smart enough to carry out any labour, and “AI” exists now, so our days are numbered. Supposedly. The strikebreakers never explain why anyone today has long-term contracts if companies actually have an army of infallible, unsalaried digital workers on the verge of launch, but just because those digital workers are a fiction doesn’t mean they can’t be a convenient fiction. And just because AI isn’t up to a task doesn’t mean it won’t be given it.

There’s more hesitancy to implement AI in industries where it could injure someone. Yet, even though LLMs can’t get their facts straight or write an article to save their lives, they’re still clogging the internet with F-tier filler blogs and TikTok scam videos. Many tech elites are not immune to their own propaganda, and even before the rise of OpenAI, many didn’t give a fuck about quality of output. If you view criticism and journalism as just lorem ipsum to copy/paste in between AdSense embeds, of course you don’t care if the computer program that writes that text is only semi-literate.

However, I’m not here solely to tell you that few AI systems are road-ready. My point is also that “AI” is not just a suite of tools that can identify images, paint portraits, or drive cars, albeit with relatively large error rates; AI is also a rhetorical and ideological tool. A lot of powerful people and blind techno-optimists tell us that the world can be changed for the better in the near future without challenging power. But there is a missing piece in their arguments. As a concept defined broadly enough that it can take any form, AI can always fill that hole, no matter its shape.

The semantic slipperiness of the term AI is far from the only reason that it’s been embraced tighter than the last proposed techno-panacea: blockchain technologies. In this article, we’ve brushed up against the black-box nature and newness of neural nets, the capacity of some things dubbed AI to produce actually useful results, and years of seeing AI in media depicted as having a 500 IQ and being applicable to any use case. But so many of the current-day pitches for a deus ex machina rest on how the term AI muddies the waters when it comes to programs’ applications and abilities.

There’s nothing to stop there being multiple competing definitions of AI in play simultaneously, as long as people state what they mean when they use the term. But if the technological elite had to narrow down the definition, they would have to make the capacities and limitations of the technology clear. They would downgrade it from the mystical omni-solution to real and identifiable programs with all their inherent weaknesses and underdevelopments. Best to keep it vague. Thanks for reading.

Written by Gamer_152

Moderator of Giant Bomb, writing about all sorts. This is a place for my experiments and side projects.