AI is looking like it will upend vast swathes of the economy and society. But by treating it as something other than highly advanced software, we allow the pioneers to act with impunity.

It really should go without saying, but if a system:

- runs on ordinary computing hardware,
- is built, trained, and configured through code written by people,
- and is versioned, deployed, and maintained like any other release,

…then it's fundamentally a software system, regardless of whatever emergent properties it may develop.

Of course, it can be software like most people have never experienced before. But it is still software. It may be probabilistic, learned, and partially opaque, but it’s still engineered, deployed, and governed by people.

That's not to say there aren't some genuinely interesting conversations to be had about whether a system can transcend the substrate it was built on. Nor does it belittle the impact AI is having, and how it will, in all likelihood, change every part of society.

However, when we treat it like some kind of mystical entity or deny what it is at its very core, the only people who win are those who have something to sell.


Technological Cold Reading For Gain

Our industry has long run on people buying technology they do not understand from people who understand it only a fraction better. AI has accelerated this phenomenon.

Treating AI as if it's magic, somehow beyond software, is a tremendously helpful trick for charlatans selling AI solutions to non-technical buyers.

Every consultant can promise "AI transformation" without being accountable for what they are actually building or how it works. As it's so hard to pin down what AI really is in a neat little soundbite, people can claim it's pretty much whatever they want it to be to make the sale.


The Widened Responsibility Gap

Big tech has an impressive track record of failing to take accountability for the outcomes of its choices. Here are one, two, three, and four examples.

Being a pioneer of a technology should not excuse you from the fallout of the choices you make in pursuit of your goals.

AI is already accelerating the ability of manufacturers of all shapes and sizes to deploy powerful software quickly, without necessarily taking responsibility for the outcomes.

By hiding AI behind mystical obscurity, we create linguistic escape hatches that dodge accountability. For example:

- "The model made that decision, not us."
- "It's a black box; nobody can explain why it did that."
- "The AI is still learning."

Even if future AI systems develop genuine emergent properties or consciousness, this doesn't absolve humans of responsibility for the training data, optimisation targets, and deployment decisions that shape their behaviour.


Democratic Oversight of Technologies

Aside from fuelling disingenuous marketing, the mystification of such technology is counterproductive to democratic oversight.

When lawmakers don't understand that AI is software with human fingerprints all over it, they can't regulate it effectively.

When business leaders think AI is magic rather than engineering, they can't make informed decisions.

When the public believes AI is some autonomous entity rather than sophisticated automation, they can't hold the right people accountable when things go wrong.

The simple and empowering truth is that AI, at its very core, is extraordinarily powerful software that learns and synthesises patterns instead of following hand-coded rules.
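The distinction can be made concrete with a minimal, purely illustrative sketch (the data, function names, and toy "spam" task are hypothetical, not drawn from any real system). Both functions below are ordinary software; the only difference is that one follows a rule a human wrote by hand, while the other derives its rule from labelled examples:

```python
def hand_coded_is_spam(message: str) -> bool:
    # Classic software: a human wrote the rule explicitly.
    return "free money" in message.lower()


def learn_spam_words(examples: list[tuple[str, bool]]) -> set[str]:
    # "Learned" software: the rule (which words signal spam) is
    # extracted from labelled examples rather than written by hand.
    spam_counts: dict[str, int] = {}
    ham_counts: dict[str, int] = {}
    for text, is_spam in examples:
        counts = spam_counts if is_spam else ham_counts
        for word in text.lower().split():
            counts[word] = counts.get(word, 0) + 1
    # Keep words that appear in spam but never in legitimate mail.
    return {word for word in spam_counts if word not in ham_counts}


# Illustrative training data, chosen by a person.
training = [
    ("claim your free money now", True),
    ("free money inside", True),
    ("meeting moved to friday", False),
    ("money transfer for invoice", False),
]
spam_words = learn_spam_words(training)


def learned_is_spam(message: str) -> bool:
    # The learned rule is still deterministic, inspectable code;
    # its behaviour is shaped by whoever chose the training data.
    return any(word in spam_words for word in message.lower().split())
```

Note that every human choice remains visible: someone selected the training examples, someone decided how words are counted, and someone deployed the result. Scaling this up to billions of parameters makes the learned rule harder to read, not less engineered.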

Understanding this does not diminish its capabilities, but it should clarify where human responsibility lies. In a world that is being rapidly shaped by AI, this distinction is essential.