Defining ‘artificial intelligence’ for regulation

By Paul Waller

May 24, 2022

The AI regulatory focus needs to be on the domain of application. (Alex/Adobe)

In the course of the most recent wave of expectation and hype about “artificial intelligence” (AI), roughly the last 10 years, there have been repeated attempts to define what it is. Serious documents, such as those from academics, governments, or professional bodies, typically say that there is no agreed definition and then propose their own or fall back on a well-known one (for example, the UK Government used the phrasing: “AI can be defined as the use of digital technology to create systems capable of performing tasks commonly thought to require intelligence”). Popular articles tend not to agonise over it, but use the term to imply something technically advanced or futuristic.

Mostly this doesn’t matter too much. But now that governments are crafting laws referring to AI (e.g. the EU’s AI Act and the UK National Security and Investment Act 2021), it is beginning to matter a lot. The scope of a law should include neither too much nor too little; be clear about which cases fall within it and which do not; be understandable by anyone using it, so that anyone can easily determine whether a case falls under it; and not need continual updating. Consequently, the debate on the scope of the EU AI Act (ongoing at the time of writing) is crucial to the impact of the eventual regulation.

Unfortunately, such debates about what “AI” is are probably unresolvable as they are based on a false premise. It is a semantic problem, but words matter, particularly in law.

We’ve gone down a blind alley

This seems like a roadblock for prospective legislators, but it should not be, because it is irrelevant. The purpose of regulation is to protect one group of people from harm resulting from the actions of other people, such as those selling or using dangerous products. The focus of attention for regulators needs to be on the point at which people may be harmed by an action, whether intentionally or accidentally, directly or indirectly. Laboratory experiments and innovations need not generally be a regulatory concern until they leave the lab (beyond ethical considerations and the health and safety of the lab workers, of course). The discovery that E = mc² didn’t immediately trouble lawmakers, but when people were able to utilise the energy released by nuclear fission, their attention was rightly grabbed.

The term “artificial intelligence” was coined in 1956 by a group of researchers looking for a name for their field of scientific and engineering study that differentiated it from existing ones like cybernetics. Hence it is a term equivalent to “physics” or “biology”. However, popular usage has interpreted “artificial” as an adjective modifying the noun “intelligence”, understood as a human attribute, creating a term for a real thing (often a “technology”) that might be built using scientific and engineering research, in the way a bomb may be built using discoveries in nuclear physics and metallurgy. Unfortunately, most attempts to define AI start from that object-related image. If the researchers had adopted a novel abstract term for their field of study (something like “cybolics”, perhaps), maybe we wouldn’t have the problem we now do.

Ask a million people, get a million answers

A study of attempted definitions reveals their origin in this interpretation. There is commonly a variation or expansion of the dictionary definition of “artificial” (“man-made”) such as “create a system”. Some definitions just leave “intelligent” or “requiring intelligence” in place thereafter. Others try phrases associated with human intelligence, incorporating “cognitive abilities”, “reasoning” or “learning”. Human characteristics dominate in definitions that refer to artefacts (e.g. software) that can “make decisions, recommendations or predictions”. An inappropriate anthropomorphism gets in the way of clarity.

When pushed to the next level of detail (as happened in the crafting of the EU AI Act), definitions often stay in the anthropomorphic realm and land on “machine learning”. This in turn suffers the same fate as “artificial intelligence”, being defined by uninformative variations on “systems that learn”. It originated, however, as a similarly unfortunate term for a variety of mathematical and statistical methods for finding patterns in a set of data that might be used to make predictions. Other definitions may thus go on to list such methods, including ones that have been around for ages but now underlie all sorts of things that get called “AI”. These include, by the way, neural networks, which make the human connection again by reference to brains. But a loose collection of mathematics does not provide a definition or a basis for regulation.

A combination of mathematics and advances in data collection and software and hardware engineering has created the capability to do things that couldn’t be done before. These include predicting unknown quantities (e.g. traffic density in the rush hour), estimating which group an observation falls into (a scan suggests either a positive or negative diagnosis), choosing a likely option from many available (this image is probably a dog), finding clusters (of consumers in purchase data), language processing (translating or generating text, interpreting speech), and goal-seeking (finding a winning move in Go or chess). After much hype, we are finding out that many of these things don’t perform as well as expected or claimed, and they may be used for good or bad purposes. The one feature common to all of them, however, is that there is uncertainty associated with the output, and the consequences of acting on that output depend heavily on the circumstances in which it is used.

So we need to start in a different place

The regulatory focus thus needs to be on the domain of application (such as medical diagnosis, employment, immigration, policing, cybersecurity, or public administration), how that domain is changed by the capability of new technology and methods (whatever they may be called), and the nature of the possible harms that may occur. Sectoral regulators are challenged by such developments in their realms of responsibility, but this is normal, as certifiers of medicines and medical devices will confirm. The debate raised by the EU AI Act on banned or high-risk applications is therefore very relevant, though confounded somewhat by an attempt to bound them with a technical, product-orientated definition of AI, which is an unnecessary distraction.

“Artificial intelligence” has had a few years in the limelight and served to attract attention or funding when put to use as a label for research or a business, or on the cover of a magazine. But it is not a useful term for precise communication or as the basis for legislation. Its adaptation to other semantic forms is at best meaningless and at worst grating: “artificially intelligent”, “an AI”, “AI marketplace”, “AI supplier”. It is best reserved for its originators’ intended purpose – so doing a PhD in AI is just fine, but it should be dropped in all other situations. Writers and lawmakers will just have to do a bit more thinking about what they actually mean.

This article is republished from Apolitical.

