Digital Economy Dispatch #145 -- What is or isn't AI?
20th August 2023
I think I’m starting to lose the plot on AI. What seemed so obvious to me some time ago, now doesn’t seem nearly as clear. When I started my journey in the world of computing over 30 years ago, “Artificial Intelligence” was a term used to describe something far off and unobtainable. Theoretical studies and unfathomable programming languages aside, it seemed like we were very early on the long road towards computers that thought and made decisions as humans do. It was almost inconceivable that computers might soon be able to operate autonomously and without succumbing to many of the human foibles that limit our own abilities – inconsistency, errors, bias, fatigue, and so much more.
For technologists like me, such ideas offered a great vision of hope for the future. A more digital society meant a more equitable, less divided world where technology did the heavy intellectual lifting to benefit society as a whole. Perhaps I was trapped in my own bubble, seduced by the rapid pace of technological advances at the end of the 20th Century. Or worse, caught up in the romantic imagery of a future world where digital technology brought advances to so many areas of humanity and supported every aspect of our lives.
Yet, as the age of AI comes more into focus, the reality is nowhere near as clear or as clean as many of us had hoped. As systems of all kinds are infused with greater intelligence and automated decision-making capabilities, a wide variety of concerns have been raised. The recent exchanges about the role of AI in society have taken a distinctly worrying turn. From enabling a surveillance economy, reinforcing social barriers, and driving the loss of jobs to the de-humanizing of society, people are questioning why, where, and how to manage the deployment of AI. Some are even calling for a “pause for reflection” at the same time that others want to accelerate digital transformation as a pawn in a bigger geopolitical battle with global implications for us all.
As the work in AI evolves and its role expands, such discussions will, and must, continue. However, as it does so, I am struck by an even more fundamental concern: I am no longer sure that I understand what is or is not AI.
I Blame Alan Turing
Many decades ago, computers were roomfuls of vacuum tubes capable of performing no more than a few dozen simple arithmetic operations per second at the behest of lab-coated scientists and engineers. Even then, visionaries such as Alan Turing foresaw the potential. The so-called “Turing Test” was the embodiment of how he viewed the relationship between human and artificial intelligence. When it could no longer be distinguished whether a response to a question was human or computer generated, he argued, then we could be said to have created artificial intelligence. And for a long time, that was sufficient.
However, in the intervening years, computing capabilities have moved on. The price, performance, and scale of digital technology advances mean that the latest supercomputers are capable of several quintillion instructions per second. Everyday computers have access to petabytes of stored data. Rather than a room full of unstable stand-alone components, computers are sufficiently small, reliable, and efficient to be embedded and connected within everyday objects all around us. Systems are designed and built using sophisticated algorithms written in programming languages that support a wide variety of programming paradigms suited to solving problems in every kind of domain.
No surprise then, that for many people the “Turing Test” is little more than an interesting historical reference and long ago became irrelevant as a meaningful litmus test for AI. Mathematicians and computer scientists have moved ideas on AI so far forward that they are almost unrecognizable from my early days of designing so-called “expert systems” by writing a few stuttering lines of computer code in Prolog. A much richer and more complex set of AI approaches has emerged.
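To make that era concrete, an “expert system” amounted to little more than human knowledge encoded as explicit if-then rules that the machine chained together. Here is a minimal sketch of that idea, written in Python rather than Prolog, with invented rules and facts purely for illustration:

```python
# A forward-chaining rule engine in miniature: the classic "expert
# system" pattern, here with made-up medical rules for illustration.
RULES = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu", "short_of_breath"}, "see_doctor"),
]

def infer(facts: set[str]) -> set[str]:
    """Keep applying rules until no new conclusions appear."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(infer({"has_fever", "has_cough", "short_of_breath"}))
# -> includes 'possible_flu' and 'see_doctor'
```

Everything such a system “knows” is hand-written by its designers; nothing is learned from data.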
Until recently, much of this work to expand AI was out of view (and certainly outside of the comprehension) of the vast majority of people. For “the rest of us”, a much more intuitive, down-to-earth perspective was being popularized as core computing capabilities became faster, cheaper, and more ubiquitous. Intelligence could be simulated through the use of simpler techniques supported by a combination of high-performance computing and clever programming skills. In effect, a much more practically relevant view of AI emerged.
Smarter than the Average Bear
This was seen most clearly in a wave of “smart” solutions and services. Over the past few years, many “smart” products and systems have used the speed, power, and connectivity of today’s computers to perform tasks that appear to demonstrate intelligence. By repeatedly tapping into multiple data sources, accessing the latest information from the systems to which they are connected, and remembering previous behaviour, they can carry out sophisticated pre-programmed actions.
This has significant implications for many common tasks. For example, your internet-connected “smart TV” is capable of automatically downloading and installing the latest software to manage its programming, turning itself on and off to reduce power consumption, and adjusting audio settings to suit different situations or to optimize sound for the type of programme being watched. Many would count this as intelligence. Furthermore, as it is not being controlled by a human, it surely must be a form of “artificial intelligence”.
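Seen from the inside, though, that behaviour can be written down as a handful of pre-programmed rules applied to whatever inputs the device polls. A hedged sketch, with hypothetical state fields and action names:

```python
# Hypothetical "smart TV" logic: every apparently intelligent action is
# a fixed rule over inputs the device checks on a timer. Field and
# profile names are invented for illustration.
AUDIO_PROFILES = {"film": "cinema", "sport": "stadium", "news": "speech"}

def on_tick(state: dict) -> list[str]:
    """Run the TV's pre-programmed rules against its latest inputs."""
    actions = []
    if state["update_available"]:
        actions.append("download_and_install_update")
    if state["idle_minutes"] > 240:
        actions.append("power_off")  # reduce power consumption
    profile = AUDIO_PROFILES.get(state["programme_type"], "standard")
    actions.append(f"set_audio_profile:{profile}")
    return actions

print(on_tick({"update_available": True, "idle_minutes": 10,
               "programme_type": "film"}))
# -> ['download_and_install_update', 'set_audio_profile:cinema']
```

Whether this counts as intelligence is exactly the question at hand: the device adapts to its situation, but every adaptation was anticipated by a programmer.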
With such “smart” behaviour widely available, many of us now expect that the environments in which we work and live will adjust to daily patterns. Information is gathered to ensure that actions and reactions are appropriate to the situations that emerge. Below the surface of these smart systems, very few people are aware of the details of how they achieve this. That’s both good and bad.
Please Try This at Home
Of course, AI is much more than pre-programmed responses and faster processors. A lot has been happening to enhance a system’s ability to learn, reason, and create new forms of knowledge that expand human-based ways of solving problems. These capabilities are increasingly becoming accessible to all, not least through approaches such as generative AI and the large language models (LLMs) that underpin them. These are heralding a new wave of smart products and services, and adding new capabilities to many of those currently in use. While welcome, this is inevitably adding to the challenge we all face in determining how best to use them.
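One measure of that accessibility: a small open-source language model can now be run in a few lines of Python. This sketch uses the Hugging Face transformers library with the modest GPT-2 model (production LLMs apply the same principle at vastly greater scale; the first run downloads the model):

```python
# Generate text with a small open-source language model.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
result = generator("Artificial intelligence is", max_new_tokens=25)
print(result[0]["generated_text"])
```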
We want the technologies around us to blend into the background and become part of the fabric of our lives. Intelligent or otherwise, they should fit seamlessly into the way we live. Yet, to a large extent, this increasing familiarity with digital technologies has lulled us into a false sense of security that is beginning to concern many people. Questions are now being asked: What information are they collecting? Who owns and has rights to the data? How are decisions being made and who governs them? What regulations are appropriate to guide their use? And so many more.
So, here is a little test you can carry out for yourself. Take one of the so-called “smart devices” that you see around you and ask yourself some questions about how it works. Consider the data that it collects, shares, and consumes. Try to assess how it makes its decisions. Then, think about the new capabilities that it will undoubtedly offer as it evolves. And finally, come to your own conclusion about whether you think it is exhibiting “artificial intelligence” and what that means to your view of the world.
Not sure where to start? Here is a list of “smart” products and services to consider, with a worked sketch of the first item after the list:
1. Your heating thermostat.
2. Your car’s adaptive cruise control.
3. Your internet-connected toothbrush.
4. Your bank’s credit rating system.
5. Your dating app’s matching algorithm.
6. Your email client’s prioritization and “suggested response” features.
7. Your online shopping platform’s recommended purchases button.
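As a worked example for the first item, the sketch below contrasts a purely pre-programmed thermostat with one that adjusts its setpoints from the overrides it has recorded. Both versions are hypothetical, and the question to ask is at which point, if any, you would call the device “artificially intelligent”:

```python
# Two hypothetical thermostats: one fixed, one that learns from you.
from statistics import mean

def scheduled_setpoint(hour: int) -> float:
    """Pre-programmed: the same rule every day, no data collected."""
    return 21.0 if 7 <= hour <= 22 else 16.0

def learned_setpoint(hour: int, history: dict[int, list[float]]) -> float:
    """'Smart': averages the temperatures you chose at this hour before."""
    past = history.get(hour, [])
    return mean(past) if past else scheduled_setpoint(hour)

history = {7: [19.5, 20.0, 19.0]}    # manual overrides the device remembered
print(scheduled_setpoint(7))         # 21.0 -- the fixed rule
print(learned_setpoint(7, history))  # 19.5 -- adjusted from your behaviour
```

The second version collects data about you and changes its decisions accordingly, which is precisely where the questions about ownership, governance, and intelligence begin.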
The New Normal
As digital technologies become more ubiquitous, we must ask new questions about the activities we undertake and adjust our understanding of the world around us in response. A key part of this is to stop to consider everyday devices and the intelligence that is now embedded within them. By collecting and analysing data, these solutions are using increased processing power and more sophisticated algorithms to learn about our behaviour and adjust their actions to optimize how they work. As these “smart systems” rapidly evolve with new capabilities, they will force us to question how everyday devices carry out their tasks and shift our perspectives on what we think artificial intelligence is or isn’t.