
Digital Economy Dispatch #135 -- Beyond the Hype of ChatGPT: The Role of AI in Digital Transformation

11th June 2023

The hype surrounding AI has never been more intense. It seems that commentators from every conceivable domain are rushing to predict how the latest developments in AI herald a step-change in business and a new era for society. Undoubtedly, the emerging tools using generative AI based on Large Language Models (LLMs) have captured our collective imagination in ways we have not seen before. As a result, the past few months have seen a crescendo of hype about how AI will revolutionize how we work and play, but may also hasten our extinction, destroy millions of jobs, make it impossible to tell truth from fiction, and threaten civilization itself.

Such attention has clear benefits: it broadens the discussion of how we view the role of technology in our lives and opens many people’s eyes to the impact of digital transformation on every aspect of society. However, this intense focus on the latest advances in AI also brings the danger that many people fail to recognize the wider footprint of AI, and it is leading to misunderstandings about the role of digital transformation in delivering these changes.

AI and its associated technologies have been slowly but surely changing our world for several decades. While ChatGPT, Bard, and other recently released tools are taking all the headlines, are we missing the point of what AI has been and will be doing to rewire and redesign our world?

Back to the Future

From its earliest days, much of the history of computing has been inspired by the goal of creating “machines that think”. Early pioneers, including Alan Turing, Herb Simon, Marvin Minsky, and John McCarthy, laid the groundwork for AI, but progress was slow due to limited computing power and few accessible datasets. For many people, the turning point for AI can be traced back to the Dartmouth Conference in 1956, where a dozen mathematicians held a six-week workshop and the term “artificial intelligence” was first coined. They foresaw the potential impact of automata theory and cybernetics on the future of humanity and initiated a new field of study aimed at developing many of the core principles behind AI.

It wasn't until the 1980s and 1990s, however, that AI had its first major impact beyond mathematicians and academics. During that time, as in my own undergraduate degree in Computational Science, learning to program in languages such as Prolog and Haskell was a core part of the curriculum for a new generation of university students. As a result, expert systems and machine learning techniques became more widely practiced. Unfortunately, the hype surrounding AI soon outstripped its capabilities, and more traditional computing approaches were adopted across businesses and into the home. This led to what became known as the "AI winter" in the late 1990s, when investment in AI research dropped significantly.

It was not until well into the 2000s, with breakthroughs in deep learning powered by rapid advances in the availability of big data and powerful GPUs, that renewed interest in AI began to emerge. This was highlighted in 2011 when IBM's Watson showcased the potential of AI by defeating human champions on the quiz show “Jeopardy!”. This acted as a catalyst for a variety of work in AI, with further high-profile events such as DeepMind’s use of its AlphaGo system to defeat the Go world champion, Lee Sedol, in 2016, an event watched by over 200 million people.

These events created a lot more interest in the potential uses of AI. Since that time there has been an explosion of AI applications across various domains, including healthcare, finance, and transportation. AI-powered technologies such as virtual assistants, autonomous vehicles, and recommendation systems became part of everyday life. Advances in so-called “smart” digital products and services are all around us. Often they are out in the open where they can be seen, such as your bank’s mobile app or the devices you buy for the home to play music and control the heating. However, more and more we see these capabilities buried inside the products and services we have been using for some time: cars, TVs, and washing machines, for instance.

The Power Behind the Throne

Underlying AI’s progress over the past 50 years has been a convergence of advances in data analysis, access to vast numbers of digital data sources, improvements in the availability and quality of high-speed connectivity, and cheap, abundant raw computing power in the cloud. The challenge has been to bring this combination of capabilities together to provide what we might view as “intelligence”: the creation of algorithms that recognize situations and solve problems by learning from earlier experiences and applying that knowledge in unfamiliar contexts. To achieve that feat, what we’re experiencing today is largely the adoption of digital technology advances through knowledge management techniques that apply brute force with very large computing resources to examine extreme numbers of possibilities and variations. By “training” AI systems with a lot of data about known situations, it is possible to compare a new situation to what has been seen before and come to a set of likely conclusions.
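This idea of comparing a new situation to previously seen examples can be illustrated with a toy nearest-neighbour classifier. The data, labels, and feature values below are all invented for illustration; real systems use vastly larger datasets and more sophisticated models, but the underlying principle is the same.

```python
# A toy sketch of "learning from known situations": a 1-nearest-neighbour
# classifier labels a new situation by finding the most similar example
# it has seen before.
from math import dist

# "Training" data: known situations described by two features, with labels.
# (All values here are hypothetical.)
known_situations = [
    ((1.0, 1.0), "normal"),
    ((1.2, 0.9), "normal"),
    ((8.0, 9.0), "fault"),
    ((7.5, 8.5), "fault"),
]

def classify(new_situation):
    """Return the label of the closest previously seen situation."""
    _, label = min(known_situations, key=lambda kv: dist(kv[0], new_situation))
    return label

print(classify((1.1, 1.0)))  # closest to the "normal" examples
print(classify((7.8, 9.2)))  # closest to the "fault" examples
```

The “brute force” the text describes is visible even here: the classifier simply measures the new case against everything it has stored, which is why abundant computing power and data were the enabling ingredients.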

This combination of digital technologies and knowledge management techniques provides the basis for many kinds of solutions. It can be used to address a very broad set of applications and bring value to a wide set of stakeholders. The work in AI builds on this to deliver capabilities with characteristics that we associate with intelligence: rapid synthesis of large amounts of data, experience-based decision making, adaptation to new contexts, and cooperative alignment across different actors.

The use of these capabilities opens up a wealth of opportunities in many situations. Kathleen Walch has classified these into seven styles of AI solution that we typically see today:

  • Hyperpersonalization – using AI to develop a profile of each individual, and then having that profile evolve and adapt over time based on the activities being monitored.

  • Autonomous systems – combinations of hardware and software that interact with their surroundings and accomplish a task or reach a goal with minimal human involvement.

  • Predictive analytics and decision support – using AI to understand how past or existing behaviors can help predict future outcomes or help humans make decisions about future outcomes.

  • Conversational AI – supporting interaction between machines and humans across a variety of media including voice, text, and images.

  • Exception management – applying AI to seek patterns in data sources, learn the connections between data points that match known patterns, and search for anomalies in the data.

  • Recognition – using AI to identify objects and features in images, video, audio, text, or other unstructured data.

  • Goal-driven activity – using AI to learn rules and apply them to find ways to achieve stated goals in areas such as strategy, role playing, gaming, and other activities.

In isolation and in combination, these patterns of AI enable us to address many different problem areas. Much of what we do today is to refine the challenges we face to be amenable to these AI patterns. This is the basis for the AI revolution we’re experiencing today.
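To make one of these patterns concrete, the exception-management style can be sketched as a simple anomaly detector: learn what “normal” looks like from historical data, then flag new points that deviate sharply from it. The sensor readings and the three-standard-deviation threshold below are illustrative assumptions, not part of Walch’s classification; production systems use far richer models.

```python
# A minimal sketch of the exception-management pattern: flag new data points
# that lie far outside what has been observed historically, using a z-score test.
from statistics import mean, stdev

def find_anomalies(history, new_points, threshold=3.0):
    """Return the new points lying more than `threshold` standard
    deviations from the historical mean."""
    mu, sigma = mean(history), stdev(history)
    return [x for x in new_points if abs(x - mu) > threshold * sigma]

readings = [10.1, 9.8, 10.3, 10.0, 9.9, 10.2]  # past sensor readings (hypothetical)
print(find_anomalies(readings, [10.1, 25.0, 9.7]))  # → [25.0]
```

Even this crude version shows the shape of the pattern: the value comes not from any single rule, but from continuously comparing incoming data against an evolving picture of normality.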

Let’s take an example to understand how AI redefines the way we experience the world. Today’s cars are incredible machines packed with digital technologies. A high-end car may contain dozens of CPUs and over 150 million lines of software code. Sensors are used to record, assess, and report on every aspect of a car’s operation. Vehicles communicate with each other in real time to share information about their driving experiences, current road conditions, and expected traffic patterns. Some people joke that a Tesla can best be described as an “iPhone on wheels”.

The consequence of this digital transformation of the car is that driving has been redefined as a predictive problem: working out what to do next by processing, in real time, a multitude of data streams from sensors and cameras that describe the state of the car, the driver, the road ahead, the weather, the road conditions, and so on. Every fraction of a second, the car’s digital management systems are processing previous data, adding new information into their decision-making systems, and communicating with other vehicles around them, all to address the question “what can I determine from all the data I have to decide what to do next?”.
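The sense-and-decide loop described above can be caricatured in a few lines. This is purely illustrative, not real automotive code: the field names, thresholds, and actions are all invented, and real vehicles fuse many more streams with learned models rather than hand-written rules.

```python
# An illustrative sketch of the car's decision loop: each tick, take a fused
# snapshot of sensor data and answer "what should I do next?".
# All field names, thresholds, and actions here are hypothetical.
def decide(state):
    """Pick the next action from a snapshot of fused sensor data."""
    if state["obstacle_distance_m"] < 10:
        return "brake"
    if state["road_wet"] and state["speed_kmh"] > 80:
        return "slow_down"
    return "maintain_speed"

# One tick of the loop: a snapshot combining car, road, and weather data.
snapshot = {"speed_kmh": 95, "road_wet": True, "obstacle_distance_m": 120}
print(decide(snapshot))  # → "slow_down"
```

The point of the sketch is the framing, not the rules: once driving is expressed as “given this snapshot, choose an action”, the hand-written conditions can be replaced by models trained on past driving data, which is exactly the substitution the AI-powered car makes.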

Take the High Road

To understand what is happening today we must take a more holistic view of digital transformation powered by AI. Don’t be too distracted by the AI sideshow now taking place. The deeper revolution we see happening is a combination of digital technologies, advanced algorithms, and reimagined experiences. A key component of this is AI. The current reality and future path of AI are embedded in the fabric of how systems can use data more effectively for decision making to bring intelligence to our everyday activities. This is the digital transformation that will affect all our lives.