Digital Economy Dispatch #101 -- The Past, Present, and Future of AI
16th October 2022
I’m old enough to remember all this from the first time around. Even then, in my days as a computer science undergraduate, some people were convinced that the age of procedural programming was over. We could forget FORTRAN, consign COBOL to the bin, junk that Java code, put Pascal in the past, and never speak of the complexities of C and C++ again. The answer was declarative programming. We all needed to be fluent in Prolog and everything would be fine.
The logic behind such sentiment was sound. With procedural languages, you tell a story to the computer about what you want it to do. You lead it carefully by the hand from one step to the next. Data is explicitly described in great detail, and you instruct the computer how to bring all the might of the microprocessor to bear in manipulating it. The problem, however, is that this only succeeds if you’re able to define everything you want it to do in advance. The requirements must be fully specified. Data structures described down to the last attribute. And all the activities of the computer choreographed in a complex arrangement of interactions. It’s laborious, complex work requiring an engineering mindset. Software engineering, in fact.
In contrast, declarative approaches offer a different vision. What if you could describe what you’re looking for, define a few rules of the road, and set the computer off to find a solution? You spend your time thinking about the problem domain and leave the computer to learn enough to track its way to the answer. En route, by trial and error, it uncovers a vast network of relationships, pathways, and dependencies. These are recorded in ways that mean that the next time it faces a similar situation, it doesn’t need to start from scratch to work things out. It can use earlier lessons to pick up where it left off.
The excitement of applying this declarative approach is that it finds its own path to solve problems. By trying every possible combination of legitimate options, it stumbles its way forward to find answers that you and I would never dream of. And in so doing it can be said to be exhibiting a new form of intelligence: Artificial Intelligence.
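To make this concrete, here is a minimal Python sketch of the declarative style. The regions, colours, and rule are invented purely for illustration: we state the facts and one rule, then leave the machine to try every legitimate combination.

```python
from itertools import product

# Facts: four regions, which pairs share a border, and the available colours.
# (All invented purely for this illustration.)
regions = ["A", "B", "C", "D"]
borders = [("A", "B"), ("A", "C"), ("B", "C"), ("C", "D")]
colours = ["red", "green", "blue"]

# Rule: a colouring is legitimate if no two bordering regions share a colour.
def legitimate(assignment):
    return all(assignment[x] != assignment[y] for x, y in borders)

# Search: try every combination of colours and keep the first legitimate one.
for combo in product(colours, repeat=len(regions)):
    assignment = dict(zip(regions, combo))
    if legitimate(assignment):
        print(assignment)  # {'A': 'red', 'B': 'green', 'C': 'blue', 'D': 'red'}
        break
```

Nothing here tells the computer how to find the answer; the program states what a solution looks like and lets the search do the rest.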
Whoa Nelly!
For a while this excitement was contagious. These systems learned and adapted the more they were used. Their behaviour could be said to mimic how humans solve problems. Widespread debates considered the pros and cons of the new AI era. We’d cracked the code on Alan Turing’s famous “Turing test” and nothing would ever be the same again. But we were wrong.
Unfortunately, in many cases it turned out that all we’d done was to move the problem from one place to another. The challenge we had been facing with procedural approaches was the effort required to understand complex requirements in every detail and the coordination it took to manage the realization of an effective solution. This was made more difficult over time as additional effort was needed to maintain its relevance when errors were discovered, or as the operating environment evolved.
In declarative approaches we had a different concern. This way of working only came into its own if we could do two things: Unambiguously describe the goals and outcome of the system; and define the rules governing a meaningful solution. Hence, finding the best move in a game like chess or Go is ideal. Some forms of simulation and prediction also saw great benefits from using these techniques when regular patterns of behaviour could be described and mimicked, or anomalies highlighted for special attention.
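Game playing illustrates why it is such a good fit: the goal and the legal moves can be stated unambiguously, leaving a search procedure to find the best move on its own. Here is a minimal, illustrative sketch of that idea, a minimax search over noughts and crosses as a toy stand-in for chess or Go:

```python
# All eight winning lines on a 3x3 board, indexed 0..8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
        (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

def winner(board):
    for a, b, c in WINS:
        if board[a] and board[a] == board[b] == board[c]:
            return board[a]
    return None

def minimax(board, player):
    """Return (score, move) from player's perspective: +1 win, -1 loss, 0 draw."""
    w = winner(board)
    if w:
        return (1 if w == player else -1), None
    moves = [i for i, cell in enumerate(board) if cell is None]
    if not moves:
        return 0, None  # board full: a draw
    best = (-2, None)
    opponent = "O" if player == "X" else "X"
    for m in moves:
        board[m] = player
        score, _ = minimax(board, opponent)  # score from the opponent's view
        board[m] = None
        if -score > best[0]:
            best = (-score, m)
    return best

# X to move on an empty board: perfect play from both sides is a draw (score 0).
print(minimax([None] * 9, "X"))  # (0, 0)
```

The rules are stated once; the best move is discovered, not programmed.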
But for everything else, it fell short. Instead, most computer applications could best be described, taught, and realized as large data management and manipulation efforts expressed using familiar techniques borrowed from the manufacturing and construction industries. Hence, building software is primarily viewed as a scientific exploration realized through engineering endeavour. Despite the best intentions of some, the field struggles to see software development as an intellectually creative activity where solutions are discovered by solving puzzles in new and novel ways.
As a result, expectations for the impact and influence of AI had spiralled far beyond the reality of what could be delivered. Unsurprisingly, interest and funding for many AI initiatives began to dry up. The subsequent “AI winter” saw so much antagonism toward the term that many efforts to improve practices and build new solutions were marginalized, renamed to make them more acceptable to funders, or simply banished to the fringes of the research world.
Old Dog, New Tricks
Now skip forward several decades and we find ourselves in a new era of excitement around AI. The past few years have seen massive growth in use of the term and wide-scale application of AI-based techniques across many aspects of business and society. Also, and perhaps more significantly, AI is now big business. Why?
Undoubtedly, things have moved on. The capabilities of computer hardware have changed beyond recognition. Everyday home computers now measure their storage in terabytes and CPU speed in billions of floating-point instructions per second. Programming languages such as Python offer the ease of procedural programming alongside extensive libraries of sophisticated data analytics capabilities, from Google’s TensorFlow to the wealth of open source projects hosted on GitHub. These are backed by readily available cloud-based execution platforms from AWS, Google, Microsoft, IBM, and others.
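As a rough illustration of that accessibility (a toy sketch on made-up random data, not a meaningful model), TensorFlow’s Keras API can define and train a small neural network in a handful of lines:

```python
import numpy as np
import tensorflow as tf

# Made-up data: 1,000 random feature vectors with random binary labels.
X = np.random.rand(1000, 10)
y = np.random.randint(0, 2, size=1000)

# A small neural network, defined, trained, and queried in a few lines.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(10,)),
    tf.keras.layers.Dense(32, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X, y, epochs=5, batch_size=32, verbose=0)
print(model.predict(X[:1]))  # estimated probability the first example is labelled 1
```

A generation ago, this would have meant months of specialist coding; today it runs on an ordinary laptop or a rented cloud machine.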
This is a major shift to a new digital era, delivering many important advances across broad areas of technology, working practices, and society. Several of these have combined to produce an environment in which AI can now flourish. Many great articles and books have been written on how our understanding of AI has evolved in recent years, how business is beginning to apply AI, and the techniques that are now being used to make AI usable and accessible to many more people. All of these point the way to why AI is now back in fashion. They outline the reasons that what we are seeing with AI today is different from before, and why it is having more practical impact this time around.
Without repeating all these arguments, several areas are worth highlighting in relation to the failed AI experiences of the past:
The scalability of AI solutions has improved enormously, largely due to much greater access to data. Using a variety of digital technologies, vast amounts of digital data are now generated across a range of devices embedded in homes, workplaces, factories, cities, and elsewhere. We can store, manage, and manipulate that data using widely available cloud infrastructure. Furthermore, investments in robust communications solutions mean that it is technically feasible and financially viable to copy, share, combine, and move this data around the globe.
The scope of AI applications being created has increased. From initial high-profile demonstrations in game playing, the past few years have seen an explosion of new computing techniques that have led to algorithms being defined as the basis for many more kinds of solutions. With advances in areas such as deep learning and neural networks, we have broadened the range of computational approaches, allowing us to redefine problems away from traditional perspectives requiring large-scale engineering efforts and toward lighter-weight predictive approaches that explore many possible solutions in parallel to come to the best way forward. Extraordinary advances in computer hardware in recent years make these computationally intense algorithms feasible.
The skills required to deliver AI applications are more available than ever before. Driven by increasing demand, there has been a lot of emphasis on building the pipeline of new AI-trained workers. With the support of significant government funding, re-training schemes for existing workers have accompanied the creation of a new generation of data scientists, computer scientists, and others well versed in the fundamentals of AI.
The societal norms surrounding AI have evolved to be more accepting of technology-driven decision making. Today, large parts of our society have become accustomed to the influence and impact of digital technology. A key effect of the digital transformation we have seen in recent years has been a significant shift in attitudes about the role of digital solutions in how we conduct our lives. This has been supported by changes in many areas of government, in legal systems, and in every other aspect of societal infrastructure.
Despite previous disappointments with the adoption of AI, industry leaders such as Jeff Bezos now talk about us living in a “golden age” for AI. It is so fundamental to today’s digital transformation that the capabilities it brings have become hidden in the fabric of many of the products and services that we use every day.
Repeating Mistakes of the Past
Yet, once again, we need caution. In a scene familiar from the first time round, we now see ever-expanding expectations for the impact of AI. It seems that not a day goes by without another declaration about how our world will be redefined by the boundless effects of this technology. However, while there is much to be excited about, we are also seeing some worrying signs.
The first challenge is that it is becoming more difficult to define what is and what isn’t AI. The phrase is now so overused that it has lost all meaning.
For some people, the answer is to place current efforts at AI in quite a narrow box. In particular, this view rejects the idea that AI is anywhere close to replacing human-based decision making or supporting unsupervised activities in high-risk scenarios. Rather, we need to accept that most of what is now described as AI is based on rather straightforward algorithms that detect patterns in data (a toy illustration of this follows the three points below). Most of the time these systems avoid any attempt to understand the broader context in which data is used and perform minimal analysis on the decisions derived from using the data.
Second, the popular press tends to focus on how AI will replace humans by mimicking human intelligence and demonstrating human-like characteristics. What is clear from currently available AI solutions is that they are not ready to assume human qualities that emphasize empathy, ethics, and morality. Despite massive investment, we struggle to see how AI can be placed in positions where decision making is ambiguous and filled with nuance. For example, some believe we are still a long way from significant AI deployment in areas such as autonomous driving.
Third, we are recognizing that the task of designing and deploying AI systems shares many characteristics with traditional software engineering projects. They require managed teams creating complex solutions that will be in use for many years. Significant issues must be addressed to ensure those systems not only meet the technical needs of the problem domain but also fit into the volatile and uncertain context in which they operate. Consequently, we are seeing expensive failures in building AI solutions, and a raft of practical techniques is needed to manage AI projects and ensure successful delivery.
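Returning to the first of these points, here is a toy sketch (data and threshold invented for illustration) of what much of today’s “AI” amounts to in practice: detecting a statistical pattern in data with no grasp of the wider context.

```python
import numpy as np

# "Learn" from historical readings: here, 10,000 made-up sensor values.
history = np.random.normal(loc=100.0, scale=5.0, size=10_000)
mean, std = history.mean(), history.std()

# "AI": flag any reading more than three standard deviations from the mean.
# No model of the broader context, no analysis of the decision: just a pattern.
def is_anomaly(reading, threshold=3.0):
    return abs(reading - mean) / std > threshold

print(is_anomaly(101.2))  # False: within normal variation
print(is_anomaly(140.0))  # True: far outside the learned pattern
```

Useful, certainly, but a long way from anything resembling human judgement.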
The Long Road Ahead
In recent years we have come a long way in raising the importance of AI and demonstrating the insights it can bring. Undoubtedly the successes we’re experiencing are making a great contribution to digital transformation in many business domains. Yet, some of us with long memories remember how previous generations of AI failed to meet expectations and deliver impact in key areas. Reflecting on those experiences will help to ensure we focus attention on where AI will be able to deliver value today and overcome barriers to drive further success in the future.
Recognizing the challenges is an important step in the acceptance of AI. By focusing on how they will be overcome, we will be able to close the gap between the vision for AI and the reality of the systems being delivered today. More than that, we have the opportunity to demonstrate that AI is on the path to maturity and can live up to the high expectations being defined for this new digital era.
Digital Economy Tidbits
The Battle for the Soul of the Web. Link.
Recent discussions about the future of the internet highlight that many of those deeply involved in its creation are very unhappy about how it has all turned out. They want to start again. And to begin with a distributed infrastructure that refuses any form of centralized control, direct manipulation by state-owned agencies, and exploitation of its users by a handful of BigTech providers. What will this look like? Will they succeed?
In a recent webinar series, the Internet Archive defined the decentralized-web movement as an effort to break apart “all the layers” of the current online experience. It’s helpful to think about this idea in terms of what it opposes: Meta, for example, centralizes messaging, media sharing, data collection, and much else, so users are subject to its content-moderation policies and can’t help but submit their information to its sprawling marketing apparatus. Amazon owns so much of the infrastructure that the internet runs on that you could hardly function without it. The DWeb movement is interested in subverting this status quo through tools that would give individuals greater control over their online identities and information.