
Digital Economy Dispatch #045 -- Would you like some AI with that? The pervasive nature of AI and the basis for its intelligence

Digital Economy Dispatch #045 -- 18th July 2021

Would you like some AI with that? The Pervasive Nature of AI and the Basis for its Intelligence

We live in a confusing world. Technologically speaking, we have entered a period where advances in so-called “smart” digital products and services are all around us. Often they are out in the open where they can be seen, such as your bank’s mobile app or the devices you buy for the home to play music and control the heating. Increasingly, however, these capabilities are buried inside products and services you have been using for some time: your TV and your washing machine, for instance.

Much of today’s digital transformation involves data-driven innovation bringing predictive insights and automation of tasks we have had to do for ourselves. It is changing our relationship with the world around us and challenging our understanding of what is based on human judgement and decision making, and what is not. Let’s take a simple example. How would you know if I had written this blog article myself or if it was generated by a fancy bit of software? Does it matter? Would that change your view of what you’re reading and how you respond to it?

These days it is hard to escape the rhetoric and promise of what Bill Gates has called “the new golden age of computer science”. Whether the topic is cloud computing, data science, digital twins, or even computer architecture, the story seems to be the same: You ain’t seen nothing yet! Nowhere is this expectation higher than in the area of Artificial Intelligence (AI).

The hype surrounding AI has never been more intense. The possibilities raised over the past 50 years are now being realized by a convergence of advances in data analysis, access to new digital sources of data, high speed connectivity, and raw computing power. Seen together, they are enabling rapid advances in how digital solutions are designed and delivered.

The interesting challenge here is to bring this combination of capabilities together to provide what we might view as “intelligence”: that is, the creation of algorithms that recognize situations and solve problems by learning from earlier experiences and applying that knowledge in unfamiliar contexts. In practice, what we are seeing today is largely knowledge management techniques that use the brute-force application of very large computing resources to examine enormous numbers of possibilities and variations. By “training” AI systems with a lot of data about known situations, it is possible to compare a new situation to what has been seen before and come to a set of likely conclusions.
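
To make that idea concrete, here is a minimal sketch in Python of the “compare the new situation to what has been seen before” step, written as a simple nearest-neighbour lookup over human-tagged examples. The feature vectors and labels are invented for illustration; real systems are vastly larger and more sophisticated, but the underlying move is similar.

```python
# A toy illustration of "training" as pattern matching: classify a new
# situation by finding the most similar previously tagged example.
# The feature vectors and labels below are invented for illustration only.
from math import dist

# Previously seen situations, painstakingly tagged by humans.
tagged_examples = [
    ((0.9, 0.1), "urgent"),
    ((0.8, 0.3), "urgent"),
    ((0.1, 0.9), "routine"),
    ((0.2, 0.7), "routine"),
]

def classify(new_situation):
    """Return the tag of the closest example seen during 'training'."""
    nearest = min(tagged_examples, key=lambda ex: dist(ex[0], new_situation))
    return nearest[1]

print(classify((0.85, 0.2)))  # -> "urgent", because it resembles past cases
```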

So, if the primary technique in use is high-speed data crunching, is AI really any more than pattern matching fed by huge amounts of data previously painstakingly tagged by humans? And does that really amount to what many would view as “intelligence”? For many people, these are troubling questions.

We can understand more about these issues if we consider a common example of AI that is now part of all of our lives: chatbots. These are interactive systems that engage individuals in a conversation to determine their needs and respond with an answer or a follow-up action. When we look at the “intelligence” behind the scenes, what we find is quite a simple architecture of data consumption, analysis, and response. This is nicely summarized by Peter Stratton in his look at how Google’s AI chatbot, Duplex, works to help you book a table at a restaurant. Not only are these systems very narrow in focus (typically aimed at rather constrained actions such as booking an appointment, returning a damaged package, or finding a required document), they are also heavily dependent on the previous data and interactions they have encountered. Their response is a direct reflection of what they have seen before, with little or no adjustment for individual context, cultural background, sensitivity to environment, and so on.
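
As a purely illustrative sketch of that consume-analyse-respond shape (the intents, keywords, and canned replies below are hypothetical, and a system such as Duplex is far more elaborate), a narrowly focused bot can amount to little more than matching the user’s words against phrasings it has seen before:

```python
# A toy chatbot loop: consume an utterance, analyse it by matching keywords
# against known intents, and respond from a fixed repertoire.
# Intents, keywords, and replies are hypothetical illustrations only.

INTENTS = {
    "book_table": {"book", "table", "reservation", "tonight"},
    "return_item": {"return", "refund", "damaged", "package"},
}

RESPONSES = {
    "book_table": "Sure - for how many people, and at what time?",
    "return_item": "Sorry to hear that. Can you give me your order number?",
    None: "Sorry, I can only help with bookings and returns.",
}

def respond(utterance: str) -> str:
    words = set(utterance.lower().split())
    # Analysis: pick the intent whose keywords overlap most with the input.
    best_intent = max(INTENTS, key=lambda intent: len(INTENTS[intent] & words))
    if not INTENTS[best_intent] & words:
        best_intent = None  # nothing recognisable, so fall back to a default
    return RESPONSES[best_intent]

print(respond("I'd like to book a table for tonight"))
```

The point is not the code itself, but how directly the bot’s answer depends on what it has been given in advance.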

Furthermore, as I have previously discussed, some digital technology experts such as Kate Crawford look at AI with a very sceptical eye. She believes that the combination of massive, curated data stores plus hardcore processing is founded on very dubious ethical and moral thinking. Her analysis highlights why this approach leads to exploitation of data sources, low wage abuse of workers involved with data tagging, bias in the interpretation of the data due to political pressure, and consolidation of power in the hands of a small number of data hoarders.

Even so, the attraction is that AI-based solutions are beginning to address a very broad set of applications, bringing value to a wide range of stakeholders. Kathleen Walch has classified these into seven styles of AI solution that we typically see today:

  • Hyperpersonalization – using AI to develop a profile of each individual, and then having that profile evolve and adapt over time based on activities being monitored.

  • Autonomous systems – combinations of hardware and software that accomplish a task, interact with their surroundings, and achieve a goal with minimal human involvement.

  • Predictive analytics and decision support – using AI to understand how past or existing behaviors can help predict future outcomes or help humans make decisions about future outcomes.

  • Conversational AI – supporting interaction between machines and humans across a variety of media including voice, text, and images.

  • Exception management – applying AI to seek patterns in data sources, learn about connections between data points to match known patterns, and search for anomalies in data.

  • Recognition – using AI to identify objects and features in images, video, audio, text, or other unstructured data.

  • Goal-driven activity – using AI to learn rules and then apply those rules to find ways to achieve stated goals in areas such as strategy, role playing, gaming, and other activities.

In isolation and in combination, these patterns of AI use enable us to address many different problem areas. Much of what we do today is to reframe the challenges we face so that they are amenable to these AI patterns. For example, driving, supported by a multitude of sensors and cameras, has been redefined as a predictive problem of working out what to do next: several data streams describing the state of the car, the driver, the road ahead, the weather, the road conditions, and so on are processed and analyzed in real time. Every fraction of a second the car’s digital management systems are asking, “given all the data I am receiving, what are the optimal things to do now?”.
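
A rough sketch of that repeated decision loop might look like the following; the sensor readings, thresholds, and actions are entirely hypothetical and are only meant to show the shape of the problem, not how any real vehicle system works.

```python
# A toy version of the car's "what should I do now?" loop: fuse a snapshot
# of the incoming data streams and turn it into an action. All readings,
# thresholds, and actions are invented for illustration.
from dataclasses import dataclass

@dataclass
class SensorSnapshot:
    distance_to_car_ahead_m: float
    own_speed_kmh: float
    road_is_wet: bool

def decide(snapshot: SensorSnapshot) -> str:
    """Turn one fused snapshot of the data streams into an action."""
    # Leave a longer gap when the road is wet.
    safe_gap_m = snapshot.own_speed_kmh * (0.8 if snapshot.road_is_wet else 0.5)
    return "brake" if snapshot.distance_to_car_ahead_m < safe_gap_m else "maintain speed"

# Every fraction of a second, a fresh snapshot is evaluated.
print(decide(SensorSnapshot(distance_to_car_ahead_m=20, own_speed_kmh=80, road_is_wet=True)))
```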

Which brings us back to my original question: Why does this matter? Take a look at the example below of an “AI-powered writing tool that helps you create high-quality content” provided by the Rytr app, one of several in this domain.

I went to Rytr, selected its “candid” style, and asked it to generate text on “why is digital transformation difficult for large organizations?”. This is what it produced:

Digital transformation is a new way of thinking and acting for companies that want to innovate and adapt to today’s competitive environment. It’s a top priority for CEOs, but many companies are running up against obstacles in their own backyard.

Digital transformation was traditionally seen as an IT project that involved upgrading technology infrastructure. But now we know that digital transformation is much more than that: it requires transforming business processes, production methods, corporate culture and human resource practices.

Digital transformation is difficult for large organizations because they are often stuck in the past. These organizations didn’t invest enough in digital technology adoption and don’t have the culture, the collaboration, or the leadership to make it happen.

What are some obstacles large organizations face when trying to digitally transform?

Companies need to be willing to get rid of the old ways of thinking and try new techniques. For example, organizations should stop thinking about what they are trying to sell because this is not what people want. Customers are more interested in the experience they will have with a product.

Companies also need to think about how they can use technology as an opportunity instead of a threat. They should work on creating and implementing new training programs that focus on artificial intelligence, automation, and virtual reality systems. This is difficult for large organizations because it requires them to change how they do things and look at the world in a different way.

I have to admit: That’s surprisingly good! But we’re always left with a few nagging questions. What kind of “intelligence” was used to construct this description? Where did this text originate? Who decided that such descriptions and definitions are meaningful, appropriate, fair, and unbiased?

It makes you wonder how much of what you see and read online is created in this way rather than written, painful word by painful word, sitting on the sofa at 1am. Or perhaps that’s just me.

Digital Economy Tidbits

The world’s first 3D printed bridge opens in Amsterdam. Link.

One of the key digital revolutions of the past decade is 3D printing. It is starting to have more and more of an impact, to the extent that major artifacts can now be built using this approach.

The first ever 3D-printed steel bridge has opened in Amsterdam, the Netherlands. It was created by robotic arms using welding torches to deposit the structure of the bridge layer by layer, and is made of 4500 kilograms of stainless steel.

Interestingly, this structure is now also directly feeding data into a “digital twin”: a model that is updated in real time with data from the physical bridge.

More than a dozen sensors attached to the bridge after the printing was completed will monitor strain, movement, vibration and temperature across the structure as people pass over it and the weather changes. This data will be fed into a digital model of the bridge.
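
As a rough sketch of that idea (the sensor identifiers, reading types, and fields below are hypothetical, not the bridge’s actual data model), a digital twin is essentially a model object whose state is continually updated from incoming sensor readings:

```python
# A toy digital twin: a model whose state is updated as sensor readings
# arrive from the physical structure. All names and values are invented
# for illustration; the real bridge's data model will differ.
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class BridgeTwin:
    strain: Dict[str, float] = field(default_factory=dict)
    temperature_c: Optional[float] = None

    def ingest(self, sensor_id: str, kind: str, value: float) -> None:
        """Update the model in (near) real time from one sensor reading."""
        if kind == "strain":
            self.strain[sensor_id] = value
        elif kind == "temperature":
            self.temperature_c = value

twin = BridgeTwin()
twin.ingest("S-01", "strain", 0.0004)      # strain as someone crosses
twin.ingest("T-07", "temperature", 18.5)   # ambient temperature change
print(twin)
```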

The difference between teaching management and teaching entrepreneurship. Link.

This is a fascinating discussion by Steve Blank about the way he shifted his thinking about teaching entrepreneurial skills at business schools and the use of experiential approaches.

After a few years of trial and error in front of a lot of students, I realized that the replacement for the case method was not better cases written for startups and that the replacement for business plans was not how to write better business plans and pitch decks. (I did both!). Instead, we needed a new management stack for company creation.

He places his approach to entrepreneurial teaching in the context of a much broader set of approaches.

The Lean Launchpad is deliberately a Flipped Classroom approach where students are made responsible for designing and executing their own learning path:

Inside the classroom, we deliberately trade off lecture time for student/teaching team interaction. The class is run using a “flipped classroom.” Instead of lecturing about the basics during class time, we assign the core lectures, recorded as video clips, as homework.

The result is a different learning approach that changes the way students work, think, and solve problems.

This is now the basis for the courses on entrepreneurship at Stanford.

While the Lean LaunchPad/I-Corps curriculum was a revolutionary break with the past, it’s not the end. In the last decade innumerable variants have emerged. The class we teach at Stanford has continued to evolve. Better versions from others will appear. And one day another revolutionary break will take us to the next level.