Digital Economy Dispatch #204 -- Remembering the Human Side of Software and AI

6th October 2024

Software has changed a lot over the years. My own first experiences consisted of creating large stacks of COBOL statements on punch cards, writing FORTRAN programs on coding sheets for others to type into a computer, and figuring out how to move individual bytes of data between registers in machine language on computers made by long-gone manufacturers. Ancient relics in comparison with today's software and systems. Yet, while these experiences may be a million miles from today's coding in Python, R, Scala, and other modern languages, much of the task of delivering robust, high-quality software remains the same.

Beyond the Code

As I reflect on the evolution from those early days to our current era of cloud computing and AI, one key lesson that I have learned is that successful software development has never been solely about writing code. The real challenge, both then and now, lies in understanding people.

One of the earliest large-scale software projects I supported was a major upgrade to the en route air traffic control systems in the USA. Teams had spent months (and millions of dollars) documenting requirements, designing data management schemes, and perfecting algorithms, only to find that they had completely misunderstood how air traffic controllers would interact with the system. The design was elegant, the code well written, the performance good, but none of that mattered because the teams had failed to grasp the human element: how air traffic controllers actually perform their tasks. This lesson has only grown more crucial as software has become deeply woven into the fabric of all our lives.

The Context is the Code

Despite our best efforts to manage software development and delivery processes, the software we create doesn't exist in a vacuum of controlled environments and predictable inputs. It's out there in the messy, complicated real world, making decisions that affect people's lives -- often in profound ways. The latest AI solutions are no different. For instance, we are now more aware that systems that analyse personal data to predict future behaviour and make recommendations are not just exercises in optimizing data sets -- they require a deep understanding of the complex lives of the people they affect, each with a wide variety of circumstances, needs, and constraints.

These are concerns we see today in the adoption of AI in many areas, particularly domains such as healthcare and other public services. It is easy for technology-driven projects in these areas to focus all their attention on a streamlined, efficient system for managing patient data. Too often, however, too little care is taken to address the varying comfort levels different age groups have with technology, or the emotional state of people dealing with overwhelming health issues. The technical implementation may be flawless, but the understanding of human behaviour is lacking. Examining major UK government initiatives such as Universal Credit and its use of AI provides insight into just how challenging this can be.

Ethics in the Age of Automation

One of the obvious issues we face is that as our software systems have grown more powerful, they've also assumed a more substantial role in every aspect of our lives. A good example is in areas of HR such as recruitment, performance management, and policy enforcement.

I was recently on a panel discussing a project that was introducing AI-based automation into key aspects of the hiring and promotion processes of a major company. The system applied some very novel AI techniques, and the technology delivery team was excited by its potential to deliver efficiencies to the organization. However, I couldn't help thinking about the people on the other end of those decisions. How would they feel about an algorithm determining their job prospects? What biases might we be inadvertently encoding into the system? What approaches had been taken to ensure fairness for people of different backgrounds, cultures, and experiences?

Maintaining an explicit focus on the transparency and explainability of AI-based systems is central to meeting these needs. Organizations must make the decision-making process as clear as possible, not just for regulatory compliance, but because people deserve to understand the factors affecting their lives. On top of this, clear ways for individuals to appeal automated decisions must exist, recognizing that no system, however sophisticated, can fully capture the variety and nuance of human circumstances, especially as they evolve.
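To make this concrete, one simple design discipline is to have every automated decision carry the factors that produced it and an explicit route to human review. The sketch below is purely illustrative -- the class, field names, and factor weights are hypothetical, not drawn from any particular system:

```python
from dataclasses import dataclass, field

@dataclass
class AutomatedDecision:
    """A decision record that keeps the evidence needed to explain and appeal it."""
    subject_id: str
    outcome: str                                  # e.g. "shortlisted" or "not shortlisted"
    factors: dict = field(default_factory=dict)   # factor name -> contribution to the outcome
    appealed: bool = False

    def explain(self) -> str:
        """Summarise the contributing factors in plain language."""
        parts = [f"{name}: {weight:+.2f}" for name, weight in self.factors.items()]
        return f"Outcome '{self.outcome}' based on " + ", ".join(parts)

    def appeal(self) -> None:
        """Flag the decision for human review rather than overturning it automatically."""
        self.appealed = True

# Hypothetical usage: the record travels with the decision, so an explanation
# and an appeal route are always available to the person affected.
decision = AutomatedDecision(
    subject_id="candidate-042",
    outcome="not shortlisted",
    factors={"years_experience": -0.40, "skills_match": +0.15},
)
print(decision.explain())
decision.appeal()
```

The point is not the specific fields but the discipline: if the explanation and the appeal hook are part of the decision record itself, transparency is not an afterthought bolted on for compliance.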

Designing for an Unknown Future

One of the most profound changes I've witnessed over the past few years is the acceleration of change itself. In the early days, we could reasonably expect a system to operate in a stable context for years, and many of the design and implementation techniques being promoted were aimed at establishing robust, fault-tolerant software architectures. Now the landscape can shift dramatically in weeks or even days, sometimes faster. Colleagues discussing the use of AI and drones in the Ukraine conflict suggested that some of these systems are usable for only a few hours before they are countered and must be completely revised!

This reality has transformed how we approach many aspects of software design. I'm reminded of the many hours I spent in the 1990s reviewing and revising designs in modelling notations such as the Unified Modeling Language (UML). The goal was to predict every conceivable usage scenario and so provide a complete view of the system. The weakness of this approach, of course, was that it often produced a rigid design, focused exclusively on those scenarios, that couldn't adapt easily to new contexts.

Such experiences have taught me the value of building for change. Now, when asked to review a project, I start by thinking about how it might need to evolve. What are the operational assumptions and design constraints on which it was conceived? Can it gracefully handle scenarios we haven't anticipated? To what extent can it be adjusted without requiring a complete overhaul? These aren't just technical considerations – they're about creating systems that can continue to serve people effectively as their needs and contexts change.
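One way to make this "building for change" concrete is to isolate the assumptions most likely to shift behind a narrow interface, so that a new policy can be swapped in without overhauling the stable core. A minimal sketch, assuming a hypothetical eligibility-assessment example (the rules, thresholds, and data are invented for illustration):

```python
from typing import Callable

# The part most likely to change -- the eligibility rules -- is isolated behind
# a simple callable interface: given an applicant record, return True if they qualify.
Policy = Callable[[dict], bool]

def policy_2024(applicant: dict) -> bool:
    """Original (hypothetical) rules: a simple income threshold."""
    return applicant["income"] < 30000

def policy_2025(applicant: dict) -> bool:
    """Revised (hypothetical) rules: higher threshold, household size considered."""
    return applicant["income"] < 35000 or applicant["household_size"] >= 4

def assess(applicants: list[dict], policy: Policy) -> list[dict]:
    """The stable core: unchanged no matter how the policy evolves."""
    return [a for a in applicants if policy(a)]

applicants = [
    {"name": "A", "income": 28000, "household_size": 2},
    {"name": "B", "income": 33000, "household_size": 5},
]
print([a["name"] for a in assess(applicants, policy_2024)])  # only A qualifies
print([a["name"] for a in assess(applicants, policy_2025)])  # both A and B qualify
```

When the operational assumptions change, only the policy function is replaced; the rest of the system, its tests, and its interfaces carry on unmodified. That is the kind of graceful adjustment the questions above are probing for.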

With advances in AI, this change-first approach is even more critical. As systems become more autonomous, human oversight becomes more crucial, not less. This might seem counterintuitive, but automation doesn't eliminate the need for human judgment -- it transforms it. There are many circumstances in critical systems where automated decisions have real-world consequences, and human judgment must remain part of the loop.

Looking Forward, Grounded in Humanity

As we look ahead to the future of software development, there is cause for much excitement, but also a need for care. The technical challenges we can now tackle would have seemed like science fiction in the days of punch cards and coding sheets. But the fundamental challenge of software delivery remains the same: creating systems that serve human needs effectively, ethically, and adaptably.

While it is easy in today’s AI era to focus on the new programming tools and advanced algorithms, the most important skill isn't mastering the latest framework or programming language. It is developing the ability to see beyond the code to the humans who will be affected by it. As software increasingly shapes our world, we have a responsibility to ensure it does so in a way that respects human values, promotes fairness, and adapts to serve evolving societal needs.

In the end, despite all the technological advances, the most important principle remains unchanged: software exists to serve people. As we push the boundaries of what's possible with AI and automation, let's ensure we never lose sight of the human element at the heart of everything we build. The best code in the world means nothing if it doesn't make someone's life better.