
Digital Economy Dispatch #046 -- Why Adopting a NoCode/LowCode Approach to Your Digital Transformation Might Be a Really Bad Idea

Digital Economy Dispatch #046 -- 25th July 2021

I’m not really sure how I ended up as a computer programmer. Growing up in a working-class area of Liverpool in the 1970s and 80s I had not even seen a computer. At Anfield Comprehensive School there were no classes teaching programming and no access to these new-fangled machines. Yet, there I was at the age of 17 heading off to university to study for a degree in computer science. The spark to set me off on this path must have come from somewhere, but I’m blowed if I can recall it.

Three years later I was writing accounting and stock control systems in BASIC-PLUS-2 on a VAX-11/780 computer for a small consulting company. I’d spent my time learning how computers work, exploring the structures and design elements of writing software, reviewing how computing was changing the business world, and applying my skills in programming languages as varied as COBOL, FORTRAN, Pascal, BASIC, and Prolog. I still had a lot to learn, but the foundation I had gained over that time ensured that I not only knew “how”, I also knew “why” computers worked the way they did.

Sounds like a lot of work. Surely, in creating software-based solutions, much of that can be bypassed? The rise of NoCode and LowCode approaches appears to reinforce this view. Software without tears. Some of its advocates even claim that “software engineering is facing a slow death”. I don’t think so. And to prove my point, we’re going to have to take a trip back in time. Ready for the ride?

Software Without Tears

The volatility we have faced over the past 18 months has taken a heavy toll on businesses as they try to adapt quickly to the changing needs of customers, a variety of restrictions on business operations, disrupted markets, and unpredictable regulatory shifts. This has required frequent, substantial changes to business and operational software. One of the many digital transformation accelerations we have seen in response has been the adoption of NoCode and LowCode approaches to software delivery.

The promise of NoCode/LowCode approaches is that business users with little or no understanding of software development can create their own solutions without writing a single line of code. Using visual notations and graphical user interfaces, the users of these tools can draw out their solutions as flows of actions that are converted into working systems. Sounds cool. But does it work?
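
To make that concrete, here is a minimal sketch in Python of the idea behind these tools, with step names and fields invented purely for illustration: the “program” is declarative data describing a flow, and a generic engine interprets it. Real platforms hide even this behind the visual editor.

```python
# Hypothetical sketch: a LowCode "program" is declarative data that a
# generic engine interprets. Step names and fields are invented here;
# real platforms wrap all of this in a visual, drag-and-drop editor.

flow = [
    {"step": "validate", "field": "email", "rule": "contains_at"},
    {"step": "store", "table": "subscribers"},
]

def run_flow(flow, record):
    """Interpret each declarative step against a record (illustrative only)."""
    for step in flow:
        if step["step"] == "validate":
            value = record[step["field"]]
            if step["rule"] == "contains_at" and "@" not in value:
                raise ValueError(f"invalid {step['field']}: {value}")
        elif step["step"] == "store":
            # A real platform would persist to a managed database here.
            print(f"storing {record} in table '{step['table']}'")
    return record

run_flow(flow, {"email": "ada@example.com"})
```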

There is no doubt that NoCode/LowCode can help an organization build solutions quickly. These approaches create a direct link between business analysts close to the end-user need and the deployed solution by avoiding costly and time-consuming parts of the software engineering lifecycle. My fear, however, is that many people are too focused on near-term concerns; without the context to understand the limitations of NoCode/LowCode, they are underestimating the longer-term impact on the quality, stability, and resilience of their deployed systems.

To understand and appreciate these issues we need to take a quick trip into the past.

Back to the Future

The earliest uses of computers were for scientific tasks, mathematical calculations, and statistical analysis. However, as businesses began to appreciate the value of rapid, accurate computation, the job of programming computers moved from the domain of the scientist to a new role: that of the computer programmer, supported by business analysts able to translate business needs into processes amenable to computer-based solutions. The deployment of the resulting software into mission-critical business environments demanded that these programs be adequately tested, maintained once in production, and subsequently evolved as new requirements emerged. In keeping with mid-20th century management practices, software tasks were structured as complex construction and maintenance processes, modelled and managed along similar lines to manufacturing and construction projects.

In spite of their many successes, software delivery organizations had a poor reputation throughout the second half of the 20th century: projects creating or deploying large software systems were all too frequently late and cost far more than originally anticipated. Large-scale development projects were run through compliance-driven processes, executed by extensive teams of people in a series of phased tasks whose progress was measured as incremental steps toward output-based delivery targets. This linear sequence of phases was designed to ensure that completeness and accountability could be verified throughout the project’s lifetime. Success in these environments relied on extensive planning, well-defined processes, and the reuse of predictable architectural patterns and components to ensure stability and predictability in operation.

These bureaucratic approaches were very successful for many styles of project development. They brought strong governance and rigorous consistency to demanding multi-dimensional programs delivering critical functionality on which organizations were increasingly relying for their operations.

However, these approaches also had their disadvantages, particularly in situations where the priority was to deliver solutions quickly in volatile or poorly understood contexts. Often, the rigid and bureaucratic overhead in such circumstances significantly reduced the effectiveness of the processes, and the software produced was costly, over-engineered, and inappropriate for the situation[i]. Furthermore, those involved in these projects increasingly began to see themselves less as creative solution designers, and more as mechanics in a software factory process.

In response, several distinct approaches to accelerating software delivery emerged. Of course, the most obvious way forward was to introduce more agility into the software development and delivery activities being undertaken. Tightly integrated teams prioritizing their work into short, time-boxed iterations introduced flexibility into the software process. The Agile Manifesto, published in 2001, embodied these directions and gave voice to the hopes and aspirations for a more responsive, customer-oriented approach to software delivery. It has subsequently spawned many associated techniques, tools, and technologies.

However, the more immediate path to speeding up software development was to focus on reuse rather than on faster ways to write new code. This approach sprang from the recognition that large parts of software development repeatedly follow similar patterns: take a user input on the screen and validate it, look up items from a database of objects, display a series of choices and allow a selection to be made, create an ordered list of items on a screen and allow a user to reorder them against a given criterion, and so on. By creating standardized templates for each of these patterns, a higher-order language for creating solutions could be defined, and solutions could be pieced together from these pre-built parts. These came to be called 4th Generation Languages (4GLs) because they were the next step on from machine languages (1st generation), assembly languages (2nd generation), and procedural programming languages (3rd generation).
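
To illustrate the premise, here is a minimal Python sketch of two of those recurring patterns captured as parameterized templates. The function names and signatures are my own invention, not those of any real 4GL.

```python
# Hypothetical sketch of the 4GL premise: recurring patterns become
# parameterized templates, and applications are assembled from them.

def validated_input(prompt, validate):
    """The 'take a user input and validate it' pattern."""
    while True:
        value = input(prompt)
        if validate(value):
            return value
        print("Invalid value, please try again.")

def choose_from(items, prompt="Select"):
    """The 'display a series of choices and allow a selection' pattern."""
    for i, item in enumerate(items, start=1):
        print(f"{i}. {item}")
    index = int(validated_input(
        f"{prompt} (1-{len(items)}): ",
        lambda v: v.isdigit() and 1 <= int(v) <= len(items)))
    return items[index - 1]
```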

Based on this simple premise, three different ways of working became popular.

The first approach focused on creating models in a modelling notation that acted both as the visual representation of the solution requirements and as the basis for generating working code. These model-driven development (MDD) styles came in many variations, each emphasizing characteristics suited to particular tasks depending on the visual notation and modelling formalism used.
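
As a rough illustration of the MDD idea, the sketch below uses a Python dict to stand in for a visual model and generates working code from it; the model schema is invented for the example.

```python
# Hypothetical sketch of model-driven development: a declarative model
# (a dict standing in for a visual diagram) is turned into working code
# by a generator.

model = {"entity": "Customer", "fields": ["name", "email", "balance"]}

def generate_class(model):
    """Emit Python source for the entity described by the model."""
    params = ", ".join(model["fields"])
    lines = [f"class {model['entity']}:",
             f"    def __init__(self, {params}):"]
    lines += [f"        self.{f} = {f}" for f in model["fields"]]
    return "\n".join(lines)

print(generate_class(model))  # prints a ready-to-run class definition
```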

The second, component-based development (CBD), gathered collections of well-constructed software into libraries of standardized components and devised ways to stitch those components together into more complete applications. In some cases, substantial components were developed as a framework to act as the primary controlling mechanism of a solution, with further components plugged in to complement it. Application development was seen as a process of framework completion.
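
Here is a minimal sketch of framework completion, with all names invented for illustration: the framework owns the control flow, and an application is built by plugging components into its extension points.

```python
# Hypothetical sketch of framework completion in component-based
# development: the framework is the primary controlling mechanism,
# and components fill its extension points.

class OrderFramework:
    """The framework owns the control flow (illustrative only)."""
    def __init__(self, pricing, notifier):
        self.pricing = pricing      # component: computes a total
        self.notifier = notifier    # component: reports the outcome

    def process(self, order):
        total = self.pricing(order)             # framework calls out...
        self.notifier(f"order total: {total}")  # ...to the components
        return total

# "Completing" the framework with two small components:
app = OrderFramework(pricing=lambda order: sum(order["items"]),
                     notifier=print)
app.process({"items": [10, 25, 7]})
```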

Finally, specialist domain knowledge was built into tools offering customized environments for constructing specific kinds of solutions. Computer-Aided Software Engineering (CASE) tools organized their capabilities around domain knowledge and associated rules (e.g., in aerospace and defence scenarios), created restricted modelling formalisms appropriate to the domain (e.g., in engineering disciplines), or provided large parts of the management infrastructure associated with the domain (e.g., the auditing and reporting capabilities essential to many business solutions). Families of similar solutions (sometimes called product lines) were created around standardized solution architectures appropriate to these domains.
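
As a rough sketch of the product-line idea, the fragment below specializes one standardized solution architecture per domain using declarative rules; the domains and rules are invented for illustration.

```python
# Hypothetical sketch of a product line: one standardized solution
# architecture, specialized per domain by declarative rules.

DOMAIN_RULES = {
    "aerospace": {"audit_trail": True, "redundancy": 3},
    "retail": {"audit_trail": False, "redundancy": 1},
}

def build_solution(domain):
    """Instantiate one member of the product family."""
    return {"domain": domain,
            "modules": ["reporting", "storage"],  # shared architecture
            **DOMAIN_RULES[domain]}               # domain specialization

print(build_solution("aerospace"))
```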

So Near Yet So Far

All of these approaches had their uses and were popular in the 1980s and 1990s. But ultimately, they all failed to have a substantial, sustainable impact beyond narrow niche situations. Why?

Three challenges faced these approaches and limited their success.

  1. Lack of control and limited transparency. The premise of these 4GL approaches is simple: fast, problem-oriented descriptions of a solution are made possible by abstracting away the underlying details of the computer. This is attractive because it speeds up software delivery. However, it also means that the details of how a solution is implemented are obscured and may be inaccessible. You rely on the tool vendor to get this right. Fine in good times when things go well; a nightmare when they don’t.

  2. Scarcity of talent. Some business analysts make good software creators. But in practice many do not. Finding the right software development skills is essential. Look for a new Java or Python programmer and you will have a wide choice. Try to employ experienced 4GL developers and you may be fishing in a very small pond.

  3. Inadequate lifecycle maintenance. Of course, we want to create new software capabilities quickly. Yet we know that much of the challenge comes after first deployment, when we need to fix errors, update features, extend to new usage models, and so on. The processes and tools required to do this are frequently lacking.

Be Careful Out There!

The push toward NoCode/LowCode approaches is an attempt to bring speed, flexibility, and increased customer responsiveness to the task of software delivery. It has many useful characteristics and should be considered an important tool to support any digital transformation strategy. But it is far from a panacea. To understand where and how to make it a success we need to look back to previous end-user software development experiences with 4GLs. There were successes. But only in very limited circumstances. It’s still important that we know what’s going on under the hood. 

Digital Economy Tidbits

Apple vs Facebook: How the war between the Silicon Valley tech giants is changing tech. Link.

I really enjoyed this article. It clearly lays out some of the biggest digital economy challenges of our time: Control of the internet, exploitation of personal data, how we view free speech online, and the way the Big Tech companies manipulate services and pricing for their own ends. How much this is really a personal battle between Mark Zuckerberg and Tim Cook is less clear to me.

Shock Treatment: Can the pandemic turn the NHS digital? Link.

More on covid and its implications for digital transformation in the public sector.

Spoiler alert: No.