Digital Economy Dispatch #259 -- Untangling the Myth of "Could" vs "Should" in AI Decision-Making

AI is blurring the lines in decision-making. As you evolve your AI strategy, if you still think AI handles the "could" while humans control the "should", you may be leading your organization astray.

When does AI assistance shift from being a helpful aid in human decision-making to becoming the invisible hand driving our choices? At what point does AI stop supporting our decisions and start making them on our behalf?

Many senior leaders I speak with act as if these lines are clearly drawn. They're confident in their statements about where human judgment begins and AI assistance ends. "We use AI for analysis," they say, "but we make the decisions." I'm not so sure. In fact, I'm increasingly convinced that this false certainty is becoming one of the biggest risks facing leaders in many organizations today.

Consider how decisions actually unfold in AI-augmented organizations. Say a strategy team asks AI to analyze market opportunities. Unless very carefully controlled, the AI doesn't just crunch numbers: it defines what counts as an "opportunity", weights different factors, and presents options within frameworks it has learned, been given, or simply invented. By the time three strategic choices reach the boardroom, thousands of micro-judgments have already been made about what's realistic, what's valuable, and what's worth considering. The humans making the "final decision" are choosing from a menu they didn't write, filtered in ways they'd struggle to describe, using criteria that nobody agreed on, based on an understanding of context that's both narrow and flawed.

Perhaps these limitations can be easily overcome if you're redesigning a marketing strategy or planning a product launch. The time and opportunities for human review may be clear and obvious. But what about AI use in more urgent, short-term decisions that are automated within hidden workflows, buried in the products and services you offer, or implicit in the vast array of everyday mission-critical actions that drive the organization?

Could vs Should

The comfortable conversation hasn't changed much in recent years. "AI will show us what we could do," the executive says confidently, "but humans will always decide what we should do." Heads nod around the table. The moral authority remains safely in human hands. The machines merely process; we decide. Really?

Of course, the intent is clear and reasonable: digital technology works in the background while humans maintain oversight and control. But I fear that, as AI adoption accelerates, this comfortable theoretical distinction cannot readily be made in practice. More than that, it's becoming a dangerous myth for organizations navigating AI transformation. The reality emerging from AI deployment at scale reveals something far more subtle and complex: the line between computational analysis and value-based judgment has become irretrievably blurred.

I'm Sorry, Dave… I Can't Let You Do That

Consider how modern AI systems function in organizational decision-making. When an AI system recommends a course of action such as restructuring supply chains to optimize for resilience over efficiency, it has already made value judgments about risk tolerance, stakeholder priorities, contextual secondary effects, and time horizons. These aren't neutral calculations. They're decisions about what matters, embedded in algorithms through weighted predictions, supplied training data, inferred optimization targets, and a million different architectural choices.

The basic "could/should" framework assumes AI systems present option sets like items on a menu, with humans selecting based on values and judgment. However, achieving this requires incredible discipline and skill in how the AI tools are used, and that discipline is sadly lacking in the vast majority of situations in which these capabilities are being applied. When AI is your preferred hammer, every problem starts looking like a nail.

The reality is that AI systems don't generate neutral possibility spaces. Every parameter, every training dataset, and every reward function encodes human values. Often this happens unconsciously, frequently inconsistently, and usually at the hands of developers worlds away from your organizational context. When AI presents you with three strategic options, it has already eliminated thousands of others based on embedded assumptions about feasibility, desirability, and viability, many of which you may want to question based on your experience, judgement, and knowledge of the operating environment.
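To make this concrete, here is a minimal, hypothetical sketch of how a seemingly "neutral" ranking step quietly encodes value judgments. The factors, weights, and threshold are all invented for illustration; the point is that whoever sets them decides which options ever reach a human reviewer.

```python
# Hypothetical illustration: a "neutral" opportunity-ranking step.
# Every number below is a value judgment, not a fact.

from dataclasses import dataclass

@dataclass
class Option:
    name: str
    expected_return: float   # 0..1, normalized
    resilience: float        # 0..1, robustness to disruption
    time_to_value: float     # 0..1, higher = faster payback

# These weights encode a view about risk tolerance and time horizon,
# chosen by whoever built the pipeline, not by the board.
WEIGHTS = {"expected_return": 0.6, "resilience": 0.1, "time_to_value": 0.3}
SHORTLIST_THRESHOLD = 0.55   # options scoring below this never reach the board

def score(opt: Option) -> float:
    return (WEIGHTS["expected_return"] * opt.expected_return
            + WEIGHTS["resilience"] * opt.resilience
            + WEIGHTS["time_to_value"] * opt.time_to_value)

def shortlist(options: list[Option]) -> list[Option]:
    # The "menu" humans see is already filtered by embedded assumptions.
    return sorted(
        (o for o in options if score(o) >= SHORTLIST_THRESHOLD),
        key=score, reverse=True,
    )[:3]

if __name__ == "__main__":
    candidates = [
        Option("Enter adjacent market", 0.7, 0.4, 0.5),
        Option("Harden supply chain", 0.4, 0.9, 0.3),   # filtered out: resilience barely counts
        Option("Automate fulfilment", 0.8, 0.3, 0.7),
    ]
    for o in shortlist(candidates):
        print(o.name, round(score(o), 2))
```

Raise the resilience weight and the shortlist changes entirely. Nothing about that choice is purely computational; it is a judgement about what matters, made long before the "final decision".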

More Decisions, More Often

More troublingly, the sheer complexity and speed of AI-mediated environments makes pure human judgment increasingly impossible. Algorithmic trading systems execute thousands of transactions per second; content moderation systems process millions of posts daily; predictive maintenance systems monitor countless sensor streams simultaneously. So where exactly does human judgment intervene? We've already ceded the "should" to machines in countless micro-decisions that aggregate into macro-consequences.

The healthcare sector illustrates this very clearly. AI diagnostic systems don't just identify possible conditions; they rank them by probability, recommend treatment pathways, and even suggest resource allocation. These actions are based on encoded medical ethics, liability concerns, and cost-benefit analyses. The radiologist reviewing an AI-flagged scan isn't making decisions in isolation but within a framework pre-structured by algorithmic judgments about what deserves attention.

Thankfully, in medical systems the governance processes and regulations around the use of AI are pretty robust (although not faultless). But outside this domain, many areas lack any such controls. Think about what happens in your own organization, from hiring practices to supply chain management, and ask yourself how well AI use is being controlled. Worried yet?

Toward a New AI Decision-Making Framework

This isn't to suggest human judgment becomes irrelevant; quite the opposite. We need frameworks that acknowledge the genuine nature of human-AI decision-making: deeply collaborative, mutually supportive, and inseparably hybrid. The question isn't whether humans or AI should make decisions, but how to design decision systems that leverage both effectively while maintaining accountability, adaptability, and alignment with organizational values.

First, we must recognize that all AI systems embody values through their design. Organizations need robust processes for interrogating these embedded values, understanding their provenance, and actively shaping them. This means involving ethicists, domain experts, and affected stakeholders in system design, not just data scientists and engineers.

Second, we need new models of distributed decision authority that map different types of decisions to appropriate human-AI configurations. Some decisions require human creativity and moral inputs; others benefit from AI's pattern recognition and consistency. Many require both, in carefully orchestrated processes. The challenge lies not in drawing rigid boundaries but in creating flexible, context-aware frameworks for decision delegation.
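One way to picture such a delegation framework is a simple routing policy that classifies each decision by impact, reversibility, and model confidence, then assigns it to automation, AI-with-human-review, or a human-led path. The categories and thresholds below are illustrative assumptions, not a standard.

```python
# Illustrative sketch of a decision-delegation policy.
# Categories and thresholds are assumptions for the example only.

from enum import Enum

class Route(Enum):
    AUTOMATE = "AI decides, logged for audit"
    AI_WITH_REVIEW = "AI recommends, human approves"
    HUMAN_LED = "Human decides, AI provides analysis only"

def route_decision(impact: str, ai_confidence: float, reversible: bool) -> Route:
    """Map a decision to a human-AI configuration.

    impact: "low" | "medium" | "high" (e.g. financial or safety exposure)
    ai_confidence: the model's self-reported confidence, 0..1
    reversible: can the decision be cheaply undone?
    """
    if impact == "high" or not reversible:
        return Route.HUMAN_LED
    if impact == "medium" or ai_confidence < 0.9:
        return Route.AI_WITH_REVIEW
    return Route.AUTOMATE

# Example: a routine re-order is automated; an irreversible supplier switch is not.
print(route_decision("low", 0.95, reversible=True))     # Route.AUTOMATE
print(route_decision("medium", 0.97, reversible=True))  # Route.AI_WITH_REVIEW
print(route_decision("high", 0.99, reversible=False))   # Route.HUMAN_LED
```

The value of writing the policy down, even this crudely, is that the boundary becomes something the organization can inspect and revise rather than something each team improvises.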

Third, organizations must develop new competencies in what we might call "algorithmic literacy": the ability to understand not just what AI systems recommend, but how they reach those recommendations, what values they encode, and what blind spots they possess. Senior leaders can no longer treat AI as a black box that delivers neutral analysis; they must understand it as a participant in organizational decision-making with its own inherent biases and limitations.

Finally, we need governance structures that reflect this new reality. Traditional approval hierarchies, which assume a human decision-maker at each level, are inadequate for AI. When AI systems make millions of micro-decisions that shape macro-outcomes, when recommendation algorithms influence rather than merely inform, and when predictive models become self-fulfilling prophecies, governance must evolve. This means new forms of algorithmic auditing, continuous monitoring of decision outcomes, and kill switches for when human intervention becomes necessary.
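As a sketch of what that governance can look like in practice (again with hypothetical names and thresholds), a thin wrapper can log every automated decision for later audit and stop automating once an outcome monitor trips a kill switch:

```python
# Hypothetical sketch: audit logging plus a kill switch around automated decisions.

import json
import time

class DecisionGate:
    def __init__(self, audit_path: str, error_rate_limit: float = 0.05):
        self.audit_path = audit_path
        self.error_rate_limit = error_rate_limit   # tolerated share of flagged outcomes
        self.decisions = 0
        self.flagged = 0
        self.halted = False

    def record(self, decision: dict, flagged_bad: bool = False) -> None:
        """Append each automated decision to an audit log and update monitoring."""
        self.decisions += 1
        self.flagged += int(flagged_bad)
        with open(self.audit_path, "a") as f:
            f.write(json.dumps({"ts": time.time(), **decision}) + "\n")
        # Continuous monitoring: trip the kill switch if outcomes degrade.
        if self.decisions >= 100 and self.flagged / self.decisions > self.error_rate_limit:
            self.halted = True

    def allow_automation(self) -> bool:
        """Once halted, every decision goes back to a human until reviewed."""
        return not self.halted

gate = DecisionGate("decisions_audit.jsonl")
if gate.allow_automation():
    gate.record({"action": "approve_refund", "amount": 42.0})
```

None of this replaces human accountability; it simply makes the machine's share of the decision-making visible, reviewable, and stoppable.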

The organizations that will thrive in AI-dominated environments won't be those that maintain an artificial distinction between computational "could" and human "should". They'll be those that develop sophisticated frameworks for human-AI collaboration, recognizing that values and judgments are distributed throughout sociotechnical systems, not localized in human minds.

The question facing senior leaders isn't whether to preserve human decision-making authority. That need is clear. Instead, it's how to exercise that authority effectively in a world where the very nature of decision-making has fundamentally changed. The comfortable position of "AI proposes, human disposes" must give way to the complex reality of hybrid intelligence. Only then can organizations harness AI's transformative potential while maintaining the accountability and wisdom their stakeholders demand.