Digital Economy Dispatch #181 -- A Basis for Measuring AI-at-Scale Maturity
28th April 2024
I've always approached maturity models with a certain amount of skepticism. Over the years, I've encountered a myriad of these models, tailored to various sectors, organizations, products, and services. While they can serve as valuable tools for assessing progress and benchmarking against competitors, I've found that too often they have been crafted to support a particular agenda or to facilitate costly consulting engagements. Thus, whenever a new maturity model report surfaces, I approach it with caution and a series of questions: What purpose does it serve? Who developed it? What are their motives? How was the data gathered and evaluated? Such questions are vital for establishing whether the motives and measures make sense.
Nonetheless, it is hard to ignore the utility of identifying a set of fundamental characteristics that can establish a common framework for comparing multiple systems and solutions. This is particularly pertinent in emerging fields where terminology is still evolving, excessive hype abounds, and new products and services are emerging incessantly. Yes – just like AI.
The Elements of AI-at-Scale Maturity
AI has emerged as a transformative force across industries. Leaders and decision-makers increasingly acknowledge its potential to fuel innovation, streamline processes, and deliver competitive advantage. However, scattered, uncoordinated use of AI tools by a range of individuals and teams is far from sufficient. To fully harness AI's potential, organizations must adopt an "AI-at-Scale" mentality. This will drive the seamless integration of AI across diverse business functions, fostering a culture driven by data and consistently yielding value.
In this journey, an AI-at-Scale maturity evaluation gives organizations an effective way to track their progress and pinpoint areas for enhancement. In my experience, several key dimensions can serve as focal points for gauging an organization's AI-at-Scale maturity:
Strategic Alignment: Mature organizations exhibit a clear alignment between their AI initiatives and overarching business strategies. They define a distinct AI vision, delineating specific objectives tied to strategic imperatives. Rigorous business case assessments ensure that AI projects address critical issues and deliver tangible returns on investment (ROI).
Data Governance and Infrastructure: Data forms the bedrock of AI. Mature organizations employ robust data governance frameworks that uphold data quality, security, and accessibility for AI models. They invest in scalable, secure data infrastructures capable of accommodating the ever-expanding volume, variety, and velocity of data essential for AI at scale.
Talent and Skills: Effective AI implementation necessitates a proficient workforce. Mature organizations nurture a culture of AI literacy, fostering continuous learning and upskilling initiatives. They actively recruit and retain talent versed in data science, machine learning, and AI engineering.
Operational Efficiency: AI projects should seamlessly integrate into existing workflows. Mature organizations establish efficient processes for model development, deployment, monitoring, and maintenance. Automation tools and robust Machine Learning Operations (MLOps) practices facilitate a smooth, uninterrupted AI lifecycle.
Business Value Generation: Ultimately, AI must deliver tangible benefits. Mature organizations define clear metrics to measure the success of their AI initiatives. These metrics extend beyond model accuracy, encompassing business-specific outcomes such as revenue growth, enhanced customer satisfaction, and operational cost reductions.
With these areas of focus, organizations can start to align and coordinate their efforts, creating a more substantive and sustainable approach to AI adoption. Understanding and tracking progress in these areas can help prioritize investments in new AI technologies and practices to ensure they advance the organization's AI-at-Scale ambitions.
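To make this kind of tracking concrete, here is a minimal sketch of a self-assessment rubric over the five dimensions above. The dimension names come from this article; the 1-5 level labels and the example scores are my own illustrative assumptions, not a published scale.

```python
# Minimal self-assessment sketch: score each dimension on an assumed 1-5 scale
# and surface the weakest dimension(s) as candidates for investment.
LEVELS = {1: "Ad hoc", 2: "Emerging", 3: "Defined", 4: "Managed", 5: "Optimized"}

def weakest_dimensions(scores: dict[str, int]) -> list[str]:
    """Return the lowest-scoring dimension(s), i.e. where to prioritize effort."""
    low = min(scores.values())
    return [dim for dim, score in scores.items() if score == low]

if __name__ == "__main__":
    # Hypothetical scores for one organization.
    scores = {
        "Strategic Alignment": 3,
        "Data Governance and Infrastructure": 2,
        "Talent and Skills": 2,
        "Operational Efficiency": 3,
        "Business Value Generation": 1,
    }
    for dim, score in scores.items():
        print(f"{dim}: {score}/5 ({LEVELS[score]})")
    print("Priority areas:", ", ".join(weakest_dimensions(scores)))
```

Even a rough rubric like this moves the conversation beyond "are we using AI?" towards where progress is uneven and which dimension deserves the next investment.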
An Example: The Evident AI Index for Banks
It was with these thoughts in mind that I came across the Evident AI Index for Banks. It aims to provide an impartial assessment of AI maturity among the world's 50 largest banks. By employing an "outside-in" approach, it relies solely on publicly available information to offer a comprehensive evaluation of how these banks are approaching AI-at-Scale.
Defining AI in Context
Within the scope of the Evident AI Index, the AI focus is on the utilization of disruptive technology within a bank's operations. It encompasses computational processes that analyze extensive datasets to facilitate human-like decision-making, recommendations, or predictions. These tools automate and scale specific tasks, enhancing efficiency across the bank.
Despite AI's disruptive potential, fully realizing its value requires substantial organizational transformation. This transformation hinges on years of investment in digitization and robust data infrastructure. Recent industry discussion has heavily focused on Generative AI, particularly its potential benefits in fraud detection, customer service, and risk assessment. However, these discussions often overlook associated risks such as bias, data security, and regulatory challenges.
The Evident AI Index Perspective
The Evident AI Index views Traditional AI (pattern recognition) and Generative AI (pattern creation) as existing on a spectrum, both requiring investment in the same foundational elements. To explore this, they observe that successfully adopting AI at scale mandates continuous evaluation across four key, interconnected areas:
Talent: Long-term investment in skilled personnel. The Evident AI Index measures the number, density, and academic background of AI & Data employees working at each bank; as well as the visible initiatives underway to hire, retain, and develop leading AI talent.
Innovation Capability: Cultivating a culture of innovation. The Evident AI Index measures a bank’s long-term investment in AI innovation, extending to AI-specific research and patents; AI-focused investments, acquisitions, and partnerships; as well as engagement with the open source ecosystem.
Leadership: Strong top-down leadership championing AI initiatives. The Evident AI Index measures the AI focus of the bank’s leadership, expressed through the company’s overarching AI narrative, composition of the Executive Leadership team, and external communications from select C-Level Executives.
Responsible and Ethical AI: Establishing a rigorous framework for responsible AI (RAI) development and deployment. The Evident AI Index measures the extent to which banks are focusing on RAI, as evidenced by the publication of thought leadership, establishment of key partnerships, hiring of dedicated RAI talent, and promotion of RAI principles.
Data Collection Methodology
The Evident AI Index evaluates each bank against over 100 individual indicators grouped into four core pillars. Leveraging a multifaceted approach, the Index combines extensive manual research, automated data collection, and consultations with leading subject matter experts. Transparency guides metric selection, ensuring clarity, measurability, and direct responsiveness to specific actions or investments, thereby maintaining the Index's relevance and effectiveness.
To achieve this, the Evident AI Index employs proprietary machine learning tools to extract data points from millions of sources across two primary categories:
First-party Company Reporting and Public Disclosures: This encompasses press releases, investor materials, websites, social media accounts, and senior leadership interviews.
Third-party Platforms Housing Bank Data: This includes data from LinkedIn profiles, career sites, patent databases, academic resources, conference platforms, company information platforms, employee review sites, code repositories, and media monitoring tools.
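To illustrate the overall structure, here is a minimal sketch of how indicator-level evidence might roll up into pillar scores and a single composite figure. The four pillar names come from the Evident AI Index; the normalization to 0-100, the averaging of indicators, and the pillar weights are my own illustrative assumptions, not Evident's published methodology.

```python
# Minimal aggregation sketch: normalized (0-100) indicator scores are averaged
# within each pillar, then pillars are combined with assumed weights. Pillar
# names follow the Evident AI Index; all numbers here are hypothetical.
PILLAR_WEIGHTS = {
    "Talent": 0.30,
    "Innovation Capability": 0.30,
    "Leadership": 0.20,
    "Responsible and Ethical AI": 0.20,
}

def pillar_score(indicators: list[float]) -> float:
    """Average the normalized indicator scores within one pillar."""
    return sum(indicators) / len(indicators)

def composite_index(pillar_indicators: dict[str, list[float]]) -> float:
    """Weight the pillar scores to produce a single 0-100 composite figure."""
    return sum(
        PILLAR_WEIGHTS[pillar] * pillar_score(values)
        for pillar, values in pillar_indicators.items()
    )

if __name__ == "__main__":
    # Hypothetical normalized indicator scores for one bank.
    bank = {
        "Talent": [62.0, 48.0, 71.0],
        "Innovation Capability": [55.0, 60.0],
        "Leadership": [40.0, 52.0],
        "Responsible and Ethical AI": [35.0, 44.0, 50.0],
    }
    for pillar, values in bank.items():
        print(f"{pillar}: {pillar_score(values):.1f}")
    print(f"Composite index: {composite_index(bank):.1f}")
```

The real Index draws on over 100 indicators and its own weighting scheme, but even this toy version shows why a single composite number needs care: a strong pillar can mask a weak one, which is exactly the limitation discussed below.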
Challenges of Using Maturity Models as an Assessment Approach
Maturity models such as the Evident AI Index can provide a significant amount of insight and data. Yet, despite their utility, they pose inherent challenges when used to assess AI-at-Scale maturity, and it is important to acknowledge their limitations. Most obviously, these models often offer broad assessments based on generic frameworks that might not capture the nuances of an organization's specific industry, business goals, or technological landscape. Additionally, excelling in one dimension (e.g., data infrastructure) might mask deficiencies in another (e.g., talent acquisition).
As a result, when employing any maturity assessment, several challenges must be addressed. Perhaps the most difficult operational aspect is that maturity models can be complex frameworks comprising multiple dimensions and criteria. This makes them expensive to apply and leaves them susceptible to subjective interpretation. The diversity of facets being assessed also makes it hard to develop standardized metrics and benchmarks that apply across industries and organizational contexts.
Furthermore, the rapid evolution of AI technologies requires continual adaptation and refinement of maturity models to remain relevant. This poses a dilemma. Consistency is essential if maturity models are to be used to compare and contrast multiple offerings, or to review the same offering over time. However, static maturity models often fail to capture emerging trends, technological advancements, and evolving best practices, undermining their efficacy as assessment tools for AI-at-Scale maturity.
This is made harder still because a useful maturity assessment relies heavily on extensive data inputs describing organizational capabilities and performance. Obtaining accurate and comprehensive data for assessment purposes is difficult, especially where proprietary information, data silos, and inconsistencies in data quality across organizational domains are involved. As a result, implementing a meaningful maturity assessment requires significant investment of time, expertise, and money, and organizations may struggle to dedicate these resources amidst competing priorities.
A Measured Approach to Maturity
Achieving the benefits of AI-at-Scale requires effective leadership and careful management. Planning the steps on the journey and benchmarking performance against others will be important in this task. Conducting an AI-at-Scale maturity assessment can help. When taking this approach, there are several key considerations for digital leaders to navigate the complexities of measuring and managing AI-at-Scale maturity:
Embrace a Holistic Approach: Move beyond a checklist mentality. Assess AI maturity across various dimensions, tailoring the evaluation based on your organization's unique context and goals.
Focus on Business Outcomes: Don't get caught up solely in technical metrics. The ultimate goal is to translate AI capabilities into actionable business value. Define clear success metrics linked to strategic objectives.
Foster a Culture of Learning: Create an environment where continuous learning and experimentation are encouraged. Invest in training programs and knowledge-sharing initiatives to upskill your workforce.
Prioritize Responsible AI: Ensure your AI initiatives are implemented responsibly and ethically. Address potential biases in data and algorithms, and build trust with stakeholders by focusing on transparency and explainability.
Adapt and Iterate: The AI landscape is constantly evolving. Regularly reassess your AI maturity level and adapt your strategies based on new developments and emerging best practices.
By implementing these recommendations, digital leaders can embark on a data-driven journey towards achieving AI-at-Scale maturity. Maturity models can be helpful in this task, but only when applied with care. This journey is not about reaching a fixed endpoint, but rather about cultivating a continuous cycle of learning, improvement, and value generation through the responsible deployment of AI.