Technology is neutral, governance is not: AI adoption in the banking sector
Keynote speech by Pedro Machado, KPMG RiskTech Conference
Frankfurt am Main, 24 February 2026
Thank you for having me here today.[1] It is a pleasure to address such an expert audience on a topic that is no longer on the long-term horizon of banking, but firmly at its core: the adoption of artificial intelligence (AI) and what this means from a supervisory perspective.
Let me start with a broader observation.
The banking sector is undergoing profound structural change. Digitalisation, data-driven business models, new competitors, evolving customer expectations and rapid technological innovation are fundamentally reshaping how banks operate, how they generate value and how they manage risk.[2] In response, banks need to rethink their business models and transform their operations. But this transformation cannot be pursued in isolation. It must go hand in hand with strong governance, sound risk management and full compliance with an increasingly dense and interconnected regulatory framework, including the Digital Operational Resilience Act (DORA), the Markets in Crypto-Assets Regulation, the Digital Services Act and the Artificial Intelligence Act.
Against this background, AI is not just another tool in the digital toolbox. Widely considered a general-purpose technology, it cuts across business lines, control functions and strategic decision-making. And precisely for that reason, it has become a key topic for supervisors.
Today, I would like to share a few points with you: what supervisors are currently monitoring in banks’ use of AI; where we see tangible benefits; where there are remaining gaps and emerging risks; and how supervision is adapting to ensure that innovation supports, rather than undermines, the stability and resilience of the banking system.
AI adoption: from experimentation to materiality
From a supervisory standpoint, one message is very clear: the use of AI in banking is no longer marginal.
Our annual data collection on innovative technologies across large banks under European supervision shows that more than 85% of them already use AI in some form, and adoption rates continue to increase year after year. This trend is not slowing down; quite the contrary, it is accelerating, particularly with the rapid emergence of generative and agentic AI.
Importantly, this is not just about a higher number of use cases. It is about where AI is being used and how deeply it is embedded in banking operations.
Traditionally, supervisory attention has focused on AI in areas such as credit risk assessment, fraud detection and transaction monitoring. These remain highly relevant, and I will come back to them.[3] But today, AI – and especially generative AI – is increasingly being deployed in three main areas: first, IT operations, for example to support incident management, coding or system maintenance; second, legal and document analysis, including contract reviews, regulatory interpretation and internal policy work; and third, front-line applications, such as customer support, relationship management and internal knowledge tools.
This horizontal expansion is significant. It means that AI is no longer confined to specialist modelling teams. It is now becoming part of the day-to-day operating fabric of banks.[4]
From a supervisory perspective, this shift matters because it changes the nature of risk. AI is no longer only about model risk in a narrow sense. It increasingly affects governance frameworks, business model evolution and multiple risks – operational risk, conduct risk, compliance risk and strategic risk.[5]
What we learned from engaging directly with banks
To move beyond high-level trends and better understand how AI is actually used in practice, ECB Banking Supervision recently conducted workshops with a number of banks that use AI in credit risk and fraud detection.
These workshops were particularly valuable because they allowed supervisors to engage directly with practitioners, not just at policy level, but also at operational level.
One key message from these banks was confidence. Many banks consider themselves well equipped to reap the benefits of AI, while also managing the associated risks prudently. They emphasised that they are building on existing governance structures, established risk modelling capabilities and long-standing experience with validation, monitoring and control frameworks.
This is a key point. AI adoption in banking does not start from a blank slate. The sector has decades of experience with complex models, internal ratings systems and supervisory scrutiny. That experience is a real asset.
At the same time, supervisors have also observed that AI introduces qualitative changes that cannot be addressed simply by expanding the existing frameworks. And this brings me to where we see the main challenges.
Governance: progress made, but work still to do
Let me start with governance, because it is here that many AI-related issues ultimately converge.[6]
Supervisors have seen encouraging developments in recent years as banks have put in place AI policies, dedicated committees or centres of expertise and clearer internal processes for approving and deploying AI use cases.
That said, governance remains uneven across institutions and use cases.
In particular, banks still need to do more in three main areas. First, they need to ensure clear and unambiguous accountability for AI-driven decisions. Second, there needs to be effective senior management oversight, reflecting the strategic importance of AI. And third, robust challenge mechanisms have to be in place, involving risk management, compliance and internal audit.
One supervisory concern we continue to encounter is fragmented ownership, with responsibility split across IT, data science teams, business lines and control functions, without a clear accountability framework.
These concerns are consistent with the principles in the European Banking Authority’s Guidelines on internal governance[7], notably the ultimate responsibility of the management body, the clarity of roles and responsibilities across the three lines of defence, effective independent challenge and adequate internal audit coverage of material risks.
From a supervisory standpoint, this is problematic. AI does not dilute responsibility. If anything, it raises the bar.
AI must be governed as a core business and risk topic, not as a technology side project. This includes ensuring that the risks stemming from AI use are clearly reflected in the risk appetite, that decisions on material AI use cases are subject to appropriate pre-implementation assessment and robust risk assessments performed by the second line of defence, and that post-implementation monitoring and internal reporting allow senior management to retain effective oversight.
Risk management frameworks: adapting to AI’s impact on how risks manifest
Closely linked to governance is the question of risk management frameworks.
Here again, supervisors recognise the efforts being made by banks. But we also note that the existing frameworks do not always fully cater to AI-specific challenges. Three areas warrant particular attention.
First, explainability. Banks are increasingly deploying explainability tools, which is a welcome development. But explainability must be properly understood. It is not just a matter of offering technical explanations or model documentation. It is about ensuring that decision-makers understand what drives model outputs, that risk managers can challenge them, that internal auditors can independently review them and that senior management can take responsibility for how they are used.
If a bank cannot explain why an AI model behaves the way it does, in terms that are meaningful for decision-making, then it cannot truly control that model.[8]
Second, model governance and lifecycle management. AI models, particularly machine learning-based ones, can evolve over time as data change. This makes monitoring, validation and change management more complex. Consequently, sound model management becomes even more important: model performance must be monitored continuously so that drift and unintended effects are detected, with clear escalation and remediation processes in place for when models behave unexpectedly.
Third, data quality and data governance. AI shifts supervisory attention upstream from models to data. Data representativeness, data lineage and safeguards against bias are all critical. This is particularly true in areas such as retail credit and customer segmentation, where historical data may embed structural biases. Importantly, bias is not only a conduct or ethical issue. It can also become a prudential issue if it leads to the systematic underestimation or mispricing of risk or concentration effects.
Generative AI: a step change in dependencies
While many of these issues already arise with traditional AI, generative AI introduces additional challenges that deserve special attention. One key difference is dependency structure. Unlike many traditional models, generative AI systems are often sourced from a small number of major third-party providers and are heavily reliant on cloud infrastructure and built on large general-purpose models that are not fully transparent to users.
From a supervisory perspective, this raises several concerns regarding concentration risk, vendor lock-in, data confidentiality and security, operational resilience and exit strategies, as well as legal and reputational risks.
These issues link directly to DORA and to the broader supervisory focus on ICT and third-party risk. In other words, generative AI sits at the intersection of technology risk, operational resilience and strategic dependency risk.
This is one reason why supervisors intend to take a more targeted approach to generative AI applications in the future. In this regard, let me mention the publication, in July 2025, of the ECB’s Guide on outsourcing cloud services to cloud service providers,[9] explaining how the ECB expects banks to comply with DORA requirements. It also sets out good practices on effective outsourcing risk management for banks under ECB supervision that use third-party cloud services, based on observed industry practices, thereby fostering supervisory consistency and helping to ensure a level playing field by increasing transparency.
It is important for banks to reduce vendor lock-in risks by relying less on proprietary technology used by cloud service providers, and to have appropriate and tested contingency options for cloud services supporting critical or important functions. In light of complex supply chains, banks should take into account any risks associated with multiple sub-outsourcing arrangements. It is worth highlighting that DORA requires banks to conduct risk analysis that covers certain elements prior to entering into a new arrangement and to maintain a clear and holistic view of the risks associated with subcontracting services that support critical or important functions.
Strategy matters: AI as part of digital transformation
Let me now zoom out again and return to strategy. When banks leverage new technologies, in particular AI, supervisors expect them to do so within a coherent and explicit digitalisation strategy, in line with the key assessment criteria the ECB published in 2024.
This means that banks should be able to articulate where AI creates value, how it supports the business model, and how the associated risks are identified, assessed and managed.
Efficiency gains and innovation are legitimate objectives. But they must be pursued within a framework of control and accountability. Banks must also ensure that their strategy is aligned with their internal capabilities, not only in terms of financial resources, but also human resources such as IT skills, cultural change and innovation enablement, which are all key to success.
From a supervisory perspective, ambition is not a red flag. But an unclear strategy, or a missing cost-benefit analysis of investment decisions, is, because it results in poor steering of outcomes and risk-adjusted returns.
AI initiatives that proliferate without a clear strategic anchor tend to lead to fragmented governance and inconsistent controls, weak capital allocation and investment decisions, and the build-up of hidden risks.
We should also bear in mind that AI is no longer only a technological race but a geopolitical one. Technology choices, partnerships and governance models will shape long-term competitiveness, strategic autonomy and, ultimately, the digital sovereignty of the European banking system.
Supervisory outlook: technology-neutral, risk-focused
Let me now briefly outline how supervision is positioning itself in this fast-changing environment.
AI is reshaping banking rapidly and profoundly. As supervisors, we are committed to keeping pace with technological change while remaining anchored in our core mandate: safeguarding the stability and resilience of the banking sector.
In recent years, ECB Banking Supervision has identified key criteria and good practices to guide banks’ digital transformation in a sustainable and risk-aware manner. ECB Banking Supervision’s report[10] published last year on the topic revealed a significant increase in the adoption rate of AI in banking services. The targeted reviews and on-site inspections conducted between 2024 and 2025 confirmed this development. Particular attention has been given to AI use in credit scoring and fraud detection, as well as to the emerging – and potentially disruptive – role of generative AI.[11] In 2025, we stepped up our monitoring through dedicated data collections and targeted supervisory dialogue to assess the microprudential implications of these developments.
Within the framework of the ECB’s supervisory priorities for 2026-28, under Supervisory Priority 2 on operational resilience and ICT capabilities, we will continue monitoring AI, with a more focused approach to generative AI applications.[12] This will allow for a broader assessment of their prudential materiality and inherent risks, paving the way for future supervisory action where needed. In parallel, we are actively following developments on both the EU AI Act and information sharing by national market surveillance authorities and the European Banking Authority.
Going forward, ECB Banking Supervision will continue to monitor the general use of AI across banks, while taking a more targeted and in-depth approach to generative AI applications and further evolution. We will also remain actively involved in discussions on the implementation and review of the AI Act, including through our contribution to the work of the AI Board’s subgroup on AI in financial services.
Our approach is technology-neutral. We do not supervise technologies. We supervise how banks apply them, how well they govern them and how this affects their risk profiles – regardless of whether the risks arise from AI, tokenisation initiatives or other forms of innovation.
The supervisory objective is not to slow down this transformation, but rather to ensure that banks embrace new technologies prudently, integrate them coherently into their business and digital strategies, and remain fully in control from the outset, well before AI usage reaches systemic scale.
With the right governance framework in place, AI can strengthen banks, enhance resilience and support financial stability.
Conclusion
Let me conclude. Banks should prioritise digitalisation efforts to strengthen their competitiveness and effectively manage risks stemming from new technologies. Rapid technological changes, particularly in the area of AI, are reshaping the banking sector, and institutions must act strategically to capture long-term value and adapt their business models. Banks are increasingly using AI because of both supply-side factors, including more affordable and more widely available technical resources such as model development tools and cloud storage, and demand-side factors, like expected efficiency gains and increased competition.
While AI has the potential to improve risk management and information processing, as well as provide efficiency gains through automation, the associated risks may become more noticeable as the corresponding AI applications are more widely used. The growing use of AI tools thus calls for a structured and holistic approach that integrates AI-related strategy, governance and risk management.
Supervisors, in turn, need to refine their assessment frameworks to better evaluate banks’ AI-related strategies, promote the adoption of industry best practices and ensure that appropriate safeguards are in place. This supervisory priority aims to establish a strategic stance on both the opportunities and the risks inherent in AI-driven applications and to pave the way for potential adjustments to the supervisory toolkit. In this way, ECB Banking Supervision can help banks proactively address emerging risks, while aligning the usual short to medium-term focus of the supervisory priorities with a longer-term strategic perspective.
In 1934, T.S. Eliot asked: “Where is the wisdom we have lost in knowledge? Where is the knowledge we have lost in information?”[13] At times of rapid technological change, AI vastly expands our capacity to generate and process information. But information does not automatically translate into judgement. And in banking, the real challenge lies not only in deploying technology, but also in governing it responsibly.
I am grateful to Katia Mastrodomenico, Olga Aloupi, Meven Barrillot and Gilles Bouvier for their comments and support in preparing this speech. All errors and inaccuracies are my own.
See Machado, P. (2025), “Time is on our side: Embracing digital change while ensuring stability”, speech at the SSM Conference on Digitalisation, 16 October, and Montagner, P. (2026), “Encouraging innovation, managing risks: the ECB’s approach to digital transformation”, speech at the 10th Annual FinTech and Regulation Conference, 3 February.
See Singh, S., Schupbach, A., Asiala, A. and Siwecki, D.A. (2025), “AI’s impact on banking: use cases for credit scoring and fraud detection”, Supervisory Newsletter, 20 November.
For a broad overview, including the insurance sector, see Basel Committee on Banking Supervision (2024), “Regulating AI in the financial sector”, FSI Insights, No 63, 12 December.
For a broad overview on digital dependency risks, see Dutch Authority for the Financial Markets/De Nederlandsche Bank (2025), Digital dependence of the financial sector.
For a more general overview of bank governance in the current evolving risk landscape, see Tuominen, A. (2025), “Bank governance in a changing risk landscape”, speech at the “Board of the Future” seminar, jointly organised by the European University Institute and the ECB, 27 October.
See EBA (2021), Final report on Guidelines on internal governance under CRD.
See Basel Committee on Banking Supervision (2025), “How supervisors can address explainability”, Occasional Paper, No 24, September.
See ECB (2025), Guide on outsourcing cloud services to cloud service providers.
ECB (2024), Digitalisation: key assessment criteria and collection of sound practices.
ECB (2025), Aggregated results of the 2025 SREP, November.
See ECB (2025), Supervisory priorities 2026-28, in particular Priority 2.
Eliot, T.S. (1934), Choruses from “The Rock”, Faber & Faber, London.
European Central Bank
Directorate General Communications
- Sonnemannstrasse 20
- 60314 Frankfurt am Main, Germany
- +49 69 1344 7455
- media@ecb.europa.eu
Reproduction is permitted provided that the source is acknowledged.