Pedro Machado
ECB representative to the Supervisory Board
  • SPEECH

Artificial intelligence and supervision: innovation with caution

Speech by Pedro Machado at the event marking the 10th anniversary of the ethics and conduct framework of Banco de Portugal

Lisbon, 14 October 2025

Distinguished board members, ladies and gentlemen,

Thank you very much for your kind invitation to this event.[1] It is a great pleasure to join you to mark the tenth anniversary of Banco de Portugal’s ethics and conduct framework.

We are living in an era of swift and disruptive technological advances. After the advent of the internet in the early 1990s, artificial intelligence (AI) is now widely seen as the next revolutionary step in technological transformation. This profound and rapid change, which arguably will bring an array of meaningful benefits to economies and societies, also poses risks which need careful assessment and monitoring, notably from the perspective of ethics and conduct.

There is a saying: “when you have a hammer, everything looks like a nail”. Earlier this year, AI was hyped as the hammer of the century – about to master any human skill and make human work obsolete. Many, including experts in compliance and risk, remained sceptical, and this summer’s “AI disillusionment” seemed to vindicate them. However, the season ended with renewed exuberance, reflected in multibillion-euro contracts and surging market valuations across the AI landscape. One thing is clear: AI is a powerful tool, but it’s only as wise as the hands that guide it.

At the ECB, we are adapting to the technological changes and embracing the opportunities that AI presents. At the same time, we remain cautious and have realistic expectations for progress. In practice, we use AI to support our supervisors in their supervisory assessments, but the tool does not replace human judgement. We are also carefully managing the associated risks.

Today, I would like to talk in more detail about how AI is helping our supervisors do their jobs more effectively, how we are managing the related risks, and how we see the road ahead – based on a foundation of ethics, compliance and public trust. I’m also looking forward to hearing your views on this from a compliance perspective.

What we have built and why it matters

Two crucial and timely decisions have helped us to build our current solid AI foundation. In 2020 we created a division dedicated to technology and innovation within banking supervision. The timing mattered. It allowed us to put in place the foundations that make responsible AI possible today. In 2024 we encouraged our Joint Supervisory Teams to experiment with AI – carefully, transparently and with clear use cases – so that practical tools could prove their value in day-to-day supervision. The goal was to learn, improve, and gauge what worked.

Over the past three years, we have invested in a shared digital backbone for the Single Supervisory Mechanism (SSM). This is not a lab. It is a collection of tools and principles that are being used by supervisors across Europe. Let me highlight its five key foundations.

First, Agora, our prudential “data lake”, brings data together in one place. That means a supervisor can combine datasets without cutting and pasting across systems. Soon, natural language access to this data lake will allow supervisors to ask a question in plain English and receive a reliable, explainable reply without having to write a script.

Second comes intelligent access to documents. The tool Athena helps supervisors to search across millions of pages, summarise key points and instantly retrieve relevant guidance or information about banks – with visible sources.

Third, we have a Virtual Lab. This cloud collaboration space allows supervisors and data scientists to build practical AI use cases together, share prototypes and reuse what other teams have developed.

Fourth, we are building what you might call a single supervisory cockpit. This integrated view will bring together structured indicators and unstructured insights – dashboards, documents, AI assistants – with explainable flags and transparent workflows.

And lastly, we are scaling up this technology across European banking supervision. What proves its value in one team should become available for all, so that quality goes up everywhere, not just in certain areas.

These words may not be glamorous, but they describe the basic, vital components – the plumbing, if you will – of a system that makes trustworthy AI possible.

On top of these foundations, we are building specific AI tools. Let me give you some concrete examples.

Delphi enables the early detection of emerging risks for SSM banks and for the banking sector overall. Using language capabilities, it integrates market indicators and social media information into a single web-based dashboard. This allows supervisors to understand the real-time development of risks affecting each bank. Another clear example is Medusa. This tool is a one-stop-shop for multiple stakeholders working on supervisory findings and measures. Medusa enables inspectors and supervisors to access relevant documents easily, using smart search and reporting functionalities as well as visualisations and statistical analyses.

Heimdall assists with fit and proper assessments by reading banks’ questionnaires, translating where needed and highlighting relevant passages for review. Again, Heimdall does not decide. It prepares the ground so that supervisors can focus on complex cases and apply judgement where it matters. Navi visualises ownership and funding relationships across groups. When risk runs through structures – not just through simple line items – Navi reveals in pictures what spreadsheets miss.

As I mentioned, these are only some examples. Many other AI tools are under development: these will make specific assessments more robust, enable more effective supervisory measures and even provide support with complex quantitative analysis.

Of course, all of this did not happen overnight. Several years ago, we reached out to all the relevant business areas and collected more than 100 ideas. We clustered these into high-value use cases and prioritised those with real benefits. We also put effort into rolling out what works more widely, so it serves all supervisors rather than a single team. That is the spirit behind our shared IT landscape: simpler data flows, interoperable systems and reusable solutions – so that every European banking supervisor can work from the same trusted foundations.

What have we learned from these early deployments? Four things stand out. The main gain is quality. Yes, there are time savings. But the largest benefit of adopting AI is depth: reading at scale, linking a wide array of relevant sources and iterating quickly. AI is good at sifting millions of words, extracting entities and topics, and identifying patterns. This does not replace the supervisor’s assessment; it provides a more complete view to support a well-founded conclusion. AI can also improve consistency. When the same question runs on the same data and the output is traceable, outcomes are likely to be more consistent. That can further support a level playing field in the supervision of banks across the euro area.

In addition, AI allows supervisors to devote more time to what matters. They can spend less time on assembling information, and more time on challenging analyses and conclusions, ensuring that our supervisory actions make sense from different perspectives. That is a healthier allocation of effort.

Lastly, the risk landscape is expanding fast. As a consequence of AI, the financial system itself is becoming more complex and exposed to new forms of risk – from algorithmic trading dynamics to AI-driven fraud and cyberattacks. By embracing AI within supervision, we’re ensuring that our own capabilities stand a chance of keeping pace with these challenges.

In short: AI is strengthening our human-centred supervision by making it better informed, more consistent and more focused on judgement. This makes us better equipped to keep pace with a financial system that is becoming more digital, more complex, and ultimately ever more demanding.

How we manage the risks

For all its promise, AI brings with it risks we must manage with care. Let me highlight five of them.

Inaccuracy and hallucinations. Today’s large language models can produce answers that are fluent, confident – and wrong. In supervision, “wrong but confident” is outright dangerous. Our response is twofold: we ground systems in authoritative sources – the evidence should always be just one click away. And for any material output that affects banks, we keep human understanding and judgement at its heart – and not merely “in the loop”.

Skills and culture. There is a real risk of deskilling – people routinely accepting AI output without challenging it and, ultimately, even losing their ability to critically challenge it. We are countering this by investing in skills: targeted training, developer communities sharing best practices, and sandboxes where teams can safely experiment and reproduce results, coupled with an internal culture of respectful mutual challenge. The message is clear: use AI assistants to think more deeply – never to do your thinking for you.

Explainability. We will not outsource public-interest judgements to opaque black boxes. For us, explainability is not optional. Whether a tool is proposing a paragraph for a letter or flagging a risk, its reasoning must always be at our fingertips.

Cyber and operational risk. AI can help defenders – but it also helps attackers. Individualised social engineering, faster code generation and deepfakes mean that the bar for controls must constantly be raised. We are strengthening cyber hygiene, monitoring and incident response, and our approach to operational risk management combines artificial and human strengths.

Lastly, the cliché of “AI talking to AI”. Banks are increasingly using AI to draft what they send us. Supervisors are increasingly using AI to review it. If we are not careful, we could end up with AI assessing AI while underlying realities – including underlying risks – remain hidden. Our countermeasures are practical: sample raw data, reconcile narratives with independent indicators, and question anything that looks too neat.

Europe’s AI Act is entering into force and will be applied in phases. We are aligning our internal practices and vendor arrangements with that risk-based framework. Most of our use cases – search, classification, translation and analytical assistance – do not fall into the highest risk categories, but they are not trivial. Good governance is still required. Each of our AI use cases has to undergo an internal operational risk assessment, without which not even Virtual Lab prototypes are permitted.

Compliance is central to this. Its role is to turn principles into operating guardrails: clear accountability, proportionate controls and traceability from data to decision. That is how we protect integrity while we innovate.

The role of ethics and conduct in public institutions and financial oversight is therefore more vital than ever. In moments of rapid transformation, it is not only laws and procedures that anchor us, but also the values we embody and the trust we inspire. Institutions can only be as strong as the ethical convictions of those who serve within them.

Artificial intelligence, data-driven decision-making and other innovations challenge us to ensure that technological progress does not outpace our moral compass. Innovation without integrity can quickly erode trust, while innovation guided by sound ethical principles can multiply its benefits.

Conclusion

We often hear people say “AI will make processes more efficient”. Yes, time savings are part of the benefit: routine steps such as standard checks, common classifications and retrieval of relevant guidance become smoother. But the real opportunity comes when these time savings are invested into enhanced supervisory quality, better reasoned and more accurate assessments and more effective supervisory actions and measures. Supervision becomes less about assembling information and more about challenging conclusions. We need these enhancements to effectively adapt to an ever riskier financial landscape.

At the ECB, we will continue developing and experimenting responsibly, with appropriate checks and balances. While doing so, we will keep human judgement at the forefront, especially as supervised banks themselves increase their use of AI. Financial stability is complex. It is not a matter of a few simple rules. There is no well-defined target function to be optimised. Supervision requires deep understanding, evidence-based analysis and intuition based on experience and underlying human values. It also requires critical discussion and an open mind.

If we do all of this well, the future of supervision will not be machines supervising banks. It will be humans supervising more effectively – supported by machines – with more time to engage where we add the most value.

Innovation needs to be safe, legitimate and robust. Combining human judgement with machine assistance is not just a technical challenge. It is an ethical one. As Seneca observed, “It is not because things are difficult that we do not dare, but because we do not dare that they are difficult”.[2] So let’s pick up the hammer and embrace technological change while remaining vigilant – keeping safety and strength at the core of what we build.

Thank you.

  1. I would like to thank Jan Hendrik Schmidt for his contribution to this speech.

  2. Epistulae Morales ad Lucilium (Moral letters to Lucilius), Letter 104.
