Responsible innovation: how BBVA ensures safe use of artificial intelligence
BBVA has adopted a data and AI governance strategy that lets it widen the use of artificial intelligence safely and responsibly. The bank follows industry best practices and international regulations, keeps an inventory of its AI systems, and monitors them regularly to assess their accuracy, spot errors and ensure they remain current.

Artificial intelligence is reshaping finance. It helps firms personalize services for customers, manage risk more effectively and run operations more efficiently. Generative AI, powered by cloud computing, has greatly expanded the scope for innovation. At BBVA, it forms a core part of the bank’s transformation strategy, driving smarter solutions, sharper decision-making and closer ties with customers. Recent milestones in this strategy include a global agreement with Amazon Web Services to advance cloud adoption, the launch of the Analytics Transformation unit to bring together more than 2,500 data scientists across countries, the expansion of its AI Factories, and the integration of generative AI into its virtual assistant, Blue.
BBVA also aims to lead in managing the risks that come with AI. Its use must rest on strong governance: clear rules and practices that ensure security, transparency and compliance with regulation. As Ricardo García Martín, head of Analytics Transformation, puts it: “Our priority is to ensure transparency, reduce bias, and build on a secure infrastructure with high-quality data that respects privacy laws.”
What is model risk and how does it affect AI?
Model risk is the chance that an algorithm produces inaccurate or biased results, distorting decisions and potentially leading to losses. As AI models grow more complex, explaining how they work (explainability) and tracing how each output was derived have become increasingly difficult, yet essential. The European Union’s AI Act requires companies using artificial intelligence within its borders to put strong governance in place to reduce these risks.

BBVA manages these challenges based on three fundamental pillars:
- The first is strong data and model governance. The bank follows a framework aligned with global standards, including the BCBS 239 principles issued by the Basel Committee on Banking Supervision and the EU’s AI Act. This ensures regulatory compliance and helps reduce bias at every stage of a model’s life cycle.
- The second pillar is a centralized, up-to-date inventory of AI systems. BBVA maintains detailed oversight of its models to ensure proper use, reduce reliance on individual experts, and promote the reuse of analytical tools.
- The third is continuous monitoring and maintenance. The bank regularly tests its models for accuracy, flags deviations, and updates them as needed to keep performance on track.
Data and AI model governance
BBVA’s governance framework is built not only on current regulations but also on international standards and best practices, with the goal of reducing model risk. The framework defines clear criteria for effective data governance and the responsible development and use of AI systems, ensuring that models remain compliant throughout their entire life cycle.
In addition, BBVA takes a proactive approach to evolving regulations like the EU AI Act. The bank classifies its AI systems by risk level and has developed a detailed roadmap to prepare for and ensure full compliance as new rules come into force.
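A risk-based classification like the one described above can be sketched in code. The tiers below follow the EU AI Act’s four levels; the use-case mapping and function names are purely illustrative assumptions, not BBVA’s actual taxonomy, and a real classification would be far more granular and legally reviewed.

```python
from enum import Enum

class AIActRiskLevel(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited practices
    HIGH = "high"                  # e.g. credit scoring of natural persons
    LIMITED = "limited"            # transparency obligations (e.g. chatbots)
    MINIMAL = "minimal"            # no additional obligations

# Hypothetical mapping of use-case tags to tiers, for illustration only.
USE_CASE_TIERS = {
    "credit_scoring": AIActRiskLevel.HIGH,
    "customer_chatbot": AIActRiskLevel.LIMITED,
    "spam_filter": AIActRiskLevel.MINIMAL,
}

def classify(use_case: str) -> AIActRiskLevel:
    """Return the risk tier for a use case, defaulting to HIGH so that
    any unregistered system receives the strictest review."""
    return USE_CASE_TIERS.get(use_case, AIActRiskLevel.HIGH)
```

Defaulting unknown systems to the highest tier is a conservative design choice: it forces every new use case through review before it can be treated as low risk.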
The importance of a complete and up-to-date inventory
Maintaining a robust inventory of AI models is critical for tracking and understanding each model’s status throughout its life cycle. Such an inventory offers a comprehensive view of all AI systems across the organization, helping teams gain deeper insight into each model’s purpose, assumptions, and limitations. This not only reduces reliance on key individuals but also promotes the reuse of analytical components, increasing efficiency.
BBVA is actively committed to sustaining a centralized, global model inventory that is continuously updated and evolving. This ongoing effort enhances transparency and reinforces trust in AI-driven decision-making across the organization.
Continuous monitoring and maintenance
At BBVA, the deployment of an AI model is just the first step in an ongoing cycle of improvement. To ensure each model remains effective and aligned with business goals, the bank has put continuous monitoring processes in place to assess the model’s predictive performance, detect any deviations over time, and implement updates when needed.
BBVA also defines clear roles and responsibilities for the oversight of each AI model. This structured approach enables effective, accountable maintenance. Continuous monitoring strengthens confidence in AI-driven decisions by ensuring models stay accurate and stable.
For systems with a major impact on people’s lives, such as those used in risk assessment, the Group adds a further layer of internal checks and validation before they go into production.
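One common way to detect the deviations this kind of monitoring looks for is the Population Stability Index (PSI), a standard drift metric in banking that compares a model’s score distribution at deployment with a recent window. This is a generic sketch of the technique, not BBVA’s actual monitoring tooling; the bin count and thresholds are conventional rules of thumb.

```python
import math

def psi(expected: list[float], actual: list[float], bins: int = 10) -> float:
    """Population Stability Index between a baseline score sample
    (expected) and a recent sample (actual). Rule of thumb:
    < 0.1 stable, 0.1-0.25 worth watching, > 0.25 investigate."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a degenerate range

    def fractions(xs: list[float]) -> list[float]:
        counts = [0] * bins
        for x in xs:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        n = len(xs)
        # Floor at a small epsilon so empty bins do not produce log(0).
        return [max(c / n, 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))
```

In practice a check like this would run on a schedule, with a breach of the upper threshold triggering review and, if needed, retraining of the model.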
Balance between innovation and sound governance
Governance of AI models is vital for managing risks without stifling innovation. BBVA aims to strike a balance between compliance, risk control and the use of AI to improve how it does business.
By applying these strategies, it brings stability to a fast-changing field and allows AI to deliver lasting value for both the bank and society.