AI Act: data governance and compliance strategy implications in Pharma
Posted: 7 August 2025 | Patrice Navarro (Clifford Chance)
In this article, Patrice Navarro, Tech Partner (Healthcare & Life Sciences Sector), Clifford Chance, discusses the AI Act and the related compliance considerations for pharmaceutical companies in the EU.


The AI Act substantially raises the compliance bar for pharmaceutical companies using AI, particularly in high-risk applications. It compels organisations to rethink data governance, prioritise traceability and bias mitigation, harmonise compliance across existing regulations, and leverage new technologies for proactive adherence. This should be viewed not only as a challenge, but also as an opportunity to strengthen trust and spur innovation.
The AI Act establishes a clear, risk-based regulatory framework, categorising AI systems according to their potential impact on health, safety, and fundamental rights. Specifically, it identifies certain AI systems as high-risk if they are intended to be used as safety components of products already covered by existing EU regulations, such as the Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR).1
Pharmaceutical AI applications that serve medical purposes, such as diagnostic algorithms, patient monitoring tools, and clinical decision-support systems, fall directly under these regulated categories, making them automatically classified as high-risk under Article 6(1) and Annex I of the AI Act. This classification triggers stringent requirements for pharmaceutical companies, compelling them to adhere rigorously to detailed obligations regarding data governance, algorithmic transparency, human oversight, accountability mechanisms, and lifecycle management throughout the development and deployment phases.
This article traces the compliance journey, from initial industry responses to the AI Act, through data‑sharing protocols, merger due‑diligence considerations, infrastructure investments, internal governance structures and, finally, the strategic advantages of Europe’s cohesive regulatory framework.
Compliance realities and industry responses
Pharmaceutical companies have responded unevenly to the AI Act. Smaller firms have shown agility, swiftly integrating compliance measures into their AI initiatives. Larger companies, typically burdened by legacy governance structures, have approached compliance more cautiously, resulting in mixed outcomes for AI-driven tools, especially diagnostic algorithms and clinical trial recruitment platforms.
Given the inherent complexity of AI technologies and the sensitivity of healthcare data, pharmaceutical companies face significant uncertainties regarding compliance implementation and expect more guidance from the authorities.
Recognising this complexity, France’s data protection authority, the CNIL, announced in July that it will publish guidance tailored specifically to AI deployment in healthcare. Developed in collaboration with the Haute Autorité de Santé (HAS), a French public authority responsible for improving the quality of healthcare services and setting clinical guidelines, the guidance aims to clarify the intricate rules governing healthcare-focused AI systems, explicitly aligning AI Act requirements with the GDPR.2 Its publication will represent a critical step toward providing operational clarity and practical direction for pharmaceutical companies navigating this challenging regulatory environment.
Compliance with dual regulations necessitates considerable investments in specialised legal and technical expertise. Proactive alignment with emerging guidance, such as the CNIL’s healthcare-specific recommendations, offers potential relief by clarifying compliance requirements and reducing the risk of regulatory divergence.
Data sharing and collaboration
Deploying pharmaceutical AI solutions for real-world evidence, decision support and personalised medicine depends on access to large, heterogeneous datasets drawn from multiple jurisdictions. These cross-border data flows must satisfy both the GDPR and the AI Act, which often impose overlapping and sometimes divergent duties. Successful collaboration therefore hinges on dynamic, well-documented protocols for anonymisation, consent, role allocation, transparency and ongoing compliance, all balanced against the pace of AI-driven innovation:
- Anonymisation versus technical utility: Only data that have been irreversibly anonymised fall outside the scope of the GDPR.3 Pseudonymised data remain regulated because re-identification risk persists. AI developers, meanwhile, need detailed metadata to audit bias and validate models. Protocols must therefore balance privacy and utility, document residual risk and schedule periodic reviews as linkage techniques evolve.
- Consent and secondary use: The GDPR requires informed, granular, revocable consent tied to a specific purpose.4 AI projects frequently anticipate secondary or future uses, so broad scientific consent remains controversial across member states. The AI Act adds explicit transparency duties, demanding that data subjects understand when AI is processing their data and producing outputs.5 Effective frameworks combine detailed patient notices with flexible consent terms and live opt-out mechanisms spanning the entire AI lifecycle.
- Documentation, roles and legal bases: Both laws insist on exhaustive records of data provenance, processing flows and algorithmic logic. When data move across borders, contracts must spell out the responsibilities of controllers and processors, define notification duties and address jurisdiction. Additional legal bases are often needed to govern the reuse and sharing of AI-generated outputs and real-world feedback data, aligning scientific aims with patient rights. A minimal sketch of such a record follows this list.
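By way of illustration, the following minimal Python sketch shows one way such a data-sharing record might be structured. Every field name, and the review interval, is a hypothetical assumption for the example; neither the GDPR nor the AI Act prescribes a schema, only that provenance, roles, legal bases and consent status be documented and kept current.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DatasetSharingRecord:
    """Illustrative record of one cross-border dataset transfer.

    All field names are hypothetical: the regulations require the
    documentation, not this particular layout.
    """
    dataset_id: str
    source_jurisdiction: str           # e.g. "FR"
    destination_jurisdiction: str      # e.g. "DE"
    controller: str                    # GDPR role allocation
    processor: str
    legal_basis: str                   # e.g. "Art. 6(1)(a) GDPR - consent"
    consent_scope: str                 # the specific purpose consented to
    anonymisation_status: str          # "anonymised" or "pseudonymised"
    last_risk_review: date             # last re-identification risk review
    opted_out_subjects: set[str] = field(default_factory=set)

    def register_opt_out(self, subject_id: str) -> None:
        """Live opt-out spanning the AI lifecycle: flag the subject so
        downstream training and inference pipelines can exclude them."""
        self.opted_out_subjects.add(subject_id)

    def review_due(self, today: date, interval_days: int = 180) -> bool:
        """Schedule periodic reviews as linkage techniques evolve."""
        return (today - self.last_risk_review).days >= interval_days
```

Keeping opt-outs and review dates in the same record as the legal basis is the design point: it lets a single artefact answer both GDPR accountability questions and AI Act traceability questions for each transfer.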
AI regulation impact on M&A activities
GDPR and AI Act compliance now explicitly shape due diligence in pharmaceutical mergers and acquisitions, particularly within late-stage pharma deals. Due diligence increasingly scrutinises datasets’ origins, consent validity, data governance maturity, transparency mechanisms, and intellectual property documentation. Proper documentation proving human inventorship in AI innovations is now critical, with IP considerations explicitly linked to data governance strategies.6
Specialised external counsel is often required to navigate these complex intersections of compliance and IP risks effectively.
Planning for this, pharmaceutical companies now routinely negotiate vendor contracts to include detailed clauses regarding data ownership, transparency, inventorship documentation, and AI monitoring obligations.
Infrastructure investment in AI development and usage
Pharmaceutical companies are moving beyond general IT infrastructure to specialised facilities, commonly called ‘AI factories’.7 These purpose-built environments support computational, regulatory, and operational demands specific to pharmaceutical AI applications. AI factories optimise high-performance computing and AI workloads, incorporating GPU clusters, scalable storage, and advanced networking for accelerated model training and inference.
AI factories8 enable continuous output generation such as diagnostic analytics, digital biomarkers, and patient stratification models, essential for R&D, manufacturing, and clinical operations. Their design addresses strict regulatory and security demands, including secure data segregation, tailored access controls, and audit readiness. Crucially, these facilities support compliant validation aligned with regulatory standards where AI directly impacts clinical or therapeutic decisions.
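As an illustration of how tailored access controls and audit readiness might look at the application layer of such a facility, here is a minimal Python sketch. The roles, partition names and log format are assumptions for the example, not a description of any particular vendor's 'AI factory'.

```python
import logging
from datetime import datetime, timezone

# Hypothetical role-to-partition mapping enforcing data segregation:
# each role may only touch the partitions listed for it.
ROLE_PERMISSIONS = {
    "clinical_ml_engineer": {"trial_data_pseudonymised"},
    "manufacturing_analyst": {"process_telemetry"},
    "external_auditor": {"audit_logs"},
}

logging.basicConfig(level=logging.INFO)
audit_log = logging.getLogger("ai_factory.audit")

def access_dataset(user: str, role: str, partition: str) -> bool:
    """Grant or deny access to a segregated data partition, writing an
    audit entry either way so every attempt is reconstructable later."""
    allowed = partition in ROLE_PERMISSIONS.get(role, set())
    audit_log.info(
        "%s | user=%s role=%s partition=%s decision=%s",
        datetime.now(timezone.utc).isoformat(), user, role, partition,
        "GRANT" if allowed else "DENY",
    )
    return allowed

# A clinical ML engineer may read pseudonymised trial data...
assert access_dataset("jdoe", "clinical_ml_engineer", "trial_data_pseudonymised")
# ...but not manufacturing telemetry held in a separate partition.
assert not access_dataset("jdoe", "clinical_ml_engineer", "process_telemetry")
```

Logging denials as well as grants is deliberate: audit readiness means every access attempt, not only the successful ones, can be reconstructed during a regulatory inspection.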
Governance challenges within pharmaceutical companies
We have seen that pharmaceutical companies face significant internal governance challenges due to AI’s multidisciplinary impact, spanning IT, business strategy, compliance, ESG, liability, and long-term strategic considerations. Decision-making authority and accountability for AI initiatives are often unclear, complicating internal validation processes and risk management.
Legal experts increasingly advocate a structured, three-tiered governance framework as follows:
- An AI standing committee comprising cross-functional specialists to handle operational AI issues and report to senior leadership
- A strategic executive committee (including the Chief AI Officer and Chief Legal Officer) responsible for approving significant AI projects and managing associated risks
- Board-level oversight, providing structured reporting and enforcing accountability for AI risks.
Implementing this governance structure clarifies internal responsibilities and significantly improves the management of AI-related operational and legal risks.
Regulatory coherence as strategic advantage
The coherence among the AI Act, GDPR, and related regulations provides measurable advantages for EU-based pharmaceutical innovators. Regulatory alignment across the EU offers tangible compliance clarity and predictability, reducing legal fragmentation and associated operational complexity.9 Moreover, Europe’s stringent regulatory environment enhances credibility and trust in pharmaceutical AI solutions globally.
Despite criticism of its regulatory rigidity, stringent compliance practice in healthcare fundamentally supports ethical standards, directly underpinning patient and public trust, which is essential to pharmaceutical companies’ long-term commercial viability.
Strategic recommendations
To meet the AI Act while sustaining innovation, pharmaceutical companies should embed the following practices:
- Implement a three-tier governance model that assigns operational responsibility to an AI committee, strategic control to an executive committee, and ultimate oversight to the board
- Build privacy by design into every data flow, combining robust anonymisation techniques, dynamic consent management and transparent patient communication
- Safeguard data quality through documented provenance checks, bias testing and periodic re‑assessment of residual re‑identification risk (a worked example follows this list)
- Align intellectual property and data strategies by recording human inventorship, negotiating clear ownership of training data and model outputs, and updating contracts accordingly
- Detail liability allocation in internal policies and vendor agreements, supported by explainable AI methods that make decision logic auditable
- Invest in secure, high-performance infrastructure that supports compliant model training, validation and continuous monitoring across jurisdictions
- Maintain exhaustive documentation of data sources, processing steps and algorithm performance to satisfy both GDPR and AI Act transparency duties and to streamline merger or partnership due diligence.
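To ground the re-identification re-assessment recommended above, the following is a minimal, deliberately simplified Python sketch that computes k-anonymity over a set of quasi-identifiers. The quasi-identifier choices and the cohort data are illustrative assumptions, not a prescribed methodology; a production assessment would combine several metrics with expert review.

```python
from collections import Counter

def k_anonymity(records: list[dict], quasi_identifiers: list[str]) -> int:
    """Return the smallest equivalence-class size over the given
    quasi-identifiers: a dataset is k-anonymous if every combination
    of quasi-identifier values is shared by at least k records."""
    groups = Counter(
        tuple(r[q] for q in quasi_identifiers) for r in records
    )
    return min(groups.values())

# Toy pseudonymised cohort; age band, region and sex are the assumed
# quasi-identifiers an attacker could link to external sources.
cohort = [
    {"age_band": "40-49", "region": "FR-IDF", "sex": "F"},
    {"age_band": "40-49", "region": "FR-IDF", "sex": "F"},
    {"age_band": "50-59", "region": "FR-ARA", "sex": "M"},
    {"age_band": "50-59", "region": "FR-ARA", "sex": "M"},
]

k = k_anonymity(cohort, ["age_band", "region", "sex"])
print(f"k = {k}")  # k = 2: each combination appears at least twice
# A periodic review might flag any k below a documented threshold for
# remediation (coarser banding, suppression) before further sharing.
```

A falling k over successive reviews is exactly the signal the anonymisation protocols above are meant to catch as linkage techniques evolve.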
About the author


Patrice Navarro is Tech Partner in Clifford Chance’s Healthcare & Life Sciences Sector. Patrice focuses on data privacy, cybersecurity, tech transactions, and health data matters. He has extensive experience in all technology-related issues, particularly in contract drafting and solving privacy challenges across the Healthcare and Life Sciences sector.
References
1. AI Act, Regulation (EU) 2024/1689, Article 6(1)(a)-(b) and Annex I (listing the MDR, IVDR and other relevant regulations).
2. AI: CNIL finalizes its recommendations on the development of AI systems and announces its future work. [Internet] CNIL. 2025. Available from: https://www.cnil.fr/fr/ia-finalisation-recommandations-developpement-des-systemes-ia
3. GDPR, Regulation (EU) 2016/679, Recital 26.
4. GDPR, Regulation (EU) 2016/679, Article 7.
5. EDPB Opinion 28/2024 on certain data protection aspects related to the processing of personal data in the context of AI models, adopted 17 December 2024.
6. European Patent Office, Guidelines for Examination, 2025 revision.
7. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee and the Committee of the Regions: AI Continent Action Plan.
8. AI Factories. [Internet] European Commission. 2025. Available from: https://digital-strategy.ec.europa.eu/en/policies/ai-factories
9. Artificial Intelligence in Healthcare. [Internet] European Commission. Available from: https://health.ec.europa.eu/ehealth-digital-health-and-care/artificial-intelligence-healthcare_en#eu-legislation-shaping-ai-in-healthcare