
June 2025 Regulatory Update: What MDR, IVDR & the AIA Mean for MedTech

  • Writer: Camilla Costa
  • Jun 11
  • 8 min read

Updated: 4 days ago

In 2025, manufacturers of AI-powered medical devices face a new regulatory reality. The European Union has introduced the Artificial Intelligence Act (AIA). If your AI-enabled device falls under the Medical Device Regulation (MDR) or the In-Vitro Diagnostic Regulation (IVDR), the AIA now applies alongside them, and it must be a core part of your compliance strategy.


If you build or deploy medical device software or in vitro diagnostic software that uses artificial intelligence—especially machine learning—you need to understand how these regulations combine. Devices that fall under the MDR or IVDR and also meet the definition of an AI system are likely to be classified as high-risk AI systems under AIA Article 6(1).


This means you are now subject to a broad set of AI-specific obligations on top of your existing clinical and performance evaluation rules: data governance, risk management, human oversight, transparency, bias mitigation, and cybersecurity. This article outlines the most significant implications and compliance requirements.


AIB 2025-1 / MDCG 2025-6: Interplay between the Medical Devices Regulation (MDR) & In vitro Diagnostic Medical Devices Regulation (IVDR) and the Artificial Intelligence Act (AIA)

Overview of the EU AI Act


The EU AI Act categorises AI systems by their risk potential, with compliance obligations scaled from minimal-risk to high-risk applications. High-risk AI systems face strict requirements, including:


  • Regular risk assessments to identify potential pitfalls

  • Enhanced transparency measures to ensure users understand AI decision-making, and

  • Comprehensive data governance standards to secure user information.


For example, companies developing AI systems used for clinical diagnosis must provide clear documentation about how their algorithms work. This transparency is a significant step toward enhancing accountability and safety in healthcare applications, ultimately leading to improved patient trust.


Organisations utilising high-risk AI systems must conduct extensive testing and validation to ensure their innovations meet these new standards. This level of scrutiny is anticipated to require new investments, likely altering the AI landscape within the medical device industry, where companies might see compliance costs rise by 15-30% as they adapt to the new regulations.




The Integration with MDR 2017/745 and IVDR 2017/746


The EU AI Act intersects strongly with the MDR and IVDR. With the MDR prioritising patient safety and product efficacy, the goals of the EU AI Act align with these principles by reducing the risks associated with AI technology.


Implications for Medical Devices


Medical device manufacturers using AI technology will now be juggling compliance demands from both the EU AI Act and MDR. For instance, AI-enhanced imaging devices that classify disease risk levels must pass through rigorous MDR testing protocols. This dual regulatory focus will compel manufacturers to invest significantly in research, quality assurance, and post-market surveillance—areas expected to see funding increases of 20-40% in the coming years.


Higher stakes come with higher risks, and organisations need to proactively identify potential issues in their AI systems to meet evolving regulations. For example, if an AI device for detecting heart conditions fails to deliver accurate readings in 5% of cases, the regulatory burden increases significantly to ensure that these devices are trustworthy and effective.


Implications for In-vitro Diagnostic Devices


As with devices under the MDR, in-vitro diagnostics that use AI elements must comply with the EU AI Act. These diagnostics play a crucial role in disease detection, and their integration with EU AI standards ensures not only effectiveness but also the ethical deployment of AI technologies.


Companies creating AI-driven diagnostic tools, such as those used for cancer detection, must thoroughly validate their algorithms. These validations are central to meeting patient care standards and regulatory expectations. As an example, a diagnostic tool that inaccurately diagnoses cancer could lead to mismanagement of 10-15% of patients, highlighting the need for transparency and accountability in AI algorithms.


A modern healthcare facility equipped with AI-driven devices and systems.

Challenges in Compliance


Adhering to the EU AI Act, MDR, and IVDR presents opportunities for enhanced safety but also creates challenges.


Resource Allocation


Organisations need to evaluate their current resources and internal structures to comply with these complex regulations. Compliance may require reallocating budgets or increasing training sessions for staff. For smaller companies, this can strain resources since their financial flexibility is often limited. Research indicates that approximately 30% of small to medium-sized companies may struggle with these additional compliance costs.


Data Governance


Data governance is a critical issue. With the need for high data integrity in AI systems and medical devices, organisations must adopt comprehensive data management strategies. An investment in advanced data governance practices could boost AI performance by 25% while ensuring compliance with the GDPR.


Continuous Monitoring and Adaptation


As AI regulation evolves rapidly, organisations must remain flexible. They should maintain regular contact with regulatory bodies and continually train their workforce. Companies might need to adapt practices quarterly or even monthly to keep pace with new compliance requirements.



Opportunities for Innovation


While facing compliance challenges, the EU AI Act and its relationship with MDR and IVDR also create significant potential for innovation in healthcare.


Enhanced Patient Safety


The regulations encourage rigorous testing, leading to more reliable AI systems. Companies can leverage this to develop cutting-edge solutions that improve diagnostic accuracy rates by as much as 40% and streamline workflows, saving an estimated 20% in time spent on patient evaluation.


Collaboration Across Sectors


These regulations promote collaboration across varied sectors, leading to innovative, cross-disciplinary solutions. By facilitating partnerships between technology developers and healthcare providers, organisations can create comprehensive solutions that blend AI prowess with medical expertise to address complex healthcare challenges.


Moving Forward with Confidence


The EU AI Act represents a significant milestone in the governance of AI technologies in healthcare. Its integration with MDR and IVDR creates both challenges and opportunities for organisations navigating these frameworks. By focusing on compliance, stakeholders can not only ensure patient safety but also unleash innovation in medical technology.


The goal of these regulations is clear: to protect patients while enabling the healthcare industry to adopt AI technologies responsibly. As organisations adapt, a commitment to ethical practices and transparent AI deployment will be pivotal in establishing standards for safety, efficacy, and trust across the EU.


A healthcare professional interacting with AI-driven data analytics.

Navigating these regulations may be complex, but with informed strategies, organisations can successfully integrate the EU AI Act while capitalising on AI's potential to transform healthcare.



Is Your Medical Device Considered a High-Risk AI System? You can download our checklist HERE.


A device is high-risk under the AIA if:


  1. It functions as a safety component or is itself a medical device, and

  2. It is subject to a third-party conformity assessment under MDR or IVDR.


This includes most devices classified as MDR Class IIa, IIb, III and IVDR Class B, C, D. It also includes Annex XVI devices (non-medical but regulated products like aesthetic lasers) that incorporate AI.


In-house developed devices used solely within a healthcare institution (e.g., by hospitals under Article 5(5) MDR/IVDR) are excluded—unless they undergo third-party notified body certification.
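
The two conditions above can be read as a simple decision rule. The sketch below encodes them, plus the in-house exemption, exactly as described in this post; it is an illustration of the logic only, not a legal determination tool.

    def is_high_risk_mdai(is_device_or_safety_component,
                          needs_third_party_assessment,
                          in_house_only=False):
        """Mirror the Article 6(1) test described above (illustrative only).

        in_house_only: developed and used solely within one healthcare
        institution under Article 5(5) MDR/IVDR, without notified body
        certification.
        """
        if in_house_only:
            return False
        return is_device_or_safety_component and needs_third_party_assessment

    # An MDR Class IIa AI imaging tool, assessed by a notified body:
    print(is_high_risk_mdai(True, True))                       # True
    # Hospital in-house software never certified by a notified body:
    print(is_high_risk_mdai(True, True, in_house_only=True))   # False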



Expanded Compliance Obligations for AI Medical Devices


Risk and Lifecycle Management


You must treat risk management as a continuous, iterative process. The AIA and MDR/IVDR both require ongoing safety monitoring and performance evaluation across:


  • Design and development

  • Clinical evaluation or performance studies

  • Deployment

  • Post-market updates


For AI systems that continue to learn or adapt, post-market monitoring becomes essential. This includes real-time analysis, detection of system drift, and user feedback loops.
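
As a concrete illustration of drift detection, here is a minimal Python sketch that compares recent model outputs against a validation-time baseline. The threshold and sample values are assumptions for the example; a real system would derive them from the device's validated performance limits.

    import statistics

    def detect_score_drift(reference_scores, recent_scores, threshold=0.1):
        """Flag drift when the mean model output shifts beyond a tolerance.

        reference_scores: outputs recorded during validation (baseline).
        recent_scores:    outputs observed in post-market use.
        threshold:        illustrative tolerance, not a regulatory value.
        """
        baseline = statistics.mean(reference_scores)
        current = statistics.mean(recent_scores)
        return abs(current - baseline) > threshold

    # Example: a recent batch of risk scores has shifted upward.
    if detect_score_drift([0.42, 0.38, 0.45, 0.40], [0.58, 0.61, 0.55, 0.60]):
        print("Drift detected: trigger post-market review and feedback loop.")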



Quality Management Systems (QMS)


You now need a dual-layer QMS:


  • The MDR/IVDR QMS handles device safety, manufacturing control, and clinical validation.

  • The AIA QMS adds requirements for:


    • AI-specific risk controls

    • Logging of system behaviour

    • Oversight procedures

    • Cybersecurity planning

    • Algorithm transparency


Instead of building two separate systems, manufacturers are encouraged to integrate AIA controls into their MDR/IVDR quality processes.
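
One pragmatic way to integrate rather than duplicate is to keep a traceability map from each AIA control to the existing QMS procedure that satisfies it. The sketch below is a hypothetical example; the SOP names and standard references are invented for illustration.

    # Hypothetical traceability map: AIA control -> existing QMS procedures.
    aia_to_qms = {
        "AI-specific risk controls": ["SOP-14 Risk Management (ISO 14971)"],
        "Logging of system behaviour": ["SOP-21 Software Lifecycle (IEC 62304)"],
        "Oversight procedures": ["SOP-09 Usability Engineering (IEC 62366)"],
        "Cybersecurity planning": ["SOP-17 Security Risk Management"],
        "Algorithm transparency": [],  # gap: no procedure mapped yet
    }

    # A gap review then reduces to listing AIA controls with no mapped SOP.
    gaps = [control for control, sops in aia_to_qms.items() if not sops]
    print("Unmapped AIA controls:", gaps or "none")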



Data Governance for AI in Medical Devices


Training data, validation data, and testing data must be:


  • Relevant, representative, and statistically robust

  • Free from errors and bias

  • Aligned with the intended use and target population

  • Transparent in terms of collection methods and data provenance


This applies to data used for machine learning, deep learning, and any AI training pipelines. You must also monitor data quality over time to ensure ongoing performance.


You must implement:

  • Bias mitigation strategies

  • Data logging and traceability

  • Secure storage and processing

  • Detailed documentation


AIA-compliant data governance is now part of your regulatory burden, particularly when AI outputs are used to inform diagnosis or clinical decisions.
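
As an illustration of what "representative" can mean operationally, the sketch below checks subgroup shares in a training set against the intended target population. The subgroups, population shares, and tolerance are assumed values for the example.

    from collections import Counter

    def check_representation(records, expected_shares, tolerance=0.05):
        """Compare subgroup shares in the data with the intended population."""
        counts = Counter(r["subgroup"] for r in records)
        total = len(records)
        issues = []
        for group, expected in expected_shares.items():
            actual = counts.get(group, 0) / total
            if abs(actual - expected) > tolerance:
                issues.append(f"{group}: {actual:.2f} vs expected {expected:.2f}")
        return issues

    data = ([{"subgroup": "18-40"}] * 70 + [{"subgroup": "41-65"}] * 20
            + [{"subgroup": "65+"}] * 10)
    target = {"18-40": 0.40, "41-65": 0.35, "65+": 0.25}  # assumed population
    for issue in check_representation(data, target):
        print("Representation gap:", issue)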



Transparency, Explainability, and Human Oversight


If a medical device incorporates AI that influences clinical outcomes or treatment decisions, the AIA mandates human oversight to ensure accountability and safety.


You must provide:

  • Clear instructions for use

  • Documentation of how the AI system works

  • Training for medical professionals and deployers

  • Interface features that allow operators to pause, override, or question AI outputs


All high-risk AI systems must include:

  • Log files

  • Explanation mechanisms

  • Clear user interfaces

  • Defined intervention protocols


This supports trustworthy AI and aligns with obligations to safeguard fundamental rights, such as patient safety, privacy, and non-discrimination.
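
To make the oversight requirement concrete, here is a minimal sketch of a human-in-the-loop gate: the AI suggestion is logged alongside the clinician's decision, and nothing is acted on without an explicit human choice. The decision states and log format are illustrative assumptions.

    import json, time

    def review_ai_output(ai_suggestion, clinician_decision, log_path="audit.log"):
        """Record the AI suggestion and the human decision side by side.

        clinician_decision: "accept", "override", or "pause" (illustrative).
        Returns the output that is actually acted upon, if any.
        """
        entry = {"timestamp": time.time(),
                 "ai_suggestion": ai_suggestion,
                 "clinician_decision": clinician_decision}
        with open(log_path, "a") as f:  # append-only audit trail
            f.write(json.dumps(entry) + "\n")
        if clinician_decision == "accept":
            return ai_suggestion
        return None  # overridden or paused: no automated action taken

    result = review_ai_output({"finding": "high risk", "score": 0.91}, "override")
    print("Acted on:", result)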


Clinical and Performance Evaluation for AI-Powered Devices


The AIA introduces:

  • Requirements for accuracy, robustness, and cybersecurity

  • Obligations to test AI systems against defined performance metrics

  • Procedures for real-world testing prior to placing on the market, where authorised


MDR/IVDR already require:

  • Clinical evaluation reports (for MDR)

  • Performance evaluation studies (for IVDR)

  • Usability testing and clinical benefit analysis


When combined with AIA, manufacturers must now also validate:

  • System behaviour over time

  • AI learning models

  • Ability to meet performance targets without infringing fundamental rights
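
As a sketch of testing against defined performance metrics, the snippet below computes sensitivity and specificity on a labelled test set and compares them with pre-declared acceptance targets. The targets are assumptions for the example, not regulatory thresholds.

    def evaluate(predictions, labels, min_sensitivity=0.90, min_specificity=0.85):
        """Compare observed sensitivity/specificity with declared targets."""
        pairs = list(zip(predictions, labels))
        tp = sum(p == 1 and y == 1 for p, y in pairs)
        fn = sum(p == 0 and y == 1 for p, y in pairs)
        tn = sum(p == 0 and y == 0 for p, y in pairs)
        fp = sum(p == 1 and y == 0 for p, y in pairs)
        sensitivity = tp / (tp + fn)
        specificity = tn / (tn + fp)
        passed = sensitivity >= min_sensitivity and specificity >= min_specificity
        return sensitivity, specificity, passed

    preds = [1, 1, 0, 1, 0, 0, 1, 0, 1, 0]
    truth = [1, 1, 0, 1, 0, 1, 1, 0, 1, 0]
    sens, spec, ok = evaluate(preds, truth)
    print(f"Sensitivity {sens:.2f}, specificity {spec:.2f}, meets targets: {ok}")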



Cybersecurity for Medical AI Systems


Cybersecurity is an essential requirement under all three regulations. You must:


  • Prevent unauthorised access or manipulation of models

  • Secure training data and AI pipelines

  • Monitor vulnerabilities in real-time

  • Design AI algorithms to resist manipulation and adversarial attacks


Security applies not just to the software but to the AI model, training sets, API interfaces, and any adaptive behaviour the system exhibits post-market.
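
One small but concrete control from this list is verifying model integrity before loading, so that a tampered artifact never reaches production. The sketch below compares a model file's SHA-256 digest with the value recorded at release; the file name and digest are placeholders.

    import hashlib

    def verify_model_integrity(model_path, expected_sha256):
        """Refuse to load a model whose digest does not match the release record."""
        h = hashlib.sha256()
        with open(model_path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        if h.hexdigest() != expected_sha256:
            raise RuntimeError("Model failed integrity check: do not load.")

    # Usage with placeholder values recorded in the release documentation:
    # verify_model_integrity("model_v1.2.bin", "<sha256-recorded-at-release>")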



Conformity Assessment and Technical Documentation


For high-risk medical device AI (MDAI), conformity assessment involves:


  • Full QMS audit (by a notified body)

  • Technical documentation review

  • Product-specific sampling rules (based on classification)


AIA Annex VII and MDR/IVDR Annex II & III must be satisfied. The AIA allows a single technical documentation file to cover both regulations.


All design decisions, training data sets, testing results, and system outputs must be documented. You must also show evidence of:


  • Explainability

  • Logging systems

  • Risk assessments

  • Bias mitigation efforts



What Counts as a Substantial Modification?


After August 2027, any significant change in the AI system will require a new conformity assessment—unless it was declared in your Pre-Determined Change Control Plan.


If your system adapts over time (e.g. through machine learning), these plans must:


  • Be included in the initial certification stage

  • Cover model retraining and real-world adaptation

  • Define performance limits and change boundaries


Without a plan, any meaningful update to a deployed AI model could trigger re-certification.
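
To illustrate how such a plan might be operationalised, the sketch below gates a retrained model against pre-declared performance limits before deployment; any result outside the declared envelope is treated as a substantial modification. The boundary values are assumptions for the example.

    # Pre-declared change boundaries from a hypothetical change control plan.
    PCCP_BOUNDS = {"sensitivity": (0.90, 1.00), "specificity": (0.85, 1.00)}

    def within_pccp(metrics, bounds=PCCP_BOUNDS):
        """True only if every metric stays inside its declared envelope.
        Falling outside means the update needs a new conformity assessment
        rather than silent deployment."""
        return all(lo <= metrics[name] <= hi for name, (lo, hi) in bounds.items())

    retrained = {"sensitivity": 0.93, "specificity": 0.82}  # example results
    if within_pccp(retrained):
        print("Within the declared envelope: deploy under existing certificate.")
    else:
        print("Outside the declared envelope: treat as substantial modification.")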



Post-Market Surveillance and Monitoring


AIA Article 72 mandates post-market monitoring for high-risk AI, in alignment with MDR/IVDR. This includes:


  • AI-specific performance tracking

  • Bias detection after deployment

  • Feedback loops with deployers

  • Detection of interactions with other systems


A standard PMS template will be published by February 2026. You can integrate AIA monitoring requirements into your MDR/IVDR surveillance plan to streamline operations.
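
As an illustration of post-deployment bias detection, the sketch below compares error rates across patient subgroups in field data and flags any spread above an assumed tolerance. The subgroups and the tolerance are illustrative only.

    def subgroup_error_rates(cases):
        """cases: list of (subgroup, prediction, label) tuples from field use."""
        stats = {}
        for group, pred, label in cases:
            errs, total = stats.get(group, (0, 0))
            stats[group] = (errs + (pred != label), total + 1)
        return {g: errs / total for g, (errs, total) in stats.items()}

    def flag_bias(rates, max_gap=0.10):
        """Flag when best and worst subgroup error rates diverge too far."""
        return max(rates.values()) - min(rates.values()) > max_gap

    field_data = [("group A", 1, 1), ("group A", 0, 0), ("group A", 1, 1),
                  ("group B", 0, 1), ("group B", 1, 1), ("group B", 0, 1)]
    rates = subgroup_error_rates(field_data)
    print("Error rates by subgroup:", rates)
    print("Bias review required:", flag_bias(rates))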



Final Takeaways for Medical AI Manufacturers


  • If your device utilises AI and falls under MDR or IVDR, it is likely to be considered high-risk under the AIA.

  • You must integrate AI-specific controls into your quality systems, risk files, and documentation.

  • Start building AIA compliance now, well ahead of the August 2027 enforcement date.

  • Focus on data governance, transparency, cybersecurity, human oversight, and post-market monitoring.

  • Align your conformity assessment process to cover both MDR/IVDR and AIA together.


Medical AI regulation in Europe is no longer a future concern—it’s today’s strategic priority.


Complying early is not just about avoiding penalties or ensuring a smooth process; it’s about earning trust, scaling safely, and staying competitive in a regulated AI health market.


Sign up for our newsletter for more.



