
EU AI Act and Its Application in Developing Rare Disease Therapies for Neurological Disorders

  • Writer: Camilla Costa
  • Jul 27
  • 5 min read

Overview

The EU AI Act (Artificial Intelligence Act, Regulation (EU) 2024/1689), which entered into force on 1 August 2024, establishes a harmonised framework for regulating AI systems. It affects the use of AI in medical settings, particularly where it intersects with the EU Medical Device Regulation (MDR) and the In Vitro Diagnostic Regulation (IVDR).


Innovative approaches to neurological disease treatment are highlighted in this digital illustration, which presents a conceptual representation of brain wave modulation and neural network connectivity.

1. Neurological AI Systems as Medical Devices


Dual Regulation and High-Risk Classification

AI systems used in the diagnosis, monitoring, or treatment of rare neurological disorders generally qualify as medical devices, including software-aided diagnostics and AI-enabled therapeutic devices. When the AI system is itself the medical device, or functions as a safety component of one, it is classified as a high-risk AI system under the AI Act.


Timeline and Transitional Rules

Devices already on the EU market before 2 August 2026 may remain under the MDR alone unless they undergo substantial modifications. From that date onward, new or substantially modified AI-based medical devices must comply with both the MDR/IVDR and the AI Act, with full AI Act compliance required by 2 August 2027.


Conformity and Technical Documentation

High-risk AI systems must undergo a conformity assessment involving a Notified Body. A single EU Declaration of Conformity can cover both MDR and AI Act requirements. Technical documentation must include elements such as risk management, dataset governance, bias mitigation strategies, explainability, and cybersecurity protocols.


Key AI Act Requirements for Medical AI

  • Risk management procedures to assess safety and rights impacts

  • Transparency and interpretability of AI outputs

  • Human oversight mechanisms (a minimal sketch follows this list)

  • Incident and breach reporting

  • Staff training on AI safety and ethical use
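
To make the human-oversight point concrete, here is a minimal sketch of how a review gate might be wired into an AI-assisted workflow. The Prediction type, the triage function, and the 0.90 confidence threshold are illustrative assumptions, not anything prescribed by the AI Act.

```python
# Illustrative sketch only: the AI Act requires human oversight for high-risk
# systems but does not prescribe an implementation. All names and the 0.90
# threshold here are assumptions.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # model's self-reported confidence, 0.0-1.0

def triage(prediction: Prediction, review_threshold: float = 0.90) -> str:
    """Route low-confidence outputs to a human reviewer instead of acting on them."""
    if prediction.confidence < review_threshold:
        return "escalate_to_clinician"   # human-in-the-loop review
    return "present_with_explanation"    # still shown to a clinician, never auto-applied

print(triage(Prediction(label="lesion_suspected", confidence=0.72)))
# -> escalate_to_clinician
```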


2. Neurological AI Tools Outside Medical Device Scope


Research and Non-Clinical Use

Some AI systems in neurology, such as research tools or non-clinical prediction models, do not meet the criteria for classification as medical devices. These tools fall under different risk categories in the AI Act:

  • Limited-risk AI systems must inform users of AI interaction and maintain basic documentation

  • Minimal-risk systems have no specific AI Act obligations beyond general compliance


Research Exemption

AI systems used exclusively for scientific research or pre-market testing, without deployment or influence on real-world patient decisions, may be exempt from the AI Act. However, once such systems process personal data or are used in applied settings, they are subject to transparency and documentation requirements.


3. Special Considerations for Rare Neurological Therapies


Data Challenges

Rare diseases often involve limited data, which complicates the AI Act’s requirement for robust and representative datasets. Developers must plan for bias mitigation and document dataset constraints throughout the development process.


Neuroethical and Privacy Considerations

Neurotechnologies, such as brain-computer interfaces and implantable AI, raise concerns regarding autonomy, privacy, and cognitive rights. The AI Act enforces stricter conditions for transparency, bias reduction, and data protection in these contexts.


Regulatory Integration

Developers must align MDR and AI Act documentation and compliance efforts. Regulatory guidance from EU authorities, including technical specifications and FAQs, supports integrated conformity approaches for neuro-AI systems.


4. General-Purpose AI Used in Neurology


General-purpose AI models that are fine-tuned for neurological research or clinical purposes must comply with additional transparency requirements. Providers must publish a summary of the training data used for these models. A voluntary code of practice provides a framework for effectively managing these obligations.


Summary Table

| Use Case | MDR/IVDR Scope | High-Risk AI | Key AI Act Requirements |
|---|---|---|---|
| AI-enabled neuro device | Yes | Yes | Full conformity assessment, integrated documentation, risk management, transparency, incident reporting, training |
| AI clinical decision support software | Yes | Yes | Same as above |
| Research-only neuro-AI system | No | Possibly limited-risk | Transparency, documentation; may be exempt |
| Administrative AI for neurologists | No | Minimal-risk | No specific requirements |
| General-purpose AI fine-tuned for neurology | No (unless used clinically) | Possibly high-risk | Publish training data summary, consider adopting code of practice |

Key Takeaways

  1. AI systems used for clinical diagnosis or treatment in neurology are regulated under both MDR/IVDR and as high-risk AI systems.

  2. Developers should integrate AI Act documentation with MDR files to streamline the assessment process.

  3. AI systems used solely for research may be exempt, but they must still follow transparency rules if deployed.

  4. Addressing data limitations and mitigating bias are essential for rare disease applications.

  5. Use of general-purpose AI models triggers training data summary obligations under Article 53.

  6. Adoption of the voluntary code of practice can help meet expectations and reduce regulatory risk.



Specific Articles of Impact – EU AI Act


Article 6: Definition of High-Risk AI Systems

Defines the scope and criteria of high-risk AI systems—including medical AI devices—based on intended use and sector.


Article 53: Training Data Summary Publication

General-purpose AI providers must publish summaries of the data used to train their models.


Article 113(c): Implementation Timeline

Specifies that high-risk AI systems, including those used in medical contexts, must comply with the regulation by 2 August 2027.


Transparency and Labelling Obligations

Users must be clearly informed that they are interacting with an AI system—especially for systems listed in Annex III (high-risk use cases).


Practical Guidance: Implementation Examples

To help bridge the gap between regulatory requirements and day-to-day implementation, here are some practical approaches for organisations developing AI-enabled neurological therapies under the EU AI Act:


1. Bias Mitigation in Small Rare Disease Datasets

  • Use data augmentation strategies (e.g., synthetic data generation) to improve representation (see the sketch after this list).

  • Clearly document limitations in the technical file and explain the clinical justification for narrow data.

  • Engage early with ethics committees or patient representatives to validate the fairness of inclusion.
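
As one illustration of the augmentation point above, the sketch below uses SMOTE from the open-source imbalanced-learn library to oversample a toy rare-disease cohort. Whether synthetic oversampling is clinically defensible for a given dataset is exactly the kind of judgment that should itself be documented in the technical file.

```python
# Illustrative sketch: synthetic minority oversampling on a toy dataset.
# The data here is random; the choice of technique and its limits belong
# in the technical documentation.
import numpy as np
from imblearn.over_sampling import SMOTE  # pip install imbalanced-learn

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 4))            # 100 patients, 4 features
y = np.array([1] * 10 + [0] * 90)        # rare condition: only 10% positives

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)
print(f"before: {np.bincount(y)}, after: {np.bincount(y_res)}")
# before: [90 10], after: [90 90]
```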


2. Explainability in Neurodiagnostic Tools

  • Implement visual explainability tools (e.g., heat maps in AI-based imaging) to aid clinician understanding (a minimal sketch follows this list).

  • Offer structured output summaries that break down each variable's contribution to the AI decision.

  • Document how explainability was tested with end users during design validation (Annex VII MDR).
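
For illustration, the following PyTorch sketch computes a gradient-saliency heat map, one common visual-explainability technique. The toy model and random input are placeholders, not a validated diagnostic network.

```python
# Minimal gradient-saliency sketch (one of many explainability techniques);
# the model and input are placeholders for demonstration only.
import torch
import torch.nn as nn

def saliency_map(model: nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Per-pixel importance: gradient of the target score w.r.t. input pixels."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image.unsqueeze(0))[0, target_class]
    score.backward()
    return image.grad.abs().amax(dim=0)  # collapse channels -> (H, W) heat map

toy_model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
heat = saliency_map(toy_model, torch.rand(3, 32, 32), target_class=1)
print(heat.shape)  # torch.Size([32, 32])
```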


3. Unified MDR and AI Act Documentation

  • Integrate AI risk assessments (Article 9 AI Act) into your MDR Annex I GSPR file (a hypothetical record schema follows this list).

  • Align incident reporting and post-market surveillance procedures under both frameworks.

  • Use a single clinical evaluation report that includes AI performance metrics and human oversight data.
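
A hypothetical record type such as the one below can help keep MDR and AI Act references aligned in a single technical-file index. The field names and example references are assumptions for illustration, not a template published by any EU authority.

```python
# Hypothetical schema sketch: one record type a unified MDR / AI Act
# technical-file index might use. Field names are assumptions.
from dataclasses import dataclass, field

@dataclass
class RiskRecord:
    hazard: str                      # e.g. "misclassification of seizure type"
    ai_act_ref: str                  # e.g. "AI Act Art. 9 risk management"
    mdr_gspr_ref: str                # e.g. "MDR Annex I GSPR 17.2"
    mitigations: list[str] = field(default_factory=list)
    residual_risk_acceptable: bool = False

record = RiskRecord(
    hazard="model confidence miscalibrated on paediatric EEG data",
    ai_act_ref="AI Act Art. 9(2)(a)",
    mdr_gspr_ref="MDR Annex I GSPR 17.2",
    mitigations=["recalibration on paediatric subset", "human review gate"],
)
print(record.hazard, "->", record.mitigations)
```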


4. Transparent Use of General-Purpose AI in Neuro Interfaces

  • Publish a model training summary following the Commission’s official template (Annex XII AI Act).

  • When fine-tuning a general-purpose model, log all fine-tuning datasets and transformations used (see the sketch after this list).

  • Clearly label the system in the interface to alert users to its AI-driven nature (Article 13).
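
One simple way to implement the logging point above is an append-only provenance log. The JSONL layout and function below are assumptions: the AI Act requires the information to be traceable, not this particular format.

```python
# Illustrative provenance log for fine-tuning datasets: an append-only JSONL
# record of what data went in and how it was transformed. Layout is an assumption.
import hashlib
import json
from datetime import datetime, timezone

def log_finetuning_dataset(path: str, name: str, transformations: list[str],
                           logfile: str = "finetune_provenance.jsonl") -> None:
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()  # content fingerprint
    entry = {
        "dataset": name,
        "sha256": digest,
        "transformations": transformations,
        "logged_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(logfile, "a") as log:
        log.write(json.dumps(entry) + "\n")

# Example (hypothetical file):
# log_finetuning_dataset("eeg_cohort_v3.csv", "EEG cohort v3",
#                        ["de-identification", "resample to 256 Hz"])
```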


5. Training and User AI Literacy Programs

  • Develop e-learning modules tailored to AI-specific risks in neuro-interventions.

  • Include examples of AI error types, data drift signs (see the drift-check sketch after this list), and safe handover to human decision-making.

  • Maintain staff competency records to demonstrate compliance under Articles 10 and 17.
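
As a concrete example of watching for data drift, the sketch below compares a production feature distribution against its training reference using a two-sample Kolmogorov-Smirnov test from SciPy. The alert threshold and the single-feature framing are simplifying assumptions.

```python
# Illustrative drift check: flag a feature whose live distribution has shifted
# away from the training reference. Threshold and framing are assumptions.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
train_feature = rng.normal(loc=0.0, scale=1.0, size=5000)  # reference data
live_feature = rng.normal(loc=0.4, scale=1.0, size=500)    # shifted in production

stat, p_value = ks_2samp(train_feature, live_feature)
if p_value < 0.01:
    print(f"possible drift (KS={stat:.3f}, p={p_value:.2e}) -> flag for human review")
```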


Final Thoughts

For developers working in neurology and rare diseases, the EU AI Act introduces a new layer of regulation on top of the MDR/IVDR. Ensuring compliance will require strategic integration of AI transparency, bias safeguards, incident reporting, and training into your existing QMS and regulatory framework. Leveraging MDCG/AIB FAQs, standardised templates (especially for GPAI data disclosure), and the voluntary Code of Practice will help streamline this integration.

Don't Forget

If you're looking to navigate this rapidly evolving space, align your regulatory strategy, technical design, and go-to-market plan with how early movers in this field are structuring theirs.

 
