Opportunity and challenge: Regulatory requirements for AI-based medical devices
August 9, 2024
Imagine a world where diagnosis and treatment are more precise and tailored than ever before, where life-saving decisions are made in fractions of a second and complex medical data is transformed into meaningful insights. This vision is becoming a reality through the integration of artificial intelligence (AI) into medical technology. AI has the potential to revolutionize healthcare and improve the lives of millions of people - but only if it is used safely and reliably.
The safety of AI-based medical devices is paramount. From early disease detection and personalized medicine to the automation of administrative tasks, the potential applications of AI are almost limitless. However, these possibilities come with great responsibility: AI must meet strict regulatory requirements to ensure that it is both effective and safe for patients and users.
When do we actually speak of artificial intelligence? Is every algorithm AI?
The EU AI Act defines an AI system in Article 3 (1) as follows:
(1) 'AI system' means a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments;
If we break down this definition, asking whether every algorithm fulfills it, we can extract the following elements:
- It is a machine-based system designed to operate with varying levels of autonomy
- After deployment, it may exhibit adaptiveness (but does not have to; note the word "may")
- From the input it receives, it infers how to generate its outputs
- These outputs can be predictions, content, recommendations, or decisions, which can influence both physical environments (e.g., by controlling a device) and virtual environments (e.g., through adjustments in software).
In short, an AI system is a machine-based system that works with a degree of autonomy and, from the information it receives, derives decisions or content that can influence our environment. A core element is that it infers its outputs from the data provided rather than following simple if-else sequences. This clearly distinguishes it from, say, an algorithm in a defibrillator that analyzes the heart rhythm and delivers an electric shock based on it: that algorithm cannot derive decisions independently; it was developed around predefined parameters against which it compares the heart rhythm.
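The contrast becomes clearer in code. The following is a minimal Python sketch with synthetic data and purely illustrative thresholds (scikit-learn is assumed as the ML library): in the rule-based variant, the developer writes the decision logic down; in the learned variant, the model infers comparable thresholds from training examples.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Rule-based algorithm: the decision logic is fixed at development time.
# The thresholds are illustrative placeholders, not clinical values.
def rule_based_shock_decision(heart_rate_bpm: float) -> bool:
    """Classic if-else logic: the input is compared against fixed parameters."""
    return heart_rate_bpm > 180 or heart_rate_bpm < 30

# Learned model: the decision boundary is inferred from example data,
# which is the core element of the EU AI Act definition of an AI system.
rng = np.random.default_rng(seed=0)
X_train = rng.uniform(10, 220, size=(500, 1))  # synthetic heart rates in bpm
y_train = ((X_train[:, 0] > 180) | (X_train[:, 0] < 30)).astype(int)

model = DecisionTreeClassifier(max_depth=2).fit(X_train, y_train)

def learned_shock_decision(heart_rate_bpm: float) -> bool:
    """No threshold is written down here; the model derived it from data."""
    return bool(model.predict([[heart_rate_bpm]])[0])

print(rule_based_shock_decision(200), learned_shock_decision(200))  # True True
```

Only the learned variant "infers, from the input it receives, how to generate outputs" in the sense of the definition; the rule-based variant merely executes fixed comparisons.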
Overview of regulatory requirements in Europe and the USA
In Europe, the EU AI Act is currently on everyone's lips, even outside the world of medical technology. It is the first comprehensive regulation of AI by a major regulator and assigns applications of artificial intelligence to three risk categories:
- Applications and systems that present an unacceptable level of risk, such as state-run social scoring, will be banned.
- High-risk applications, such as a CV scanning tool that ranks applicants, are subject to special legal requirements.
- Applications that are not expressly prohibited or classified as high-risk remain largely unregulated.
What does the EU AI Act mean for medical devices? Article 6 of the EU AI Act, on the classification rules for high-risk AI systems, provides the answer. It states that an AI system is considered high-risk if both of the following conditions are met:
- (a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonization legislation listed in Annex I;
- (b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment with a view to being placed on the market or put into service pursuant to the Union harmonization legislation listed in Annex I.
Annex I of the EU AI Act lists, among others, Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices (MDR) and Regulation (EU) 2017/746 of the European Parliament and of the Council of 5 April 2017 on in vitro diagnostic medical devices (IVDR) as harmonization legislation.
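Read as a decision rule, Article 6 is a simple conjunction: both conditions must be met. The following sketch is merely one illustrative way to encode that logic; it is not official guidance, and the example values are invented.

```python
def is_high_risk_ai_system(
    safety_component_or_product: bool,      # condition (a)
    third_party_assessment_required: bool,  # condition (b)
) -> bool:
    """Article 6(1) EU AI Act: high-risk only if BOTH conditions are met."""
    return safety_component_or_product and third_party_assessment_required

# An AI diagnostic function in a device under the MDR (listed in Annex I)
# that requires notified-body involvement:
print(is_high_risk_ai_system(True, True))   # True -> high-risk
# A device that needs no third-party conformity assessment:
print(is_high_risk_ai_system(True, False))  # False -> not high-risk under Art. 6(1)
```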
By 2 February 2026 at the latest, the European Commission will provide guidelines on the practical implementation of Article 6, together with a comprehensive list of practical examples of high-risk and non-high-risk use cases of AI systems.
The standard BS/AAMI 34971:2023, Application of ISO 14971 to machine learning in artificial intelligence, can be used as a further basis for the regulation of AI in medical devices in Europe. Its intention is to manage the risks specific to AI medical devices. Hazards are often identified based on clinical knowledge of anatomy, physiology, or the condition in question, and on real-world experience with similar medical devices. For AI-based medical devices, real-world experience with similar technologies may be lacking, increasing the risk that hazards go unrecognized, the technology or process is misunderstood, or the risk is underestimated. The standard makes no changes to ISO 14971 but provides information and guidance on applying ISO 14971 to AI-based medical technology.
DIN has also published a guideline on the subject: DIN SPEC 13288, Guidelines for the development of deep learning image recognition systems in medicine. This specification focuses primarily on the handling of training data and provides practical guidelines for the development and design of deep learning systems. The underlying AI uses the training data to recognize statistical patterns, which can then be used to solve complex image recognition tasks. The specification pays particular attention to the representativeness of the data and to the rare image classes that are typical in medicine. The explainability of the results is also given great importance.
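To illustrate the kind of representativeness checks the specification emphasizes, here is a small hedged sketch (invented class names and counts, scikit-learn assumed) that reports class frequencies and uses a stratified split so that a rare image class does not disappear from the test set by chance.

```python
from collections import Counter
from sklearn.model_selection import train_test_split

# Hypothetical labels for a medical image dataset; "rare_tumor" stands in
# for the kind of rare image class the DIN SPEC pays particular attention to.
labels = ["healthy"] * 940 + ["common_lesion"] * 50 + ["rare_tumor"] * 10
images = list(range(len(labels)))  # stand-in for the actual image data

# 1. Representativeness check: report the relative frequency of each class.
for cls, n in Counter(labels).items():
    print(f"{cls}: {n} samples ({n / len(labels):.1%})")

# 2. Stratified split: preserves class proportions in both partitions, so the
#    1% rare class does not vanish from the test set by chance.
X_train, X_test, y_train, y_test = train_test_split(
    images, labels, test_size=0.2, stratify=labels, random_state=0
)
print("rare_tumor samples in test set:", y_test.count("rare_tumor"))  # 2
```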
The validation of AI in medical technology is covered by BS 30440:2023, Validation framework for the use of artificial intelligence (AI) within healthcare. In addition to numerous practical examples, it contains specific guidance on development, validation, maintenance, and market surveillance, thereby mapping the life cycle of an AI system in medical technology. For each life cycle phase, a series of thematic criteria is listed covering the most important requirements for a healthcare AI system in that phase.
Numerous other standards, technical reports and specifications are currently being developed, so it is worth checking new publications regularly.
In contrast to Europe, the FDA began publishing on artificial intelligence and machine learning as early as 2019.
The first publication on this topic was the discussion paper "Proposed Regulatory Framework for Modifications to Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD)". It was developed to clarify, at the regulatory level, how to deal with the ability of artificial intelligence to continuously evolve. The FDA's existing paradigms for regulating medical devices were not originally designed to reflect the iterative, autonomous, and adaptive nature of artificial intelligence, nor are they well suited to allowing a rapid cycle of product improvement while enabling AI systems to continuously improve under effective safeguards. The discussion paper therefore offers approaches to encourage innovative AI technology while ensuring that it remains safe and effective, especially when it is modified or improved after market launch. The FDA seeks a flexible but controlled environment in which AI/ML-based medical devices can continuously evolve without compromising their safety and performance. Briefly summarized, the discussion paper includes the following approaches:
- Lifecycle model (Total Product Lifecycle - TPLC): All process steps must cover the entire lifecycle of a product - from development and market launch through to continuous updates and improvements.
- Initial Premarket Assurance of Safety and Effectiveness: Manufacturers have the option of submitting a plan for product adaptations during the initial premarket review, analogous to the principle of a Predetermined Change Control Plan.
- SaMD Pre-Specifications (SPS): Foreseeable adjustments to performance, to inputs, or in relation to the intended use can be defined as a "region of potential changes" to the initial specification. Here, the manufacturer determines what the AI system should be able to do as it develops through adaptation.
- Algorithm Change Protocol (ACP): A methodical procedure to ensure that the anticipated adaptations are implemented with risk control measures taken into account and that the medical device remains "safe and effective" even after the change. It is therefore about how the AI system should be further developed; a minimal sketch of both concepts follows after this list.
- Approach for modifications after initial review with an established SPS and ACP: A flowchart on page 13 of the FDA discussion paper can be used to answer the question of how to deal with changes with regard to regulatory obligations.
- Quality Systems and Good Machine Learning Practices (GMLP): The development of and compliance with "Good Machine Learning Practices" alongside a quality management system are required. This is to ensure that the AI systems remain safe and effective, even if they are continuously adapted and updated.
- Transparency and real-world performance monitoring of AI/ML-based SaMD: Manufacturers should collect and monitor data in the field to mitigate the risk of changes to AI-based medical devices and thus support the benefit-risk profile in the evaluation of the AI medical device. In the interest of transparency, manufacturers are required to regularly inform the FDA of adjustments implemented under SPS and ACP. In addition, users, patients and medical device manufacturers should also be informed.
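To make SPS and ACP more tangible, a manufacturer could record both as structured documents. The sketch below uses a purely hypothetical format; all field names and example contents are invented, and the FDA does not prescribe any such structure.

```python
from dataclasses import dataclass, field

@dataclass
class SaMDPreSpecifications:
    """SPS: the 'region of potential changes' the manufacturer anticipates."""
    performance_changes: list[str] = field(default_factory=list)
    input_changes: list[str] = field(default_factory=list)
    intended_use_changes: list[str] = field(default_factory=list)

@dataclass
class AlgorithmChangeProtocol:
    """ACP: the methodical procedure for implementing those changes safely."""
    retraining_procedure: str = ""
    validation_requirements: list[str] = field(default_factory=list)
    risk_controls: list[str] = field(default_factory=list)

# Invented example content, loosely following the discussion paper's categories.
sps = SaMDPreSpecifications(
    performance_changes=["improve sensitivity for small lung nodules"],
    input_changes=["accept images from an additional CT scanner model"],
)
acp = AlgorithmChangeProtocol(
    retraining_procedure="retrain only on newly curated, labeled data",
    validation_requirements=["re-run the clinical test set; sensitivity must not drop"],
    risk_controls=["roll back to the prior model version if performance degrades"],
)
print(sps, acp, sep="\n")
```

The SPS captures the "what" (the region of anticipated changes), while the ACP captures the "how" (the procedure and controls for implementing them).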
A diagram on page 8 of the FDA discussion paper provides a good overview of these approaches.
In January 2021, the FDA followed up with the "Artificial Intelligence/Machine Learning (AI/ML)-Based Software as a Medical Device (SaMD) Action Plan". This action plan refines the proposals of the proposed regulatory framework: for example, a guidance document will be published that lists in more detail which elements an SPS and an ACP should include. Harmonization is also being promoted as part of the further development of Good Machine Learning Practices (GMLP), and scientific studies on bias and on the robustness of artificial intelligence are being encouraged and supported.
Further publications followed from the Action Plan, e.g. the "Good Machine Learning Practice for Medical Device Development: Guiding Principles" in October 2021, which comprises 10 basic principles for the development of AI systems in medical technology:
- Multidisciplinary expertise is leveraged across the entire product life cycle
- Good software engineering and security practices are implemented
- Clinical study participants and data sets are representative of the intended patient population
- The training data sets are independent of the test data sets (see the sketch after this list)
- The selected reference data sets are based on the best available methods
- The model design is tailored to the available data and reflects the intended use of the device
- The focus is on the performance of the human-AI team
- Testing demonstrates device performance under clinically relevant conditions
- Users are provided with clear, essential information
- The performance of deployed models is monitored and retraining risks are managed
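The fourth principle, the independence of training and test data, is a frequent pitfall in medical imaging: several images often stem from the same patient, and splitting at the image level can leak patient-specific features into the test set. The following hedged sketch with synthetic data shows a patient-level split using scikit-learn's GroupShuffleSplit.

```python
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical dataset: 1000 images from 200 patients (5 images per patient).
n_images = 1000
X = np.random.rand(n_images, 64)             # stand-in image features
y = np.random.randint(0, 2, size=n_images)   # stand-in labels
patient_ids = np.repeat(np.arange(200), 5)   # maps each image to its patient

# Split by patient, not by image, so that no patient contributes images to
# both sets; this keeps the test set truly independent of the training set.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=patient_ids))

assert set(patient_ids[train_idx]).isdisjoint(patient_ids[test_idx])
print(f"{len(train_idx)} training images, {len(test_idx)} test images")
```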
Two further publications on the topic of change control plans followed: in April 2023 the draft guidance "Marketing Submission Recommendations for a Predetermined Change Control Plan for Artificial Intelligence/Machine Learning (AI/ML)-Enabled Device Software Functions", and in October 2023 "Predetermined Change Control Plans for Machine Learning-Enabled Medical Devices: Guiding Principles". To further refine the principles on transparency, "Transparency for Machine Learning-Enabled Medical Devices: Guiding Principles" followed in June 2024.
Challenges, future developments and trends
An overview of the regulatory landscape in the USA and Europe shows that the USA is considerably further along and offers manufacturers of AI medical devices more concrete methods to follow. In Europe, an initial basis has been created with the EU AI Act, but more precise requirements for medical technology are still outstanding. Because numerous technical reports on AI topics are emerging that often do not directly concern medical devices, there is currently uncertainty among manufacturers and notified bodies regarding, among other things, the implementation of the General Safety and Performance Requirements for AI-based medical devices. For the most part, only the standards that also apply to non-AI medical devices can currently be applied. At the same time, numerous new standards and technical specifications are being developed in the field of AI medical technology, so manufacturers of these products must spend considerable time regularly reviewing the applicable standards. In this environment, it is not easy for manufacturers to maintain an overview. The FDA's "Good Practice" guidelines for the development of AI medical devices can therefore provide useful guidance in the currently somewhat uncertain European environment.
Artificial intelligence (AI) has the potential to fundamentally change medical technology. These innovations can significantly improve patient care and expand access to high-quality medical services. However, with these opportunities come great responsibilities. Safety and sound risk management are essential to ensure that AI systems operate reliably and safely. Sensible and strategically planned implementation of regulatory requirements will help to realize the full potential of these breakthrough technologies while maintaining the trust of patients and healthcare professionals.