How AWS can help you adapt to new regulatory draft guidance for use of learning AI in medical devices

June 15, 2023 Ben Moscovitch

New draft guidance from the Food and Drug Administration (FDA) opens the door for medical devices with AI/ML to learn and improve after the product comes to market. The guidance, required by Congress in late 2022 as part of the fiscal year 2023 omnibus bill, addresses a long-standing challenge with the approval or clearance of AI/ML-powered medical devices that has hindered the ability of these products to reach their full potential to improve outcomes and save lives.

Under the new guidance, the FDA can approve Predetermined Change Control Plans (PCCPs), which outline a manufacturer's planned changes to a product, including changes anticipated as a result of training an associated AI model. So long as the agency can maintain a reasonable assurance that the product remains safe and effective for a specific indication and intended use, the FDA can approve the PCCP, removing the need for a new product submission to the agency every time a change occurs. This flexibility applies both to devices with premarket approvals and to devices cleared through the 510(k) pathway.

The FDA guidance, while in draft form, provides parameters for factors the agency will consider for PCCPs, such as whether the AI/ML-based product obtains data from a new source or expanded population. The guidance also outlines necessary content for the PCCP, including data management practices, re-training practices, performance evaluation protocols, update procedures, transparency to users, and real-world monitoring plans.
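To make the required PCCP content concrete, the sketch below models the content areas the draft guidance describes as a simple Python data structure with a completeness check. The field names are illustrative shorthand for this post, not an FDA schema.

```python
from dataclasses import dataclass, field

@dataclass
class PCCP:
    """Illustrative container for the content areas the draft guidance describes."""
    data_management_practices: list = field(default_factory=list)
    retraining_practices: list = field(default_factory=list)
    performance_evaluation_protocols: list = field(default_factory=list)
    update_procedures: list = field(default_factory=list)
    user_transparency: list = field(default_factory=list)
    real_world_monitoring: list = field(default_factory=list)

    def is_complete(self) -> bool:
        # A submission draft should address every content area.
        return all(bool(section) for section in vars(self).values())

plan = PCCP(data_management_practices=["De-identify and version training data"])
print(plan.is_complete())  # False until every section is filled in
```

A manufacturer's actual PCCP is a regulatory document, not code; the point of the sketch is simply that each content area is a distinct, checkable artifact.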

AWS can support device manufacturers in automating and controlling PCCPs and in monitoring AI model performance to ensure a manufacturer's product continues to function within agreed-upon parameters.

AWS tools can support implementation of predetermined change control plans

An important aspect of a PCCP is the methodology used to implement changes in a controlled manner that manages risks to patients. The best way to control change in an AI pipeline is to automate it and build sufficient controls into that automation. Ideally, a data scientist should be able to make a change, such as providing new training data, and then trigger a pipeline to retrain, build, test, and perhaps even deploy the model. As that pipeline executes, it generates evidence (data) demonstrating control, adherence to the PCCP, and that the change has not degraded model performance.
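The idea of a pipeline that generates its own evidence can be sketched in a few lines of plain Python. The step implementations below are hypothetical stand-ins (a real pipeline would launch training and evaluation jobs); the pattern shown is that every step appends an audit record as it runs.

```python
import json
import time

def run_step(name, fn, evidence):
    """Run one pipeline step and append an audit record (the 'evidence')."""
    start = time.time()
    try:
        result = fn()
        evidence.append({"step": name, "status": "passed",
                         "seconds": round(time.time() - start, 3)})
        return result
    except Exception as exc:
        evidence.append({"step": name, "status": "failed", "error": str(exc)})
        raise

# Hypothetical steps; real ones would validate data, retrain, and evaluate.
evidence = []
run_step("validate_data", lambda: True, evidence)
metrics = run_step("retrain_and_evaluate", lambda: {"accuracy": 0.93}, evidence)
print(json.dumps(evidence, indent=2))
```

The resulting log is exactly the kind of record that shows a change was executed under the controls a PCCP describes.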

Amazon SageMaker can support automation, granular control, and change management in ML pipelines. A customer can choose to create an Amazon SageMaker Pipeline that is frozen and doesn't change, while allowing the model that is part of the pipeline to be updated as new training data becomes available. This "decoupling" of the pipeline allows customers to maintain adherence to the PCCP by changing only the model parameters, training data, or preprocessing and feature-engineering steps.

Let's take a typical MLOps model pipeline as an example.

A customer has a validated and frozen pipeline that reads the model training data, transforms it and performs feature engineering, runs a training job, and deploys the resulting model, all within a single pipeline. The data transformation, feature engineering, and training steps run using scripts that can be updated as the nature of the data changes over time or as new data becomes available. Every execution of the pipeline produces a new model artifact that can be deployed on a new endpoint or used to update an existing one. Deployment can be fully automated using conditional logic on the resulting model metrics; for example, deploy the model only if its accuracy is greater than a threshold value. The pipeline can also support human intervention, allowing data scientists to manually verify the model before it is deployed to production.
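The deployment gate described above can be sketched as a small decision function. This is a conceptual illustration in plain Python, not SageMaker SDK code; the threshold and the manual-approval flag stand in for the conditional step and human-review step a real pipeline would use.

```python
def should_deploy(metrics, threshold=0.90,
                  require_manual_approval=False, approved=False):
    """Gate deployment on model metrics, mirroring a conditional pipeline step."""
    if metrics.get("accuracy", 0.0) < threshold:
        return False  # fails the predefined acceptance criterion
    if require_manual_approval and not approved:
        return False  # hold for a data scientist to sign off
    return True

print(should_deploy({"accuracy": 0.95}))  # True
print(should_deploy({"accuracy": 0.85}))  # False
print(should_deploy({"accuracy": 0.95}, require_manual_approval=True))  # False
```

Because the acceptance criteria live in the frozen pipeline rather than in the scripts that change, the gate itself is part of the evidence of controlled change.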

In the example above, since the only changes happen in the scripts and not in the pipeline itself, customers can document these changes in the PCCP in their submissions to the FDA and avoid revalidation every time their models are updated.

Amazon SageMaker provides multiple features for machine learning operations that can be used in these workflows.

1. SageMaker Pipelines (to stitch together the different steps of the pipeline)
2. SageMaker Data Wrangler and SageMaker Processing (for data transformation and feature engineering)
3. SageMaker Feature Store (to store the features for model training)
4. SageMaker Training (for model training)
5. SageMaker Hosting (for deploying the trained model)

Another useful feature is SageMaker Model Cards, which help document critical details about your machine learning (ML) models in a single place for streamlined governance and reporting. The information captured can support audit activities with detailed descriptions of model training and performance. You can catalog details such as a model's intended use and risk rating, training details and metrics, and evaluation results.
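To show the kind of record a model card holds, here is a minimal sketch as a plain Python dictionary. The model name and values are hypothetical, and this is only an illustration of the fields involved, not the SageMaker Model Cards API.

```python
import json

# Hypothetical model card contents; SageMaker Model Cards capture similar
# fields (intended use, risk rating, training details, evaluation results).
model_card = {
    "model_name": "sepsis-risk-classifier",  # hypothetical model
    "intended_use": "Decision support for early sepsis risk flagging",
    "risk_rating": "high",
    "training": {"dataset_version": "v7", "algorithm": "xgboost"},
    "evaluation": {"accuracy": 0.93, "auroc": 0.95},
}
print(json.dumps(model_card, indent=2))
```

Keeping such a record alongside each pipeline execution gives auditors a single document tying a model version to its training data, metrics, and intended use.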

AWS is ready to help

As the FDA begins to implement predetermined change control plans, AWS stands ready to help medical product developers achieve the full promise of software and AI/ML in the development of the next generation of cures, treatments, diagnostics, and innovations.

To learn more about how AWS supports medical device organizations, visit
