Updating active models in predictions with MLOps
As a data scientist, you can use Machine Learning Operations (MLOps) to approve changes to the models that predictions use and deploy those changes to the production environment. You can initiate a model change yourself, or respond to a Prediction Studio notification that a prediction does not generate enough lift.
To improve the performance of a prediction, you can replace a low-performing model with a high-accuracy external model that you upload to a Pega repository or directly to Prediction Studio. Uploading a candidate model starts a standard approval and validation process that deploys the model update to production. Before you approve a change, you can compare the candidate model with the active model based on data science metrics, such as score distribution or lift. For more information, see Active and candidate model comparison charts.
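To give a sense of what a lift comparison measures, the sketch below computes lift at the top decile: how much more likely the highest-scored customers are to respond than the population overall. The function and sample data are illustrative only and are not part of Prediction Studio, which calculates these metrics for you.

```python
def lift_at_top_decile(scores, outcomes):
    """Lift = positive rate among the top-scored 10% divided by the overall positive rate."""
    ranked = sorted(zip(scores, outcomes), key=lambda pair: pair[0], reverse=True)
    top_n = max(1, len(ranked) // 10)
    top_rate = sum(outcome for _, outcome in ranked[:top_n]) / top_n
    overall_rate = sum(outcomes) / len(outcomes)
    return top_rate / overall_rate

# Example: a model that concentrates responders (outcome 1) at high scores.
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2, 0.1, 0.05]
outcomes = [1, 1, 0, 0, 1, 0, 0, 0, 0, 0]
print(lift_at_top_decile(scores, outcomes))
```

A candidate model with a higher lift than the active model concentrates responders more effectively in its top scores, which is one signal in favor of approving it.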
Managing model updates
In your Business Operations Environment (BOE), you can start and manage the model update process from Prediction Studio or remotely by using the Prediction Studio API. For more information about using the API endpoints, see Updating active models in predictions through API with MLOps.
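As a rough sketch of what a remote model update looks like, the example below builds (but does not send) an HTTP request to a Prediction Studio REST endpoint. The host, API version, endpoint path, identifiers, and payload field are all assumptions for illustration; consult the Prediction Studio API reference for the actual contract.

```python
import json
from urllib import request

# Hypothetical values; substitute your environment's host and identifiers.
BASE_URL = "https://boe.example.com/prweb/api/PredictionStudio/v1"  # assumed API root
prediction_id = "PREDICTWEBPROPENSITY"  # assumed prediction identifier
model_id = "WebPropensityModel"         # assumed model identifier

# Assumed payload shape: point the prediction at the uploaded candidate model.
payload = json.dumps({"replacementModel": "CandidateModel"}).encode("utf-8")

req = request.Request(
    url=f"{BASE_URL}/predictions/{prediction_id}/models/{model_id}",
    data=payload,
    method="PUT",
    headers={"Content-Type": "application/json"},
)
# request.urlopen(req) would submit the update; omitted here because the host is fictional.
print(req.get_method(), req.full_url)
```

Submitting such a request in your BOE starts the same approval and validation flow that Prediction Studio runs when you replace a model in the user interface.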
Model deployment
In Pega Customer Decision Hub environments, changes to models that you approve in Prediction Studio are deployed to production through Pega 1:1 Operations Manager and the Business Change pipeline.
For more information, see Understanding MLOps.
To replace a model in a prediction and then deploy the update to production, perform the following procedures:
- MLOps prerequisites
To start updating models in your system with Machine Learning Operations (MLOps), configure the appropriate access rights for data scientist operators, update Prediction Studio settings, and ensure that your Business Change pipeline meets the requirements.
- Replacing models in predictions with MLOps
Improve a prediction by replacing a low-performing model with a high-accuracy model. You can also replace a model with a scorecard or a field in the data model that contains a score.
- Evaluating candidate models with MLOps
After you add a candidate model to a prediction, Prediction Studio configures and validates the new model, and provides comparison data to help you evaluate the new model. Decide whether you want to approve the new model for deployment to production or reject it.
- Promoting shadow models with MLOps
A shadow model runs alongside an active model. Both models receive production data and generate outcomes, but the outcomes of the shadow model are not used to make business decisions. Check how the shadow model performs over time, and if the model proves better suited to your business needs, promote it as the active model.
- Rejecting shadow models with MLOps
A shadow model runs in your production environment alongside an active model. The shadow model receives production data and generates outcomes, but does not impact your business decisions. The system tracks the outcomes to help you evaluate how the shadow model performs in production. If the model is not suitable for your needs, you can reject the model, and then replace it with a different one.