Monitoring predictive models
Monitor predictive models to verify that they accurately predict customer behavior. After you deploy and use a model in a live environment, you can interpret the monitoring data to adjust the model and make it more accurate. You can enable the advanced reporting and statistical capabilities of Prediction Studio by defining the .pyPrediction property in a response strategy.
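Conceptually, model monitoring comes down to pairing each prediction with the outcome that is eventually observed and tracking how often they agree. The following Python sketch illustrates that idea only; it is not part of Prediction Studio, and all names in it are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class MonitoredResponse:
    """One captured interaction: what the model predicted vs. what happened."""
    predicted_outcome: str
    actual_outcome: str

def accuracy(responses: list[MonitoredResponse]) -> float:
    """Fraction of interactions where the prediction matched the actual outcome."""
    if not responses:
        return 0.0
    hits = sum(r.predicted_outcome == r.actual_outcome for r in responses)
    return hits / len(responses)

# Hypothetical captured responses for a churn model.
captured = [
    MonitoredResponse("churn", "churn"),
    MonitoredResponse("churn", "stay"),
    MonitoredResponse("stay", "stay"),
]
print(f"Observed accuracy: {accuracy(captured):.2f}")
```

A falling accuracy trend over such captured responses is the kind of signal that tells you the model needs to be updated.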
In this multi-part tutorial, learn how to monitor predictive models to improve your decision strategies, based on a sample use case from the uPlusTelco company. Simulate real run-time interactions in data flows to test the monitoring functionality and to practice analyzing the prediction data.
Use case
uPlusTelco wants to improve its customer support experience by predicting the reason for each customer call. To achieve that goal, the data analytics team built a predictive model and wants to monitor its performance by capturing actual outcomes and comparing them with the predicted outcomes.
As a data scientist, you are responsible for improving predictive model performance. Your role is to upload the model to Prediction Studio and analyze the outcomes to decide whether the model requires any updates. To use the model for predictions and capture the responses, collaborate with a system architect to deploy the model in decision and response strategies. Accomplish these tasks by performing the following procedures:
- Importing a PMML model
- Deploying a model in a strategy
- Defining the outcome to monitor
- Capturing and analyzing the monitoring data (see the sketch after this list)
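As a rough illustration of the last two steps, the sketch below simulates a batch of run-time interactions for the call-reason use case and summarizes how the predicted reasons compare with the actual ones. The call reasons, the simulated error rate, and all names are invented for the example; in Pega Platform, this data is produced by data flows and analyzed in Prediction Studio.

```python
import random
from collections import Counter

CALL_REASONS = ["billing", "outage", "upgrade", "cancellation"]  # hypothetical

def simulate_interaction(rng: random.Random) -> tuple[str, str]:
    """Return (predicted_reason, actual_reason) for one simulated call."""
    actual = rng.choice(CALL_REASONS)
    # Assume the model is right ~70% of the time; otherwise it picks another reason.
    if rng.random() < 0.7:
        predicted = actual
    else:
        predicted = rng.choice([r for r in CALL_REASONS if r != actual])
    return predicted, actual

rng = random.Random(42)
pairs = [simulate_interaction(rng) for _ in range(1000)]

# Per-reason hit rates: how often each actual call reason was predicted correctly.
totals = Counter(actual for _, actual in pairs)
hits = Counter(actual for predicted, actual in pairs if predicted == actual)
for reason in CALL_REASONS:
    rate = hits[reason] / totals[reason] if totals[reason] else 0.0
    print(f"{reason:>12}: {rate:.0%} correct over {totals[reason]} calls")
```

A per-reason breakdown like this one shows not just overall accuracy but also which call reasons the model confuses most often, which is the kind of analysis the tutorial walks through in Prediction Studio.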