Monitoring predictions
Analyze how well your predictions forecast the outcomes that bring value to your business. Gain insights by reviewing performance charts for the predictions and for the models that drive them.
- In the navigation pane of Prediction Studio, click Predictions.
- From the list of predictions, open a prediction that you want to analyze.
- On the Analysis tab, click Prediction.
- In the Outcomes field, select the outcome to analyze. If a prediction predicts a single outcome (single-stage prediction), the outcome is already selected. If a prediction predicts outcomes that occur in a sequence (multistage prediction), you can select each individual outcome or the two outcomes combined.
- In the Time frame field, select the period that you want to analyze.
The charts show the performance measures that are relevant to the selected outcome. For example, a performance analysis for churn covers the churn rate, lift, AUC, and total cases, as in the following figure. For a sense of how these measures are derived from scored cases, see the first sketch at the end of this topic.
- Click the Models tab.
- If more than one model drives the prediction, in the Models field, select the model whose performance you want to analyze.
The performance chart shows how effective the model is over time. If a shadow model runs alongside an active model, the chart displays the performance curves for each model in the active-shadow pair. The comparison can help you determine which model is more effective and decide whether you want to promote the shadow model to the active model position. The analysis is available for both outcome-based and supporting models.
The following figure shows the performance chart for the active model, Predict Churn DRF, which is paired with a shadow model, Churn GBM. The chart shows two distinct lines that illustrate the performance of each model over time. Hover over the lines to see the exact scores for each model at different times. In this example, the two lines converge around January 27, suggesting that the models performed equally well; the hover details show that the active model had a slightly higher score.
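To make the charted measures concrete, the following Python sketch computes the churn rate, lift, and AUC for a small set of hypothetical scored cases. The case data, the top-20% lift segment, and the use of scikit-learn are all assumptions made for illustration; Prediction Studio computes these measures internally from the responses it captures.

```python
# A minimal sketch, assuming hypothetical scored cases of the form
# (predicted churn probability, actual outcome: 1 = churned, 0 = stayed).
from sklearn.metrics import roc_auc_score

cases = [
    (0.91, 1), (0.78, 1), (0.65, 0), (0.60, 1), (0.45, 0),
    (0.40, 0), (0.33, 1), (0.20, 0), (0.12, 0), (0.05, 0),
]
scores = [s for s, _ in cases]
labels = [y for _, y in cases]

total_cases = len(cases)
churn_rate = sum(labels) / total_cases  # fraction of cases that churned

# Lift: churn rate among the highest-scoring cases (here the top 20%,
# an arbitrary cut for this small sample) divided by the overall rate.
top = sorted(cases, key=lambda c: c[0], reverse=True)[: total_cases // 5]
lift = (sum(y for _, y in top) / len(top)) / churn_rate

# AUC: probability that a randomly chosen churner scores higher than a
# randomly chosen non-churner.
auc = roc_auc_score(labels, scores)

print(f"total cases: {total_cases}, churn rate: {churn_rate:.2f}, "
      f"lift: {lift:.2f}, AUC: {auc:.2f}")
```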
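The promotion decision for an active-shadow pair comes down to comparing the same performance measure for both models over time. The following sketch illustrates that idea with invented weekly AUC values for the pair from the figure; the numbers, window size, and promotion margin are assumptions, and in Prediction Studio you read this comparison from the chart and promote the shadow model as a separate MLOps step.

```python
# A minimal sketch of the active-shadow comparison, assuming hypothetical
# weekly AUC values for each model in the pair.
from statistics import mean

weekly_auc = {
    "active (Predict Churn DRF)": [0.71, 0.72, 0.70, 0.73],
    "shadow (Churn GBM)":         [0.69, 0.71, 0.72, 0.75],
}

active_auc = mean(weekly_auc["active (Predict Churn DRF)"])
shadow_auc = mean(weekly_auc["shadow (Churn GBM)"])

# Require a margin so that noise alone does not suggest a promotion.
MARGIN = 0.01
if shadow_auc > active_auc + MARGIN:
    print(f"Shadow AUC {shadow_auc:.3f} beats active {active_auc:.3f}: "
          "consider promoting the shadow model.")
else:
    print(f"Keep the active model (active {active_auc:.3f} vs "
          f"shadow {shadow_auc:.3f}).")
```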