To verify that your predictive models perform well, you can create your own reports in addition to viewing the default charts in the Monitor tab of a predictive model. View examples of such reports in Prediction Studio.
Before you begin: To monitor a predictive model, ensure that a system
architect creates a response strategy that references the model and defines the values
for the .pyOutcome and .pyPrediction properties.
For more information, see Headless decisioning.
- The .pyPrediction value is the same as the model objective
that is visible in the Model tab for that predictive model
(applies to all model types).
- For binary models, the .pyOutcome value is the same as one of
the outcome labels that is visible in the Model tab for
that predictive model. For continuous and categorical models, this property
value does not need to correspond to the model settings.
- In the header of Prediction Studio, click the reports icon, and then select the type of predictive model report that you want to view:
- To verify models that predict two predefined possible outcome
categories, click List of binary models.
- To verify models that predict three or more predefined possible outcome
categories, click List of categorical models.
- To verify models that predict a range of possible outcome values, click
List of continuous models.
- To compare the accuracy of all predictive models, click
Latest performance per model.
- Optional: In the list of models, decide what data you want to see in the report by
clicking Edit report and choosing the columns to display.
- In the list of models, click the model that you want to analyze in detail.
- In the detailed model view, review the predicted and actual outcome data.