Metrics for measuring predictive performance

Updated on July 5, 2022

Use different types of monitoring charts and statistics to verify the performance and accuracy of your predictive models.

Predictive models are classified into three categories based on the model output data type.

Binary outcome models

Binary models predict two predefined possible outcome categories. Use the following chart types to analyze their accuracy (an example calculation follows the list):

Performance (AUC)
Shows the total predictive performance as the Area Under the Curve (AUC). Models with an AUC of 50 provide random outcomes, while models with an AUC of 100 predict the outcome perfectly.
ROC curve
The Receiver Operating Characteristic (ROC) curve shows a plot of the true positive rate versus the false positive rate. The higher the area under the curve, the more accurately the model distinguishes positives from negatives.
Score distribution
Shows the generated score intervals and the propensity associated with each interval. Higher scores are associated with a higher propensity for the actual outcome. You can set any other number of score intervals, for example, 10 intervals (deciles). The more predictive power the model has, the more distinct the score bands are in terms of their performance.
Success rate
Shows the rate of successful outcomes as a percentage of all captured outcomes. The system calculates this rate by dividing the number of 'positive' outcomes (the outcome value that the model predicts) by the total number of responses.
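
The following Python sketch illustrates how these binary metrics can be derived from a set of propensity scores and captured outcomes. It uses scikit-learn for the AUC and ROC calculations; the scores, outcomes, and decile bucketing are invented for the example and are not part of the product.

import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

# Illustrative data: model propensity scores and captured outcomes (1 = positive).
scores = np.array([0.91, 0.80, 0.74, 0.63, 0.55, 0.42, 0.38, 0.22, 0.15, 0.08])
outcomes = np.array([1, 1, 1, 0, 1, 0, 0, 0, 1, 0])

# Performance (AUC): 0.5 is random, 1.0 is perfect (the chart shows this as 50-100).
auc = roc_auc_score(outcomes, scores)

# ROC curve: true positive rate versus false positive rate at each score threshold.
fpr, tpr, thresholds = roc_curve(outcomes, scores)

# Score distribution: bucket the scores into 10 intervals (deciles) and report
# the observed success rate per interval; distinct rates indicate predictive power.
intervals = np.digitize(scores, bins=np.linspace(0, 1, 11)[1:-1])
for i in range(10):
    mask = intervals == i
    if mask.any():
        print(f"interval {i}: success rate {outcomes[mask].mean():.2f}")

# Success rate: positive outcomes divided by all captured outcomes.
success_rate = outcomes.mean()
print(f"AUC: {auc:.2f}, success rate: {success_rate:.0%}")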

Categorical (multi-class) models

Categorical models predict three or more predefined possible outcome categories. Use the following chart types to analyze their performance (an example calculation follows the list):

Performance (F-score)
Shows the weighted harmonic mean of precision and recall. Precision is the number of correct positive results divided by the number of all positive results returned by the classifier, and recall is the number of correct positive results divided by the number of all relevant samples. An F-score of 1 means perfect precision and recall, while 0 means no precision or recall.
Confusion matrix
Shows a contingency table of actual outcomes versus predicted outcomes. The diagonal of the matrix shows how often the observed actual outcome matches the predicted outcome. The off-diagonal elements show how often the actual outcome does not match the predicted outcome.
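
As a rough illustration of how these two measures relate, the following Python sketch computes a weighted F-score and a confusion matrix for a small three-class example with scikit-learn; the class labels and data are invented for the example.

from sklearn.metrics import f1_score, confusion_matrix

# Illustrative three-class example: actual outcomes versus predicted outcomes.
labels = ["accept", "reject", "ignore"]
actual = ["accept", "accept", "reject", "ignore", "reject", "accept", "ignore"]
predicted = ["accept", "reject", "reject", "ignore", "reject", "accept", "accept"]

# Performance (F-score): weighted harmonic mean of precision and recall,
# averaged over the classes in proportion to how often each class occurs.
f_score = f1_score(actual, predicted, labels=labels, average="weighted")

# Confusion matrix: rows are actual outcomes, columns are predicted outcomes.
# Diagonal cells count the matches; off-diagonal cells count the mismatches.
matrix = confusion_matrix(actual, predicted, labels=labels)

print(f"weighted F-score: {f_score:.2f}")
print(matrix)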

Continuous models

Continuous models predict a continuous numeric outcome. Use the following chart types to analyze their performance (an example calculation follows the list):

Performance (RMSE)
Shows the root-mean-square error (RMSE), calculated as the square root of the average of squared errors. This measure of predictive power represents the difference between the predicted and the actual outcomes as a single number, where 0 means flawless performance.
Residual distribution
Shows the distribution of the differences between the actual and the predicted values. A wider distribution means a greater error. On this chart, you can see whether the predicted value is systematically higher or lower than the actual value.
Outcome value distribution
Shows the distribution of actual outcome values. When the outcome value distribution is available, you can compare it to the expected distribution for the model.
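
The following sketch, again with made-up numbers, shows how the RMSE, the residual distribution, and the outcome value distribution can be derived from the actual and predicted values of a continuous model.

import numpy as np

# Illustrative data: actual versus predicted values of a continuous outcome.
actual = np.array([12.0, 15.5, 9.8, 20.1, 17.3, 11.4])
predicted = np.array([11.2, 16.0, 10.5, 18.9, 17.8, 12.0])

# Performance (RMSE): square root of the average of squared errors; 0 is flawless.
residuals = actual - predicted
rmse = np.sqrt(np.mean(residuals ** 2))

# Residual distribution: a mean residual far from 0 means the model is
# systematically predicting too low (positive mean) or too high (negative mean).
print(f"RMSE: {rmse:.2f}")
print(f"mean residual: {residuals.mean():+.2f}, spread: {residuals.std():.2f}")

# Outcome value distribution: histogram of the actual outcome values.
counts, edges = np.histogram(actual, bins=5)
print(counts, edges)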
