Active and candidate model comparison charts

Updated on July 5, 2022

Before you deploy a candidate model in shadow mode or as a replacement for the active model, evaluate the performance of the candidate model by adding a validation data set and generating charts that compare the two models in terms of score distribution, ROC curve, gains, and lift.

Operations related to updating and evaluating models are part of the Machine Learning Operations (MLOps) feature of Pega Platform.

Sample model comparison charts: a column chart shows the score distribution, and three line charts illustrate the ROC curve, gains, and lift.

Learn more about the model comparison charts and how to use them:

Score distribution
The score distribution chart displays a non-aggregated classification of the results of a model. The scores from the model are classified into intervals with increasing likelihood of positive behavior. With model analysis based on score distribution, you can compare models on consistent terms: how they distribute cases over the score bands, as well as their behavior in each band. The more predictive power a model has, the more distinct the bands are in terms of their performance. To compare model performance, Pega Platform distributes model results into 16 predefined intervals and compares the corresponding score bands of the two models.
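For illustration, the following Python sketch distributes the scores of two models over 16 bands and reports the case count and positive rate in each band. The equal-width binning and the variable names (scores_active, scores_candidate, outcomes) are assumptions made for this example; Pega Platform uses its own predefined intervals.

import numpy as np

def score_distribution(scores, outcomes, n_bands=16):
    # Assign each case to one of n_bands equal-width score intervals
    # (illustrative binning; Pega Platform uses predefined intervals).
    edges = np.linspace(0.0, 1.0, n_bands + 1)
    bands = np.clip(np.digitize(scores, edges) - 1, 0, n_bands - 1)
    counts = np.bincount(bands, minlength=n_bands)
    positives = np.bincount(bands, weights=outcomes, minlength=n_bands)
    rates = np.divide(positives, counts, out=np.zeros(n_bands), where=counts > 0)
    return counts, rates

# Hypothetical validation set: scores in [0, 1] and binary outcomes.
rng = np.random.default_rng(0)
outcomes = rng.integers(0, 2, size=1000)
scores_active = np.clip(0.2 * outcomes + 0.8 * rng.beta(2, 2, 1000), 0, 1)
scores_candidate = np.clip(0.3 * outcomes + 0.7 * rng.beta(2, 2, 1000), 0, 1)

for name, scores in [("active", scores_active), ("candidate", scores_candidate)]:
    counts, rates = score_distribution(scores, outcomes)
    print(name, "cases per band:", counts)
    print(name, "positive rate per band:", np.round(rates, 2))

Comparing the counts and positive rates band by band mirrors how the score distribution chart lines up the corresponding score bands of the active and candidate models.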
ROC curve
The receiver operating characteristic (ROC) curve represents the predictive ability of a model by plotting the true positive rate (sensitivity) against the false positive rate (1 - specificity). The true positive rate shows how often the model predicts a positive outcome when the actual outcome is positive. The false positive rate shows how often the model predicts a positive outcome when the actual outcome is negative. The closer the curve is to the top-left corner of the chart, and therefore the larger the area under the curve (AUC), the more accurately the model distinguishes positives from negatives.
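To make these definitions concrete, the sketch below computes the true positive rate, false positive rate, and AUC from a set of scores and binary outcomes. The data and the active/candidate names are hypothetical examples and do not reflect how Pega Platform calculates the chart internally.

import numpy as np

def roc_points(scores, outcomes):
    # Sort cases by descending score; each prefix corresponds to one threshold.
    order = np.argsort(-np.asarray(scores))
    outcomes = np.asarray(outcomes)[order]
    tp = np.cumsum(outcomes)                      # true positives above the threshold
    fp = np.cumsum(1 - outcomes)                  # false positives above the threshold
    tpr = tp / outcomes.sum()                     # sensitivity
    fpr = fp / (len(outcomes) - outcomes.sum())   # 1 - specificity
    return np.concatenate(([0.0], fpr)), np.concatenate(([0.0], tpr))

def auc(fpr, tpr):
    # Trapezoidal rule over the ROC points (fpr is non-decreasing).
    return float(np.sum(np.diff(fpr) * (tpr[1:] + tpr[:-1]) / 2))

# Hypothetical scores from two models for the same eight cases.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
scores_active = [0.9, 0.8, 0.7, 0.6, 0.55, 0.4, 0.3, 0.2]
scores_candidate = [0.9, 0.3, 0.8, 0.7, 0.2, 0.4, 0.6, 0.1]
for name, scores in [("active", scores_active), ("candidate", scores_candidate)]:
    fpr, tpr = roc_points(scores, outcomes)
    print(name, "AUC =", round(auc(fpr, tpr), 3))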
Gains
A gains chart compares the cumulative gains that each model provides. A gains curve represents the cumulative percentage of positive responses (gains) that you get by targeting increasing percentages of the population based on the predictions of a model, compared to the gains obtained by targeting the population at random. The closer the curve is to the top-left corner of the chart, the better the model is at predicting positive responses. A model with a better gains curve lets you obtain more positive responses while reaching a smaller percentage of the population. A gains chart also shows the tipping point at which you have reached most of the positive responses; beyond this point, the cost of contacting the remaining portion of the population might outweigh the gains.
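The cumulative gains described here can be sketched in a few lines of Python; the scores and outcomes below are hypothetical validation data, not output from Pega Platform.

import numpy as np

def cumulative_gains(scores, outcomes):
    # Rank cases by descending score and track the share of all positive
    # responses captured as ever larger percentages of the population are targeted.
    order = np.argsort(-np.asarray(scores))
    captured = np.cumsum(np.asarray(outcomes)[order])
    population_pct = np.arange(1, len(scores) + 1) / len(scores)
    gains_pct = captured / captured[-1]
    return population_pct, gains_pct

# Targeting the top 30% of this hypothetical population captures 2 of its 3 positives.
scores = [0.9, 0.8, 0.6, 0.5, 0.4, 0.3, 0.2, 0.15, 0.1, 0.05]
outcomes = [1, 0, 1, 0, 0, 1, 0, 0, 0, 0]
population_pct, gains_pct = cumulative_gains(scores, outcomes)
print(np.round(population_pct, 2))   # percentage of the population targeted
print(np.round(gains_pct, 2))        # the random baseline equals population_pct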
Lift
Lift is a measure of how much more likely you are to get positive responses by using a predictive model than by targeting a random sample of the population. A lift chart graphically represents the lift that you get from each model as you target increasing percentages of the population. A typical lift curve starts high in the top-left corner of the chart and descends to a value of 1 at 100% of the population, which represents no improvement in efficiency compared to targeting the population at random. A model whose lift curve extends closer to the top-left corner has more predictive power and lets you get more positive responses by targeting a smaller portion of the population. For example, a lift of 3 for the top 10% of the population (identified by the model as the group with the highest response ratio) means that you get three times as many positive responses by targeting that group as by targeting a random sample of the same size.
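The same ranking that produces the gains curve yields the lift curve: cumulative gains divided by the percentage of the population targeted, so a value of 1 means no improvement over random targeting. The data in this Python sketch are hypothetical; a perfectly discriminating model is used so that the lift at the top decile is easy to verify by hand.

import numpy as np

def lift_curve(scores, outcomes):
    # Lift at each targeting depth = cumulative gains / fraction of population targeted.
    order = np.argsort(-np.asarray(scores))
    captured = np.cumsum(np.asarray(outcomes)[order])
    population_pct = np.arange(1, len(scores) + 1) / len(scores)
    gains_pct = captured / captured[-1]
    return population_pct, gains_pct / population_pct

# 100 hypothetical cases with 10 positives, all ranked in the top decile:
# lift at 10% = (10/10 positives captured) / 0.10 of the population = 10.
scores = np.linspace(1.0, 0.0, 100)
outcomes = np.array([1] * 10 + [0] * 90)
population_pct, lift = lift_curve(scores, outcomes)
print("lift at top 10%:", round(float(lift[9]), 2))   # index 9 = top 10 of 100 cases
print("lift at 100%:", round(float(lift[-1]), 2))     # converges to 1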
