
Resolved Issues

View the resolved issues for a specific Platform release.


Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

INC-197730 · Issue 686234

Prediction outcome response timing updated

Resolved in Pega Version 8.7

Predictions using a response timeout were not emitting a negative response ('NoResponse') when the specified waiting time expired. This was traced to the outcome and response timeout values being overridden while triggering responses for multi-stage and chained predictions. This has been resolved by modifying the flow to emit each outcome as it is received and by adding the dataflow trigger in the function so that it does not override the values for chained predictions.
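
As a rough sketch of the corrected pattern, the snippet below keeps one timeout per prediction and emits each outcome as it is received, so an expiring timer cannot override an outcome that already arrived. All names are hypothetical; this is not Pega's implementation.

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Minimal sketch with hypothetical names: one timeout per prediction, so
 *  multi-stage or chained predictions cannot override each other's values. */
public class ResponseTimeoutSketch {
    private final ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
    private final ConcurrentMap<String, String> outcomes = new ConcurrentHashMap<>();

    /** Arms a timer that emits the negative response if nothing arrived in time. */
    void armTimeout(String predictionId, long timeoutMs) {
        scheduler.schedule(() -> {
            if (outcomes.putIfAbsent(predictionId, "NoResponse") == null) {
                System.out.println(predictionId + " -> NoResponse (waiting time expired)");
            }
        }, timeoutMs, TimeUnit.MILLISECONDS);
    }

    /** Emits each outcome as it is received; the first write for an ID wins. */
    void recordOutcome(String predictionId, String outcome) {
        if (outcomes.putIfAbsent(predictionId, outcome) == null) {
            System.out.println(predictionId + " -> " + outcome);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        ResponseTimeoutSketch s = new ResponseTimeoutSketch();
        s.armTimeout("P-1", 100);           // no outcome arrives: emits NoResponse
        s.armTimeout("P-2", 100);
        s.recordOutcome("P-2", "Accepted"); // arrives in time: the timer is a no-op
        Thread.sleep(300);
        s.scheduler.shutdown();
    }
}
```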

INC-198232 · Issue 685700

Tombstone cleanup modified to handle very large volume environments

Resolved in Pega Version 8.7

Cassandra nodes were periodically suffering from an out-of-memory condition. This was traced to tumbling windows creating too many tombstones for Cassandra to clean up effectively when running in a very high volume environment with a short tumbling time. This has been resolved by switching from individual tombstones to range tombstones: each new range tombstone supersedes the previous one, so tombstones never get the chance to build up.
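
For readers unfamiliar with the distinction, a minimal sketch using the DataStax Java driver (keyspace, table, and schema are hypothetical) shows the idea: a single range delete on a clustering column produces one range tombstone per partition, instead of one tombstone per deleted row.

```java
import com.datastax.oss.driver.api.core.CqlSession;
import java.time.Instant;

/** Illustrative only (hypothetical keyspace/table): expiring a whole window
 *  span with one range predicate creates a single range tombstone, rather
 *  than one tombstone per row as individual deletes would. */
public class RangeTombstoneSketch {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            Instant cutoff = Instant.now().minusSeconds(600);
            // One range tombstone covers everything up to the cutoff; the next
            // delete with a newer cutoff supersedes it, so tombstones don't pile up.
            session.execute(
                session.prepare("DELETE FROM ks.events WHERE window_id = ? AND event_time <= ?")
                       .bind("w-1", cutoff));
        }
    }
}
```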

INC-200152 · Issue 689865

Performance improvements for action check-in

Resolved in Pega Version 8.7

Performing a 'Save As' of an action to a higher ruleset, saving, and then running check-in was taking an excessive amount of time. This was traced to redundant page copy activities; performance has been improved by updating the implementation.

INC-201364 · Issue 690806

Prediction outcome response timing updated

Resolved in Pega Version 8.7

Predictions using a response timeout were not emitting a negative response ('NoResponse') when the specified waiting time expired. This was traced to the outcome and response timeout values being overridden while triggering responses for multi-stage and chained predictions. This has been resolved by modifying the flow to emit each outcome as it is received and by adding the dataflow trigger in the function so that it does not override the values for chained predictions.

INC-201366 · Issue 691508

Performance improvements for stale thread warnings

Resolved in Pega Version 8.7

Stale thread warnings were causing performance issues during dataflow run execution. Stale thread/slow component warnings are added during dataflow execution when a processing thread takes more than 5 minutes to process a single dataflow record. The stack trace of the dataflow thread was included in the warning for debugging purposes, but in some scenarios the stack trace can become very large. This has been resolved by removing the stack traces from the warning, improving the query logic, and adding the run ID to the exception message to assist if there is an error.
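
As a general illustration of the watchdog pattern described above (names hypothetical, not the product's code), the warning below carries the run ID for correlation but deliberately omits the worker's stack trace.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

/** Sketch with hypothetical names: warn about records in flight for more than
 *  the threshold, carrying the run ID for correlation but no stack trace,
 *  since captured stack traces can become very large. */
public class StaleThreadWatchdog {
    private static final long THRESHOLD_MS = TimeUnit.MINUTES.toMillis(5);
    private final Map<String, Long> inFlight = new ConcurrentHashMap<>();
    private final ScheduledExecutorService checker = Executors.newSingleThreadScheduledExecutor();

    StaleThreadWatchdog(String runId) {
        checker.scheduleAtFixedRate(() -> {
            long now = System.currentTimeMillis();
            inFlight.forEach((recordId, started) -> {
                if (now - started > THRESHOLD_MS) {
                    // Deliberately no Thread.getStackTrace() here.
                    System.err.printf("WARN [run %s] record %s slow: %d ms in flight%n",
                            runId, recordId, now - started);
                }
            });
        }, 1, 1, TimeUnit.MINUTES);
    }

    void begin(String recordId) { inFlight.put(recordId, System.currentTimeMillis()); }
    void end(String recordId)   { inFlight.remove(recordId); }
}
```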

INC-203994 · Issue 698853

DSS added to handle merges with lower versions of Postgres

Resolved in Pega Version 8.7.1

After update, executing a batch campaign with a volume constraint resulted in the second data flow DF_Wait failing with the error message "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used deletes and inserts. Depending on the version of Postgres, the generated SQL for a merge statement differs: the "INSERT … ON CONFLICT … UPDATE" syntax is generated for Postgres 9.5+ when there is a PK constraint defined for the DB table; otherwise the complex UPSERT statement (old syntax) is generated, as was the case in this issue. There is a known issue in the Postgres server software where it misinterprets the number of columns involved in the old syntax, mistakenly counting them twice. As a result, the actual maximum number of columns allowed is only half of the official limit (1664); the same UPSERT statement does not cause the "exceeds limit" exception if there are 832 or fewer columns. To resolve this, an option has been provided to select between the original logic (deletes and inserts) and the merge statement logic by way of the DSS decision/datasets/db/useMergeStatementForUpdates. Setting it to "true" uses the merge statement logic; setting it to "false" uses deletes and inserts. When the DSS is not defined, the default is "true" and the system uses merge statements in the form preferred by Postgres 9.5+.
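
For reference, the Postgres 9.5+ merge form looks like the following when issued over JDBC; the table, columns, and connection details are hypothetical.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.SQLException;

/** Illustrative only (hypothetical table and credentials): the Postgres 9.5+
 *  "INSERT ... ON CONFLICT ... DO UPDATE" merge form described in the note. */
public class MergeStatementSketch {
    public static void main(String[] args) throws SQLException {
        try (Connection conn = DriverManager.getConnection(
                 "jdbc:postgresql://localhost:5432/pega", "user", "secret");
             PreparedStatement ps = conn.prepareStatement(
                 "INSERT INTO customer_scores (customer_id, score) VALUES (?, ?) "
               + "ON CONFLICT (customer_id) DO UPDATE SET score = EXCLUDED.score")) {
            ps.setString(1, "C-42");
            ps.setInt(2, 87);
            ps.executeUpdate(); // inserts a new row, or updates it on PK conflict
        }
    }
}
```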

INC-180246 · Issue 699700

Support for apostrophe added to keyword tokenization

Resolved in Pega Version 8.7.1

A keyword containing an apostrophe was not detected properly by the text extraction model. This has been resolved by updating the annotator used in tokenization.
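
A minimal sketch of apostrophe-aware tokenization (not the product's actual annotator): the token pattern treats an internal apostrophe as part of the word, so keywords like "don't" and "O'Brien" survive as single tokens.

```java
import java.util.List;
import java.util.regex.MatchResult;
import java.util.regex.Pattern;
import java.util.stream.Collectors;

/** Sketch only: a word pattern that keeps internal apostrophes. */
public class ApostropheTokenizer {
    // Letters/digits, optionally joined by apostrophes inside the word.
    private static final Pattern TOKEN = Pattern.compile("[\\p{L}\\p{N}]+(?:'[\\p{L}\\p{N}]+)*");

    static List<String> tokenize(String text) {
        return TOKEN.matcher(text).results()
                    .map(MatchResult::group)
                    .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        System.out.println(tokenize("O'Brien doesn't match plain word-splitting."));
        // -> [O'Brien, doesn't, match, plain, word, splitting]
    }
}
```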

INC-193399 · Issue 688115

DSS added to handle merges with lower versions of Postgres

Resolved in Pega Version 8.7.1

After update, executing a batch campaign with a volume constraint resulted in the second data flow DF_Wait failing with the error message "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used deletes and inserts. Depending on the version of Postgres, the generated SQL for a merge statement differs: the "INSERT … ON CONFLICT … UPDATE" syntax is generated for Postgres 9.5+ when there is a PK constraint defined for the DB table; otherwise the complex UPSERT statement (old syntax) is generated, as was the case in this issue. There is a known issue in the Postgres server software where it misinterprets the number of columns involved in the old syntax, mistakenly counting them twice. As a result, the actual maximum number of columns allowed is only half of the official limit (1664); the same UPSERT statement does not cause the "exceeds limit" exception if there are 832 or fewer columns. To resolve this, an option has been provided to select between the original logic (deletes and inserts) and the merge statement logic by way of the DSS decision/datasets/db/useMergeStatementForUpdates. Setting it to "true" uses the merge statement logic; setting it to "false" uses deletes and inserts. When the DSS is not defined, the default is "true" and the system uses merge statements in the form preferred by Postgres 9.5+.

INC-193632 · Issue 679172

Cassandra driver metrics exposed for performance troubleshooting

Resolved in Pega Version 8.7.1

By default, Cassandra driver metrics are now enabled. Metrics can be disabled by setting the dnode/disable_driver_metrics prconfig parameter.
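
For context, these are the DataStax Java driver's own metrics; a minimal sketch of inspecting them through the driver API (assuming metrics are enabled for the session) might look like this.

```java
import com.datastax.oss.driver.api.core.CqlSession;

/** Illustrative only: reading the DataStax Java driver's metric registry.
 *  This is the driver's own API, not a Pega API. */
public class DriverMetricsPeek {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            // Present only when metrics are enabled for this session.
            session.getMetrics().ifPresent(metrics ->
                System.out.println(metrics.getRegistry().getNames()));
        }
    }
}
```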

INC-193847 · Issue 695974

DSS added to allow masking of subjectID in alerts

Resolved in Pega Version 8.7.1

To allow customizing whether or not a subjectID is included in alerts, a DSS has been added to conditionally mask the subjectID from being logged. To use this, set the alerts/maskIHsubjectID DSS in the Pega-DecisionEngine ruleset to true to hide the pySubjectID.
