
Resolved Issues

View the resolved issues for a specific Platform release.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

SR-D60121 · Issue 525492

All interactions visible in "Latest Responses" for ADM

Resolved in Pega Version 8.4

Interactions were not visible in the "Latest Responses" section of the Model Management landing page for adaptive models when the requests were stored on multi-node systems. This was traced to the system fetching the latest responses using a list of server nodes built with a version of deployment.getClusterState(tools) that returned only the ADM server nodes instead of all ADM nodes, both client and server. To resolve this, the system has been updated to use the ServiceRegistry to obtain all of the data flow nodes and fetch the last responses from each of them.
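As an illustration of the corrected approach, the sketch below fans out to every registered data flow node and merges their latest responses, rather than relying on a single, incomplete node list. NodeRegistry and AdmNodeClient are hypothetical stand-ins for Pega's internal ServiceRegistry and ADM client, used only to show the pattern.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch: query every registered data flow node and merge the
// latest ADM responses, instead of relying on a single node list.
public class LatestResponsesCollector {

    interface NodeRegistry {                       // stand-in for the ServiceRegistry
        List<String> getDataFlowNodes();           // all nodes, client and server
    }

    interface AdmNodeClient {                      // stand-in for a per-node ADM client
        List<String> fetchLatestResponses(String nodeId);
    }

    private final NodeRegistry registry;
    private final AdmNodeClient client;

    LatestResponsesCollector(NodeRegistry registry, AdmNodeClient client) {
        this.registry = registry;
        this.client = client;
    }

    // Collect responses from every node so multi-node systems show all interactions.
    public List<String> collectLatestResponses() {
        List<String> responses = new ArrayList<>();
        for (String node : registry.getDataFlowNodes()) {
            responses.addAll(client.fetchLatestResponses(node));
        }
        return responses;
    }
}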

SR-D60268 · Issue 521467

Performance and thread-handling improvements for SSA

Resolved in Pega Version 8.4

The SecureRandom class was used internally by SSAExecutionContext, indirectly via UUID generation. Because this exhibited performance issues on some Linux environments, the UUID has been replaced with a static AtomicLong. In addition, a memory leak was observed when the strategy (SSA) execution resulted in an exception, and the strategy template has been modified to gracefully shut down the VM under all circumstances. Thread-safety measures have also been made more fine-grained to reduce the thread contention seen while borrowing the SSAInterpreter object from the SSAInterpreterPool.
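A minimal sketch of the ID-generation change described above, assuming only that an execution context needs a cheap unique identifier: a static AtomicLong replaces UUID.randomUUID(), which depends on SecureRandom and can stall on entropy-starved Linux hosts. The class name is illustrative, not Pega's actual code.

import java.util.concurrent.atomic.AtomicLong;

// Illustrative only: a process-local execution ID based on a static AtomicLong,
// avoiding UUID.randomUUID(), which uses SecureRandom and can be slow when the
// OS entropy pool is depleted.
public final class ExecutionIds {

    private static final AtomicLong COUNTER = new AtomicLong();

    private ExecutionIds() { }

    // Unique within this JVM; sufficient when global uniqueness is not required.
    public static long nextId() {
        return COUNTER.incrementAndGet();
    }
}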

SR-D69028 · Issue 528974

Deadlock in static initialization of IntList resolved

Resolved in Pega Version 8.4

A JVM deadlock was seen related to the static initialization of a subclass field in the class com.pega.decision.strategy.ssa.runtime.collections.api.IntList. Thread dumps showed threads in the RUNNABLE state that were parked waiting for class initialization; this was traced to code, flagged by a missed Sonar alert, that was not safe under multi-threading. To resolve this, the system handling has been updated to prevent the potential deadlock.
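The failure mode described is the classic Java class-initialization deadlock: threads blocked on a class's initialization lock still show as RUNNABLE in thread dumps. The generic reproduction below (not the Pega code) shows how two classes whose static initializers reference each other can deadlock when initialized concurrently.

// Generic reproduction of a class-initialization deadlock (not the Pega code).
// If thread 1 starts initializing A while thread 2 starts initializing B,
// each waits on the other's initialization lock and neither completes.
class A {
    static final int VALUE;
    static {
        VALUE = B.VALUE + 1;   // forces initialization of B
    }
}

class B {
    static final int VALUE;
    static {
        VALUE = A.VALUE + 1;   // forces initialization of A
    }
}

public class InitDeadlockDemo {
    public static void main(String[] args) {
        new Thread(() -> System.out.println(A.VALUE)).start();
        new Thread(() -> System.out.println(B.VALUE)).start();
    }
}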

SR-D41730 · Issue 508144

TTL value correctly passed for Adaptive Event store

Resolved in Pega Version 8.4

The ADM table was growing because the Time to Live (TTL) for entries in the Adaptive Event Store was not being propagated, so old entries were never cleaned out. This was traced to the TTL field on the data flow not being checked, causing the TTL value to be supplied as zero, which meant no expiration. This has been corrected.
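A hedged sketch of the missing check, using hypothetical accessor and store interfaces: the TTL configured on the data flow is read and passed to the event store write instead of always supplying zero, which means no expiration.

// Illustrative sketch only: read the TTL configured on the data flow and pass it
// to the event store write; previously the field was not checked, so zero
// (no expiration) was always supplied.
public class AdaptiveEventWriter {

    interface DataFlowConfig {                 // hypothetical accessor for the data flow's TTL field
        Long getTimeToLiveSeconds();           // null when the field is not set
    }

    interface EventStore {                     // hypothetical Adaptive Event Store write API
        void write(String event, long ttlSeconds);
    }

    public void write(DataFlowConfig config, EventStore store, String event) {
        Long configuredTtl = config.getTimeToLiveSeconds();
        long ttlSeconds = (configuredTtl != null) ? configuredTtl : 0L;
        store.write(event, ttlSeconds);
    }
}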

SR-69015 · Issue 619995

Unescaping characters implemented for expressions

Resolved in Pega Version 8.3.6, Resolved in Pega Version 8.4.4, Resolved in Pega Version 8.5.3, Resolved in Pega Version 8.6

An issue where expression builder statements were evaluated differently at runtime than during testing has been resolved. Pega Platform expressions with string literals (that is, sequences of characters enclosed in quotation marks) now unescape characters in strategy shapes such as Set Property or Filter.
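As a rough illustration (the expression engine's actual handling is internal to Pega), the helper below unescapes common sequences such as \n, \t, \" and \\ in a string literal so that the runtime value matches what was typed in the expression builder.

// Rough illustration only: unescape common Java-style escape sequences in a
// string literal so that runtime evaluation matches what was typed in the
// expression builder.
public final class LiteralUnescaper {

    private LiteralUnescaper() { }

    public static String unescape(String literal) {
        StringBuilder out = new StringBuilder(literal.length());
        for (int i = 0; i < literal.length(); i++) {
            char c = literal.charAt(i);
            if (c == '\\' && i + 1 < literal.length()) {
                char next = literal.charAt(++i);
                switch (next) {
                    case 'n':  out.append('\n'); break;
                    case 't':  out.append('\t'); break;
                    case 'r':  out.append('\r'); break;
                    case '"':  out.append('"');  break;
                    case '\\': out.append('\\'); break;
                    default:   out.append('\\').append(next); // leave unknown escapes as-is
                }
            } else {
                out.append(c);
            }
        }
        return out.toString();
    }
}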

INC-203994 · Issue 698853

DSS added to handle merges with lower versions of Postgres

Resolved in Pega Version 8.7.1

After update, executing a batch campaign with a volume constraint resulted in the second data flow, DF_Wait, failing with the error message "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used deletes and inserts. Depending on the version of Postgres, the generated SQL for a merge statement differs: the "INSERT … ON CONFLICT … UPDATE" syntax is generated for Postgres 9.5+ when there is a PK constraint defined for the DB table; otherwise the complex UPSERT statement (old syntax) is generated, as was the case in this issue. This is a known issue in the Postgres server software, which misinterprets the number of columns involved and mistakenly counts them twice; as a result, the actual maximum number of columns allowed is only half of the official limit (1664), and the same UPSERT statement does not cause the "exceeds limit" exception if there are 832 or fewer columns. To resolve this, an option has been provided to select between the original logic (deletes and inserts) and the merge-statement logic by way of the DSS decision/datasets/db/useMergeStatementForUpdates. Setting it to "true" uses the merge-statement logic, and setting it to "false" uses deletes and inserts. When the DSS is not defined, the default is "true" and the system uses merge statements in the form preferred by Postgres 9.5+.
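The JDBC sketch below illustrates the two strategies the DSS selects between; the table and column names are invented for the example, and the real database set logic is internal to Pega.

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;

// Illustrative JDBC sketch of the two strategies that the
// decision/datasets/db/useMergeStatementForUpdates DSS selects between.
// Table and column names are hypothetical.
public class UpsertStrategies {

    // "Merge statement" logic: Postgres 9.5+ syntax, requires a PK constraint on the table.
    public void upsertWithMerge(Connection conn, String id, String value) throws SQLException {
        String sql = "INSERT INTO adm_data (id, value) VALUES (?, ?) "
                   + "ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, id);
            ps.setString(2, value);
            ps.executeUpdate();
        }
    }

    // "Original logic": delete the row, then insert it again.
    public void upsertWithDeleteInsert(Connection conn, String id, String value) throws SQLException {
        try (PreparedStatement del = conn.prepareStatement("DELETE FROM adm_data WHERE id = ?")) {
            del.setString(1, id);
            del.executeUpdate();
        }
        try (PreparedStatement ins = conn.prepareStatement("INSERT INTO adm_data (id, value) VALUES (?, ?)")) {
            ins.setString(1, id);
            ins.setString(2, value);
            ins.executeUpdate();
        }
    }
}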

INC-180246 · Issue 699700

Support for apostrophe added to keyword tokenization

Resolved in Pega Version 8.7.1

A keyword containing an apostrophe was not detected properly in the text extraction model. This has been resolved by updating the annotator used for tokenization.
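A minimal sketch of apostrophe-aware tokenization, assuming a simple regex-based tokenizer rather than Pega's actual annotator: a word with an internal apostrophe, such as don't or O'Brien, is kept as a single token so keyword matching can find it.

import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Minimal illustration only: treat a word containing an internal apostrophe
// (e.g. "don't", "O'Brien") as a single token so keyword matching can find it.
public final class ApostropheTokenizer {

    // Letters/digits, optionally joined by apostrophes: don't, O'Brien, rock'n'roll
    private static final Pattern TOKEN = Pattern.compile("\\p{L}[\\p{L}\\p{N}]*(?:'[\\p{L}\\p{N}]+)*");

    private ApostropheTokenizer() { }

    public static List<String> tokenize(String text) {
        List<String> tokens = new ArrayList<>();
        Matcher m = TOKEN.matcher(text);
        while (m.find()) {
            tokens.add(m.group());
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("Don't split O'Brien's name"));
        // prints: [Don't, split, O'Brien's, name]
    }
}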

INC-193399 · Issue 688115

DSS added to handle merges with lower versions of Postgres

Resolved in Pega Version 8.7.1

After update, executing a batch campaign with a volume constraint resulted in the second data flow, DF_Wait, failing with the error message "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used deletes and inserts. Depending on the version of Postgres, the generated SQL for a merge statement differs: the "INSERT … ON CONFLICT … UPDATE" syntax is generated for Postgres 9.5+ when there is a PK constraint defined for the DB table; otherwise the complex UPSERT statement (old syntax) is generated, as was the case in this issue. This is a known issue in the Postgres server software, which misinterprets the number of columns involved and mistakenly counts them twice; as a result, the actual maximum number of columns allowed is only half of the official limit (1664), and the same UPSERT statement does not cause the "exceeds limit" exception if there are 832 or fewer columns. To resolve this, an option has been provided to select between the original logic (deletes and inserts) and the merge-statement logic by way of the DSS decision/datasets/db/useMergeStatementForUpdates. Setting it to "true" uses the merge-statement logic, and setting it to "false" uses deletes and inserts. When the DSS is not defined, the default is "true" and the system uses merge statements in the form preferred by Postgres 9.5+.

INC-193632 · Issue 679172

Cassandra driver metrics exposed for performance troubleshooting

Resolved in Pega Version 8.7.1

By default, Cassandra driver metrics are now enabled. They can be disabled by setting the dnode/disable_driver_metrics prconfig parameter.

INC-193847 · Issue 695974

DSS added to allow masking of subjectID in alerts

Resolved in Pega Version 8.7.1

To allow customizing whether or not a subjectID is included in alerts, a DSS has been added to conditionally mask the subjectID from being logged. To use this, set the "alerts/maskIHsubjectID" DSS in the Pega-DecisionEngine ruleset to true to hide the pySubjectID.
