
Resolved Issues

View the resolved issues for a specific Platform release.

Go to download resolved issues by patch release.

Browse release notes for a selected Pega version.

NOTE: Enter just the Case ID number (SR or INC) to find the associated Support Request.

Please note: beginning with the Pega Platform 8.7.4 patch, resolved issues have moved to the Support Center.

INC-161604 · Issue 631764

Corrected unreleased database connections

Resolved in Pega Version 8.6

When a custom activity called Dataset-Execute on a database table dataset while the DSS Pega-Engine.prconfig/classmap/usemergestatement/default was set to false, the prepared statement for the database table dataset failed, and the resulting exception prevented the database connection from being released. This has been corrected.
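
For reference, the setting named above is a Dynamic System Setting; as a sketch, its parts as reported in this entry are:

    Owning ruleset:  Pega-Engine
    Setting purpose: prconfig/classmap/usemergestatement/default
    Value:           false  (the value that triggered the failure)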

INC-161604 · Issue 631762

Node stability improved when adding relevant records

Resolved in Pega Version 8.6

A node was going down after exhausting the database connection pool whenever an inbound service call was invoked. This was traced to several relevant-record database queries being issued within a short time to mark the 'when' rules referenced in the proposition filter rule as relevant records. This has been resolved by making the addition of 'when' rules as Relevant Records conditional on a DSS and by no longer adding auto-generated 'when' rules at all.

INC-163723 · Issue 633441

Queue Processors made more robust

Resolved in Pega Version 8.6

After upgrade, multiple queue processors were not running as expected, and attempting to restart them generated an error. Investigation showed that the real-time data flow runs were not picking up or accepting assignments because the local node believed it was still processing data. A race condition in synchronizing the state of multiple threads left the queue processors stuck in an initializing state: the data flow engine considered the run to still have threads running when all of its threads had already stopped. To resolve this, the callback handling has been simplified and made more robust. In addition, in some cases the data flow leader node believed the service nodes had not accepted assignments even when they had. This occurred when many runs and nodes were involved, and was traced to an implicit limit on the NativeSQL query used to read which assignments had been accepted. To resolve this, the key-value store in the Service Registry has been modified to allow querying more than 500 entries at once.

INC-164581 · Issue 634154

Import UI option updated to handle data import for upgrades

Resolved in Pega Version 8.6

After upgrade from Pega 8.1 to 8.5, importing data into datasets using the Actions > Import UI option was not working. This was due to the deprecation of the save operation previously used by the import; it has been resolved by introducing a new save operation in the import to handle this scenario.

INC-166354 · Issue 637300

Queue Processors made more robust

Resolved in Pega Version 8.6

After upgrade, multiple queue processors were not running as expected, and attempting to restart them generated an error. Investigation showed that the real-time data flow runs were not picking up or accepting assignments because the local node believed it was still processing data. A race condition in synchronizing the state of multiple threads left the queue processors stuck in an initializing state: the data flow engine considered the run to still have threads running when all of its threads had already stopped. To resolve this, the callback handling has been simplified and made more robust. In addition, in some cases the data flow leader node believed the service nodes had not accepted assignments even when they had. This occurred when many runs and nodes were involved, and was traced to an implicit limit on the NativeSQL query used to read which assignments had been accepted. To resolve this, the key-value store in the Service Registry has been modified to allow querying more than 500 entries at once.

SR-69015 · Issue 619995

Unescaping characters implemented for expressions

Resolved in Pega Version 8.3.6, Resolved in Pega Version 8.4.4, Resolved in Pega Version 8.5.3, Resolved in Pega Version 8.6

An issue where expression builder statements were evaluated differently at runtime than during testing has been resolved. Pega Platform expressions with string literals (that is, sequences of characters enclosed in quotation marks) now unescape characters in strategy shapes such as Set Property or Filter.
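
For example, in a Set Property shape that assigns a string literal containing an escape sequence (the property name here is hypothetical):

    .Greeting = "Hello\nWorld"

the \n is now unescaped to a line break at runtime, matching the result produced when the expression is tested.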

INC-203994 · Issue 698853

DSS added to handle merges with lower versions of Postgres

Resolved in Pega Version 8.7.1

After update, executing a batch campaign with a volume constraint resulted in the second data flow, DF_Wait, failing with the error "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used deletes and inserts. Depending on the version of Postgres, the generated SQL for a merge statement differs: the "INSERT … ON CONFLICT … UPDATE" syntax is generated for Postgres 9.5+ when there is a PK constraint defined for the database table; otherwise, the complex UPSERT statement (the old syntax) is generated, as was the case in this issue. A known issue in the Postgres server software causes it to misinterpret the number of columns involved in that statement, mistakenly counting them twice, so the actual maximum number of columns allowed is only half of the official limit of 1664; the same UPSERT statement does not cause the "exceeds limit" exception when there are 832 or fewer columns in the statement. To resolve this, an option has been provided to select between the original logic (deletes and inserts) and the merge-statement logic by way of the DSS decision/datasets/db/useMergeStatementForUpdates: setting it to "true" uses the merge-statement logic, and setting it to "false" uses deletes and inserts. When the DSS is not defined, the default is "true" and the system uses merge statements in the form preferred by Postgres 9.5+.
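
For illustration, a minimal sketch of the merge form generated for Postgres 9.5+ when a PK constraint exists (table and column names are hypothetical):

    INSERT INTO customer_responses (customer_id, offer, outcome)
    VALUES ('C-100', 'GoldCard', 'Accepted')
    ON CONFLICT (customer_id) DO UPDATE
      SET offer = EXCLUDED.offer,
          outcome = EXCLUDED.outcome;

The older UPSERT form generated otherwise is the one subject to the miscounted column limit described above.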

INC-180246 · Issue 699700

Support for apostrophe added to keyword tokenization

Resolved in Pega Version 8.7.1

A keyword containing an apostrophe was not detected properly in a text extraction model. This has been resolved by updating the annotator used in tokenization.

INC-193399 · Issue 688115

DSS added to handle merges with lower versions of Postgres

Resolved in Pega Version 8.7.1

After update, executing a batch campaign with a volume constraint resulted in the second data flow, DF_Wait, failing with the error "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used deletes and inserts. Depending on the version of Postgres, the generated SQL for a merge statement differs: the "INSERT … ON CONFLICT … UPDATE" syntax is generated for Postgres 9.5+ when there is a PK constraint defined for the database table; otherwise, the complex UPSERT statement (the old syntax) is generated, as was the case in this issue. A known issue in the Postgres server software causes it to misinterpret the number of columns involved in that statement, mistakenly counting them twice, so the actual maximum number of columns allowed is only half of the official limit of 1664; the same UPSERT statement does not cause the "exceeds limit" exception when there are 832 or fewer columns in the statement. To resolve this, an option has been provided to select between the original logic (deletes and inserts) and the merge-statement logic by way of the DSS decision/datasets/db/useMergeStatementForUpdates: setting it to "true" uses the merge-statement logic, and setting it to "false" uses deletes and inserts. When the DSS is not defined, the default is "true" and the system uses merge statements in the form preferred by Postgres 9.5+.
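
For reference, the selector described above is a Dynamic System Setting; a sketch of its parts (the owning ruleset is not named in this entry):

    Setting purpose: decision/datasets/db/useMergeStatementForUpdates
    Value:           true   (merge statements; also the default when the DSS is not defined)
    Value:           false  (deletes and inserts)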

INC-193632 · Issue 679172

Cassandra driver metrics exposed for performance troubleshooting

Resolved in Pega Version 8.7.1

Cassandra driver metrics are now enabled by default. Metrics can be disabled by setting the dnode/disable_driver_metrics prconfig parameter.
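
A minimal sketch of the corresponding prconfig.xml entry, assuming the standard env-entry syntax and a boolean value (the value shown is an assumption):

    <env name="dnode/disable_driver_metrics" value="true" />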
