Resolved Issues

View the resolved issues for a specific Platform release.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

SR-B70652 · Issue 325760

Read operations updated for Datastax 3.1.x use

Resolved in Pega Version 7.3.1

In the 3.1.x Datastax driver, reads are no longer retried by default. Read operations have therefore been explicitly marked as idempotent so that the Datastax driver will retry timed-out reads.
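
As a general illustration of the pattern (a minimal sketch against the DataStax Java driver 3.x API, not Pega's internal code; the contact point, keyspace, and table are placeholders), a statement can be marked idempotent so the driver is allowed to retry it after a read timeout:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class IdempotentReadExample {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                // In driver 3.x, only statements marked idempotent are eligible for
                // retry after a read timeout; unmarked statements are not retried.
                Statement read = new SimpleStatement("SELECT value FROM my_ks.my_table WHERE id = 42")
                        .setIdempotent(true);
                ResultSet rs = session.execute(read);
                System.out.println(rs.one());
            }
        }
    }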

SR-B70652 · Issue 325924

Timeout error resolved for write future

Resolved in Pega Version 7.3.1

With retries enabled, the write future was timing out before retries could be completed. This has been fixed by removing the timeout on the write future, since timeouts will instead be caught using the Datastax driver's exceptions.
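
To illustrate the shape of the fix (a minimal sketch against the DataStax Java driver 3.x API, not Pega's internal code; the keyspace, table, and values are placeholders), the write future is left unbounded and the driver's own timeout exception is handled instead:

    import com.datastax.driver.core.ResultSetFuture;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.exceptions.WriteTimeoutException;

    public class WriteFutureExample {
        static void write(Session session) {
            ResultSetFuture future = session.executeAsync(
                    new SimpleStatement("INSERT INTO my_ks.my_table (id, value) VALUES (42, 'v')"));
            try {
                // No application-level timeout on the future: block until the driver
                // either succeeds (possibly after retries) or raises its own exception.
                future.getUninterruptibly();
            } catch (WriteTimeoutException e) {
                // Driver-level write timeouts surface here once retries are exhausted.
                System.err.println("Write timed out: " + e.getMessage());
            }
        }
    }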

SR-B74934 · Issue 327017

Facebook connector refined to handle page deletion while the AADrivetech connector is active

Resolved in Pega Version 7.3.1

A Facebook connector trying to connect to an invalid page was causing out-of-memory errors. In this scenario, a Facebook page that was no longer required was deleted, but the AADrivetech Facebook connector was left running. In its attempts to contact the deleted page and retrieve the getFrom parameter, which was now null, the connector consumed resources on the node, leading to the outage. To solve this, the code has been modified so that it does not expect the user name to always come from Facebook; instead, an anonymous name is used if the page disappears.
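
The class and field names below are purely hypothetical (this is not the actual connector code); the sketch only shows the defensive pattern of substituting an anonymous name when the page's author information is no longer available:

    // Hypothetical sketch of the fallback pattern; not the actual Facebook connector classes.
    final class PostAuthorResolver {
        private static final String ANONYMOUS = "Anonymous";

        /** Returns the author name from the post's "from" field, or a placeholder
         *  when the page has been deleted and the field is null. */
        static String resolveAuthor(java.util.Map<String, Object> post) {
            Object from = post.get("from");                // null if the page was deleted
            if (from instanceof java.util.Map) {
                Object name = ((java.util.Map<?, ?>) from).get("name");
                if (name != null) {
                    return name.toString();
                }
            }
            return ANONYMOUS;                              // avoid endless retries and memory growth
        }
    }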

SR-B76526 · Issue 326533

Backwards compatibility enhancement for @when() validation

Resolved in Pega Version 7.3.1

After upgrade, a Strategy rule that referenced a 'when' rule with the syntax @when(isOnlineApplication) failed design-time validation, reporting that the when rule was not found in the SR class when it should instead have been resolved against the customer/Applies To class. The root cause was that the validation context of @when() switched from the Applies To class to the Step Page class due to a change in the core engine: expression parsing for when-rule calls was changed in release 7.2 so that the Step Page class, rather than the Applies To class, is used to validate the existence of the rule. Because a Strategy rule does not push or pop a stack frame (for performance reasons), the Step Page on the stack for a Strategy is always the same as the Primary page. For greater compatibility, the system now sets the PageContextClass to the Applies To class so the expression parser can validate the configuration that is expected at run time.

SR-B69409 · Issue 317548

CLOB type handling enhanced in Marketing

Resolved in Pega Version 7.3.1

Pega Marketing uses the out-of-the-box DSM activity pxRunDDFWithProgressPage to trigger data flow runtime. After upgrade in DB2 environments, the data flow execution completed but the activity returned a failure. It was found that if a CLOB column was present in the filter of a NativeSQL object invoked from an activity, the status contained an error message even though the functionality was fine. To ensure consistent behavior, CLOB has been added to the supported data types so that the severity of the exception can be lowered and no error message is shown in the activity's status.

INC-203994 · Issue 698853

DSS added to handle merges with lower versions of Postgres

Resolved in Pega Version 8.7.1

After update, executing a batch campaign with a volume constraint resulted in the second data flow, DF_Wait, failing with the error message "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used deletes and inserts. Depending on the version of Postgres, the generated SQL for a merge statement is different: the "INSERT … ON CONFLICT … UPDATE" syntax is generated for Postgres 9.5+ and when there is a primary key constraint defined for the database table; otherwise, the complex UPSERT statement (old syntax) is generated, as was the case in this issue. There is a known issue in the Postgres server software where it misinterprets the number of columns involved in that statement, mistakenly counting the columns twice. As a result, the actual maximum number of columns allowed is only half of the official limit (1664), and the same UPSERT statement does not cause the "exceeds limit" exception if there are 832 or fewer columns in the statement. To resolve this, an option has been provided to select between the original logic (deletes and inserts) and the merge-statement logic by way of the DSS decision/datasets/db/useMergeStatementForUpdates. Setting it to "true" uses the merge-statement logic, and setting it to "false" uses deletes and inserts. When the DSS is not defined, the default is "true" and the system uses merge statements in the form preferred by Postgres 9.5+.
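
For illustration only (a minimal JDBC sketch with placeholder table, column, and connection details, not the SQL that Pega generates), this is the general Postgres 9.5+ "INSERT … ON CONFLICT … DO UPDATE" form that the merge-statement logic relies on when a primary key constraint exists:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;

    public class UpsertExample {
        public static void main(String[] args) throws Exception {
            // Postgres 9.5+ upsert: one statement instead of separate DELETE + INSERT.
            String upsert =
                "INSERT INTO customer_data (customer_id, segment) VALUES (?, ?) " +
                "ON CONFLICT (customer_id) DO UPDATE SET segment = EXCLUDED.segment";
            try (Connection conn = DriverManager.getConnection(
                     "jdbc:postgresql://localhost:5432/mydb", "user", "password");
                 PreparedStatement ps = conn.prepareStatement(upsert)) {
                ps.setString(1, "C-1001");
                ps.setString(2, "gold");
                ps.executeUpdate();
            }
        }
    }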

INC-180246 · Issue 699700

Support for apostrophe added to keyword tokenization

Resolved in Pega Version 8.7.1

A keyword containing an apostrophe was not detected properly in the text extraction model. This has been resolved by updating the annotator used in tokenization.

INC-193399 · Issue 688115

DSS added to handle merges with lower versions of Postgres

Resolved in Pega Version 8.7.1

After update, executing a batch campaign with a volume constraint resulted in the second data flow, DF_Wait, failing with the error message "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used deletes and inserts. Depending on the version of Postgres, the generated SQL for a merge statement is different: the "INSERT … ON CONFLICT … UPDATE" syntax is generated for Postgres 9.5+ and when there is a primary key constraint defined for the database table; otherwise, the complex UPSERT statement (old syntax) is generated, as was the case in this issue. There is a known issue in the Postgres server software where it misinterprets the number of columns involved in that statement, mistakenly counting the columns twice. As a result, the actual maximum number of columns allowed is only half of the official limit (1664), and the same UPSERT statement does not cause the "exceeds limit" exception if there are 832 or fewer columns in the statement. To resolve this, an option has been provided to select between the original logic (deletes and inserts) and the merge-statement logic by way of the DSS decision/datasets/db/useMergeStatementForUpdates. Setting it to "true" uses the merge-statement logic, and setting it to "false" uses deletes and inserts. When the DSS is not defined, the default is "true" and the system uses merge statements in the form preferred by Postgres 9.5+.

INC-193632 · Issue 679172

Cassandra driver metrics exposed for performance troubleshooting

Resolved in Pega Version 8.7.1

Cassandra driver metrics are now enabled by default. Metrics can be disabled by setting the dnode/disable_driver_metrics prconfig parameter.
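
For example (assuming the standard prconfig.xml entry format; the exact placement within your configuration is environment-specific), the metrics can be switched off with an entry such as:

    <env name="dnode/disable_driver_metrics" value="true" />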

INC-193847 · Issue 695974

DSS added to allow masking of subjectID in alerts

Resolved in Pega Version 8.7.1

To allow customizing whether or not a subjectID is included in alerts, a DSS has been added to conditionally mask the subjectID from being logged. To use this, set the "alerts/maskIHsubjectID" DSS in the Pega-DecisionEngine ruleset to true to hide the pySubjectID.
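
As a sketch (the field labels follow the usual Dynamic System Settings record form in Dev Studio and may vary by version), the DSS record would look like:

    Owning Ruleset:  Pega-DecisionEngine
    Setting Purpose: alerts/maskIHsubjectID
    Value:           true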
