
Resolved Issues

View the resolved issues for a specific Platform release.


Pega Platform Resolved Issues for 8.1 and newer are now available on the Support Center.

SR-A88481 · Issue 256518

VBD starts with DDS configured to use external Cassandra

Resolved in Pega Version 7.2.2

If DDS is configured with internal Cassandra and is then switched to external, DDS reports the NORMAL state and VBD can start. However, when starting from scratch with DDS configured to use external Cassandra, DDS was failing validation and reporting the CLIENT state, so VBD was unable to start because of VBD's check that at least one DDS node is in the NORMAL state before starting. To address this, the validation code that checks the DDS state when VBD is starting has been removed, and the system instead relies on the DSM service layer to ensure that DDS is functional before VBD can start.
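
As a rough illustration only: the startup guard described above amounted to checking that at least one DDS node reported the NORMAL state before allowing VBD to come up. The sketch below uses hypothetical names (NodeState, VbdStartupGuard) and is not the actual Pega implementation, which now defers this responsibility to the DSM service layer.

    // Hypothetical sketch of the removed VBD startup guard: refuse to start
    // unless at least one DDS node reports the NORMAL state.
    import java.util.List;

    enum NodeState { NORMAL, CLIENT, UNREACHABLE }

    final class VbdStartupGuard {

        // Previously, VBD would only start when this returned true; the check
        // has been removed and the DSM service layer now guarantees that DDS
        // is functional before VBD starts.
        static boolean anyDdsNodeNormal(List<NodeState> ddsNodeStates) {
            return ddsNodeStates.stream().anyMatch(s -> s == NodeState.NORMAL);
        }

        public static void main(String[] args) {
            List<NodeState> states = List.of(NodeState.CLIENT, NodeState.NORMAL);
            System.out.println("VBD may start: " + anyDdsNodeNormal(states));
        }
    }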

SR-A100120 · Issue 266448

Resolved loop in batch insertion

Resolved in Pega Version 7.2.2

If a batch insertion is not successful for any reason (for example, a duplicate fact ID or a database crash), the system should retry five times before an error is thrown. An issue was found in the retry logic that caused a flag, once initialized to 'true', to never be set back to 'false', leading to a loop that inserted duplicate data. This has been corrected by setting the retry flag to false once the insertion is successful (failed=false) so that the loop exits.
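
A minimal sketch of the corrected retry pattern, assuming a generic insertBatch operation (the class and method names here are illustrative, not the actual Pega internals):

    // Illustrative retry loop for a batch insert: retry up to 5 times and
    // clear the failure flag on success so the loop actually exits (the
    // original bug left the flag set to true forever, re-inserting the batch).
    import java.util.List;

    final class BatchInsertRetry {
        private static final int MAX_RETRIES = 5;

        static void insertWithRetry(List<String> batch) {
            boolean failed = true;
            int attempts = 0;
            while (failed) {
                if (attempts >= MAX_RETRIES) {
                    throw new IllegalStateException(
                        "Batch insert failed after " + MAX_RETRIES + " attempts");
                }
                attempts++;
                try {
                    insertBatch(batch);  // may throw on duplicate fact ID, DB crash, etc.
                    failed = false;      // the fix: reset the flag so the loop exits
                } catch (RuntimeException e) {
                    // leave failed = true and retry
                }
            }
        }

        // Placeholder for the real database insert.
        static void insertBatch(List<String> batch) { /* ... */ }
    }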

SR-A100221 · Issue 266644

Resolved loop in batch insertion

Resolved in Pega Version 7.2.2

If a batch insertion is not successful for any reason (for example, a duplicate fact ID or a database crash), the system should retry five times before an error is thrown. An issue was found in the retry logic that caused a flag, once initialized to 'true', to never be set back to 'false', leading to a loop that inserted duplicate data. This has been corrected by setting the retry flag to false once the insertion is successful (failed=false) so that the loop exits.

SR-A95518 · Issue 264095

Resolved loop in batch insertion

Resolved in Pega Version 7.2.2

If a batch insertion is not successful for any reason (for example, a duplicate fact ID or a database crash), the system should retry five times before an error is thrown. An issue was found in the retry logic that caused a flag, once initialized to 'true', to never be set back to 'false', leading to a loop that inserted duplicate data. This has been corrected by setting the retry flag to false once the insertion is successful (failed=false) so that the loop exits.

SR-A87133 · Issue 255127

Allowed import of RAP containing ADM

Resolved in Pega Version 7.2.2

Import of a Pega Marketing .rap file containing Adaptive models failed if an ADM host was not available during the import. This was caused by the system checking whether the server already had that Rule (configuration) before processing the save request, and has been resolved by adding an extra test in the pxOnDoSave activity so that existing configurations are not checked when no ADM node is configured.

SR-A87739 · Issue 257135

BeanShell upgraded for security

Resolved in Pega Version 7.2.2

In order to address a potential security vulnerability in BeanShell that could be exploited for remote code execution in applications that have BeanShell on their classpath (CVE-2016-2510), BeanShell has been upgraded to org.apache-extras.beanshell:bsh:2.0b6.
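
For applications that manage BeanShell as a direct Maven dependency, the upgraded coordinate given above corresponds to a dependency entry along these lines (shown only as a sketch; the Pega distribution bundles the upgraded library itself):

    <!-- BeanShell 2.0b6, which addresses CVE-2016-2510 -->
    <dependency>
        <groupId>org.apache-extras.beanshell</groupId>
        <artifactId>bsh</artifactId>
        <version>2.0b6</version>
    </dependency>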

SR-A102021 · Issue 268416

Cassandra keyspace configuration updated

Resolved in Pega Version 7.2.2

In order to better support external Cassandra instances and the use of DDS-based Cassandra when the logic is running from a non-DDS node, the datacenter name is now read from Cassandra using the Datastax driver rather than JMX. The Datastax driver will request the information across the network if needed, rather than trying to use a JMX connection to Cassandra on the localhost (which would fail on a non-DDS node, where no Cassandra is running and therefore no JMX port is open). In addition, creation of the keyspaces is now performed on startup of the first DDS instance, prior to any other DSM services running. This prevents the other services from attempting to create the data keyspace, since it will already exist.
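
As a sketch of the approach described (not Pega's internal code), the DataStax Java driver can report each host's datacenter over the network; the contact point and port below are placeholder assumptions:

    // Read datacenter names from cluster metadata with the DataStax Java
    // driver, instead of a local JMX connection that would not exist on a
    // non-DDS node.
    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Host;

    public class DatacenterLookup {
        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder()
                    .addContactPoint("cassandra.example.com")  // external or DDS-node address
                    .withPort(9042)
                    .build()) {
                // The driver fetches cluster metadata across the network, so
                // this works even where no local Cassandra/JMX is running.
                for (Host host : cluster.getMetadata().getAllHosts()) {
                    System.out.println(host.getAddress()
                            + " is in datacenter " + host.getDatacenter());
                }
            }
        }
    }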

SR-A93286 · Issue 270071

Data flow shape batch size increased from 1 to 250

Resolved in Pega Version 7.2.2

In order to support large and complex systems, the data flow shape batch size has been increased from 1 to 250.

SR-A93286 · Issue 265355

Data flow shape batch size increased from 1 to 250

Resolved in Pega Version 7.2.2

In order to support large and complex systems, the data flow shape batch size has been increased from 1 to 250.

SR-A76054 · Issue 251761

Enhancement added to bulk delete ADM models

Resolved in Pega Version 7.2.2

In order to support the deletion of ADM models in bulk, an enhancement has been added to remove unwanted models from the database when a list of them is given as input. The new activity is pxDeleteModelsByCriteria, which applies to DSMPublicAPI-ADM. The pzInsKey of this activity is RULE-OBJ-ACTIVITY DSMPUBLICAPI-ADM PXDELETEMODELSBYCRITERIA #20160608T102848.043 GMT, and it was created in ruleset Pega-DecisionArchitect:07-10-16 on a 7.1.7 HFIX system.

The usage text, explaining the activity, is as follows: "Used to delete ADM models in bulk. Models to be deleted are determined by the criteria selected by this Activity's parameters; a model is deleted if it matches all selected criteria. PLEASE NOTE: the activity is not constrained by Application, only by the criteria provided as parameters. Therefore, in most cases you will probably want to constrain by the 'applies to' class in addition to the other criteria. Please see the tooltip / description of each parameter for more information on their usage. The integer output parameter 'NumberDeleted' returns the number of models that were successfully deleted."
