Resolved Issues

View the resolved issues for a specific Platform release.

SR-B3872 · Issue 273277

Cassandra partitioning attempts optimal size for efficiency

Resolved in Pega Version 7.2.2

Due to the way that Cassandra partitioning worked, it was possible that near the end of each data flow execution many of the threads would run out of work, leaving the remaining threads to complete all of the outstanding work. This created a period where only one or two threads were running and all other threads were idle, because the next data flow could not start until the current data flow had finished. The root cause is the nature of Cassandra token distribution, described at http://www.datastax.com/dev/blog/token-allocation-algorithm, which makes some partitions very small and others very large. To resolve this, the system now finds an optimal size for Cassandra token ranges and splits large ones into pieces so that all of the token ranges are approximately the same size.
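
As a rough illustration of the splitting idea (not Pega's actual implementation; the class and method names below are hypothetical), oversized token ranges can be divided into near-equal pieces so every worker thread receives a comparable amount of work:

import java.math.BigInteger;
import java.util.ArrayList;
import java.util.List;

// Hypothetical token range [start, end) on the Cassandra token ring.
final class TokenRange {
    final BigInteger start, end;
    TokenRange(BigInteger start, BigInteger end) { this.start = start; this.end = end; }
    BigInteger size() { return end.subtract(start); }
}

final class RangeSplitter {
    // Split any range larger than targetSize into near-equal pieces, so no
    // thread is left holding one oversized partition at the end of a run.
    // Sketch only: assumes non-wrapping ranges and a piece count that fits an int.
    static List<TokenRange> split(List<TokenRange> ranges, BigInteger targetSize) {
        List<TokenRange> out = new ArrayList<>();
        for (TokenRange r : ranges) {
            BigInteger[] qr = r.size().divideAndRemainder(targetSize);
            int pieces = qr[0].intValue() + (qr[1].signum() > 0 ? 1 : 0);
            if (pieces <= 1) { out.add(r); continue; }
            BigInteger step = r.size().divide(BigInteger.valueOf(pieces));
            BigInteger cursor = r.start;
            for (int i = 0; i < pieces; i++) {
                BigInteger next = (i == pieces - 1) ? r.end : cursor.add(step);
                out.add(new TokenRange(cursor, next));
                cursor = next;
            }
        }
        return out;
    }
}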

SR-A23968 · Issue 249991

Cassandra version updated to correct JVM crash

Resolved in Pega Version 7.2.2

Enabling D-Node was causing a JVM crash on the Solaris 64-bit JVM. This was an Apache issue related to a flaw in the version of Cassandra used in earlier releases, and it has been resolved by upgrading Cassandra to a version that contains the fix. In 7.2.1 and later releases, Cassandra no longer runs embedded as part of PRPC; instead it is started as an external process.

SR-A86704 · Issue 255597

Campaign stage error resolved for complex configurations

Resolved in Pega Version 7.2.2

If a Campaign data construction is very complex and involves many other data sets and strategies that add additional data to Customer instances, a null pointer exception was thrown if a secondary data flow with a strategy shape was initialized twice without executing a strategy. To support these complex configurations, the code has been updated to allow the initialization code to be executed twice in a row with similar results.
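
A minimal sketch of the idempotent-initialization pattern the fix implies, with hypothetical names (the actual strategy-shape code is internal to Pega):

// Illustrative only: initialization that tolerates being called twice
// in a row before any strategy has executed.
final class StrategyShapeInit {
    private Object context;

    // Idempotent: a second call before execution rebuilds or reuses the
    // same state instead of dereferencing half-built state and throwing
    // a null pointer exception.
    void initialize() {
        if (context == null) {
            context = buildContext();
        }
    }

    private Object buildContext() {
        return new Object(); // stands in for the real strategy context
    }
}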

SR-A88481 · Issue 256518

VBD starts with DDS configured to use external Cassandra

Resolved in Pega Version 7.2.2

If DDS is configured with internal Cassandra and then switched to external, DDS reports state NORMAL and VBD can start. However, when starting from scratch with DDS configured to use external Cassandra, DDS failed validation and reported state CLIENT, so VBD was unable to start because of VBD's check that at least one DDS node is in the NORMAL state before it starts. To address this, the validation code that checks the DDS state when VBD is starting has been removed; the system now relies on the DSM service layer to ensure that DDS is functional before VBD can start.
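
For illustration only, the kind of startup gate that was removed might look like this (hypothetical names; the real check lived in VBD's startup path):

import java.util.List;

enum DdsState { NORMAL, CLIENT, UNKNOWN }

final class VbdStartupGate {
    // The removed pre-check: refuse to start unless at least one DDS node
    // reports NORMAL. With a fresh external Cassandra the nodes report
    // CLIENT, so this gate blocked startup; the fix drops the gate and
    // defers to the DSM service layer to guarantee DDS is functional.
    static boolean canStart(List<DdsState> ddsNodes) {
        return ddsNodes.stream().anyMatch(s -> s == DdsState.NORMAL);
    }
}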

SR-A100120 · Issue 266448

Resolved loop in batch insertion

Resolved in Pega Version 7.2.2

If a batch insertion fails for any reason (for example, a duplicate fact ID or a database crash), the system should retry five times before throwing an error. An issue was found in the retry logic: a flag, once initialized to 'true', was never set back to 'false', leading to a loop that inserted duplicate data. This has been corrected by setting the retry flag to false once insertion is successful (failed=false) so that the loop exits.
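
A minimal sketch of the corrected retry loop, with hypothetical names; the key line is clearing the flag on success:

// Illustrative retry loop for a batch insert.
final class BatchInserter {
    private static final int MAX_RETRIES = 5;

    void insertWithRetry(Runnable batchInsert) {
        boolean failed = true;
        int attempts = 0;
        while (failed) {
            if (attempts >= MAX_RETRIES) {
                throw new IllegalStateException(
                    "batch insert failed after " + MAX_RETRIES + " attempts");
            }
            attempts++;
            try {
                batchInsert.run();
                // The fix: clear the flag on success so the loop exits.
                // Previously 'failed' stayed true forever, and the same
                // batch was re-inserted repeatedly, duplicating data.
                failed = false;
            } catch (RuntimeException e) {
                // leave failed=true and retry
            }
        }
    }
}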

SR-A100221 · Issue 266644

Resolved loop in batch insertion

Resolved in Pega Version 7.2.2

If a batch insertion fails for any reason (for example, a duplicate fact ID or a database crash), the system should retry five times before throwing an error. An issue was found in the retry logic: a flag, once initialized to 'true', was never set back to 'false', leading to a loop that inserted duplicate data. This has been corrected by setting the retry flag to false once insertion is successful (failed=false) so that the loop exits.

SR-A95518 · Issue 264095

Resolved loop in batch insertion

Resolved in Pega Version 7.2.2

If a batch insertion fails for any reason (for example, a duplicate fact ID or a database crash), the system should retry five times before throwing an error. An issue was found in the retry logic: a flag, once initialized to 'true', was never set back to 'false', leading to a loop that inserted duplicate data. This has been corrected by setting the retry flag to false once insertion is successful (failed=false) so that the loop exits.

SR-A87133 · Issue 255127

Allowed import of RAP containing ADM

Resolved in Pega Version 7.2.2

Import of a Pega Marketing .rap file containing adaptive models failed if an ADM host was not available during the import. This was caused by the system checking whether the server already had that rule (configuration) before processing the save request. It has been resolved by adding an extra test in the pxOnDoSave activity so that the check for existing configurations is skipped when no ADM node is configured.
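
The guard can be sketched as follows (illustrative Java with hypothetical names; the actual change is in the pxOnDoSave activity, not Java source):

// Skip the lookup for an existing ADM configuration when no ADM node is
// configured, so the save no longer requires a reachable ADM host.
final class AdmConfigSave {
    void onDoSave(Object config, boolean admNodeConfigured) {
        if (admNodeConfigured && existingConfigurationFound(config)) {
            return; // already present on the server; nothing to save
        }
        save(config);
    }

    private boolean existingConfigurationFound(Object config) { return false; }
    private void save(Object config) { }
}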

SR-A87739 · Issue 257135

BeanShell upgraded for security

Resolved in Pega Version 7.2.2

In order to address a potential security vulnerability in BeanShell that could be exploited for remote code execution in applications that have BeanShell on their classpath (CVE-2016-2510), BeanShell has been upgraded to org.apache-extras.beanshell:bsh:2.0b6.

SR-A102021 · Issue 268416

Cassandra keyspace configuration updated

Resolved in Pega Version 7.2.2

In order to better support external Cassandra instances, and to support DDS-based Cassandra when the logic runs from a non-DDS node, the datacenter name is now read from Cassandra using the DataStax driver rather than JMX. The DataStax driver requests the information across the network if needed, rather than trying to use a JMX connection to Cassandra on the localhost, which would fail on a non-DDS node because no Cassandra is running there and no JMX port is open. In addition, keyspace creation is now performed on startup of the first DDS instance, before any other DSM services run; this prevents the other services from attempting to create the data keyspace, since it will already exist.
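
A minimal sketch of reading the datacenter name through the DataStax Java driver (3.x API), assuming a reachable contact point; this works from any node because it goes over the driver's network connection rather than localhost JMX:

import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Host;

public final class DatacenterLookup {
    // Connect to the cluster via the given contact point and return the
    // datacenter of the first host in the cluster metadata.
    public static String firstDatacenter(String contactPoint) {
        try (Cluster cluster = Cluster.builder()
                .addContactPoint(contactPoint)
                .build()) {
            for (Host host : cluster.getMetadata().getAllHosts()) {
                return host.getDatacenter();
            }
            return null; // no hosts visible
        }
    }
}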
