
Resolved Issues

View the resolved issues for a specific Platform release.






SR-A90821 · Issue 258447

Delayed Learning data flows use dynamic objects

Resolved in Pega Version 7.2.2

Data flows with delayed learning enabled were failing to run when initiated by a campaign. The root cause was that a single data flow object was created once and then shared between multiple data flow runs on the node. The fix makes the DataFlow object non-static in the DelayedLearning.java class, so each run gets its own instance.
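The shape of this fix can be sketched as follows. This is a minimal, hypothetical illustration (the class and method names below are not the real Pega internals): a static object shared across runs accumulates state from all of them, while a per-run instance keeps each run isolated.

```java
import java.util.HashMap;
import java.util.Map;

class DataFlow {
    // Per-run state; sharing one instance across runs mixes this state together.
    private final Map<String, String> runState = new HashMap<>();

    void record(String runId, String status) { runState.put(runId, status); }
    int stateSize() { return runState.size(); }
}

class DelayedLearningSketch {
    // Before the fix (problematic): one static object shared by every run on the node.
    static final DataFlow SHARED = new DataFlow();

    // After the fix: each data flow run receives its own DataFlow instance.
    static DataFlow newRun() { return new DataFlow(); }
}
```

With per-run instances, state recorded by one run is invisible to another, which is the isolation the non-static change provides.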

SR-A77522 · Issue 253817

DNodeServiceListener ignores unknown data flow status values at startup

Resolved in Pega Version 7.2.2

DNodeServiceListener failed during PRPC startup with the exception "ERROR - Cannot initialize the Data Flow run manager, previously running flows may not be marked failed". This was traced to a corrupted assignment in the data flow that contained an unknown assignment status value (for example, a data flow assignment opened via a report definition for diagnostic purposes can be corrupted when the report definition resets pyAssignmentStatus to null). To better handle this scenario, the data flow run manager now ignores unknown status values during startup.
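The defensive-parsing idea behind this change can be sketched as below. This is a hedged illustration, not the actual Pega code: the enum values and the `parseOrNull` helper are assumptions; the point is that a null or unrecognized persisted status is skipped rather than allowed to abort startup.

```java
// Hypothetical set of data flow assignment statuses.
enum RunStatus { NEW, RUNNING, COMPLETED, FAILED }

class StatusParser {
    // Returns null for null or unrecognized values so the caller can
    // skip the assignment instead of failing initialization.
    static RunStatus parseOrNull(String raw) {
        if (raw == null) return null; // e.g. pyAssignmentStatus reset to null
        try {
            return RunStatus.valueOf(raw.trim().toUpperCase());
        } catch (IllegalArgumentException e) {
            return null; // unknown status: ignore rather than fail startup
        }
    }
}
```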

SR-B3872 · Issue 273277

Cassandra partitioning attempts optimal size for efficiency

Resolved in Pega Version 7.2.2

Due to the way Cassandra partitioning worked, many threads could run out of work near the end of a data flow execution, leaving the remaining threads to complete all of the outstanding work. This created periods where only one or two threads were running and all others were idle, because the next data flow could not start until the current one had finished. The root cause is the nature of Cassandra token distribution, described at http://www.datastax.com/dev/blog/token-allocation-algorithm , which causes some partitions to be very small and others very large. To resolve this, the system now finds an optimal size for Cassandra token ranges and splits large ones into pieces so that all token ranges are approximately the same size.
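The splitting step can be illustrated with a minimal sketch. This is an assumed simplification (real Cassandra tokens span the full signed 64-bit range and the real splitter lives inside the DSM layer): any range larger than a target size is cut into chunks no bigger than that size, so worker threads receive evenly sized units of work.

```java
import java.util.ArrayList;
import java.util.List;

class TokenRangeSplitter {
    // Splits the half-open token range [start, end) into consecutive
    // sub-ranges, each no larger than maxSize tokens.
    static List<long[]> split(long start, long end, long maxSize) {
        List<long[]> ranges = new ArrayList<>();
        long pos = start;
        while (pos < end) {
            long next = Math.min(pos + maxSize, end);
            ranges.add(new long[]{pos, next});
            pos = next;
        }
        return ranges;
    }
}
```

A range of 100 tokens with a maximum chunk of 40 yields three pieces (40, 40, 20), so no single thread is handed a disproportionately large partition.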

SR-A23968 · Issue 249991

Cassandra version updated to correct JVM crash

Resolved in Pega Version 7.2.2

Enabling D-Node was causing a JVM crash on the Solaris 64-bit JVM. This was an Apache issue caused by a flaw in the version of Cassandra used in earlier releases, and it has been resolved by upgrading Cassandra to a version that contains the fix. In 7.2.1 and later releases, Cassandra no longer runs embedded in PRPC; instead, it is started as an external process.

SR-A86704 · Issue 255597

Campaign stage error resolved for complex configurations

Resolved in Pega Version 7.2.2

If a Campaign's data construction was very complex and involved many other data sets and strategies that add additional data to Customer instances, a pointer exception was thrown when a secondary data flow with a strategy shape was initialized twice without executing a strategy. To support these complex configurations, the code has been updated so that the initialization code can be executed twice in a row with the same result.
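The property this fix establishes is idempotent initialization: calling the initialization code a second time must leave the object in the same state as calling it once. A minimal sketch (the class and guard are hypothetical, not the real strategy-shape internals):

```java
class StrategyShapeInit {
    private boolean initialized = false;
    private int initCount = 0;

    // Idempotent: a repeated call before any execution is a harmless no-op
    // instead of dereferencing state that only exists after the first pass.
    void initialize() {
        if (initialized) return;
        initCount++;
        initialized = true;
    }

    int initCount() { return initCount; }
}
```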

SR-A88481 · Issue 256518

VBD starts with DDS configured to use external Cassandra

Resolved in Pega Version 7.2.2

If DDS is configured with internal Cassandra and then switched to external, DDS reports the NORMAL state and VBD can start. However, when starting from scratch with DDS configured to use external Cassandra, DDS failed validation and reported the CLIENT state, so VBD was unable to start because of its check that at least one DDS node is in the NORMAL state. To address this, the validation code that checks the DDS state when VBD starts has been removed; the system now relies on the DSM service layer to ensure that DDS is functional before VBD starts.

SR-A100120 · Issue 266448

Resolved loop in batch insertion

Resolved in Pega Version 7.2.2

If a batch insertion fails for any reason (for example, a duplicate fact ID or a database crash), the system should retry five times before throwing an error. An issue in the retry logic caused a flag, once initialized to 'true', to never be set back to 'false', producing a loop that inserted duplicate data. This has been corrected by setting the retry flag to false once insertion succeeds (failed=false) so that the loop exits.
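The corrected loop shape, as a hedged sketch (the `BatchInserter` class and the `Insert` callback are illustrative stand-ins for the real database call, not Pega's actual code): without the `failed = false` line, a successful insertion would be repeated on every remaining retry, duplicating the batch.

```java
class BatchInserter {
    static final int MAX_RETRIES = 5;

    // Stand-in for the real batch-insert database call; returns true on success.
    interface Insert { boolean attempt(int attemptNumber); }

    // Returns the number of insert attempts actually made, or throws
    // after MAX_RETRIES consecutive failures.
    static int insertWithRetry(Insert insert) {
        boolean failed = true;
        int attempts = 0;
        while (failed && attempts < MAX_RETRIES) {
            attempts++;
            if (insert.attempt(attempts)) {
                failed = false; // the fix: clear the flag so the loop exits
            }
        }
        if (failed) {
            throw new IllegalStateException(
                "batch insert failed after " + MAX_RETRIES + " retries");
        }
        return attempts;
    }
}
```

A batch that succeeds on the third attempt is inserted exactly once, after three attempts; a batch that succeeds immediately makes a single attempt.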

SR-A100221 · Issue 266644

Resolved loop in batch insertion

Resolved in Pega Version 7.2.2

If a batch insertion fails for any reason (for example, a duplicate fact ID or a database crash), the system should retry five times before throwing an error. An issue in the retry logic caused a flag, once initialized to 'true', to never be set back to 'false', producing a loop that inserted duplicate data. This has been corrected by setting the retry flag to false once insertion succeeds (failed=false) so that the loop exits.

SR-A95518 · Issue 264095

Resolved loop in batch insertion

Resolved in Pega Version 7.2.2

If a batch insertion fails for any reason (for example, a duplicate fact ID or a database crash), the system should retry five times before throwing an error. An issue in the retry logic caused a flag, once initialized to 'true', to never be set back to 'false', producing a loop that inserted duplicate data. This has been corrected by setting the retry flag to false once insertion succeeds (failed=false) so that the loop exits.

SR-A87133 · Issue 255127

Allowed import of RAP containing ADM

Resolved in Pega Version 7.2.2

Import of a Pega Marketing .rap file containing Adaptive models failed if an ADM host was not available during import. This was caused by the system checking whether the server already had that Rule (configuration) before processing the save request. It has been resolved by adding an extra test in the pxOnDoSave activity that skips the check for existing configurations when no ADM node is configured.
