Resolved Issues

View the resolved issues for a specific Platform release.



SR-B1679 · Issue 272770

Dataflow pre- and post-activities can be run across nodes

Resolved in Pega Version 7.2.2

An enhancement has been added to allow the execution of pre- and/or post-activities on all data flow nodes by way of the following properties. Note that these properties are not available in the UI and must be set in programmatically created runs:

Data-Decision-DDF-RunOptions.pyPreActivity.pyRunOnAllNodes = true
Data-Decision-DDF-RunOptions.pyPostActivity.pyRunOnAllNodes = true

SR-B5134 · Issue 274372

Smoothed data flow terminations

Resolved in Pega Version 7.2.2

An issue was found with data flows not terminating correctly when encountering an error. This has been fixed by ensuring a work object is marked as stopped when an assignment fails. In addition, a data flow performing a commit could fail with "Interrupted by unexpected service shutdown". This has been corrected by ensuring the system always provides a regular clipboard page, rather than a DSM clipboard page, when loading or saving an assignment or work object from or to the database.

SR-A87606 · Issue 257332

VBD start waits for Cassandra node with keyspace data

Resolved in Pega Version 7.2.2

An issue was found with restarting a DNode when no copies of the VBD keyspace data were present on the available nodes in the Cassandra cluster at VBD startup. VBD uses a replication factor of 3 on its keyspace, meaning at most 3 nodes in the cluster hold full copies of the VBD data. If the cluster was restarted with only Cassandra nodes that happened not to have VBD's data, VBD startup would hang trying to read partition summary data. As a solution, if VBD is started and its Cassandra data is not yet available, the process to load partition summary data will wait until the next time there is VBD activity; once a Cassandra node with VBD's data comes up, the data will be loaded and VBD functionality will be enabled.

Additionally, an intermittent deadlock was discovered when starting 2 VBD nodes in parallel. The deadlock appeared when VBD was initializing its persistence at the same time another thread, triggered by a remote request from the second node, checked whether persistence was initialized. The first thread owned a Hazelcast distributed lock and attempted to use a Supplier to get an instance of an object; the second thread was already inside the Supplier and was waiting for the Hazelcast distributed lock. This deadlock has been fixed.
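The deadlock pattern above can be sketched in miniature. This is a hypothetical illustration, not Pega's actual code: a local `threading.Lock` stands in for the Hazelcast distributed lock, and a memoizing supplier stands in for the Supplier. The fix shown is the general lock-ordering remedy of resolving the supplier's value before taking the second lock, so no thread holds one lock while blocking on the other.

```python
import threading

class MemoizingSupplier:
    """Thread-safe, lazily initialized supplier (illustrative stand-in)."""
    def __init__(self, factory):
        self._factory = factory
        self._lock = threading.Lock()
        self._value = None

    def get(self):
        with self._lock:
            if self._value is None:
                self._value = self._factory()
            return self._value

cluster_lock = threading.Lock()  # stands in for the Hazelcast distributed lock
persistence = MemoizingSupplier(lambda: {"initialized": True})

def init_persistence():
    # Safe ordering: obtain the supplier's value first, then take the
    # cluster lock. Holding the cluster lock while calling get() is what
    # produced the deadlock described above.
    store = persistence.get()
    with cluster_lock:
        store["registered"] = True
    return store

def is_initialized():
    # The remote-request path only touches the supplier, never the cluster lock.
    return persistence.get()["initialized"]

t1 = threading.Thread(target=init_persistence)
t2 = threading.Thread(target=is_initialized)
t1.start(); t2.start()
t1.join(); t2.join()
```

With this ordering the two threads can start in either order without ever waiting on each other's locks.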

SR-A97924 · Issue 264130

Expression Builder grammar updated for better compatibility

Resolved in Pega Version 7.2.2

Backward compatibility problems were found with the translation of OR and AND into || and && due to the added support of "and" and "or" as operators in the new expression builder. The preprocessing and the grammar behind the expression builder have been modified to improve system-wide compatibility.

SR-A102834 · Issue 270083

VBD checks for tables and creates if needed

Resolved in Pega Version 7.2.2

Code has been added so that VBD will always attempt to create its tables if they do not already exist.

SR-A90821 · Issue 258447

Delayed Learning data flows use dynamic objects

Resolved in Pega Version 7.2.2

Data flows with delayed learning enabled were failing to run when initiated by a campaign. The root cause was that a data flow object was created only once and then shared across multiple data flow runs on the node. The fix makes the DataFlow object non-static in the DelayedLearning.java class.

SR-A77522 · Issue 253817

DNodeServiceListener ignores unknown data flow status values at startup

Resolved in Pega Version 7.2.2

DNodeServiceListener failed during PRPC startup with the exception "ERROR - Cannot initialize the Data Flow run manager, previously running flows may not be marked failed". This was traced to a corrupted assignment in the data flow that contained an unknown assignment status value; for example, a data flow assignment opened via report definition for diagnostic purposes can be corrupted by the report definition resetting pyAssignmentStatus to null. To better handle this scenario, the data flow run manager will now ignore unknown status values during startup.
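The defensive pattern described above can be sketched as follows. The status names, class, and function names here are illustrative assumptions, not Pega's actual implementation: unknown or null status values are skipped during recovery instead of aborting startup.

```python
from enum import Enum

class AssignmentStatus(Enum):
    """Illustrative set of known data flow assignment statuses."""
    PENDING = "PENDING"
    IN_PROGRESS = "IN_PROGRESS"
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"

def parse_status(raw):
    """Return a known status, or None for unknown/corrupted values."""
    try:
        return AssignmentStatus(raw)
    except ValueError:
        return None  # unknown status: tolerate instead of failing startup

def recover_runs(assignments):
    """Partition (run_id, raw_status) pairs into recovered and skipped runs."""
    recovered, skipped = [], []
    for run_id, raw_status in assignments:
        status = parse_status(raw_status)
        if status is None:
            skipped.append(run_id)   # e.g. pyAssignmentStatus reset to null
        else:
            recovered.append((run_id, status))
    return recovered, skipped
```

A run with a null or unrecognized status is simply recorded as skipped, so the run manager can still initialize the remaining flows.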

SR-B3872 · Issue 273277

Cassandra partitioning attempts optimal size for efficiency

Resolved in Pega Version 7.2.2

Due to the way that Cassandra partitioning worked, it was possible that at the end of each data flow execution many of the threads would run out of work and leave the remaining threads to complete all of the outstanding work. This created a period where only one or two threads were running and all other threads were idle, because the next data flow could not start until the current data flow had finished. The root cause is the nature of Cassandra token distribution, which can leave some partitions very small and others very large (see http://www.datastax.com/dev/blog/token-allocation-algorithm). To resolve this, the system finds an optimal size for Cassandra token ranges and splits large ones into pieces so that all of the token ranges are approximately the same size.
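The splitting strategy described above can be sketched as follows. This is a hypothetical illustration, not Pega's actual code: any token range larger than a target size is cut into roughly equal sub-ranges, so the partitions handed to worker threads are of comparable size.

```python
def split_token_ranges(ranges, target_size):
    """Split (start, end) token ranges so each sub-range is at most
    roughly target_size and the pieces of a range are nearly equal."""
    result = []
    for start, end in ranges:
        size = end - start
        # Ceiling division: number of pieces so each is <= target_size.
        pieces = max(1, -(-size // target_size))
        step = size / pieces
        for i in range(pieces):
            lo = start + round(i * step)
            hi = end if i == pieces - 1 else start + round((i + 1) * step)
            result.append((lo, hi))
    return result

# A large range is split into equal pieces; small ones pass through unchanged.
even = split_token_ranges([(0, 1000), (0, 90)], target_size=250)
```

Because every oversized range is divided evenly rather than at fixed offsets, no worker thread is left holding one disproportionately large partition at the end of a run.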

SR-A23968 · Issue 249991

Cassandra version updated to correct JVM crash

Resolved in Pega Version 7.2.2

Enabling D-Node was causing a JVM crash on the 64-bit Solaris JVM. This was an Apache issue related to a flaw in the version of Cassandra used in earlier releases, and has been resolved by upgrading Cassandra to a version that contains the fix. In 7.2.1 and later releases, Cassandra no longer runs embedded as part of PRPC; instead it is started as an external process.

SR-A86704 · Issue 255597

Campaign stage error resolved for complex configurations

Resolved in Pega Version 7.2.2

If a Campaign's data construction is very complex and involves many other data sets and strategies that add additional data to Customer instances, a pointer exception was thrown when a secondary data flow with a strategy shape was initialized twice without executing a strategy. To support these complex configurations, the code has been updated so that the initialization code can be executed twice in a row with similar results.
