Resolved Issues

View the resolved issues for a specific Platform release.


Pega Platform Resolved Issues for 8.1 and newer are now available on the Support Center.

SR-B90267 · Issue 338706

DataFlowGenerator class files preserved longer

Resolved in Pega Version 7.4

When creating a campaign, running the generated data flow manually resulted in an illegal argument exception indicating a null property name. This was traced to the use of an overwrite data flow API: class files were generated and then removed immediately after the DataFlow object was constructed. In some cases the Java classloader may delay loading the classes, causing a NoClassDefFoundError because the files had already been removed. To resolve this, the DataFlowGenerator now waits one day before removing the classes.
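The deferred-cleanup approach can be sketched in plain Java with a ScheduledExecutorService. The class and method names below are illustrative, not the actual DataFlowGenerator API, and the delay is parameterized for testing where the fix uses one day:

```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.ScheduledFuture;
import java.util.concurrent.TimeUnit;

public class DeferredClassCleanup {
    private final ScheduledExecutorService cleaner =
            Executors.newSingleThreadScheduledExecutor();

    // Schedule removal of a generated class file after a delay instead of
    // deleting it immediately, so a lazily loading classloader can still find it.
    public ScheduledFuture<?> scheduleCleanup(Path classFile, long delay, TimeUnit unit) {
        return cleaner.schedule(() -> {
            try {
                Files.deleteIfExists(classFile);
            } catch (IOException e) {
                // Best effort: a leftover file is preferable to a NoClassDefFoundError.
            }
        }, delay, unit);
    }

    public void shutdown() {
        cleaner.shutdown();
    }
}
```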

SR-B90756 · Issue 337902

Interaction history lookup fixed

Resolved in Pega Version 7.4

Interaction History load after update was failing when there were multiple Interaction History shapes on a single strategy rule. This was caused by the wrong API being used to parse pxOutcomeTime, and has been fixed.
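A hedged sketch of parsing a compact timestamp of this kind with the thread-safe java.time API; the exact format of pxOutcomeTime and the API used inside the engine are assumptions here, with the sample value chosen only for illustration:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class OutcomeTimeParser {
    // DateTimeFormatter is immutable and thread-safe, so one shared instance
    // can parse a compact timestamp such as "20170312T143000.000"
    // (format assumed for illustration).
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyyMMdd'T'HHmmss.SSS");

    public static LocalDateTime parse(String value) {
        return LocalDateTime.parse(value, FMT);
    }
}
```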

SR-B91617 · Issue 338996

Added handling for VBD backup integrity

Resolved in Pega Version 7.4

VBD queries and inserts were failing after a VBD node was terminated with kill -9. Investigation showed that the VBD partition summary was not being properly propagated to backup partitions after aggregation runs. If the node that owned the partition terminated ungracefully, the backup partition was promoted to primary with inaccurate state details, and subsequent attempts to access the partition failed. This did not occur if the VBD node was shut down in an orderly fashion. To fix this, code has been added to ensure VBD sends proper partition summary data to the backup partitions after aggregation runs.

SR-B91934 · Issue 339644

pzADMInputs made available for serialization

Resolved in Pega Version 7.4

There was an issue with saving pzADMInputs.pzData (a JavaObject) in the blob because the serialize-object option was unset in a final rule. This has been corrected by making pzADMInputs serializable.
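The general failure mode can be reproduced in plain Java: writing an object whose class does not implement Serializable throws NotSerializableException, while adding the marker interface makes the write succeed. The class names below are illustrative stand-ins, not the actual pzADMInputs implementation:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.NotSerializableException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

public class SerializationCheck {
    // Illustrative stand-ins, not the real Pega classes.
    static class PlainInputs { int value = 42; }
    static class SerializableInputs implements Serializable { int value = 42; }

    // Returns true if the object can be written to an object stream,
    // i.e. whether it could be stored in a serialized blob.
    public static boolean canSerialize(Object o) {
        try (ObjectOutputStream out = new ObjectOutputStream(new ByteArrayOutputStream())) {
            out.writeObject(o);
            return true;
        } catch (NotSerializableException e) {
            return false;
        } catch (IOException e) {
            throw new IllegalStateException(e);
        }
    }
}
```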

SR-B92155 · Issue 343548

ADM and Batch Decision functions compile

Resolved in Pega Version 7.4

Trying to compile ADM or batch decision functions was failing with a compilation error. An issue was found in the library that resulted in picking up and compiling all of the previous versions; this has been corrected so that the system picks up the latest version of the batch decisioning library.
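The corrected behavior, selecting only the highest library version rather than every prior one, can be sketched with a numeric dotted-version comparison; the actual library-resolution code in the engine is not shown in the release note, so this is a generic illustration:

```java
import java.util.Arrays;
import java.util.Collection;

public class LatestVersionPicker {
    // Compare dotted version strings numerically, so "7.10" sorts after "7.4"
    // (a plain lexicographic String compare would get this wrong).
    static int compare(String a, String b) {
        int[] x = parse(a), y = parse(b);
        for (int i = 0; i < Math.max(x.length, y.length); i++) {
            int xi = i < x.length ? x[i] : 0;
            int yi = i < y.length ? y[i] : 0;
            if (xi != yi) return Integer.compare(xi, yi);
        }
        return 0;
    }

    static int[] parse(String v) {
        return Arrays.stream(v.split("\\.")).mapToInt(Integer::parseInt).toArray();
    }

    // Pick the single latest version instead of iterating over all of them.
    public static String latest(Collection<String> versions) {
        return versions.stream().max(LatestVersionPicker::compare).orElseThrow();
    }
}
```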

SR-B92155 · Issue 346408

ADM and Batch Decision functions compile

Resolved in Pega Version 7.4

Trying to compile ADM or batch decision functions was failing with a compilation error. An issue was found in the library that resulted in picking up and compiling all of the previous versions; this has been corrected so that the system picks up the latest version of the batch decisioning library.

SR-B92177 · Issue 344720

Catch and log added for incomplete surrogate pairs in parsed tweets

Resolved in Pega Version 7.4

In the pyUpdateSummary activity, using @(Pega-RULES:Page).getXMLOfPage(Primary) for Twitter posts resulted in the intermittent exception "Insufficient input to properly transform characters; no low surrogate". This exception was caused by the Twitter text containing incomplete surrogate pairs (only one part of the surrogate pair is present and the other is missing). To resolve this, code has been added to catch the exception and log the post or tweet information whenever unpaired high/low surrogate characters are encountered.
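The underlying condition can be checked in plain Java: a scan that records the positions of unpaired (lone) high or low surrogate code units, which is exactly what would otherwise break character transformation. This is a generic sketch; the actual activity catches the exception and logs the post rather than scanning up front:

```java
import java.util.ArrayList;
import java.util.List;

public class SurrogateScanner {
    // Returns the indices of lone surrogate code units in the string.
    // A valid pair (high surrogate immediately followed by a low surrogate)
    // is skipped as well-formed text.
    public static List<Integer> loneSurrogates(String text) {
        List<Integer> positions = new ArrayList<>();
        for (int i = 0; i < text.length(); i++) {
            char c = text.charAt(i);
            if (Character.isHighSurrogate(c)) {
                if (i + 1 < text.length() && Character.isLowSurrogate(text.charAt(i + 1))) {
                    i++; // complete pair, skip the low half
                } else {
                    positions.add(i); // high surrogate with no low half
                }
            } else if (Character.isLowSurrogate(c)) {
                positions.add(i); // low surrogate with no preceding high half
            }
        }
        return positions;
    }
}
```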

SR-B93311 · Issue 341565

Resolved service timeout on ADM service node startup

Resolved in Pega Version 7.4

After the ADM commit log had collected a very large amount of information (for example, 15+ GB of ADM responses), the first ADM service node in the cluster failed to start. Because the first ADM node to come up must reconcile the information in its Cassandra caches and database, reading a massive ADM commit log caused a timeout error. Startup has been amended so that read queries do not fail when a large amount of data is stored in the ADM commit log.
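One common shape for this kind of fix, reading a large log in bounded pages so that no single fetch can hit a query timeout, can be sketched generically. The release note does not describe the actual mechanism, so the Cassandra driver specifics and the real ADM reconciliation code are deliberately omitted:

```java
import java.util.List;
import java.util.function.Consumer;

public class PagedReader {
    // Process a large log in fixed-size pages instead of one unbounded read,
    // so each individual fetch stays well under any per-query timeout.
    // Returns the number of pages processed.
    public static <T> int readInPages(List<T> log, int pageSize, Consumer<List<T>> handler) {
        int pages = 0;
        for (int from = 0; from < log.size(); from += pageSize) {
            int to = Math.min(from + pageSize, log.size());
            handler.accept(log.subList(from, to));
            pages++;
        }
        return pages;
    }
}
```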

SR-B93453 · Issue 340456

Cassandra enhancements for multi-datacenter environments

Resolved in Pega Version 7.4

1) Locking has been added to the creation of the Cassandra key/value table to support multi-datacenter configuration of Cassandra in DDS. 2) A new parameter called cassandra_java_home has been added to allow the use of a different Java installation than the one used for the web server. 3) Support has also been added for multi-datacenter configuration by way of the following configuration parameter, which should be set in either prconfig.xml or on the prpc command line, where datacentername is the name of the datacenter configured in the node's cassandra-rackdc.properties file:
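For reference, a Cassandra node's datacenter and rack names come from its cassandra-rackdc.properties file. A minimal example, with placeholder names:

```properties
# cassandra-rackdc.properties — identifies this node's datacenter and rack
# (names are placeholders; they must match the datacenter name used in config)
dc=datacenter1
rack=rack1
```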

SR-B94811 · Issue 342612

Twitter connector exception upgraded to 67

Resolved in Pega Version 7.4

An issue with a Twitter connector not processing new posts was traced to a thread exception that was tagged as a warning instead of fatal. Even though the connectors had stopped, this was not reflected in the dashboard. To correct this, the Pega error code has been upgraded to 67 so that when the connector stops in the background it is reflected in the dashboard and an email is sent.
