
Resolved Issues

View the resolved issues for a specific Platform release.

SR-B94811 · Issue 342612

Twitter connector exception upgraded to 67

Resolved in Pega Version 7.4

An issue with a Twitter connector not processing new posts was traced to a thread exception that was tagged as a warning instead of as fatal: even though the connectors had stopped, the dashboard did not reflect it. To correct this, the Pega error code has been upgraded to 67 so that when a connector stops in the background, the stoppage is reflected in the dashboard and an email notification is sent.

SR-B95655 · Issue 349412

Support added for Apache Kafka

Resolved in Pega Version 7.4

Enhancements have been added to support consumer and producer settings for an Apache Kafka DataSet instance, including connection timeout.
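
As an illustration of the kind of consumer and producer settings involved, here is a minimal sketch using plain Apache Kafka client configuration keys (not Pega's DataSet rule form); the broker address and group id are placeholders:

    // KafkaSettingsSketch.java -- plain Kafka client settings, illustrative only.
    import java.util.Properties;

    public class KafkaSettingsSketch {
        public static void main(String[] args) {
            // Producer-side settings (keys are standard Kafka client configs).
            Properties producer = new Properties();
            producer.put("bootstrap.servers", "broker1:9092");  // placeholder address
            producer.put("request.timeout.ms", "30000");        // connection/request timeout
            producer.put("acks", "all");                        // delivery guarantee
            producer.put("key.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");
            producer.put("value.serializer",
                "org.apache.kafka.common.serialization.StringSerializer");

            // Consumer-side settings.
            Properties consumer = new Properties();
            consumer.put("bootstrap.servers", "broker1:9092");
            consumer.put("group.id", "dataset-reader");         // placeholder group
            consumer.put("request.timeout.ms", "30000");
            consumer.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
            consumer.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

            System.out.println(producer);
            System.out.println(consumer);
        }
    }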

SR-B95915 · Issue 348427

TTL data generation improved

Resolved in Pega Version 7.4

Logic improvements have been made for Data Flows with strategy components and delayed learning to ensure that the correct Java is generated and records are saved with Time To Live (TTL) data. This also allows Cassandra to make more robust use of TTL.
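
As a sketch of what TTL means at the Cassandra level, here is a minimal example using the DataStax Java driver; the keyspace, table, and TTL value are placeholders, not Pega's generated code:

    // TtlSketch.java -- illustrative only; keyspace and table are placeholders.
    import com.datastax.oss.driver.api.core.CqlSession;

    public class TtlSketch {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                // USING TTL tells Cassandra to expire the record automatically
                // after the given number of seconds (here, one day).
                session.execute(
                    "INSERT INTO decisions.results (id, outcome) " +
                    "VALUES (uuid(), 'accepted') USING TTL 86400");
            }
        }
    }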

SR-B96377 · Issues 342936, 342939

Cassandra enhancements for multi-datacenter environments

Resolved in Pega Version 7.4

Several enhancements have been made to support multi-datacenter configuration of Cassandra in DDS:
1) Locking has been added to the creation of the Cassandra key/value table.
2) A new parameter, cassandra_java_home, allows the use of a different Java installation than the one used by the web server.
3) Multi-datacenter configuration is also supported by way of the following configuration parameter, which should be set either in prconfig.xml or on the prpc command line, where datacentername is the name of the datacenter configured in the node's cassandra-rackdc.properties file:
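
For context on the datacentername value above: in Apache Cassandra, each node declares its datacenter and rack membership in its cassandra-rackdc.properties file. A minimal sketch of that file, with illustrative dc and rack values:

    dc=datacentername
    rack=rack1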

SR-B97064 · Issue 345331

Column size for property reference increased

Resolved in Pega Version 7.4

In an adaptive model where predictors have a long clipboard reference due to being located deep within the workpage (e.g., .SubmissionData.PropertyLocation(1).BuildingInfo(1).LeftExposureAndDist), the model could be saved and run, but it did not show up in the Adaptive Analytics portal. This was traced to adaptive model monitoring having a limit of 128 characters for the property reference column, and has been fixed by increasing the column size for pxInsName to 255 characters. In addition, pzInsKey has been increased to 600 characters.

SR-C1507 · Issue 344564

Existing Kafka topic name used for connection

Resolved in Pega Version 7.4

When running a Kafka data flow, Pega was using the dataset name instead of the topic name for the topic connection. This has been fixed; in addition, the first character of the dataset name is now forced to a capital letter to ensure proper matching.

SR-C2352 · Issue 347253

Data join optimized for performance

Resolved in Pega Version 7.4

The data join implementation has been modified to improve performance for ADE applications built on Pega and based on DSM.

SR-C5585 · Issue 347734

Correct error messages shown for failed data flow records

Resolved in Pega Version 7.4

When running a data flow, failed records were returned with incorrect error messages because the original exception generated by CassandraBrowseByKeysOperations was not propagated to the caller. This has been fixed by attaching the original exception as the cause of the exception thrown to the caller.
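
A minimal sketch of the exception-chaining pattern described here; the class, method, and messages are hypothetical stand-ins, not Pega's actual internals:

    // ExceptionChainingSketch.java -- illustrative only; names are hypothetical.
    public class ExceptionChainingSketch {
        static void browseByKeys(String key) throws Exception {
            // Stand-in for the Cassandra browse operation that fails.
            throw new Exception("Cassandra error for key: " + key);
        }

        public static void main(String[] args) {
            try {
                browseByKeys("customer-42");
            } catch (Exception originalException) {
                // The fix: pass the original exception as the cause, so the
                // caller sees the real Cassandra message instead of a generic one.
                throw new RuntimeException("Data flow record failed", originalException);
            }
        }
    }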

SR-C6486 · Issue 349824

Check added for Kafka dataflow error handling

Resolved in Pega Version 7.4

When an event data flow fails because too many errors were detected, it is possible to continue it. However, it was noticed that the "continue" button had to be used twice: the first time, the data flow failed again and the "Input record" count increased by more than expected. For instance, if the event flow was sent two incorrect events, the "Input record" count increased by 3 instead of 1. The second time, the event data flow resumed correctly. Investigation showed that when a bad record is inserted into Kafka, a dummy error record is generated that carries no information about the partition and position, so the data flow cannot update the partition table correctly. To correct this, the partition and position information is now included on the error record. The data flow execution has also been updated so that when there is an onError call, a check is performed to assess whether the error originated on the primary source and an input record is present; if so, the partition table is updated from that record.
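
A minimal sketch of that check, assuming hypothetical types and names (ErrorRecord, fromPrimarySource, and the partition table shape are illustrative, not Pega's actual internals):

    // OnErrorSketch.java -- illustrative logic only; all names are hypothetical.
    import java.util.HashMap;
    import java.util.Map;

    public class OnErrorSketch {
        static class ErrorRecord {
            boolean fromPrimarySource; // did the error originate on the primary source?
            Integer partition;         // Kafka partition, now carried on error records
            Long position;             // offset of the bad record within the partition
        }

        // On error, advance the partition table only when the error came from
        // the primary source and the record carries partition/position data.
        static void onError(ErrorRecord record, Map<Integer, Long> partitionTable) {
            if (record.fromPrimarySource && record.partition != null && record.position != null) {
                partitionTable.put(record.partition, record.position);
            }
        }

        public static void main(String[] args) {
            Map<Integer, Long> partitionTable = new HashMap<>();
            ErrorRecord bad = new ErrorRecord();
            bad.fromPrimarySource = true;
            bad.partition = 0;
            bad.position = 17L;
            onError(bad, partitionTable);
            System.out.println(partitionTable); // {0=17} -- count advances by one
        }
    }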
