SR-B92177 · Issue 344720
Catch and log added for incomplete surrogate pairs in parsed tweets
Resolved in Pega Version 7.4
In the pyUpdateSummary activity, using @(Pega-RULES:Page).getXMLOfPage(Primary) for Twitter posts resulted in the intermittent exception "Insufficient input to properly transform characters; no low surrogate". This exception was caused by the Twitter text containing incomplete surrogate pairs (only one half of the surrogate pair was present and the other was missing). To resolve this, code has been added to catch the exception and log the post or tweet information whenever unpaired high/low surrogate characters are encountered.
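The failure mode can be illustrated with a small check, a hypothetical helper rather than Pega's actual code, that scans a string for UTF-16 surrogate code units that lack their partner:

```java
// Hypothetical sketch: detect unpaired UTF-16 surrogates of the kind that
// break XML serialization of tweet text.
public class SurrogateCheck {
    public static boolean hasUnpairedSurrogate(String s) {
        for (int i = 0; i < s.length(); i++) {
            char c = s.charAt(i);
            if (Character.isHighSurrogate(c)) {
                // A high surrogate must be immediately followed by a low surrogate.
                if (i + 1 >= s.length() || !Character.isLowSurrogate(s.charAt(i + 1))) {
                    return true;
                }
                i++; // skip the valid low surrogate
            } else if (Character.isLowSurrogate(c)) {
                // A low surrogate with no preceding high surrogate is unpaired.
                return true;
            }
        }
        return false;
    }
}
```

A complete pair such as "\uD83D\uDE00" (an emoji) passes, while a lone "\uD83D" is flagged, which is the case the new catch-and-log code handles.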
SR-B93311 · Issue 341565
Resolved service timeout on ADM service node startup
Resolved in Pega Version 7.4
After the ADM commit log has collected a very large amount of information (e.g., 15+ GB of ADM responses), the first ADM service node in the cluster failed to start. Because the first ADM node to come up has to reconcile the information in its Cassandra caches and database, reading a massive ADM commit log caused a timeout error. Startup has been amended so that read queries do not fail when a large amount of data is stored in the ADM commit log.
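The general remedy can be pictured as a paged-read pattern (a minimal stand-in sketch; the real implementation reads the Cassandra-backed commit log, not an in-memory list, and the method names here are hypothetical):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of paged reads: instead of loading the whole commit log
// in one query (which can time out at large sizes), fetch it in fixed-size pages.
public class PagedReader {
    // Stand-in for a commit-log store; in Pega this would be a Cassandra table.
    public static List<Integer> fetchPage(List<Integer> log, int offset, int pageSize) {
        int end = Math.min(offset + pageSize, log.size());
        return new ArrayList<>(log.subList(offset, end));
    }

    public static long reconcile(List<Integer> log, int pageSize) {
        long processed = 0;
        for (int offset = 0; offset < log.size(); offset += pageSize) {
            // Each page is a small, fast query that stays under the read timeout.
            processed += fetchPage(log, offset, pageSize).size();
        }
        return processed;
    }
}
```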
SR-B93453 · Issue 340456
Cassandra enhancements for multi-datacenter environments
Resolved in Pega Version 7.4
1) Locking has been added to the creation of the Cassandra key/value table to support multi-datacenter configuration of Cassandra in DDS. 2) A new parameter called cassandra_java_home has been added to allow the use of a different Java installation than the one used by the web server. 3) Support has also been added for multi-datacenter configuration by way of the following configuration parameter, which should be set in either prconfig.xml or on the prpc command line, where datacentername is the name of the datacenter configured in the node's cassandra-rackdc.properties file:
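As a sketch of item 2, the JVM override might be supplied in prconfig.xml like this; only the cassandra_java_home parameter name comes from the note above, and the env-entry form and path shown are assumptions:

```xml
<!-- Hypothetical prconfig.xml entry; the exact setting namespace is an assumption -->
<env name="cassandra_java_home" value="/opt/java/jdk1.8.0" />
```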
SR-B94811 · Issue 342612
Twitter connector exception upgraded to 67
Resolved in Pega Version 7.4
An issue with a Twitter connector not processing new posts was traced to a thread exception that was tagged as a warning instead of as fatal. Even though the connectors had stopped, this was not reflected in the dashboard. To correct this, the Pega error code has been upgraded to 67 so that when the connector stops in the background, the stop is reflected in the dashboard and an email is sent.
SR-B95655 · Issue 349412
Support added for Apache Kafka
Resolved in Pega Version 7.4
Enhancements have been added to support consumer and producer settings for an Apache Kafka DataSet instance, including connection timeout.
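As an illustration of the kind of settings involved, the snippet below builds standard Apache Kafka client properties; the helper itself is hypothetical and not the Pega DataSet API, and which exact properties the DataSet exposes is an assumption:

```java
import java.util.Properties;

// Hypothetical sketch of consumer settings for a Kafka client, using
// standard Apache Kafka property names.
public class KafkaSettings {
    public static Properties consumerSettings(String brokers, int requestTimeoutMs) {
        Properties props = new Properties();
        props.put("bootstrap.servers", brokers);          // broker connection list
        props.put("request.timeout.ms", Integer.toString(requestTimeoutMs));
        props.put("enable.auto.commit", "false");         // commit offsets explicitly
        return props;
    }
}
```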
SR-B95915 · Issue 348427
TTL data generation improved
Resolved in Pega Version 7.4
Logic improvements have been made for Data Flows with strategy components and delayed learning to ensure that the correct Java is generated and records are saved with Time To Live (TTL) data. This also allows more robust use of TTL by Cassandra.
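In Cassandra, TTL is attached at write time, so a record written with a TTL expires automatically. The sketch below (a hypothetical helper; the real generated Java goes through the DDS driver, and the table and column names are illustrative) shows the CQL form such a write takes:

```java
// Hypothetical sketch: build the CQL for a write carrying a Time To Live,
// so Cassandra expires the record after ttlSeconds.
public class TtlWrite {
    public static String insertWithTtl(String table, String id, int ttlSeconds) {
        return String.format(
            "INSERT INTO %s (id) VALUES ('%s') USING TTL %d", table, id, ttlSeconds);
    }
}
```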
SR-B96377 · Issue 342936
Cassandra enhancements for multi-datacenter environments
Resolved in Pega Version 7.4
1) Locking has been added to the creation of Cassandra key/value table to support multi-datacenter configuration of Cassandra in DDS. 2) A new parameter called cassandra_java_home has been added to allow the use of a different java than that used for the web server. 3) Support has also been added for multi-datacenter configuration by way of the following configuration parameter which should be set in either prconfig.xml or the prpc command line, where datacentername is the name of the datacenter configured in the node's cassndra-dcrack.properties file:
SR-B96377 · Issue 342939
Cassandra enhancements for multi-datacenter environments
Resolved in Pega Version 7.4
1) Locking has been added to the creation of Cassandra key/value table to support multi-datacenter configuration of Cassandra in DDS. 2) A new parameter called cassandra_java_home has been added to allow the use of a different java than that used for the web server. 3) Support has also been added for multi-datacenter configuration by way of the following configuration parameter which should be set in either prconfig.xml or the prpc command line, where datacentername is the name of the datacenter configured in the node's cassndra-dcrack.properties file:
SR-B97064 · Issue 345331
Column size for property reference increased
Resolved in Pega Version 7.4
In an adaptive model where predictors have a long clipboard reference due to being located deep within the work page (e.g., .SubmissionData.PropertyLocation(1).BuildingInfo(1).LeftExposureAndDist), the model could be saved and run, but the models did not show up in the Adaptive Analytics portal. This was traced to adaptive model monitoring having a limit of 128 characters for the property reference column, and has been fixed by increasing the column size for pxInsName to 255 characters. In addition, pzInsKey has been increased to 600 characters.
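The schema change amounts to widening two columns; as a rough sketch (the monitoring table name and exact DDL dialect here are assumptions, only the column names and new sizes come from the note above):

```sql
-- Hypothetical DDL; the actual table name is schema-specific.
ALTER TABLE adm_model_monitoring ALTER COLUMN pxInsName TYPE VARCHAR(255);
ALTER TABLE adm_model_monitoring ALTER COLUMN pzInsKey  TYPE VARCHAR(600);
```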
SR-C1507 · Issue 344564
Existing Kafka topic name used for connection
Resolved in Pega Version 7.4
When running a Kafka data flow, Pega was using the dataset name instead of the topic name for the topic connection. This has been fixed; in addition, the first character of the dataset name is now forced to upper case to ensure proper matching.
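The capitalization step described above can be sketched as follows (a hypothetical helper, not the actual Pega code):

```java
// Hypothetical sketch: force the first character of a dataset name to upper
// case so name lookups match consistently.
public class NameNormalizer {
    public static String capitalizeFirst(String name) {
        if (name == null || name.isEmpty()) {
            return name;
        }
        return Character.toUpperCase(name.charAt(0)) + name.substring(1);
    }
}
```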