INC-130757 · Issue 571421
Migration data pipeline import activity updated
Resolved in Pega Version 8.4.2
The migration data pipeline was not working in the DevOps environment when moving data from Production to Simulation environments. This has been resolved by updating the activity used to generate the file data set so that it configures the datetime, date, and time formats, and by inserting a check so that if the format for a type is empty, parsing of properties of that type is skipped.
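The empty-format guard described above can be sketched as follows. This is a minimal illustration, not Pega's implementation; the `FORMATS` table and `parse_value` helper are hypothetical names standing in for the activity's format configuration.

```python
from datetime import datetime

# Hypothetical per-type format configuration; an empty string means
# "no format configured", so parsing for that type is skipped.
FORMATS = {
    "DateTime": "%Y%m%d %H:%M:%S",
    "Date": "%Y%m%d",
    "Time": "",  # empty: properties of this type are not parsed
}

def parse_value(prop_type, raw):
    """Parse raw text for a property type, skipping types with no format."""
    fmt = FORMATS.get(prop_type, "")
    if not fmt:  # the added check: empty format -> leave the value as-is
        return raw
    return datetime.strptime(raw, fmt)
```

Without the guard, an empty format string would raise a parsing error instead of passing the value through untouched.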
INC-131260 · Issue 571786
Handling added for runs using Completed state
Resolved in Pega Version 8.4.2
After upgrade, creating a data flow rule and a batch data flow run, then executing it with the Dataflow-Execute method in an activity, resulted in the exception "Could not start run DataFlowRunConfig". This was a use case missed in the new data flow engine architecture introduced in 8.4, which caused DataFlow-Execute to not work properly for existing runs in the COMPLETED state. This has been resolved by adding a branch of execution in DataFlow-Execute.Start that correctly handles completed runs.
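The shape of the added branch can be sketched with a toy in-memory run registry. This is an illustrative model only, under the assumption that starting an already-completed run should restart it rather than fail; `RUNS`, `start_run`, and `RunError` are invented names, not Pega APIs.

```python
class RunError(Exception):
    pass

# Hypothetical in-memory registry standing in for the data flow engine's
# record of existing runs.
RUNS = {"DataFlowRunConfig": {"state": "COMPLETED"}}

def start_run(run_id):
    """Start a data flow run, with a branch for runs already COMPLETED."""
    run = RUNS.get(run_id)
    if run is None:
        RUNS[run_id] = {"state": "RUNNING"}
        return "started"
    if run["state"] == "COMPLETED":
        # Added branch: restart a finished run instead of raising
        # "Could not start run <id>".
        run["state"] = "RUNNING"
        return "restarted"
    raise RunError(f"Could not start run {run_id}")
```

Before the fix, the COMPLETED case fell through to the error path; the dedicated branch makes restarting a finished run a normal, supported transition.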
INC-131340 · Issue 573928
Added handling for single case data flows run on a web node
Resolved in Pega Version 8.4.2
After making NBA and Capture Response calls using the out-of-the-box APIs, the Latest Response was not updated on the landing page even though the models were updated. This was a missed use case for single case data flows run on a web node, and has been resolved by removing the filtering of data flow nodes while sending messages for last responses.
INC-131700 · Issue 570973
DSS added to configure Dataset-Execute page handling
Resolved in Pega Version 8.4.2
When Kafka Data Set pages were saved to a data set with the Dataset-Execute method, there was no feedback if any of the pages were not successfully saved; the step always completed as successful. In addition, if any properties were added or modified by the save operation itself, those changes were not visible. This was because the data set execute save operation saved pages as DSM pages: due to the conversion, copies of the pages were used, which did not reflect any changes back to the input pages. DSM pages are used by default because they are more lightweight than regular clipboard pages and therefore potentially perform better. To allow the use of DSM pages to be customized, a new Dynamic System Setting, dataset/execute/save/statusFailOnError, has been added. It can be enabled by setting it to true; it is disabled by default for backwards compatibility. Enabling it removes the DSM page conversion in the generated save code, so any changes the data set save operation makes to the input pages are reflected back, and the system reports which pages were saved or failed by adding messages to the pages that failed to save. Performance may be affected by this change, as regular clipboard pages are generally slower than DSM pages; however, that may be offset by eliminating the conversion step and will depend on the site configuration.
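The behavioral difference the setting introduces can be sketched as follows. This is a simplified model, not Pega code: pages are plain dicts, `save_pages` and its `fail_on_error` flag are hypothetical stand-ins for the generated save logic with the DSS enabled.

```python
def save_pages(pages, store, fail_on_error=False):
    """Save clipboard-style pages (dicts) to a store, mutating the input
    pages in place so changes made during the save remain visible to the
    caller (no copy/conversion step).

    With fail_on_error=True, pages that fail get an error message attached
    and the overall status reflects the failure instead of always 'success'.
    """
    failed = []
    for page in pages:
        try:
            if "key" not in page:
                raise ValueError("page has no key")
            page["saved"] = True        # change made by the save is visible
            store[page["key"]] = page   # save the page itself, not a copy
        except ValueError as err:
            if fail_on_error:
                page.setdefault("messages", []).append(str(err))
                failed.append(page)
    return "fail" if (fail_on_error and failed) else "success"
```

With `fail_on_error=False` the function mirrors the old behavior: failures are silent and the step reports success regardless.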
INC-132164 · Issue 575347
Updated Email Listener sentence detector logic
Resolved in Pega Version 8.4.2
When the Email Listener encountered a specific XLSM attachment, it became stuck in the Running state. The email was marked as read in the mailbox, but the Email Triage case was not created and the listener did not process subsequent emails; if the listener was stopped, it would not restart. Investigation showed that a long run of \n characters followed by a repeating sentence in the document caused the pointer to lag behind and create multiple annotations pointing to the same offsets. This in turn caused further analysis to run in a very large loop, causing the slowdown. This has been resolved by updating the post-processing logic of the sentence detector so that duplicate annotations are not created.
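The post-processing fix amounts to suppressing annotations over offsets that have already been emitted. Here is a minimal sketch of that idea; the annotation shape (`begin`/`end` dicts) and the `dedupe_annotations` helper are assumptions for illustration, not the actual sentence detector API.

```python
def dedupe_annotations(annotations):
    """Drop sentence annotations that point at offsets already emitted,
    so repeated text cannot multiply identical spans and inflate the
    downstream analysis loop."""
    seen = set()
    result = []
    for ann in annotations:
        span = (ann["begin"], ann["end"])
        if span in seen:
            continue  # duplicate annotation over the same offsets
        seen.add(span)
        result.append(ann)
    return result
```

Because each (begin, end) pair now appears at most once, the cost of later analysis is bounded by the number of distinct spans rather than by how often the same sentence repeats.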
INC-132516 · Issue 571342
Updated reference values for Strategy Rules test execution
Resolved in Pega Version 8.4.2
A Stale Thread exception was seen while running strategies with the test run panel. Investigation showed this was due to the Pega API's "pega" and "tools" references not being kept in sync, causing the Proposition Filter to use the old "pega" reference even after it had been recycled. This has been resolved by updating the system so that it does not use the cached "pega" value, but instead uses "tools" to ensure "pega" is the correct reference.
INC-132532 · Issue 576414
Strategy Exclusion Component naming conflict resolved
Resolved in Pega Version 8.4.2
After upgrade, modifying the RHS of the Exclusions shape in the Strategy "ExclusionTest" resulted in compilation issues when saving the rule. Investigation showed a compile-time error in a use case where both sides of an exclusion shape use an expression that does more than simple property value access, which produced a naming conflict in the generated code. This has been resolved.
INC-133169 · Issue 572613
Service Registry heartbeat updates
Resolved in Pega Version 8.4.2
If a service (node) did not update its heartbeat for more than 90 seconds, the service registry no longer considered it present and the stale service was eventually removed from the database. To resolve this, topology listeners now use a Java thread pool to run their logic and no longer run on the heartbeat thread. Even if these listeners are slow, the heartbeat is not affected and nodes do not become unhealthy. If the heartbeat itself becomes slow (for example, due to database issues), the system issues a thread dump to help identify the cause of the slowness and aid troubleshooting.
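The decoupling of listener work from the heartbeat can be sketched with a small executor-based model. This is an illustrative pattern only, in Python rather than Java; `heartbeat_tick` and `LISTENER_POOL` are invented names for the idea of submitting listener callbacks to a pool instead of running them inline on the heartbeat thread.

```python
from concurrent.futures import ThreadPoolExecutor

# Pool that runs topology listeners off the heartbeat thread.
LISTENER_POOL = ThreadPoolExecutor(max_workers=4)

def heartbeat_tick(update_heartbeat, listeners):
    """One heartbeat cycle: update the timestamp immediately, then hand
    listener callbacks to the pool so a slow listener cannot delay the
    next heartbeat."""
    update_heartbeat()
    return [LISTENER_POOL.submit(listener) for listener in listeners]
```

The key property is that `update_heartbeat()` returns before any listener runs to completion, so listener latency no longer shows up as a missed heartbeat.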
INC-134097 · Issue 574512
Service Registry heartbeat updates
Resolved in Pega Version 8.4.2
If a service (node) did not update its heartbeat for more than 90 seconds, the service registry no longer considered it present and the stale service was eventually removed from the database. To resolve this, topology listeners now use a Java thread pool to run their logic and no longer run on the heartbeat thread. Even if these listeners are slow, the heartbeat is not affected and nodes do not become unhealthy. If the heartbeat itself becomes slow (for example, due to database issues), the system issues a thread dump to help identify the cause of the slowness and aid troubleshooting.
SR-D75583 · Issue 547301
JMX access enhancement
Resolved in Pega Version 8.4.2
An enhancement has been added to JMX access that provides two new prconfigs, "dnode/cassandra_jmx_username" and "dnode/cassandra_jmx_password". These allow adding credential requirements to local JMX.
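Assuming the standard prconfig.xml `<env name="…" value="…"/>` entry format, the two new settings would be declared along these lines; the username and password values here are placeholders, not defaults.

```xml
<!-- Hypothetical prconfig.xml fragment: require credentials for local
     Cassandra JMX access. Replace the placeholder values with real
     credentials for your environment. -->
<env name="dnode/cassandra_jmx_username" value="jmxuser" />
<env name="dnode/cassandra_jmx_password" value="jmxpass" />
```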