SR-C78362 · Issue 415981
Preprocessing added to GET assignments and cases
Resolved in Pega Version 8.2
When the /assignments/{ID}/actions/{actionID} API was called, the result of the flow action did not include either the data transform or the activity from the preprocessing section. The system has now been updated to run preprocessing for GET /assignments/{ID}/actions/{actionID} and GET /cases/{ID}/actions/{actionID}.
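As a rough illustration (the host, credentials, assignment ID, and action ID below are placeholders, not values from this note), a client can call the assignment action endpoint and, with this fix, receive a response that reflects the preprocessing data transform or activity:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;
    import java.util.Base64;

    public class GetAssignmentAction {
        public static void main(String[] args) throws Exception {
            // Placeholder host, operator, and IDs; adjust for your environment.
            String url = "https://pega.example.com/prweb/api/v1/assignments/"
                    + "ASSIGN-WORKLIST%20MYORG-WORK%20C-1001!REVIEW/actions/Approve";
            String auth = Base64.getEncoder().encodeToString("operator:password".getBytes());
            HttpRequest request = HttpRequest.newBuilder()
                    .uri(URI.create(url))
                    .header("Authorization", "Basic " + auth)
                    .GET()
                    .build();
            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            // The body now includes the effect of the preprocessing data transform or activity.
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }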
SR-C77100 · Issue 417036
Flow actions with validation errors are correctly populated to embedded page
Resolved in Pega Version 8.2
After upgrade, the pyFlowData page was not populating the embedded pyFlowActionsInError page with the list of flow actions that had validation errors. As a result, the AddRuntimeInfo activity (called from step 8 of the GetFlowData activity) failed to set the correct step status (pyStatus) of "error" on the pySteps embedded pages of the pyFlowData page, and any additional logic based on this status value then failed because of the incorrect status. This was traced to a missing call to the researchPropertyReference API in the pzGetActiveValue RUF, and has been corrected.
SR-C77459 · Issue 416897
Report tab names are correctly maintained after refresh
Resolved in Pega Version 8.2
When multiple reports were opened and the browser was refreshed, the tab names/labels did not appear on reload. This was caused by an error in the method used to set the tab label, and has been corrected.
SR-C75930 · Issue 416830
Security Event log enhanced to use 24-hour format for timestamps
Resolved in Pega Version 8.2
Previously, the Security Event log used timestamps in the 12-hour format. An enhancement has now been added to use the 24-hour format to clarify the timing of events.
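For illustration only (these are generic Java patterns, not the exact format strings used by the Security Event log), the difference between the two styles for an event at five past two in the afternoon:

    import java.time.LocalDateTime;
    import java.time.format.DateTimeFormatter;

    public class TimestampFormats {
        public static void main(String[] args) {
            LocalDateTime event = LocalDateTime.of(2018, 11, 2, 14, 5, 9);
            // 12-hour format is ambiguous without the AM/PM marker: 02:05:09 PM
            System.out.println(event.format(DateTimeFormatter.ofPattern("hh:mm:ss a")));
            // 24-hour format makes the timing unambiguous: 14:05:09
            System.out.println(event.format(DateTimeFormatter.ofPattern("HH:mm:ss")));
        }
    }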
SR-C77625 · Issue 414765
Improvements made for the stability and reliability of the stream tier
Resolved in Pega Version 8.2
Several enhancements and fixes have been made to improve the stability and reliability of the stream tier, with a focus on rolling restart.
Issue 410567: When a stream node shut down, it took up to a minute for all client nodes to recognize the event and fail over to other nodes because of large default timeouts. The timeouts have been reduced so that nodes fail over much earlier.
Issue 413375: When a stream node shuts down, other stream nodes are notified and attempt to discover the new cluster topology. Because the cluster is still in flux, this operation could take a long time and block all producers until a new topology was determined. The procedure has been made lighter and now performs the discovery only when required.
Issue 401326: Kafka (running as a separate process as part of the Stream service) generates log files but did not rotate them with a cap on the number of files, so the log files would eventually fill the disk and cause failures of the service and of the PRPC instance running on the same disk. The system now limits the generated log files to a maximum file size of 2 MB and a maximum of 10 files.
Issue 410280: A large number of "Processing override for entityPath" entries were seen in the Kafka logs. This has been addressed by ensuring message.timestamp.difference.max.ms is set equal to retention.ms. In addition, the system now properly compares the Kafka configuration created for the entire topic with the configuration retrieved for partitions, so the configuration is not unnecessarily updated.
Issue 414765: During a stream node restart, Decision Hub nodes could block for 15-30 seconds when writing to a stream dataset under certain conditions, causing high response times for a short period. The stream dataset logic has been updated to handle stream tier restarts properly and avoid response time degradation.
Issue 401093: HttpMessageDecoder was throwing a TooLongFrameException, causing a 502 error on HTTP stream tier requests whose headers exceeded 8192 bytes. The default maximum header size used by the stream dataset REST service was 8 KB, and in this case the HTTP header contained many cookies and other meta-information that exceeded that limit. The default maximum header size has been increased to 16 KB, and a new prconfig setting to control the maximum header size has been introduced.
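As a minimal sketch of the Issue 410280 change (the retention value is illustrative and this is not Pega's configuration code), the key point is that the topic configuration keeps message.timestamp.difference.max.ms equal to retention.ms:

    import java.util.Properties;

    public class TopicConfigSketch {
        public static void main(String[] args) {
            long retentionMs = 7L * 24 * 60 * 60 * 1000; // example retention of 7 days
            Properties topicConfig = new Properties();
            topicConfig.setProperty("retention.ms", Long.toString(retentionMs));
            // Keeping this equal to retention.ms avoids the repeated
            // "Processing override for entityPath" entries in the Kafka logs.
            topicConfig.setProperty("message.timestamp.difference.max.ms", Long.toString(retentionMs));
            System.out.println(topicConfig);
        }
    }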
SR-C75099 · Issue 413375
Improvements made for the stability and reliability of the stream tier
Resolved in Pega Version 8.2
Several enhancements and fixes have been made to improve the stability and reliability of the stream tier, with a focus on rolling restart.
Issue 410567: When a stream node shut down, it took up to a minute for all client nodes to recognize the event and fail over to other nodes because of large default timeouts. The timeouts have been reduced so that nodes fail over much earlier.
Issue 413375: When a stream node shuts down, other stream nodes are notified and attempt to discover the new cluster topology. Because the cluster is still in flux, this operation could take a long time and block all producers until a new topology was determined. The procedure has been made lighter and now performs the discovery only when required.
Issue 401326: Kafka (running as a separate process as part of the Stream service) generates log files but did not rotate them with a cap on the number of files, so the log files would eventually fill the disk and cause failures of the service and of the PRPC instance running on the same disk. The system now limits the generated log files to a maximum file size of 2 MB and a maximum of 10 files.
Issue 410280: A large number of "Processing override for entityPath" entries were seen in the Kafka logs. This has been addressed by ensuring message.timestamp.difference.max.ms is set equal to retention.ms. In addition, the system now properly compares the Kafka configuration created for the entire topic with the configuration retrieved for partitions, so the configuration is not unnecessarily updated.
Issue 414765: During a stream node restart, Decision Hub nodes could block for 15-30 seconds when writing to a stream dataset under certain conditions, causing high response times for a short period. The stream dataset logic has been updated to handle stream tier restarts properly and avoid response time degradation.
Issue 401093: HttpMessageDecoder was throwing a TooLongFrameException, causing a 502 error on HTTP stream tier requests whose headers exceeded 8192 bytes. The default maximum header size used by the stream dataset REST service was 8 KB, and in this case the HTTP header contained many cookies and other meta-information that exceeded that limit. The default maximum header size has been increased to 16 KB, and a new prconfig setting to control the maximum header size has been introduced.
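As a minimal Netty sketch of the Issue 401093 change (the pipeline setup is assumed; this is not Pega's stream service code), raising the HTTP decoder's maximum header size from the 8 KB default to 16 KB prevents the TooLongFrameException for requests with large cookie headers:

    import io.netty.channel.ChannelInitializer;
    import io.netty.channel.socket.SocketChannel;
    import io.netty.handler.codec.http.HttpServerCodec;

    public class StreamHttpInitializer extends ChannelInitializer<SocketChannel> {
        @Override
        protected void initChannel(SocketChannel ch) {
            // Arguments: maxInitialLineLength, maxHeaderSize, maxChunkSize.
            // 16384 bytes accommodates headers that exceed the previous 8192-byte limit.
            ch.pipeline().addLast(new HttpServerCodec(4096, 16384, 8192));
        }
    }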
SR-C77625 · Issue 415579
Cassandra retry policy enhancement
Resolved in Pega Version 8.2
When configuring the DataStax client for Cassandra, it is possible to specify a retry policy; by default, DefaultRetryPolicy is used. With this policy a retry was always attempted on a client timeout, but a retry on a server timeout was only attempted in a limited set of circumstances: the number of received responses had to be at least the number of required responses, and the retry had to go to the same node rather than a new node. This behavior makes sense when consistency is set to QUORUM or higher, but not for consistency ONE. To resolve this, a new retry policy has been added that retries in this scenario a configurable number of times. The retry policy and the retry count are controlled by prconfig settings:
prconfig/dnode/cassandra_custom_retry_policy/default - true/false (default: false)
prconfig/dnode/cassandra_custom_retry_policy/retryCount/default - >= 0 (default: 1)
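A minimal sketch of such a policy against the DataStax Java driver 3.x RetryPolicy interface (the class name and the way the retry count is applied are assumptions; Pega's actual implementation is not shown in this note):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ConsistencyLevel;
    import com.datastax.driver.core.Statement;
    import com.datastax.driver.core.WriteType;
    import com.datastax.driver.core.exceptions.DriverException;
    import com.datastax.driver.core.policies.RetryPolicy;

    public class ConfigurableTimeoutRetryPolicy implements RetryPolicy {
        private final int maxRetries; // would map to the retryCount prconfig setting

        public ConfigurableTimeoutRetryPolicy(int maxRetries) {
            this.maxRetries = maxRetries;
        }

        @Override
        public RetryDecision onReadTimeout(Statement stmt, ConsistencyLevel cl,
                int requiredResponses, int receivedResponses, boolean dataRetrieved, int nbRetry) {
            // Unlike DefaultRetryPolicy, retry even when fewer responses than
            // required were received (relevant for consistency ONE).
            return nbRetry < maxRetries ? RetryDecision.retry(cl) : RetryDecision.rethrow();
        }

        @Override
        public RetryDecision onWriteTimeout(Statement stmt, ConsistencyLevel cl, WriteType writeType,
                int requiredAcks, int receivedAcks, int nbRetry) {
            return nbRetry < maxRetries ? RetryDecision.retry(cl) : RetryDecision.rethrow();
        }

        @Override
        public RetryDecision onUnavailable(Statement stmt, ConsistencyLevel cl,
                int requiredReplica, int aliveReplica, int nbRetry) {
            // Try one other coordinator before giving up.
            return nbRetry == 0 ? RetryDecision.tryNextHost(cl) : RetryDecision.rethrow();
        }

        @Override
        public RetryDecision onRequestError(Statement stmt, ConsistencyLevel cl,
                DriverException e, int nbRetry) {
            return RetryDecision.tryNextHost(cl);
        }

        @Override
        public void init(Cluster cluster) { }

        @Override
        public void close() { }
    }

Such a policy would be registered when building the cluster, for example with Cluster.builder().withRetryPolicy(new ConfigurableTimeoutRetryPolicy(1)).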
SR-C59790 · Issue 398643
Enhancement added to get archive details as response on getStatus API call with external repository
Resolved in Pega Version 8.2
Previously, the getStatus service did not return the repository location of the exported zip file when the export was run with exportToRepository set to true and a repository name. An enhancement has been added so that a getStatus service call also returns repositoryLocation if exportToRepository was true in the export operation and the archive was successfully exported to the repository.
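As a rough illustration (the status URL below is hypothetical and not taken from this note), a client polling the export status can now pick up the archive's repository location from the getStatus response:

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class ExportStatusCheck {
        public static void main(String[] args) throws Exception {
            // Hypothetical status URL for a previously started export job.
            String statusUrl = "https://pega.example.com/prweb/api/v1/exports/JOB-1234/status";
            HttpResponse<String> response = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder().uri(URI.create(statusUrl)).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            // With exportToRepository=true, the status payload now also carries repositoryLocation.
            if (response.body().contains("repositoryLocation")) {
                System.out.println("Archive exported; repository location is in the status payload.");
            }
        }
    }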
SR-C78045 · Issue 415131
Check added to resolve Teradata empty batch error
Resolved in Pega Version 8.2
ExecuteBatch in Teradata failed if there were no prior calls to addBatch, which differs from the behavior seen in other databases. This has been corrected by adding a condition that checks whether any statements have been added to the batch before executing it.
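A minimal JDBC sketch of such a guard (the table, column, and method are illustrative, not Pega's engine code):

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.SQLException;
    import java.util.List;

    public class BatchInsert {
        // Only flushes the batch when at least one row was added, which avoids
        // the Teradata failure on executeBatch() with an empty batch.
        static void insertAll(Connection conn, List<String> names) throws SQLException {
            try (PreparedStatement ps =
                    conn.prepareStatement("INSERT INTO customers (name) VALUES (?)")) {
                int batched = 0;
                for (String name : names) {
                    ps.setString(1, name);
                    ps.addBatch();
                    batched++;
                }
                if (batched > 0) { // skip executeBatch() for an empty batch
                    ps.executeBatch();
                }
            }
        }
    }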
SR-C49342 · Issue 400585
Check added to resolve Teradata empty batch error
Resolved in Pega Version 8.2
ExecuteBatch in Teradata failed if there were no prior calls to addBatch, which differs from the behavior seen in other databases. This has been corrected by adding a condition that checks whether any statements have been added to the batch before executing it.