INC-177993 · Issue 658726
Schema packaging updated for better backwards compatibility
Resolved in Pega Version 8.7
After installing a hotfix, attempting to install another download that contained the same hotfix as a dependency resulted in a notification that schema changes were required, and the message "ALTER TABLE pegadata.pr_archival_class_dependency ADD CONSTRAINT pr_archival_class_de_1812_PK PRIMARY KEY ("pzinskey")" was generated. Because the dependency already existed, attempting to install the hotfix failed until the constraint was manually removed from the database. Investigation showed this was caused by a difference in how the schema was generated for newer hotfixes versus older versions of Pega, and has been resolved by updating the schema packager and fixing the single-column primary keys for Pega version 8.3 and lower.
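The essence of the fix is making the generated DDL idempotent, so that reapplying a hotfix's schema does not fail on a constraint that already exists. As a rough sketch of the idea (not the actual schema packager code; the JDBC URL and credentials are placeholders), a check for an existing primary key before issuing the ALTER TABLE might look like this:

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class IdempotentPrimaryKey {
    // Adds the primary key only if the table does not already have one,
    // avoiding the duplicate-constraint failure described above.
    static void ensurePrimaryKey(Connection conn, String schema, String table,
                                 String constraint, String column) throws Exception {
        DatabaseMetaData meta = conn.getMetaData();
        try (ResultSet pk = meta.getPrimaryKeys(null, schema, table)) {
            if (pk.next()) {
                return; // primary key already present; nothing to do
            }
        }
        String ddl = String.format(
            "ALTER TABLE %s.%s ADD CONSTRAINT %s PRIMARY KEY (\"%s\")",
            schema, table, constraint, column);
        try (Statement stmt = conn.createStatement()) {
            stmt.executeUpdate(ddl);
        }
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical connection details; substitute real ones.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/pega", "user", "password")) {
            ensurePrimaryKey(conn, "pegadata", "pr_archival_class_dependency",
                             "pr_archival_class_de_1812_PK", "pzinskey");
        }
    }
}
```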
INC-178002 · Issue 663770
Restore point handling updated for absent pzpvstream column
Resolved in Pega Version 8.7
While executing the "get restore point" action for rollback, a PZPVSTREAM error occurred with the message "(util.HistoryCollectorDataModel) WARN|Rest|SystemManagement|v2|restorepoint - History collection for the table will be slow because it does not have all of the required columns". This was a missed use case where Robotics Automation did not have a pzpvstream column for one of its tables; this has been corrected with a check that validates the pzpvstream column exists, so the system will not seek history records when the column is not present.
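The corrected behavior can be pictured as a simple metadata probe: look for the pzpvstream column and skip history collection when it is absent. A minimal JDBC sketch of that check (an illustration, not Pega's HistoryCollectorDataModel code):

```java
import java.sql.Connection;
import java.sql.ResultSet;

public class HistoryColumnCheck {
    // Returns true only when the table actually exposes a pzpvstream column;
    // callers skip history record collection when this returns false.
    static boolean hasPzpvstreamColumn(Connection conn, String schema, String table)
            throws Exception {
        try (ResultSet cols = conn.getMetaData()
                .getColumns(null, schema, table, "pzpvstream")) {
            return cols.next();
        }
    }
}
```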
INC-180479 · Issue 659639
DSS partition count setting made backward compatible
Resolved in Pega Version 8.7
After upgrade, the DSS setting 'dsm/services/stream/pyTopicPartitionsCount' used to limit the number of partitions no longer worked, and the default value of 20 was used instead. This has been corrected and made backwards compatible.
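The backwards-compatible behavior amounts to honoring the DSS value when it is set and falling back to the documented default of 20 otherwise. A hedged sketch of that resolution logic (the settings map here stands in for Pega's DSS lookup, which this is not):

```java
import java.util.Map;

public class PartitionCountSetting {
    static final int DEFAULT_PARTITIONS = 20;

    // Resolves the partition count from a settings map, falling back to the
    // default of 20 when the setting is absent or unparsable.
    static int resolvePartitionCount(Map<String, String> settings) {
        String raw = settings.get("dsm/services/stream/pyTopicPartitionsCount");
        if (raw == null) {
            return DEFAULT_PARTITIONS;
        }
        try {
            return Integer.parseInt(raw.trim());
        } catch (NumberFormatException e) {
            return DEFAULT_PARTITIONS;
        }
    }
}
```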
INC-181921 · Issue 666932
Date normalization enforced for reference time
Resolved in Pega Version 8.7
A job scheduled to run once weekly on Monday at 8 AM CET was also running a second time at 9 AM CET. Investigation showed the original recurrence pattern implementation normalized the reference time only at construction; this has been resolved by forcing date normalization on each reference time update.
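A java.time sketch of the underlying idea (an illustration, not Pega's recurrence implementation): normalize the reference time on every update, not just in the constructor, so a later un-normalized update cannot produce a second firing an hour off schedule.

```java
import java.time.DayOfWeek;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.temporal.ChronoUnit;
import java.time.temporal.TemporalAdjusters;

public class WeeklyRecurrence {
    private static final ZoneId CET = ZoneId.of("Europe/Paris");
    private ZonedDateTime reference;

    WeeklyRecurrence(ZonedDateTime start) {
        this.reference = normalize(start);
    }

    // Normalizing only at construction allowed drift on later updates; the
    // fix is to re-normalize every time the reference time changes.
    void updateReference(ZonedDateTime newReference) {
        this.reference = normalize(newReference);
    }

    private static ZonedDateTime normalize(ZonedDateTime t) {
        return t.withZoneSameInstant(CET)
                .truncatedTo(ChronoUnit.HOURS); // drop sub-hour drift
    }

    ZonedDateTime nextRun() {
        // Next Monday 08:00 CET on or after the normalized reference time.
        ZonedDateTime candidate = reference
                .with(TemporalAdjusters.nextOrSame(DayOfWeek.MONDAY))
                .withHour(8).truncatedTo(ChronoUnit.HOURS);
        return candidate.isBefore(reference) ? candidate.plusWeeks(1) : candidate;
    }
}
```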
INC-181941 · Issue 664808
Handling added for using virtual network interface for Stream Services startup
Resolved in Pega Version 8.7
After update, the restart of any node failed with the error "Unable to create DSM service DATA-DECISION-SERVICE-STREAMSERVER DEFAULT". This has been resolved by adding support for starting the stream service on a virtual network interface in cases where one was explicitly configured via the "cluster/hazelcast/interface" setting.
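The overlooked case can be illustrated with plain java.net.NetworkInterface enumeration: a configured address may belong to a virtual sub-interface (for example eth0:1) rather than a physical one, so both must be searched. A sketch, assuming a hypothetical resolve helper rather than the actual Hazelcast binding code:

```java
import java.net.InetAddress;
import java.net.NetworkInterface;
import java.util.Collections;
import java.util.Optional;

public class InterfaceResolver {
    // Finds the interface that owns the configured address, considering
    // virtual sub-interfaces as well as physical interfaces.
    static Optional<NetworkInterface> resolve(String configuredAddress) throws Exception {
        InetAddress target = InetAddress.getByName(configuredAddress);
        for (NetworkInterface nif : Collections.list(NetworkInterface.getNetworkInterfaces())) {
            if (Collections.list(nif.getInetAddresses()).contains(target)) {
                return Optional.of(nif);
            }
            // Previously overlooked case: the configured address may live on
            // a virtual sub-interface such as eth0:1.
            for (NetworkInterface sub : Collections.list(nif.getSubInterfaces())) {
                if (Collections.list(sub.getInetAddresses()).contains(target)) {
                    return Optional.of(sub);
                }
            }
        }
        return Optional.empty();
    }
}
```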
INC-185322 · Issue 668321
Configuration added for extending queue processor timeout
Resolved in Pega Version 8.7
Alerts for queue processor (QP) items which took more than 15 minutes to run could result in the system marking the node as 'unhealthy'. In environments with Pega Health Check enabled, this would shut down the node gracefully. It was not possible to change this default because it was hardcoded. In order to support systems that may have custom processes running beyond 15 minutes, a new setting has been exposed that allows configuration of the interval after which a node with a long-running queue processor is marked as unhealthy and restarted. By default this remains 900000 milliseconds / 900 seconds / 15 minutes, but it may be adjusted up to 24 hours to avoid premature node shutdown. The stale thread detection mechanism will take that setting into account, using the provided value or defaulting to 15 minutes if no value is provided. In addition, the threshold's units in the UI have been changed from milliseconds to seconds.
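Conceptually, resolving the new threshold looks like the following sketch (a hypothetical resolve helper, not the actual stale thread detector): use the configured value when present, cap it at 24 hours, and fall back to the 15-minute default otherwise.

```java
import java.time.Duration;

public class StaleThreadThreshold {
    private static final Duration DEFAULT = Duration.ofMinutes(15);
    private static final Duration MAX = Duration.ofHours(24);

    // Resolves the unhealthy-node threshold: use the configured value when
    // present, cap it at 24 hours, and fall back to 15 minutes otherwise.
    static Duration resolve(Long configuredMillis) {
        if (configuredMillis == null || configuredMillis <= 0) {
            return DEFAULT;
        }
        Duration configured = Duration.ofMillis(configuredMillis);
        return configured.compareTo(MAX) > 0 ? MAX : configured;
    }
}
```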
INC-186898 · Issue 670313
Configuration added for extending queue processor timeout
Resolved in Pega Version 8.7
Alerts for queue processor (QP) items which took more than 15 minutes to run could result in the system marking the node as 'unhealthy'. In environments with Pega Health Check enabled, this would shut down the node gracefully. It was not possible to change this default because it was hardcoded. In order to support systems that may have custom processes running beyond 15 minutes, a new setting has been exposed that allows configuration of the interval after which a node with a long-running queue processor is marked as unhealthy and restarted. By default this remains 900000 milliseconds / 900 seconds / 15 minutes, but it may be adjusted up to 24 hours to avoid premature node shutdown. The stale thread detection mechanism will take that setting into account, using the provided value or defaulting to 15 minutes if no value is provided. In addition, the threshold's units in the UI have been changed from milliseconds to seconds.
INC-188065 · Issue 671403
Catch added for corrupted alert configuration
Resolved in Pega Version 8.7
On creation of a queue processor rule, the alert ID was not present in the logs because the alert configuration page was missing. After this action, the server was inaccessible and could not be started. This has been resolved by catching the exception thrown when the alert configuration is malformed or missing.
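The fix follows the familiar defensive pattern sketched below (the AlertConfig type is a hypothetical stand-in for Pega's internal alert configuration page): load the configuration inside a try/catch so a malformed or missing page is logged and skipped rather than taking the server down.

```java
public class AlertConfigLoader {
    // A malformed or missing alert configuration no longer propagates and
    // prevents server startup; the alert is skipped and the problem logged.
    static AlertConfig loadSafely(String alertId) {
        try {
            return AlertConfig.load(alertId); // may throw on malformed config
        } catch (RuntimeException e) {
            System.err.println("Skipping malformed alert configuration for "
                    + alertId + ": " + e.getMessage());
            return null;
        }
    }

    // Minimal stand-in so the sketch compiles; the real type is a Pega internal.
    static class AlertConfig {
        static AlertConfig load(String alertId) {
            throw new IllegalStateException("alert configuration page missing");
        }
    }
}
```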
INC-189781 · Issue 677816
Database Transaction Log update overflow resolved
Resolved in Pega Version 8.7
When automatic.resume=false was encountered during an update, cleaning up the existing codeset from previous updates filled the database transaction logs and caused the update to fail. This has been resolved by updating the codeset-clearing process so it does not overflow the transaction log.
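The general technique is to delete in bounded batches and commit after each one, so the transaction log only ever holds a single batch. A hedged JDBC sketch of that pattern (table and column names are illustrative assumptions, and the ctid-based LIMIT trick is PostgreSQL-specific; this is not the actual update engine code):

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

public class BatchedCleanup {
    // Deletes matching rows in batches of 10,000, committing after each batch
    // so the transaction log never has to absorb the whole cleanup at once.
    static void clearCodeset(Connection conn, String codesetName) throws Exception {
        conn.setAutoCommit(false);
        // Hypothetical table/column names; the ctid subselect is one
        // PostgreSQL-specific way to bound a DELETE.
        String sql = "DELETE FROM pegadata.codeset_archive WHERE ctid IN ("
                   + "SELECT ctid FROM pegadata.codeset_archive "
                   + "WHERE codeset_name = ? LIMIT 10000)";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, codesetName);
            int deleted;
            do {
                deleted = stmt.executeUpdate();
                conn.commit(); // releases log space between batches
            } while (deleted > 0);
        }
    }
}
```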
INC-190070 · Issue 676643
Restored local blocking queue cache
Resolved in Pega Version 8.7
After update, it was not possible to bring up secondary VBD nodes after restarting. Investigation traced this to earlier work done to resolve a memory leak, in which stale entries for local blocking queues were removed from the cache and the queue listener logic was modified to use "cache.getQueueIfPresent(jobId)" instead of "cache.getQueue(jobId)". Because the listener no longer created the cache entry when it was absent, and the cache holding the local blocking queues had no entry for the current remote execution job ID, the caller of the remote execution on Node2 blocked forever waiting on the local blocking queue. To resolve this, the code has been updated to ensure the blocking queue is created and stored in the local queue cache before the remote job message is published.
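The ordering fix can be sketched with standard java.util.concurrent types (an illustration of the pattern, not the VBD remote execution code): the caller creates and caches the reply queue before publishing, so the lookup-only listener always has somewhere to deliver the response.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

public class RemoteExecutionClient {
    // Local cache of blocking queues keyed by remote execution job ID.
    private final ConcurrentHashMap<String, BlockingQueue<Object>> queueCache =
            new ConcurrentHashMap<>();

    Object execute(String jobId) throws InterruptedException {
        // Fix: create and cache the queue BEFORE publishing, so the reply
        // listener (which only does getQueueIfPresent-style lookups and never
        // creates entries) always finds a queue to deliver into.
        BlockingQueue<Object> replyQueue =
                queueCache.computeIfAbsent(jobId, id -> new ArrayBlockingQueue<>(1));
        publishRemoteJob(jobId);
        try {
            // Bounded wait instead of blocking forever.
            return replyQueue.poll(30, TimeUnit.SECONDS);
        } finally {
            queueCache.remove(jobId);
        }
    }

    // Called by the listener thread when the remote node replies.
    void onReply(String jobId, Object result) {
        BlockingQueue<Object> queue = queueCache.get(jobId); // lookup only
        if (queue != null) {
            queue.offer(result);
        }
    }

    private void publishRemoteJob(String jobId) {
        // Placeholder for the actual message publish to the remote node.
    }
}
```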