INC-201338 · Issue 690896
Restored local blocking queue cache
Resolved in Pega Version 8.7
After update, it was not possible to bring up secondary VBD nodes after restarting. Investigation traced this to earlier work done to resolve a memory leak, in which stale entries for local blocking queues were removed from the cache. That change modified the queue listener logic to use "cache.getQueueIfPresent(jobId)" instead of "cache.getQueue(jobId)". Because the listener no longer created the queue when it was absent, and the cache holding the local blocking queues had no entry for the current remote execution job ID, the caller of the remote execution on Node2 blocked indefinitely, waiting on the local blocking queue. To resolve this, the code has been updated to ensure the blocking queue is created and stored in the local queue cache before the remote job message is published.
INC-190070 · Issue 676644
Restored local blocking queue cache
Resolved in Pega Version 8.6.3
After update, it was not possible to bring up secondary VBD nodes after restarting. Investigation traced this to earlier work done to resolve a memory leak, in which stale entries for local blocking queues were removed from the cache. That change modified the queue listener logic to use "cache.getQueueIfPresent(jobId)" instead of "cache.getQueue(jobId)". Because the listener no longer created the queue when it was absent, and the cache holding the local blocking queues had no entry for the current remote execution job ID, the caller of the remote execution on Node2 blocked indefinitely, waiting on the local blocking queue. To resolve this, the code has been updated to ensure the blocking queue is created and stored in the local queue cache before the remote job message is published.
INC-190070 · Issue 676900
Restored local blocking queue cache
Resolved in Pega Version 8.6.3
After update, it was not possible to bring up secondary VBD nodes after restarting. Investigation traced this to earlier work done to resolve a memory leak, in which stale entries for local blocking queues were removed from the cache. That change modified the queue listener logic to use "cache.getQueueIfPresent(jobId)" instead of "cache.getQueue(jobId)". Because the listener no longer created the queue when it was absent, and the cache holding the local blocking queues had no entry for the current remote execution job ID, the caller of the remote execution on Node2 blocked indefinitely, waiting on the local blocking queue. To resolve this, the code has been updated to ensure the blocking queue is created and stored in the local queue cache before the remote job message is published.
INC-193153 · Issue 686295
Restored local blocking queue cache
Resolved in Pega Version 8.6.3
After update, it was not possible to bring up secondary VBD nodes after restarting. Investigation traced this to earlier work done to resolve a memory leak, in which stale entries for local blocking queues were removed from the cache. That change modified the queue listener logic to use "cache.getQueueIfPresent(jobId)" instead of "cache.getQueue(jobId)". Because the listener no longer created the queue when it was absent, and the cache holding the local blocking queues had no entry for the current remote execution job ID, the caller of the remote execution on Node2 blocked indefinitely, waiting on the local blocking queue. To resolve this, the code has been updated to ensure the blocking queue is created and stored in the local queue cache before the remote job message is published.
INC-201338 · Issue 690897
Restored local blocking queue cache
Resolved in Pega Version 8.6.3
After update, it was not possible to bring up secondary VBD nodes after restarting. Investigation traced this to earlier work done to resolve a memory leak, in which stale entries for local blocking queues were removed from the cache. That change modified the queue listener logic to use "cache.getQueueIfPresent(jobId)" instead of "cache.getQueue(jobId)". Because the listener no longer created the queue when it was absent, and the cache holding the local blocking queues had no entry for the current remote execution job ID, the caller of the remote execution on Node2 blocked indefinitely, waiting on the local blocking queue. To resolve this, the code has been updated to ensure the blocking queue is created and stored in the local queue cache before the remote job message is published.
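A minimal sketch of the create-before-publish pattern behind the fixes above, assuming a simple map-based local queue cache keyed by job ID (the class and method names here, such as RemoteJobClient and publishRemoteJob, are illustrative placeholders, not Pega's internal API):

```java
import java.util.Map;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch of the create-before-publish ordering described above.
public class RemoteJobClient {
    // Local cache of per-job blocking queues, keyed by job ID.
    private final Map<String, BlockingQueue<Object>> queueCache = new ConcurrentHashMap<>();

    // Caller side: create and store the blocking queue BEFORE publishing the
    // remote job message, so the reply listener is guaranteed to find it.
    public Object executeRemotely(String jobId) throws InterruptedException {
        BlockingQueue<Object> queue =
            queueCache.computeIfAbsent(jobId, id -> new ArrayBlockingQueue<>(1));
        publishRemoteJob(jobId); // hypothetical publish of the remote job message
        try {
            // Bounded wait avoids blocking forever if no reply ever arrives.
            return queue.poll(15, TimeUnit.MINUTES);
        } finally {
            queueCache.remove(jobId); // avoid leaking stale entries
        }
    }

    // Listener side: only look up the queue (get-if-present semantics); never
    // create it here. If the entry is missing, the reply is simply dropped.
    public void onRemoteJobCompleted(String jobId, Object result) {
        BlockingQueue<Object> queue = queueCache.get(jobId);
        if (queue != null) {
            queue.offer(result);
        }
    }

    private void publishRemoteJob(String jobId) {
        // Placeholder for the actual remote-execution publish call.
    }
}
```

The essential point is the ordering: the caller registers the blocking queue before the remote job message goes out, so the listener's get-if-present lookup cannot miss the entry for an in-flight job.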
INC-172675 · Issue 649455
Configuration added for extending queue processor timeout
Resolved in Pega Version 8.7
Alerts for queue processor (QP) items that took more than 15 minutes to run could result in the system marking the node as 'unhealthy'. In environments with Pega Health Check enabled, this would shut down the node gracefully. It was not possible to change this default, as it was hardcoded. To support systems that may have custom processes running beyond 15 minutes, a new setting has been exposed that allows configuration of the interval after which a node with a long-running queue processor is marked as unhealthy and restarted. By default this remains 900000 milliseconds / 900 seconds / 15 minutes, but it may be adjusted up to 24 hours to avoid premature node shutdown. The stale thread detection mechanism uses the configured value, falling back to the 15-minute default if no value is provided. In addition, the threshold's units in the UI have been changed from milliseconds to seconds.
INC-185322 · Issue 668321
Configuration added for extending queue processor timeout
Resolved in Pega Version 8.7
Alerts for queue processor (QP) items that took more than 15 minutes to run could result in the system marking the node as 'unhealthy'. In environments with Pega Health Check enabled, this would shut down the node gracefully. It was not possible to change this default, as it was hardcoded. To support systems that may have custom processes running beyond 15 minutes, a new setting has been exposed that allows configuration of the interval after which a node with a long-running queue processor is marked as unhealthy and restarted. By default this remains 900000 milliseconds / 900 seconds / 15 minutes, but it may be adjusted up to 24 hours to avoid premature node shutdown. The stale thread detection mechanism uses the configured value, falling back to the 15-minute default if no value is provided. In addition, the threshold's units in the UI have been changed from milliseconds to seconds.
INC-186898 · Issue 670313
Configuration added for extending queue processor timeout
Resolved in Pega Version 8.7
Alerts for queue processor (QP) items that took more than 15 minutes to run could result in the system marking the node as 'unhealthy'. In environments with Pega Health Check enabled, this would shut down the node gracefully. It was not possible to change this default, as it was hardcoded. To support systems that may have custom processes running beyond 15 minutes, a new setting has been exposed that allows configuration of the interval after which a node with a long-running queue processor is marked as unhealthy and restarted. By default this remains 900000 milliseconds / 900 seconds / 15 minutes, but it may be adjusted up to 24 hours to avoid premature node shutdown. The stale thread detection mechanism uses the configured value, falling back to the 15-minute default if no value is provided. In addition, the threshold's units in the UI have been changed from milliseconds to seconds.
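As a rough illustration of how a stale thread detector might resolve the configurable threshold described in the entries above (the class and method names here are hypothetical, not Pega's actual configuration API; only the 15-minute default and 24-hour ceiling come from the entries themselves):

```java
import java.time.Duration;

// Hypothetical sketch: resolving the unhealthy-node threshold from an
// optional configured value, as described in the entries above.
public final class StaleThreadTimeout {
    private static final Duration DEFAULT = Duration.ofMinutes(15); // 900000 ms
    private static final Duration MAX = Duration.ofHours(24);

    /**
     * @param configuredSeconds value from configuration (the UI units are
     *                          seconds), or null when the setting is not provided
     * @return the effective timeout: the configured value capped at 24 hours,
     *         or the 15-minute default when unset or invalid
     */
    public static Duration resolve(Long configuredSeconds) {
        if (configuredSeconds == null || configuredSeconds <= 0) {
            return DEFAULT; // setting absent: fall back to 15 minutes
        }
        Duration configured = Duration.ofSeconds(configuredSeconds);
        return configured.compareTo(MAX) > 0 ? MAX : configured;
    }
}
```

A node whose queue processor thread runs longer than the resolved duration would then be marked unhealthy and restarted, as described above.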