Resolved Issues

View the resolved issues for a specific Platform release.

NOTE: Enter just the Case ID number (SR or INC) to find the associated Support Request.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

SR-A91805 · Issue 259301

Archive/Purge JVM dependence removed

Resolved in Pega Version 7.2.2

If the JVM was shut down in the middle of the purge and archive process, some SLA queue items remained in a 'now-processing' state and were not automatically picked up by the SLA agent for rescheduling when the JVM was brought back up. This has been fixed by redesigning the Archive/Purge flow to run in the background in queue batches, so SLA queue items are updated immediately with their next goal. With this approach, updates of SLA queue items do not depend on the duration of the purge and archive process, and SLA queue items will always be in a scheduled state.
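
For illustration, the redesigned flow amounts to something like the sketch below, where work is pulled in small batches and each SLA item is rescheduled immediately; all class and method names here (WorkQueue, SlaItem, and so on) are hypothetical, not Pega's internal API.

    import java.util.List;

    // Sketch only: process archive/purge work in batches in a background task
    // and reschedule each SLA queue item right away, so no item is stranded in
    // a 'now-processing' state if the JVM stops mid-run.
    class BatchedArchivePurge implements Runnable {
        private static final int BATCH_SIZE = 100;
        private final WorkQueue queue;   // hypothetical source of SLA queue items

        BatchedArchivePurge(WorkQueue queue) { this.queue = queue; }

        @Override
        public void run() {
            List<SlaItem> batch;
            while (!(batch = queue.nextBatch(BATCH_SIZE)).isEmpty()) {
                for (SlaItem item : batch) {
                    archiveOrPurge(item);
                    // Reschedule immediately: the item returns to a scheduled
                    // state regardless of how long the overall run takes.
                    item.scheduleNextGoal();
                }
                queue.commit(batch);
            }
        }

        private void archiveOrPurge(SlaItem item) { /* archive or purge case data */ }
    }

    interface WorkQueue {
        List<SlaItem> nextBatch(int size);
        void commit(List<SlaItem> batch);
    }

    interface SlaItem {
        void scheduleNextGoal();
    }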

SR-A91805 · Issue 265722

Archive/Purge JVM dependence removed

Resolved in Pega Version 7.2.2

If the JVM was shut down in the middle of the purge and archive process, some SLA queue items remained in a 'now-processing' state and were not automatically picked up by the SLA agent for rescheduling when the JVM was brought back up. This has been fixed by redesigning the Archive/Purge flow to run in the background in queue batches, so SLA queue items are updated immediately with their next goal. With this approach, updates of SLA queue items do not depend on the duration of the purge and archive process, and SLA queue items will always be in a scheduled state.

SR-A99908 · Issue 270284

Purge export logic updated to handle TimeCreated keys that differ by milliseconds

Resolved in Pega Version 7.2.2

Purge archive was failing during export of the archives. Analysis showed that when exporting keys of a History class mapped to a table with no BLOB and pxTimeCreated as a key column, mapping this to a column of Date type on Oracle caused the pzInsKeys of History- instances to be recalculated, with the millisecond part of the timestamp rounded to 000 in the key. This caused keys that differed only in milliseconds to become duplicates, and the export failed. This has been resolved by modifying the export logic to handle keys that differ in milliseconds for blobless History class instances.
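
The collision is easy to reproduce. The sketch below uses an illustrative key layout (not Pega's actual pzInsKey format) to show how truncating milliseconds makes two otherwise distinct keys equal:

    import java.sql.Timestamp;

    // Demonstrates the collision described above: two pxTimeCreated values that
    // differ only in milliseconds become identical once the millisecond part is
    // truncated to 000, as happens when the value is mapped to an Oracle DATE
    // column, which has only second precision.
    public class MillisecondKeyCollision {
        static String insKey(String className, Timestamp timeCreated, boolean keepMillis) {
            long ms = timeCreated.getTime();
            if (!keepMillis) {
                ms = (ms / 1000) * 1000;   // round the millisecond part to 000
            }
            return className + "!" + new Timestamp(ms);
        }

        public static void main(String[] args) {
            Timestamp a = Timestamp.valueOf("2016-09-01 10:15:30.123");
            Timestamp b = Timestamp.valueOf("2016-09-01 10:15:30.456");

            // With millisecond precision the keys are distinct...
            System.out.println(insKey("History-Work", a, true));
            System.out.println(insKey("History-Work", b, true));

            // ...but after rounding they collide, which made the export fail.
            System.out.println(insKey("History-Work", a, false)
                    .equals(insKey("History-Work", b, false)));   // prints: true
        }
    }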

SR-A15627 · Issue 232738

JBoss shutdown errors resolved

Resolved in Pega Version 7.2.2

During a controlled shutdown, database-level exceptions occurred and a hard reset was required. This was traced to the container not correctly detecting dependencies, leading JBoss to prematurely terminate JMS and JDBC resources that were still needed during Pega shutdown. Pega applications on JBoss now contain information to ensure the application server understands that these resources have dependent processes, and the lookup-name (with the JBoss-specific namespace) is maintained correctly so they are not cleaned up prematurely. The web.xml file has also been updated to include a lookup-name in the entry for ResourceRef_1 to handle cases where the datasources are injected using CLI scripts after JBoss is started and before Pega is deployed.
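
As a rough illustration, a web.xml resource-ref carrying an explicit lookup-name might look like the following; the resource name and JNDI path are placeholders, not the values Pega ships:

    <!-- Illustrative only: resource name and JNDI path are placeholders. -->
    <resource-ref>
        <res-ref-name>jdbc/PegaRULES</res-ref-name>
        <res-type>javax.sql.DataSource</res-type>
        <res-auth>Container</res-auth>
        <!-- An explicit lookup-name in the JBoss-specific namespace keeps the
             binding stable when datasources are created via CLI scripts. -->
        <lookup-name>java:jboss/datasources/PegaRULES</lookup-name>
    </resource-ref>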

INC-190070 · Issue 676643

Restored local blocking queue cache

Resolved in Pega Version 8.7

After update, it was not possible to bring up secondary VBD nodes after restarting. Investigation traced this to earlier work done to resolve a memory leak, in which stale entries for local blocking queues were removed from the cache; as part of that change, the queue listener logic was modified to use "cache.getQueueIfPresent(jobId)" instead of "cache.getQueue(jobId)". Because the listener no longer created the cache entry if it was not present, and the cache that held the local blocking queue had no entry for the current remote execution job ID, the caller of the remote execution on Node2 blocked forever, waiting on the local blocking queue. To resolve this, the code has been updated to ensure the blocking queue is created and stored in the local queue cache before publishing the remote job message.
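
A minimal sketch of the corrected ordering, assuming a simple map-of-blocking-queues cache (the names here are illustrative, not Pega's internal VBD code):

    import java.util.concurrent.*;

    // Sketch only: create and register the local blocking queue *before*
    // publishing the remote job, so the reply handler always finds an entry
    // for the job ID and the caller can never block forever.
    class RemoteExecutor {
        private final ConcurrentMap<String, BlockingQueue<Object>> cache =
                new ConcurrentHashMap<>();

        Object execute(String jobId, Runnable publishRemoteJob) throws InterruptedException {
            // The fix: register the queue first, then publish. Publishing first
            // risked the reply arriving before the queue existed.
            BlockingQueue<Object> queue =
                    cache.computeIfAbsent(jobId, id -> new LinkedBlockingQueue<>());
            publishRemoteJob.run();
            try {
                return queue.poll(30, TimeUnit.SECONDS);   // wait for the remote result
            } finally {
                cache.remove(jobId);                       // avoid the original memory leak
            }
        }

        // Called by the messaging layer when the remote node replies.
        void onReply(String jobId, Object result) {
            BlockingQueue<Object> queue = cache.get(jobId); // getQueueIfPresent-style lookup
            if (queue != null) {
                queue.offer(result);
            }
        }
    }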

INC-193153 · Issue 686293

Restored local blocking queue cache

Resolved in Pega Version 8.7

After update, it was not possible to bring up secondary VBD nodes after restarting. Investigation traced this to earlier work done to resolve a memory leak, in which stale entries for local blocking queues were removed from the cache; as part of that change, the queue listener logic was modified to use "cache.getQueueIfPresent(jobId)" instead of "cache.getQueue(jobId)". Because the listener no longer created the cache entry if it was not present, and the cache that held the local blocking queue had no entry for the current remote execution job ID, the caller of the remote execution on Node2 blocked forever, waiting on the local blocking queue. To resolve this, the code has been updated to ensure the blocking queue is created and stored in the local queue cache before publishing the remote job message.

INC-201338 · Issue 690896

Restored local blocking queue cache

Resolved in Pega Version 8.7

After update, it was not possible to bring up secondary VBD nodes after restarting. Investigation traced this to earlier work done to resolve a memory leak, in which stale entries for local blocking queues were removed from the cache; as part of that change, the queue listener logic was modified to use "cache.getQueueIfPresent(jobId)" instead of "cache.getQueue(jobId)". Because the listener no longer created the cache entry if it was not present, and the cache that held the local blocking queue had no entry for the current remote execution job ID, the caller of the remote execution on Node2 blocked forever, waiting on the local blocking queue. To resolve this, the code has been updated to ensure the blocking queue is created and stored in the local queue cache before publishing the remote job message.

INC-172675 · Issue 649455

Configuration added for extending queue processor timeout

Resolved in Pega Version 8.7

Alerts for queue processor (QP) items that took more than 15 minutes to run could result in the system marking the node as 'unhealthy'. In environments with Pega Health Check enabled, this would shut down the node gracefully. It was not possible to change this default because it was hardcoded. To support systems that may have custom processes running beyond 15 minutes, a new setting has been exposed that allows configuration of the interval after which a node with a long-running queue processor is marked as unhealthy and restarted. By default this remains 900,000 milliseconds (900 seconds, or 15 minutes), but it may be increased up to 24 hours to avoid premature node shutdown. The stale thread detection mechanism takes this setting into account, using the provided value or defaulting to 15 minutes if no value is provided. In addition, the threshold's units in the UI have been changed from milliseconds to seconds.
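
The threshold handling amounts to the sketch below; the class and method names are assumptions for illustration, not the actual Pega setting or API:

    import java.time.Duration;

    // Sketch of the stale-thread check: use the configured timeout if one is
    // provided, fall back to the 15-minute default otherwise, and cap the
    // value at 24 hours.
    class QueueProcessorHealth {
        private static final long DEFAULT_TIMEOUT_MS = Duration.ofMinutes(15).toMillis();
        private static final long MAX_TIMEOUT_MS = Duration.ofHours(24).toMillis();

        static long effectiveTimeoutMs(Long configuredMs) {
            if (configuredMs == null || configuredMs <= 0) {
                return DEFAULT_TIMEOUT_MS;                     // no value: 15 minutes
            }
            return Math.min(configuredMs, MAX_TIMEOUT_MS);     // cap at 24 hours
        }

        static boolean isUnhealthy(long itemStartMillis, Long configuredMs) {
            long elapsed = System.currentTimeMillis() - itemStartMillis;
            return elapsed > effectiveTimeoutMs(configuredMs);
        }
    }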

INC-185322 · Issue 668321

Configuration added for extending queue processor timeout

Resolved in Pega Version 8.7

Alerts for queue processor (QP) items that took more than 15 minutes to run could result in the system marking the node as 'unhealthy'. In environments with Pega Health Check enabled, this would shut down the node gracefully. It was not possible to change this default because it was hardcoded. To support systems that may have custom processes running beyond 15 minutes, a new setting has been exposed that allows configuration of the interval after which a node with a long-running queue processor is marked as unhealthy and restarted. By default this remains 900,000 milliseconds (900 seconds, or 15 minutes), but it may be increased up to 24 hours to avoid premature node shutdown. The stale thread detection mechanism takes this setting into account, using the provided value or defaulting to 15 minutes if no value is provided. In addition, the threshold's units in the UI have been changed from milliseconds to seconds.

INC-186898 · Issue 670313

Configuration added for extending queue processor timeout

Resolved in Pega Version 8.7

Alerts for queue processor (QP) items that took more than 15 minutes to run could result in the system marking the node as 'unhealthy'. In environments with Pega Health Check enabled, this would shut down the node gracefully. It was not possible to change this default because it was hardcoded. To support systems that may have custom processes running beyond 15 minutes, a new setting has been exposed that allows configuration of the interval after which a node with a long-running queue processor is marked as unhealthy and restarted. By default this remains 900,000 milliseconds (900 seconds, or 15 minutes), but it may be increased up to 24 hours to avoid premature node shutdown. The stale thread detection mechanism takes this setting into account, using the provided value or defaulting to 15 minutes if no value is provided. In addition, the threshold's units in the UI have been changed from milliseconds to seconds.
