Resolved Issues

View the resolved issues for a specific Platform release.

Go to download resolved issues by patch release.

Browse release notes for a selected Pega Version.

NOTE: Enter just the Case ID number (SR or INC) to find the associated Support Request.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

INC-130304 · Issue 567924

Retry logic added for downloading upgraded rules

Resolved in Pega Version 8.1.9

A Rules upgrade failed while downloading applications from the maintenance server due to an SFTP server connection failure. This has been resolved by adding logic to retry if the first connection attempt fails.
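The notes describe the fix only at a high level; conceptually it is a bounded retry loop around the connection and download call. A minimal Java sketch, where the method names, attempt count, and delay are illustrative assumptions rather than Pega's actual implementation:

```java
import java.util.concurrent.Callable;

/** Minimal retry sketch; names and retry parameters are illustrative, not Pega's code. */
public final class RetryingDownload {

    /** Runs the task, retrying after a delay when an attempt fails. */
    static <T> T withRetries(Callable<T> task, int maxAttempts, long delayMillis) throws Exception {
        Exception last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return task.call();
            } catch (Exception e) {            // e.g. an SFTP connection failure
                last = e;
                if (attempt < maxAttempts) {
                    Thread.sleep(delayMillis); // wait before the next attempt
                }
            }
        }
        throw last;                            // all attempts failed
    }

    public static void main(String[] args) throws Exception {
        // Hypothetical use: retry the application download up to 3 times, 5s apart.
        byte[] archive = withRetries(RetryingDownload::downloadFromMaintenanceServer, 3, 5_000L);
        System.out.println("Downloaded " + archive.length + " bytes");
    }

    private static byte[] downloadFromMaintenanceServer() {
        return new byte[]{1, 2, 3};            // stand-in for the real SFTP download
    }
}
```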

INC-130695 · Issue 587659

Enhancements for upgrading in multi-tenant environment

Resolved in Pega Version 8.1.9

Some multi-tenant installations use the same applications or rule instances with the same pzInsKeys for different tenants. This can cause upgrades to time out: the system fetches all pzInsKeys (which will include duplicates) and works through them in a default batch size of 500 over 4 threads, so the same key could be allocated to and processed by different threads, resulting in duplicate work and timeouts. This has been resolved by updating the select query to fetch both the tenantid and the pzInsKeys in the multi-tenant system, avoiding duplicate work across threads.

In addition, running Generate Declarative Indexes fetches the pzInsKeys and generates indexes for each record; before generating, the existing index for the record is deleted and then re-inserted. Because the delete query was not tenant aware, the records for a key were deleted for all tenants, but the new index was created in only one tenant. This has been resolved by enhancing the DELETE query to be tenant aware, so that it no longer deletes the indexes of every tenant for a given index key.
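In effect, the fix keys the work items on the (tenantid, pzInsKey) pair and adds a tenant filter to the index DELETE. A hedged JDBC sketch of both queries; the table and column names (rules_table, index_table) are placeholders, not Pega's actual schema:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/** Illustrative sketch only; table/column names are assumptions, not Pega's schema. */
final class TenantAwareIndexing {

    /** Fetch each key together with its tenant so that batches never mix tenants
     *  and two threads cannot pick up the same logical record. */
    static void fetchKeysPerTenant(Connection conn) throws SQLException {
        String select = "SELECT tenantid, pzInsKey FROM rules_table ORDER BY tenantid, pzInsKey";
        try (PreparedStatement ps = conn.prepareStatement(select);
             ResultSet rs = ps.executeQuery()) {
            while (rs.next()) {
                String tenant = rs.getString("tenantid");
                String insKey = rs.getString("pzInsKey");
                // ... assign the unique (tenant, insKey) pair to a worker batch ...
            }
        }
    }

    /** Delete only this tenant's index rows before regenerating them. */
    static void deleteOldIndex(Connection conn, String tenant, String insKey) throws SQLException {
        String delete = "DELETE FROM index_table WHERE tenantid = ? AND pzInsKey = ?";
        try (PreparedStatement ps = conn.prepareStatement(delete)) {
            ps.setString(1, tenant);   // the added tenant-awareness
            ps.setString(2, insKey);
            ps.executeUpdate();
        }
    }
}
```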

INC-132218 · Issue 573359

Resolved buffer overflow for Migration loadDatabase

Resolved in Pega Version 8.1.9

A Rules upgrade failed in the Migration step at the loadDatabase stage, which moves all table records from the old schema to the new schema. This was traced to Migration being unable to load blobs larger than 100 MB, and has been resolved by updating Migration to read the blob content into a byte[] sized from metadata that contains the blob length.
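A rough sketch of length-aware blob reading over JDBC, assuming pzPVStream as the blob column and records_table as a placeholder table name; the real Migration code is not shown in these notes:

```java
import java.sql.Blob;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

/** Sketch of reading a blob into a byte[] sized from its length metadata. */
final class BlobReader {

    static byte[] readBlob(Connection conn, String insKey) throws SQLException {
        String sql = "SELECT pzPVStream FROM records_table WHERE pzInsKey = ?";
        try (PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, insKey);
            try (ResultSet rs = ps.executeQuery()) {
                if (!rs.next()) {
                    return new byte[0];
                }
                Blob blob = rs.getBlob(1);
                // Size the buffer from the blob's length metadata instead of
                // assuming the content fits a fixed-size buffer.
                long length = blob.length();
                byte[] content = blob.getBytes(1, (int) length); // blob offsets are 1-based
                blob.free();
                return content;
            }
        }
    }
}
```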

INC-133202 · Issue 574701

TableRenameUtil hashing improved

Resolved in Pega Version 8.1.9

During index name generation, the algorithm responsible for index name uniqueness was sometimes insufficient and created a loop condition. This has been resolved by using a stronger hash algorithm and refactoring the code that could result in a loop.
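The notes do not name the stronger hash that was chosen; below is a minimal sketch of the general technique, assuming SHA-256, where a collision-resistant suffix makes generated names unique without looping:

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.NoSuchAlgorithmException;

/** Illustrative index-name generation; the algorithm and name format are assumptions. */
final class IndexNameHasher {

    /** Derives a short hex suffix from a SHA-256 digest of the full input so that
     *  distinct table/column combinations get distinct names. */
    static String indexName(String tableName, String columnList) throws NoSuchAlgorithmException {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] digest = sha256.digest((tableName + "|" + columnList).getBytes(StandardCharsets.UTF_8));
        StringBuilder hex = new StringBuilder();
        for (int i = 0; i < 4; i++) {          // 8 hex characters keeps the name short
            hex.append(String.format("%02x", digest[i] & 0xFF));
        }
        // Truncate the base so the result respects typical identifier-length limits.
        String base = tableName.length() > 20 ? tableName.substring(0, 20) : tableName;
        return "IX_" + base + "_" + hex;
    }

    public static void main(String[] args) throws NoSuchAlgorithmException {
        System.out.println(indexName("pr4_rule_property", "pxInsName,pxObjClass"));
    }
}
```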

SR-D84364 · Issue 551402

Check for circular references added to SearchInventoryImpl to prevent recursive call

Resolved in Pega Version 8.1.9

An out of memory error was traced to SearchInventoryImpl infinitely recursing over a clipboard property, where the child property referenced a parent property and resulted in an endless loop. This has been resolved with the addition of a depth check to ensure that the search does not recurse infinitely.
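A toy sketch of the depth-check pattern; the Property class and the cutoff value are illustrative, since the notes do not state Pega's actual limit:

```java
import java.util.ArrayList;
import java.util.List;

/** Minimal sketch of a depth-limited traversal over a possibly-circular structure. */
final class DepthLimitedSearch {

    static final int MAX_DEPTH = 25; // assumed cutoff; the real value is an implementation detail

    /** Toy property node that can (incorrectly) reference its own ancestor. */
    static final class Property {
        final String name;
        final List<Property> children = new ArrayList<>();
        Property(String name) { this.name = name; }
    }

    static void search(Property prop, int depth) {
        if (depth > MAX_DEPTH) {
            return; // stop descending instead of recursing forever on a circular reference
        }
        // ... inventory prop here ...
        for (Property child : prop.children) {
            search(child, depth + 1);
        }
    }

    public static void main(String[] args) {
        Property parent = new Property("parent");
        Property child = new Property("child");
        parent.children.add(child);
        child.children.add(parent);   // the circular reference that caused the endless loop
        search(parent, 0);            // now terminates at MAX_DEPTH
        System.out.println("Traversal finished without exhausting memory");
    }
}
```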

INC-215937 · Issue 713773

Added exception handling for PageGroup alerts

Resolved in Pega Version 8.6.5

Queue items were going to the broken queue when there was an issue fetching the alert configuration from the Queue Processor rule, with the error "java.lang.IllegalArgumentException: Alert id cannot be blank". This has been resolved by adding exception handling while gathering alerts from PageGroup so that a malformed alert configuration will not cause overall failure of a processed message; instead, an empty alert will be returned if the configuration data is corrupted.
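A minimal sketch of the described containment pattern, with hypothetical class and method names; the point is that a parse failure yields an empty result instead of failing the whole message:

```java
import java.util.Collections;
import java.util.List;

/** Illustrative only: isolate a malformed alert configuration from message processing. */
final class AlertGathering {

    static final class Alert { /* placeholder for the alert payload */ }

    /** Returns the configured alerts, or an empty list if the configuration is corrupted. */
    static List<Alert> gatherAlerts(Object pageGroupConfig) {
        try {
            return readAlertsFrom(pageGroupConfig);
        } catch (IllegalArgumentException e) {
            // e.g. "Alert id cannot be blank": log and continue with no alerts
            // instead of sending the queue item to the broken queue.
            System.err.println("Skipping malformed alert configuration: " + e.getMessage());
            return Collections.emptyList();
        }
    }

    private static List<Alert> readAlertsFrom(Object config) {
        // Stand-in for parsing the Queue Processor rule's alert configuration.
        throw new IllegalArgumentException("Alert id cannot be blank");
    }
}
```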

INC-217781 · Issue 714185

JobScheduler updated to better handle DST change

Resolved in Pega Version 8.6.5

If a job scheduler was set to run on a weekly basis between 1 AM and 3 AM CET, the DST time change caused the job scheduler to skip that week. Because of DST, there is one 23-hour day each year, and if the execution time fell in that missing hour, the system threw an IllegalArgumentException for the nonexistent date. This has been resolved by adding a check that verifies whether a given date exists; if it does not, the system postpones the execution time by one hour.
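The existence check can be expressed directly with java.time, which models DST gaps; the sketch below illustrates the described behavior (it is not Pega's scheduler code), using Europe/Paris as a CET/CEST zone:

```java
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.zone.ZoneRules;

/** Sketch of the added DST-gap check; zone and times are illustrative. */
final class DstSafeScheduling {

    /** Returns a run time that actually exists in the zone, postponing by one hour
     *  when the requested wall-clock time falls inside the spring-forward gap. */
    static LocalDateTime nextValidRunTime(LocalDateTime requested, ZoneId zone) {
        ZoneRules rules = zone.getRules();
        if (rules.getValidOffsets(requested).isEmpty()) {
            // The local time was skipped by the DST transition, e.g. 02:30 on the
            // night the clocks jump from 02:00 to 03:00.
            return requested.plusHours(1);
        }
        return requested;
    }

    public static void main(String[] args) {
        ZoneId cet = ZoneId.of("Europe/Paris");
        LocalDateTime skipped = LocalDateTime.of(2022, 3, 27, 2, 30); // inside the 2022 gap
        System.out.println(nextValidRunTime(skipped, cet));           // 2022-03-27T03:30
    }
}
```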

INC-218001 · Issue 719922

Error text revised for parameterized data page used for token generation

Resolved in Pega Version 8.6.5

While trying to add a claim in the header of a Token Generation Profile instance, setting "Map From" to "Clipboard" and supplying a parameterized data page as the source property failed to save, and the error "JWS Alias— Please provide correct algorithm key with correct key length." appeared. Changing "Map From" to a Constant with a dummy value worked as expected. Tracer showed the error "declare page parameters not supported by PropertyReference", which indicated the actual issue: at this time, the Token Generation Profile does not support using a parameterized data page. This has been addressed by showing an appropriate error message on save of the token profile rule form when a parameterized data page reference is configured. The error now reads "The reference D_pzPreferenceStore[PreferenceOperatorID:"[email protected]"].pxObjClass is not valid. Reason: Parameterized data page reference is not supported." Support for a parameterized data page used with "Map From" will be taken up as an enhancement in a future release.
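A rough sketch of that kind of save-time validation; the detection pattern and method are assumptions for illustration, since the notes only show the resulting message text:

```java
import java.util.regex.Pattern;

/** Hypothetical validation sketch; the regex and wiring are not Pega's actual code. */
final class TokenProfileValidation {

    // A parameterized data page reference looks like D_PageName[Param:"value"]...
    private static final Pattern PARAMETERIZED_DATA_PAGE = Pattern.compile("^D_\\w+\\[.+\\].*");

    /** Returns an error message for unsupported references, or null when the reference is fine. */
    static String validateClaimSource(String reference) {
        if (PARAMETERIZED_DATA_PAGE.matcher(reference).matches()) {
            return "The reference " + reference + " is not valid. "
                 + "Reason: Parameterized data page reference is not supported.";
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(validateClaimSource("D_SamplePage[Param:\"value\"].pxObjClass"));
    }
}
```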

INC-218340 · Issue 714663

Override added to delete records for a stream dataset after processing

Resolved in Pega Version 8.6.5

Kafka data was accumulating for a Stream data set due to a huge volume of inbound calls. This has been resolved by adding support for overriding pyDeletedProcessed through a DASS in order to remove the records for a particular stream dataset (topic) as soon as they are processed by Pega.
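Pega's internal implementation is not shown in these notes, but the effect of removing already-processed records can be illustrated with Kafka's public admin API, which truncates a topic partition up to a given offset; the topic name and offsets below are illustrative:

```java
import java.util.Map;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

/** Conceptual sketch using Kafka's admin API; not Pega's internal mechanism. */
final class DeleteProcessedRecords {

    /** Truncates the partition up to the last processed offset so that
     *  already-processed records stop accumulating on disk. */
    static void deleteProcessed(AdminClient admin, String topic, int partition, long processedUpTo)
            throws Exception {
        TopicPartition tp = new TopicPartition(topic, partition);
        Map<TopicPartition, RecordsToDelete> request =
                Map.of(tp, RecordsToDelete.beforeOffset(processedUpTo));
        admin.deleteRecords(request).all().get(); // wait for the broker to apply the truncation
    }

    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            deleteProcessed(admin, "my-stream-dataset-topic", 0, 1_000L);
        }
    }
}
```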

INC-218909 · Issue 715282

Override added to delete records for a stream dataset after processing

Resolved in Pega Version 8.6.5

Kafka data was accumulating for a Stream data set due to a huge volume of inbound calls. This has been resolved by adding support for overriding pyDeletedProcessed through a DASS in order to remove the records for a particular stream dataset (topic) as soon as they are processed by Pega.
