
Resolved Issues

View the resolved issues for a specific Platform release.


SR-D51554 · Issue 514061

Local UUID cache will be updated when merge event is detected

Resolved in Pega Version 8.2.5

Cluster-related issues were seen in multiple production clusters. On some nodes, the Cluster Management screen showed all expected nodes with valid node IDs, while on other nodes it showed only the node's own ID, SERVER@localhost:5701. On an impacted node displaying the wrong ID, the Node Information landing page did not work and displayed the error "Unable to execute job on ." Multiple advanced agents running on nodes in the affected clusters, both with correct and incorrect IDs, also failed with a similar error, "Unable to execute job on <node's job id>". This was traced to a merge performed after a split brain. To resolve this, the code has been updated to handle merge events: when the node UUID changes as part of split-brain recovery, the local UUID cache is updated when the merge event is detected.
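
Conceptually, the fix amounts to refreshing a locally cached member UUID whenever the cluster signals that a post-split-brain merge has completed. The following is a minimal sketch against the Hazelcast lifecycle API (Hazelcast underlies Pega clustering), assuming the Hazelcast 3.x signatures where member UUIDs are Strings; NodeUuidCache and its methods are hypothetical stand-ins for the local cache, not Pega's actual classes.

```java
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.LifecycleEvent;
import com.hazelcast.core.LifecycleListener;

import java.util.concurrent.atomic.AtomicReference;

/** Hypothetical local cache of this node's cluster UUID. */
class NodeUuidCache {
    private final AtomicReference<String> uuid = new AtomicReference<>();
    void update(String newUuid) { uuid.set(newUuid); }
    String get() { return uuid.get(); }
}

class MergeAwareUuidCache {
    static void register(HazelcastInstance hz, NodeUuidCache cache) {
        // Seed the cache with the current member UUID.
        cache.update(hz.getCluster().getLocalMember().getUuid());

        // After a split-brain merge the member may be assigned a new UUID,
        // so refresh the cache when the MERGED lifecycle event fires.
        hz.getLifecycleService().addLifecycleListener(new LifecycleListener() {
            @Override
            public void stateChanged(LifecycleEvent event) {
                if (event.getState() == LifecycleEvent.LifecycleState.MERGED) {
                    cache.update(hz.getCluster().getLocalMember().getUuid());
                }
            }
        });
    }
}
```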

SR-D52969 · Issue 514703

Column population honors thread count of 1

Resolved in Pega Version 8.2.5

The thread count parameter in the column population activity was not being honored, causing repeated deadlocks when trying to populate columns. Investigation showed that the ExposeCols process did not honor the thread count when it was 1 (the default is 4). This has been fixed by adding the necessary code so that if the thread count is 1, the process does not run in multithreaded mode.
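
The behavior described, falling back to a sequential pass when the configured thread count is 1, can be sketched as below. This is illustrative only: exposeBatch, ColumnPopulator, and the batch list are assumed names, not Pega's implementation.

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

class ColumnPopulator {
    /** Hypothetical worker that exposes one batch of rows as columns. */
    static void exposeBatch(List<String> keys) { /* ... write exposed column values ... */ }

    static void populate(List<List<String>> batches, int threadCount) throws InterruptedException {
        if (threadCount <= 1) {
            // Honor a thread count of 1: process batches sequentially,
            // avoiding the deadlocks seen with concurrent writers.
            for (List<String> batch : batches) {
                exposeBatch(batch);
            }
            return;
        }
        // Otherwise fan the batches out to a fixed-size pool.
        ExecutorService pool = Executors.newFixedThreadPool(threadCount);
        for (List<String> batch : batches) {
            pool.submit(() -> exposeBatch(batch));
        }
        pool.shutdown();
        pool.awaitTermination(1, TimeUnit.HOURS);
    }
}
```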

SR-D53408 · Issue 516735

OAuth refresh token will persist for obtaining new access token

Resolved in Pega Version 8.2.5

OAuth 2.0 was providing the refresh token only in the first response from the token endpoint. When the access token expired the first time, it was possible to get a new access token using the refresh token; however, when the access token expired a second time, it was not possible to generate a new access token automatically because the refresh token had been set to null. To resolve this, the system has been updated to persist the previous refresh token in order to get a new access token.
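
In OAuth 2.0 terms, the remedy is to keep the last known refresh token whenever the token endpoint does not return a new one, rather than overwriting it with null. A minimal sketch under that assumption; TokenStore and the parsed tokenResponse map are hypothetical, not Pega's actual token handling.

```java
import java.util.Map;

/** Hypothetical storage for the tokens issued to one client. */
class TokenStore {
    private String accessToken;
    private String refreshToken;

    void applyTokenResponse(Map<String, String> tokenResponse) {
        accessToken = tokenResponse.get("access_token");

        // Per RFC 6749, the authorization server MAY omit the refresh token
        // in a refresh-grant response; in that case the previous refresh
        // token remains usable, so keep it instead of replacing it with null.
        String newRefreshToken = tokenResponse.get("refresh_token");
        if (newRefreshToken != null && !newRefreshToken.isEmpty()) {
            refreshToken = newRefreshToken;
        }
    }

    String getAccessToken() { return accessToken; }
    String getRefreshToken() { return refreshToken; }
}
```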

SR-D62754 · Issue 559848

PrepareResponse updated to explicitly close Input Handler

Resolved in Pega Version 8.2.7

When using prpcServiceUtils to export a product in a Windows + WebLogic environment, attempting to export repeatedly using the same archiveName, with the intention of overwriting the older product with the newer one in the ServiceExport directory, failed with a FileNotFoundException. Investigation showed that the product file created by the pzExport REST call was not being released by the WebLogic file handler process. As a result, the next time the call was invoked the system tried to create the same file in the directory but failed due to the earlier file handle lock. To resolve this, the system has been updated to explicitly close the InputStream using try-with-resources.
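
The corrected pattern is standard Java try-with-resources: the stream (and its underlying file handle) is released deterministically, so Windows can later overwrite the same archive. A short sketch; the class name and path are illustrative only.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

class ExportResponseWriter {
    static byte[] readArchive(Path archive) throws IOException {
        // try-with-resources guarantees the InputStream is closed even if an
        // exception is thrown, so the next export can overwrite the same
        // archiveName in the ServiceExport directory on Windows.
        try (InputStream in = Files.newInputStream(archive)) {
            return in.readAllBytes();
        }
    }

    public static void main(String[] args) throws IOException {
        // Illustrative path under the ServiceExport directory.
        byte[] bytes = readArchive(Paths.get("ServiceExport", "MyProduct.zip"));
        System.out.println("Read " + bytes.length + " bytes");
    }
}
```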

SR-D78274 · Issue 544092

Handling added for dual privileges with MSSQL

Resolved in Pega Version 8.2.7

After setting up dual privileges, the Admin user was able to create a table but the base user received an "insufficient privileges" error. Investigation showed this was an issue when using MSSQL: the generated grant statements used the server login name as the user in the grant statement, instead of the database user. For all other databases, the username passed into the connection is the correct user to use for grants. Only MSSQL has a distinction between this connection user name (the login) and the database user, and since the login did not exist in the user table, the grant failed. To resolve this, when MSSQL is used, the system will fetch the underlying database user when determining the user for grant statement generation.
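
Only SQL Server distinguishes the login used on the connection from the database user that GRANT statements must reference, and the mapped database user can be looked up from the connection itself. A hedged JDBC sketch of that lookup, not Pega's actual implementation:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class GrantUserResolver {
    /**
     * On SQL Server, USER_NAME() returns the database user mapped to the
     * current login, which is the principal that GRANT statements must name.
     * For other databases the connection user name is already the grantee.
     */
    static String resolveGrantee(Connection conn) throws SQLException {
        String product = conn.getMetaData().getDatabaseProductName();
        if (product != null && product.toLowerCase().contains("microsoft sql server")) {
            try (Statement stmt = conn.createStatement();
                 ResultSet rs = stmt.executeQuery("SELECT USER_NAME()")) {
                rs.next();
                return rs.getString(1);
            }
        }
        return conn.getMetaData().getUserName();
    }
}
```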

SR-D84190 · Issue 547173

Post-Import Migration Agent query optimized

Resolved in Pega Version 8.2.7

A Post-Import Migration agent belonging to the Pega-ImportExport Ruleset and set to run every 60 seconds by default triggered the SQL query "select * from pegadata.pca_CWT_CXP_Work_Interaction", which ran for an excessive amount of time, caused a utilization spike, and then crashed the utility nodes. Investigation showed the excessive run time and load were caused by the query fetching a very large number of results. To better handle this scenario, the query usage has been updated.
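
The note does not describe the new query shape, but the general remedy for an unbounded "select *" is to pull back only the column the agent needs and to cap the rows processed per run. The sketch below is purely illustrative of that idea; the key-column name and batch-oriented reader are assumptions, not Pega's actual change.

```java
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

class MigrationBatchReader {
    /**
     * Illustrative only: select just the instance key (assumed column name)
     * and bound the rows handled in one agent run instead of loading every
     * column of every row.
     */
    static List<String> nextBatch(Connection conn, int batchSize) throws SQLException {
        String sql = "SELECT pzInsKey FROM pegadata.pca_CWT_CXP_Work_Interaction";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setMaxRows(batchSize);   // cap the result set for this run
            stmt.setFetchSize(batchSize); // hint the driver to fetch in chunks
            List<String> keys = new ArrayList<>();
            try (ResultSet rs = stmt.executeQuery()) {
                while (rs.next()) {
                    keys.add(rs.getString(1));
                }
            }
            return keys;
        }
    }
}
```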

SR-D84364 · Issue 551400

Check for circular references added to SearchInventoryImpl to prevent recursive call

Resolved in Pega Version 8.2.7

An out of memory error was traced to SearchInventoryImpl infinitely recursing over a clipboard property, where the child property referenced a parent property and resulted in an endless loop. This has been resolved with the addition of a depth check to ensure that the search does not recurse infinitely.
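
A depth guard on the traversal is the standard fix for this kind of self-referencing structure: recursion stops at a fixed depth instead of following the parent/child cycle forever. A simplified sketch; PropertyNode and MAX_DEPTH are hypothetical, not Pega's SearchInventoryImpl internals.

```java
import java.util.List;

class SearchIndexer {
    /** Hypothetical view of a clipboard property and its embedded children. */
    interface PropertyNode {
        String name();
        List<PropertyNode> children();
    }

    private static final int MAX_DEPTH = 25; // assumed limit, not Pega's actual value

    static void index(PropertyNode node, int depth) {
        // Guard against circular parent/child references: stop descending
        // once the depth limit is reached instead of recursing indefinitely.
        if (node == null || depth > MAX_DEPTH) {
            return;
        }
        // ... add node.name() to the search inventory ...
        for (PropertyNode child : node.children()) {
            index(child, depth + 1);
        }
    }
}
```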

SR-D85100 · Issue 556260

ProductInfoReader updated to fetch only most recent version information

Resolved in Pega Version 8.2.7

After update, running the Hfix scanner on Pega Marketing 8.2 reported missing critical Hfixes for Pega Marketing 8.1. This has been resolved by modifying ProductInfoReader.runQuery to fetch only the latest version of DAPF instances during a scan.
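
The described change amounts to keeping only the newest version row per product so that superseded releases are not scanned. A purely illustrative sketch of that filtering step; ProductInfo and the numeric dotted-version format are assumptions, not Pega's schema or code.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

class ProductInfoFilter {
    /** Hypothetical view of one installed-product row returned by the scan query. */
    static class ProductInfo {
        final String name;
        final String version;
        ProductInfo(String name, String version) { this.name = name; this.version = version; }
    }

    /** Numeric, segment-by-segment comparison, so "8.10" is newer than "8.2". */
    static int compareVersions(String a, String b) {
        String[] as = a.split("\\."), bs = b.split("\\.");
        for (int i = 0; i < Math.max(as.length, bs.length); i++) {
            int ai = i < as.length ? Integer.parseInt(as[i]) : 0;
            int bi = i < bs.length ? Integer.parseInt(bs[i]) : 0;
            if (ai != bi) return Integer.compare(ai, bi);
        }
        return 0;
    }

    /** Keep only the most recent version of each product from the scan results. */
    static Map<String, ProductInfo> latestOnly(List<ProductInfo> rows) {
        Map<String, ProductInfo> latest = new HashMap<>();
        for (ProductInfo row : rows) {
            latest.merge(row.name, row,
                    (current, candidate) ->
                            compareVersions(candidate.version, current.version) > 0 ? candidate : current);
        }
        return latest;
    }
}
```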

SR-D90687 · Issue 560427

IOException handling improved to resolve broken pipe errors

Resolved in Pega Version 8.2.7

Frequent "connection reset by peers" exceptions were being generated and broken-pipe exceptions were seen in the logs. Investigation traced the issue to unhanded IOExceptions on the server side that were a result of the client application not always closing the TCP connection gracefully. To resolve this, error handling for IOExceptions has been improved.

SR-B64134 · Issue 313542

Workaround added for Oracle 4000 character column value max

Resolved in Pega Version 7.3.1

Due to an Oracle limitation, a report definition on a CLOB column fails if the column value is longer than 4000 characters. To work around this, support has been added for retrieving data longer than 4000 characters from a CLOB column using a report definition, controlled by the DASS setting "reporting/retrieveFullClobContent" defined on "Pega-RULES".
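
Oracle caps character expressions at 4000 bytes in SQL, which is why substring/cast-style reads of a CLOB break past that length; reading the column through the JDBC character-stream API returns the full content. A hedged sketch of that kind of retrieval, with an illustrative table and column; this is not the DASS-controlled implementation itself.

```java
import java.io.IOException;
import java.io.Reader;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

class ClobColumnReader {
    /** Read the full CLOB value for one row, beyond the 4000-character SQL limit. */
    static String readFullClob(Connection conn, String insKey) throws SQLException, IOException {
        String sql = "SELECT pyLongDescription FROM pegadata.pc_work WHERE pzInsKey = ?";
        try (PreparedStatement stmt = conn.prepareStatement(sql)) {
            stmt.setString(1, insKey);
            try (ResultSet rs = stmt.executeQuery()) {
                if (!rs.next()) {
                    return null;
                }
                // getCharacterStream streams the whole CLOB rather than
                // truncating at Oracle's 4000-character boundary.
                Reader reader = rs.getCharacterStream(1);
                if (reader == null) {
                    return null; // SQL NULL
                }
                StringBuilder sb = new StringBuilder();
                char[] buf = new char[8192];
                int n;
                try {
                    while ((n = reader.read(buf)) != -1) {
                        sb.append(buf, 0, n);
                    }
                } finally {
                    reader.close();
                }
                return sb.toString();
            }
        }
    }
}
```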
