
Resolved Issues

View the resolved issues for a specific Platform release.


Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

INC-231625 · Issue 735231

Handling updated for fetching chain of archived ancestor cases

Resolved in Pega Version 8.8

Previous work to show the case hierarchy dropdown for archived cases in the ReviewArchivedCase harness has been further modified to cover additional use cases. The original method for fetching the parent case was "Obj-Open-By-Handle", which supports secondary storage but triggers all declaratives and may lead to errors under specific configurations. Using "Obj-Browse" instead does not support secondary storage, but also does not trigger declaratives. To ensure the best coverage for various use cases, the logic has been updated to conditionally use both methods: the system uses "Obj-Browse" first, and if that fails it checks secondary storage using "Obj-Open-By-Handle".
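
As an illustration of the conditional logic described above, the following minimal Java sketch shows the try-then-fall-back pattern. The helper methods browseCase and openByHandleWithSecondaryStorage are hypothetical stand-ins for the Obj-Browse and Obj-Open-By-Handle steps, not actual Pega engine APIs.

    // Sketch of the fallback pattern only; all names below are hypothetical.
    public class ArchivedAncestorFetcher {

        // Returns the ancestor case data, or null if it cannot be found anywhere.
        public CaseData fetchAncestor(String caseHandle) {
            // First attempt: the Obj-Browse-style lookup. It does not trigger
            // declaratives, but it cannot see cases held in secondary storage.
            CaseData ancestor = browseCase(caseHandle);
            if (ancestor != null) {
                return ancestor;
            }
            // Fallback: the Obj-Open-By-Handle-style lookup. It can read from
            // secondary storage, at the cost of firing declaratives.
            return openByHandleWithSecondaryStorage(caseHandle);
        }

        private CaseData browseCase(String handle) { /* hypothetical */ return null; }
        private CaseData openByHandleWithSecondaryStorage(String handle) { /* hypothetical */ return null; }
    }

    // Hypothetical value object for illustration only.
    class CaseData { }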

SR-C75099 · Issue 413432

Improvements made for the stability and reliability of the stream tier

Resolved in Pega Version 8.2

Several enhancements and fixes have been made to improve the stability and reliability of the stream tier, with a focus on rolling restart.

Issue 410567: When a stream node shut down, it took up to a minute for all client nodes to recognize the event and fail over to other nodes due to large default timeouts. Timeouts have now been reduced so nodes fail over much sooner.

Issue 413375: When a stream node shuts down, other stream nodes are notified and attempt to discover the new cluster topology. Because the cluster is still in flux, this operation could take a long time, blocking all producers until a new topology was determined. The procedure has been made less resource-intensive and now performs the discovery only when required.

Issue 401326: Kafka (running as a separate process as part of the Stream services) generates log files but did not use log rotation with a maximum number of log files, so the log files would eventually fill up the disk, leading to failures of the service and of the PRPC instance running on the same disk. To resolve this, the system now limits the number of generated log files, using a maximum file size of 2 MB and a maximum of 10 files.

Issue 410280: A large number of "Processing override for entityPath" entries were seen in the Kafka logs. This has been addressed by ensuring message.timestamp.difference.max.ms is set equal to retention.ms. In addition, the system now properly compares the Kafka configuration created for the entire topic with the configuration retrieved for partitions, so the configuration is not unnecessarily updated.

Issue 414765: During stream node restart, decision hub nodes could block for 15-30 seconds when writing to a stream data set under certain conditions, causing high response times for a short period. This has been addressed by updating the stream data set logic to handle stream tier restarts properly without degrading response times.

Issue 401093: HttpMessageDecoder was throwing a TooLongFrameException, causing a 502 on HTTP stream tier requests with headers larger than 8192 bytes. The default maximum header size used by the stream data set REST service was 8 KB; in this case the HTTP header contained many cookies and other meta-information and exceeded 8 KB. To resolve this, the default maximum header size has been increased to 16 KB, and a new prconfig setting to control the maximum header size has been introduced.
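
The timestamp/retention alignment from Issue 410280 can be expressed with the standard Kafka AdminClient API. The sketch below is illustrative only: the broker address and topic name are placeholders, not values taken from the platform's internal configuration.

    import java.util.Collections;
    import java.util.Properties;
    import org.apache.kafka.clients.admin.AdminClient;
    import org.apache.kafka.clients.admin.AlterConfigOp;
    import org.apache.kafka.clients.admin.Config;
    import org.apache.kafka.clients.admin.ConfigEntry;
    import org.apache.kafka.common.config.ConfigResource;

    public class AlignTimestampWithRetention {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.put("bootstrap.servers", "localhost:9092"); // assumed broker address

            try (AdminClient admin = AdminClient.create(props)) {
                // "example-stream-topic" is a placeholder topic name.
                ConfigResource topic =
                    new ConfigResource(ConfigResource.Type.TOPIC, "example-stream-topic");

                // Read the topic's current retention.ms value.
                Config current = admin.describeConfigs(Collections.singleton(topic))
                                      .all().get().get(topic);
                String retentionMs = current.get("retention.ms").value();

                // Set message.timestamp.difference.max.ms to the same value so that
                // late-timestamped messages no longer trigger repeated overrides.
                AlterConfigOp op = new AlterConfigOp(
                    new ConfigEntry("message.timestamp.difference.max.ms", retentionMs),
                    AlterConfigOp.OpType.SET);
                admin.incrementalAlterConfigs(
                        Collections.singletonMap(topic, Collections.singletonList(op)))
                     .all().get();
            }
        }
    }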

SR-C77625 · Issue 414765

Improvements made for the stability and reliability of the stream tier

Resolved in Pega Version 8.2

Several enhancements and fixes have been made to improve the stability and reliability of the stream tier, with a focus on rolling restart.

Issue 410567: When a stream node shut down, it took up to a minute for all client nodes to recognize the event and fail over to other nodes due to large default timeouts. Timeouts have now been reduced so nodes fail over much sooner.

Issue 413375: When a stream node shuts down, other stream nodes are notified and attempt to discover the new cluster topology. Because the cluster is still in flux, this operation could take a long time, blocking all producers until a new topology was determined. The procedure has been made less resource-intensive and now performs the discovery only when required.

Issue 401326: Kafka (running as a separate process as part of the Stream services) generates log files but did not use log rotation with a maximum number of log files, so the log files would eventually fill up the disk, leading to failures of the service and of the PRPC instance running on the same disk. To resolve this, the system now limits the number of generated log files, using a maximum file size of 2 MB and a maximum of 10 files.

Issue 410280: A large number of "Processing override for entityPath" entries were seen in the Kafka logs. This has been addressed by ensuring message.timestamp.difference.max.ms is set equal to retention.ms. In addition, the system now properly compares the Kafka configuration created for the entire topic with the configuration retrieved for partitions, so the configuration is not unnecessarily updated.

Issue 414765: During stream node restart, decision hub nodes could block for 15-30 seconds when writing to a stream data set under certain conditions, causing high response times for a short period. This has been addressed by updating the stream data set logic to handle stream tier restarts properly without degrading response times.

Issue 401093: HttpMessageDecoder was throwing a TooLongFrameException, causing a 502 on HTTP stream tier requests with headers larger than 8192 bytes. The default maximum header size used by the stream data set REST service was 8 KB; in this case the HTTP header contained many cookies and other meta-information and exceeded 8 KB. To resolve this, the default maximum header size has been increased to 16 KB, and a new prconfig setting to control the maximum header size has been introduced.

SR-C75099 · Issue 413375

Improvements made for the stability and reliability of the stream tier

Resolved in Pega Version 8.2

Several enhancements and fixes have been made to improve the stability and reliability of the stream tier, with a focus on rolling restart.

Issue 410567: When a stream node shut down, it took up to a minute for all client nodes to recognize the event and fail over to other nodes due to large default timeouts. Timeouts have now been reduced so nodes fail over much sooner.

Issue 413375: When a stream node shuts down, other stream nodes are notified and attempt to discover the new cluster topology. Because the cluster is still in flux, this operation could take a long time, blocking all producers until a new topology was determined. The procedure has been made less resource-intensive and now performs the discovery only when required.

Issue 401326: Kafka (running as a separate process as part of the Stream services) generates log files but did not use log rotation with a maximum number of log files, so the log files would eventually fill up the disk, leading to failures of the service and of the PRPC instance running on the same disk. To resolve this, the system now limits the number of generated log files, using a maximum file size of 2 MB and a maximum of 10 files.

Issue 410280: A large number of "Processing override for entityPath" entries were seen in the Kafka logs. This has been addressed by ensuring message.timestamp.difference.max.ms is set equal to retention.ms. In addition, the system now properly compares the Kafka configuration created for the entire topic with the configuration retrieved for partitions, so the configuration is not unnecessarily updated.

Issue 414765: During stream node restart, decision hub nodes could block for 15-30 seconds when writing to a stream data set under certain conditions, causing high response times for a short period. This has been addressed by updating the stream data set logic to handle stream tier restarts properly without degrading response times.

Issue 401093: HttpMessageDecoder was throwing a TooLongFrameException, causing a 502 on HTTP stream tier requests with headers larger than 8192 bytes. The default maximum header size used by the stream data set REST service was 8 KB; in this case the HTTP header contained many cookies and other meta-information and exceeded 8 KB. To resolve this, the default maximum header size has been increased to 16 KB, and a new prconfig setting to control the maximum header size has been introduced.

SR-C61477 · Issue 418265

HTML conversion added to email IVA

Resolved in Pega Version 8.2

When emails in HTML format were sent through, HTML tags were visible within the Analysis of the email under Entities. These tags also potentially had an impact on topic detection, as the same email sent in HTML could be classified under a different topic than one sent as plain text. This issue originated because the Email IVA received text as HTML while it expected plain text without HTML tags, and was traced to the use of an IMAP setting that sends only HTML to the listener. This has now been fixed so that if any HTML is received by the Email IVA, it is converted to plain text via Jsoup APIs in the service method as the first step.
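
For reference, the conversion relies on the standard Jsoup parse/text calls; a minimal sketch (with a made-up email body) looks like this:

    import org.jsoup.Jsoup;

    public class HtmlToPlainText {
        public static void main(String[] args) {
            // Example HTML body such as an IMAP listener might hand off.
            String htmlBody =
                "<html><body><p>Hello,</p><p>Please close case <b>C-123</b>.</p></body></html>";

            // Jsoup.parse(...).text() strips the markup and returns the visible text,
            // so downstream entity extraction and topic detection see plain text only.
            String plainText = Jsoup.parse(htmlBody).text();

            System.out.println(plainText); // "Hello, Please close case C-123."
        }
    }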

SR-C79142 · Issue 417678

HTML conversion added to email IVA

Resolved in Pega Version 8.2

When emails in HTML format were sent through, HTML tags were visible within the Analysis of the email under Entities. These tags also potentially had an impact on topic detection, as the same email sent in HTML could be classified under a different topic than one sent as plain text. This issue originated because the Email IVA received text as HTML while it expected plain text without HTML tags, and was traced to the use of an IMAP setting that sends only HTML to the listener. This has now been fixed so that if any HTML is received by the Email IVA, it is converted to plain text via Jsoup APIs in the service method as the first step.

SR-D54920 · Issue 518272

Extra checks added for pasting Excel content to rich-text editor

Resolved in Pega Version 8.1.8

Copying content from Excel into the rich-text editor pasted an image of the content either instead of, or in addition to, the actual content. This was traced to the handling of the isHTML flag: the flag should be set to true when there is HTML content in the datatransfer item or while pasting images (the condition in the if statement is !isHTML). However, the sequence of the data items in the datatransfer can change depending on the browser/OS, so isHTML was sometimes not set to true before it was used in the condition while pasting images. To resolve this, changes have been made to the pasteHandler in the pzpega_ckeditor_extras file so that proper checks are made to determine the type of clipboard data being pasted.

INC-215847 · Issue 712003

Support added for trimming job log for queries

Resolved in Pega Version 8.8

In order to make heavily used job scheduler queries more performant, this update introduces the ability to trim job logs by max-count using the pzTrimLog activity.

SR-C48318 · Issue 388212

Template circumstancing works for report definition

Resolved in Pega Version 8.2

If a report definition was circumstanced using the template option, the base report definition was always shown instead of the circumstanced one. This was caused by a missed use case and has been corrected.

INC-195793 · Issue 697920

Enhanced ruleset validation for portal creation

Resolved in Pega Version 8.8

Attempting to create a new portal (web channel) from App Studio using the Creation Wizard with "Branch development preferences" enabled resulted in the error message "No valid rulesets in application preferences". Investigation showed this occurred because CPM-Portal was specified in the application as the UI class: when attempting to create the portal, the system evaluated the class against ruleset candidates, and because the class was not visible to the rulesets in the stack, the portal could not be created. This has been resolved by adding validation on the UI pages, Int, and Data classes in the application to ensure that they exist in user rulesets, and by allowing classes to exist only in branched rulesets and not the base ruleset during the check for whether the ruleset is present in the application stack.
