Resolved Issues

View the resolved issues for a specific Platform release.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

SR-D36591 · Issue 507537

Kafka producer message size made configurable

Resolved in Pega Version 8.2.4

The Kafka producer was using a default maximum message size of 1.3 MB while the Kafka broker maximum message size was set to 5 MB. This caused large processing queues to eventually throw errors indicating a scheduler.JobExecutionException related to "There was a problem saving an instance of class System-Message-QueueProcessor-DelayedItem". To correct this, a producer message size configuration option has been added, along with additional logging for the KafkaSaveOperation.
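
For reference, the standard Kafka client controls this producer-side limit through the max.request.size property. The sketch below is a minimal, generic illustration of raising that limit to match a 5 MB broker setting; the broker address and the name of the Pega-specific configuration option are assumptions, as they are not given in this note.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class LargeMessageProducer {
        public static KafkaProducer<String, String> build() {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Raise the producer-side limit so large queue-processor payloads are not
            // rejected before they ever reach a broker that already allows 5 MB messages.
            props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 5 * 1024 * 1024);
            return new KafkaProducer<>(props);
        }
    }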

SR-D41730 · Issue 508143

TTL value correctly passed for Adaptive Event store

Resolved in Pega Version 8.2.4

The ADM table was growing due to the Time to Live (TTL) for entries in the Adaptive Event Store not being propagated to clean them out. This was traced to the TTL field on the data flow not being checked, causing the TTL value to be supplied as zero so there was no expiration. This has been corrected.

SR-D37945 · Issue 506799

Server node cache refresh will use remote execution timeout

Resolved in Pega Version 8.2.4

A campaign was failing due to VBD remote ping timeout with a stacktrace that indicated a StageException. Investigation showed that when the cluster is heavily loaded, calls to the remote execution API could time out. If this occurred when the VBD client was refreshing its cache of VBD server nodes, then the insert failed and the error was propagated up the calling data flow. To resolve this, the system will use the remote execution timeout when refreshing node cache, extend the timeout to 60 seconds, and ensure timeouts are retried during inserts.

SR-D38415 · Issue 507995

Resolved Transfer to Queue duplicate assignments

Resolved in Pega Version 8.2.4

The Transfer to a Queue option was creating duplicate assignments after a chat was accepted: once the chat was accepted, an instance was created and maintained in both the Assign-Workbasket and Assign-Worklist tables. This happened after adding the "Transfers" queue to an Agent; if that queue was not added, the transfer-to-a-queue option gave an error to the Agent receiving or accepting the chat. The Work-.ReassignDefaults activity is an extension point that had been customized in the Pega-DecisionManager ruleset for a specific revision management use case. That customization is no longer required, and it has been removed to resolve this issue.

SR-D40833 · Issue 506792

Response Strategy works for predictive models

Resolved in Pega Version 8.2.4

After implementing the response strategy for the predictive model and capturing the response, "Refresh Data" on the Monitor tab of the predictive model still showed no responses captured. As a result, it was not possible to analyze the performance of the model or use it for reporting. This was traced to recent work that changed response processing to reference the factory against a new table; because entries in this table were not created for predictive models, responses were not processed. This has been resolved by adding predictive models to the event processor and ensuring the functions use the new factory initializer.

SR-D31103 · Issue 502978

VBD insert process updated for better retry handling

Resolved in Pega Version 8.2.4

When restarting Data Flow and VBD nodes, the VBD client can encounter exceptions indicating that components in the stack are temporarily unavailable. In most cases the VBD client retried, but in some cases it did not and data flow failures occurred. To resolve this, the code has been updated to remove logging of the VBD cluster status during a retry, and the retry duration has been made configurable.

SR-D32719 · Issue 505261

Compilation error resolved for editing complex offers

Resolved in Pega Version 8.2.4

The following error was logged while trying to edit offers: "com.pega.pegarules.pub.generator.FirstUseAssemblerException: Failed to compile generated Java, com.pegarules.generated.html_section.ra_stream_pyeditelement." Research showed the code for the static initializer was exceeding the 65535-byte limit, causing the Java compilation of the pyEditElement section to fail for a decision data rule. This was due to the pyEditElement section for a decision data rule being circumstanced: the assembler architecture for circumstanced section rules generated and added code for all versions of a circumstanced section. Hence, if the section referenced many properties and existed in multiple rulesets or ruleset versions, the Java compilation for the section rule failed. To correct this, the integration of the pyEditElement section with decision data rules has been revised and enhanced, and a function, along with a utility to call it, has been provided to clean up the old, redundant pyEditElement sections for a given decision data rule.

SR-D39956 · Issue 511637

Corrected method IF use with shortcut function

Resolved in Pega Version 8.2.5

After upgrade, the IF method was not working as expected when used in an expression such as "@if(.totalorders_120days>0,(.remakeorders_120days/.totalorders_120days)<0.3,false)". This was caused by a missed use case covering an exception-generating function combined with a shortcut (short-circuit) function such as a ternary, and, or or, and has been resolved.
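
As a conceptual illustration only (plain Java, not the Pega expression engine), the expression above relies on the guard condition short-circuiting so the division is never evaluated when the total is zero; the class and method names below are hypothetical.

    public class ShortCircuitGuard {
        // Mirrors the @if guard: the division only runs when the denominator
        // check passes, so the ratio is never computed against a zero total.
        static boolean remakeRateBelowThreshold(double remakeOrders, double totalOrders) {
            return totalOrders > 0 && (remakeOrders / totalOrders) < 0.3;
        }

        public static void main(String[] args) {
            System.out.println(remakeRateBelowThreshold(10, 120)); // true  (ratio ~0.083)
            System.out.println(remakeRateBelowThreshold(0, 0));    // false (division skipped)
        }
    }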

SR-D41207 · Issue 512087

Failover strategy added to chat routing to keep event processor running

Resolved in Pega Version 8.2.5

Chats were becoming stuck in the queue and end users were not able to connect with a customer service representative. An excessive number of queued items were observed in a Queue Processor named "EventProcessor". This was traced to the offset referenced by "Browse from the offset" having been removed by a retention policy, which caused "Browse from the end of the stream" to be used instead even though the browse should have started from the earliest known offset. To resolve this, the stream producer is now cached per topic, and the stream consumer falls back to an "earliest" strategy when the requested offset isn't found, so the event queue is handled in a timely manner.
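
In standard Kafka client terms, this kind of fallback corresponds to the auto.offset.reset consumer property. The sketch below only illustrates the strategy described, not Pega's internal stream configuration; the broker address and group id are assumptions.

    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    public class EarliestFallbackConsumer {
        public static KafkaConsumer<String, String> build() {
            Properties props = new Properties();
            props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
            props.put(ConsumerConfig.GROUP_ID_CONFIG, "event-processor");
            props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
            // If the requested offset no longer exists (e.g. removed by retention),
            // resume from the earliest available offset instead of the end of the stream.
            props.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
            return new KafkaConsumer<>(props);
        }
    }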

SR-D42662 · Issue 516870

Support added for auto restart of system paused nodes

Resolved in Pega Version 8.2.5

After the system paused a run, nodes had to be restarted manually. Investigation showed that a node had dropped out of the Hazelcast cluster due to instability, and there was no support for an automatic restart under this condition. This has been resolved by adding a pulse task that resumes runs stuck in system pause.
