Resolved Issues

View the resolved issues for a specific Platform release.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

SR-D89428 · Issue 550391

Data Flow StartTime uses locale timezone

Resolved in Pega Version 8.3.3

The start time of the data flow was displayed in GMT instead of in the operator's locale timezone. This has been corrected.
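
To illustrate the class of fix involved, here is a minimal Java sketch (not Pega's internal code; the zone value is a hypothetical stand-in for the operator's locale setting):

    import java.time.Instant;
    import java.time.ZoneId;
    import java.time.format.DateTimeFormatter;

    public class StartTimeDisplay {
        public static void main(String[] args) {
            // Start time as recorded internally, in GMT/UTC.
            Instant startTime = Instant.parse("2020-03-01T14:30:00Z");

            // Hypothetical operator timezone; in practice this would come
            // from the operator's locale settings rather than a constant.
            ZoneId operatorZone = ZoneId.of("America/New_York");

            // Rendering the raw instant shows 14:30 GMT (the reported bug);
            // converting to the operator's zone shows 09:30 EST instead.
            System.out.println(startTime.atZone(operatorZone).format(
                    DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm z")));
        }
    }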

SR-D77157 · Issue 544471

DataSet preview will use date instead of datetime

Resolved in Pega Version 8.3.3

While using the DataSet preview functionality, dates appeared reduced by one day. This has been resolved by parsing the value as 'date' instead of 'datetime', which avoids timezone interactions.
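
The underlying pitfall can be reproduced with standard Java time APIs (a hedged sketch, not the actual platform code):

    import java.time.LocalDate;
    import java.time.ZoneId;
    import java.time.ZonedDateTime;

    public class DatePreviewSketch {
        public static void main(String[] args) {
            String raw = "2020-03-01"; // a date-only value from the data set

            // Parsing as a datetime implicitly attaches midnight UTC; viewing
            // that instant from a timezone behind UTC lands on the prior day.
            ZonedDateTime asDatetime = LocalDate.parse(raw)
                    .atStartOfDay(ZoneId.of("UTC"))
                    .withZoneSameInstant(ZoneId.of("America/Los_Angeles"));
            System.out.println(asDatetime.toLocalDate()); // 2020-02-29

            // Parsing as a plain date keeps the value stable in any timezone.
            System.out.println(LocalDate.parse(raw)); // 2020-03-01
        }
    }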

SR-D87709 · Issue 552397

Default context check added for saving adaptive model with locked rulesets

Resolved in Pega Version 8.3.3

When updating an adaptive model rule in Prediction Studio, the error message "No unlocked Rulesets/Versions found that are valid for this record. Unlock at least one Ruleset/Version that can contain records of this type." appeared when clicking Save. This occurred when a branch was used in the default context of the Prediction Studio settings. Although there was a workaround of using Dev Studio to Save As the adaptive model rule into the required branch, this has been resolved by adding a check for the default context and saving the model there when one is specified.

SR-D89012 · Issue 550799

DelegatedRules refresh icon made accessible

Resolved in Pega Version 8.3.3

When using accessibility features, the refresh icon in pzDelegatedRules was read by screen readers as "Link". This has been corrected by adding descriptive text for the refresh icon.

INC-218145 · Issue 715678

DSS introduced to control DSM clipboard page serialization

Resolved in Pega Version 8.7.3

When using a Kafka data set to consume messages from an external topic that had an attribute name with a special character inside a page list structure, using a JSON data transform for the mapping in a real-time dataflow resulted in the error "Exception in stage: KafkaDS; LegacyModelAspectInvokableRuleContainer.invoke-Exception encountered a :java.lang.UnsupportedOperationException."

To resolve this, a new DSS, dataset/CLASS_NAME/DATASET_NAME/JSONDataTransform/deserialization/useDSMPage, has been introduced. When the value is set to true, the process follows the previous behavior: DSM clipboard pages are generated when Kafka records are deserialized by the JSON data transform. When the value is set to false, the JSON data transform generates regular clipboard pages and converts them to DSM clipboard pages afterward, which avoids errors when the JSON data transform calls Clipboard API methods that are not implemented by DSM pages. This DSS is set per data set instance: CLASS_NAME and DATASET_NAME are placeholders that should be replaced with the data set's pyClassName and pyPurpose property values. In addition, a similar DSS, dataset/CLASS_NAME/DATASET_NAME/JSONDataTransform/serialization/useDSMPage, has been introduced for serialization.
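
As an illustration (the class and purpose values here are hypothetical), for a Kafka data set whose pyClassName is MyOrg-Data-Event and whose pyPurpose is CustomerStream, the two settings would be keyed as:

    dataset/MyOrg-Data-Event/CustomerStream/JSONDataTransform/deserialization/useDSMPage = false
    dataset/MyOrg-Data-Event/CustomerStream/JSONDataTransform/serialization/useDSMPage = false

Setting the deserialization value to false is what avoids the UnsupportedOperationException described above.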

SR-D85095 · Issue 546338

Updated COUNT logic for strategies with ssavm set to true

Resolved in Pega Version 8.3.3

An error was seen when attempting to save a strategy after setting ssavm to true, indicating an issue in the "COUNT" method of the Group By shape. Because the source field is not used and does not need to be evaluated in this case, the system has been updated to ignore the source field when the operation is COUNT.

INC-218172 · Issue 716398

Text analytics character limit set to avoid memory issues

Resolved in Pega Version 8.7.3

Utility nodes became unstable in relation to search, and email listener threads became stuck during Rule-based Text Annotation (RUTA) and natural language processing (NLP) work on incoming emails. This happened when the system experienced high memory consumption or exceeded memory limits while using text analytics. This has been resolved by setting the default maximum character limit for NLP analysis to 25,000 characters to avoid RUTA memory issues. If the provided text exceeds 25,000 characters, the system will consider only the first 25,000 characters, and a flag will be set on NLPOutcome to indicate that the text has been limited. This character limit is configurable, but if the configuration is set in excess of 25,000 characters, a warning will be shown prior to saving the change.
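
A minimal sketch of the truncation behavior described above (the names and structure are illustrative, not Pega's internal API):

    public class NlpInputLimiter {
        // Default maximum number of characters submitted to NLP analysis.
        static final int DEFAULT_MAX_CHARS = 25_000;

        // Truncates input to the limit and reports whether truncation
        // occurred, so a flag can be set on the NLP outcome.
        static String limit(String text, int maxChars, boolean[] truncated) {
            truncated[0] = text.length() > maxChars;
            return truncated[0] ? text.substring(0, maxChars) : text;
        }

        public static void main(String[] args) {
            boolean[] truncated = new boolean[1];
            String bounded = limit("x".repeat(30_000), DEFAULT_MAX_CHARS, truncated);
            System.out.println(bounded.length() + " chars, truncated=" + truncated[0]);
        }
    }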

INC-223376 · Issue 723575

JMX authentication enabled by default for embedded Kafka and Cassandra

Resolved in Pega Version 8.7.3

For on-premises clients, a potential Remote Code Execution vulnerability through the JMX interface on Cassandra and Kafka, reachable via exposed network ports, has been mitigated by enabling JMX authentication by default for embedded Kafka and Cassandra.
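
For reference, JMX authentication on a Java process is conventionally enabled through the standard JVM options below; the exact mechanism Pega applies to its embedded Kafka and Cassandra may differ:

    -Dcom.sun.management.jmxremote.authenticate=true
    -Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote.password
    -Dcom.sun.management.jmxremote.access.file=/path/to/jmxremote.access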

SR-D85558 · Issue 548285

Handling added for prolonged Heartbeat Update Queries

Resolved in Pega Version 8.3.3

After a restart, the pyFTSIncrementalIndexer queue had hundreds of thousands of entries even though it was empty prior to the restart. Investigation traced this to a job scheduler that checked all of the database connections every day at 1 EST using a list that contained some connections which did not exist. Checking those invalid connections caused other update queries to queue and wait, resulting in the heartbeat update query taking longer than its default beat. This caused a split-brain issue wherein other nodes considered the long-executing node dead and triggered a rebalance, while the node itself continued to execute partitions believing that it was healthy, which led to duplicate processing of records. To resolve this, a fail-safe has been added: while updating the heartbeat in the Service Registry, nodes will enter safe mode when the update query takes longer than the default beat.
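
A minimal sketch of the fail-safe logic (the names and beat value are hypothetical, not the actual Service Registry API):

    import java.time.Duration;
    import java.time.Instant;

    public class HeartbeatFailSafe {
        static final Duration DEFAULT_BEAT = Duration.ofSeconds(10); // hypothetical beat
        static volatile boolean safeMode = false;

        static void updateHeartbeat(Runnable heartbeatQuery) {
            Instant start = Instant.now();
            heartbeatQuery.run(); // the Service Registry heartbeat UPDATE
            Duration elapsed = Duration.between(start, Instant.now());

            // If the update outlasted the beat, peers may already consider
            // this node dead; enter safe mode instead of continuing to
            // process partitions as if healthy.
            if (elapsed.compareTo(DEFAULT_BEAT) > 0) {
                safeMode = true;
            }
        }
    }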

SR-D89643 · Issue 548290

Old Tumbling Time data in event strategies given TTL for cleanup

Resolved in Pega Version 8.3.3

Old Tumbling Time data keys in event strategies were not being cleaned up, causing Cassandra timeouts after a dataflow run had been active for several months. The longer the dataflow ran with standard compaction, the more the data was spread out across SSTables and the slower it became. This has been resolved by adding a 'time to live' value for tumbling time windows, and event strategies have been switched to use leveled compaction by default.
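
For context, generic Cassandra CQL for a per-write time to live and for leveled compaction looks like the following (illustrative keyspace and table names, not the statements Pega issues):

    INSERT INTO dsm.tumbling_window (window_key, payload)
      VALUES (?, ?) USING TTL 86400;

    ALTER TABLE dsm.tumbling_window
      WITH compaction = {'class': 'LeveledCompactionStrategy'};

With a TTL, expired window data is dropped during compaction instead of accumulating, and leveled compaction keeps reads from fanning out across many SSTables as data turns over.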
