Resolved Issues

View the resolved issues for a specific Platform release.

SR-D85095 · Issue 546339

Updated COUNT logic for strategies with ssavm set to true

Resolved in Pega Version 8.2.7

An error was seen when attempting to save a strategy after setting ssavm to true, indicating an issue in the “COUNT” method of the Group By shape. Since the source field is not used and does not need to be evaluated for a count, the system has been updated to ignore the source field when the operation is COUNT.
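
As a rough illustration of the intended behavior (the class, enum, and field names below are hypothetical, not Pega's implementation), a Group By aggregation can return the row count for COUNT and only touch the source field for the other operations:

    enum AggregationType { COUNT, SUM, AVERAGE }

    final class GroupByAggregator {
        double aggregate(AggregationType op,
                         java.util.List<java.util.Map<String, Double>> rows,
                         String sourceField) {
            if (op == AggregationType.COUNT) {
                return rows.size();                          // source field intentionally ignored for COUNT
            }
            double sum = 0;
            for (java.util.Map<String, Double> row : rows) {
                sum += row.getOrDefault(sourceField, 0.0);   // only SUM/AVERAGE read the source field
            }
            return op == AggregationType.SUM ? sum : sum / Math.max(1, rows.size());
        }
    }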

SR-D87709 · Issue 552398

Default context check added for saving adaptive model with locked rulesets

Resolved in Pega Version 8.2.7

When updating an adaptive model rule in Prediction Studio, the error message "No unlocked Rulesets/Versions found that are valid for this record. Unlock at least one Ruleset/Version that can contain records of this type." appeared when clicking Save. This occurred when a branch was used in the default context of the Prediction Studio settings. Although there was a workaround of using Dev Studio to Save As the adaptive model rule to the required branch, this has been resolved by adding a check for the default context and saving the model there when one is specified.
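
The save logic can be sketched roughly as follows; the names are hypothetical and this is not the Prediction Studio API, only an outline of the check described above:

    import java.util.List;

    public class SaveTargetSketch {
        // Prefer the branch named in the default context; otherwise fall back to an unlocked ruleset version.
        static String resolveSaveTarget(String defaultContextBranch, List<String> unlockedRulesetVersions) {
            if (defaultContextBranch != null && !defaultContextBranch.isEmpty()) {
                return defaultContextBranch;                 // default context wins when one is specified
            }
            if (unlockedRulesetVersions.isEmpty()) {
                throw new IllegalStateException(
                    "No unlocked Rulesets/Versions found that are valid for this record.");
            }
            return unlockedRulesetVersions.get(0);
        }
    }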

SR-D89643 · Issue 548291

Old Tumbling Time data in event strategies given TTL for cleanup

Resolved in Pega Version 8.2.7

Old Tumbling Time data keys in event strategies were not being cleaned up, causing Cassandra timeouts after the dataflow run had been running for several months. The longer the dataflow ran using standard compaction, the more the data was potentially spread out across SSTables and the slower it became. This has been resolved by adding a 'time to live' value for tumbling time windows, and event strategies have been switched to use leveled compaction by default.
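
The effect of the change can be approximated with plain Cassandra schema settings; the keyspace, table, and column names below are placeholders, and the 7-day TTL is only an assumed value:

    import com.datastax.oss.driver.api.core.CqlSession;

    public class TumblingWindowSchemaSketch {
        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                session.execute(
                    "CREATE TABLE IF NOT EXISTS event_strategy.tumbling_window_state ("
                  + "  strategy_id text, window_start timestamp, payload blob,"
                  + "  PRIMARY KEY (strategy_id, window_start))"
                  + " WITH compaction = {'class': 'LeveledCompactionStrategy'}"   // leveled compaction by default
                  + " AND default_time_to_live = 604800");                        // old window keys expire via TTL
            }
        }
    }

With a default TTL, expired tumbling-window keys are dropped during compaction instead of accumulating across SSTables, and leveled compaction keeps reads from fanning out over many files.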

SR-D90367 · Issue 556686

Cleanup enhanced for long pyEditElement names

Resolved in Pega Version 8.2.7

A pyEditElement error relating to decision data was seen multiple times in a stack trace. Research showed that while the utility worked as expected for decision data rules with names of fewer than 30 characters, the pyEditElement section truncated the name of the decision data. This meant that decision data with the name SampleIssueandSampleGroupTwosalkdjkightntbmkblffvfvfv would be saved as SampleIssueandSampleGroupT for the pyEditElement section. Because of this, the utility failed the match and did not clean up the pyEditElement section. To resolve this, the cleanup utility has been updated to handle pyEditElement sections of decision data with longer names. Additional logging has also been added to improve debugging.
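
The mismatch can be illustrated with a small sketch (the 30-character limit and all names below are assumptions for illustration, not the utility's actual code): comparing the full decision data name against a truncated stored name fails, so the match has to account for the truncated form as well.

    public class EditElementNameMatchSketch {
        private static final int LEGACY_LIMIT = 30;          // assumed legacy truncation length

        static boolean matches(String decisionDataName, String storedEditElementName) {
            if (decisionDataName.equals(storedEditElementName)) {
                return true;                                 // short names match directly
            }
            // Long names: also compare the truncated form that the section may have stored.
            String truncated = decisionDataName.length() > LEGACY_LIMIT
                    ? decisionDataName.substring(0, LEGACY_LIMIT)
                    : decisionDataName;
            return truncated.equals(storedEditElementName);
        }
    }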

SR-D90579 · Issue 550769

Real Time data flow exception during shutdown corrected

Resolved in Pega Version 8.2.7

A Real Time DataFlow run was failing with a java.util.concurrent.RejectedExecutionException. This occurred when the node was shutting down while picking up a partition, and was due to the life cycle events not being able to update the partition status. To resolve this, a check has been added that prevents the exception from occurring by evaluating whether the Executor has been shut down before distributing messages.
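
A minimal sketch of such a guard, assuming a standard java.util.concurrent executor (the class below is illustrative, not the actual data flow code):

    import java.util.concurrent.ExecutorService;

    public class PartitionDistributorSketch {
        private final ExecutorService executor;

        public PartitionDistributorSketch(ExecutorService executor) {
            this.executor = executor;
        }

        void distribute(Runnable partitionStatusUpdate) {
            if (executor.isShutdown()) {
                return;                                      // node is shutting down; skip instead of throwing
            }
            executor.submit(partitionStatusUpdate);
        }
    }

A check like this narrows the shutdown race rather than removing it entirely, so catching RejectedExecutionException around the submit call remains a sensible backstop.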

SR-D96836 · Issue 555748

Refinements made to MarkerNode memory use

Resolved in Pega Version 8.2.7

Significant memory usage could be observed when a data join in a strategy was missing its join conditions. For example, when the primary source in the strategy has 300 or more propositions, a join without a join condition performs a Cartesian product that can return 400 or more records, which may cause performance degradation. To guard against this, an update has been made that seeks to prevent repetitive markers from being accumulated under a MarkerNode, while trying not to rely on any particular implementation of equality and hashCode in the individual marker implementations.
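
One way to deduplicate without depending on equals() and hashCode() is an identity-based set, sketched below with hypothetical names; the same marker instance is then stored only once under the node:

    import java.util.Collections;
    import java.util.IdentityHashMap;
    import java.util.Set;

    public class MarkerNodeSketch<M> {
        // Reference-identity set: does not call equals()/hashCode() on the marker classes.
        private final Set<M> markers = Collections.newSetFromMap(new IdentityHashMap<>());

        void addMarker(M marker) {
            markers.add(marker);                             // repeated instances no longer accumulate
        }

        int markerCount() {
            return markers.size();
        }
    }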

SR-B52067 · Issue 311178

Handling improved for extremely large data flow run statistics page

Resolved in Pega Version 7.3.1

A data flow monitoring page with 1000+ data flow shapes in total was hanging while loading the statistics. This happened because the component statistics table had no pagination enabled, and displaying all 1000+ shapes in one screen caused the browser to hang. This has been remedied with:
- Pagination on the component statistics table in the data flow monitoring screen
- Filtering on the component statistics table, allowing monitoring of only relevant data flow components

SR-B69080 · Issue 318226

Local PegaAPI instance added for Monte-Carlo dataset

Resolved in Pega Version 7.3.1

A dataflow running from a Monte-Carlo data set was failing with a stale thread exception. This was caused by the PegaAPI variable 'pega' not being defined as a local variable in the Monte-Carlo data set generator, and has been resolved by creating a local PegaAPI instance.
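
The general pattern looks roughly like the sketch below; the EngineApi type and method names are stand-ins invented for illustration, not the Pega engine API:

    public class MonteCarloGeneratorSketch {
        // Before: a single shared field, reused across data flow threads and prone to stale-thread errors.
        // private static EngineApi pega = EngineApi.current();

        void generateRecord() {
            EngineApi pega = EngineApi.current();            // after: resolved locally per invocation
            pega.emit("sample-record");
        }

        // Minimal stand-in type so the sketch is self-contained.
        interface EngineApi {
            static EngineApi current() { return record -> { }; }
            void emit(String record);
        }
    }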

SR-B53147 · Issue 310299

HDFS configuration updated to support KMS server

Resolved in Pega Version 7.3.1

A field has been added to the HDFS configuration to allow configuration of a KMS server. This setting is propagated to all places where the Hadoop configuration is used, including HDFS and Parquet data sets.
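
For reference, a plain Hadoop client configuration points at a KMS endpoint along these lines (host, port, and the exact property names depend on the Hadoop version; the values below are placeholders):

    import org.apache.hadoop.conf.Configuration;

    public class KmsConfigSketch {
        public static Configuration withKms() {
            Configuration conf = new Configuration();
            conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
            // Key provider URI used for HDFS encryption zones.
            conf.set("hadoop.security.key.provider.path", "kms://http@kms.example.com:9600/kms");
            conf.set("dfs.encryption.key.provider.uri", "kms://http@kms.example.com:9600/kms");
            return conf;
        }
    }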

SR-B74689 · Issue 324632

Marketing data flow run made more robust

Resolved in Pega Version 7.3.1

After upgrade, it was observed that Pega Marketing Campaigns were failing when there were no customers in the Audience configured on the Campaign, generating the error message "The run failed, because it exceeds the maximum number of failed records, which is currently set to 0". The cause was executing a distributed data flow with a database as the primary source against an empty table: a table without any partitions caused the run to fail. The database data set has been updated to differentiate the case where no partition is available from the case where there is a single partition for every record. The DB data set now returns 'all' records when there is no partition key defined, and the data flow handles missing partition values more robustly.
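
The partition handling can be sketched as follows (the table and column names are hypothetical, and this is only an outline of the distinction described above, not the data set's actual code):

    import java.util.Collections;
    import java.util.List;
    import java.util.stream.Collectors;

    public class PartitionPlannerSketch {
        static List<String> planSlices(List<String> partitionKeys) {
            if (partitionKeys == null || partitionKeys.isEmpty()) {
                // No partition key defined (or an empty table): read 'all' records in a single slice
                // instead of producing zero slices and failing the run.
                return Collections.singletonList("SELECT * FROM customers");
            }
            return partitionKeys.stream()
                    .map(key -> "SELECT * FROM customers WHERE partition_key = '" + key + "'")
                    .collect(Collectors.toList());
        }
    }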
