
Resolved Issues

View the resolved issues for a specific Platform release.


Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

SR-D96836 · Issue 555750

Refinements made to MarkerNode memory use

Resolved in Pega Version 8.4.2

Significant memory usage can be observed when data joins in strategies are missing their join conditions. For example, when the primary source in the strategy has 300 or more propositions, a join without a join condition performs a cartesian product and can return 400 or more records, which may cause performance degradation. To guard against this, an update has been made that prevents repetitive markers from accumulating under a MarkerNode without relying on the equality and hashCode implementations of individual markers.
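
As an illustration of why a missing join condition is costly, the sketch below (plain Java, not Pega engine code; the row counts and key names are invented for the example) contrasts an unconditioned cartesian product with a keyed join:

    import java.util.ArrayList;
    import java.util.List;

    public class JoinGrowthSketch {
        record Row(String key, String value) {}

        // Without a join condition every primary row pairs with every secondary row,
        // so the result grows as primary.size() * secondary.size().
        static List<Row[]> cartesian(List<Row> primary, List<Row> secondary) {
            List<Row[]> out = new ArrayList<>();
            for (Row p : primary) {
                for (Row s : secondary) {
                    out.add(new Row[] {p, s});
                }
            }
            return out;
        }

        // With a join condition only matching keys pair up, keeping the result small.
        static List<Row[]> keyedJoin(List<Row> primary, List<Row> secondary) {
            List<Row[]> out = new ArrayList<>();
            for (Row p : primary) {
                for (Row s : secondary) {
                    if (p.key().equals(s.key())) {
                        out.add(new Row[] {p, s});
                    }
                }
            }
            return out;
        }

        public static void main(String[] args) {
            List<Row> primary = new ArrayList<>();
            List<Row> secondary = new ArrayList<>();
            for (int i = 0; i < 300; i++) primary.add(new Row("K" + i, "proposition-" + i));
            for (int i = 0; i < 400; i++) secondary.add(new Row("K" + (i % 300), "data-" + i));

            System.out.println("cartesian rows: " + cartesian(primary, secondary).size()); // 120000
            System.out.println("keyed rows:     " + keyedJoin(primary, secondary).size()); // 400
        }
    }

With 300 primary and 400 secondary rows, the unconditioned join materializes 120,000 pairs to hold in memory, while the keyed join returns only the 400 matches.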

SR-D96847 · Issue 556988

Unneeded MBean startup warning removed

Resolved in Pega Version 8.4.2

After upgrade, Catalina logs contained the warning "WARN com.pega.dsm.dnode.api.prpc.service.monitoring.MBeanDSMService - Service DataFlow doesn't implement Monitoring operation". This was due to a flaw in the way the data flow DSM service was initialized in client mode. The exception itself was not a problem and did not impact functionality, but the client initialization has been modified to remove the warning.

INC-126556 · Issue 564032

Declaratives disabled during startup

Resolved in Pega Version 8.4.2

Declaratives firing before the engine is fully up can lead to null pointer errors. To avoid this condition, declaratives are disabled during startup so that unnecessary operations are avoided and the system can start faster.

INC-126796 · Issue 561533

Modifications to getFunctionalServiceNodes process

Resolved in Pega Version 8.2.7

The count of Interaction History write-related threads was increasing rapidly, and a stack trace showed threads "waiting on condition" with "java.lang.Thread.State: WAITING (parking)". Investigation showed that this was due to getFunctionalServiceNodes using Hazelcast to determine node status by making a service request on an installation with a very large number of nodes, causing thread locking. To resolve this, the implementation has been updated to avoid calling getFunctionalServiceNodes on save of Interaction History, instead using Cassandra and only calling getFunctionalServiceNodes on the master node rather than on all nodes.

INC-128385 · Issue 564519

Behavior made consistent between SSA and legacy engines

Resolved in Pega Version 8.2.7

There was a behavioral disparity between the legacy execution engine and the SSA engine where the latter was not creating a new page when the index was one above the size of the page list. This has now been corrected in order to make the SSA behavior fully backward compatible with the legacy engine, i.e. a new blank page will be added to the list if the index is one above the size of the list.
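
To picture the now-consistent behavior, here is a minimal sketch (hypothetical Java, not the SSA or legacy engine code; the class and property names are invented) of a 1-based page list that appends a blank page when the requested index is one above its current size:

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    public class PageListSketch {
        private final List<Map<String, Object>> pages = new ArrayList<>();

        // Page lists are 1-based; asking for size() + 1 appends a new blank page,
        // mirroring the legacy behavior the SSA engine now follows.
        Map<String, Object> getPage(int index) {
            if (index == pages.size() + 1) {
                pages.add(new HashMap<>());
            } else if (index < 1 || index > pages.size()) {
                throw new IndexOutOfBoundsException("index " + index);
            }
            return pages.get(index - 1);
        }

        public static void main(String[] args) {
            PageListSketch list = new PageListSketch();
            list.getPage(1).put("Name", "first");   // appends page 1
            list.getPage(2).put("Name", "second");  // index is one above the size: new blank page
            System.out.println("pages: " + list.pages.size()); // 2
        }
    }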

INC-128898 · Issue 564690

Updated precondition checks for Tumbling Time data in event strategies

Resolved in Pega Version 8.2.7

Tumbling Time data keys in event strategies were not being properly executed when certain window configurations were used. This has been resolved by turning off the optimization of Cassandra reads for small windows when the window size is not known upfront but is set dynamically (size set by property). In addition, an issue with Cassandra timeouts after the dataflow run had been running for several months has been resolved by adding a 'time to live' value for tumbling time windows, and event strategies have been switched to use leveled compaction by default.

SR-D82727 · Issue 547723

Improved management for table pr_log_dataflow_events

Resolved in Pega Version 8.2.7

The Lifecycle event table was sometimes growing too large. The additional strain of database transaction volume caused poor performance on the Dataflow tier and led to cluster instability and time-consuming cluster restarts. Due to problems in one of the Pulse tasks, the Pulse thread was not processing single-case metrics properly, causing the unbounded single-case queue to grow. This has been addressed by switching to a fixed queue size, which is configurable with the DSS dnode/single_case_queue_size. The default value of the DSS is 4000, and a system restart is required if it is changed. An error will be logged every 1000 queue misses, and metrics will be dropped if the queue is full. In addition, the Pulse task frequency has been improved and managed to prevent interference with other Pulse tasks, and the task will be triggered only if a run is system-paused for a long interval. Rebalances now have a failsafe if something unexpected happens during the pausing of the run, and if the cluster becomes unstable, the lifecycle event logs may be disabled with dataflow/run/events/persist.
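
A minimal sketch of the fixed-size queue behavior described above, in plain Java; the element type, logger, and class name are assumptions, while the capacity of 4000 and the 1000-miss logging interval come from the note:

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.atomic.AtomicLong;
    import java.util.logging.Logger;

    public class SingleCaseMetricQueueSketch {
        private static final Logger LOG = Logger.getLogger("SingleCaseMetricQueueSketch");

        // The fixed capacity stands in for the DSS dnode/single_case_queue_size (default 4000).
        private final BlockingQueue<String> queue = new ArrayBlockingQueue<>(4000);
        private final AtomicLong misses = new AtomicLong();

        // Metrics are dropped when the queue is full; an error is logged every 1000 misses.
        void offerMetric(String metric) {
            if (!queue.offer(metric)) {
                long missed = misses.incrementAndGet();
                if (missed % 1000 == 0) {
                    LOG.severe("Dropped " + missed + " single-case metrics: queue is full");
                }
            }
        }

        public static void main(String[] args) {
            SingleCaseMetricQueueSketch q = new SingleCaseMetricQueueSketch();
            for (int i = 0; i < 10_000; i++) {
                q.offerMetric("metric-" + i);
            }
            System.out.println("queued=" + q.queue.size() + " dropped=" + q.misses.get());
        }
    }

A bounded queue trades completeness of the single-case metrics for predictable memory use, which is the design choice the fix describes.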

SR-D85095 · Issue 546339

Updated COUNT logic for strategies with ssavm set to true

Resolved in Pega Version 8.2.7

An error was seen when attempting to save a strategy after setting ssavm to true, indicating an issue in the “COUNT” method of the Group By shape. Since the source field is not used and does not need to be evaluated here, the system has been updated to ignore the source field when the operation is COUNT.
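
For context, a count aggregation never needs to read the aggregated source field, as in this hypothetical Java example (not Pega strategy code; the Proposition type and group values are invented):

    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class GroupByCountSketch {
        record Proposition(String group, String sourceField) {}

        public static void main(String[] args) {
            List<Proposition> rows = List.of(
                new Proposition("Cards", null),      // the source field may even be absent
                new Proposition("Cards", "x"),
                new Proposition("Loans", "y"));

            // COUNT only looks at group membership; Proposition::sourceField is never evaluated.
            Map<String, Long> counts = rows.stream()
                .collect(Collectors.groupingBy(Proposition::group, Collectors.counting()));

            System.out.println(counts); // Cards=2, Loans=1 (map order may vary)
        }
    }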

SR-D87709 · Issue 552398

Default context check added for saving adaptive model with locked rulesets

Resolved in Pega Version 8.2.7

When updating an adaptive model rule in Prediction Studio, the error message "No unlocked Rulesets/Versions found that are valid for this record. Unlock at least one Ruleset/Version that can contain records of this type." appeared when clicking Save. This occurred when a branch was used in the default context of the Prediction Studio settings. Although there was a workaround of using Dev Studio to Save As the adaptive model rule to the required branch, this has been resolved by adding a check for the default context and saving the model there if one is specified.

SR-D89643 · Issue 548291

Old Tumbling Time data in event strategies given TTL for cleanup

Resolved in Pega Version 8.2.7

Old Tumbling Time data keys in event strategies were not being cleaned up, causing Cassandra timeouts after the dataflow run had been running for several months. The longer the dataflow ran using standard compaction, the more the data was potentially spread out across SSTables and the slower it became. This has been resolved by adding a 'time to live' value for tumbling time windows, and event strategies have been switched to use leveled compaction by default.
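
The two Cassandra features involved, a default time to live and leveled compaction, are standard table options. The sketch below uses the DataStax Java driver; the keyspace, table name, columns, and 7-day TTL are placeholders rather than the DDL Pega generates:

    import com.datastax.oss.driver.api.core.CqlSession;

    public class TumblingWindowTableSketch {
        public static void main(String[] args) {
            // Connects to a local Cassandra node using the driver defaults.
            try (CqlSession session = CqlSession.builder().build()) {
                // Rows expire automatically after 7 days, and leveled compaction keeps each
                // partition in fewer SSTables, so old window data no longer piles up.
                session.execute(
                    "CREATE TABLE IF NOT EXISTS demo.tumbling_window_events ("
                  + "  window_key text,"
                  + "  event_time timestamp,"
                  + "  payload text,"
                  + "  PRIMARY KEY (window_key, event_time))"
                  + " WITH default_time_to_live = 604800"
                  + " AND compaction = {'class': 'LeveledCompactionStrategy'}");
            }
        }
    }

With a default time to live, expired rows are purged during compaction, and leveled compaction bounds how many SSTables a read has to touch, which keeps read latency stable as a long-running dataflow ages.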
