
Resolved Issues

View the resolved issues for a specific Platform release.


Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

INC-136969 · Issue 585546

Section and Paragraph rule types added to Revision Manager

Resolved in Pega Version 8.5.1

An enhancement has been made to add support for section and paragraph rule types in revision management. With this change, section and paragraph rule types can be added to an overlay, included in change requests, and modified.

INC-138103 · Issue 585639

Enhancement added for node heartbeat recovery process

Resolved in Pega Version 8.5.1

Nodes were not showing up in the admin portal even though they were up and running and could be seen in the pr_sys_statusnodes table, and the exception "An exception was encountered while invoking the cluster membership listener callback" was logged. All nodes became visible again after multiple restarts. The root cause was traced to a temporary database connectivity problem: the database itself was fine according to monitoring reports, but a network problem, a slow database query, or another issue prevented Pega from establishing a connection for more than a minute. An enhancement has been made to resolve this: if a node becomes unhealthy because its service registry entry is lost after a failed heartbeat, the heartbeat will attempt to recover after 60 seconds and then retry every 30 seconds until it succeeds.
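As a rough sketch of the recovery schedule described above (the actual Pega service-registry internals are not public; the class and method names here are invented for illustration):

import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: after a failed heartbeat, wait 60 seconds, then
// retry every 30 seconds until re-registration succeeds.
public class HeartbeatRecovery {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    // Stand-in for the real call that re-inserts this node into the registry.
    private boolean registerNode() {
        return false; // placeholder
    }

    public void onHeartbeatFailure() {
        scheduler.schedule(this::retry, 60, TimeUnit.SECONDS);
    }

    private void retry() {
        if (!registerNode()) {
            // Keep trying every 30 seconds until the registry entry is restored.
            scheduler.schedule(this::retry, 30, TimeUnit.SECONDS);
        }
    }
}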

INC-136128 · Issue 580227

Data Transform added to support Kafka custom serialization/deserialization

Resolved in Pega Version 8.5.1

Previously, a Kafka DataSet could serialize or deserialize messages only as objects mapped to JSON; any other format required implementing a separate mechanism to achieve the expected results. An enhancement has now been added to the Pega Platform functionality: DataTransformSerde is available for use as a custom serialization/deserialization mechanism in a Kafka DataSet.
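DataTransformSerde itself is Pega-internal. As a loose illustration of the underlying pattern, the sketch below shows a generic Kafka Serde that delegates both directions to pluggable transform functions; the class name and the transform functions are invented for this example:

import java.nio.charset.StandardCharsets;
import java.util.function.Function;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serde;
import org.apache.kafka.common.serialization.Serializer;

// Illustrative only: a Serde whose serialize/deserialize steps are supplied
// as functions, mirroring the idea of delegating to a data transform.
public class DataTransformStyleSerde<T> implements Serde<T> {
    private final Function<T, String> toWire;   // e.g. a "serialization" transform
    private final Function<String, T> fromWire; // e.g. a "deserialization" transform

    public DataTransformStyleSerde(Function<T, String> toWire, Function<String, T> fromWire) {
        this.toWire = toWire;
        this.fromWire = fromWire;
    }

    @Override
    public Serializer<T> serializer() {
        return (topic, data) ->
                data == null ? null : toWire.apply(data).getBytes(StandardCharsets.UTF_8);
    }

    @Override
    public Deserializer<T> deserializer() {
        return (topic, bytes) ->
                bytes == null ? null : fromWire.apply(new String(bytes, StandardCharsets.UTF_8));
    }
}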

INC-126801 · Issue 575960

Improved cleanup for adm_response_meta_info

Resolved in Pega Version 8.5.1

The adm_commitlog.adm_response_meta_info column family was growing, leading to a gradual increase in CPU utilization on the ADM nodes over time. Investigation showed that compaction on the adm_response_meta_info table was not being triggered by the ADM service, and that compaction did not remove rows belonging to models that had been deleted. To resolve this, compaction of the adm_response_meta_info table has been moved from the ADM client nodes to the ADM service nodes, which will correctly trigger it on a predefined schedule. The compaction logic has also been refactored to remove rows that belong to deleted models.
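A minimal sketch of the described behavior, assuming hypothetical data-access helpers (fetchLiveModelIds, fetchRowModelIds, and deleteRowsForModel are invented names; the real ADM service internals are not public):

import java.util.Set;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Hypothetical sketch: the service node owns the schedule, and the
// compaction pass drops rows whose model no longer exists.
public class ResponseMetaInfoCompactor {
    private final ScheduledExecutorService scheduler =
            Executors.newSingleThreadScheduledExecutor();

    public void start() {
        // Run on a predefined schedule from the ADM service nodes.
        scheduler.scheduleAtFixedRate(this::compact, 1, 24, TimeUnit.HOURS);
    }

    private void compact() {
        Set<String> liveModels = fetchLiveModelIds();
        for (String modelId : fetchRowModelIds()) {
            if (!liveModels.contains(modelId)) {
                deleteRowsForModel(modelId); // remove rows for deleted models
            }
        }
    }

    // Placeholders for the real data access.
    private Set<String> fetchLiveModelIds() { return Set.of(); }
    private Set<String> fetchRowModelIds() { return Set.of(); }
    private void deleteRowsForModel(String modelId) { }
}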

INC-133728 · Issue 583124

Performance improvements for very heavy use of strategy decision logic

Resolved in Pega Version 8.5.1

Updates have been made to improve memory performance for scenarios where a single request primary page has tens of thousands of pages under an embedded page list property and the decision logic involves a strategy running on all the pages in that list. These include modifications to GetFramesSSA so that, under heavy load conditions, it creates reusable frames containing the information about all the primary pages to be iterated; if CallSsaProgram receives one of these special frames, it uses it to invoke the program, repurposing it with a new primary page at each iteration.
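GetFramesSSA and CallSsaProgram are internal engine routines, but the reuse pattern they now follow can be sketched as below; Frame and the method names are stand-ins invented for illustration:

import java.util.List;

// Illustrative sketch: instead of building one frame per embedded page
// (heavy when the list has tens of thousands of pages), a single frame is
// repurposed with a new primary page on each iteration.
class Frame {
    Object primaryPage;
    void reset(Object newPrimaryPage) { this.primaryPage = newPrimaryPage; }
}

public class StrategyRunner {
    public void runOverPageList(List<Object> pageList) {
        Frame reusable = new Frame();
        for (Object page : pageList) {
            reusable.reset(page);    // repurpose with the next primary page
            invokeProgram(reusable); // stand-in for CallSsaProgram
        }
    }

    private void invokeProgram(Frame frame) { /* run the strategy program */ }
}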

INC-138037 · Issue 586594

Strategy handling updated for very large systems using IH summary

Resolved in Pega Version 8.5.1

When a strategy in a real-time data flow used IH Summary on a system with more than 5000 groups for one eventKey, the message "Error retrieving aggregates from Cassandra KVS" intermittently appeared. Investigation showed that when the number of result rows exceeded the FETCH_SIZE (set to 5000), another read to Cassandra was required and an exception was generated. To resolve this, updates have been made so that instead of returning maps, the system returns iterators and converts them to maps on the calling thread.
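A simplified sketch of the change in shape, with Row and readAggregates as invented stand-ins for the real storage-layer types:

import java.util.Collections;
import java.util.Iterator;
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: instead of materializing all result rows into a Map on the
// Cassandra read path (which breaks once the row count exceeds FETCH_SIZE
// and a second fetch is needed), return an Iterator and build the map
// lazily on the calling thread.
public class AggregateReader {
    interface Row { String group(); Object aggregate(); }

    // Returned by the read path; each next() may transparently trigger
    // further fetches from Cassandra.
    Iterator<Row> readAggregates(String eventKey) {
        return Collections.emptyIterator(); // placeholder for the driver call
    }

    // The calling thread converts the iterator to a map, so paging happens
    // here rather than inside the storage layer.
    public Map<String, Object> aggregatesByGroup(String eventKey) {
        Map<String, Object> result = new LinkedHashMap<>();
        Iterator<Row> rows = readAggregates(eventKey);
        while (rows.hasNext()) {
            Row row = rows.next();
            result.put(row.group(), row.aggregate());
        }
        return result;
    }
}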

SR-D90367 · Issue 556687

Cleanup enhanced for long pyEditElement names

Resolved in Pega Version 8.5

A pyEditElement error relating to decision data was seen multiple times in a stack trace. Research showed that while the cleanup utility worked as expected for decision data rules with names of fewer than 30 characters, the pyEditElement section truncated the name of the decision data. This meant that decision data with the name SampleIssueandSampleGroupTwosalkdjkightntbmkblffvfvfv would be saved as SampleIssueandSampleGroupT for the pyEditElement section. Because of this, the utility failed the match and did not clean up the pyEditElement section. To resolve this, the cleanup utility has been updated to handle pyEditElement sections of decision data with longer names. Additional logging has also been added to improve debugging.
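The failure mode can be reproduced in a few lines; the truncation length here is illustrative, taken from the example name above:

// Why the cleanup match failed: the stored pyEditElement name was truncated,
// so an exact comparison against the full rule name never matched.
public class NameMatch {
    static String truncate(String name, int max) {
        return name.length() <= max ? name : name.substring(0, max);
    }

    public static void main(String[] args) {
        String ruleName = "SampleIssueandSampleGroupTwosalkdjkightntbmkblffvfvfv";
        String stored = truncate(ruleName, 26);      // "SampleIssueandSampleGroupT"
        System.out.println(stored.equals(ruleName)); // false -> cleanup skipped it
    }
}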

SR-D71621 · Issue 533296

Real time processing picks up correct datetime for Capture Response records

Resolved in Pega Version 8.5

A real-time data flow for the Capture Response flow was configured with a strategy shape set to load previous decisions within the past 7 days. Once this data flow was started, attempting to capture responses for decisions made after that startup time did not work. This was traced to the InteractionID being written with global properties for the datetimes, and it has been resolved by making those datetime properties local, so that the start and end times are not cached and the time range is calculated relative to "now".
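A minimal sketch of the before/after pattern, with invented names (the actual property definitions live in the Pega rules, not in Java code):

import java.time.Duration;
import java.time.Instant;

// Sketch: a window computed once at startup goes stale, while a window
// derived from "now" on every evaluation always covers the last 7 days.
public class DecisionWindow {
    // Problematic pattern: computed once when the data flow starts, then reused.
    static final Instant CACHED_END = Instant.now();
    static final Instant CACHED_START = CACHED_END.minus(Duration.ofDays(7));

    // Fixed pattern: recompute the range locally for each lookup.
    static Instant[] currentWindow() {
        Instant end = Instant.now();
        return new Instant[] { end.minus(Duration.ofDays(7)), end };
    }
}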

SR-D85558 · Issue 548286

Handling added for prolonged Heartbeat Update Queries

Resolved in Pega Version 8.5

After a restart, the pyFTSIncrementalIndexer queue had hundreds of thousands of entries even though it was empty prior to the restart. Investigation traced this to a job scheduler that checked all the database connections every day at 1:00 EST using a list that contained some connections which did not exist. Checking those invalid connections caused other update queries to queue and wait, resulting in the heartbeat update query taking longer than its default beat. This caused a split-brain issue in which other nodes considered the long-executing node dead and triggered a rebalance, while that node continued to execute partitions in the belief that it was healthy, leading to duplicate processing of records. To resolve this, a failsafe has been added: while updating the heartbeat in the Service Registry, a node will enter safe mode when the update query takes longer than the default beat.
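A hypothetical sketch of the failsafe, with invented names and an illustrative beat interval:

import java.time.Duration;
import java.time.Instant;

// Sketch: time the heartbeat UPDATE against the configured beat interval
// and drop into safe mode instead of letting the cluster declare the node
// dead while it keeps processing partitions.
public class HeartbeatGuard {
    private static final Duration DEFAULT_BEAT = Duration.ofSeconds(30); // illustrative value

    public void beat() {
        Instant started = Instant.now();
        runHeartbeatUpdateQuery(); // UPDATE on the service-registry row
        Duration elapsed = Duration.between(started, Instant.now());
        if (elapsed.compareTo(DEFAULT_BEAT) > 0) {
            enterSafeMode(); // stop claiming partitions until healthy again
        }
    }

    private void runHeartbeatUpdateQuery() { /* database update */ }
    private void enterSafeMode() { /* suspend partition processing */ }
}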

SR-D66397 · Issue 530333

ADM out-of-sync corrected for multi-datacenter Cassandra cluster

Resolved in Pega Version 8.5

After setting up a multi-datacenter configuration for a Cassandra cluster that consisted of six nodes in datacenter 1 and three nodes in datacenter 2, failover testing revealed a mismatch in the number of ADM models stored in each datacenter. The mismatch was observed mostly in the number of records present in the "adm_scoringmodel" and "adm_response_commit_log_date_tiered" tables.

When Cassandra nodes are down, the other nodes in the cluster store hints (records to be written) for the down nodes. When those nodes come back online, the hints are replayed to them and the data is written. Hints are kept for 3 hours, so if a node comes back up within 3 hours, the data is recovered and repairs are not required. However, gc_grace_seconds for the tables that were getting out of sync across the two datacenters was set to zero. The "gc_grace_seconds" attribute is not only the time before tombstones are removed; it also sets the TTL for records written to the system.hints table. That meant that when hints were written for the ADM tables of the down nodes, they expired immediately and were not played back when the terminated nodes restarted and rejoined the cluster.

This has been resolved with this fix for all customers new to this release. Existing customers already on v7.3 or higher will need to complete the following local change: connect to the Cassandra cluster using cqlsh in the Pega Cassandra distribution, then run

ALTER TABLE adm_commitlog.adm_response_commit_log_date_tiered WITH gc_grace_seconds = 86400;

to change the setting from zero to the equivalent of one day, the same length of time that data in the table lives. Any hints written can then still be used to replay data to another node while the data itself is alive. It also means, however, that given a constant load, a day's worth of expired ADM event data will always be present on disk, since the tombstones can now not be cleaned up for a day.
