
Resolved Issues

View the resolved issues for a specific Platform release.

Please note: beginning with the Pega Platform 8.7.4 Patch, the Resolved Issues have moved to the Support Center.

SR-D91834 · Issue 554425

Related cases of different types properly linked in Case Worker Portal

Resolved in Pega Version 8.5

After creating a case of type1 in the Case Worker portal, creating a case of type2 from the first case showed the case ID of the second case in the Related Work section as expected. However, clicking the case ID link of the second case in the Related Work section opened the second case, but the case ID of the first case was not shown in its Related Work section. The cases were correctly associated when the Case Manager portal was used instead. This was traced to the Case Worker clipboard continuing to hold the thread of the previous case ID, and has been resolved.

SR-D71475 · Issue 538721

Check added to apply values of newAssignPage.pxFormName

Resolved in Pega Version 8.5

After upgrade, trying to open existing assignments resulted in a different harness being opened. This was traced to changes in how the property newAssignPage.pxFormName was used, and has been resolved by checking whether the harness purpose is dynamic before setting pxFormName. If pxFormName is already present, the system proceeds with picking the harness from the shape.
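The guard logic of the fix can be pictured roughly as follows. This is an illustrative reconstruction in Java, not the actual engine code; the class, method, and variable names are hypothetical:

    final class HarnessFormNameGuard {
        // Hypothetical reconstruction: only derive pxFormName when the
        // harness purpose is dynamic and no form name is present yet.
        static String resolveFormName(String harnessPurpose,
                                      String existingFormName,
                                      String shapeHarness) {
            boolean isDynamic = "Dynamic".equalsIgnoreCase(harnessPurpose);
            boolean hasFormName = existingFormName != null && !existingFormName.isEmpty();
            if (isDynamic && !hasFormName) {
                return deriveDynamicFormName(); // set newAssignPage.pxFormName
            }
            // pxFormName already present: keep it and pick the harness
            // from the flow shape instead.
            return hasFormName ? existingFormName : shapeHarness;
        }

        private static String deriveDynamicFormName() {
            return "DynamicForm"; // placeholder for the derived form name
        }
    }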

SR-D63307 · Issue 542770

Unneeded class name filter removed from GetRelevantPropertiesForDataType

Resolved in Pega Version 8.5

Given one class with a set of properties and a second class that inherited from the first and contained a relevant records set, a new harness did not show the base class fields. Investigation showed that the fields present in the parent class and marked as relevant in the case were not being fetched because pzGetRelevantPropertiesForDataType applied a class name filter in addition to filtering by rule resolution. To resolve this, the class name filter has been removed: it is not required, as the report already filters by rule resolution and relevant class through a join.

SR-D61681 · Issue 532561

Handling added for different access groups updating a case

Resolved in Pega Version 8.5

When a parent flow was configured with a Wait shape to wait until any AccessChild case reached Pending-Authentication, and the "Update a case" shape was then used to update the case status of child cases using a Data Transform, the Wait shape was processed successfully but the child cases were not always updated as expected. This issue occurred when the cases were processed by users with different access groups, so the ProcessFlowDependencies agent processed the dependency. In this scenario, findPageByHandle returned an incorrect WorkPage because of the different access groups. To resolve this, pyLoadMyCasesNested Step 5 and pzProcessIndividualDepAssignment Step 13 now make additional checks to verify that the page found by the findPageByHandle API is a valid WorkPage.
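The added validation can be imagined along these lines; a hedged Java sketch in which WorkPage and the page lookup are stand-ins for the engine's clipboard structures, not the real API:

    import java.util.Map;

    final class DependencyPageCheck {
        record WorkPage(String handle, String className) {}

        // Analogue of the extra checks: accept the page found by handle
        // only if it is actually the work page for that handle.
        static WorkPage findValidWorkPage(Map<String, WorkPage> pages, String handle) {
            WorkPage page = pages.get(handle); // findPageByHandle analogue
            if (page == null
                    || !handle.equals(page.handle())
                    || !page.className().startsWith("Work-")) {
                return null; // invalid: caller must re-acquire the correct case
            }
            return page;
        }
    }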

SR-D90544 · Issue 550807

Corrected row focus for deleting in App Studio case model

Resolved in Pega Version 8.5

When attempting to delete a row of properties from the second page of the data model of a case type in App Studio, clicking the delete icon brought up a dialog box asking for confirmation of the deletion, but at the same time the screen went back to the first page of the data model instead of remaining on the second page. Because of this, clicking the OK button to confirm the deletion caused a random property from the first page to be deleted instead of the targeted row on the second page. This was due to the refresh being triggered immediately within overlay UI actions, and has been resolved by updating the first trash icon action set in the section pzExpressFieldActions to launch the local action as a modal instead of an overlay.

SR-D72141 · Issue 542663

Approved flow rule image unlocked

Resolved in Pega Version 8.5

When the Approval Required check box was enabled for rulesets (i.e., another person with access to the work queue must approve changes to the rules), a rule that was approved was unlocked and moved back to the original ruleset as expected, but the binary image associated with the flow rule remained locked, and any user other than the one who had previously checked in the rule was denied access with a "check out failed" error. This locking error has been resolved by modifying the Rule-Obj-Flow!CleanUp activity to set Param.IgnoreInstanceLockedBy = true.

SR-D90367 · Issue 556687

Cleanup enhanced for long pyEditElement names

Resolved in Pega Version 8.5

A pyEditElement error relating to decision data was seen multiple times in a stack trace. Research showed that while the utility worked as expected for decision data rules with names of fewer than 30 characters, the pyEditElement section truncated the name of longer decision data rules. This meant that decision data with the name SampleIssueandSampleGroupTwosalkdjkightntbmkblffvfvfv would be saved as SampleIssueandSampleGroupT for the pyEditElement section. Because of this, the utility failed the match and did not clean up the pyEditElement section. To resolve this, the cleanup utility has been updated to handle pyEditElement sections of decision data with longer names. Additional logging has also been added to improve debugging.
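One way to picture the updated matching, as a hedged Java sketch (the 26-character limit is inferred from the example above, and the class and method names are invented):

    final class EditElementNameMatch {
        // Inferred from the example above: the stored section name was
        // cut to a fixed-length prefix of the rule name.
        private static final int TRUNCATED_LENGTH = 26;

        // Old behaviour: exact equality, which fails once the stored
        // name has been truncated.
        static boolean matchesExact(String ruleName, String sectionName) {
            return ruleName.equals(sectionName);
        }

        // Updated behaviour: also accept a stored name that is a
        // truncated prefix of the decision data rule name.
        static boolean matchesWithTruncation(String ruleName, String sectionName) {
            return ruleName.equals(sectionName)
                    || (sectionName.length() == TRUNCATED_LENGTH
                        && ruleName.startsWith(sectionName));
        }
    }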

SR-D71621 · Issue 533296

Real time processing picks up correct datetime for Capture Response records

Resolved in Pega Version 8.5

A Realtime Data Flow for the Capture Response flow was configured with a strategy shape set to load previous decisions made within the past 7 days. Once this Realtime Data Flow was started, attempting to Capture Response for decisions made after that startup timepoint did not work. This was traced to the InteractionID being written with global properties for the datetimes, and has been resolved by making those datetime properties local so that the start and end times are not cached and the time range is calculated based on "now".
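The essence of the fix, sketched in Java under the assumption that the window is simply "the last seven days measured from evaluation time" (the class and method names are illustrative):

    import java.time.Duration;
    import java.time.Instant;

    final class DecisionWindow {
        private final Duration lookback;

        DecisionWindow(Duration lookback) {
            this.lookback = lookback;
        }

        // Evaluated per call: start and end are derived from "now"
        // rather than cached in shared properties at flow startup.
        boolean contains(Instant decisionTime) {
            Instant end = Instant.now();
            Instant start = end.minus(lookback);
            return !decisionTime.isBefore(start) && !decisionTime.isAfter(end);
        }
    }

For example, new DecisionWindow(Duration.ofDays(7)).contains(ts) checks a decision timestamp against a seven-day window computed at call time, so decisions made after startup still fall inside it.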

SR-D85558 · Issue 548286

Handling added for prolonged Heartbeat Update Queries

Resolved in Pega Version 8.5

After restart, the pyFTSIncrementalIndexer queue size had hundreds of thousands of entries even though it was empty prior to the restart. Investigation traced this to a job scheduler that checked all of the database connections every day at 1:00 EST using a list that contained some connections which did not exist. Checking those invalid connections caused other update queries to queue and wait, resulting in the heartbeat update query taking longer than its default beat. This caused a split-brain issue wherein other nodes considered the long-executing node to be dead and triggered a rebalance while the node itself continued to execute partitions, believing it was healthy, which resulted in duplicate processing of records. To resolve this, a fail safe has been added: while updating the heartbeat in the Service Registry, nodes will enter safe mode when the update query takes longer than the default beat.
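A hedged Java sketch of that fail safe, with illustrative names (the actual Service Registry internals are not shown here):

    import java.time.Duration;
    import java.time.Instant;

    final class HeartbeatGuard {
        private final Duration beatInterval;
        private volatile boolean safeMode;

        HeartbeatGuard(Duration beatInterval) {
            this.beatInterval = beatInterval;
        }

        void beat(Runnable heartbeatUpdate) {
            Instant started = Instant.now();
            heartbeatUpdate.run(); // the Service Registry update query
            if (Duration.between(started, Instant.now()).compareTo(beatInterval) > 0) {
                // The update outlived the beat: peers may already consider
                // this node dead, so stop claiming partitions until
                // membership is re-established.
                safeMode = true;
            }
        }

        boolean isInSafeMode() {
            return safeMode;
        }
    }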

SR-D66397 · Issue 530333

ADM out-of-sync corrected for multi-datacenter Cassandra cluster

Resolved in Pega Version 8.5

After setting up a multi-datacenter configuration for a Cassandra cluster consisting of six nodes in datacenter 1 and three nodes in datacenter 2, failover testing revealed a mismatch in the number of ADM models stored in each datacenter. The mismatch was observed mostly in the number of records present in the "adm_scoringmodel" and "adm_response_commit_log_date_tiered" tables.

When Cassandra nodes are down, the other nodes in the cluster store hints (records to be written) for the down nodes. When the down nodes come back online, the hints are replayed to them and the data is written. Hints are kept for 3 hours, so if a node comes back up within 3 hours the data is recovered and repairs are not required. However, gc_grace_seconds for the tables that were getting out of sync across the two datacenters was set to zero. The "gc_grace_seconds" attribute is not only the time before tombstones are removed; it also sets the TTL for records written to the system.hints table. That meant that when hints were written for the ADM tables for the down nodes, they expired immediately and were not replayed when the terminated nodes restarted and rejoined the cluster.

This has been resolved with this fix for all customers new to this release. Existing customers already on v7.3 or higher will need to complete the local change detailed below. Connect to the Cassandra cluster using cqlsh in the Pega Cassandra distribution and run:

    ALTER TABLE adm_commitlog.adm_response_commit_log_date_tiered WITH gc_grace_seconds = 86400;

This changes the setting from zero to the equivalent of one day, the same length of time that the data in the table lives for, so any hints that are written can still be used to replay data to another node while the data itself is alive. It does also mean, however, that given a constant load, a day's worth of expired ADM event data will always be present on disk, as the tombstones can now not be cleaned up for a day.
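For anyone applying the same change programmatically rather than through cqlsh, a minimal sketch with the DataStax Java driver (driver 4.x assumed; the contact point and datacenter name are placeholders, not values from this fix):

    import com.datastax.oss.driver.api.core.CqlSession;
    import java.net.InetSocketAddress;

    public class AdmGcGraceFix {
        public static void main(String[] args) {
            // Contact point and datacenter are illustrative; use your
            // cluster's actual values.
            try (CqlSession session = CqlSession.builder()
                    .addContactPoint(new InetSocketAddress("127.0.0.1", 9042))
                    .withLocalDatacenter("datacenter1")
                    .build()) {
                // Same statement as the cqlsh command above: raise
                // gc_grace_seconds from 0 to one day (86400 seconds).
                session.execute(
                    "ALTER TABLE adm_commitlog.adm_response_commit_log_date_tiered "
                    + "WITH gc_grace_seconds = 86400");
            }
        }
    }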
