SR-D88499 · Issue 551188
Check added to minimize Obj-Open-By-Handle error logging
Resolved in Pega Version 8.1.9
When using a Data Type with the "Automatically generate a unique ID" option, calling the Save-DataPage method with the data type's savable data page completed correctly but wrote Obj-Open-By-Handle errors to PegaRules.log. Investigation showed the exception was thrown while running the save plan from DataPageSaverImpl: at that point the system does not know whether a parameter (pyGUID in this case) is required to run the save plan, so it cannot detect a potential error in DataPageSaverImpl. Instead, the implementation calls db.open to check whether an instance exists, and that failed open is written to the log. To resolve the error logging, a check has been added: if the save-to class has an auto-generated key and the savable data page instance does not contain that key, the system calls pxCreateRecord directly. This avoids the db.open existence check, so no failure entries are logged. This partial change applies only to classes with an auto-generated key, in cases where the page intentionally omits the key in order to create a record.
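A minimal sketch of the new guard, assuming hypothetical names (canCreateDirectly, openExisting, createRecord and updateRecord are illustrative stand-ins, not the actual Pega engine API):

    public class DataPageSaverSketch {

        // If the class auto-generates its key and the page carries no key value,
        // the record cannot already exist, so the db.open existence check would
        // only produce a noisy Obj-Open-By-Handle failure in the log.
        static boolean canCreateDirectly(boolean classHasAutogenKey, String keyValueOnPage) {
            return classHasAutogenKey && (keyValueOnPage == null || keyValueOnPage.isEmpty());
        }

        static void save(boolean classHasAutogenKey, String keyValueOnPage) {
            if (canCreateDirectly(classHasAutogenKey, keyValueOnPage)) {
                createRecord();                      // stands in for the direct pxCreateRecord call
            } else if (openExisting(keyValueOnPage)) {
                updateRecord();                      // instance found: update path
            } else {
                createRecord();                      // instance missing: create path
            }
        }

        static boolean openExisting(String key) { return key != null && !key.isEmpty(); }
        static void createRecord() { System.out.println("create"); }
        static void updateRecord() { System.out.println("update"); }

        public static void main(String[] args) {
            save(true, null);      // auto-generated key absent: create directly, no open attempt
            save(false, "KEY-1");  // explicit key: the existence check still runs
        }
    }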
SR-D90687 · Issue 560430
IOException handling improved to resolve broken pipe errors
Resolved in Pega Version 8.1.9
Frequent "connection reset by peers" exceptions were being generated and broken-pipe exceptions were seen in the logs. Investigation traced the issue to unhanded IOExceptions on the server side that were a result of the client application not always closing the TCP connection gracefully. To resolve this, error handling for IOExceptions has been improved.
SR-D92707 · Issue 551693
QP exception handling improved
Resolved in Pega Version 8.1.9
There is a configurable maximum size limit of 5 MB for queue processor (QP) items. If a message exceeded 5 MB, it failed to be enqueued to Kafka but still ended up in the delayed message table, where it remained; this in turn caused issues with the pzDelayedQueueProcessorSchedule job scheduler. To resolve this, the system has been updated to better detect Kafka errors related to message size and to move any corrupted item to the broken message queue with an appropriate message attached.
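A sketch of the improved enqueue handling, assuming a hypothetical BrokenItemQueue destination; the real queue-processor internals differ:

    import java.util.concurrent.ExecutionException;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;
    import org.apache.kafka.common.errors.RecordTooLargeException;

    public class EnqueueSketch {

        interface BrokenItemQueue {
            void add(ProducerRecord<String, byte[]> record, String reason);
        }

        static void enqueue(Producer<String, byte[]> producer,
                            ProducerRecord<String, byte[]> record,
                            BrokenItemQueue brokenQueue) throws Exception {
            try {
                producer.send(record).get();  // surfaces producer/broker errors
            } catch (ExecutionException e) {
                if (e.getCause() instanceof RecordTooLargeException) {
                    // The item exceeds the configured maximum: park it in the broken
                    // queue instead of leaving it in the delayed message table.
                    brokenQueue.add(record, "Message exceeds maximum allowed size");
                } else {
                    throw e;
                }
            }
        }
    }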
INC-126129 · Issue 569666
PropertyToColumnMap made more robust
Resolved in Pega Version 8.1.9
The DF_ProcessEmails data flow was intermittently failing with a StageException error. This was traced to schema changes being propagated asynchronously by the system pulse, which appears to have caused PropertyToColumnMap to cache a stale schema. To resolve this, if the property mapping is not found on the first attempt, the system makes a second attempt to get the mapping. Additional logging has also been added for better diagnostics.
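The retry behaves roughly like the sketch below; loadMapping and the surrounding names are placeholders, not the actual PropertyToColumnMap code:

    import java.util.Map;
    import java.util.Optional;
    import java.util.function.Supplier;

    public class MappingLookupSketch {

        static Optional<String> columnFor(String property,
                                          Supplier<Map<String, String>> loadMapping) {
            String column = loadMapping.get().get(property);
            if (column == null) {
                // The cached schema may be stale because pulse propagates schema
                // changes asynchronously, so reload and try once more before failing.
                column = loadMapping.get().get(property);
                if (column == null) {
                    System.err.println("No column mapping found for " + property);
                }
            }
            return Optional.ofNullable(column);
        }
    }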
INC-128385 · Issue 564521
Behavior made consistent between SSA and legacy engines
Resolved in Pega Version 8.1.9
There was a behavioral disparity between the legacy execution engine and the SSA engine: the latter did not create a new page when the index was one above the size of the page list. This has been corrected to make the SSA behavior fully backward compatible with the legacy engine, i.e. a new blank page is added to the list when the index is one above the size of the list.
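The corrected behavior can be pictured with the following sketch (page lists are 1-based; the list and page types are placeholders for clipboard structures):

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.Supplier;

    public class PageListSketch {

        static <T> T getOrCreate(List<T> pages, int oneBasedIndex, Supplier<T> blankPage) {
            if (oneBasedIndex == pages.size() + 1) {
                pages.add(blankPage.get());  // index is one above the size: append a blank page
            } else if (oneBasedIndex < 1 || oneBasedIndex > pages.size()) {
                throw new IndexOutOfBoundsException("Index " + oneBasedIndex);
            }
            return pages.get(oneBasedIndex - 1);
        }

        public static void main(String[] args) {
            List<String> pages = new ArrayList<>(List.of("page1"));
            getOrCreate(pages, 2, () -> "blank");  // appends, matching the legacy engine
            System.out.println(pages);             // [page1, blank]
        }
    }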
INC-129222 · Issue 568530
Handling improvements for commit logs
Resolved in Pega Version 8.1.9
The ADM commit log records the number of unconsumed messages that are due to expire. In certain circumstances, the count can include unconsumed messages that are not going to expire; because these are never expired and removed, the ADM commit log table grew larger than expected, the environment ran out of disk space, and performance issues were seen. To resolve this, a new adm_commitlog.adm_responses_commit_log_date_tiered table has been created with a default_time_to_live of 24 hours, and DateTieredCompactionStrategy has been configured with max_window_size_seconds and tombstone_compaction_interval both set to 24 hours.
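Expressed through the DataStax Java driver, the new table's settings look roughly like the sketch below; the column definitions are placeholders and the exact DDL shipped with the product may differ:

    import com.datastax.oss.driver.api.core.CqlSession;

    public class CommitLogTableSketch {

        static final String CREATE_TABLE =
            "CREATE TABLE IF NOT EXISTS adm_commitlog.adm_responses_commit_log_date_tiered ("
          + "  id uuid PRIMARY KEY,"
          + "  payload blob"
          + ") WITH default_time_to_live = 86400"                        // 24 hours, in seconds
          + "  AND compaction = {"
          + "    'class': 'DateTieredCompactionStrategy',"
          + "    'max_window_size_seconds': '86400',"
          + "    'tombstone_compaction_interval': '86400'"
          + "  }";

        public static void main(String[] args) {
            try (CqlSession session = CqlSession.builder().build()) {
                session.execute(CREATE_TABLE);
            }
        }
    }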
INC-132976 · Issue 580685
Performance improvements for Test Strategy data flow
Resolved in Pega Version 8.1.9
In the Test Strategy panel under Single case -> "Settings", selecting the "Data flow" option and choosing the CustomerData data flow took an excessive amount of time to run on a system with an extremely large database. To improve performance, two areas have been addressed: 1) the default behavior for record key suggestions in the test panel has been modified to collect only the ID, as the additional data is not necessary at that time; 2) a DSS has been added that opts out of reading and collecting the customer IDs in order to minimize data stored on the clipboard.
INC-138037 · Issue 586593
Strategy handling updated for very large systems using IH summary
Resolved in Pega Version 8.1.9
When a Strategy in a Real-time data flow used IH Summary on a system with more than 5000 groups for one eventKey, the message "Error retrieving aggregates from Cassandra KVS" intermittently appeared. Investigation showed that if the number of result rows was greater than FETCH_SIZE (set to 5000), another read from Cassandra was required, and that follow-up read generated the exception. To resolve this, the system has been updated so that instead of returning maps, it returns iterators and converts them to maps on the calling thread.
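The shape of the change can be sketched as follows; the row and aggregate types are placeholders rather than the real decisioning classes:

    import java.util.AbstractMap.SimpleEntry;
    import java.util.Iterator;
    import java.util.Map;
    import java.util.function.Function;

    public class LazyAggregateSketch {

        // Instead of materializing every result row into a map up front (which forces
        // the data layer to page past FETCH_SIZE and can fail there), hand back an
        // iterator and build each entry lazily on the calling thread.
        static <R, K, V> Iterator<Map.Entry<K, V>> lazyAggregates(
                Iterator<R> rows, Function<R, K> keyOf, Function<R, V> valueOf) {
            return new Iterator<>() {
                @Override public boolean hasNext() { return rows.hasNext(); }
                @Override public Map.Entry<K, V> next() {
                    R row = rows.next();  // any further Cassandra paging happens here
                    return new SimpleEntry<>(keyOf.apply(row), valueOf.apply(row));
                }
            };
        }
    }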
SR-D92734 · Issue 553412
Simulation can take Data flow type as destination
Resolved in Pega Version 8.1.9
Support has been added for using a data flow as a simulation target and for using a data transform as simulation input.
INC-125803 · Issue 568661
Cross-site scripting updated on activities
Resolved in Pega Version 8.1.9
Additional cross-site scripting (XSS) protection work has been done on activities.