INC-142084 · Issue 599876
Support added for expressions in strategy scorecards
Resolved in Pega Version 8.2.8
When invoking REST against a dataflow that had a strategy containing a scorecard that used an expression, and the "Include model explanations" option was enabled in the Strategy configuration, the system failed with the error "PropertyValueInvalid .pxMaxScore Cannot cast the value (unknown) to double". This was traced to the scorecard explanations failing during serialization when an expression was used, and has been corrected.
INC-143927 · Issue 599491
Oracle database performance improvements
Resolved in Pega Version 8.2.8
When the IH Summary was enabled and materialized on an ADM model, updating the ADM model was very slow on large sites. This has been resolved by adding several performance improvements for working with Oracle databases, including Oracle pre/post processing steps.
SR-D93777 · Issue 565692
Handling added for Oracle Aggregate IH Summaries
Resolved in Pega Version 8.2.8
When using (non-materialized) IH Summaries to aggregate IH data, the data returned by the IH summary did not include all the expected records. If the same criteria were executed on the database via SQL, or by using a strategy to process raw IH data, the results were as expected. This was due to a difference in handling between Oracle and Postgres that caused an ORDER BY clause not to be generated in the query: the Postgres column name is lower case, while in Oracle it is upper case. This has been resolved by updating the system to get the column name from the propertytocolumn map so that IH records are returned in the correct order by the Browse By Keys operation.
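For illustration, a minimal sketch (hypothetical class, property, and column names, not Pega's actual code) of resolving the ORDER BY column from a property-to-column map rather than from database-reported casing:

    import java.util.LinkedHashMap;
    import java.util.Map;

    // Sketch only: resolve ORDER BY columns from a property-to-column map instead of
    // relying on how each database (Oracle vs. Postgres) reports column-name casing.
    public class OrderByBuilder {

        // Hypothetical mapping captured when the summary query is prepared.
        private final Map<String, String> propertyToColumn = new LinkedHashMap<>();

        public OrderByBuilder() {
            propertyToColumn.put(".pxOutcomeTime", "PXOUTCOMETIME"); // Oracle reports upper case
        }

        // Look the column up in the map so the clause is always generated,
        // regardless of how the database metadata reports the name.
        public String orderByClause(String propertyName) {
            String column = propertyToColumn.get(propertyName);
            if (column == null) {
                throw new IllegalArgumentException("Unmapped property: " + propertyName);
            }
            return " ORDER BY " + column;
        }

        public static void main(String[] args) {
            System.out.println("SELECT *" + new OrderByBuilder().orderByClause(".pxOutcomeTime"));
        }
    }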
SR-D95605 · Issue 565486
Data Flow correctly saved to Database table Dataset
Resolved in Pega Version 8.2.8
After setting up a dataflow with a Report Definition as the source and a Database Table dataset as the destination with the option "Insert new records and override existing", a data transform was used to modify a few of the values and write to the same database table. This table was mapped to the pegadata schema and did not have a pzPVStream column. Running the data transform generated an exception stating "DataStoreSaveStatementWithoutStream" (PageDatabaseMapperImpl). The error was not seen when the pzPVStream column was added, if pzInsKey was removed along with pzPVStream, or if "only insert new records" was selected. This was traced to pzInsKey and pxInsName being null in the query formed while writing. There are two execution paths for the database dataset save operation, one for internal tables and another for external tables; for a table with pzInsKey but no BLOB column, the system was incorrectly using the external table logic. This has been corrected.
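A minimal sketch of the corrected path selection, using hypothetical names rather than Pega's internal classes: a table that carries pzInsKey should follow the internal-table save logic even when it has no pzPVStream BLOB column.

    // Sketch only, not Pega's implementation.
    public class SavePathSelector {

        enum SavePath { INTERNAL, EXTERNAL }

        static SavePath selectPath(boolean hasPzInsKey, boolean hasPzPVStream) {
            // Before the fix, "no BLOB column" alone pushed the save onto the
            // external-table path, leaving pzInsKey/pxInsName unpopulated.
            if (hasPzInsKey) {
                return SavePath.INTERNAL;
            }
            return hasPzPVStream ? SavePath.INTERNAL : SavePath.EXTERNAL;
        }

        public static void main(String[] args) {
            System.out.println(selectPath(true, false));  // INTERNAL: keyed table without a BLOB
            System.out.println(selectPath(false, false)); // EXTERNAL: plain external table
        }
    }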
INC-126796 · Issue 561533
Modifications to getFunctionalServiceNodes process
Resolved in Pega Version 8.2.7
The count of Interaction History write-related threads was increasing rapidly, and a stack trace indicated "waiting on condition" and "java.lang.Thread.State: WAITING (parking)" errors. Investigation showed that this was due to getFunctionalServiceNodes using Hazelcast to determine node status by making a service request on an installation with a very large number of nodes, causing thread locking. To resolve this, the implementation has been updated to avoid calling getFunctionalServiceNodes on save of Interaction History, instead using Cassandra and only calling getFunctionalServiceNodes on the master node, not on all nodes.
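The master-only pattern can be sketched with the plain Hazelcast API; the class below and the choice of the oldest cluster member as "master" are assumptions for illustration, not Pega's implementation.

    import com.hazelcast.core.Hazelcast;
    import com.hazelcast.core.HazelcastInstance;
    import com.hazelcast.core.Member;

    // Sketch only: run the expensive cluster-wide node-status query on one node,
    // taking the oldest member of the Hazelcast cluster as the master.
    public class MasterOnlyStatusCheck {

        public static void main(String[] args) {
            HazelcastInstance hz = Hazelcast.newHazelcastInstance();
            try {
                // The cluster member view lists the oldest member first.
                Member oldest = hz.getCluster().getMembers().iterator().next();
                if (oldest.localMember()) {
                    // Only the master node fans out the service request here.
                    System.out.println("Master node: refreshing functional service node list");
                } else {
                    // Other nodes read the already-published view (e.g. from Cassandra).
                    System.out.println("Non-master node: using cached node status");
                }
            } finally {
                hz.shutdown();
            }
        }
    }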
INC-128385 · Issue 564519
Behavior made consistent between SSA and legacy engines
Resolved in Pega Version 8.2.7
There was a behavioral disparity between the legacy execution engine and the SSA engine, where the latter was not creating a new page when the index was one above the size of the page list. This has now been corrected in order to make the SSA behavior fully backward compatible with the legacy engine, i.e. a new blank page is added to the list if the index is one above the size of the list.
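A minimal sketch of the page-list semantics this restores, using hypothetical types rather than the SSA engine itself: referencing an index that is one above the current size appends a blank page.

    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch only: 1-based page-list access that grows the list by one blank page
    // when the requested index is exactly one past the end.
    public class PageListAccess {

        static Map<String, String> getPage(List<Map<String, String>> pageList, int oneBasedIndex) {
            if (oneBasedIndex == pageList.size() + 1) {
                pageList.add(new HashMap<>()); // legacy-engine behavior now mirrored by SSA
            }
            return pageList.get(oneBasedIndex - 1);
        }

        public static void main(String[] args) {
            List<Map<String, String>> customers = new ArrayList<>();
            getPage(customers, 1).put("Name", "First"); // list was empty; page 1 is created
            System.out.println(customers);
        }
    }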
INC-128898 · Issue 564690
Updated precondition checks for Tumbling Time data in event strategies
Resolved in Pega Version 8.2.7
Tumbling Time data keys in event strategies were not being processed properly when certain window configurations were used. This has been resolved by turning off the optimization of Cassandra reads for small windows when the window size is not known upfront and is set dynamically (size set by property). In addition, an issue with Cassandra timeouts after a dataflow run had been running for several months has been resolved by adding a 'time to live' value for tumbling time windows, and event strategies have been switched to use leveled compaction by default.
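Both remedies map onto standard Cassandra table properties. A short sketch using the DataStax Java driver, with a hypothetical keyspace/table name and TTL value rather than Pega's internal schema:

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.Session;

    // Sketch only: apply a default time-to-live (seconds) and leveled compaction
    // to a table that stores tumbling-window state.
    public class WindowStateTableTuning {

        public static void main(String[] args) {
            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect()) {
                session.execute(
                    "ALTER TABLE data.event_strategy_window_state "
                  + "WITH default_time_to_live = 604800 "
                  + "AND compaction = {'class': 'LeveledCompactionStrategy'}");
            }
        }
    }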
SR-D82727 · Issue 547723
Improved management for table pr_log_dataflow_events
Resolved in Pega Version 8.2.7
The Lifecycle event table was sometimes growing too large. The additional strain of this database transaction volume caused poor performance on the Dataflow tier and led to cluster instability and time-consuming cluster restarts. Due to problems in one of the Pulse tasks, the Pulse thread was not processing single-case metrics properly, causing the unbounded single-case queue to grow. This has been addressed by switching to a fixed queue size, which is configurable with the DSS dnode/single_case_queue_size. The default value of the DSS is 4000, and changing it requires a system restart. An error is logged every 1000 queue misses, and metrics are dropped if the queue is full. In addition, the Pulse task frequency has been adjusted to prevent interference with other Pulse tasks, and the task is triggered only if a run has been system-paused for a long interval. Rebalances now have a failsafe if something unexpected happens while the run is being paused, and if the cluster becomes unstable, the lifecycle event logs may be disabled with dataflow/run/events/persist.
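A minimal sketch of the fixed-size queue behavior described above, with a hypothetical class name rather than Pega's implementation: offers against a full queue are dropped, and an error is logged once per 1000 misses.

    import java.util.concurrent.ArrayBlockingQueue;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.atomic.AtomicLong;

    // Sketch only: bounded metric queue that drops on overflow instead of growing.
    public class SingleCaseMetricQueue {

        private final BlockingQueue<String> queue;
        private final AtomicLong misses = new AtomicLong();

        public SingleCaseMetricQueue(int capacity) {       // e.g. the DSS default of 4000
            this.queue = new ArrayBlockingQueue<>(capacity);
        }

        public void submit(String metric) {
            if (!queue.offer(metric)) {                     // non-blocking: drop when full
                long missed = misses.incrementAndGet();
                if (missed % 1000 == 0) {
                    System.err.println("Dropped " + missed + " single-case metrics; queue is full");
                }
            }
        }

        public static void main(String[] args) {
            SingleCaseMetricQueue q = new SingleCaseMetricQueue(2);
            for (int i = 0; i < 5; i++) {
                q.submit("metric-" + i);                    // later metrics are dropped
            }
        }
    }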
SR-D85095 · Issue 546339
Updated COUNT logic for strategies with ssavm set to true
Resolved in Pega Version 8.2.7
An error was seen when attempting to save a strategy after setting ssavm to true, indicating an issue in the "COUNT" method in the Group By shape. Since the source field is not used and does not need to be evaluated in this case, the system has been updated to ignore the source field if the operation is COUNT.
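A minimal sketch of that rule, with hypothetical types rather than the actual strategy compiler: a COUNT aggregation returns the row count without ever evaluating its source expression.

    import java.util.Arrays;
    import java.util.List;
    import java.util.Map;
    import java.util.function.Function;

    // Sketch only: COUNT ignores its source field, so a missing source must not fail.
    public class GroupByAggregator {

        enum Operation { COUNT, SUM }

        static double aggregate(Operation op, List<Map<String, Double>> rows,
                                Function<Map<String, Double>, Double> source) {
            switch (op) {
                case COUNT:
                    return rows.size();                      // source field intentionally ignored
                case SUM:
                    return rows.stream().mapToDouble(source::apply).sum();
                default:
                    throw new IllegalArgumentException("Unsupported operation: " + op);
            }
        }

        public static void main(String[] args) {
            List<Map<String, Double>> rows = Arrays.asList(Map.of("Amount", 10.0), Map.of("Amount", 5.0));
            System.out.println(aggregate(Operation.COUNT, rows, null));               // 2.0, no source needed
            System.out.println(aggregate(Operation.SUM, rows, r -> r.get("Amount"))); // 15.0
        }
    }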
SR-D87709 · Issue 552398
Default context check added for saving adaptive model with locked rulesets
Resolved in Pega Version 8.2.7
When updating an adaptive model rule in Prediction Studio, the error message "No unlocked Rulesets/Versions found that are valid for this record. Unlock at least one Ruleset/Version that can contain records of this type." appeared when clicking Save. This occurred when a branch was used in the default context of the Prediction Studio settings. Although there was a workaround of using Dev Studio to Save As the adaptive model rule into the required branch, this has been resolved by adding a check for the default context and saving the model to that context when it is specified.