INC-220174 · Issue 716384
Improved cleanup for data from joining nodes
Resolved in Pega Version 8.6.5
A Cassandra node that is down for longer than the grace period (default 10 days) can reintroduce previously deleted "zombie" data when it rejoins the cluster, creating instability. This can include deleted VBD partition summary data, which can break rule loading and cause the service to fail to start up. To resolve this, additional logic has been implemented to detect zombie summary records, including summary records without field descriptors and summary records whose dictionaries have not already been provided, and to read dictionaries from the latest summary record in case preceding zombie records also carry dictionaries.
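The detection logic can be illustrated with a minimal sketch; the class and method names below are hypothetical and do not reflect Pega's internal API. A record is treated as a zombie when expected parts are missing, and dictionaries are always read from the newest record that carries one:

    // Hypothetical sketch of zombie summary-record handling; names are
    // illustrative, not Pega's internal API.
    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    final class SummaryRecord {
        final long timestamp;
        final Object fieldDescriptors; // null when missing
        final Object dictionary;       // null when missing

        SummaryRecord(long timestamp, Object fieldDescriptors, Object dictionary) {
            this.timestamp = timestamp;
            this.fieldDescriptors = fieldDescriptors;
            this.dictionary = dictionary;
        }

        // A resurrected record typically comes back incomplete: no field
        // descriptors, or no dictionary where none was provided earlier.
        boolean isZombie(boolean dictionaryAlreadyProvided) {
            return fieldDescriptors == null
                || (dictionary == null && !dictionaryAlreadyProvided);
        }
    }

    final class DictionaryResolver {
        // Read the dictionary from the latest summary record so that older
        // zombie records carrying stale dictionaries are never preferred.
        Optional<Object> latestDictionary(List<SummaryRecord> records) {
            return records.stream()
                .filter(r -> r.dictionary != null)
                .max(Comparator.comparingLong(r -> r.timestamp))
                .map(r -> r.dictionary);
        }
    }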
INC-222561 · Issue 721043
Check added for destination type for distribution test reports
Resolved in Pega Version 8.6.5
When the system contained two output destinations with the same name, one of type VBD and the other of type Database table, an incorrect class was set for distribution test reports and an error was generated when trying to open the report. Investigation showed that the system was checking only the destination name and not its type; this has been resolved by adding the pzSetSimulationOutputClass data transform, which checks the destination type in addition to the destination name when setting the class for reports.
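The underlying design point is generic: when two destinations can share a name, a lookup keyed on the name alone is ambiguous. Below is a minimal sketch of a lookup keyed on both name and type; the names are hypothetical, and this is not the pzSetSimulationOutputClass implementation:

    import java.util.Map;

    // Key the destination lookup on (name, type) so that a VBD destination
    // and a Database table destination with the same name resolve to
    // different report classes.
    record DestinationKey(String name, String type) {}

    final class ReportClassResolver {
        private final Map<DestinationKey, String> classByDestination;

        ReportClassResolver(Map<DestinationKey, String> classByDestination) {
            this.classByDestination = classByDestination;
        }

        String resolve(String name, String type) {
            String reportClass = classByDestination.get(new DestinationKey(name, type));
            if (reportClass == null) {
                throw new IllegalArgumentException(
                    "No report class for destination '" + name + "' of type " + type);
            }
            return reportClass;
        }
    }

With this keying, resolve("Sales", "VBD") and resolve("Sales", "Database table") return distinct classes even though the names collide.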
INC-224038 · Issue 723258
Performance improvement for serviceregistry table queries
Resolved in Pega Version 8.6.5
The expression filter ($1 - "SSR"."pytimeout") in queries against the service registry table caused full table scans, which could degrade performance and even risk contention and locking. This has been resolved by replacing the pyTimeout column reference in the filters with the default value of 9000 ms, improving the performance of these queries.
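This is a standard sargability problem: because the cutoff expression referenced a per-row column, the database could not satisfy the filter with an index range scan. The sketch below illustrates the before/after shape of the filter; the table and timestamp column names are assumptions, while the SSR alias, the pytimeout column, and the 9000 ms default come from the fix itself:

    // Illustrative only: the query text is assumed, not taken from Pega.
    final class ServiceRegistryQueries {
        // Before: the cutoff depends on each row's pytimeout value, so the
        // filter must be evaluated row by row -- a full table scan.
        static final String BEFORE =
            "SELECT * FROM SERVICE_REGISTRY SSR"
          + " WHERE SSR.pylastupdate < (? - SSR.pytimeout)";

        // After: with the 9000 ms default inlined, the cutoff is a single
        // constant at execution time, so an index on the timestamp column
        // can be used.
        static final String AFTER =
            "SELECT * FROM SERVICE_REGISTRY SSR"
          + " WHERE SSR.pylastupdate < (? - 9000)";
    }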
INC-202111 · Issue 710106
Logging extended for PRPCPropertyInfoProvider
Resolved in Pega Version 8.7.3
In order to assist with diagnosing issues with Kafka and JSON, additional logging has been added for PRPCPropertyInfoProvider.
INC-208976 · Issue 719165
Enhanced SSA metrics made available
Resolved in Pega Version 8.7.3
In order to better diagnose delays between the time a Campaign is scheduled to start and the time the Dataflow actually starts to run, an update has been made that generates detailed metrics covering key performance-intensive areas of strategy execution. Additional lower-level internal metrics related to SSA engine execution have also been made available by way of a DSS to collect more runtime insight for diagnosis. To enable collection of these Level 2 SSA internal metrics, set the dataflow/shape/strategy/detailed_metrics/level2 DSS in the Pega-DecisionEngine ruleset to 'true'. A comprehensive set of enhanced metrics will be available in Pega 8.8.
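For reference, the setting described above is an ordinary Dynamic System Setting record; the field layout below simply mirrors the prose:

    Owning ruleset:   Pega-DecisionEngine
    Setting purpose:  dataflow/shape/strategy/detailed_metrics/level2
    Value:            true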
INC-217290 · Issue 721375
Added support for creating predictive models in Production
Resolved in Pega Version 8.7.3
While creating a new predictive model rule in Prediction Studio, the case went into a broken process after the template was selected, with the error message "Error loading D_ProjectList , Reason : No databases defined in properties file:/databases.properties". Creating models at the Production level was an unexpected use case, and this has been resolved by updating the flows to turn off draft mode in this scenario.
INC-218145 · Issue 715678
DSS introduced to control DSM clipboard page serialization
Resolved in Pega Version 8.7.3
When using a Kafka data set to consume a message from an external topic that had an attribute name with a special character contained in a page list structure, using a JSON data transform for the mapping in a real-time dataflow resulted in the error "Exception in stage: KafkaDS; LegacyModelAspectInvokableRuleContainer.invoke-Exception encountered a :java.lang.UnsupportedOperationException." To resolve this, a new DSS, dataset/CLASS_NAME/DATASET_NAME/JSONDataTransform/deserialization/useDSMPage, has been introduced. When the value is set to true, the process follows the previous behavior of generating DSM clipboard pages when Kafka records are deserialized using the JSON data transform. When the value is set to false, the JSON data transform generates regular clipboard pages and converts them to DSM clipboard pages later; this avoids errors when a JSON data transform calls Clipboard API methods that are not implemented by DSM pages. This DSS is set per data set instance: CLASS_NAME and DATASET_NAME are placeholders that should be replaced with the data set's pyClassName and pyPurpose property values. In addition, a similar DSS, dataset/CLASS_NAME/DATASET_NAME/JSONDataTransform/serialization/useDSMPage, has been introduced for serialization.
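As a hypothetical example of the placeholder substitution: for a Kafka data set whose pyClassName is MyOrg-Data-Customer and whose pyPurpose is CustomerStream (both values invented for illustration), the two settings would be named as follows, with false selecting the new regular-clipboard-page behavior and true keeping the previous DSM-page behavior:

    dataset/MyOrg-Data-Customer/CustomerStream/JSONDataTransform/deserialization/useDSMPage = false
    dataset/MyOrg-Data-Customer/CustomerStream/JSONDataTransform/serialization/useDSMPage = false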
INC-218172 · Issue 716398
Text analytics character limit set to avoid memory issues
Resolved in Pega Version 8.7.3
Utility nodes were unstable in relation to search, and email listener threads became stuck during Rule-based Text Annotation (RUTA) and natural language processing (NLP) work on incoming emails. This happened when the system experienced high or excessive memory consumption while using text analytics. This has been resolved by setting the default maximum character limit for NLP analysis to 25,000 characters to avoid RUTA memory issues. If the provided text exceeds 25,000 characters, the system considers only the first 25,000 characters, and a flag is set on NLPOutcome to indicate that the text has been limited. The character limit is configurable, but if it is set above 25,000, a warning is shown before the change is saved.
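The truncation behavior itself is straightforward; the sketch below uses hypothetical names and is not Pega's NLP pipeline:

    // Hypothetical sketch of the 25,000-character cap described above.
    record LimitedText(String text, boolean truncated) {}

    final class NlpInputLimiter {
        static final int MAX_CHARS = 25_000;

        static LimitedText limit(String text) {
            if (text.length() <= MAX_CHARS) {
                return new LimitedText(text, false);
            }
            // Keep only the first 25,000 characters and flag the truncation,
            // mirroring the indicator set on NLPOutcome.
            return new LimitedText(text.substring(0, MAX_CHARS), true);
        }
    }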
INC-223376 · Issue 723575
JMX authentication enabled by default for embedded Kafka and Cassandra
Resolved in Pega Version 8.7.3
For on-premises clients, a potential Remote Code Execution vulnerability via the JMX interface on Cassandra and Kafka through exposed network ports has been mitigated by enabling JMX authentication by default for embedded Kafka and Cassandra.
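For context, JMX authentication on a JVM process such as Kafka or Cassandra is controlled through standard com.sun.management JVM options. The flags below are generic JVM settings shown purely for illustration (the file paths are placeholders); they are not Pega-specific configuration:

    -Dcom.sun.management.jmxremote.authenticate=true
    -Dcom.sun.management.jmxremote.password.file=/path/to/jmxremote.password
    -Dcom.sun.management.jmxremote.access.file=/path/to/jmxremote.access
    -Dcom.sun.management.jmxremote.ssl=true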
INC-229717 · Issue 730667
Cassandra startup calls reordered to avoid deadlock
Resolved in Pega Version 8.7.3
Nodes that received a service request could become stuck. This was traced to a deadlock related to CassandraSessionCache.getSession, and has been resolved by reordering the method calls used to initialize the Cassandra session so that the session change listener is added later, avoiding the deadlock scenario.
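A minimal sketch of the reordering, with hypothetical names (the actual fix lives inside CassandraSessionCache): listener registration is moved to after session initialization, so a listener callback can no longer re-enter getSession while initialization is still in progress:

    import java.util.function.Supplier;

    // Hypothetical sketch: initialize first, register the listener second.
    final class SessionCacheSketch {
        private volatile Object session;

        void initialize(Supplier<Object> connect, Runnable registerChangeListener) {
            // Before the fix the listener was added before the session was
            // ready, and its callback could block inside getSession().
            session = connect.get();       // 1. build the Cassandra session
            registerChangeListener.run();  // 2. only then add the listener
        }

        Object getSession() {
            return session;
        }
    }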