INC-228935 · Issue 731524
Writes to movie data set made optional in Delayed learning flow
Resolved in Pega Version 8.8
In order to prevent the Cassandra event store from accumulating excessive tombstones, an option has been added on the movie landing page to disable writes to the event store dataset.
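The note above describes making the event-store write optional; a minimal sketch of that kind of guard is shown below. The class, method, and flag names are illustrative assumptions, not the actual Pega implementation.

```java
// Hypothetical sketch: guard event-store writes behind a configuration flag,
// mirroring the "writes made optional" behavior described in the note.
public class EventStoreWriter {
    private final boolean writeToEventStore; // toggled by the landing page setting

    public EventStoreWriter(boolean writeToEventStore) {
        this.writeToEventStore = writeToEventStore;
    }

    /** Returns true only when the record was actually persisted. */
    public boolean write(String record, java.util.List<String> store) {
        if (!writeToEventStore) {
            return false; // skip the Cassandra write entirely; no tombstone churn
        }
        store.add(record);
        return true;
    }
}
```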
INC-229319 · Issue 739145
Service registry queries optimized
Resolved in Pega Version 8.8
Queries running against the pr_sys_serviceregistry_kvs and pr_sys_serviceregistry tables were performing full table scans. This has been resolved by optimizing the service registry queries to avoid full table scans.
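The general shape of this kind of optimization can be sketched as follows: push the predicate into the query so an index can be used, instead of scanning every row and filtering in application code. The column names and query text below are assumptions for illustration only; the table names come from the note.

```java
// Illustrative sketch of the query optimization described above.
public class RegistryQueries {
    /** Pre-fix shape: forces a full table scan, then filters in memory. */
    public static String scanQuery() {
        return "SELECT pznodeid, pzvalue FROM pr_sys_serviceregistry_kvs";
    }

    /** Post-fix shape: predicate in the query, so an index on the key column applies. */
    public static String keyedQuery() {
        return "SELECT pznodeid, pzvalue FROM pr_sys_serviceregistry_kvs WHERE pznodeid = ?";
    }
}
```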
INC-229371 · Issue 730011
Corrected schema inconsistency in pxDR after update
Resolved in Pega Version 8.8
After running the make-decision and capture-response data flows using delayed learning and populating the pxDecisionResults data set, updating the system to Pega 8.7 and running the capture-response data flow with interactions from before the update resulted in the error "DataStoreConnection$InconsistentTableSchemaException: Inconsistency in table [data.pxDecisionResults_Pega_DecisionEngine_ab5de04587] schema between data store and schema repository". This was traced to a difference in DDS Data Sets before and after the introduction of the DDS SDK and Data-Admin-DDS-Table. To ensure better backwards compatibility, an update has been made so that DdsDataSetFactory checks for the existence of a Data-Admin-DDS-Table record in the database and, if one is found, uses the tableName from that record. This resolves inconsistencies in the casing of the table name.
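The lookup-with-fallback described above can be sketched as below. The resolver class, the map standing in for the database, and the key names are hypothetical; only DdsDataSetFactory's described behavior (prefer the recorded tableName, fall back to a generated one) is from the note.

```java
import java.util.Map;
import java.util.Optional;

// Sketch of the backwards-compatibility check: prefer the table name recorded
// in an existing Data-Admin-DDS-Table instance; only fall back to a freshly
// generated (possibly differently cased) name when no record exists.
public class DdsTableNameResolver {
    /** recordsByDataSet stands in for the Data-Admin-DDS-Table records in the database. */
    public static String resolve(String dataSetKey,
                                 Map<String, String> recordsByDataSet,
                                 String generatedName) {
        return Optional.ofNullable(recordsByDataSet.get(dataSetKey)).orElse(generatedName);
    }
}
```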
INC-229682 · Issue 731965
Repository schema updated for DdsDataSetFactory
Resolved in Pega Version 8.8
After update, running the CDH NBA Scheduler resulted in the error "UnsupportedOperationException: This operation is not supported by NonVersionedJDBCJarReader". This has been resolved with an update to use the explicit construction of PRPCSchemaRepository in DdsDataSetFactory rather than loading SchemaRepository via SPI ServiceLoader.
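The contrast between SPI discovery and explicit construction can be sketched with a hypothetical SchemaRepository interface; ServiceLoader is the real Java SPI mechanism, but everything else below is illustrative rather than Pega's actual code.

```java
import java.util.ServiceLoader;

// Sketch: explicit construction removes the dependency on ServiceLoader-based
// discovery, which can fail under restricted class loaders.
public class SchemaRepositoryFactory {
    public interface SchemaRepository { String name(); }

    /** Post-fix shape: explicit construction, no SPI lookup at all. */
    public static SchemaRepository create() {
        return () -> "PRPCSchemaRepository"; // stands in for `new PRPCSchemaRepository()`
    }

    /** Pre-fix shape, for contrast: discovery that depends on the class loader. */
    public static SchemaRepository createViaSpi() {
        return ServiceLoader.load(SchemaRepository.class)
                .findFirst()
                .orElseThrow(UnsupportedOperationException::new);
    }
}
```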
INC-229717 · Issue 730669
Cassandra startup calls reordered to avoid deadlock
Resolved in Pega Version 8.8
Nodes receiving a service request were becoming stuck. This was traced to a deadlock related to CassandraSessionCache.getSession, and has been resolved by reordering the method calls used to initialize the Cassandra session so that the session change listener is added later, avoiding the deadlock scenario.
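The essence of the reordering can be sketched as a call sequence: registering a change listener before initialization completes risks a notification re-entering the cache while its lock is still held, so the listener is now added last. The method names below are assumptions for illustration.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the post-fix initialization order described in the note.
public class SessionInitOrder {
    public static List<String> initSession() {
        List<String> calls = new ArrayList<>();
        calls.add("createSession");
        calls.add("initializeSession");
        // Listener added only after initialization, so a change notification
        // can never fire while initialization still holds the cache lock.
        calls.add("addSessionChangeListener");
        return calls;
    }
}
```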
INC-230327 · Issue 738104
Updated DDS table migration handling
Resolved in Pega Version 8.8
In order to prevent data loss during update, Data-Admin-DataSet-DDS activities have been modified so that an existing pyCorrespondingTableName is not overwritten on subsequent save calls for data sets already saved to the database.
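The guarded save described above amounts to "set the table name only if it is not already set." The map-based record and method below are assumptions; pyCorrespondingTableName is the property named in the note.

```java
import java.util.Map;

// Sketch: an existing pyCorrespondingTableName survives subsequent saves,
// so a later save can no longer point the data set at a different table.
public class DdsDataSetSave {
    public static void save(Map<String, String> dataSetRecord, String newTableName) {
        dataSetRecord.putIfAbsent("pyCorrespondingTableName", newTableName);
    }
}
```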
INC-230436 · Issue 733943
PropositionFilter logic updated
Resolved in Pega Version 8.8
In order to reduce processing overhead and better handle custom code, a new LogicStringParser has been added for evaluating complex logic in PropositionFilter.
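The note names a LogicStringParser but does not publish its design; the evaluator below is a minimal illustrative sketch of parsing a filter-style logic string such as "1 AND (2 OR NOT 3)", where each number indexes a per-criterion boolean result. All names and the grammar are assumptions.

```java
// Minimal recursive-descent evaluator for a logic string over indexed criteria.
public class LogicStringEval {
    private final String[] tokens;
    private final boolean[] results; // results[i-1] is the outcome of criterion i
    private int pos;

    public LogicStringEval(String logic, boolean[] results) {
        this.tokens = logic.replace("(", " ( ").replace(")", " ) ").trim().split("\\s+");
        this.results = results;
    }

    public boolean evaluate() { pos = 0; return orExpr(); }

    private boolean orExpr() {
        boolean v = andExpr();
        while (pos < tokens.length && tokens[pos].equalsIgnoreCase("OR")) {
            pos++;
            v |= andExpr(); // no short-circuit: keep consuming tokens
        }
        return v;
    }

    private boolean andExpr() {
        boolean v = unary();
        while (pos < tokens.length && tokens[pos].equalsIgnoreCase("AND")) {
            pos++;
            v &= unary();
        }
        return v;
    }

    private boolean unary() {
        if (tokens[pos].equalsIgnoreCase("NOT")) { pos++; return !unary(); }
        if (tokens[pos].equals("(")) {
            pos++;                 // consume "("
            boolean v = orExpr();
            pos++;                 // consume ")"
            return v;
        }
        return results[Integer.parseInt(tokens[pos++]) - 1];
    }
}
```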
INC-231505 · Issue 733974
Error handling improved for NBA run
Resolved in Pega Version 8.8
The data flow partition on a node was sporadically becoming stuck during a Next Best Action Campaign outbound run when it encountered an issue such as a NoClassDefFoundError, requiring a node restart to clear the problem. This has been resolved by explicitly failing the flow partition when an input thread dies or an error occurs in the data flow.
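One standard way to implement "fail the partition when an input thread dies" is an uncaught-exception handler on the input thread, sketched below with hypothetical names; it propagates even fatal errors like NoClassDefFoundError into an explicit FAILED state instead of leaving the partition stuck.

```java
import java.util.concurrent.atomic.AtomicReference;

// Sketch: a dying input thread explicitly fails its partition.
public class PartitionFailureDemo {
    public static String runPartition(Runnable work) {
        AtomicReference<String> state = new AtomicReference<>("RUNNING");
        Thread input = new Thread(work);
        // Any uncaught Throwable (including Errors) marks the partition FAILED
        // rather than silently killing the thread.
        input.setUncaughtExceptionHandler(
                (t, e) -> state.set("FAILED: " + e.getClass().getSimpleName()));
        input.start();
        try {
            input.join();
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
        if (state.get().equals("RUNNING")) state.set("COMPLETED");
        return state.get();
    }
}
```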
INC-231704 · Issue 741219
Email template fragment updated to correct footer handling
Resolved in Pega Version 8.8
After creating a new Email correspondence rule and configuring the placeholders as per the selected template, the "Footer" verbiage was displayed in the rendered preview irrespective of the footer region configuration. This has been resolved by updating the template fragment.
INC-231808 · Issue 735286
Heuristics moved from individual Annotator to Operations for consistent tokenization
Resolved in Pega Version 8.8
Because the tokenization heuristics were part of the annotator and not part of the operations, rule-based taxonomy loaded token operations directly, bypassing these heuristics and producing different tokenization outcomes for taxonomy keywords and analysis-time text. Due to this discrepancy, tokenized emails did not match analysis-time content. This has been resolved by moving the heuristics from the individual Annotator to Operations so that tokenization is consistent.
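The fix amounts to routing both paths through one tokenize operation so taxonomy keywords and analysis-time text match by construction; the sketch below illustrates this with assumed heuristics (lower-casing, punctuation stripping) and hypothetical names.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: a single shared tokenization path, heuristics included, used by both
// taxonomy loading and analysis-time text processing.
public class SharedTokenizer {
    public static List<String> tokenize(String text) {
        return Arrays.asList(
                text.toLowerCase().replaceAll("[^a-z0-9 ]", " ").trim().split("\\s+"));
    }

    /** Taxonomy keywords now match analysis-time tokens by construction. */
    public static boolean matches(String taxonomyKeyword, String emailText) {
        return tokenize(emailText).containsAll(tokenize(taxonomyKeyword));
    }
}
```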