INC-193326 · Issue 683805
Adaptive model retry mechanism enabled
Resolved in Pega Version 8.5.6
Adaptive models were missing from the Model Management page and from Prediction Studio, while similar models for the same proposition, differing only by channel name, were visible. This was traced to data not being synchronized between the database and Cassandra: the pegadata.pr_data_adm_factory database table did not contain a record for the missing channel, but Cassandra did. Because the Cassandra adm_scoringmodel table still contained the model information, the system believed the model was present. To keep Cassandra and the database table in sync, an update has been made to enable the "SyncFactoryKeysTask" retry mechanism, which periodically looks for scoring models that have no factory record or adm_meta entry and creates the ADM model in the factory table.
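As a rough illustration of this reconciliation pattern, the sketch below periodically diffs the scoring-model keys in Cassandra against the factory table and recreates any missing records. It is a sketch only; ScoringModelStore and FactoryTable are hypothetical interfaces standing in for the two stores, not the Pega API, and the schedule interval is arbitrary.

    import java.util.HashSet;
    import java.util.Set;
    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.TimeUnit;

    public class SyncFactoryKeysSketch {

        // Hypothetical stores, standing in for Cassandra's adm_scoringmodel table
        // and the pegadata.pr_data_adm_factory database table.
        interface ScoringModelStore { Set<String> listScoringModelKeys(); }
        interface FactoryTable { Set<String> listKeys(); void createFactoryRecord(String key); }

        static void syncFactoryKeys(ScoringModelStore cassandra, FactoryTable factoryTable) {
            // Scoring models present in Cassandra but missing a factory record
            Set<String> orphans = new HashSet<>(cassandra.listScoringModelKeys());
            orphans.removeAll(factoryTable.listKeys());
            for (String key : orphans) {
                factoryTable.createFactoryRecord(key); // re-create the missing ADM model record
            }
        }

        public static void schedule(ScoringModelStore cassandra, FactoryTable factoryTable) {
            ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
            scheduler.scheduleAtFixedRate(
                    () -> syncFactoryKeys(cassandra, factoryTable), 1, 5, TimeUnit.MINUTES);
        }
    }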
INC-193632 · Issue 679171
Cassandra driver metrics exposed for performance troubleshooting
Resolved in Pega Version 8.5.6
By default, Cassandra driver metrics are now enabled. Metrics can be disabled by setting the dnode/disable_driver_metrics prconfig parameter.
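For example, assuming the standard prconfig.xml env-entry syntax and a boolean value where "true" disables the metrics (as the setting's name suggests):

    <env name="dnode/disable_driver_metrics" value="true" />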
INC-193986 · Issue 680031
Parameter logic updated for metrics activity counter
Resolved in Pega Version 8.5.6
An error was causing the PushCDHMetrics agent to fail. This was traced to an undefined activity parameter being used as a counter, and has been resolved by replacing the parameter with a local variable of type integer.
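The underlying pitfall can be shown in a few lines of Java; this is a sketch of the general pattern only, not the actual activity code:

    // Sketch only: an unset parameter used as a counter fails at runtime,
    // while a local integer variable is always defined.
    public class CounterSketch {
        public static void main(String[] args) {
            String counterParam = null; // an activity parameter that was never defined
            // Integer.parseInt(counterParam); // would throw NumberFormatException

            int counter = 0; // the fix: a local variable of type integer
            for (int i = 0; i < 3; i++) {
                counter++;
            }
            System.out.println("processed " + counter + " batches");
        }
    }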
INC-194305 · Issue 681679
File Dataset wildcards updated for ADM storage
Resolved in Pega Version 8.5.6
Attempting to access the file path was failing with an error indicating "Could not obtain lock 'Create repository file pegacloudfilestorage:ADM/Rule-Decision-AdaptiveModel/Data-Decision-Request-Customer". When creating a file name, a lock is taken while checking the names of existing files. If the file data set contains a large number of files, some threads may be unable to save data and an exception is thrown. To resolve this, the wildcard previously used in the file name has been replaced with the Pega node id + thread id + current timestamp. This ensures every generated file name is unique, so there is no need to lock and list existing files.
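The new naming scheme can be sketched as follows; buildUniqueFileName and the separator format are illustrative, and the exact format Pega generates may differ:

    // Sketch only: compose the file name from node id + thread id + timestamp so
    // concurrent writers cannot collide and no lock-and-list step is needed.
    public final class UniqueFileNameSketch {
        private UniqueFileNameSketch() {}

        public static String buildUniqueFileName(String nodeId, String prefix, String extension) {
            long threadId = Thread.currentThread().getId();
            long timestamp = System.currentTimeMillis();
            return prefix + "-" + nodeId + "-" + threadId + "-" + timestamp + extension;
        }

        public static void main(String[] args) {
            System.out.println(buildUniqueFileName("node-1", "ADM", ".json"));
        }
    }

Because each component is unique at its own level (node, thread within the node, instant within the thread), the combination is unique without any coordination between writers.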
INC-194657 · Issue 682004
ADMInputSource field population updated to handle transactional decisions
Resolved in Pega Version 8.5.6
The CaptureResponse flow was failing while writing to AdaptiveAnalytics with the error "IllegalArgumentException - argument does not represent JSON object". This occurred while running an outbound campaign in which some decisions were treated as transactional and had no model executions. Attempting to set a response for these decisions threw a JSON parse exception because pxADMInputs was not populated (no models executed). This occurred only when using transactional actions in CDH, and was caused by the system including only predictive models during the make-decision flow because common inputs were not stored. While there was a workaround of overriding the property pzADMInputSource to use modelReferences instead of admInputContainer, this has been resolved by correcting how the pzADMInputSource field is populated: when there is no model input, the system does not populate the field, so the page is ignored.
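The corrected behavior follows a common JSON-building pattern: omit an optional field entirely rather than write an empty or malformed value that a later parse step rejects. A minimal sketch of that pattern, with an assumed page shape rather than the actual pzADMInputSource implementation:

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    // Sketch only: populate the ADM input-source field only when at least one
    // model actually executed, so downstream JSON parsing never sees a non-object.
    public class AdmInputSourceSketch {
        public static Map<String, Object> buildResponsePage(List<Map<String, Object>> modelInputs) {
            Map<String, Object> page = new LinkedHashMap<>();
            page.put("interactionId", "example-id");
            if (modelInputs != null && !modelInputs.isEmpty()) {
                page.put("admInputSource", modelInputs); // omitted for transactional decisions
            }
            return page;
        }
    }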
INC-197730 · Issue 686235
Prediction outcome response timing updated
Resolved in Pega Version 8.5.6
Predictions using a response timeout were not emitting a negative response ('NoResponse') when the specified waiting time expired. This was traced to the outcome and response timeout values being overridden while triggering responses for multi-stage predictions along with chained predictions. This has been resolved by modifying the flow to emit each outcome as it is received, and by adding the dataflow trigger in the function so that it does not override the values in the case of chained predictions.
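The corrected timing can be pictured as a scheduled fallback: emit each outcome as it arrives, and fire 'NoResponse' only if nothing arrived before the waiting time expired. A sketch under those assumptions; the emitOutcome callback is hypothetical, not the Pega dataflow API:

    import java.util.concurrent.Executors;
    import java.util.concurrent.ScheduledExecutorService;
    import java.util.concurrent.ScheduledFuture;
    import java.util.concurrent.TimeUnit;
    import java.util.function.Consumer;

    // Sketch only: a per-prediction response timeout that emits 'NoResponse'
    // when the waiting time expires, unless a real outcome arrived first.
    public class ResponseTimeoutSketch {
        private final ScheduledExecutorService scheduler =
                Executors.newSingleThreadScheduledExecutor();
        private ScheduledFuture<?> pendingTimeout;

        public void awaitResponse(Consumer<String> emitOutcome, long timeoutSeconds) {
            pendingTimeout = scheduler.schedule(
                    () -> emitOutcome.accept("NoResponse"), timeoutSeconds, TimeUnit.SECONDS);
        }

        public void onOutcome(Consumer<String> emitOutcome, String outcome) {
            if (pendingTimeout != null) {
                pendingTimeout.cancel(false); // the real response arrived in time
            }
            emitOutcome.accept(outcome);      // emit each outcome as it is received
        }
    }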
INC-200218 · Issue 692430
Added handling for calling truncate with external Cassandra
Resolved in Pega Version 8.5.6
A JMX exception was generated when using external Cassandra. This was traced to the combination of calling truncate and using external Cassandra for DDS, and has been resolved by adding a 'do not execute' consistency check during a truncate operation when using external Cassandra.
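The shape of the fix is essentially a guard around the truncate path; the following is an illustrative abstraction with hypothetical names, not Pega's DDS code:

    // Sketch only: skip the consistency check during truncate when the decision
    // data store is backed by an external Cassandra cluster.
    public class TruncateGuardSketch {
        public static void truncate(boolean externalCassandra,
                                    Runnable consistencyCheck, Runnable truncateOp) {
            if (!externalCassandra) {
                consistencyCheck.run(); // internal DDS: check as before
            }
            truncateOp.run();
        }
    }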
INC-200609 · Issue 699302
Corrected single case metrics being combined
Resolved in Pega Version 8.5.6
In Single Case dataflow runs, even though the data was processed on the nodes, after some time the metrics were moved to 'Combined metrics of unavailable node'. This occurred when the corresponding realtime service was not available on the node running the single case run, and has been resolved by updating the system to prevent single case metrics from being combined while running on a node without the realtime service enabled.
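The before/after behavior can be summarized in a small sketch (hypothetical names; the bucket string is the label quoted above):

    // Sketch only: before the fix, a missing realtime service caused single case
    // metrics to fall into the combined bucket; after the fix they stay per-node.
    public class SingleCaseMetricsSketch {
        static final String COMBINED = "Combined metrics of unavailable node";

        public static String bucketBeforeFix(String nodeId, boolean realtimeServiceAvailable) {
            return realtimeServiceAvailable ? nodeId : COMBINED;
        }

        public static String bucketAfterFix(String nodeId, boolean realtimeServiceAvailable) {
            return nodeId; // service availability no longer reassigns the metrics
        }
    }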
INC-202937 · Issue 695940
RealTimeProcessingDelay made configurable
Resolved in Pega Version 8.5.6
When using interaction history summaries in Engagement Policy strategies, the check for whether a particular action was previously sent returned no results for customers that did have an action "Sent". These records were present in the IH Fact table but missing from the IH Summary tables. This has been resolved by making the realTimeProcessingDelay for reading from IH in a real-time flow configurable (done as part of IH pre-aggregation). This may be useful when there is a time difference between machines inserting into IH, which can cause pre-aggregation to miss records. The relevant DSS is "interactionHistory/realTimeProcessingDelay", with a default of 5 seconds; it must be set before starting pre-aggregation, and it represents the difference between the end position and the log time.
RuleSet: Pega-DecisionEngine
DSS Name: interactionHistory/realTimeProcessingDelay
Value: <new delay in seconds>
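Interpreting the description above, the delay effectively holds back the read cutoff so that records logged slightly in the past (due to clock differences between inserting machines) are still picked up. A sketch of that assumed semantics:

    import java.time.Instant;

    // Sketch only (assumed semantics): pre-aggregation processes IH records whose
    // log time is at or before endPosition minus realTimeProcessingDelay.
    public class PreAggregationCutoffSketch {
        public static Instant effectiveEndPosition(Instant endPosition, long delaySeconds) {
            return endPosition.minusSeconds(delaySeconds); // default delay is 5 seconds
        }

        public static void main(String[] args) {
            System.out.println(effectiveEndPosition(Instant.now(), 5));
        }
    }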
INC-203994 · Issue 698852
DSS added to handle merges with lower versions of Postgres
Resolved in Pega Version 8.5.6
After update, executing a batch campaign with a volume constraint resulted in the second data flow, DF_Wait, failing with the error "ERROR: number of columns (1844) exceeds limit (1664)". This was due to the database set's change (in 8.5) to use the database layer's merge statement; prior to that, the logic used deletes and inserts. Depending on the version of Postgres, the generated SQL for a merge statement differs: the "INSERT ... ON CONFLICT ... UPDATE" syntax is generated for Postgres 9.5+ and when there is a PK constraint defined for the database table. Otherwise, the complex UPSERT statement (old syntax) is generated, as was the case in this issue. This is a known issue in the Postgres server software, which misinterprets the number of columns involved: it mistakenly counts the columns twice, so the actual maximum allowed is only half of the official limit (1664). The same UPSERT statement does not cause the "exceeds limit" exception if there are 832 or fewer columns in the statement. To resolve this, an option has been provided to select between the original logic (deletes and inserts) and the merge-statement logic by way of the DSS "decision/datasets/db/useMergeStatementForUpdates". Setting it to "true" uses the merge-statement logic, and setting it to "false" uses deletes and inserts. When the DSS is not defined, the default is "true" and the system uses merge statements in the form preferred by Postgres 9.5+.
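For example, to revert an affected Postgres installation to the pre-8.5 behavior, the DSS would be set as follows (listed in the same convention as the DSS above; the owning ruleset is not stated in this note):

DSS Name: decision/datasets/db/useMergeStatementForUpdates
Value: false (deletes and inserts; "true" or unset uses merge statements)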