SR-D87870 · Issue 548771
Resolved putting JMS Message through JMS Listener interface
Resolved in Pega Version 8.3.3
When using a JMS Listener integrated into Pega running on WebSphere Liberty, the listener ran correctly and was able to consume JMS messages. The JMS Listener rule form provided options to browse and put JMS messages, but attempting to put a message into the destination queue produced an error. Investigation showed that when the resource lookup used 'Resource references' on the JMS Listener, the naming context was initialized with a null hashtable. This has been corrected by modifying AddJMSMessage and GenericViewJMSMessage to initialize the naming context with the default constructor, and RefreshJMSMessages to set JNDIServerName and pass the current parameter page when calling GenericViewJMSMessage.
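As a rough illustration of the failure mode and the fix, the sketch below uses plain JNDI (not Pega's actual AddJMSMessage code; the queue name and settings are placeholders). Building an environment Hashtable from blank listener settings fails, while the default InitialContext constructor inherits the container's environment:

```java
import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;

public class JmsListenerJndiSketch {

    // Failure mode (hypothetical reconstruction): when 'Resource references'
    // is selected, the listener has no explicit JNDI server settings, so
    // building an environment Hashtable from them yields nulls. Hashtable
    // rejects null values, and a half-populated environment cannot resolve
    // the destination queue.
    static Context contextFromListenerSettings(String factory, String url)
            throws NamingException {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, factory); // NPE if null
        env.put(Context.PROVIDER_URL, url);                // NPE if null
        return new InitialContext(env);
    }

    // Corrected approach: the no-arg constructor inherits the container's
    // JNDI environment (WebSphere Liberty here), so java:comp/env resource
    // references resolve without any listener-supplied settings.
    static Context containerContext() throws NamingException {
        return new InitialContext();
    }

    public static void main(String[] args) throws NamingException {
        Context ctx = containerContext();
        // 'jms/MyQueue' is a placeholder resource reference.
        Object destination = ctx.lookup("java:comp/env/jms/MyQueue");
        System.out.println("Resolved: " + destination);
    }
}
```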
SR-D50664 · Issue 531686
Support added for tracing PageList during automations
Resolved in Pega Version 8.3.3
When using the out-of-the-box email reply on an email triage case, an exception was observed in the tracer stating "com.pega.pegarules.pub.clipboard.WrongModeException: The property To.pxResults was of mode Page List while com.pega.pegarules.data.internal.clipboard.ClipboardPropertyImpl.getObjectValue() was expecting Java Object mode." This has been resolved by adding support for tracing PageList during automations.
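The sketch below is a self-contained illustration of the underlying mode mismatch, not Pega's clipboard implementation: a simplified property holder that throws when getObjectValue() is called on a page-list value, and a tracer that checks the mode first, which is conceptually what the fix does:

```java
import java.util.List;

public class ModeMismatchSketch {
    enum Mode { JAVA_OBJECT, PAGE_LIST }

    // Simplified stand-in for a clipboard property (not Pega's class).
    static class Property {
        final Mode mode;
        final Object value;
        Property(Mode mode, Object value) { this.mode = mode; this.value = value; }

        // Mirrors the failing call: getObjectValue() is only valid for
        // JAVA_OBJECT mode, hence the WrongModeException in the tracer.
        Object getObjectValue() {
            if (mode != Mode.JAVA_OBJECT)
                throw new IllegalStateException(
                    "Property was of mode " + mode + " while JAVA_OBJECT was expected");
            return value;
        }
    }

    // The fix, conceptually: the tracer checks the mode and handles
    // PAGE_LIST values (e.g., To.pxResults) instead of assuming JAVA_OBJECT.
    static Object traceValue(Property p) {
        if (p.mode == Mode.PAGE_LIST) {
            return List.copyOf((List<?>) p.value);   // trace each page entry
        }
        return p.getObjectValue();
    }

    public static void main(String[] args) {
        Property pageList = new Property(Mode.PAGE_LIST, List.of("page1", "page2"));
        System.out.println(traceValue(pageList));    // no exception now
    }
}
```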
SR-D94505 · Issue 553016
Toggles added to allow file listener performance improvement
Resolved in Pega Version 8.3.3
In order to offer flexibility in improving file listener CPU usage while processing intake files, rsf.setSaveIntermediateState( true ) has been replaced with rsf.setSaveIntermediateState( listener.getListenerProperties().getAttemptRecovery() ) in FileActionImpl. This change allows the option of not selecting the attempt-recovery setting on the file listener rule form, which skips the intermediate state save normally performed per record/per file in case recovery is needed, thereby speeding up processing. A toggle has also been added to control whether the system should bypass the ListenerState check at the end of each record. To use this, set the Dynamic System Setting "listener/skipListenerStateCheck" in the "Pega-IntegrationEngine" ruleset to true. Additional Installation Instructions: create/set the Dynamic System Setting "listener/skipListenerStateCheck" in the "Pega-IntegrationEngine" ruleset to true.
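Conceptually, the change gates the per-record state save behind the recovery flag. The following simplified sketch (illustrative class and field names, not Pega's FileActionImpl) shows how the two toggles interact:

```java
import java.util.List;

public class FileProcessingSketch {

    // Stand-ins for the rule-form checkbox and the DSS toggle; the names
    // mirror the release note but the class itself is illustrative.
    static boolean attemptRecovery = false;          // rule-form recovery setting
    static boolean skipListenerStateCheck = true;    // listener/skipListenerStateCheck DSS

    static void process(List<String> records) {
        for (String record : records) {
            handle(record);
            // Previously unconditional; now only persisted when recovery
            // was requested, saving one write per record.
            if (attemptRecovery) {
                saveIntermediateState(record);
            }
            // Second toggle: optionally bypass the listener-state check
            // that otherwise runs at the end of each record.
            if (!skipListenerStateCheck) {
                checkListenerState();
            }
        }
    }

    static void handle(String record) { System.out.println("processed " + record); }
    static void saveIntermediateState(String record) { /* persist recovery point */ }
    static void checkListenerState() { /* verify listener is still healthy */ }

    public static void main(String[] args) {
        process(List.of("rec-1", "rec-2", "rec-3"));
    }
}
```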
SR-D89428 · Issue 550391
Data Flow StartTime uses locale timezone
Resolved in Pega Version 8.3.3
The start time of the data flow was displayed in GMT instead of the operator's locale timezone. This has been corrected.
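The distinction can be illustrated with standard java.time APIs; the timestamp and operator timezone below are placeholders:

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;

public class StartTimeDisplay {
    public static void main(String[] args) {
        Instant startTime = Instant.parse("2020-06-01T13:00:00Z"); // stored in GMT

        // Before the fix: rendered in GMT regardless of the operator.
        ZonedDateTime gmtView = startTime.atZone(ZoneId.of("GMT"));

        // After the fix: rendered in the operator's locale timezone
        // ("America/New_York" stands in for the operator's setting).
        ZonedDateTime operatorView = startTime.atZone(ZoneId.of("America/New_York"));

        DateTimeFormatter fmt = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm z");
        System.out.println("GMT:      " + gmtView.format(fmt));
        System.out.println("Operator: " + operatorView.format(fmt));
    }
}
```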
SR-D77157 · Issue 544471
DataSet preview will use date instead of datetime
Resolved in Pega Version 8.3.3
While using the DataSet preview functionality, dates appeared reduced by one day. This has been resolved by parsing the value as 'date' instead of 'datetime' to avoid issues with timezone interactions.
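The off-by-one behavior is a standard timezone artifact, illustrated below with java.time (the stored value and zones are examples): midnight of a datetime in one zone falls on the previous calendar day in a zone further west, while a plain date has no timezone component at all.

```java
import java.time.LocalDate;
import java.time.LocalDateTime;
import java.time.ZoneId;
import java.time.ZonedDateTime;

public class DatePreviewSketch {
    public static void main(String[] args) {
        String stored = "2020-06-01";

        // Parsing as a datetime drags in a timezone: midnight UTC shown in a
        // timezone west of GMT falls on the previous calendar day.
        ZonedDateTime asDatetime = LocalDateTime.parse(stored + "T00:00:00")
                .atZone(ZoneId.of("UTC"));
        LocalDate shifted = asDatetime
                .withZoneSameInstant(ZoneId.of("America/Los_Angeles"))
                .toLocalDate();
        System.out.println("As datetime: " + shifted);   // 2020-05-31 (off by one)

        // Parsing as a plain date has no timezone component, so the preview
        // shows the stored value unchanged.
        LocalDate asDate = LocalDate.parse(stored);
        System.out.println("As date:     " + asDate);    // 2020-06-01
    }
}
```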
SR-D87709 · Issue 552397
Default context check added for saving adaptive model with locked rulesets
Resolved in Pega Version 8.3.3
When updating an adaptive model rule in Prediction Studio, the error message "No unlocked Rulesets/Versions found that are valid for this record. Unlock at least one Ruleset/Version that can contain records of this type." appeared on clicking Save. This occurred when a branch was used in the default context of the Prediction Studio settings. Although there was a workaround of using Dev Studio to Save As the adaptive model rule to the required branch, this has been resolved by adding a check for the default context and saving the model there if one is specified.
SR-D89012 · Issue 550799
DelegatedRules refresh icon made accessible
Resolved in Pega Version 8.3.3
When using Accessibility, the refresh icon in pzDelegatedRules was being read as "Link". This has been corrected by adding text for the refresh icon.
SR-D85558 · Issue 548285
Handling added for prolonged Heartbeat Update Queries
Resolved in Pega Version 8.3.3
After restart, the pyFTSIncrementalIndexer queue size had hundreds of thousands of entries even though it was empty prior to the restart. Investigation traced this to a job scheduler that checked all of the database connections every day at 1:00 EST using a list that contained some connections which did not exist. Checking those invalid connections caused other update queries to queue and wait, resulting in the heartbeat update query taking longer than its default beat. This caused a split-brain issue wherein other nodes considered the long-executing node to be dead and triggered a rebalance while the node itself continued to execute partitions, believing it was healthy, resulting in duplicate processing of records. To resolve this, a failsafe has been added: while updating the heartbeat in the Service Registry, nodes will enter safe mode when the update query takes longer than the default beat.
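The failsafe can be pictured as timing the heartbeat write itself. The sketch below is a conceptual reconstruction (the class, beat interval, and logging are illustrative, not Pega's Service Registry code):

```java
import java.time.Duration;
import java.time.Instant;

public class HeartbeatSketch {
    // Placeholder for the service-registry default beat interval.
    static final Duration DEFAULT_BEAT = Duration.ofSeconds(30);

    static volatile boolean safeMode = false;

    // Conceptual failsafe: time the heartbeat update itself; if the database
    // write takes longer than one beat, other nodes may already consider this
    // node dead, so it stops claiming work instead of processing duplicates.
    static void updateHeartbeat() {
        Instant start = Instant.now();
        writeHeartbeatRow();                         // the update query
        Duration elapsed = Duration.between(start, Instant.now());
        if (elapsed.compareTo(DEFAULT_BEAT) > 0) {
            safeMode = true;                         // enter safe mode
            System.err.println("Heartbeat took " + elapsed
                    + "; entering safe mode until the registry is consistent");
        }
    }

    static void writeHeartbeatRow() { /* UPDATE of the service registry row */ }

    public static void main(String[] args) {
        updateHeartbeat();
        System.out.println("safeMode=" + safeMode);
    }
}
```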
SR-D82727 · Issue 547722
Improved management for table pr_log_dataflow_events
Resolved in Pega Version 8.3.3
The lifecycle event table was sometimes growing too large. The added database transaction volume caused poor performance on the Dataflow tier and led to cluster instability and time-consuming cluster restarts. Due to problems in one of the Pulse tasks, the Pulse thread was not processing single-case metrics properly, causing the unbounded single-case queue to grow. This has been addressed by switching to a fixed queue size, configurable with the DSS dnode/single_case_queue_size. The default value is 4000; changing it requires a system restart. An error will be logged every 1000 queue misses, and metrics will be dropped if the queue is full. In addition, the Pulse task frequency has been adjusted to prevent interference with other Pulse tasks, and the task will be triggered only if a run is system-paused for a long interval. Rebalances now have a failsafe if something unexpected happens while pausing the run, and if the cluster becomes unstable, the lifecycle event logs may be disabled with dataflow/run/events/persist.
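The bounded-queue behavior can be sketched with a standard ArrayBlockingQueue. The class below is illustrative, not Pega's implementation, though the 4000 default and the once-per-1000-misses logging mirror the note:

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.atomic.AtomicLong;

public class SingleCaseMetricsQueue {
    // Mirrors the dnode/single_case_queue_size default of 4000.
    private final ArrayBlockingQueue<String> queue = new ArrayBlockingQueue<>(4000);
    private final AtomicLong misses = new AtomicLong();

    // Metrics are dropped when the queue is full; an error is logged once
    // per 1000 misses rather than per event, to avoid flooding the log.
    void offerMetric(String metric) {
        if (!queue.offer(metric)) {
            long missed = misses.incrementAndGet();
            if (missed % 1000 == 0) {
                System.err.println("Dropped " + missed + " single-case metrics; queue full");
            }
        }
    }

    public static void main(String[] args) {
        SingleCaseMetricsQueue q = new SingleCaseMetricsQueue();
        for (int i = 0; i < 10_000; i++) {
            q.offerMetric("metric-" + i);
        }
        System.out.println("Queued: " + q.queue.size()); // 4000; the rest dropped
    }
}
```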
SR-D89643 · Issue 548290
Old Tumbling Time data in event strategies given TTL for cleanup
Resolved in Pega Version 8.3.3
Old Tumbling Time data keys in event strategies were not being cleaned up, causing Cassandra timeouts after a dataflow run had been active for several months. The longer the dataflow ran using standard compaction, the more the data spread out across SSTables and the slower reads became. This has been resolved by adding a 'time to live' value for tumbling time windows, and event strategies have been switched to use leveled compaction by default.
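For illustration only, the CQL below (executed here via the DataStax Java driver; the keyspace, table, columns, and 7-day TTL are placeholder assumptions, since the note does not state Pega's actual schema or TTL value) shows how a TTL and leveled compaction are applied in Cassandra:

```java
import com.datastax.oss.driver.api.core.CqlSession;

public class TumblingWindowTtl {
    public static void main(String[] args) {
        try (CqlSession session = CqlSession.builder().build()) {
            // Leveled compaction keeps reads bounded as window data ages,
            // instead of spreading rows across ever more SSTables.
            session.execute(
                "ALTER TABLE strategy.tumbling_windows "
              + "WITH compaction = {'class': 'LeveledCompactionStrategy'}");

            // TTL of 7 days (604800 seconds): expired window keys are then
            // removed by Cassandra instead of accumulating for months.
            session.execute(
                "INSERT INTO strategy.tumbling_windows (key, payload) "
              + "VALUES ('window-1', 'events...') USING TTL 604800");
        }
    }
}
```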