INC-157357 · Issue 636713
Hazelcast remote execution not called from synchronized context
Resolved in Pega Version 8.5.4
After navigating to the Admin Studio portal to view the nodes, the portal was temporarily freezing. Investigation of the thread dump revealed this was caused by a DDS pulse sending a remote execution call to all nodes to update logger settings even though the site was not using DDS. This has been resolved by updating the system to avoid calling Hazelcast remote execution from a synchronized context.
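The freeze described above is the classic hazard of making a blocking remote call while holding a lock: every other thread that needs the same lock stalls until the remote call returns. A minimal sketch of the lock-free pattern, with illustrative names (LoggerSettingsBroadcaster and sendToAllNodes are assumptions for this example, not Pega internals): snapshot the shared state inside the synchronized block, then issue the remote call outside it.

```java
import java.util.ArrayList;
import java.util.List;

public class LoggerSettingsBroadcaster {
    private final Object lock = new Object();
    private final List<String> loggerSettings = new ArrayList<>();

    // Returns the payload that was broadcast (for illustration/testing).
    public String updateAndBroadcast(String setting) {
        String snapshot;
        synchronized (lock) {            // mutate shared state only
            loggerSettings.add(setting);
            snapshot = String.join(",", loggerSettings);
        }
        return sendToAllNodes(snapshot); // blocking remote call made lock-free
    }

    // Stand-in for the Hazelcast remote-execution call.
    String sendToAllNodes(String payload) {
        return "broadcast:" + payload;
    }
}
```

Because the snapshot is taken under the lock, callers never observe a half-updated state, yet no thread is ever blocked behind the network round-trip.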
INC-157629 · Issue 626635
Duplicate key exception resolved for adaptive model
Resolved in Pega Version 8.5.4
During the model snapshot update, a DuplicateKeyException was generated while trying to insert a record into the predictor table. This did not affect the model's learning, but it did appear in the model monitoring report. The issue was traced to a localized scenario in which the same outcome value was defined on the model with different casing (Accept and accept). All predictors used in an Adaptive model are inserted into the model monitoring tables as part of the monitoring job: because the monitoring tables are not case sensitive, this led to a unique constraint exception when multiple IH predictors had the same name. To resolve this, validation has been added that will skip adding duplicates from new responses.
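The validation step amounts to deduplicating names case-insensitively before inserting into a table whose uniqueness check ignores case. A minimal sketch of that idea (PredictorDedup is an illustrative name, not the Pega implementation):

```java
import java.util.ArrayList;
import java.util.LinkedHashSet;
import java.util.List;
import java.util.Set;

public class PredictorDedup {
    // Keep the first occurrence of each name; drop later case-variants
    // that would violate a case-insensitive unique constraint.
    public static List<String> dedupeIgnoreCase(List<String> names) {
        Set<String> seen = new LinkedHashSet<>();
        List<String> kept = new ArrayList<>();
        for (String name : names) {
            if (seen.add(name.toLowerCase())) { // normalize for comparison only
                kept.add(name);                 // original casing preserved
            }
        }
        return kept;
    }
}
```

For the scenario above, the input ("Accept", "accept") would be reduced to a single entry before the insert is attempted.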
INC-159836 · Issue 631204
Upgraded Apache UIMA Ruta libraries to resolve memory leak
Resolved in Pega Version 8.5.4
A memory leak issue that resulted in a reboot being needed every few days was traced to the class org.apache.uima.ruta.rule.RuleMatch. This has been resolved by upgrading the Apache UIMA Ruta libraries to v2.8.1. A high level of exception logging may be seen under high loads due to annotations in the standard Ruta scripts; this will not impact execution and further work will resolve this in a future release.
INC-161604 · Issue 631763
Corrected unreleased database connections
Resolved in Pega Version 8.5.4
When a custom activity called Dataset-execute on a database table dataset while the DSS Pega-Engine.prconfig/classmap/usemergestatement/default was set to false, the prepared statement for the dataset failed. The exception generated on that failure prevented the database connection from being released back to the pool. This has been corrected.
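The general defect class here is a resource leak on the exception path: the connection is released only on the success path, so a failed statement strands it. A minimal sketch of the guard that prevents this, using a fake pooled resource so the behavior is observable (ConnectionGuard and FakeConnection are illustrative, not Pega or JDBC classes):

```java
public class ConnectionGuard {
    public static int released = 0;

    // Stand-in for a pooled JDBC connection; close() returns it to the pool.
    static class FakeConnection implements AutoCloseable {
        @Override public void close() { released++; }
    }

    // try-with-resources closes the connection on success and on failure alike.
    public static boolean runAndRelease(boolean fail) {
        try (FakeConnection conn = new FakeConnection()) {
            if (fail) {
                throw new RuntimeException("prepared statement failed");
            }
            return true;
        } catch (RuntimeException e) {
            return false; // exception handled; connection already released
        }
    }
}
```

Whether runAndRelease succeeds or throws, the release counter advances, which is exactly the guarantee the original code path lacked.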
INC-161604 · Issue 631761
Node stability improved when adding relevant records
Resolved in Pega Version 8.5.4
A node was going down after maxing out the database connection pool whenever the inbound service call was invoked. This was traced to several relevant-record database queries being issued within a short time to mark 'when' rules referenced in the proposition filter rule as relevant records. This has been resolved by making the addition of 'when' rules as Relevant Records conditional on a DSS and by no longer adding auto-generated 'when' rules at all.
INC-161829 · Issue 645199
EditElement IndexOutOfBoundsException addressed
Resolved in Pega Version 8.5.4
Attempting to run the activity pyEditElement was failing with an IndexOutOfBoundsException error. Because Java compiles circumstanced pyEditElement sections in the system from the non-circumstanced base version that exists only in @baseclass, the Java compilation of pyEditElement can end up exceeding the 65,000-byte threshold as the number of Decision Data rule instances and the corresponding properties grows across multiple ruleset versions. As an intermediate fix for the IndexOutOfBoundsException, the ruleset version has been removed from the circumstance value to reduce the generated Java. Please note that this may result in a Decision Data rule that was created or updated in a branch and then merged to a ruleset version failing a subsequent checkout from the same merged ruleset version. Further work will be done on this issue in the next update.
INC-163597 · Issue 634741
Node stability improved when adding relevant records
Resolved in Pega Version 8.5.4
A node was going down after maxing out the database connection pool whenever the inbound service call was invoked. This was traced to several relevant-record database queries being issued within a short time to mark 'when' rules referenced in the proposition filter rule as relevant records. This has been resolved by making the addition of 'when' rules as Relevant Records conditional on a DSS and by no longer adding auto-generated 'when' rules at all.
INC-163723 · Issue 633440
Queue Processors made more robust
Resolved in Pega Version 8.5.4
After upgrade, multiple queue processors were not running as expected. Attempting to restart them generated an error. Investigation showed that the real time data flow runs were not picking up or accepting assignments because the local node was under the impression it was still processing data. In this case, the need to synchronize the state of multiple threads caused the queue processors to become stuck in an initializing state due to a race condition that caused the data flow engine to think this run still had threads running when all threads were already stopped. To resolve this, the callback handling has been simplified and made more robust. In addition, in some cases the data flow leader node would believe the service nodes did not accept assignments even when they did. This occurred if many runs and nodes were involved, and was traced to an implicit limit on the NativeSQL query used to read the data to see which assignments were accepted. To resolve this, the key-value store in the Service Registry has been modified to allow a query of more than 500 entries at once.
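The race described above is the classic "last thread out" problem: if each worker's completion is recorded non-atomically, the engine can observe a stale count and conclude a run is still active after every thread has stopped. A minimal sketch of one robust callback shape under assumed names (RunState and onThreadStopped are illustrative, not the Pega data flow engine's API):

```java
import java.util.concurrent.atomic.AtomicInteger;

public class RunState {
    private final AtomicInteger active;
    private volatile boolean finished = false;

    public RunState(int threadCount) {
        this.active = new AtomicInteger(threadCount);
    }

    // Each worker's completion callback calls this exactly once; only the
    // caller that decrements the counter to zero marks the run finished,
    // so the transition happens exactly once and is never missed.
    public void onThreadStopped() {
        if (active.decrementAndGet() == 0) {
            finished = true;
        }
    }

    public boolean isFinished() {
        return finished;
    }
}
```

The single atomic decrement replaces any check-then-act sequence on shared state, which is the kind of simplification the resolution describes.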
INC-164581 · Issue 634155
Import UI option updated to handle data import for upgrades
Resolved in Pega Version 8.5.4
After upgrade from Pega 8.1 to 8.5, importing data into the datasets using the Actions->Import UI option was not working. This was due to the previous save operation used in import being deprecated, and has been resolved by instituting a new save operation in import to handle this scenario.
INC-165513 · Issue 645683
Queue Processors made more robust
Resolved in Pega Version 8.5.4
After upgrade, multiple queue processors were not running as expected. Attempting to restart them generated an error. Investigation showed that the real time data flow runs were not picking up or accepting assignments because the local node was under the impression it was still processing data. In this case, the need to synchronize the state of multiple threads caused the queue processors to become stuck in an initializing state due to a race condition that caused the data flow engine to think this run still had threads running when all threads were already stopped. To resolve this, the callback handling has been simplified and made more robust. In addition, in some cases the data flow leader node would believe the service nodes did not accept assignments even when they did. This occurred if many runs and nodes were involved, and was traced to an implicit limit on the NativeSQL query used to read the data to see which assignments were accepted. To resolve this, the key-value store in the Service Registry has been modified to allow a query of more than 500 entries at once.