INC-161604 · Issue 631764
Corrected unreleased database connections
Resolved in Pega Version 8.6
When using a custom activity that called Dataset-execute on a database table dataset with the DSS Pega-Engine.prconfig/classmap/usemergestatement/default set to false, the prepared statement for the database table dataset failed. The resulting exception prevented the database connection from being released. This has been corrected.
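The underlying pattern is the standard JDBC contract that a pooled connection must be released even when statement preparation or execution throws. Below is a minimal sketch of that pattern using plain JDBC with a hypothetical table and helper; it is not Pega's internal dataset code.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    public class SafeQuery {
        // Try-with-resources guarantees the connection is returned to the pool
        // even if prepareStatement or executeQuery throws.
        public static int countRows(DataSource pool, String region) throws SQLException {
            String sql = "SELECT COUNT(*) FROM customers WHERE region = ?"; // hypothetical table
            try (Connection conn = pool.getConnection();
                 PreparedStatement stmt = conn.prepareStatement(sql)) {
                stmt.setString(1, region);
                try (ResultSet rs = stmt.executeQuery()) {
                    return rs.next() ? rs.getInt(1) : 0;
                }
            } // connection, statement, and result set are closed here on success or failure
        }
    }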
INC-161604 · Issue 631762
Node stability improved when adding relevant records
Resolved in Pega Version 8.6
A node was going down after exhausting the database connection pool whenever the inbound service call was invoked. This was traced to multiple relevant-record database queries being issued within a short time to mark 'when' rules referenced in the proposition filter rule as relevant records. This has been resolved by adding 'when' rules as Relevant Records based on a DSS setting and by not adding auto-generated 'when' rules at all.
INC-163723 · Issue 633441
Queue Processors made more robust
Resolved in Pega Version 8.6
After upgrade, multiple queue processors were not running as expected, and attempting to restart them generated an error. Investigation showed that the real-time data flow runs were not picking up or accepting assignments because the local node believed it was still processing data. A race condition in synchronizing the state of multiple threads left the queue processors stuck in an initializing state: the data flow engine thought the run still had threads running when all threads had already stopped. To resolve this, the callback handling has been simplified and made more robust. In addition, in some cases the data flow leader node believed the service nodes had not accepted assignments even when they had. This occurred when many runs and nodes were involved, and was traced to an implicit limit on the NativeSQL query used to read which assignments were accepted. To resolve this, the key-value store in the Service Registry has been modified to allow querying more than 500 entries at once.
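The second part of the fix addresses a per-query cap: when a single key-value query returns at most a fixed number of entries (here 500), a caller that assumes one call returns everything will silently miss assignments. The sketch below illustrates the generic paging idea, with a hypothetical fetchPage callback standing in for the Service Registry query; it is not Pega's implementation.

    import java.util.ArrayList;
    import java.util.List;
    import java.util.function.BiFunction;

    public class PagedReader {
        // fetchPage(offset, pageSize) stands in for any store query that caps a
        // single call; looping until a short page arrives ensures nothing is dropped.
        public static <T> List<T> readAll(BiFunction<Integer, Integer, List<T>> fetchPage,
                                          int pageSize) {
            List<T> all = new ArrayList<>();
            int offset = 0;
            while (true) {
                List<T> page = fetchPage.apply(offset, pageSize);
                all.addAll(page);
                if (page.size() < pageSize) {
                    return all; // last (possibly partial) page reached
                }
                offset += pageSize;
            }
        }
    }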
INC-164581 · Issue 634154
Import UI option updated to handle data import for upgrades
Resolved in Pega Version 8.6
After upgrade from Pega 8.1 to 8.5, importing data into datasets using the Actions->Import UI option did not work. This was due to the save operation previously used by import having been deprecated, and has been resolved by introducing a new save operation in import to handle this scenario.
INC-166354 · Issue 637300
Queue Processors made more robust
Resolved in Pega Version 8.6
After upgrade, multiple queue processors were not running as expected, and attempting to restart them generated an error. Investigation showed that the real-time data flow runs were not picking up or accepting assignments because the local node believed it was still processing data. A race condition in synchronizing the state of multiple threads left the queue processors stuck in an initializing state: the data flow engine thought the run still had threads running when all threads had already stopped. To resolve this, the callback handling has been simplified and made more robust. In addition, in some cases the data flow leader node believed the service nodes had not accepted assignments even when they had. This occurred when many runs and nodes were involved, and was traced to an implicit limit on the NativeSQL query used to read which assignments were accepted. To resolve this, the key-value store in the Service Registry has been modified to allow querying more than 500 entries at once.
SR-69015 · Issue 619995
Unescaping characters implemented for expressions
Resolved in Pega Version 8.3.6, Resolved in Pega Version 8.4.4, Resolved in Pega Version 8.5.3, Resolved in Pega Version 8.6
An issue where expression builder statements were evaluated differently at runtime than during testing has been resolved. Pega Platform expressions with String literals (that is, sequences of characters enclosed in quotation marks) now unescape characters in strategy shapes such as Set Property or Filter.
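As an illustration of what unescaping means here, consider a hypothetical literal typed as "Line1\nLine2" in the expression builder: after unescaping, the escape sequence yields a real newline at runtime, matching the tested behavior. The small sketch below only demonstrates the concept and is not the platform's implementation.

    public class UnescapeExample {
        public static void main(String[] args) {
            // The literal as typed in the expression: backslash + n, two characters.
            String asTyped = "Line1\\nLine2";
            // After unescaping, the escape sequence becomes an actual newline,
            // so runtime evaluation matches what the rule author tested.
            String unescaped = asTyped.replace("\\n", "\n");
            System.out.println(asTyped);
            System.out.println(unescaped);
        }
    }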
SR-D37163 · Issue 505477
Corrected Decision Data import filter behavior
Resolved in Pega Version 8.2.4
After upgrade, a Decision Data import that referenced another component to filter the import did not work. This has been corrected.
SR-D12733 · Issue 488666
Code fragment removed to eliminate Fortify false positive
Resolved in Pega Version 8.2.4
A code remnant related to Boolean.getBoolean(..) in Rule-Declare testConsistency was causing a false positive in a Fortify scan. This code was obsolete and unused, and has been removed.
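For context, Boolean.getBoolean(String) does not parse its argument; it reads a JVM system property of that name, a behavior that is easy to misuse and that static analyzers often flag. The snippet below is only an illustration of that standard Java behavior, not the removed Pega code.

    public class BooleanLookupDemo {
        public static void main(String[] args) {
            // Boolean.getBoolean looks up a system property named "true";
            // this prints false unless the JVM was started with -Dtrue=true.
            System.out.println(Boolean.getBoolean("true"));
            // Boolean.parseBoolean actually parses the string and prints true.
            System.out.println(Boolean.parseBoolean("true"));
        }
    }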
SR-D26976 · Issue 507217
Filter added to ensure correct context for proposition strategy rules
Resolved in Pega Version 8.2.4
Given two applications (for example, App1 and App2) hosted on the same domain where App2 was built on App1, trying to create a strategy rule in App1 and run a test of that strategy using the propositional data component, which internally uses App2 propositions, generated the error: "Failed to find a 'RULE-DECISION-DECISIONPARAMETERS' with the name 'GROUP_2'. There were 1 rules with this name in the rulebase, but none matched this request." Investigation showed the strategy was using the PropositionNoCacheUtils and PropositionTools Java classes to load the propositions at runtime. In these classes, the group classes were browsed from the database irrespective of the application context, causing the strategy run to fail because it could not access the decision data rules in other applications that shared the same SR class as the current application. To resolve this, a filter has been added to the PropositionNoCacheUtils and PropositionTools Java classes to filter out the groups that are not in the current application context.
SR-D36591 · Issue 507537
Kafka producer message size made configurable
Resolved in Pega Version 8.2.4
The Kafka producer was using a default maximum message size of 1.3 MB while the Kafka broker maximum message size was set to 5 MB. This caused large processing queues to eventually throw errors indicating a scheduler.JobExecutionException related to "There was a problem saving an instance of class System-Message-QueueProcessor-DelayedItem". To correct this, a producer message size configuration option has been added, along with additional logging for KafkaSaveOperation.
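The note does not give the exact name of the setting added on the Pega side, but the standard Kafka client knobs involved are the producer's max.request.size (its per-request cap) and the broker's message.max.bytes. The sketch below shows aligning the producer cap with a 5 MB broker limit using the plain Kafka Java client; the class and method names are illustrative.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerConfig;
    import org.apache.kafka.common.serialization.StringSerializer;

    public class ProducerSizeConfig {
        public static KafkaProducer<String, String> create(String bootstrapServers) {
            Properties props = new Properties();
            props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
            props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
            // Raise the producer's per-request cap so it does not reject messages
            // that the broker (message.max.bytes = 5 MB) is willing to accept.
            props.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 5 * 1024 * 1024);
            return new KafkaProducer<>(props);
        }
    }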