INC-218909 · Issue 715282
Override added to delete records for a stream dataset after processing
Resolved in Pega Version 8.6.5
Kafka data was accumulating for a Stream data set due to a high volume of inbound calls. This has been resolved by adding support for overriding pyDeletedProcessed through a DASS in order to remove the records for a particular stream data set (topic) as soon as they are processed by Pega.
INC-219566 · Issue 721290
Handling updated for CSRF in queue processor trace
Resolved in Pega Version 8.6.5
The field-level audit on properties was intermittently not shown in the Audit history table on first viewing for some users. This has been resolved by modifying the queue processor trace to better handle CSRF tokens.
INC-225519 · Issue 724398
Improved handling for thread resolution issues
Resolved in Pega Version 8.6.5
Queue Processor/Dataflow was moving to the STOPPED state due to failed records in its execution. Investigation showed a minor logic issue in the queue processor activity that allowed the Page-Remove step to be called before the pages were actually created. This has been resolved by improving recovery from a cleared ThreadContainer, which could cause thread resolution issues.
INC-163791 · Issue 704030
Simplified default reference time calculations
Resolved in Pega Version 8.8
After a job scheduler was configured to run at Start time = 21:00:00 for Time zone = Europe/London, the scheduler determined 20:00:00 as the next start time. This was due to the next-start-time calculation using the time zone offset of the date and time stored in System-Runtime-Context.pxCreateDateTime, which did not account for changes to the time zone definition, such as daylight saving time, between that stored date and the current time. To resolve this, the default reference time from System Runtime Context is now 'now' instead of Date(0).
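The pitfall with a fixed epoch reference can be illustrated with java.time (a minimal sketch, not Pega's scheduler code): Europe/London happened to observe year-round British Standard Time (UTC+1) at Date(0), i.e. 1970-01-01, so an offset taken at the epoch can differ by an hour from the offset in effect today.

```java
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZoneOffset;

public class ReferenceTimeDemo {
    public static void main(String[] args) {
        ZoneId london = ZoneId.of("Europe/London");
        // Offset at Date(0): 1970-01-01 fell within the UK's year-round
        // British Standard Time period, so the offset is +01:00.
        ZoneOffset atEpoch = london.getRules().getOffset(Instant.EPOCH);
        // Offset at the current instant: +00:00 in winter, +01:00 in summer.
        ZoneOffset atNow = london.getRules().getOffset(Instant.now());
        System.out.println("offset at Date(0): " + atEpoch);
        System.out.println("offset now:        " + atNow);
        // A 21:00:00 start time adjusted using the epoch offset can come
        // out as 20:00:00, which is why 'now' is the safer reference.
    }
}
```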
INC-184964 · Issue 705932
TextMask_Encrypted rule added for use with Oracle
Resolved in Pega Version 8.8
When a property was encrypted by the propertyEncrypt access control policy and masked by the propertyRead access control policy, a "@@getMaskedValueOfText" error was shown. This has been resolved with the addition of a new rule, pxTextMask_Encrypted, for the Oracle product type, which removes extra spaces from the SOURCE string to handle Oracle-specific use cases.
INC-186897 · Issue 681030
DSS DisableAutoComplete setting honored
Resolved in Pega Version 8.8
The DisableAutoComplete DSS setting was not working as expected. This was traced to the system being unable to read the DSS value due to timing issues during database startup, and has been resolved by directing the system to read the setting in PREnvironment.java instead of from the prconfig.
INC-191404 · Issue 689095
Tracer settings made configurable for queue processor
Resolved in Pega Version 8.8
An enhancement has been added to allow configuring tracer settings for the Queue Processor module.
INC-200030 · Issue 698955
Handling added for external Kafka authorization exception
Resolved in Pega Version 8.8
When using external Kafka for the stream service, the dataflow was failing with the error 'QueueProcessorDataSubscriberException' when the topic create permission was missing. As a workaround, the topics could be pre-created, though a "Topic already exists" warning was generated. To resolve this, support for IdempotentWrite, the cluster-wide permission a producer needs, has been added. For more information, see https://docs.confluent.io/platform/current/kafka/authorization.html
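On a broker with ACL authorization enabled, the cluster-wide grant can be made with Kafka's kafka-acls tool; a sketch in which the broker address and the principal name User:pega-producer are illustrative, not values from this fix:

```shell
# Grant the producer principal the cluster-wide IdempotentWrite permission
# (broker address and principal are placeholders for your environment).
kafka-acls.sh --bootstrap-server broker:9092 \
  --add --allow-principal User:pega-producer \
  --operation IdempotentWrite --cluster
```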
INC-202865 · Issue 709921
Shared partition operations performance improvements
Resolved in Pega Version 8.8
A significant performance degradation was seen in queue processor overhead related to maintaining the partition table. This has been resolved by an update that improves partition operations in a shared context.
INC-205938 · Issue 721198
Improved handling for heavy use of PushDailyUserData
Resolved in Pega Version 8.8
The PushDailyUserData agent was causing utility node performance issues due to the amount of data it was fetching from the pr_hourly table. To resolve this, an update has been made so that the agent runs once per day and processes large data sets in chunks.
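Splitting a large fetch into fixed-size batches is a generic pattern; a minimal sketch (the helper name and chunk size are illustrative, not Pega's actual implementation):

```java
import java.util.ArrayList;
import java.util.List;

public class ChunkDemo {
    /** Splits a list into consecutive sublists of at most chunkSize elements. */
    static <T> List<List<T>> chunk(List<T> items, int chunkSize) {
        List<List<T>> chunks = new ArrayList<>();
        for (int i = 0; i < items.size(); i += chunkSize) {
            chunks.add(items.subList(i, Math.min(i + chunkSize, items.size())));
        }
        return chunks;
    }

    public static void main(String[] args) {
        List<Integer> rows = new ArrayList<>();
        for (int i = 0; i < 10; i++) rows.add(i);
        // 10 rows in chunks of 3 -> sublists of sizes 3, 3, 3, 1
        System.out.println(chunk(rows, 3));
    }
}
```

Processing each sublist separately bounds the memory held at any one time, which is the point of chunking a daily bulk job.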