INC-159879 · Issue 627831
Race condition resolved for input/output pipe streams
Resolved in Pega Version 8.5.3
Writing to S3 using a file data set was failing with the error "Exception occurred while uploading file". The system relied on a PipedInputStream to read the file data during upload; that stream must first be connected to the PipedOutputStream that holds the data to be uploaded. Investigation showed a race condition in which, for some use cases, the input stream was read before the input and output streams had been connected, resulting in a "Pipe not connected" error. This has been resolved.
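For reference, a minimal sketch of the underlying Java contract: reading from a PipedInputStream before it is connected to a PipedOutputStream throws exactly this "Pipe not connected" IOException.

    import java.io.IOException;
    import java.io.PipedInputStream;
    import java.io.PipedOutputStream;

    public class PipeRaceSketch {
        public static void main(String[] args) throws IOException {
            PipedInputStream in = new PipedInputStream();

            // Reading before the pipe is connected reproduces the failure:
            try {
                in.read();
            } catch (IOException e) {
                System.out.println(e.getMessage()); // prints "Pipe not connected"
            }

            // Connecting first, then reading, behaves as intended.
            PipedOutputStream out = new PipedOutputStream(in);
            out.write(42);
            System.out.println(in.read()); // prints 42
        }
    }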
INC-160103 · Issue 627603
REST connector creation errors resolved
Resolved in Pega Version 8.5.3
Attempting to create REST Connectors via the Dev Studio wizard using live endpoint invocation ("Add a REST response") was failing with a java.util.LinkedList error. The workaround was to run the REST wizard with file upload samples or to configure the connectors directly on the rule form. In addition, REST Connectors were not executing when DEBUG level logging was turned on and the request had no headers; the workaround was to use any other logging level (INFO, ERROR). These issues have been resolved by updating the system so that pyResponseHeaders are populated appropriately and, if the request has no headers, the system no longer tries to remove the trailing comma that would otherwise have been introduced in the log message.
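A minimal sketch (hypothetical names, not the actual connector code) of the trailing-comma hazard: stripping the last character of a header log string unconditionally corrupts the message when no headers were appended.

    import java.util.Map;

    public class HeaderLogSketch {
        // Hypothetical helper: renders headers as "name=value," pairs for a log line.
        static String formatHeaders(Map<String, String> headers) {
            StringBuilder sb = new StringBuilder();
            headers.forEach((k, v) -> sb.append(k).append('=').append(v).append(','));
            // Strip the trailing comma only when something was appended;
            // doing this unconditionally breaks the no-header case.
            if (sb.length() > 0) {
                sb.setLength(sb.length() - 1);
            }
            return sb.toString();
        }

        public static void main(String[] args) {
            System.out.println(formatHeaders(Map.of("Accept", "application/json")));
            System.out.println(formatHeaders(Map.of())); // empty: nothing to strip
        }
    }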
INC-160360 · Issue 625685
Optimizing helper class enhanced to handle external databases
Resolved in Pega Version 8.5.3
Running a BIX extract that included a manifest for a target database resulted in a null pointer exception during the manifest extraction, and attempting to generate the DDL for the manifest table also failed. This was traced to the helper class using a hardcoded default database when forming queries, causing it to ignore the database config/DADN/prconfig for the Oracle database and form the query using the PegaRules database credentials. The issue only occurred when performing external database operations against a different database platform: an Oracle PegaRules database worked as expected with an Oracle external database, and a Postgres PegaRules database worked with a Postgres external database, but mixing a Postgres PegaRules database with an Oracle external database resulted in the null pointer exception. To resolve this, the helper class has been enhanced to work with external databases by taking the database name as a parameter, so it properly calculates the query based on the type of the target. An error in the name of the class has also been corrected: it is now available as PerformanceHelper rather than the previous "PerformaneHelper".
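A sketch of the fix pattern under stated assumptions (hypothetical names; the platform-specific SQL shown is illustrative, not the actual BIX queries): the target database identifies the platform, and the query is formed from that rather than from a hardcoded default.

    public class PerformanceHelperSketch {
        enum Platform { ORACLE, POSTGRES }

        // The target database is now a parameter instead of a hardcoded default,
        // so the query syntax matches the actual target platform.
        static String buildRowLimitQuery(String table, Platform target, int n) {
            switch (target) {
                case ORACLE:   return "SELECT * FROM " + table + " FETCH FIRST " + n + " ROWS ONLY";
                case POSTGRES: return "SELECT * FROM " + table + " LIMIT " + n;
                default:       throw new IllegalArgumentException("Unknown platform: " + target);
            }
        }

        public static void main(String[] args) {
            System.out.println(buildRowLimitQuery("manifest_table", Platform.ORACLE, 100));
            System.out.println(buildRowLimitQuery("manifest_table", Platform.POSTGRES, 100));
        }
    }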
INC-160767 · Issue 628374
Email headers correctly mapped when using MSGraph
Resolved in Pega Version 8.5.3
The value of "Send Date" was not correctly populated when using MSGraph instead of IMAP, causing the Email Listener to fail. Microsoft populates the "sentDateTime" field in the JSON with the value of the RFC 822 email header "Date:", but this value was not being passed to the Java object of type "Message" as part of the query. To resolve this, ReceivedDateTime and SentDateTime have been added to the select filter of getMessagebymessageID.
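For context, when a Microsoft Graph request uses a $select filter, only the listed properties are returned in the JSON, so sentDateTime must be requested explicitly. A minimal sketch (endpoint and property names per the public Graph API; the message id and token are placeholders):

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    public class GraphSelectSketch {
        public static void main(String[] args) throws Exception {
            String messageId = "AAMkAG...";   // placeholder message id
            String token = "<access-token>";  // placeholder OAuth bearer token

            // Without sentDateTime/receivedDateTime in $select, Graph omits
            // them from the response, leaving "Send Date" unpopulated.
            URI uri = URI.create("https://graph.microsoft.com/v1.0/me/messages/"
                    + messageId + "?$select=subject,sentDateTime,receivedDateTime");

            HttpRequest request = HttpRequest.newBuilder(uri)
                    .header("Authorization", "Bearer " + token)
                    .GET()
                    .build();

            HttpResponse<String> response = HttpClient.newHttpClient()
                    .send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.body());
        }
    }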
INC-143121 · Issue 610735
Timeout for loading predictors made configurable
Resolved in Pega Version 8.5.3
When an extremely large number of predictors was used, the Report definition pzADMPredictorsFilter timed out because loading the predictors from the database exceeded the allowed time threshold. This has been resolved by marking the rule as editable, allowing the threshold to be customized as needed.
INC-150395 · Issue 625070
Tokenizer updated to handle commas
Resolved in Pega Version 8.5.3
The Text Analyzer was not working as expected when a number was immediately followed by a comma (,), but worked when a space separated the number and the comma. This was traced to the tokenizer not correctly processing and splitting the input text when a special character appeared directly before or after a token. This has been resolved by updating the tokenizer logic.
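A minimal sketch (hypothetical; not the product tokenizer) of splitting punctuation that directly abuts a token, so "100," and "100 ," tokenize identically:

    import java.util.ArrayList;
    import java.util.List;
    import java.util.regex.Matcher;
    import java.util.regex.Pattern;

    public class TokenizerSketch {
        // Word characters form one token; any other non-space character
        // becomes its own token, even with no whitespace around it.
        private static final Pattern TOKEN = Pattern.compile("\\w+|[^\\w\\s]");

        static List<String> tokenize(String text) {
            List<String> tokens = new ArrayList<>();
            Matcher m = TOKEN.matcher(text);
            while (m.find()) {
                tokens.add(m.group());
            }
            return tokens;
        }

        public static void main(String[] args) {
            System.out.println(tokenize("price 100, units"));  // [price, 100, ,, units]
            System.out.println(tokenize("price 100 , units")); // same tokens
        }
    }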
INC-150873 · Issue 612897
Performance improvement for saving ADM model rule
Resolved in Pega Version 8.5.3
Saving an ADM model rule generated a heap dump, and the stack trace showed a single thread consuming most of the available memory (4.7 GB). Configurations on all factories are updated when a model rule is saved; at development time it was not expected that a Dev environment would contain many factories, so the system loaded all existing factories into memory simultaneously and updated the configurations on them. To improve performance, the system now loads the factories sequentially and updates the configuration on each in turn.
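A sketch of the memory-shape change (hypothetical names; a sketch under the assumptions above, not the actual ADM code): one factory is resident at a time instead of all of them at once.

    import java.util.List;

    public class FactoryUpdateSketch {
        interface FactoryStore {
            List<String> listFactoryIds();
            Factory load(String id); // hypothetical: loads a single factory
        }

        interface Factory {
            void applyConfiguration(String modelRuleKey);
        }

        // Before: every factory was materialized up front, so peak heap grew
        // with the total number of factories. Now each factory is loaded,
        // updated, and released before the next one is touched.
        static void updateAll(FactoryStore store, String modelRuleKey) {
            for (String id : store.listFactoryIds()) {
                Factory factory = store.load(id);
                factory.applyConfiguration(modelRuleKey);
                // the factory becomes unreachable here and can be collected
            }
        }
    }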
INC-151037 · Issue 609626
Enhanced ADM diagnostic logging
Resolved in Pega Version 8.5.3
Diagnostic logging has been enhanced to better identify issues where ADM models are not created or where learning is impacted.
INC-151421 · Issue 614596
Adaptive Model Save-As works with predictor type
Resolved in Pega Version 8.5.3
A predictor change that was accounted for in a previous adaptive model rule version did not carry over on a subsequent save-as. This was traced to a missed use case: when the value of 'pyPrevPredTransformation' is empty, validation on it is unnecessary. This has been resolved by adding an 'if' condition to the validation activity.
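A minimal sketch of the added guard (a hypothetical Java rendering of the activity logic, not the actual rule):

    public class PredictorValidationSketch {
        // Hypothetical rendering of the added 'if' condition: when
        // pyPrevPredTransformation is empty, there is nothing to validate.
        static boolean needsValidation(String pyPrevPredTransformation) {
            return pyPrevPredTransformation != null
                    && !pyPrevPredTransformation.isEmpty();
        }

        public static void main(String[] args) {
            System.out.println(needsValidation(""));    // false: skip validation
            System.out.println(needsValidation("log")); // true: run validation
        }
    }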
INC-153223 · Issue 613705
DSS added to set Cassandra query page size limit
Resolved in Pega Version 8.5.3
When a site with a large number of nodes captured responses to the commit log, it was possible for nodes to run out of heap space, causing system instability. This was due to the Cassandra query not specifying a page size, so the cluster-wide default of 5000 created an issue for large sites, which could have over 150 nodes capturing responses. This has been resolved by specifying the page size on the Cassandra query used to read responses from the commit log. The value can be set dynamically with a DSS so that no more than n responses are read from all shards combined in a single batch.
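For reference, page size on a Cassandra read is set per statement; a minimal sketch using the DataStax Java driver 3.x fetch-size API (contact point, keyspace, table, and the page-size value are placeholders for what the DSS would supply):

    import com.datastax.driver.core.Cluster;
    import com.datastax.driver.core.ResultSet;
    import com.datastax.driver.core.Row;
    import com.datastax.driver.core.Session;
    import com.datastax.driver.core.SimpleStatement;
    import com.datastax.driver.core.Statement;

    public class CommitLogPageSizeSketch {
        public static void main(String[] args) {
            int pageSize = 500; // placeholder: would come from the DSS setting

            try (Cluster cluster = Cluster.builder().addContactPoint("127.0.0.1").build();
                 Session session = cluster.connect("adm_commitlog")) {

                // Setting an explicit fetch size caps the rows per page,
                // instead of falling back to the cluster-wide default of 5000.
                Statement stmt = new SimpleStatement("SELECT * FROM responses")
                        .setFetchSize(pageSize);

                ResultSet rs = session.execute(stmt);
                for (Row row : rs) {
                    // process one response at a time; the driver fetches
                    // further pages transparently as the iterator advances
                }
            }
        }
    }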