SR-D11909 · Issue 488672
Secure and httponly attributes added to Pega-Perf cookie
Resolved in Pega Version 8.1.6
A vulnerability test identified that the Pega-Perf cookie did not have the secure and httponly attributes set. This has been resolved by adding both attributes to the cookie.
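As a hedged illustration only (not Pega's internal code), the snippet below shows how the secure and HttpOnly attributes are typically set on a servlet cookie; the cookie name is taken from the description above, while the surrounding method is hypothetical.

```java
import javax.servlet.http.Cookie;
import javax.servlet.http.HttpServletResponse;

public class PerfCookieExample {
    // Illustrative only: set the Secure and HttpOnly attributes on a cookie
    // named "Pega-Perf" before adding it to the response.
    static void addPerfCookie(HttpServletResponse response, String value) {
        Cookie cookie = new Cookie("Pega-Perf", value);
        cookie.setSecure(true);    // only send the cookie over HTTPS
        cookie.setHttpOnly(true);  // hide the cookie from client-side JavaScript
        cookie.setPath("/");
        response.addCookie(cookie);
    }
}
```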
SR-D8650 · Issue 475299
Handling added for null sort order in file import metadata
Resolved in Pega Version 8.1.6
A NullPointerException was seen while fetching the sort order of indexes during a file import. This was caused by a null being returned for an index whose metadata did not contain a sort order, and has been resolved by adding a null check: if the index metadata returns "null" as the sort order, the order will be set to "A".
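A minimal sketch of the null-check pattern described above, assuming the sort order is read from a metadata map; the class, method, and key names are illustrative rather than Pega's actual code.

```java
import java.util.Map;

public class IndexMetadataHelper {
    // Hypothetical helper: if an index's metadata has no sort order,
    // default to "A" instead of propagating the null.
    static String resolveSortOrder(Map<String, String> indexMetadata) {
        String sortOrder = indexMetadata.get("sortOrder"); // key name is illustrative
        if (sortOrder == null || "null".equalsIgnoreCase(sortOrder)) {
            return "A"; // default when the metadata omits the sort order
        }
        return sortOrder;
    }
}
```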
SR-C88015 · Issue 427061
Null check tuned for hotfix bulk import
Resolved in Pega Version 8.1.6
If a hotfix contained any changes to rules, the import failed with a null pointer exception. This was traced to a null bulk object. Investigation showed that if debug logging was enabled on ESBulkIndexerEmbedded, the system attempted to get a value off of customBulkRequestBuilder, an object that could potentially be null but that was not subjected to a null check until after the debug statement. This has been resolved by adding a conditional within the debug statement that outputs a different message if the builder is null, preventing the exception.
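The sketch below illustrates the guarded debug statement described above using a generic logger; the logger setup and the builder parameter are assumptions, not the actual ESBulkIndexerEmbedded code.

```java
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class BulkImportLogging {
    private static final Logger LOG = LogManager.getLogger(BulkImportLogging.class);

    // Illustrative guard: the original NPE came from dereferencing a possibly-null
    // builder inside a debug statement that ran before the real null check.
    static void logBulkRequest(Object customBulkRequestBuilder) {
        if (LOG.isDebugEnabled()) {
            if (customBulkRequestBuilder != null) {
                LOG.debug("Bulk request builder state: {}", customBulkRequestBuilder);
            } else {
                LOG.debug("Bulk request builder is not initialized; skipping detail logging");
            }
        }
    }
}
```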
SR-C93572 · Issue 440080
Search initialization logic updated to ensure consistent node member information
Resolved in Pega Version 8.1.6
The search function was intermittently failing across nodes after a restart. Investigation showed that this was due to inconsistent results from the search initialization cluster logic, which used Hazelcast APIs to determine whether a given node was part of a Pega cluster. To resolve this, the logic has been updated to rely on Elasticsearch APIs instead of Hazelcast cluster membership to determine offline nodes.
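As a rough sketch of the approach (not Pega's implementation), node liveness can be checked against Elasticsearch itself via the standard _cat/nodes API rather than a Hazelcast member list; the URL handling and node-name comparison here are illustrative.

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;

public class SearchNodeStatus {
    // Ask Elasticsearch which nodes it currently sees, then check whether the
    // given node name is among them; absent nodes are treated as offline.
    static boolean isNodeKnownToElasticsearch(String esBaseUrl, String nodeName) throws Exception {
        URL url = new URL(esBaseUrl + "/_cat/nodes?h=name"); // standard Elasticsearch cat API
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        try (BufferedReader reader = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
            String line;
            while ((line = reader.readLine()) != null) {
                if (line.trim().equals(nodeName)) {
                    return true; // node is a live member of the Elasticsearch cluster
                }
            }
        }
        return false;
    }
}
```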
SR-D23170 · Issue 493024
Elasticsearch will ignore tokens for not-analyzed fields
Resolved in Pega Version 8.1.6
When performing a re-index, many warnings were thrown indicating that a work item could not be indexed. Investigation showed that this was related to work objects with very long text stored in a text-type property, because Elasticsearch has a token length limit of 32766 bytes. To resolve this, an ignore_above property has been added for not-analyzed fields, which causes Elasticsearch to ignore tokens that exceed the limit.
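A hedged sketch of what such a mapping change can look like, using the standard Elasticsearch ignore_above setting on a keyword (not-analyzed) field; the index name, field name, and endpoint details are illustrative, and the exact mapping Pega generates is not shown here.

```java
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class IgnoreAboveMapping {
    public static void main(String[] args) throws Exception {
        // ignore_above: values longer than the limit are skipped rather than
        // causing the whole document to fail indexing. Note the setting counts
        // characters, so a lower value may be needed for multi-byte text.
        String mapping = "{ \"properties\": { \"workText\": {"
                + " \"type\": \"keyword\", \"ignore_above\": 32766 } } }";
        // Endpoint path can vary with the Elasticsearch version in use.
        URL url = new URL("http://localhost:9200/work-index/_mapping");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("PUT");
        conn.setRequestProperty("Content-Type", "application/json");
        conn.setDoOutput(true);
        try (OutputStream out = conn.getOutputStream()) {
            out.write(mapping.getBytes(StandardCharsets.UTF_8));
        }
        System.out.println("Mapping update returned HTTP " + conn.getResponseCode());
    }
}
```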
SR-D15520 · Issue 497156
Exported formatting corrected to display security scan line found detail
Resolved in Pega Version 8.1.6
Running a Rule-Security-Analyzer (RSA) scan using the pyUnsafeURL regular expression reported found items, but the exported results did not contain the "line found" detail. Investigation showed this was caused by the results being converted to an HTML table when the result itself contained HTML tags, resulting in broken HTML generation. To resolve this, the results are sanitized before being displayed as HTML.
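A minimal sketch of the kind of sanitization described, assuming a simple HTML-escaping step before the matched line is written into a table cell; the helper names are hypothetical.

```java
public class ScanResultExport {
    // Escape HTML-significant characters so markup inside a matched line
    // cannot break the generated table.
    static String escapeHtml(String raw) {
        if (raw == null) {
            return "";
        }
        return raw.replace("&", "&amp;")
                  .replace("<", "&lt;")
                  .replace(">", "&gt;")
                  .replace("\"", "&quot;");
    }

    static String toTableCell(String lineFound) {
        return "<td>" + escapeHtml(lineFound) + "</td>";
    }
}
```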
SR-D447 · Issue 471236
Truststore updated to utilize secure SSL/TLS
Resolved in Pega Version 8.1.6
Creating a Truststore to use with SSL-protected resources by referencing the JKS on the file system resulted in an I/O exception. This was due to the "getCertificate" Activity that applies to Data-Admin-Security-Keystore only supporting the "Upload file" mode for "Keystore location." Because there was a hard dependency on the "pyFileSource" property, using any other option such as "URL" or "Data Page" resulted in a "No Keystore has been uploaded" Runtime Exception. To resolve this, a new function was added in PegaSSLProtocolSocketFactory to pass the setTltlSSLContext() function to set the SSLContext directly from the following activities: AuthenticationLDAPVerifyCredentials, AuthenticationLDAPWebVerifyCredentials, LDAPVerifyCredentials, pyAuthenticationKerberosCredentials, and ValidateDirectoryInfo. In these activities, the SSLContext is created from the keystore and truststore information and set on the PegaSSLProtocolSocketFactory class, which is then used for the LDAP context in the activities.
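For illustration, the snippet below uses standard JSSE APIs to build an SSLContext from a keystore and a truststore on the file system, which is the kind of context the fix passes to PegaSSLProtocolSocketFactory; the file paths, passwords, and method name are placeholders, and the Pega-internal call itself is not reproduced.

```java
import java.io.FileInputStream;
import java.security.KeyStore;
import javax.net.ssl.KeyManagerFactory;
import javax.net.ssl.SSLContext;
import javax.net.ssl.TrustManagerFactory;

public class LdapSslContextExample {
    // Build an SSLContext from a JKS keystore and truststore on the file system.
    static SSLContext buildSslContext(String keystorePath, char[] keystorePass,
                                      String truststorePath, char[] truststorePass) throws Exception {
        KeyStore keyStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(keystorePath)) {
            keyStore.load(in, keystorePass);
        }
        KeyManagerFactory kmf = KeyManagerFactory.getInstance(KeyManagerFactory.getDefaultAlgorithm());
        kmf.init(keyStore, keystorePass);

        KeyStore trustStore = KeyStore.getInstance("JKS");
        try (FileInputStream in = new FileInputStream(truststorePath)) {
            trustStore.load(in, truststorePass);
        }
        TrustManagerFactory tmf = TrustManagerFactory.getInstance(TrustManagerFactory.getDefaultAlgorithm());
        tmf.init(trustStore);

        SSLContext context = SSLContext.getInstance("TLS");
        context.init(kmf.getKeyManagers(), tmf.getTrustManagers(), null);
        return context; // this context would then be handed to the socket factory
    }
}
```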
SR-D8319 · Issue 445547
Cross-site scripting filtering added for case name caption
Resolved in Pega Version 8.1.6
In order to protect against the possibility of executing malicious JavaScript code by entering an appropriately modified name while adding a new case type, pyCaption in menu items has been made HTMLSafe by converting the JSON through the GSON library. An additional fix has been made to use Cross-site scripting filtering to ensure the script does not execute while the page is loaded. Additional handling for Firefox has also been added to normalize tabName to properly display Recents.
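As background for the GSON-based escaping mentioned above, Gson escapes HTML-sensitive characters by default when serializing strings, so markup in a caption is emitted as Unicode escapes rather than live tags; the caption value in this sketch is purely illustrative.

```java
import com.google.gson.Gson;

public class CaptionEscapingExample {
    public static void main(String[] args) {
        String caption = "<script>alert('xss')</script>My Case Type";
        // Gson's default HTML-safe mode escapes <, >, &, =, and ' as \uXXXX
        // sequences, so the emitted JSON cannot inject a live <script> tag.
        String json = new Gson().toJson(caption);
        // Prints something like:
        // "\u003cscript\u003ealert(\u0027xss\u0027)\u003c/script\u003eMy Case Type"
        System.out.println(json);
    }
}
```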
SR-D16970 · Issue 485763
New API introduced to handle DSM service startup failure or delay
Resolved in Pega Version 8.1.6
A Stream node was shown with JOINING_FAILED status in the landing page, but executing a JMX call to get the status on the landing page resulted in a message that it was not a registered bean. This was caused by JMX being registered before service initialization and allowing a node other than itself to be decommissioned. Investigation showed that this feature was developed prior to its use for cloud monitoring, and subsequent development led to the same feature being used via the landing page. To prevent conflicts, a new REST API has been introduced to allow cloud monitoring to manage nodes on which DSM services failed to start up or are still in the process of reaching the "NORMAL" state.
SR-D16971 · Issue 491325
New API introduced to handle DSM service startup failure or delay
Resolved in Pega Version 8.1.6
A Stream node was shown with JOINING_FAILED status in the landing page, but executing a JMX call to get the status on the landing page resulted in a message that it was not a registered bean. This was caused by JMX being registered before service initialization and allowing a node other than itself to be decommissioned. Investigation showed that this feature was developed prior to its use for cloud monitoring, and subsequent development led to the same feature being used via the landing page. To prevent conflicts, a new REST API has been introduced to allow cloud monitoring to manage nodes on which DSM services failed to start up or are still in the process of reaching the "NORMAL" state.