SR-C1507 · Issue 344564
Existing Kafka topic name used for connection
Resolved in Pega Version 7.4
When running a Kafka dataflow, Pega was using the dataset name instead of the topic name for the topic connection. This has been fixed; in addition, the first character of the dataset name is now forced to uppercase to ensure proper matching.
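A minimal sketch of the corrected behavior, in plain Java with illustrative names (resolveTopic, normalizeDatasetName, and configuredTopicName are assumptions; the internal Pega API is not public): the configured topic name takes precedence over the dataset name, and the dataset name's first character is upper-cased for matching.

```java
public class TopicNameResolution {
    // Prefer the explicitly configured topic; previously the dataset name
    // was (incorrectly) used for the topic connection.
    static String resolveTopic(String datasetName, String configuredTopicName) {
        return (configuredTopicName != null && !configuredTopicName.isEmpty())
                ? configuredTopicName
                : datasetName;
    }

    // Force an uppercase first character on the dataset name for matching.
    static String normalizeDatasetName(String datasetName) {
        return Character.toUpperCase(datasetName.charAt(0)) + datasetName.substring(1);
    }

    public static void main(String[] args) {
        System.out.println(resolveTopic("customerEvents", "CustomerTopic")); // CustomerTopic
        System.out.println(normalizeDatasetName("customerEvents"));          // CustomerEvents
    }
}
```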
SR-C2352 · Issue 347253
Data join optimized for performance
Resolved in Pega Version 7.4
The data join implementation has been modified to improve performance for ADE applications built on Pega and based on DSM.
SR-C5585 · Issue 347734
Correct error messages shown for failed data flow records
Resolved in Pega Version 7.4
When running a data flow, failed records were returned with incorrect error messages because the original exception generated by CassandraBrowseByKeysOperations was not propagated to the caller. This has been fixed by attaching the original exception as the cause of the one thrown to the caller.
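A minimal sketch of the cause-chaining pattern described above, assuming a hypothetical DataFlowException wrapper type (the name is illustrative; Pega's internal exception classes are not public):

```java
public class ExceptionPropagation {
    static class DataFlowException extends RuntimeException {
        DataFlowException(String message, Throwable cause) {
            super(message, cause); // keep the original exception attached
        }
    }

    static void browseByKeys() {
        try {
            // Stand-in for an error raised inside the Cassandra operations.
            throw new IllegalStateException("Cassandra read timed out");
        } catch (IllegalStateException e) {
            // Before the fix the cause was dropped; now it travels with the
            // wrapper, so the failed record carries the real error message.
            throw new DataFlowException("Failed to browse records by keys", e);
        }
    }

    public static void main(String[] args) {
        try {
            browseByKeys();
        } catch (DataFlowException e) {
            System.out.println("caller sees: " + e.getCause().getMessage());
        }
    }
}
```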
SR-C6486 · Issue 349824
Check added for Kafka dataflow error handling
Resolved in Pega Version 7.4
When an event dataflow fails because too many errors were detected, it can be continued. However, the "continue" button had to be used twice: the first time, the dataflow failed again and the "Input record" count increased more than expected. For instance, if the event flow was sent two incorrect events, the "Input record" count increased by 3 instead of 1. The second time, the event dataflow resumed correctly. Investigation showed that when a bad record was inserted into Kafka, a dummy error record was generated with no information about the partition and position, so the data flow could not update the partition table correctly. To correct this, the partition and position information is now included on the error record. The data flow execution has also been updated so that when onError is called, a check is performed to assess whether the error originated on the primary source and an input record is present; if so, the partition table is updated from that record.
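A minimal sketch of the bookkeeping described above, with illustrative names (ErrorRecord, onError, and partitionTable are assumptions, not Pega's internal types): because the error record now carries its partition and offset, the run can advance past a bad record instead of re-counting it on resume.

```java
import java.util.HashMap;
import java.util.Map;

public class ErrorRecordTracking {
    // An error record that still carries its source partition and position.
    static class ErrorRecord {
        final int partition;
        final long offset;
        final String reason;
        ErrorRecord(int partition, long offset, String reason) {
            this.partition = partition;
            this.offset = offset;
            this.reason = reason;
        }
    }

    // Per-partition "partition table": the last position processed.
    static final Map<Integer, Long> partitionTable = new HashMap<>();

    // Advance the position only when the error originated on the primary
    // source and an input record is present, as described above.
    static void onError(ErrorRecord record, boolean fromPrimarySource) {
        if (fromPrimarySource && record != null) {
            partitionTable.merge(record.partition, record.offset, Math::max);
        }
    }

    public static void main(String[] args) {
        onError(new ErrorRecord(0, 41L, "malformed event"), true);
        onError(new ErrorRecord(0, 42L, "malformed event"), true);
        System.out.println(partitionTable); // {0=42}: resume skips the bad records
    }
}
```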
SR-C691 · Issue 349064
VBD reworked to retrieve IH DB metadata
Resolved in Pega Version 7.4
Campaign dashboard performance was slow when accessed via the Navigation bar button. This can occur when the PRPC user configured for the app server has restricted access to schema metadata, which prevents VBD from building the SQL query used to synchronize the Actuals data set with Interaction History. It was possible to work around this by setting vbd/useTableMapping = true in prconfig or as a DSS (owning ruleset: Pega-DecisionEngine) to force VBD to load IH database column metadata using the Pega table info mapping instead of the database connection API, but the issue has now been fixed by reworking VBD to use Pega class/database mapping rules instead of JDBC DatabaseMetaData to retrieve the IH fact/dimension table columns.
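A sketch contrasting the two discovery paths, under stated assumptions (the class and column names in CLASS_MAPPING are illustrative, not the actual IH schema): the JDBC DatabaseMetaData path needs schema read permissions on the database account, while a rule-based mapping does not.

```java
import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class IhColumnDiscovery {
    // Pre-fix approach: requires the DB account to have read access to
    // schema metadata, which a restricted app-server account may lack.
    static List<String> columnsViaJdbc(Connection conn, String table) throws SQLException {
        List<String> cols = new ArrayList<>();
        DatabaseMetaData md = conn.getMetaData();
        try (ResultSet rs = md.getColumns(null, null, table, null)) {
            while (rs.next()) {
                cols.add(rs.getString("COLUMN_NAME"));
            }
        }
        return cols;
    }

    // Post-fix approach, sketched: read the class-to-table mapping the
    // platform already maintains (contents here are illustrative).
    static final Map<String, List<String>> CLASS_MAPPING = Map.of(
            "PR_DATA_IH_FACT", List.of("PYFACTID", "PYOUTCOMETIME", "PZINTERACTIONID"));

    static List<String> columnsViaMapping(String table) {
        return CLASS_MAPPING.getOrDefault(table, List.of());
    }

    public static void main(String[] args) {
        // No schema-metadata permission needed on this path.
        System.out.println(columnsViaMapping("PR_DATA_IH_FACT"));
    }
}
```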
SR-C8023 · Issue 350229
Updated dataflow used after pause/continue
Resolved in Pega Version 7.4
When using an event dataflow with a Kafka dataset as the source, running the flow, pausing it, and importing updated rules resulted in the resumed flow still using the old rules. This was a known limitation in data flow metrics management: when a run was resumed, the previous metrics were "merged" with the new metrics, and in cases where the structure of a data flow changed between pause and resume, the merge failed silently and new metrics were not saved to the database. Metrics management has now been updated to merge metrics correctly: if the data flow structure was updated (e.g. shapes added or removed), stage metrics are cleared after the data flow is run, as it is not possible to match them with the new structure, while all other metrics (e.g. number of processed records, throughput, etc.) are properly resumed from the "paused" position.
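A minimal sketch of the merge rule just described, with illustrative names (RunMetrics, merge, and the structure-hash comparison are assumptions about how a structure change might be detected): run-level counters always carry forward, while per-stage metrics are dropped when the structure changed.

```java
import java.util.HashMap;
import java.util.Map;

public class MetricsMerge {
    static class RunMetrics {
        long processedRecords;
        Map<String, Long> perStageCounts = new HashMap<>();
    }

    static RunMetrics merge(RunMetrics paused, String oldStructure, String newStructure) {
        RunMetrics resumed = new RunMetrics();
        // Run-level counters always resume from the paused position.
        resumed.processedRecords = paused.processedRecords;
        if (oldStructure.equals(newStructure)) {
            // Unchanged structure: stage metrics can still be matched to shapes.
            resumed.perStageCounts.putAll(paused.perStageCounts);
        }
        // Otherwise stage metrics start empty rather than failing silently.
        return resumed;
    }

    public static void main(String[] args) {
        RunMetrics paused = new RunMetrics();
        paused.processedRecords = 1000;
        paused.perStageCounts.put("Filter", 400L);
        RunMetrics resumed = merge(paused, "hashA", "hashB");
        System.out.println(resumed.processedRecords);         // 1000
        System.out.println(resumed.perStageCounts.isEmpty()); // true: structure changed
    }
}
```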
SR-B50830 · Issue 318366
Logic update for expandAll to ensure consistency in nested structures
Resolved in Pega Version 7.4
Declare expressions were not triggering properly when using a nested tree structure. This was traced to a ConcurrentModificationException (CME) in the expandAll(true, true) function that prevented a declare expression from firing to incorporate property value changes on the clipboard. To correct this, expandAll() has been modified to expand a stream inside an already expanded page even when the embedded page has not been moved or copied.
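The Pega-internal expandAll() code is not public, but the class of failure is easy to reproduce: a CME is thrown when a collection is structurally modified while it is being iterated, which aborts the traversal and silently skips downstream processing. A minimal stand-alone illustration:

```java
import java.util.ArrayList;
import java.util.ConcurrentModificationException;
import java.util.List;

public class CmeDemo {
    public static void main(String[] args) {
        List<String> pages = new ArrayList<>(List.of("pageA", "pageB"));
        try {
            for (String page : pages) {
                // Expanding a page that appends embedded pages to the same
                // list mid-iteration triggers the CME on the next step.
                if (page.equals("pageA")) {
                    pages.add("pageA.embedded");
                }
            }
        } catch (ConcurrentModificationException e) {
            System.out.println("iteration aborted: " + e);
        }
    }
}
```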
SR-B64752 · Issue 327010
MapValue null value handling added
Resolved in Pega Version 7.4
The exception "clipboard.InvalidReferenceException: The reference * is not valid. Reason: invalid property name: '*'" was being logged sporadically. This exception was thrown when the MapNametoTitle Map Value tried to map the first row as @replaceAll(.pyWorkParty(param.WorkPartySubscript).pyFirstName, "a", "o") and param.WorkPartySubscript was null. The "*" appeared due to a wild card inserted by the system when the value is null. Handling has been added for the null value, and the error message has been updated to be more informative.
SR-B74504 · Issue 328594
ValidateTargetURL now more flexible for port number length
Resolved in Pega Version 7.4
After configuring the system name, clicking "Migrate" in the migration wizard generated the error "Invalid URL". This was due to the pxValidateTargetURL validation encountering an unexpected 5-digit port number in the URL. The regex validation rule has now been made less restrictive when checking the URL generated from the system name.
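The exact pattern used by pxValidateTargetURL is not public; a sketch of the relaxed check, assuming a simple host/port/path shape, would accept any 1- to 5-digit port instead of assuming four digits:

```java
import java.util.regex.Pattern;

public class TargetUrlCheck {
    // host[:port][/path]; the port may be 1-5 digits (the TCP maximum is
    // 65535, so a stricter check could also bound the numeric value).
    static final Pattern TARGET_URL =
        Pattern.compile("^https?://[\\w.-]+(:\\d{1,5})?(/\\S*)?$");

    public static void main(String[] args) {
        System.out.println(TARGET_URL.matcher("http://pegahost:9080/prweb").matches());  // true
        System.out.println(TARGET_URL.matcher("http://pegahost:10080/prweb").matches()); // true: 5-digit port
    }
}
```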
SR-B75131 · Issue 336171
Added missed use case to Date/datetime formatting
Resolved in Pega Version 7.4
Due to a missed use case, the format type for date/datetime values was not sent to the formatter, causing the date format to change when a declare expression was triggered. This has been corrected by adding formattype to the format tokens of the pzGetControlsFormat RUF, along with the added parameter 'formatTypeForFormatting' in the pzFormatControl RUF.
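A minimal sketch of the missed use case, with illustrative patterns and names (the method and patterns here are stand-ins for what the pzGetControlsFormat/pzFormatControl RUFs do internally): the formatter must receive the requested format type, or a date value can be rendered with the datetime pattern after a declare expression fires.

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

public class ControlFormatting {
    static String format(LocalDateTime value, String formatTypeForFormatting) {
        // Before the fix the type was dropped, so "date" values could fall
        // back to the datetime pattern on declare expression trigger.
        DateTimeFormatter fmt = "date".equals(formatTypeForFormatting)
                ? DateTimeFormatter.ofPattern("MM/dd/yyyy")
                : DateTimeFormatter.ofPattern("MM/dd/yyyy h:mm a");
        return value.format(fmt);
    }

    public static void main(String[] args) {
        LocalDateTime ts = LocalDateTime.of(2017, 11, 3, 14, 30);
        System.out.println(format(ts, "date"));     // 11/03/2017
        System.out.println(format(ts, "datetime")); // 11/03/2017 2:30 PM
    }
}
```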