Triggering a real-time event with the Event Stream service
Use the out-of-the-box Event Stream service to trigger Events and respond to them immediately. This method is recommended if the client calls are coming from within the cluster that is running Pega Customer Decision Hub.
You can create real-time runs for data flows that have a data
set that can be streamed in real-time as the primary input. Data flow runs that are initiated
through the Data Flows landing page process the data using the checked-in instance of the Data
Flow rule and the rules that are referenced by that Data Flow rule.
- Log in to Pega Customer Decision Hub as an operator with access to Dev Studio.
- Click .
- On the Real-time processing tab, click New.
- Associate a Data Flow rule with the data flow run by doing the following steps:
- In the Applies to field, select the event class, PegaMKT-Data-Event.
- In the Access group field, select PegaNBAM:Agents as the access group context for the data flow run.
- In the Data flow field, select the ProcessFromEventSource Data Flow rule.
- In the Service instance name field, select RealTime.
- Optional: Configure the number of threads for the data flow nodes. The recommended default value is 5. The number of threads should not exceed the number of cores on the data flow nodes; check your hardware specifications to determine the maximum number of threads to use.
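The thread-count guidance above (default of 5, never more threads than cores) can be sketched as a small helper. This is an illustrative snippet, not Pega code; the function name is hypothetical.

```python
import os

def recommended_thread_count(requested: int = 5) -> int:
    """Cap the requested thread count at the number of available cores,
    per the guidance that threads should not exceed cores on the node.
    The default of 5 matches the recommended default in the run settings."""
    cores = os.cpu_count() or 1  # os.cpu_count() can return None
    return min(requested, cores)
```

For example, on a 4-core data flow node, `recommended_thread_count(5)` yields 4, keeping the run within the hardware limit.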
- Optional: To keep the run active and restart it automatically after every modification, select
Manage the run and include it in the application, and then select
the ruleset. If you move the ruleset to a new environment, the run moves with the ruleset and stays active.
- Optional: Specify any activities that you want to run before the data flow starts or after the
data flow run has completed.
- Expand the Advanced section.
- In the Additional processing section, perform the following
actions:
- Specify a preprocessing activity to run before the data flow starts.
- Specify a postprocessing activity to run after the data flow completes.
- Optional: In the Resilience section, specify the data flow run resilience
settings for resumable or non-resumable data flow runs. You can configure the following
resilience settings:
- Record failure
- Fail the run after more than x failed records – Terminate the processing of the data flow and mark it as failed after the threshold for the allowed total number of failed records is reached or exceeded. If the threshold is not reached or exceeded, the data flow run finishes with errors. The default value is 1000 failed records.
- Node failure
- Restart the partitions on other nodes – For non-resumable data flow runs, transfer the processing to the remaining active Data Flow service nodes. The starting point is based on the first record in the data partition. With this setting enabled, each record can be processed more than once.
- Fail the entire run – For non-resumable data flow runs, terminate the data flow run and mark it as failed when a Data Flow service node fails. This setting provides backward compatibility with previous versions of Pega Platform.
- Snapshot management
- Create a snapshot every x seconds – For resumable data flow runs, specify the elapsed time for creating snapshots of the data flow run's state. The default value is 5 seconds.
- Click Done.
- To analyze the lifecycle of the run and troubleshoot potential issues, in the Run details tab of the data flow run, click View Lifecycle Events.