Triggering a real-time event with the Event Stream service
Use the out-of-the-box Event Stream service to trigger events and respond to them immediately. This method is recommended when the client calls originate from within the cluster that runs Pega Customer Decision Hub.
You can create real-time runs for data flows that have a data set that can be streamed in real time as the primary input. Data flow runs that are initiated through the Data Flows landing page process the data by using the checked-in instance of the Data Flow rule and the rules that it references.
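The exact shape of an event record is defined by the event class in your application (in this procedure, PegaMKT-Data-Event). As a point of reference, here is a minimal sketch of what an in-cluster client might submit as an event payload; every field name in it is a hypothetical example, not a documented property of PegaMKT-Data-Event.

```python
import json
from datetime import datetime, timezone

# Illustrative only: a sketch of an event record that a client inside the
# cluster might submit through the Event Stream service. All field names
# below are hypothetical; the actual properties are defined by the event
# class in your application (here, PegaMKT-Data-Event).
event = {
    "EventName": "AccountBalanceLow",                     # hypothetical event identifier
    "CustomerID": "CUST-000123",                          # hypothetical customer key
    "EventTime": datetime.now(timezone.utc).isoformat(),  # event timestamp
    "Payload": {"Balance": 42.50, "Threshold": 100.00},   # hypothetical event details
}

print(json.dumps(event, indent=2))
```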
- Log in to Pega Customer Decision Hub as an operator with access to Dev Studio.
- Navigate to the Data Flows landing page.
- On the Real-time processing tab, click New.
- Associate a Data Flow rule with the data flow run by completing the following steps:
- In the Applies to field, select the event class, PegaMKT-Data-Event.
- In the Access group field, select PegaNBAM:Agents as the access group context for the data flow run.
- In the Data flow field, select the ProcessFromEventSource Data Flow rule.
- In the Service instance name field, select RealTime.
- Optional: Configure the number of threads for the data flow nodes. The recommended default value is 5. The number of threads should not exceed the number of cores on the data flow nodes; check your hardware specifications to determine the maximum number of threads that you can use. For a quick way to sanity-check this value, see the thread-count sketch after this procedure.
- Optional: To keep the run active and restart it automatically after every modification, select Manage the run and include it in the application, and then select the ruleset. If you move the ruleset to a new environment, the run moves with it and remains active.
- Optional: Specify any activities that you want to run before the data flow starts or after the data flow run has completed.
- Expand the Advanced section.
- In the Additional processing section, perform the following actions:
- Specify a preprocessing activity to run before the data flow starts.
- Specify a postprocessing activity to run after the data flow completes.
- Optional: In the Resilience section, specify the resilience settings for the data flow run. You can configure the following settings; for an illustration of how these thresholds interact, see the resilience sketch after this procedure:
- Record failure
- Fail the run after more than x failed records – Terminate the processing of the data flow and mark the run as failed when the total number of failed records reaches or exceeds the threshold. If the threshold is not reached, the data flow run finishes with errors. The default value is 1000 failed records.
- Node failure
- Resume on other nodes from the last snapshot – When a node fails, the data flow run resumes on the remaining nodes from the last snapshot that it created. You can configure the snapshot interval in the Snapshot management section.
- Snapshot management
- Create a snapshot every x seconds – Specify the elapsed time or the number of processed records that triggers a snapshot of the data flow run state. The condition that is satisfied first triggers the snapshot creation. The default values are 5 seconds and 2,000,000,000 records.
- Click Run.
- To analyze the lifecycle of the run and troubleshoot potential issues, in the Run details tab of the data flow run, click View Lifecycle Events.
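The thread-count sketch below is a quick, generic way to sanity-check the value chosen in the thread-configuration step. It is plain Python run on a data flow node, not part of Pega Customer Decision Hub, and requested_threads is just an example value.

```python
import os

# A minimal sketch for sanity-checking a data flow thread count against the
# hardware: the recommended default is 5, and the count should not exceed
# the number of cores on the data flow node.
requested_threads = 5
available_cores = os.cpu_count() or 1  # os.cpu_count() can return None

if requested_threads > available_cores:
    print(f"Reduce the thread count: {requested_threads} requested, "
          f"but only {available_cores} cores are available.")
else:
    print(f"{requested_threads} threads fits within the {available_cores}-core budget.")
```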
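The resilience sketch below models how the thresholds in the Resilience step interact: a run is marked as failed once the failed-record threshold is reached or exceeded, and a snapshot is triggered by whichever interval condition is satisfied first. It is a simplified illustration of the documented behavior, not Pega's internal implementation.

```python
import time

# Default values from the Resilience and Snapshot management settings.
FAILED_RECORD_THRESHOLD = 1000    # "Fail the run after more than x failed records"
SNAPSHOT_SECONDS = 5              # time-based snapshot interval
SNAPSHOT_RECORDS = 2_000_000_000  # record-based snapshot interval

def run_has_failed(failed_records: int) -> bool:
    # The run is marked as failed once the threshold is reached or exceeded;
    # below the threshold, the run finishes with errors instead.
    return failed_records >= FAILED_RECORD_THRESHOLD

def snapshot_is_due(last_snapshot_at: float, records_since_snapshot: int) -> bool:
    # Whichever condition is satisfied first triggers snapshot creation.
    elapsed = time.time() - last_snapshot_at
    return elapsed >= SNAPSHOT_SECONDS or records_since_snapshot >= SNAPSHOT_RECORDS
```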