Run-time data
Connect to large streams of real-time event and customer data to make your strategies and models more accurate.
To process decision management data in real time, create Stream, Kafka, and Kinesis data sets.
- Creating a Stream data set
Process a continuous stream of events (records) by creating a Stream data set.
- Connecting Kafka and Pega Platform
Apache Kafka is a fault-tolerant and scalable platform that you can use as a data source for real-time analysis of customer records as they arrive. Create Kafka data sets to read data from and write data to Kafka topics, and use that data as a source of events, such as customer calls or messages. Your application can use these events as input for rules that process data in real time and then trigger actions (see the consumer sketch after this list).
- Creating a Kinesis data set
Create a Kinesis data set in Pega Platform to connect to an Amazon Kinesis data stream. The Amazon Kinesis Data Streams service ingests large volumes of data in real time, stores it durably, and makes it available for lightweight processing (see the stream reader sketch after this list).
- The use of streaming data sets in data flows
The configuration of a streaming data set, such as a Kafka, Kinesis, or Stream data set, affects the life cycle of the records that a data flow run consumes from it. Use the following information to prevent duplicate processing or loss of records during data flow runs that use these data sets (the commit-placement sketch after this list illustrates the trade-off).
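To make the Kafka event-source idea concrete, the following is a minimal sketch that uses the standard Apache Kafka Java client to read customer events from a topic. The broker address kafka-broker:9092, the consumer group, and the customer-calls topic are assumptions for illustration; this shows the kind of event stream a Kafka data set reads, not Pega Platform's internal implementation.

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CustomerEventConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker:9092");   // assumed broker address
        props.put("group.id", "customer-events-reader");       // assumed consumer group
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // "customer-calls" is a hypothetical topic name for illustration.
            consumer.subscribe(List.of("customer-calls"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // Each record is one customer event; downstream logic would pass it
                    // to whatever processes events in real time and triggers actions.
                    System.out.printf("key=%s value=%s offset=%d%n",
                            record.key(), record.value(), record.offset());
                }
            }
        }
    }
}
```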
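Similarly, the sketch below reads records directly from an Amazon Kinesis data stream with the AWS SDK for Java v2. The stream name customer-events and the shard ID are assumptions, and the example shows the underlying service that a Kinesis data set connects to, not how Pega Platform consumes it internally.

```java
import software.amazon.awssdk.services.kinesis.KinesisClient;
import software.amazon.awssdk.services.kinesis.model.GetRecordsRequest;
import software.amazon.awssdk.services.kinesis.model.GetRecordsResponse;
import software.amazon.awssdk.services.kinesis.model.GetShardIteratorRequest;
import software.amazon.awssdk.services.kinesis.model.Record;
import software.amazon.awssdk.services.kinesis.model.ShardIteratorType;

public class KinesisStreamReader {
    public static void main(String[] args) throws InterruptedException {
        // Assumed stream and shard names; region and credentials come from the default provider chain.
        String streamName = "customer-events";
        String shardId = "shardId-000000000000";

        try (KinesisClient kinesis = KinesisClient.create()) {
            String iterator = kinesis.getShardIterator(GetShardIteratorRequest.builder()
                            .streamName(streamName)
                            .shardId(shardId)
                            .shardIteratorType(ShardIteratorType.TRIM_HORIZON) // start at the oldest available record
                            .build())
                    .shardIterator();

            while (iterator != null) {
                GetRecordsResponse response = kinesis.getRecords(GetRecordsRequest.builder()
                        .shardIterator(iterator)
                        .limit(100)
                        .build());
                for (Record record : response.records()) {
                    // Each record carries opaque bytes; here they are treated as UTF-8 text.
                    System.out.println(record.data().asUtf8String());
                }
                iterator = response.nextShardIterator();
                Thread.sleep(1000); // avoid exceeding the per-shard read rate
            }
        }
    }
}
```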
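The duplicate-versus-loss trade-off mentioned above comes down to when consumption progress is recorded. The following sketch again assumes a standard Kafka Java client and the hypothetical customer-calls topic: committing offsets after processing gives at-least-once behavior (possible duplicates after a restart), whereas committing before processing would give at-most-once behavior (possible loss).

```java
import java.time.Duration;
import java.util.List;
import java.util.Properties;

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class AtLeastOnceConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "kafka-broker:9092");   // assumed broker address
        props.put("group.id", "data-flow-sim");                // assumed consumer group
        props.put("enable.auto.commit", "false");              // control when progress is recorded
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("customer-calls"));      // hypothetical topic name
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    process(record);                            // placeholder for real processing
                }
                // Committing only after processing yields at-least-once delivery:
                // a crash before this line means the batch is re-read (possible duplicates),
                // but no record is silently lost. Committing before processing reverses
                // the trade-off: no duplicates, but records can be lost on failure.
                consumer.commitSync();
            }
        }
    }

    private static void process(ConsumerRecord<String, String> record) {
        System.out.printf("processing offset %d: %s%n", record.offset(), record.value());
    }
}
```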