

Run-time data

Updated on July 5, 2022

Connect to large streams of real-time event and customer data to make your strategies and models more accurate.

To process decision management data in real time, create Stream, Kafka, and Kinesis data sets.

  • Creating a Stream data set

    Process a continuous data stream of events (records) by creating a Stream data set.

  • Connecting Kafka and Pega Platform

    Apache Kafka is a fault-tolerant and scalable platform that you can use as a data source for real-time analysis of customer records as they occur. Create Kafka data sets to read and write data from and to Kafka topics, and use this data as a source of events, such as customer calls or messages. Your application can use these events as input for rules that process data in real time and then trigger actions.
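The pattern described above, where incoming events act as input to rules that process data in real time and then trigger actions, can be sketched in plain Python. This is an illustration only: the event shape, the `should_trigger` rule, and the `handle_event` function are hypothetical, not part of Pega Platform or any Kafka client API.

```python
# Hypothetical sketch: events (such as records read from a Kafka topic)
# drive a rule that decides whether to trigger an action. All names are
# illustrative, not Pega Platform or Kafka client identifiers.

def should_trigger(event: dict) -> bool:
    """Rule: trigger an action for inbound customer calls."""
    return event.get("type") == "customer_call" and event.get("inbound", True)

def handle_event(event: dict, actions: list) -> None:
    """Process one event in real time and record any triggered action."""
    if should_trigger(event):
        actions.append(f"offer-callback:{event['customer_id']}")

# Simulated stream of records, as they might arrive from a Kafka topic.
stream = [
    {"type": "customer_call", "customer_id": "C1", "inbound": True},
    {"type": "message", "customer_id": "C2"},
    {"type": "customer_call", "customer_id": "C3", "inbound": False},
]

triggered = []
for event in stream:
    handle_event(event, triggered)

print(triggered)  # → ['offer-callback:C1']
```

In a real deployment the loop body would be fed by a Kafka consumer subscribed to the relevant topics rather than by an in-memory list.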

  • Creating a Kinesis data set

    You can create a Kinesis data set instance in Pega Platform to connect to an Amazon Kinesis data stream. The Amazon Kinesis Data Streams service ingests large amounts of data in real time, durably stores it, and makes it available for lightweight processing.

  • The use of streaming data sets in data flows

    The configuration of a streaming data set, such as Kafka, Kinesis, or Stream, can affect the life cycle of the records that a data flow run consumes. Use the following information to prevent duplicate processing or loss of records during data flow runs that use these data sets.
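The duplicate-processing concern can be illustrated with a minimal idempotent-consumer sketch in Python: a stream delivered at-least-once may replay records after a restart, so the consumer skips record IDs it has already handled. The record shape, the `processed_ids` set, and the `process_once` function are hypothetical, not Pega configuration.

```python
# Hypothetical sketch of idempotent record handling for an at-least-once
# stream. A data flow run that resumes mid-stream may see the same record
# twice; tracking processed record IDs prevents double processing.

processed_ids: set[str] = set()
results: list[str] = []

def process_once(record: dict) -> None:
    """Process a record at most once, even if the stream redelivers it."""
    rid = record["id"]
    if rid in processed_ids:
        return  # duplicate delivery: skip to avoid double processing
    processed_ids.add(rid)
    results.append(record["payload"])

# Redelivery of record "r1" simulates a run resuming after a restart.
for record in [
    {"id": "r1", "payload": "a"},
    {"id": "r2", "payload": "b"},
    {"id": "r1", "payload": "a"},  # replayed after a restart
]:
    process_once(record)

print(results)  # → ['a', 'b']
```

A production consumer would persist the processed-ID state (or rely on the data set's checkpointing) so that deduplication survives restarts.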
