Creating a Kafka data set
You can create a Kafka data set in Pega Platform, and then associate it with a topic in a Kafka cluster.
Configure Kafka data sets to read data from and write data to Kafka topics, and use this data as a source of events, such as customer calls or messages. Your application can use these events as input for rules that process data in real time and then trigger actions.
For example, when a customer who has a checking account with UPlus Bank accesses the bank's ATM, this event can initiate an associated action, such as displaying an offer for a new credit card on the ATM's screen. For more information, see Triggering a real-time event with the Event Stream service.
You can connect to Apache Kafka clusters running version 0.10.0.1 or later.
- In the header of Dev Studio, click Create > Data Model > Data Set.
- Provide the data set label and identifier.
- From the Type list, select Kafka.
- Provide the Context and Apply to class values, and then select the ruleset in the Add to ruleset field.
- Click Create and open.
- In the Connection section, in the Kafka configuration instance field, perform one of the following actions:
- Select a Kafka configuration instance in the Data-Admin-Kafka class.
- Create a Kafka configuration instance (for example, when no instances are present) by clicking the Open icon.
For more information, see Creating a Kafka configuration instance.
- Check whether Pega Platform is connected to the Kafka cluster by clicking Test connectivity.
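The Test connectivity button verifies the connection from the Pega node itself. If the test fails and you want to rule out network issues, a rough stand-alone equivalent (not part of Pega Platform) is to list the topic names with the Kafka AdminClient; the broker address below is a placeholder for the hosts from your Kafka configuration instance:

```java
import java.util.Properties;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

// Stand-alone connectivity probe: if the brokers are reachable,
// listing the topic names succeeds within the timeout.
public class KafkaConnectivityCheck {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        // Placeholder broker address; use the hosts from your
        // Kafka configuration instance.
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
        try (AdminClient client = AdminClient.create(props)) {
            System.out.println(client.listTopics().names().get(10, TimeUnit.SECONDS));
        }
    }
}
```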
- In the Topic section, perform one of the following actions:
- Select Create new, and then enter the topic name to define a new topic in the Kafka cluster.
- Select Select from list, and then connect to an existing topic in the Kafka cluster.
- Select Use application settings with topic values to make the Kafka data set use different topics in different environments (for example, development, staging, and production) without modifying and saving the data set rule in each environment. To use this setting, first configure the Application Settings rule. For more information, see Configuring application settings for Kafka data set topics.
- Optional: To define the data set partitioning, in the Partition Key(s) section, perform the following actions:
- Click Add key.
- In the Key field, press the Down Arrow key to select a property for the Kafka data set to use as a partitioning key.
By configuring partition keys, you ensure that related records are sent to the same partition, as illustrated in the sketch after this step. If no partition keys are set, the Kafka data set assigns records to partitions at random.
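As a point of reference, the following minimal Java sketch (not Pega code; it assumes the kafka-clients library on the classpath, and the partition count and customer keys are hypothetical) reproduces the hash formula that Kafka's default partitioner applies to keyed records, showing why records with the same key always land on the same partition:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

// Illustration of Kafka's default key-based partition assignment:
// records that share a key hash to the same partition, so related
// records keep their relative order within that partition.
public class PartitionKeyDemo {
    // Same formula that the default Kafka partitioner uses for keyed records.
    static int partitionFor(String key, int numPartitions) {
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    public static void main(String[] args) {
        int partitions = 6; // hypothetical topic with 6 partitions
        // Two events for the same customer always map to one partition...
        System.out.println(partitionFor("customer-42", partitions));
        System.out.println(partitionFor("customer-42", partitions));
        // ...while a different key may map elsewhere.
        System.out.println(partitionFor("customer-17", partitions));
    }
}
```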
- Optional: To configure the Message Values, perform one of the following actions:
- If you choose JSON, you can select between two Field mappings:
- Automatically map fields
- Maps fields from the Kafka message to Pega properties with identical names, as shown in the example after this list.
- Use data transform
- Uses the JSON data transform rule form so that you can map only the properties that you need. For example, you can skip some attributes of a long JSON message, or map property names that contain special characters (such as the $ sign) to the corresponding Pega properties. For more information, see Data transform actions for JSON.
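For example, with Automatically map fields, a hypothetical message such as the following would populate the Pega properties .customerId and .eventType, because the field names match exactly:

```json
{
  "customerId": "C-1042",
  "eventType": "ATM_ACCESS"
}
```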
- If you choose Avro, you must preconfigure an Avro schema. For more information, see Configuring Avro schema for Kafka data set.
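An Avro schema is itself a JSON document. A purely illustrative schema for the hypothetical message above might look like this; the record and field names are placeholders, and your actual schema must match your messages:

```json
{
  "type": "record",
  "name": "CustomerEvent",
  "fields": [
    {"name": "customerId", "type": "string"},
    {"name": "eventType", "type": "string"}
  ]
}
```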
- If you choose a Custom configuration, you must configure the record settings:
- Serialization implementation
- In this field, enter a fully qualified Java class name for your value serialization.
- For example: com.pega.dsm.kafka.CsvPegaSerde.
- Additional configuration
- In this field, define additional configuration options for the implementation class. Click Add key value pair, and then enter properties in the Key and Value fields.
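The exact serialization contract that Pega expects is described in Kafka message custom processing; the example class name above suggests a CSV serde shipped with the platform. As a generic illustration of what such a class does (Kafka stores only byte arrays, so custom code converts between bytes and values), here is a minimal standard Kafka Serializer/Deserializer pair. It is a sketch assuming a recent kafka-clients library (where configure and close have default implementations), not the Pega interface itself:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.serialization.Deserializer;
import org.apache.kafka.common.serialization.Serializer;

// Generic illustration of value (de)serialization: Kafka transports
// byte arrays, so custom code converts between bytes and objects.
public class Utf8StringSerde {

    public static class Utf8Serializer implements Serializer<String> {
        @Override
        public byte[] serialize(String topic, String data) {
            return data == null ? null : data.getBytes(StandardCharsets.UTF_8);
        }
    }

    public static class Utf8Deserializer implements Deserializer<String> {
        @Override
        public String deserialize(String topic, byte[] data) {
            return data == null ? null : new String(data, StandardCharsets.UTF_8);
        }
    }
}
```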
- Optional: To enable custom value processing, in the Add the Java class with reader implementation and Add the Java class with writer implementation fields, provide the Java classes that implement the custom serialization and deserialization logic. For a sample implementation of custom processing, see Kafka message custom processing.
- Optional: To configure the Message Keys, perform one of the following actions:
- If you choose JSON, you can select between two Field mappings:
- Automatically map fields
- Maps fields from the Kafka message key to Pega properties with identical names.
- Use data transform
- Uses the JSON data transform rule form so that you can map only the properties that you need. For example, you can skip some attributes of a long JSON message, or map property names that contain special characters (such as the $ sign) to the corresponding Pega properties. For more information, see Data transform actions for JSON.
- If you choose Avro, you must preconfigure an Avro schema. For more information, see Configuring Avro schema for Kafka data set.
- If you choose a Custom configuration, you must configure the record settings:
- Serialization implementation
- In the Serialization implementation field, enter a fully qualified Java class name for your key serialization.
- For example: com.pega.dsm.kafka.CsvPegaSerde.
- Additional configuration
- In the Additional configuration section, define additional configuration options for the implementation class. Click Add key value pair, and then enter properties in the Key and Value fields.
- Optional: Specify key-value pairs in the Message header section.
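For reference, message headers are per-record key-value metadata that travel alongside the message key and value. The following sketch (plain Kafka client, with a hypothetical topic and header name) shows how a header pair is attached to a record:

```java
import java.nio.charset.StandardCharsets;
import org.apache.kafka.clients.producer.ProducerRecord;

// Headers are per-record key-value metadata, carried alongside
// the message key and value.
public class HeaderDemo {
    public static void main(String[] args) {
        ProducerRecord<String, String> record = new ProducerRecord<>(
            "customer-events", "customer-42", "{\"eventType\":\"ATM_ACCESS\"}");
        // Attach a header pair to the record.
        record.headers().add("source-system", "atm-gateway".getBytes(StandardCharsets.UTF_8));
        record.headers().forEach(h ->
            System.out.println(h.key() + " = " + new String(h.value(), StandardCharsets.UTF_8)));
    }
}
```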
- Click Save.