Improve your Kafka data set with new enhancements (8.6)
The Kafka data set provides a high-throughput, low-latency way to handle real-time data feeds that you can use as input for Pega Platform™ event strategies. Kafka data sets are characterized by high performance and horizontal scalability in terms of event and message queueing.
For better integration with externally hosted Kafka, Pega Platform version 8.6 implements the following enhancements:
Support for message keys and headers
All three components of the Kafka message record (values, keys, and headers) are now supported, and you can configure them while creating a Kafka data set. All of the data formats available for Message values are also available for Message keys.
By using a JSON data transform, you can map only selected properties in those messages; if your JSON message has many attributes, you can skip the ones that you do not need. Property names can also contain special characters (for example, the $ sign) and still be mapped to the corresponding Pega properties.
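For example, an incoming message value might carry more attributes than your case needs, and one property name might contain a special character. The message and property names below are illustrative:

    {
      "customerID": "C-1001",
      "$amount": 25.99,
      "internalTraceId": "a1b2c3"
    }

A JSON data transform can map customerID and $amount to the corresponding Pega properties while skipping internalTraceId entirely.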
You can now also use Apache Avro as your data format. Avro is a lightweight binary message encoding that relies on schemas to structure the encoded data. Avro schemas are stored externally, typically in a schema registry, and your keys and values can have different schemas that are stored in different registries.
When configuring an Avro schema, select the Perform schema evolution when reading messages option to use versioning of the schemas, and then use the Upload different schema button to upload an additional version of the schema.
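Pega Platform resolves schema versions for you; conceptually, Avro schema evolution decodes data that was written with an older schema against a newer reader schema. The following minimal sketch uses the plain Apache Avro Java API, with illustrative schema and field names, to show how a record encoded with version 1 of a schema remains readable under version 2:

    import org.apache.avro.Schema;
    import org.apache.avro.generic.GenericData;
    import org.apache.avro.generic.GenericDatumReader;
    import org.apache.avro.generic.GenericDatumWriter;
    import org.apache.avro.generic.GenericRecord;
    import org.apache.avro.io.BinaryEncoder;
    import org.apache.avro.io.DecoderFactory;
    import org.apache.avro.io.EncoderFactory;

    import java.io.ByteArrayOutputStream;

    public class AvroEvolutionSketch {

        // Version 1 of the schema: messages carry only a customer identifier.
        static final Schema V1 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
            + "{\"name\":\"customerId\",\"type\":\"string\"}]}");

        // Version 2 adds a field with a default value, so that data encoded
        // with version 1 stays readable.
        static final Schema V2 = new Schema.Parser().parse(
            "{\"type\":\"record\",\"name\":\"Event\",\"fields\":["
            + "{\"name\":\"customerId\",\"type\":\"string\"},"
            + "{\"name\":\"channel\",\"type\":\"string\",\"default\":\"web\"}]}");

        public static void main(String[] args) throws Exception {
            // Encode a record with the old (writer) schema.
            GenericRecord event = new GenericData.Record(V1);
            event.put("customerId", "C-1001");
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            BinaryEncoder encoder = EncoderFactory.get().binaryEncoder(out, null);
            new GenericDatumWriter<GenericRecord>(V1).write(event, encoder);
            encoder.flush();

            // Decode with both schemas: Avro resolves the version 1 bytes
            // against version 2 and fills the new "channel" field from its
            // default value.
            GenericDatumReader<GenericRecord> reader = new GenericDatumReader<>(V1, V2);
            GenericRecord decoded = reader.read(null,
                DecoderFactory.get().binaryDecoder(out.toByteArray(), null));
            System.out.println(decoded); // {"customerId": "C-1001", "channel": "web"}
        }
    }

Because the new channel field declares a default value, Avro fills it in when reading records that were encoded with the older schema.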
When configuring a Kafka data set, you can also define the data format for the Message header, which you can then use to store additional information or metadata.
Custom value processing
Custom value processing allows you to apply your own modifications to a Kafka message while it is in serialized form: just before deserialization, when a Kafka data set reads a message, or right after serialization, when a Kafka data set sends a message. You can use this processing opportunity for message encryption and decryption.
To enable custom value processing while configuring a Kafka data set, provide the Java classes that implement your custom serialization and deserialization logic in the Add the Java class with reader implementation and Add the Java class with writer implementation fields.
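For illustration only, the following sketch shows the shape that such a class might take. The class name and the read and write hooks are hypothetical, not the actual Pega interface; check the Pega Platform documentation for the exact contract that your reader and writer classes must implement. The idea is that both hooks receive and return the message value in serialized (byte array) form:

    import javax.crypto.Cipher;
    import javax.crypto.spec.SecretKeySpec;

    // Hypothetical custom value processor that encrypts outgoing values and
    // decrypts incoming ones while they are still in serialized form.
    public class AesValueProcessor {

        private final SecretKeySpec key;

        public AesValueProcessor(byte[] rawKey) {
            // rawKey must be 16, 24, or 32 bytes for AES.
            this.key = new SecretKeySpec(rawKey, "AES");
        }

        // Hypothetical writer hook: called right after serialization,
        // before the data set sends the message.
        public byte[] write(byte[] serializedValue) throws Exception {
            // ECB is used here only for brevity; prefer an authenticated
            // mode such as AES/GCM in production.
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.ENCRYPT_MODE, key);
            return cipher.doFinal(serializedValue);
        }

        // Hypothetical reader hook: called just before deserialization,
        // when the data set reads a message.
        public byte[] read(byte[] rawValue) throws Exception {
            Cipher cipher = Cipher.getInstance("AES/ECB/PKCS5Padding");
            cipher.init(Cipher.DECRYPT_MODE, key);
            return cipher.doFinal(rawValue);
        }
    }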
Configuring topic names by using application settings
While configuring the Kafka data set Topic, you can select the Use application settings with topic values option to allow your Kafka data set to use different topics in different environments (for example, development, staging, and production) without the need to modify and save a data set rule in each environment. For instance, a single application setting might resolve to customer-events-dev in development and customer-events-prod in production (the topic names are illustrative). To use this option, you must first configure the Application Settings rule.
Data-Admin-Kafka enhancements
During Kafka configuration instance creation, you can upload a custom configuration file to configure the connection with the Kafka cluster. By using the Advanced configuration settings, you can upload specific client properties, and you can combine consumer and producer properties in a single file.
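For example, a combined client properties file might look like the following. The values are illustrative; see the Apache Kafka client documentation for the full list of consumer and producer properties:

    # Security settings shared by the consumer and the producer
    security.protocol=SSL
    ssl.truststore.location=/etc/kafka/client.truststore.jks
    ssl.truststore.password=changeit

    # Consumer-specific properties
    max.poll.records=250

    # Producer-specific properties
    acks=all
    linger.ms=5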
For more information about these enhancements to Kafka data sets, see Creating a Kafka data set and Creating a Kafka configuration instance.