Creating a Stream data set
Process a continuous data stream of events (records) by creating a Stream data set.
You can test how data flow processing is distributed across Data Flow service nodes in a multinode decision management environment by specifying partition keys for a Stream data set and by using the load balancer that Pega provides. For example, you can test whether the intended number and type of partitions negatively affect the processing of a Data Flow rule that references an event strategy.

- In the header of Dev Studio, click .
- On the Create Data Set tab, in the Data Set Record Configuration section, define the following settings to identify your data set:
- In the Label field, enter the data set label.
- Optional: To change the automatically created identifier, click Edit, enter an identifier name, and then click OK.
- In the Type list, select Stream.
- In the Context section, specify the application context, applicable class, ruleset, and ruleset version of the data set.
- Click Create and open.
- Optional: To create partition keys for testing purposes, on the Stream tab, in the Partition key(s) section, perform the following actions:
- Click Add key.
- In the Key field, press the Down arrow key, and then select a property to use as a partition key. The available properties are based on the applicable class of the data set, which you defined in step 3.
- To add more partition keys, repeat steps 5.a through 5.b.
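Conceptually, a load balancer can route each record to a partition by hashing its partition-key values, so records that share the same key always reach the same Data Flow service node. The sketch below illustrates this idea in Python; the field name CustomerID, the hash function, and the partition count are assumptions for illustration, not Pega's internal implementation.

```python
import hashlib

# Hypothetical illustration: route each record to a partition by hashing
# its partition-key values. Field names and the partition count of 6 are
# assumptions made for this example.

def partition_for(record, partition_keys, num_partitions):
    """Return the partition index for a record based on its key values."""
    key = "|".join(str(record[k]) for k in partition_keys)
    digest = hashlib.md5(key.encode("utf-8")).hexdigest()
    return int(digest, 16) % num_partitions

records = [
    {"CustomerID": "C-100", "Amount": 25},
    {"CustomerID": "C-100", "Amount": 75},
    {"CustomerID": "C-200", "Amount": 10},
]

# Records that share a key land on the same partition, so an event
# strategy keyed on CustomerID sees each customer's events together.
partitions = [partition_for(r, ["CustomerID"], 6) for r in records]
```

Because the two C-100 records hash to the same partition, they are processed by the same node, which is what makes partition keys useful for testing how work is distributed.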
For more information on when and how to use partition keys in a Stream data set, see Partition keys for Stream data sets.

- Optional: To disable basic authentication for your Stream data set, perform the following actions:
- Click the Settings tab.
- Clear the Require basic authentication check box. The REST and WebSocket endpoints are secured by using the Pega Platform common authentication scheme. Each post to the stream requires authentication with your user name and password. By default, the Require basic authentication check box is selected.
- Confirm your settings by clicking Save.
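When basic authentication is left enabled, every post to the stream endpoint must carry an HTTP Basic Authorization header built from the operator's credentials. A minimal sketch of constructing that header is shown below; the user name and password are placeholders.

```python
import base64

# Sketch: build the HTTP Basic Authentication header that a client sends
# when posting records to the Stream data set's REST endpoint.
# The operator credentials below are placeholders.

def basic_auth_header(username, password):
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return {"Authorization": f"Basic {token}"}

headers = basic_auth_header("operator@example.com", "secret")
```

If you clear the Require basic authentication check box, clients can omit this header when posting to the stream.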
- Optional: To populate the Stream data set with external data, perform the following actions:
- In the navigation panel of Dev Studio, click .
- Select an existing Pega REST service or create a new Connect REST rule.
- Configure the settings in the Methods tab.
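An external system pushes data into the Stream data set by sending JSON records over HTTP POST to the data set's REST endpoint. The sketch below prepares such a request without sending it; the host, URL path, data set name, and record fields are hypothetical, so check the actual endpoint URL that your service exposes.

```python
import json
from urllib.request import Request

# Sketch: prepare (but do not send) an HTTP POST that pushes one record
# into a Stream data set through its REST endpoint. The host, path, and
# field names below are hypothetical.

STREAM_URL = "https://pega.example.com/prweb/api/stream/v1/MyStreamDataSet"

record = {"CustomerID": "C-100", "EventType": "purchase", "Amount": 25}

req = Request(
    STREAM_URL,
    data=json.dumps(record).encode("utf-8"),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```

In a real integration you would also attach the Authorization header required by your authentication settings before submitting the request.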