The Data Flow service is required to run data flows. Data flows are rules that sequence and combine data from various sources and write the results to a destination.
You must configure the Data Flow service before you can start a data flow, except for test runs, which always run on the local decision data node. The Data Flow tab on the Services landing page lists the nodes that run data flow work items. Depending on how data flow work items are partitioned, a data flow can process data on a different number of nodes than the tab lists, as the sketch below illustrates. On that tab, you can also configure the number of Pega 7 Platform threads to use for running data flow work items.
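To make the relationship between listed nodes and partitions concrete, the following is a minimal, illustrative sketch of round-robin partition assignment. The class and method names are hypothetical and are not part of the Pega 7 Platform API; the point is only that when there are fewer partitions than listed nodes, some nodes receive no work, so the run uses fewer nodes than the tab shows.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

// Illustration only: distributes data flow partitions across the nodes
// listed on the Data Flow tab using round-robin assignment.
public class PartitionAssignmentSketch {

    // Assigns each partition to a node. With fewer partitions than nodes,
    // some listed nodes end up with an empty list and process no data.
    static Map<String, List<Integer>> assignPartitions(List<String> nodes, int partitionCount) {
        Map<String, List<Integer>> assignment = new LinkedHashMap<>();
        for (String node : nodes) {
            assignment.put(node, new ArrayList<>());
        }
        for (int partition = 0; partition < partitionCount; partition++) {
            String node = nodes.get(partition % nodes.size());
            assignment.get(node).add(partition);
        }
        return assignment;
    }

    public static void main(String[] args) {
        // Hypothetical node names standing in for the nodes shown on the tab.
        List<String> dataFlowNodes = Arrays.asList("node-A", "node-B", "node-C", "node-D");

        // Two partitions: only node-A and node-B actually process data.
        System.out.println(assignPartitions(dataFlowNodes, 2));

        // Six partitions: every listed node processes data, some handling two partitions.
        System.out.println(assignPartitions(dataFlowNodes, 6));
    }
}
```

In this sketch, the partition count, not the node list, determines how many nodes do work in a given run, which mirrors why the tab's node list and the actual processing footprint can differ.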