Connecting to a machine learning as a service model

Apart from creating new predictive models or importing PMML models into Pega Platform, you can run your custom artificial intelligence (AI) and machine learning (ML) models externally in third-party machine learning services. This way, you can improve your predictive models by using the advanced algorithms of machine learning as a service (MLaaS) providers, such as Google AI Platform, and apply the results to enhance your customer strategies.

Note: Pega Platform currently supports only Google AI Platform models.
Before you begin: Define your model and the cloud service connection:
  1. In a third-party cloud ML service of your choice, create an ML model.
  2. In Dev Studio, connect to your cloud service instance by creating an authentication profile. For more information, see Authentication profiles.

    For example, for a Google AI Platform service connection, create an OAuth 2.0 authentication profile.

  3. In Prediction Studio, define your ML service. For more information, see Configuring a machine learning service connection.
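Once the authentication profile and service connection are in place, Pega Platform issues the prediction calls for you. As background, the following sketch shows the shape of the request such a connection ultimately makes, assuming the Google AI Platform online-prediction REST endpoint; the project ID, model name, and input record are hypothetical placeholders, and on a real call the OAuth 2.0 token from the authentication profile accompanies the request as a Bearer header:

```python
# Illustrative only: builds the URL and body of an online-prediction
# request without sending it. All names below are invented placeholders.
import json

def build_predict_request(project, model, instances):
    """Return the (url, body) pair for an online-prediction call."""
    url = ("https://ml.googleapis.com/v1/"
           f"projects/{project}/models/{model}:predict")
    body = json.dumps({"instances": instances})
    return url, body

url, body = build_predict_request(
    "my-gcp-project",            # hypothetical project ID
    "call_center_model",         # hypothetical model name
    [{"age": 42, "tenure": 7}],  # hypothetical input record
)
print(url)
```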
  1. In the navigation panel of Prediction Studio, click Predictions.
  2. In the header of the Predictions work area, click New > Predictive model.
  3. In the New predictive model dialog box, enter a Name for your model.
  4. In the Create model section, click Select external model.
  5. In the Machine learning service list, select the ML service from which you want to run the model.
  6. In the Model list, select the model that you want to run.
    The list contains all the models that are accessible through the authentication profile that is mapped to the selected service.
  7. In the Context section, specify where you want to save the model:
    1. Click the Apply to class field, press the Down arrow key, and click the class in which you want to save the model.
    2. Define the class context by selecting the appropriate values in the Development branch, Add to ruleset, and Ruleset version lists.
  8. Verify the settings and click Next.
  9. In the Outcome definition section, enter the model objective.
    To enable response capture, the model objective label that you want to monitor must be the same as the .pyPrediction parameter value in the response strategy.
  10. In the Predicting list, select the model type:
    • For binary models, select Two categories.
    • For categorical models, select More than two categories, and then add the categories that you want to predict.
    • For continuous models, select A continuous value, and then enter the value range that you want to predict.
  11. In the Expected performance field, enter a value that represents the expected predictive performance of the model:
    • For binary models, enter the expected area under the curve (AUC) value between 50 and 100.
    • For categorical models, enter the expected F-score performance value between 0 and 100.
    • For continuous models, enter the expected RMSE value between 0 and 100.
    For more information about performance measurement metrics, see Metrics for measuring predictive performance.
  12. Confirm the model settings by clicking Create.
    Result: Your custom model is now available in Pega Platform.
  13. On the predictive model page, click the Mapping tab, and then upload a JSON file with input mapping and outcome categories for the model:
    1. Download the baseline JSON file by clicking Download template.
    2. On your local hard drive, define the mapping of input fields to Pega Platform properties by editing and saving the template file that you downloaded.
      For information about the JSON file fields and the available values, see Metadata file specification for predictive models.
    3. In Prediction Studio, click Choose file and double-click your JSON file.
  14. Confirm your updates by clicking Save.
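As a point of reference for the expected-performance values in step 11, the three metrics can be computed from a holdout set as in this minimal pure-Python sketch. The toy labels, scores, and values are invented for illustration; in practice you would take these figures from the evaluation report of your cloud ML service:

```python
import math

def auc_percent(labels, scores):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    formulation, scaled to the 50-100 range used for binary models."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return 100.0 * wins / (len(pos) * len(neg))

def f_score_percent(labels, predicted):
    """F-score (harmonic mean of precision and recall) on the 0-100
    range used for categorical models."""
    tp = sum(y == 1 and p == 1 for y, p in zip(labels, predicted))
    fp = sum(y == 0 and p == 1 for y, p in zip(labels, predicted))
    fn = sum(y == 1 and p == 0 for y, p in zip(labels, predicted))
    precision, recall = tp / (tp + fp), tp / (tp + fn)
    return 100.0 * 2 * precision * recall / (precision + recall)

def rmse(actual, predicted):
    """Root-mean-square error, used for continuous models."""
    return math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted))
                     / len(actual))

print(auc_percent([1, 1, 0, 0], [0.9, 0.4, 0.6, 0.2]))   # 75.0
print(f_score_percent([1, 1, 0, 0], [1, 0, 1, 0]))       # 50.0
print(rmse([3.0, 5.0], [2.0, 6.0]))                      # 1.0
```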
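When you edit the downloaded template in step 13, a malformed file is an easy mistake to make, so it can be worth confirming that the edited file still parses before returning to Prediction Studio. A minimal sketch; the sample fragment and its keys are invented placeholders, not the actual schema, which is described in the Metadata file specification:

```python
import json

def validate_mapping(text):
    """Parse an edited mapping template, reporting the location of any
    JSON syntax error instead of failing with a bare traceback."""
    try:
        return json.loads(text)
    except json.JSONDecodeError as e:
        raise ValueError(
            f"invalid JSON at line {e.lineno}, column {e.colno}: {e.msg}")

# Hypothetical fragment standing in for the edited template file:
sample = '{"version": 1, "fields": []}'
print(sorted(validate_mapping(sample)))   # ['fields', 'version']
```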
What to do next: Add and run your model in a strategy. For more information about strategies, see "About Strategy rules" in the Dev Studio help.