Configuring a machine learning service connection for topic models using REST API
To run your custom models in Prediction Studio through an API, configure the connection between Pega Platform and your web server.
- In the navigation pane of Prediction Studio, click Settings.
- In the header of the Machine learning services area, click New.
- In the New machine learning service dialog box, in the Service type list, select Custom model.
- Enter the service name, and then select the authentication profile that you want to map to the new service.
- Click Connection type, and then select the type of API connection that you want to use:
  - Service discovery (OpenAPI):
    - In the Service discovery endpoint field, enter the HTTP address of your web service discovery endpoint.
    - In the Prediction service list, select the endpoint to which you want to connect.
  - Stand-alone API:
    - In the Prediction API URL field, enter the HTTP address of your prediction endpoint.
    - Select the GET or POST request method. (A sample request is sketched after this list.)
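For orientation, the following minimal Python sketch shows the kind of request that a configured stand-alone API connection issues against your prediction endpoint. The URL, the parameter names (model, text), and the payload shape are assumptions about your own web service, not values that Pega Platform prescribes.

```python
import requests

# Hypothetical stand-alone prediction endpoint; replace with the address
# that you enter in the Prediction API URL field.
PREDICTION_API_URL = "https://models.example.com/v1/predict"

# Minimal POST request of the kind that the configured connection sends.
# The parameter names are illustrative and must match the request
# parameters that your own prediction service exposes.
response = requests.post(
    PREDICTION_API_URL,
    json={
        "model": "complaint-topics",  # which model to run, if the service hosts several
        "text": "My card was charged twice for the same order.",
    },
    timeout=10,
)
response.raise_for_status()
print(response.json())
```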
- In the Request parameters section, map the parameters obtained from the endpoint to field types in Prediction Studio.
The following field types are available for mapping (a hypothetical service-side sketch follows this step):
  - Model identifier (mandatory): The parameter that specifies the name of the model to use if the prediction service hosts multiple models.
  - Text (mandatory): The parameter through which text passes to the model for analysis.
  - Default (optional): Map a default parameter to define a constant value that applies to every model that you create in Prediction Studio with the same machine learning service.
  - Prompt (optional): Map a prompt parameter to define a value that differs for each topic model that you create in Prediction Studio with the same machine learning service. When you create a model, provide a Prompt parameter value that is specific to that model.
- Click Back.
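To make the mapping concrete, here is a hypothetical service-side handler that accepts one parameter of each field type. The framework (Flask), the parameter names (model, text, threshold, prompt), and the response shape are all assumptions for this sketch; Prediction Studio does not prescribe these names.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json()
    model_id = payload["model"]           # Model identifier: selects one of the hosted models
    text = payload["text"]                # Text: the document to analyze
    threshold = payload.get("threshold")  # Default: a constant shared by all models on this service
    prompt = payload.get("prompt")        # Prompt: a value set per topic model in Prediction Studio
    # ... run the selected model on the text and collect topics ...
    return jsonify({"topics": [{"name": "billing", "confidence": 0.91}]})

if __name__ == "__main__":
    app.run(port=8080)
```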
- In the Define output mapping field, select a data transform that determines how the API sends the output in JSON format back to the topic model. If you deployed a custom model by using the sample containers, select DTInhouseMLService. For more information, see Configuring sample containers to use Python models for topic detection.
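For reference, the JSON that the service returns, and that the selected data transform maps, might resemble the following structure. The field names are purely illustrative; the exact format your data transform expects depends on your service, and the format used by the sample containers is covered in the referenced topic.

```python
# Illustrative only: one possible response body that an output data
# transform could map back to the topic model. Neither the field names
# nor the nesting are required by Pega Platform.
example_response = {
    "topics": [
        {"name": "billing", "confidence": 0.91},
        {"name": "refunds", "confidence": 0.47},
    ]
}
```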
- Save the new service connection configuration by clicking Submit.
- Optional: To test the service connection, select the More icon, and then select Test connection.
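Alongside Test connection, you can also verify the service outside Pega Platform. For a service discovery connection, for example, fetching the OpenAPI document directly confirms that the discovery endpoint is reachable; the URL below is a placeholder for your own endpoint.

```python
import requests

# Placeholder discovery endpoint; substitute the address from the
# Service discovery endpoint field.
spec = requests.get("https://models.example.com/openapi.json", timeout=10).json()

# A reachable endpoint returns an OpenAPI document whose "paths" section
# lists the prediction endpoints that Prediction Studio can discover.
print(spec.get("openapi"), list(spec.get("paths", {})))
```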