Configuring a machine learning service connection for topic detection models using REST API

Updated on July 5, 2022

To run your custom models in Prediction Studio through an API, configure the connection between Pega Platform and your web server.

Before you begin: In Dev Studio, create the authentication profile that you want to map to the new service configuration. For more information, see Creating an authentication profile.
  1. In the navigation pane of Prediction Studio, click Settings > Machine learning services.
  2. In the header of the Machine learning services area, click New.
  3. In the New machine learning service dialog box, in the Service type list, select Custom model.
  4. Enter the service name, and then select the authentication profile that you want to map to the new service.
  5. Click Connection type, and then select the type of API connection that you want to use:
    Choices and their actions:
    • Service discovery (Open API)
      1. In the Service discovery endpoint field, enter the HTTP address of your web service discovery endpoint.
      2. In the Prediction service list, select the endpoint to which you want to connect.
    • Stand alone API
      1. In the Prediction API URL field, enter the HTTP address of your prediction endpoint.
      2. Select the GET or POST request method.
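    For clarity, the following is a minimal, unofficial sketch of a web service that supports both connection types. The framework (Flask), the route names, the port, and the payload shape are illustrative assumptions, not requirements of Pega Platform:

      # A minimal sketch, not an official Pega sample: a Python (Flask) service
      # that exposes both a discovery document and a prediction endpoint.
      from flask import Flask, jsonify, request

      app = Flask(__name__)

      @app.route("/openapi.json", methods=["GET"])
      def service_discovery():
          # Service discovery (Open API): point the Service discovery endpoint
          # field at this URL; the declared paths populate the Prediction
          # service list.
          return jsonify({
              "openapi": "3.0.0",
              "info": {"title": "Topic detection service", "version": "1.0.0"},
              "paths": {"/predict": {"post": {"summary": "Detect topics"}}},
          })

      @app.route("/predict", methods=["POST"])
      def predict():
          # Stand alone API: enter http://<host>:5000/predict as the
          # Prediction API URL and select the POST request method.
          body = request.get_json(force=True)
          text = body.get("data", {}).get("text", "")
          # A real service would score `text` with the topic model here.
          return jsonify({"predictions": []})

      if __name__ == "__main__":
          app.run(port=5000)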
  6. In the Request parameters section, map the parameters obtained from the endpoint to field types in Prediction Studio.

    The following field types are available for mapping:

    Field type        Importance  Description
    Model identifier  Mandatory   The parameter that specifies the name of the model that you want to deploy if the prediction service hosts multiple models.
    Text              Mandatory   The parameter through which text passes to the model for analysis.
    Default           Optional    Map a default parameter to define a constant value that is used across all the models that you create in Prediction Studio with the same machine learning service.
    Prompt            Optional    Map a prompt parameter to define a value that is different for each topic model that you create in Prediction Studio with the same machine learning service. When creating a model, you provide a value for the Prompt parameter that is specific to that model.
    For example, for a topic detection model, you can map the parameters as follows:
    Parameter        Mapping
    data#modelName   Model identifier
    data#text        Text
    data#modelType   Default
    data#language    Prompt
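    Assuming that the # character in a parameter name denotes a nested JSON property (so data#text maps to data.text), the example mapping above implies a request body along the following lines. This is a hedged sketch: the values, and the nesting assumption itself, are illustrative:

      # A sketch of the request body implied by the example mapping above.
      # All values are invented; only the parameter paths come from the table.
      import json

      request_body = {
          "data": {
              "modelName": "topic-model-travel",     # Model identifier: selects one of the hosted models
              "text": "I want to change my flight",  # Text: the input passed for analysis
              "modelType": "topic-detection",        # Default: a constant shared by all models on this service
              "language": "en",                      # Prompt: supplied per model when you create it
          }
      }
      print(json.dumps(request_body, indent=2))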
  7. Click Back.
  8. In the Define output mapping field, select a data transform that maps the JSON output from the API back to the topic detection model.
    If you deployed a custom model by using sample containers, select DTInhouseMLService. For more information, see Configuring sample containers to use Python models for topic detection.
    Tip: To configure the selected data transform or to create a new one, click the Open icon. For more information, see Configuring a data transform for a JSON output mapping.
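    As a hedged illustration of what the data transform consumes, a prediction service might return JSON along the following lines. The field names here are assumptions for this sketch, not the schema that DTInhouseMLService expects:

      # A sketch of a JSON response that an output-mapping data transform
      # could map back to the topic detection model. Field names are invented.
      import json

      response_body = {
          "predictions": [
              {"topic": "FlightChange", "confidence": 0.91},
              {"topic": "Refund", "confidence": 0.12},
          ]
      }
      print(json.dumps(response_body, indent=2))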
  9. Save the new service connection configuration by clicking Submit.
  10. Optional: To test the service connection, select the More icon, and then select Test connection.
What to do next: Run your custom model through the new service connection.

For more information, see Creating a text categorization model to run topic detection models through an API.
