Connecting to topic detection models through an API

Updated on May 17, 2024

Broaden your selection of topic detection models in Pega Platform by connecting to custom models through an API. Train and deploy your topic detection models, and expose an API endpoint to allow Pega Platform to interact with the models.

To help you serve your models through an API, the Pega GitHub repository provides sample Docker containers that you can use to train and deploy your models.
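For example, a deployed model might serve predictions over HTTP roughly as in the following minimal sketch. The Flask framework, the /predict path, and the "text" and "topics" JSON fields are illustrative assumptions; the actual request and response contract that Pega Platform expects is defined by the sample containers in the Pega GitHub repository.

    # Minimal sketch of an HTTP service that exposes a topic detection model.
    # Assumptions: Flask, a /predict path, and JSON fields named "text" and
    # "topics" -- all illustrative, not a contract required by Pega Platform.
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.route("/predict", methods=["POST"])
    def predict():
        payload = request.get_json(force=True)
        text = payload.get("text", "")
        # Replace this stub with inference against your trained topic model.
        topics = [{"topic": "billing", "confidence": 0.92}] if text else []
        return jsonify({"topics": topics})

    if __name__ == "__main__":
        app.run(host="0.0.0.0", port=8080)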

  1. In Dev Studio, configure the OAuth 2.0 authentication profile.
    For more information, see Creating an authentication profile.
  2. Deploy your topic detection model and expose an API endpoint to allow Pega Platform to interact with the model.
    Note: You can deploy a Python model by using the sample Docker containers. For more information, see Configuring sample containers to use Python models for topic detection.

  3. In Prediction Studio, define a machine learning service to connect to topic detection models through an API. A sketch of the kind of request such a connection sends appears after these steps.
  4. In Prediction Studio, create a text categorization model that uses the new service connection.
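Before you configure the service connection, you can exercise the deployed endpoint directly to confirm that it responds as expected. The following sketch shows one way to do that; the endpoint URL, the access token, and the JSON field names are placeholders that depend on how you deployed your model, not values supplied by Pega Platform.

    # Sketch of a test call against a deployed topic detection endpoint.
    # ENDPOINT, the bearer token, and the JSON fields are placeholders that
    # depend on your deployment.
    import requests

    ENDPOINT = "https://your-model-host.example.com/predict"
    ACCESS_TOKEN = "<OAuth 2.0 access token obtained from your provider>"

    response = requests.post(
        ENDPOINT,
        json={"text": "My invoice shows a charge I don't recognize."},
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())  # e.g. {"topics": [{"topic": "billing", "confidence": 0.92}]}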