Configuring sample containers to use Python models for topic detection

Updated on May 17, 2024

Set up sample Docker containers to run your Python topic models, and then serve the models to Pega Platform through an API endpoint. Deploy the sample containers in a cloud or on-premises environment.

Important: Before using the sample containers in production, you need to manage any additional production-level requirements, such as security, load balancing, and monitoring.
Before you begin: Set up your Docker environment. For more information, see the Docker documentation.
  1. Train your topic model.
    You can use the sample training scripts provided in the Pega GitHub repository. For a minimal, hypothetical training sketch, see the first example after this procedure.
  2. Save the model in one of the supported formats:
    • For machine learning models: .bst, .joblib, .pkl
    • For deep learning models: .h5
    Note: Ensure that the saved model file includes the feature vectorization step and the model hyperparameters, so that the deployed container can run the full prediction pipeline.
  3. Go to the Pega GitHub repository, and then clone or download the sample containers.
    The repository provides two sample containers:
    • machine-learning-nlp-container for deploying machine learning models.
    • deep-learning-nlp-container for deploying deep learning models.
  4. Deploy your model in the sample container:
    1. Copy the model to the specified location.
    2. Build a Docker image.
    3. Run the container.
    For instructions, see the README.md file that is provided with the sample container.
    Result: When the container is running, you can access the API at one of the following endpoints:
    • If you deployed the container using OAuth 2.0: https://IP-address:port/auth/predict
    • If you deployed the container without authentication: http://IP-address:port/noauth/predict
    where:
    • IP-address is the IP address of the machine that hosts the container.
    • port is the port that the container publishes on the machine that hosts it.
  5. Test your model endpoint API by using an API testing tool, such as Postman, to ensure that the model works properly. For a scripted alternative, see the request example after this procedure.
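The following is a minimal training sketch for steps 1 and 2, assuming a scikit-learn pipeline saved with joblib; the sample training scripts in the Pega GitHub repository may differ, and the training data, labels, and file name below are illustrative only:

# Minimal topic model training sketch (illustrative; not the Pega sample script).
import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy training data: documents and their topic labels (hypothetical).
texts = [
    "I cannot log in to my account",
    "When is my invoice due?",
    "The app crashes on startup",
    "How do I update my billing address?",
]
labels = ["account", "billing", "technical", "billing"]

# Keeping vectorization inside the pipeline ensures that the saved file
# contains the feature vectorization information that step 2 requires.
model = Pipeline([
    ("tfidf", TfidfVectorizer()),
    ("clf", LogisticRegression()),
])
model.fit(texts, labels)

# Save the model in one of the supported machine learning formats (.joblib).
joblib.dump(model, "topic_model.joblib")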
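To test the endpoint from a script instead of Postman (step 5), you can send a POST request with Python. The host, port, and request body below are assumptions for illustration; check the README.md file that is provided with your sample container for the exact request schema that the /predict endpoint expects:

# Endpoint smoke test (illustrative; verify the request schema in the README).
import requests

# Container deployed without authentication; replace the IP address and port
# with the values for the machine that hosts your container.
url = "http://192.0.2.10:8080/noauth/predict"

payload = {"text": "I cannot log in to my account"}  # hypothetical request body

response = requests.post(url, json=payload, timeout=10)
response.raise_for_status()
print(response.json())  # the predicted topic(s), if the model is working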
What to do next: Configure a machine learning service to connect to the model through an API. For more information, see Configuring a machine learning service connection for topic models using REST API.