Configuring sample containers to use Python models for topic detection
Set up sample Docker containers to run your Python topic models, and then serve the models to Pega Platform through an API endpoint. Deploy the sample containers in a cloud or on-premises environment.
- Train your topic model. You can use the sample training scripts provided in the Pega GitHub repository.
- Save the model in one of the supported formats:
  - For machine learning models: .bst, .joblib, or .pkl
  - For deep learning models: .h5
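As an illustration, a trained model can be serialized to the .pkl format with Python's standard pickle module (joblib.dump works analogously for .joblib files). The model object and file name below are placeholders; in practice you would serialize your trained topic classifier:

```python
import pickle

# Placeholder standing in for a trained topic model
# (for example, a scikit-learn pipeline).
model = {"labels": ["billing", "support"], "weights": [0.4, 0.6]}

# Serialize the model to one of the supported formats (.pkl here).
with open("topic_model.pkl", "wb") as f:
    pickle.dump(model, f)

# Reload to verify the artifact deserializes cleanly before deployment.
with open("topic_model.pkl", "rb") as f:
    restored = pickle.load(f)
```

Verifying the round trip locally catches serialization problems before the model is copied into the container.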
- Go to the Pega GitHub repository, and then clone or download the sample containers. The repository provides two sample containers:
  - machine-learning-nlp-container for deploying machine learning models.
  - deep-learning-nlp-container for deploying deep learning models.
- Deploy your model in the sample container:
  - Copy the model to the specified location.
  - Build a Docker image.
  - Run the container.
  For instructions, see the README.md file that is provided with the sample container.
- Test your model API endpoint by using an API testing tool, such as Postman, to ensure that the model works properly.
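If you prefer a scripted check over an interactive tool such as Postman, the request can be sketched in Python with the standard library. The endpoint URL and payload shape below are assumptions for illustration only; substitute the values documented in your container's README.md file:

```python
import json
from urllib import request

# Hypothetical local endpoint; adjust to match your running container.
ENDPOINT = "http://localhost:8080/predict"

def build_payload(text):
    """Wrap the input text in an assumed JSON request body."""
    return json.dumps({"text": text}).encode("utf-8")

def detect_topics(text):
    """POST the text to the model endpoint and return the parsed response."""
    req = request.Request(
        ENDPOINT,
        data=build_payload(text),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return json.loads(resp.read())

# With the container running locally, a call might look like:
#   detect_topics("I want to cancel my subscription")
```

A script like this can also be reused as a smoke test after each container rebuild.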