Configuring sample containers to use Python models for topic detection
Set up sample Docker containers to run your Python topic models, and then serve the models to Pega Platform through an API endpoint. Deploy the sample containers in a cloud or on-premises environment.
- Train your topic model. You can use the sample training scripts provided in the Pega GitHub repository.
- Save the model in one of the supported formats:
  - For machine learning models: .bst, .joblib, .pkl
  - For deep learning models: .h5
  Note: Ensure that the model file contains feature vectorization and model hyperparameter information for deployment purposes. For a sketch that trains and saves such a model, see the first example after this procedure.
- Go to the Pega GitHub repository, and then clone or download the sample containers. The repository provides two sample containers:
  - machine-learning-nlp-container for deploying machine learning models.
  - deep-learning-nlp-container for deploying deep learning models.
- Deploy your model in the sample container:
  - Copy the model to the specified location.
  - Build a Docker image.
  - Run the container.
  For instructions, see the README.md file that is provided with the sample container.
  Result: When the container is running, you can access the API at one of the following endpoints:
  - If you deployed the container with OAuth 2.0 authentication: https://IP-address:port/auth/predict
  - If you deployed the container without authentication: http://IP-address:port/noauth/predict
  where IP-address is the IP address of the machine that hosts the container, and port is the port on which the container listens.
- Test your model's API endpoint with an API testing tool, such as Postman, to ensure that the model works properly. For a scripted alternative, see the second example after this procedure.
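The following sketch shows one way to train a simple topic model and save it in a supported format, assuming a scikit-learn workflow. Everything in it is illustrative: the training data, hyperparameters, and file name are placeholders, and the exact serialization that the sample container loads is defined in its README.md file. Bundling the vectorizer and classifier in a single pipeline keeps the feature vectorization and hyperparameter information in one file, as the earlier note requires.

import joblib
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Placeholder training data: (document, topic label) pairs.
documents = [
    "My card was charged twice this month",
    "How do I reset my online banking password",
    "I want to dispute a transaction on my account",
    "The mobile app does not accept my login",
]
topics = ["billing", "access", "billing", "access"]

# The pipeline stores the feature vectorization (TF-IDF) and the model
# hyperparameters together, so the saved file is self-contained.
pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(ngram_range=(1, 2), min_df=1)),
    ("clf", LogisticRegression(C=1.0, max_iter=1000)),
])
pipeline.fit(documents, topics)

# Save in one of the supported machine learning formats (.joblib here).
joblib.dump(pipeline, "topic_model.joblib")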
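As an alternative to Postman, the following sketch tests the endpoint from a script. The JSON payload shape and the response format are assumptions rather than the container's documented contract; check the README.md file for the exact request schema. The IP address, port, and access token are placeholders.

import requests

BASE_URL = "http://203.0.113.10:8080"  # placeholder IP-address and port

# Container deployed without authentication:
response = requests.post(
    BASE_URL + "/noauth/predict",
    json={"text": "My card was charged twice this month"},  # assumed schema
    timeout=30,
)
response.raise_for_status()
print(response.json())

# Container deployed with OAuth 2.0: send the access token that you obtained
# from the container's token endpoint (see the README.md file for details).
# response = requests.post(
#     "https://203.0.113.10:8443/auth/predict",
#     headers={"Authorization": "Bearer <access-token>"},
#     json={"text": "My card was charged twice this month"},
#     timeout=30,
# )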