Meeting requirements and prerequisites
Pega best practices cover the following Kubernetes tools, required settings, and supported Kubernetes environments. Review this information to ensure that your enterprise meets these requirements.
Kubernetes tools
Install and configure the following required tools:
- Docker – Ensure Docker is installed in your deployment environment. You will need Docker to prepare a Pega Platform installation Docker image. For more information, see the Docker documentation.
- kubectl – Configure the Kubernetes command-line tool to connect to and manage your Kubernetes resources. For more information, see the kubectl documentation.
To verify that your kubectl is correctly connected and Docker is running, use the kubectl cluster-info command.
- Helm – It is a Pega best practice to deploy by using the Pega-provided Helm charts. Install Helm 3.0 or later. To use Helm, first install kubectl and configure it to connect to your Kubernetes environment. For more information, see the Helm documentation portal.
For detailed guidance on how to install the required tools on your local system, refer to the following documents:
At a minimum, your cluster must be provisioned with at least two worker nodes that have 32 GB of RAM each to support the typical processing loads in a Pega Platform deployment. In addition, you must deploy the database instance for Pega Platform in the same region as the worker nodes running in your environment. After you initially deploy and test your Pega application workloads, you might find that your applications require more or fewer resources. To adjust the settings, see the Kubernetes documentation for Resource Quotas.
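The following ResourceQuota manifest is a minimal sketch of how you might cap compute resources for a namespace that hosts Pega Platform pods; the namespace name and the quota values are assumptions to adjust after you observe your actual workloads.

```yaml
# Hypothetical ResourceQuota for a namespace that hosts Pega Platform pods.
# The namespace name and all values are illustrative; tune them after you
# measure the real resource usage of your tiers.
apiVersion: v1
kind: ResourceQuota
metadata:
  name: pega-compute-quota
  namespace: mypega            # assumed namespace name
spec:
  hard:
    requests.cpu: "16"
    requests.memory: 48Gi
    limits.cpu: "32"
    limits.memory: 64Gi
```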
Storage class settings
Search, Stream, and Cassandra nodes all require dynamic provisioning of persistent volumes. To dynamically provision volumes at deployment time, you must configure a default storage class. For overview information, see Storage Classes; for details on configuring dynamic provisioning for your Kubernetes cluster, see Dynamic Volume Provisioning.
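As an illustration only, the following manifest marks a storage class as the cluster default so that persistent volume claims created at deployment time are provisioned dynamically; the provisioner and parameters shown assume the AWS EBS CSI driver and will differ on other platforms.

```yaml
# Sketch of a default StorageClass, assuming the AWS EBS CSI driver.
# Substitute the provisioner and parameters that apply to your environment.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # makes this class the cluster default
provisioner: ebs.csi.aws.com
parameters:
  type: gp3
volumeBindingMode: WaitForFirstConsumer
reclaimPolicy: Delete
```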
Service deployment guidelines
Because standard security conventions in most datacenters limit running containers with root privileges, Pega recommends running the Pega Platform Docker image as a non-root user by using a custom security context. For information about defining privilege and access control settings for a Pod or Container, see Configure a Security Context for a Pod or Container.
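A minimal sketch of such a security context follows; the user ID, group ID, and image reference are placeholder assumptions rather than Pega-mandated values.

```yaml
# Illustrative Pod spec that runs a container as a non-root user.
# The UID/GID and image reference are assumptions for this sketch.
apiVersion: v1
kind: Pod
metadata:
  name: pega-web-example
spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 9001            # assumed non-root user ID
    fsGroup: 9001              # assumed group ID for volume ownership
  containers:
    - name: pega-web
      image: registry.example.com/pega:latest   # assumed image reference
      securityContext:
        allowPrivilegeEscalation: false
```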
Because some Kubernetes environments can impose restrictions to prevent running as a specific user or using privileged commands, ensure that your environment meets the following requirements so that your deployment is successful:
- The Elasticsearch and Cassandra replica sets included with the Pega example deployment require containers to run as a specific user ID so that files written to a volume remain accessible when the container restarts. This user ID is specified in the template spec as securityContext.runAsUser. Some Kubernetes environments restrict the ability to run as a specific user by default or limit user IDs to a predefined range; if you encounter an error related to this user ID, contact the provider of your environment. Elasticsearch and Cassandra specify the following user IDs:
Services | securityContext setting |
---|---|
Elasticsearch | runAsUser: 1000 |
Cassandra | runAsUser: 999 |
- The Pega Platform search service uses Elasticsearch, which by default uses an mmapfs directory on 64-bit systems to store its indices (for more information, see the Elastic document, Virtual Memory). To increase the mmap count in your environment, do one of the following (a sketch of the initContainer option follows this list):
  - Update the vm.max_map_count setting in /etc/sysctl.conf on your nodes and set the value to vm.max_map_count=262144.
  - Use the set-max-map-count initContainer in the example yaml file to configure the setting automatically. If you choose this method, note that the initContainer runs in privileged mode, so you must verify that your environment allows privileged containers.
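The following fragment sketches the kind of privileged initContainer that the second option refers to; the utility image and the exact form used in the Pega example yaml may differ.

```yaml
# Sketch of a privileged initContainer that raises vm.max_map_count on the
# node before Elasticsearch starts. It requires an environment that allows
# privileged containers; the busybox image is an assumption.
initContainers:
  - name: set-max-map-count
    image: busybox:1.36
    command: ["sysctl", "-w", "vm.max_map_count=262144"]
    securityContext:
      privileged: true
      runAsUser: 0
```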
Supported Kubernetes environments
Pega supports deployments into many Kubernetes environments. Pega provides configuration templates so that you can customize your deployment for one of these supported Kubernetes environments, in addition to any other settings that your organization requires to deploy Pega Platform for your business.
Supported Kubernetes Environments
Environment type | Supports all vendor-supported versions as detailed on the vendor site |
---|---|
Kubernetes | Kubernetes documentation |
Red Hat OpenShift Container Platform (Self-managed) | Red Hat OpenShift Container Platform documentation |
Amazon Elastic Kubernetes Service (EKS) | Available Amazon EKS Kubernetes Versions |
Google Kubernetes Engine (GKE) | Available GKE cluster versions |
VMware Tanzu Kubernetes Grid Integrated Edition (TKGI) | VMware Tanzu Kubernetes Grid Integrated Edition Documentation |
Azure Kubernetes Service (AKS) | Supported Kubernetes versions in Azure Kubernetes Service (AKS) |
For information about how to deploy Pega Platform on each environment, see the pega-helm-charts GitHub repository.
Recommendations for cloud-based government workloads
The Pega Client-managed cloud solution does not support deployments of Pega software on government-targeted infrastructure running on public clouds, including AWS GovCloud, Azure for Government, and GCP Assured Workloads for Government, due to the unique nature of their configurations and requirements. For most cloud-based government workloads, Pega recommends using a Pega Cloud Services account, the Pega-managed solution built on AWS Infrastructure as a Service (IaaS) that currently provides an Authority to Operate at FedRAMP High. For details, see Learning about Pega Cloud Services.
Required configurations and settings
Ensure that the following configurations are completed in your environment:
- A database into which you install Pega Platform and maintain Pega application data - See the list of database types that Pega Platform supports in the Platform Support Guide.
- DNS settings - You must configure domain names in your enterprise DNS servers to ensure the exposed Kubernetes services used in Pega deployments receive traffic over your network. For details, see the individual runbook that Pega provides for your type of Kubernetes environment.
- URLs and credentials - You must provide the following URLs and associated credentials when you configure your deployment (an illustrative values sketch follows this list):
- Your database URL, username, and password
- Your Docker registry URL, username, and password
- To use your existing Cassandra service instead of deploying the Cassandra service that Pega provides, you need the URL, username, and password to access your external Cassandra node. A Cassandra service is required for Pega Customer Decision Hub and Pega Marketing.
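The following values fragment illustrates the kind of information to gather before you configure the deployment; the key names are hypothetical placeholders rather than the actual pega-helm-charts schema, and all URLs and account names are examples only.

```yaml
# Hypothetical worksheet of deployment inputs. These keys are illustrative
# placeholders, not the real pega-helm-charts values schema; see the Helm
# chart documentation for the actual parameter names.
database:
  url: jdbc:postgresql://pega-db.example.com:5432/pega   # assumed JDBC URL
  username: pega-install
  password: "<database-password>"
dockerRegistry:
  url: registry.example.com
  username: deploy-user
  password: "<registry-password>"
externalCassandra:            # only if you use an existing Cassandra service
  nodes: cassandra.example.com
  username: cassandra-user
  password: "<cassandra-password>"
```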
Choosing your preferred Kubernetes-based services for your deployment
Pega allows you to choose which load balancing and logging services your Pega Platform deployment uses. To support this choice, Pega provides an “addons” Helm chart in the pega-helm-charts project that lets you configure the deployment to use the existing, native services in your Kubernetes environment or to use open-source services that run publicly available Docker images.
The Pega-provided Helm charts reference the following open-source services that Pega supports:
- For deploying Cassandra (required for Pega Customer Decision Hub deployments): docker.io/cassandra
- For EFK Logging: docker.elastic.co/elasticsearch/elasticsearch-oss
- For EFK Logging: gcr.io/google-containers/fluentd-elasticsearch
- For EFK Logging: docker.elastic.co/kibana/kibana-oss
- For load balancing using Traefik: docker.io/traefik
If you plan to use these services, review the information about the public images that Kubernetes deployments of Pega Platform may use, and ensure that your enterprise can access them. If your environment cannot access these public registries, add the images to your private registry by following your enterprise Docker image policies, and then update the image references in the Helm chart.
Pega supports using load balancing and logging services that already exist in your enterprise or environment, or using open-source ones. You configure your choice by using the Pega-provided “addons” Helm chart in the pega-helm-charts project. For details, see the Addons Helm chart.
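As a sketch only, the following hypothetical values fragment shows the general shape of that choice; the key names are placeholders and do not reflect the actual Addons Helm chart schema, which is documented in the pega-helm-charts project.

```yaml
# Hypothetical addons configuration sketch. Key names are placeholders;
# consult the Addons Helm chart documentation for the real values schema.
loadBalancer:
  provider: traefik        # or disable to use an ingress controller native to your platform
logging:
  deployEFK: true          # set to false to keep your existing logging stack
```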
The load balancing configuration requires a domain name for the Pega web service and, optionally, the stream service. This may require you to work with your networking team or DNS service to provide domain names that route to the correct IP addresses. For example, on AWS you can use Amazon Route 53, while on GKE you can use GCP Cloud DNS to configure domain names that route to your Kubernetes cluster.
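For illustration, the following Ingress rule, shown with an assumed hostname and service name, shows how a registered domain name maps to the Pega web service inside the cluster.

```yaml
# Illustrative Ingress host rule for the Pega web tier. The hostname and
# backend service name are assumptions and must match the domain that you
# register in DNS and the service that your deployment creates.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: pega-web
spec:
  rules:
    - host: pega.mycompany.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: pega-web        # assumed service name
                port:
                  number: 80
```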