Deploying Pega Platform clusters on PCF
Pega provides a customizable Pega Service Broker for deploying Pega Platform™ on Pivotal Cloud Foundry (PCF). You can simplify IT operations and automate system management tasks by using customized Pega Platform clusters on Cloud Foundry.
The system administrator uses BOSH or Pivotal Ops Manager to make the Pega Service Broker available. After the Pega Service Broker is available, the application developer can customize the custom_properties.json file, configure Pega Platform database column sizes, and deploy Pega Platform clusters. For more information about deploying the Pega Service Broker, see Deploying the Pega Platform Service Broker on Cloud Foundry by using BOSH or Deploying the Pega Platform Service Broker on Pivotal Cloud Foundry by using Ops Manager.
Editing the custom_properties.json file
To create cluster-specific properties, edit the custom_properties.json file. The properties in the custom_properties.json file override the default property values.
To make it easier to edit the file, every required placeholder value starts with "YOUR" and is followed by example text. The fields included in the file override the defaults in the manifest or in PCF Ops Manager and the service plan so no fields should be left with placeholder values. Any properties that do not require customization can be removed from the JSON file.
- Open the custom_properties.json file in a text editor.
- Configure the common properties. Use the properties in the following table and enter the values in double quotation marks.
Property name
Notes
jdbc_url
Replace YOUR_JDBC_URL with the URL for the Pega Platform database:
- IBM DB2 for Linux, UNIX, and Windows: jdbc:db2://host:port/dbname
- Microsoft SQL Server: jdbc:sqlserver://host:port;databaseName=dbName;selectMethod=cursor;sendStringParametersAsUnicode=false
- Oracle: Use either the service name or the SID:
- jdbc:oracle:thin:@host:port/service-name
- jdbc:oracle:thin:@host:port:SID
- PostgreSQL: jdbc:postgresql://host:port/dbname
jdbc_user
jdbc_password
Replace YOUR_JDBC_USER and YOUR_JDBC_PASSWORD with the user name and password used to connect to the database.
jdbc_rule_schema
Replace YOUR_RULES_SCHEMA with the name of your rules schema.
jdbc_data_schema
Replace YOUR_DATA_SCHEMA with the name of your data schema.
route_uri
Because the route_uri must be unique for every cluster, the route_uri property is required in the custom_properties.json file.
This property defines the URL for accessing Pega Platform nodes. Replace YOUR ROUTE with a fully-qualified domain name for accessing Pega Platform nodes in the format cluster_name.pcf_system_subdomain.
stream_route_uri
This property defines the URL for accessing Pega Platform stream nodes. Replace YOUR_STREAM_SERVICE_ROUTE with a fully-qualified domain name for accessing Pega Platform stream nodes in the format cluster_name.pcf_system_subdomain.
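For reference, the common properties might appear in custom_properties.json similar to the following sketch; every value shown here (URL, credentials, schema names, and routes) is an illustrative placeholder:
{ "jdbc_url" : "jdbc:postgresql://myService:5432/myDB", "jdbc_user" : "user", "jdbc_password" : "password", "jdbc_rule_schema" : "rules", "jdbc_data_schema" : "data", "route_uri" : "pegacluster.mycompany.com", "stream_route_uri" : "pegastream.mycompany.com" }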
- Configure the Pega Platform web nodes. These nodes are externally facing. A sketch of a completed web node entry appears after this list.
- In the instance_groups section, locate the universal section.
- Configure the VM type for the nodes. Specify a VM type with sufficient CPU and RAM to cover both the heap size and the operating system. The default setting for vSphere is xlarge; the valid settings depend on the IaaS. For more information about valid settings, use the BOSH cloud-config command. Replace YOUR VM TYPE with the value for your nodes, for example:
"vm_type" : "xlarge"
- Configure the network for the web nodes. Replace YOUR NETWORKS with the network for this node, for example:
"networks" : "my_network_name"
- Configure the availability zone for the web nodes. Replace YOUR AZS with the availability zone for this node, for example:
"azs" : "az1"
- Optional: If you need additional web nodes, enter the number of web node instances added to the load balancer, for example:
"instances" : "2"
- Optional: Configure the remaining Pega nodes. Repeat step 3 for each remaining node type you want. Set the persistent disk type for the decisioning and search nodes.
- Optional: Delete the sections for any nodes you do not configure.
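As referenced in the web node steps above, a completed web node entry might combine those values as in the following sketch; the surrounding instance_groups structure and section names come from your manifest, and the values are only examples:
{ "vm_type" : "xlarge", "networks" : "my_network_name", "azs" : "az1", "instances" : "2" }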
- Specify the pegasystems properties. Use the properties in the following table and enter the values in double quotation marks.
Property name
Notes
listen_port
Optional: Edit the port on which PCF will listen.
jdbc_conn_props
Optional: Enter any custom connection properties.
jdbc_driver_class
Replace YOUR_JDBC_DRIVER_CLASS with the JDBC driver for your database:
- IBM DB2 for Linux, UNIX, and Windows: com.ibm.db2.jcc.DB2Driver
- Microsoft SQL Server: com.microsoft.sqlserver.jdbc.SQLServerDriver
- Oracle: oracle.jdbc.OracleDriver
- PostgreSQL: org.postgresql.Driver
max_idle_connections
Optional: Enter the maximum number of idle connections in the connection pool.
max_total_connections
Optional: Enter the number of total connections in the connection pool.
max_wait_millis
Optional: Enter the number of milliseconds to wait for a database to become available. For infinite wait times, enter -1.
additional_datasource_settings
Optional: Enter a space-delimited list of setting names and values in the format setting="value". For example:
validationQuery="SELECT 1"
validationQueryTimeout="5"
jdbc_driver_downloads:
uri
user
password.secret
Replace YOUR_JDBC_DOWNLOAD_URL with the URI source from which to download a JDBC driver over HTTP, HTTPS, FTP, or FTPS. For example:
https://jdbc.postgresql.org/download/postgresql-9.4.1212.jar
If you use basic authentication or FTP, also enter the user name and secret value to download the JDBC driver.
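As a rough sketch only, and assuming the property names above map directly to JSON keys in your copy of the file (the nesting of jdbc_driver_downloads is an assumption about the template, and the connection-pool numbers are illustrative), these settings might look similar to the following:
{ "jdbc_driver_class" : "org.postgresql.Driver", "max_total_connections" : "20", "max_idle_connections" : "5", "max_wait_millis" : "-1", "jdbc_driver_downloads" : { "uri" : "https://jdbc.postgresql.org/download/postgresql-9.4.1212.jar" } }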
- Optional: In the context_settings section, specify name/value (java.lang.String) pairs to be bound to the JNDI context during deployment. You can configure Data-Admin-System settings or prconfig.xml settings to override standard behavior. For example:
{ "context_settings" : [ { "name" : "setting1", "value" : "value1"}, { "name" : "setting2", "value" : "value2"} ] }
- Optional: In the additional_jvm_arguments section, add lines to specify additional JVM command-line arguments. You can specify any number of arguments. Do not specify either the maximum heap size (-Xmx) or minimum heap size (-Xms); set those values later in this procedure. Enter the arguments to be passed to the JVM as they would appear on the command line. For example:
{ "additional_jvm_arguments" : [ { "argument" : "-Dsome.prop1=true" }, { "argument" : "-Dsome.prop2=false" } ] }
- Optional: In the file_store section, add lines to specify the storage locations for processing files without manual intervention. The settings depend on the file share type:
NFS Volume Share
Property name
Notes
file_spec
Enter the file specification destination storage reference for Pega applications such as file listeners and BIX. The actual storage location differs based on system settings. The value must include only lowercase letters and numbers, and must be a single word.
For example:
- For BIX, enter pegacloudrepository.
- For file listeners, enter file://Destination:/directory to watch the directory inside the Destination file storage location.
mount_options
Optional: Add additional mount options to be provided to the mount command during deployment.
remote_host
Enter the host name or IP address of the remote host where the NFS share is located. This host must be accessible from the virtual machine.
remote_path
Enter the remote path to the NFS share.
For example:
"file_store": { "nfs": [ { "file_spec": "file://cloud:/", "mount_options": "timeo=10,intr,lookupcache=positive,soft", "remote_host": "10.3.8.99", "remote_path": "/share/pega" } ] }
Amazon Web Services S3 BLOB Share
Property name
Notes
file_spec
Enter the file specification destination storage reference for Pega applications such as file listeners and BIX. The actual storage location differs based on system settings. The value must include only lowercase letters and numbers, and must be a single word.
For example:
- For BIX, enter pegacloudrepository.
- For file listeners, enter file://Destination:/directory to watch the directory inside the Destination file storage location.
bucket
Enter the AWS bucket name.
access_key
Enter your AWS access key. The key is optional if you are using EC2 instances with attached IAM roles; otherwise it is required.
secret_access_key
Enter your AWS secret key. The key is optional if you are using EC2 instances with attached IAM roles; otherwise it is required.
root_path
Optional: Enter the root path under the bucket for this share.
kms_encryption_key_id
Optional: To read or write encrypted files, enter the Amazon Key Management System (KMS) encryption key ID. For more information, see the Amazon KMS documentation.
For example:
"file_store": { "aws_s3": [ { "file_spec": "file://cloud:/", "bucket": "something.s3.amazonaws.com", "access_key": "BLDAI44QH8DHCEXAMPLE", "secret_access_key": "je7MtGbClwBF/2Zp9Utk/h3yCo8nvbEXAMPLEKEY", "root_path": "customer/data", "kms_encryption_key_id": "db3ca323-2833-5924-9ada-87c2164ed822" } ] }
Microsoft Azure BLOB Share
Property name
Notes
file_spec
Enter the file specification destination storage reference for Pega applications such as file listeners and BIX. The actual storage location differs based on system settings. The value must include only lowercase letters and numbers, and must be a single word.
For example:
- For BIX, enter pegacloudrepository.
- For file listeners, enter file://Destination:/directory to watch the directory inside the Destination file storage location.
bucket
Enter the Azure container name.
user_name
Enter your Azure user name.
secret_access_key
Enter your Azure secret access key.
root_path
Optional: Enter the root path under the bucket for this share.
For example:
"file_store": { "azureblob": [ { "file_spec": "file://cloud:/", "bucket": "https://myaccount.blob.core.windows.net/mycontainer", "user_name": "someuser", "secret_access_key": "Eby8vdM02xNOcqFlqUwJPLlmEtlCDXJ1OUzFT50uSRZ6IFsuFq2UVErCz4I6tq/K1SZFPTOtr/KBHBeksoGMGw==", "root_path": "/pega" } ] }
- Optional: In the memory section, set the minimum and maximum Java heap size for each Pega Platform node. The recommended system heap size is 4096 MB - 8192 MB, based on memory usage, garbage collection frequency, and the VM type for the node. For example:
{ "memory" : { "max_heap" : "8192", "min_heap" : "1024" } }
- Optional: In the ping_timeout line, specify the number of seconds to wait for a ping response before the server is considered unresponsive. In most cases, the default value of 1000 should be sufficient. If the Pega Platform application server fails to start within the specified amount of time, increase the value. If the value specified is too low, BOSH will try to deploy each Pega Platform instance type two times before failing completely.
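For example, to state the documented default explicitly (assuming the value is entered as a string like the other properties in this file):
"ping_timeout" : "1000"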
- Optional: In the additional_datasources section, enter additional JDBC data sources for external databases. Use the properties in the following table and enter the values in double quotation marks. If you plan to deploy Pega Marketing, add an additional data source for the ExternalMKTData with a Data-Admin-DB-Name of mktdatasource. For more information about settings, see the Pega Marketing Installation Guide and the DSM Operations Guide.
Property name
Notes
pega_db_name
Enter the database name. This value corresponds to a portion of the data source JNDI name. The corresponding Data-Admin-DB-Name that refers to this data source must have the same database name, and must refer to the JNDI reference at java:comp/env/jdbc/database_name.
driver_class
Enter the JDBC driver class for your database:
- IBM DB2 for Linux, UNIX, and Windows: com.ibm.db2.jcc.DB2Driver
- Microsoft SQL Server: com.microsoft.sqlserver.jdbc.SQLServerDriver
- Oracle: oracle.jdbc.OracleDriver
- PostgreSQL: org.postgresql.Driver
database_url
Enter the JDBC connection URL:
- IBM DB2 for Linux, UNIX, and Windows: jdbc:db2://host:port/dbname
- Microsoft SQL Server: jdbc:sqlserver://host:port;databaseName=dbName;selectMethod=cursor;sendStringParametersAsUnicode=false
- Oracle: Use either the service name or the SID:
- jdbc:oracle:thin:@host:port/service-name
- jdbc:oracle:thin:@host:port:SID
- PostgreSQL: jdbc:postgresql://host:port/dbname
user
password.secret
Enter the user name and password used to connect to the database.
connection_props
Optional: Enter any custom connection properties. For Apache Tomcat, the connection properties correspond to the connectionProperties attribute.
default_schema
Enter the schema name for unqualified object references.
max_total_connections
Optional: Enter the number of total connections in the connection pool.
max_idle_connections
Optional: Enter the maximum number of idle connections in the connection pool.
max_wait_millis
Optional: Enter the number of milliseconds to wait for a database to become available. For infinite wait times, enter -1.
additional_datasource_settings
Optional: Enter a space-delimited list of setting names and values in the format setting="value". For example:
validationQuery="SELECT 1"
validationQueryTimeout="5"
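As an illustrative sketch only: the exact nesting should follow your copy of the file (for example, whether the section is a JSON array and how password.secret is represented are assumptions here), and all names, hosts, and credentials are placeholders:
{ "additional_datasources" : [ { "pega_db_name" : "ExternalMKTData", "driver_class" : "org.postgresql.Driver", "database_url" : "jdbc:postgresql://host:5432/mktdb", "user" : "dbuser", "password" : { "secret" : "dbpassword" }, "default_schema" : "data" } ] }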
- Optional: In the log_forwarding_settings section, configure Cloud Foundry to forward log files. Pega Platform uses rsyslog for log forwarding. You can configure the release to forward logs to any tool that can consume syslog. All Pega Platform logs are read to the VM local log facility and forwarded to the target. Use the properties in the following table and enter the values in double quotation marks.
Property name
Notes
enabled
Set to true to enable log forwarding.
address
Enter the name or IP address of the target system.
port
Enter the port name or number to use when connecting to the target system.
protocol
Enter the type of protocol to use:
- 'tcp'
- 'udp'
- 'relp'
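A minimal sketch, assuming the section is a flat JSON object with the keys listed above; the address and port are placeholders:
{ "log_forwarding_settings" : { "enabled" : "true", "address" : "syslog.mycompany.com", "port" : "514", "protocol" : "tcp" } }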
- Optional: Configure a site from which to download Apache Tomcat and create Tomcat users.
Property name
Notes
tomcat_download_uri
The URL to download your Apache Tomcat version.
tomcat_download_user
The user name for Apache Tomcat downloads.
tomcat_download_password
The password for the Apache Tomcat user.
tomcat_roles
Enter a string array of Apache Tomcat role names. The PegaDiagnosticUser role is created automatically. Do not enter PegaDiagnosticUser in this array.
tomcat_users
Enter an object array to define Apache Tomcat users. Each object represents an Apache Tomcat user and must include the following components:
- name: the user name
- password
- roles: a comma-delimited string of Apache Tomcat roles
For example:
[{"name":"admin", "password":"adminpass","roles":"PegaDiagnosticUser"}, {"name":"user1", "password":"pass1","roles":"role1,role2"}]
- Optional: In the update section, configure watch timeouts. Use the properties in the following table and enter the values in double quotation marks.
Property name
Notes
canaries
Enter the number of canary instances.
max_in_flight
Enter the number of non-canary instances to update in parallel.
canary_watch_time
Enter the maximum number of milliseconds that Cloud Foundry waits for a healthy message from a preliminary canary job. If Cloud Foundry does not receive a healthy message before the timeout, the job fails and Cloud Foundry does not load other jobs.
update_watch_time
Enter the number of milliseconds that Cloud Foundry waits for a healthy message from a preliminary job that runs after a tile update. If Cloud Foundry does not receive a healthy message before the timeout, Cloud Foundry abandons the tile update.
serial
Optional: Specify whether to deploy Pega Platform instances sequentially.
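For example, assuming the section is a flat JSON object with the keys above and using purely illustrative values (one canary, serial updates, and ten-minute watch times):
{ "update" : { "canaries" : "1", "max_in_flight" : "1", "canary_watch_time" : "600000", "update_watch_time" : "600000", "serial" : "true" } }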
- Delete all sections that still include placeholder values.
- Save and close the file.
Configuring Pega Platform tables for long PCF 2.3 VM names
In PCF 2.3, the VMs that are created for service instances have very long names. Pega Platform uses host names to identify a given node within a Pega cluster, and the long host names used by PCF 2.3 exceed the size of the database columns that hold node host names. Increase the size of the columns so that individual nodes can function correctly.
Run the following commands against the PR_SYS_STATUSNODES database table in the Pega data schema to increase the column sizes:
ALTER TABLE <data schema>.pr_sys_statusnodes ALTER COLUMN pynodename TYPE VARCHAR(192);
ALTER TABLE <data schema>.pr_sys_statusnodes ALTER COLUMN pxinsname TYPE VARCHAR(320);
ALTER TABLE <data schema>.pr_sys_statusnodes ALTER COLUMN pzinskey TYPE VARCHAR(352);
Deploying a Pega Platform cluster
After Cloud Foundry is connected to the Pega Service Broker, and you set the cluster-specific properties in the custom_properties.json file, you can deploy Pega Platform clusters. The application developer performs these steps:
- Deploy your Pega Platform database and verify that it is accessible remotely.
- Obtain the following information:
- Pega Platform access URL – Defined in the custom_properties.json file.
- Pega Platform service name – Defined in the Pega Service Broker manifest or in PCF Ops Manager.
- Pega service plan name
- Service name – A unique name that identifies the service. For example, pega_crm_app, warranty-test-2, or 7-3-1-test-environment. You will use this service name when using CF commands to update or manage the service.
- Run cf marketplace and verify that pega-service appears in the list of available services.
- To create a service, run a cf create-service command:
- To use the settings in the custom_properties.json file:
- Verify that the custom_properties.json file includes a unique route_uri value.
- Run a command similar to the following:
cf create-service pega-service service-plan-name MyClusterName -c <path to the custom_properties.json file>
- To use the command line to set the property values, run a command similar to the following, where the route_uri property is mandatory:
- Linux:
cf create-service pega-service service-plan-name myClusterName -c '{"jdbc_url" : "jdbc:postgresql://myService:5432/myDB", "jdbc_user" : "user", "jdbc_password" : "password", "jdbc_rule_schema" : "rules", "jdbc_data_schema" : "data", "route_uri" : "pegacluster.mycompany.com" }'
- Windows:
cf create-service pega-service service-plan-name myClusterName -c "{\"jdbc_url\" : \"jdbc:postgresql://myService:5432/myDB\", \"jdbc_user\" : \"user\", \"jdbc_password\" : \"password\", \"jdbc_rule_schema\" : \"rules\", \"jdbc_data_schema\" : \"data\", \"route_uri\" : \"pegacluster.mycompany.com\" }"
- Determine whether the service is successfully created. Run the following command:
cf service service-name
Where service-name is the unique name that identifies this service. You might need to run this multiple times until you see a create succeeded message.
Updating a Pega Platform cluster
Update a cluster to change any cluster configuration information. Run a command similar to the following:
- Linux:
cf update-service service-name -c '{"jdbc_url" : "jdbc:postgresql://myService:5432/myDB", "jdbc_user" : "user", "jdbc_password" : "password", "jdbc_rule_schema" : "rules", "jdbc_data_schema" : "data", "route_uri" : "pegacluster.mycompany.com" }'
- Windows:
cf update-service service-name -c "{\"jdbc_url\" : \"jdbc:postgresql://myService:5432/myDB\", \"jdbc_user\" : \"user\", \"jdbc_password\" : \"password\", \"jdbc_rule_schema\" : \"rules\", \"jdbc_data_schema\" : \"data\", \"route_uri\" : \"pegacluster.mycompany.com\" }"
Accessing the Pega Platform
To access Pega Platform, navigate to the Pega Platform URL: route_uri/prweb
Where route_uri is provided in the custom_properties.json file.
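For example, with the route_uri used in the earlier sketches, the URL would be similar to https://pegacluster.mycompany.com/prweb (the host name is only a placeholder).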
Troubleshooting Pega Platform on PCF
If Pega Platform does not start successfully, connect to BOSH and review the log files. For more information, see the BOSH documentation and the Troubleshooting On-Demand Services documentation from Pivotal.