
This content has been archived and is no longer being updated.

Links may not function; however, this content may be relevant to outdated versions of the product.

Webinar Q&A "Expediting Rollout with Automated Testing"

Updated on September 10, 2021


During each developer webinar, Pegasystems dedicates additional staff to answer your questions via chat. Below is a capture of the relevant questions from the Expediting Rollout with Automated Testing session on 28-NOV-07.

To view the webinar recording, see Webinar Archive: Expediting Rollout with Automated Testing.

Q-01: Is this feature available in PRPC v5.3 SP1?
A-01: The Automated Testing product is available starting with the v5.3 release. See "Automated Testing -- Running Test Cases and Test Suites" to identify the features supported. Some of the features shown during the webinar are not available in the current product and were shown as a technology preview.

Q-02: You mentioned PRPC v5.x. Do you have roadmaps for Pega's products?
A-02: Customers should go through their sales rep or professional services for product roadmaps. The Pega staff will then follow up with the appropriate product manager. Pegasystems does not publish roadmaps to the PDN or corporate Web site.

Q-03: Will there be an additional charge for this product?
A-03: Yes.

Q-04: Will this automated test product be compatible with PRPC/Smart Dispute 4.x versions?
A-04: Automated Testing is available starting with the PRPC v5.3 release. If frameworks are running on 5.3 or later releases, then the Automated Testing product will be available for those frameworks. In short, it depends on what version of PRPC the framework is on.

Q-05: Does this tool only work with the PRPC UI (i.e., is it tied to the harness), or does it work with non-UI flows as well?
A-05: The AutoTest features are tied directly into flow-processing activities, not into the PRPC UI. However, flow processing out of the box is tied to the PRPC UI. Pega has not tested these features with anything but the PRPC UI.

Q-06: How are functional and UI testing conducted? Is there a framework model to achieve this?
A-06: We deliberately decided against UI testing and instead test the foundation data and database commits. The basic overview of what we call functional testing is that a user runs through a flow while "recording" it and saves this off as a test case. Future runs of the flow against the test case are compared against the saved, known-good version. When a flow works end to end, it has been tested for functionality.

Q-07: We are implementing a pure BRE solution. How does this help me?
A-07: Developers can test their decision rules/logic used by the BRE.

Q-08: Do we need any additional automation tool, such as Quick Test Pro or Silk Test?
A-08: No additional testing products are required to run the AutoTest features.

Q-09: Do you envision this test framework being used for testing software projects that do not involve PRPC for the application solution?
A-09: The goal of these features is to test a PRPC application. That said, we support SOAP Service testing, so we do see testing other applications that are part of a PRPC solution in that way. Note, too, that this is not a framework.

Q-10: How do I enable the AutoTest feature?
A-10: The process of enablement is subject to change, but it will definitely be enabled per user. Note that in v5.3, PDN articles document how to enable the AutoTest features by using the @baseclass.AutoTestEnabled privilege.

Q-11: Does it work with (integrate with) other applications as well (Test Director, etc.)?
A-11: We have chosen not to limit the implementation by integrating with only a few test products. All data is eligible to be ported into any third-party product; however, no specific product has been tested or is recommended.

Q-12: Does this need a database configuration to store the values for the differences while using the tool? Where are the values stored?
A-12: Results of a test case run (differences, if any) are stored in instances of Data-AutoTest-Result-Case. Any system that has the AutoTest features comes with this database configuration already set up.

Q-13: The presentation showed flows, decision tables, and decision trees. Can activities be run separately with automated test cases?
A-13: In 5.3, flows, decision tables/trees, and SOAP services are the rules for which test cases can be created.

Q-14: Does the unit testing feature have coverage analysis over the code; that is, which parts of the code are hit most and least, and on which section an exception occurs? This would help make the code more efficient.
A-14: No, not at this time.

Q-15: Can you use Tracer on these test case scenarios?
A-15: Absolutely. Running a record against a test case executes the same rules as running the record on its own, and you can use Tracer or any other debugging tool at the same time, exactly as you would otherwise.

Q-16: Can a test case originally created for one version be used for all higher versions of the same rule?
A-16: Yes. Test cases are rules, and previous versions will continue to work for later-versioned rules until/unless a new, higher version is created.

Q-17: How can we execute a test for multiple input values, or in a loop?
A-17: To execute many different paths of one record (with different inputs), multiple test cases are required. For decision tables, you saw in the webinar that many test cases were created when all possible paths were auto-generated. If these test cases are saved to a Rule-AutoTest-Suite, the suite can be run at one time, looping through the test cases to cover all paths of a record.

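A test suite of this kind is essentially a loop over recorded cases, each pairing inputs with an expected result. A minimal sketch follows; the case structure, the `run_suite` helper, and the `credit_decision` stand-in rule are all invented for illustration and are not Pega's API:

```python
def run_suite(test_cases, run_rule):
    # Loop through every recorded case in a "suite" and note whether the
    # rule's current output still matches the recorded expectation.
    results = {}
    for case in test_cases:
        actual = run_rule(**case["inputs"])
        results[case["name"]] = (actual == case["expected"])
    return results

def credit_decision(score):
    # Stand-in for a decision table: one "row" per score band.
    return "approve" if score >= 700 else "refer"
```

Running `run_suite` over cases covering each path of `credit_decision` exercises the whole table in one pass, which is the role the Rule-AutoTest-Suite plays for auto-generated decision-table cases.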
Q-18: Can you re-run the test without changing any values? This would show us what code-change differences look like.
A-18: We were unable to demonstrate this in the session, but if we had, you would have seen that no differences were found. Any differences that you might expect, such as work object ID, create time, etc., are filtered out by one or more Rule-Obj-Model records named AutoTestPropsToIgnore. We ship a version at all the major class levels (Work-, Assign-, etc.) that contains every property that we know will differ across test case executions without signifying a true difference. Customers can add their own models to this chain to either ignore additional properties or remove the command causing the out-of-the-box ignores. (Note that this is in addition to the "ignore" check boxes that you saw during the webinar, which only ignore the specified properties within that current test case.)

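The filtering described in A-18 amounts to excluding always-volatile properties before diffing two snapshots, with a base ignore list plus optional per-case additions. A hedged sketch of that idea (the property names and the `diff_pages` helper are illustrative; the real ignore list lives in the AutoTestPropsToIgnore model chain):

```python
# Pega-style names below (pyID, pxCreateDateTime) are used purely for
# illustration of the kind of properties that always differ between runs.
BASE_IGNORED = {"pyID", "pxCreateDateTime", "pxUpdateDateTime"}

def diff_pages(baseline, current, extra_ignored=()):
    # Compare two clipboard-page snapshots, skipping properties that
    # always vary between runs plus any case-specific "ignore" choices.
    ignored = BASE_IGNORED | set(extra_ignored)
    diffs = {}
    for prop in (set(baseline) | set(current)) - ignored:
        if baseline.get(prop) != current.get(prop):
            diffs[prop] = (baseline.get(prop), current.get(prop))
    return diffs
```

The `extra_ignored` argument plays the role of the per-case "ignore" check boxes: it suppresses a difference only for that comparison, while the base set applies to every run.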
Q-19: The ignore checkbox doesn't carry from one step to the next?
A-19: When executing the test case, the ignore checkbox displays only when we report differences found for a particular step; you will not see the option to ignore differences if none are found. Also, for each step we show the input values prior to step execution, followed by the results after the step executed. On the display of input values prior to step execution, you will see only the input values, with no checkbox for ignoring differences.

Q-20: For interfaces (SOAP or XML over HTTP), how does this testing feature help? Can the tester capture the entire port input and output?
A-20: We capture and replicate/compare the input and output as it is sent out of/into PRPC.

Q-21: When you run the flow marker again, and some change in the rules affects the page, will it reflect the changes?
A-21: As flow markers are a development tool and not a testing tool, in order to successfully use flow markers to always advance to the saved point in a flow, no differences must be encountered. When a flow marker finds differences in the steps it is skipping over, it will present those differences to the user and leave the flow on the step that first encountered them.

Q-22: It was shown how you can refer back to the process flow to see where you are in the automation. Do you have to create a process flow in Pega before you can create a test case?
A-22: Test cases are currently created by storing a "known good" state from an initial, successful run of an existing process. Future enhancements involve more test-driven development features.

Q-23: Is there any way to set verification points that will be shown as passing (when the actual and expected results match)?
A-23: Currently, every step is treated as a verification point, and if no differences are returned, the results are "as expected," which could be seen as a "pass."

Q-24: Is this only data-based testing, or can we validate GUI changes?
A-24: This is data-based testing only; we deliberately do not validate GUI changes unless those changes cause the underlying data to change.

Q-25: Do the test cases work when we use generated Java properties?
A-25: Sure, there is no special requirement about how the properties are created for them to be tested. All testing is done at the clipboard or database level, so as long as the properties work, they will be just fine for AutoTesting.

Q-26: Can there be multiple flow markers in one flow, and will it stop at each?
A-26: Yes.

Q-27: How can I capture or set an expected value as an exception or a negative result?
A-27: Negative result testing would be an enhancement to the current features. Currently, setting a value to be ignored would allow a test case to pass with a difference, but that is not a true negative test.

Q-28: How does PRPC know to open the test page when a rule is run?
A-28: Test cases are integrated with the Run feature. Whenever a user who has the AutoTest feature enabled runs a rule manually, they are always either recording a test case (to the clipboard, which they can choose to save to the database) or playing back a previously saved test case selected from the drop-down of test cases available for that rule.

Q-30: The presentation showed properties on pyWorkPage. What happens to properties on temp pages or pyWorkPage.XXXPage?
A-30: Any page of the clipboard that has differences (even embedded ones) works the same way pyWorkPage does.

Q-31: Does the tool have the ability to show a comparison of the results if the same test was run several times?
A-31: Yes. When the tests are run via the suite/agent (in the background), results are saved and can be compared.

Q-32: Does this feature enable us to compare "desired result" vs. "actual result"? Examples of results can be any action/event, such as (a) sending email, (b) updating a commit action to the database, (c) assignment of a work object, (d) exception/failure, (e) screen change.
A-32: The AutoTest features compare every clipboard page and database commit that occurs during flow execution, so yes, we are comparing events such as (a) sending email, (b) committing anything to the database, (c) anything to do with an assignment, and (d) any exceptions/failures. When it comes to screen changes, however, the AutoTest features deliberately work one layer below that, so existing test cases need not be changed for different style sheets or flow action layouts. Only when the new screen display changes the data (a property entry field is removed from the screen, for example) would it be noted.

Q-33: How can I activate the agent to run the test suite?
A-33: The Agent-Queue that runs the test suites is the shipped Pega-ProCom one (Correspondence, SLA events, & Bulk Processing), and the activity it executes every 5 minutes is Rule-AutoTest-Case.RunTestSuitesFromAgent. If it is not enabled on your server, that is the one to target.

Q-34: How can we execute tests with multiple users, and how can we validate dependencies on user role?
A-34: Test suites specify a user to use for running the tests. To test with multiple users, simply duplicate a Rule-AutoTest-Suite record for as many user types as you wish to test, changing the user ID field in each suite. When you compare results, you are comparing the same tests run with different users.

Q-35: Is there any way to group a set of test cases by the name of the rule for which they were created? For example, this set of test cases is for the CustomerStatus rule.
A-35: Yes, test cases can be automatically sorted by the record for which they were created; it is part of their key structure.

Q-36: Can you run a test case multiple times (i.e., looping 50 times)?
A-36: Yes, you could set a suite record up to run a test case this way. Note that this is not a substitute for true application load testing, because running tests this way does not mimic a true environment.

Q-37: Can we run the test cases automatically when we would like to perform volume testing?
A-37: Yes, running the test cases automatically is done by using a Rule-AutoTest-Suite record to group and run test cases automatically.

Q-38: Are the tests data-driven, or is each test case run based on static data?
A-38: The features demoed in this session require complete, working rules; the test case essentially takes a snapshot of how the rule works, and all subsequent runs of the test case are compared against that snapshot.

Q-39: Can we schedule the execution of the test cases at a particular time of the day, every day?
A-39: No, not at this time.

Q-40: In automated testing, how can I use parameterized inputs in large numbers and allow the parameterized values for each input property to be unique?
A-40: Each test case is an individual path through a record with defined inputs. In the case of date inputs, we allow variable inputs to account for situations where you'd always like to pass in an input of "today" or a birth date of "18 years ago yesterday"; otherwise, you should create distinct test cases and then run them together in a test suite to run them all at once.

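The variable date inputs mentioned in A-40 work because the symbolic value is resolved at run time, so the same recorded test case stays valid on any day. Here is a hedged sketch of that resolution step; the `"today-N"`/`"today+N"` mini-syntax is invented for the example (it only handles day offsets, not Pega's actual syntax):

```python
from datetime import date, timedelta

def resolve_date_input(spec, today):
    # Resolve a symbolic date input at run time so the same test case
    # works on any day. Specs supported by this sketch: "today",
    # "today-N" (N days ago), and "today+N" (N days ahead).
    if spec == "today":
        return today
    sign = 1 if "+" in spec else -1
    _, n = spec.replace("+", "-").split("-")
    return today + timedelta(days=sign * int(n))
```

A real implementation would resolve the spec against the clock at each run, so a case recorded in November still passes in December without editing its inputs.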
Q-41: Can I kick off test suite execution via Ant?
A-41: Test suite execution is handled by an activity initiated by the PRPC agent-processing functionality, so you can call into your PRPC server however you would like in order to execute that same activity.

Q-42: How do you do stress testing? By this I mean load testing with an incremental number of users until the system crashes. We would like to know, for example, that the current configuration has a 5-minute response time for 150 concurrent users and crashes beyond 250 concurrent users.
A-42: We use OpenSTA, an open-source tool that behaves much like LoadRunner. It captures the HTTP traffic and then provides a scripting language to customize the script for concurrent user execution. We use this tool and process internally for both nightly performance testing and scale testing.

Q-43: Does the tool allow for test-driven development (i.e., writing the test before the rule, and then writing the rule to pass the test)?
A-43: Currently, no. A rule definition is required for us to record a "known good" state to store in the test case that is being created.

