Testing Text Analyzer rules

You can test the performance of a Text Analyzer rule after you configure it to perform the natural language processing tasks that fulfill your business requirements.

You can use real-life data, such as Facebook posts, tweets, or blog entries, to check whether your configuration produces the expected results. Testing helps you discover potential issues with your configuration and fine-tune the rule, for example by retraining text analytics models, modifying the topic detection granularity, or changing the neutral sentiment score range.
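The neutral sentiment score range determines when a score is reported as Neutral rather than Positive or Negative. The following is a minimal sketch of that behavior, assuming a generic score scale from -1 to 1 and an example neutral band of -0.25 to 0.25; the function name and boundaries are illustrative assumptions, not Pega code.

    def sentiment_label(score: float, neutral_range=(-0.25, 0.25)) -> str:
        """Map a sentiment score in [-1, 1] to a label by using a neutral band."""
        low, high = neutral_range
        if score < low:
            return "Negative"
        if score > high:
            return "Positive"
        return "Neutral"

    print(sentiment_label(0.6))   # Positive
    print(sentiment_label(-0.1))  # Neutral

Widening the neutral band causes more borderline scores to be reported as Neutral; narrowing it pushes them toward Positive or Negative.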
  1. In the Records panel, click Decision > Text Analyzer.
  2. In the Text Analyzer rule form, click Actions > Run to open the Run window.
  3. In the Run window, in the Sample text field, paste the text that you want to analyze.
  4. Click Run.
  5. View the test run results (an illustrative sketch of the reported results follows this list):
    • In the Overall sentiment section, view the aggregated sentiment of the analyzed document, the accuracy score, and the detected language. Each sentiment type is color-coded.
      The following highlight colors are used to identify the sentiment of the text:
      • Green – Positive
      • Gray – Neutral
      • Red – Negative
    • In the Category section, view the categories that were identified in the document. These categories are part of the selected taxonomy. You can also view the sentiment and confidence score for each category.
    • In the Intent section, view the detected intent types and the associated confidence scores. The analyzed sample can contain multiple intent types.
    • In the Text extraction section, view the entities that were identified in the document, such as auto tags or keywords. You can also view the summary of the analyzed text and highlight, in the original text, the content that was extracted to form the summary.
    • In the Topics section, view the topics that the Text Analyzer detected in the document.
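The following is a hypothetical sketch, written as a plain Python data structure, of the kind of information that a test run reports across these sections. The field names, sample text, and values are illustrative assumptions, not the actual Pega data model or API.

    # Hypothetical test input and output; names and values are illustrative
    # assumptions, not the Pega data model.
    sample_text = "I was charged twice on my June invoice. Please fix this."

    sample_result = {
        "overall_sentiment": {"label": "Negative", "score": -0.62, "language": "en"},
        "categories": [
            {"name": "Billing", "sentiment": "Negative", "confidence": 0.81},
        ],
        "intents": [
            {"type": "Complaint", "confidence": 0.74},
        ],
        "text_extraction": {
            "entities": ["June invoice"],
            "summary": "Customer reports a duplicate charge on the June invoice.",
        },
        "topics": ["billing dispute"],
    }

    # Each detected category carries its own sentiment and confidence score.
    for category in sample_result["categories"]:
        print(category["name"], category["sentiment"], category["confidence"])

If the reported results differ from what you expect, adjust the configuration, for example by retraining the model or changing the neutral sentiment score range, and then run the test again.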