Testsigma Automation Standards and Best Practices


Testsigma Automation Standards emphasise the reusability of automated test cases to enhance the testing process and maximise efficiency. Quality engineers can accelerate the overall testing process by leveraging this reusability. Successful implementation requires a solid understanding of test automation best practices, which enable the setup of repetitive, thorough, and data-intensive tests. These best practices ensure reliable and accurate results while optimising testing efforts.


Test Case Structure and Execution

  1. Write small, atomic, and independent test cases so they remain focused, modular, and easy to maintain.
  2. Use Soft Assertions wherever possible. Soft assertions allow test execution to continue even if a verification step fails and provide more comprehensive test results.
  3. Use Dynamic Waits to improve test efficiency and reduce the chances of false positives or negatives in test results.
  4. Structure your test cases in the AAA pattern with three distinct sections: Arrange, Act, and Assert. In the Arrange section, set the preconditions for the test; in the Act section, perform the actions under test; in the Assert section, verify the expected outcomes (see the example below).
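Example:

A minimal sketch of an AAA-structured test case (the application, element names, and step wording are illustrative, not Testsigma's exact NLP grammar):

  • Arrange: Navigate to https://shop.example.com and log in with a test account.
  • Act: Search for "wireless mouse" and add the first result to the cart.
  • Assert: Verify that the cart count shows 1 and the cart total matches the product price.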

Assertions and Verifications

  1. To understand assertions, think of them as verifications: they define the expected outcomes of automated test cases and specify the validations to be performed at specific points during execution.
  2. Navigate to Help > Actions List on the Test Case details page to find NLPs with assertions in Testsigma. For more information, refer to Actions List.
  3. By default, a failed verification marks the overall test case as failed: the remaining test steps are skipped and test case execution is aborted.
  4. To implement soft assertions for scenarios that require the remaining steps to run after a test step failure, follow the steps below. For more information, refer to Test Step Settings:

    • Hover over the test step, click Option, and choose Step Settings from the dropdown.
    • Uncheck Stop Test Case execution on Test Step failure and click Update.
    • You can configure specific steps to continue executing even if verification fails.
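Example:

An illustrative scenario: in a checkout test, configure the step that verifies a promotional banner this way, so that even if the banner check fails the test still verifies the order total, shipping address, and confirmation message.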

Test Case Organisation and Management

  1. Filter, segment, and organise test cases for easy identification to streamline test management processes and quickly locate specific tests.
  2. Label or map relevant requirements to test cases to facilitate filtering and improve accessibility. Users can filter and save test cases in separate views based on labelled or mapped requirements.
  3. During test case creation or editing, you can add labels. The label field is available by default in the test case. For more information, refer to Requirements and Labels.
  4. You can Save Filters to quickly access and manage test cases associated with a particular functionality or scenario, such as those related to login. For more information, refer to Save Test Case Filter.

Customisation and Extensibility

  1. You can use add-ons to extend Testsigma's repository of actions and create custom NLPs for specific actions that are not available in the built-in Actions List.
  2. Share your add-ons with the test automation community, or leverage existing ones, through the Add-ons Community Marketplace. Add-ons provide additional functionality and expand the capabilities of Testsigma. For more information, refer to Create an Add-on.
Example:

You create an add-on for verifying text from two DOM elements.
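Once installed, such an add-on could expose a custom NLP action such as Verify that the text of element1 matches the text of element2 (the phrasing here is illustrative), which would then appear alongside the built-in actions and could be reused like any other test step.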


Reusability and Modularity

  1. To avoid duplication and simplify test maintenance, use Step Groups as common reusable functions across test cases. Step Groups promote modular test design and easy maintenance by separating reusable components from the test flow. Any changes made to a Step Group will be reflected in all test cases that invoke it. For more information, refer to Step Groups.
Example:

Create a Step Group to reuse login functionality in multiple test cases.
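A sketch of what such a Step Group might contain (the URL, element names, and step wording are illustrative):

  • Navigate to https://app.example.com/login
  • Enter @|username| in the Username field
  • Enter @|password| in the Password field
  • Click the Login button
  • Verify that the Dashboard header is displayed

Any test case that needs an authenticated session can then invoke this Step Group as its first step.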

  2. Use REST API Steps to automate redundant UI actions. Performing these actions through REST API steps improves test stability and reduces test execution time compared to driving the UI (see the example below). For more information, refer to Rest API.
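Example:

A sketch of replacing a UI flow with a REST API step (the endpoint, payload, and variable name are hypothetical): instead of registering a user through the signup form, add a REST API step that sends POST https://app.example.com/api/users with the body {"username": "testuser1", "password": "Secret@123"}, verify that the response status is 201, and store the returned user id in a runtime variable such as $|userId| for later steps.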

Element Management

  1. Create elements with proper naming conventions to enable reuse in multiple test cases. For more information, refer to Create an Element.
Example:

Use descriptive names such as "UsernameInput" or "LoginButton" to make them easy to identify.

  2. Map appropriate context details when you create elements inside iFrames or Shadow DOM contexts. Mapping context details ensures that elements are correctly identified and interacted with within their specific contexts. For more information, refer to Shadow DOM Element and Create a Shadow DOM element.
  3. Save filters and create views based on screen names to access elements easily, and check for the presence of an element in Testsigma's repository before recreating it; filters make this kind of element management simpler. For more information, refer to Save Element Filters.
Example:

Create a view that displays elements related to the "Login" screen for quick reference.


Variables and Scopes

Testsigma test data variables have three scopes; each scope's description, usage, and an example are listed below.

Environment
  • Description: The value stays constant during test execution, and environment variable values cannot be overwritten. The variable can be accessed from any test case in any test suite. To create an Environment, navigate to Test Data > Environments > Create. For more information, refer to Environments.
  • Usage: Define base URLs or configuration settings specific to the environment. Create test steps using the * data type, for example * url.
  • Example: //button[text()='*|url|']

Runtime
  • Description: The values stay the same throughout a sequential test run, and other tests can update them. For more information, refer to Runtime Variable.
  • Usage: Store session-specific data or dynamic values during test execution. Create test steps using the $ data type, for example $ divText.
  • Example: //button[text()='$|divText|']

Test Data Profile
  • Description: Can be linked to specific test cases, and the values in test data profiles can be updated from other test cases. To create a Test Data Profile, navigate to Test Data > Test Data Profile > Create. For more information, refer to Test Data Profile.
  • Usage: Use for data-driven testing and for maintaining test data sets. Create test steps using the @ data type, for example @ username.
  • Example: //button[text()='@|username|']
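
For instance, a short sketch of how the three scopes might appear in test steps (the variable names and step wording are illustrative, not exact NLP grammar):

  • Navigate to *|url| to open the environment-specific base URL.
  • Enter @|username| from the linked Test Data Profile in the Username field.
  • Store a value generated during the run, such as an order id, in a runtime variable and reuse it later as $|orderId|.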

Data-Driven Testing

  1. Enable the data-driven toggle in test cases and use Test Data Profiles to perform the same action with different test data sets for data-driven testing. For more information, refer to Data-Driven Testing.
  2. Test Data Profiles use a key-value pair format to store project configuration data, database connection details, and project settings for easy access and reuse of test data.
Example:

Create a Test Data Profile named "ConfigData" to store configuration-related test data.

  3. Link test cases to test data profiles and data sets using the @ parameter test data type in NLP steps; this lets you use specific columns from the test data set in your test steps.
Example:

Link login credentials to a test data profile and use it to test different user logins in a test case.
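A sketch of what such a profile could look like (the profile name, columns, values, and step wording are hypothetical): a Test Data Profile named "LoginUsers" with columns username and password and one data set per user, for example (admin, Admin@123) and (viewer, Viewer@123). With the data-driven toggle enabled, a step such as Enter @|username| in the Username field runs once for each data set.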


Test Data Types

The data types, their usage, and example values:

  • Plain Text: Used for storing general textual data. Examples: "Hello World", "Test123"
  • @ Parameter: Dynamically changeable values in a test case. Examples: @ username, @ password
  • $ Runtime: Values assigned or updated during test execution. Examples: $ name, $ currenttime
  • * Environment: Stores information about the current environment. Examples: * url, * website
  • ~ Random: Generates random values within specified constraints. Example: a random item from a list
  • ! Data Generator: Generates test data based on predefined rules. Example: ! TestDataFromProfile :: getTestDataBySetName
  • % Phone Number: Stores phone numbers. Example: % +123456789
  • & Mail Box: Stores email addresses. Example: & automation@name.testsigma.com

Configuration for Test Execution

  1. Upload attachments for test steps in Test Data > Uploads; the maximum file size is 1024 MB. The system always considers the latest version of the uploaded file. For more information, refer to Uploads.
  2. Configure Desired Capabilities for cross-browser testing with specific browser configurations. You can configure Desired Capabilities for ad-hoc runs and test plans (see the sketch below). For more information, refer to Desired Capabilities.
Example:

Specify desired capabilities for the targeted test environment, such as browser version or device type.
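A minimal sketch of capability name-value pairs for a desktop web run (the capability names and values depend on the target platform and device lab; these are illustrative):

  • browserName: chrome
  • browserVersion: 120
  • platformName: Windows 11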

  3. Ensure test cases are in the Ready state before adding them to a Test Suite, and group related test cases into test suites for better organisation and execution. For more information, refer to Test Suites.
Example:

Create a "Login Suite" and add all relevant login-related test cases for efficient execution.


Execution and Test Plan Run

  1. Run test cases and test plans in Headless mode to reduce execution time and eliminate element loading time. For more information, refer to Headless Browser Testing.
Example:

To achieve faster test execution, execute the test plan without a visible browser.

  2. Use the Partial Run option in the Test Plan to exclude consistently failing test suites from runs; you can exclude or disable tests for execution from the Test Machines & Suites Selection in the Test Plan. For more information, refer to Partial Run.
  3. Use the Schedule feature to run the test plan automatically without manual intervention. For more information, refer to Schedule a Test Plan.
Example:

Schedule the test plan to run unattended during non-business hours.


Testsigma Recorder Extension

  1. Use the Testsigma Recorder Extension to record user interactions on web applications. Customise and modify the recorded test steps to align with the desired test case behaviour. For more information, refer to Recording Test Steps.
  2. Use the Automatic Element Identification feature of the recorder extension to easily capture elements, and apply validations and verifications during recording to ensure that test steps include the necessary assertions.

Third-Party Integration

Avoid relying on third-party UI elements for UI actions; instead, use APIs or a mock server to simulate actual scenarios in the Application Under Test (AUT). This reduces the fragility of tests (see the example below).
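Example:

A sketch of this approach (the provider and endpoint are hypothetical): instead of driving a third-party payment provider's hosted page through the UI, point the AUT at a mock payment service, use a REST API step to send the charge request to https://mock-payments.example.com/charge, verify that the response status is 200, and continue the UI flow from the payment confirmation page.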