Testsigma Automation Standards and Best Practices
Testsigma Automation Standards emphasise the reusability of automated test cases to enhance the testing process and maximise efficiency. Quality engineers can accelerate the overall testing process by leveraging this reusability. Successful implementation requires a solid understanding of test automation best practices, which enable the setup of repetitive, thorough, and data-intensive tests. These best practices ensure reliable and accurate results while optimising testing efforts.
Test Case Structure and Execution
- Write small, atomic, independent, and autonomous test cases to focus, modularise, and maintain them easily.
- Use Soft Assertions wherever possible. Soft assertions allow test execution to continue even if a verification step fails and provide more comprehensive test results.
- Use Dynamic Waits to improve test efficiency and reduce the chances of false positives or negatives in test results.
- Structure your test cases in the AAA pattern with three distinct sections: Arrange, Act, and Assert. In the Arrange section, you set the preconditions for the test; in the Act section, you perform the actions under test; and in the Assert section, you verify the expected outcomes.
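The AAA structure can be sketched in plain code. This is a generic illustration (the `ShoppingCart` class is hypothetical), not Testsigma syntax:

```python
# Arrange-Act-Assert: each section is kept distinct so the test stays
# small, atomic, and easy to maintain.

class ShoppingCart:
    """Hypothetical system under test."""
    def __init__(self):
        self.items = []

    def add(self, name, price):
        self.items.append((name, price))

    def total(self):
        return sum(price for _, price in self.items)


def test_cart_total():
    # Arrange: set the preconditions for the test.
    cart = ShoppingCart()

    # Act: perform the actions under test.
    cart.add("notebook", 3.50)
    cart.add("pen", 1.25)

    # Assert: verify the expected outcome.
    assert cart.total() == 4.75


test_cart_total()
```

Keeping the three sections separate makes it obvious what a test needs, what it does, and what it checks.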
Assertions and Verifications
- Assertions define the expected outcomes of automated test cases; verifications are the specific validations performed at defined points during execution.
- Navigate to Help > Actions List on the Test Case details page to find NLPs with assertions in Testsigma.
- By default, a failed verification marks the overall test case as failed: the remaining test steps are skipped and the test case execution is aborted.
To implement soft assertions for scenarios that require the remaining steps to execute after a test step failure, follow the steps below (for more information, refer to Test Step Settings):
- Hover over the test step, click Option, and choose Step Settings from the dropdown.
- Uncheck Stop Test Case execution on Test Step failure and click Update.
- You can configure specific steps to continue executing even if verification fails.
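Outside Testsigma's step-settings toggle, the soft-assertion idea can be illustrated with a small collector that records failures instead of aborting. This is a generic sketch, not Testsigma's implementation:

```python
class SoftAssert:
    """Collects verification failures instead of stopping at the first one."""
    def __init__(self):
        self.failures = []

    def verify(self, condition, message):
        # A failed check is recorded, and execution continues.
        if not condition:
            self.failures.append(message)

    def report(self):
        # The test is marked failed only at the end, with every failure listed,
        # giving more comprehensive results than an early abort.
        return "PASSED" if not self.failures else "FAILED: " + "; ".join(self.failures)


sa = SoftAssert()
sa.verify(2 + 2 == 4, "arithmetic broken")
sa.verify("Login" in "Logout page", "missing Login link")  # fails, but we continue
sa.verify(len("abc") == 3, "length check")
print(sa.report())  # FAILED: missing Login link
```

The benefit is visibility: a single run reports every failed verification rather than just the first.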
Test Case Organization and Management
- Filter, segment, and organise test cases for easy identification to streamline test management processes and quickly locate specific tests.
- Label or map relevant requirements to test cases to facilitate filtering and improve accessibility. Users can filter and save test cases in separate views based on labelled or mapped requirements.
- During test case creation or editing, you can add labels. The label field is available by default in the test case.
- You can Save Filters to quickly access and manage test cases associated with a particular functionality or scenario, such as those related to login. For more information, refer to Save Test Case Filter.
Customisation and Extensibility
- You can use add-ons to extend Testsigma's repository of actions and create custom NLPs for specific actions that are not available in the built-in Actions List.
- Share your add-ons with the test automation community, or leverage existing ones, through the Add-ons Community Marketplace. Add-ons provide additional functionality and expand the capabilities of Testsigma. For more information, refer to Create an Add-on.
For example, create an add-on that verifies that the text from two DOM elements matches.
Reusability and Modularity
- To avoid duplication and simplify test maintenance, use Step Groups as common reusable functions across test cases. Step Groups promote modular test design and easy maintenance by separating reusable components from the test flow. Any changes made to a Step Group will be reflected in all test cases that invoke it. For more information, refer to Step Groups.
Create a Step Group to reuse login functionality in multiple test cases.
- Use REST API Steps to automate redundant UI actions. Performing these actions through REST API steps improves test stability and reduces test execution time compared to using the UI. For more information, refer to REST API.
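The UI-versus-API trade-off can be sketched as follows. The `/api/login` endpoint is hypothetical, and the HTTP transport is injected as a stub so the example is self-contained:

```python
import json

def api_login(post, base_url, username, password):
    """Log in via the application's REST API instead of driving the UI.

    `post` is an injected HTTP transport (a requests-style callable);
    a stub is used here so the sketch is self-contained.
    """
    body = json.dumps({"username": username, "password": password})
    status, payload = post(base_url + "/api/login", body)
    if status != 200:
        raise RuntimeError("login failed with status %s" % status)
    return json.loads(payload)["token"]


# Stub transport standing in for a real HTTP client.
def fake_post(url, body):
    data = json.loads(body)
    if data["password"] == "secret":
        return 200, json.dumps({"token": "abc123"})
    return 401, json.dumps({"error": "bad credentials"})


token = api_login(fake_post, "https://example.test", "qa_user", "secret")
print(token)  # abc123
```

A setup step done this way skips page loads and element waits entirely, which is where the stability and speed gains come from.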
Element Management
- Create elements with proper naming conventions to enable reuse in multiple test cases. For more information, refer to Create an Element.
Use descriptive names such as "UsernameInput" or "LoginButton" to make them easy to identify.
- Map the appropriate context details when you create elements inside iFrame or Shadow DOM contexts. Mapping context details ensures elements are correctly identified and interacted with within those contexts. For more information, refer to Shadow DOM Elements.
- Save filters and create views based on screen names for easy access to elements, and check for the presence of an element in Testsigma's repository before recreating it. For more information, refer to Save Element Filters.
Create a view that displays elements related to the "Login" screen for quick reference.
Variables and Scopes
Scope | Description | Usage |
---|---|---|
Environment | Stores values specific to the current environment, such as application URLs. | Configuration that varies between environments. |
Runtime | The values are the same throughout a sequential test run; other tests can update them. For more information, refer to Runtime Variable. | Values assigned or updated during test execution. |
Test Data Profile | Stores test data as key-value pairs for reuse across test cases. | Data-driven testing and shared data sets. |
Data-Driven Testing
- Enable the data-driven toggle in test cases and use Test Data Profiles to perform the same action with different test data sets for data-driven testing. For more information, refer to Data-Driven Testing.
- Test Data Profiles use key-value pair format to store project configuration data, database connection details, and project settings for easy access and reuse of test data.
Create a Test Data Profile named "ConfigData" to store configuration-related test data.
- Linking test cases to test data profiles and data sets using the @ parameter test-data type in NLP allows you to use specific columns from the test data set in your test steps.
Link login credentials to a test data profile and use it to test different user logins in a test case.
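The data-driven idea amounts to iterating one check over a Test Data Profile represented as key-value rows. This is a generic illustration; the field names are hypothetical:

```python
# A Test Data Profile modelled as key-value rows: each data set drives
# one run of the same test logic.
login_profile = [
    {"username": "alice", "password": "pw1", "expect_success": True},
    {"username": "bob",   "password": "",    "expect_success": False},
]

def can_log_in(username, password):
    # Stand-in for the real login check in the application under test.
    return bool(username) and bool(password)

results = []
for row in login_profile:
    outcome = can_log_in(row["username"], row["password"])
    # Record whether the observed outcome matched the expectation for this row.
    results.append((row["username"], outcome == row["expect_success"]))

print(results)  # [('alice', True), ('bob', True)]
```

One test body, many data sets: adding a new scenario means adding a row, not a test case.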
Test Data Types
Data Type | Usage | Examples |
---|---|---|
Plain Text | Used for storing general textual data. | “Hello World", “Test123” |
@ Parameter | Dynamically changeable values in a test case. | @ username, @ password |
$ Runtime | Values assigned/updated during test execution. | $ name, $ currenttime |
* Environment | Stores information about the current environment. | * url, * website |
~ Random | Generates random values within specified constraints. | Random item from a list |
! Data Generator | Generates test data based on predefined rules. | ! TestDataFromProfile :: getTestDataBySetName |
% Phone Number | Stores phone numbers. | % +123456789 |
& Mail Box | Stores email addresses. | & automation@name.testsigma.com |
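The prefix conventions in the table can be pictured as a dispatch from prefix to data source. This is an illustrative sketch only; Testsigma resolves these prefixes internally:

```python
import random

# Illustrative stores behind each test-data prefix (sample values only).
parameters  = {"username": "qa_user"}          # @  - test data profile columns
runtime     = {"currenttime": "10:42"}         # $  - values set during execution
environment = {"url": "https://app.example"}   # *  - environment-specific values

def resolve(token):
    """Map a prefixed token to the store it reads from."""
    prefix, name = token[0], token[1:].strip()
    if prefix == "@":
        return parameters[name]
    if prefix == "$":
        return runtime[name]
    if prefix == "*":
        return environment[name]
    if prefix == "~":
        return random.choice(["red", "green", "blue"])  # random item from a list
    raise ValueError("unknown prefix: " + prefix)

print(resolve("@username"))   # qa_user
print(resolve("*url"))        # https://app.example
```

The prefix tells the engine where a value lives; the name tells it which value to fetch.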
Configuration for Test Execution
- Upload attachments for test steps in Test Data > Uploads and follow the maximum file size limit of 1024 MB. The system always considers the latest version of the uploaded file. For more information, refer to Uploads.
- Configure Desired Capabilities for cross-browser testing with specific browser configurations. You can configure Desired Capabilities for ad-hoc runs and test plans. For more information, refer to Desired Capabilities.
Specify desired capabilities for the target test environment, such as browser version or device type.
- Ensure you put test cases in the Ready state before adding them to a Test Suite. Organise relevant tests into test suites for better organisation and execution. For more information, refer to Test Suites.
Create a "Login Suite" and add all relevant login-related test cases for efficient execution.
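Desired capabilities are key-value pairs in the Selenium/Appium style. The names below are common examples, not a definitive Testsigma schema; actual capability names depend on the platform and grid:

```python
# Desired capabilities expressed as key-value pairs, Selenium/Appium style.
desired_caps = {
    "browserName": "chrome",
    "browserVersion": "120.0",
    "platformName": "Windows",
    "goog:chromeOptions": {
        "args": ["--headless=new", "--window-size=1920,1080"],
    },
}

def is_headless(caps):
    """Check whether the capability set requests a headless Chrome session."""
    args = caps.get("goog:chromeOptions", {}).get("args", [])
    return any(a.startswith("--headless") for a in args)

print(is_headless(desired_caps))  # True
```

The same mechanism carries vendor-specific options (here, Chrome arguments) alongside the standard browser, version, and platform fields.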
Execution and Test Plan Run
- Run test cases and test plans in Headless mode to reduce execution time and eliminate element loading time. For more information, refer to Headless Browser Testing.
To achieve faster test execution, execute the test plan without a visible browser.
- Use the Partial Run option in the Test Plan to exclude consistently failing test suites from runs; you can exclude or disable tests for execution from the Test Machines & Suites Selection in the Test Plan. For more information, refer to Partial Run.
- Use the Schedule feature to run the test plan automatically without manual intervention. For more information, refer to Schedule a Test Plan.
Schedule the test plan to run unattended during non-business hours.
Testsigma Recorder Extension
- Use the Testsigma Recorder Extension to record user interactions on web applications. Customise and modify the recorded test steps to align with the desired test case behaviour. For more information, refer to Recording Test Steps.
- Use the Automatic Element Identification feature of the recorder extension to easily capture elements and apply validations and verifications during recording to ensure that test steps include necessary assertions.
Third-Party Integration
Avoid relying on third-party UI elements for UI actions and instead use APIs or a mock server to simulate actual scenarios in the Application Under Test (AUT). This reduces the fragility of tests.
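A mock of a third-party dependency can be stood up with the standard library alone. The "payment" endpoint below is hypothetical; the point is that the test talks to a local stand-in rather than the real external service:

```python
# Minimal mock of a third-party service, so tests of the AUT don't depend
# on the real external endpoint being up or stable.
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class MockPaymentAPI(BaseHTTPRequestHandler):
    # Hypothetical third-party endpoint the AUT would normally call.
    def do_GET(self):
        body = json.dumps({"status": "approved"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # silence per-request logging

# Bind to port 0 so the OS assigns a free port.
server = HTTPServer(("127.0.0.1", 0), MockPaymentAPI)
threading.Thread(target=server.serve_forever, daemon=True).start()

url = "http://127.0.0.1:%d/charge" % server.server_port
with urllib.request.urlopen(url) as resp:
    result = json.load(resp)
server.shutdown()

print(result["status"])  # approved
```

Because the mock's responses are fixed, a failure in the test points at the AUT, not at third-party flakiness.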