Software Test Automation: False Positives & How to Avoid Them
Software Testing is primarily an information-gathering process that assists stakeholders in making informed decisions. However, gathering that information becomes difficult if the testing process is constantly interrupted by distractions. Software testing tools and test automation scripts can produce false positives if proper measures and considerations are not taken when setting up the testing and automation process.
In this article, I will discuss some of the common causes behind false positives and provide you with some tips and tricks to help you avoid them during your software test automation process.
Pic Credits: Designed by Milano83 / Freepik
We have all seen cases where an automated test runs and, despite everything working as expected, reports a “bug”. This situation is known as a “False Positive”. It occurs when a software tester or an automated test incorrectly concludes that the program failed or that its intent was not met.
False Positives are commonly observed due to:
- Errors in software test automation scripts
- Instability in the test environment
- Failures due to third-party libraries or cooperating processes, etc.
If you are into software test automation, you may have already seen false positives like:
- Automation Test Script Failure due to changing locators
- Test Pipeline Failure due to Build Issue(s), etc.
Now that we have an understanding of what a false positive is, let’s look at the common causes:
1. Flaky Test Automation Scripts: It is one of the most common reasons behind false positives during software test automation. Flakiness or instability can be due to many factors. Some of the popular ones are:
i. Poor design of test scripts
ii. Non-modular / non-maintainable code structure
iii. Non-clean code
iv. Poor testability of the application
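As an illustration of the first point, one common design flaw behind flaky scripts is waiting with fixed sleeps instead of polling for a condition. The sketch below is a minimal, hypothetical Python helper; the names are my own, not from any particular framework:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll a condition instead of sleeping a fixed amount.

    Returns True as soon as the condition holds, False on timeout.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return condition()

# Fragile style: time.sleep(3) followed by an assertion fails whenever
# the system is briefly slower than the hard-coded delay.
# Robust style: poll until ready, up to a bounded timeout.
ready_at = time.monotonic() + 0.3  # simulate a resource becoming ready
assert wait_until(lambda: time.monotonic() >= ready_at, timeout=2.0)
```

A bounded poll tolerates slow runs without inflating the happy-path duration the way a worst-case sleep would.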
2. Changing Locators (Automation Hooks): Regardless of the test automation platform, a common feature is the reliance of automation on some hooks or locators. If these hooks change too frequently or are not defined in a robust way, it may lead to frequent false positives followed by an investigation and fixing time. Commonly used automation hooks are:
i. JSON Paths
ii. XML Paths (XPath), etc.
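To see why the choice of hook matters, here is a minimal sketch using Python's standard-library `xml.etree.ElementTree`, which supports a limited XPath subset. The markup and locators are hypothetical:

```python
import xml.etree.ElementTree as ET

PAGE = """
<html>
  <body>
    <div>
      <div>
        <button id="submit-order" class="btn">Submit</button>
      </div>
    </div>
  </body>
</html>
"""

root = ET.fromstring(PAGE)

# Brittle locator: depends on the exact nesting depth.
# A single wrapper <div> added in a redesign breaks it.
brittle = root.find("./body/div/div/button")

# Robust locator: anchored to a stable attribute, independent of depth.
robust = root.find(".//button[@id='submit-order']")

print(brittle is not None)  # True today, but fragile
print(robust is not None)   # True, and survives layout changes
```

The second locator keeps working as long as the stable attribute survives, which is exactly the property a robust hook should have.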
3. Unstable Test Environment: There are a number of factors that can directly or indirectly affect the state of a system and the environment in which it is being tested. Some of such factors are:
i. The current and changing state of the system
ii. Dependent processes and their effects on the test environment
iii. Version changes
iv. Breaking contracts
4. Impractical Test Sequence: In some cases, a test depends on specific business logic. Ex: Reading software from a device makes sense only if software has already been written to the device. Testing the read functionality right after a delete-software test just produces an impractical test sequence and a false positive.
Valid sequence: Write Software → Read Software
Impractical sequence: Delete Software → Read Software
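One way to avoid such ordering traps is to have each test arrange its own precondition instead of depending on another test having run first. A minimal sketch, with a hypothetical `Device` stub standing in for real hardware:

```python
class Device:
    """Hypothetical stub standing in for a real device."""

    def __init__(self):
        self._software = None

    def write_software(self, image):
        self._software = image

    def read_software(self):
        return self._software

    def delete_software(self):
        self._software = None

def test_read_software():
    # Arrange the precondition inside the test itself,
    # instead of relying on a write test having run earlier.
    device = Device()
    device.write_software("firmware-v1.2")
    assert device.read_software() == "firmware-v1.2"

test_read_software()
```

Because the read test writes its own image first, it passes regardless of where it lands in the execution order.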
5. Configuration Issues: Most complex software has a configurable build and deployment system. When such configuration is not managed properly, it can lead to a range of configuration-related issues. Oftentimes, configuration is done manually, which can introduce unintended issues.
Also, as an automation engineer, does your test suite consider specific versions of each dependency? Does it also consider the test environment states? Do your automation scripts use the setup and teardown (cleanup) modules to set up the test environment?
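A cheap guard against such drift is an environment check that runs before the suite does. The sketch below uses only the standard library; the pinned minimum version and the idea of collecting "problems" are made-up examples, not a prescribed API:

```python
import sys
from importlib import metadata

# Hypothetical pin this suite was validated against.
MIN_PYTHON = (3, 8)

def installed_version(package):
    """Return the installed version of a package, or None if absent."""
    try:
        return metadata.version(package)
    except metadata.PackageNotFoundError:
        return None

def check_environment():
    """Collect environment problems so the suite can fail fast."""
    problems = []
    if sys.version_info[:2] < MIN_PYTHON:
        problems.append(
            f"needs Python >= {MIN_PYTHON}, got {sys.version_info[:2]}"
        )
    return problems

print(check_environment())  # [] when the environment matches the pin
```

Failing fast on a version mismatch turns an hour of debugging a "broken" test into a one-line error message.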
6. Script Development & Execution on One Specific Machine: Scripts that run only on a local machine are usually flaky. Script developers often ignore factors that may differ on other machines when developing against a single machine. Moreover, programs don’t pay attention to anything they haven’t been told to pay attention to, which leads to false positives.
False positives often end up creating a ripple of side effects and can impact your testing in ways you might not expect. Here is how:
1. Loss of time:
i. In going through logs
ii. In investigating the problem
iii. In reproducing the test steps
iv. In fixing the test scripts, environment, etc.
2. Noise in Automated Testing Cycles
3. Loss of credibility & integrity: The number of false positives from your software test automation is inversely proportional to the credibility of your test process.
4. Defocus on the actual information gathering process (testing)
“The more you go in one direction, the farther you get from the opposite one.”
Avoiding False Positives
The best strategy to deal with false positives is by avoiding them in the first place. To prevent false positives, here are some tips and tricks:
1. Auto-Healing Locators: If you have ever attempted GUI app automation, you already know the challenge of continuously changing locators. A good automation solution allows multiple locators per GUI element, which lets the system self-heal in the event of a locator failure. Ex: Testsigma provides auto-healing locators by default.
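Testsigma's implementation is not shown here, but the fallback idea behind self-healing can be sketched in a few lines. This hypothetical example uses `xml.etree.ElementTree` and made-up locators:

```python
import xml.etree.ElementTree as ET

PAGE = ET.fromstring(
    "<body><button class='btn-primary' data-test='checkout'>Checkout</button></body>"
)

def find_with_fallback(root, locators):
    """Try each locator in order; return the first match and the locator used."""
    for xpath in locators:
        element = root.find(xpath)
        if element is not None:
            return element, xpath
    return None, None

# The id the script originally used is gone after a redesign,
# so the lookup "heals" by falling back to the data-test attribute.
element, used = find_with_fallback(PAGE, [
    ".//button[@id='checkout-btn']",     # stale primary locator
    ".//button[@data-test='checkout']",  # fallback still matches
])
print(used)  # .//button[@data-test='checkout']
```

Logging which locator in the chain actually matched also tells you when the primary one has gone stale and should be updated.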
2. Robust Locators (Automation Hooks): Writing robust locators is a craft and requires a good understanding of the element lookup process. Learning how to write robust and stable locators is always a good investment.
3. Automatic Retry / Rerun(s): An automatic retry/rerun strategy helps you establish how consistent an issue is. In some cases, the failure is due to a momentary glitch in the test environment. Automated reruns confirm this and save you the time of investigating a false positive.
4. Scheduling test runs periodically to ascertain stability: A script that is not running is as good as a script that does not exist. Test scripts should be run periodically to verify the stability of the environment, product, test scripts, infrastructure, etc.
5. Develop & execute on separate machines: Running regression tests on machines other than the one they were developed on is a good strategy to avoid the pitfall of flaky scripts, since it exposes hidden machine-specific assumptions. You can also run headless tests or use Docker containers or cloud systems.
6. Plan Testing across Layers: Planning all testing on a single layer is a bad strategy in most cases. Testing should be spread across several layers, taking the flakiness index of each layer into account. Flakiness Index: UI > API > Unit
7. Refreshing Test Environment: Test Environment and state can lead to false positives in your test results. Things that generally help to avoid such issues are:
i. Resetting State
ii. Using Hooks
iii. Spinning up docker containers, etc.
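The reset-state idea can be sketched with a scratch directory that is created in setup and destroyed in teardown. This is a minimal standard-library illustration; real suites would reset databases, caches, or containers the same way:

```python
import os
import shutil
import tempfile

class FreshWorkspace:
    """Context manager giving each test a clean scratch directory."""

    def __enter__(self):
        self.path = tempfile.mkdtemp(prefix="test-env-")
        return self.path

    def __exit__(self, *exc):
        # Teardown: wipe the state so nothing leaks into the next test.
        shutil.rmtree(self.path, ignore_errors=True)

with FreshWorkspace() as workspace:
    state_file = os.path.join(workspace, "state.json")
    with open(state_file, "w") as handle:
        handle.write("{}")
    # ... the test body runs against a known-clean state ...
    assert os.path.exists(state_file)

# After teardown the directory is gone: no leftover state.
assert not os.path.exists(workspace)
```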
8. Verbose Logging & Observability: Detailed logs and observable test runs make every failure, genuine or false, faster to diagnose.
9. Using Contract Verification Scripts: If you are testing APIs that depend on contracts, it is always good to run scripts that:
i. Monitor Contracts
ii. Monitor Dependencies
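A contract monitor can be as simple as checking field names and types on a sample response. The contract and payloads below are made up for illustration:

```python
import json

# Hypothetical contract: field name -> expected JSON type.
USER_CONTRACT = {"id": int, "name": str, "active": bool}

def verify_contract(payload, contract):
    """Return a list of human-readable contract violations (empty = OK)."""
    violations = []
    for field, expected_type in contract.items():
        if field not in payload:
            violations.append(f"missing field: {field}")
        elif not isinstance(payload[field], expected_type):
            violations.append(
                f"{field}: expected {expected_type.__name__}, "
                f"got {type(payload[field]).__name__}"
            )
    return violations

response = json.loads('{"id": 42, "name": "Ada", "active": true}')
print(verify_contract(response, USER_CONTRACT))  # []

drifted = json.loads('{"id": "42", "name": "Ada"}')
print(verify_contract(drifted, USER_CONTRACT))
# ['id: expected int, got str', 'missing field: active']
```

Running such a check against the live dependency on a schedule surfaces contract drift as a clear report, rather than as a mysterious test failure downstream.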
For all our visual readers, I have summarized this entire article in the form of this mind map.
We hope you learned something about false positives and how to avoid them during the testing process through this article.
To avoid the most common false positives in your testing, check out Testsigma, a powerful open-source test automation platform.
Wondering why you should choose an open-source test automation tool? Read here: Reasons to Choose an Open-Source Test Automation Tool