False Positives and False Negatives in Software Testing
Both DevOps and Agile frameworks focus on rapid development and delivery, and test automation is crucial to enabling this. However, automation testing has its own challenges, and maintenance is one of them. If a test automation framework is not maintained correctly, automation testing starts to produce false positive and false negative results.
Table Of Contents
- 1 What is a False Positive in Software Testing?
- 2 What is a False Negative in Software Testing?
- 3 False Positives and False Negatives: How to Find?
- 4 How to Find False Negatives?
- 5 How to Find False Positives?
- 6 Why do false positives and false negatives occur?
- 7 Examples of false positives and false negatives
- 8 False Positives and False Negatives – Importance in Testing
- 9 Importance in test automation
- 10 How to Choose the Right Tool?
- 11 Best practices for reducing false positives and false negatives
- 12 Frequently Asked Questions
What is a False Positive in Software Testing?
A false positive is an outcome where the test wrongly reports the positive case.
In testing, this means a test case that should pass is marked as failed because of script issues. If you look at the application, everything might be working as expected; it is just that the automation script marked the test as failed. False positives occur mostly due to tools, dependencies, middleware, or sometimes invalid DOM locators.
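To make this concrete, here is a minimal, hypothetical sketch in Python. The page rendering and element names are invented for illustration; in a real framework the lookup would be a browser driver call. The application works correctly, but the script still uses an outdated element ID, so the test is reported as failed:

```python
# Hypothetical sketch: a false positive caused by an outdated locator.
# The application works, but the script looks for an element ID that
# was renamed, so the test is reported as failed.

def render_login_page():
    # The app's current (working) markup uses the new id "login-btn".
    return {"login-btn": "Log in"}

def test_login_button_visible():
    page = render_login_page()
    # The script still uses the old id "submit-btn" -> a script issue,
    # not an application bug.
    return "PASS" if "submit-btn" in page else "FAIL"

print(test_login_button_visible())  # reports FAIL although the app is fine
```

The fix belongs in the automation script (update the locator), not in the application.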
What is a False Negative in Software Testing?
A false negative is an outcome where the test wrongly reports the negative case.
In testing, this means a test case that should fail because of functional issues or defects instead ignores those defects and marks the test as passed. It can occur due to improperly constructed test cases, false assumptions made while writing the automation scripts, and so on.
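A common cause of this is a test that performs an action but never asserts on the outcome. The sketch below is hypothetical (the `login` function stands in for an application under test), contrasting a weak test that lets a defect slip through with one that catches it:

```python
# Hypothetical sketch: a false negative caused by a missing assertion.
# The login feature is broken, but the weak test never checks the
# outcome, so it is reported as passed.

def login(username, password):
    # Broken implementation: always rejects valid credentials.
    return {"status": "error", "logged_in": False}

def weak_test_login():
    login("alice", "s3cret")  # result is ignored -> defect slips through
    return "PASS"

def proper_test_login():
    result = login("alice", "s3cret")
    # Asserting on the actual outcome catches the defect.
    return "PASS" if result["logged_in"] else "FAIL"

print(weak_test_login())    # PASS  (a false negative)
print(proper_test_login())  # FAIL  (the defect is caught)
```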
False Positives and False Negatives: How to Find?
Both false positives and false negatives are common in test automation. However, false negatives have a bigger impact than false positives, as they mark a defective application as passing.
If such issues start occurring frequently in your application, your test automation reports become unreliable. Thus, you should be very careful while writing the tests.
Let’s understand in detail how to find the false positives and false negatives.
How to Find False Negatives?
Finding a false negative is more complicated than you might think, as a false negative inaccurately marks the test as passing. False negatives can be caught only through manual analysis. You might not be able to validate a complex application completely by hand, but you can put a strategy in place: whenever a specific feature changes, ensure that all the related test cases are validated manually and note down the expected results. The strategy then needs to be refined based on its effectiveness.
Building a robust framework and having the right strategy are key to reducing false negatives.
How to Find False Positives?
Unlike false negatives, finding false positives is easy. Standard practice in automation is to analyze the automation failures as and when the result/report appears. Before logging any issue as a defect, perform a double-check by reproducing the error manually.
For false positive scenarios, aim to go through the list of failures manually and analyze them before concluding.
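One practical triage approach, sketched below under assumed names (`triage_failure` and the simulated flaky test are illustrative, not from any specific framework): rerun each failed test a few times before logging a defect. An intermittent failure is a likely false positive, while a consistent failure deserves manual reproduction:

```python
# Hypothetical triage sketch: rerun each failed test a few times before
# logging a defect. A test that fails intermittently is a likely false
# positive (e.g. latency or locator flakiness); a consistent failure
# should be reproduced manually before being logged as a defect.

def triage_failure(test_fn, reruns=3):
    results = [test_fn() for _ in range(reruns)]
    if all(r == "FAIL" for r in results):
        return "consistent failure - reproduce manually before logging"
    return "intermittent failure - likely false positive, inspect the script"

# A flaky test simulated deterministically: it fails, passes, then fails.
outcomes = iter(["FAIL", "PASS", "FAIL"])
flaky_test = lambda: next(outcomes)
print(triage_failure(flaky_test))  # flagged as a likely false positive
```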
Why do false positives and false negatives occur?
Both false negatives and false positives can occur for various reasons. However, the business impact of false negatives is greater than that of false positives. As an automation team, you should always aim for 0% false negatives. The acceptable false positive percentage can vary based on the application complexity and the automation framework.
Reasons for false negatives
- Poor automation framework architecture
- Skipping some conditions while designing the test case
- Untrustworthy tools/packages used for automation
- Outdated packages and libraries used in test automation
- Ignoring the pesticide paradox (repeating the same tests eventually stops finding new defects)
- Assumptions about test coverage
Reasons for false positives
- Automation scripts are not maintained regularly
- Unstable testing environment
- Standard locator strategies are not used
- Outdated frameworks, libraries, and packages used
- Underlying component issues, such as incompatible browsers, browser drivers, etc.
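The locator point deserves illustration. A sketch of a fallback locator strategy, with simulated page data (a real framework would wrap your driver's find-element calls; the attribute names here are illustrative): try the most stable attribute first, then fall back to others, instead of relying on one brittle locator.

```python
# Hypothetical sketch of a fallback locator strategy: try a stable
# attribute first, then fall back to others, instead of relying on a
# single brittle XPath. The page is simulated as a list of dicts.

def find_element(page, locators):
    for strategy, value in locators:
        for element in page:
            if element.get(strategy) == value:
                return element
    return None  # genuinely missing -> a real failure, not flakiness

page = [{"id": "login-btn", "name": "login", "text": "Log in"}]
button = find_element(page, [("data-testid", "login"),  # preferred, absent here
                             ("id", "login-btn"),       # stable fallback
                             ("name", "login")])
print(button is not None)  # True - found via the id fallback
```

A single renamed attribute no longer fails the test, which removes one common source of false positives.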
Examples of false positives and false negatives
When you hear about false positives and false negatives, you may easily get confused, so let's use an analogy.
A medical test checks whether a patient has a certain condition. The same idea applies to software testing. A true positive means the test found a bug when there actually was a bug in the application: there was a defect, and automation testing found it. This is the accurate, ideal scenario.
Let’s take another example: a COVID-19 test.
Suppose Person X doesn’t have COVID-19 and provides a sample for testing. When the report arrives, it is inaccurately marked COVID-19 positive even though there is no infection in his body.
When the same scenario is applied to testing: you have a login scenario that works as expected, but when you execute the automation test script, it reports that the login functionality failed. In reality, there is no bug. This is a false positive.
Suppose Person Y has a COVID-19 infection and provides a sample for testing. When the report arrives, it is marked “No Infections”, i.e., COVID-19 negative. This is inaccurate, as in reality he has a COVID-19 infection.
Similarly, in software testing, you have a login scenario that fails to log in with a valid username and password. But when you execute the automation script, the report says the login test passed. This is a false negative.
If you are still confused, let’s put it in simple terms.
Automation test reporting failure without any valid failure in the application is called a false positive.
Automation test passing even if there is a valid failure in the application is called a false negative.
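The two definitions above can be summarized as a tiny classifier. This is an illustrative sketch (the function name and argument names are made up), mapping the actual state of the application and the test result to one of the four outcomes:

```python
# Sketch of the four outcomes: given whether the application really has
# a bug and whether the automated test failed, name the result.

def classify(app_has_bug: bool, test_failed: bool) -> str:
    if test_failed and app_has_bug:
        return "true positive"    # bug exists, test caught it
    if test_failed and not app_has_bug:
        return "false positive"   # script issue, app is fine
    if not test_failed and app_has_bug:
        return "false negative"   # bug slipped through
    return "true negative"        # app is fine, test passed

print(classify(app_has_bug=False, test_failed=True))   # false positive
print(classify(app_has_bug=True,  test_failed=False))  # false negative
```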
False Positives and False Negatives – Importance in Testing
The Software Development Life Cycle (SDLC) gives testing the same importance as development, because it aims to deliver quality products. Imagine a company launches a car, model X, and after 6 months finds a major, potentially life-critical issue with the engine. The company now has to recall all the cars sold, compensate customers, and possibly face lawsuits. None of this might have happened had there been a proper check in place before releasing the car. The same applies to software: the quality of the application rests on the quality of the testing and the testing framework. If testing itself does not report actual failures, it can incur losses and reputation damage for the organization.
False negatives are scarier than false positives, because they leave the feature broken. If you are dealing with healthcare, defense, or automotive software, false negatives may take a toll on someone’s life. However, if the tool you choose and the framework you build are robust, the organization benefits greatly.
In short, quality, good or bad, lies in the automation framework and automation scripts.
Though the impact of false positives is smaller, if they occur frequently you end up with unreliable testing, and the team might spend a lot of time on analysis without any gain. The good news is that false positives are easy to fix, especially with modern testing tools like Testsigma, which suggests fixes for failures, offers guidance, and provides technical support if you are struggling with a critical issue.
Both false positives and false negatives are critical in testing. When compared, though, false positives are tolerable to some extent, while there is zero tolerance for false negatives.
Importance in test automation
False positives and false negatives are most common in test automation, but they are not limited to it. When the same occurs in manual testing, it is usually due to human error.
The automation framework design plays a critical role in dealing with false positives and false negatives. As an organization makes huge investments in automation infrastructure, skills, and resources, unreliable results may disappoint management and end with questions about the Return on Investment (ROI). Thus, having a robust design to tackle false positives and false negatives is a must.
How to Choose the Right Tool?
As mentioned earlier, the automation framework plays a very critical role in the success of test automation. The automation framework depends on the underlying tools used. The tool should be capable enough to handle all the edge cases and should be continuously updated as technology changes.
As technology has evolved, many legacy automation tools have disappeared, partly because they could not keep up with modern technology changes and, importantly, because their architecture was not flexible.
The right tool for your organization depends on many factors, such as the development framework used, cost, infrastructure, required skill sets, upskilling needs, the nature of the application, the application domain, etc. The pointers below are generic and apply irrespective of the project domain when choosing the right tool.
- Don’t be biased; evaluate all possible automation tools against your requirements
- The tool should be intelligent enough to choose the most stable locator
- It should provide all critical functionality natively, as this reduces third-party library dependencies, which in turn reduces false positives and false negatives
- It should support the DevOps ecosystem
- It should align with modern Agile frameworks
- The reporting should be comprehensive
- Consider no-code automation tools if your organization is short on skills or resources
Best practices for reducing false positives and false negatives
The best practices for reducing false positives and false negatives vary from project to project. To reduce them, prepare a concrete strategy while designing the test strategy document. Below is a list of best practices that are independent of the project and application; the list is non-exhaustive.
- Build it right, from the beginning. Choose the right tool and build a robust framework.
- Integrate the alerting and monitoring tools wherever possible
- Prepare a code review checklist and make it a mandatory requirement to follow
- Never write a test case based on assumptions
- Use the right testing tool after careful evaluation of your requirements
- Use a dedicated environment for testing. Ensure it is as stable as the production environment
- Cover all possible testing scenarios, and use assertion as much as possible in your test scripts.
- Learn from failures, and update the checklists, and strategy documents as and when required.
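The "use assertions as much as possible" practice can be sketched as follows. The `login` function here is a hypothetical stand-in for your application under test; the point is to assert every observable effect of an action rather than a single happy-path check:

```python
# Sketch: assert every observable effect of the action under test,
# not just one. The login function is a stand-in for the application.

def login(username, password):
    ok = (username, password) == ("alice", "s3cret")
    return {"logged_in": ok,
            "user": username if ok else None,
            "error": None if ok else "invalid credentials"}

def test_login_success():
    result = login("alice", "s3cret")
    assert result["logged_in"] is True
    assert result["user"] == "alice"
    assert result["error"] is None

def test_login_failure():
    result = login("alice", "wrong")
    assert result["logged_in"] is False
    assert result["error"] == "invalid credentials"

test_login_success()
test_login_failure()
print("all assertions passed")
```

A regression in any one of the asserted fields now fails the test, which narrows the window for false negatives.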
False positives and false negatives in test automation are common; however, you should always aim to minimize them. For false negatives, in particular, you should aim for zero. False positives are harder to avoid entirely, as UI testing relies on browsers and other testing components.
Choosing the right tool can help reduce false positives. False positives mostly occur due to latency, locator issues, browser incompatibility, etc. If your test automation tool is smart enough to handle such issues, false positives can be reduced easily. Testsigma is one such tool: it automatically chooses the best possible locators, and your tests are executed on the most stable remote workspaces. This helps produce accurate, stable, and reliable reports.
On the other hand, false negatives occur when you don’t write the test cases as per the requirements. They require manual analysis and in-depth investigation. Remember, even with the right toolsets, good infrastructure, and a stable application environment, if you don’t write test cases properly as per the requirements, test automation will become unreliable.
Frequently Asked Questions
How to handle false positives and negatives?
False positives can be handled by implementing the correct design strategies while building the automation framework; a robust framework yields fewer false positives. False negatives can be reduced with code review and test case review strategies. Additionally, having the right process in place while developing the test cases can bring great benefits.