Software will never be bug-free, but it is important to minimize the number of bugs so that the impact on an application's functionality and user experience is minimal. Bugs can arise for different reasons; in this article, we will discuss them from the perspective of software errors. These errors also need attention during the testing phase.
We can divide the errors during software development into two sections for easy understanding:
- Software errors
- Testing errors
Let’s understand each of them in detail below.
Software errors
User interface errors
The user interface of an application comprises functionality, intuitive use, messages, page redirects, ease of use, ease of remembering the UI, etc. A few of the errors that can arise from a poorly designed UI are:
i. Functional errors
These are functionality-related issues. They are usually uncovered during functionality testing and mostly appear as:
Difficult/confusing functionality: If the user is unable to understand or perform a function on the application, then the design should change. The UI is meant to be intuitive and easy for the user.
Non-performance of the intended function: If the software is unable to perform a function as intended, this is an error.
Example: An e-commerce website is unable to process payments. Such a website is of no use to the customer, and this is a functional error.
Incorrect manuals and user guides: They should not boast about the application; they should clearly and concisely state what the application does. This prevents users from being misled and then being unable to perform a task successfully. These errors are caught during software documentation testing.
ii. Information/content related errors: These errors cause wrong information to be conveyed to users. They come up during the UI testing of a software application.
On-screen information for users: The information required by the user should be available on the application screen itself. A user will not want to consult the user guide for every other function.
Showing activity during long tasks: The application should display an activity/progress indicator when a long task is being executed. Otherwise, the user will assume that the screen has frozen.
For example, if a user has submitted a form and the progress bar shows no activity, the user will be unsure whether the form was submitted successfully. They may think the software is stuck and a system restart is needed.
Spelling mistakes: You may think that wrong spellings are not a big deal, but they are. They look unprofessional and leave a bad impression of the application.
Precise messages: Error/dialog boxes should carry short, crisp messages; no one wants to read long, verbose messages while performing a task.
Dialog box layout: There should be consistency in the layout, capitalization, style, spacing, buttons, etc. of the dialog boxes.
iii. Wrong redirection
The redirection of the links on the website should be thoroughly checked. It leaves a really bad impression if user clicks are redirected to an unintended page. Example: The user clicks the 'home' icon and stays on the same page; additionally, there are no other links to return to the home page. This is a serious UI error.
Error handling errors
Failure to handle errors appropriately may produce absurd results, which we certainly want to avoid. We can anticipate and correct errors under the conditions below:
Arithmetic overflow: This occurs when an arithmetic calculation produces a value too large for the software program to handle.
Example: divide by zero, multiplication of two very large numbers, etc.
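As a sketch of how such conditions can be anticipated in code, the hypothetical helpers below guard a division and a multiplication before a bad value can propagate. The 64-bit limit is an assumption about a downstream system; Python's own integers are arbitrary precision and do not overflow.

```python
# Assumed limit of a downstream 64-bit signed field (e.g. a database column);
# Python itself would happily compute larger integers.
MAX_SUPPORTED = 2**63 - 1

def safe_divide(numerator, denominator):
    """Divide two numbers, guarding against the divide-by-zero error condition."""
    if denominator == 0:
        raise ValueError("denominator must be non-zero")
    return numerator / denominator

def checked_multiply(a, b):
    """Multiply two integers, rejecting results the downstream system cannot hold."""
    result = a * b
    if abs(result) > MAX_SUPPORTED:
        raise OverflowError("result exceeds the supported range")
    return result
```

Raising a specific exception at the point of detection is what lets the caller handle the condition gracefully instead of silently processing a wrong value.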
Impossible values: These should be identified and handled in the program; otherwise, they will result in error conditions.
Example: A program performs a task when the value of the boolean variable is ‘True’ and another task when the value of the boolean variable is ‘False’.
But when 'NULL' is passed, the behaviour of the program is undefined; hence we should make sure the boolean variable holds either 'True' or 'False' and nothing else. Also, if there is a condition where NULL can be the outcome, it should be handled gracefully.
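A minimal Python sketch of this defensive dispatch (the function and task names are illustrative):

```python
def run_task(flag):
    """Dispatch on a boolean flag, handling the 'impossible' NULL/None value gracefully."""
    if flag is True:
        return "task_a"
    if flag is False:
        return "task_b"
    # None (NULL) or any other value is an impossible input: fail loudly
    # instead of silently picking a branch.
    raise ValueError(f"flag must be True or False, got {flag!r}")
```

The explicit `is True` / `is False` checks make the third branch unreachable by accident, so a NULL can never be mistaken for either task.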
Error flags: Programmers usually return an 'Error Flag' from a function to track its success or failure. It is advisable to always check the value of this flag before using the value returned by the function; otherwise, the program may process garbage data and produce erroneous results.
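A small illustration of the error-flag convention; the `(ok, value)` tuple shape and the function name are assumptions for the sketch, not a fixed standard:

```python
def parse_amount(text):
    """Return (ok, value): value is only meaningful when the ok flag is True."""
    try:
        return True, float(text)
    except ValueError:
        return False, 0.0  # 0.0 here is garbage; callers must check ok first

ok, amount = parse_amount("12.50")
if ok:
    total = amount * 2   # safe: the flag was checked before using the value
else:
    total = None         # handle the failure instead of processing garbage
```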
Hardware failures: We focus so much on software that we forget about hardware failures. If the hardware gets disconnected or stops working, we cannot expect correct output from the software.
Example: Printers and other hardware devices return status information indicating that something is wrong.
Calculation errors
A software program may produce wrong results after performing a calculation, for any of the reasons below:
Incorrect logic: This can happen when a wrong formula is used for a calculation. Another example is when a typo by the programmer changes the formula or operation, and the whole logic changes with it.
Incorrect arithmetic operator order: Precedence of the operators should be kept in mind while dealing with arithmetic operators.
Example: In 2 + 5 * 4, the result depends on operator precedence. A programmer may assume the expression executes left to right, giving 28. However, multiplication has higher precedence and executes first, so the result is 22.
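The precedence pitfall can be checked directly in any Python interpreter:

```python
result_precedence = 2 + 5 * 4   # multiplication binds tighter: 2 + 20 = 22
result_grouped = (2 + 5) * 4    # parentheses force the left-to-right reading: 28

assert result_precedence == 22
assert result_grouped == 28
```

When in doubt, adding parentheses makes the intended order explicit and costs nothing.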
Truncation and round-off errors: When dealing with floating-point numbers, values may get truncated or rounded off, leading to precision errors.
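A classic demonstration of round-off: binary floating point cannot represent 0.1 exactly, so the sum drifts. The standard library's `decimal` module is one way to avoid the drift for currency-style arithmetic.

```python
from decimal import Decimal

# The float sum drifts: 0.1 + 0.2 == 0.30000000000000004
drifted = (0.1 + 0.2) != 0.3

# Robust float comparison uses a tolerance rather than equality.
close_enough = abs((0.1 + 0.2) - 0.3) < 1e-9

# Decimal arithmetic on string inputs stays exact.
exact = Decimal("0.1") + Decimal("0.2") == Decimal("0.3")
```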
Control flow errors
Control flow decides where the program's control moves next; this is described using control statements. Control flow-related errors may arise when the flow of the program changes abruptly.
Example: Stack overflow errors, exception handling issues, and blocking/unblocking of interrupts in the program.
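As a sketch of the stack-overflow case, unbounded recursion in Python eventually exhausts the call stack; the interpreter surfaces this as a catchable `RecursionError` (its default recursion limit is around 1000 frames):

```python
def countdown(n):
    """Recursive countdown: without a depth guard it can exhaust the call stack."""
    if n == 0:
        return 0
    return countdown(n - 1)

# A depth far beyond the default recursion limit triggers RecursionError,
# Python's controlled form of a stack overflow.
try:
    countdown(100_000)
    outcome = "completed"
except RecursionError:
    outcome = "stack overflow caught"
```

Catching the error at a sensible boundary keeps an abrupt control-flow change from crashing the whole program.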
Testing errors
This section explains the possible errors a tester can make during the software testing process. We can never catch all the possible bugs in a program; a few will always go undetected.
However, we always aim to achieve as much test coverage as possible so that the maximum number of bugs is detected and fixed.
Still, we may face the below-mentioned errors while performing software testing.
i. Data handling errors
If the test data file contains bad data that you were unaware of during testing, the results will be confusing, and it becomes difficult to find the root cause of the failures.
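One way to catch bad test data before it poisons a run is a quick validation pass over the file; the sketch below (field names and rules are illustrative assumptions) flags rows with missing or non-numeric values:

```python
import csv
import io

def validate_test_data(rows, required_fields=("id", "amount")):
    """Return a list of (row_number, problem) for rows unfit for testing."""
    bad = []
    for i, row in enumerate(rows, start=1):
        if any(not row.get(f) for f in required_fields):
            bad.append((i, "missing field"))
        elif not row["amount"].replace(".", "", 1).isdigit():
            bad.append((i, "non-numeric amount"))
    return bad

# io.StringIO stands in for a real test data file.
sample = io.StringIO("id,amount\n1,10.5\n2,\n3,abc\n")
rows = list(csv.DictReader(sample))
problems = validate_test_data(rows)
```

Running such a check as a test-setup step turns "awkward and concerning" results into an immediate, explainable data error.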
ii. Verification of fixes
Usually, the developers fix only the issues which are mentioned in the defect report. It is the responsibility of the tester to test related functionalities as well, to make sure nothing else is broken.
Example: A mandatory customer-name field was accepting blank values. The developer fixed it; during retesting the field no longer accepts blanks, but it now accepts numeric values.
Such instances happen, and the tester should stay vigilant while testing. This is why regression testing after defect fixes is so important.
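The scenario above can be captured as a tiny validator whose checks cover both the original defect and the later regression (the function name and rules are illustrative):

```python
def validate_customer_name(name):
    """Mandatory name field: reject blank AND purely numeric input."""
    if not name or not name.strip():
        return False   # the original defect: blanks were being accepted
    if name.strip().isdigit():
        return False   # the regression found later: digits slipped through
    return True
```

Keeping both cases in the regression suite ensures the fix for one cannot silently reintroduce the other.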
iii. Hardware issues
These are testing issues caused by wrong or unavailable hardware.
Example: While reading a file, the file is corrupted and we receive a 'Resource not available' error; this hardware issue affects the testing process indirectly.
Another example: we are sending data through a Connect:Direct channel and the network is not working properly. In this case, the data transfer will not happen, and we will not be able to test end-to-end.
iv. SSL certificate issues
During testing, if the website does not have a valid SSL certificate, we may receive an SSL certificate error. In that case, we can accept the risk and continue for testing purposes. In production, however, a valid SSL certificate is a must.
v. Defect slippage
This is the most daunting error a tester may commit, but it happens! It may happen due to any of the below reasons:
- The correct expected results are not known
- Difficult or ambiguous test steps
vi. Bug reporting
Bugs need to be found, reported, fixed, and verified; every step holds value. We should report each bug with proper evidence, reports, and steps to reproduce. The report should be accurate, and if it contains a lot of data, the wrong results should be highlighted for readability.
vii. Automation related errors
Incorrect element locator: If the XPath, id, name, etc. used in the automation script is incorrect, the script will not find the required element and will eventually fail.
Iframe: When the selected element is in a different frame, we will receive an 'element not found' error even when using the correct locator. To resolve this, we need to switch the driver's context to the iframe where the element is located. Testsigma makes it easy to switch between iframes and windows; see its documentation for details.
Wait time: It is important to use the correct wait time for every element in test scripts. If the element is not visible before the wait time elapses, the test step will fail.
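Outside any particular framework, the idea of an explicit wait can be sketched as a plain polling loop (the helper name is illustrative; Selenium and Testsigma provide their own wait mechanisms):

```python
import time

def wait_for(condition, timeout=5.0, poll_interval=0.1):
    """Poll `condition` until it returns truthy or `timeout` seconds elapse."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within the wait time")
```

A too-short `timeout` fails scripts on slow pages; an unconditionally long sleep wastes time on fast ones. Polling gives the best of both: it returns as soon as the element (or any condition) is ready.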
viii. API testing
When the API is tested with expected inputs, it works fine, but sometimes NULL values are sent in the fields, and during integration testing such NULL values cause trouble. Hence, the API should be tested with NULL values as well, with appropriate error messages returned to the application.
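A hypothetical API handler sketching this idea: NULL (`None`) fields yield a clear 400-style error message instead of an unhandled failure during integration (the endpoint name, field names, and response shape are assumptions):

```python
def create_order(payload):
    """Validate NULL fields up front and answer with a descriptive error."""
    required = ("customer_id", "item", "quantity")
    missing = [f for f in required if payload.get(f) is None]
    if missing:
        return {"status": 400, "error": "null fields: " + ", ".join(missing)}
    return {"status": 200, "order": payload}

# Negative test: a NULL field should produce a clear error, not a crash.
resp = create_order({"customer_id": None, "item": "book", "quantity": 1})
```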
Invalid API responses
The field sizes of the partner application and the API should match; otherwise, the API response will be treated as invalid.
Data such as images are usually cached on the partner application. However, if real-time data is cached, this produces stale, erroneous results.
Programmers should follow standard coding practices; non-standard practices result in errors. Example: returning NULL values as output.
During testing, if we ensure proper test coverage, we can eliminate critical and major defects effectively. A quality application is imperative for business growth and user acquisition. Therefore, testers should understand the business requirements thoroughly and write test cases accordingly.
More test cases do not mean better testing; we need good requirement coverage for effective testing. Additionally, there should be good risk coverage so that the business's important features are tested in depth. Proper bug reporting, defect retesting, regression tests, verification of fixes, etc. are all important steps towards high-quality software.