
A Glossary of Software Testing Terms

Discover essential software testing terms with our extensive glossary. Learn about key terms, definitions, and terminologies of software testing.

A

  • A/B testing

    A form of testing where two or more different versions of a feature are compared to see which one performs better.

  • Acceptance testing

    Testing performed by the customer or end-user to verify that the software meets their requirements.

  • Ad hoc testing

    Unplanned, random testing performed to find bugs.

  • Actual result

The observed behavior or status of the system after a test is executed; it is compared against the expected result, and any deviation is reported as an anomaly.

  • Assertion

    A statement that checks if the expected result of a test is achieved.
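In code, an assertion compares an actual value against an expected one and fails the test when they differ. A minimal Python sketch (the `add` function is a hypothetical example):

```python
def add(a, b):
    # Hypothetical function under test.
    return a + b

def test_add():
    result = add(2, 3)
    # The assertion: fail loudly if the actual result deviates from the expected one.
    assert result == 5, f"expected 5, got {result}"

test_add()  # passes silently when the assertion holds
```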

  • Automated testing

    Testing performed by a computer program instead of a human tester.

  • Anomaly

A deviation from expectations based on requirements, specifications, design documents, or standards; anomalies are usually reported during testing.

  • Autonomous testing

    Testing that is performed without any human intervention.

B

  • Behavior-driven development (BDD)

    A test-driven development approach that focuses on the behavior of the software from the user's perspective.

  • Beta testing

    Testing performed by a limited number of users to identify bugs and get feedback on the software.

  • Black-box testing

    Testing performed without knowing the internal implementation of the software.

  • Bug

    An error in the software that causes it to behave incorrectly.

  • Bottom-up integration

An integration strategy, the opposite of top-down integration, that begins by integrating the system's components from the lowest level of the architecture.

  • Boundary value analysis

    Boundary value analysis is a black box test design technique that involves testing input or output values on the edge of what is allowed or at the smallest incremental distance on either side of a border. For example, an input field that accepts text between 1 and 10 characters has six boundary values: 0, 1, 2, 9, 10, and 11 characters.
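The 1-to-10-character example above can be checked directly by exercising each of the six boundary values against a validator. The `accepts` function below is a hypothetical implementation of that rule:

```python
def accepts(text):
    # Hypothetical validator: accepts strings of 1 to 10 characters.
    return 1 <= len(text) <= 10

# Boundary values for the 1..10 length rule: just outside, on, and just
# inside each edge of the allowed range.
cases = {0: False, 1: True, 2: True, 9: True, 10: True, 11: False}
for length, expected in cases.items():
    assert accepts("x" * length) == expected, f"length {length} failed"
```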

  • BS 7925-1

    A document outlining testing standards and a glossary of related terms conforming to British Standards (BS).

  • BS 7925-2

    A document that outlines the testing process, with a primary focus on component testing, according to British Standards.

C

  • Canary testing

    A form of testing where a small percentage of users are moved to the new version of the software to test it before it is released to everyone.

  • Checkpoint

    A step in a test that is used to verify that the test is progressing correctly.

  • Continuous Integration (CI)

    A software development practice that involves integrating code changes into a shared mainline frequently, usually several times a day.

  • Continuous Delivery (CD)

A software development practice in which software is always kept in a releasable state so it can be delivered to production at any time.

  • Capture/playback tool

This is a type of automated testing tool that records user interactions with a software system and then plays them back to verify the system's functionality.

  • CAST

Abbreviation for “computer-aided software testing”: the use of software tools to support testing activities. The term is not widely used today; “test automation framework” is more common.

  • Change control board (CCB)

    The CCB is responsible for reviewing and approving changes to the software development and testing process.

  • Checklist

    A checklist is a list of steps a tester can follow to verify a particular feature or functionality of a software system. Checklists are often used for exploratory testing, where the tester does not have a pre-defined set of tests to execute.

  • Client

    In software testing, the client is the customer or end-user of the software system.

  • CMMI

    The Capability Maturity Model Integration (CMMI) is a process improvement framework that can improve the quality of software development and testing processes.

  • Code coverage

    Code coverage measures how much software code is executed by tests. A higher code coverage percentage indicates that more code has been tested.

  • Code review

    A code review is when two or more programmers review each other's code to identify potential bugs and defects.

  • Code standard

    A code standard is a set of rules and guidelines developers must follow when writing code.

  • Compilation

    The process of converting source code into machine code that a computer can execute.

  • Component

    A self-contained software unit that can be independently tested and deployed.

  • Component integration testing

    Testing that verifies different components of a software system work together correctly.

  • Component testing

    Component testing is a type of testing that verifies the functionality of individual components of a software system.

  • Configuration management

    Configuration management manages and tracks changes to software and hardware systems.

  • Configuration testing

    Configuration testing is a type of testing that verifies that a software system works correctly under different hardware and software configurations.

  • Context-driven testing

Context-driven testing is a testing approach that adapts testing practices to the specific context of the project (the people, the product, and the schedule) rather than applying a fixed set of best practices.

  • COTS

    COTS stands for Commercial Off-The-Shelf. A third party develops COTS software, which can be purchased and deployed by other organizations.

D

  • Data-driven testing

    A form of testing where different data sets are used to test the same functionality.
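A minimal sketch of the idea: one test routine is fed many data sets, so the data drives the test. The `is_valid_email` function and its rules are invented purely for illustration:

```python
def is_valid_email(address):
    # Hypothetical, simplified check used only to illustrate the technique.
    return "@" in address and "." in address.split("@")[-1]

# One test routine, many data sets: each row is (input, expected result).
test_data = [
    ("user@example.com", True),
    ("no-at-sign.com", False),
    ("user@nodot", False),
]
for address, expected in test_data:
    assert is_valid_email(address) == expected, f"failed for {address!r}"
```

In practice the data rows usually live outside the code, in a spreadsheet, CSV file, or database, so they can grow without touching the test logic.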

  • Defect

    A flaw in the software that prevents it from meeting its requirements.

  • Deployment testing

Testing performed to verify that the software can be deployed to production successfully.

  • Distributed testing

    Testing that is performed on multiple systems in parallel.

  • Driver

    A piece of software that controls the execution of tests.

  • Documentation

The practice of recording the testing process and any changes made as a result of testing.

  • Dynamic testing

Testing performed while the system is running; executing test cases is one example.

E

  • End-to-end testing

Testing performed to verify that the entire software system works correctly from the user's perspective.

  • Error

    An unexpected event that occurs during the execution of a test.

  • Error description

    A tester's record of their test steps, outcome, expected result, and additional information for troubleshooting.

  • Error guessing

    Using experience, skills, and intuition to design test cases based on similar systems and technologies.

  • Execute

    Running or conducting a test case or program.

  • Exhaustive testing

    Testing all possible inputs and outputs of a system.

  • Exit criteria

Conditions that must be met for testing to be considered complete, such as executing all high-priority test cases and closing all open high-priority defects.

  • Expected result

    The predicted status or behavior of a test object after completing the test steps.

  • Exploratory testing

    A test design technique where the tester creates and executes tests while learning about the system.

  • External supplier

    A supplier or vendor not part of the same organization as the client or buyer.

  • Extreme programming

    An agile development methodology that emphasizes pair programming, frequent deliveries, and automated testing.

F

  • Failure

    A test failure occurs when the test does not produce the expected result.

  • False positive

    A test failure caused by a problem with the test itself rather than a problem with the software being tested.

  • False negative

    A test that passes despite a problem with the software being tested.

  • Fault injection

A technique that intentionally inserts faults into a system to exercise error-handling code paths that would otherwise not be observed, improving test coverage.

  • Formal review

    A formal review is a documented review process that may include review meetings, formal roles, preparation steps, and goals, such as inspection.

  • Functional integration

    An integration testing strategy where the system is integrated one function at a time. For example, all the components required for the "search customer" function are assembled and tested individually.

  • Functional testing

Testing performed to verify that the software functions as expected.

G

  • Gherkin

    A domain-specific language that is used to write BDD tests.

  • Gray-box testing

Testing that combines white-box and black-box techniques, performed with partial knowledge of the internal code.

  • GUI testing

    Testing that is performed on the graphical user interface of the software.

  • Gremlin

A chaos engineering tool used to inject controlled failures into systems, often microservices architectures, to test their resilience.

H

  • Health testing

Testing performed to verify that the software is healthy and can handle the expected load.

  • Human error

    An error caused by a human, such as a typo in a test case or a mistake in the execution of a test.

I

  • IEEE 829

An IEEE standard for software test documentation, including templates for test plans, test cases, and test reports.

  • Impact analysis

    Techniques to assess the impact of a change; these techniques are used to determine which regression tests are needed.

  • Incident

    A condition different from expected, such as a deviation from requirements or test cases.

  • Incident report

    Same as a defect report.

  • Independent testing

Testing carried out by people who are independent of the developers of the item under test, in order to maintain objectivity.

  • Informal review

    A review that is not based on a formal procedure.

  • Inspection

    A type of formal review technique.

  • Installation test

    A test to assess whether the system meets the requirements for installation and uninstallation.

  • Instrumentation code

    Code that makes it possible to monitor information about the system's behavior during execution.

  • Integration testing

    A type of testing to show that the system's components work with one another.

  • Internal supplier

A supplier or developer that belongs to the same organization as the client.

  • ISTQB

    International Software Testing Qualifications Board, responsible for international programs for testing certification.

  • Iteration

    A development cycle consisting of many phases, from requirements to delivery of part of a system.

J

  • JMeter

    A tool that can perform load testing on web applications.

  • JUnit

    JUnit is a framework designed for automated testing of Java components.

K

  • Keyword-driven testing

    A testing approach that uses keywords to define tests. This can make it easier for non-technical users to create tests.
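A minimal sketch of the pattern, assuming a dispatcher that maps keywords to actions. The keywords, the in-memory "app", and the test table below are all invented for illustration:

```python
# State of a hypothetical application under test.
app = {"logged_in": False, "cart": []}

def do_login(user):
    app["logged_in"] = True

def do_add_to_cart(item):
    app["cart"].append(item)

def do_check_cart_size(expected):
    assert len(app["cart"]) == int(expected)

# The dispatcher: each keyword names an action a non-programmer can invoke.
keywords = {
    "login": do_login,
    "add_to_cart": do_add_to_cart,
    "check_cart_size": do_check_cart_size,
}

# The test itself is just data; it could be authored in a spreadsheet.
test_table = [
    ("login", "alice"),
    ("add_to_cart", "book"),
    ("check_cart_size", "1"),
]
for keyword, arg in test_table:
    keywords[keyword](arg)
```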

L

  • Load testing

    Testing to evaluate how well the software performs under load.

  • Localization testing

    Testing performed to verify that the software works correctly in different languages and locales.

M

  • Maintainability

    The ease with which software can be modified.

  • Manual testing

    Testing that is performed by a human tester.

  • Module testing

    Testing of individual modules in a software system to ensure that they meet their requirements and function correctly.

  • MTBF

    Mean Time Between Failures, a measure of the reliability of a system or component. It is calculated as the total operating time of all units divided by the number of failures during that time.

  • Mutation testing

A form of testing that assesses the quality of test cases by introducing small changes (mutants) into the code and checking whether the tests detect them.
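The mechanics can be shown by hand: flip an operator in the code under test and check that the suite notices. Real mutation tools generate mutants automatically; the `max_of` function and its suite here are invented for illustration:

```python
# Original implementation and a small test suite for it.
def max_of(a, b):
    return a if a > b else b

def suite(fn):
    # Returns True when every test passes for the given implementation.
    try:
        assert fn(3, 5) == 5
        assert fn(5, 3) == 5
        assert fn(2, 2) == 2
        return True
    except AssertionError:
        return False

# A "mutant": the comparison operator is deliberately flipped.
def max_of_mutant(a, b):
    return a if a < b else b

assert suite(max_of)             # the original passes
assert not suite(max_of_mutant)  # a good suite "kills" the mutant
```

A mutant that survives (the suite still passes) points at a gap in the test cases.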

  • MVP (minimum viable product)

    A product with just enough features to be usable by customers.

N

  • NLP (natural language processing)

    A field of computer science that deals with the interaction between computers and human (natural) languages.

  • Non-functional testing

Testing performed to evaluate non-functional aspects of the software, such as performance, security, and usability.

  • Navigation testing

    When conducting navigation tests, users are observed while they perform specific tasks or try to achieve particular goals on your website or application.

  • Negative testing

    Negative testing is a type of testing that aims to identify potential bugs, errors, or security vulnerabilities that may not be discovered through positive testing, which uses valid inputs.

O

  • Object-oriented testing

A testing approach that tests individual objects and their interactions with other objects.

  • Operational acceptance testing

Testing performed to verify that the software is ready for production.

  • Orchestration

    The process of automating the execution of tests in a coordinated manner.

  • Outcome

    The result after a test case has been executed.

P

  • Pair programming

    Two developers work together at one computer to write code, reviewing each other’s work.

  • Parallel testing

    Testing that is performed on multiple systems simultaneously.

  • Pair testing

    Two people work together to find defects, typically sharing one computer and trading control.

  • Penetration testing

    Testing performed to identify security vulnerabilities in the software.

  • Performance testing

    Evaluates whether a system meets performance requirements.

  • Pipeline

    A series of automated steps to build, test, and deploy software.

  • Positive testing

Testing that shows a test object works correctly in normal situations, using valid inputs.

  • Postconditions

Conditions that must be met after a test case or test run has completed.

  • Preconditions

    Conditions that must be met before a component or system can be tested.

  • Prerequisites

    Same as preconditions.

  • Priority

    The level of importance assigned to a defect.

  • Professional tester

    A person whose sole job is testing.

  • Program testing

The process of testing a program to ensure it meets its requirements.

Q

  • Quality assurance (QA)

    Ensuring the software meets the users' needs and is released with as few defects as possible.

R

  • Record and playback tool

    A test execution tool that records user interactions and replays them to automate tests.

  • Regression testing

    Testing to ensure that changes to a system have not introduced new defects.

  • Release

    A new version of a system, typically released to customers or users.

  • Release management

    The process of planning, coordinating, and controlling the release of a new version of a system.

  • Release testing

    Testing to ensure that a new system release meets its requirements and is ready for deployment.

  • Requirements management

    The process of gathering, analyzing, documenting, and managing the requirements for a system.

  • Requirements manager

    The person responsible for managing the requirements for a system.

  • Re-testing

    Testing to verify that a previously reported defect is now fixed.

  • Retrospective meeting

    A meeting held at the end of a project or iteration to reflect on the work that was done and identify areas for improvement.

  • Review

    A static testing technique in which a reviewer examines a document or artifact to identify defects and suggest improvements.

  • Reviewer

    A person who participates in a review to identify and document defects in the item being reviewed.

  • Risk

    A potential event that could have a negative impact on a project or system.

  • Risk-based testing

    A test design approach in which test cases are prioritized based on the risks associated with the system.

  • RUP

    The Rational Unified Process, a software development methodology developed by IBM.

S

  • Sandwich integration

An integration strategy that combines top-down and bottom-up integration in parallel; it saves time but is more complex to manage.

  • Scalability testing

    Measures the ability of software to scale up or down in terms of non-functional characteristics.

  • Scenario

    Sequence of activities performed in a system, such as logging in, signing up, ordering, and printing an invoice.

  • Scrum

    Iterative, incremental framework for project management commonly used with agile software development.

  • Session-based testing

    Planning test activities as uninterrupted, short sessions of test design and execution, often used in conjunction with exploratory testing.

  • Severity

    The degree of impact that a defect has on the development or operation of a component or system.

  • Site acceptance testing (SAT)

Acceptance testing carried out onsite at the client's location, as opposed to at the developer's location (factory acceptance testing, or FAT).

  • Smoke testing

    Verifies that the system can run and perform basic functions without crashing.

  • State transition testing

    Test design technique in which a system is viewed as a series of states, valid and invalid transitions between those states, and inputs and events that cause changes in state.
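A minimal sketch of the technique: model the valid transitions as data, then test both a valid path through the states and an invalid transition that must be rejected. The order states and events below are invented for illustration:

```python
# A tiny state machine for an order: valid (state, event) -> next state.
TRANSITIONS = {
    ("new", "pay"): "paid",
    ("paid", "ship"): "shipped",
    ("shipped", "deliver"): "delivered",
}

def apply_event(state, event):
    # Invalid transitions raise, which the tests exercise deliberately.
    key = (state, event)
    if key not in TRANSITIONS:
        raise ValueError(f"invalid transition: {event} from {state}")
    return TRANSITIONS[key]

# Valid path through the states.
state = "new"
for event in ("pay", "ship", "deliver"):
    state = apply_event(state, event)
assert state == "delivered"

# Invalid transition: shipping an unpaid order must be rejected.
try:
    apply_event("new", "ship")
    assert False, "expected ValueError"
except ValueError:
    pass
```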

  • Static testing

    Testing performed without running the system, such as document review.

  • Stress testing

    Testing to assess how the system reacts to workloads that exceed its specified requirements.

  • Structural testing

See white-box testing.

  • Stub

A dummy implementation of a module, used to simulate that module's behavior during testing.
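A minimal sketch: the checkout logic depends on a payment gateway, and a stub stands in for the real gateway so the logic can be tested in isolation. All class and function names here are invented for illustration:

```python
# The component under test depends on a payment gateway. The stub returns
# canned responses: no network call, no real charge.
class PaymentGatewayStub:
    def charge(self, amount):
        return {"status": "ok", "amount": amount}

def checkout(gateway, amount):
    # Logic under test: succeed when the gateway accepts the charge.
    response = gateway.charge(amount)
    return response["status"] == "ok"

assert checkout(PaymentGatewayStub(), 42.0)
```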

  • Supplier

    The organization that supplies an IT system to a client. It can be internal or external. Also called vendor.

  • System

    The integrated combination of hardware, software, and documentation.

  • System integration testing

    Test type to evaluate whether a system can be successfully integrated with other systems.

  • System testing

    Test type aimed at testing the complete integrated system, both functional and non-functional.

T

  • Test automation

    Using software to write and run tests, freeing testers to focus on other tasks.

  • Test basis

    The documentation testers use to design test cases, such as requirements and user stories.

  • Test case

    A step-by-step description of how to test a feature, including the expected results.

  • Test data

    The information testers use to execute test cases, such as customer names and addresses.

  • Test-driven development

    A development approach where developers write test cases before writing code, ensuring that the code meets the requirements.

  • Test driver

A software component that calls the component under test and controls its execution during testing.

  • Test environment

    The hardware and software that testers use to run tests.

  • Test execution

    The process of running test cases and recording the results.

  • Test level

    A group of activities organized to achieve a specific goal, such as component or integration testing.

  • Test log

    A record of all test activities, including the date, time, and results.

  • Test manager

    The person responsible for planning and executing testing activities.

  • Test object

    The part of the tested system, such as a component, subsystem, or the entire system.

  • Test plan

    A document that describes the test strategy, test cases, and test schedule.

  • Test policy

    A document that describes the organization's testing process and standards.

  • Test process

    The set of activities that testers perform to ensure that the system meets the requirements.

  • Test report

    A document that summarizes the testing results and includes any recommendations.

  • Test run

    A group of test cases executed together, such as all the test cases for a particular release.

  • Test script

A detailed description of how to execute a test case, including the steps to follow and the expected results.

  • Test specification

    A document that describes a set of test cases, including the steps to prepare and reset the system.

  • Test strategy

    A document that describes how the system will be tested, including the test levels and test coverage.

  • Test stub

A software component that stands in for a component the item under test depends on, returning canned responses during testing.

  • Test suite

    A group of test cases executed to test a specific feature or functionality.

  • Testing

    The process of evaluating a system to determine if it meets the requirements and to find defects.

  • Third-party component

    A component of the system that a third-party vendor develops.

  • Top-down integration

    An integration test strategy where the team starts by integrating the highest-level components and then works their way down the system hierarchy.

  • TPI

    Test Process Improvement. A methodology for improving the organization’s testing process.

  • Traceability

    The ability to track the relationships between different artifacts, such as requirements, test cases, and defects.

  • Traceability matrix

    A table that shows the relationships between different artifacts.

U

  • UML

    A standardized language for describing software systems, using diagrams to show how the system works and how the different parts interact.

  • Unit testing

    A test of a single unit of code, such as a function or class.
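A minimal sketch using Python's standard `unittest` framework; the `reverse_words` function is a hypothetical unit under test:

```python
import unittest

def reverse_words(sentence):
    # Hypothetical unit under test.
    return " ".join(reversed(sentence.split()))

class TestReverseWords(unittest.TestCase):
    def test_two_words(self):
        self.assertEqual(reverse_words("hello world"), "world hello")

    def test_empty(self):
        self.assertEqual(reverse_words(""), "")

# Run the suite programmatically; exit=False keeps the interpreter alive.
unittest.main(argv=["ignored"], exit=False)
```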

  • Unit test framework

    A set of tools and libraries that makes it easier to write and run unit tests.

  • Usability

    How easy and enjoyable a system is to use.

  • Usability testing

    Testing how easy and enjoyable a system is to use by observing users as they try to complete tasks.

  • Use case

    A description of how a user interacts with a system to achieve a goal.

V

  • V-model

    A software development lifecycle model that emphasizes testing and validation.

  • Validation testing

Testing that ensures the system meets the users' needs.

  • Verification testing

    Testing to ensure that the system meets the technical requirements.

  • Versioning

    A system for uniquely identifying and tracking changes to documents and source files.

W

  • Waterfall model

A sequential development approach in which each phase (requirements, design, implementation, testing) is completed before the next begins.

  • White-box testing

    Testing a system with knowledge of its internal structure, such as the code or database model.

X

  • XPath

    A language that is used to navigate XML documents.
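XPath expressions select nodes in an XML document, which is why they appear often in web test automation (for example, to locate elements). Python's standard `xml.etree.ElementTree` supports a limited XPath subset; the sample document below is invented for illustration:

```python
import xml.etree.ElementTree as ET

# A small sample document to query.
doc = ET.fromstring("""
<library>
  <book genre="fiction"><title>Dune</title></book>
  <book genre="reference"><title>The C Programming Language</title></book>
</library>
""")

# XPath: select the titles of all books whose genre attribute is "fiction".
titles = [t.text for t in doc.findall(".//book[@genre='fiction']/title")]
assert titles == ["Dune"]
```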