
Prompt Engineering for Testers: Unleashing the Power of LLMs

December 22, 2023
Rahul Parwal


In the ever-evolving software testing landscape, a new term has become the latest buzzword – Prompt Engineering.

What is it, and how did it become the talk of the town?

Thanks to the emergence of powerful Large Language Models (LLMs) like ChatGPT and Bard, along with applications popping up across domains such as marketing, copywriting, sales, development, product management, and, most importantly, testing.

In this article, let’s delve into the intriguing world of Prompt Engineering for testers. 

Let’s explore how this skill can supercharge our testing efforts and serve as the icing on the cake. Fasten your seatbelts, because we’re about to embark on a journey that will transform our approach as software testers.

The Multifaceted Applications of Prompt Engineering

Let’s explore the versatility of Prompt Engineering across domains to understand its diverse potential.

  1. Marketing: From generating compelling ad copy and engaging social media posts to creating click-worthy headlines, the possibilities are limitless.
  2. Copywriting: Crafting persuasive sales letters, email marketing campaigns, and attention-grabbing product descriptions that maximize selling potential becomes easier.
  3. Operations: From creating automated responses to customer queries to optimizing supply chain processes, prompts can increase operational efficiency.

All one needs is the right prompt.

  4. Testing: Prompt Engineering can revolutionize your testing process. Here are a few examples to level up your testing game with prompt engineering:
    1. Test Idea Generator: Prompt Engineering can be used to generate test ideas. By providing the LLM with a well-crafted prompt, we can obtain a list of test ideas to jumpstart our testing efforts.

Think of it as a brainstorming partner that never gets tired. 

We can get test ideas based on the type of application, quality criteria to be checked, and testing depth required. 

    2. Analyze Key Statements / Text: Understanding and interpreting requirement documents is critical for a tester. Prompt Engineering allows testers to feed these statements to an LLM and receive multiple interpretations along with questions to clarify ambiguity, helping us validate understanding and uncover potential issues.
    3. Learning Guide: As testers, we are continuously learning. Prompt Engineering can kickstart the learning process when we explore new topics.

Creating prompts that provide a concise overview of any topic can increase the pace of learning.

    4. Coding: Generating code, whether in Python, Java, C#, or any other language, can be assisted to a great extent with the help of prompts. Effective prompts can be written to create boilerplate code, test scripts, and even to support deeper tasks such as code review.
    5. Brainstorming Partner: Stuck with testing? Need fresh ideas for finding bugs?

LLM applications can be our brainstorming partners. Feed them the right prompts and get rid of your creative block.
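To make the test-idea use case concrete, here is a minimal sketch that assembles such a prompt from the application type, quality criteria, and testing depth. The helper name and prompt wording are illustrative, not part of any LLM SDK:

```python
def build_test_idea_prompt(app_type, quality_criteria, depth):
    """Assemble a prompt that asks an LLM for test ideas.

    The three inputs mirror the factors mentioned above: type of
    application, quality criteria to check, and testing depth.
    """
    criteria = ", ".join(quality_criteria)
    return (
        f"Act as an experienced software tester. Generate test ideas "
        f"for a {app_type}. Focus on these quality criteria: {criteria}. "
        f"Testing depth required: {depth}. Return a numbered list."
    )

prompt = build_test_idea_prompt(
    app_type="mobile banking application",
    quality_criteria=["security", "usability"],
    depth="smoke-level",
)
print(prompt)
```

The resulting string can be pasted into any LLM chat or sent through an API client of your choice.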

While the above applications might look like an instant game changer and time saver, there’s a major challenge that needs to be addressed with prompting. 

LLMs rely on the quality of the prompts. As the famous saying goes, “Garbage In, Garbage Out.” The response from LLMs will only be as good as the prompt you provide. 

This is where the skills of a tester play a major role. 

Crafting meaningful prompts is an art that can significantly impact the quality of our output. In the next section, we will see how to craft meaningful prompts that yield high-quality, desired outcomes.

Frameworks to Craft Meaningful Prompts

Several prompting frameworks have emerged in recent times. Which framework to use depends on the depth of the task at hand.

Let’s explore a few of them with examples of sample prompts as a tester: 

  1. APE (Action, Purpose, Expectation)
    • Action: Define the job or activity to be done.
    • Purpose: Discuss the intention or goal.
    • Expectation: State the desired outcome.

Example: Generating test data using the APE framework.

[Action]: Generate test data for a regular user attempting to log in.

[Purpose]: To assess the login form’s functionality and security by testing various scenarios with different input data. 

[Expectation]: The test data should include both valid and invalid input values, such as correct and incorrect usernames and passwords, as well as special characters, excessively long inputs, and empty fields. This data will be used to evaluate how the login form handles different situations, including successful logins, failed logins, and the prevention of security vulnerabilities like SQL injection.
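As a sketch, the three labeled APE sections above can also be assembled programmatically, which keeps the framework structure consistent across prompts. The function and its wording are illustrative, not a standard API:

```python
def ape_prompt(action, purpose, expectation):
    """Join the APE sections into one labeled prompt string."""
    return (
        f"[Action]: {action}\n"
        f"[Purpose]: {purpose}\n"
        f"[Expectation]: {expectation}"
    )

prompt = ape_prompt(
    action="Generate test data for a regular user attempting to log in.",
    purpose="Assess the login form's functionality and security.",
    expectation="Include valid and invalid usernames and passwords.",
)
print(prompt)
```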

  2. RACE (Role, Action, Context, Expectation)
    • Role: Specify the role of the LLM.
    • Action: Detail what action is needed.
    • Context: Provide relevant details of the situation.
    • Expectation: Describe the expected outcome.

Example: Generating boilerplate selenium code in C# for login to a web application.

[Role]: As an automation engineer using C# and NUnit who uses Pascal Coding standards.

[Action]: Write boilerplate Selenium code to automate the login functionality of a web application.

[Context]: The web application has a login page with input fields for username and password, a login button, and the expected elements for successful login, such as a welcome message.

[Expectation]: The Selenium code should be written following Pascal Coding standards, and it should include the setup of the WebDriver, navigating to the login page, entering valid credentials, clicking the login button, and verifying the expected elements on the successful login page. Additionally, proper exception handling and reporting should be implemented in the code.

  3. COAST (Context, Objective, Actions, Scenario, Task)
    • Context: Set the stage for the conversation.
    • Objective: Describe the goal.
    • Actions: Explain the actions needed.
    • Scenario: Describe the scenario.
    • Task: Describe the task.

Example: Performing requirement analysis for an application

[Context]: As a test engineer, you are leading the requirements review session for a new mobile banking application.

[Objective]: The goal is to conduct a comprehensive review of the project requirements to ensure they are well-defined, unambiguous, and testable.

[Actions]

  • Carefully examine each requirement.
  • For each requirement, provide notes, raise questions on ambiguity, suggest test ideas, and offer comments.
  • Determine if each requirement is testable in its current state.

[Scenario]: You are part of a cross-functional team consisting of developers, product managers, and business analysts.

[Task]: Review the following requirements.

  4. TAG (Task, Action, Goal)
    • Task: Define the specific task.
    • Action: Describe what needs to be done.
    • Goal: Explain the end goal.

Example: Analyze a sample test data JSON file and generate relevant test data ideas.

[Task]: Extract input variables from the provided JSON data file.

[Action]: Review the JSON data file and identify key input variables.

[Goal]: Suggest possible values for each input variable to facilitate test data generation.
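The TAG example above can be partially automated before the LLM is even involved: the [Task] and [Action] steps are plain JSON parsing. This sketch uses an invented test-data file for illustration:

```python
import json

# Invented sample test-data file for a login form.
sample = '{"username": "alice", "password": "s3cret", "remember_me": true}'

# [Task] / [Action]: review the JSON and identify the input variables.
variables = list(json.loads(sample).keys())

# [Goal]: feed the variable names into a prompt asking for test values.
prompt = (
    "Suggest boundary and negative test values for these input "
    "variables: " + ", ".join(variables)
)
print(prompt)
```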

  5. RISE (Role, Input, Steps, Expectation)
    • Role: Specify the role of the LLM.
    • Input: Describe the information or resources.
    • Steps: Ask for detailed steps.
    • Expectation: Describe the desired result.

Example: Crafting step-by-step documentation for test environment setup on fresh PCs.

[Role]: In the role of a Lead Test Engineer (LTE), you are responsible for documenting the test setup process.

[Input]: The .NET test application source code, necessary dependencies, and access to the target test machine.

[Steps]:

  • Explain how to set up the development environment (e.g., Visual Studio) and which components need to be installed.
  • Detail the steps to clone or download the test application’s source code from the version control repository.
  • Explain how to build and compile the project, specifying any build configurations or build tools required.
  • Provide guidance on running the tests, including any specific NUnit commands to use.
  • Include troubleshooting steps for common setup issues or errors that testers might encounter.

[Expectation]: After following the documented setup steps, other testers should be able to successfully set up their test machines, build the test application, and run the tests without encountering significant issues. The documentation should ensure a smooth and error-free setup process.
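All five frameworks share one underlying shape: labeled sections joined into a single prompt. A reusable helper can therefore cover any of them; this is a sketch based on that observation, not a standard library:

```python
def framework_prompt(sections):
    """Join ordered {label: text} sections into one labeled prompt.

    Works for APE, RACE, COAST, TAG, or RISE: pass the sections in
    the order the chosen framework defines them.
    """
    return "\n".join(f"[{label}]: {text}" for label, text in sections.items())

race = framework_prompt({
    "Role": "As an automation engineer using C# and NUnit.",
    "Action": "Write boilerplate Selenium code for the login page.",
    "Context": "The page has username and password fields and a login button.",
    "Expectation": "Follow Pascal coding standards with exception handling.",
})
print(race)
```

Since Python dicts preserve insertion order, the sections come out in the order the framework prescribes.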

Tips for Effective Prompt Engineering

  • While crafting a prompt, provide enough context to ensure accurate responses.
  • Be mindful of implicit details like the version of the programming language, coding conventions, and commenting standards.
  • Save frequently used prompts in text expanders like GetMagical or AutoHotKey for quick access.
  • Just like LLMs refine their models with updated data, keep refining prompts based on the responses received.
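The text-expander tip can also be implemented in plain code: Python's standard-library `string.Template` turns a saved prompt into a reusable template with placeholders. The template text here is illustrative:

```python
from string import Template

# A small personal library of frequently used prompts.
PROMPTS = {
    "test_ideas": Template(
        "Generate $count test ideas for the $feature feature, "
        "focusing on $criteria."
    ),
}

prompt = PROMPTS["test_ideas"].substitute(
    count=10, feature="checkout", criteria="error handling"
)
print(prompt)
# Generate 10 test ideas for the checkout feature, focusing on error handling.
```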

Cautionary notice: 

Treat LLM responses as heuristics. They are not infallible, so always review and verify before implementation. 

With great power comes great responsibility. Use this technology wisely and ethically.

Check out Testsigma, our AI-powered open-source test automation solution, which can provide a balance of responsibility and creativity in using AI for testing.



In conclusion, Prompt Engineering is a game-changer for testers. It enables testers to harness the incredible capabilities of LLMs and can aid in making the testing process more efficient and effective. By mastering the art of crafting meaningful prompts, testers can unlock a whole new world of possibilities in software testing.

So, embrace this new era of testing and make Prompt Engineering your secret weapon for success!

Rahul Parwal

Rahul Parwal is a software tester and generalist. Presently, he works as a Senior Software Engineer with ifm engineering in India. Reading, learning, and practicing the craft of software testing is something Rahul enjoys doing. His recent accolades include the ‘Jerry Weinberg Testing Excellence Award’ in the ‘Rising Star of the Year’ (2021) category from the “Tea-time with Testers” magazine.
