Integrate Testsigma with any CI/CD tool using Shell Script
In this document, we will discuss generic shell scripts - one for PowerShell and one for the Unix shell - that can be used to integrate Testsigma with any CI/CD tool.
Pre-requisites:
You should already know how to
For PowerShell Script:
- Generate the API key from Configurations > API Keys; let's call it TESTSIGMA_API_KEY
- Get the Test Plan ID from the Test Plan details page; let's call it TESTSIGMA_TEST_PLAN_ID
- Replace those values (TESTSIGMA_API_KEY, TESTSIGMA_TEST_PLAN_ID) in the script below and paste it into your CI/CD tool's pipeline as a new stage
The script will trigger the execution and wait, up to the timeout (MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT), for the execution to complete.
Now, let's look at the PowerShell script:
##########################################################################################################################################################
#
#TESTSIGMA_API_KEY-->API key generated under Testsigma App-->Configuration-->API Keys
#TESTSIGMA_TEST_PLAN_ID-->Testsigma Test Plan ID. You can get this ID from Testsigma App-->Test Plans--><TEST_PLAN_NAME>-->CI/CD Integration
#MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT-->Maximum time (in minutes) the script will wait for the Test Plan execution to complete. The script will exit if the maximum time
#is exceeded; however, the Test Plan will continue to run. You can check test results by logging in to Testsigma.
#REPORT_FILE_PATH-->File path to save the report. Ex: <DIR_PATH>/report.xml, ./report.xml
##########################################################################################################################################################
<# START USER INPUTS#>
$TESTSIGMA_API_KEY=<API_KEY_IN_DOUBLE_QUOTES>
$TESTSIGMA_TEST_PLAN_ID=<TEST_PLAN_ID_IN_DOUBLE_QUOTES>
$REPORT_FILE_PATH="./junit-report.xml"
$MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT=180
<# END USER INPUTS #>
$TESTSIGMA_TEST_PLAN_REST_URL="https://app.testsigma.com/api/v1/execution_results"
$TESTSIGMA_JUNIT_REPORT_URL="https://app.testsigma.com/api/v1/reports/junit"
$POLL_INTERVAL_FOR_RUN_STATUS=1
$NO_OF_POLLS=($MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT/$POLL_INTERVAL_FOR_RUN_STATUS)
$SLEEP_TIME=($POLL_INTERVAL_FOR_RUN_STATUS * 60)
$global:LOG_CONTENT=""
$global:APP_URL=""
$global:EXECUTION_STATUS=-1
$RUN_ID=""
$global:IS_TEST_RUN_COMPLETED=-1
$PSDefaultParameterValues['Invoke-RestMethod:SkipHeaderValidation'] = $true
$PSDefaultParameterValues['Invoke-WebRequest:SkipHeaderValidation'] = $true
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}" -f $TESTSIGMA_API_KEY)))
function get_status{
$global:RUN_RESPONSE=Invoke-RestMethod $status_URL -Method GET -Headers @{Authorization=("Bearer {0}" -f $TESTSIGMA_API_KEY);'Accept'='application/json'} -ContentType "application/json"
$global:EXECUTION_STATUS=$RUN_RESPONSE.status
$global:APP_URL=$RUN_RESPONSE.app_url
Write-Host "Execution Status: $EXECUTION_STATUS"
}
function checkTestPlanRunStatus{
$global:IS_TEST_RUN_COMPLETED=0
for($i=0; $i -le $NO_OF_POLLS;$i++){
get_status
Write-Host "Execution Status before going for wait: $EXECUTION_STATUS ,Status_message:"($RUN_RESPONSE.message)
if ($EXECUTION_STATUS -eq "STATUS_IN_PROGRESS"){
Write-Host "Sleep/Wait for $SLEEP_TIME seconds before next poll....."
sleep $SLEEP_TIME
}else{
$global:IS_TEST_RUN_COMPLETED=1
Write-Host "Automated Tests Execution completed...`nTotal script execution time:$(($i)*$SLEEP_TIME/60) minutes"
break
}
}
}
function saveFinalResponseToAFile{
if ($IS_TEST_RUN_COMPLETED -eq 0){
$global:LOG_CONTENT="Wait time exceeded specified maximum time(MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT). Please visit below URL for Test Plan Run status.$APP_URL"
Write-Host "LogContent:$LOG_CONTENT nResponse content:"($RUN_RESPONSE | ConvertTo-Json -Compress)
}
else{
Write-Host "Fetching reports:$TESTSIGMA_JUNIT_REPORT_URL/$RUN_ID"
$REPORT_DATA=Invoke-RestMethod $TESTSIGMA_JUNIT_REPORT_URL/$RUN_ID -Method GET -Headers @{Authorization=("Bearer {0}" -f $TESTSIGMA_API_KEY);'Accept'='application/xml'} -ContentType "application/json"
Write-Host "report data: $REPORT_DATA"
# Add-Content -Path $REPORT_FILE_PATH -Value ($REPORT_DATA)
$REPORT_DATA.OuterXml | Out-File $REPORT_FILE_PATH
}
Write-Host "Reports File::$REPORT_FILE_PATH"
}
Write-Host "No of polls: $NO_OF_POLLS"
Write-Host "Polling Interval:$SLEEP_TIME"
Write-Host "Junit report file path: $REPORT_FILE_PATH"
$REQUEST_BODY='{"executionId":'+"$TESTSIGMA_TEST_PLAN_ID"+'}'
try{
$TRIGGER_RESPONSE=Invoke-RestMethod -Method POST -Headers @{Authorization=("Bearer {0}" -f $TESTSIGMA_API_KEY);'Accept'='application/json'} -ContentType 'application/json' -Body $REQUEST_BODY -uri $TESTSIGMA_TEST_PLAN_REST_URL
}catch{
Write-Host "Code:" $_.Exception.Response.StatusCode.value__
Write-Host "Description:" $_.Exception.Response.StatusDescription
Write-Host "Error encountered in executing a test plan. Please check if the test plan is already in running state."
exit 1
}
$RUN_ID=$TRIGGER_RESPONSE.id
Write-Host "Execution triggered RunID: $RUN_ID"
$status_URL = "$TESTSIGMA_TEST_PLAN_REST_URL/$RUN_ID"
Write-Host $status_URL
checkTestPlanRunStatus
saveFinalResponseToAFile
For the Unix shell script (Bash):
- Generate the API key from Configurations > API Keys; let's call it TESTSIGMA_API_KEY
- Get the Test Plan ID from the Test Plan details page; let's call it TESTSIGMA_TEST_PLAN_ID
- Replace those values (TESTSIGMA_API_KEY, TESTSIGMA_TEST_PLAN_ID) in the script below and paste it into your CI/CD tool's pipeline as a new stage
The script will trigger the execution and wait, up to the timeout (MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT), for the execution to complete.
Now, let's look at the Unix shell script:
#!/bin/bash
#**********************************************************************
#
# TESTSIGMA_API_KEY -> API key generated under Testsigma App >> Configuration >> API Keys
#
# TESTSIGMA_TEST_PLAN_ID -> Testsigma Testplan ID.
# You can get this from Testsigma App >> Test Plans >> <TEST_PLAN_NAME> >> CI/CD Integration
#
# MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT -> Maximum time in minutes the script will wait for the Test Plan execution to complete.
# The script will exit if the maximum time is exceeded. However, the Test Plan will continue to run.
# You can check test results by logging in to Testsigma.
#
# JUNIT_REPORT_FILE_PATH -> File name with directory path to save the report.
# For Example, <DIR_PATH>/report.xml, ./report.xml
#
# RUNTIME_DATA_INPUT -> Specify runtime parameters/variables to be used in the tests in comma-separated manner
# For example, "url=https://the-internet.herokuapp.com/login,variable1=value1"
#
# BUILD_NO -> Specify Build number if you want to track the builds in Testsigma. It will show up in the Test Results page
# For example, we are using $(date +"%Y%m%d%H%M") to use the current date and time as the build number.
#
#********START USER_INPUTS*********
TESTSIGMA_API_KEY=eyJhbGciOixxxxxxxxxxxxxxxTNpgv0w
TESTSIGMA_TEST_PLAN_ID=2090
MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT=1
JUNIT_REPORT_FILE_PATH=./junit-report-$(date +"%Y%m%d%H%M").xml
RUNTIME_DATA_INPUT="url=https://the-internet.herokuapp.com/login,test=1221"
BUILD_NO=$(date +"%Y%m%d%H%M")
#********END USER_INPUTS***********
#********GLOBAL variables**********
POLL_COUNT=30
SLEEP_TIME=$(((MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT*60)/$POLL_COUNT))
JSON_REPORT_FILE_PATH=./testsigma.json
TESTSIGMA_TEST_PLAN_REST_URL=https://app.testsigma.com/api/v1/execution_results
TESTSIGMA_JUNIT_REPORT_URL=https://app.testsigma.com/api/v1/reports/junit
MAX_WAITTIME_EXCEEDED_ERRORMSG="Given Maximum Wait Time of $MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT minutes exceeded waiting for the Test Run completion.
Please log in to Testsigma to check Test Plan run results. You can visit the URL specified in the \"app_url\" JSON parameter in the response to go to the Test Plan results page directly.
For example, \"app_url\":\"https://dev.testsigma.com/#/projects/31/applications/53/version/72/report/executions/197/runs/819/environments\""
#**********************************
#Read arguments
for i in "$@"
do
case $i in
-k=*|--apikey=*)
TESTSIGMA_API_KEY="${i#*=}"
shift
;;
-i=*|--testplanid=*)
TESTSIGMA_TEST_PLAN_ID="${i#*=}"
shift
;;
-t=*|--maxtimeinmins=*)
MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT="${i#*=}"
shift
;;
-r=*|--reportfilepath=*)
JUNIT_REPORT_FILE_PATH="${i#*=}"
shift
;;
-d=*|--runtimedata=*)
RUNTIME_DATA_INPUT="${i#*=}"
shift
;;
-b=*|--buildno=*)
BUILD_NO="${i#*=}"
shift
;;
-h|--help)
echo "Arguments: "
echo " [-k | --apikey] = <TESTSIGMA_API_KEY>"
echo " [-i | --testplanid] = <TESTSIGMA_TEST_PLAN_ID>"
echo " [-t | --maxtimeinmins] = <MAX_WAIT_TIME_IN_MINS>"
echo " [-r | --reportfilepath] = <JUNIT_REPORT_FILE_PATH>"
echo " [-d | --runtimedata] = <OPTIONAL COMMA SEPARATED KEY VALUE PAIRS>"
echo " [-b | --buildno] = <BUILD_NO_IF_ANY>"
printf "Example:\n bash testsigma_cicd.sh --apikey=YSWfniLEWYK7aLrS-FhYUD1kO0MQu9renQ0p-oyCXMlQ --testplanid=230 --maxtimeinmins=180 --reportfilepath=./junit-report.xml \n\n"
printf "With Runtimedata parameters:\n bash testsigma_cicd.sh --apikey=YSWfniLEWYK7aLrS-FhYUD1kO0MQu9renQ0p-oyCXMlQ --testplanid=230 --maxtimeinmins=180
--reportfilepath=./junit-report.xml --runtimedata=\"buildurl=http://test1.url.com,data1=testdata\" --buildno=773\n\n"
shift
exit 1
;;
esac
done
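Each `case` branch above relies on Bash parameter expansion: `"${i#*=}"` removes the shortest prefix matching `*=`, leaving only the value after the first `=`. A quick standalone illustration, using made-up argument values:

```shell
#!/bin/bash
# "${arg#*=}" strips the shortest prefix matching '*=' (the flag name plus '='),
# leaving only the value.
arg="--testplanid=230"
echo "${arg#*=}"    # 230

# Only the FIRST '=' is consumed, so values that themselves contain '=' survive:
arg2="--runtimedata=url=http://example.com"
echo "${arg2#*=}"   # url=http://example.com
```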
get_status(){
# Old method
# RUN_RESPONSE=$(curl -u $TESTSIGMA_USER_NAME:$TESTSIGMA_PASSWORD --silent --write-out "HTTPSTATUS:%{http_code}" -X GET $TESTSIGMA_TEST_PLAN_RUN_URL/$HTTP_BODY/status)
RUN_RESPONSE=$(curl -H "Authorization:Bearer $TESTSIGMA_API_KEY"\
--silent --write-out "HTTPSTATUS:%{http_code}" \
-X GET $TESTSIGMA_TEST_PLAN_REST_URL/$RUN_ID)
# extract the body
RUN_BODY=$(echo $RUN_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
# extract the response status
RUN_STATUS=$(echo $RUN_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
echo "Test Plan Result Response: $RUN_BODY"
# extract exec status
EXECUTION_STATUS=$(echo $RUN_BODY | getJsonValue status)
}
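`get_status` works because curl's `--write-out "HTTPSTATUS:%{http_code}"` appends the status code to the response body, which the two `sed` commands then split apart. The split can be exercised on a canned response (the JSON below is a made-up sample):

```shell
#!/bin/bash
# Simulated curl output: body with "HTTPSTATUS:<code>" appended by --write-out
RUN_RESPONSE='{"id":819,"status":"STATUS_IN_PROGRESS"}HTTPSTATUS:200'

# Everything before the marker is the body...
RUN_BODY=$(echo "$RUN_RESPONSE" | sed -e 's/HTTPSTATUS:.*//g')
# ...and everything after it is the HTTP status code
RUN_STATUS=$(echo "$RUN_RESPONSE" | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')

echo "$RUN_BODY"     # {"id":819,"status":"STATUS_IN_PROGRESS"}
echo "$RUN_STATUS"   # 200
```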
function checkTestPlanRunStatus(){
IS_TEST_RUN_COMPLETED=0
for ((i=0;i<=POLL_COUNT;i++))
do
get_status
echo " Exceution Status:: $EXECUTION_STATUS "
if [[ $EXECUTION_STATUS =~ "STATUS_IN_PROGRESS" ]]; then
echo "Poll #$(($i+1)) - Test Execution in progress... Wait for $SLEEP_TIME seconds before next poll.."
sleep $SLEEP_TIME
elif [[ $EXECUTION_STATUS =~ "STATUS_CREATED" ]]; then
echo "Poll #$(($i+1)) - Test Execution/Re-run Created... Wait for $SLEEP_TIME seconds before next poll.."
sleep $SLEEP_TIME
elif [[ $EXECUTION_STATUS =~ "STATUS_COMPLETED" ]]; then
IS_TEST_RUN_COMPLETED=1
echo "Poll #$(($i+1)) - Tests Execution completed..."
TOTALRUNSECONDS=$(($(($i+1))*$SLEEP_TIME))
echo "Total script run time: $(convertsecs $TOTALRUNSECONDS)"
break
else
echo "Unexpected Execution status. Please check run results for more details."
fi
done
}
function saveFinalResponseToJSONFile(){
if [ $IS_TEST_RUN_COMPLETED -eq 0 ]
then
echo "$MAX_WAITTIME_EXCEEDED_ERRORMSG"
fi
echo "$RUN_BODY" >> $JSON_REPORT_FILE_PATH
echo "Saved response to JSON Reports file - $JSON_REPORT_FILE_PATH"
}
function saveFinalResponseToJUnitFile(){
if [ $IS_TEST_RUN_COMPLETED -eq 0 ]
then
echo "$MAX_WAITTIME_EXCEEDED_ERRORMSG"
exit 1
fi
echo ""
echo "Downloading the Junit report..."
curl --progress-bar -H "Authorization:Bearer $TESTSIGMA_API_KEY" \
-H "Accept: application/xml" \
-H "content-type:application/json" \
-X GET $TESTSIGMA_JUNIT_REPORT_URL/$RUN_ID --output $JUNIT_REPORT_FILE_PATH
echo "JUNIT Reports file - $JUNIT_REPORT_FILE_PATH"
}
function getJsonValue() {
json_key=$1
awk -F"[,:}]" '{for(i=1;i<=NF;i++){if($i~/\042'$json_key'\042/){print $(i+1)}}}' | tr -d '"'
}
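`getJsonValue` is a lightweight awk-based extractor, not a real JSON parser: it splits the input on `,`, `:` and `}` and prints the field that follows the quoted key. This works for flat responses with simple values, but values containing those delimiters (for example, the URL in `app_url`) will be truncated. A standalone demonstration on a made-up response body:

```shell
#!/bin/bash
# Same helper as in the script: naive JSON value extraction with awk
getJsonValue() {
json_key=$1
awk -F"[,:}]" '{for(i=1;i<=NF;i++){if($i~/\042'$json_key'\042/){print $(i+1)}}}' | tr -d '"'
}

BODY='{"id":819,"status":"STATUS_COMPLETED","result":"SUCCESS"}'
echo "$BODY" | getJsonValue status   # STATUS_COMPLETED
echo "$BODY" | getJsonValue id       # 819
```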
function populateRuntimeData() {
IFS=',' read -r -a VARIABLES <<< "$RUNTIME_DATA_INPUT"
RUN_TIME_DATA='"runtimeData":{'
DATA_VALUES=
for element in "${VARIABLES[@]}"
do
DATA_VALUES=$DATA_VALUES","
IFS='=' read -r -a VARIABLE_VALUES <<< "$element"
DATA_VALUES="$DATA_VALUES"'"'"${VARIABLE_VALUES[0]}"'":"'"${VARIABLE_VALUES[1]}"'"'
done
DATA_VALUES="${DATA_VALUES:1}"
RUN_TIME_DATA=$RUN_TIME_DATA$DATA_VALUES"}"
}
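`populateRuntimeData` turns the comma-separated `key=value` list into a JSON fragment. Running the same logic against a small made-up input shows the exact fragment that ends up in the request payload (note that values containing an extra `=` would be truncated by the inner split):

```shell
#!/bin/bash
# Same logic as populateRuntimeData, exercised with a sample input
RUNTIME_DATA_INPUT="url=https://example.com,variable1=value1"

IFS=',' read -r -a VARIABLES <<< "$RUNTIME_DATA_INPUT"
RUN_TIME_DATA='"runtimeData":{'
DATA_VALUES=
for element in "${VARIABLES[@]}"
do
DATA_VALUES=$DATA_VALUES","
IFS='=' read -r -a VARIABLE_VALUES <<< "$element"
DATA_VALUES="$DATA_VALUES"'"'"${VARIABLE_VALUES[0]}"'":"'"${VARIABLE_VALUES[1]}"'"'
done
DATA_VALUES="${DATA_VALUES:1}"   # drop the leading comma
RUN_TIME_DATA=$RUN_TIME_DATA$DATA_VALUES"}"

echo "$RUN_TIME_DATA"   # "runtimeData":{"url":"https://example.com","variable1":"value1"}
```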
function populateBuildNo(){
if [ -z "$BUILD_NO" ]
then
echo ""
else
BUILD_DATA='"buildNo":'$BUILD_NO
fi
}
function populateJsonPayload(){
JSON_DATA='{"executionId":'$TESTSIGMA_TEST_PLAN_ID
populateRuntimeData
populateBuildNo
if [ -z "$BUILD_DATA" ];then
JSON_DATA=$JSON_DATA,$RUN_TIME_DATA"}"
elif [ -z "$RUN_TIME_DATA" ];then
JSON_DATA=$JSON_DATA,$BUILD_DATA"}"
elif [ -z "$BUILD_DATA" ] && [ -z "$RUN_TIME_DATA" ];then
JSON_DATA=$JSON_DATA"}"
else
JSON_DATA=$JSON_DATA,$RUN_TIME_DATA,$BUILD_DATA"}"
fi
echo "InputData="$JSON_DATA
}
function convertsecs(){
((h=${1}/3600))
((m=(${1}%3600)/60))
((s=${1}%60))
printf "%02d hours %02d minutes %02d seconds" $h $m $s
}
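`convertsecs` is plain integer arithmetic on the total seconds. For example, 3725 seconds break down into 1 hour, 2 minutes and 5 seconds:

```shell
#!/bin/bash
# Same helper as in the script: format a second count as h/m/s
convertsecs(){
((h=${1}/3600))
((m=(${1}%3600)/60))
((s=${1}%60))
printf "%02d hours %02d minutes %02d seconds" $h $m $s
}

convertsecs 3725   # 01 hours 02 minutes 05 seconds
```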
function setExitCode(){
RESULT=$(echo $RUN_BODY | getJsonValue result)
echo "Result: $RESULT"
if [[ $RESULT =~ "SUCCESS" ]];then
EXITCODE=0
else
EXITCODE=1
fi
echo "exit Code:$EXITCODE"
}
#******************************************************
echo "************ Testsigma: Start executing automated tests ************"
populateJsonPayload
# store the whole response with the status at the end
HTTP_RESPONSE=$(curl -H "Authorization:Bearer $TESTSIGMA_API_KEY" \
-H "Accept: application/json" \
-H "content-type:application/json" \
--silent --write-out "HTTPSTATUS:%{http_code}" \
-d "$JSON_DATA" -X POST $TESTSIGMA_TEST_PLAN_REST_URL )
# extract the body from response
HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
# extract run id from response
RUN_ID=$(echo $HTTP_RESPONSE | getJsonValue id)
# extract the status code from response
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
# print the run ID or the error message
NUMBERS_REGEX="^[0-9].*"
if [[ $RUN_ID =~ $NUMBERS_REGEX ]]; then
echo "Run ID: $RUN_ID"
else
echo "$RUN_ID"
fi
EXITCODE=0
# example using the status
if [ ! $HTTP_STATUS -eq 200 ]; then
echo "Failed to start Test Plan execution!"
echo "$HTTP_RESPONSE"
EXITCODE=1
#Exit with a failure.
else
echo "Number of maximum polls to be done: $POLL_COUNT"
checkTestPlanRunStatus
saveFinalResponseToJUnitFile
saveFinalResponseToJSONFile
setExitCode
fi
echo "************************************************"
echo "Result JSON Response: $RUN_BODY"
echo "************ Testsigma: Completed executing automated tests ************"
exit $EXITCODE
In case you have any questions, contact Testsigma support.