Trigger Test Plan remotely and wait until Completion
If you are adding automated testing as a stage in your CI/CD pipeline, you will need to trigger the tests from the pipeline and get the test results back into the pipeline as well. You can use the Testsigma Test Plan Execution Results API to do that.
You should already be familiar with creating and running Test Plans. See Manage Test Plans.
You need to authenticate these requests with your Testsigma API Key. To learn how to obtain one, see How to generate API Keys.
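As a minimal illustration, the scripts on this page send the API key as a Bearer token in the Authorization header of every request. The sketch below only builds and prints that header; the key value is a placeholder, not a real Testsigma API key.

```shell
# Build the Authorization header the way the scripts below do.
# "example-key" is a placeholder, not a real Testsigma API key.
API_KEY="example-key"
AUTH_HEADER="Authorization: Bearer $API_KEY"
echo "$AUTH_HEADER"
# A status request would then look like this (not executed here):
# curl -H "$AUTH_HEADER" https://app.testsigma.com/api/v1/execution_results/<RUN_ID>
```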
Checking the Test Run Results
The steps below give an overview of how the scripts at the bottom of this page work:
- Start the Test Plan Run using the 'Test Plan Trigger API'. See How to trigger Test Plans remotely.
- Get the 'id' key from the JSON response of the Trigger Test Plan API call. The response returns the ID of this specific Test Plan Run, called the 'Run ID', under the key 'id'. It is unique to the current run and is used to check the status of the Test Plan Run once it starts.
- Use this 'id' with the 'Test Plan Status API' to check the Test Plan Run status. See How to check Test Plan Status.
- Check whether the value of the 'status' key is "STATUS_COMPLETED". You can poll the Test Plan Execution Results API with the Run ID at regular intervals, reading the 'status' key in the JSON response each time.
- If it is not, poll again (go back to the previous step).
- If it is, read the 'result' key to get the Test Plan Result.
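The trigger-and-poll flow above can be sketched as follows. This is a control-flow sketch only: the `get_status` function here is a stub standing in for the Execution Results API call (it pretends the run finishes on the third poll), whereas the full scripts below actually curl the status endpoint.

```shell
# Stub standing in for the Execution Results API call; it reports
# STATUS_IN_PROGRESS twice, then STATUS_COMPLETED with a SUCCESS result.
get_status() {
  POLLS_DONE=$((POLLS_DONE + 1))
  if [ "$POLLS_DONE" -ge 3 ]; then
    STATUS="STATUS_COMPLETED"
    RESULT="SUCCESS"
  else
    STATUS="STATUS_IN_PROGRESS"
  fi
}

POLLS_DONE=0
STATUS="STATUS_IN_PROGRESS"
while [ "$STATUS" = "STATUS_IN_PROGRESS" ]; do
  get_status                        # check the run status
  echo "poll $POLLS_DONE: $STATUS"
done
echo "result: $RESULT"              # read the 'result' key once completed
```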
We have provided PowerShell and Bash scripts for this below. Feel free to adapt them as required and plug them into your CI pipeline.
PowerShell Script
The script triggers the execution and waits for it to complete, up to the timeout (MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT).
###################################################################################
#TESTSIGMA_API_KEY -> API key generated in Testsigma App under Configurations > API Keys
#TESTSIGMA_TEST_PLAN_ID -> Testsigma Test Plan ID. You can get this ID from Testsigma App in Test Plans > TEST_PLAN_NAME > CI/CD Integration Tab
#MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT -> Maximum time the script will wait for Test Plan execution to complete.
#The script will exit if execution exceeds this maximum time. However, the Test Plan
#will continue to run in Testsigma. You can check the results by logging in to Testsigma
#once you receive the execution passed/failed email/Slack/Teams notification.
#REPORT_FILE_PATH -> File path to save report Ex: <DIR_PATH>/report.xml, ./report.xml, C:\report.xml
#$RUN_TIME_PARAMS -> Here, you can pass Runtime parameters such as Deployment URL to the Test Plan
#For example: @{key1="$env:buildURL";key2="value2"}
####################################################################################
####### START USER INPUTS ######
$TESTSIGMA_API_KEY="<API_KEY>"
$TESTSIGMA_TEST_PLAN_ID="3058"
$REPORT_FILE_PATH="./junit-report.xml"
$MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT=180
$RUN_TIME_PARAMS=@{}
####### END USER INPUTS ########
#### Please do not change the values below this line unless you know what you are doing. ####
$TESTSIGMA_TEST_PLAN_REST_URL="https://app.testsigma.com/api/v1/execution_results"
$TESTSIGMA_JUNIT_REPORT_URL="https://app.testsigma.com/api/v1/reports/junit"
$POLL_INTERVAL_FOR_RUN_STATUS=5
$NO_OF_POLLS=($MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT/$POLL_INTERVAL_FOR_RUN_STATUS)
$SLEEP_TIME=($POLL_INTERVAL_FOR_RUN_STATUS * 60)
$global:LOG_CONTENT=""
$global:APP_URL=""
$global:EXECUTION_STATUS=-1
$RUN_ID=""
$global:IS_TEST_RUN_COMPLETED=-1
$PSDefaultParameterValues['Invoke-RestMethod:SkipHeaderValidation'] = $true
$PSDefaultParameterValues['Invoke-WebRequest:SkipHeaderValidation'] = $true
$base64AuthInfo = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(("{0}" -f $TESTSIGMA_API_KEY)))
function get_status{
$global:RUN_RESPONSE=Invoke-RestMethod $status_URL -Method GET -Headers @{Authorization=("Bearer {0}" -f $TESTSIGMA_API_KEY);'Accept'='application/json'} -ContentType "application/json"
$global:EXECUTION_STATUS=$RUN_RESPONSE.status
$global:APP_URL=$RUN_RESPONSE.app_url
Write-Host "Execution Status: $EXECUTION_STATUS"
}
function checkTestPlanRunStatus{
$global:IS_TEST_RUN_COMPLETED=0
for($i=0; $i -le $NO_OF_POLLS;$i++){
get_status
Write-Host "Execution Status before going for wait: $EXECUTION_STATUS ,Status_message:"($RUN_RESPONSE.message)
if ($EXECUTION_STATUS -eq "STATUS_IN_PROGRESS"){
Write-Host "Sleep/Wait for $SLEEP_TIME seconds before next poll....."
sleep $SLEEP_TIME
}else{
$global:IS_TEST_RUN_COMPLETED=1
Write-Host "Automated Tests Execution completed...`nTotal script execution time:$(($i)*$SLEEP_TIME/60) minutes"
break
}
}
}
function saveFinalResponseToAFile{
if ($IS_TEST_RUN_COMPLETED -eq 0){
$global:LOG_CONTENT="Wait time exceeded the specified maximum (MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT). Please visit the URL below for the Test Plan Run status: $APP_URL"
Write-Host "LogContent: $LOG_CONTENT`nResponse content:"($RUN_RESPONSE | ConvertTo-Json -Compress)
}
else{
Write-Host "Fetching reports:$TESTSIGMA_JUNIT_REPORT_URL/$RUN_ID"
$REPORT_DATA=Invoke-RestMethod $TESTSIGMA_JUNIT_REPORT_URL/$RUN_ID -Method GET -Headers @{Authorization=("Bearer {0}" -f $TESTSIGMA_API_KEY);'Accept'='application/xml'} -ContentType "application/json"
Write-Host "report data: $REPORT_DATA"
# Add-Content -Path $REPORT_FILE_PATH -Value ($REPORT_DATA)
$REPORT_DATA.OuterXml | Out-File $REPORT_FILE_PATH
}
Write-Host "Reports File::$REPORT_FILE_PATH"
}
Write-Host "No of polls: $NO_OF_POLLS"
Write-Host "Polling Interval:$SLEEP_TIME"
Write-Host "Junit report file path: $REPORT_FILE_PATH"
$REQUEST_BODY_TABLE=@{executionId="$TESTSIGMA_TEST_PLAN_ID"}
$REQUEST_BODY_TABLE.Add("runtimeData",$RUN_TIME_PARAMS)
$REQUEST_BODY = $REQUEST_BODY_TABLE | ConvertTo-Json -Compress
Write-Host "Json payload" $REQUEST_BODY
try{
$TRIGGER_RESPONSE=Invoke-RestMethod -Method POST -Headers @{Authorization=("Bearer {0}" -f $TESTSIGMA_API_KEY);'Accept'='application/json'} -ContentType 'application/json' -Body $REQUEST_BODY -uri $TESTSIGMA_TEST_PLAN_REST_URL
}catch{
Write-Host "Code:" $_.Exception.Response.StatusCode.value__
Write-Host "Description:" $_.Exception.Response.StatusDescription
Write-Host "Error encountered while executing the test plan. Please check whether the test plan is already in a running state."
exit 1
}
$RUN_ID=$TRIGGER_RESPONSE.id
Write-Host "Execution triggered RunID: $RUN_ID"
$status_URL = "$TESTSIGMA_TEST_PLAN_REST_URL/$RUN_ID"
Write-Host $status_URL
checkTestPlanRunStatus
saveFinalResponseToAFile
Bash Script
#!/bin/bash
#**********************************************************************
#
# TESTSIGMA_API_KEY -> API key generated under Testsigma App >> Configuration >> API Keys
#
# TESTSIGMA_TEST_PLAN_ID -> Testsigma Test Plan ID.
# You can get this from Testsigma App >> Test Plans >> <TEST_PLAN_NAME> >> CI/CD Integration
#
# MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT -> Maximum time in minutes the script will wait for the Test Plan execution to complete.
# The script will exit if the maximum time is exceeded. However, the Test Plan will continue to run.
# You can check test results by logging in to Testsigma.
#
# JUNIT_REPORT_FILE_PATH -> Filename with directory path to save the report.
# For Example, <DIR_PATH>/report.xml, ./report.xml
#
# RUNTIME_DATA_INPUT -> Specify runtime parameters/variables to be used in the tests in comma-separated manner
# For example, "url=https://the-internet.herokuapp.com/login,variable1=value1"
#
# BUILD_NO -> Specify Build number if you want to track the builds in Testsigma. It will show up in the Test Results page
# For example, we are using $(date +"%Y%m%d%H%M") to use the current date and time as the build number.
#
#********START USER_INPUTS*********
TESTSIGMA_API_KEY=eyJhbGciOixxxxxxxxxxxxxxxTNpgv0w
TESTSIGMA_TEST_PLAN_ID=2090
MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT=1
JUNIT_REPORT_FILE_PATH=./junit-report-$(date +"%Y%m%d%H%M").xml
RUNTIME_DATA_INPUT="url=https://the-internet.herokuapp.com/login,test=1221"
BUILD_NO=$(date +"%Y%m%d%H%M")
#********END USER_INPUTS***********
#********GLOBAL variables**********
POLL_COUNT=5
SLEEP_TIME=$(((MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT*60)/$POLL_COUNT))
JSON_REPORT_FILE_PATH=./testsigma.json
TESTSIGMA_TEST_PLAN_REST_URL=https://app.testsigma.com/api/v1/execution_results
TESTSIGMA_JUNIT_REPORT_URL=https://app.testsigma.com/api/v1/reports/junit
MAX_WAITTIME_EXCEEDED_ERRORMSG="Given Maximum Wait Time of $MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT minutes exceeded waiting for the Test Run completion.
Please log in to Testsigma to check the Test Plan run results. You can visit the URL specified in the \"app_url\" JSON parameter in the response to go directly to the Test Plan results page.
For example, \"app_url\":\"https://dev.testsigma.com/#/projects/31/applications/53/version/72/report/executions/197/runs/819/environments\""
#**********************************
#Read arguments
for i in "$@"
do
case $i in
-k=*|--apikey=*)
TESTSIGMA_API_KEY="${i#*=}"
shift
;;
-i=*|--testplanid=*)
TESTSIGMA_TEST_PLAN_ID="${i#*=}"
shift
;;
-t=*|--maxtimeinmins=*)
MAX_WAIT_TIME_FOR_SCRIPT_TO_EXIT="${i#*=}"
shift
;;
-r=*|--reportfilepath=*)
JUNIT_REPORT_FILE_PATH="${i#*=}"
shift
;;
-d=*|--runtimedata=*)
RUNTIME_DATA_INPUT="${i#*=}"
shift
;;
-b=*|--buildno=*)
BUILD_NO="${i#*=}"
shift
;;
-h|--help)
echo "Arguments: "
echo " [-k | --apikey] = <TESTSIGMA_API_KEY>"
echo " [-i | --testplanid] = <TESTSIGMA_TEST_PLAN_ID>"
echo " [-t | --maxtimeinmins] = <MAX_WAIT_TIME_IN_MINS>"
echo " [-r | --reportfilepath] = <JUNIT_REPORT_FILE_PATH>"
echo " [-d | --runtimedata] = <OPTIONAL COMMA SEPARATED KEY VALUE PAIRS>"
echo " [-b | --buildno] = <BUILD_NO_IF_ANY>"
printf "Example:\n bash testsigma_cicd.sh --apikey=YSWfniLEWYK7aLrS-FhYUD1kO0MQu9renQ0p-oyCXMlQ --testplanid=230 --maxtimeinmins=180 --reportfilepath=./junit-report.xml \n\n"
printf "With Runtimedata parameters:\n bash testsigma_cicd.sh --apikey=YSWfniLEWYK7aLrS-FhYUD1kO0MQu9renQ0p-oyCXMlQ --testplanid=230 --maxtimeinmins=180
--reportfilepath=./junit-report.xml --runtimedata=\"buildurl=http://test1.url.com,data1=testdata\" --buildno=773\n\n"
shift
exit 1
;;
esac
done
get_status(){
RUN_RESPONSE=$(curl -H "Authorization:Bearer $TESTSIGMA_API_KEY"\
--silent --write-out "HTTPSTATUS:%{http_code}" \
-X GET $TESTSIGMA_TEST_PLAN_REST_URL/$RUN_ID)
# extract the body
RUN_BODY=$(echo $RUN_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
# extract the response status
RUN_STATUS=$(echo $RUN_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
# extract exec status
EXECUTION_STATUS=$(echo $RUN_BODY | getJsonValue status)
}
function checkTestPlanRunStatus(){
IS_TEST_RUN_COMPLETED=0
for ((i=0;i<=POLL_COUNT;i++))
do
get_status
if [ "$EXECUTION_STATUS" = "STATUS_IN_PROGRESS" ]; then
echo "Poll #$(($i+1)) - Test Execution in progress... Wait for $SLEEP_TIME seconds before next poll.."
sleep $SLEEP_TIME
elif [ "$EXECUTION_STATUS" = "STATUS_COMPLETED" ]; then
IS_TEST_RUN_COMPLETED=1
echo "Poll #$(($i+1)) - Tests Execution completed..."
TOTALRUNSECONDS=$(($(($i+1))*$SLEEP_TIME))
echo "Total script run time: $(convertsecs $TOTALRUNSECONDS)"
break
else
echo "Unexpected Execution status. Please check run results for more details."
fi
done
}
function saveFinalResponseToJSONFile(){
if [ $IS_TEST_RUN_COMPLETED -eq 0 ]
then
echo "$MAX_WAITTIME_EXCEEDED_ERRORMSG"
fi
echo "$RUN_BODY" >> $JSON_REPORT_FILE_PATH
echo "Saved response to JSON Reports file - $JSON_REPORT_FILE_PATH"
}
function saveFinalResponseToJUnitFile(){
if [ $IS_TEST_RUN_COMPLETED -eq 0 ]
then
echo "$MAX_WAITTIME_EXCEEDED_ERRORMSG"
exit 1
fi
echo ""
echo "Downloading the Junit report..."
curl --progress-bar -H "Authorization:Bearer $TESTSIGMA_API_KEY" \
-H "Accept: application/xml" \
-H "content-type:application/json" \
-X GET $TESTSIGMA_JUNIT_REPORT_URL/$RUN_ID --output $JUNIT_REPORT_FILE_PATH
echo "JUNIT Reports file - $JUNIT_REPORT_FILE_PATH"
}
function getJsonValue() {
json_key=$1
awk -F"[,:}]" '{for(i=1;i<=NF;i++){if($i~/\042'$json_key'\042/){print $(i+1)}}}' | tr -d '"'
}
function populateRuntimeData() {
IFS=',' read -r -a VARIABLES <<< "$RUNTIME_DATA_INPUT"
RUN_TIME_DATA='"runtimeData":{'
DATA_VALUES=
for element in "${VARIABLES[@]}"
do
DATA_VALUES=$DATA_VALUES","
IFS='=' read -r -a VARIABLE_VALUES <<< "$element"
DATA_VALUES="$DATA_VALUES"'"'"${VARIABLE_VALUES[0]}"'":"'"${VARIABLE_VALUES[1]}"'"'
done
DATA_VALUES="${DATA_VALUES:1}"
RUN_TIME_DATA=$RUN_TIME_DATA$DATA_VALUES"}"
}
function populateBuildNo(){
if [ -z "$BUILD_NO" ]
then
echo ""
else
BUILD_DATA='"buildNo":'$BUILD_NO
fi
}
function populateJsonPayload(){
JSON_DATA='{"executionId":'$TESTSIGMA_TEST_PLAN_ID
populateRuntimeData
populateBuildNo
if [ -z "$BUILD_DATA" ] && [ -z "$RUN_TIME_DATA" ];then
JSON_DATA=$JSON_DATA"}"
elif [ -z "$BUILD_DATA" ];then
JSON_DATA=$JSON_DATA,$RUN_TIME_DATA"}"
elif [ -z "$RUN_TIME_DATA" ];then
JSON_DATA=$JSON_DATA,$BUILD_DATA"}"
else
JSON_DATA=$JSON_DATA,$RUN_TIME_DATA,$BUILD_DATA"}"
fi
echo "InputData="$JSON_DATA
}
function convertsecs(){
((h=${1}/3600))
((m=(${1}%3600)/60))
((s=${1}%60))
printf "%02d hours %02d minutes %02d seconds" $h $m $s
}
function setExitCode(){
RESULT=$(echo $RUN_BODY | getJsonValue result)
# Note: getJsonValue is a naive parser, so values containing ':' (such as URLs) are truncated
APPURL=$(echo $RUN_BODY | getJsonValue app_url)
echo $RESULT
if [[ $RESULT =~ "SUCCESS" ]];then
EXITCODE=0
else
EXITCODE=1
fi
}
#******************************************************
echo "************ Testsigma: Start executing automated tests ************"
populateJsonPayload
# store the whole response with the status at the end
HTTP_RESPONSE=$(curl -H "Authorization:Bearer $TESTSIGMA_API_KEY" \
-H "Accept: application/json" \
-H "content-type:application/json" \
--silent --write-out "HTTPSTATUS:%{http_code}" \
-d "$JSON_DATA" -X POST $TESTSIGMA_TEST_PLAN_REST_URL )
# extract the body from response
HTTP_BODY=$(echo $HTTP_RESPONSE | sed -e 's/HTTPSTATUS\:.*//g')
# extract run id from response
RUN_ID=$(echo $HTTP_RESPONSE | getJsonValue id)
# extract the status code from response
HTTP_STATUS=$(echo $HTTP_RESPONSE | tr -d '\n' | sed -e 's/.*HTTPSTATUS://')
# print the run ID or the error message
NUMBERS_REGEX="^[0-9].*"
if [[ $RUN_ID =~ $NUMBERS_REGEX ]]; then
echo "Run ID: $RUN_ID"
else
echo "$RUN_ID"
fi
EXITCODE=0
# example using the status
if [ ! $HTTP_STATUS -eq 200 ]; then
echo "Failed to start Test Plan execution!"
echo "$HTTP_RESPONSE"
EXITCODE=1
#Exit with a failure.
else
echo "Number of maximum polls to be done: $POLL_COUNT"
checkTestPlanRunStatus
saveFinalResponseToJUnitFile
saveFinalResponseToJSONFile
setExitCode
fi
echo "************************************************"
echo "Result JSON Response: $RUN_BODY"
echo "************ Testsigma: Completed executing automated tests ************"
exit $EXITCODE
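For reference, the awk-based getJsonValue helper used in the Bash script above is a naive parser for flat, unnested JSON: it splits on commas, colons, and closing braces, so values containing those characters (URLs, for instance) will not be extracted cleanly. The sketch below applies the same helper to a hand-written sample response to show what it returns.

```shell
# The same helper as in the script above, applied to a sample response.
# It is only reliable for flat JSON whose values contain no ',', ':' or '}'.
getJsonValue() {
  json_key=$1
  awk -F"[,:}]" '{for(i=1;i<=NF;i++){if($i~/\042'$json_key'\042/){print $(i+1)}}}' | tr -d '"'
}

SAMPLE='{"id":819,"status":"STATUS_COMPLETED","result":"SUCCESS"}'
echo "$SAMPLE" | getJsonValue status   # prints STATUS_COMPLETED
echo "$SAMPLE" | getJsonValue id       # prints 819
```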