Executing Tests
Learn how to run tests in Qualflare, whether you're executing them manually through the UI or uploading automated results via the CLI.
Overview
Qualflare supports multiple ways to execute tests:
- Manual execution: Run tests directly from the UI for ad-hoc testing
- CLI upload: Upload results from your automated test runs
- Test suite execution: Run organized groups of tests
- Test plan execution: Execute scheduled or on-demand test plans
- Retry workflows: Re-run failed tests to investigate issues
Workflow 1: Run Tests Manually from the UI
Manual test execution is useful for exploratory testing, smoke tests, or when you need to verify specific functionality on-demand.
Steps
- Navigate to your project in Qualflare
- Click Test Cases in the left sidebar
- Select the test cases you want to run by checking the boxes next to each test
- Click the Run button in the action bar
- In the launch configuration dialog:
  - Enter a launch name (e.g., "Smoke Test - Jan 14")
  - Select the environment (dev, staging, production)
  - Choose the assignee if someone else will execute the tests
- Click Start Launch
During Execution
As you execute each test:
- Click on a test case to view its steps
- Follow each step and mark it as complete
- Set the final result:
  - Passed: Test executed successfully
  - Failed: Unexpected result occurred
  - Blocked: Could not complete due to environmental issues
  - Skipped: Test not applicable for this session
- Add comments, screenshots, or logs as evidence
- Move to the next test case
Viewing Real-Time Results
The launch dashboard shows:
- Total tests vs. completed tests
- Pass/fail counts updating in real-time
- Execution progress bar
- List of failed tests for quick access
Tips
- Use keyboard shortcuts (Next/Previous) to navigate between tests quickly
- Add screenshots for failed steps to aid debugging
- Use the Blocked status when environmental issues prevent testing
Workflow 2: Upload Test Results via CLI
For automated tests, use the Qualflare CLI to upload test results from your CI/CD pipeline or local development environment.
Prerequisites
- Qualflare CLI installed (`qf`)
- Test results in a supported format (JUnit, pytest, Jest, etc.; see the example below)
- Authenticated with Qualflare
See the Quick Start for installation basics.
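If your automated tests run under pytest, for instance, a simple way to produce a supported results file is pytest's built-in JUnit XML report (the output filename is arbitrary); other frameworks typically offer comparable reporters or plugins:

```bash
# Run the test suite and write a JUnit-style XML report for upload
pytest --junitxml=results.xml
```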
Basic Upload
```bash
# Upload test results to a project
qf upload results.xml \
  --project "MyApp Tests" \
  --launch "CI Build #456"
```

The CLI will:
- Auto-detect the test framework from the file format
- Parse the test results
- Create a new launch in Qualflare
- Upload individual test results as case runs
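For example, a CI job might embed its build number in the launch name so each upload creates a distinct, traceable launch. This is only a sketch: BUILD_NUMBER is a placeholder for whatever variable your CI system exposes, not something Qualflare requires.

```bash
# Sketch of a CI upload step; BUILD_NUMBER is assumed to come from the CI system
qf upload results.xml \
  --project "MyApp Tests" \
  --launch "CI Build #${BUILD_NUMBER}"
```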
Specifying Environment
```bash
# Upload with environment metadata
qf upload results.xml \
  --project "MyApp Tests" \
  --launch "Staging Tests" \
  --environment staging
```

Including Build Metadata
```bash
# Upload with CI/CD metadata
qf upload results.xml \
  --project "MyApp Tests" \
  --launch "PR #123 Tests" \
  --environment pr \
  --branch feature/new-auth \
  --commit abc123def
```
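In a CI job you can avoid hard-coding branch and commit values by reading them from the checkout itself. The git commands below are standard; the qf flags are the ones shown above. Treat this as a sketch: detached-HEAD checkouts (common in CI) report HEAD as the branch name, so prefer your CI system's own branch variable where it provides one.

```bash
# Derive branch and commit from the local git checkout (sketch; adjust for your CI)
BRANCH="$(git rev-parse --abbrev-ref HEAD)"
COMMIT="$(git rev-parse HEAD)"

qf upload results.xml \
  --project "MyApp Tests" \
  --launch "PR #123 Tests" \
  --environment pr \
  --branch "$BRANCH" \
  --commit "$COMMIT"
```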
Handling Upload Errors
If the upload fails:
```bash
# Validate your results file before uploading
qf validate results.xml

# Check which format was detected
qf list-formats results.xml
```

Tips
- Use descriptive launch names that include build numbers or PR identifiers
- Always specify the environment for accurate reporting
- Include branch and commit metadata for traceability
- Run `qf validate` first if you're unsure about the file format (see the sketch below)
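Putting that last tip into practice, here is a minimal sketch of a CI step that validates before uploading; both commands are documented above, and the project, launch, and environment values are illustrative:

```bash
# Fail fast if the results file can't be parsed, then upload
qf validate results.xml && \
qf upload results.xml \
  --project "MyApp Tests" \
  --launch "Nightly Regression" \
  --environment staging
```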
Workflow 3: Execute Specific Test Suites
Test suites organize related tests into groups. Execute entire suites to test specific features or areas of your application.
Steps
- Navigate to your project
- Click Test Suites in the left sidebar
- Find the suite you want to execute
- Click the Run Suite button on the suite card
- Configure the launch:
  - Enter a launch name
  - Select the environment
  - Choose whether to run all cases or only untested ones
- Click Start Launch
During Suite Execution
- Tests appear in the order they're organized in the suite
- Use the suite outline to jump between test cases
- Track progress against the total suite size
Tips
- Organize suites by feature area, user flow, or priority level
- Create suites for regression testing (critical path tests)
- Use suite execution for smoke testing before releases
- Save common suite configurations as test plans for reuse
Workflow 4: Run Tests from a Test Plan
Test plans define reusable test execution configurations with specific test cases, suites, and schedules.
Steps
- Navigate to your project
- Click Test Plans in the left sidebar
- Select the test plan you want to execute
- Click Execute Plan
- Review the pre-configured settings:
  - Selected test cases and suites
  - Environment configuration
  - Assigned testers
- Click Start Launch
Scheduled Test Plans
If a test plan has a schedule:
- Navigate to Test Plans
- Find the scheduled plan
- Click View Schedule to see upcoming runs
- Scheduled launches are created automatically
Tips
- Create test plans for recurring testing activities (daily smoke tests, weekly regression)
- Assign different team members to different sections of large test plans
- Use test plans to ensure consistency across releases
- Clone existing plans to create variations for different environments
Workflow 5: Retry Failed Tests and Investigate Failures
When tests fail, you need to investigate and determine whether the failure is a legitimate bug or a flaky test.
Steps
- Navigate to the failed launch
- Click on the Failed filter to see only failed tests
- Click on a failed test case to view:
  - Failure message and stack trace
  - Screenshots or logs
  - Test steps and expected results
- Decide on the action:
  - Retry: Re-run the test to check for flakiness
  - Investigate: Debug the issue in your application
  - Link Defect: Create or link a bug report
Retrying Failed Tests
- In the launch view, click Retry Failed
- Select which tests to retry:
  - All failed tests: Retry everything that failed
  - Specific tests: Choose individual tests to retry
- Click Start Retry
- A new launch is created with the retry results
Interpreting Retry Results
- Consistent failure: Same test fails again with the same error
  - Likely a legitimate bug
  - Link to a defect for tracking
- Inconsistent failure: Test passes on retry
  - Likely a flaky test
  - Investigate timing issues, dependency problems, or environmental factors
- Different failure: Test fails with a different error
  - Multiple issues may exist
  - Investigate each failure mode
Tips
- Always retry failed tests before creating defects
- Document flaky tests for later stabilization
- Use failure patterns across multiple launches to identify systemic issues
- Add more specific assertions to reduce false positives
Understanding Test States
| State | Meaning | When to Use |
|---|---|---|
| Passed | Test executed successfully, expected outcome achieved | Default when test completes without issues |
| Failed | Test executed but unexpected result occurred | When expected outcome doesn't match actual |
| Skipped | Test was not executed | Test excluded from this run, not applicable |
| Blocked | Test could not run due to environmental issues | Service down, missing data, dependency failure |
| Running | Test is currently executing | Automated test in progress |
Related Topics
- Launches - Learn about launch lifecycle and analytics
- Case Runs - Individual test results explained
- Quick Start - Basic CLI upload tutorial
- CLI Tool Documentation - Full CLI command reference