Collection Runner
Run all requests in a collection sequentially or in parallel, with progress tracking and aggregated results.
The Collection Runner executes multiple requests automatically, either in sequence or in parallel. It's ideal for testing complete API workflows, running integration tests, and validating endpoints in bulk.
Quick Start
Open the Runner
Click the Run All button in the collection or folder header.
Configure Execution
Choose execution mode (Sequential or Parallel), select which requests to run, and set any delays.
Run
Click Run to start execution. Watch progress and results appear in real-time.
Review Results
See aggregated statistics, individual request results, and test outcomes.
Opening the Runner
For Collections
Click the Run All button in the collection header to open the runner.
All requests in the collection will be included, including those in nested folders.
For Folders
Click the Run All button in the folder header.
Only requests in that folder (and its subfolders) will run.
Execution Modes
Sequential Mode
Requests execute one at a time, in order.
Use when:
- Order matters (later requests depend on earlier ones)
- Testing API workflows with dependencies
- You need to throttle request rate
- Scripts set variables for subsequent requests
Settings:
- Delay Between Requests: Add a pause (in seconds) between each request
- Default: 0 seconds (no delay)
Sequential is Default
Sequential mode is the default because most API testing scenarios involve request dependencies.
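As a rough sketch (not the app's actual implementation), sequential execution with an optional delay looks like this; `sendRequest` and the request objects are illustrative placeholders:

```javascript
// Minimal sketch of sequential execution with an optional delay between requests.
// sendRequest is a placeholder for the actual HTTP call plus its scripts.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function runSequential(requests, sendRequest, delaySeconds = 0) {
  const results = [];
  for (const request of requests) {
    // Each request finishes (including its scripts) before the next starts,
    // so variables set by earlier requests are visible to later ones.
    results.push(await sendRequest(request));
    if (delaySeconds > 0) await sleep(delaySeconds * 1000);
  }
  return results;
}
```

This is why sequential mode suits dependent workflows: nothing overlaps, and each step sees the side effects of the previous one.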
Parallel Mode
Multiple requests execute simultaneously.
Use when:
- Requests are independent
- You want faster execution
- Load testing or bulk operations
- Order doesn't matter
Settings:
- Concurrent Requests: How many requests run at once
- Range: 1 to 10
- Default: 5
Watch Rate Limits
Parallel mode can trigger rate limits if you send too many requests simultaneously. Start with lower concurrency.
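Conceptually, parallel mode behaves like a small worker pool capped at the concurrency setting. A minimal sketch (with `sendRequest` again a placeholder):

```javascript
// Minimal sketch of parallel execution with a concurrency cap (default 5).
// At most `concurrency` requests are in flight at any moment.
async function runParallel(requests, sendRequest, concurrency = 5) {
  const results = new Array(requests.length);
  let next = 0;
  // Start `concurrency` workers; each pulls the next request off the queue.
  const workers = Array.from(
    { length: Math.min(concurrency, requests.length) },
    async () => {
      while (next < requests.length) {
        const index = next++;
        results[index] = await sendRequest(requests[index]);
      }
    }
  );
  await Promise.all(workers);
  return results;
}
```

Because completion order is unpredictable, results are stored by their original index, which is also why parallel mode is only safe when requests don't depend on each other.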
Selecting Requests
Default Selection
All requests are selected by default when you open the runner.
Individual Selection
Uncheck any request you don't want to run. Only selected requests execute.
Bulk Actions
| Action | What It Does |
|---|---|
| Select All | Check all requests |
| Deselect All | Uncheck all requests |
Selection Persists
Your selection is remembered during the runner session, making it easy to re-run the same subset.
Custom Order
Drag and drop requests to change execution order.
Note: This only affects the current run - your collection's actual order doesn't change.
Running Requests
Start Execution
Click Run to begin. The button is disabled if no requests are selected.
Monitor Progress
Watch the progress bar, elapsed time, and individual results as requests complete.
Control Execution
Use Pause, Resume, or Stop buttons to control the run.
Review Results
After completion, review statistics and individual request details.
Execution Controls
Pause
Click Pause to temporarily stop execution.
What happens:
- Currently running requests finish
- No new requests start
- Progress freezes
- Resume button appears
Works in both modes:
- Sequential: Pauses before next request
- Parallel: Waits for active requests to complete, doesn't start new ones
Resume
Click Resume to continue from where you paused.
Execution picks up with the next request in the queue.
Stop
Click Stop to cancel the entire run.
What happens:
- Currently running requests finish
- Remaining requests are skipped
- Results show only completed requests
- Run Again button appears
Stop Is Permanent
Stopping cannot be undone. Remaining requests won't execute and you'll need to start a new run.
Progress Tracking
Overall Progress
Progress Bar
- Shows percentage complete
- Updates in real-time as requests finish
- "X of Y completed" counter
Elapsed Time
- Counts up from 0 seconds
- Updates every second
- Freezes when paused
- Shows final time after completion
Individual Results
Each request result shows:
| Detail | Description |
|---|---|
| Method | HTTP method badge (GET, POST, etc.) |
| Name | Request name |
| Status | HTTP status code |
| Time | Response time in milliseconds |
Color Coding:
| Status | Color | Meaning |
|---|---|---|
| 2xx | Green | Success |
| 3xx | Blue | Redirect |
| 4xx | Orange | Client error |
| 5xx | Red | Server error |
| Error | Red | Network/timeout error |
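The color coding above maps directly onto status-code ranges. A small sketch of that mapping (function name is illustrative):

```javascript
// Sketch of the status-to-outcome mapping from the table above.
// A network or timeout failure has no status code, so it's treated as an error.
function classifyResult(status) {
  if (status == null) return { label: "Error", color: "red" };
  if (status >= 200 && status < 300) return { label: "Success", color: "green" };
  if (status >= 300 && status < 400) return { label: "Redirect", color: "blue" };
  if (status >= 400 && status < 500) return { label: "Client error", color: "orange" };
  return { label: "Server error", color: "red" };
}
```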
Results Summary
After execution completes, you'll see aggregated statistics.
Statistics Cards
Success Count
- How many requests returned 2xx or 3xx status
- Green indicator
Failure Count
- How many requests returned 4xx, 5xx, or errored
- Red indicator
Average Response Time
- Average of all request response times
- Helpful for performance analysis
Total Time
- Total elapsed time from start to finish
- Includes delays (in sequential mode)
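The statistics above follow directly from the per-request results. A sketch of the aggregation, assuming each result looks like `{ status, timeMs }` (the shape is illustrative):

```javascript
// Sketch of the summary statistics, following the definitions above:
// 2xx/3xx count as successes; 4xx, 5xx, and network errors (null status)
// count as failures. Note that average response time is per-request, while
// total time is wall-clock and also includes configured delays.
function summarize(results) {
  const successes = results.filter(
    (r) => r.status != null && r.status >= 200 && r.status < 400
  );
  const totalTime = results.reduce((sum, r) => sum + r.timeMs, 0);
  return {
    successCount: successes.length,
    failureCount: results.length - successes.length,
    averageResponseTimeMs: results.length ? totalTime / results.length : 0,
  };
}
```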
Test Results
If your requests include tests (in post-response scripts), you'll see test aggregation.
Test Summary:
- Total tests passed
- Total tests failed
- Pass/fail badge
- Expandable details per request
Writing Tests
Use post-response scripts to add assertions. Learn more in the Scripting documentation.
Expandable Details
Click any result row to see more information.
Expanded view shows:
- Full URL as it was sent
- Error message (if request failed)
- Test results (if tests exist)
Click again to collapse. Only one row can be expanded at a time.
Variables and Scripts
Environment Variables
Requests use variables from your active environment.
Variable Resolution:
- Request variables (highest priority)
- Folder variables
- Collection variables
- Environment variables (lowest priority)
Example:

```
URL:      {{baseUrl}}/users/{{userId}}
Resolved: https://api.example.com/users/12345
```
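The precedence order can be sketched as a lookup cascade: the first scope that defines a variable wins. The scope objects below are illustrative placeholders:

```javascript
// Sketch of the resolution order above: request (highest priority) wins
// over folder, collection, and finally environment (lowest priority).
function resolveVariable(name, scopes) {
  // scopes = { request, folder, collection, environment }, each a plain object
  const order = [scopes.request, scopes.folder, scopes.collection, scopes.environment];
  for (const scope of order) {
    if (scope && name in scope) return scope[name];
  }
  return undefined; // unresolved: the {{variable}} stays as-is in the URL
}
```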
Script Execution
Pre-request scripts run before each request.
Post-response scripts run after each request.
Script Order:
- Collection pre-request script
- Folder pre-request script (if applicable)
- Request pre-request script
- HTTP Request
- Request post-response script
- Folder post-response script
- Collection post-response script
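The per-request pipeline above can be expressed as an ordered list of phases, skipping any scope that has no script. This is a sketch with illustrative names, not the app's internals:

```javascript
// Sketch of the per-request execution order listed above. Each scope may
// contribute an optional pre-request and post-response script; the folder
// scope is absent for requests at the collection root.
function executionOrder(collection, folder, request) {
  const phases = [
    collection.preRequest,
    folder && folder.preRequest,
    request.preRequest,
    "HTTP request",
    request.postResponse,
    folder && folder.postResponse,
    collection.postResponse,
  ];
  return phases.filter(Boolean); // drop scopes with no script defined
}
```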
Variable Propagation
Scripts can set variables that persist for subsequent requests in the run.
Example:

Request 1 post-response script:

```javascript
// Extract auth token from response
const data = nova.response.json();
nova.environment.set("authToken", data.token);
```

Request 2 can then use it:

```
Authorization: Bearer {{authToken}}
```
Variables Don't Persist After Run
Variables set during a run update the currentValue, which is local-only. After the run, they revert to their default values.
Runner Presets
Save common configurations for quick reuse.
Saving a Preset
Configure Runner
Set execution mode, select requests, adjust settings.
Click Save
Click Save Preset button.
Name It
Enter a name (e.g., "Quick Test", "Full Integration Test") and optional description.
Confirm
Preset is saved locally and appears in the presets list.
What's saved:
- Execution mode (Sequential/Parallel)
- Delay or concurrency settings
- Selected requests
- Custom request order
Loading a Preset
- Open the runner
- Select a preset from the list
- All settings are restored automatically
Updating a Preset
- Load the preset
- Make changes
- Click Save again - the existing preset is updated
Preset Statistics
After each run, the preset stores statistics:
- Last run date and time
- Success count
- Failure count
- Total requests run
These stats appear in the preset list for quick reference.
Presets Are Local
Presets are stored on your device only and don't sync across devices.
Error Handling
Request Failures
If a request fails (network error, timeout, 4xx/5xx), it's marked as failed and execution continues.
The runner does NOT stop on failure - all remaining requests still execute.
Network Offline
If you're offline, requests fail immediately with a network error.
The runner doesn't queue or retry requests automatically.
Timeouts
Requests timeout after 30 seconds by default.
Timeout errors are marked as failures and execution continues.
Best Practices
- Start with Sequential — Use sequential mode first, especially when requests depend on each other
- Test with Few Requests — Validate your configuration on a small subset before running all requests
- Use Delays for Rate Limits — Add delays in sequential mode if the API has rate limits
- Save Common Configurations — Create presets for frequently used test scenarios
- Review Test Results — Add assertions to catch API regressions early
- Check Variables — Ensure your environment is active and variables are set correctly
- Monitor Resource Usage — Limit concurrency in parallel mode to avoid overwhelming the API
Troubleshooting
All Requests Failing
Check:
- Internet connection is active
- Variables are resolved correctly (no {{unresolved}} placeholders in URLs)
- Environment is active if using variables
- API endpoint is accessible
Slow Execution
Causes:
- Large delay between requests (sequential mode)
- Low concurrency (parallel mode)
- Slow API responses
- Script execution taking time
Solutions:
- Reduce delay (sequential)
- Increase concurrency (parallel)
- Optimize scripts
Tests Failing
Check:
- Response structure matches expectations
- Status codes are as expected
- Variables are set correctly before tests run