# Understanding Results
Learn how to read load test metrics and make data-driven decisions about your infrastructure.
## Key Metrics
### Response Time (http_req_duration)
How long your server takes to respond. Look at the median (p50) for typical performance and at p95/p99 for tail latency.

- **Good:** p95 under 500ms for APIs, under 2s for web pages
- **Bad:** p95 over 1s for APIs, or latency climbing steadily over the course of the test
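You can turn a latency budget like this into a pass/fail gate with a k6 threshold. A minimal sketch of the options block (the 500ms budget is illustrative; pick one that matches your own SLA):

```javascript
// k6 options: fail the test run if tail latency blows the budget.
export const options = {
  thresholds: {
    // 95% of requests must complete in under 500ms (illustrative budget)
    http_req_duration: ['p(95)<500'],
  },
};
```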
### Throughput (http_reqs)
Requests per second your server handles. Throughput should scale roughly linearly with VUs until your server hits its limit.

- **Good:** throughput increases as VUs increase
- **Bad:** throughput plateaus or drops as VUs increase — the server is saturated
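Counter metrics like `http_reqs` accept rate-based thresholds, so you can assert a minimum sustained request rate. A sketch with an illustrative floor of 100 req/s:

```javascript
// k6 options: fail the run if average throughput falls below the floor.
export const options = {
  thresholds: {
    // sustain at least 100 requests/second over the run (illustrative floor)
    http_reqs: ['rate>100'],
  },
};
```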
### Error Rate (http_req_failed)
The percentage of requests that return HTTP error codes (4xx/5xx) or fail at the connection level.

- **Good:** under 0.1% for production-ready systems
- **Bad:** over 1% — investigate the error distribution and fix root causes
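The 0.1% bar above maps directly onto a rate threshold on `http_req_failed`:

```javascript
// k6 options: enforce the production-ready error budget.
export const options = {
  thresholds: {
    // keep the failure rate below 0.1%
    http_req_failed: ['rate<0.001'],
  },
};
```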
### Check Pass Rate (checks)
The percentage of your custom checks that passed. Checks validate response bodies, headers, and status codes.

- **Good:** over 99% pass rate
- **Bad:** under 95% — your API is returning unexpected responses under load
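Checks are declared in your script with k6's `check()` helper. A minimal sketch against k6's public demo site (swap in your own endpoint and assertions):

```javascript
import http from 'k6/http';
import { check } from 'k6';

export default function () {
  const res = http.get('https://test.k6.io'); // k6's public demo site

  // Each entry feeds the `checks` metric: name -> predicate on the response.
  check(res, {
    'status is 200': (r) => r.status === 200,
    'body is not empty': (r) => r.body.length > 0,
  });
}
```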
### Data Transfer (data_received / data_sent)
Total bytes transferred during the test. Useful for estimating bandwidth costs and spotting payload bloat.

- **Good:** consistent with expected response sizes
- **Bad:** much larger than expected — check for verbose logging or debug responses leaking into payloads
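Because `data_received` is a counter, you can also cap it with a `count` threshold to catch payload bloat automatically. A sketch (the ~50 MB ceiling is purely illustrative — derive yours from expected response sizes times request count):

```javascript
// k6 options: flag the run if the total bytes received exceed the ceiling.
export const options = {
  thresholds: {
    // ~50 MB total for the whole run (illustrative ceiling)
    data_received: ['count<52428800'],
  },
};
```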
### Virtual Users (vus)
The number of concurrent virtual users at any point during the test. This maps directly to your stages configuration.

- **Good:** VUs ramp smoothly according to your stages
- **Bad:** VUs never reach the target — k6 can't spawn them fast enough, or you're hitting connection limits
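The `vus` metric should trace the shape of your `stages` configuration. A typical ramp-hold-ramp sketch (durations and targets are illustrative):

```javascript
export const options = {
  // The vus metric over time should follow this shape.
  stages: [
    { duration: '1m', target: 50 }, // ramp up to 50 VUs
    { duration: '3m', target: 50 }, // hold steady state
    { duration: '1m', target: 0 },  // ramp down
  ],
};
```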
## Understanding Percentiles
Averages lie. If your average response time is 200ms but your p99 is 5 seconds, 1 in 100 users is having a terrible experience. Always use percentiles.
| Percentile | What it means |
|---|---|
| p50 (median) | Half of requests are faster than this. Your "typical" response time. |
| p90 | 90% of requests are faster. A good SLA target for most APIs. |
| p95 | 95% of requests are faster. The industry standard for API performance SLAs. |
| p99 | 99% of requests are faster. Only 1 in 100 users experiences worse latency. Critical for high-traffic apps. |
## Red Flags to Watch For
- ⚠️ **Response time increases with VUs** — Your server is saturated. Scale horizontally or optimize bottlenecks.
- ⚠️ **Throughput plateaus while VUs increase** — You've hit a concurrency limit. Check database connections, thread pools, or rate limits.
- ⚠️ **Error rate spikes suddenly** — Likely a resource exhaustion issue. Check memory, CPU, or connection pool limits.
- ⚠️ **Large gap between p50 and p99** — Inconsistent performance. Look for garbage collection pauses, cold caches, or noisy neighbors.
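When one of these red flags is fatal, you don't have to let the test run to completion: k6 thresholds also accept an object form with `abortOnFail`, which stops the run as soon as the threshold is crossed. A sketch (the 1s budget and 30s delay are illustrative):

```javascript
export const options = {
  thresholds: {
    http_req_duration: [
      // Abort the test early once the server is clearly saturated;
      // delayAbortEval gives the ramp-up time to settle first.
      { threshold: 'p(95)<1000', abortOnFail: true, delayAbortEval: '30s' },
    ],
  },
};
```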