Performance Testing
Use Cases
Performance testing for specified interfaces.
Implementation
To meet high-performance concurrency requirements, a self-developed pressure testing engine capable of handling more than 10,000 concurrent requests is used. The project is open source and available at: https://github.com/EchoAPI-Team/runnerGo
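As a rough illustration of the approach (not the actual runnerGo implementation), a Go engine can fan requests out across goroutines and aggregate the outcomes; the endpoint, concurrency, and iteration count below are placeholders:

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"sync/atomic"
	"time"
)

func main() {
	const (
		concurrency = 100 // number of concurrent workers (illustrative)
		iterations  = 10  // requests sent by each worker (illustrative)
	)
	target := "http://localhost:8080/ping" // placeholder endpoint; replace with the interface under test

	client := &http.Client{Timeout: 5 * time.Second}
	var wg sync.WaitGroup
	var success, failed int64

	start := time.Now()
	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < iterations; j++ {
				resp, err := client.Get(target)
				if err != nil || resp.StatusCode != 200 {
					atomic.AddInt64(&failed, 1)
				} else {
					atomic.AddInt64(&success, 1)
				}
				if err == nil {
					resp.Body.Close()
				}
			}
		}()
	}
	wg.Wait()

	elapsed := time.Since(start)
	fmt.Printf("sent %d requests in %s (%d ok, %d failed)\n",
		concurrency*iterations, elapsed, success, failed)
}
```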
Pressure Test Result Calculation Method
Load Test Value | Meaning | Calculation Method |
---|---|---|
Total Requests | Total number of requests sent | Concurrent requests * Number of iterations |
Execution Time | Execution time of the load testing task | End time of the task - Start time of the task |
Successful Requests | Number of requests with HTTP status code 200 | - |
Failed Requests | Number of requests with non-200 HTTP code or connection errors | - |
Error Rate | Ratio of failed requests in the load test | (Number of failed requests / Total requests) * 100% |
Total Received Data | Total bytes received in responses | Sum of the byte sizes of each response |
Requests per Second | Average number of requests processed per second | Total requests / Total execution time |
Successful Requests per Second | Average number of successful requests per second | Total successful requests / Total execution time |
Received Bytes per Second | Average bytes received per second | Total received bytes / Total execution time |
Maximum Response Time | Maximum execution time for a request | Longest execution time among all requests |
Minimum Response Time | Minimum execution time for a request | Shortest execution time among all requests |
Average Response Time | Average execution time per request | Sum of all request execution times / Total requests |
10th Percentile | Response time at or below which 10% of requests complete | Sort all request times in ascending order and take the value at the 10% position |
25th Percentile | Response time at or below which 25% of requests complete | Sort all request times in ascending order and take the value at the 25% position |
50th Percentile | Response time at or below which 50% of requests complete | Sort all request times in ascending order and take the value at the 50% position |
75th Percentile | Response time at or below which 75% of requests complete | Sort all request times in ascending order and take the value at the 75% position |
90th Percentile | Response time at or below which 90% of requests complete | Sort all request times in ascending order and take the value at the 90% position |
95th Percentile | Response time at or below which 95% of requests complete | Sort all request times in ascending order and take the value at the 95% position |
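The sketch below shows one way these statistics could be computed from per-request records; the `Result` struct and field names are illustrative and not taken from runnerGo:

```go
package report

import (
	"fmt"
	"sort"
	"time"
)

// Result holds the outcome of a single request; the field names are illustrative.
type Result struct {
	Duration   time.Duration // per-request execution time
	StatusCode int           // HTTP status code (0 on connection error)
	Bytes      int64         // response size in bytes
}

// Print derives the statistics from the table above for one finished run.
// execTime is the wall-clock duration of the task (end time - start time).
func Print(results []Result, execTime time.Duration) {
	total := len(results)
	if total == 0 {
		return
	}
	var success int
	var totalBytes int64
	var sum time.Duration
	times := make([]time.Duration, 0, total)
	for _, r := range results {
		if r.StatusCode == 200 {
			success++
		}
		totalBytes += r.Bytes
		sum += r.Duration
		times = append(times, r.Duration)
	}
	sort.Slice(times, func(i, j int) bool { return times[i] < times[j] })

	// percentile picks the value at the p% position of the ascending-sorted times.
	percentile := func(p float64) time.Duration {
		idx := int(float64(total) * p / 100)
		if idx >= total {
			idx = total - 1
		}
		return times[idx]
	}

	fmt.Printf("total requests:       %d\n", total)
	fmt.Printf("failed requests:      %d\n", total-success)
	fmt.Printf("error rate:           %.2f%%\n", float64(total-success)/float64(total)*100)
	fmt.Printf("requests per second:  %.2f\n", float64(total)/execTime.Seconds())
	fmt.Printf("received bytes/sec:   %.2f\n", float64(totalBytes)/execTime.Seconds())
	fmt.Printf("min / max response:   %s / %s\n", times[0], times[total-1])
	fmt.Printf("average response:     %s\n", sum/time.Duration(total))
	for _, p := range []float64{10, 25, 50, 75, 90, 95} {
		fmt.Printf("%gth percentile:      %s\n", p, percentile(p))
	}
}
```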
Practice
Load test results are easily affected by external factors, so these influences should be minimized during stress testing. Factors that affect the outcome include the local machine's file handle limit, DNS resolution speed, network quality, and the server's connection limit, among others.
For instance, at 10,000 concurrent connections it is easy to exceed the machine's maximum file handle limit (typically 1024 by default). Requests beyond this limit cannot obtain a handle and therefore fail with connection errors.
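On Unix-like systems the process file-descriptor limit can be inspected and raised before a run; the sketch below is a minimal example and assumes a Linux or macOS environment:

```go
package main

import (
	"fmt"
	"syscall"
)

func main() {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("getrlimit:", err)
		return
	}
	fmt.Printf("current fd limit: soft=%d hard=%d\n", rl.Cur, rl.Max)

	// Raise the soft limit up to the hard limit; raising the hard limit
	// itself requires elevated privileges or a system-level change.
	rl.Cur = rl.Max
	if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("setrlimit:", err)
		return
	}
	fmt.Printf("new fd limit: soft=%d hard=%d\n", rl.Cur, rl.Max)
}
```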
Thus, choosing an appropriate concurrency level is crucial for accurately measuring interface performance; higher concurrency is not necessarily better. It is advisable to run preliminary tests at concurrency levels of roughly 10, 100, 500, and 1000. If the failure rate stays below 1%, the number of concurrent connections can be increased gradually; only while requests per second continue to rise should the load be considered healthy.
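A minimal sketch of that step-up strategy, with a hypothetical `runStage` helper standing in for a single load-test run (not part of runnerGo's API):

```go
package rampup

// StageResult is an illustrative summary of one load-test stage.
type StageResult struct {
	ErrorRate float64 // failed requests / total requests
	RPS       float64 // requests per second
}

// runStage would execute a load test at the given concurrency and return its summary.
func runStage(concurrency int) StageResult {
	// ... placeholder: drive the real engine here ...
	return StageResult{}
}

// FindHealthyConcurrency walks up the concurrency ladder and returns the last
// level that was still healthy (error rate below 1% and RPS still increasing).
func FindHealthyConcurrency() int {
	levels := []int{10, 100, 500, 1000, 2000, 5000}
	best, bestRPS := 0, 0.0
	for _, c := range levels {
		r := runStage(c)
		if r.ErrorRate >= 0.01 || r.RPS <= bestRPS {
			break // errors too high or throughput no longer improving
		}
		best, bestRPS = c, r.RPS
	}
	return best
}
```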