Performance Testing

Use Cases

Run performance (load) tests against specified API interfaces.

Implementation

To meet high-concurrency performance requirements, a self-developed load testing (pressure testing) engine is used, capable of handling more than 10,000 concurrent requests. The project is open source and available at: https://github.com/EchoAPI-Team/runnerGo
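As a rough illustration of the concurrency model used in this kind of engine (a fixed number of concurrent workers, each running a fixed number of iterations), the Go sketch below issues requests from many goroutines and tallies successes and failures. It is a minimal, hypothetical example, not code from runnerGo; the target URL, worker count, iteration count, and timeout are placeholder values.

```go
package main

import (
	"fmt"
	"net/http"
	"sync"
	"time"
)

func main() {
	// Placeholder parameters; not the actual runnerGo configuration.
	const (
		concurrency = 100
		iterations  = 50
		targetURL   = "https://example.com/api/health" // hypothetical endpoint
	)

	var (
		wg        sync.WaitGroup
		mu        sync.Mutex
		succeeded int
		failed    int
	)

	client := &http.Client{Timeout: 10 * time.Second}
	start := time.Now()

	for i := 0; i < concurrency; i++ {
		wg.Add(1)
		go func() {
			defer wg.Done()
			for j := 0; j < iterations; j++ {
				resp, err := client.Get(targetURL)
				mu.Lock()
				if err != nil || resp.StatusCode != http.StatusOK {
					failed++ // non-200 status or connection error counts as a failure
				} else {
					succeeded++
				}
				mu.Unlock()
				if resp != nil {
					resp.Body.Close()
				}
			}
		}()
	}
	wg.Wait()

	elapsed := time.Since(start)
	total := concurrency * iterations
	fmt.Printf("total=%d success=%d failed=%d elapsed=%s rps=%.1f\n",
		total, succeeded, failed, elapsed, float64(total)/elapsed.Seconds())
}
```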

Pressure Test Result Calculation Method

| Load Test Value | Meaning | Calculation Method |
| --- | --- | --- |
| Total Requests | Total number of requests sent | Concurrent requests * Number of iterations |
| Execution Time | Execution time of the load testing task | End time of the task - Start time of the task |
| Successful Requests | Number of requests with HTTP status code 200 | - |
| Failed Requests | Number of requests with a non-200 HTTP status code or a connection error | - |
| Error Rate | Proportion of failed requests in the load test | (Failed requests / Total requests) * 100% |
| Total Received Data | Total bytes received in responses | Sum of the byte sizes of all responses |
| Requests per Second | Average number of requests processed per second | Total requests / Total execution time |
| Successful Requests per Second | Average number of successful requests per second | Total successful requests / Total execution time |
| Received Bytes per Second | Average bytes received per second | Total received bytes / Total execution time |
| Maximum Response Time | Maximum execution time for a single request | Longest execution time among all requests |
| Minimum Response Time | Minimum execution time for a single request | Shortest execution time among all requests |
| Average Response Time | Average execution time per request | Sum of all request execution times / Total requests |
| 10th Percentile | Time within which the fastest 10% of requests completed | Sort all request times in ascending order and take the value at the 10% position |
| 25th Percentile | Time within which the fastest 25% of requests completed | Sort all request times in ascending order and take the value at the 25% position |
| 50th Percentile | Time within which the fastest 50% of requests completed | Sort all request times in ascending order and take the value at the 50% position |
| 75th Percentile | Time within which the fastest 75% of requests completed | Sort all request times in ascending order and take the value at the 75% position |
| 90th Percentile | Time within which the fastest 90% of requests completed | Sort all request times in ascending order and take the value at the 90% position |
| 95th Percentile | Time within which the fastest 95% of requests completed | Sort all request times in ascending order and take the value at the 95% position |
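To make the calculation methods above concrete, here is a small Go sketch (not taken from the engine) that derives the minimum, maximum, average, and percentile values from a set of recorded response times. It assumes a nearest-rank percentile rule and hypothetical sample data; the engine's exact indexing rule may differ.

```go
package main

import (
	"fmt"
	"math"
	"sort"
	"time"
)

// percentile returns the response time at the given percentile (0-100)
// of the ascending-sorted durations, using the nearest-rank rule.
func percentile(sorted []time.Duration, p float64) time.Duration {
	if len(sorted) == 0 {
		return 0
	}
	idx := int(math.Ceil(float64(len(sorted))*p/100.0)) - 1
	if idx < 0 {
		idx = 0
	}
	if idx >= len(sorted) {
		idx = len(sorted) - 1
	}
	return sorted[idx]
}

func main() {
	// Hypothetical per-request response times recorded during a run.
	times := []time.Duration{
		120 * time.Millisecond, 80 * time.Millisecond, 200 * time.Millisecond,
		95 * time.Millisecond, 150 * time.Millisecond, 60 * time.Millisecond,
	}

	// Sort in ascending order, as the table's calculation method describes.
	sort.Slice(times, func(i, j int) bool { return times[i] < times[j] })

	var sum time.Duration
	for _, t := range times {
		sum += t
	}

	fmt.Println("min:", times[0])
	fmt.Println("max:", times[len(times)-1])
	fmt.Println("avg:", sum/time.Duration(len(times)))
	fmt.Println("p50:", percentile(times, 50))
	fmt.Println("p90:", percentile(times, 90))
	fmt.Println("p95:", percentile(times, 95))
}
```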

Practice

Concurrency test results are easily affected by external factors, so it is important to minimize these influences during stress testing. Factors that affect stress test outcomes include the local machine's handle limit, DNS resolution speed, network quality, and server connection limits, among others.

For instance, at 10,000 concurrent connections it is easy to exceed the machine's maximum handle limit (which commonly defaults to 1024). Requests beyond this limit cannot obtain a handle and therefore fail with connection errors.
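On Unix-like systems, the handle limit in question is the per-process open file descriptor limit. The sketch below, offered only as an assumption-laden illustration, shows one way to inspect that limit and raise the soft limit toward the hard limit before a run; whether raising it succeeds depends on operating system configuration.

```go
package main

import (
	"fmt"
	"syscall"
)

// Unix-only sketch: inspect and optionally raise the per-process open
// file descriptor limit before starting a high-concurrency test run.
// The hard limit itself can only be raised through system configuration.
func main() {
	var rl syscall.Rlimit
	if err := syscall.Getrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
		fmt.Println("getrlimit failed:", err)
		return
	}
	fmt.Printf("open file limit: soft=%d hard=%d\n", rl.Cur, rl.Max)

	// Raise the soft limit up to the hard limit so more concurrent
	// connections can be opened without "too many open files" errors.
	if rl.Cur < rl.Max {
		rl.Cur = rl.Max
		if err := syscall.Setrlimit(syscall.RLIMIT_NOFILE, &rl); err != nil {
			fmt.Println("setrlimit failed:", err)
			return
		}
		fmt.Printf("soft limit raised to %d\n", rl.Cur)
	}
}
```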

Thus, selecting an appropriate level of concurrency is crucial for accurately testing interface performance; higher concurrency is not necessarily better. It is advisable to run preliminary tests at concurrency levels of roughly 10, 100, 500, and 1000, and to increase concurrency gradually only while the failure rate stays below 1%. Concurrency is only being used healthily if requests per second keeps rising as concurrency increases; once throughput stops growing, adding more concurrent connections no longer reflects the interface's real capacity.
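This step-up approach can be scripted. The sketch below is purely illustrative: runLoadTest is a hypothetical stand-in for invoking the load engine, not an actual runnerGo API, and the stopping rule simply encodes the 1% failure-rate and rising-RPS criteria described above.

```go
package main

import "fmt"

// result holds the two metrics the ramp-up decision depends on.
type result struct {
	errorRate float64 // failed requests / total requests
	rps       float64 // requests per second
}

// runLoadTest is a hypothetical stand-in for running the load engine
// at a given concurrency level; it is not part of runnerGo's API.
func runLoadTest(concurrency int) result {
	// ... invoke the engine and collect its metrics here ...
	return result{}
}

func main() {
	levels := []int{10, 100, 500, 1000}
	var prevRPS float64

	for _, c := range levels {
		r := runLoadTest(c)
		fmt.Printf("concurrency=%d errorRate=%.2f%% rps=%.1f\n", c, r.errorRate*100, r.rps)

		// Stop ramping up once errors exceed 1% or throughput stops growing:
		// beyond that point, higher concurrency no longer measures real capacity.
		if r.errorRate > 0.01 || (prevRPS > 0 && r.rps <= prevRPS) {
			fmt.Println("stopping ramp-up at concurrency", c)
			break
		}
		prevRPS = r.rps
	}
}
```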