================================================================================
 CPU  Intel(R) Core(TM) i9-13900K @ 5.5 GHz, performance governor
      CPU security patches all ON
 RAM  192 GB, DDR5 5600 MT/s
 SSD  2 TB NVMe SSD
 UFW  Firewall ON
 OS   Ubuntu 24.04 LTS (with all security updates)
 FDs  prlimit --pid=$$ --nofile=1048576
 TST  wrk2 (a fork of wrk), see why here: http://gwan.com/blog/20250417.html
================================================================================
LITESPEED Enterprise 6.3.4 => [116k-565k] RPS, 285k socket errors
--------------------------------------------------------------------------------
RPS by number of concurrent clients:

    1: 116k RPS, Socket errors: read 116
   1k: 565k RPS --> LITESPEED and NGINX peak RPS (G-WAN's peak: 10k clients)
  10k: 279k RPS, Socket errors: timeout 7451
  20k: 220k RPS, Socket errors: timeout 44312
  30k: 201k RPS, Socket errors: timeout 85010
  40k: 180k RPS, Socket errors: timeout 148176
 ----------------
 Total 1,561k RPS
================================================================================
Before the test
--------------------------------------------------------------------------------
- running litespeed process(es):

  PID    PPID   THRDS  %CPU  VIRT      RSS      SHRD    EXE
  11474  1791   1      0.0   38.7 MB   17.5 MB  1.5 MB  litespeed (lshttpd - main)
  11478  11474  6      0.0   80.4 MB   17.9 MB  1.5 MB  litespeed (lshttpd - #01)
  11479  11474  6      0.0   80.4 MB   17.9 MB  1.5 MB  litespeed (lshttpd - #02)
  ===============
  Total: 17.5 + (2 * 17.9) + 1.5 = 54.8 MB
--------------------------------------------------------------------------------
After the test
--------------------------------------------------------------------------------
- running litespeed process(es):

  PID    PPID   THRDS  %CPU  VIRT      RSS      SHRD    EXE
  11474  1791   1      0.0   38.7 MB   17.5 MB  1.5 MB  litespeed (lshttpd - main)
  11478  11474  6      13.2  142.3 MB  77.4 MB  1.5 MB  litespeed (lshttpd - #01)
  11479  11474  6      13.2  141.7 MB  76.3 MB  1.5 MB  litespeed (lshttpd - #02)
  ===============
  Total: 17.5 + 77.4 + 76.3 + 1.5 = 172.7 MB
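The per-concurrency logs below were produced by wrk2 invocations of this shape. A sketch of the sweep (the rate is fixed at -R1m here, whereas the actual runs used -R10m for the 1-client case and -R100m for the 40k-client case):

```shell
#!/bin/sh
# Sketch of the concurrency sweep behind the logs below; URL and port are the
# ones used in this test. Commands are echoed for inspection - remove the
# 'echo' to actually run them (wrk2 must be on PATH).
URL="http://127.0.0.1:8081/100.html"
for N in 1 1k 10k 20k 30k 40k; do
  C=$(echo "$N" | sed 's/k/000/')   # expand "1k" -> "1000", etc.
  echo wrk2 -t"$C" -c"$C" -R1m "$URL"
done
```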
================================================================================
wrk2 -t1 -c1 -R10m "http://127.0.0.1:8081/100.html"

Initialised 1 threads in 0 ms.
Running 10s test @ http://127.0.0.1:8081/100.html
  1 threads and 1 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     4.94s     2.85s    9.89s    0.03%
    Req/Sec      -nan      -nan     0.00     0.00%
  1165744 requests in 10.00s, 429.13MB read
  Socket errors: connect 0, read 116, write 0, timeout 0
Requests/sec: 116573.36
Transfer/sec:     42.91MB
================================================================================
wrk2 -t1k -c1k -R1m "http://127.0.0.1:8081/100.html"

Initialised 1000 threads in 49 ms.
Running 10s test @ http://127.0.0.1:8081/100.html
  1000 threads and 1000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.25s     1.25s    4.68s    0.04%
    Req/Sec      -nan      -nan     0.00     0.00%
  5651027 requests in 9.99s, 2.03GB read
Requests/sec: 565807.98
Transfer/sec:    208.28MB
================================================================================
wrk2 -t10k -c10k -R1m "http://127.0.0.1:8081/100.html"

Initialised 10000 threads in 2206 ms.
Running 10s test @ http://127.0.0.1:8081/100.html
  10000 threads and 10000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     3.31s     2.42s    9.11s    0.03%
    Req/Sec      -nan      -nan     0.00     0.00%
  3032861 requests in 10.87s, 1.09GB read
  Socket errors: connect 0, read 0, write 0, timeout 7451
Requests/sec: 279006.43
Transfer/sec:    102.71MB
================================================================================
wrk2 -t20k -c20k -R1m "http://127.0.0.1:8081/100.html"

Initialised 20000 threads in 8238 ms.
Running 10s test @ http://127.0.0.1:8081/100.html
  20000 threads and 20000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.34s     2.39s    9.71s    0.02%
    Req/Sec      -nan      -nan     0.00     0.00%
  2894658 requests in 13.11s, 1.04GB read
  Socket errors: connect 0, read 0, write 0, timeout 44312
Requests/sec: 220848.50
Transfer/sec:     81.30MB
================================================================================
wrk2 -t30k -c30k -R1m "http://127.0.0.1:8081/100.html"

Initialised 30000 threads in 18110 ms.
Running 10s test @ http://127.0.0.1:8081/100.html
  30000 threads and 30000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.42s     2.03s    9.71s    0.01%
    Req/Sec      -nan      -nan     0.00     0.00%
  2742754 requests in 13.59s, 0.99GB read
  Socket errors: connect 0, read 0, write 0, timeout 85010
Requests/sec: 201863.31
Transfer/sec:     74.31MB
================================================================================
wrk2 -t40k -c40k -R100m "http://127.0.0.1:8081/100.html"

Initialised 40000 threads in 31842 ms.
Running 10s test @ http://127.0.0.1:8081/100.html
  40000 threads and 40000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     5.55s     2.60s    9.84s    0.02%
    Req/Sec     0.00      0.00     0.00     0.00%
  2480522 requests in 13.75s, 0.89GB read
  Socket errors: connect 0, read 0, write 0, timeout 148176
Requests/sec: 180407.50
Transfer/sec:     66.41MB
================================================================================
Conclusion
--------------------------------------------------------------------------------
LITESPEED relies on caching and 2 worker processes (each running 6 threads)
to achieve good low-concurrency scores, but it is quickly overwhelmed at
moderate concurrency (> 1k clients).

Compared to LITESPEED, NGINX under-performs WITHOUT CONCURRENCY, but performs
about twice as well at all higher concurrency levels, with fewer errors.

LITESPEED starts with a lower RAM usage but closes the test with a higher
memory consumption than NGINX - despite its lower performance.
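The memory totals in the tables above are sums of per-process RSS; a minimal sketch of that accounting (summing RSS kilobytes as printed by ps; the 'litespeed' process name comes from the tables, and the demo input below is sample data, not the measured values):

```shell
#!/bin/sh
# sum_rss_mb: sum RSS values (in KB, one per line, as printed by ps) read on
# stdin and report the total in MB - the same arithmetic as the table totals.
sum_rss_mb() {
  awk '{ kb += $1 } END { printf "%.1f MB\n", kb / 1024 }'
}

# On the live system the input would come from ps, e.g.:
#   ps -C litespeed -o rss= | sum_rss_mb
# Demo with sample RSS values (KB):
printf '17920\n18330\n18330\n' | sum_rss_mb
```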
Compared to LITESPEED, G-WAN is as fast WITHOUT CONCURRENCY, and much faster
(without errors) across the [1k-40k] concurrency range - G-WAN being up to
3 orders of magnitude faster than LITESPEED.

At startup, G-WAN consumed 71x less RAM than LITESPEED. At the test's end,
G-WAN had consumed 4x more RAM than LITESPEED - while consistently delivering
hundreds of millions of RPS instead of the hundreds of thousands of RPS
reached by NGINX and LITESPEED.

Last but not least, LITESPEED puts the emphasis on Apache Benchmark (AB), a
tool long considered obsolete because it is single-threaded - covering only
the WITHOUT CONCURRENCY range, the one case where LITESPEED is faster than
NGINX:

https://www.litespeedtech.com/support/wiki/doku.php/litespeed_wiki:faq:performing-a-benchmark
================================================================================
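The "285k socket errors" headline can be sanity-checked by summing the per-concurrency error counts from the summary table at the top of this report:

```shell
#!/bin/sh
# Socket-error counts per concurrency level (1, 10k, 20k, 30k, 40k clients),
# copied from the summary table above.
total=0
for e in 116 7451 44312 85010 148176; do
  total=$((total + e))
done
echo "total socket errors: $total"   # 285065, i.e. the ~285k headline
```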