HTTPS Reverse Proxy: Caddy outperforms NGINX 4x

In my setup, where an HTTPS load balancer (a.k.a. reverse proxy) terminates TLS and forwards traffic over plain HTTP to the backend server, Caddy outperforms NGINX by 4x.

That’s 4x the performance over an HTTPS connection compared to NGINX. I’m using rpc-bench for this test.

Disclaimer: YMMV. If NGINX is working out great for you, keep using it. I switched from NGINX to Caddy after experiencing this performance difference. However, I’m constantly experimenting, and if anything changes on the reverse-proxy front, I’ll update this blog post.

Summary

With 4 concurrent requests, NGINX ran roughly 100 calls/second with a 90th-percentile latency of 48 ms. Under the same conditions, Caddy ran roughly 400 calls/second with a 90th-percentile latency of 16 ms.

The 90th-percentile response size is the same in both runs, which shows that the two proxies were serving comparable responses and the comparison is legit. The rpc-bench script also spot-checks results for validity and crashes if it finds any inconsistencies.

I ran the tests multiple times and got the same results, to rule out the backend somehow caching things better for one proxy. I alternated NGINX, then Caddy, then NGINX, then Caddy, and repeated that loop 3 times, because I found it unbelievable that Caddy could outperform NGINX 4x over HTTPS.
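
The alternation loop looked roughly like the sketch below (the exact rpc-bench invocation is elided here, since its flags aren’t shown in this post):

# Alternate NGINX and Caddy as the HTTPS front end, three rounds each.
for i in 1 2 3; do
    sudo systemctl start nginx
    # ... run rpc-bench against https://<domain> ...
    sudo systemctl stop nginx

    sudo caddy reverse-proxy --from <domain>:443 --to <server>:80 &
    sleep 5   # give Caddy a moment to load certs and bind :443
    # ... run rpc-bench against https://<domain> ...
    sudo pkill caddy
done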

I double-checked my methodology, then triple-checked it. There’s no other way to read this: Caddy really is much faster at handling HTTPS connections.

My educated guess is that Caddy encrypts responses with more concurrency than NGINX, so as response sizes grow, Caddy starts to really outperform NGINX on multi-core processors. I can see that in the CPU usage of Caddy vs NGINX: the latter barely uses any CPU, and is correspondingly slower to encrypt the responses.
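
If you want to eyeball the CPU difference yourself, here’s a rough way to watch per-process CPU while the benchmark runs (a sketch; it assumes the sysstat package is installed for pidstat):

# Sample CPU usage of the caddy and nginx processes once per second.
# pgrep -f matches full command lines; -d, joins the PIDs with commas for pidstat.
pidstat -u -p "$(pgrep -d, -f 'caddy|nginx')" 1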

Actually, my tweet was inaccurate: it’s not 2x faster, it’s 4x faster!

Machine Specs

The machine I used for this test is a DigitalOcean CPU-optimized droplet with 4 vCPUs and 8 GB RAM, running Ubuntu 22.04 LTS.

{
  "name":"ubuntu-c-4-8gib-sfo3-01",
  "size":"c-4-8GiB",
  "region":"sfo3",
  "image":"ubuntu-22-10-x64"
}
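
For reference, a droplet like this can be created with the DigitalOcean CLI roughly as follows (a sketch; it assumes doctl is installed and authenticated, and the size/region/image slugs are copied from the JSON above, so adjust them if your account shows different slugs):

doctl compute droplet create ubuntu-c-4-8gib-sfo3-01 \
    --size c-4-8GiB --region sfo3 --image ubuntu-22-10-x64 --wait
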
$ inxi

CPU: quad core Intel Xeon Platinum 8358 (-MCP-) speed: 2600 MHz Kernel: 5.15.0-53-generic x86_64
Up: 16h 54m Mem: 528.5/7949.5 MiB (6.6%) Storage: 50 GiB (6.3% used) Procs: 120 Shell: Zsh
inxi: 3.3.13

$ lsb_release -a

Distributor ID: Ubuntu
Description:    Ubuntu 22.04.1 LTS
Release:    22.04
Codename:   jammy

Results with Caddy

sudo caddy reverse-proxy --from <domain>:443 --to <server>:80
Num Queries: 25138 | Num 429:    0 | Data: 3.3 GiB [   1m0s @ 411 calls/sec ]
-----------------------
Latency in milliseconds
 -- Histogram:
Min value: 4
Max value: 44
Count: 25138
50p: 16.00
75p: 16.00
90p: 16.00

-----------------------
Resp size in bytes
 -- Histogram:
Min value: 1479
Max value: 1387712
Count: 25138
50p: 131072.00
75p: 262144.00
90p: 262144.00

Method: eth_getBlockByNumber | DONE
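
For reference, the equivalent Caddyfile would look roughly like this (same <domain> and <server> placeholders as the command above):

<domain> {
    reverse_proxy <server>:80
}

Caddy obtains and renews the TLS certificate for the domain automatically, which is why there are no certificate lines here.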

Results with NGINX: Round 1, Default Settings

Here’s the relevant portion of the NGINX config. The HTTPS certs were auto-issued by Certbot, following the instructions in this link.

server {
    root /var/www/html;
    server_name xxxxxxxxxxxxxxxxxxxxx;
    access_log off;

    location / {
        proxy_pass http://xxxxxxxxxxxxxx:80;
    }

    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    listen 443 ssl; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/xxxxxxxxxxxxxxxxxxxxx/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/xxxxxxxxxxxxxxxxxxxxx/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
}

server {
    if ($host = xxxxxxxxxxxxxxxxxxxxx) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80 default_server;
    listen [::]:80 default_server;

    server_name xxxxxxxxxxxxxxxxxxxxx;
    return 404; # managed by Certbot
}

Based on common NGINX performance tips, I made sure worker_processes auto; was already set in /etc/nginx/nginx.conf, and I turned off access_log and gzip compression.
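
For reference, a minimal sketch of what those tweaks look like in /etc/nginx/nginx.conf (stock Ubuntu layout assumed):

worker_processes auto;

http {
    access_log off;
    gzip off;
    # ... the rest of the stock config and the sites-enabled includes ...
}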

Num Queries: 7190 | Num 429:    0 | Data: 951 MiB [   1m0s @ 117 calls/sec ]
-----------------------
Latency in milliseconds
 -- Histogram:
Min value: 13
Max value: 101
Count: 7190
50p: 48.00
75p: 48.00
90p: 48.00

-----------------------
Resp size in bytes
 -- Histogram:
Min value: 1488
Max value: 1030245
Count: 7190
50p: 131072.00
75p: 262144.00
90p: 262144.00

Method: eth_getBlockByNumber | DONE

Results with NGINX: Round 2, Twitterverse Config Recommendation

Based on a recommendation from a netizen, I set some extra config options. Even with these changes, I saw very little performance gain over round 1.

#  diff nginx.conf.backup nginx.conf
5a6,8
> worker_rlimit_nofile 100000;
> error_log /var/log/nginx/error.log crit;
>
7,8c10,12
<   worker_connections 768;
<   # multi_accept on;
---
>   worker_connections 4000;
>   use epoll;
>   multi_accept on;
16a21,25
>   open_file_cache max=200000 inactive=20s;
>       open_file_cache_valid 30s;
>       open_file_cache_min_uses 2;
>       open_file_cache_errors on;
>
18a28,33
>   tcp_nodelay on;
>   reset_timedout_connection on;
>   client_body_timeout 10;
>   send_timeout 2;
>   keepalive_timeout 30;
>
39c54
<   access_log /var/log/nginx/access.log;
---
>   access_log off;


#  diff ~/default.backup sites-enabled/default
52c52,54
<       # try_files $uri $uri/ =404;
---
>       proxy_redirect off;
>       proxy_http_version 1.1;
>       proxy_set_header Connection "";
# The above diff goes into the location / section, just below proxy_pass.
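
Put together, the location block for round 2 looks roughly like this (the upstream address is redacted, as in the config above):

location / {
    proxy_pass http://xxxxxxxxxxxxxx:80;
    proxy_redirect off;
    proxy_http_version 1.1;
    proxy_set_header Connection "";
}

Re-running rpc-bench with this config: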
Num Blocks:  7522 | Num Queries: 7526 | Num 429:    0 | Data: 985 MiB [   1m0s @ 123 calls/sec ]
-----------------------
Latency in milliseconds
 -- Histogram:
Min value: 12
Max value: 81
Count: 7526
50p: 48.00
75p: 48.00
90p: 48.00
 --

-----------------------
Resp size in bytes
 -- Histogram:
Min value: 1479
Max value: 774017
Count: 7526
50p: 131072.00
75p: 262144.00
90p: 262144.00


Date
November 23, 2022