Category: General

  • Caching for Ghost, NGINX edition

    The (only) recommended stack for self-hosting Ghost via ghost-cli already has an nginx in it (for SSL termination), but it's not using its caching feature. So to survive the „Mastodon hug of death“, as Elena put it, let's add a small cache to it.

    By default Ghost adds headers that effectively forbid caching, because the default Cache-Control header has a lifetime (max-age) of 0. So let's change that first:

    Add a caching config to the Ghost config.production.json file (see https://ghost.org/docs/config/#caching for full details):

    "caching": {
      "frontend": {
        "maxAge": 60
      },
      "contentAPI": {
        "maxAge": 60
      }
    }
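
    After restarting Ghost (further down), the change can be checked from the command line; a quick sanity check, assuming curl is installed, which should now report a non-zero max-age, e.g. something like cache-control: public, max-age=60:

    # fetch only the response headers and look at the Cache-Control header
    curl -sI https://blog.settgast.org/ | grep -i cache-control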

    Then go into the nginx config file for the site (the one that ghost-cli created in /etc/nginx/sites-available/) and add the bare minimum, because nginx has quite good default values for many cache options.

    1. Give nginx a place to store the cached files, e.g. in /tmp. The keys_zone sets aside 10 MB of shared memory for the cache keys, max_size caps the cache at 100 MB on disk, and inactive=24h evicts entries that have not been requested for a day:

    proxy_cache_path /tmp/nginx_ghost levels=1:2 keys_zone=ghostcache:10m max_size=100m inactive=24h;
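
    Once the caching from the next steps is active and the first requests have come in, the cached responses show up as files under that path (named after a hash of the cache key). A quick way to peek at it:

    sudo du -sh /tmp/nginx_ghost                  # total size of the cache on disk
    sudo find /tmp/nginx_ghost -type f | head     # a few individual cache entries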

    2. Within the existing location block that does the reverse proxying, let's do these four things:

    • enable caching (proxy_cache ghostcache;)
    • protect the poor Ghost instance so that only one request per cache key hits it at a time (proxy_cache_lock on;)
    • allow nginx to update the cache in the background while serving „outdated“ entries (proxy_cache_background_update on;)
    • continue to serve cached content even if Ghost should fall over (proxy_cache_use_stale ...); there is a quick check for this after the restart commands below

    proxy_cache ghostcache;
    proxy_cache_lock on;
    proxy_cache_background_update on;
    proxy_cache_use_stale updating error timeout http_500 http_502 http_429;

    3. Lastly, to see it in action, let's add a header that exposes the cache status, so that we can check it from the browser dev tools:

    add_header X-Cache-Status $upstream_cache_status;
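
    Once nginx has been reloaded (next step), the header is also easy to check with curl; the first request for a page should come back with MISS, a repeat within the 60 seconds with HIT, for example:

    curl -sI https://blog.settgast.org/ | grep -i x-cache-status   # X-Cache-Status: MISS
    curl -sI https://blog.settgast.org/ | grep -i x-cache-status   # X-Cache-Status: HIT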

    And that's already it: restart Ghost (to activate the config) and reload nginx:

    sudo systemctl restart ghost...
    sudo systemctl reload nginx
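
    To see the proxy_cache_use_stale part from step 2 in action, one way is to request a page once so that it lands in the cache, stop Ghost, and request it again: nginx keeps answering from the cache. A rough sketch, assuming ghost stop / ghost start are run from the Ghost install directory:

    curl -s -o /dev/null -w '%{http_code}\n' https://blog.settgast.org/   # 200, warms the cache
    ghost stop                                                            # simulate Ghost falling over
    curl -s -o /dev/null -w '%{http_code}\n' https://blog.settgast.org/   # still 200 (X-Cache-Status: HIT or STALE)
    ghost start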

    Just for completeness, my full nginx config then looks like this:

    proxy_cache_path /tmp/nginx_ghost levels=1:2 keys_zone=ghostcache:10m max_size=100m inactive=24h;
    
    map $status $header_content_type_options {
        204 "";
        default "nosniff";
    }
    
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
    
        server_name blog.settgast.org;
        root /var/www/blog/system/nginx-root; # Used for acme.sh SSL verification (https://acme.sh)
    
        ssl_certificate /etc/letsencrypt/blog.settgast.org/fullchain.cer;
        ssl_certificate_key /etc/letsencrypt/blog.settgast.org/blog.settgast.org.key;
        include /etc/nginx/snippets/ssl-params.conf;
    
        location / {
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Forwarded-Proto $scheme;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header Host $http_host;
            proxy_pass http://127.0.0.1:2368;
    
            add_header X-Content-Type-Options $header_content_type_options;

            proxy_cache ghostcache;
            proxy_cache_lock on;
            proxy_cache_background_update on;
            proxy_cache_use_stale updating error timeout http_500 http_502 http_429;
            add_header X-Cache-Status $upstream_cache_status;
        }
    
        location ~ /.well-known {
            allow all;
        }
    
        client_max_body_size 1g;
    }

    Benchmarking

    To benchmark it a bit, let's use the wrk tool and hit Ghost with 500 parallel connections: once in the default setup without caching, and once with caching enabled.
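
    wrk is usually not installed by default; on Debian/Ubuntu it should be available as a package, otherwise it can be built from source (https://github.com/wg/wrk):

    sudo apt install wrk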

    As a baseline I ran it without the caching, so in default mode:

    wrk -c 500 -t 4 https://blog.settgast.org/caching-for-ghost-nginx-edition/
    Running 10s test @ https://blog.settgast.org/caching-for-ghost-nginx-edition/
      4 threads and 500 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency     1.13s   507.40ms   1.99s    58.33%
        Req/Sec    22.18     15.36    60.00     61.22%
      371 requests in 10.08s, 5.53MB read
      Socket errors: connect 0, read 0, write 0, timeout 311
    Requests/sec:     36.80
    Transfer/sec:    561.85KB

    36 requests per second, that's not much. Or to put it differently: of the 500 connections, only 371 requests completed within the 10 seconds, and 311 ran into a timeout.

    Now let's repeat this with NGINX caching enabled:

    wrk -c 500 -t 4 https://blog.settgast.org/caching-for-ghost-nginx-edition/
    Running 10s test @ https://blog.settgast.org/caching-for-ghost-nginx-edition/
      4 threads and 500 connections
      Thread Stats   Avg      Stdev     Max   +/- Stdev
        Latency    41.36ms   70.71ms   1.05s    93.24%
        Req/Sec     3.84k   804.38     5.36k    83.79%
      139524 requests in 10.06s, 1.95GB read
    Requests/sec:  13865.81
    Transfer/sec:    198.63MB

    Over 13 thousand requests per second from a small VPS (2 CPUs, 4 GB RAM), or almost 380 times as many requests as before!

    References