
NGINX Proxy

4 Feb, 2022

NGINX has a variety of ways of proxying to backend servers, and the default configuration isn't always the best: out of the box it speaks HTTP/1.0 to the backend and closes the TCP connection after every request.

This is the setup you need to get NGINX to make an HTTP/1.1 connection to the backend and not close the TCP connection after each request:

upstream keepalive-upstream {
    server 127.0.0.1:8000;
    # server unix:/home/james/hello;
    keepalive 1024;
    keepalive_requests 1000000;
    keepalive_timeout 60s;
}


server {
    listen 80 default_server;
    listen [::]:80 default_server;
    root /var/www/html;
    index index.html;
    server_name _;

    location / {
        proxy_http_version 1.1;
        proxy_set_header Connection ""; 
        proxy_read_timeout     300;
        proxy_connect_timeout  300;
        proxy_pass http://keepalive-upstream;
    }
}

I find I need to keep the number of idle keepalive connections rather high when there is a lot of concurrent access, otherwise NGINX starts closing connections and then re-opening them again. Here I've allowed up to 1024 idle keepalive connections per worker, and allowed each connection to serve up to 1,000,000 requests before NGINX closes it.
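One way to check the connections really are being reused is to watch the established connections from NGINX to the backend while you apply some load; with keepalive working, the count should stay fairly steady rather than churning through short-lived connections on different ports. With the TCP upstream above, something along these lines will show them:

ss -tn state established dst 127.0.0.1:8000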

Above I used a standard HTTP proxy over TCP, but NGINX can proxy to the backend with other protocols too, such as the uWSGI protocol. The one I'm particularly interested in is HTTP over a UNIX domain socket rather than TCP. This keeps the traffic out of the TCP/IP stack, so it's a bit faster, and it also means you don't have to worry about choosing a port number.

You can enable UNIX domain sockets by switching the server lines in the upstream block above:

    # server 127.0.0.1:8000;
    server unix:/home/james/hello;

Again, the sample server I wrote in Fast HTTP Server is capable of listening on UNIX domain sockets instead of TCP sockets. Just run it like this:

python3 server.py path/to/socket 8

The number at the end is the number of worker processes. You can just have 1 if you prefer.
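If you don't have that server to hand, here's a minimal single-process sketch of an HTTP/1.1 backend listening on a UNIX domain socket. It isn't the server from Fast HTTP Server, just a stand-in (using Python's standard socketserver module) that's enough to exercise the NGINX config above; the socket path should match whatever the upstream points at:

import os
import socketserver

SOCKET_PATH = "/home/james/hello"  # must match the server line in the upstream block

class HelloHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Serve multiple requests on one connection so NGINX can keep it alive.
        while True:
            request_line = self.rfile.readline()
            if not request_line:
                break  # NGINX closed the connection
            # Skip the request headers; this sketch ignores request bodies.
            while self.rfile.readline() not in (b"\r\n", b"\n", b""):
                pass
            body = b"Hello, world!\n"
            self.wfile.write(
                b"HTTP/1.1 200 OK\r\n"
                b"Content-Type: text/plain\r\n"
                b"Content-Length: " + str(len(body)).encode("ascii") + b"\r\n"
                b"Connection: keep-alive\r\n"
                b"\r\n" + body
            )
            self.wfile.flush()

if __name__ == "__main__":
    if os.path.exists(SOCKET_PATH):
        os.remove(SOCKET_PATH)  # clear out a stale socket from a previous run
    with socketserver.UnixStreamServer(SOCKET_PATH, HelloHandler) as server:
        # The NGINX worker user needs write permission on the socket to connect.
        os.chmod(SOCKET_PATH, 0o766)
        server.serve_forever()

You can also check the backend is answering on the socket directly, before putting NGINX in front of it, using curl's --unix-socket option:

curl --unix-socket /home/james/hello http://localhost/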

Finally, you'll want to increase the number of NGINX worker connections:

events {
    ...
    worker_connections 4096;
    ...
}
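Each proxied request ties up two connections in a worker process, one to the client and one to the upstream, and idle keepalive connections count against the limit too, so worker_connections needs some headroom. The real ceiling is the worker's open file limit, so if you push worker_connections much higher you may also want to raise that, for example with a line like this in the main context of nginx.conf:

worker_rlimit_nofile 8192;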
