
Deploying WebSockets Secure with NGINX and systemd

Introduction

This article demonstrates a simple way to deploy a WebSockets (Secure) application using NGINX as a reverse proxy and systemd as a service manager. The setup is likely not appropriate for heavy production use.

There is no original content in this article, but I had to gather the included information from many different places, so hopefully collecting it here will be more convenient.

The application I deployed was a Haskell program using the websockets library, but the instructions below should apply in general. Notably, your program’s WebSockets library does not need to support WebSockets Secure, because NGINX’s SSL offloading handles the encryption for you.

If you are using WSS, I assume you already have SSL set up with NGINX. If you don’t, Let’s Encrypt is a great free certificate authority to set up SSL for your domain.
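If you still need a certificate, a typical approach is certbot’s webroot plugin. The command below is only a sketch; the webroot path and domain names are the placeholders used later in this article, and your distribution’s certbot packaging may differ.

# certbot certonly --webroot -w /var/www/<lets-encrypt-wellknown-directory-root> -d yourdomain.tld -d www.yourdomain.tld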

Configure NGINX as a reverse proxy

First, choose the port your application serves WebSockets on (a high number unlikely to be used by other programs, like 9161), and bind it to the address 127.0.0.1 (localhost). Your WebSocket library’s documentation should explain how to do this.
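As a concrete illustration, here is a minimal sketch of a Haskell server using the websockets library, bound to 127.0.0.1 on port 9161. It is not the application from this article, just an echo server showing where the address and port go.

-- Minimal echo server bound to localhost, so only NGINX can reach it.
module Main where

import qualified Network.WebSockets as WS
import Data.Text (Text)

main :: IO ()
main = WS.runServer "127.0.0.1" 9161 app

-- Accept each incoming connection and echo every message back.
app :: WS.ServerApp
app pending = do
    conn <- WS.acceptRequest pending
    let loop = do
            msg <- WS.receiveData conn :: IO Text
            WS.sendTextData conn msg
            loop
    loop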

Open the NGINX configuration file for the domain you’d like to serve from. Move all SSL configuration from inside the server block to outside it, and enable SSL with ssl on; inside the server block (skip this step if you’re not using SSL).

Choose a path to forward the WebSockets connection to on your domain, like /websockets/<app-name>. Clients will connect by using the URI wss://yourdomain.tld/websockets/<app-name> for WSS or ws://yourdomain.tld/websockets/<app-name> for unencrypted WebSockets.
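For illustration, here is a sketch of a Haskell client connecting over WSS through the proxy. It assumes the wuss package (a TLS wrapper around the websockets library), which is not part of this article’s setup; the domain and path are the placeholders above. A plain ws:// client would use Network.WebSockets.runClient instead.

{-# LANGUAGE OverloadedStrings #-}
module Main where

import qualified Network.WebSockets as WS
import Wuss (runSecureClient)
import Data.Text (Text)

main :: IO ()
main = runSecureClient "yourdomain.tld" 443 "/websockets/<app-name>" client

-- Send one message, print the reply, and close the connection.
client :: WS.ClientApp ()
client conn = do
    WS.sendTextData conn ("hello" :: Text)
    reply <- WS.receiveData conn :: IO Text
    print reply
    WS.sendClose conn ("bye" :: Text)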

Then add a location directive for that path and enter the IP address (127.0.0.1) and port you chose in a proxy_pass option, as shown below.

ssl_certificate /etc/letsencrypt/live/<yourdomain.tld>/fullchain.pem;
ssl_certificate_key /etc/letsencrypt/live/<yourdomain.tld>/privkey.pem;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_dhparam /etc/ssl/certs/dhparam.pem;
...other SSL options here...

server {
    listen      443 ssl default_server;
    server_name yourdomain.tld www.yourdomain.tld;

    charset     utf-8;
    client_max_body_size 75M;

    ssl on;

    location ~ /.well-known {
        root /var/www/<lets-encrypt-wellknown-directory-root>/;
    }

    location /websockets/<app-name> {
        proxy_pass http://127.0.0.1:<port-number>/;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "Upgrade";
    }

}

server {
    listen 80;
    server_name yourdomain.tld www.yourdomain.tld;
    return 301 https://$host$request_uri;
}

Note the http in proxy_pass rather than https. This directs NGINX to decrypt incoming traffic before forwarding it to your application (and to encrypt traffic before forwarding it to the client), so your application can assume an unencrypted connection. The proxy_set_header options pass the client’s Upgrade handshake along, telling NGINX to upgrade the proxied HTTP connection to WebSockets.
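As an optional refinement (taken from NGINX’s WebSocket proxying documentation, not part of the configuration above), you can derive the Connection header from the client’s request, so that requests without an Upgrade header are still proxied cleanly:

map $http_upgrade $connection_upgrade {
    default upgrade;
    ''      close;
}

The map block goes outside the server block, alongside the SSL options, and the hardcoded header inside the location block becomes proxy_set_header Connection $connection_upgrade;.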

At this point, if you run your application (you may need root privileges), clients should be able to connect through the reverse proxy. However, your program may crash unexpectedly, or your server may be forced to reboot. If you’d like your application to restart automatically when that happens, you need to daemonize it with systemd (alternatives include the older SysV init and Upstart).

Daemonize the application with systemd

Create an executable shell script that runs your application. Make sure it has the correct #!/bin/bash shebang, and do not have it run your application as a background process. If your program does fork itself, or you have more complicated requirements, the systemd.service manual page has more information about how to configure the unit file.

$ cat /home/<username>/<app-name>/start.sh
#!/bin/bash
/home/<username>/<app-name>/<app-executable-name>
$ chmod +x start.sh

Now create a systemd unit file that describes how your program starts and stops. Since it’s a non-forking process, indicated by Type=simple, systemd figures out how to stop it on its own (by sending SIGTERM to the spawned processes, and SIGKILL if that doesn’t work). By default, restarting just stops the process and starts it again; a separate reload action can be configured with ExecReload.

# cat /etc/systemd/system/<app-name>.service
[Unit]
Description=<app-description>
After=syslog.target

[Service]
ExecStart=/home/<username>/<app-name>/start.sh
Restart=always
Type=simple

[Install]
WantedBy=multi-user.target

Restart=always restarts the program no matter how it exited. You may instead want on-success, which only restarts after a clean exit code, or on-failure, which only restarts after an unclean exit code, a termination by signal, a timeout, or the systemd watchdog.
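For example, here is a sketch of an alternative [Service] section that only restarts on failure and waits a few seconds between attempts (RestartSec is a standard systemd option not used in the unit file above):

[Service]
ExecStart=/home/<username>/<app-name>/start.sh
Type=simple
Restart=on-failure
RestartSec=5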

Finally, WantedBy=multi-user.target means the service will be started automatically at boot, once the system reaches the multi-user target.

Now you can enable and start the service.

# systemctl enable <app-name>.service
# systemctl start <app-name>.service

You can test that everything works by killing the process with $ kill -9 <app-PID> (find the PID with $ pgrep <app-executable-name>) and checking that it starts back up again. You may also want to reboot the server and make sure the process starts on boot.
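You can also check the service’s state and recent logs with the standard systemd tools:

# systemctl status <app-name>.service
# journalctl -u <app-name>.service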


Any corrections, questions, or suggestions? E-mail surya at modalduality dot org.