Building an NGINX Reverse Proxy

When hosting things at home, or even in the cloud, there is often an issue with having multiple services with web interfaces, but only one external IP address. As a result, we end up relying on port forwarding, using a bunch of non-standard ports which, while functional, is often far from ideal. The two main issues with this approach are:

  • Delving into your ISP's port forwarding settings – which are usually clunky, confusing and don't always work.
  • Non-standard ports being blocked by organisations or service providers – they are often considered a security risk, or involve too much work to ensure traffic inspection/classification.

(There is, of course, the further complication that your IP address may not be static, but we'll deal with that in a different post. For now, we'll assume our IP is static.)

Enter the Proxy.

According to Wikipedia, a Proxy is an ‘application or appliance that acts as an intermediary for requests from clients’. In the IT world, proxies are often used at the network edge – they allow multiple internal users to connect out to the web from a single IP address or range of addresses (without the complications of NAT), while also giving the business insight into the types of requests being sent and allowing it to enforce governance and security policies on the traffic. Proxies can also cache frequently requested content, which can significantly improve web performance for users (i.e. fewer cries of “The Internet is sloooww”).

There are a range of commercial paid-for and free proxy services but for this post we’re going to focus on NGINX. NGINX is actually an open source web server – similar to Apache – but with some extra features which make it very suitable for a multitude of applications including caching, streaming, load balancing and proxying.

Now I know what you’re thinking – “..but we’re interested in traffic coming in, not going out..” – and you’d be right, which is why we need to flip this whole concept around and talk about a Reverse Proxy. While standard proxies generally act as a single gateway for users going out to the web, a Reverse Proxy acts as a single gateway in from the web – usually to multiple servers sitting behind it.

This means we only need to open our web ports and the Reverse Proxy can do the rest for us. Configuring NGINX to take on this role is actually pretty straightforward, so let’s get it installed.

stan@nginx:~$ sudo apt install nginx

Just like that we have a web-server up and running – just head to http://<NGINX-IP> and you should see the NGINX default web page:

[Image: the NGINX default welcome page]

The default installation of NGINX on Debian will create the directory /etc/nginx which holds all the files we need to get things up and running. If we head into the sites-available folder, there will be a default.conf file – the default configuration for the above web page. It will be filled with lots of comments about how to use the file, but in essence will look something like this:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /var/www/html;

        index index.html index.htm index.nginx-debian.html;

        server_name _;

        location / {
                try_files $uri $uri/ =404;
        }
}

This defines the ports (80) and addresses (0.0.0.0 and :: – i.e. all IPv4 and IPv6 addresses) we want the server to listen on, then the location of the website root directory (/var/www/html) and the index page types to look for. The server_name directive defines which hostnames this server block will answer for (the underscore acts as a catch-all), and the location block lets us apply configuration specific to the request URI.

In Reverse Proxy mode, we don’t want to serve a website locally, so we modify this configuration:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name _;

        location / {
                proxy_pass http://<SERVER-IP>:<SERVER-PORT>;
        }
}

This is the most basic configuration needed for reverse proxying to work. If we now browse to http://<NGINX-IP> we will find ourselves looking at the web application deployed on <SERVER-IP>:<SERVER-PORT>. So, with the basics out of the way, how do we make this work for our multiple services?
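Whenever the configuration changes, it's worth checking the syntax and reloading NGINX before testing in the browser – on a systemd-based distribution like Debian, that looks like this:

stan@nginx:~$ sudo nginx -t
stan@nginx:~$ sudo systemctl reload nginx

nginx -t will report exactly which line of which file is at fault if there is a problem, and a reload picks up the new configuration without dropping existing connections.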

NGINX, like Apache, utilises the concept of Virtual Hosts to allow multiple websites to be served from a single web server (commonly known as shared hosting). NGINX matches the hostname of each incoming request (the HTTP Host header) against the server_name directive, so we can use a different hostname for each service we want to proxy to. This next step assumes that DNS entries pointing to your Public IP are in place for your services – we’ll use services s1, s2 and s3 with the domain home.net. They will run on internal IPs .10, .20 and .30 from the 192.168.1.0/24 network.

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name _;

        location / {
                try_files $uri $uri/ =404;
        }
}
server {
        listen 80;
        listen [::]:80;

        server_name s1.home.net;

        location / {
                proxy_pass http://192.168.1.10:8080;
        }
}
server {
        listen 80;
        listen [::]:80;

        server_name s2.home.net;

        location / {
                proxy_pass http://192.168.1.20:3667;
        }
}
server {
        listen 80;
        listen [::]:80;

        server_name s3.home.net;

        location / {
                proxy_pass http://192.168.1.30:8080;
        }
}

This configuration utilises multiple server{} blocks along with the server_name directive to separate out the services we wish to proxy for. If we browse to http://s1.home.net we will be forwarded to http://192.168.1.10:8080, while http://s2.home.net will take us to http://192.168.1.20:3667. Our browser, meanwhile, will still show http://s1.home.net and http://s2.home.net – the whole process is transparent to it. As a result, we only need to forward port 80 on our router to our NGINX server and let it handle the rest.
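If you want to verify the name-based routing before the DNS entries exist (or from a machine that doesn't use them), you can hand curl an explicit Host header – substitute the actual IP of your NGINX server for the placeholder:

stan@nginx:~$ curl -H "Host: s1.home.net" http://<NGINX-IP>/

NGINX matches the supplied header against its server_name entries exactly as it would for a normal browser request.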

Again, this is a very basic configuration – often it will need to be tweaked depending on the requirements of the web application, to ensure that certain information from the HTTP header is passed on to the final website – or to ensure information passed back from the website is handled correctly. Best practice would also place each of the backend server{} blocks in its own file under sites-available, which can then be symbolically linked into the sites-enabled folder, also within the /etc/nginx directory.
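As a sketch of the kind of header tweaking involved – the exact set needed varies by application, so treat these as a common starting point rather than a requirement – the location block for s1 might become:

        location / {
                proxy_pass http://192.168.1.10:8080;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }

The Host header tells the backend which site was originally requested, while X-Real-IP and X-Forwarded-For preserve the client's address – without them, every request appears to the backend to come from the proxy's own IP.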

stan@nginx:~$ ls -1 /etc/nginx/sites-available
default.conf
s1.home.net.conf
s2.home.net.conf
s3.home.net.conf
stan@nginx:~$ ls -1 /etc/nginx/sites-enabled
default.conf -> /etc/nginx/sites-available/default.conf
s1.home.net.conf -> /etc/nginx/sites-available/s1.home.net.conf
s2.home.net.conf -> /etc/nginx/sites-available/s2.home.net.conf
s3.home.net.conf -> /etc/nginx/sites-available/s3.home.net.conf
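The symlinks above can be created with ln -s – for example, to enable the s1 configuration (assuming the default Debian paths):

stan@nginx:~$ sudo ln -s /etc/nginx/sites-available/s1.home.net.conf /etc/nginx/sites-enabled/

Remember to test and reload NGINX afterwards for the change to take effect.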

With the proxy acting as a frontend for all of our services, there is now the opportunity to enable HTTPS for all of them, but that is a post for another day…
