Enabling TLS for NGINX with Let’s Encrypt & Certbot

In my last blog I wrote about setting up NGINX to act as a reverse proxy for websites – this allowed for multiple web services to be proxied behind a single NGINX server acting as a gateway for them all. We stopped once we had HTTP services successfully reachable from the Internet. In this post we’re going to explore how to secure these services with TLS and make them reachable via HTTPS.

For this, we’ll need to ensure that port 443 is open and forwarded to our NGINX server. For the next step opinions vary, but I’m a fan of setting up an automatic redirect from port 80 to port 443. This is quite easy to do: we just modify our original NGINX configuration file:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name _;

        #
        # Redirect HTTP requests to HTTPS
        #
        return 301 https://$host$request_uri;
}

The return 301 instructs the server to .. well, return a 301 (Moved Permanently) whenever the HTTP server is requested. This is followed by the URL the client should be redirected to; in this case we return the original host and URI, but now as an HTTPS request, and the browser will automatically follow the redirect to the new URL. Next, we need to configure NGINX to respond to the HTTPS requests for the web applications. Most of the default settings can be copied from the /etc/nginx/sites-available/default file. I’m going to use a production site for this example, because when it comes to generating the certificate we need a site that is reachable from the Internet.

server {
        listen 443 ssl;
        server_name jewhurst.com www.jewhurst.com;
###
# SSL Config
###
        ssl_certificate /etc/ssl/temp.cert;
        ssl_certificate_key /etc/ssl/temp.key;

        ssl_protocols   TLSv1.2;
###
        access_log      /var/log/nginx/jewhurst_com.log;

        location / {
                proxy_pass      http://10.10.10.10:80;
                proxy_set_header        Host    $host;
                proxy_set_header        X-Real-IP       $remote_addr;
                proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header        X-Forwarded-Proto       $scheme;
        }
}

With this configuration we enable TLS on the 443 listener and specify the certificate and key that should be served for the site. Unless you already have an SSL certificate from a provider, point these at a temporary location for the time being. We also specify that we’d like to use only TLS 1.2 to ensure we’re on the most secure version of the protocol (TLS 1.3 is out there, but we’ll deal with that another time).
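If you don’t yet have a certificate to hand, a throwaway self-signed one can be generated to fill those temporary paths until certbot replaces them – a stop-gap sketch (30-day validity, common name chosen arbitrarily):

stan@nginx~# sudo openssl req -x509 -nodes -newkey rsa:2048 -days 30 \
        -keyout /etc/ssl/temp.key -out /etc/ssl/temp.cert \
        -subj "/CN=www.jewhurst.com"

Browsers will complain about it, but it lets NGINX start with the HTTPS block in place.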

Next, install certbot, which will automate the process of getting a free SSL certificate from Let’s Encrypt and applying it to the server. On Debian and Ubuntu the NGINX integration ships as a separate plugin package, so install that too:

stan@nginx~# sudo apt install certbot python3-certbot-nginx

With that installed, and with the NGINX server reachable via port 80 and port 443, run certbot with the NGINX option:

stan@nginx~# certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: www.jewhurst.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1

Selecting your website and pressing return will cause certbot to reach out to the Let’s Encrypt servers and request a certificate. The certificate is verified using an HTTP challenge (certbot places a verification file locally, then Let’s Encrypt connects to the server to ensure it can read it) and is then downloaded to the server. Note that you will likely need to run this command as root or with sudo.

Obtaining a new certificate
Performing the following challenges:
http-01 challenge for www.jewhurst.com
Waiting for verification...
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/www.jewhurst.com

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1

For the above step, no redirect is necessary, as we have already configured this manually.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled https://www.jewhurst.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=www.jewhurst.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/www.jewhurst.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/www.jewhurst.com/privkey.pem
   Your cert will expire on 2020-06-18. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"

And with that your site should now be reachable via HTTPS, with a verified, trusted certificate. An added advantage of certbot is that it installs a cron job to automatically renew your certificates, so you don’t need to run the command manually and risk your certificate expiring:

stan@nginx~# cat /etc/cron.d/certbot
# /etc/cron.d/certbot: crontab entries for the certbot package
#
# Upstream recommends attempting renewal twice a day
#
# Eventually, this will be an opportunity to validate certificates
# haven't been revoked, etc.  Renewal will only occur if expiration
# is within 30 days.
#
# Important Note!  This cronjob will NOT be executed if you are
# running systemd as your init system.  If you are running systemd,
# the cronjob.timer function takes precedence over this cronjob.  For
# more details, see the systemd.timer manpage, or use systemctl show
# certbot.timer.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(43200))' && certbot -q renew
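If you want to be sure the automated renewal will work when the time comes, certbot supports a dry run that exercises the whole process without replacing your certificate:

stan@nginx~# sudo certbot renew --dry-run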

Building an NGINX Reverse Proxy

When hosting things at home, or even in the cloud, there is often an issue with having multiple services with web interfaces, but only one external IP address. As a result, we end up relying on port forwarding, using a bunch of non-standard ports which, while functional, is often far from ideal. The two main issues with this approach are:

  • Delving into your ISP’s port forwarding settings – which are usually clunky, confusing and don’t always work.
  • Non-standard ports being blocked by organisations or service providers – they are often considered a security risk, or involve too much work to ensure traffic inspection/classification.

(There is, of course, the further complication that your IP address may not be static, but we’ll deal with that in a different post. For now, we’ll assume our IP is static.)

Enter the Proxy.

According to Wikipedia, a proxy is an ‘application or appliance that acts as an intermediary for requests from clients’. In the IT world, proxies are often used at the network edge – they allow multiple internal users to connect out to the web from a single IP address or range of addresses (without the complications of NAT), while also giving the business insight into the types of requests being sent and allowing it to enforce governance and security policies on the traffic. Proxies can also cache frequently requested content, which can significantly improve web performance for users (i.e. fewer cries of “The Internet is sloooww”).

There are a range of commercial paid-for and free proxy services but for this post we’re going to focus on NGINX. NGINX is actually an open source web server – similar to Apache – but with some extra features which make it very suitable for a multitude of applications including caching, streaming, load balancing and proxying.

Now I know what you’re thinking – “..but we’re interested in traffic coming in, not going out..” – and you’d be right, which is why we need to flip this whole concept around and talk about a Reverse Proxy. While standard proxies generally act as a single gateway for users going out to the web, a Reverse Proxy acts as a single gateway from the web – usually to multiple servers sitting behind it.

This means we only need to open our web ports and the Reverse Proxy can do the rest for us. Configuring NGINX to take on this role is actually pretty straightforward, so let’s get it installed.

stan@nginx~# sudo apt install nginx

Just like that we have a web server up and running – just head to http://<NGINX-IP> and you should see the NGINX default welcome page.


The default installation of NGINX on Debian will create the directory /etc/nginx which holds all the files we need to get things up and running. If we head into the sites-available folder, there will be a default.conf file – the default configuration for the above web page. It will be filled with lots of comments about how to use the file, but in essence will look something like this:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /var/www/html;

        index index.html index.htm index.nginx-debian.html;

        server_name _;

        location / {
                try_files $uri $uri/ =404;
        }
}

This defines the ports (80) and IPs (0.0.0.0 and :: – i.e. all IPv4 and IPv6 addresses) we want the server to listen on, then the location of the website root directory (/var/www/html) and the index page types to look for. The server_name attribute lets us define a way of identifying the server and the location section lets us list any specific configuration based on the URI.

In Reverse Proxy mode, we don’t want to serve a website locally, so we modify this configuration:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name _;

        location / {
                proxy_pass http://<SERVER-IP>:<SERVER-PORT>;
        }
}

This is the most basic configuration to allow reverse proxying to work, but if we now browse to http://<NGINX-IP> we will find ourselves looking at the web application deployed on <SERVER-IP>:<SERVER-PORT>. So, with the basics out of the way, how do we make this work for our multiple services?

NGINX, like Apache, utilises the concept of Virtual Hosts to allow multiple websites to be served from a single web server (commonly known as shared hosting). We can use the hostname of the request to decide which server we want to proxy to. This next step assumes that DNS entries pointing to your public IP are in place for your services – we’ll use services s1, s2 and s3 with the domain home.net. They will run on internal IPs .10, .20 and .30 from the 192.168.1.0/24 network.
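On the public DNS side, all three names simply resolve to the same external address – something like the following illustrative records, with 203.0.113.1 standing in for your public IP:

s1.home.net.    IN    A    203.0.113.1
s2.home.net.    IN    A    203.0.113.1
s3.home.net.    IN    A    203.0.113.1

With those names resolving, the NGINX configuration looks like this: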

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name _;

        location / {
                try_files $uri $uri/ =404;
        }
}
server {
        listen 80;
        listen [::]:80;

        server_name s1.home.net;

        location / {
                proxy_pass http://192.168.1.10:8080;
        }
}
server {
        listen 80;
        listen [::]:80;

        server_name s2.home.net;

        location / {
                proxy_pass http://192.168.1.20:3667;
        }
}
server {
        listen 80;
        listen [::]:80;

        server_name s3.home.net;

        location / {
                proxy_pass http://192.168.1.30:8080;
        }
}

This configuration utilises multiple server{} blocks along with the server_name directive to separate out the services we wish to proxy for. If we browse to http://s1.home.net we will be forwarded to http://192.168.1.10:8080, while http://s2.home.net will take us to http://192.168.1.20:3667. Our browser, meanwhile, will still show http://s1.home.net and http://s2.home.net – the whole process is transparent to the browser. As a result we only need to forward port 80 on our router to our NGINX server, and let it handle the rest.
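After any change like this it’s worth checking the syntax and reloading NGINX before testing in the browser:

stan@nginx~# sudo nginx -t
stan@nginx~# sudo systemctl reload nginx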

Again, this is a very basic configuration – often it will need to be tweaked depending on the requirements of the web application, to ensure that certain information from the HTTP header is passed on to the final website – or to ensure information passed back from the website is handled correctly. Best practice would also have each of the backend server{} blocks in its own file under sites-available, which can then be symbolically linked into the sites-enabled folder, also within the /etc/nginx directory.
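Enabling a site is then just a matter of creating the symbolic link – for example (the filename follows the naming used below):

stan@nginx~# sudo ln -s /etc/nginx/sites-available/s1.home.net.conf /etc/nginx/sites-enabled/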

stan@nginx~# ls -1 /etc/nginx/sites-available
default.conf
s1.home.net.conf
s2.home.net.conf
s3.home.net.conf
stan@nginx~# ls -1 /etc/nginx/sites-enabled
default.conf -> /etc/nginx/sites-available/default.conf
s1.home.net.conf -> /etc/nginx/sites-available/s1.home.net.conf
s2.home.net.conf -> /etc/nginx/sites-available/s2.home.net.conf
s3.home.net.conf -> /etc/nginx/sites-available/s3.home.net.conf

With the proxy acting as a frontend for all of our services, there is now the opportunity to enable HTTPS for all of them, but that is a post for another day…

Configuring DNS over HTTPS with a local domain and Ad-Blocking

As the license on my Meraki Cloud gear gets closer to expiration I’ve been working more and more to get my home environment migrated to some new technologies. Some of these focus on simplifying my media set-up – moving services like Plex back from the cloud into an on-prem environment. Others focus on implementing some new tech to make managing my infrastructure simpler.

Once you start to get past a couple of services, running a DNS server starts to become quite attractive. It allows you to more easily manage your environment, and makes accessing servers and services much more straightforward – especially for mobile devices where typing an IP address can be a pain.

I’ve used PiHole running on a Debian VM for a long time to block adverts at home, adding entries manually for local devices as I’ve needed them, but I wanted something which allowed me to manage local entries more easily. I also wanted to implement DoH (DNS over HTTPS), given that privacy is increasingly coming under attack. DoH encrypts DNS traffic between you and the Internet DNS server, ensuring that eyes in the middle (such as ISPs) cannot store, datamine or sell this information. The provider at the far end still can, however, and there have been plenty of concerns about data retention and sale.

But where to begin? I’ve a little experience with PowerDNS, so decided it would be the best choice for the majority of my DNS efforts. I knew I would need two parts – an authoritative DNS server to create and manage my local zones, and a recursor to resolve Internet hosts and ensure that queries for local hosts were forwarded to the authoritative server. PowerDNS is modular, with the recursor and the authoritative server running separately. You can also choose the backend storage from a wide range of options – I went for a local database as it is also supported by PDNS Manager, the web management interface I chose for the authoritative server. It has a clean, easy-to-use interface and seemed straightforward to install. For DoH, Cloudflare’s cloudflared was my choice, as Cloudflare currently conforms to Mozilla’s strict policies on DoH providers.

There were a few order-of-operations issues along the way, but in the end the process below is what I followed. Note that I already had PiHole installed on another server:

  • Install PDNS Authoritative and Recursive servers
  • Install MariaDB and PDNS MariaDB backend
  • Install PDNS Manager and a web server (I like Apache)
  • Download and install cloudflared
  • Configure PDNS Manager + PDNS Authoritative Server
  • Configure cloudflared
  • Configure PDNS Recursor
  • Configure PiHole

This seems like a lot, but it’s pretty straightforward. Get the packages:

stadmin@ns~# apt install pdns-server pdns-recursor pdns-backend-mysql mariadb-server apache2 php7.3 php7.3-json php7.3-mysql php7.3-readline php-apcu

Get and prepare PDNS Manager:

stadmin@ns~# wget https://dl.pdnsmanager.org/pdnsmanager-2.0.1.tar.gz
stadmin@ns~# tar -zxf pdnsmanager-2.0.1.tar.gz
stadmin@ns~# mv pdnsmanager-2.0.1/* /var/www/html
stadmin@ns~# chown -R www-data:www-data /var/www/html/frontend

And cloudflared:

stadmin@ns~# wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
stadmin@ns~# dpkg -i cloudflared-stable-linux-amd64.deb

Let’s start configuration with MariaDB – secure the database by running the built-in script:

stadmin@ns~# mysql_secure_installation

Then create a new empty database to use:

stadmin@ns~# mysql -u root -p
Enter password:

MariaDB [(none)]> CREATE DATABASE powerdns;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> exit
Bye
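The pdns.conf below connects to the database as root for simplicity; if you’d rather use a dedicated database user, one can be created at the same prompt (the pdns username and <PASSWORD> are placeholders – if you go this route, use these credentials in gmysql-user and gmysql-password instead):

MariaDB [(none)]> CREATE USER 'pdns'@'localhost' IDENTIFIED BY '<PASSWORD>';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON powerdns.* TO 'pdns'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;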

Next, configure Apache, PHP and PDNS Manager. I’m using the configuration available in the PDNS Manager Docs:

stadmin@ns~# a2ensite default-ssl
stadmin@ns~# a2enmod rewrite ssl php7.3
stadmin@ns~# phpenmod apcu json readline
stadmin@ns~# systemctl restart apache2.service

You should now be able to browse to https://<YOUR-SERVER>/setup and set up your database connection and user credentials. You’ll get an invalid cert warning unless you’ve properly configured SSL.

With that done, we now configure pdns-server with the same database information. Note that I configure it to listen only locally, on port 5300, because pdns-recursor and PDNS Manager will be the only services using it.

stadmin@ns~# vim /etc/powerdns/pdns.conf

#--------
#SETTINGS
#--------
allow-axfr-ips=127.0.0.1
allow-dnsupdate-from=127.0.0.1/8,::1
config-dir=/etc/powerdns
daemon=yes
disable-axfr=no
local-address=127.0.0.1
local-port=5300
#--------
#DATABASE
#--------
launch=gmysql
gmysql-host=127.0.0.1
gmysql-port=3306
gmysql-dbname=powerdns
gmysql-user=root
gmysql-password=<PASSWORD>

stadmin@ns~# systemctl restart pdns.service

And finally enable it to run at boot:

stadmin@ns~# systemctl enable pdns.service

Now you can log into PDNS Manager and add your Zones and Records.
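At this point you can also query the authoritative server directly on its non-standard port to confirm a record you’ve just added resolves (substitute one of your own records):

stadmin@ns~# dig +short @127.0.0.1 -p 5300 <YOUR-RECORD>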

Next up, we configure cloudflared. cloudflared will use its default settings, listening on port 53, unless otherwise configured. This can be done with a short YAML file located at /etc/cloudflared/config.yml:

proxy-dns: true
proxy-dns-port: 5053
proxy-dns-upstream:
  - https://1.1.1.1/dns-query
  - https://1.0.0.1/dns-query

Enable and start cloudflared:

stadmin@ns~# systemctl enable cloudflared.service
stadmin@ns~# systemctl start cloudflared.service

You can confirm it’s running with netstat or by trying to resolve a domain:

stadmin@ns~# dig +short @localhost -p 5053 google.com
216.58.213.110

Almost there – onto the PDNS Recursor now!

stadmin@ns~# vim /etc/powerdns/recursor.conf

allow-from=127.0.0.0/8,<LOCAL-NETWORK>
forward-zones=<YOUR-ZONE>=127.0.0.1:5300
forward-zones-recurse=.=127.0.0.1:5053
local-address=0.0.0.0

stadmin@ns~# systemctl restart pdns-recursor.service
stadmin@ns~# systemctl enable pdns-recursor.service

Again confirm with netstat and dig (Note, no port needed this time):

stadmin@ns~# dig +short @localhost bbc.com
151.101.0.81
151.101.192.81
151.101.64.81
151.101.128.81

Our DNS server is now answering requests on port 53 and forwarding anything destined for the Internet to cloudflared over DoH. Queries for the local zones defined in recursor.conf are forwarded to the PDNS authoritative server:

stadmin@ns~# dig +short @localhost pihole.jewhurst.net
172.16.25.25

While this setup will work entirely on its own, I also wanted to enable ad-blocking with PiHole – so the final step is adding our newly configured DoH DNS server to PiHole as the upstream server. It should be the only upstream selected, unless you create a second setup as above (note that you’d need to enable MySQL replication for the authoritative server to keep the two in sync).

Ensure PiHole is your DHCP server, or that your router hands out PiHole’s IP as the DNS server of choice, and the setup is complete. You can even go as far as blocking outbound port 53 on your firewall so that clients can’t bypass it.
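If your firewall happens to be iptables-based, rules along these lines would do it – a sketch only, with <PIHOLE-IP> as a placeholder and eth1 standing in for your LAN interface:

iptables -A FORWARD -i eth1 -p udp --dport 53 ! -d <PIHOLE-IP> -j REJECT
iptables -A FORWARD -i eth1 -p tcp --dport 53 ! -d <PIHOLE-IP> -j REJECT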

So there you go – fully secured DNS over HTTPS with an internal authoritative name-server and added ad-blocking.