Installing the HTML LCARS Screensaver on MacOS

Years ago I was introduced to the fantastic System47 LCARS screensaver through a random Google search. For years it was my go-to screensaver of choice, but the project hasn’t been maintained in a long time, and with the recent death of Adobe Flash (which the screensaver required), running it on MacOS is no longer possible (not to mention on M1 MacBooks).

While screensavers are somewhat out of fashion, I have always been a fan of the nostalgia this one brings. Not only that, but it’s a highly accurate depiction of the interface used in the Star Trek series, so I was very pleased to learn that a developer on GitHub known as webOSpinn had created an HTML version of this screensaver using the now-defunct Google Swiffy runtime.

I started by cloning the repository:

git clone https://github.com/webOSpinn/System47

This provides several versions of the HTML code, converted using different versions of the Swiffy runtime. I was interested in the latest version, of which there are two files:

stan@Pidgeotto ~/System47 (git)-[master] % ls -l | grep 8.0
-rw-r--r--@ 1 stan staff 1519622 25 Aug 09:28 v8.0.html
-rw-r--r-- 1 stan staff 1453993 25 Aug 09:28 v8.0_noaudio.html

Next, we need a program which can take a webpage and display it as a screensaver. Thankfully another developer, liquidx, has just the thing – WebViewScreensaver. Installation instructions are included; however, as I use Homebrew to manage my Mac, installation was incredibly straightforward:

stan@Pidgeotto ~/System47 (git)-[master] % brew install webviewscreensaver
Updating Homebrew…
==> Auto-updated Homebrew!
Updated 1 tap (homebrew/core).
==> Updated Formulae
Updated 221 formulae.
==> Downloading https://github.com/liquidx/webviewscreensaver/releases/download/v2.2.1/WebViewScreenSaver-2.2.1.zip
==> Downloading from https://github-releases.githubusercontent.com/847378/d25f5c80-dfe2-11eb-816c-78779ac366b9?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=AKIAIWNJYAX4CSVEH53A%2F20210825%2Fus-east
################################################################## 100.0%
==> Installing Cask webviewscreensaver
==> Moving Screen Saver 'WebViewScreenSaver.saver' to '/Users/stan/Library/Screen Savers/WebViewScreenSaver.saver'
🍺 webviewscreensaver was successfully installed!

With this installed, a new screensaver is now listed in the Screen Saver preference pane:

It won’t work immediately – I had to override Gatekeeper (in System Preferences > Security & Privacy > General). After that, I configured the path to v8.0_noaudio.html under Screen Saver Options…
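The path is entered as a URL; mine looked something like this (the username and repo location are specific to my setup, so adjust to wherever you cloned System47):

file:///Users/stan/System47/v8.0_noaudio.html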

And Voila! A working Star Trek LCARS screensaver on an M1 MacBook Air.

Enabling TLS for NGINX with Let’s Encrypt & Certbot

In my last blog I wrote about setting up NGINX to act as a reverse proxy for websites – this allowed for multiple web services to be proxied behind a single NGINX server acting as a gateway for them all. We stopped once we had HTTP services successfully reachable from the Internet. In this post we’re going to explore how to secure these services with TLS and make them reachable via HTTPS.

For this, we’ll need to ensure that port 443 is open and forwarded to our NGINX server. For the next step, opinions vary, but I’m a fan of setting up an automatic redirect from port 80 to port 443. This is quite easy to do: we just modify our original NGINX configuration file:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        #
        # Redirect HTTP Requests
        #
        return 301 https://$host$request_uri;

        server_name _;

        location / {
                try_files $uri $uri/ =404;
        }
}

The return 301 instructs the server to… well, return a 301 (Moved Permanently) whenever it receives an HTTP request. This is followed by the URL to redirect to; in this case we return the original host and URI, but now as an HTTPS request. The browser will automatically look up this new URL. Next, we need to configure NGINX to respond to HTTPS requests for the web applications. Most of the default settings can be copied from the /etc/nginx/sites-available/default file. I’m going to use a production site for this example, as when it comes to generating the certificate, we need a reachable site.

server {
        listen 443;
        server_name jewhurst.com www.jewhurst.com;
###
# SSL Config
###
        ssl_certificate /etc/ssl/temp.cert;
        ssl_certificate_key /etc/ssl/temp.key;

        ssl on;
        ssl_protocols   TLSv1.2;
###
        access_log      /var/log/nginx/jewhurst_com.log;

        location / {
                proxy_pass      http://10.10.10.10:80;
                proxy_set_header        Host    $host;
                proxy_set_header        X-Real-IP       $remote_addr;
                proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_set_header        X-Forwarded-Proto       $scheme;
        }
}

With this configuration we turn on SSL and specify the certificate and key which should be served for the site. Unless you already have an SSL certificate from a provider, point these at a temporary self-signed pair for the time being (NGINX won’t start if the files don’t exist). We also specify that we’d like to use only TLS 1.2 to ensure we’re on the most secure version of the protocol (TLS 1.3 is out there, but we’ll deal with that another time).
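If you have nothing to put there yet, a throwaway self-signed pair is enough as a placeholder – a minimal sketch with openssl, using the paths from the config above:

stan@nginx~# sudo openssl req -x509 -nodes -days 30 -newkey rsa:2048 -keyout /etc/ssl/temp.key -out /etc/ssl/temp.cert -subj "/CN=jewhurst.com"

Certbot’s NGINX plugin will point these directives at the real Let’s Encrypt certificate when it deploys, as we’ll see below.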

Next, install certbot which will automate the process of getting a free SSL certificate from Let’s Encrypt and applying it to the server:

stan@nginx~# sudo apt install certbot

With that installed, and with the NGINX server reachable via port 80 and port 443, run certbot with the NGINX option:

stan@nginx~# certbot --nginx
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator nginx, Installer nginx

Which names would you like to activate HTTPS for?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: www.jewhurst.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate numbers separated by commas and/or spaces, or leave input
blank to select all options shown (Enter 'c' to cancel): 1

Selecting your website and pressing return will cause certbot to reach out to the Let’s Encrypt servers and generate a certificate. The request is verified using a local challenge (certbot creates a verification file locally, and Let’s Encrypt then connects to the server to ensure it can read it), and the certificate is downloaded to the server. Note that you will likely need to run this command as root.

Obtaining a new certificate
Performing the following challenges:
http-01 challenge for www.jewhurst.com
Waiting for verification...
Cleaning up challenges
Deploying Certificate to VirtualHost /etc/nginx/sites-enabled/www.jewhurst.com

Please choose whether or not to redirect HTTP traffic to HTTPS, removing HTTP access.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
1: No redirect - Make no further changes to the webserver configuration.
2: Redirect - Make all requests redirect to secure HTTPS access. Choose this for
new sites, or if you're confident your site works on HTTPS. You can undo this
change by editing your web server's configuration.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Select the appropriate number [1-2] then [enter] (press 'c' to cancel): 1

For the above step, no redirect is necessary, as we have already configured this manually.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled https://www.jewhurst.com

You should test your configuration at:
https://www.ssllabs.com/ssltest/analyze.html?d=www.jewhurst.com
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/www.jewhurst.com/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/www.jewhurst.com/privkey.pem
   Your cert will expire on 2020-06-18. To obtain a new or tweaked
   version of this certificate in the future, simply run certbot again
   with the "certonly" option. To non-interactively renew *all* of
   your certificates, run "certbot renew"

And with that, your site should now be reachable via HTTPS with a verified, trusted certificate. An added advantage of certbot is that it adds a cron job to automatically renew your certificates, ensuring you don’t need to manually run the command and risk your certificate expiring:

stan@nginx~# cat /etc/cron.d/certbot
# /etc/cron.d/certbot: crontab entries for the certbot package
#
# Upstream recommends attempting renewal twice a day
#
# Eventually, this will be an opportunity to validate certificates
# haven't been revoked, etc.  Renewal will only occur if expiration
# is within 30 days.
#
# Important Note!  This cronjob will NOT be executed if you are
# running systemd as your init system.  If you are running systemd,
# the cronjob.timer function takes precedence over this cronjob.  For
# more details, see the systemd.timer manpage, or use systemctl show
# certbot.timer.
SHELL=/bin/sh
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

0 */12 * * * root test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(43200))' && certbot -q renew
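(On a systemd-based system the certbot.timer unit handles renewals instead, as the comments above note.) Either way, it’s worth confirming that unattended renewal will succeed with a dry run:

stan@nginx~# sudo certbot renew --dry-run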

Building an NGINX Reverse Proxy

When hosting things at home, or even in the cloud, there is often an issue with having multiple services with web interfaces, but only one external IP address. As a result, we end up relying on port forwarding, using a bunch of non-standard ports – an approach which, while functional, is often far from ideal. The two main issues with this approach are:

  • Delving into your ISP’s port forwarding settings – which are usually clunky, confusing and don’t always work.
  • Non-standard ports being blocked by organisations or service providers – they are often considered a security risk, or involve too much work to ensure traffic inspection/classification.

(There is, of course, the further complication that your IP Address may not be static, but we’ll deal with that in a different post. For now, we’ll assume our IP is Static)

Enter the Proxy.

According to Wikipedia, a Proxy is an ‘application or appliance that acts as an intermediary for requests from clients’. In the IT world, proxies are often used at the network edge – they allow multiple internal users to connect out to the web from a single IP address or range of addresses (without the complications of NAT), while also giving the business insight into the types of requests being sent and allowing them to enforce governance and security policies on the traffic. Proxies can also cache frequently requested content, which can significantly improve web performance for users (i.e. fewer cries of “The Internet is sloooww”).

There are a range of commercial paid-for and free proxy services but for this post we’re going to focus on NGINX. NGINX is actually an open source web server – similar to Apache – but with some extra features which make it very suitable for a multitude of applications including caching, streaming, load balancing and proxying.

Now I know what you’re thinking – “..but we’re interested in traffic coming in, not going out..” – and you’d be right, which is why we need to flip this whole concept around and talk about a Reverse Proxy. While standard proxies generally act as a single gateway for users going out to the web, a Reverse Proxy acts as a single gateway from the web – usually to multiple servers sitting behind it.

This means we only need to open our web ports and the Reverse Proxy can do the rest for us. Configuring NGINX to take on this role is actually pretty straightforward, so let’s get it installed.

stan@nginx~# sudo apt install nginx

Just like that we have a web-server up and running – just head to http://<NGINX-IP> and you should see the NGINX default web page:


The default installation of NGINX on Debian will create the directory /etc/nginx which holds all the files we need to get things up and running. If we head into the sites-available folder, there will be a default.conf file – the default configuration for the above web page. It will be filled with lots of comments about how to use the file, but in essence will look something like this:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        root /var/www/html;

        index index.html index.htm index.nginx-debian.html;

        server_name _;

        location / {
                try_files $uri $uri/ =404;
        }
}

This defines the ports (80) and IPs (0.0.0.0 and :: – i.e. all IPv4 and IPv6 addresses) we want the server to listen on, then the location of the website root directory (/var/www/html) and the index page types to look for. The server_name attribute lets us define a way of identifying the server and the location section lets us list any specific configuration based on the URI.

In Reverse Proxy mode, we don’t want to serve a website locally, so we modify this configuration:

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name _;

        location / {
                proxy_pass http://<SERVER-IP>:<SERVER-PORT>;
        }
}

This is the most basic configuration to allow reverse proxying to work, but if we now browse to http://<NGINX-IP> we will find ourselves looking at the web application deployed on <SERVER-IP>:<SERVER-PORT>. So, with the basics out of the way, how do we make this work for our multiple services?

NGINX, like Apache, utilises the concept of Virtual Hosts to allow multiple websites to be served from a single web server (commonly known as shared hosting). We can use the hostname of the request to differentiate the server we want to proxy to. This next step assumes that DNS entries pointing to your Public IP are in place for your services – we’ll use services s1, s2 and s3 with the domain home.net. They will run on internal IPs .10, .20 and .30 from the 192.168.1.0/24 network.

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        server_name _;

        location / {
                try_files $uri $uri/ =404;
        }
}
server {
        listen 80;
        listen [::]:80;

        server_name s1.home.net;

        location / {
                proxy_pass http://192.168.1.10:8080;
        }
}
server {
        listen 80;
        listen [::]:80;

        server_name s2.home.net;

        location / {
                proxy_pass http://192.168.1.20:3667;
        }
}
server {
        listen 80;
        listen [::]:80;

        server_name s3.home.net;

        location / {
                proxy_pass http://192.168.1.30:8080;
        }
}

This configuration utilises multiple server{} blocks along with the server_name attribute to separate out the services we wish to proxy for. If we browse to http://s1.home.net we will be forwarded to http://192.168.1.10:8080 while http://s2.home.net will take us to http://192.168.1.20:3667. Our browser, meanwhile, will still show http://s1.home.net and http://s2.home.net – this whole process is transparent to the browser. As a result we only need to forward port 80 on our Router to our NGINX server, and let it handle the rest.

Again, this is a very basic configuration – often this will need to be tweaked depending on the requirements of the web application, to ensure that certain information from the HTTP headers is passed on to the final website – or to ensure information passed back from the website is handled correctly. Best practice would also have each of the backend server{} blocks in its own file under sites-available, which can then be symbolically linked into the sites-enabled folder, also within the /etc/nginx directory.

stan@nginx~# ls -1 /etc/nginx/sites-available
default.conf
s1.home.net.conf
s2.home.net.conf
s3.home.net.conf
stan@nginx~# ls -1 /etc/nginx/sites-enabled
default.conf -> /etc/nginx/sites-available/default.conf
s1.home.net.conf -> /etc/nginx/sites-available/s1.home.net.conf
s2.home.net.conf -> /etc/nginx/sites-available/s2.home.net.conf
s3.home.net.conf -> /etc/nginx/sites-available/s3.home.net.conf
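After adding or linking a new configuration file, check the syntax and reload NGINX to pick up the change:

stan@nginx~# nginx -t
stan@nginx~# systemctl reload nginx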

With the proxy acting as a frontend for all of our services, there is now the opportunity to enable HTTPS for all of them, but that is a post for another day…

Configuring DNS over HTTPS with a local domain and Ad-Blocking

As the license on my Meraki Cloud gear gets closer to expiration I’ve been working more and more to get my home environment migrated to some new technologies. Some of these focus on simplifying my media setup – moving services like Plex back from the cloud and into an on-prem environment. Others focus on implementing some new tech to make managing my infrastructure simpler.

Once you start to get past a couple of services, running a DNS server starts to become quite attractive. It allows you to more easily manage your environment, and makes accessing servers and services much more straightforward – especially for mobile devices where typing an IP address can be a pain.

I’ve used PiHole running on a Debian VM for a long time to block adverts at home, adding entries manually for local devices as I’ve needed them, but I wanted something which allowed me to manage local entries more simply. I also wanted to implement DoH (DNS over HTTPS), given that privacy is increasingly coming under attack. DoH encrypts DNS traffic between you and the Internet DNS server, ensuring that eyes in the middle (such as ISPs) cannot store, datamine or sell this information. However, the provider at the far end still can, and there have been a lot of concerns about data retention and sale.

But where to begin? I’ve a little experience with PowerDNS, so decided it would be the best choice for the majority of my DNS efforts. I knew I would need two parts – an Authoritative DNS server to create and manage my local zones, and a recursor to resolve internet hosts and ensure that queries for local hosts were forwarded to the authoritative server. PowerDNS is modular, with the recursor and the Authoritative parts running separately. You can also choose the backend storage from a wide range of options – I went for a local database as it is also supported by PDNS Manager, the web management interface I chose to manage the Authoritative server. It has a clean, easy-to-use interface and seemed straightforward to install. For DoH, Cloudflare’s cloudflared was my choice, as Cloudflare currently conform to Mozilla’s strict policies on DoH providers.

There were a few order-of-operations issues I came across, but in the end the following process worked. Note that I already had PiHole installed on another server:

  • Install PDNS Authoritative and Recursive servers
  • Install MariaDB and PDNS MariaDB backend
  • Install PDNS Manager and Web Server (I like apache)
  • Download and install cloudflared
  • Configure PDNS Manager + PDNS Authoritative Server
  • Configure cloudflared
  • Configure PDNS Recursor
  • Configure PiHole

This seems like a lot, but it’s pretty straightforward. Get the packages:

stadmin@ns~# apt install pdns-server pdns-recursor pdns-backend-mysql mariadb-server apache2 php7.3 php7.3-json php7.3-mysql php7.3-readline php-apcu

Get and prepare PDNS Manager:

stadmin@ns~# wget https://dl.pdnsmanager.org/pdnsmanager-2.0.1.tar.gz
stadmin@ns~# tar -zxf pdnsmanager-2.0.1.tar.gz
stadmin@ns~# mv pdnsmanager-2.0.1/* /var/www/html
stadmin@ns~# chown -R www-data:www-data /var/www/html/frontend

And cloudflared:

stadmin@ns~# wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
stadmin@ns~# dpkg -i cloudflared-stable-linux-amd64.deb

Let’s start configuration with MariaDB – secure the database by running the built in script:

stadmin@ns~# mysql_secure_installation

Then create a new empty database to use:

stadmin@ns~# mysql -u root -p
Enter password:

MariaDB [(none)]> CREATE DATABASE powerdns;
Query OK, 1 row affected (0.001 sec)
MariaDB [(none)]> exit
Bye
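I use the root database account in pdns.conf further down purely for simplicity; if you’d rather have a dedicated account, something along these lines would do (the user name and password are placeholders – remember to update gmysql-user and gmysql-password to match):

MariaDB [(none)]> CREATE USER 'powerdns'@'localhost' IDENTIFIED BY '<PASSWORD>';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON powerdns.* TO 'powerdns'@'localhost';
MariaDB [(none)]> FLUSH PRIVILEGES;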

Next, configure Apache, PHP and PDNS Manager. I’m using the configuration available in the PDNS Manager Docs:

stadmin@ns~# a2ensite default-ssl
stadmin@ns~# a2enmod rewrite ssl php7.3
stadmin@ns~# phpenmod apcu json readline
stadmin@ns~# systemctl restart apache2.service

You should now be able to browse to https://<YOUR-SERVER>/setup and set up your database connection and user credentials. You’ll get an invalid cert warning unless you’ve properly configured SSL.

With that done, we now configure pdns-server with the same information. Note that I only configure it to listen locally and on port 5300. This is because pdns-recursor and PDNS Manager will be the only services using it.

stadmin@ns~# vim /etc/powerdns/pdns.conf

#--------
#SETTINGS
#--------
allow-axfr-ips=127.0.0.1
allow-dnsupdate-from=127.0.0.1/8,::1
config-dir=/etc/powerdns
daemon=yes
disable-axfr=no
local-address=127.0.0.1
local-port=5300
#--------
#DATABASE
#--------
launch=gmysql
gmysql-host=127.0.0.1
gmysql-port=3306
gmysql-dbname=powerdns
gmysql-user=root
gmysql-password=<PASSWORD>

stadmin@ns~# systemctl restart pdns.service

And finally enable it to run at boot:

stadmin@ns~# systemctl enable pdns.service

Now you can log into PDNS Manager and add your Zones and Records.
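Once a record exists, you can check that the authoritative server is answering on its local port with dig (the record name here is just one from my own zone):

stadmin@ns~# dig +short @127.0.0.1 -p 5300 pihole.jewhurst.net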

Next up, we configure cloudflared. cloudflared will use default settings, running on port 53 unless otherwise configured. This can be done with a short YAML file located in /etc/cloudflared/config.yml:

proxy-dns: true
proxy-dns-port: 5053
proxy-dns-upstream:
  - https://1.1.1.1/dns-query
  - https://1.0.0.1/dns-query

Enable and start cloudflared:

stadmin@ns~# systemctl enable cloudflared.service
stadmin@ns~# systemctl start cloudflared.service

You can confirm it’s running with netstat or by trying to resolve a domain:

stadmin@ns~# dig +short @localhost -p 5053 google.com
216.58.213.110

Almost there – onto the PDNS Recursor now!

stadmin@ns~# vim /etc/powerdns/recursor.conf

allow-from=127.0.0.0/8,<LOCAL-NETWORK>
forward-zones=<YOUR-ZONE>=127.0.0.1:5300
forward-zones-recurse=.=127.0.0.1:5053
local-address=0.0.0.0

stadmin@ns~# systemctl restart pdns-recursor.service
stadmin@ns~# systemctl enable pdns-recursor.service

Again confirm with netstat and dig (Note, no port needed this time):

stadmin@ns~# dig +short @localhost bbc.com
151.101.0.81
151.101.192.81
151.101.64.81
151.101.128.81

Our DNS server is now answering requests on port 53, and forwarding anything for the internet to cloudflared using DoH. For local zones defined in the recursor.conf file, it will forward them to the PDNS Authoritative Server:

stadmin@ns~# dig +short @localhost pihole.jewhurst.net
172.16.25.25

While this setup will work entirely on its own, I also wanted to enable ad-blocking with PiHole – so the final step is adding our newly configured DoH DNS server to PiHole as an upstream server. It should be the only upstream selected, unless you create a second setup as above (in which case you’d also need to enable MySQL replication for the Authoritative Server).
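In the PiHole web interface this lives under Settings > DNS, where the new server is added as a custom upstream; the resulting line in /etc/pihole/setupVars.conf looks roughly like this (the IP is a placeholder for your new DNS server, and the variable name assumes a reasonably recent PiHole release):

PIHOLE_DNS_1=<NS-IP>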

Ensure PiHole is your DHCP server, or that your router hands out PiHole’s IP as the DNS server of choice, and the setup is complete. You can even go as far as blocking outbound port 53 on your firewall.

So there you go – fully secured DNS over HTTPS with an internal authoritative name-server and added ad-blocking.

L2TP/IPSec for Remote Access with VyOS

Remote access to my Home Lab and various other assets has always been something I’ve secured with HTTPS – I’ve used multiple technologies, from NGINX reverse proxies to HAProxy with SSL offloading (this works really well when integrating Let’s Encrypt with multiple back-end services); however, after picking up my new iPhone 11 Pro, I wanted a good way to secure my general browsing when out and about on public WiFi, as well as a more straightforward method of gaining CLI access to my various assets.

I’ve not spoken about my use of VyOS yet, but as an alternative to the time-based licensing on the Cisco CSR1000v (and its kind-of-crazy VM requirements) I decided to migrate my edge routing to VyOS, which runs in a small VM on practically any CPU which supports virtualisation, yet can still support decent encryption speeds. I also implemented this on my remote vServer, so I could extend my home network into the cloud.

Being a network engineer raised in a Cisco environment, I’ve always favoured IPSec as my tunnelling protocol of choice, but with a remote endpoint such as an iPhone, with a dynamic IP, I felt this could be a tad clumsy to set up. Likewise, IKEv2, while supported natively by iOS, is entirely undocumented from a VyOS standpoint for a dynamic remote client such as a phone (and older versions of VyOS have had issues with IKEv2 support). Unlike a lot of routers – as it is Debian based – VyOS has native support for OpenVPN, but I wanted something the iPhone supports natively to make it easier to deploy – which left me with L2TP.

L2TP on its own is not a very secure protocol. In fact, it’s entirely insecure, which led to the creation of RFC 3193, which defines a method for securing L2TP with IPSec. I was a tad concerned about the security of this, as the RFC allows either 3DES or AES to be used, but I figured I would be able to configure that within VyOS, and continued on my merry way.

The configuration was actually super easy – L2TP is a supported protocol within VyOS, and their command line interface is nice and straightforward to use. I’m going to use a VPN range of 10.66.66.1 – 10.66.66.6. My Outside interface will be pppoe0. First we enable IPSec on the outside interface, enable NAT Traversal and define the networks we’re allowing:

set vpn ipsec ipsec-interfaces interface pppoe0
set vpn ipsec nat-traversal enable
set vpn ipsec nat-networks allowed-networks 10.66.66.0/29

Next, we simply configure L2TP the way we like it, including authentication, users, passwords and addresses for clients:

set vpn l2tp remote-access description iOSL2TP
set vpn l2tp remote-access authentication mode 'local'
set vpn l2tp remote-access authentication local-users username user1 password 5ome$ecur3Pa$5
set vpn l2tp remote-access client-ip-pool start 10.66.66.1
set vpn l2tp remote-access client-ip-pool stop 10.66.66.6
set vpn l2tp remote-access dns-servers server-1 10.10.10.5
set vpn l2tp remote-access dns-servers server-2 8.8.8.8
set vpn l2tp remote-access ipsec-settings authentication mode 'pre-shared-secret'
set vpn l2tp remote-access ipsec-settings authentication pre-shared-secret A12DigitPass
set vpn l2tp remote-access ipsec-settings ike-lifetime '3600'
set vpn l2tp remote-access outside-address 'MyOutsideIP'

There are a couple of things to note about the configuration above, but we’ll start with the elephant in the room – while you can configure this on any VyOS router, you need to specify your outside IP, which means if you don’t have a static IP, you’re gonna have a bad time. Some UK ISPs do offer static IPs to home users, such as Zen, Cerberus and PlusNet.

Secondly, you may notice that my DNS Server 1 IP is an internal IP – that’s my PiHole DNS server which means that my VPN enabled iPhone won’t have to deal with those bandwidth consuming adverts (especially as they screw up mobile web browsing so much). It also means that I can hit my internal hosts using DNS.

There is one final piece of configuration, and that is to make sure that the VPN client addresses can reach the internet, so we tweak NAT to take that into account:

set nat source rule 100 outbound-interface 'pppoe0'
set nat source rule 100 source address 10.66.66.0/29
set nat source rule 100 translation address masquerade
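As with any VyOS change, nothing takes effect until it’s committed, and saving makes it persistent across reboots:

commit
save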

With that complete, configuring the iPhone is easy, adding in your outside IP (or DNS), username, password, and PSK:

And when we spin it up, we can confirm on the CLI:

stan@home-vyos:~$ show vpn remote-access
Active remote access VPN sessions:

User            Proto Iface     Tunnel IP       TX byte RX byte  Time
----            ----- -----     ---------       ------- -------  ----
user1           L2TP  l2tp0     10.66.66.1        6.0K    9.7K  00h00m02s

You may remember earlier I mentioned that I wasn’t sure how secure L2TP/IPSec actually was, and you can see that VyOS gives no options to configure the Phase 1 or Phase 2 SAs. Thankfully, it does have a way of showing some detail on the IPSec connection being used:

stan@home-vyos:~$ show vpn ipsec sa verbose
[..snip..]
Connections:
remote-access:  my.ip.add.ress...%any  IKEv1, dpddelay=15s
remote-access:   local:  [my.ip.add.ress] uses pre-shared key authentication
remote-access:   remote: uses pre-shared key authentication
remote-access:   child:  dynamic[0/l2f] === dynamic TRANSPORT, dpdaction=clear
Security Associations (2 up, 0 connecting):
remote-access[109]: ESTABLISHED 31 seconds ago, my.ip.add.ress[my.ip.add.ress]...82.132.242.151[10.5.202.100]
remote-access[109]: IKEv1 SPIs: 6f77acf533823322_i 5516f8dae8156886_r*, rekeying disabled
remote-access[109]: IKE proposal: AES_CBC_256/HMAC_SHA1_96/PRF_HMAC_SHA1/MODP_1024
remote-access{717}:  INSTALLED, TRANSPORT, reqid 19, ESP in UDP SPIs: c050470e_i 0da3faea_o
remote-access{717}:  AES_CBC_256/HMAC_SHA1_96, 1343 bytes_i (24 pkts, 5s ago), 1992 bytes_o (25 pkts, 5s ago), rekeying disabled
remote-access{717}:   my.ip.add.ress/32[udp/l2f] === 82.132.242.151/32[udp/61545]

We’re using IKEv1 with AES256-CBC Encryption, SHA1 Hashing with IPSec Phase 2 using ESP-AES256-CBC and SHA1 Hashing…which I think is quite secure enough to be getting on with.

IOS-XR Upgrades – PIEs, SMUs and Certificate Expirations…

Given that these devices tend to be the most stable and unchanging in our network (and in most networks), it’s rare I have to upgrade an ASR9000 router. Every time I approach it I realise I’ve completely forgotten the process, so hopefully this will help me remember in the future.

IOS-XR is modular software, based on a high-performance Linux kernel. This means that, after the base software features have been installed, you can customise with just the features you want, to keep your install minimalistic and to also stop you encountering bugs with software features you’re not even using.

IOS-XR uses two types of software package – a PIE (Package Installation Envelope) and a SMU (Software Maintenance Upgrade). PIEs are used to install main software features, such as the core IOS-XR operating system and feature sets like multicast, MPLS and cryptographic functions. SMUs are a pretty novel approach to bug-fixing; in the past you would need to upgrade your entire OS to a maintenance release in order to get the latest bug-fixes. With IOS-XR, you just install a SMU, which patches the bug (usually) and doesn’t require you to reload your device (again, usually).

There is a third type of package available, called an SP (Service Pack), which Cisco releases on a relatively regular basis and which bundles up all the SMUs released up to a given point in time, allowing you to install them all together. Today, I’m upgrading an ASR9001 from 5.1.0 to 6.1.4.

The first step is getting the software onto the device – easy enough if you have an FTP server or a (*shivers*) TFTP server in your network. Collect all your PIEs together into a folder and get them onto the device:

RP/0/RSP0/CPU0:ASR-9001#copy ftp://username:password:@10.0.10.1;mgmt/asr9k-mini-px.pie-6.1.4
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
477999627 bytes copied in 375 sec ( 1273283)bytes/sec

I’ve only shown the first PIE above, but you’ll need to bring all your packages across. Once all your images are on the device (you can confirm this with a dir disk0: command) it’s time to start the process of installing them. Installation of IOS-XR PIEs comes in three steps:

  • add the PIE
  • activate the PIE
  • commit the PIE

Each step performs a number of checks – verifying the image, unpacking it and making it active at next boot (we’re going to skip ISSU as it’s not supported in 5.1.0) – before finally committing all the changes. As we’ll see, it’s the activation step that usually reloads the device.

It was here I ran into my first issue. I ran the add command for my mini-px PIE (the mini PIE contains all the base OS functions – it’s the smallest package that renders a usable IOS-XR install) and all seemed to be going well:

RP/0/RSP0/CPU0:ASR-9001(admin)#install add disk0:/asr9k-mini-px.pie-6.1.4
Fri May 10 08:12:01.690 UTC
Install operation 18 '(admin) install add /disk0:/asr9k-mini-px.pie-6.1.4' started by user 'stan' via CLI at 08:12:02 UTC Fri May 10 2019.
The install operation will continue asynchronously.

The process continued to run in the background (you can check the progress with the show install requests command) until:

Error:    Cannot proceed with the add operation because the code signing certificate has
 Error:    expired.
 Error:    Suggested steps to resolve this:
 Error:     - check the system clock using 'show clock' (correct with 'clock set' if
 Error:       necessary).
 Error:     - check the pie file was built within the last 5 years using '(admin) show
 Error:       install pie-info /disk0:/asr9k-mini-px.pie-6.1.4'.

Well, at least it’s a somewhat useful error. I checked the clock – all good, synced to NTP. I checked the PIE, also fine. So what could it be? Time to consult Google, which instantly got a hit – Field Notice 63979. I was actually aware of this issue as it had come to light when I was working with a large ISP in the past. Essentially, in October 2015, the software signing certificate Cisco bundled in with IOS-XR expired – meaning if you tried to install software after that time it would be unable to verify the software as genuine (much like when you try to visit a website whose HTTPS certificate has expired). PIE, SMU and SP installations would all fail. At the time you could install a SMU before the expiration date in order to renew the root certificate, but it’s now 2019.

The Field Notice gave clear instructions that the issue could be resolved by installing a new root certificate and then installing a post-expiry SMU. However, you have to look for the cryptic link in the middle of the article to get the instructions on how to do this. So cue downloading the SMU, un-tarring it and FTP-ing the files onto the disk. Once these were in place, installing the root certificate requires dropping to a shell on IOS-XR (remember, it’s based on Linux):

RP/0/RSP0/CPU0:ASR-9001#run
Fri May 10 08:58:26.360 UTC
#samcmd sam add certificate /disk0:/css-root.cer root trust
SAM: Successful adding certificate /disk0:/css-root.cer
exit
RP/0/RSP0/CPU0:ASR-9001#admin
Fri May 10 08:58:48.636 UTC
RP/0/RSP0/CPU0:ASR-9001(admin)#install add disk0:/asr9k-px-5.1.0.CSCut52232.pie sync
Fri May 10 08:59:08.223 UTC
Install operation 21 '(admin) install add /disk0:/asr9k-px-5.1.0.CSCut52232.pie synchronous' started  by user 'stan' via CLI at 08:59:08 UTC Fri May 10 2019.
 Info:     The following package is now available to be activated:
 Info:
 Info:         disk0:asr9k-px-5.1.0.CSCut52232-1.0.0
 Info:
 Info:     The package can be activated across the entire router.
 Info:
 / 100% complete: The operation can no longer be aborted (ctrl-c for options)
 Install operation 21 completed successfully at 08:59:19 UTC Fri May 10 2019.

Excellent, now on with adding packages!

RP/0/RSP0/CPU0:ASR-9001(admin)#install add disk0:/asr9k-mini-px.pie-6.1.4
Fri May 10 09:02:48.699 UTC
Install operation 23 '(admin) install add /disk0:/asr9k-mini-px.pie-6.1.4' started by user 'stan' via CLI at 09:02:48 UTC Fri May 10 2019.
The install operation will continue asynchronously.
RP/0/RSP0/CPU0:ASR-9001(admin)#Info: The following package is now available to be activated:
 Info:
 Info: disk0:asr9k-mini-px-6.1.4
 Info:
 Info: The package can be activated across the entire router.
 Info:
 Install operation 23 completed successfully at 09:19:49 UTC Fri May 10 2019.

Once all the packages are added, the next step is activating them. There are a few caveats to this step – firstly you need to make sure you’ve installed new versions of all your currently running packages – you can confirm this with the show install active and show install inactive commands and compare the lists.

RP/0/RSP0/CPU0:ASR-9001(admin)#show install active
 Fri May 10 10:02:34.447 UTC
 Secure Domain Router: Owner

Node 0/RSP0/CPU0 [RP] [SDR: Owner]
     Boot Device: disk0:
     Boot Image: /disk0/asr9k-os-mbi-5.1.0/0x100000/mbiasr9k-rp.vm
     Active Packages:
       disk0:asr9k-fpd-px-5.1.0
       disk0:asr9k-k9sec-px-5.1.0
       disk0:asr9k-mgbl-px-5.1.0
       disk0:asr9k-mini-px-5.1.0
       disk0:asr9k-mpls-px-5.1.0
       disk0:asr9k-px-5.1.0.CSCut52232-1.0.0
RP/0/RSP0/CPU0:ASR-9001(admin)#show install inactive
 Fri May 10 10:43:11.576 UTC
 Secure Domain Router: Owner

Node 0/RSP0/CPU0 [RP] [SDR: Owner]
     Boot Device: disk0:
     Inactive Packages:
       disk0:asr9k-mini-px-6.1.4
       disk0:asr9k-fpd-px-6.1.4
       disk0:asr9k-mgbl-px-6.1.4
       disk0:asr9k-mpls-px-6.1.4
       disk0:asr9k-mcast-px-6.1.4
       disk0:asr9k-k9sec-px-6.1.4

I’m adding the MPLS PIE to this upgrade as well. Once we’ve confirmed, we can activate the packages. They need to be activated together, to ensure all the components are updated as well:

RP/0/RSP0/CPU0:ASR-9001(admin)#install activate disk0:asr9k-mini-px-6.1.4 disk0:asr9k-mpls-px-6.1.4 disk0:asr9k-fpd-px-6.1.4 disk0:asr9k-k9sec-px-6.1.4 disk0:asr9k-mcast-px-6.1.4 disk0:asr9k-mgbl-px-6.1.4
Install operation 36 '(admin) install activate disk0:asr9k-mini-px-6.1.4 disk0:asr9k-mpls-px-6.1.4 disk0:asr9k-fpd-px-6.1.4 disk0:asr9k-k9sec-px-6.1.4 disk0:asr9k-mcast-px-6.1.4
 disk0:asr9k-mgbl-px-6.1.4' started by user 'stan' via CLI at 10:44:59 UTC Fri May 10 2019.
 Info:     This operation will reload the following nodes in parallel:
 Info:         0/RSP0/CPU0 (RP) (SDR: Owner)
 Info:         0/0/CPU0 (LC) (SDR: Owner)
 Proceed with this install operation (y/n)? [y]
 Info:     Install Method: Parallel Reload
 The install operation will continue asynchronously.
RP/0/RSP0/CPU0:May 10 10:49:45.851 : instdir[259]: %INSTALL-INSTMGR-7-SOFTWARE_CHANGE_RELOAD : Software change transaction 36 will reload affected nodes

The install will continue, giving you a few heads-up messages as it carries out the various tasks, and then finally reload. When it comes back we can see that the new software packages are active:

RP/0/RSP0/CPU0:ASR-9001#sh install active
Fri May 10 11:05:52.542 UTC
Secure Domain Router: Owner

  Node 0/RSP0/CPU0 [RP] [SDR: Owner]
    Boot Device: disk0:
    Boot Image: /disk0/asr9k-os-mbi-6.1.4/0x100000/mbiasr9k-rp.vm
    Active Packages:
      disk0:asr9k-mini-px-6.1.4
      disk0:asr9k-fpd-px-6.1.4
      disk0:asr9k-mgbl-px-6.1.4
      disk0:asr9k-mpls-px-6.1.4
      disk0:asr9k-mcast-px-6.1.4
      disk0:asr9k-k9sec-px-6.1.4

Finally, we commit the software packages:

RP/0/RSP0/CPU0:ASR-9001(admin)#install commit
Fri May 10 11:26:16.623 UTC 
Install operation 1 '(admin) install commit' started by user 'stan' via CLI at 11:26:17 UTC Fri May 10 2019.

100% complete: The operation can no longer be aborted (ctrl-c for options)

RP/0/RSP0/CPU0:May 10 11:26:30.523 : instdir[256]: %INSTALL-INSTMGR-4-ACTIVE_SOFTWARE_COMMITTED_INFO : The currently active software is now the same as the committed software.

Install operation 1 completed successfully at 11:26:30 UTC Fri May 10 2019. 

And done! We’ve now got an ASR9001 running a fresh install of 6.1.4.

EDIT: Just a final note on this. The fpd package also contains upgrade code for the FPDs on the ASR9001 – i.e. microcode upgrades for the FPGAs themselves.

RP/0/RSP0/CPU0:ASR-9001#admin show hw-module fpd location 0/RSP0/CPU0

===================================== ==========================================
                                       Existing Field Programmable Devices
                                       ==========================================
                                         HW                       Current SW Upg/
 Location     Card Type                Version Type Subtype Inst   Version   Dng?
 ============ ======================== ======= ==== ======= ==== =========== ====
 0/RSP0/CPU0  ASR9001-RP                 1.0   lc   fpga2   0       1.14     Yes
                                               lc   cbc     0      22.114    No
                                               lc   rommon  0       2.01     Yes
---------------------------------------------------------------------------------
 NOTES:
 One or more FPD needs an upgrade.  This can be accomplished
 using the "admin> upgrade hw-module fpd  location " CLI. 

This can be achieved with the command admin upgrade hw-module fpd all location 0/RSP0/CPU0 followed by a reload.
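From admin mode that looks roughly like the following (the router may prompt for confirmation, and the new firmware only takes effect after the reload):

RP/0/RSP0/CPU0:ASR-9001(admin)#upgrade hw-module fpd all location 0/RSP0/CPU0
RP/0/RSP0/CPU0:ASR-9001(admin)#reload location all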

WSL, Hyper and Vagrant

The first laptop I ever purchased was the excellent Black MacBook released by Apple in 2006. It was part of the second generation of MacBooks to include Intel processors, and once I’d got to grips with the UI and a few keyboard shortcuts, I realised just how powerful a UNIX-based operating system could be.

Fast forward 12 years and I’m writing this on a Dell XPS 13. I wish I had a MacBook Pro, or even the dinky 12″ MacBook but when it comes to cost vs performance, Apple have priced themselves out of my reach. Likewise, the majority of businesses still use Windows-based platforms for day to day work and so, when my day to day work required the use of tools like ssh, curl, python and others, I started looking for a more integrated solution for Windows than PuTTY and a Linux VM.

WSL

Windows Subsystem for Linux arrived shortly after the initial release of Windows 10, but has really come into its own in the last year. The ability to install different Linux distributions (rather than just Ubuntu, which was the only one available at first) has made it a far more flexible platform. Currently Ubuntu, Debian, SUSE, OpenSUSE and Kali (the successor to BackTrack) are available and can be installed with a couple of clicks.

Firstly, open a Powershell prompt as Administrator and run:

Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux

to enable the WSL feature. Next head to the Windows Store to choose your distro (I’m a fan of Debian) and install. You’ll be asked to create a UNIX username and password for the new environment and after a few moments you’ll be dropped to a bash shell, familiar to anyone who’s used GNU/Linux in the past. Not only will you now have the standard Linux commands at your fingertips, but you also have access to package managers and linux packages – most of which work natively. After a few additions:

sudo apt-get install zsh vim wget inetutils-ping curl git python python-pip

I’m nearly ready to start work.

Hyper

While you can continue to launch your environment from the Start menu or by invoking bash from PowerShell, I wanted something that allowed me to run multiple sessions at once without windows open everywhere; enter Hyper. Hyper is a terminal emulator built on JavaScript and CSS. It’s not the most lightweight of terminals, but it’s flexible and pretty (and who doesn’t like eye-candy?).

Hyper terminal – and yes, I generated new keys after this 😉

Hyper is configured with a simple text file and, by default on Windows, launches cmd.exe. This can be changed by locating and modifying the “shell” attribute:

shell: 'C:\\Windows\\System32\\wsl.exe',

Most guides suggest changing the above to bash.exe as that is the default shell installed, but notice earlier I downloaded and installed zsh, my shell of choice. Improvements to WSL now allow you more control over your shell, and it’s easy to change from bash to zsh. Within your environment run:

chsh -s $(which zsh)

and then invoke your environment using wsl.exe as above.

Vagrant

While WSL and Hyper allow me to get on with my day to day work without the need for a virtual environment, there are still limitations (for example, networking under WSL is a bit broken and abstracted from Windows). When it comes to rapid testing or prototyping, Vagrant makes spinning up (and tearing down) virtual environments a cinch. It natively integrates with VirtualBox and with one tweak can work perfectly with WSL.

First, download and install VirtualBox and its extension pack. Next, grab an install package for Vagrant for your chosen distro. I chose Debian 64-bit. This can then be installed:

sudo dpkg -i vagrant_2.1.2_x86_64.deb

Only one tweak is required to get this working under WSL, which is to run the following command:

export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS="1"

to ensure WSL is allowed to interface correctly with Windows. For ease, I added this to my zshrc file to save me doing it every time I start a shell.
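If you want to do the same, a one-liner along these lines appends it (assuming your zsh config lives at the default ~/.zshrc):

echo 'export VAGRANT_WSL_ENABLE_WINDOWS_ACCESS="1"' >> ~/.zshrc

Finally, create and boot a test environment: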

% mkdir vagrant
% mkdir vagrant/test
% cd vagrant/test
% vagrant init
A `Vagrantfile` has been placed in this directory. You are now
ready to `vagrant up` your first virtual environment! Please read
the comments in the Vagrantfile as well as documentation on
`vagrantup.com` for more information on using Vagrant.

You’ll need to modify the Vagrantfile to use a valid Vagrant box. There is a wide selection available on Vagrant’s website, as well as many more on GitHub. For demonstration you could modify the Vagrantfile with:

Vagrant.configure("2") do |config|
     config.vm.box = "ubuntu/bionic64"
end

and then

% vagrant up

Vagrant will go off and download the VM Image, boot it, set up a local port forward and install SSH keys.
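You can keep an eye on the machine’s state at any point with:

% vagrant status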

Once complete accessing your new environment is as easy as:

% vagrant ssh

Should you wish you can suspend, resume and halt the environment:

% vagrant suspend | halt | resume

or if you’re finished with the environment completely:

% vagrant destroy

which will remove all traces of the virtual machine.

P.S. I hit some issues booting this Ubuntu VM and needed to add the following to my basic Vagrantfile:

config.vm.provider "virtualbox" do |vb|
  vb.customize [ "modifyvm", :id, "--uartmode1", "disconnected" ]
end

EasyEngine auto-completion with zsh

Like many users, I discovered the joy of using the zsh shell when I began playing with a live USB recovery distribution called GRML. GRML implemented zsh with some excellent customisations, including syntax highlighting and lightning-fast path completion, but the thing that really got me was its great command completion. I’m sure bash can do the same thing, but having never experienced it, I was pretty blown away. GRML make their zsh configuration available to download, and it has become a standard for my server builds.

Recently, a colleague introduced me to EasyEngine, which I have been using to build out a few minimal web environments. One of the neat things EasyEngine provides is a completion script for bash to allow tab completion of the ‘ee’ command. The instructions are simple – to enable the completion run:

% source /etc/bash_completion.d/ee_auto.rc

However, running this on zsh caused a bit of an issue – specifically I was getting an error:

% /etc/bash_completion.d/ee_auto.rc:384: command not found: complete

Thankfully the Holy Google came through for me and I stumbled across a post discussing the same issue with a different program, which suggested that zsh already has compatibility with bash completion built in – it just needs to be activated.

% autoload bashcompinit
% bashcompinit
% source /etc/bash_completion.d/ee_auto.rc

And Voila! Completion for the ee command in zsh!
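To avoid having to do this in every new shell, the same three lines can simply live in your ~/.zshrc (assuming the EasyEngine completion script stays at the same path):

autoload bashcompinit
bashcompinit
source /etc/bash_completion.d/ee_auto.rc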