Getting Firefox Nightly working nice with GNOME

I love Firefox. I use Firefox Nightly as my daily driver.

Recent changes to how Firefox uses its profiles have meant I’ve now uninstalled the stable release of Firefox that I had installed with my package manager.

The other motivation to get Firefox Nightly to play nice with GNOME was the new nightly logo.

Firefox Nightly logo

hnnng

To install Firefox Nightly I downloaded the latest release and extracted it to /opt/firefox. I then made the firefox folder owned by my user account so Firefox could update itself.
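
In shell terms that amounts to roughly the following, assuming the downloaded tarball is called firefox-nightly.tar.bz2 (the exact file name varies from build to build):

$ sudo tar -xjf firefox-nightly.tar.bz2 -C /opt
$ sudo chown -R $USER: /opt/firefox

GNOME uses .desktop files to populate its launcher and dock. I created this file and saved it to ~/.local/share/applications/firefox.desktop: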

[Desktop Entry]
Version=1.0
Name=Firefox
Comment=Browse the World Wide Web
Icon=/opt/firefox/browser/icons/mozicon128.png
Exec=/opt/firefox/firefox %u
Terminal=false
Type=Application
Categories=Network;WebBrowser;
Actions=PrivateMode;SafeMode;ProfileManager;

[Desktop Action PrivateMode]
Name=Private Mode
Exec=/opt/firefox/firefox --private-window %u

[Desktop Action SafeMode]
Name=Safe Mode
Exec=/opt/firefox/firefox --safe-mode

[Desktop Action ProfileManager]
Name=Profile Manager
Exec=/opt/firefox/firefox --ProfileManager
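
Optionally, the desktop-file-utils package can sanity-check the file and refresh the applications cache so GNOME picks it up straight away:

$ desktop-file-validate ~/.local/share/applications/firefox.desktop
$ update-desktop-database ~/.local/share/applications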

Hopefully this is useful to others.

Installing PostgreSQL v10

I want to install the latest version of PostgreSQL on my server. The first thing to do is to backup my database from my current installation.

$ pg_dump -U username database_name > backup.sql

From here I can compile and install the latest version of PostgreSQL, then restore my previous data.

Setup

Let’s get a working directory

$ mkdir postgresql && cd postgresql

And install the necessary dependencies

$ sudo apt install build-essential libreadline6-dev zlib1g-dev libssl-dev libxml2-dev libxslt1-dev libossp-uuid-dev libsystemd-dev libproj-dev libpcre3-dev libjson-c-dev

PostgreSQL also needs a dedicated user account.

$ sudo adduser --system postgres

Download PostgreSQL

$ curl -O https://ftp.postgresql.org/pub/source/v10.0/postgresql-10.0.tar.bz2
$ curl -O https://ftp.postgresql.org/pub/source/v10.0/postgresql-10.0.tar.bz2.sha256
$ shasum -a 256 postgresql-10.0.tar.bz2
$ cat postgresql-10.0.tar.bz2.sha256 # check values match

Download PostGIS

$ curl -O http://download.osgeo.org/postgis/source/postgis-2.4.0.tar.gz

Download GEOS

GEOS is a dependency for PostGIS.

$ curl -O http://download.osgeo.org/geos/geos-3.6.2.tar.bz2

Download GDAL

GDAL is another dependency for PostGIS.

$ curl -O http://download.osgeo.org/gdal/2.2.2/gdal-2.2.2.tar.xz
$ curl -O http://download.osgeo.org/gdal/2.2.2/gdal-2.2.2.tar.xz.md5
$ md5sum gdal-2.2.2.tar.xz
$ cat gdal-2.2.2.tar.xz.md5 # check values match

Installing

Now we can install PostgreSQL and its dependencies.

PostgreSQL

$ tar -xf postgresql-10.0.tar.bz2
$ cd postgresql-10.0
$ ./configure --with-openssl --with-systemd --with-uuid=ossp --with-libxml --with-libxslt --with-system-tzdata=/usr/share/zoneinfo
$ make
$ sudo make install
$ cd ..

GEOS

$ tar -xf geos-3.6.2.tar.bz2
$ cd geos-3.6.2
$ ./configure
$ make
$ sudo make install
$ cd ..

GDAL

$ tar -xf gdal-2.2.2.tar.xz
$ cd gdal-2.2.2/
$ ./configure --with-liblzma=yes --with-pg=/usr/local/pgsql/bin/pg_config
$ make
$ sudo make install
$ cd ..

PostGIS

$ tar -xf postgis-2.4.0.tar.gz
$ cd postgis-2.4.0
$ ./configure --disable-gtktest --with-pgconfig=/usr/local/pgsql/bin/pg_config
$ make
$ sudo make install
$ sudo ldconfig
$ cd ..

Configuring

The last line from above, sudo ldconfig, refreshes the dynamic linker’s cache so the newly installed .so files, in particular the libraries the PostGIS module links against (GEOS, GDAL and friends), can be found and loaded correctly. Then when we try and get PostgreSQL to load a module, it’ll find everything it needs.
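
A quick way to confirm the linker can now see the new libraries, and where the PostGIS module itself ended up (paths assume the default install locations used above):

$ ldconfig -p | grep -E 'libgeos|libgdal'
$ ls /usr/local/pgsql/lib/ | grep postgis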

To initialise PostgreSQL, change to the postgres user and initialise the db. We also need to set the right permissions on the folder used for the db:

$ sudo mkdir /usr/local/pgsql/data
$ sudo chown -R postgres /usr/local/pgsql/data
$ sudo chmod 700 /usr/local/pgsql/data
$ sudo su postgres
postgres@hostname:/home/user/postgresql$ /usr/local/pgsql/bin/initdb -E UTF8 -D /usr/local/pgsql/data

Startup

Let’s use systemd to start and stop PostgreSQL. Create a new service file at /etc/systemd/system/postgresql.service:

[Unit]
Description=PostgreSQL database server
After=network.target

[Service]
Type=forking
TimeoutSec=120
User=postgres

ExecStart= /usr/local/pgsql/bin/pg_ctl -s -D /usr/local/pgsql/data start -w -t 120
ExecReload=/usr/local/pgsql/bin/pg_ctl -s -D /usr/local/pgsql/data reload
ExecStop=  /usr/local/pgsql/bin/pg_ctl -s -D /usr/local/pgsql/data stop -m fast

[Install]
WantedBy=multi-user.target

Then start and enable the PostgreSQL server:

$ sudo systemctl start postgresql
$ sudo systemctl enable postgresql
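
To check the server actually came up, systemctl can report its status, and pg_isready (which ships with PostgreSQL) reports whether it is accepting connections:

$ sudo systemctl status postgresql
$ /usr/local/pgsql/bin/pg_isready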

Now we set up a new database as the postgres user using psql:

$ sudo su postgres
postgres@hostname:/home/user/postgresql$ /usr/local/pgsql/bin/psql
postgres=# CREATE ROLE username WITH LOGIN PASSWORD 'password';
postgres=# CREATE DATABASE database_name WITH OWNER username;
postgres=# \q

Then, whilst we are still the postgres user, load the backup file:

postgres@hostname:/home/user/postgresql$ /usr/local/pgsql/bin/psql -d database_name --set ON_ERROR_STOP=on -f backup.sql
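
Since we went to the trouble of building PostGIS, it’s worth confirming it loads in the new cluster. A minimal check, still as the postgres user (skip the CREATE EXTENSION if the dump already contains one):

postgres@hostname:/home/user/postgresql$ /usr/local/pgsql/bin/psql -d database_name -c 'CREATE EXTENSION IF NOT EXISTS postgis;'
postgres@hostname:/home/user/postgresql$ /usr/local/pgsql/bin/psql -d database_name -c 'SELECT PostGIS_Version();'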

Further Updates

When a minor version is released, e.g. from 9.6.2 to 9.6.3, updating is very easy. Compile the new source code, stop postgresql.service, and run the sudo make install command. Then you can simply start postgresql.service again, as the on-disk data structure remains compatible.
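
A sketch of those steps (the version number here is just an example):

$ tar -xf postgresql-10.1.tar.bz2 && cd postgresql-10.1
$ ./configure --with-openssl --with-systemd --with-uuid=ossp --with-libxml --with-libxslt --with-system-tzdata=/usr/share/zoneinfo
$ make
$ sudo systemctl stop postgresql
$ sudo make install
$ sudo systemctl start postgresql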

Major updates are more involved. We need to dump our database as at the start of this article, then move the /usr/local/pgsql folder to a backup location, and then install as the article describes.

Updating an Eloquent model to use timestamps

When I upgraded my website to use Laravel I already had a database with blog posts. As part of the schema, one of the columns was called date_time, was of type int(11), and contained plain Unix timestamps. To speed up migrating my code base to Laravel I kept this column and set my articles model to not use Laravel’s timestamps.

I’ve now got round to updating my database to use said timestamps. Here’s what I did. First I created the relevant new columns:

[sql]
> ALTER TABLE `articles` ADD `created_at` TIMESTAMP DEFAULT 0;
> ALTER TABLE `articles` ADD `updated_at` TIMESTAMP;
> ALTER TABLE `articles` ADD `deleted_at` TIMESTAMP NULL;

I needed to add DEFAULT 0 when adding the created_at column to stop MariaDB setting the default value to CURRENT_TIMESTAMP, as well as adding an extra ON UPDATE CURRENT_TIMESTAMP rule that changes the column value on every row update.

Then I needed to populate the column values based on the soon to be redundant date_time column. I took advantage of the fact the values were timestamps:

[sql]
> UPDATE `articles` SET `created_at` = FROM_UNIXTIME(`date_time`);
> UPDATE `articles` SET `updated_at` = FROM_UNIXTIME(`date_time`);

Now I can delete the old date_time column:

[sql]
> ALTER TABLE `articles` DROP `date_time`;

Next I had to get Eloquent to work properly. I wanted /blog to show all articles, /blog/{year} to show all articles from a given year, and /blog/{year}/{month} to show all articles from a given month. My routes.php handled this as Route::get('blog/{year?}/{month?}', 'ArticlesController@showArticles');. I then defined a query scope so I could do $articles = Articles::date($year, $month)->get(). Clearly these variables could be null, so my scope was defined as follows:

[php]
public function scopeDate($query, $year = null, $month = null)
{
    // No year given: return the query unmodified, i.e. all articles.
    if ($year == null) {
        return $query;
    }

    // Build a prefix such as "2017" or "2017-10" (months are assumed to be
    // zero-padded in the URL) to match against the updated_at timestamp.
    $time = $year;
    if ($month !== null) {
        $time .= '-' . $month;
    }
    $time .= '%';

    return $query->where('updated_at', 'like', $time);
}

The logic takes advantage of the fact that $month can never be non-null whilst $year is null, i.e. when there is no year we just return an unmodified query.
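
For completeness, the controller end is then just a thin wrapper around that scope. A sketch only, the ordering and view name here are assumptions rather than my actual code:

[php]
public function showArticles($year = null, $month = null)
{
    // Pass the optional route parameters straight through to the scope.
    $articles = Articles::date($year, $month)
        ->orderBy('updated_at', 'desc')
        ->get();

    return view('articles.index', ['articles' => $articles]);
}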

And now my blog posts are handled by Laravel properly.

Hardening HTTPS with nginx

I’ve improved my HTTPS setup with nginx recently. For a start I’ve organised the files better. For a TL;DR I’ve put the pertinent files on GitHub.

First I have conf/nginx.conf, the main configuration file, which defines lots of mundane non-security related things. Then the penultimate directive is include includes/tls.conf;, which defines the various TLS rules globally. In particular this allows the session cache to be shared amongst several virtual servers. Let’s take a look at what else is done here:

[bash]
# Let’s only use TLS
ssl_protocols TLSv1.1 TLSv1.2;
# This is sourced from Mozilla’s Server-Side Security – Modern setting.
ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!3DES:!MD5:!PSK';
ssl_prefer_server_ciphers  on;

# Optimize SSL by caching session parameters for 10 minutes. This cuts down on the number of expensive SSL handshakes.
# The handshake is the most CPU-intensive operation, and by default it is re-negotiated on every new/parallel connection.
# By enabling a cache (of type "shared between all Nginx workers"), we tell the client to re-use the already negotiated state.
# Further optimization can be achieved by raising keepalive_timeout, but that shouldn't be done unless you serve primarily HTTPS.
ssl_session_cache    shared:SSL:10m; # a 1mb cache can hold about 4000 sessions, so we can hold 40000 sessions
ssl_session_timeout  24h;

# SSL buffer size was added in 1.5.9
ssl_buffer_size      1400; # 1400 bytes to fit in one MTU

# Use a higher keepalive timeout to reduce the need for repeated handshakes
keepalive_timeout 300; # up from 75 secs default

# SPDY header compression (0 for none, 9 for slow/heavy compression). Preferred is 6. 
# 
# BUT: header compression is flawed and vulnerable in SPDY versions 1 - 3.
# Disable with 0, until using a version of nginx with SPDY 4.
spdy_headers_comp 0;

# Diffie-Hellman parameter for DHE ciphersuites
# `openssl dhparam -out dhparam.pem 4096`
ssl_dhparam includes/dhparam.pem;

As you can see, I don’t support any version of SSL, as it’s insecure. I’ve also dropped support for TLSv1, though I’m still undecided on that. Remember you are going to need to generate your own dhparam.pem file; the openssl dhparam command above can take a long time.

In each included virtual server I further include two other files, stapling.conf and security-headers.conf. The first file is very self-explanatory and simply enables OCSP stapling. As far as I can tell with nginx, if you use virtual servers, one of them needs to be designated a default_server, and at least that one needs stapling enabled in order for stapling to work for any other virtual server. Feedback on this point is welcome.
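
I haven’t reproduced stapling.conf in full here, but a minimal version looks something like the following (the certificate chain path and resolver addresses are placeholders, not my real values):

[bash]
# Staple the OCSP response to the TLS handshake and verify it against the chain
ssl_stapling on;
ssl_stapling_verify on;
ssl_trusted_certificate /etc/nginx/certs/chain.pem;
# Resolver nginx uses to reach the CA's OCSP responder
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;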

The second file, security-headers.conf, is where I improve the security of the sites using several HTTP headers. I’ve been particularly inspired by securityheaders.io. Let’s take a look:

[bash]
# The CSP header allows you to define a whitelist of approved sources of content for your site.
# By restricting the assets that a browser can load for your site, like js and css, CSP can act as an effective countermeasure to XSS attacks.
add_header Content-Security-Policy "default-src https: data: 'unsafe-inline' 'unsafe-eval'" always;

# The X-Frame-Options header, or XFO header, protects your visitors against clickjacking attacks.
add_header X-Frame-Options "SAMEORIGIN" always;

# This header is used to configure the built in reflective XSS protection found in Internet Explorer, Chrome and Safari (Webkit).
# Valid settings for the header are 0, which disables the protection, 1 which enables the protection and 1; mode=block which tells
# the browser to block the response if it detects an attack rather than sanitising the script.
add_header X-Xss-Protection "1; mode=block" always;

# This prevents Google Chrome and Internet Explorer from trying to mime-sniff the content-type of a response away from the one being
# declared by the server. It reduces exposure to drive-by downloads and the risks of user uploaded content that, with clever naming,
# could be treated as a different content-type, like an executable.
add_header X-Content-Type-Options "nosniff" always;

This is unashamedly copied from Scott Helme.

There are two more headers I use, but these are used on a site-by-site basis and are thus done in the virtual server files themselves. This is because once we use these headers we can’t really go back to having a non-https version of the site. You can see them in the sites-available/https.jonnybarnes.uk file. They are the HSTS and HPKP headers. HSTS is easy: it just tells the browser to only use https:// links for the domain. This is cached by the browser, and can even be pre-loaded.
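
As a sketch, the HSTS directive looks like this (the max-age of one year and the extra flags are illustrative rather than my exact values):

[bash]
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains; preload" always;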

HPKP is a little more involved. The idea with HTTP Public Key Pinning is to try and stop your site being the subject of a man-in-the-middle attack. In such an attack a different certificate than yours is presented to the user; in particular, the public key included in that certificate is not the public key associated with my private key. What HPKP does is take a pin, or hash, of the public key and transfer that information in a header. This value is then cached by the user’s browser, and on any subsequent connection the browser checks that the provided public key matches this locally cached pin. For fallback purposes a backup pin must also be provided. This backup pin can be derived from a public key contained in a CSR; in particular, this CSR needn’t have been used to get a signed certificate from a CA yet.

Scott Helme has an excellent write-up of this process. Given either your current site certificate or a CSR for a future certificate, it’s simply a few openssl commands to get the relevant base64-encoded pins, then a single add_header directive in nginx.
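
Something along these lines; the file names and max-age are placeholders, and the two resulting pins slot into the header value:

[bash]
# Pin from the public key in the current certificate
$ openssl x509 -in current-cert.pem -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64
# Backup pin from the public key in an as-yet-unused CSR
$ openssl req -in backup.csr -pubkey -noout | openssl pkey -pubin -outform der | openssl dgst -sha256 -binary | openssl enc -base64

add_header Public-Key-Pins 'pin-sha256="<primary-pin>"; pin-sha256="<backup-pin>"; max-age=2592000' always;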

The end result of all this is, hopefully, a more secure site. One issue to note for now is that Mozilla Firefox doesn’t support HPKP yet, so you’ll get error entries in the console log regarding an invalid Public-Key-Pins header. This should get fixed in time.

Getting IPv6 Support

Given the impending doom of IPv4, I thought I’d try and set up my site to be accessible over IPv6. Thankfully my web host has dual-stack connectivity in their datacenter. They also assign IPv6 addresses for free; in fact they gave me 65,536 addresses.[^1]

Getting nginx set up was trivially easy: I re-compiled the software adding the --with-ipv6 flag, then added the line listen [::]:80 to my vhost files (or indeed listen [::]:443). This was in addition to the usual listen directive.
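
For illustration, the relevant part of a vhost then looks something like this (the server name is a placeholder):

[bash]
server {
    listen 80;
    listen [::]:80;    # the additional IPv6 listener
    server_name example.com;

    # for the TLS vhost the equivalent pair is:
    # listen 443 ssl;
    # listen [::]:443 ssl;
}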

Getting IPv6 configured correctly on the system took a little more working out. In the end I think I have simplified my configuration, even for IPv4. I use Debian 7, which comes with the newer iproute2 package to manage network connections, with the settings stored in /etc/network/interfaces. This is mine:

[bash]
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

# The loopback network interface
auto lo
iface lo inet loopback

# This line makes sure the interface will be brought up during boot
auto eth0
allow-hotplug eth0

# The primary network interface
iface eth0 inet static
	address 85.17.141.27
	netmask 255.255.255.0
	gateway 85.17.141.254
	# dns-* options are implemented by the resolvconf package, if installed
	dns-nameservers 85.17.150.123 85.17.96.69 85.17.150.123 62.212.64.122
	dns-search localdomain
	# up commands
	up sysctl -w net.ipv6.conf.eth0.autoconf=0
	up sysctl -w net.ipv6.conf.eth0.accept_ra=0
	up ip addr add 85.17.141.33/24 dev eth0
	up ip -6 addr add 2001:1af8:4100:a00e:4::1/64 dev eth0
	up ip -6 ro add default via 2001:1af8:4100:a00e::1 dev eth0

This sets up the default IPv4 address and a default gateway. Then once the interface is brought up at boot time the ip command, which is part of the iproute2 package, is invoked to add a second IPv4 address, then an IPv6 address, and finally the default route to use when communicating over IPv6.

You’ll notice I also use the sysctl command to change some system settings. These stop the system trying to assign itself an IPv6 address automatically and stop it listening to router advertisements. I think these were causing my IPv6 connection to drop.

Now my system is setup as so:

[bash]
➜ ~  ip addr show eth0
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP qlen 1000
link/ether d4:ae:52:c5:d2:1b brd ff:ff:ff:ff:ff:ff
inet 85.17.141.27/24 brd 85.17.141.255 scope global eth0
inet 85.17.141.33/24 scope global secondary eth0
inet6 2001:1af8:4100:a00e:4::1/64 scope global
   valid_lft forever preferred_lft forever
inet6 fe80::d6ae:52ff:fec5:d21b/64 scope link
   valid_lft forever preferred_lft forever

and

[bash]
➜ ~  ip -6 ro
2001:1af8:4100:a00e::/64 dev eth0  proto kernel  metric 256
fe80::/64 dev eth0  proto kernel  metric 256
default via 2001:1af8:4100:a00e::1 dev eth0  metric 1024

Even though I don’t have IPv6 at home yet, my site should be connectible over IPv6.
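
A quick way to check from any machine with working IPv6 (the host name here is a placeholder):

[bash]
$ ping6 -c 3 example.com
$ curl -6 -I https://example.com/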

[^1]: I was given the IP addresses ::0000 to ::FFFF, that’s 2^16 addresses.

*[IP]: Internet Protocol