Common Name vs Subject Alternative Names

You have probably heard about the conflict between the fields Common Name (CN) and Subject Alternative Names (subjectAltName) in SSL certificates. For a long time it was common practice for clients to compare the CN value with the server’s name. However, RFC 2818 has advised against using the Common Name for years, and Google now takes the gloves off: since Chrome version 58 they do not support the CN anymore, but throw an error:

Subject Alternative Name Missing

Good potential for some administrative work ;-)

Check for Subject Alternative Names

You can use OpenSSL to obtain a certificate, for example for binfalse.de:

openssl s_client -showcerts -connect binfalse.de:443 </dev/null 2>/dev/null

Here, openssl connects to the server behind binfalse.de on port 443 (the default port for HTTPS), requests the SSL certificate, and dumps it to your terminal. openssl can also print the details of a certificate. You just need to pipe the certificate into:

openssl x509 -text -noout

Thus, the whole command including the output may look like this:

openssl s_client -showcerts -connect binfalse.de:443 </dev/null 2>/dev/null | openssl x509 -text -noout
Certificate:
  Data:
    Version: 3 (0x2)
    Serial Number:
      03:a1:4e:c1:b9:6c:60:61:34:a2:e1:9f:ad:15:2b:f9:fd:f0
  Signature Algorithm: sha256WithRSAEncryption
    Issuer: C = US, O = Let's Encrypt, CN = Let's Encrypt Authority X3
    Validity
      Not Before: May 12 07:11:00 2017 GMT
      Not After : Aug 10 07:11:00 2017 GMT
    Subject: CN = binfalse.de
    Subject Public Key Info:
      Public Key Algorithm: rsaEncryption
        Public-Key: (4096 bit)
        Modulus:
          00:ae:8d:6a:74:0b:10:4e:8e:07:1e:c8:3e:b8:83:
          11:4f:b0:af:2b:eb:49:61:82:4f:6f:73:30:0c:d6:
          3e:0a:47:bc:72:55:df:84:8c:56:1a:4a:87:ec:d4:
          72:8d:8c:3d:c4:b3:6c:7a:42:e2:f4:6e:c0:5e:50:
          e4:c0:9c:63:6c:0b:e0:12:15:0c:28:2d:4f:67:ad:
          69:9a:b4:ee:dc:12:b1:02:83:00:b7:22:22:60:13:
          a6:7d:e3:8a:e5:0c:f3:15:17:69:5e:fe:de:af:ea:
          1e:71:b4:90:df:97:fe:d2:1b:ef:58:d5:43:35:8b:
          81:e1:62:d6:6b:eb:18:e5:5b:a8:5c:da:f8:39:be:
          8b:9a:34:c1:54:d2:5c:bc:22:85:6b:2e:30:8c:d8:
          fa:dd:2c:9d:ae:5e:c9:21:43:86:d5:f8:dc:aa:d6:
          d4:2c:a8:0b:ca:d8:16:cb:98:d3:c9:c8:c0:a3:6c:
          1e:2f:9d:6f:5b:d3:09:1f:4e:1b:a7:48:99:25:84:
          ef:5f:5a:db:c1:19:82:fd:8c:9e:b2:68:da:1b:98:
          b8:60:49:62:82:8e:75:ea:03:be:0d:df:e1:8c:40:
          8a:10:48:f4:c0:f8:89:02:29:9b:94:3f:6d:68:72:
          42:e8:2e:ad:e6:81:cd:22:bf:cd:ff:ce:40:89:73:
          2e:1e:b7:94:3f:f1:9e:36:89:37:4a:04:81:80:70:
          8f:39:fe:b2:90:b5:5e:cb:93:7e:71:e3:e1:2a:bc:
          21:9a:ef:a6:e2:2b:1c:8c:da:53:bf:79:37:7d:6e:
          0e:eb:de:c3:aa:9f:64:f6:c9:58:35:d2:32:ab:4f:
          f7:8d:6e:a1:7f:7a:de:d4:48:cd:0d:18:b7:20:84:
          b5:8c:d8:f5:b1:ac:e3:b4:66:9f:9f:ab:01:22:c8:
          f2:f8:09:36:f1:c5:90:ff:d3:a4:80:8e:f4:c4:05:
          c5:4f:7f:ca:f3:fd:42:ec:25:b7:38:42:af:fd:37:
          da:5e:2f:a8:c4:23:fe:24:d2:72:16:1e:96:50:45:
          05:cb:39:6c:95:69:a0:39:48:73:72:a4:d5:c0:a0:
          b3:9a:cb:27:fe:7c:87:b8:53:3b:52:50:b6:5d:11:
          ea:b5:42:1a:80:07:4d:4c:b4:79:59:7c:b9:4b:2f:
          0b:b4:2e:57:a6:6c:5f:45:c6:4d:20:54:9d:e3:1b:
          82:0c:16:65:a0:fa:e9:cb:98:6d:59:3c:a5:41:22:
          22:e8:38:38:b6:fe:05:d5:e5:34:7f:9e:52:ba:34:
          4c:ab:9b:8d:e0:32:ce:fa:cd:2b:a3:57:7a:2c:fc:
          2c:e7:31:00:77:d7:d1:cd:b5:d2:6a:65:0f:97:63:
          b0:36:39
        Exponent: 65537 (0x10001)
    X509v3 extensions:
      X509v3 Key Usage: critical
        Digital Signature, Key Encipherment
      X509v3 Extended Key Usage: 
        TLS Web Server Authentication, TLS Web Client Authentication
      X509v3 Basic Constraints: critical
        CA:FALSE
      X509v3 Subject Key Identifier: 
        3B:F7:85:9A:2B:1E:1E:95:20:1B:21:D9:2C:AF:F4:26:E8:95:29:BA
      X509v3 Authority Key Identifier: 
        keyid:A8:4A:6A:63:04:7D:DD:BA:E6:D1:39:B7:A6:45:65:EF:F3:A8:EC:A1
      
      Authority Information Access: 
        OCSP - URI:http://ocsp.int-x3.letsencrypt.org/
        CA Issuers - URI:http://cert.int-x3.letsencrypt.org/
      
      X509v3 Subject Alternative Name: 
        DNS:binfalse.de
      X509v3 Certificate Policies: 
        Policy: 2.23.140.1.2.1
        Policy: 1.3.6.1.4.1.44947.1.1.1
          CPS: http://cps.letsencrypt.org
          User Notice:
            Explicit Text: This Certificate may only be relied upon by Relying Parties and only in accordance with the Certificate Policy found at https://letsencrypt.org/repository/
  
  Signature Algorithm: sha256WithRSAEncryption
    1b:82:51:b3:1c:0d:ae:8c:9f:25:4e:87:1a:4b:e9:b4:77:98:
    74:22:f1:27:c5:c1:83:45:7c:89:34:43:fe:76:d8:90:56:c5:
    b1:a7:74:78:f1:e4:4c:69:2c:9f:55:d1:a3:c9:ce:f1:b6:4a:
    40:e4:18:ae:80:03:76:bd:d5:25:ff:4b:4b:68:cd:98:09:48:
    e4:42:07:bc:4a:ad:a3:f7:46:8a:fe:46:c2:6a:b2:28:01:4d:
    89:09:2a:31:15:26:c5:aa:14:93:5e:8c:a6:cb:30:af:08:7f:
    6f:d8:ef:a2:d7:de:33:3e:f2:c3:17:c6:08:4a:3b:c6:67:05:
    07:c0:b8:52:13:e1:c8:13:d4:0e:19:11:0f:54:4e:ea:d0:2b:
    c2:3d:93:51:8a:15:da:f7:4b:78:08:cd:c1:d0:f2:f7:e0:98:
    f7:0a:bc:13:ca:d0:9b:be:2d:2b:d5:e9:03:29:12:aa:97:ec:
    1a:d1:2c:51:7d:21:d1:38:39:aa:1d:9e:a5:98:1d:94:e2:66:
    ea:31:c4:18:b6:13:6c:6f:8e:2f:27:77:7b:af:37:e0:0b:86:
    4b:b5:cc:7b:96:31:0c:30:c6:9e:12:a2:15:07:29:9f:78:3e:
    5e:2a:3f:cf:f8:27:82:30:72:6b:63:64:5a:d1:2d:ed:08:ed:
    71:13:a9:0b

As you can see in the X509v3 extensions, this server’s SSL certificate does have a Subject Alternative Name:

X509v3 Subject Alternative Name:
  DNS:binfalse.de

To quick-check one of your websites you may want to use the following grep filter:

openssl s_client -showcerts -connect binfalse.de:443 </dev/null 2>/dev/null | openssl x509 -text -noout | grep -A 1 "Subject Alternative Name"

If that doesn’t print a proper Subject Alternative Name you should go and create a new SSL certificate for that server!
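
If you maintain multiple sites, a small shell loop can run that check for all of them at once. A minimal sketch – example.org is just a hypothetical second domain:

for host in binfalse.de example.org; do
	echo "== $host"
	openssl s_client -connect "$host:443" </dev/null 2>/dev/null \
		| openssl x509 -noout -text \
		| grep -A 1 "Subject Alternative Name"
done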

Android: No Internet Access Detected, won't automatically reconnect -aka- Connected, no Internet.

Android: No Internet Access Detected, won't automatically reconnect.

Hands up: who knows what an Android device does when it sees a WiFi network coming up? Exactly: since Lollipop (Android 5) your phone or tablet leaks a quick HTTP request to check if it has internet access. This check is, for example, done with clients3.google.com/generate_204, a “webpage” that always returns an HTTP status code 204 No Content. Thus, if the phone receives a 204 it is connected to the internet; otherwise it assumes that this network does not provide proper internet access or is just a captive portal. However, that way Google of course always knows when you connect and from where. And how often. And which device you’re using. etc… :(

How to prevent the leak

Even if people may like that feature, that is of course a privacy issue – so how can we counter that?

I briefly mentioned that a few years ago. You could use AdAway (available from F-Droid, source from GitHub) to redirect all traffic for clients3.google.com and clients.l.google.com to nirvana.

I already maintain a convenient configuration for AdAway at stuff.lesscomplex.org/adaway.txt, which blocks Google’s captive portal detection.

However, blocking that “feature” also comes with some drawbacks…

The downside of blocking captive portal detection

Android: Connected, no Internet.

The consequences of blocking all requests of the captive portal detection are obvious: your phone assumes that no network has internet access. Therefore, it won’t connect automatically, saying

No Internet Access Detected, won’t automatically reconnect. (see image on top)

That will probably increase your mobile data usage, as you always need (to remember) to connect manually. And even if you manually connect to a network “without internet” the WiFi icon will get an exclamation mark and the phone says

Connected, no Internet. (see second image)

Annoying…

What can we do about it?

Disable captive portal detection

With a rooted phone you can simply disable captive portal detection. Just get a root-shell through adb (or SSH etc) to run the following command:

settings put global captive_portal_detection_enabled 0
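
If you prefer running that from your computer, a one-liner through adb may do as well – assuming adb is set up and your ROM ships a su binary that supports -c:

adb shell "su -c 'settings put global captive_portal_detection_enabled 0'"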

One small drawback of that approach: you need to execute it again after flashing a new image… However, I guess you have a small workflow for re-flashing your phone anyway – just add that tiny bit to it ;-)

Another drawback is that you lose the captive portal detection… Of course, that’s what you intended, but sometimes it may be useful to have that feature in hotels etc..

Change the server for captive portal detection with the Android API

You can also change the URL of the captive portal server to one under your control. Let’s say you have a site running at scratch.binfalse.de/generate_204 that simulates a captive portal detection backend and always returns 204, no matter the request. Then you can use that URL for captive portal detection! Override the captive portal server on a root shell (adb or SSH etc.) by calling:

settings put global captive_portal_server scratch.binfalse.de

This way you retain the captive portal detection without leaking data to Google. However, you will again lose the setting when flashing the phone.

Change the server for captive portal detection using AdAway

Another option for changing the captive portal detection server is to change its IP address to one that’s under your control. You can do that with AdAway, for example. Let’s say your captive portal detection server has the IP address 5.189.140.231, then you may add the following to your AdAway configuration:

5.189.140.231 clients3.google.com
5.189.140.231 clients.l.google.com

The webserver at 5.189.140.231 should then of course accept requests for the foreign domains.
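
To verify that the webserver really answers requests for the foreign domains, you can send a request with a forged Host header – a quick check, assuming the setup from above:

curl -sI -H "Host: clients3.google.com" http://5.189.140.231/generate_204

The response’s status line should report a 204.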

This way, you also don’t leak the data to Google and you will keep the settings after flashing the phone (as long as you leave AdAway installed). However, there are also some things to keep in mind: First, I could imagine that Google may be a bit upset if you redirect their domains to a different server. And second, you don’t know if those are the only servers used for captive portal detection. If Google at some point comes up with another domain for captive portal detection, such as captive.google.com, you’re screwed.

Supplementary material

See also the CaptivePortal description in the Android reference.

Create captive portal detection server with Nginx

Just add the following to your Nginx configuration:

location /generate_204 { return 204; }

Create captive portal detection server with Apache

If you’re running an Apache web server you need to enable mod_rewrite, then create a .htaccess in the DocumentRoot containing:

<IfModule mod_rewrite.c>
	RewriteEngine On
	RewriteCond %{REQUEST_URI} /generate_204$
	RewriteRule $ / [R=204]
</IfModule>

Create captive portal detection server with PHP

A simple PHP script will also do the trick:

<?php http_response_code (204); ?>

Docker MySQL Backup

Even with Docker you need to care about backups.. ;-)

As you usually mount all the persistent data into the container, the files actually live on your host. Thus, you can simply back up these files. However, for MySQL I prefer having an actual SQL dump. Therefore I just developed the Docker MySQL-Backup tool. You will find the sources in the corresponding GitHub repository.

How does Docker MySQL-Backup work?

The tool basically consists of two scripts:

The script /etc/cron.daily/docker-mysql-backup parses the output of the docker ps command to find running containers of the MySQL image. More precisely, it looks for containers of images matching \smysql. That of course only matches the original MySQL image names (if you have a good reason to derive your own version of that image, please tell me!). For every matching $container the script will execute the following command:

docker exec "$container" \
	sh -c 'exec mysqldump --all-databases -uroot -p"$MYSQL_ROOT_PASSWORD"' \
	| ${GZIP} -9 > "${BACKUP_DIR}/${NOW}_complete.sql.gz"

With the following variables:

  • $BACKUP_DIR is a concatenation of $BACKUP_BASE (configured in /etc/default/docker-mysql-backup) and the container name,
  • $NOW is the current time stamp as date +"%Y-%m-%d_%H-%M".

Thus, the backups are compressed, organised in subdirectories of $BACKUP_BASE, and the SQL-dumps have a time stamp in their names. $BACKUP_BASE defaults to /srv/backup/mysql/, but can be configured in /etc/default/docker-mysql-backup.

Last but not least, the script also cleans the backups itself. It will keep the backups of the last 30 days and all backups of days that end with a 2. So you will keep the backups from the 2nd, the 12th, and the 22nd of every month.
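
The retention rule is easy to mimic with find; the following sketch would implement it (my illustration, not necessarily the tool’s actual code):

# delete dumps older than 30 days, except those from days ending with a 2
find "${BACKUP_DIR}" -name '*_complete.sql.gz' -mtime +30 \
	! -name '????-??-?2_*' -delete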

As the script is stored in /etc/cron.daily/ the cron tool will execute the backup script on a daily basis.

Restore a dump

Restoring the dump is quite easy. Let’s assume your container’s name is $container and the dump to restore carries the time stamp $date. Then you just need to run:

gunzip < "${BACKUP_BASE}/docker_${container}/${date}_complete.sql.gz" \
	| docker exec -i "$container" sh -c 'exec mysql -uroot -p"$MYSQL_ROOT_PASSWORD"'

This will decompress the SQL dump on the host and pipe it on the fly into the mysql client inside the running container.

Installation

Manual installation through GitHub

Clone the Docker MySQL-Backup repository:

git clone https://github.com/binfalse/docker-mysql-backup.git

Copy the backup script to the cron.daily (most likely /etc/cron.daily/) directory on your system:

cp docker-mysql-backup/etc/cron.daily/docker-mysql-backup /etc/cron.daily/

Copy the configuration to /etc/default/:

cp docker-mysql-backup/etc/default/docker-mysql-backup /etc/default/

Installation from my Apt repository

If you’re running a Debian-based system you may want to use my apt-repository to install the Docker MySQL-Backup tool. In that case you just need to run

aptitude install bf-docker-mysql-backup

Afterwards, look into /etc/default/docker-mysql-backup for configuration options. This way, you’ll always stay up-to-date with bug fixes and new features :)

Automatically update Docker images

Automatically Update your Docker Images

Docker is cool. It jails tools into containers. That of course sounds clean and safe and beautiful etc. However, the tools are still buggy and subject to the usual attacks, just as if they were running directly on your main host! Thus, you still need to make sure your containers are up to date.

But how would you do that?

Approaches so far

docker-compose pull

On the one hand, let’s assume you’re using Docker Compose, then you can go to the directory containing the docker-compose.yml and call

docker-compose pull
docker-compose up -d --remove-orphans

However, this will just update the images used in that Docker Compose setup – all the other images on your system won’t be updated. And you need to do that for every Docker Compose environment. Moreover, if you’re running 30 containers of the same image it would check 30 times for an update of that image – quite a waste of power and time..

dupdate

On the other hand, you may use the dupdate tool, introduced earlier:

dupdate -s

It is able to go through all your images and update them, one after the other. That way, all the images on your system will be updated. However, dupdate doesn’t know about running containers. Thus, currently running tools and services won’t be restarted..

Better: Docker Auto-Update

Therefore, I just developed a tool called Docker Auto-Update that combines the benefits of both approaches. It first calls dupdate -s to update all your images and then iterates over a pre-defined list of Docker Compose environments to call docker-compose up -d --remove-orphans.

The tool consists of three files:

  • /etc/cron.daily/docker-updater reads the configuration in /etc/default/docker-updater and does the regular update
  • /etc/default/docker-updater stores the configuration. You need to set the ENABLED variable to 1, otherwise the update tool won’t run.
  • /etc/docker-compose-auto-update.conf carries a list of Docker Compose environments. Add the paths to the docker-compose.yml files on your system, one per line (see the example below)
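
A hypothetical /etc/docker-compose-auto-update.conf could look like this:

/path/to/project-one/docker-compose.yml
/path/to/project-two/docker-compose.yml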

As it’s installed in /etc/cron.daily/, cron will take care of the job and update your images and containers on a daily basis. If your system is configured properly, cron will send an email to the systems administrator when it updates an image or restarts a container.
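
In essence, the daily job boils down to something like the following sketch (assumed structure for illustration, not the actual script):

#!/bin/sh
# read the configuration and stop if the tool is not enabled
. /etc/default/docker-updater
[ "$ENABLED" = "1" ] || exit 0

# update all images, then refresh every listed Compose environment
dupdate -s
while read -r compose; do
	( cd "$(dirname "$compose")" && docker-compose up -d --remove-orphans )
done < /etc/docker-compose-auto-update.conf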

You see, no magic, but a very convenient workflow! :)

Installation

Manual

To install the Docker Auto-Update tool, you may clone the git repository at GitHub. Then,

  1. move the ./etc/cron.daily/docker-updater script to /etc/cron.daily/docker-updater
  2. move the ./etc/default/docker-updater config file to /etc/default/docker-updater
  3. update the setup in /etc/default/docker-updater – at least set ENABLED=1
  4. create a list of Docker Compose config files in /etc/docker-compose-auto-update.conf - one path to a docker-compose.yml per line.

Debian Package

If you’re using a Debian based system you may install the Docker-Tools through my apt-repository:

aptitude install bf-docker-tools

Afterwards, configure /etc/default/docker-updater and at least set ENABLED=1. This way, you’ll stay up-to-date with bug fixes etc.

Disclaimer

The tool will update your images and containers automatically – very convenient but also dangerous! The new version of an image may break your tool or may require an updated configuration.

Therefore, I recommend monitoring your tools through Nagios/Icinga/check_mk or whatever. And study the mails generated by cron!

Rsync of ZFS data with a FreeBSD live system

Booting into FreeBSD

Let’s assume you rendered your FreeBSD system unbootable.. Yeah, happens to the best of us, but how can you still copy the data stored on a ZFS to another machine? You probably just shouted RSYNC – but it’s not that easy.

You need a FreeBSD live OS (either on a USB pen drive or on a CD/DVD) and boot into that system. However, by default you do not have networking, the ZPool is not mounted, there is no rsync, SSH is not running, and the live OS is not writable – which brings a few more issues…

This is a step-by-step how-to through all the obstacles. Just boot into your live OS (get it from freebsd.org) and go on with the following…

Get Networking

By default your live system does not have networking set up correctly. Call ifconfig to see if the network interface is up. If it’s not, you can bring it up using:

ifconfig em0 up

(assuming your interface is called em0)

If it is up, you need to configure it. When you’re using a DHCP server you can just ask for an IP address using:

dhclient em0

Otherwise you need to configure the addresses manually:

ifconfig em0 inet 1.2.3.4 netmask 255.255.255.0

Afterwards you should be able to ping other machines, such as

ping 8.8.8.8

Mount the ZPool

Your ZPool won’t be mounted by default; you need to do it manually. To list all pools available on that machine just call:

zpool import

This searches through the devices in /dev to discover ZPools. You may specify a different directory with -d (see man page for zpool). To actually import and mount your ZPool you need to provide its name, for example:

zpool import -f -o altroot=/mnt zroot

This will import the ZPool zroot. Moreover, the argument -o altroot=/mnt will mount it to /mnt instead of / and the -f will mount it even if it may be in use by another system (here we’re sure it isn’t, aren’t we?).

Create some Writeable Directories

The next problem is that you do not have permission to write to /etc, which you need to, e.g., create SSH host keys. However, that’s also not a big issue, as we have the unionfs filesystem! :)

UnionFS will mount a directory as an overlay over another directory. Let’s assume you have some space in $SPACE (maybe in the ZPool that you just mounted or on another USB drive), then you can just create a few directories:

mkdir $SPACE/{etc,var,usr,tmp}

and mount it as unionfs to the root’s equivalents:

mount_unionfs $SPACE/etc /etc
mount_unionfs $SPACE/var /var
mount_unionfs $SPACE/usr /usr
mount_unionfs $SPACE/tmp /tmp

Now we can write to /etc, while the actual changes will be written to $SPACE/etc! Isn’t that a great invention?

Start the SSH service

Now that /etc is writable we can start caring about the SSH daemon. First, we need to configure it to allow root to log in. Add the following line to /etc/ssh/sshd_config:

PermitRootLogin yes

Then, we can start the ssh daemon using:

service sshd onestart

It will automatically create host keys and all the necessary things for a first start of SSH. If that was successful, port 22 should now be open:

# sockstat -4 -l
USER     COMMAND    PID   FD PROTO  LOCAL ADDRESS         FOREIGN ADDRESS
root     sshd       938   4  tcp4   *:22                  *:*
root     syslogd    542   7  udp4   *:514                 *:*

Set root Password

To be able to login you of course need to set a root password:

passwd root

Afterwards, you should be able to log in through SSH from any other machine. Go ahead and give it a try!

Install and Run rsync

Almost there, but the FreeBSD live image doesn’t come with rsync installed. So we need to do it manually:

pkg install rsync

This will first tell us that not even pkg is installed, but after answering the question with y it will automatically install itself. And as everything is mounted as UnionFS, the stuff will actually be installed to $SPACE/... instead of /. However, you should now be able to do the rsync job from wherever you want :)
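
For example, assuming the live system got the address 1.2.3.4 from above and the ZPool is mounted at /mnt, pulling the data from the receiving machine could look like this (with /backup/zroot/ as a hypothetical target directory):

rsync -avz root@1.2.3.4:/mnt/ /backup/zroot/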

Sector 32 is already in use by the program `FlexNet'

Just tried to install Grub on a debootstrap‘ed hard drive, but Grub complained:

Installing for i386-pc platform.
grub-install: warning: Sector 32 is already in use by the program 'FlexNet'; avoiding it.  This software may cause boot or other problems in future.  Please ask its authors not to store data in the boot track.
DRM is bugging us! Image by Brendan Mruk and Matt Lee, shared under CC BY-SA 3.0

Never heard of that FlexNet thing, but according to Wikipedia it’s a software license manager. And we all know how this whole DRM thing just bugs us.. So it bugged me, because the new system wouldn’t boot properly.. Other people have had similar problems.

However, it seems impossible to force Grub to override this sector, but you may wipe it manually. In my case sector 32 was infected by DRM, so I did the following:

dd if=/dev/zero of=/dev/sda bs=512 count=1 seek=32
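
If you want to double-check what is actually stored in that sector before wiping it, you can dump it first – strings may reveal the FlexNet signature:

dd if=/dev/sda bs=512 skip=32 count=1 2>/dev/null | strings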

Once that was done, Grub installed like a charm, the system booted again, and the admin was happy that another DRM thing died :)

The figure I used in this article was made by Brendan Mruk and Matt Lee. They share it as CC BY-SA 3.0.

Handy Docker Tools

Docker Tools

As I’m working with Docker quite intensively, it was about time to develop some tools that help me manage different tasks. Some of them already existed as functions in my environment; now they are assembled in a git repository on GitHub.

The toolbox currently consists of the following tools:

dclean cleans your setup

The Docker-Clean tool dclean helps getting rid of old, exited Docker containers. Sometimes I forget the --rm flag during tests, and by the time I realise it there are already hundreds of orphaned containers hanging around.. Running dclean without arguments removes all of them quickly.

Additionally, the dclean tool accepts a -i flag to also clean the images: it will prune all dangling images, which are orphaned and usually not needed anymore.
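
Under the hood the effect is similar to what plain Docker commands would give you – a rough equivalent, not necessarily the tool’s exact implementation:

# remove all exited containers (dclean)
docker rm $(docker ps -aq -f status=exited)
# additionally prune dangling images (dclean -i)
docker rmi $(docker images -q -f dangling=true)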

denter gets you into a container

The Docker-Enter tool denter beams you into a running Docker container. Just provide the container’s name or CID as an argument to get a /bin/bash inside the container. Internally, denter will just call

docker exec -it "$NAME" "$EXEC"

with $EXEC being /bin/bash by default. So there is no magic, it’s just a shortcut.. You may overwrite the program to be executed by providing it as a second argument. That means,

denter SOMEID ps -ef

will execute ps -ef in the container with the id SOMEID.

dip shows IP addresses

The Docker-IP tool dip shows the IP addresses of running containers. Without arguments it will print the IP addresses, names, and container ids of all running containers. If you’re interested in the IP address of a specific container you may pass that container’s CID with -c, just like:

dip -c SOMEID

This will show the IP of the container with id SOMEID.
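
If you don’t have dip at hand, docker inspect with a Go template gives you the same information – a rough equivalent:

docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}} {{end}}' SOMEID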

dkill stops all running containers

The Docker-Kill tool dkill is able to kill all running containers. It doesn’t care what’s in the container, it will just iterate over the docker ps list to stop all running containers.

As this is quite dangerous, it requires a -f flag to actually kill the containers. You may afterwards run the dclean tool from above to get rid of the cadavers..

dupdate updates images

The Docker-Update tool dupdate helps you stay up-to-date. It will iterate over all your images and try to pull new versions of them from the Docker registry (or your own registry, if you have one). By default, it will echo the images that have been updated and tell you which images cannot be found (anymore) on the registry. You may pass -v to dupdate to enable verbose mode and also get a report for images that do not have a newer version at the registry. This way, you can make sure that all images are checked. Similarly, you can pass -s to enable silent mode and suppress messages about images that cannot be found at the registry.

You may also want to have a look at the Docker Auto-Update tool introduced above.

Installation

Installing the tools is very easy: Just clone the Docker-Tools git repository at GitHub. If you’re using a Debian based system you may also install the tools through my apt-repository:

aptitude install bf-docker-tools

This way, you’ll stay up-to-date with bug fixes etc.

Firefox: Mute Media

Silence Firefox

You middle-click a few YouTube videos and they all start shouting against each other. You enter a website and it immediately slaps sound in your face. How annoying…

But there may be help.

Enter about:config and set

  • media.block-play-until-visible to true to only play media in the currently visible tab and not start playback in background tabs
  • media.autoplay.enabled to false to stop autoplay of some media (doesn’t work everywhere, not sure why..)
  • dom.audiochannel.mutedByDefault to true to mute audio by default – essential for offices
  • plugins.click_to_play to true to require a click before running plugins such as Flash (which you are not using anyway!)

Fix highlight colors for QT apps on a GTK desktop

Okular: highlighted text is hardly readable

I’m using the i3 window manager. As smart as possible, it increases productivity and feels clean. Exactly how I like my desktop. I’m still very happy that Uschy hinted me towards i3!

However, I’m experiencing a problem with highlighted text in Okular, my preferred PDF viewer. When I highlight something in Okular the highlight color (blue) is far too dark, and the highlighted text isn’t readable anymore. I used to live with that, but it was quite annoying – especially when you’re in a meeting/presentation and want to highlight something on the projector. I only saw that problem occurring in Okular. Not sure why, but I honestly do not understand this whole desktop config thing – probably one of the reasons why I love i3 ;-)

Today, I eventually dug into the issue and found out what the problem is and how to solve it. Apparently, Okular uses a Qt configuration, which can be modified using the qtconfig tool. Just install it (here for Qt4 applications):

Configure the highlight color using the qtconfig tool
aptitude install qt4-qtconfig

When you run qt4-qtconfig a window will pop up, as you can see in the figure on the right:

  1. Select a GUI Style that is not Desktop Settings (Default), e.g. Cleanlooks.
  2. Then you can click the Tune Palette… button in the Build Palette section.
  3. A second window will pop up. Select Highlight in the Central color roles section.
  4. Finally you’re good to select the highlight color using the color chooser button! :)
Okular highlighting text with fixed colors

Was a bit difficult to find, but the result is worth it! The figure on the bottom shows the new highlight color – much better.

I will probably never understand all these KDE, QT, Gnome, GTK, blah settings. Every environment does it differently and changes the configuration format and location like every few months. At least for me that’s quite frustrating…

Mail support for Docker's php:fpm

Sending Mails from within a Docker Container

Dockerizing everything is fun and gives rise to sooo many ideas and opportunities. However, sometimes it’s also annoying as …. For example, I just tried to use a Docker container for a PHP application that sends emails. Usually, if your server is configured ok-ish, it works out of the box and I never had problems with something like that.

The Issue

In times of Docker there is just one application per container. That means the PHP container doesn’t know anything about emailing. Even worse, the configuration tool that comes with PHP tries to set the sendmail_path to something like $SENDMAILBINARY -t -i. That obviously fails, because there is no sendmail binary, so $SENDMAILBINARY remains empty and the actual setting becomes:

sendmail_path = " -t -i"

That, in turn, leads to absurd messages in your log files, because there is no such binary as -t:

WARNING: [pool www] child 7 said into stderr: "sh: 1: -t: not found"

Hard times debugging that issue..

The Solution

To solve this problem I forked the php:fpm image to install sSMTP, a very simple MTA that delivers mail to a mail hub. Afterwards I needed to configure sSMTP as well as the PHP mail setup.

Install sSMTP into php:fpm

Nothing easier than that, just create a Dockerfile based on php:fpm and install sSMTP through apt:

FROM php:fpm
MAINTAINER martin scharm <https://binfalse.de>

# Install sSMTP for mail support
RUN apt-get update \
	&& apt-get install -y -q --no-install-recommends \
		ssmtp \
	&& apt-get clean \
	&& rm -r /var/lib/apt/lists/*

Docker-build that image, either through the command line or using Docker Compose or whatever your workflow is. For this example, let’s call the image binfalse/php-fpm-extended.

Setup for the sSMTP

Configuring sSMTP is easy. Basically, all you need to do is specify the address of the mail hub using the mailhub option. However, as my mail server is running on a different physical server I also want to enable encryption, so I set UseTLS and UseSTARTTLS to YES. Docker containers usually get cryptic host names, so I reset the hostname using the hostname variable. And last but not least I allowed applications to overwrite the From field in emails using FromLineOverride. Finally, your full configuration may look like:

FromLineOverride=YES
mailhub=mail.server.tld
hostname=php-fpm.yourdomain.tld
UseTLS=YES
UseSTARTTLS=YES

Just store that in a file, e.g. /path/to/ssmtp.conf. We’ll mount that into the container later on.

Configure mail for php:fpm

Even though we installed sSMTP, the PHP configuration is still invalid; we need to set the sendmail_path correctly. That’s actually super easy, just create a file containing the following lines:

[mail function]
sendmail_path = "/usr/sbin/ssmtp -t"

Save it as /path/to/php-mail.conf to mount it into the container later on.

Putting it all together

To run it, you would need to mount the following things:

  • /path/to/php-mail.conf to /usr/local/etc/php/conf.d/mail.ini
  • /path/to/ssmtp.conf to /etc/ssmtp/ssmtp.conf
  • your PHP scripts to wherever your sources are expected..

Thus a Docker Compose configuration may look like:

fpm:
	restart: always
	image: binfalse/php-fpm-extended
	volumes:
		# CONFIG
		- /path/to/ssmtp.conf:/etc/ssmtp/ssmtp.conf:ro
		- /path/to/php-mail.conf:/usr/local/etc/php/conf.d/mail.ini:ro
		# PHP scripts
		- /path/to/scripts:/scripts/:ro
	logging:
		driver: syslog
		options:
			tag: docker/fpm

Give it a try and let me know if that doesn’t work!
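
For a quick test you can send a mail right from within the running container – assuming the Compose service from above ended up in a container called fpm, and with you@example.org as a stand-in address:

docker exec -it fpm php -r 'var_dump(mail("you@example.org", "test", "Hello from the container"));'

If everything is wired up correctly, mail() returns true and the message arrives through your mail hub.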
