We've been using Docker for quite a few different use cases so far: all our internal services run on dokku, an on-premises Heroku clone based on Docker. Furthermore, we rely heavily on GitLab CI, which is also based on Docker.
One area where we have not used Docker much yet is the actual development setup, as most of our stacks run natively on OSX quite easily. In simpler projects it is often enough to run "composer install; ./flow server:run" or "./gradlew run", and everything is downloaded and set up correctly automatically. However, for bigger Flow projects with more moving parts we recently started embracing Docker for Mac; this post is about the things we have learned so far, as they might be helpful to others as well.
Development Setup using docker-compose
We spin up a few containers using docker-compose, specifically for all external services we use (like MariaDB, Elasticsearch, and Redis).
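As a minimal sketch, the docker-compose.yml for these external services might look like the following (service names, image tags, and the password are illustrative assumptions, not our exact setup):

```yaml
version: '2'
services:
  db:
    image: mariadb:10.1
    environment:
      # hypothetical credentials, adjust to your project
      MYSQL_ROOT_PASSWORD: secret
  elasticsearch:
    image: elasticsearch:2.4
  redis:
    image: redis:3.2
```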
For our own (Neos/Flow) application, we usually build the container from scratch (or from a base image we have created ourselves) based on Alpine Linux or Ubuntu, using PHP-FPM, nginx, supervisor, and SSH. Because bind mounts in Docker are quite slow on OSX, people usually add an SSH server to the application's image, add the full source code to the image, and then use their IDE to auto-upload changed files on save.
To recap: the application's source code is baked into the container image, and you need to configure PHPStorm/IntelliJ to sync files on save via SSH to update the in-container copies. While this solution is fast, it is also quite cumbersome to set up (in my view at least). Read on below for an alternative solution.
Problem 1: Dying containers when importing huge SQL dumps
We usually connected Sequel Pro over SSH, using the "application" container as the SSH target and from there connecting on to the "db" container; pretty much like you would when running traditional virtualized servers. While this worked for simple debugging, we had huge problems importing 1-2 GB SQL dumps from production systems: all the containers just crashed completely with error code 255, with nothing in the logs. That hurt our productivity tremendously!
To solve the issue, we did not just expose port "3306" in docker-compose.yaml, but instead used the port definition "33060:3306" for the db container. This means you can connect to port 33060 on the host machine (localhost) and end up on port 3306 in the DB container. All other containers can still connect to the DB container on port 3306, as usual!
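In docker-compose.yaml, the relevant part of the db service then looks roughly like this (the rest of the service definition is abbreviated):

```yaml
services:
  db:
    image: mariadb:10.1
    ports:
      # "host:container": host port 33060 forwards to port 3306
      # inside the db container; other containers still reach
      # the db directly on 3306
      - "33060:3306"
```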
When connecting Sequel Pro to 127.0.0.1:33060 (directly to the container, without an intermediate SSH connection), the crashes went away and we got stable behavior!
Problem 2: Bind Mount Performance / File Synchronization
As already explained above, people often develop by treating the Docker container as a remote host: they add an SSH server to it, add all files to the image using the ADD directive in the Dockerfile, and then use PHPStorm's remote file synchronization to keep the "remote" copy in the container in sync with the local files as these change.
This is quite fast: in a big Flow application (Development context), I got response times of about 0.2 seconds (with warm caches). But I need to configure file syncing, and it has already happened to me that saving a file in the IDE did not update the remote copy. That is quite a mess to debug when it happens, as you head in a totally wrong direction until you find out your changes are actually not reflected...
At the other extreme, I tried mounting the Packages/ folder of a Flow application into the Docker container (using the "volumes" feature), and I got unacceptable response times of about 3-5 seconds.
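For reference, such a default (fully consistent) bind mount looks like this sketch in docker-compose; the paths are examples only:

```yaml
services:
  app:
    build: .
    volumes:
      # default "consistent" bind mount: every file access is
      # fully synchronized between host and container via osxfs,
      # which is slow for large directory trees
      - ./app/Packages/:/data/www/app/Packages/
```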
:cached to the rescue!
Starting with Docker for Mac 17.04.0 (which is currently only available as an "Edge/Beta" version), a new mount flag :cached can be added to the volume/mount definition, so it then looks like "- ./app/Packages/:/data/www/app/Packages/:cached". This slightly relaxes the consistency guarantees: the container might not see the exact same files as the host at all times, but the host remains the authoritative master of the files. Just by setting this flag, we improved loading times to around 1.2 seconds (from 3-5 seconds before).
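The same mount with the new flag, again as a sketch with example paths:

```yaml
services:
  app:
    build: .
    volumes:
      # :cached relaxes consistency: the host stays authoritative,
      # and the container's view may lag slightly behind
      - ./app/Packages/:/data/www/app/Packages/:cached
```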
While 1.2 seconds is not great, it is certainly workable. Then I remembered that osxfs bind mounts get slower the more files they contain, meaning we might speed things up by bind-mounting only the Flow package(s) we are currently developing. I tested this with a huge package containing around 1400 files, including only this one via bind mount and leaving all other packages in the image. In this case, the response time was about 0.4 seconds. Of course that is slower than without mounts (we were down to 0.2 seconds then), but for me 0.4 seconds is perfectly fine for my use cases, especially because it frees me from configuring fragile "upload-to-server" rules in PHPStorm and from debugging them when the upload breaks. You only have to remember that only changes to the bind-mounted package(s) are actually visible in the container :-)
So, as a rule of thumb (see the sketch below):
1) Include the full application source code in your image using ADD in the Dockerfile.
2) Set up volumes/shared folders for the package(s) you are currently developing, and append :cached to these definitions.
3) Make sure to run Docker 17.04.0 (or newer), as :cached is currently only available in the edge/beta releases.
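Putting it all together, the resulting setup could look like the following sketch; the package name My.Package, the paths, and the image tags are hypothetical placeholders:

```yaml
# docker-compose.yaml; the app image is assumed to already contain
# the full source code, e.g. via "ADD . /data/www/app/" in its Dockerfile
services:
  app:
    build: .
    volumes:
      # bind-mount ONLY the package currently under development;
      # all other packages stay baked into the image
      - ./Packages/Application/My.Package/:/data/www/app/Packages/Application/My.Package/:cached
  db:
    image: mariadb:10.1
    ports:
      # reachable from the host at 127.0.0.1:33060 (see Problem 1)
      - "33060:3306"
```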
Happy Docker'ing, I hope this article helps some of you!