Kubernetes for stateless workloads


Kubernetes is, in our opinion, the emerging standard for container orchestration and a robust open source basis for building a PaaS. Since fall 2017, we have been using it ourselves for all our internal and external services, which is why we can wholeheartedly recommend to ADOPT Kubernetes.


Reasons for adopting Kubernetes:

  • Provides a unified API for small and large workloads, with the option of scaling applications from small to big.
  • kubectl allows developers to self-service and debug (e.g. log into the containers).
  • Can be used on a single host (using minikube) or on a fleet of many servers.
  • Is the emerging de-facto standard for containerized applications.
  • Consists of very clever concepts/atoms (like Deployment, Pod, Ingress, Service), which are pluggable in flexible ways.
  • Helm can be used as a package manager.
  • With Google Container Engine, there exists a professional hosted service for critical workloads.
  • Supports Role-Based Access Control (RBAC).
  • Supported by lots of external applications.
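To make the concepts from the list above concrete, here is a minimal, illustrative sketch of a Deployment plus a Service. All names and the container image are hypothetical placeholders, and API versions may differ depending on your cluster version:

```yaml
# A Deployment keeps the desired number of pods running;
# a Service gives them a stable virtual IP and load balancing.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app        # hypothetical application name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
        - name: web
          image: nginx:1.25   # placeholder image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: example-app
spec:
  selector:
    app: example-app       # routes traffic to the pods above
  ports:
    - port: 80
      targetPort: 80
```

Because Deployment, Pod, and Service are separate objects that only reference each other via labels, each piece can be swapped or extended (e.g. adding an Ingress in front of the Service) without touching the others.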


Drawbacks:

  • The YAML configuration can be quite verbose.
  • Has some learning curve, but there is a great interactive tutorial.
  • The initial setup is nontrivial, and it is hard to figure out how best to install a working cluster.


Background:

  • We've used multiple dokku servers in the past, but it became more and more difficult to know where each application resided, and dokku did not provide the resiliency we needed.
  • We want to control our own infrastructure, which is why we do not use cloud-based PaaS platforms like Heroku.

Kubernetes at Sandstorm

We at Sandstorm use Kubernetes in a somewhat special way, which is outlined below:

  • We installed Kubernetes using Kismatic, which worked well for us. We're using Weave Net as overlay network. Before that, we evaluated Canonical Juju and Rancher 2.0, but Kismatic was the first one that really worked well.
  • As we currently only have two Kubernetes servers, we are not yet running in high-availability mode. Instead, we manually pin pods to the worker nodes for now. Because of this, we can use hostPath persistent volumes, which are very easy to understand.
  • We're heavily using ingress-nginx, e.g. for things like SSO; and kube-lego for Let's Encrypt SSL certificates.
  • As Docker registry and CI system, we use GitLab.
  • "Big" stateful services like MySQL/Postgres databases are not hosted by Kubernetes, but colocated on the same infrastructure (installed using apt etc) - for much of the reasoning explained in this blog post.
  • For user authentication, we're using client certificates together with RBAC.
  • Backups are automatically done using CronJob and Restic.
  • Monitoring is done using Prometheus and Grafana.
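As a hedged illustration of the pod pinning and hostPath volumes mentioned above, a pod spec could look roughly like the following sketch. The node name, paths, and image are assumptions for illustration, not our actual configuration:

```yaml
# Pin a pod to one specific worker node and mount a directory
# from that node's filesystem via hostPath.
apiVersion: v1
kind: Pod
metadata:
  name: pinned-app                       # hypothetical name
spec:
  nodeSelector:
    kubernetes.io/hostname: worker-1     # assumed node name
  containers:
    - name: app
      image: nginx:1.25                  # placeholder image
      volumeMounts:
        - name: data
          mountPath: /var/lib/app        # path inside the container
  volumes:
    - name: data
      hostPath:
        path: /srv/app-data              # directory on the worker node
        type: DirectoryOrCreate
```

Because the pod always lands on the same node, the hostPath data is always found again after a restart; this trade-off only works as long as pods are pinned.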
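The backup setup can be sketched as a CronJob that runs Restic on a schedule. The repository URL, secret name, schedule, and paths below are illustrative assumptions:

```yaml
# Nightly Restic backup of a host directory, driven by a Kubernetes CronJob.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: backup-app-data                  # hypothetical name
spec:
  schedule: "0 3 * * *"                  # every night at 03:00
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: restic
              image: restic/restic
              args: ["backup", "/data"]  # back up the mounted directory
              env:
                - name: RESTIC_REPOSITORY
                  value: sftp:backup@backup-host:/repo   # assumed repository
                - name: RESTIC_PASSWORD
                  valueFrom:
                    secretKeyRef:
                      name: restic-secret                # assumed Secret
                      key: password
              volumeMounts:
                - name: data
                  mountPath: /data
                  readOnly: true
          volumes:
            - name: data
              hostPath:
                path: /srv/app-data      # assumed data directory on the node
```

Keeping the repository password in a Secret (rather than inline) means the manifest can live in version control without leaking credentials.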

In the mid term, we plan the following infrastructure additions:

  • Add a third server. Once this is done, we can in principle tolerate one failure.
  • Set up a MySQL (Galera) cluster and Postgres replication, to have an HA DB cluster.
  • For resilient storage, the following options exist (this blog post structures it nicely):
    • GlusterFS - currently our favourite; seems to offer high performance and HA, and supports ReadWriteMany.
    • OpenEBS - looks interesting, but is block storage only; thus it does not seem to support ReadWriteMany mounts (which we need).
    • Rook / CephFS - seems to be much harder to deploy (compared to GlusterFS)
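The ReadWriteMany requirement discussed above translates into a PersistentVolumeClaim like the following sketch; the claim and StorageClass names are assumed for illustration:

```yaml
# A claim for shared storage that several pods can mount read-write
# at the same time - this is what ReadWriteMany means, and what a
# backend like GlusterFS can satisfy (plain block storage cannot).
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data              # hypothetical claim name
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  storageClassName: glusterfs    # assumed StorageClass name
```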