Deploying your DDEV containers in DigitalOcean (or AWS) with Kubernetes

Alejandro Moreno
Dec 26, 2019

Some of you already know that I’ve been having an affair with the DDEV tool. It’s great for quickly building an environment and starting to tinker around without spending a lot of time on sysadmin, devops or whatever the cool kids call those tasks nowadays.

However, at some point some of your POCs will want to see the light. That’s where you’ll hit a wall with DDEV, as the tool itself is just a local environment. If you want to progress further into the production realm, you can choose to go with DDEV Live, their paid service, or build your own environment.

I’ve already been in that place (building my own environment), and I did not want to have to rebuild environments manually again... ever. I also didn’t want to spend a lot of time automating things, as I’ve done in the past with Ansible (I introduced Ansible to the workflow we had at the BBC, and that was beautiful, I have to say). I love the tool, but I was looking for something quicker, that I could maintain without again spending lots of time that I don’t have nowadays (I’m writing this during my Christmas break).

Enter Kubernetes. The initial plan was to learn the tool, and at the same time see which parts of the DDEV containers I could take with me along the way. Again, there is nothing wrong with being lazy, and if some of the containers the DDEV team is using could be re-used for production purposes, well, at least it was worth a shot.

I have chosen DigitalOcean as they offer free credit you can use to experiment, and because I already have an account there, so it would be relatively easy to migrate my personal blog and a few other projects I was playing with straight away.

As you will see, the instructions are pretty clear and easy to follow. Once you have created your Kubernetes cluster, you have to download their cluster config file, which contains your credentials. Each command you run against your newly created cluster has to be accompanied by that credentials file, and looks something like this:

kubectl --kubeconfig="k8s-XXXXXX-kubeconfig.yaml" get nodes

Great. Let’s start then. I have taken the approach of building my own YAML files, but DDEV gives you a docker-compose file. Theoretically you could take that file and convert it into Kubernetes YAML that you could use to deploy your services. I have preferred to opt for the custom path so I can understand what I’m doing and have a bit more freedom in case I need it.

What you have done with DigitalOcean so far is simply create a Kubernetes cluster. Our project will live inside that cluster. We’ll build a pod for nginx, which will serve our Drupal site, a pod for the database (MariaDB), services to expose both of them so they can communicate with each other, a load balancer to expose the pod containing the webserver to the internet, and two block storage volumes to hold the database and the filesystem.

Having the database and the files in block storage, or in a Kubernetes/DigitalOcean volume, is something I really like, as you can play with the pods later on, maybe rebuild them, destroy them, upgrade them, etc., with the assurance that your data is safe in its volumes.

Let’s start, first with the webserver.

/app/pod-ddev-webserver.yaml
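The manifest itself was embedded in the original post. Here is a minimal sketch of what it could look like, assuming a Deployment named ddev-wb with the image, port and mount path described below; the labels and overall structure are my assumptions, while the claim name webserver-storage matches the block storage we create further down:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ddev-wb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ddev-wb
  template:
    metadata:
      labels:
        app: ddev-wb
    spec:
      containers:
        - name: ddev-wb
          # DDEV's own webserver image, reused as-is
          image: drud/ddev-webserver:v1.5.0
          ports:
            - containerPort: 80
          volumeMounts:
            # Drupal docroot, kept on a volume decoupled from the pod
            - name: webserver-storage
              mountPath: /var/www/html
      volumes:
        - name: webserver-storage
          persistentVolumeClaim:
            claimName: webserver-storage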

What you can see here is basically a new pod called ddev-wb, which uses the Docker image from the DDEV webserver: drud/ddev-webserver:v1.5.0

I have done this to avoid having to do any configuration on the nginx server, as everything comes configured for you by the Drud team. I may change my mind later on, in case I want to optimise the server, or maybe just decouple from DDEV so I can make sure I have the latest nginx updates available independently of the amazing Drud team updating their Docker image to match.

This service will live on port 80 and will serve the files from /var/www/html. A small improvement I am planning here is the mount paths. I don’t want to expose the whole www folder, but it has to live somewhere, as Drupal needs to access the vendor folder, the autoloader, etc.

The second interesting thing to notice is the volumes section. This links the folders I was mentioning to Kubernetes volumes, decoupled from the pod itself. That makes sense, as I was saying before, because you may want to scale the pods, destroy them, upgrade them, etc., without affecting the important content of your Drupal site.

Ok, we are ready to build our first pod:

kubectl --kubeconfig="k8s-XXXXXX-kubeconfig.yaml" apply -f app/pod-ddev-webserver.yaml

Executing that you’ll get a simple response:

$ kubectl --kubeconfig=k8s-XXXXX-kubeconfig.yaml apply -f app/pod-ddev-webserver.yaml

deployment.apps "ddev-wb" created

As long as the response is not an error, all is good. We can see our new pod with this command:

$ kubectl --kubeconfig=k8s-XXXXX-kubeconfig.yaml get pods

NAME READY STATUS RESTARTS AGE

ddev-wb-7474d6485c-v2zbn 0/1 Pending 0 1m

We should now create the volume for that pod. Easy peasy:

app/blockstorage-web.yaml
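Again, the file was embedded in the original post. A minimal sketch of the claim, assuming DigitalOcean’s do-block-storage storage class (app/blockstorage-db.yaml would be the same with the name db-storage):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: webserver-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      # 5GB block storage volume for the Drupal filesystem
      storage: 5Gi
  # DigitalOcean's block storage class (assumed)
  storageClassName: do-block-storage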

We are using a PersistentVolumeClaim to create 5GB of storage. We cannot resize this storage, but we could simply create a bigger one and move the contents across. As long as they are in the same region, the volumes are mounted on the same Linux machine under /mnt/, so we could simply move their contents if we need to.

Let’s test this:

$ kubectl --kubeconfig=k8s-XXXXXX-kubeconfig.yaml apply -f app/blockstorage-web.yaml

persistentvolumeclaim "webserver-storage" created

And while we are here, let’s create the storage for the database too:

$ kubectl --kubeconfig=k8s-XXXXX-kubeconfig.yaml apply -f app/blockstorage-db.yaml

persistentvolumeclaim "db-storage" created

If we now visit the DigitalOcean console and click on Volumes, we should see our two new volumes.

Be careful with this, as they are not free. Each volume is potentially the cost of a new $5 server (depending on the size you choose), and you’ll be charged for it per hour of use.

As you can see, the database volume is not yet attached to any Droplet. Let’s create our database and see some more magic. This will be our database.

poc/app/pod-mariadb.yaml
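This manifest was also embedded in the original post. Here is a sketch of what it plausibly contains, based on the objects the apply output reports below (the mysqlservice Service, the ddevdb and adminer Deployments) and on the credentials used later in the article (root/root, database drupal); the exact images and labels are assumptions:

apiVersion: v1
kind: Service
metadata:
  name: mysqlservice
spec:
  # Headless service: other pods reach the database by the name "mysqlservice"
  clusterIP: None
  selector:
    app: ddevdb
  ports:
    - port: 3306
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ddevdb
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ddevdb
  template:
    metadata:
      labels:
        app: ddevdb
    spec:
      containers:
        - name: mariadb
          image: mariadb:10.4
          env:
            # Plain-text credentials, for this experiment only
            - name: MYSQL_ROOT_PASSWORD
              value: root
            - name: MYSQL_DATABASE
              value: drupal
          ports:
            - containerPort: 3306
          volumeMounts:
            # MariaDB data directory on the db-storage volume
            - name: db-storage
              mountPath: /var/lib/mysql
      volumes:
        - name: db-storage
          persistentVolumeClaim:
            claimName: db-storage
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: adminer
spec:
  replicas: 1
  selector:
    matchLabels:
      app: adminer
  template:
    metadata:
      labels:
        app: adminer
    spec:
      containers:
        - name: adminer
          image: adminer:latest
          ports:
            - containerPort: 8080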

Don’t use a username and password like this in real life; it’s not difficult to store them in a Kubernetes Secret, but for this experiment I’ll leave it like this.
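For reference, a minimal sketch of that alternative (the Secret name db-credentials is made up for this example and is not part of the original setup):

apiVersion: v1
kind: Secret
metadata:
  # Hypothetical name, not part of the original manifests
  name: db-credentials
type: Opaque
stringData:
  MYSQL_ROOT_PASSWORD: root

The database container would then reference it with a secretKeyRef instead of a literal value:

          env:
            - name: MYSQL_ROOT_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: db-credentials
                  key: MYSQL_ROOT_PASSWORD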

$ kubectl --kubeconfig=k8s-XXXXXX-kubeconfig.yaml apply -f app/pod-mariadb.yaml

service "mysqlservice" created

deployment.apps "ddevdb" created

deployment.apps "adminer" created

More things have happened this time. Basically we have created a database pod, and we are linking it to a Service. A Service allows other pods to reach it; for example, Drupal will need to access this database.

I like to use Adminer as well to verify that the database is accessible from a web interface, but that is not necessary and you can use the mysql CLI straight away.

If you visit the console again you’ll see that the Volume is now attached to the cluster where the pod lives.

Something else to notice is that we have used a PersistentVolumeClaim for both the database and the files. The files could live in Object Storage instead of block storage, as they would benefit from the CDN-like capabilities that DigitalOcean offers, and it would also be cheaper. I will leave this for the future, but the beautiful thing about building things with this level of automation is that you can easily change your pod to point to the new storage once it is ready, without any interruption for your users.

Where did we leave things? We have a web server, a database, and storage for both. Now we need to connect the dots. We’ll use a Kubernetes LoadBalancer service for that.

poc/app/lb-webserver.yaml
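One more sketch of the missing embed, assuming a LoadBalancer Service that forwards port 80 to the pods labelled app: ddev-wb from the webserver Deployment above (the selector label is my assumption):

apiVersion: v1
kind: Service
metadata:
  name: ddev-load-balancer
spec:
  # DigitalOcean provisions an external load balancer for this Service
  type: LoadBalancer
  selector:
    app: ddev-wb
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80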

Let’s do this:

$ kubectl --kubeconfig=k8s-XXXXXX-kubeconfig.yaml apply -f app/lb-webserver.yaml

service "ddev-load-balancer" created

If we now visit Networking -> Load Balancers in the console, we’ll see the new entry there.

In a few minutes we’ll see our shiny new load balancer active.

Let’s recap and see what we have so far:

$ kubectl --kubeconfig=k8s-XXXXXX-1546520017642-kubeconfig.yaml get pods

NAME READY STATUS RESTARTS AGE

adminer-57df787959-pgzv2 1/1 Running 0 13m

ddev-wb-7474d6485c-v2zbn 1/1 Running 0 22m

ddevdb-8497d8c5f9-c2m4z 1/1 Running 0 13m

$ kubectl --kubeconfig=k8s-XXXXXX-kubeconfig.yaml get services

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE

ddev-load-balancer LoadBalancer XX.XXX.XX.XXX XXX.XXX.XXX.X 80:31349/TCP 1m

kubernetes ClusterIP X.XXX.X.X <none> 443/TCP 1h

mysqlservice ClusterIP None <none> 3306/TCP 13m

We can see three pods: Adminer, the webserver and the database; and we can see the two services that we’ll use to connect to them and between them.

If we now try to connect to the IP that the load balancer gives us (see the EXTERNAL-IP column, or copy it from the DigitalOcean console), we’ll see the nginx 403 page:

That’s perfectly fine and expected. It looks like our webserver is ready; we just need to get back to the Volumes and configure them. It’s just a few steps that DO guides us through.

To SSH into the node holding the volume, you have to visit the firewall section and allow SSH traffic. To make it even more secure, just get your own IP and put that in the firewall rule. The volume lives attached to the cluster itself, so you’ll also need the root password to SSH in. I normally reset the root password from the Droplet console and then set my own.

Configuring the Volumes consists of just creating the mount folders and setting them up in fstab. Again, following the DO instructions:

# Create a mount point for your volume:

$ mkdir -p /mnt/pvc_8dead62c_0f64_11e9_bb97_ded10a8632bc

# Mount your volume at the newly-created mount point:

$ mount -o discard,defaults,noatime /dev/disk/by-id/scsi-0DO_Volume_pvc-8dead62c-XXXXXXXXXXXXXXXXXXXXXX /mnt/pvc_8dead62c_XXXXXXXXXXXXXXXXXXXXXX

# Change fstab so the volume will be mounted after a reboot

$ echo '/dev/disk/by-id/scsi-0DO_Volume_pvc-8dead62c-XXXXXXXXXXXXXXXXXXXXXX /mnt/pvc_8dead62c_XXXXXXXXXXXXXXXXXXXXXX ext4 defaults,nofail,discard 0 0' | sudo tee -a /etc/fstab

If we now visit both folders we should quickly see something familiar in each of them:

aria_log.00000001 aria_log_control drupal ib_buffer_pool ibdata1 ib_logfile0 ib_logfile1 ibtmp1 lost+found multi-master.info mysql performance_schema tc.log

So yes, this one is the database Volume. In the other one we should see a web/ folder; I placed an index.html there and refreshed the page:

Now the rest of the process is identical to setting up a Drupal site with its database. To import the database you can do it through kubectl:

kubectl --kubeconfig=k8s-XXXXXXX-kubeconfig.yaml exec -i ddevdb-XXXXX -- mysql -u root -h mysqlservice -proot drupal < d8-vec-final.sql

Where ddevdb-XXXXX is the name of the pod where your database lives. You can get this with get pods.

To test things out you can get into mysql like this:

$ kubectl --kubeconfig=k8s-VEC-1-13-1-do-2-ams3-1546520017642-kubeconfig.yaml run -it --rm --image=mariadb:10.4 --restart=Never mysql -- mysql -h mysqlservice -proot

If you don’t see a command prompt, try pressing enter.

MariaDB [(none)]>

Then show databases, create tables, assign permissions, etc., like in a normal MySQL database.

You can then transfer the files using scp, git, blt deploy, etc.

That’s pretty much it. Enjoy

NOTE: I built this tutorial nearly a year ago, during my last Christmas break. I wanted to learn more about Docker and especially Kubernetes, so I thought I could migrate my personal blog and a couple of pet projects I still have there.

However, since then I have moved back and simplified my workflow even further. For a pet project or a personal site that I don’t update very often, Drupal + Tome is a perfect match instead. Still, the learnings I got from this exercise are worth more than anything else.

For bigger projects though, the Kubernetes approach is a match made in heaven. Before that I had lots of problems with the server going down for several reasons, be it the database, nginx, etc. Over the last months Kubernetes ensured that the pod was always alive, and if any problem happened there was always a new pod replacing the one that went rogue. Just like magic.

One more thing I liked about DigitalOcean is the Volumes concept. That level of abstraction ensures that whatever happens to your server, it won’t compromise the storage where your database lives, or where your files are stored.

BONUS: CHEATSHEET

Take a database dump using kubectl

kubectl --kubeconfig=k8s-VEC-1-13-1-do-2-ams3-1546520017642-kubeconfig.yaml exec -n mysql-demo -ti

kubectl --kubeconfig=k8s-VEC-1-13-1-do-2-ams3-1546520017642-kubeconfig.yaml run -it --rm --image=mariadb:10.4 --restart=Never mysql -- mysqldump -h mysqlservice -proot --databases vec2019 > vec-prod.sql

Executing drush

# Open a shell inside the pod.

kubectl --kubeconfig=k8s-XXXXXXXXX-kubeconfig.yaml exec -it POD_NAME -- /bin/bash

# Execute drush inside the pod.

$ cd /var/www/html

$ ../vendor/drush/drush/drush uli
