I use a local Kubernetes cluster to help me develop microservices. On my 2015 MacBook Pro, the cluster ran inside a Minikube VM using the HyperKit driver. Replicating this setup on my new 2021 MacBook Pro proved impractical. This is how I made it work.
Docker Desktop on Apple Silicon
Running a cluster on my Apple Silicon MacBook Pro requires Docker Desktop. In February 2022, Docker published a Docker Desktop preview update that enabled Kubernetes clusters on Apple Silicon computers. With that update, I was able to replicate my old setup with Minikube by using Docker as the driver.
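With Docker Desktop running, starting Minikube with the Docker driver is a one-liner:

```shell
# Use Docker Desktop as the driver instead of a VM
minikube start --driver=docker
```

Minikube can also be configured to always use this driver with `minikube config set driver docker`.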
This new setup worked okay, but there were some challenges.
First, I had to check whether my containers could still run on Apple Silicon. Most of my Docker images turned out to be generic enough that they could be built for my MacBook's ARM architecture. Some images didn't have platform-specific builds, so when I tried to build them, they failed with varying errors. Most of these were caused either by compilation problems in low-level packages or simply by the absence of dependencies for ARM architectures. Luckily, you can force an image to be built for its intended platform instead: Docker Desktop offers QEMU emulation that supports running images built for the AMD64 platform, although it does warn that this might impact stability and performance. Still, that makes it possible to run AMD64-based containers, which completed my requirements for running a local cluster.
# tag=0.1.13
-FROM python:3.8-slim-bullseye
+FROM --platform=linux/amd64 python:3.8-slim-bullseye
...
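The same can be achieved at build time without editing the Dockerfile, via Docker's `--platform` flag (the image name and tag here are hypothetical):

```shell
# Build and run an AMD64 image on an ARM host;
# Docker Desktop runs it through QEMU emulation
docker build --platform linux/amd64 -t my-app:0.1.13 .
docker run --platform linux/amd64 my-app:0.1.13
```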
Drawbacks of using Minikube
Using Minikube came with two big drawbacks:
The first drawback was that the Minikube network wasn't directly exposed to the host machine, as it had been with the HyperKit driver on my old Intel MacBook. Minikube offers a workaround for this with the minikube tunnel command, which exposes the network as long as the command keeps running. In practice, I just kept a terminal window open with that command while I used the cluster.
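In practice that looked something like this (the service name is hypothetical):

```shell
# Runs until interrupted; LoadBalancer services only get an
# external IP on the host while this command is active
minikube tunnel

# In another terminal, the service is now reachable from the host
kubectl get service my-service
```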
The second drawback: even though my machine's specs were high, the cluster's pods were a bit slow to run, and I noticed the cluster was having trouble with memory demands. The main cause seemed to be that Minikube ran inside its own container on Docker Desktop, adding a layer of overhead.
Removing the middle man
As I was used to minikube and its plugins, this setup worked well enough that I kept using it. Later, I got curious about the Docker Desktop cluster after a new team member successfully used it on his Intel Macbook.
That’s why I tried to run my cluster without Minikube, by enabling the cluster from the Docker Desktop dashboard.
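Enabling Kubernetes happens in Docker Desktop's settings; after that, kubectl only needs to be pointed at the new context:

```shell
# Docker Desktop registers its cluster under the
# context name "docker-desktop"
kubectl config use-context docker-desktop
kubectl get nodes
```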
After installing all the deployments, in my case using Helm, this (somewhat surprisingly) worked without issue!
The only thing I had to figure out was how to access the cluster network from the host. This involved installing an NGINX ingress controller on the cluster. Since this particular cluster does not have ingressClass definitions (yet), I then had to supply the deployment with the --watch-ingress-without-class=true argument. Do note that ingress configuration will vary between clusters, and that topic probably deserves its own blog post.
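A sketch of that installation with Helm, assuming the upstream ingress-nginx chart (the chart exposes the argument above as the controller.watchIngressWithoutClass value; adjust for your cluster):

```shell
# Add the ingress-nginx chart repo and install the controller,
# telling it to also watch Ingresses without an ingressClass
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx \
  --set controller.watchIngressWithoutClass=true
```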
By using the cluster that ships with Docker Desktop, I was able to simplify the setup I was used to. Moreover, building, deploying and running apps is actually faster than before. I considered cross-platform compatibility to be one of the bigger hurdles in upgrading from an Intel-based MacBook to an Apple Silicon-based MacBook. While it took until February of this year for container emulation to be integrated into an important developer tool like Docker Desktop, in hindsight I feel pretty content with my choice. My JVM apps luckily aren't too concerned with which platform they run on, but I'm still glad that Apple Silicon proved stable enough for my development requirements outside the JVM, including running a local cluster!