Locally deployed clusters can be a convenient part of a modern software development cycle, shortening feedback loops and giving a developer a useful representation of the live version of an app, even if it's just a stub. Unfortunately, they have a reputation for eating up your precious resources like they're mashed taters. This year, working from a home office has become the norm for many developers around the world. Enter the home desktop to "share the load" with our brave little work laptop. We will form a fellowship with our loyal home desktop to help us through this new and uncertain adventure. Keep reading to find out how we can take off and escape this "Mount Doom" scenario!

Working from home has played its part in our professional world as developers ever since the rise of the personal computer. But many countries have declared lockdowns of varying degrees since the Covid-19 outbreak earlier this year, and that flipped our professional world inside out. For many of us, the door to our home office is now the Black Gate to the professional world, including all its uruks and trolls.

I use Minikube to locally deploy the Kubernetes cluster that runs the applications I develop. The nature of such a cluster means that it quickly gets hungry for resources like CPU, RAM and storage. Combine this with the necessary communication software like MS Teams or Google Hangouts, not to mention a typical heavyweight IDE like IntelliJ, and you have a recipe for a spicy hot work laptop in dire need of a cooldown.

In this blogpost, I'll demonstrate, with an example on Windows 10 Pro, how to run Minikube on one host and reach the cluster it runs from another host on your private LAN. The example will enable you to push Docker images to Minikube and browse to a webpage served from a pod on the cluster, using a custom domain name. Before that, I'll go over some background information that should provide some insight into the whats and whys.

Disclaimer: I am not responsible for any compromises of your network, hardware or software due to recklessness in using this tutorial or failure to take proper measures to secure your assets!

If you intend to follow the tutorial, here are the requirements.

Requirements

  • A machine running Windows 10 Pro (or another edition with Hyper-V available) to act as the Minikube host, with Minikube and kubectl installed

  • A second machine on the same private LAN to act as the client, with the Docker CLI and kubectl installed

  • Administrator access on the Minikube host, needed to create a Hyper-V virtual switch and to start Minikube

  • A way to share a couple of files between the two machines, such as a shared folder

Topics in this Blogpost

  • Minikube under the covers

  • Step-by-step guide to running Minikube on a private LAN

  • Conclusion

Minikube under the covers

Minikube is designed as a barebones VM that runs a Kubernetes cluster with a master and a single node. Its goal is to let you learn the basics of Kubernetes, and even to develop services against it, should you find that convenient.

The Minikube ISO is booted by your hypervisor when you run the minikube start command. It installs Kubernetes with some default settings and a single node; be sure to at least specify your desired hypervisor here.
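For example, a minimal start that only pins the hypervisor could look like this (the hyperv driver matches the Windows setup used later in this post; substitute the driver for your own hypervisor):

minikube start --vm-driver hyperv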

The workloads you deploy to the Kubernetes cluster launch from images pushed to the Docker repository that is also installed on the Minikube VM. If you want to run multiple deployments, you can either create another Minikube VM, and with it a new cluster, by using the --profile option, or add a namespace to the existing Kubernetes cluster. Typically though, Kubernetes deployments are reasoned about at the cluster level. You can isolate a deployment context using a namespace, but that comes with caveats, such as the fact that a namespace usually shares all its resources with the other namespaces on its cluster.
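Both approaches come down to a single command; the profile and namespace names below are just placeholders:

minikube start --profile second-cluster    # a second, fully separate cluster in its own VM
kubectl create namespace my-feature        # an extra, lighter-weight context in the existing cluster
kubectl get pods -n my-feature             # interact with it by passing the namespace explicitly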

With typical usage, you'll need to set some environment variables every time you want to interact with the cluster's Docker daemon from a shell. Minikube conveniently prints the commands to do this when you run minikube docker-env. For every supported shell, the manpages have clear instructions on how to turn this into a one-liner that configures the shell of your choice without hassle.
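As a sketch, the resulting one-liners usually look like this (both variants appear in Minikube's own help output; adjust to your shell):

& minikube docker-env | Invoke-Expression    # PowerShell
eval $(minikube docker-env)                  # bash / zsh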

The Kube-System namespace

Kubernetes has a set of services under the kube-system namespace which provide functions associated with the cluster level. You can see all the pods running under kube-system with kubectl get pods -n kube-system.

You will usually also need Minikube's "ingress" addon. You can enable it either with an option on the start command (--addons=ingress) or retroactively with the dedicated command (minikube addons enable ingress). The ingress addon makes it possible to expose services in the cluster to incoming requests and adds another set of functions to the kube-system namespace.

As described in the Kubernetes reference, you can deploy Ingress objects. In an Ingress, you specify rules that associate a hostname with a service running in the backend.
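As a sketch, an Ingress with a single host rule could look like the manifest below. The hostname, service name and port follow the Kubernetes ingress example we'll reference later (hello-world.info, a Service called web on port 8080), so treat them as placeholders; the heredoc assumes a bash-compatible shell.

kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: hello-world.info
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web
            port:
              number: 8080
EOF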

By extending your system's hosts file with your desired hostname and the cluster's IP, an HTTP request to that host will be sent directly to the cluster. There, a routing rule is applied by the ingress controller and the request is forwarded to the correct service using Kubernetes' internal DNS. In the tutorial, we'll use the example from Kubernetes to test whether we can route to the cluster from a client on our LAN.

Network Access To Minikube

In the normal use case, Minikube gets an IP on a dedicated virtual network interface that only the host can use. From the host, that IP falls under the private subnet of that interface, so it won't collide with the subnet of your LAN. Since the route to Minikube's address only lives in the host's routing table, only local processes can route to Minikube.

In our tutorial, instead of using a virtual network interface on the host, we leverage a Minikube option that is available when using the hyperv driver: --hyperv-virtual-switch. With Hyper-V you can configure a virtual switch that shares the host's physical network interface with Hyper-V virtual machines. This means that, depending on the DHCP configuration on the network, Minikube will get an IP that is routable for all devices on the private LAN, rather than just the host. Do note that sharing the physical network interface means the maximum transfer speed is divided between the systems sharing it, including the host OS.

A number of other hypervisor vendors support similar functionality, albeit under different names. The concept is basically the same, so check Minikube's manpages for hypervisor-specific options if you want to know whether Minikube can be configured this way with the hypervisor of your choice. It all depends on whether the hypervisor can bridge its VMs onto a physical network interface.

Step-by-step guide to running Minikube on a private LAN

1. Make a new virtual switch on the Minikube host

Follow Microsoft’s instructions to make a new virtual switch with connection type External, using an active physical network interface and checking "Allow management operating system to share this network adapter". Let’s call it "Primary Virtual Switch".
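If you prefer PowerShell over Hyper-V Manager, a sketch of the same step could look like this (run as admin; "Ethernet" is an assumed adapter name, so check yours with Get-NetAdapter first):

New-VMSwitch -Name "Primary Virtual Switch" -NetAdapterName "Ethernet" -AllowManagementOS $true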

2. Start Minikube with all the necessary options, using PowerShell run as admin
minikube start --profile hello-world --vm-driver hyperv --hyperv-virtual-switch "Primary Virtual Switch" --addons=ingress --embed-certs

Note: --embed-certs embeds the certificates used for kubectl configuration inside the config file, so we only need to share that single file with the Kubernetes client host.

3. Share Docker and Kubernetes information with the client

Use a medium of your choice; I used a shared folder on the desktop for this. The certificates and the config file are used to verify the Docker repository and the Kubernetes cluster identity, respectively (see the sketch after the list below).

By default, they can be found here on Windows 10:

  • <USER>\.minikube\certs\

  • <USER>\.kube\config
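A sketch of the copy step in PowerShell, assuming a hypothetical shared folder at \\DESKTOP\minikube-share (your share path will differ):

Copy-Item -Recurse "$env:USERPROFILE\.minikube\certs" "\\DESKTOP\minikube-share\certs"
Copy-Item "$env:USERPROFILE\.kube\config" "\\DESKTOP\minikube-share\kube-config"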

4. Configure the client Docker CLI and kubectl

Apply Minikube's environment variables to an open terminal on the client to establish the connection to the Docker repository. For example, running minikube docker-env --shell fish on the Minikube server host yields a command like this:

set -gx DOCKER_TLS_VERIFY "1";
set -gx DOCKER_HOST "tcp://<minikube's IP with docker port>";
set -gx DOCKER_CERT_PATH "<path_to_minikube_certs>";
set -gx MINIKUBE_ACTIVE_DOCKERD "minikube";

Just share this command and make sure to start any terminal session on the client with it, if you intend to push images to the Minikube VM.
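Once the variables are set, anything you docker build in that terminal lands directly in the Minikube VM's Docker daemon, ready for the cluster to use; the image tag below is just a placeholder:

docker build -t hello-app:dev .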

Now you can check if docker is reachable from this terminal with

docker ps

You should see a list of Kubernetes containers. Now, let’s move on to kubectl:

Apply the Kubernetes config file to the client’s Kubernetes config, for example by merging them with the kubectl konfig plugin:

kubectl konfig import -s <minikube hosts config file>
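If you don't have the konfig plugin, a manual merge via the KUBECONFIG variable also works; this sketch assumes a bash-like shell and that the shared file was saved as ~/minikube-config:

KUBECONFIG=~/.kube/config:~/minikube-config kubectl config view --flatten > /tmp/merged-config
mv /tmp/merged-config ~/.kube/config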

switch to the proper context:

kubectl config use-context hello-world

and verify connection by viewing the pods on the kube-system namespace:

kubectl get pods -n kube-system

Now you should have the cluster running and routable from within your private LAN. If you got this far, congratulations, you’re awesome!

The rest of the steps demonstrate how you would typically interact with the cluster in terms of deploying and routing to cluster services, using an example by Kubernetes.

5. Define a hello world Kubernetes deployment

From the client you can now follow the ingress example from Kubernetes. Keep in mind that we only shared information to use the docker CLI and kubectl on the client, so any minikube commands mentioned only work on the Minikube server host, like:

minikube -p hello-world service web --url

Also remember that Minikube's IP was included with the environment variables in step 4 (it's the host part of DOCKER_HOST). Use this address when extending your hosts file with the new host.
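For example, assuming Minikube got the address 192.168.178.40 from your router (yours will differ) and you're using the hello-world.info hostname from the Kubernetes example, the hosts file entry on the client would look like this:

192.168.178.40   hello-world.info

On Windows the hosts file lives at C:\Windows\System32\drivers\etc\hosts; on Linux and macOS it's /etc/hosts. After adding the entry, browsing to http://hello-world.info should hit the ingress controller on the cluster.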

6. Shutting down

When you are done, simply stop or delete the Minikube VM on the server with a PowerShell command:

minikube -p hello-world delete

In general, if you stop the VM with minikube stop and start it up again later, you only need to supply the existing profile with the start command.
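So a later restart can be as short as this; the profile should remember the driver and virtual switch you configured at creation time:

minikube -p hello-world start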

I've found that if the Minikube server goes into sleep mode while Minikube is running, the cluster will be available again when the machine wakes up, without requiring a start command.

Conclusion

If you have your LAN well-secured and private, this should be an interesting option to offload some resources to other computers on your LAN. For me, it has made the difference between daily heat throttling on my veteran MacBook Pro and a comfy 40 degrees centigrade idle while working in my IDE and pushing deployments from the terminal. As far as I'm concerned, I'll be looking for more opportunities to put my devices to good use like this!
