I Thought I Had the Best Docker OS in My Home Lab Until I Tried This


For a long while now, I have felt like I have my Docker hosts dialed in. I have used a wide range of operating systems over the years, including Ubuntu, Debian, and even more specialized setups like NixOS. Everything has been stable, predictable, and easy to manage with tools I already know. But recently, a comment on one of my blog posts suggested trying Lightwhale. I hadn’t heard of it before, so I wanted to give it a spin, and I am really glad I did. Let me show you what I learned about this purpose-built Docker operating system.

How do I normally run Docker containers in my home lab?

I usually deploy a virtual machine in my Proxmox VE server environment, running either Ubuntu or Debian, and I have been trying out NixOS for this purpose as well. From there, I install Docker, Docker Compose, and whatever additional tools I need.

There is nothing “wrong” with this approach: it is familiar, and it is arguably the most flexible. If you need something installed, you just install it. But that flexibility comes with certain tradeoffs. Over time, I found myself dealing with the usual challenges that the following table gives an overview of:

Area           | What happens                               | Why it matters
---------------|--------------------------------------------|------------------------------------
Updates        | OS updates pull in non-container packages  | Wasted time and added risk
Extra services | Unneeded services run in the background    | More complexity and resource usage
Inconsistency  | Hosts drift and become slightly different  | Harder to manage and troubleshoot
OS maintenance | Time spent managing the base OS            | Less time for actual workloads
Attack surface | More packages and services installed       | More security risks
Overhead       | More packages to patch and monitor         | More time spent managing
Human error    | Manual changes over time                   | High chance of misconfiguration
Drift          | Systems slowly diverge                     | Less predictability

What Lightwhale actually is

In case you haven’t heard of Lightwhale, it is a minimal operating system that is designed specifically to run containers. It strips out the idea of running a traditional Linux server and focuses almost everything on running Docker.

So, what you get is not just a general Linux operating system that can run containers; it is built with containers as the primary workload in mind. The footprint is extremely small and the entire system is simplified. The idea is that you are not managing the OS the way you would if you were running Ubuntu, Debian, or some other distro.

In many ways, this reminds me of the philosophy behind projects like Talos Linux, but applied to Docker rather than Kubernetes, which is Talos's focus. Note the following key characteristics of the Lightwhale project that I think stand out:

  • Minimal base system with very few moving parts
  • Focus on running containers without extra packages or services
  • Simple lifecycle and reduced maintenance
  • Less opportunity for configuration drift

That combination is what caught my attention. It made me start questioning whether I really needed a full Linux distribution for running Docker hosts in the home lab.

Installing Lightwhale

So, how do we pull down this purpose-built operating system for running Docker containers? It is pretty simple actually. Browse out to this link to download the latest ISO image:

One of the cool things you will note about the ISO is it is TINY! It only weighs in at around 200MB!

Lightwhale downloads

After downloading the ISO and uploading it to Proxmox:

Lightwhale iso downloaded and uploading to proxmox

Creating a new virtual machine to house the Lightwhale installation:

Creating a new proxmox vm for lightwhale

One of the first things I noticed is how straightforward the initial setup is. Lightwhale is designed to be lightweight and fast to deploy. You are not walking through a long installer or making dozens of configuration decisions. The system is focused on getting you to the point where you can run containers as quickly as possible.

Booting up lightwhale

Lightwhale will boot up to the login screen. Here, the default login credentials are:

  • User: ops
  • Pass: opsecret
Ready to login to lightwhale container operating system

After logging in, you will see the note about the default storage for Lightwhale being non-persistent.

Lightwhale doesn't have persistent storage by default

I wanted to do just a quick sanity check of how Docker-ready this is right out of the box, and sure enough, a quick check of docker ps worked just fine.

Running a quick docker ps command to verify docker installation
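For reference, the out-of-the-box sanity check looks something like this when run on the Lightwhale host after logging in (the exact version output will vary by release):

```shell
# Verify the bundled Docker engine is present and the daemon is running.
docker --version                            # client version baked into the image
docker info --format '{{.ServerVersion}}'   # fails if the daemon is not up
docker ps                                   # empty container list on a fresh boot
```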

Enabling persistence

Now, we have already mentioned that the Lightwhale installation is not persistent out of the box. But we can make it persistent if we need bind mounts or want to store images locally, rather than working only with stateless containers.

To enable persistence, it is just two commands and a reboot. First, we run the following to wipe the first 512 bytes of the disk, which also clears any existing partition table or metadata. In the following two commands, replace sda with the disk ID on your particular system, which you can get with a quick lsblk command:

sudo dd if=/dev/zero bs=512 count=1 conv=notrunc of=/dev/sda
Wiping the first 512 bytes of the disk

Next, we write the “magic header” for the disk. What is this? The magic header command writes a specific identifier string to the beginning of the disk that Lightwhale detects at boot. This command is a signal for it to automatically initialize, format, and use that device for persistent storage.

echo "lightwhale-please-format-me" | sudo dd conv=notrunc of=/dev/sda
Writing the magic header to the disk

Now, all we need to do is reboot the temporary live environment, and Lightwhale will pick up the marked disk and get it ready on the next boot:

sudo reboot
Rebooting lightwhale after configuring persistence
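Putting the steps above together, the whole persistence workflow is a short script. This is just a sketch of the same commands from above; /dev/sda is an example and must match what lsblk reports on your system:

```shell
#!/bin/sh
# Sketch of the Lightwhale persistence steps. DANGER: this wipes the start
# of the target disk, so double-check DISK with `lsblk` first.
DISK="/dev/sda"   # example device; replace with your own

# 1. Zero the first 512 bytes, clearing any old partition table or metadata.
sudo dd if=/dev/zero of="$DISK" bs=512 count=1 conv=notrunc

# 2. Write the magic header Lightwhale looks for at boot.
echo "lightwhale-please-format-me" | sudo dd conv=notrunc of="$DISK"

# 3. Reboot; Lightwhale initializes and formats the marked disk.
sudo reboot
```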

Just a quick comparison of what it looks like before and after enabling persistence. Below is before we make storage persistent. You can see by default the /mnt/lightwhale-data/lightwhale-state/docker is on tmpfs.

Before enabling persistence the docker storage is temporary

But after running the persistence commands above, /mnt/lightwhale-data/lightwhale-state/docker is backed by /dev/sda2 and is now persistent.

After enabling persistence docker storage is backed by the disk and not tmpfs
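If you prefer to confirm the switch from the command line rather than eyeballing the mount table, findmnt can print the backing source for Docker's data directory. The path is Lightwhale's default state directory mentioned above; expect tmpfs before the change and the data partition (e.g. /dev/sda2) after:

```shell
# Prints "tmpfs" on a non-persistent boot, or the partition (e.g. /dev/sda2)
# once persistence is enabled.
findmnt -n -o SOURCE /mnt/lightwhale-data/lightwhale-state/docker
```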

Where I think Lightwhale shines

After spending some time with Lightwhale, there are several areas where I think it stands out as something I would definitely recommend for a home lab. First of all, it is simple. There isn’t a lot of complexity here outside of typing a couple of commands for persistence. There isn’t anything to configure during installation, which means a quicker, easier path to running containers. It is also less to troubleshoot.

Running nginx on top of lightwhale
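The nginx test above boils down to a one-liner. The container name, host port, and image tag here are my own illustrative choices, not anything Lightwhale prescribes:

```shell
# Start nginx detached, publish host port 8080, verify, then clean up.
docker run -d --name web-test -p 8080:80 nginx:alpine
curl -s http://localhost:8080 | grep -qi nginx && echo "nginx is serving"
docker rm -f web-test
```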

This greatly helps with consistency. The system is super minimal as you can see, just 200 MB. So, with this, it is easier to keep things consistent and aligned. You are not dealing with slightly different package versions or configurations across nodes. This also helps to reduce maintenance since there is WAY less to update and manage.

Lightwhale naturally pushes you toward getting things containerized, since there isn’t much at all to the host operating system and you can’t just run commands on the host like you would on a traditional Linux install.
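Since the host is deliberately bare, it helps to keep workloads declarative. A minimal Compose file is one way to do that; this sketch assumes the Docker Compose plugin is available on the image (I have not verified every Lightwhale build ships it), and the service details are my own example:

```shell
# Write a minimal Compose definition and bring the stack up.
mkdir -p ~/web && cd ~/web
cat > compose.yaml <<'EOF'
services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"
    restart: unless-stopped
EOF
docker compose up -d
```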

Where Lightwhale may not fit

As much as I like it, there are a few callouts that I think need to be made before saying you should use this everywhere. First, if you rely on host-level tools or custom scripts that expect a full Linux environment, Lightwhale is probably not going to work well for you, as it doesn’t have the flexibility of a full Ubuntu or Debian installation.

Also, if you feel like you are still learning Docker and want to experiment with different tools directly on the host, a traditional OS is probably a better starting point. Lightwhale assumes you are comfortable working in a container first mindset.

You may also have scenarios where you need specific drivers, kernel modules, or integrations that just may not work on Lightwhale and would be much better suited to a full Linux distro.

Wrapping up

When I went into trying Lightwhale, I thought I already had the best Docker setup in the home lab. I don’t run as many standalone Docker installations now that I am all in on Kubernetes, but I think this is definitely going to become my go-to for quick, easy, and maintainable Docker container hosts.

When you first use Lightwhale, it makes you realize that most of us are carrying a whole lot more complexity than we need. For most container workloads, we just need a container-focused OS that simplifies things and shrinks the attack surface. If you haven’t tried it before, I would definitely recommend Lightwhale for the home lab and running Docker hosts. How about you? Have you tried it before? Does this look like a container OS you would consider running in your lab?


About The Author

Brandon Lee


Brandon Lee is the Senior Writer, Engineer, and owner at Virtualizationhowto.com, and a 7-time VMware vExpert with over two decades of experience in Information Technology. Having worked for numerous Fortune 500 companies across various industries, he has extensive experience in many IT segments and is a strong advocate for open source technologies. Brandon holds many industry certifications, loves the outdoors, and enjoys spending time with family. He also goes through the effort of testing and troubleshooting issues, so you don't have to.
