No doubt, if you have worked with technology for any length of time, you have heard the terms “virtual machines” and “containers” more than once. Both are core technologies in today’s ever-advancing technology world. However, many running home lab environments may wonder which they should use: virtual machines or containers. This post will dive deep into virtual machines vs. containers and which could serve your home lab best. The answers may surprise you. First, let’s compare the two and learn their strengths and weaknesses.
Table of contents
- What is a virtual machine (VM)?
- How virtual machines operate
- What are containers?
- Exploring container functionality
- Security aspects of VMs and containers
- Virtual machine vs container in the home lab
- The future of home labs: Virtual machines, containers, or both?
- Deciding between virtual machines and containers
- Other posts you may like
What is a virtual machine (VM)?
Let’s start our discussion with virtual machines, often referred to as VMs. These have been integral in the world of computing since the dawn of the virtualization age in the early 2000s. Essentially, they are digital replicas of physical computers, furnished with their own operating system and system resources. These resources are allocated from the underlying hardware of a real-world, physical server.
In essence, running multiple virtual machines on a single server gives you the flexibility of having multiple machines with different operating systems within your reach.
A virtual machine runs an operating system and applications, behaving independently while operating on a fraction of the resources of a physical server. The magic behind virtual machines lies in a component called the hypervisor, a software layer that sits between the physical hardware and the VMs (or, in the case of desktop virtualization products, runs on top of the host operating system).
This software layer allows your physical computer, the host, to create and run virtual machines. The hypervisor assigns system resources such as processing power, memory, and storage space from the host computer to each of the guest VMs it creates.
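To make this concrete, here is a minimal sketch of creating a VM and allocating host resources to it from the command line, assuming VirtualBox is installed; the VM name, OS type, and sizes are arbitrary examples:

```shell
# Create and register a new VM definition (name is an example)
VBoxManage createvm --name "lab-vm" --ostype Ubuntu_64 --register

# Allocate host resources to the guest: 2 GB of memory and 2 vCPUs
VBoxManage modifyvm "lab-vm" --memory 2048 --cpus 2

# Create a 20 GB virtual disk for the guest to use
VBoxManage createmedium disk --filename "lab-vm.vdi" --size 20480
```

The same pattern applies on any hypervisor: the management tool carves out CPU, memory, and storage from the host and presents them to the guest as if they were dedicated hardware.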
Multiple operating systems, same hardware
A standout characteristic of virtual machines is their capacity to host multiple operating systems on a single set of physical hardware. This allows a host computer, perhaps running a Windows operating system, to concurrently operate virtual machines running distinct operating systems, such as Linux or macOS, all within the same hardware confines.
Virtual machines mimic the physical computer’s hardware architecture. As such, each VM boasts its own collection of virtual hardware components, encompassing CPUs, memory, hard drives, network interfaces, and other peripheral devices. The guest OS, or the operating system running within the virtual machine, interacts directly with these virtual hardware elements.
Another critical feature is the ability to manage virtual machines efficiently. You can create, delete, and modify VMs as needed, making tasks such as testing across multiple environments or software development on various operating systems a breeze.
You can also take virtual machine snapshots, providing a frozen point-in-time copy of a VM. This is useful for tasks like testing new software, where you can roll back to the snapshot if something goes wrong.
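As a quick illustration of that snapshot workflow, with VirtualBox it might look like this (the VM and snapshot names are placeholders):

```shell
# Take a point-in-time snapshot before making a risky change
VBoxManage snapshot "lab-vm" take "before-upgrade"

# ...install and test the new software inside the VM...

# If something goes wrong, power off the VM and roll back
VBoxManage controlvm "lab-vm" poweroff
VBoxManage snapshot "lab-vm" restore "before-upgrade"
```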
In essence, virtual machines provide the flexibility to emulate multiple computers with potentially different operating systems, all within the confines of your existing physical hardware.
How virtual machines operate
In a virtual environment, the hypervisor running on the physical computer creates an isolated virtual environment in which a guest operating system runs. This complete operating system layer gives each individual virtual machine its independence. It is separated from the host OS and neighboring virtual machines.
Make good use of modern hardware
Leveraging modern, high-performance hardware, it’s entirely feasible for a single server to accommodate numerous virtual machines, each running a distinct operating system. The task of managing virtual machines has been significantly streamlined due to established virtual machine platforms like VMware and VirtualBox.
These platforms offer tools to handle virtual machine images, take a virtual machine snapshot, clone virtual machines, migrate VMs between hosts, etc., making operating and maintaining environments simple.
What are containers?
Shifting our focus to container technology, containers are another type of virtualization technology, but they take a slightly different approach compared to virtual machines. Rather than emulating an entire computer’s hardware and running a complete operating system, a container encapsulates an application and its dependencies, isolated from other containers.
Shared host operating system kernel
A container runs directly on the host OS kernel, sharing it with other containers. This means that all containers on a host use the same underlying operating system, unlike virtual machines which can run different operating systems. The shared operating system approach is part of what makes containers lightweight and efficient compared to VMs.
Containers originate from something known as container images. These are agile, independent, and executable software packages that encompass all the necessary components to run a specific software – the code, runtime, system tools, libraries, and configurations. Take Docker containers as an instance; they are constructed from Docker images, whose definitions are inscribed in a Dockerfile.
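For example, a minimal Dockerfile for a hypothetical Python application (the `app.py` and `requirements.txt` files are assumed to exist alongside it) might look like this:

```dockerfile
# Start from a small official base image
FROM python:3.12-slim
WORKDIR /app

# Install dependencies first so this layer is cached between builds
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Copy in the application code and define how the container starts
COPY . .
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` produces a self-contained image that runs the same way on any host with a container runtime.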
Owing to their small size and quick startup time, containers prove to be very efficient for dynamic development workflows and microservices, particularly when integrated with CI/CD platforms. They enable a uniform packaging and distribution of software across multiple environments, enhancing the efficiency of the software development lifecycle.
The isolation provided by containers is at the process level. Each container operates as an isolated user-space instance, running a single application or service. While they don’t offer as robust isolation as VMs, containers still provide an effective way to package and isolate applications with their dependencies, reducing conflicts and improving deployment consistency across multiple environments.
Container engines like Docker manage individual containers, while orchestration platforms like Kubernetes manage containers at scale. Together, these tools allow you to automate the deployment, scaling, networking, and availability of containerized applications, making the management of containers more straightforward and scalable.
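As a small sketch of that lifecycle with the Docker CLI (the nginx image and container name are just examples):

```shell
docker run -d --name web -p 8080:80 nginx:latest  # create and start a container in the background
docker logs web                                   # inspect its output
docker stop web                                   # stop it
docker rm web                                     # destroy it
```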
Exploring container functionality
Containers vs. virtual machines is not just a battle over resources; it’s also about how each fits into your home lab or software development lifecycle, especially if you are interested in learning about CI/CD pipelines.
In an agile development environment, for instance, containers can be beneficial due to their lightweight nature and fast start-up times. Running containers on your physical machine won’t burden your system resources as much as running multiple virtual machines.
The container engine, which could be Docker or any other similar platform, is responsible for managing the lifecycle of containers. It handles tasks like starting, stopping, and destroying containers based on the instructions given in the container image’s build file.
With containers and simple Docker Compose code, you can quickly spin up multiple solutions for effective application stacks. Below we are spinning up Traefik and Pi-Hole using Docker Compose:
```yaml
version: '3.3'

services:
  traefik2:
    image: traefik:latest
    restart: always
    command:
      - "--log.level=DEBUG"
      - "--api.insecure=true"
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=true"
      - "--entrypoints.web.address=:80"
      - "--entrypoints.websecure.address=:443"
      - "--entrypoints.web.http.redirections.entryPoint.to=websecure"
      - "--entrypoints.web.http.redirections.entryPoint.scheme=https"
    ports:
      - 80:80
      - 443:443
    networks:
      - traefik
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    container_name: traefik

  pihole:
    image: pihole/pihole:latest
    container_name: pihole
    ports:
      - "53:53/tcp"
      - "53:53/udp"
    dns:
      - 127.0.0.1
      - 18.104.22.168
    environment:
      TZ: 'America/Chicago'
      WEBPASSWORD: 'password'
      PIHOLE_DNS_: 22.214.171.124;126.96.36.199
      DNSSEC: 'false'
      VIRTUAL_HOST: piholetest.cloud.local # Same as traefik config
      WEBTHEME: default-dark
      PIHOLE_DOMAIN: lan
    volumes:
      - '~/pihole/pihole:/etc/pihole/'
      - '~/pihole/dnsmasq.d:/etc/dnsmasq.d/'
    restart: always

# Define the network referenced by the traefik service above
# (assumed to be a simple bridge network)
networks:
  traefik:
    driver: bridge
```
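Assuming the file above is saved as `docker-compose.yml` in the current directory, the whole stack can then be managed with a few commands:

```shell
docker compose up -d            # pull images and start both services in the background
docker compose ps               # check the status of the containers
docker compose logs -f pihole   # follow the logs of a single service
docker compose down             # stop and remove the whole stack
```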
Security aspects of VMs and containers
Security is always a consideration, no matter what type of environment you are thinking about, including home labs. As you weigh the options between virtual machine vs container, consider the isolation level.
Comparing VMs and containers, virtual machines offer stronger isolation since each runs its own operating system, reducing the security risk. Containers, while efficient, share the host OS kernel with the other containers running on the host. This can present a security concern: a kernel-level exploit in one container could potentially affect the host and its neighboring containers.
Virtual machine vs container in the home lab
OK, so we have a much better understanding of what virtual machines and containers are and what they are typically used for. So, now, which is best for the home lab? Well, in the famous phrase that none of us likes to hear: it depends.
I have been running a home lab for a decade now and have seen many technological shifts in that time. However, the right answer for most will probably be running both. Why is that the answer?
Well, first and foremost, containers need container hosts. A container host is the computer or virtual machine with Docker or another container runtime installed. In most home lab or production environments, virtual machines will serve this purpose, as VMs are much easier to manage, back up, and migrate than a physical computer.
So by their nature, containers need and work well with VM technology. Most will not replace their entire lab with a bare-metal physical container host; instead, they will keep their hypervisor in place, running virtual machines as container hosts.
The shift in home lab technology – more containers!
However, I think we have seen a shift in home lab focus and technologies, much like in production environments. Containers allow us to run home labs much more efficiently. Instead of the 65 VMs we might have run 10 years ago, we may now have 5-10 VMs, some running as container hosts with multiple containers serving out self-hosted services.
In addition to container hosts, Kubernetes has also taken off in many home lab environments. Kubernetes is a container orchestration tool that provides many great features, including:
- Robust container scheduling
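As a small sketch of what that scheduling looks like in practice, Kubernetes can deploy and scale a service across the cluster with a couple of commands (the deployment name and image here are arbitrary examples):

```shell
kubectl create deployment web --image=nginx --replicas=3  # schedule 3 pods across the cluster
kubectl expose deployment web --port=80 --type=NodePort   # make the service reachable
kubectl scale deployment web --replicas=5                 # Kubernetes reschedules pods to match
kubectl get pods -o wide                                  # see which nodes the pods landed on
```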
Virtual machines also have their place in other use cases. Many still may run big monolithic SQL servers, backup servers, domain controllers, or other management appliances as virtual machines. These still are important and have their place.
The great thing about the shift in technology and the new hybrid mix of VMs and containers is it becomes even easier to have a home lab running multiple self-hosted services more efficiently.
The best use case for containers in the home lab
Generally speaking, web-driven services are the best type of service to run inside containers. Containers are excellent for self-hosted web services. In addition, using containers in the home lab is an excellent way to spin up new services to test out without having to worry about spinning up a new VM and installing all the prerequisites needed to run the services.
Below is a screenshot of Kubeapps I have running in my home lab in a Kubernetes cluster, allowing management of many different containerized services.
Container images include all the necessary software and prerequisites to run the application you are spinning up.
The future of home labs: Virtual machines, containers, or both?
The tech world is steadily leaning towards containerization, especially with the rise of microservices and cloud-native applications. But this doesn’t mean virtual machines are obsolete. They still hold tremendous value in most environments, including home labs.
Containers and virtual machines are not competing technologies. Rather, they complement each other: VMs provide robust isolation, and containers bring speed and efficiency.
Best use for VMs?
- Big servers that need lots of resources and need to run many applications
- SQL servers, backup servers, domain controllers, etc.
Best use for containers?
- Running very small web services
- Testing out new services in your home lab
- CI/CD pipelines and agile development
- Evergreen environments that are easily upgraded
Deciding between virtual machines and containers
It’s not necessarily a case of virtual machine vs container. It’s about understanding your specific needs and aligning them with the strengths of each technology. If you require complete operating systems to simulate a complex network of multiple resources or you need to run earlier versions of software, virtual machines could be your best bet.
If your goal is to have a streamlined software development process that can replicate multiple environments quickly, then containers may serve you better.
Ultimately, the decision between containers and virtual machines boils down to the specific requirements of your home lab setup, the physical resources available, and the nature of the projects you’ll be working on.