
Run Your Home Lab with Infrastructure as Code Like a Boss

Learn how to automate your home lab using Terraform, Ansible, Packer & GitLab CI/CD for consistent, version-controlled, and scalable infrastructure.

There has never been a better time to get into coding, especially infrastructure as code. With so many powerful AI tools available, the barrier to entry for writing code has never been lower. I have always believed in treating my home lab like "real" production infrastructure, and infrastructure as code is the secret sauce for staying organized, keeping things repeatable, and having more fun. When everything from network configuration to VM templates is defined as code, you can spin up or tear down environments in minutes. Let's look at how to wire together Terraform, Packer, Ansible, and GitLab CI/CD to automate everything.

Why Infrastructure as Code (IaC) matters in your home lab

You may think IaC is only something you need in production environments, and, to be fair, that is mostly true. Most of us don't have absolutely critical infrastructure to scale or automate at home. However, a home lab is exactly where you can learn those skills and disciplines, and it makes your labbing experience more enjoyable and efficient. Here are a few of the reasons I jotted down for making IaC a priority in your lab:

  • Consistency – Manually clicking through GUIs is fine for a one-off test or for learning, but human error creeps in fast. Recording everything in code eliminates those "oops, I forgot to set X" moments and makes the result repeatable every single time.
  • Version Control – Storing your configs in Git means you can roll back mistakes, branch experiments, and even invite collaborators to projects without worrying about the issues that crop up.
  • Pets vs Cattle – When you treat servers and networks as ephemeral, you're free to break things and experiment, knowing you can rebuild in minutes. A server is no longer a "pet" but cattle that serves a specific purpose. This is also the "third way" for those who have read The Phoenix Project (continual learning and experimentation).
  • Documentation – Your code is the documentation. Comments and commit history become a step-by-step lab notebook and audit trail of what you did and why you did it.

1. Defining your infrastructure with Terraform

Terraform is the de facto standard for infrastructure as code. You can use it to define virtual machines or LXC containers in your on-premises lab environment just like you would resources in a cloud environment.

Take a look at the following code. Note the resources it creates:

  • Creates a Linux Container (LXC) named “debian_jump”
  • Sets the hostname to “jump01”
  • Uses a Debian 12 container template stored locally on the Proxmox server
  • Allocates 2 CPU cores and 2GB of RAM
  • Configures networking with interface “eth0” connected to bridge “vmbr0”
provider "proxmox" {
  pm_api_url      = "https://proxmox.local:8006/api2/json"
  pm_user         = "terraform@pve"
  pm_password     = var.pve_password
  pm_tls_insecure = true
}

resource "proxmox_lxc" "debian_jump" {
  hostname = "jump01"
  ostemplate = "local:vztmpl/debian-12-standard_12.0-1_amd64.tar.zst"
  cores    = 2
  memory   = 2048
  net {
    name = "eth0"
    bridge = "vmbr0"
  }
}
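The provider block above references var.pve_password so the password never lives in the .tf files themselves. A minimal variable declaration for it might look like the following sketch; how you supply the value (a TF_VAR_pve_password environment variable, a tfvars file kept out of Git, or a CI/CD secret) is up to you:

# variables.tf - declares the Proxmox password consumed by the provider block above
variable "pve_password" {
  description = "Password for the terraform@pve API user"
  type        = string
  sensitive   = true
}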

By organizing your Terraform code into modules like networking/, compute/, and storage/, you can reuse them across projects (e.g., spinning up a Kubernetes cluster one week and a Windows test domain the next), as in the following examples:

Modules structure:

modules/
├── networking/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
├── compute/
│   ├── main.tf
│   ├── variables.tf
│   └── outputs.tf
└── storage/
    ├── main.tf
    ├── variables.tf
    └── outputs.tf
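To give a feel for what lives inside one of these modules, here is a rough sketch of what modules/compute/variables.tf could declare so the two project examples below can pass in vm_count, vm_template, and so on. The resources inside the module's main.tf depend on your provider, so treat this as an illustration rather than a drop-in module:

# modules/compute/variables.tf - inputs used by the project examples below
variable "vm_count" {
  description = "Number of VMs to create"
  type        = number
}

variable "vm_template" {
  description = "Name of the template to clone"
  type        = string
}

variable "cpu_cores" {
  description = "vCPU cores per VM"
  type        = number
}

variable "memory_mb" {
  description = "RAM per VM in MB"
  type        = number
}

variable "network_id" {
  description = "ID of the network created by the networking module"
  type        = string
}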

Week 1 – Kubernetes Cluster Project:

# kubernetes-cluster/main.tf
module "network" {
  source = "../modules/networking"
  
  network_name = "k8s-network"
  subnet_cidr  = "10.1.0.0/24"
  vlan_id      = 100
}

module "compute" {
  source = "../modules/compute"
  
  vm_count    = 3
  vm_template = "ubuntu-20.04"
  cpu_cores   = 4
  memory_mb   = 8192
  network_id  = module.network.network_id
}

module "storage" {
  source = "../modules/storage"
  
  storage_type = "fast-ssd"
  size_gb      = 100
  vm_ids       = module.compute.vm_ids
}

Week 2 – Windows Test Domain Project:

# windows-domain/main.tf
module "network" {
  source = "../modules/networking"
  
  network_name = "domain-network"
  subnet_cidr  = "10.2.0.0/24"
  vlan_id      = 200
}

module "compute" {
  source = "../modules/compute"
  
  vm_count    = 2
  vm_template = "windows-server-2022"
  cpu_cores   = 2
  memory_mb   = 4096
  network_id  = module.network.network_id
}

module "storage" {
  source = "../modules/storage"
  
  storage_type = "standard"
  size_gb      = 50
  vm_ids       = module.compute.vm_ids
}
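Because each project lives in its own directory with its own state, bringing an environment up or tearing it down is just the standard Terraform workflow run from that directory. A typical session might look like this (directory names match the examples above):

# Week 1: bring up the Kubernetes cluster project
cd kubernetes-cluster
terraform init
terraform plan -out=plan.tfplan
terraform apply plan.tfplan

# Done experimenting? Tear it all down again
terraform destroy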

2. Building golden images with Packer

One of the best resources you can have in your home lab is a set of templates. Whether you are using VMware vSphere, Proxmox, or something else, templates allow you to quickly "clone" a new resource with all of your configuration, tweaks, and customizations already in place. This is a huge time saver.

However, keeping your templates updated with the latest software and patches can be tedious to do manually. HashiCorp Packer is an amazing tool that lets you automate the creation of VM and container templates. Once the template is created, you can use Terraform, as in the example above, to clone it into a new resource, whether that is a VM or a container.

Sample Packer Template for Debian

{
  "builders": [{
    "type": "proxmox-iso",
    "proxmox_url": "https://proxmox.local:8006/api2/json",
    "username": "packer@pve",
    "password": "{{user `pve_password`}}",
    "node": "pve",
    "vm_id": "110",
    "iso_file": "local:iso/debian-12-netinst.iso",
    "ssh_username": "root",
    "template_name": "debian-12-golden",
    "disks": [{
      "type": "scsi",
      "disk_size": "20G",
      "storage_pool": "local-zfs"
    }]
  }],
  "provisioners": [
    {
      "type": "shell",
      "inline": [
        "apt-get update",
        "apt-get upgrade -y",
        "apt-get install -y qemu-guest-agent"
      ]
    }
  ]
}
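To build the template, you validate the file and then run a build, passing the Proxmox password in as a user variable. Values such as the node name, ISO path, and template name in the sample above are placeholders from my lab, so adjust them to match your environment:

# check the template for syntax/configuration errors, then build it
packer validate -var "pve_password=$PVE_PASSWORD" debian.json
packer build -var "pve_password=$PVE_PASSWORD" debian.json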

Why I Love Packer

  • Immutable: Each template is a versioned image.
  • Automation: Install your common agents, security patches, and monitoring tools in code.
  • Faster: New VMs cloned from a template boot in seconds rather than minutes.

3. Orchestrating Configuration with Ansible & Semaphore UI

With golden images in place, I rely on Ansible to handle the "day 2" configuration: installing packages, setting up users, configuring NTP, deploying additional agents, tweaking sysctl, and so on. And thanks to my recent deep dive into Semaphore UI, I've got a slick web-based runner for playbooks.

My workflow

Create Playbooks

- hosts: all
  become: true
  roles:
    - role: ufw
      ufw_rules:
        - { rule: allow, port: ssh }
        - { rule: allow, port: 80 }
    - role: docker
    - role: prometheus_node_exporter
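Before handing a playbook off to Semaphore or CI, I like to lint it and do a dry run locally. A quick sanity check might look like this (the playbook and inventory paths are just examples):

# lint the playbook, then do a no-change dry run to verify idempotency
ansible-lint site.yml
ansible-playbook -i inventory.ini site.yml --check --diff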

Commit to Git
Every change to roles or inventories goes into a Git branch. This way, a tool like Semaphore UI can pull the playbooks it needs from Git.

Semaphore
I connect my GitLab repo to Semaphore. When the Semaphore UI cron job kicks off, it pulls the latest playbooks and roles from Git and runs them.

This GUI-driven approach keeps my team and me honest about playbook hygiene (linting, idempotency) and makes sharing automation easy.

4. Putting things together in GitLab CI/CD


I treat my home lab repos just like enterprise code. Here's a snippet from my .gitlab-ci.yml that ties Terraform, Packer, and Ansible into one seamless pipeline:

stages:
  - validate
  - plan
  - build
  - deploy

variables:
  TF_WORKING_DIR: infra/terraform
  PACKER_TEMPLATE: infra/packer/debian.json
  ANSIBLE_PLAYBOOK: infra/ansible/site.yml

validate:
  stage: validate
  script:
    - cd $TF_WORKING_DIR && terraform validate
    - packer validate $PACKER_TEMPLATE
    - ansible-lint $ANSIBLE_PLAYBOOK

terraform-plan:
  stage: plan
  script:
    - cd $TF_WORKING_DIR && terraform plan -out=plan.tfplan
  artifacts:
    paths:
      - $TF_WORKING_DIR/plan.tfplan

packer-build:
  stage: build
  script:
    - packer build -var "pve_password=$PVE_PASSWORD" $PACKER_TEMPLATE

ansible-deploy:
  stage: deploy
  script:
    - |
      ansible-playbook \
        -i infra/ansible/inventory.ini \
        $ANSIBLE_PLAYBOOK

This allows you to do the following:

  • Validate your code: Prevent typos in Terraform, Packer, and Ansible before anything hits the lab.
  • Artifacts: Packer images get versioned and stored; Terraform plans can be reviewed before apply (see the manual apply job sketched after this list).
  • End-to-End Automation: Merge code > pipeline > new network + golden image > playbooks > live services.
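To act on a reviewed plan, you could extend the pipeline with a manual apply job that consumes the plan.tfplan artifact from the plan stage. This is a sketch built on the variables and stages defined above, not part of my exact pipeline:

# hypothetical manual gate: apply the reviewed plan artifact from the plan stage
terraform-apply:
  stage: deploy
  script:
    - cd $TF_WORKING_DIR
    - terraform init -input=false
    - terraform apply -input=false plan.tfplan
  dependencies:
    - terraform-plan
  when: manual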

5. Tips I have learned from my home lab automation

  • Use parameters – Avoid hard-coding IPs, passwords, or names; use variables plus a secrets manager or repository variables instead.
  • Use remote state – Store Terraform state in an S3-compatible bucket (MinIO in my lab).
  • Run drift detection – Run terraform plan nightly via CI to catch out-of-band changes (see the example job after this list).
  • Immutable templates – Never SSH into a Packer-built template to make changes. If you need changes, update the code and rebuild.
  • Idempotent playbooks – Make sure Ansible tasks can run multiple times without side effects.
  • Secrets – Use Vault or GitLab CI/CD variables for sensitive data and never commit them. Use a .gitignore file wisely to avoid accidentally committing sensitive file types.
  • Document in code – Add comments, README files, and examples right alongside your IaC. Use AI to quickly and thoroughly document your code; it is a HUGE time saver!
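For the drift detection tip, one way to implement it with the GitLab pipeline above is a scheduled job that runs terraform plan with -detailed-exitcode, which exits non-zero whenever live infrastructure has drifted from the code. This sketch assumes a GitLab pipeline schedule is configured for the project:

# hypothetical nightly drift check, triggered only by a pipeline schedule
drift-check:
  stage: plan
  rules:
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
  script:
    - cd $TF_WORKING_DIR
    - terraform init -input=false
    # exit code 2 means the plan found changes, i.e. out-of-band drift
    - terraform plan -detailed-exitcode -input=false
  allow_failure: true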

Wrapping up

Treating your home lab like code is a great way not only to learn, but also to enjoy your lab environment even more. Codifying everything means that, by default, everything is documented in your code: networks, images, configs, and pipeline workflows.

If you haven't started learning and appreciating automation, pick a tool like Terraform, Packer, or Ansible and turn one manual task that you perform in your lab into code. I promise you will be hooked from then on. I am a huge advocate of project-based learning, and by using this approach you will 10x your skills in just a few months.

Brandon Lee

Brandon Lee is the Senior Writer, Engineer, and owner at Virtualizationhowto.com, and a 7-time VMware vExpert with over two decades of experience in Information Technology. Having worked for numerous Fortune 500 companies as well as in various industries, he has extensive experience across IT segments and is a strong advocate for open source technologies. Brandon holds many industry certifications and loves the outdoors and spending time with family. He also goes through the effort of testing and troubleshooting issues, so you don't have to.
