@t3hbeowulf @malcolm-r @jnew1213 @termv @ghaleon I wanted to see what code repositories you guys are using in the home lab. Experimenting with any cool CI/CD? Infrastructure as Code?
Nothing in regard to repositories here.
I have a VM set up to experiment with Ansible, even had it build a VM for me once. That's the extent of anything IaC or DevOps here. Well, I do have some Docker stuff running if that still counts as DevOps, but no plans to expand what I have.
I'm afraid I am too old or too dense for Git and all that related stuff. I enjoy the design and engineering parts of things and leave the operations side to those folks interested in that aspect.
My first job in IT (before it was called IT) was, in part, as a COBOL programmer: IBM mainframe and WANG VS with mainframe connectivity. I didn't like the programming aspect of the job. Writing anything is a creative task for me, and I have to be in the mood to get anything done.
Besides that, I wasn't very good at it.
At my next job, I ended up, in part, as a BASIC programmer. Not the fun Commodore PET BASIC, but its more serious cousin, that "Professional Development" stuff. I didn't much care for that either, though I loved BASIC programming in high school... on the Commodore PET.
So, I figured programming was not for me and I transitioned over time into tech support and networking. I've never looked back. Abandoning anything to do with code was the smartest thing I could have done, career-wise and "life-wise," if you will, because surely I would have killed myself had I pursued programming.
I did take a C course in the early or mid-eighties. That just confirmed that I was not meant for that kind of thing.
So, no. I have a clear, personally mandated avoidance of anything DevOps.
Anyway, projects here have gotten out of hand and progress isn't being made on most of them in any appreciable way.
I did complete a silly "project" (more like a task, really) to outfit each of my UPSes here that could take a management module with one. That turned out to be three of the five devices. All three CyberPower UPSes got a remote management card and are now controllable, to some extent, over the network. One UPS additionally got a temperature and humidity sensor. Kind of fun. One more thing I can manage that I don't really need to. But, hey, I added a tile for the utility to my dashboard (Dashy).
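Since those management cards generally speak SNMP, a quick poll is enough to get battery data onto a dashboard. Here's a minimal sketch using the classic pysnmp hlapi; the address, community string, and OID are placeholders, not my actual setup, and the OID shown is the generic UPS-MIB value, which the card may or may not expose (CyberPower also publishes its own private MIB).

```python
# Rough sketch: poll a UPS management card over SNMP (pip install pysnmp).
from pysnmp.hlapi import (
    getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
    ContextData, ObjectType, ObjectIdentity,
)

UPS_HOST = "192.168.1.50"              # placeholder address of the management card
CHARGE_OID = "1.3.6.1.2.1.33.1.2.4.0"  # UPS-MIB upsEstimatedChargeRemaining

error_indication, error_status, _, var_binds = next(
    getCmd(
        SnmpEngine(),
        CommunityData("public", mpModel=0),  # SNMPv1 with a placeholder community
        UdpTransportTarget((UPS_HOST, 161)),
        ContextData(),
        ObjectType(ObjectIdentity(CHARGE_OID)),
    )
)

if error_indication or error_status:
    print("SNMP error:", error_indication or error_status.prettyPrint())
else:
    for oid, value in var_binds:
        print(f"{oid} = {value}")  # e.g. remaining battery charge in percent
```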
My KVM project was abandoned, with the 8-port TESmart KVM and cables packed up and put away about a week ago. Today I got word of back pay coming this month (along with a tax refund), so I ordered a new PiKVM to front the TESmart KVM the way the BliKVM I was working with couldn't. So, the project is resurrected! The TESmart will mount in the rear of the rack and connect to seven tiny/mini/micro machines (my vSAN and "remote" clusters) as well as a newly built Plex server.
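For anyone wanting to script the switch directly rather than drive it through the PiKVM (which ships a ready-made TESmart driver), several TESmart models accept a simple 6-byte command over TCP. This is a hypothetical sketch, not something from my rack: the address is a placeholder, and the port and command bytes should be verified against your model's manual.

```python
# Hypothetical sketch: switch a TESmart KVM input over the network.
# Some TESmart models listen on TCP port 5000 and take the 6-byte command
# 0xAA 0xBB 0x03 0x01 <input> 0xEE -- check your unit's manual first.
import socket

KVM_HOST = "192.168.1.60"  # placeholder address of the TESmart switch
KVM_PORT = 5000            # control port documented for some models

def select_input(input_number: int) -> None:
    """Ask the KVM to switch to the given input (1-based)."""
    command = bytes([0xAA, 0xBB, 0x03, 0x01, input_number, 0xEE])
    with socket.create_connection((KVM_HOST, KVM_PORT), timeout=5) as sock:
        sock.sendall(command)

select_input(3)            # e.g. jump to the third tiny/mini/micro node
```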
Oh, yeah. The Plex server. That's new. So, my big Plex server runs on a Synology DS3615xs NAS. The NAS has an old Core i3 processor in it, and the media is co-resident on that machine, so it all fits nicely. However, besides running out of space on the device, drive bay 5 has gone bad. It's really coming time for a new NAS: another rack mount, like my RS1619xs+.
All of Synology's rack mount devices are Xeon or AMD based. Neither has an integrated GPU with Quick Sync, so neither works for hardware transcoding in Plex, which means the next Plex server is going to be Windows. I looked at moving Plex from the DS3615xs to a new Windows machine. That move is nearly impossible to do correctly, and with the way Plex stores literally hundreds of thousands of folders and artifacts, it would take a very long time, during which Plex would either have to be down or I would have to risk inconsistencies in the data being copied.
So I built a new 1U rack mount ESXi box to run a single Windows Plex VM. I used a Core i5-13500 in an ASRock mini-ITX motherboard with Intel 2.5Gb Ethernet onboard, and passed through an Nvidia Tesla P4 card I had lying around. Now I am slowly recreating/duplicating the Plex environment from the NAS onto this new server. Turns out the 13th-gen Core i5 is faster than my 12th-gen Core i9 desktop PC. Go figure.
I am working with a new UniFi Express unit with WiFi 6, trying, with much difficulty, to get it onto my network. I don't need its gateway functions. I just want it to host my UniFi Network Application, which is now running in a Windows VM, and act as an access point, replacing the 60-odd-watt monster of an access point I have now.
There's been a VLAN project going on here, started, stopped, and restarted. I'm using UniFi equipment again, up through my USW Pro Aggregation switch, which needs to do the VLAN and inter-subnet routing while my firewall stays the gateway for everything to the Internet. Normally in a UniFi setup you would use one of their gateways to do all the routing, but I want/need to route through the switch because I have 10Gb and 25Gb subnets to route.
Lastly, I think, is a project to replace my firewall, a Sophos XG 125, with a Protectli device running Sophos XG Home Edition. The hardware firewall costs over $500/year for updates and service, while the Home edition offers much of what that yearly fee buys, or more. Interestingly, if you go shopping for a firewall, they never mention the free Home edition. Just the pretty hardware they're selling with a yearly renewal. Hmmmmm. Why is that?
I have the Protectli and installed XG Home on it, and it boots to a blank screen. Heaven forbid it should work the first time.
There's actually so much going on that I created a free account at Monday.com (we use it at work, and I kind of hate it) to track this stuff. I'm not sure the tracking actually helps, but it's another dashboard with pretty colors to be managed, so that must be good.
Oh, in an earlier post I mentioned using Ansible to create a VM. I misspoke. It was actually Terraform. I have a desire to learn both tools to... I don't know what end. But they're good to learn if the frustration doesn't drown out the fun.
Speaking of which, I am battling TrueNAS here. Never again anything but Synology!!! The device is joined to Active Directory and I can't get shares to be seen or used with domain accounts. Whoever designed that software should be disemboweled, drawn and quartered.
Thank you for letting me indulge!
*** End of line
I haven't set up a source repository at home yet, but it's on my task list. I primarily use GitHub now, with private repos for internal configs. I have always wanted to keep everything offline so that I'm not dependent on an internet connection to save/restore things.
For pipelines and CI/CD, I use Jenkins LTS to schedule or trigger various tasks such as Ansible runs, rsync backups, and code builds. Everything I use GitHub for could easily be done in GitLab or Gitea. I'm glad you followed up by demonstrating how to set them up. Thank you!
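For what it's worth, here's a minimal sketch of kicking off those kinds of jobs from a script with the python-jenkins client (pip install python-jenkins). The URL, credentials, and job names are placeholders, not my actual setup.

```python
# Minimal sketch: trigger Jenkins jobs remotely via the python-jenkins client.
import jenkins

server = jenkins.Jenkins(
    "http://jenkins.lan:8080",       # placeholder controller URL
    username="automation",
    password="api-token-goes-here",  # use an API token, not a real password
)

# Kick off a parameterized backup job, then an Ansible run.
server.build_job("rsync-backup", parameters={"TARGET": "backup-nas"})
server.build_job("ansible-site-playbook")

print("Connected to Jenkins", server.get_version())
```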
Yes. I configured the Jenkins 'built-in node' (formerly called 'master') in a Docker container, and I also created three "agents" in separate containers. These containerized agents do code builds, trigger Ansible runs, etc. I use Jenkins agents on various machines in the house as well to initiate backups to the NAS. (Linux --> rsync, Windows --> Robocopy)
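Starting one of those containerized agents can look something like this sketch using the Docker SDK for Python (pip install docker). It's not my exact setup; the controller URL, agent name, and secret are placeholders you'd copy from the node's page on the Jenkins controller.

```python
# Rough sketch: launch a containerized Jenkins inbound (JNLP) agent.
import docker

client = docker.from_env()

client.containers.run(
    "jenkins/inbound-agent",  # official inbound agent image
    name="build-agent-1",
    detach=True,
    restart_policy={"Name": "unless-stopped"},
    environment={
        "JENKINS_URL": "http://jenkins.lan:8080",   # placeholder controller URL
        "JENKINS_AGENT_NAME": "build-agent-1",      # must match the node name
        "JENKINS_SECRET": "secret-from-node-page",  # placeholder secret
    },
)
```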
I have one special agent in a dedicated VM that is used for data backup. That VM is a "data hub" of NFS, passthrough USB devices, SSH mounts, and Samba mounts, and it is used for moving files around between systems.
For example:
My main NAS has a "Photos" share exposed via Samba and NFS. The Samba share is for the rest of the house to back up and look through photos. The NFS share is mounted on the "data hub" VM, and Jenkins facilitates copying data from the Main NAS (via NFS) to a backup NAS (via an SSH mount) using rsync.
I have about a dozen of these jobs, one per dataset on the Main NAS, each doing roughly what the sketch below shows. Both my Main NAS and Backup NAS are configured with enough space to keep all of the datasets.
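Each job boils down to an rsync from the NFS mount to the SSH mount. A simplified sketch of the pattern, assuming both sides are already mounted on the data hub VM (the dataset names and paths are placeholders):

```python
# Simplified sketch: mirror each dataset from the Main NAS to the Backup NAS.
import subprocess

DATASETS = ["Photos", "Music", "Documents"]  # placeholder dataset names
NFS_ROOT = "/mnt/main-nas"                   # NFS mounts from the Main NAS
BACKUP_ROOT = "/mnt/backup-nas"              # SSH/SSHFS mount of the Backup NAS

for dataset in DATASETS:
    # -a preserves permissions/timestamps; --delete mirrors removals.
    subprocess.run(
        ["rsync", "-a", "--delete",
         f"{NFS_ROOT}/{dataset}/",           # trailing slash: copy contents
         f"{BACKUP_ROOT}/{dataset}/"],
        check=True,
    )
    print(f"Synced {dataset}")
```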
The current Jenkins setup is not totally ideal, as all of the containers for Jenkins itself are on the same physical host, but I am going to move them onto the cluster to take advantage of Proxmox Backups for the Jenkins infrastructure.