
Finished / Never Finished

10 Posts
2 Users
4 Reactions
655 Views
JNew1213
Posts: 16
Topic starter
(@jnew1213)
Eminent Member
Joined: 10 months ago
Home Rack

Jeff's Mini Data Center - 2023

Stable since the beginning of the year, I proudly present my upscaled (and downscaled) mini datacenter.

Upscaled with the addition of a leased Dell PowerEdge R740 and another PowerEdge R750. Downscaled as the OptiPlex minitowers I had have been sold off. The PowerEdge R710 was long ago sold. The R720, then the T620, sold off. Patch panels and 6" multicolored network patch cables removed, and all Ethernet cables swapped out for Monoprice SlimRun Ethernet cables.

 

Equipment Details

On top of the rack:

  • Synology DS3615xs NAS, connected via 25G fibre Ethernet

  • Linksys AC5400 Tri-Band Wireless Router

  • Mostly obscured: Arris TG1672G cable modem

 

In the rack, from top to bottom:

  • Sophos XG-125 firewall

  • Ubiquiti Pro Aggregation switch (1G/10G/25G)

  • Brush panel

  • Shelf containing 4 x HP EliteDesk 800 G5 Core i7 10G Ethernet (these constitute an 8.0U1 ESA vSAN cluster), HP EliteDesk 800 G3 Core i7, HP OptiPlex 5070m Micro Core i7, HP EliteDesk 800 G3 Core i7 (these three systems make up a "remote" vSphere cluster, running ESXi 8.0U1). The Rack Solutions shelf slides out and contains the 7 power bricks for these units along with four Thunderbolt-to-10G Ethernet adapters for the vSAN cluster nodes.

  • Synology RS1619xs+ NAS with RX1217 expansion unit (16 bays total), connected via 25G fibre Ethernet

  • Dell EMC PowerEdge R740, Dual Silver Cascade Lake, 384GB RAM, BOSS, all solid state storage, 25G fibre Ethernet

  • Dell EMC PowerEdge R750 Dual Gold Ice Lake, 512GB RAM, BOSS-S2, all solid state storage (including U.2 NVMe RAID), 25G fibre Ethernet

  • Digital Loggers Universal Voltage Datacenter Smart Web-controlled PDU (not currently in use)

  • 2 x CyberPower CPS1215RM Basic PDU

  • 2 x CyberPower OR1500LCDRM1U 1500VA UPS

 

There's 10G connectivity to a couple of desktop machines and 25G connectivity between the two NASes and two PowerEdge servers. Compute and storage are separate, with PowerEdge local storage mostly unused. The environment is very stable, implemented for simplicity and ease of support. There's compute and storage capacity to deploy just about anything I might want to deploy. All the mini systems are manageable to some extent using vPro.

The two PowerEdge servers are clustered in vCenter, which presents them both to VMs as Cascade Lake machines using EVC, enabling vMotion between them. The R750 is powered off most of the time, saving power. (iDRAC alone uses 19 watts.) The machine can be powered on from vCenter or iDRAC.
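Since iDRAC also speaks standard IPMI, that remote power-on can be scripted as well as clicked. A minimal sketch using `ipmitool` (assuming IPMI-over-LAN is enabled in the iDRAC network settings; the host and credentials below are placeholders):

```python
import subprocess

def ipmi_power_cmd(idrac_host: str, user: str, password: str, action: str = "on") -> list[str]:
    """Build the ipmitool command line for a chassis power action via iDRAC."""
    return [
        "ipmitool", "-I", "lanplus",   # IPMI 2.0 over LAN
        "-H", idrac_host,
        "-U", user, "-P", password,
        "chassis", "power", action,    # "on", "off", "status", ...
    ]

def power_on(idrac_host: str, user: str, password: str) -> None:
    # Requires "IPMI Over LAN" to be enabled on the iDRAC.
    subprocess.run(ipmi_power_cmd(idrac_host, user, password, "on"), check=True)
```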

Recently, I've switched from using the Digital Loggers smart PDU to Govee smart outlets that are controllable by phone app and voice/Alexa. One outlet with a 1-to-5 power cord connects the four vSAN cluster nodes and another connects the three ESXi "remote" cluster nodes.

"Alexa. Turn on vSAN."

"Alexa. Turn on remote cluster."

Two more smart outlets turn on the left and right power supplies for the PowerEdge R750 that's infrequently used.

"Alexa. Turn on Dell Left. Alexa. Turn on Dell Right."
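The Govee outlets can also be driven without Alexa: Govee publishes a small REST API for its smart plugs. A sketch assuming the v1 developer API (key from developer.govee.com); the device MAC and model number here are hypothetical placeholders:

```python
import json
import urllib.request

GOVEE_URL = "https://developer-api.govee.com/v1/devices/control"

def outlet_payload(device_mac: str, model: str, on: bool) -> dict:
    """Build the Govee v1 control body for a smart outlet."""
    return {
        "device": device_mac,
        "model": model,
        "cmd": {"name": "turn", "value": "on" if on else "off"},
    }

def switch_outlet(api_key: str, device_mac: str, model: str, on: bool) -> None:
    """PUT the control command -- 'Alexa, turn on vSAN' without the Alexa."""
    req = urllib.request.Request(
        GOVEE_URL,
        data=json.dumps(outlet_payload(device_mac, model, on)).encode(),
        headers={"Govee-API-Key": api_key, "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)
```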

Total storage here is about six-tenths of a petabyte (roughly 600TB).

There are a couple of additional NASes for, mostly, onsite media backup.

 

Okay, that's a fair bit of equipment. So what's running on it?

 

Software and Systems

Well, basically most of what we have running at the office, and what I support in my job, is running at home. There's a full Windows domain, including two domain controllers, two DNS servers and two DHCP servers.

This runs under a full vSphere environment: ESXi 8.0U2, linked vCenter Servers, vSphere Replication, and SRM. Also vSAN (ESA) and some of the vRealize (now Aria) suite, including vRealize Operations Manager (vROps) and Log Insight. And... Horizon: three Horizon pods, two of which are in a Cloud Pod federation, and one of which sits on vSAN. DEM and App Volumes also run on top of Horizon. I have a pair of Unified Access Gateways that allow outside access from any device to Windows 10 or Windows 11 desktops. Also running: Runecast for compliance, Veeam for backup, and CheckMK for monitoring.

Future plans include replacing the Sophos XG-125 firewall with a Protectli 4-port Vault running Sophos XG Home. This will unlock all the features of the Sophos software without incurring the $500+ annual software and support fee. I'm also planning to implement a load balancer ahead of two pairs of Horizon connection servers.

What else? There's a fairly large Plex server running on the DS3615xs. There's also a Docker container running on that NAS that hosts Tautulli for Plex statistics. There are two Ubuntu Server Docker host VMs in the environment (test and production), but the only things running on them right now are Portainer and Dashy. I lean more toward implementing things as virtual machines rather than containers. I have a couple of decades worth of bias on this.

So that's it. My little data center in Sheepshead Bay, Brooklyn.

I'd love to entertain any questions. Hit me up.

9 Replies
Brandon Lee
Posts: 380
Admin
(@brandon-lee)
Member
Joined: 14 years ago

@JNew1213 this is seriously an awesome home datacenter! I need to read through everything more closely. I'm sure I will have questions! Thank you for sharing this!

Reply
Brandon Lee
Posts: 380
Admin
(@brandon-lee)
Member
Joined: 14 years ago

@jnew1213 Ok, so there is a lot for me to chew on in your description. Thank you for all the details on your hardware and software; awesome stuff. So I'm guessing you are running 2-node ESA with the two Dell servers? Also, it is great to talk to someone who is leasing equipment for a home lab. How did you come to do it this way (a leased Dell PowerEdge R740 and another PowerEdge R750)? Do you mind sharing some details on where you sourced your lease? Is it directly from Dell? And what kind of cost, if you don't mind me asking?

Is the R740 the only lease, or is the R750 leased also? I have only played around with ESA in a non-supported nested environment a bit, but would love to transition from OSA in the lab to ESA. What type of drive hardware are you running in your Dell servers? Are you utilizing NVMe throughout? Have you used HCIBench or another tool to get some rough benchmark numbers?

I share some similarities with you on the software stack as well: using Veeam here too, also NAKIVO directly on my Synology NAS, Vembu for testing, and Synology Active Backup.

Also, I'm wondering about your reasoning... it looks like you went full circle, from OptiPlex minitowers to full-on PowerEdge servers. Was there a reason you went back in the direction of full servers as opposed to the minis? Were you hitting limitations there, specifically wanting to go ESA and 25-gig networking?

Ok, I know I will have more questions; eagerly awaiting your feedback on the above. Thanks again, Jeff, and welcome to the community. It will be great to have someone with your knowledge and experience to bounce questions off of, especially on the VMware side of things. 👍 

Reply
JNew1213
Posts: 16
Topic starter
(@jnew1213)
Eminent Member
Joined: 10 months ago

I'm actually running a 4-node vSAN ESA cluster on four HP EliteDesk 800 G5 Mini machines, each with 250GB and 2TB M.2 SSDs and each with a Thunderbolt-to-10G Ethernet adapter. The cluster doesn't run full time (power, heat -- the 10G adapters get too hot to touch -- and I don't trust it as storage for important VMs).

The Dells: I needed to replace my old PowerEdge servers for vSphere 8, though I had been thinking about doing so since vSphere 7 came out. Mid-2022 and there was no money for new hardware. Big hardware purchases usually tail tax season. That's why they invented tax refunds!

I checked out Dell's Small Business website for worst-case pricing on a new R740. This was the logical step forward from my Twelfth Generation Dells. The Thirteenth Generation R730 is a nice machine, but its life is limited, and I didn't want to have to worry about servers again for 5-10 years (my T620, which I bought new, lasted more than ten years before it aged out and was sold, still reliable and almost silent).

So I put together a minimal configuration for an R740 on Dell's site and then called Dell Small Business and asked them for a quote on that configuration. What I put together was a single silver Cascade Lake refresh CPU, with minimal RAM (8GB) and a single SATA SSD. I added a RAID controller, took the usual dual power supplies, and I think that was it. Recapping: Single processor, minimal RAM, minimal disk.

I told Sales that I wanted to lease for three years and they provided terms for lease and lease-to-own (with $1 buyout at the end). Terms were fair as was the finance charge. A single phone call to verify my phone number, etc. was all it took. Sales tax is payable up front in full, even before the first lease payment.

You're going to love how much this soon-to-be monster of a machine cost: $115/month!

Okay, soon-to-be-monster... Time to really outfit the thing. Second matching processor: eBay purchase, risers 3 & 4: Dell. BOSS card for boot: eBay again. Two sticks of supported Micron M.2 for the BOSS card: eBay. RAM: eBay and Crucial. All 64GB RDIMMs. Nice stuff. Mellanox ConnectX-4 25Gbit NIC: eBay. iDRAC Enterprise license: eBay.

That was it. I had everything here when the server arrived. I added another 1TB M.2 on a PCIe card that I had laying around. I have a domain controller sitting on that, pretty much isolated from the shared storage where everything else resides.

The R750 is another matter. When browsing eBay to see what they had in the way of PowerEdge servers, I came across a seller with six available data center pulls of well configured R750 machines. His asking price was decent, as was his eBay rating. I emailed him and we arranged an off-eBay sale for the machine.

I paid; the seller waited until everything cleared and he was satisfied that he had his payment, then he gave me the Dell service tag number so I could look the machine up and, once I was satisfied, he sent it.

The R750 was fully decked out with dual Ice Lake Gold processors, 256GB RAM, a boot SSD (I think SATA), and two U.2 NVMe SSDs in the front bays, attached to a SAS/SATA/NVMe front-mounted RAID controller. The machine came with a license for iDRAC Enterprise.

The R750 has dual 1400W power supplies and came with the long risers Dell provides for use with installed graphics cards. The machine also came with Dell's Very High Performance ("Gold") fans. Surely you were able to hear them when I powered the system on for the first time.

To that configuration, I added another 256GB RAM, a BOSS-S2 card with supported Intel M.2 SSDs, cables, and a few other things, oh, and another Mellanox CX-4. I soon after replaced the Gold fans with Standard fans to reduce noise and power usage. I don't have a GPU in either PowerEdge.

The R750 is powered off most of the time. After the upgrade to ESXi 8.0U2 the fans no longer ramp down as much as they used to, so the machine is a bit loud. Also, it pulls about 385 watts at idle. (The R740 uses about 180 watts with ~30 VMs running). iDRAC alone uses 19 watts with the machine powered off, so it's plugged into a pair of smart outlets that are usually turned off.
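Those wattage figures make the smart-outlet habit easy to justify with quick arithmetic. A sketch assuming an electricity rate of about $0.25/kWh (my assumption, not a figure from the post):

```python
def annual_cost(watts: float, rate_per_kwh: float = 0.25, hours: float = 24 * 365) -> float:
    """Annual electricity cost in dollars for a constant draw."""
    kwh = watts * hours / 1000  # watt-hours -> kilowatt-hours
    return kwh * rate_per_kwh

# With the figures above: an R750 idling 24x7 at 385 W would run
# about $843/yr, and even its iDRAC alone (19 W, machine off) about
# $42/yr -- hence smart outlets that cut the iDRAC draw too.
```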

I haven't benchmarked the Dells. I'm not sure readings taken from within a VM are all that accurate, and performance is fine. If it didn't match my desktop machine, I probably wouldn't be happy and... I don't want that!!

I did benchmark the vSAN cluster with HCIBench, I think, when it was originally built with ESXi 7 (OSA, of course), and it was decent. This was a good three years ago. In one test I was able to migrate a VM to/from the cluster in less time than migrating a similar-sized VM at work on a Nutanix cluster. So I was satisfied. The four brand-new HP Minis with 64GB RAM, two SSDs, and 10G adapters were an expensive toy.

Backups: Veeam for VMs and physical workstations, on an NFR license through Spiceworks. Veeam also replicates my vCenter Servers and domain controllers. Synology Hyper Backup backs up four times a day from one NAS to another (via rsync). A handful of VMs are replicated to different storage using vSphere Replication. One uses SRM in addition to vSphere Replication, just for practice. I was using Synology's Cloud Sync to back up to Google Drive for a couple of years, but recently Google cracked down on formerly "unlimited" accounts that were using terabytes of storage, so no more Cloud Sync. I've been using CrashPlan to back up about 11.5TB of data for years. I added a Backblaze personal plan to all that when I moved off Google.

I've tried Synology's Active Backup for Business. I was curious as to whether it could back up from the free version of ESXi. It does! But it exposes your hosts by requiring SSL to be turned on and, as I recall, it needed a modification made to each VM's .vmx file.

The OptiPlexes... After decades of cast-off Compaq and home-built white box servers, I bought the T620. A Lenovo TS-something server and the OptiPlexes followed rather than preceded it. A 7010. A 9020. There was a Dell R710 and an R720 in there too, before the R740/R750.

The Lenovo was limited to 32GB and a single CPU, and it became dated fast. Buggy BIOS, too. The OptiPlexes also maxed out at 20GB or 32GB. Too many boxes of that size would be required to run a full Microsoft and VMware stack. And no iLO or iDRAC. All the boxes had vPro, but that's a poor substitute.

Brandon, I've enjoyed your discussion of ESXi and your home setup for a while now. Appreciate the effort you put into things. Thanks for all your work.

Reply
1 Reply
Brandon Lee
Admin
(@brandon-lee)
Joined: 14 years ago

Member
Posts: 380

@jnew1213 This is a fantastic explanation of the evolution of your lab. Loving reading through this. It's definitely akin to a few of the steps I have made in my lab environment over the years. I'm curious about the G5s: are you running ESA with 4 drives or more? I was curious whether it was easy to get away with that in physical implementations of ESA. I know when ESA first came out I saw that as a requirement, but I was able to bypass that check in the nested config I played around with. Just curious whether you ran into any weird issues getting ESA to accept fewer disks, or were you able to fit that many in the G5s?

Also, wow, I can't believe the lease for the R740 was $115/mo; I was expecting much more than that. Thinking about home labs, leasing might be a great option in this price range, since every 3 years or so you could basically have new hardware and just carry the payment over.

On the HP minis, it sounds like you mainly have these to play around with ESA, is that right? Also, do you run into any issues powering down and powering back up your lab environment with the services you are running? Are you using some automation to do this gracefully, PowerCLI, etc.? Am I right in thinking you don't run anything 24x7? 

I need to spin up a current Horizon pod again; I had a Horizon 8.x cluster running with UAGs and a Kemp load balancer. However, funny enough, my Horizon UAGs were popped when the Log4j vulnerability surfaced... I was busy patching some production boxes for Log4j, and my home UAGs got compromised, lol. I had to burn them down. Since then, I have just been using Twingate to connect remotely and not opening any ports to the outside.

I am also wondering what will happen soon with Broadcom wanting to sell off the EUC side, as mentioned in the news lately, which may mean we won't have access to the licenses much longer with VMUG and vExpert, but waiting to see what happens there. 

Thank you also for the kind words Jeff. The best part of creating content and putting it out there is meeting up with people in the community and fellow enthusiasts, like yourself. I'm really glad you joined up here on the forums, and I'm looking forward to exchanging more ideas! 👍 

Reply
JNew1213
Posts: 16
Topic starter
(@jnew1213)
Eminent Member
Joined: 10 months ago

Each of the ESA nodes has just the two SSDs in it: what used to be the cache (250GB) and capacity (2TB) drives. vSAN never complained about that. I lost a point for not having 25Gb in the hosts. I also had difficulty updating the cluster using an image; I had to do the machines the old-fashioned (preferable!) way, using baselines. Lifecycle Manager did not like the Samsung 2TB drive in each box, though it didn't flag the smaller SSD. Someone gave me a Ruby vSphere Console command to have that check skipped. I made the change, but haven't tested it yet.

The minis were originally acquired to do "vSAN" whatever that entailed; cache and capacity drives, and an external 10Gb adapter. That was all the space in the little things. Fully configured, the four boxes were close to $8000. That was for roughly 6TB usable space. I never had anything running on them except a Horizon pool. I don't trust vSAN for anything important. At least not without VMware support.

The vSAN boxes can be powered down and up at will. There's a wizard to do that, but at some point, if there are VMs on the cluster, they are going to go down in such a fashion that... well, it seems to leave objects or pieces of things when the nodes are powered back up. These oft-occurring objects are part of my distrust for the system. What are they and where did they come from? How can they be safely cleaned off?

The problem with leasing the server without the buyout at the end is that, at the end, you own nothing. The server gets returned to Dell, and I don't know if there's any credit toward a new machine either. The purpose of getting current hardware is that it will be usable for years to come -- years after the lease expires. Three to five years is fine for hardware in a corporate data center, but in a home lab, it should last much longer.
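Putting numbers on the lease-to-own option (using the $115/month, three-year, $1-buyout terms mentioned earlier; the up-front sales tax is left out):

```python
def lease_total(monthly: float, months: int, buyout: float = 1.0) -> float:
    """Total paid over a lease-to-own term, excluding up-front sales tax."""
    return monthly * months + buyout

# 36 payments of $115 plus the $1 buyout:
# lease_total(115, 36) -> 4141.0
```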

There is one other thing I think needs discussion, but I haven't seen much of it, if any, yet. Each new generation of Intel machine (the same may be true of AMD machines, which I don't use) uses more power, puts out more heat, is louder, and seems to be incrementally more expensive than the last. It looks to me like the last generally usable Dell for the home lab may be the Rx40. The Rx50 uses more than twice the power and is louder. The Rx60 is out of my price range, even talking about a stripped-down configuration.

On top of that, these machines can be so power hungry and run so hot that at least one data center I know of has had to limit the number of such units in a rack, due to power usage and the fact that the temperature in the "warm aisle" behind the rack reached 120 degrees F. This is not home lab-friendly.

The R750 and above have liquid cooling options for the processors, but there are still fans to cool memory and PCIe cards, and the liquid cooling requires one or more external pumps, some sort of manifold device, I believe a multi-gallon reservoir, and other accommodations. Also not home lab-friendly.

The core environment, the R740, all the NASes, 30+ VMs, etc., that's all "production" and doesn't go down. I strive for 100% uptime, but living in an apartment, and being at the mercy of sometimes weird and always limited power, a single ISP, etc., it's nowhere near possible. Running two ACs in the summer is fine. Add the dishwasher, and it flips a circuit breaker downstairs, in a locked electrical room. Waking the super up at 1am to reset the breaker... oh, don't get me started!

When there's service required on one of the PowerEdge systems, the other is available for vMotion of the load. When a NAS requires maintenance, that's a bit more difficult. There's space for the VMs to move, but moving things from storage device to storage device, even connected at 25Gbit, takes time. Further, both the big NASes are used for other things, including as backup and replication targets, media hosting and streaming, etc. The NASes are taken down individually, usually around 4am, and I have to make sure there are no jobs running that can't be recovered later.

I am not big on automation. So bringing things up and down is a manual process. I have been working on a runbook, on and off, that would detail the process, but it's not nearly at a useful stage of completeness.
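A runbook for bringing a lab down and up is, at its core, a dependency-ordering problem. A toy sketch -- the tiers below are hypothetical examples, not Jeff's actual inventory, and it assumes the dependency map has no cycles:

```python
# Hypothetical dependency map: each item lists what must still be up
# while it shuts down (VMs need hosts, hosts need shared storage, etc.).
DEPENDS_ON = {
    "VMs": ["ESXi hosts"],
    "vCenter": ["ESXi hosts"],
    "ESXi hosts": ["NAS (NFS/iSCSI)"],
    "NAS (NFS/iSCSI)": ["switch"],
    "switch": [],
}

def shutdown_order(deps: dict[str, list[str]]) -> list[str]:
    """Topological order: stop an item only once nothing running depends on it."""
    order, done = [], set()
    while len(done) < len(deps):
        for item in deps:
            if item in done:
                continue
            # still-running things that need `item` to stay up
            dependents = [d for d, needs in deps.items()
                          if item in needs and d not in done]
            if not dependents:
                order.append(item)
                done.add(item)
    return order
```

Power-up is the same list reversed; the manual process Jeff describes follows the same logic, just by hand.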

Putting a Kemp LB in front of my Horizon connection servers is a project for the coming year. I was looking at doing it with a virtual F5 LTM, but there's a cost involved, and the Kemp looks easier to administer. The F5 would be more practical learning, but... I am not part of the Networking team at work, so I would never get to touch an F5 in a hundred years.

Haven't thought about Log4j in a while. It didn't affect me too much at home; only CrashPlan used Java logging, and they said they weren't affected. Horizon was clean, and VMware Tools had an issue only in combination with something else. I did upgrade Log Insight, I think, but that may have actually been before the vulnerability was revealed. Work was another story. We were busy for months patching, implementing workarounds, patching again, running reports, etc. Every time VMware updated code (which they sometimes did without changing version numbers -- only build numbers were incremented!!!), we had to upgrade the affected software again.

Like everyone, we're waiting to see what shakes out as the months pass since the Broadcom merger. My VMware TAM and service manager are safe (for now), but on last Tuesday's weekly TAM call, we were introduced to two new teammates who replaced two others, no longer on the account. I hope Horizon remains something we can get from VMUG or a similar program at another vendor once it's spun off. Work is invested in Horizon to the tune of nearly 50,000 desktops running out of two data centers, so we're not going anywhere. Horizon is what I do. We have far less of an investment in Workspace ONE and none in Carbon Black.

Fun talking about this stuff with someone whose eyes don't glaze over after twenty seconds.

Reply
1 Reply
Brandon Lee
Admin
(@brandon-lee)
Joined: 14 years ago

Member
Posts: 380

@jnew1213 Absolutely, these are the kinds of discussions that are so fascinating to me. It never ceases to amaze me how much we can learn just talking out home lab ideas and discussing the paths we have all taken. I always come away with fun and new things to think about and try to implement. You now have me thinking of leasing, haha. 

I am really wondering where things are headed with VMware, personally. It definitely feels like things are taking a different turn with Broadcom at the helm, as we knew it would. Hopefully, the same enthusiasm and culture in the community can remain intact. It really made me wonder where VMUG was headed after the acquisition, but so far, it seems things are holding together for now. 

You make excellent points about the CPUs coming out. Aren't things supposed to be the opposite with each new architecture? That used to be the trend: with every new process-node shrink, we benefited from less heat and lower power. However, I wonder if it is the absolute density Intel and AMD are going for, and their war with each other on benchmarks and such, that is leading to cramming so many cores into a package that heat and power draw have increased so sharply.

These are excellent points you bring up to think about for running home lab equipment. It has led me to think about the mini PC route for my next lab refresh, but I have found it challenging since mini PCs are so limited in terms of networking and storage. It is difficult to really do the "enterprisey" things we want to do and set up technologies based on mini PC-based labs. There is just nothing like a big fat enterprise server and what it can do. But the heat and power draw are definitely deterrents. I have liked many of the Supermicro IoT line of servers and thought about refreshing with a new set of those, but just haven't pulled the trigger on anything there as of yet.

I did install a mini-split unit upstairs, where my home lab currently sits, to assist with cooling, and it has done a good job over the past two summers. Mini-splits do have some quirks, though, I have found: I have an extra-long condensation line running through the attic that started to leak on me this past summer, leading to obvious issues.

Hopefully, Horizon will remain in the VMUG catalog and allow us all to still have our hands on it. I have been wondering, as you mention companies that are heavily invested in Horizon, what the future looks like there. Things are certainly getting interesting. You will probably like the post here. I just posted this morning that Broadcom is canceling partner agreements with those that don't meet certain new requirements: https://www.virtualizationhowto.com/community/news/broadcom-cancelling-partner-agreements/

Reply
Page 1 / 2