Finished / Never Finished

JNew1213
(@jnew1213)
Posts: 22
Eminent Member
Topic starter
 
Home Rack

Jeff's Mini Data Center - 2023

Stable since the beginning of the year, I proudly present my upscaled (and downscaled) mini datacenter.

Upscaled with the addition of a leased Dell PowerEdge R740 and another PowerEdge R750. Downscaled as the OptiPlex minitowers I had have been sold off. The PowerEdge R710 was long ago sold. The R720, then the T620, sold off. Patch panels and 6" multicolored network patch cables removed, and all Ethernet cables swapped out for Monoprice SlimRun Ethernet cables.

 

Equipment Details

On top of the rack:

  • Synology DS3615xs NAS, connected via 25G fibre Ethernet
  • Linksys AC5400 Tri-Band Wireless Router
  • Arris TG1672G cable modem (mostly obscured)

 

In the rack, from top to bottom:

  • Sophos XG-125 firewall

  • Ubiquiti Pro Aggregation switch (1G/10G/25G)

  • Brush panel

  • Shelf containing 4 x HP EliteDesk 800 G5 Mini Core i7 with 10G Ethernet (these constitute an ESXi 8.0U1 ESA vSAN cluster), plus an HP EliteDesk 800 G3 Core i7, a Dell OptiPlex 5070 Micro Core i7, and another HP EliteDesk 800 G3 Core i7 (these three systems make up a "remote" vSphere cluster, also running ESXi 8.0U1). The Rack Solutions shelf slides out and holds the seven power bricks for these units, along with the four Thunderbolt-to-10G Ethernet adapters for the vSAN cluster nodes.

  • Synology RS1619xs+ NAS with RX1217 expansion unit (16 bays total), connected via 25G fibre Ethernet

  • Dell EMC PowerEdge R740, Dual Silver Cascade Lake, 384GB RAM, BOSS, all solid state storage, 25G fibre Ethernet

  • Dell EMC PowerEdge R750 Dual Gold Ice Lake, 512GB RAM, BOSS-S2, all solid state storage (including U.2 NVMe RAID), 25G fibre Ethernet

  • Digital Loggers Universal Voltage Datacenter Smart Web-controlled PDU (not currently in use)

  • 2 x CyberPower CPS1215RM Basic PDU

  • 2 x CyberPower OR1500LCDRM1U 1500VA UPS

 

There's 10G connectivity to a couple of desktop machines and 25G connectivity between the two NASes and two PowerEdge servers. Compute and storage are separate, with PowerEdge local storage mostly unused. The environment is very stable, implemented for simplicity and ease of support. There's compute and storage capacity to deploy just about anything I might want to deploy. All the mini systems are manageable to some extent using vPro.

The two PowerEdge servers are clustered in vCenter, which presents them both to VMs as Cascade Lake machines using EVC, enabling vMotion between them. The R750 is powered off most of the time, saving power. (iDRAC alone uses 19 watts.) The machine can be powered on from vCenter or iDRAC.
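
For anyone curious, checking the cluster's EVC baseline and evacuating the R750 before it gets shut down is a quick job in PowerCLI. A rough sketch, with made-up vCenter, cluster, and host names:

```powershell
# Minimal PowerCLI sketch; the vCenter, cluster, and host names here are made up.
Connect-VIServer -Server vcenter.lab.local

# Confirm the cluster presents a Cascade Lake EVC baseline to its VMs
Get-Cluster -Name "PowerEdge-Cluster" | Select-Object Name, EVCMode

# Before powering the R750 down, vMotion its running VMs over to the R740
Get-VM -Location (Get-VMHost -Name "r750.lab.local") |
    Where-Object { $_.PowerState -eq "PoweredOn" } |
    Move-VM -Destination (Get-VMHost -Name "r740.lab.local")
```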

Recently, I've switched from using the Digital Loggers smart PDU to Govee smart outlets that are controllable by phone app and voice/Alexa. One outlet with a 1-to-5 power cord connects the four vSAN cluster nodes and another connects the three ESXi "remote" cluster nodes.

"Alexa. Turn on vSAN."

"Alexa. Turn on remote cluster."

Two more smart outlets turn on the left and right power supplies for the PowerEdge R750 that's infrequently used.

"Alexa. Turn on Dell Left. Alexa. Turn on Dell Right."

Total storage here is about 0.6 petabytes (600TB).

There are a couple of additional NASes for, mostly, onsite media backup.

 

Okay, that's a fair bit of equipment. So what's running on it?

 

Software and Systems

Well, basically most of what we have running at the office, and what I support in my job, is running at home. There's a full Windows domain, including two domain controllers, two DNS servers and two DHCP servers.

This runs under a full vSphere environment: ESXi 8.0U2, linked vCenter Servers, vSphere Replication, and SRM. Also vSAN (ESA) and some of the vRealize (now Aria) suite, including vRealize Operations Manager (vROps) and Log Insight. And... Horizon: three Horizon pods, two of which are in a Cloud Pod federation and one of which sits on vSAN. DEM and App Volumes also run on top of Horizon. I have a pair of Unified Access Gateways which allow outside access from any device to Windows 10 or Windows 11 desktops. Also running: Runecast for compliance, Veeam for backup, and CheckMK for monitoring.
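
Since the vCenter Servers are linked, one PowerCLI session can see the whole environment, which makes quick inventory checks easy. Something like this, with a hypothetical vCenter name, just to show the idea:

```powershell
# Connect to one vCenter and, via Enhanced Linked Mode, all of its linked peers.
# The vCenter name is hypothetical.
Connect-VIServer -Server vcenter01.lab.local -AllLinked

# Quick inventory across the linked environment
Get-VMHost | Select-Object Name, ConnectionState, Version, Build
Get-VM | Group-Object PowerState | Select-Object Name, Count
```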

Future plans include replacing the Sophos XG-125 firewall with a Protectli 4-port Vault running Sophos XG Home. This will unlock all the features of the Sophos software without incurring the $500+ annual software and support fee. I'm also planning to implement a load balancer ahead of two pairs of Horizon connection servers.

What else? There's a fairly large Plex server running on the DS3615xs. There's also a Docker container running on that NAS that hosts Tautulli for Plex statistics. There are two Ubuntu Server Docker host VMs in the environment (test and production), but the only things running on them right now are Portainer and Dashy. I lean more toward implementing things as virtual machines rather than containers. I have a couple of decades worth of bias on this.

So that's it. My little data center in Sheepshead Bay, Brooklyn.

I'd love to entertain any questions. Hit me up.

 
Posted : 23/12/2023 2:08 pm
Brandon Lee
(@brandon-lee)
Posts: 408
Member Admin
 

@JNew1213 this is seriously an awesome home datacenter! I need to read through everything more closely. I'm sure I will have questions! Thank you for sharing this!

 
Posted : 23/12/2023 2:56 pm
Brandon Lee
(@brandon-lee)
Posts: 408
Member Admin
 

@jnew1213 Ok, so there is a lot for me to chew on in your description. Thank you for all the details on your hardware and software, awesome stuff. So I'm guessing you are running 2-node ESA with the two Dell servers? Also, it is great to talk to someone who is leasing equipment for a home lab. How did you come about doing it this way (the leased Dell PowerEdge R740 and the PowerEdge R750)? Do you mind sharing some details on where you sourced your lease? Is this directly from Dell? Also, what kind of cost, if you don't mind me asking?

Is the R740 the only lease, or is the R750 leased also? I have only played around with ESA in a non-supported nested environment a bit, but I would love to transition from OSA in the lab to ESA. What types of drive hardware are you running in your Dell servers? Are you utilizing NVMe throughout? Have you used HCIBench or another tool to get some rough benchmark numbers?

There are some similarities in the software stack I run as well: I'm using Veeam here too, along with NAKIVO directly on my Synology NAS, Vembu for testing, and also Synology Active Backup.

Also, I'm wondering about your reasoning... it looks like you went full circle, from OptiPlex minitowers to full-on PowerEdge servers. Was there a reason you went back in the direction of full servers as opposed to the minis? Were you hitting limitations there, specifically wanting to go ESA and 25 gig networking?

Ok, I know I will have more questions; eagerly awaiting your feedback on the above. Thanks again Jeff, and welcome to the community. It will be great to have someone with your knowledge and experience level for folks to bounce questions off of, especially on the VMware side of things. 👍 

 
Posted : 23/12/2023 3:47 pm
JNew1213
(@jnew1213)
Posts: 22
Eminent Member
Topic starter
 

I'm actually running a 4-node vSAN ESA cluster, on four HP EliteDesk 800 G5 Mini machines, each with 250GB and 2TB M.2 SSDs and a Thunderbolt to 10G Ethernet adapter. The cluster doesn't run full time (power, heat -- the 10G adapters get too hot to touch -- and I don't trust it as storage for important VMs).

The Dells: I needed to replace my old PowerEdge servers for vSphere 8, though I had been thinking about doing so since vSphere 7 came out. It was mid-2022, and there was no money for new hardware. Big hardware purchases usually trail tax season. That's why they invented tax refunds!

I checked out Dell's Small Business website for worst-case pricing on a new R740. This was the logical step forward from my Twelfth Generation Dells. The Thirteenth Generation R730 is a nice machine, but its life is limited, and I didn't want to have to worry about servers again for 5-10 years (my T620, which I bought new, lasted more than ten years before it aged out and was sold, still reliable and almost silent).

So I put together a minimal configuration for an R740 on Dell's site and then called Dell Small Business and asked them for a quote on that configuration. What I put together was a single silver Cascade Lake refresh CPU, with minimal RAM (8GB) and a single SATA SSD. I added a RAID controller, took the usual dual power supplies, and I think that was it. Recapping: Single processor, minimal RAM, minimal disk.

I told Sales that I wanted to lease for three years and they provided terms for lease and lease-to-own (with $1 buyout at the end). Terms were fair as was the finance charge. A single phone call to verify my phone number, etc. was all it took. Sales tax is payable up front in full, even before the first lease payment.

You're going to love how much this soon-to-be monster of a machine cost: $115/month!

Okay, soon-to-be-monster... Time to really outfit the thing. Second matching processor: eBay purchase, risers 3 & 4: Dell. BOSS card for boot: eBay again. Two sticks of supported Micron M.2 for the BOSS card: eBay. RAM: eBay and Crucial. All 64GB RDIMMs. Nice stuff. Mellanox ConnectX-4 25Gbit NIC: eBay. iDRAC Enterprise license: eBay.

That was it. I had everything here when the server arrived. I added another 1TB M.2 on a PCIe card that I had laying around. I have a domain controller sitting on that, pretty much isolated from the shared storage where everything else resides.

The R750 is another matter. When browsing eBay to see what they had in the way of PowerEdge servers, I came across a seller with six available data center pulls of well configured R750 machines. His asking price was decent, as was his eBay rating. I emailed him and we arranged an off-eBay sale for the machine.

I paid; the seller waited until everything cleared and he was satisfied that he had his payment; he then gave me the Dell service tag number so I could look it up; and once I was satisfied, he sent the machine.

The R750 was fully decked out with dual Ice Lake Gold processors, 256GB RAM, a boot SSD (I think SATA), and two U.2 NVMe SSDs in the front bays, attached to a front-mounted SAS/SATA/NVMe RAID controller. The machine came with a license for iDRAC Enterprise.

The R750 has dual 1400W power supplies and came with the long risers Dell provides for use with installed graphics cards. The machine also came with Dell's Very High Performance ("Gold") fans. You could surely hear them when I powered the system on for the first time.

To that configuration, I added another 256GB RAM, a BOSS-S2 card with supported Intel M.2 SSDs, cables, and a few other things, oh, and another Mellanox CX-4. I soon after replaced the Gold fans with Standard fans to reduce noise and power usage. I don't have a GPU in either PowerEdge.

The R750 is powered off most of the time. After the upgrade to ESXi 8.0U2 the fans no longer ramp down as much as they used to, so the machine is a bit loud. Also, it pulls about 385 watts at idle. (The R740 uses about 180 watts with ~30 VMs running). iDRAC alone uses 19 watts with the machine powered off, so it's plugged into a pair of smart outlets that are usually turned off.
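
Those wattage numbers are easy to spot-check, by the way; the iDRAC exposes the chassis power readings over Redfish. A rough sketch (placeholder address and credentials again, PowerShell 7 assumed):

```powershell
# Read the current power draw from an iDRAC9 via Redfish.
# Placeholder address/credentials; assumes PowerShell 7+.
$idrac = "https://idrac-r740.lab.local"
$cred  = Get-Credential

$power = Invoke-RestMethod -Method Get `
    -Uri "$idrac/redfish/v1/Chassis/System.Embedded.1/Power" `
    -Credential $cred -Authentication Basic -SkipCertificateCheck

# PowerConsumedWatts is the instantaneous system draw reported by the BMC
$power.PowerControl[0].PowerConsumedWatts
```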

I haven't benchmarked the Dells. I'm not sure readings from within a VM are all that accurate, and performance is fine. If it didn't match my desktop machine, I probably wouldn't be happy, and... I don't want that!!

I did benchmark the vSAN cluster with HCIBench, I think, when it was originally built with ESXi 7 (OSA, of course), and it was decent. This was a good three years ago. In one test I was able to migrate a VM to/from the cluster in less time than migrating a similar-sized VM at work on a Nutanix cluster. So, I was satisfied. The four brand-new HP Minis with 64GB RAM, two SSDs, and 10G adapters were an expensive toy.

Backups: Veeam for VMs and physical workstations. NFR license through Spiceworks. Veeam also replicates my vCenter Servers and domain controllers. Synology Hyper Backup backs up four times a day from one NAS to another (via rsync). A handful of VMs are replicated to different storage using vSphere Replication. One uses SRM in addition to vSphere Replication, just for practice. I was using Synology's Cloud Sync to back up to Google Drive for a couple of years, but recently Google cracked down on formerly "unlimited" accounts that were using terabytes of storage, so no more Cloud Sync. I've been using CrashPlan to back up about 11.5TB of data for years. I added a Backblaze personal plan to all that when I moved off Google.

I've tried Synology's Active Backup for Business. I was curious as to how it could back up from the free version of ESXi. It does! But it exposes your hosts by requiring SSL to be turned on and, as I recall, it needed a modification made to each VM's .vmx file.

The OptiPlexes... After decades of cast-off Compaq and home-built white box servers, I bought the T620. A Lenovo TS-something server and the OptiPlexes followed rather than preceded it. A 7010. A 9020. There was a Dell R710 and an R720 in there too, before the R740/R750.

The Lenovo was limited to 32GB and a single CPU, and it became dated fast. Buggy BIOS too. The OptiPlexes also maxed out at 20GB or 32GB. Too many boxes of that size would be required to run a full Microsoft and VMware stack. And no iLO or iDRAC. All the boxes had vPro, but that is a poor substitute.

Brandon, I've enjoyed your discussion of ESXi and your home setup for a while now. Appreciate the effort you put into things. Thanks for all your work.

 
Posted : 23/12/2023 5:35 pm
Brandon Lee
(@brandon-lee)
Posts: 408
Member Admin
 

@jnew1213 This is a fantastic explanation of the evolution of your lab. Loving reading through this. Definitely akin to a few of the steps I have made in my lab environment over the years. I'm curious about the G5s: are you running ESA with 4 drives or more? I was curious whether it was easy to get away with that in physical implementations of ESA. I know when ESA first came out I saw that as a requirement, but I was able to bypass that check in the nested config I played around with. Just curious if you ran into any weird issues getting ESA to adopt fewer disks, or were you able to fit that many in the G5s?

Also, wow, I can't believe the lease for the R740 was $115/mo; I was expecting much more than that. Thinking about home labs, leasing might be a great option in this price range, since every 3 years or so you could basically have new hardware and just trade out the payment.

On the HP minis, it sounds like you mainly have these to play around with ESA, is that right? Also, do you run into any issues powering down and powering back up your lab environment with the services you are running? Are you using some automation to do this gracefully, PowerCLI, etc? Am I right in thinking it sounds like you don't run anything 24x7? 

I need to spin up a current Horizon pod again and had a Horizon 8.x cluster running with UAGs and a Kemp Loadbalancer for LB. However, funny enough, my Horizon UAGs were popped when the Log4j vulnerability surfaced...I was busy patching some production boxes with Log4j, and my home UAGs got compromised, lol. I had to burn them down. Since then, I have just been using Twingate to connect remotely and not opening any ports to the outside.

I am also wondering what will happen soon with Broadcom wanting to sell off the EUC side, as mentioned in the news lately, which may mean we won't have access to the licenses much longer with VMUG and vExpert, but waiting to see what happens there. 

Thank you also for the kind words Jeff. The best part of creating content and putting it out there is meeting up with people in the community and fellow enthusiasts, like yourself. I'm really glad you joined up here on the forums, and I'm looking forward to exchanging more ideas! 👍 

 
Posted : 23/12/2023 8:41 pm
JNew1213
(@jnew1213)
Posts: 22
Eminent Member
Topic starter
 

Each of the ESA nodes has just the two SSDs in it: what used to be the cache (250GB) and capacity (2TB) drives. vSAN never complained about that. I lost a point for not having 25Gb in the hosts. I also had difficulty updating the cluster using an image; I had to do the machines the old-fashioned (preferable!) way, using baselines. Lifecycle Manager did not like the Samsung 2TB drive in each box. It didn't flag the smaller SSD. Someone gave me a Ruby vSphere Console command to have that check skipped. I made the change, but haven't tested it yet.
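
For what it's worth, the baseline route can be driven from the Update Manager cmdlets in PowerCLI rather than clicking through the UI. A rough sketch, with a made-up cluster name and one of the predefined baselines:

```powershell
# Baseline-style host patching via the Update Manager PowerCLI cmdlets.
# The cluster name is made up; the baseline is one of the predefined ones.
$cluster  = Get-Cluster -Name "vSAN-ESA"
$baseline = Get-Baseline -Name "Non-Critical Host Patches (Predefined)"

Attach-Baseline -Baseline $baseline -Entity $cluster
Scan-Inventory  -Entity $cluster
Get-Compliance  -Entity $cluster

# Remediation handles maintenance mode host by host
Remediate-Inventory -Entity $cluster -Baseline $baseline -Confirm:$false
```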

The minis were originally acquired to do "vSAN," whatever that entailed: cache and capacity drives, and an external 10Gb adapter. That was all the space in the little things. Fully configured, the four boxes were close to $8000. That was for roughly 6TB of usable space. I never had anything running on them except a Horizon pool. I don't trust vSAN for anything important. At least not without VMware support.

The vSAN boxes can be powered down and up at will. There's a wizard to do that, but at some point, if there are VMs on the cluster, they are going to go down in such a fashion that... well, it seems to leave objects or pieces of things when the nodes are powered back up. These oft-occurring objects are part of my distrust for the system. What are they and where did they come from? How can they be safely cleaned off?

The problem with leasing the server without the buyout at the end is that, at the end, you own nothing. The server gets returned to Dell, I don't know if there's any credit toward a new machine either. The purpose behind getting current hardware is that it would be usable for years to come; years after the lease expires. Three to five years is fine for the hardware in a corporate data center, but in a home lab, it should last much longer.

There is one other thing I think needs discussion, but I haven't seen much of it, if any, yet. Each new generation of Intel machine (the same may be true for AMD machines, which I don't use) uses more power, puts out more heat, is louder, and seems to be incrementally more expensive than the last. It looks to me like the last generally usable Dell for the home lab may be the Rx40. The Rx50 uses more than twice the power and is louder. The Rx60 is out of my price range, even for a stripped-down configuration.

On top of that, the machines can be so power-hungry and run so hot that at least one data center I know of has had to limit the number of such units in a rack due to power usage and the fact that the temperature in the "warm aisle" behind the rack reached 120 degrees F. That is not home lab-friendly.

The R750 and above have liquid cooling options for the processors, but there are still fans to cool memory and PCIe cards, and the liquid cooling requires one or more external pumps, some sort of manifold device, I believe a multi-gallon reservoir, and other accommodations. Not home lab-friendly.

The core environment, the R740, all the NASes, 30+ VMs, etc., that's all "production" and doesn't go down. I strive for 100% uptime, but living in an apartment, and being at the mercy of sometimes weird and always limited power, a single ISP, etc., it's nowhere near possible. Running two ACs in the summer is fine. Add the dishwasher, and it flips a circuit breaker downstairs, in a locked electrical room. Waking the super up at 1am to reset the breaker... oh, don't get me started!

When there's service required on one of the PowerEdge systems, the other is available for vMotion of the load. When a NAS requires maintenance, that's a bit more difficult. There's space for the VMs to move, but moving things from storage device to storage device, even connected at 25Gbit, takes time. Further, both of the big NASes are used for other things, including as backup and replication targets, media hosts and streaming, etc. The NASes are taken down individually, usually around 4am, and I have to make sure there are no jobs running that can't be recovered later.

I am not big on automation. So bringing things up and down is a manual process. I have been working on a runbook, on and off, that would detail the process, but it's not nearly at a useful stage of completeness.
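
If I ever do script it, the vSAN side of a power-down would look roughly like this in PowerCLI. Names are made up, it skips the vCLS system VMs, and the vSAN cluster shutdown wizard in vCenter remains the supported way to do this:

```powershell
# Rough sketch of a manual vSAN cluster power-down; names are made up.
Connect-VIServer -Server vcenter.lab.local
$cluster = Get-Cluster -Name "vSAN-ESA"

# 1. Gracefully shut down the powered-on VMs in the cluster (skip vCLS system VMs)
Get-VM -Location $cluster |
    Where-Object { $_.PowerState -eq "PoweredOn" -and $_.Name -notlike "vCLS*" } |
    Shutdown-VMGuest -Confirm:$false

Start-Sleep -Seconds 180   # give the guests time to power off

# 2. Put each node into maintenance mode without evacuating vSAN data
Get-VMHost -Location $cluster |
    Set-VMHost -State Maintenance -VsanDataMigrationMode NoDataMigration

# 3. Power the hosts off
Get-VMHost -Location $cluster | Stop-VMHost -Confirm:$false
```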

Putting a Kemp LB in front of my Horizon connection servers is a project for the coming year. I was looking at doing it with a virtual F5 LTM, but there's a cost involved, and the Kemp looks easier to administer. The F5 would be more practical learning, but... I am not part of the Networking team at work, so I would never get to touch an F5 in a hundred years.

Haven't thought about Log4j in a while. It didn't affect me too much at home; only CrashPlan used Java logging, and they said they weren't affected. Horizon was clean, and VMware Tools had an issue only in combination with something else. I did upgrade Log Insight, I think, but that may have actually been before the vulnerability was revealed. Work was another story. We were busy for months patching, implementing workarounds, patching again, running reports, etc. Every time VMware updated code, which they sometimes did without changing version numbers (!!!), only incrementing build numbers, we had to upgrade the affected software again.

Like everyone, we're waiting to see what shakes out as the months pass since the Broadcom merger. My VMware TAM and service manager are safe (for now), but on last Tuesday's weekly TAM call, we were introduced to two new teammates who replaced two others, no longer on the account. I hope Horizon remains something we can get from VMUG or a similar program at another vendor once it's spun off. Work is invested in Horizon to the tune of nearly 50,000 desktops running out of two data centers, so we're not going anywhere. Horizon is what I do. We have far less of an investment in Workspace ONE and none in Carbon Black.

Fun talking about this stuff with someone whose eyes don't glaze over after twenty seconds.

 
Posted : 23/12/2023 9:56 pm
Brandon Lee
(@brandon-lee)
Posts: 408
Member Admin
 

@jnew1213 Absolutely, these are the kinds of discussions that are so fascinating to me. It never ceases to amaze me how much we can learn just talking out home lab ideas and discussing the paths we have all taken. I always come away with fun and new things to think about and try to implement. You now have me thinking of leasing, haha. 

I am really wondering where things are headed with VMware, personally. It definitely feels like things are taking a different turn with Broadcom at the helm, as we knew it would. Hopefully, the same enthusiasm and culture in the community can remain intact. It really made me wonder where VMUG was headed after the acquisition, but so far, it seems things are holding together for now. 

You make excellent points on the CPUs coming out. Aren't things supposed to be the opposite with each new architecture? That used to be the trend: with every new process-node shrink, we benefited from less heat and lower power. However, I wonder if it is the sheer density Intel and AMD are going for, and their benchmark war with each other, that is leading to cramming so many cores into a package that heat and power draw have increased dramatically.

These are excellent points you bring up to think about for running home lab equipment. It has led me to think about the mini PC route for my next lab refresh, but I have found it challenging since mini PCs are so limited in terms of networking and storage. It is difficult to really do the "enterprisey" things we want to do and set up technologies based on mini PC-based labs. There is just nothing like a big fat enterprise server and what it can do. But the heat and power draw are definitely deterrents. I have liked many of the Supermicro IoT line of servers and thought about refreshing with a new set of those, but just haven't pulled the trigger on anything there as of yet.

I did install a mini split unit upstairs where my home lab currently sits to assist with cooling, and it has done a good job over the past two summers. Mini-splits I have found do have some quirks, though, and I have an extra long condensation line running through the attic that started to leak on me this past summer, leading to obvious issues.

Hopefully, Horizon will remain in the VMUG catalog and allow us all to still have our hands on it. I have been wondering, as you mention companies that are heavily invested in Horizon, what the future looks like there. Things are certainly getting interesting. You will probably like the post I put up this morning: Broadcom is canceling partner agreements with those that don't meet certain new requirements: https://www.virtualizationhowto.com/community/news/broadcom-cancelling-partner-agreements/

 
Posted : 24/12/2023 11:53 am
JNew1213
(@jnew1213)
Posts: 22
Eminent Member
Topic starter
 

It was maybe a year ago that Broadcom announced (or let slip) that they were going to concentrate their VMware business on their, I think it was, 600 largest customers. That left a lot of small and mid-size shops wondering if they have any kind of future with VMware.

Another issue for VMware users is the migration from perpetual licensing to subscription licensing. It's going to cause trouble when products you don't expect to stop working run out the clock and stop until you update a license key or something. I guess VMUG users are used to that, but many businesses/data centers probably aren't.

CPUs... it seems Intel is stuck on the 10nm process, which they call Intel 7. (Liars!!) AMD ran away with core counts, so Intel is shoring up its side of the shop with onboard accelerators. Have you looked at a list of SKUs for some of the latest processors? You can get them with a smorgasbord of integrated add-ons.

I posted a comment on one of your last videos mentioning that I recently purchased a couple of GMKtec G3 mini PCs. They turned out to be nice little machines. I got one for $99 (bare bones, added a 32GB SODIMM and 512GB M.2) and another for under $130 via Amazon (8GB RAM, 256GB M.2). They have a single 2.5Gb Intel NIC onboard that ESXi 8 recognizes out of the box. (Oddly Windows Server 2022 doesn't recognize the NIC). They are Alder Lake N100-based, so performance is okay for everyday office tasks. They top out at 32GB RAM (single SODIMM/single channel), so not going to run a whole home data center. They draw 6 watts at idle. So, really cute machines. But back in their boxes now, until I find a use for them or gift them to someone. Are they a substitute for something 60 times more expensive? Nope.

All of the mini machines, 1-liter boxes, single-board computers, etc., even the ones with a PCIe slot, like the ZimaBoard, won't power a 25Gb card or fill the bandwidth available from a 10Gb card. Most of those machines are based on older processors as well. By the time the design of the machine is done and they are manufactured, they're already a couple of generations old. The GMKtec is nearly up to date, with a 12th gen Intel, but the NIC is the older I225-V, not the I226-V. And there's only one.

My goal at home was to emulate what I support at work, so there is no substitute for enterprisey, as you call it, stuff. My first Dell, the T620, was in response to my getting a job where half the data center was Dell blades. Except for the blade chassis, which I couldn't support at home, a PowerEdge is a PowerEdge, and the T620 was very, very close in maintenance and operation to the M620 blades.

I noticed the split/ductless blower on the wall behind you. I would love one, but the co-op board told me no. No holes through the building's outer walls. I figure I could do it anyway if I run the piping through the air conditioner sleeves already present. Someday, maybe. Leaks from the attic are no fun, but... no attic here. Though some years ago the upstairs neighbor caused a ceiling collapse in the bathroom, necessitating a complete demolition and redo.

My company has almost a thousand ESXi servers on prem. We just recently configured a permanent AWS Cloud Connector, and we have a "cloud team" now too, but no move is imminent, and we get new orders of servers coming in from Dell every few months. Sapphire Rapids now. I mentioned the power and heat issue. Cloud is way too expensive for large-scale everyday use.

Regarding your post, I can see Broadcom whittling down anything that has to do with VMware that doesn't come directly from them, including their partner network. I am wondering if that might be an improvement over the way things are now. I've heard horror stories from small organizations trying to get a quote on VMware products, only to hear crickets chirping when they reach out to a reseller.

Times, they are a-changing. But that never stops. 2024 will be the start of my 40th year in IT. Not sure if I want to retire, but it sure is on my mind a lot!

 
Posted : 24/12/2023 3:09 pm
Brandon Lee reacted
Brandon Lee
(@brandon-lee)
Posts: 408
Member Admin
 

@jnew1213 that is a really good observation on the Intel side of things. I hadn't paid close attention to the details there until you mentioned it. Intel has definitely lost a lot of ground to AMD in the enterprise these past couple of years. It is interesting: a few years back, and you have probably seen this as well, when you got quotes from big-box vendors like Dell, AMD was never mentioned; now many of the quotes I have seen offer the choice of AMD or Intel, and it always enters the discussion early on. I'm sure this is not what Intel wants to hear. I'm not sure Pat Gelsinger can get them back on track quickly enough to make up the lost ground. However, I think there are a lot of enterprise folks out there who will always prefer Intel over AMD due to their years of experience with them.

I try to keep an open mind on that front, and I have definitely been impressed with the little minis with Ryzen procs that I have been running and testing. The little GMKtec K10 NucBox with the Ryzen 5800U has been rock solid running 10-15 VMs in my lab since I installed ESXi. I haven't seen any issues there.

I remember your comment now on YouTube, just now putting that together. I need to get a G3 in the lab and do some testing. As you mentioned, the N100s seem to be great little power-efficient CPUs for running just a few VMs or as a container host. They will probably do most of what a hobbyist would want to do running a few self-hosted services in Docker.

You mentioned the ZimaBoard. I have yet to get my hands on one, but it is on my list of gear to try. I'm wondering about the ZimaBoard NAS boxes I have been seeing Kickstarters for and how viable some of these devices will be for home lab use. You may have seen my post on the Aoostar with the 5800U proc: https://www.virtualizationhowto.com/community/home-lab-forum/aoostar-nas-with-ryzen-5800u-6-nvme-6-hdd-and-10-gig-network-home-server. I think we will see this space get more active in 2024 and beyond. It makes me wonder if devices like this will be good platforms; I'm surprised this one has 10 gig and NVMe. However, as others mention, performance is in question due to the older components.

Wow, you have me beat with 40 years in IT. I am working on 25+ myself, but I imagine you have seen all kinds of changes in your career, with probably more to come. Do you have plans to retire soon, or are you just taking it year by year?

 
Posted : 25/12/2023 9:09 am
JNew1213
(@jnew1213)
Posts: 22
Eminent Member
Topic starter
 

The retirement thing... This spring will have me in my current job for five years. That's two years longer than any prior job in my career, most of which were consulting assignments. I have a pension coming, with decent healthcare and other benefits. At the end of a career this is what you want. I got lucky.

There's no way I can afford to retire any time soon so, yes, taking the years as they come. If retirement ever happens, pension income and Social Security, with maybe some consulting on the side, have to make that possible. I was never a "saver." There's "rainy day" money, but not "take care of me" money. On the other hand, have you seen my toys?!

When I started my career in 1984 it was mainframe COBOL and Wang VS. There were green-screen terminals (with ashtrays next to them), line printers, 8 inch floppies, and a tape drive. There wasn't a PC for another year or two and that arrived in another department. Things have sure changed. Odd thing, I have mainframe on my resume, just to provide a background on my career and experience, and I still get the occasional job offer for something that I haven't touched in almost four decades. I wish recruiters would actually READ resumes!

At work, we're Intel only. I don't remember AMD at any job I've worked, but it's possible I missed it or have forgotten about it. Dell sent us a couple of AMD PowerEdge servers some time back to try out. They're running at idle, if they're running at all. I have never seen them in any spreadsheet, inventory, or monitoring system. I am Intel-only here too. I just don't care for AMD. I think my last AMD experience was an Opteron server that I built, way, way back. To anyone who asks, I just say that vMotion doesn't work across processor families and leave it at that.

The ZimaCube looks interesting. It's an expensive device though, and only six drive bays. I have told myself this once in the past and I violated my own statement twice, the last time a few months ago, but... NOTHING BUT SYNOLOGY going forward.

Robby at NASCompares built a low-end NAS into a Jonsbo case, and it looked like fun. What the heck? I needed separate storage for... err... Linux ISOs, so I got the parts, mostly from Amazon, nothing from AliExpress, and built the thing. Five 18TB refurb drives from ServerPartDeals.com (recommended!). I didn't go so cheap with motherboard and processor though. ASRock MB for 8th and 9th Gen Intel with a used Core i3 8100 from eBay. 64GB RAM and a 1TB M.2 for... I don't know what. Also a 2.5" SATA SSD for whatever OS I got to work on the thing. I put a 10G card in the one slot that it has.

TrueNAS, OpenMediaVault, Ubuntu Server. No luck with any of them. TrueNAS refused to allow the machine to be domain-added. Some time issue that even iX Systems couldn't figure out. OMV? Whoa. You need a plug-in to find plug-ins for that. Who designed such a thing? Ubuntu? Even with Cockpit and Webmin installed, I couldn't get it to do what I wanted, which is what Synology does out of the box: mdadm RAID with the BTRFS file system on top.

I ended up going with a release candidate of TrueNAS. Still working through various issues with SMB when it's domain-joined and with NFS, but it mostly works, at least for backup. Running ZFS. The thing cost me about $1600 with the drives. A Synology wouldn't have been much more.

The Aoostar has an interesting form factor. I believe there's at least one other vendor using the same thing. But only two drives and they seem crammed in there with a too-small (?) fan at the bottom. I think everyone wishes that Synology would license DiskStation Manager for use on cheap hardware. We'd all have cheap 8, 12, or 16 bay NASes with a great OS.

Great discussion! Most enjoyable.

Have a very happy holiday, Brandon.

 

 
Posted : 25/12/2023 12:53 pm