vSAN architecture recommendations
I have 2 existing ESXi hosts I'm looking to set up vSAN with. I have a few questions on best practices:
1) Since there are only 2 nodes, I read that I will need a "Witness" host or appliance. Right now, I don't have any other physical host to dedicate to being a Witness. Should I just find a mini PC and toss it on there? Or should I be okay just using a virtual appliance? And if so, I would need to put one appliance on each host, right?
2) I plan on using 58 GB M.2 Optane drives as vSAN cache drives. None of my VMs are super read/write heavy so I think that size should be okay? Or should I go with a slower but higher-capacity NVMe drive?
3) One of my hosts has a backplane with 6x 1TB 2.5" SATA SSDs. Currently the controller is in IT mode. Should I keep it as a JBOD and pass those to vSAN, or should I flash it to IR mode and create a RAID array and pass that to vSAN?
@malcolm-r vSAN 2-node is great. I ran it for a long while before moving to the 3-node cluster I'm running currently. The witness is best run as the virtual appliance, which is the recommended way. Also, one neat thing about the witness: it is the only supported "nested" version of ESXi. The witness appliance is just a nested ESXi installation that houses vSAN metadata.
Another side point, VMware, at least before Broadcom, would charge you for the license if you run the witness on bare metal. But if you run the virtual appliance, the license for that is free.
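Once the witness appliance is deployed and registered in vCenter (outside the cluster it will serve), you can sanity-check it from its ESXi shell. A minimal check, assuming SSH is enabled on the appliance:

```shell
# On the witness appliance's ESXi shell, after the 2-node cluster
# has been configured against it:
esxcli vsan cluster get
# In the output, check that the local node reports the witness role
# (e.g. "Local Node Type: WITNESS") and that the sub-cluster member
# list includes both data nodes.
```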
Also, vSAN only works with JBOD; it handles the disk provisioning itself, so keep that controller in IT mode rather than flashing it to IR and building a RAID array. You just present the disks to vSAN as JBOD disks and form your disk groups. There are two architectures with vSAN now: OSA (Original Storage Architecture) and ESA (Express Storage Architecture). I haven't had the chance to really run my production workloads on ESA, since for a while it required 25 GbE network connectivity.
Now, with 8.0 Update 2, they have what they call the AF-0 ready node profile that allows 10 GbE. I would probably just spin up the OSA architecture with the "disk groups" concept. You need at least one disk for cache and one disk for capacity. You can only have one cache device per disk group, but multiple disks in your capacity tier.
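The disk-group setup can also be done from the CLI on each data node instead of the vSphere Client wizard. A sketch, where the device identifiers are placeholders (your actual naa./t10. names will differ):

```shell
# Claim one cache device plus capacity devices into an OSA disk group.
# -s/--ssd is the single cache-tier device, -d/--disks the capacity
# devices; all identifiers below are placeholders.
esxcli vsan storage add \
  -s t10.NVMe____optane_cache_placeholder \
  -d naa.sata_ssd_capacity_placeholder_1 \
  -d naa.sata_ssd_capacity_placeholder_2

# Verify the resulting disk group layout.
esxcli vsan storage list
```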
@brandon-lee awesome, thanks. do i need one Witness appliance per host? or should 1 for the whole environment be fine?
@malcolm-r Yeah, the cool thing with the witness host is that one will service your whole environment. Also good to know: they now allow one witness host to service multiple vSAN clusters as well, which is great!
@brandon-lee okay i think last question for now: can i mix SATA SSDs and NVMe SSDs in the capacity tier? or should i keep it all the same type? i won't be using any spinning drives.
@malcolm-r No worries on the questions! You can mix drive types, but I would use your NVMe for cache, use your SATA SSDs for your capacity tier, and keep everything consistent. If you mix drives in the capacity tier, you would most likely see odd behavior and performance differences across your hosts.
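Before claiming anything, it's worth checking how ESXi actually sees each device, since vSAN will only claim disks it considers eligible. Two quick checks on a data node's shell:

```shell
# Show devices with size and SSD detection.
esxcli storage core device list

# vdq reports per-device vSAN eligibility, including the reason a
# device is considered ineligible (existing partitions, etc.).
vdq -q
```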
alright, so i'm trying to get this set up and the vSAN config isn't liking my setup. i'm 99% sure it's because with my setup (2 physical hosts) i just have everything in one cluster. i didn't really plan my vcenter architecture. i just kinda tossed everything into one pile.
i'm thinking i need to re-assess my layout.
@malcolm-r I like your naming convention by the way 😀 Hey, can you share a screenshot of what you are seeing when you try to select the vSAN witness appliance? Are you getting an error of some sort?
yep, if i go in to configure vSAN, selecting 2-node, i select my disks, then i go to select a witness host and nothing shows as available:
@malcolm-r It has been a while since I deployed 2-node. I bet they have added checks so the witness can't be in the same cluster it is servicing, which is a good thing, but for a home lab it makes things a bit more challenging. Do you have another host you could place the appliance on, by chance?
@brandon-lee not right now. but doesn't needing a 3rd host defeat the purpose of a 2-node vSAN cluster?
@malcolm-r It does make it a little less appealing for home lab now that they have added that as a requirement. For many enterprise customers, though, it is a good fit: they may have edge environments where 2-node is all that is needed, plus a central DC where they can place the witness node. It can still make sense for a home lab, too, if you have another standalone host and want a shared datastore for things like vMotion during maintenance without external storage.
I have been waiting to see if VMware would come out with a way to host this in AWS or somewhere as an offsite witness appliance. However, have not seen any news on that front in quite a while.
@brandon-lee i guess i could nest an esxi host under my main host and connect that to vcenter and put the appliance there. i'm not sure if the networking will get funky if i try that.
yeah you could definitely try doing that...that is good thinking outside the box
@malcolm-r also nested networking can be a mind bender sometimes, but you could just create your nested ESXi host on the same port group as your vSAN traffic and the witness appliance should pick up the same untagged traffic.
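If the nested networking does get funky, another angle (assuming vSphere 6.5 or later) is witness traffic separation: tag a VMkernel interface on each physical data node for witness traffic, so the witness appliance only needs reachability on that network rather than on the vSAN data network.

```shell
# On each physical data node; vmk0 here is a placeholder for
# whichever vmknic should carry the witness traffic.
esxcli vsan network ip add -i vmk0 -T witness

# Confirm the traffic type assigned to each vmknic.
esxcli vsan network list
```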