Guys, could this possibly be the most worthy mini PC for a home server? I can't believe the specs. Check these out:
- i9-12900H or i9-13900H
- DDR5
- PCIe 4.0 x16 slot
- 2× 10GbE SFP+ ports
- 2× 2.5GbE ports
- U.2 NVMe SSD support
https://store.minisforum.com/products/minisforum-ms-01?sscid=c1k7_10my0v&variant=44385972158709
wow that is bonkers. i'm super tempted to grab one even though i have no use for it right now....
wait.... and it has out of band management?????
@malcolm-r I didn't even catch the out-of-band management, even better! Wondering how the performance and efficiency cores behave with ESXi and Proxmox. I know these can cause issues. This thing with a Xeon or a Ryzen proc would be perfect in many ways.
@jnew1213 have you ever played around with P+E cores with ESXi? Curious about your findings. I recently reviewed a GMKtec M3 with an i5 with P+E cores and had some issues with ESXi and Proxmox. There are some boot strings you need to add to prevent purple screens.
I haven't yet run ESXi on any machine with P+E cores. ESXi's scheduler, being unaware of the different cores, is going to treat them the same when assigning them to a workload. I don't see how this is a good thing.
I realize the "thing to do" is to tell ESXi to ignore differences in cores at boot time, but I think disabling efficiency cores in the machine's BIOS and letting ESXi schedule using only performance cores may be a better idea, depending on workload and whether any consistency is desired across application instances.
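For anyone curious about the boot-time workaround being referenced: the kernel option below is the one I've seen circulated for getting ESXi past the purple screen on hybrid-core (P+E) CPUs. Treat this as a sketch from memory, not official guidance — verify the setting name against current VMware documentation before relying on it, as it bypasses a CPU uniformity check on purpose.

```shell
# At the ESXi installer boot screen, press Shift+O and append this to
# the boot options to get past the purple-screen CPU uniformity check:
cpuUniformityHardCheckPanic=FALSE

# On an installed host, the same kernel setting can be persisted with:
esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
```

Disabling the E-cores in BIOS, as suggested above, avoids needing any of this.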
Example: we compute Pi to 100,000 digits in a VM running on an Alder Lake Core i9 with 24 cores (P+E). The computation takes, say, 7 minutes. We do it a second time and it takes 12 minutes, because this time, ESXi has given the VM more efficiency cores during more scheduling instances. Just by chance. There must be some workloads that you don't want to schedule against efficiency cores. Ever.
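To make that benchmark concrete, here's a rough sketch of the kind of CPU-bound Pi computation I mean. This is an illustrative pure-Python Machin-formula version (the digit counts and timings in the example above are hypothetical), but it shows why the workload is scheduler-sensitive: it's a single long stretch of integer arithmetic whose wall-clock time depends entirely on which core it lands on.

```python
import time

def pi_digits(n):
    """Return pi * 10**n as an integer, via Machin's formula:
    pi/4 = 4*arctan(1/5) - arctan(1/239), using fixed-point
    integer arithmetic with 10 guard digits."""
    def arctan_inv(x, one):
        # arctan(1/x) scaled by `one`, summing the Taylor series
        # until terms underflow to zero.
        total = term = one // x
        x2 = x * x
        k = 3
        while term:
            term //= x2
            if k % 4 == 1:
                total += term // k
            else:
                total -= term // k
            k += 2
        return total

    one = 10 ** (n + 10)  # 10 guard digits absorb truncation error
    pi = 4 * (4 * arctan_inv(5, one) - arctan_inv(239, one))
    return pi // 10 ** 10

start = time.perf_counter()
result = pi_digits(5000)
print(f"computed {5000} digits in {time.perf_counter() - start:.3f}s")
```

Run the same computation in a pinned VM versus an unpinned one and the elapsed time will swing depending on whether the run landed on P-cores or E-cores, which is exactly the inconsistency described above.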
As an aside, the Alder Lake N100 in the little GMKtec mini that I've been playing with has only efficiency cores. So that eliminates issues with different core types. ESXi runs just fine on it without any boot string requirements. Only the TPM is not usable (no VIB for it, it seems).
As another aside (can you have too many asides?), my desktop PC happens to be the above described Alder Lake Core i9 with 24 cores. It's an Intel NUC Extreme, running Windows 11. I don't think it'll ever be running ESXi, even when eventually retired from desktop use.
The MS-01 looks nice. But for its price, I can pick up two Dell PowerEdge R730 systems, which we know run ESXi really, really well. Just sayin'.
@jnew1213 fair point regarding the R730s. but for folks where power/noise is a concern i think these are a great upgrade path.
Okay. I want one. 96GB RAM. Windows 11. Please send!
You are a kind and generous soul.
@malcolm-r @jnew1213 It has arrived! Didn't get one as a sample unit, just pulled the trigger myself for testing... review coming soon. Going to be testing with 64GB of DDR5 and a couple of EVOs I have. Excited to dive into this one.
@jnew1213 @malcolm-r Just got this posted this morning. Here is my review of the MS-01, initial thoughts on testing, etc:
yessss. i'm pretty sure i'm going to pick up 2 of these (probably with 12th gen i9) to replace my big dual Xeon 2U server. It's 20c/40t total, but even with e-cores disabled i'll have 12c/24t of 12th gen which will outpace the Xeons quite nicely.
the memory capacity is a downside, but i think i can make it work. my current system is using about half of its 256GB of RAM, and i've been generous with allocations, so there's room to trim. plus i still have my 3rd host i can move things over to if needed.
Good review, Brandon. In both senses. You liked the machine and you covered it well.
I am left wondering who this machine was designed for. Who's its target audience?
It has a non-server processor in it, one that almost requires being partially turned off for server use. It maxes out at 96GB RAM, unofficially, using non-binary (48GB) DDR5 DIMMs. Better than 64GB, of course, but short of the 128GB or more that even a small server would support. It has vPro, which is a corporate management feature. It takes three M.2 sticks, or two and a U.2, but not all U.2 drives fit. I like that the SSDs are actively cooled. I bet the machine is a little furnace with three SSDs installed.
Then there's its networking capability. Two SFP+ ports and two 2.5Gb Ethernet jacks (Intel based). Very flexible. Server-ish more than desktop-ish, though 2.5Gb is not really a server standard. I'd like to see RJ45 jacks instead of SFP+ sockets. It's much more likely that 10Gb would run over copper than fibre, and not having to purchase transceivers of either type would be a cost savings.
The machine seems to be a chimera of server features and desktop features. It doesn't seem to be at home on most desktops and it falls way short of anything that would be considered for a data center.
I do see the machine fitting into a home lab, either as a single server or as part of a short stack of nodes in a cluster of some sort.
Of course, they are not inexpensive, and choices for low power, low cost home lab "servers" are increasing in number seemingly every day.
I wonder what goodies the MS-02 will pack when its time arrives.
@jnew1213 thank you! As always, appreciate your feedback on the posts and the hardware itself. I am with you on feeling torn about these machines. They have just enough of the right things that you're left wanting more on the hardware side, something closer to a proper server, which they are not.
I am imagining a world where the MS-02 has a proper Xeon proc, ECC DIMMs with a max of at least 512GB, and tons of cooling and storage options. Hey, I can dream, right?
Definitely some things I think could be better with this little machine. However, I will say it does excite me as I think this may be a turning point of other hardware to follow and I am betting (hoping) someone will "go for it" and make a proper workstation class machine in this form factor that will have everything wanted for home lab.
The big appeal of these, I think, will be the power/noise footprint vs a proper server. I would definitely like to take a cluster of 4 or so of these for a spin with vSAN and all-NVMe storage and see what kind of IOPS the i9 can push with HCIBench. In my Supermicros, I was CPU bound with the old Xeon-Ds, but still pushed 100,000 IOPS.
2024 is going to be interesting from a mini PC perspective. Hoping the MS-01 will bring in a new era of options.