Mini PC & Server

Minisforum MS-01 Review: Best Home Server Mini PC Early 2024

In this Minisforum MS-01 Mini PC review, see why it is a great choice for virtualization, virtual machines, and containers in your home lab, with 10 gig and 2.5 gig networking, NVMe storage, a PCIe expansion slot, and more!

I was excited when I first saw the teaser of the Minisforum MS-01 mini PC. For a long while, I have been waiting for mini PCs to progress to the point where they were very similar to other server hardware that I have been using in my home lab environment. Let’s look at my take and review of the Minisforum MS-01 for home labbers running virtualization, virtual machines, containers, etc. As a note, I paid for the MS-01 out of my own pocket, so my thoughts are my own.

State of Mini PCs up until now and my current home lab

For quite some time now, mini PCs have been stuck with 2.5 gig networking and very few storage options or expansion slots. Currently, I have Supermicro servers that are getting long in the tooth, configured with 10 gig network adapters, NVMe storage, vSAN, and other features. The Minisforum MS-01 has changed all of that, with a few caveats. I think it shows the promise of where home labs and mini PCs can be in the very near future, with tons of hardware acceleration, making it a great server or workstation PC.

Minisforum MS-01 review unit hardware specifications

Processor: Intel Core i9-13900H
RAM options: Up to 96GB (unofficially)
Storage options: (3) M.2 NVMe slots, U.2
Expandable storage: Supports up to 24TB SSD
Network ports: 4 ports
USB ports: 2 USB4 ports
Cooling system: High-speed cooling fan
Additional slots: PCIe 4.0 x16, M.2 NVMe SSD
Technology: Intel vPro compatibility
Networking speed: (2) 10Gbps SFP+, (2) 2.5Gbps RJ45
Display support: 8K, triple displays
Use case: Home lab server, workstation
Minisforum MS-01 specs overview

Paradigm shift in home labs

I, and I think many others, have been waiting for a mini PC like the MS-01. It checks off most of the boxes that have kept many of us from using mini PCs for home labs. It is a compact mini PC that includes not one but two 10 GbE SFP+ ports, along with (2) 2.5 GbE ports and (2) 40 gig USB4 ports.

In addition to that, you have (3) M.2 slots, with the option to run a U.2 drive in the first slot alongside the other (2) M.2 SSDs. Since it runs DDR5, you can max the memory out at 96 GB, which is much better than the 64 GB ceiling with DDR4.

The Intel Core i9-13900H has plenty of horsepower, although it is a hybrid processor, and we will get to the hurdles that come with that in just a bit. So, for virtualization purposes, this thing is spec’ed out the way most of us would like to see.

Also, it is great that Minisforum offers this in a barebones option without an operating system, as it allows you to configure it with the RAM and storage you want without paying the factory premium for those options. Keep in mind that with the barebones unit, you don’t get an OS license, since it doesn’t come with a preinstalled drive, whereas most mini PCs ship with a low-end, no-name NVMe SSD and Windows 11 Pro included.

Below is an exploded view of the internals of the MS-01.

Exploded view of the minisforum ms 01

High-Speed Networking and Connectivity

With the MS-01, networking and connectivity are not lacking. Seeing a Mini PC come out with this kind of connectivity and networking is phenomenal, especially considering what we have been used to with mini PCs before.

It includes multiple Ethernet adapters: high-speed connectivity with the (2) 10 gig SFP+ ports and multi-gig connectivity with the (2) 2.5 gig ports. Keep in mind also that with the expansion slot, you have the ability to add something like a 25 gig network card if you want.

The multitude of USB ports, including the (2) 40 gig USB4 ports, provides broad compatibility with peripherals and other devices. Not that it matters much when running a server, but it also includes wireless and Bluetooth connectivity.

Viewing the connectivity and ports on the ms 01

Excellent cooling to handle the heat

This thing is configured with a Core i9 processor that has the potential to generate some heat, along with the (3) NVMe slots. However, I really like how the fans and cooling are configured in this unit: it has the large fan on the front side where the processor is located and the high-speed cooling fan on the underside to actively cool the NVMe SSDs you have installed.

Cooling configuration on the ms 01

Great storage options

One of the limitations with many of the mini PCs that I have reviewed and had in the home lab so far has been storage. Most of the minis I have had my hands on up to this point have 1 or maybe 2 NVMe slots, or they may have 1 NVMe slot and a 2.5-inch solid state drive bay.

Storage is another highlight of the MS-01. Equipped with NVMe SSDs, it offers lightning-fast data access speeds, and the flexibility to expand storage up to the rated 24TB lets you store large volumes of data without compromising system performance.

Below is the photo from Minisforum showing the 3 NVMe drives installed in the MS-01.

Nvme drive configuration on the minisforum ms 01

Below is the photo from Minisforum showing the U.2 drive and the (2) NVMe SSDs.

U.2 and nvme drive configuration on the minisforum ms 01

Discrete graphics possibilities

The MS-01, with its PCIe 4.0 slot, has the potential to support discrete graphics cards, something we don’t see often with mini PCs in general. This will be a great option for those who want this unit for gaming, graphic design, video editing, or other use cases along those lines that need a high-end GPU.

Intel vPro for Advanced Management

Intel vPro technology in the MS-01 allows for advanced management capabilities, such as out-of-band management. This is great for those who want to use this as a home lab server and have that IPMI-like experience, as it provides better control over the system along with improved security and remote management possibilities.

Below you can see the Intel i226-V and i226-LM network adapters; the i226-LM is the vPro-capable NIC that provides the out-of-band management path for the MS-01.

Viewing the intel network adapters for out of band management
Intel me password configuration in the minisforum ms 01
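As a rough pointer for anyone new to vPro (this is standard Intel AMT behavior rather than anything documented specifically for the MS-01, so verify it on your unit): once AMT/ME is provisioned with a password, the built-in web console is typically reachable over the vPro NIC at:

http://<amt-ip-address>:16992 (or https on port 16993 when TLS is configured)

From there, or from a free AMT console such as MeshCommander, you get power control and, depending on the AMT feature set, remote KVM-style access even when the OS is down.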

Unboxing the Minisforum MS-01

Let’s look at the unboxing of the Minisforum MS-01 mini PC and how it arrived. When I purchased mine, the only option left was the Core i9-13900H barebones configuration, so that is what I opted for. Honestly, in the home lab, I think the Core i9-12900H would serve you just as well.

The ms 01 in the box

The MS-01 is secured in plastic inside the box, in a foam compartment.

Plastic packaging

After removing the plastic, this is a view of the back of the MS-01. You can see the SFP+ cages on the left-hand side and the 2.5 GbE ports next to those, followed by the USB4 ports, HDMI, and then (2) USB-A ports, with the barrel connector for the power brick all the way on the right-hand side.

Back side of the minisforum ms 01

On the front, from right to left, we see the power button, the audio jack, a USB 3.x port, and then (2) USB-A ports.

Front view of the ms 01

For a comparison of the size of the power adapter, here is the MS-01 next to the power brick for the unit.

The power brick next to the ms 01

The MS-01 comes with a U.2 adapter plate that plugs into one of the M.2 slots on the underside of the unit, which we will look at below.

U.2 adapter card

To remove the unit from the outer housing, you press the tool-less release button and the chassis slides out. Here you can see the PCIe 4.0 expansion slot, which is great to see in a mini PC form factor.

After removing the cover and viewing the expansion slot

Another angle of the top of the unit outside of the housing.

Another view of the expansion slot

To get to the RAM slots, you have to remove the (3) little screws in the fan assembly, and it pops out to reveal the (2) SO-DIMM slots. These will take an unofficial maximum of 96 GB of RAM with (2) 48 GB DIMMs.

With the cpu fan removed revealing the ram slots

Moving to the underside of the unit, there is a fan assembly that is also secured with (3) screws at the corners. Remove these and it reveals the (3) M.2 slots.

Removing the fan underneath and revealing the nvme slots

As you can see, the U.2 adapter plate I have inserted below fits into the first M.2 slot. Then, (2) of the screws from the fan assembly will secure the U.2 plate in place once you have your drive mounted.

Showing how the u.2 adapter plugs into the nvme slot

One disappointment here is that normal-height U.2 drives, like this Intel Optane drive I received from the vExpert program, won’t fit. Actually, they fit in the slot okay, but you won’t be able to get the fan assembly back on or the unit back into the housing.

An optane u.2 drive is too tall for the case

Another point to note is the switch on the unit where I have the arrow pointing. This switch must be in the M.2 position if you have that slot populated with an M.2 NVMe SSD. If you accidentally leave it in the U.2 position, it will damage, a.k.a. “fry,” your M.2 SSD, as the slot supplies higher voltage when switched to U.2.

The power switch for u.2 vs m.2

The BIOS 

I have to say that I wasn’t too crazy about the “wizardized” BIOS on the Minisforum MS-01. I am old school when it comes to BIOS configuration and like the classic “blue screen” BIOS menus, which just seem easier to navigate and more responsive.

Some may like the more wizardized BIOS screens though.

The bios of the minisforum ms 01

A note on Secure Boot and installing from USB

As a note, if you want to install a hypervisor like VMware ESXi or Proxmox, you will likely create a USB installer from the ISO file using Rufus. However, this will not be compatible with Secure Boot.

When I first unboxed the unit, I had to disable Secure Boot to get to a point where I could install from USB. If you don’t, you will see this type of screen.

Secure boot violation

However, I ran into an issue where it wasn’t obvious how to disable Secure Boot. 

Setting up security on the minisforum ms 01 bios

As it turns out, you can’t disable Secure Boot unless you have set both the Administrator password and the User password. I set both of these and was then able to disable Secure Boot.

Setting a password to disable secure boot

As you can see on the Security screen, the Secure Boot option is no longer greyed out. It will stay greyed out until you set the administrator and user passwords and then save and reboot back into the BIOS. After that, you can disable Secure Boot.

Getting into secure boot to disable on the minisforum ms 01

VMware hurdles

When installing VMware ESXi, you will run into the same issue as with all mini PCs that have hybrid processors: you will receive a purple screen of death about the CPU mismatch.

Purple screen of death on the minisforum ms 01 for vmware esxi

You will need to reboot the MS-01, and if you leave the E cores enabled, you will need to add the following boot option as you boot the installer and then again on the first boot-up after installation.

cpuUniformityHardCheckPanic=FALSE
Adding the kernel boot parameter on the ms 01
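For reference, here is roughly what that looks like at the boot prompt (standard ESXi installer behavior rather than anything specific to the MS-01, so verify on your version): press Shift+O when the boot loader appears, leave whatever options are already listed in place, and append the setting to the end of the line:

<existing boot options> cpuUniformityHardCheckPanic=FALSE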

After you are booted into ESXi, enable SSH, remote into your MS-01, and run the following command to make the boot option persistent:

esxcli system settings kernel set -s cpuUniformityHardCheckPanic -v FALSE
Making the kernel boot parameter persistent in vmware esxi
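If you want to double-check that the setting stuck after a reboot, you can list it back out (this is standard esxcli syntax, nothing MS-01-specific):

esxcli system settings kernel list -o cpuUniformityHardCheckPanic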

Now we can reboot at will, without needing to enter the boot option each time.

Viewing the boot information during the vmware esxi boot

Here is the MS-01 fully booted after ESXi is installed.

The minisforum ms 01 fully booted into vmware esxi

Also, a beautiful sight in ESXi is that all the network adapters are discovered without issue. We see the (4) network adapters here, made up of the (2) 10 gig adapters and the (2) 2.5 gig adapters.

The four network adapters are found in vmware esxi
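If you prefer the command line, the same adapters, along with their drivers and link speeds, show up with the standard NIC listing command (generic ESXi, nothing MS-01-specific):

esxcli network nic list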

Enabling and Disabling Efficient Cores

I wanted to see what ESXi reports when enabling and disabling the Efficient cores. With everything enabled and the ESXi boot option in place to allow it to boot, we can see we have a total of 41.9 GHz available, which is not bad.

However, you will note a few interesting things. We see that the number of logical processors equals the number of cores per socket.

Viewing the cpu information with both the efficient cores and performance cores enabled

This lines up with what we see if we run the following command:

esxcli hardware cpu global get

We see that even though Hyperthreading Supported is true and Hyperthreading Enabled is true, Hyperthreading Active is false.

Getting cpu information from the command line with esxcli

The reason for this is that, due to the non-uniform (hybrid core) architecture of the Core i9-13900H, ESXi doesn’t support hyperthreading on this CPU, so it is disabled.

E Cores disabled

The Minisforum BIOS, under the Advanced menu, allows you to disable some or all of the E cores using the Active Efficient-cores option.

Disabling the efficient cores

With the E cores disabled, we see something different. The total CPU capacity is now displayed as 18 GHz, since only the 6 Performance cores are counted and hyperthreading does not add to the total.

However, we see we have 6 Cores per socket and 12 Logical Processors.

Only the performance cores enabled on the minisforum ms 01

10 gig and 2.5 gig networking in ESXi

I am glad to say that I had no issues with the networking in VMware ESXi with the Minisforum MS-01. Everything worked flawlessly with the 10 gig and 2.5 gig adapters as you would want. The box linked up to my Ubiquiti 2.5 gig 48 port switch and my Ubiquiti 10 gig switch without any quirks.

The minisforum ms 01 10 gig and 2.5 gig network adapters connected in vmware esxi

Power consumption

Another interesting test I wanted to perform was power consumption. The first test was with the E cores disabled. I created 41 virtual machines, all running Ubuntu Server 22.04 LTS. Even with 41 VMs, the CPU was barely stressed, although, in fairness, they weren’t doing a whole lot.

Running 41 virtual machines on the ms 01

However, using PowerCLI, I powered off all the VMs and then powered them all back on at the same time.

Stopping all the virtual machines using powercli

Next, powering them back on.

Starting all virtual machines at once using powercli
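For anyone who wants to repeat this test, here is a minimal PowerCLI sketch of the kind of bulk power operations involved. Treat it as a reconstruction rather than the exact commands I ran, and note that the host address is a placeholder:

# Connect to the standalone ESXi host (replace the placeholder address)
Connect-VIServer -Server <esxi-host-ip>

# Hard power off every VM on the host without prompting
Get-VM | Stop-VM -Confirm:$false

# Power them all back on at once
Get-VM | Start-VM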

As expected, I was able to saturate the CPUs with this operation. Impressively, I didn’t really hear the MS-01 ramp up in noise during this time, though in fairness, I have other, older servers running that may have simply drowned it out.

The cpus are at 100 percent during start of all vms

Power usage shot up to 120 watts.

Full power draw at 120 watts

However, this only lasted for around 10 seconds. After that, it quickly ramped back down to 44 watts.

With 41 vms running at idle

After enabling the E cores, the MS-01 still draws 120 watts at maximum. It got back to normal (40-45 watts) quickly once the VMs settled down.

Full power draw at 120 watts with both e cores and p cores enabled

This was when I caught the 120-watt draw during the power-on operation. Below, with the E cores enabled, I didn’t catch it while it was completely at 100%, but that may have just been timing.

After reenabling the efficient cores

Video review

Is this the best home server in early 2024?

Out of the units that I have seen that can be purchased at this time, yes, I think this is the best mini PC you can buy if you want a home lab server. The cost is right, the features are excellent, and the hardware and expansion capabilities you get with this unit are simply not found in any other mini PC model that I have seen.

Is this mini PC perfect? No. Here is my laundry list of what could potentially be awesome improvements to the MS-01:

  • A uniform Xeon processor option for a true “workstation” class machine (this would be killer). Even a Ryzen counterpart would be interesting, since those are uniform processors.
  • DIMM slots instead of SO-DIMMs. It seems there is real estate inside and on the board where this could be possible in this form factor, and it would allow for a much higher maximum memory capacity. ECC memory support would also be excellent, in line with the Xeon processor option above.
  • The ability to take full-height U.2 drives, like the Optane drives I have
  • No easy-to-bump physical switch for switching between U.2 and M.2
  • More tool-less design inside the MS-01, such as for removing the fans

However, all of this considered, I applaud Minisforum for making what I think is the best mini PC server option you can buy, and it would be my first choice right now. It would also make a great mini workstation running a GeForce or AMD add-in graphics card.

Wrapping up

Wrapping up this Minisforum MS-01 mini PC review, I think this is a great little PC, and it will make a fantastic server running enterprise hypervisors. Hopefully, this information will help you decide whether it might be a good option for you. Things I love are the 10 gig network adapters, the 2.5 gig network adapters, the (3) NVMe slots, DDR5, and PCIe 4.0. The Core i9-13900H is a very powerful CPU; however, it presents challenges, as all hybrid CPUs do with modern hypervisors like VMware ESXi. Let me know in the comments what you think of this mini PC. Also, check out the forums here, where we have a thread on the MS-01 going as well. Chime in if you want to put in your two cents there.


Brandon Lee

Brandon Lee is the Senior Writer, Engineer, and owner at Virtualizationhowto.com and a 7-time VMware vExpert with over two decades of experience in Information Technology. Having worked for numerous Fortune 500 companies as well as in various industries, he has extensive experience across IT segments and is a strong advocate for open source technologies. Brandon holds many industry certifications, loves the outdoors, and enjoys spending time with family. He also goes through the effort of testing and troubleshooting issues, so you don't have to.

17 Comments

    1. Yolo,

      I agree with you. VMware hasn’t been designed to work with them. I am doubtful at least in the near future that enterprise CPUs will have these types of hybrid core configurations and VMware may never fully support them. Proxmox with the 6.x kernel from what I understand is able to schedule them but you still don’t really have any control over which cores it picks for which VMs. I am hoping this capability will be built in, maybe in a future release of Linux virtualization.

  1. Dear Brandon

    Thank you for your great review.
    I'm waiting on the same box, and I bought it for exactly the same reasons as you, basically to decommission the power-hungry ESXi server that I have at home now.
    I have a question because it's not clear in your review. If you don't disable the E or P cores on the machine, what really happens with the VMs? Do they run fine or do they crash? I don't care which cores ESXi gives to each VM; I'm just trying to understand what happens.

    Thx
    Chris

    1. Hey Chris! Thank you for the comment. Glad you like the review of the MS-01. So far it has been a great little machine. Good question on the VMs. Actually, nothing bad happens when you enable the E cores. The VMs are stable and run as expected. The same goes for disabling the E cores. As long as you set the VMware kernel parameter and make it persistent, you should be fine. Let me know if this helps. Thanks again.

      Brandon

      1. I’m guessing your test VMs only had one vCPU each? From what I’ve read, there are scheduling conflicts that can lead to crashes when you have a multi-core guest that jumps between (eg) two P-cores and a P-core and an E-core. William Lam has a blog post on potential workarounds with CPU Affinity.

        1. Squozen,

          Thank you for your comment! No crashes that I have seen when not managing the cores manually. However, I need to do more extensive testing.

          Brandon

  2. Secure Boot: you can just enter the BIOS before you try to boot any OS. Boot up and go straight into the BIOS, and you will see the option is editable.

  3. Great review Brandon.

    Question on the primary NVMe slot and the specs. They state a 2TB max on the website. Can a larger-capacity M.2 drive with double-sided memory chips fit, or will it only support single-sided M.2 drives up to 2TB?

    1. AF, thank you for the comment. Good question. I would test this to confirm or deny but I don’t have anything larger than a 2 TB drive in the lab 🙂 Will keep you posted though if I get my hands on one.

      Brandon

  4. Brandon, howdy!

    So, I’ve been thinking, and I would appreciate your input.

    For the summer, what if I spent a grand, or a bit more, on an MS-01 with a U.2 of some size, maybe an M.2 as well, and 96GB of DDR5.

    I am wondering if I can get myself down from three big NASes and a PowerEdge R740 running all the time to just the little PC. For the summer.

    My reasoning is as follows. Electricity has gotten more expensive. The rack puts out a lot of heat. I need to run the AC anyway in the summer, but it does work longer and harder with all the equipment in the room.

    I doubt it, but could the machine pay for itself in two or three summers?

    This would be an ESXi 8.0U2 host, connected to a BliKVM for management. (I have both PiKVM and BliKVM finally working here, the PiKVM fronting an 8-port HDMI KVM. That’s a whole story.) I plan to disable the processor’s E cores in the BIOS.

    I’d be replacing the two Scalable Xeon 2nd gen Silver processors in the R740 with a single 13th gen Core i9. Load is about 30 VMs. Mostly Windows and VMware’s Photon appliance VMs.

    I would be losing my 25Gb connections between PowerEdge and NAS, but… I will do dual 10G and “suffer” the performance hit for a few months.

    Some stuff would break shutting down two NASes. Veeam and some other stuff, I am sure. This would be a major project.

    But then… new hardware. Hmmmm

    Your thoughts, sir?

    1. Jeff,

      I think this is definitely a worthy goal and something I am planning on tackling myself. I have one MS-01 currently, but would like to add 2 more to have 3 nodes and collapse down my old Supermicro servers that are sucking quite a bit of electricity. I think the MS-01 would house the 30 VMs fine. One thing to note is that the U.2 drive needs to be one of the slim drives. I was disappointed I couldn’t use my Optane drives in there, but it has the 3 NVMe slots as well. Also, I haven’t personally tested it, but the PCI-e slot I believe can house some 25 GbE cards. I think ServeTheHome did some testing with a few 25 GbE cards. Just a thought on that front. There may be model numbers listed there to play around with. I do know there are some weird things with bifurcation that I ran into when plugging in an add-on card, and it cut my memory in half on the MS-01. But there was a Minisforum employee who posted on the forums here about the issue with bifurcation, and there is an easy fix (albeit a little unorthodox) of placing tape over a few pins. You can read that thread here: https://www.virtualizationhowto.com/community/mini-pcs/does-the-minisforum-ms-01-support-bifurcation-with-the-pci-e-slot.

      Even if the machine can’t pay for itself in 2 or 3 summers, I think you will have fun in the process, have less noise, and less heat, along with lower power draw 😉

      Brandon

      1. Brandon, thanks for the reply.

        I went ahead and ordered the MS-01 a week ago. It hasn’t shipped yet. Argh!

        I have acquired, during a period of extreme haziness, both an Nvidia T600 (low profile, thin, active heatsink, 40 watt) video card to pass through to a Plex server AND a Mellanox ConnectX-4 25G card. Oops! Two cards, one slot.

        I am going to try the video card and hope that works. If not, I might put the 25G card in while I look for another GPU. I purchased only one 25G transceiver for the Mellanox card, and two 10G transceivers for the MS-01.

        The last time I bought one of these Mellanox cards, it was nearly $400. This latest one was $35. Jeez.

        Two Samsung 2TB M.2 SSDs for VM storage are waiting here. They’ll go in the two fastest slots in the machine. Another 512GB M.2 will house ESXi from the slowest slot.

        96GB RAM is on the way also.

        As you said, it’ll be a fun project.

        1. Jeff,

          That is great! I will be looking forward to your experience and feedback on the MS-01. Will be great to see how you get along with the Mellanox card and 25G. This is something I would love to test myself. Please do keep us posted!

          Brandon

          1. Quick reply: Experience and feedback.

            The machine arrived this morning. Impressive fit and finish. Heavier than I thought it would be and not plasticky at all.

            The Nvidia T600 GPU that I bought for it doesn’t work. The machine doesn’t POST with the card installed, even after a 10 minute wait. Yah, the machine’s first boot took four or five minutes to kick off after power on.

            ESXi 8.0 U2 is installed. The installation took maybe two minutes? Less.

            I have two 10G drops to the network using 10GTek transceivers, which is my preferred brand for hosts and my big Agg switch (though they don’t seem to be MikroTik friendly).

            ESXi can see but won’t communicate with the TPM in the MS-01. This is the same thing I’ve seen in the GMKtec NUCbox machines. Something with the protocol used with the chip, which is not the protocol used by ESXi. I am going to disable the TPM so I don’t have a banner in vCenter and a yellow mark on the server’s icon.

            That’s it for now. I’ll continue to look for a GPU that fits in the machine.

            Question for you, Brandon: I thought the integrated GPU on the Core i9 would be in use by ESXi and I would need something else to pass through to a VM, but ESXi is listing the iGPU as a PCI device that can be passed through. Do you know if this is something that will work? What does ESXi do if I attach a monitor/KVM to the machine to troubleshoot and the iGPU is in use in a VM?

          2. Jeff,

            It is great to know your feedback on the Nvidia T600 GPU. That is a bummer that it didn’t work. I think that is one of the downsides of this unit. I am not sure if this is in play…check out the forum thread here: https://www.virtualizationhowto.com/community/mini-pcs/does-the-minisforum-ms-01-support-bifurcation-with-the-pci-e-slot/, where a Minisforum employee mentioned a peculiarity of the pins on the slot. I am not sure this would be a workaround, but it might be worth trying out. To be honest, I haven’t had very good success either in ESXi or Proxmox passing through an iGPU to a virtual machine. Even though it is “sort of” discrete, I haven’t been able to get them to work. But there may be a trick I am not trying there, and I need to spend more time with it to be honest.

            The TPM issue is one that I have seen as well across every single mini PC I have been sent or purchased myself. They use the protocol format that ESXi doesn’t like. I am not sure why the difference, but there seems to be a different spec used in this consumer grade hardware. I know William Lam has written about this a few times as well.

            Keep us updated on your findings Jeff with the MS-01…it is great to have you putting it through the paces in a home lab scenario 🙂

    1. Yes, I am very much looking forward to seeing if Minisforum puts out a Ryzen model. I think that would be fantastic. Also, if you enter the boot parameter in ESXi, it actually allows you to leave the E cores on.

      Brandon
