I just stumbled across something pretty crazy that I think a lot of us hardware tinkerers are going to be talking about. Tom’s Hardware covered a new CXL 2.0 add-in card that drops into a PCIe slot and basically lets you expand your system memory the same way you’d add a GPU or NIC. This isn’t your standard DIMM riser board. This thing can hold up to 512GB of DDR5 all on its own.
From what I gathered, the first wave of these cards is being targeted at TRX50 (Threadripper) and W790 (Xeon W-3400/2400) workstation boards. They’re designed to use Compute Express Link (CXL 2.0), which rides on top of PCIe 5.0. That means the CPU sees the card not as storage or cache, but as legitimate system memory it can allocate.
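For anyone wondering what that looks like from the software side: on Linux, CXL expander memory is typically onlined as a CPU-less NUMA node, so using it is just ordinary NUMA programming. Here's a minimal sketch with libnuma; treat the node number as an assumption (I'm pretending the card shows up as node 1, but the real numbering depends on your topology, so check `numactl --hardware` first):

```c
/* Minimal sketch: allocating from CXL-attached memory on Linux.
 * Assumption: the kernel has onlined the card as CPU-less NUMA node 1.
 * Build: gcc cxl_alloc.c -o cxl_alloc -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <string.h>

int main(void) {
    if (numa_available() < 0) {
        fprintf(stderr, "no NUMA support on this system\n");
        return 1;
    }

    int cxl_node = 1;                 /* hypothetical node for the card */
    if (cxl_node > numa_max_node()) {
        fprintf(stderr, "node %d doesn't exist here\n", cxl_node);
        return 1;
    }

    size_t len = 1UL << 30;           /* 1 GiB */
    void *buf = numa_alloc_onnode(len, cxl_node);
    if (buf == NULL) {
        fprintf(stderr, "allocation failed\n");
        return 1;
    }

    memset(buf, 0xAB, len);           /* touch the pages so they're actually placed */
    printf("1 GiB allocated on node %d\n", cxl_node);

    numa_free(buf, len);
    return 0;
}
```

You can verify where the pages actually landed with `numastat -p <pid>` while it runs.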
(Image: an example of one of the cards, via Gigabyte.)
A few things stand out to me:
- Insane capacity: A Threadripper workstation could already handle a ton of RAM with 8 channels of DDR5. Slapping one of these in could push you into terabyte-class memory territory without going the enterprise-server route.
- Homelab potential in the future: Right now it's workstation/server focused, but imagine a few years from now when these trickle down and prices fall. A single add-in card that doubles or triples your system RAM could be huge for Proxmox, ESXi, or even just local AI/LLM workloads.
- Bandwidth and latency: This is the big question. Normal DIMM slots are still going to have better latency, and PCIe-attached memory does carry a measurable performance penalty. It reminds me of the memory tiering VMware is doing. But for memory-hungry tasks (VM sprawl, databases, caching layers), raw capacity sometimes matters more than nanoseconds. (See the latency sketch after this list.)
- Power/heat: Cramming half a terabyte of DDR5 onto a single card makes me wonder how hot these things run and what the power draw will look like.
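Since I brought up the latency penalty above, here's the kind of test I'd run to quantify it: a pointer chase over a buffer much bigger than the caches, once on a regular DRAM node and once on the card. Same caveats as the sketch earlier in the post (DRAM on node 0 and the CXL memory on node 1 are assumptions; adjust for your machine):

```c
/* Rough pointer-chase latency sketch: node 0 (DRAM) vs. node 1 (CXL?).
 * Build: gcc chase.c -O2 -o chase -lnuma
 */
#include <numa.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ENTRIES (1UL << 24)   /* 16M pointers = 128 MiB, well past the LLC */
#define HOPS    (1UL << 24)

static double ns_per_load(int node) {
    size_t len = ENTRIES * sizeof(size_t);
    size_t *arr = numa_alloc_onnode(len, node);
    if (arr == NULL) {
        fprintf(stderr, "allocation failed on node %d\n", node);
        exit(1);
    }

    /* Sattolo's algorithm: shuffle into a single random cycle, so every
     * load is a dependent cache miss and the prefetcher can't help. */
    for (size_t i = 0; i < ENTRIES; i++) arr[i] = i;
    for (size_t i = ENTRIES - 1; i > 0; i--) {
        size_t j = rand() % i;
        size_t tmp = arr[i]; arr[i] = arr[j]; arr[j] = tmp;
    }

    struct timespec t0, t1;
    size_t idx = 0;
    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (size_t i = 0; i < HOPS; i++) idx = arr[idx];  /* serialized loads */
    clock_gettime(CLOCK_MONOTONIC, &t1);

    double ns = (t1.tv_sec - t0.tv_sec) * 1e9 + (t1.tv_nsec - t0.tv_nsec);
    numa_free(arr, len);
    fprintf(stderr, "(sink: %zu)\n", idx);  /* keep the loop from being optimized out */
    return ns / HOPS;
}

int main(void) {
    if (numa_available() < 0) return 1;
    printf("node 0 (DRAM): %.1f ns/load\n", ns_per_load(0));
    printf("node 1 (CXL?): %.1f ns/load\n", ns_per_load(1));
    return 0;
}
```

If the CXL number comes back meaningfully higher than the DRAM one, that's the penalty I was talking about, and it's exactly why tiering (hot pages in the DIMMs, cold pages on the card) is the interesting part.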
It feels a lot like we're watching the early days of add-in cards as a general way to expand memory and other hardware resources. If this works well enough, I can see a future where PCIe slots aren't just for GPUs or storage, but for memory pools, accelerators, and other shared resources. The demands AI is creating are going to change things quite a bit in this area.
Curious what you think. Would you ever slot one of these into a homelab box if/when they become affordable?