<?xml version="1.0" encoding="UTF-8"?>        <rss version="2.0"
             xmlns:atom="http://www.w3.org/2005/Atom"
             xmlns:dc="http://purl.org/dc/elements/1.1/"
             xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
             xmlns:admin="http://webns.net/mvcb/"
             xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <channel>
            <title>VHT Forum - Recent Topics</title>
            <link>https://www.virtualizationhowto.com/community/</link>
            <description>Virtualization Howto Discussion Board</description>
            <language>en-US</language>
            <lastBuildDate>Sat, 04 Apr 2026 22:40:29 +0000</lastBuildDate>
            <generator>wpForo</generator>
            <ttl>60</ttl>
							                    <item>
                        <title>Minisforum MS-A2 &amp; Proxmox</title>
                        <link>https://www.virtualizationhowto.com/community/proxmox-help/minisforum-ms-a2-proxmox/</link>
                        <pubDate>Tue, 17 Mar 2026 23:40:12 +0000</pubDate>
<description><![CDATA[I&#039;m fairly new to the whole process of setting up a virtualization server, and it&#039;s also my first Minisforum. So I&#039;m looking for some advice on exactly how to proceed.
I have the MS-A2, and I...]]></description>
<content:encoded><![CDATA[<p>I'm fairly new to the whole process of setting up a virtualization server, and it's also my first Minisforum. So I'm looking for some advice on exactly how to proceed.</p>
<p>I have the MS-A2, and I would like to set up a virtualization server using Proxmox. I have some questions.</p>
<p>I have a small business, and I need several server instances to handle internal and external tasks. I may be a small biz, but to me it's critical that I don't lose data. Is it possible to install 2 NVMe drives in a mirrored pair for redundancy? I've been told to use HW RAID, because with SW RAID the software stack running the RAID has more potential failure points compared to a dedicated HW RAID configuration. I'm also wondering how to set up a network config that has private/internal VMs which can communicate outward, plus a set of VMs which can reach outward but are not allowed to reach inward to my internal/private network. I really don't know how to set that up. I have a Ubiquiti router, and all my attempts to set up subnets don't provide that network isolation. It seems all nodes can communicate regardless of which network they are on, so I'm not getting that isolation of the internal network away from the nodes which are on a public network.</p>
<p>Just not sure what I'm doing wrong, so I'm looking for advice.</p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/"></category>                        <dc:creator>Ron Watkins</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/proxmox-help/minisforum-ms-a2-proxmox/</guid>
                    </item>
				                    <item>
                        <title>Minisforum MS-03 teased with Intel Panther Lake (is this the true MS-01 successor?)</title>
                        <link>https://www.virtualizationhowto.com/community/mini-pcs/minisforum-ms-03-teased-with-intel-panther-lake-is-this-the-true-ms-01-successor/</link>
                        <pubDate>Sat, 14 Mar 2026 23:13:18 +0000</pubDate>
                        <description><![CDATA[Minisforum just teased a new mini workstation called the MS-03, and from what is being reported so far it looks like it may be the successor to the MS-01, which a lot of us are already runni...]]></description>
                        <content:encoded><![CDATA[<p data-start="382" data-end="596">Minisforum just teased a new mini workstation called the <strong data-start="439" data-end="448">MS-03</strong>, and from what is being reported so far it looks like it may be the successor to the <strong data-start="534" data-end="543">MS-01</strong>, which a lot of us are already running in home labs. I currently am running 5 of the MS-01s in my Proxmox Ceph mini cluster.</p>
<p data-start="598" data-end="779">A few different tech sites have picked up the story and there are some interesting early details starting to surface. I pulled together the concrete info below so we can discuss it. The <a href="https://mp.weixin.qq.com/s/lYF567CwmM58hYFtTcw4dQ" target="_blank" rel="noopener">image below is from here</a>:</p>
<h3 data-section-id="1dr4gfw" data-start="781" data-end="828">CPU: Intel Core Ultra 7 356H (Panther Lake)</h3>
<p data-start="830" data-end="959">The MS-03 is expected to be powered by <strong data-start="869" data-end="918">Intel’s upcoming Panther Lake mobile platform</strong>, specifically the <strong data-start="937" data-end="958">Core Ultra 7 356H</strong>.</p>
<p data-start="961" data-end="1001">Specs floating around right now include the following for the CPU:</p>
<ul data-start="1003" data-end="1235">
<li data-section-id="157v70h" data-start="1003" data-end="1107">
<p data-start="1005" data-end="1023"><strong data-start="1005" data-end="1023">16 cores total</strong></p>
<ul data-start="1026" data-end="1107">
<li data-section-id="1prjgng" data-start="1026" data-end="1049">
<p data-start="1028" data-end="1049">4 Performance cores</p>
</li>
<li data-section-id="31qnbj" data-start="1052" data-end="1074">
<p data-start="1054" data-end="1074">8 Efficiency cores</p>
</li>
<li data-section-id="oefn1k" data-start="1077" data-end="1107">
<p data-start="1079" data-end="1107">4 Low power Efficiency cores</p>
</li>
</ul>
</li>
<li data-section-id="hq2r7h" data-start="1108" data-end="1146">
<p data-start="1110" data-end="1146">Boost clocks up to about <strong data-start="1135" data-end="1146">4.7 GHz</strong></p>
</li>
<li data-section-id="nrxfk2" data-start="1147" data-end="1164">
<p data-start="1149" data-end="1164"><strong>18 MB cache</strong></p>
</li>
<li data-section-id="1qusyh0" data-start="1165" data-end="1194">
<p data-start="1167" data-end="1194"><strong data-start="1167" data-end="1194">Xe3 integrated graphics</strong></p>
</li>
<li data-section-id="rvf3r9" data-start="1195" data-end="1235">
<p data-start="1197" data-end="1235">Integrated NPU for AI acceleration</p>
</li>
</ul>
<p data-start="1237" data-end="1387">Panther Lake is Intel’s next architecture after Lunar Lake and Meteor Lake, so this will likely bring improvements in both performance and efficiency.</p>
<h3 data-section-id="1b77h6t" data-start="1389" data-end="1426">AI compute looks like a big focus</h3>
<p data-start="1428" data-end="1510">One interesting part of the specs is how much AI acceleration is being advertised with this one. There are several reported numbers related to AI performance, including the following:</p>
<ul data-start="1539" data-end="1626">
<li data-section-id="zpypmv" data-start="1539" data-end="1566">
<p data-start="1541" data-end="1566">~<strong data-start="1542" data-end="1553">40 TOPS</strong> from the GPU</p>
</li>
<li data-section-id="1sn9033" data-start="1567" data-end="1594">
<p data-start="1569" data-end="1594">~<strong data-start="1570" data-end="1581">50 TOPS</strong> from the NPU</p>
</li>
<li data-section-id="8d2ns" data-start="1595" data-end="1626">
<p data-start="1597" data-end="1626"><strong data-start="1597" data-end="1626">~90 TOPS total AI compute</strong></p>
</li>
</ul>
<p data-start="1628" data-end="1752">That suggests Minisforum may be positioning this system partly as an AI workstation mini PC, not just a general desktop, which is not surprising, as most manufacturers are jumping on this marketing bandwagon.</p>
<h3 data-section-id="17r8b8b" data-start="1754" data-end="1782">Power target around 70 W</h3>
<p data-start="1784" data-end="1900">Some reports say the system may run the CPU at <strong data-start="1831" data-end="1863">around a 70 W power envelope</strong>. It will be interesting to see if it actually ships at that power level, since 70 W is somewhat high for a mini PC and home lab use, though still doable. If that ends up being true, it could mean much stronger sustained performance compared to many compact systems that throttle heavily.</p>
<h3 data-section-id="583y8j" data-start="2037" data-end="2053">Chassis size</h3>
<p data-start="2055" data-end="2096">The MS-03 appears to remain very compact and looks basically like the MS-01 (see the picture above), with dimensions of <strong data-start="2098" data-end="2121">195 × 195 × 42.5 mm</strong>. So it is still firmly in the small workstation mini PC category.</p>
<h3 data-section-id="14se6fr" data-start="2183" data-end="2211">Possible MS-01 successor</h3>
<p data-start="2213" data-end="2353">Several sites suggest this is essentially the <strong data-start="2259" data-end="2300">next generation of the MS-01 platform</strong>, even though Minisforum has already released the MS-02. I think we would all agree the MS-02 was really its own type of mini PC, with more of a mini workstation footprint than what the MS-01 started. That raises some obvious questions for those of us running MS-01 systems today.</p>
<p data-start="2436" data-end="2463">Things we still don't know include the following:</p>
<ul>
<li>Will it keep 10GbE networking, or maybe have 25 GbE like the MS-02?</li>
<li>Will there still be multiple NVMe slots?</li>
<li>Will <strong data-start="2556" data-end="2567">OCuLink</strong> be there for external GPU support?</li>
<li>Will there be any PCIe expansion options?</li>
<li>What will the memory type and capacity be?</li>
<li>Is there a release timeline or pricing?</li>
</ul>
<h3 data-section-id="134so7y" data-start="2695" data-end="2742">Why this could be interesting for home labs</h3>
<p data-start="2744" data-end="2851">If Minisforum keeps the same philosophy as the MS-01, this could end up being another strong home lab node with many of the same features we really liked about the MS-01: the high core count CPU, small footprint, high-speed networking, AI acceleration, and integrated graphics. If they do, I think this could definitely be interesting for running Proxmox clusters, AI inference workloads, edge compute clusters, or bare metal Kubernetes or Docker nodes.</p>
<h3 data-section-id="1qrcys2" data-start="3131" data-end="3163">Curious what everyone thinks</h3>
<p data-start="3165" data-end="3195">A few questions for those in the forum. Do you think 10GbE returns on this system? Would you upgrade from an <strong data-start="3274" data-end="3283">MS-01</strong> for Panther Lake? Would you use this more for <strong data-start="3392" data-end="3426">virtualization or AI workloads</strong>?</p>
<p data-start="3429" data-end="3547">Personally I am really curious if Minisforum doubles down on the mini workstation / home lab hybrid concept again. If they keep the networking and storage flexibility of the MS-01 but add Panther Lake performance, this could end up being another really popular platform.</p>
<p data-start="3706" data-end="3746" data-is-last-node="" data-is-only-node="">Would love to hear what everyone thinks.</p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/"></category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/mini-pcs/minisforum-ms-03-teased-with-intel-panther-lake-is-this-the-true-ms-01-successor/</guid>
                    </item>
				                    <item>
                        <title>How to Separate Proxmox Ceph, Cluster, and Migration Networks with Dedicated VLANs</title>
                        <link>https://www.virtualizationhowto.com/community/proxmox-help/how-to-separate-proxmox-ceph-cluster-and-migration-networks-with-dedicated-vlans/</link>
                        <pubDate>Tue, 17 Feb 2026 03:47:01 +0000</pubDate>
                        <description><![CDATA[So, in my Proxmox mini cluster, I have gone through the process of moving critical traffic off the main &quot;management&quot; network that the Proxmox hosts are on to their own dedicated VLANs. Let&#039;s...]]></description>
                        <content:encoded><![CDATA[<p>So, in my Proxmox mini cluster, I have gone through the process of moving critical traffic off the main "management" network that the Proxmox hosts are on to their own dedicated VLANs. Let's look at how to move each of these to a dedicated network.</p>
<h2>Networks used for segmented traffic</h2>
<p><strong>MTU 1500 (Client-facing/Management):</strong></p>
<ul>
<li>vmbr0 (10.3.33.0/24) - Management/API/SSH</li>
<li>bond0.335 (10.3.35.0/24) - Proxmox Cluster (Corosync ring1)</li>
<li>All VM VLANs (2, 10, 149, 222)</li>
</ul>
<p><strong>MTU 9000 (Backend Infrastructure):</strong></p>
<ul>
<li>bond0.334 (10.3.34.0/24) - Ceph OSD cluster traffic</li>
<li>bond0.336 (10.3.36.0/24) - VM live migration</li>
<li>bond0 and physical interfaces (must stay at MTU 9000)</li>
</ul>
<h2>Moving Ceph OSD traffic</h2>
<p>Storage traffic benefits from being on its own dedicated network and VLAN. This way you can cleanly set jumbo frames and dedicate links to this network if you want. For my setup, I am using bonded LACP 10 gig connections for all traffic, so I am using VLANs to segment and carve up the traffic.</p>
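<p>As a sketch, the host-side interface config for the Ceph VLAN can look like the fragment below. The bond slave NIC names are hypothetical; the VLAN, subnet, and MTU values match the bond0.334 example above. Adjust everything for your hardware.</p>

```
# /etc/network/interfaces fragment (illustrative)
auto bond0
iface bond0 inet manual
        bond-slaves enp2s0f0 enp2s0f1   # hypothetical NIC names
        bond-mode 802.3ad
        mtu 9000                        # the physical path must carry jumbo frames

auto bond0.334
iface bond0.334 inet static
        address 10.3.34.210/24          # Ceph OSD cluster VLAN, this host's IP
        mtu 9000
```

With ifupdown2 (which Proxmox uses), <code>ifreload -a</code> applies the change without a reboot.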
<h3>Step 1: Verify Connectivity</h3>
<p>The first step for each of these new networks is to test that you have connectivity from each host to the other hosts. Replace the IPs with your own IPs and hostnames.</p>
<pre contenteditable="false"><code class="language-bash"># On each host, verify the new network is configured and reachable
ping -c 2 10.3.34.210  # From any host to pvehost01
ping -c 2 10.3.34.211  # pvehost02
ping -c 2 10.3.34.212  # pvehost03
ping -c 2 10.3.34.213  # pvehost04
ping -c 2 10.3.34.214  # pvehost05</code></pre>
<h3>Step 2: Update Ceph Configuration</h3>
<p>We update ceph.conf on one host, and the change replicates across the cluster. For my configuration, I only changed <strong>cluster_network</strong>, from 10.3.33.0/24 to 10.3.34.0/24.</p>
<pre contenteditable="false"><code class="language-bash"># On any one node (config replicates automatically):
nano /etc/pve/ceph.conf

# Change ONLY the cluster_network line:
cluster_network = 10.3.34.0/24

# Leave public_network and all monitor addresses on the old network
public_network = 10.3.33.0/24</code></pre>
<h3>Step 3: Set the noout Flag</h3>
<p>We set the <strong>noout</strong> flag while we are working on the OSDs and Ceph networks, so OSDs are not marked out during the restarts.</p>
<pre contenteditable="false"><code class="language-bash">ceph osd set noout</code></pre>
<h3>Step 4: Rolling OSD Restart</h3>
<p>After setting the noout flag, we roll through the hosts one at a time and restart the OSD service. After each host, check the status of Ceph with <strong>ceph -s</strong> and wait for any backfilling to complete.</p>
<pre contenteditable="false"><code class="language-bash"># On each node, one at a time:
systemctl restart ceph-osd.target
sleep 10
ceph -s  # Verify cluster health before the next node

# Wait for any backfilling to complete between nodes
# Continue to the next node when ceph -s shows all PGs active+clean</code></pre>
<h3>Step 5: Verify the OSD Network Change</h3>
<p>Once you restart the OSD service, you should see it start using the new IP address when you view the Ceph OSD dump.</p>
<pre contenteditable="false"><code class="language-bash"># Check that OSDs are using the new network:
ceph osd dump | head -30

# Should see cluster addresses like:
#  
#      ^public           ^cluster (new!)</code></pre>
<h3>Step 6: Unset noout</h3>
<p>Once you get through all the hosts, unset the noout flag.</p>
<pre contenteditable="false"><code class="language-bash">ceph osd unset noout</code></pre>
<p><strong>Result:</strong> Ceph cluster replication traffic is now on the dedicated 10.3.34.0/24 network with MTU 9000.</p>
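<p>A quick sanity check is confirming that jumbo frames actually pass end to end on the new network. A 9000-byte MTU leaves 8972 bytes of ICMP payload after the 20-byte IP header and 8-byte ICMP header; the target IP below is from my lab, so substitute one of your 10.3.34.x hosts.</p>

```shell
# ICMP payload that fits a 9000-byte MTU: 9000 - 20 (IP header) - 8 (ICMP header)
mtu=9000
payload=$((mtu - 28))
echo "$payload"

# Send a non-fragmenting jumbo ping across the Ceph VLAN
# (illustrative lab IP; replace with your own host):
# ping -M do -s 8972 -c 2 10.3.34.211
```

If the ping fails with "message too long," an interface somewhere in the path is still at MTU 1500.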
<h2>Move your Proxmox Cluster Ring (Corosync)</h2>
<p>Instead of totally moving my cluster network, what I am doing below is "adding" another network to the Corosync networks. This way I have two networks that can be used for cluster traffic. If something happens to one VLAN, the other network/VLAN should be unaffected.</p>
<h3>Step 1: Edit the Corosync Configuration</h3>
<p>Like the Ceph OSD network, we edit this file on one host and let it replicate around. The lines marked "# ADD THIS" are where you edit the file and update the configuration.</p>
<pre contenteditable="false"><code class="language-bash"># On any one node (config replicates automatically):
nano /etc/pve/corosync.conf

# Add ring1_addr to each node in the nodelist:
node {
  name: pvehost01
  nodeid: 1
  quorum_votes: 1
  ring0_addr: 10.3.33.210
  ring1_addr: 10.3.35.210  # ADD THIS
}
# Repeat for all nodes with their respective .211, .212, .213, .214 addresses

# Add a second interface to the totem section:
totem {
  ...
  config_version: 20  # INCREMENT THIS
  interface {
    linknumber: 0
  }
  interface {          # ADD THIS
    linknumber: 1
  }
  ...
}</code></pre>
<p>Below you can see the additional network added for the corosync configuration.</p>
<h3>Step 2: Restart Corosync</h3>
<p>You may not technically HAVE to restart the corosync service across all your hosts, as they may pick up the changes automatically, but it is a "cover the bases" type operation.</p>
<pre contenteditable="false"><code class="language-bash"># Modern Proxmox with knet may apply the change automatically
# Check if both rings are active:
corosync-cfgtool -s

# If both rings are not showing, restart on each node:
systemctl restart corosync
pvecm status  # Verify the cluster stays quorate</code></pre>
<h3>Step 3: Verify Dual-Ring Operation</h3>
<p>After the restart, verify that both rings show as active on every node.</p>
<pre contenteditable="false"><code class="language-bash"># On each node:
corosync-cfgtool -s

# Should show:
# LINK ID 0 udp - addr = 10.3.33.X (all nodes connected)
# LINK ID 1 udp - addr = 10.3.35.X (all nodes connected)</code></pre>
<p><strong>Result:</strong> The Proxmox cluster now has redundant communication paths, with ring1 on the dedicated 10.3.35.0/24 network.</p>
<h2>Move the VM Migration Network</h2>
<p>Finally, we will move the VM migration network so that it uses its own dedicated network for migration traffic.</p>
<h3>Step 1: Configure the Migration Network</h3>
<pre contenteditable="false"><code class="language-bash"># Via GUI:
# Datacenter → Options → Migration Settings → Edit
# Type: secure
# Network: 10.3.36.0/24

# Or via CLI:
nano /etc/pve/datacenter.cfg
# Add:
migration: secure,network=10.3.36.0/24</code></pre>
<p>The new network has been selected and confirmed.</p>
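<p>To double-check from the CLI, you can grep the datacenter config (the file contents shown below are illustrative) and then watch the migration interface while a test migration runs:</p>

```shell
# Illustrative datacenter.cfg contents after the change
cfg='keyboard: en-us
migration: secure,network=10.3.36.0/24'

# Confirm the migration line is present
echo "$cfg" | grep '^migration:'

# During a live migration, secure (SSH-tunneled) traffic should appear on the
# migration VLAN interface (interface name is from my lab; adjust to yours):
# tcpdump -ni bond0.336 port 22
```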
<p><strong>Result:</strong> VM live migrations now use the dedicated 10.3.36.0/24 network with MTU 9000.</p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/"></category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/proxmox-help/how-to-separate-proxmox-ceph-cluster-and-migration-networks-with-dedicated-vlans/</guid>
                    </item>
				                    <item>
                        <title>Step-by-Step Guide: Checking Ceph OSD Disk Health</title>
                        <link>https://www.virtualizationhowto.com/community/proxmox-help/step-by-step-guide-checking-ceph-osd-disk-health/</link>
                        <pubDate>Wed, 11 Feb 2026 16:09:38 +0000</pubDate>
                        <description><![CDATA[I have been working a ton with Ceph lately in the home lab. Just some notes on how to check if you have a Ceph disk that is showing to have slow disk. You can see your Ceph health with the c...]]></description>
<content:encoded><![CDATA[<p>I have been working a ton with Ceph lately in the home lab. Here are some notes on how to check whether a Ceph disk is showing slow operations. You can see your Ceph health with the command:</p>
<pre contenteditable="false">ceph status

or 

ceph -s</pre>
<h2>Step 1: Identify the Problem OSD</h2>
<div>
<pre contenteditable="false"># Check overall cluster health
ceph status
# Get detailed health information (shows which OSD has issues)
ceph health detail</pre>
</div>
<p><strong>Example output:</strong></p>
<pre contenteditable="false">BLUESTORE_SLOW_OP_ALERT: 1 OSD(s) experiencing slow operations in BlueStore
osd.6 observed slow operation indications in BlueStore</pre>
<p>Note the OSD number (in this case: osd.6).</p>
<hr />
<h2>Step 2: Locate the OSD's Host</h2>
<div>
<pre contenteditable="false"># Find which physical host contains the OSD
ceph osd find &lt;osd-number&gt;
# Example:
ceph osd find 6</pre>
</div>
<p><strong>Example output (JSON):</strong></p>
<pre contenteditable="false">{
    "osd": 6,
    "addrs": {
        "addrvec": 
    },
    "osd_fsid": "900daf28-d681-4637-90db-9764bcfd2f11",
    "host": "pvehost04",
    "crush_location": {
        "host": "pvehost04",
        "root": "default"
    }
}</pre>
<p>Note the hostname (in this case: pvehost04).</p>
<hr />
<h2>Step 3: Connect to the Host</h2>
<div>
<pre contenteditable="false"># SSH to the host containing the problematic OSD
ssh root@pvehost04</pre>
</div>
<hr />
<h2>Step 4: Identify the Physical Disk</h2>
<div>
<pre contenteditable="false"># Find the OSD's logical volume
ceph-volume lvm list | grep -A 10 "osd.&lt;number&gt;"
# Example:
ceph-volume lvm list | grep -A 10 "osd.6"</pre>
</div>
<p><strong>Example output:</strong></p>
<pre contenteditable="false">====== osd.6 =======
  /dev/ceph-46ed1f42-7685-4bfd-b64f-ad525bddc935/osd-block-900daf28...
  block device  /dev/ceph-46ed1f42-7685-4bfd-b64f-ad525bddc935/osd-block-900daf28...
  block uuid    f94nrL-KRDg-D648-Ia7B-F3Yx-hjwQ-HppqAT</pre>
<p>Note the VG name (in this case: ceph-46ed1f42-7685-4bfd-b64f-ad525bddc935).</p>
<h3>Find the underlying physical disk:</h3>
<div>
<pre contenteditable="false"># Show the complete disk hierarchy
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT,FSTYPE
# Or find the physical volume for the VG
pvs | grep &lt;vg-name&gt;
# Example:
pvs | grep ceph-46ed1f42-7685-4bfd-b64f-ad525bddc935</pre>
</div>
<p><strong>Example output:</strong></p>
<pre contenteditable="false">/dev/nvme1n1 ceph-46ed1f42-7685-4bfd-b64f-ad525bddc935 lvm2 a-- &lt;953.87g</pre>
<p>Note the physical device (in this case: /dev/nvme1n1).</p>
<hr />
<h2>Step 5: Check Disk Health</h2>
<h3>For NVMe Drives:</h3>
<div>
<pre contenteditable="false"># Install nvme-cli if not present
apt install nvme-cli -y
# Check SMART health summary
nvme smart-log /dev/nvme1n1
# Or using smartctl
smartctl -a /dev/nvme1n1
# Check for errors
nvme error-log /dev/nvme1n1</pre>
</div>
<h3>For SATA/SAS Drives:</h3>
<div>
<pre contenteditable="false"># Install smartmontools if not present
apt install smartmontools -y
# Quick health check
smartctl -H /dev/sdX
# Full SMART information
smartctl -a /dev/sdX
# Check for specific error indicators
smartctl -a /dev/sdX | grep -E "Reallocated|Pending|Current_Pending|Offline_Uncorrectable|UDMA_CRC_Error"</pre>
</div>
<div> </div>
<h2>Step 6: Interpret Health Results</h2>
<h3>Critical Values to Check:</h3>
<h4>For NVMe:</h4>
<ul>
<li>
<div><b>critical_warning</b>: Should be 0 (anything else is bad)</div>
</li>
<li>
<div><b>temperature</b>: Should be &lt; 70°C (&lt; 158°F)</div>
</li>
<li>
<div><b>available_spare</b>: Should be &gt; 10%</div>
</li>
<li>
<div><b>percentage_used</b>: Wear indicator (100% = end of life)</div>
</li>
<li>
<div><b>media_errors</b>: Should be 0</div>
</li>
<li>
<div><b>error log entries</b>: Review for I/O errors</div>
</li>
</ul>
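<p>These checks are easy to script. Below is a minimal sketch that runs a <b>hypothetical</b> <code>nvme smart-log</code> sample through awk and flags the critical fields; on a real host you would pipe <code>nvme smart-log /dev/nvme1n1</code> into the same filter (field names as printed by nvme-cli):</p>
<pre contenteditable="false"># Made-up smart-log sample standing in for real nvme-cli output
nvme_sample='critical_warning                    : 0
temperature                         : 38 C
available_spare                     : 100%
percentage_used                     : 3%
media_errors                        : 0'

report=$(printf '%s\n' "$nvme_sample" | awk -F':' '
  /critical_warning/ { v=$2; gsub(/ /,"",v); if (v != "0") print "WARN critical_warning=" v }
  /media_errors/     { v=$2; gsub(/ /,"",v); if (v != "0") print "WARN media_errors=" v }
  /percentage_used/  { v=$2; gsub(/[ %]/,"",v); if (v+0 >= 90) print "WARN percentage_used=" v "%" }
  END { print "check complete" }')
echo "$report"</pre>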
<h4>For SATA/SAS:</h4>
<ul>
<li>
<div><b>SMART overall-health</b>: Should be PASSED</div>
</li>
<li>
<div><b>Reallocated_Sector_Ct</b>: Should be 0 (or very low)</div>
</li>
<li>
<div><b>Current_Pending_Sector</b>: Should be 0</div>
</li>
<li>
<div><b>Offline_Uncorrectable</b>: Should be 0</div>
</li>
<li>
<div><b>UDMA_CRC_Error_Count</b>: High values indicate cable/connection issues</div>
</li>
<li>
<div><b>Temperature</b>: Should be &lt; 55°C</div>
</li>
</ul>
<h2>Step 7: Check OSD Performance Metrics</h2>
<div>
<pre contenteditable="false"># From any Ceph node, check OSD performance
ceph osd perf
# Check OSD utilization
ceph osd df
# Check for current slow operations (run on the OSD's host)
ceph daemon osd.&lt;number&gt; dump_ops_in_flight
# Check historic slow operations (run on the OSD's host)
ceph daemon osd.&lt;number&gt; dump_historic_slow_ops</pre>
</div>
<h2>Step 8: Monitor I/O Performance (Optional)</h2>
<div>
<pre contenteditable="false"># Install sysstat if not present
apt install sysstat -y
# Monitor real-time I/O stats (watch for high await times or %util)
iostat -x &lt;device&gt; 2 5
# Example for NVMe:
iostat -x nvme1n1 2 5
# Example for SATA:
iostat -x sda 2 5
# Key metrics to watch:
#   %util: &gt; 90% consistently = saturated disk
#   await: &gt; 10ms = slow responses
#   r_await / w_await: read/write latency separately</pre>
</div>
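<p>Those thresholds can be checked automatically too. A minimal sketch against a single <b>made-up</b> summary line (real <code>iostat -x</code> column order varies between sysstat versions, so the field positions here are an assumption):</p>
<pre contenteditable="false"># Hypothetical "device r_await w_await %util" line, not real iostat layout
stats='nvme1n1 0.45 1.20 12.3'

verdict=$(printf '%s\n' "$stats" | awk '{
  if ($4 + 0 > 90) print $1 ": saturated (%util=" $4 ")"
  else if ($2 + 0 > 10 || $3 + 0 > 10) print $1 ": slow responses"
  else print $1 ": ok"
}')
echo "$verdict"</pre>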
<h2>Step 9: Check System Logs</h2>
<div>
<pre contenteditable="false"># Check for disk-related errors in dmesg
dmesg -T | grep -i "&lt;device&gt;" | tail -50
# Example:
dmesg -T | grep -i nvme1n1 | tail -50
# Check systemd journal for Ceph or disk issues
journalctl -u ceph-osd@&lt;number&gt; --since "1 hour ago"
# Example:
journalctl -u ceph-osd@6 --since "1 hour ago"</pre>
</div>
<h2 data-pm-slice="1 2 []" data-en-clipboard="true">Step 10: Common Issues and Resolutions</h2>
<h3>Issue: Slow Operations During Rebalancing</h3>
<div><b>Cause</b>: Normal during data migration</div>
<div><b>Solution</b>: Wait for rebalancing to complete or mute the warning:</div>
<div> </div>
<div data-codeblock="true" data-line-wrapping="false">
<div data-plaintext="true">
<pre contenteditable="false">ceph health mute BLUESTORE_SLOW_OP_ALERT --sticky</pre>
</div>
</div>
<h3>Issue: High Media Errors or Reallocated Sectors</h3>
<div><b>Cause</b>: Failing disk</div>
<div><b>Solution</b>: Replace the disk:</div>
<div> </div>
<div data-codeblock="true" data-line-wrapping="false">
<div data-plaintext="true">
<pre contenteditable="false"># Mark OSD out (triggers data migration)
ceph osd out &lt;osd-number&gt;
# Monitor rebalancing
watch ceph -s
# Once complete, remove OSD
ceph osd down &lt;osd-number&gt;
ceph osd rm &lt;osd-number&gt;
ceph auth del osd.&lt;osd-number&gt;
ceph osd crush rm osd.&lt;osd-number&gt;</pre>
</div>
</div>
<h3>Issue: High Temperature</h3>
<div><b>Cause</b>: Poor cooling or failing fan</div>
<div><b>Solution</b>: Improve airflow, check datacenter HVAC</div>
<h3>Issue: Disk Full</h3>
<div><b>Cause</b>: Imbalanced data distribution</div>
<div><b>Solution</b>: Check weight and rebalance:</div>
<div data-codeblock="true" data-line-wrapping="false">
<div data-plaintext="true">
<pre contenteditable="false">ceph osd df tree
ceph osd reweight &lt;osd-number&gt; &lt;weight&gt;</pre>
</div>
</div>
<div> </div>
<hr />
<h2>Quick Reference Checklist</h2>
<div>
<pre contenteditable="false"># 1. Identify problem OSD
ceph health detail
# 2. Find host
ceph osd find &lt;osd-number&gt;
# 3. SSH to host
ssh &lt;hostname&gt;
# 4. Find physical disk
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT,FSTYPE
# 5. Check health (NVMe)
nvme smart-log /dev/&lt;device&gt;
# 5. Check health (SATA)
smartctl -a /dev/&lt;device&gt;
# 6. Check OSD performance
ceph osd perf
ceph daemon osd.&lt;number&gt; dump_historic_slow_ops
# 7. Monitor I/O
iostat -x &lt;device&gt; 2 5
# 8. Check logs
journalctl -u ceph-osd@&lt;number&gt; --since "1 hour ago"</pre>
</div>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/"></category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/proxmox-help/step-by-step-guide-checking-ceph-osd-disk-health/</guid>
                    </item>
				                    <item>
                        <title>How to install PegaProx in Proxmox</title>
                        <link>https://www.virtualizationhowto.com/community/proxmox-help/how-to-install-pegaprox-in-proxmox/</link>
                        <pubDate>Mon, 09 Feb 2026 13:45:14 +0000</pubDate>
                        <description><![CDATA[Let&#039;s look at how to install PegaProx in your home lab. Take a look at my full write up on PegaProx here: Managing Multiple Proxmox Clusters Gets Messy When You Want Smarter Placement.
Inte...]]></description>
                        <content:encoded><![CDATA[<p><!-- wp:paragraph --></p>
<p>Let's look at how to install PegaProx in your home lab. Take a look at my full write up on PegaProx here: <a href="https://www.virtualizationhowto.com/2026/02/managing-multiple-proxmox-clusters-gets-messy-when-you-want-smarter-placement/" target="_blank" rel="noopener">Managing Multiple Proxmox Clusters Gets Messy When You Want Smarter Placement</a>.</p>
<p>Interestingly, PegaProx provides both VM and LXC container templates, downloadable from their mirrors, that already have PegaProx installed. You can install it in a Docker container as well, but this is noted to be a <strong>dev-only</strong> solution. When you provision these templates, a quick wizard-driven process configures PegaProx in your environment.</p>
<p><!-- /wp:paragraph -->

<!-- wp:paragraph --></p>
<p>Download the images from the Pegaprox site here: <a href="https://pegaprox.com/">PegaProx</a>.</p>
<p><!-- /wp:paragraph -->

<!-- wp:image --></p>
<figure class="wp-block-image aligncenter size-full"><a href="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/downloading-the-provided-vm-or-lxc-container-template-1.png"><img class="wp-image-309295" src="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/downloading-the-provided-vm-or-lxc-container-template-1.png" alt="Downloading the provided vm or lxc container template" /></a>
<figcaption class="wp-element-caption">Downloading the provided vm or lxc container template</figcaption>
</figure>
<p><!-- /wp:image -->

<!-- wp:paragraph --></p>
<p>I picked the LXC container image. You download the image and copy it to your Proxmox host, or just curl the image URL directly from the host if it is connected to the Internet.</p>
<p><!-- /wp:paragraph -->

<!-- wp:paragraph --></p>
<p>Once you pull down the backup image they provide, you can import it into Proxmox using the commands below for each respective image type. Be sure to change out the ID for the VM or LXC container and also what storage you want to target.</p>
<pre contenteditable="false"># For VM
qmrestore /var/lib/vz/dump/vzdump-qemu-XXX-YYYY_MM_DD-HH_MM_SS.vma.zst 100 --storage local-lvm

# For Container
pct restore 100 /var/lib/vz/dump/vzdump-lxc-XXX-YYYY_MM_DD-HH_MM_SS.tar.zst --storage local-lvm

# Replace:
#   100 = Your desired VM/CT ID
#   local-lvm = Your target storage name
#   File path = Path to your downloaded backup</pre>
<p><!-- wp:paragraph --></p>
<p>Below you can see me running the command for the downloaded image and getting this restored to my Proxmox server host.</p>
<p><!-- /wp:paragraph -->

<!-- wp:image --></p>
<figure class="wp-block-image aligncenter size-full"><a href="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/restoring-the-provided-lxc-container-template-downloaded-from-pegaprox-1.png"><img class="wp-image-309297" src="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/restoring-the-provided-lxc-container-template-downloaded-from-pegaprox-1.png" alt="Restoring the provided lxc container template downloaded from pegaprox" /></a>
<figcaption class="wp-element-caption">Restoring the provided lxc container template downloaded from pegaprox</figcaption>
</figure>
<p><!-- /wp:image -->

<!-- wp:paragraph --></p>
<p>I like how easy they have made this to get up and running. When you first log in, use the default username and password below:</p>
<p><!-- /wp:paragraph -->

<!-- wp:paragraph --></p>
<p>Container:</p>
<p><!-- /wp:paragraph -->

<!-- wp:table --></p>
<figure class="wp-block-table">
<table class="has-fixed-layout">
<tbody>
<tr>
<td>Username</td>
<td><code>root</code></td>
</tr>
<tr>
<td>Password</td>
<td><code>PegaProx2026!</code></td>
</tr>
</tbody>
</table>
</figure>
<p><!-- /wp:table -->

<!-- wp:paragraph --></p>
<p>Virtual machine:</p>
<p><!-- /wp:paragraph -->

<!-- wp:table --></p>
<figure class="wp-block-table">
<table class="has-fixed-layout">
<tbody>
<tr>
<td>Username</td>
<td><code>pegaprox_admin</code></td>
</tr>
<tr>
<td>Password</td>
<td><code>PegaProx2026!</code></td>
</tr>
</tbody>
</table>
</figure>
<p><!-- /wp:table -->

<!-- wp:paragraph --></p>
<p>Then it will launch the configuration wizard you see below. You will set up your network connection and choose which ports to run it on. It defaults to <strong>ports 5000, 5001, and 5002</strong>.</p>
<p><!-- /wp:paragraph -->

<!-- wp:paragraph --></p>
<p>&nbsp;</p>
<p><!-- /wp:paragraph -->

<!-- wp:image --></p>
<figure class="wp-block-image aligncenter size-full"><a href="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/running-the-initial-wizard-to-configure-pegaprox-1.png"><img class="wp-image-309299" src="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/running-the-initial-wizard-to-configure-pegaprox-1.png" alt="Running the initial wizard to configure pegaprox" /></a>
<figcaption class="wp-element-caption">Running the initial wizard to configure pegaprox</figcaption>
</figure>
<p><!-- /wp:image -->

<!-- wp:paragraph --></p>
<p>Setting a new password, etc.</p>
<p><!-- /wp:paragraph -->

<!-- wp:image --></p>
<figure class="wp-block-image aligncenter size-full"><a href="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/more-of-the-configuration-screen-on-the-lxc-container-1.png"><img class="wp-image-309301" src="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/more-of-the-configuration-screen-on-the-lxc-container-1.png" alt="More of the configuration screen on the lxc container" /></a>
<figcaption class="wp-element-caption">More of the configuration screen on the lxc container</figcaption>
</figure>
<p><!-- /wp:image -->

<!-- wp:paragraph --></p>
<p>Once the configuration wizard has run, you can browse to the configured IP address and port and log in with the default credentials:</p>
<p><!-- /wp:paragraph -->

<!-- wp:list --></p>
<ul class="wp-block-list"><!-- wp:list-item -->
<li>user: <strong>pegaprox</strong></li>
<!-- /wp:list-item -->

<!-- wp:list-item -->
<li>pass: <strong>admin</strong></li>
<!-- /wp:list-item --></ul>
<p><!-- /wp:list -->

<!-- wp:image --></p>
<figure class="wp-block-image aligncenter size-full"><a href="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/logging-into-the-web-ui-for-pegaprox-1.png"><img class="wp-image-309303" src="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/logging-into-the-web-ui-for-pegaprox-1.png" alt="Logging into the web ui for pegaprox" /></a>
<figcaption class="wp-element-caption">Logging into the web ui for pegaprox</figcaption>
</figure>
<p><!-- /wp:image -->

<!-- wp:paragraph --></p>
<p>After you get logged in for the first time, this is what the all-clusters overview screen looks like by default.</p>
<p><!-- /wp:paragraph -->

<!-- wp:image --></p>
<figure class="wp-block-image aligncenter size-full"><a href="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/default-dashboard-screen-with-pegaprox-1.png"><img class="wp-image-309305" src="https://www.virtualizationhowto.com/wp-content/uploads/2026/02/default-dashboard-screen-with-pegaprox-1.png" alt="Default dashboard screen with pegaprox" /></a>
<figcaption class="wp-element-caption">Default dashboard screen with pegaprox</figcaption>
</figure>
<p><!-- /wp:image --></p>
<p><!-- /wp:paragraph --></p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/"></category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/proxmox-help/how-to-install-pegaprox-in-proxmox/</guid>
                    </item>
				                    <item>
                        <title>Remove a Proxmox node from a Cluster and one that is stuck</title>
                        <link>https://www.virtualizationhowto.com/community/proxmox-help/remove-a-proxmox-node-from-a-cluster-and-one-that-is-stuck/</link>
                        <pubDate>Sun, 08 Feb 2026 03:02:25 +0000</pubDate>
                        <description><![CDATA[If you work with Proxmox clusters long enough, you will likely have a cluster that you need to remove a node from. This is a fairly easy process using the right commands. First, check the st...]]></description>
                        <content:encoded><![CDATA[<p>If you work with Proxmox clusters long enough, you will likely have a cluster that you need to remove a node from. This is a fairly easy process using the right commands. First, check the status of your cluster with the command:</p>
<pre contenteditable="false">pvecm status</pre>
<p>This will show you the current status of your Proxmox cluster. Then to remove the selected node, you run the following command:</p>
<pre contenteditable="false">pvecm delnode &lt;nodename&gt;</pre>
<p>&nbsp;</p>
<p>However, sometimes the command reports success but you still see the node listed in the Proxmox web UI. When this happens, check the directory below to see if a folder for the deleted node still exists:</p>
<pre contenteditable="false">ls /etc/pve/nodes</pre>
<p>If you see the deleted Proxmox cluster node listed here, then you can delete the folder representing the node:</p>
<pre contenteditable="false">rm -rf /etc/pve/nodes/&lt;nodename&gt;</pre>
<p>Check the web UI to make sure the node is gone.</p>
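<p>Because that <code>rm -rf</code> runs inside the cluster filesystem, it is worth guarding with an existence check. Here is a minimal sketch of the same logic, dry-run against a scratch directory instead of the real <code>/etc/pve/nodes</code> (the node name is a stand-in):</p>
<pre contenteditable="false"># Stand-in for /etc/pve/nodes so the logic can be tested safely
NODES_DIR=$(mktemp -d)
mkdir -p "$NODES_DIR/pve-old-node"

NODE="pve-old-node"
if [ -d "$NODES_DIR/$NODE" ]; then
  rm -rf "$NODES_DIR/$NODE"
  echo "removed $NODE"
else
  echo "no leftover directory for $NODE"
fi</pre>
<p>On a real host you would set <code>NODES_DIR=/etc/pve/nodes</code> and double-check the node name first.</p>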
<p>&nbsp;</p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/"></category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/proxmox-help/remove-a-proxmox-node-from-a-cluster-and-one-that-is-stuck/</guid>
                    </item>
				                    <item>
                        <title>Script to add all virtual machines to Proxmox High Availability</title>
                        <link>https://www.virtualizationhowto.com/community/proxmox-help/script-to-add-all-virtual-machines-to-proxmox-high-availability/</link>
                        <pubDate>Sun, 25 Jan 2026 14:08:52 +0000</pubDate>
                        <description><![CDATA[Hi all, just a quick note I wanted to toss out here. If you are looking for a script to add all of your Proxmox virtual machines to HA without having to click through the GUI to add each one...]]></description>
                        <content:encoded><![CDATA[<p>Hi all, just a quick note I wanted to toss out here. If you are looking for a script to add all of your Proxmox virtual machines to HA without having to click through the GUI to add each one individually. In the script below, you will see that I have my storage defined (we are adding all VMs on shared storage), and I am excluding VMs that match my Veeam worker nodes (vm-worker*).</p>
<p>First confirm your storage ID:</p>
<pre contenteditable="false">grep -Rho --include="*.conf" -E '^(scsi|virtio|sata|ide)[0-9]+:\s*[^:,]+:' /etc/pve/nodes/*/qemu-server/*.conf \
| sed -E 's/^(scsi|virtio|sata|ide)[0-9]+:\s*([^:,]+):.*/\2/' \
| sort -u
</pre>
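<p>To sanity-check that extraction, you can feed a couple of hypothetical config lines through the same sed expression:</p>
<pre contenteditable="false"># Made-up disk lines in qemu-server config format
sample='scsi0: rbd-vm:vm-100-disk-0,size=32G
virtio1: local-lvm:vm-101-disk-0,size=8G
ide2: none,media=cdrom'

storage_ids=$(printf '%s\n' "$sample" \
  | sed -nE 's/^(scsi|virtio|sata|ide)[0-9]+:\s*([^:,]+):.*/\2/p' \
  | sort -u)
echo "$storage_ids"</pre>
<p>The <code>none</code> CD-ROM line has no storage ID and drops out, leaving one ID per storage.</p>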
<p>In this output, my storage ID was "rbd-vm":</p>
<pre contenteditable="false">STORAGE_ID="rbd-vm"

# Get list of HA-managed VMIDs
HA_VMS=$(ha-manager config | awk '/^vm:/{print $1}' | sed 's/vm://')

for cfg in /etc/pve/nodes/*/qemu-server/*.conf; do
  id=$(basename "$cfg" .conf)

  # Skip if already HA-managed
  if echo "$HA_VMS" | grep -qx "$id"; then
    continue
  fi

  # Skip templates
  if grep -q '^template: 1' "$cfg"; then
    continue
  fi

  # Skip Veeam worker VMs (name matches vm-worker*)
  name=$(awk '/^name:/{print $2}' "$cfg")
  if [[ "$name" == vm-worker* ]]; then
    continue
  fi

  # Only include Ceph RBD-backed VMs
  if grep -q "${STORAGE_ID}:" "$cfg"; then
    ha-manager add "vm:$id" --state started
    echo "Added vm:$id ($name) to HA"
  fi
done</pre>
<p>&nbsp;</p>
<p>&nbsp;</p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/"></category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/proxmox-help/script-to-add-all-virtual-machines-to-proxmox-high-availability/</guid>
                    </item>
				                    <item>
                        <title>How to Update Proxmox Datacenter Manager</title>
                        <link>https://www.virtualizationhowto.com/community/proxmox-help/how-to-update-proxmox-datacenter-manager/</link>
                        <pubDate>Sun, 18 Jan 2026 19:59:27 +0000</pubDate>
                        <description><![CDATA[Let&#039;s briefly look at the steps to update Proxmox Datacenter Manager (PDM). It is very similar to how you would update a Proxmox VE Server node. Navigate to Administration &gt; Updates.
Jus...]]></description>
                        <content:encoded><![CDATA[<p>Let's briefly look at the steps to update Proxmox Datacenter Manager (PDM). It is very similar to how you would update a Proxmox VE Server node. Navigate to <strong>Administration &gt; Updates.</strong></p>
<p>Just like Proxmox VE Server, you need to add the <strong>no subscription</strong> repository to your PDM server as you can see I have done below.</p>
<p>Then refresh the updates that are available. Once you have done that, you can apply the updates. It will launch the pop-up dialog like it does with Proxmox VE Server and you will type a <strong>Y</strong> to confirm.</p>
<p>That is all there is to it. Once the updates apply, you can refresh the interface and it should reflect a new version listed for Proxmox Datacenter Manager.</p>
<p>&nbsp;</p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/"></category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/proxmox-help/how-to-update-proxmox-datacenter-manager/</guid>
                    </item>
				                    <item>
                        <title>How to Install Veeam Hardened Linux Appliance 13 step by step</title>
                        <link>https://www.virtualizationhowto.com/community/how-tos-shared/how-to-install-veeam-hardened-linux-appliance-13-step-by-step/</link>
                        <pubDate>Fri, 09 Jan 2026 16:57:37 +0000</pubDate>
                        <description><![CDATA[Below are screenshots of deploying the Veeam Data Platform 13 appliance in a VMware vSphere environment. First, you grab the ISO or OVA. Here I am grabbing the OVA appliance file.

&nbsp;...]]></description>
                        <content:encoded><![CDATA[<p>Below are screenshots of deploying the Veeam Data Platform 13 appliance in a VMware vSphere environment. First, you grab the ISO or OVA. Here I am grabbing the OVA appliance file.</p>
<p>&nbsp;</p>
<p><span style="font-size: 18pt">Deploying the OVA appliance in vSphere</span></p>
<p>This is just classic OVA deployment in vSphere but below are the screenshots of the process for my home lab.</p>
<p>Selecting the name and folder location.</p>
<p>Select a compute resource where you want the OVA to be deployed.</p>
<p>Review the initial details that it shows in the OVA wizard.</p>
<p>Select the target storage for the OVA appliance deployment.</p>
<p>Select the virtual network where you want to attach it.</p>
<p>Ready to complete the OVA deployment wizard.</p>
<p><span style="font-size: 18pt">Starting the appliance and initial configuration wizard</span></p>
<p>Once the appliance is deployed, power it up. You will see the appliance boot as below.</p>
<p>It will then kick off an initial configuration wizard for Veeam. First, accept the EULA.</p>
<p>Next, set the hostname for the appliance.</p>
<p>Next, configure your network settings.</p>
<p>On the next step, you will be able to configure your NTP setup.</p>
<p>Next, set your veeamadmin password.</p>
<p>Be sure to set something super complex; it will only allow three characters of the same type in a row.</p>
<p>Set up MFA for your user account.</p>
<p>Configure a security officer account. This is optional and you can check the box to skip this part if you want.</p>
<p>Summary screen of the initial configuration settings.</p>
<p>It will apply the configuration settings and restart all the services.</p>
<p>The appliance will reboot and you will see something similar to the below on the console. This allows you to see how you connect to the appliance.</p>
<p>When logging in, the first thing it will do is update the host components and apply updates.</p>
<p>Hopefully this walkthrough of installing the Veeam hardened Linux appliance from the OVA in a VMware vSphere environment helps. The process is straightforward and intuitive in classic Veeam fashion. Enjoy!</p>
						                            <category domain="https://www.virtualizationhowto.com/community/"></category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/how-tos-shared/how-to-install-veeam-hardened-linux-appliance-13-step-by-step/</guid>
                    </item>
				                    <item>
                        <title>Let’s Encrypt is adding SSL certs for IP addresses and what this means</title>
                        <link>https://www.virtualizationhowto.com/community/home-lab-forum/lets-encrypt-is-adding-ssl-certs-for-ip-addresses-and-what-this-means/</link>
                        <pubDate>Mon, 22 Dec 2025 04:52:55 +0000</pubDate>
                        <description><![CDATA[I ran across this in the Let’s Encrypt community forum and figured it was worth sharing here because this actually affects a lot of home lab and self-hosting setups or could potentially be u...]]></description>
                        <content:encoded><![CDATA[<p data-start="282" data-end="447">I ran across this in the Let’s Encrypt community forum and figured it was worth sharing here because this actually affects a lot of home lab and self-hosting setups or could potentially be useful. </p>
<p data-start="449" data-end="682"><span class="hover:entity-accent entity-underline inline cursor-pointer align-baseline"><span class="whitespace-normal">Let's Encrypt</span></span> is starting to move toward SSL/TLS certificates for <strong data-start="541" data-end="557">IP addresses</strong>, not just DNS names. That’s something we haven’t really had from public CAs before, and it solves a pretty common problem.</p>
<h3 data-start="684" data-end="723">What does this mean in plain terms?</h3>
<p data-start="725" data-end="762">Right now, if you hit something like: <code>https://192.168.1.50</code></p>
<p data-start="794" data-end="805">You either:</p>
<ul data-start="806" data-end="916">
<li data-start="806" data-end="829">
<p data-start="808" data-end="829">Get a browser warning</p>
</li>
<li data-start="830" data-end="854">
<p data-start="832" data-end="854">Use a self-signed cert</p>
</li>
<li data-start="855" data-end="916">
<p data-start="857" data-end="916">Or create a fake/internal DNS name just to make HTTPS happy</p>
</li>
</ul>
<p data-start="918" data-end="1042">With this change, you’ll be able to get a trusted HTTPS certificate for an IP address directly and you won't need a domain name.</p>
<h3 data-start="1044" data-end="1073">Why this actually matters</h3>
<p data-start="1075" data-end="1144">This comes up way more often than people might think, especially in labs or development environments. Think about things like the following URLs where we usually get a certificate warning:</p>
<ul data-start="1171" data-end="1338">
<li data-start="1171" data-end="1208">
<p data-start="1173" data-end="1208">Proxmox, ESXi, or appliance web UIs</p>
</li>
<li data-start="1209" data-end="1231">
<p data-start="1211" data-end="1231">NAS management pages</p>
</li>
<li data-start="1232" data-end="1256">
<p data-start="1234" data-end="1256">Firewalls and switches</p>
</li>
<li data-start="1257" data-end="1278">
<p data-start="1259" data-end="1278">Internal dashboards</p>
</li>
<li data-start="1279" data-end="1338">
<p data-start="1281" data-end="1338">Test services you don’t want to bother putting behind DNS</p>
</li>
</ul>
<p data-start="1340" data-end="1423">A lot of us just live with browser warnings for these, but this will give us a better way moving forward.</p>
<h3 data-start="1425" data-end="1455">A couple important gotchas</h3>
<p data-start="1457" data-end="1495">This isn’t magic and there are limits.</p>
<ul data-start="1497" data-end="1697">
<li data-start="1497" data-end="1577">
<p data-start="1499" data-end="1577">These certs are IP-only. If I am reading this right, you can’t currently mix DNS names and IPs in the same cert, so expect some complexity and either/or scenarios.</p>
</li>
<li data-start="1578" data-end="1626">
<p data-start="1580" data-end="1626">Validation is done via HTTP-01 or TLS-ALPN-01</p>
</li>
<li data-start="1627" data-end="1697">
<p data-start="1629" data-end="1697">Let’s Encrypt needs to be able to reach that IP during issuance</p>
</li>
</ul>
<p data-start="1699" data-end="1812">So this isn’t for completely isolated internal-only IPs unless you already have a way to expose them temporarily to get a cert issued.</p>
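<p>You can already see what an IP-only certificate looks like structurally by minting a throwaway self-signed one with an IP subjectAltName. This only illustrates the SAN mechanics; it is not a trusted Let's Encrypt cert, and it assumes OpenSSL 1.1.1 or newer for <code>-addext</code>:</p>
<pre contenteditable="false"># Throwaway key + cert whose SAN is an IP address, not a DNS name
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/key.pem" -out "$tmp/cert.pem" \
  -subj "/CN=192.168.1.50" \
  -addext "subjectAltName=IP:192.168.1.50" 2>/dev/null

# Browsers validate against the SAN, not the CN
san=$(openssl x509 -in "$tmp/cert.pem" -noout -ext subjectAltName)
echo "$san"</pre>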
<h3 data-start="1814" data-end="1855">This doesn’t replace normal DNS certs</h3>
<p data-start="1857" data-end="2000">If you already have proper DNS and domain-based certs, nothing really changes there. DNS certs are still the cleanest long-term solution. This new option simply gives admins another tool to reduce some of the challenges they may face. It will also help curb bad habits like ignoring cert warnings, and it will make IP-only access much cleaner and more trustworthy.</p>
<h3 data-start="2128" data-end="2170">Why this may be a big deal for home labs</h3>
<p data-start="2172" data-end="2235">Most of us start out accessing things IP-only; DNS comes into the lab later, if at all. Proper SSL certificate issuance has meant needing a public DNS name, which many don't have out of the gate. These upcoming changes will make it possible to run secure HTTPS earlier, avoid self-signed certificates that can be a nightmare to manage, and make lab management feel more production-grade.</p>
<h3 data-start="2401" data-end="2419">Rollout timing</h3>
<p data-start="2421" data-end="2493">This is still rolling out, so don’t expect everything to work right away. ACME clients and reverse proxies will need updates; if you’re using Certbot, Traefik, Caddy, Nginx, etc., support will depend on when each client picks this up.</p>
<p data-start="2421" data-end="2493">Read their blog here: <a href="https://community.letsencrypt.org/t/upcoming-changes-to-let-s-encrypt-certificates/243873">Upcoming Changes to Let’s Encrypt Certificates - API Announcements - Let's Encrypt Community Support</a></p>
<p data-start="2861" data-end="2950" data-is-last-node="" data-is-only-node="">Curious how others here might use this. Proxmox UI? Network gear? Temporary lab services? I think we also need more information on the requirements and limitations around Let's Encrypt needing to reach the IP during issuance.</p>
						                            <category domain="https://www.virtualizationhowto.com/community/"></category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/home-lab-forum/lets-encrypt-is-adding-ssl-certs-for-ip-addresses-and-what-this-means/</guid>
                    </item>
							        </channel>
        </rss>
		