<?xml version="1.0" encoding="UTF-8"?>        <rss version="2.0"
             xmlns:atom="http://www.w3.org/2005/Atom"
             xmlns:dc="http://purl.org/dc/elements/1.1/"
             xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
             xmlns:admin="http://webns.net/mvcb/"
             xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#"
             xmlns:content="http://purl.org/rss/1.0/modules/content/">
        <channel>
            <title>Kubernetes and Containers - VHT Forum</title>
            <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/</link>
            <description>Virtualization Howto Discussion Board</description>
            <language>en-US</language>
            <lastBuildDate>Sat, 16 May 2026 22:28:05 +0000</lastBuildDate>
            <generator>wpForo</generator>
            <ttl>60</ttl>
							                    <item>
                        <title>How to Debug Docker Builds in Visual Studio Code (Step-by-Step Guide)</title>
                        <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/how-to-debug-docker-builds-in-visual-studio-code-step-by-step-guide/</link>
                        <pubDate>Sat, 18 Oct 2025 02:37:30 +0000</pubDate>
                        <description><![CDATA[If you’re a fan of Docker and Visual Studio Code, which I am a fan of both, there’s some cool news that makes debugging Docker builds a whole lot easier. Docker and Microsoft have recently t...]]></description>
<content:encoded><![CDATA[<p data-start="0" data-end="302">If you’re a fan of both Docker and Visual Studio Code, there’s some cool news that makes debugging Docker builds a whole lot easier. Docker and Microsoft have recently been working to improve the developer experience of working with Docker. The latest update now lets you <strong data-start="256" data-end="299">debug Docker builds directly in VS Code</strong>.</p>
<p data-start="304" data-end="734">In the past, if your Docker build failed or something didn’t act like you thought it would in your Dockerfile, your only real options were to add a bunch of RUN echo or RUN ls commands. You would then need to rebuild repeatedly, and hope you eventually found the issue. Now, Docker’s <strong>BuildKit debugger</strong> can actually open up an interactive debugging session right inside VS Code, letting you step through each instruction in your Dockerfile like real code.</p>
<p data-start="736" data-end="821">You start by adding the new <strong>--debug flag</strong> to your Docker build command, like this:</p>
<pre contenteditable="false">docker build --debug .</pre>
<p data-start="855" data-end="1187">VS Code will then recognize the BuildKit debug session and allow you to connect to it. Once connected, you can see the build stages, inspect environment variables, and even poke around the container’s filesystem at each step. It’s a much more visual way to troubleshoot, and it makes it much easier to see what's happening inside your image during the build process.</p>
<p data-start="855" data-end="1187">Below, I am adding a breakpoint, and you can see that the build has stopped where the breakpoint was added.</p>
<p>File explorer with the new Docker BuildKit tool.</p>
<p>Inspecting variables with the new BuildKit.</p>
<p data-start="1189" data-end="1453">This new integration is really great for developers who work with multi-stage builds. You can pause inside any stage, check dependencies, and verify installed binaries. You can then make sure your <code data-start="1371" data-end="1377">COPY</code> or <code data-start="1381" data-end="1386">RUN</code> instructions are doing what you think they are before moving on.</p>
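<p>As a rough illustration (the image names and stage names below are hypothetical, not from the Docker write-up), a multi-stage Dockerfile like this is where the debugger shines, since you can pause inside the build stage and confirm the compiled artifact exists before the final COPY runs:</p>
<pre contenteditable="false"># Hypothetical multi-stage build for illustration
# Stage 1: compile the binary
FROM golang:1.22 AS build
WORKDIR /src
COPY . .
RUN go build -o /out/app .

# Stage 2: minimal runtime image that copies only the artifact
FROM debian:bookworm-slim
COPY --from=build /out/app /usr/local/bin/app
CMD ["app"]</pre>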
<p data-start="1455" data-end="1802">To make it work, you’ll need to have the latest Docker extension for VS Code and make sure you’re using <strong>BuildKit</strong>. You need this since the debugger relies on BuildKit’s capabilities. Once you’ve got that, you can either attach to a running build or launch one straight from VS Code using the “Docker: Build with Debug” command from the Command Palette.</p>
<p data-start="1804" data-end="2046">If you are like me and have ever spent too long trying to fix broken Docker builds, this feels like a big step forward. I have definitely sunk many hours into this in both home lab and production environments. Instead of guessing what went wrong, you can now literally step through your build and inspect the state of things in real time.</p>
<p data-start="2048" data-end="2333">I’ve tried it out on a small self-hosted app I built, and it works really well. You can drop into the environment mid-build and even run commands manually to test things out. I think the collaboration between Docker and Microsoft is really tightening the loop between development and building containers.</p>
<p data-start="2335" data-end="2583">If you want to dig deeper, the full write-up from Docker has step-by-step details and screenshots showing how it all works: <a class="decorated-link" href="https://www.docker.com/blog/debug-docker-builds-with-visual-studio-code/" target="_new" rel="noopener" data-start="2464" data-end="2583">Debug Docker Builds with Visual Studio Code</a></p>
<p data-start="2585" data-end="2768" data-is-last-node="" data-is-only-node="">Definitely worth checking out if you’re building images regularly or tired of “debugging by rebuild.” This should save a ton of time for anyone who’s serious about Docker development.</p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/kubernetes-and-containers/">Kubernetes and Containers</category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/kubernetes-and-containers/how-to-debug-docker-builds-in-visual-studio-code-step-by-step-guide/</guid>
                    </item>
				                    <item>
<title>Kubernetes 1.34 Is Out and Here Is What Matters for SREs and Home Labs Both</title>
                        <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/kubernetes-1-34-is-out-and-here-is-what-matters-for-sres-and-home-labs-both/</link>
                        <pubDate>Tue, 02 Sep 2025 23:56:25 +0000</pubDate>
                        <description><![CDATA[In case you haven&#039;t seen it yet, Kubernetes 1.34 just dropped. It&#039;s not necessarily a flashy release, but I think it is one that SREs (and home labbers) should appreciate. It’s full of quali...]]></description>
                        <content:encoded><![CDATA[<p data-start="141" data-end="383">In case you haven't seen it yet, Kubernetes 1.34 just dropped. It's not necessarily a flashy release, but I think it is one that SREs (and home labbers) should appreciate. It’s full of quality-of-life improvements that will help to make clusters more predictable and cut down on surprises.</p>
<p data-start="141" data-end="383"><a href="https://kubernetes.io/blog/2025/08/27/kubernetes-v1-34-release/">- Kubernetes v1.34: Of Wind &amp; Will (O' WaW) | Kubernetes</a></p>
<p data-start="385" data-end="422">Here are a few of the highlights:</p>
<ul data-start="424" data-end="1576">
<li data-start="424" data-end="681">
<p data-start="426" data-end="681"><strong data-start="426" data-end="466">Dynamic Resource Allocation (now Stable)</strong> – Kubernetes now has first-class handling of GPUs, FPGAs, and other accelerators, with better visibility, smarter scheduling, and the ability to share device slices. I think this will be a big deal for ML in production, and even more exciting for home lab GPU configurations.</p>
</li>
<li data-start="682" data-end="901">
<p data-start="684" data-end="901"><strong data-start="684" data-end="724">Per-container restart policy (Alpha)</strong> – restart a single container instead of rescheduling the whole pod. Less disruption, more efficiency. Great in prod, and handy in labs when you’re testing multi-service pods.</p>
</li>
<li data-start="902" data-end="1047">
<p data-start="904" data-end="1047"><strong data-start="904" data-end="956">Short-lived ServiceAccount tokens (Beta/Default)</strong> – replaces long-lived imagePullSecrets. More secure and less secret management overhead.</p>
</li>
<li data-start="1048" data-end="1212">
<p data-start="1050" data-end="1212"><strong data-start="1050" data-end="1079">Built-In Pod mTLS (Alpha)</strong> – native short-lived certs for workload-to-workload encryption, no sidecars needed. Moves Kubernetes toward zero-trust by default.</p>
</li>
<li data-start="1213" data-end="1339">
<p data-start="1215" data-end="1339"><strong data-start="1215" data-end="1232">KYAML (Alpha)</strong> – stricter, safer YAML that avoids those frustrating “why did this deploy break on whitespace” problems.</p>
</li>
<li data-start="1340" data-end="1471">
<p data-start="1342" data-end="1471"><strong data-start="1342" data-end="1373">OCI artifact volumes (Beta)</strong> – mount configs, models, or binaries from a registry without bloating images. Clean and simple.</p>
</li>
<li data-start="1472" data-end="1576">
<p data-start="1474" data-end="1576"><strong data-start="1474" data-end="1515">Graceful Windows node shutdown (Beta)</strong> – Windows nodes finally respect termination grace periods.</p>
</li>
</ul>
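<p>To give a rough idea of the OCI artifact volumes feature, here is a sketch of what a pod spec using an image volume source can look like. The registry path and names below are hypothetical, and the exact schema may differ between versions, so treat this as an illustration rather than a definitive manifest:</p>
<pre contenteditable="false">apiVersion: v1
kind: Pod
metadata:
  name: model-server
spec:
  containers:
    - name: app
      image: registry.example.com/app:latest
      volumeMounts:
        - name: model
          mountPath: /models      # artifact contents appear here
  volumes:
    - name: model
      image:                      # OCI artifact volume source
        reference: registry.example.com/models/my-model:v1
        pullPolicy: IfNotPresent</pre>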
<p data-start="1578" data-end="1807">From an SRE’s perspective, this is a release that is all about stability, observability, and security. From a home labber’s perspective, I think it removes some of the friction of running Kubernetes in a home lab.</p>
<p data-start="1809" data-end="2087">I’m personally excited about per-container restarts and the DRA graduation. Those two solve real pain points that I’ve seen in both production and lab environments. I am curious what you all think. Will you be testing 1.34 right away? Or will you wait until these features harden a bit?</p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/kubernetes-and-containers/">Kubernetes and Containers</category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/kubernetes-and-containers/kubernetes-1-34-is-out-and-here-is-what-matters-for-sres-and-home-labs-both/</guid>
                    </item>
				                    <item>
                        <title>Kubernetes 1.33 Octarine is GA New Features</title>
                        <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/kubernetes-1-33-octarine-is-ga-new-features/</link>
                        <pubDate>Fri, 09 May 2025 03:33:13 +0000</pubDate>
<description><![CDATA[Hey all my Kubernetes homelabbers, the Kubernetes 1.33 release has officially gone GA. It&#039;s packed with some powerful new features that change how we manage persist...]]></description>
<content:encoded><![CDATA[<p class="" data-start="183" data-end="196">Hey all my Kubernetes homelabbers, the Kubernetes 1.33 release has officially gone GA. It's packed with some powerful new features that change how we manage persistent storage, autoscaling, and networking in our clusters. Here's a quick summary of the highlights:</p>
<h3 class="" data-start="437" data-end="469">&#x1f504; Volume Populators Go GA</h3>
<p class="" data-start="470" data-end="855">Volume Populators are now stable. This means you can now pre-populate your PVCs with data from sources other than just volume snapshots or clones.<br data-start="616" data-end="619" />This is enabled using the <code data-start="643" data-end="658">dataSourceRef</code> field. It also requires a CRD and the <code data-start="692" data-end="722">volume-data-source-validator</code> controller in your cluster. Super helpful for scenarios like injecting test data or restoring from backups using custom controllers.</p>
<p data-start="470" data-end="855">Below is an example of how to use the volume populators:</p>
<pre contenteditable="false">apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc1
spec:
  ...
  dataSourceRef:
    apiGroup: provider.example.com
    kind: Provider
    name: provider1</pre>
<h3 class="" data-start="857" data-end="903">&#x1f9f9; PersistentVolume Reclaim Policy Fixes</h3>
<p class="" data-start="904" data-end="1178">There’s now a new protection in place for persistent volumes (PVs) when using the “Delete” reclaim policy. Before, if you deleted a PV before the PVC, the deletion logic might get skipped. Kubernetes 1.33 introduces new finalizers to make sure the reclaim policy is carried out no matter the order in which things are deleted.</p>
<h3 class="" data-start="1180" data-end="1214">&#x26a0;&#xfe0f; Endpoints API Deprecation</h3>
<p class="" data-start="1215" data-end="1450">The old <code data-start="1223" data-end="1234">Endpoints</code> API is now officially deprecated. If you're still using it, you'll start seeing warnings. The Kubernetes team recommends switching to <code data-start="1369" data-end="1385">EndpointSlices</code>. These have been stable since the 1.21 release and have better scalability.</p>
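<p>If you want to see what your cluster is already using, a quick way to compare the two APIs (the namespace below is just an example) is:</p>
<pre contenteditable="false"># List the deprecated Endpoints objects
kubectl get endpoints -n default

# List the newer EndpointSlices that replace them
kubectl get endpointslices -n default</pre>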
<h3 class="" data-start="1452" data-end="1504">&#x1f4c8; Horizontal Pod Autoscaler Tolerance (Alpha)</h3>
<p class="" data-start="1505" data-end="1743">K8s 1.33 adds another new feature in alpha that allows you to tweak the tolerance level for HPA metrics. This means fewer unnecessary scale-up/scale-down events, even when your resource usage fluctuates only slightly, which gives you more control in production environments.</p>
<h3 class="" data-start="1745" data-end="1786">&#x1f525; nftables Mode in Kube-Proxy (GA)</h3>
<p class="" data-start="1787" data-end="2022">The long-awaited nftables mode for <code data-start="1822" data-end="1834">kube-proxy</code> goes GA in 1.33. It replaces iptables for better performance and scalability, especially on modern Linux kernels. The iptables mode remains the default for now, but nftables is the path forward.</p>
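<p>For reference, the mode is switched through the kube-proxy configuration. A minimal sketch of the relevant KubeProxyConfiguration fragment (only the mode field shown; the rest of your config stays as-is) looks like this:</p>
<pre contenteditable="false">apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "nftables"</pre>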
<p class="" data-start="2029" data-end="2196">These changes bring exciting new capabilities and performance boosts, especially for home labbers and platform engineers. If you are working with persistent storage and autoscaling in Kubernetes, this release contains a lot of great new functionality.</p>
<p class="" data-start="2198" data-end="2327">Check out the full release notes and blog here: <a class="" href="https://kubernetes.io/blog/2025/05/08/kubernetes-v1-33-volume-populators-ga/" target="_new" rel="noopener" data-start="2251" data-end="2327">https://kubernetes.io/blog/2025/05/08/kubernetes-v1-33-volume-populators-ga/</a></p>
<p class="" data-start="2329" data-end="2431">Anyone updated to 1.33 already and testing things out?</p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/kubernetes-and-containers/">Kubernetes and Containers</category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/kubernetes-and-containers/kubernetes-1-33-octarine-is-ga-new-features/</guid>
                    </item>
				                    <item>
                        <title>How to upgrade Flux CD in your Kubernetes cluster</title>
                        <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/how-to-upgrade-flux-cd-in-your-kubernetes-cluster/</link>
                        <pubDate>Tue, 22 Apr 2025 03:29:56 +0000</pubDate>
                        <description><![CDATA[First things first, you have to have the latest version of the Flux CLI installed. You can use the package manager that you originally used to install it to upgrade it. You can check to see ...]]></description>
                        <content:encoded><![CDATA[<p>First things first, you have to have the latest version of the Flux CLI installed. You can use the package manager that you originally used to install it to upgrade it. You can check to see if there is an update available with the command:</p>
<pre contenteditable="false">flux check --pre</pre>
<p>See here for documentation on the initial installation and package managers you can use: <a href="https://fluxcd.io/flux/cmd/">Flux CLI | Flux</a>.</p>
<p>After you have updated your Flux CLI, you can see this with the command:</p>
<pre contenteditable="false">flux --version</pre>
<p><img style="margin-left: auto;margin-right: auto" src="https://www.virtualizationhowto.com/wp-content/uploads/wpforo/attachments/2/698-2025-04-2122-07-01.jpg" /></p>
<p>You can check and see what version your controllers are sitting at with the command:</p>
<pre contenteditable="false">flux check</pre>
<p>You can see below that the controllers are at an older version (v2.4.0) than my newly updated Flux CLI (2.5.1).</p>
<p><img style="margin-left: auto;margin-right: auto" src="https://www.virtualizationhowto.com/wp-content/uploads/wpforo/attachments/2/699-2025-04-2122-08-54.jpg" /></p>
<p>If you have used the Flux CLI bootstrap command to install Flux in your Kubernetes cluster, you can simply rerun the bootstrap command using the same parameters you used initially to bootstrap. The command will look something like this:</p>
<pre contenteditable="false">flux bootstrap git --url=https://&lt;git server URL&gt;/devops/k8s-gitops.git --branch=main --path=clusters/clkube --token-auth</pre>
<p>After rerunning the bootstrap command, you will see the update process work through and the controllers should be redeployed with the latest version. Then, you can run the <strong>flux check</strong> command again, and you should see your distribution has been updated.</p>
<p><img style="margin-left: auto;margin-right: auto" src="https://www.virtualizationhowto.com/wp-content/uploads/wpforo/attachments/2/700-2025-04-2122-25-18.jpg" /></p>
<p> </p>
<p> </p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/kubernetes-and-containers/">Kubernetes and Containers</category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/kubernetes-and-containers/how-to-upgrade-flux-cd-in-your-kubernetes-cluster/</guid>
                    </item>
				                    <item>
                        <title>How to add Docker Hub Authentication to GitLab Pipeline</title>
                        <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/how-to-add-docker-hub-authentication-to-gitlab-pipeline/</link>
                        <pubDate>Wed, 02 Apr 2025 19:56:32 +0000</pubDate>
                        <description><![CDATA[With the recent changes at Docker regarding the new limits placed on pulls from the official Docker registry, it is a great idea to add authentication to your pipelines as the new limits wil...]]></description>
                        <content:encoded><![CDATA[<p>With the recent changes at Docker regarding the new limits placed on pulls from the official Docker registry, it is a great idea to add authentication to your pipelines as the new limits will likely hinder even very low traffic CI/CD solutions. What are those limits?</p>
<p>Starting April 1, Docker will enforce the following pull rate limits according to the GitLab KB here: <a href="https://about.gitlab.com/blog/2025/03/24/prepare-now-docker-hub-rate-limits-will-impact-gitlab-ci-cd/">Prepare now: Docker Hub rate limits will impact GitLab CI/CD</a></p>
<table>
<thead>
<tr>
<th>User type</th>
<th>Pull rate limit per hour</th>
<th>Number of public repositories</th>
<th>Number of private repositories</th>
</tr>
</thead>
<tbody>
<tr>
<td>Business, Team, Pro (authenticated)</td>
<td>Unlimited (fair use)</td>
<td>Unlimited</td>
<td>Unlimited</td>
</tr>
<tr>
<td>Personal (authenticated)</td>
<td>200 per 6-hour window</td>
<td>Unlimited</td>
<td>Up to 1</td>
</tr>
<tr>
<td>Unauthenticated users</td>
<td>100 per 6-hour window per IPv4 address or IPv6 /64 subnet</td>
<td>Not applicable</td>
<td>Not applicable</td>
</tr>
</tbody>
</table>
<p>To avoid this, you need to add authentication to your CI/CD pipelines, no matter which solution you are using. However, how do we do this with GitLab?</p>
<p>We can add the authentication using a special <strong>DOCKER_AUTH_CONFIG</strong> parameter in your <strong>config.toml</strong>. This is the configuration file for your runner and the contents get created when you pair a runner with your GitLab instance using the <strong>gitlab-runner register</strong> command.</p>
<h2>Generating the Docker Hub authentication token</h2>
<p>First, we need to sign up for Docker Hub and generate a personal token. Once you have a free Docker account, navigate to <strong>Account Settings &gt; Personal access tokens</strong>.</p>
<p><img src="https://www.virtualizationhowto.com/wp-content/uploads/wpforo/attachments/2/692-personal-access-token-on-docker-hub.png" /></p>
<p>After you click to generate a new token, you will see this screen. You can add a description, choose an expiration date (or just leave it set to None), and set the permissions for the token. For what we are trying to accomplish, pulling images from the online public repo, you can just leave it set to <strong>Public Repo Read-only</strong>.</p>
<p><img style="margin-left: auto;margin-right: auto" src="https://www.virtualizationhowto.com/wp-content/uploads/wpforo/attachments/2/693-personal-access-token-on-docker-hub-2.png" /></p>
<p>Once we have the token, we need to create a BASE64-encoded version of our authentication information that combines our username and the token. On a Linux machine, you can do this with the command below. <strong>Note</strong>: the command below is not literal; replace <strong>dockeruser</strong> and <strong>tokentexttokentexttokentext</strong> with your real username and the token you created in the Personal access tokens area above.</p>
<pre contenteditable="false">echo -n "dockeruser:tokentexttokentexttokentext" | base64</pre>
<p>When you run this command from a Linux terminal, you will get the BASE64 encoded token that we can use in the next step.</p>
<p><img style="margin-left: auto;margin-right: auto" src="https://www.virtualizationhowto.com/wp-content/uploads/wpforo/attachments/2/694-personal-access-token-on-docker-hub-3.png" /></p>
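<p>For reference, the BASE64 string ends up inside a small JSON document that Docker clients understand. The value assigned to <strong>DOCKER_AUTH_CONFIG</strong> follows this shape (the auth value below is a placeholder, not a real credential):</p>
<pre contenteditable="false">DOCKER_AUTH_CONFIG={"auths":{"https://index.docker.io/v1/":{"auth":"&lt;your BASE64 string&gt;"}}}</pre>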
<p> </p>
<h2>Updating the config.toml file</h2>
<p>In the special config.toml file, we can use the <strong>environment</strong> parameter to configure the authentication, as in the example below. Note the following:</p>
<ul>
<li><strong>environment</strong> parameter</li>
<li>URL is https://index.docker.io/v1/</li>
<li>Then enter your generated token that you get from using the process above where you see <strong>REDACTED_DOCKER_AUTH</strong> in the environment string below</li>
</ul>
<pre contenteditable="false">concurrent = 1
check_interval = 0
connection_max_age = "15m0s"
shutdown_timeout = 0

[session_server]
  session_timeout = 1800

[[runners]]
  name = "my-runner"
  url = "https://gitlab.example.com"
  id = 0
  token = "REDACTED_TOKEN"
  token_obtained_at = 2024-10-21T00:31:49Z
  token_expires_at = 0001-01-01T00:00:00Z
  executor = "docker"
  environment = ["DOCKER_AUTH_CONFIG={\"auths\":{\"https://index.docker.io/v1/\":{\"auth\":\"REDACTED_DOCKER_AUTH\"}}}"]
  clone_url = "https://gitlab.example.com"
  [runners.custom_build_dir]
  [runners.cache]
    MaxUploadedArchiveSize = 0
    [runners.cache.s3]
    [runners.cache.gcs]
    [runners.cache.azure]
  [runners.docker]
    tls_verify = false
    image = "rocker/verse:latest"
    dns = []
    privileged = false
    disable_entrypoint_overwrite = false
    oom_kill_disable = false
    disable_cache = false
    volumes = ["/cache"]
    extra_hosts = []
    shm_size = 0
    network_mtu = 0
</pre>
<p> </p>
<h2>Confirming your pipeline is authenticating to Docker Hub</h2>
<p>Now that we have everything in place, we just need to run the pipeline once again and make sure it shows that it is now authenticating using the special <strong>$DOCKER_AUTH_CONFIG</strong> parameter.</p>
<p><img src="https://www.virtualizationhowto.com/wp-content/uploads/wpforo/attachments/2/695-confirming-authentication-in-the-Gitlab-pipeline-run.png" /></p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/kubernetes-and-containers/">Kubernetes and Containers</category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/kubernetes-and-containers/how-to-add-docker-hub-authentication-to-gitlab-pipeline/</guid>
                    </item>
				                    <item>
                        <title>Upgrade Netdata in Kubernetes using Helm</title>
                        <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/upgrade-netdata-in-kubernetes-using-helm/</link>
                        <pubDate>Fri, 28 Feb 2025 00:44:00 +0000</pubDate>
                        <description><![CDATA[If you are running netdata to monitor your Kubernetes cluster, how do you upgrade it when the agent version needs upgraded? Note the following steps to upgrade your netdata agent deployment ...]]></description>
<content:encoded><![CDATA[<p>If you are running netdata to monitor your Kubernetes cluster, how do you upgrade it when the agent version needs to be upgraded? Note the following steps to upgrade your netdata agent deployment in Kubernetes.</p>
<p>Check the version of netdata that is installed. If you want to see what version is currently installed, run this command. Replace "netdata" with the namespace you have netdata installed in.</p>
<pre contenteditable="false">helm list -n netdata
</pre>
<p>Update your helm repos to the latest versions:</p>
<pre contenteditable="false">helm repo update</pre>
<p>To upgrade netdata to the latest after updating your helm repo, run the command:</p>
<pre contenteditable="false">helm upgrade &lt;release-name&gt; netdata/netdata -n &lt;namespace&gt; --reuse-values</pre>
<p>Below I have replaced the placeholders with the appropriate values for netdata in my environment. Also, I am running a microk8s cluster, so I need to add "microk8s" in front of the helm command.</p>
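<p>For example, with netdata installed in the "netdata" namespace and a release also named "netdata" (those names match my environment; adjust them for yours), the upgrade command on microk8s looks like this:</p>
<pre contenteditable="false">microk8s helm upgrade netdata netdata/netdata -n netdata --reuse-values</pre>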
]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/kubernetes-and-containers/">Kubernetes and Containers</category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/kubernetes-and-containers/upgrade-netdata-in-kubernetes-using-helm/</guid>
                    </item>
				                    <item>
                        <title>Command to Build Docker Image from Custom Dockerfile</title>
                        <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/command-to-build-docker-image-from-custom-dockerfile/</link>
                        <pubDate>Fri, 31 Jan 2025 16:06:20 +0000</pubDate>
                        <description><![CDATA[Building Docker images isn&#039;t a difficult process and is a great skill for DevOps. At its most simple definition, it has you run the docker build command to create a new image. It then reads ...]]></description>
                        <content:encoded><![CDATA[<p><!-- wp:paragraph --></p>
<p>Building Docker images isn't a difficult process and is a great skill for DevOps. At its simplest, you run the <strong>docker build</strong> command to create a new image, and Docker reads the instructions from your Dockerfile. These instructions typically pull a base image, like Ubuntu, and then execute commands and other instructions to create the final Docker image.</p>
<p><!-- /wp:paragraph --> <!-- wp:image --></p>
<figure>
<figcaption class="wp-element-caption">docker build help command</figcaption>
</figure>
<p><!-- /wp:image --> <!-- wp:heading --></p>
<h2 class="wp-block-heading">Creating a Custom Dockerfile</h2>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>A Dockerfile is a simple text file containing the instructions needed to build a Docker image. The instructions are executed in the order they appear in the file. These can include commands to copy files, set environment variables, run commands, and other important directives to make your application work as intended.</p>
<p><!-- /wp:paragraph --> <!-- wp:paragraph --></p>
<p>Here is a simple example of a Dockerfile that runs a Python application. We can see the different directives in the file that tell Docker how to build the resulting image.</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code># Use an official Python runtime as a parent image
FROM python:3.8-slim

# Set the working directory in the container
WORKDIR /app

# Copy the current directory contents into the container at /app
COPY . /app

# Install any needed packages specified in requirements.txt
RUN pip install --no-cache-dir -r requirements.txt

# Make port 80 available to the world outside this container
EXPOSE 80

# Define environment variable
ENV NAME World

# Run app.py when the container launches
CMD ["python", "app.py"]
</code></pre>
<p><!-- /wp:code --> <!-- wp:heading --></p>
<h3 class="wp-block-heading">Dissecting the commands in the Docker file</h3>
<p><!-- /wp:heading --> <!-- wp:list --></p>
<ul class="wp-block-list"><!-- wp:list-item -->
<li><strong>FROM</strong>: Specifies the base image to use.</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li><strong>WORKDIR</strong>: Sets the working directory for any RUN, CMD, ENTRYPOINT, COPY, and ADD instructions.</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li><strong>COPY</strong>: This copies new files or directories to the container's filesystem.</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li><strong>RUN </strong>instruction: Executes commands in a new layer on top of the current image and then commits the results to the resulting image.</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li><strong>CMD </strong>instruction: This tells the image which command to run within the container.</li>
<!-- /wp:list-item --></ul>
<p><!-- /wp:list --> <!-- wp:heading --></p>
<h2 class="wp-block-heading">Command to Build Docker Image</h2>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>Use the docker build command to build a Docker image from a custom Dockerfile. This command takes several options and arguments to define how the build process should be carried out.</p>
<p><!-- /wp:paragraph --> <!-- wp:heading --></p>
<h3 class="wp-block-heading">Using the Docker Build Command</h3>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>Below is the command to build a Docker image from a custom Dockerfile, run from the command line with the current working directory as the build context.</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code>docker build -t mycustomimage:latest .</code></pre>
<p><!-- /wp:code --> <!-- wp:list --></p>
<ul class="wp-block-list"><!-- wp:list-item -->
<li><strong>-t</strong> - Tags the image with a name and, optionally, a tag in the <strong>name:tag</strong> format.</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li>The <strong>"."</strong> at the end - This specifies the build context. It tells Docker to use the current directory as the set of local files available to the build. From here, Docker begins producing the build output that results in the built image.</li>
<!-- /wp:list-item --></ul>
<p><!-- /wp:list --> <!-- wp:image --></p>
<figure>
<figcaption class="wp-element-caption">building a docker custom image</figcaption>
</figure>
<p><!-- /wp:image --> <!-- wp:heading --></p>
<h2 class="wp-block-heading">Build Context</h2>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>The build context is the set of files located in the specified directory and its subdirectories. Docker sends these files to the Docker daemon when building the image. One important consideration is to keep the build context as small as possible. This helps to speed up the build process and reduce resource usage.</p>
<p><!-- /wp:paragraph --> <!-- wp:heading --></p>
<h3 class="wp-block-heading">Managing Build Context</h3>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>Use a<strong> .dockerignore</strong> file to exclude files and directories from the build context. This file works similarly to a <strong>.gitignore</strong> file, listing patterns for files and directories to ignore. As an example, we are ignoring node modules and log files below:</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code># Ignore the node_modules directory
node_modules

# Ignore all files with the .log extension
*.log</code></pre>
<p><!-- /wp:code --> <!-- wp:heading --></p>
<h2 class="wp-block-heading">Multiple Build Stages</h2>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>One of Docker's advanced build features is the multi-stage build. In a multi-stage build, you use multiple <strong>FROM</strong> statements in a single Dockerfile.</p>
<p><!-- /wp:paragraph --> <!-- wp:paragraph --></p>
<p>One reason you might do this is to optimize your build and reduce the size of the resulting container. When you separate the build environment from the runtime environment, you can include all necessary dependencies for building your application in one stage. Then, only the final artifacts are copied to the runtime stage.</p>
<p><!-- /wp:paragraph --> <!-- wp:heading --></p>
<h3 class="wp-block-heading">Example of Multi-Stage Build</h3>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>Here’s an example of a multiple build stage in a Dockerfile:</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code># Stage 1: Build
FROM maven:3.6.3-jdk-8 as builder
WORKDIR /app
COPY . .
RUN mvn clean install

# Stage 2: Run
FROM openjdk:8-jre-alpine
WORKDIR /app
COPY --from=builder /app/target/app.jar .
CMD ["java", "-jar", "app.jar"]</code></pre>
<p><!-- /wp:code --> <!-- wp:paragraph --></p>
<p>In this example:</p>
<p><!-- /wp:paragraph --> <!-- wp:list --></p>
<ol class="wp-block-list"><!-- wp:list-item -->
<li>The first stage uses a Maven image to compile the Java application.</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li>The second stage uses a lightweight OpenJDK runtime image to run the compiled application.</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li>This separation ensures that the final image contains only the runtime dependencies that are necessary, which results in a smaller image.</li>
<!-- /wp:list-item --></ol>
<p><!-- /wp:list --> <!-- wp:heading --></p>
<h3 class="wp-block-heading">Benefits of Multi-Stage Builds</h3>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>A multi-stage Dockerfile offers several benefits. Note the following:</p>
<p><!-- /wp:paragraph --> <!-- wp:list --></p>
<ul class="wp-block-list"><!-- wp:list-item -->
<li><strong>Smaller Image Size</strong>: By copying only the necessary artifacts to the final image, you can significantly reduce the size of the image.</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li><strong>Better Security</strong>: Smaller images with fewer components reduce the attack surface and the number of potential vulnerabilities.</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li><strong>Separation</strong>: Separating the build environment from the runtime environment helps keep the final image clean and ensures it contains only what is needed to run the app.</li>
<!-- /wp:list-item --></ul>
<p><!-- /wp:list --> <!-- wp:heading --></p>
<h2 class="wp-block-heading">Environment Variables in Docker</h2>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>Environment variables are a good way to configure applications and services. You can set environment variables in a Dockerfile using the <strong>ENV</strong> instruction; these values are baked into the image and available to the running container.</p>
<p><!-- /wp:paragraph --> <!-- wp:heading --></p>
<h3 class="wp-block-heading">Setting Environment Variables</h3>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>The following example sets the app environment and debug settings:</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code>ENV APP_ENV=production
ENV APP_DEBUG=false</code></pre>
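<p>Values set with <strong>ENV</strong> persist into the running container and can be overridden at runtime. As a rough sketch (the service name below is hypothetical, and the image name is just the one built earlier), a Docker Compose file could override the baked-in default like this:</p>
<pre class="wp-block-code" contenteditable="false"><code>services:
  app:
    image: mycustomimage:latest
    environment:
      # Overrides the ENV APP_DEBUG=false default baked into the image
      - APP_DEBUG=true</code></pre>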
<p><!-- /wp:code --> <!-- wp:heading --></p>
<h2 class="wp-block-heading">Advanced Docker Build Options</h2>
<p><!-- /wp:heading --> <!-- wp:heading --></p>
<h3 class="wp-block-heading">Build Arguments</h3>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>Build arguments (ARG) are used to pass variables at build time.</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code>ARG VERSION=1.0
RUN echo "Version: $VERSION"</code></pre>
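<p>Unlike ENV, an ARG value is only available while the image is being built. As a hedged sketch, you can also declare an ARG before FROM to make the base image tag configurable, then override it at build time with <strong>--build-arg</strong> (the tag values here are just examples):</p>
<pre class="wp-block-code" contenteditable="false"><code># Build-time variable with a default; override with:
#   docker build --build-arg BASE_TAG=3.10-slim -t my-image .
ARG BASE_TAG=3.9-slim
FROM python:${BASE_TAG}</code></pre>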
<p><!-- /wp:code --> <!-- wp:heading --></p>
<h3 class="wp-block-heading">Using a Build Context</h3>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>You can specify a different directory as the build context, rather than the current directory, using the following syntax:</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code>docker build -t my-image:latest /path/to/build/context</code></pre>
<p><!-- /wp:code --> <!-- wp:heading --></p>
<h2 class="wp-block-heading">Using Docker Hub</h2>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>The Docker Hub service is Docker's cloud repository. It is where you can find Docker images and share your newly created image. To push your custom Docker images to Docker Hub, you need to log in and use the <strong>docker push</strong> command.</p>
<p><!-- /wp:paragraph --> <!-- wp:paragraph --></p>
<p>Note the following syntax for logging in to Docker, tagging the image, and finally pushing it to the Docker Hub repo.</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code>docker login
docker tag my-image:latest mydockerhubusername/my-image:latest
docker push mydockerhubusername/my-image:latest</code></pre>
<p><!-- /wp:code --> <!-- wp:image --></p>
<figure>
<figcaption class="wp-element-caption">docker login command allows connecting to docker hub</figcaption>
</figure>
<p><!-- /wp:image --> <!-- wp:heading --></p>
<h2 class="wp-block-heading">Dockerfile and Docker Build Example</h2>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>Let's go through an example of building a Docker image for a Python application. First, <strong>create a Dockerfile</strong>:</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code>FROM python:3.9-slim
WORKDIR /app
COPY . /app
RUN pip install --no-cache-dir -r requirements.txt
CMD ["python", "app.py"]</code></pre>
<p><!-- /wp:code --> <!-- wp:paragraph --></p>
<p>Now, we can <strong>build the Docker Image</strong>:</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code>docker build -t my-python-app:latest .</code></pre>
<p><!-- /wp:code --> <!-- wp:paragraph --></p>
<p>Finally, <strong>run the Docker Container</strong>:</p>
<p><!-- /wp:paragraph --> <!-- wp:code --></p>
<pre class="wp-block-code" contenteditable="false"><code>docker run -d -p 5000:5000 my-python-app:latest</code></pre>
<p><!-- /wp:code --> <!-- wp:paragraph --></p>
<p>The Docker container starts up and runs from the custom image.</p>
<p><!-- /wp:paragraph --> <!-- wp:heading --></p>
<h2 class="wp-block-heading">Load Build Definition</h2>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>The build definition is an important part of Docker images. As you would expect, the Dockerfile is the build definition for the resulting Docker image. It details the steps needed to create the image. The definition must be correct and well-structured; giving attention to this helps ensure the build process runs smoothly and produces the desired container image.</p>
<p><!-- /wp:paragraph --> <!-- wp:heading --></p>
<h2 class="wp-block-heading">Best Practices for Building Docker Images</h2>
<p><!-- /wp:heading --> <!-- wp:paragraph --></p>
<p>There are definitely best practices for building Docker image files that you want to keep in mind. Note the following:</p>
<p><!-- /wp:paragraph --> <!-- wp:heading --></p>
<h3 class="wp-block-heading">Minimize Image Size</h3>
<p><!-- /wp:heading --> <!-- wp:list --></p>
<ul class="wp-block-list"><!-- wp:list-item -->
<li>Use a minimal base image - In other words, why use a bloated base image if you don't need one?</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li>Clean up temporary files and package manager caches - also use a <strong>.dockerignore</strong> file to prevent unnecessary files from creeping in.</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li>Use multi-stage builds to separate the build environment from the runtime environment. This results in a smaller image, since unnecessary files and resources are not inadvertently copied in.</li>
<!-- /wp:list-item --></ul>
<p><!-- /wp:list --> <!-- wp:heading --></p>
<h3 class="wp-block-heading">Security Considerations</h3>
<p><!-- /wp:heading --> <!-- wp:list --></p>
<ul class="wp-block-list"><!-- wp:list-item -->
<li>Regularly update your base images to include the newest security updates</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li>Use official images from Docker Hub as these are properly vetted for security concerns</li>
<!-- /wp:list-item --> <!-- wp:list-item -->
<li>Scan images for vulnerabilities using third-party tools or native Docker tools like Docker Scout</li>
<!-- /wp:list-item --></ul>
<p><!-- /wp:list --> <!-- wp:paragraph --></p>
<p>Looking at Docker Scout:</p>
<p><!-- /wp:paragraph --> <!-- wp:image --></p>
<figure>
<figcaption class="wp-element-caption">looking at docker scout in docker desktop</figcaption>
</figure>
<p><!-- /wp:image --> <!-- wp:paragraph --></p>
<p>Viewing vulnerabilities in Docker Desktop for a Docker image using Docker Scout:</p>
<p><!-- /wp:paragraph --> <!-- wp:image --></p>
<figure>
<figcaption class="wp-element-caption">looking at docker scout vulnerabilities for a sample docker image</figcaption>
</figure>
<p><!-- /wp:image --> <!-- wp:paragraph --></p>
<p>It is a great exercise to build Docker container images and look at the command to build a Docker image from a custom Dockerfile. Going through the process of building your own custom image helps you understand the components of a Docker image and the commands needed to bring everything together.</p>
<p><!-- /wp:paragraph --> <!-- wp:paragraph --></p>
<p>Practice building custom images in a lab environment so you understand the process and become proficient with it. I used this process to build a custom arpwatch container for my home lab environment and learned a ton doing this.</p>
<p><!-- /wp:paragraph --> <!-- wp:paragraph --></p>
<p>Let me know what you guys think about building your own custom Docker images. Have you done this before? What projects and solutions make sense to build your own custom container image?</p>
<p><!-- /wp:paragraph --></p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/kubernetes-and-containers/">Kubernetes and Containers</category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/kubernetes-and-containers/command-to-build-docker-image-from-custom-dockerfile/</guid>
                    </item>
				                    <item>
                        <title>Docker Expose Port: How to get traffic into your container</title>
                        <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/docker-expose-port-how-to-get-traffic-into-your-container/</link>
                        <pubDate>Fri, 31 Jan 2025 04:39:29 +0000</pubDate>
                        <description><![CDATA[When you spin up Docker containers, there are generally two types of communication that you want to configure. That is communication between containers and communication from the outside. Do...]]></description>
<content:encoded><![CDATA[<p>When you spin up Docker containers, there are generally two types of communication that you want to configure. That is communication <strong>between</strong> containers and communication <strong>from the outside</strong>. Docker allows you to expose Docker ports in a couple of different ways for each of these use cases. Let’s take a look at what you need to know to expose a port or multiple ports to your Docker containers.</p>
<h2 id="what-does-docker-expose-port-mean" class="wp-block-heading">What Does Docker Expose Port Mean?</h2>
<p>When you “expose” a port or multiple ports, it means that you are making that network port on a Docker container available to be connected to from either the Docker network or the outside world. By default, Docker containers are isolated in the way their networking is designed and are not accessible unless you configure this.</p>
<p>Isolation helps with security: if a port doesn’t need to be exposed to the outside, it shouldn’t be. The more you can limit the number of network ports you expose, the smaller your attack surface will be. However, the trade-off is that you need to think about which ports must be open for the necessary or expected communication with your containers.</p>
<h2 id="how-to-expose-ports-in-docker" class="wp-block-heading">How to Expose Ports in Docker</h2>
<p>Let’s take a look at how to expose ports in Docker and see what commands are needed for opening a Docker port.</p>
<p>To expose ports in Docker, you use the docker run command with the <strong>-p or --publish flag</strong> to publish ports. This flag specifies which ports are available for connection. Let’s look at a simple example of exposing port 80 on an Nginx Docker image web server using port 8080 on the outside.</p>
<pre class="wp-block-code" contenteditable="false"><code>docker run -d -p 8080:80 nginx</code></pre>
<p>In this command, <strong>8080 is the host port</strong>, and <strong>80 is the container port</strong>. The -d flag runs the container in <strong>detached</strong> mode, which keeps the container running in the background without needing an open console connection to your Docker container host.</p>
<p>Publishing a port like this also leaves the container reachable internally on the Docker network. By default, when you spin up a new container and don’t specify a container network, it is connected to the default bridge network on the Docker host.</p>
<h2 id="configuring-container-ports-to-listen" class="wp-block-heading">Configuring container ports to listen</h2>
<p>When creating a Docker container from a container image, configuring the container ports that you want the container to listen on is an important step. To do this, we use a special instruction called <strong>EXPOSE</strong> in the Dockerfile. For example, let’s look at what expose instructions for ports 80 and 443 would look like. Pretty simple:</p>
<pre class="wp-block-code" contenteditable="false"><code>EXPOSE 80
EXPOSE 443</code></pre>
<p>These EXPOSE instructions tell Docker that the container listens on two ports, 80 and 443.</p>
<p>Do you necessarily have to do this? No. If the container image already exposes the ports, the configuration will have this by default and you won’t have to “re-expose” the port in your Docker directives.</p>
<p>Case in point: in the screenshot below, you can see the port <strong>81/tcp</strong> is being exposed on the container, but you don’t see a host-side port mapping.</p>
<div class="wp-block-image">
<figure>
<figcaption class="wp-element-caption">viewing an exposed port in docker</figcaption>
</figure>
</div>
<p>In addition, the Docker Compose code for the container above (Nginx Proxy Manager) looks like the following: you don’t see any entry for port 81 under the ports directive or an expose directive for it. This means the container image is already exposing this port, and the container listens on it by default.</p>
<div class="wp-block-image">
<figure>
<figcaption class="wp-element-caption">published ports in a docker compose file</figcaption>
</figure>
</div>
<h2 id="docker-container-port-mapping" class="wp-block-heading">Docker Container Port Mapping</h2>
<p>Let’s look at the Docker container port mapping construct. This is the configuration that allows mapping internal container ports to external host ports. When you do this, it allows traffic that is destined for the host IP network interface to be forwarded inward to the container and vice versa.</p>
<pre class="wp-block-code" contenteditable="false"><code>docker run -d -p 5000:5000 testapp</code></pre>
<p>This command maps <strong>port 5000 on the host to port 5000 on the container</strong>. It allows communication with the container’s port via the host IP address and makes the container accessible from the host system. Otherwise, you wouldn’t be able to communicate with it.</p>
<h2 id="configuring-the-exposed-ports-with-docker-compose" class="wp-block-heading">Configuring the exposed Ports with Docker Compose</h2>
<p>Docker Compose is the means by which you can spin up Docker container “stacks” that allow multiple containers to be provisioned at once. This is a great way to work with Docker containers if you have an application that requires multiple containers as part of the overall application architecture.</p>
<p>Docker Compose uses YAML code to describe the container configuration. The <strong>ports</strong> directive configures published ports available for connection from the host into the container.</p>
<pre class="wp-block-code" contenteditable="false"><code>version: '3'
services:
  web:
    image: nginx
    ports:
      - "8080:80"</code></pre>
<p>In this configuration, the web service exposes port 80 of the container on port 8080 of the host machine.</p>
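<p>The ports directive publishes to the host. If a container only needs to be reachable by other containers on the same Compose network, you can use the <strong>expose</strong> directive instead, and a published port can also be bound to a specific host interface. A sketch (the api service and its image name are hypothetical):</p>
<pre class="wp-block-code" contenteditable="false"><code>services:
  web:
    image: nginx
    ports:
      # Published only on the loopback interface of the host
      - "127.0.0.1:8080:80"
  api:
    image: myapi:latest
    expose:
      # Reachable from other containers on port 3000, not from the host
      - "3000"</code></pre>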
<h2 id="verifying-exposed-ports-with-docker-ps-and-docker-compose-ps" class="wp-block-heading">Verifying Exposed Ports with Docker PS and Docker-Compose PS</h2>
<p>When you provision a container, you can verify the exposed port mappings using the <strong>docker ps</strong> command.</p>
<pre class="wp-block-code" contenteditable="false"><code>docker ps</code></pre>
<p>Look for the PORTS column in the output to see the mappings.</p>
<p><img src="https://www.virtualizationhowto.com/wp-content/uploads/wpforo/attachments/2/510-using-docker-ps-to-view-port-configurations.png" /></p>
<p>When you use the <strong>docker-compose ps</strong> command, it shows the same information, just for the application stack configured via the Docker Compose file:</p>
<p><img src="https://www.virtualizationhowto.com/wp-content/uploads/wpforo/attachments/2/511-viewing-ports-using-docker-compose-ps.png" /></p>
<h2 id="troubleshooting-docker-expose-port-mappings" class="wp-block-heading">Troubleshooting Docker expose port mappings</h2>
<p>There are a few issues that could come up when you are trying to expose a specified port for a container. Note the following:</p>
<ul class="wp-block-list">
<li>Make sure the port is not in use by another container</li>
<li>Make sure the container is attached to the Docker network that you assume it is connected to</li>
<li>Is the port mapped to the host port you expect?</li>
<li>If you are just exposing a port to use with something like Nginx Proxy Manager, make sure the proxy host is configured correctly to forward traffic to the internally exposed ports of the Docker container.</li>
</ul>
<h2 id="best-practices-for-exposing-ports" class="wp-block-heading">Best Practices for Exposing Ports</h2>
<ol class="wp-block-list">
<li><strong>Use Non-Standard Ports:</strong> Common host ports like 80 and 443 are likely already in use, so you can publish on non-standard host ports to avoid conflicts.</li>
<li><strong>Limit Exposed Ports:</strong> Only expose the container ports that are absolutely needed, as this helps reduce security vulnerabilities.</li>
<li><strong>Document Exposed Ports:</strong><span> </span>Maintain clear documentation of which ports are exposed and their purposes.</li>
</ol>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/kubernetes-and-containers/">Kubernetes and Containers</category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/kubernetes-and-containers/docker-expose-port-how-to-get-traffic-into-your-container/</guid>
                    </item>
				                    <item>
                        <title>Install Kube VIP in Microk8s Kubernetes cluster for HA</title>
                        <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/install-kube-vip-in-microk8s-kubernetes-cluster-for-ha/</link>
                        <pubDate>Sun, 26 Jan 2025 17:46:01 +0000</pubDate>
                        <description><![CDATA[Here are my cheat notes on how to install Kube VIP in Microk8s kubernetes cluster. First, edit the csr.conf.template file found here:
sudo vi /var/snap/microk8s/current/certs/csr.conf.templ...]]></description>
<content:encoded><![CDATA[<p>Here are my cheat notes on how to install Kube VIP in a Microk8s Kubernetes cluster. First, edit the csr.conf.template file found here:</p>
<pre contenteditable="false">sudo vi /var/snap/microk8s/current/certs/csr.conf.template</pre>
<p><img src="https://www.virtualizationhowto.com/wp-content/uploads/wpforo/attachments/2/461-editing-the-csr-conf-template-file.jpg" /></p>
<p>Add the VIP address to the file as IP.99="a.b.c.d", replacing a.b.c.d with your address. Then refresh your Microk8s certificates:</p>
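<p>For reference, the IP entry goes in the [ alt_names ] section of the template. A rough sketch of what that section might look like (the surrounding entries will vary by install, and a.b.c.d is a placeholder for your VIP):</p>
<pre contenteditable="false">[ alt_names ]
DNS.1 = kubernetes
IP.1 = 127.0.0.1
# Add your VIP here
IP.99 = a.b.c.d</pre>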
<pre contenteditable="false">sudo microk8s refresh-certs -e ca.crt</pre>
<p>Create a values.yaml file. You can find an example here:</p>
<pre contenteditable="false">https://github.com/kube-vip/helm-charts/blob/main/charts/kube-vip/values.yaml</pre>
<p>It will look something like the example below. Replace the config: address value with the address that will be the VIP, and change any other settings you would like, though most can use the defaults.</p>
<pre contenteditable="false"># Default values for kube-vip.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
image:
  repository: ghcr.io/kube-vip/kube-vip
  pullPolicy: IfNotPresent
  # Overrides the image tag whose default is the chart appVersion.
  # tag: "v0.7.0"

config:
  address: ""

# Check https://kube-vip.io/docs/installation/flags/
env:
  vip_interface: ""
  vip_arp: "true"
  lb_enable: "true"
  lb_port: "6443"
  vip_cidr: "32"
  cp_enable: "false"
  svc_enable: "true"
  svc_election: "false"
  vip_leaderelection: "false"

extraArgs: {}
  # Specify additional arguments to kube-vip
  # For example, to change the Prometheus HTTP server port, use the following:
  # prometheusHTTPServer: "0.0.0.0:2112"

envValueFrom: {}
  # Specify environment variables using valueFrom references (EnvVarSource)
  # For example we can use the IP address of the pod itself as a unique value for the routerID
  # bgp_routerid:
  #  fieldRef:
  #    fieldPath: status.podIP

envFrom: []
  # Specify an externally created Secret(s) or ConfigMap(s) to inject environment variables
  # For example an externally provisioned secret could contain the password for your upstream BGP router, such as
  #
  # apiVersion: v1
  # data:
  #   bgp_peers: "&lt;address:AS:password:multihop&gt;"
  # kind: Secret
  #   name: kube-vip
  #   namespace: kube-system
  # type: Opaque
  #
  # - secretKeyRef:
  #    name: kube-vip

extraLabels: {}
  # Specify extra labels to be added to DaemonSet (and therefore to Pods)

imagePullSecrets: []
nameOverride: ""
fullnameOverride: ""
# Custom namespace to override the namespace for the deployed resources.
namespaceOverride: ""

serviceAccount:
  # Specifies whether a service account should be created
  create: true
  # Annotations to add to the service account
  annotations: {}
  # The name of the service account to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""

podAnnotations: {}

podSecurityContext: {}
# fsGroup: 2000

securityContext:
  capabilities:
    add:
      - NET_ADMIN
      - NET_RAW
    drop:
      - ALL

resources: {}
  # We usually recommend not to specify default resources and to leave this as a conscious
  # choice for the user. This also increases chances charts run on environments with little
  # resources, such as Minikube. If you do want to specify resources, uncomment the following
  # lines, adjust them as necessary, and remove the curly braces after 'resources:'.
  # limits:
  #   cpu: 100m
  #   memory: 128Mi
  # requests:
  #   cpu: 100m
  #   memory: 128Mi

volumes: []
  # Specify additional volumes
  #   - hostPath:
  #       path: /etc/rancher/k3s/k3s.yaml
  #       type: File
  #     name: kubeconfig

volumeMounts: []
  # Specify additional volume mounts
  # - mountPath: /etc/kubernetes/admin.conf
  #   name: kubeconfig

hostAliases: []
  # Specify additional host aliases
  # - hostnames:
  #     - kubernetes
  #   ip: 127.0.0.1

nodeSelector: {}

tolerations:
  - effect: NoSchedule
    key: node-role.kubernetes.io/control-plane
    operator: Exists
affinity: {}
  # nodeAffinity:
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     nodeSelectorTerms:
  #     - matchExpressions:
  #       - key: node-role.kubernetes.io/master
  #         operator: Exists
  #     - matchExpressions:
  #       - key: node-role.kubernetes.io/control-plane
  #         operator: Exists

podMonitor:
  enabled: false
  labels: {}
  annotations: {}

priorityClassName: ""</pre>
<p>Finally, we need to add the helm repo for kube-vip, update, and then install:</p>
<pre contenteditable="false">microk8s helm3 repo add kube-vip https://kube-vip.io/helm-charts
microk8s helm3 repo update
microk8s helm3 install kube-vip kube-vip/kube-vip --namespace kube-system -f values.yaml</pre>
<p>Hopefully, this will help anyone trying to get this up and running.</p>]]></content:encoded>
						                            <category domain="https://www.virtualizationhowto.com/community/kubernetes-and-containers/">Kubernetes and Containers</category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/kubernetes-and-containers/install-kube-vip-in-microk8s-kubernetes-cluster-for-ha/</guid>
                    </item>
				                    <item>
                        <title>Change Kubernetes Cluster CIDR subnet for Pods in Microk8s</title>
                        <link>https://www.virtualizationhowto.com/community/kubernetes-and-containers/change-kubernetes-cluster-cidr-subnet-for-pods-in-microk8s/</link>
                        <pubDate>Sat, 25 Jan 2025 17:05:18 +0000</pubDate>
                        <description><![CDATA[If you are running Microk8s and need to change the pod CIDR for Calico after the cluster has been created, below is are the steps you can use to accomplish this. These are the steps that I h...]]></description>
<content:encoded><![CDATA[<p>If you are running Microk8s and need to change the pod CIDR for Calico after the cluster has been created, below are the steps you can use to accomplish this. These are the steps I have taken to fill in the gaps in the official documentation on the Microk8s site:</p>
<p>By default, the pod CIDR is 10.1.0.0/16. This may be fine in your environment. However, if your production LAN is running in a subnet that is contained or overlapped by this /16 subnet, you need to change it since it will cause issues in getting traffic in and out of your cluster.</p>
<p>First, edit your file:</p>
<pre contenteditable="false">/var/snap/microk8s/current/args/cni-network/cni.yaml</pre>
<p>You will want to search for the value "10.1.0.0/16" and change it to the subnet you want to use instead. For instance:</p>
<pre contenteditable="false">Change "10.1.0.0/16" to "10.100.0.0/16"</pre>
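<p>In the cni.yaml manifest, that value typically appears as the Calico IP pool environment variable on the calico-node container. Assuming a standard Calico manifest, the edited section would look something like:</p>
<pre contenteditable="false"># Inside the calico-node container env section of cni.yaml
- name: CALICO_IPV4POOL_CIDR
  value: "10.100.0.0/16"</pre>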
<p>Next, edit the file:</p>
<pre contenteditable="false"> /var/snap/microk8s/current/args/kube-proxy</pre>
<p>Change the line:</p>
<pre contenteditable="false">Change "10.1.0.0/16" to "10.100.0.0/16"</pre>
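<p>In the kube-proxy args file, the subnet is passed via the --cluster-cidr flag, so after the edit the line should read something like:</p>
<pre contenteditable="false">--cluster-cidr=10.100.0.0/16</pre>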
<p>Apply your YAML file with the command:</p>
<pre contenteditable="false">microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml</pre>
<p>Next, restart your Microk8s service on all nodes:</p>
<pre contenteditable="false">sudo snap restart microk8s</pre>
<p>Note: if your cluster has already created the default ippools with the original subnet, you will need to delete this configuration. Why? If you don't, the cluster will continue to serve out IPs from this default pool, and you won't see your new configuration take effect on newly spun-up pods. Here are the additional steps I took to get that done. Note that the commands copied and pasted straight from the official documentation didn't work for me in this step, so here are the correct commands:</p>
<pre contenteditable="false">microk8s kubectl get ippools.crd.projectcalico.org

microk8s kubectl delete ippool default-ipv4-ippool

microk8s kubectl apply -f /var/snap/microk8s/current/args/cni-network/cni.yaml</pre>
<p>After this, restart your deployments, daemon sets, etc. The new pods that are created should be provisioned with the new IP addresses. Hopefully, these steps will help anyone struggling with this.</p>
						                            <category domain="https://www.virtualizationhowto.com/community/kubernetes-and-containers/">Kubernetes and Containers</category>                        <dc:creator>Brandon Lee</dc:creator>
                        <guid isPermaLink="true">https://www.virtualizationhowto.com/community/kubernetes-and-containers/change-kubernetes-cluster-cidr-subnet-for-pods-in-microk8s/</guid>
                    </item>
							        </channel>
        </rss>
		