Kube-Vip For MariaDB HA

12 Posts
3 Users
0 Likes
578 Views
(@4evercoding)
Posts: 6
Active Member
Topic starter
 

I am trying to set up a MariaDB HA environment in my Kubernetes (K8s) cluster. (So far I have two Galera clusters and two MaxScale instances running.) I am trying to build the architecture in this diagram:

[architecture diagram]

 

I'm struggling to set up a VIP using kube-vip for the MaxScale load balancers.

I saw your YouTube video about easy K3s Kubernetes tools with K3sup and Kube-VIP, but it left me with many questions. Do I need K3s? Can I not leverage my existing K8s cluster? From the kube-vip docs it sounds like I can install it as a static pod, but I am unsure whether the video does it as a static pod or as a DaemonSet (and which one I need in my setup above).

 
Posted : 03/01/2024 12:01 pm
(@4evercoding)
Posts: 6
Active Member
Topic starter
 

Also, I wanted to note that, after looking back at my environment, I already have MetalLB running in ARP mode. Could I instead create a virtual IP with MetalLB, or should I really consider kube-vip for my HA?

 
Posted : 03/01/2024 12:39 pm
Brandon Lee
(@brandon-lee)
Posts: 409
Member Admin
 

@4evercoding welcome to the forums! Glad you reached out and joined up. I will do some research and see if I can find a definitive answer for K3sup on vanilla Kubernetes. I know it was written for K3s, but it may work with K8s as well. Also, MetalLB is really meant for handing out IP addresses, a bit like DHCP (not technically, but it behaves that way), for your self-hosted Kubernetes services. However, that IP address should follow the service no matter which host it lives on.

Also, with Kube-VIP, only one host "owns" the virtual IP address at any one time, so you would be funneling your traffic through the one node that currently holds the VIP.

@t3hbeowulf I am curious about your insights here as well. Are you hosting highly available DBs in your K8s production clusters, and if so, what architecture have you chosen for it?

 

 
Posted : 03/01/2024 1:36 pm
Brandon Lee
(@brandon-lee)
Posts: 409
Member Admin
 

@4evercoding It looks like Kube-Vip is not a K3s-only solution. I had used it with k3sup, which is K3s-specific, but Kube-Vip should work with vanilla Kubernetes. Also, Kube-Vip functionality has been extended to include not only control plane load balancing, but also load balancing for any services of type LoadBalancer.
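For reference, when I test this on vanilla K8s I will likely start from something like the DaemonSet below. This is a minimal, untested sketch based on my reading of the kube-vip docs (ARP mode, services only). The image tag and the ens192 interface name are assumptions for your environment, and kube-vip also needs its ServiceAccount/RBAC manifest from the docs applied first.

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-vip-ds
  namespace: kube-system
spec:
  selector:
    matchLabels:
      name: kube-vip-ds
  template:
    metadata:
      labels:
        name: kube-vip-ds
    spec:
      serviceAccountName: kube-vip   # created by the kube-vip RBAC manifest
      hostNetwork: true
      containers:
        - name: kube-vip
          image: ghcr.io/kube-vip/kube-vip:v0.6.4   # pin to a current release
          args: ["manager"]
          env:
            - name: vip_arp          # ARP (layer 2) mode
              value: "true"
            - name: vip_interface    # assumption: host NIC name
              value: ens192
            - name: svc_enable       # watch Services of type LoadBalancer
              value: "true"
            - name: cp_enable        # no control plane VIP in this sketch
              value: "false"
          securityContext:
            capabilities:
              add: ["NET_ADMIN", "NET_RAW"]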

I am going to get this back in the lab on a vanilla K8s cluster and do some experimenting. You mentioned you were struggling with Kube-Vip. What type of issues did you run into?

 
Posted : 03/01/2024 11:00 pm
(@4evercoding)
Posts: 6
Active Member
Topic starter
 

PS: Please excuse my lack of DevOps knowledge (I'm quite new to all this).

I'm struggling to understand why we would use kube-vip over MetalLB. MetalLB seems like a very similar solution.

Also, MetalLB is really meant for handing out IP addresses, a bit like DHCP (not technically, but it behaves that way), for your self-hosted Kubernetes services. However, that IP address should follow the service no matter which host it lives on.

Are you suggesting I should not use MetalLB? I don't intend to, but without a successful kube-vip setup I'm beginning to struggle with how the components fit together. I'm using MaxScale (instead of the usual HAProxy, since it provides optimizations for the MariaDB setup we use).

Also, with Kube-VIP, only one host "owns" the virtual IP address at any one time, so you would be funneling your traffic through the one node that currently holds the VIP.

Agreed. That is how I interpreted it: one host holding the virtual IP, with a backup, for automatic failover.

Kube-Vip should work with vanilla Kubernetes. Also, Kube-Vip functionality has been extended to include not only control plane load balancing, but also load balancing for any services of type LoadBalancer.

Can you clarify whether I have this correct? The control plane is a set of nodes separate from the Docker nodes in my K8s cluster. When I type kubectl get nodes I see the Docker nodes in my setup. These Docker nodes run the containers, whereas the control plane nodes are a separate entity for the VIP and load balancing. One control plane cluster = 3 control plane nodes = 1 VIP? In the future, if we expand services, do I need to deploy another control plane node to enable a second VIP?

What type of issues did you run into?

I don't know how to deploy Kube-Vip to my K8s cluster. I know we have six Docker K8s VM nodes deployed. So it sounds like I need to deploy three Ubuntu nodes and run k3sup on each, as suggested in your video?

 

For ease of reference, I have compiled my notes on the environment I am designing, documented and made public on my GitHub: https://github.com/advra/mariadb-high-availibility . Feel free to provide feedback. The design borrows ideas from older articles I've come across, with high availability in an active/active configuration in mind.

 

 
Posted : 04/01/2024 2:54 pm
Brandon Lee
(@brandon-lee)
Posts: 409
Member Admin
 

@4evercoding

PS: Please excuse my lack of DevOps knowledge (I'm quite new to all this).

I'm struggling to understand why we would use kube-vip over MetalLB. MetalLB seems like a very similar solution.

Are you suggesting I should not use MetalLB? I don't intend to, but without a successful kube-vip setup I'm beginning to struggle with how the components fit together. I'm using MaxScale (instead of the usual HAProxy, since it provides optimizations for the MariaDB setup we use).

Kube-Vip overlaps in functionality with what MetalLB provides, but not the other way around. MetalLB doesn't provide a VIP for the control plane. However, Kube-VIP can provide a VIP for the control plane AND also provide IP addresses for Kubernetes services. 
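For the control plane side, the kube-vip docs have you generate a static pod manifest and drop it into /etc/kubernetes/manifests/ on each control plane node. Below is a rough, untested sketch of what that generated file looks like; the VIP address, interface name, and image tag are placeholders, not values for your cluster.

apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
    - name: kube-vip
      image: ghcr.io/kube-vip/kube-vip:v0.6.4
      args: ["manager"]
      env:
        - name: vip_arp              # ARP (layer 2) mode
          value: "true"
        - name: port                 # API server port fronted by the VIP
          value: "6443"
        - name: vip_interface        # placeholder NIC name
          value: ens192
        - name: cp_enable            # provide the control plane VIP
          value: "true"
        - name: vip_leaderelection   # one node holds the VIP at a time
          value: "true"
        - name: address              # placeholder VIP
          value: 192.168.1.100
      securityContext:
        capabilities:
          add: ["NET_ADMIN", "NET_RAW"]
      volumeMounts:
        - name: kubeconfig
          mountPath: /etc/kubernetes/admin.conf
  volumes:
    - name: kubeconfig
      hostPath:
        path: /etc/kubernetes/admin.conf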

Can you clarify whether I have this correct? The control plane is a set of nodes separate from the Docker nodes in my K8s cluster. When I type kubectl get nodes I see the Docker nodes in my setup. These Docker nodes run the containers, whereas the control plane nodes are a separate entity for the VIP and load balancing. One control plane cluster = 3 control plane nodes = 1 VIP? In the future, if we expand services, do I need to deploy another control plane node to enable a second VIP?

You can have a separate control plane cluster from the workload cluster. However, this isn't necessarily a requirement. You CAN run workloads on control plane nodes. This is typically what most people do with small clusters. However, for larger production deployments, you will see the control plane nodes dedicated to only that role.

I don't know how to deploy Kube-Vip to my K8s cluster. I know we have six Docker K8s VM nodes deployed. So it sounds like I need to deploy three Ubuntu nodes and run k3sup on each, as suggested in your video?

You shouldn't need to deploy a K3s cluster to deploy Kube-Vip; it should be possible with your current cluster. Do you have more details on the issues you have seen with Kube-Vip and any errors you are running into?

For ease of reference, I have compiled my notes on the environment I am designing, documented and made public on my GitHub: https://github.com/advra/mariadb-high-availibility . Feel free to provide feedback. The design borrows ideas from older articles I've come across, with high availability in an active/active configuration in mind.

 This is a great idea! Also, let me know if the above answers make sense.

 

 
Posted : 05/01/2024 12:17 pm
(@t3hbeowulf)
Posts: 23
Eminent Member
 

Posted by: @brandon-lee

@t3hbeowulf I am curious about your insights here as well. Are you hosting highly available DBs in your K8s production clusters, and if so, what architecture have you chosen for it?

I wish I had more information on this issue. Most systems in our Kubernetes environments connect to externally managed databases, either through AWS RDS or a plethora of on-prem SQL servers. (There is an entirely separate team dedicated to managing all of the DB infrastructure, and our DevOps group rarely gets to peek under the hood.) The DB infrastructure for the on-prem SQL instances is built such that most applications can only access read-only replicas of the data, and ETL processes are responsible for moving data back and forth between the inner layers and the other replicas. "HA" is essentially handled by having many read-only replicas at the "edges".

 

 
Posted : 06/01/2024 8:57 pm
(@4evercoding)
Posts: 6
Active Member
Topic starter
 

@brandon-lee

You can have a separate control plane cluster from the workload cluster. However, this isn't necessarily a requirement. You CAN run workloads on control plane nodes. This is typically what most people do with small clusters. However, for larger production deployments, you will see the control plane nodes dedicated to only that role.

Sorry, still figuring this out. So can I designate three of the first four Docker nodes (sb1ldocker01-04) for the kube-vip control plane?

$ k get nodes
NAME                       STATUS   ROLES    AGE    VERSION
sb1ldocker01.xyz.net   Ready    <none>   599d   v1.21.8-mirantis-1
sb1ldocker02.xyz.net   Ready    <none>   599d   v1.21.8-mirantis-1
sb1ldocker03.xyz.net   Ready    <none>   599d   v1.21.8-mirantis-1
sb1ldocker04.xyz.net   Ready    <none>   314d   v1.21.8-mirantis-1
sb1ldtr01.xyz.net      Ready    <none>   587d   v1.21.8-mirantis-1
sb1ldtr02.xyz.net      Ready    <none>   587d   v1.21.8-mirantis-1
sb1ldtr03.xyz.net      Ready    <none>   587d   v1.21.8-mirantis-1
sb1lucp01.xyz.net      Ready    master   700d   v1.21.8-mirantis-1
sb1lucp02.xyz.net      Ready    master   700d   v1.21.8-mirantis-1
sb1lucp03.xyz.net      Ready    master   700d   v1.21.8-mirantis-1

 

 
Posted : 08/01/2024 11:36 am
Brandon Lee
(@brandon-lee)
Posts: 409
Member Admin
 

@4evercoding Did you make progress here? You should be able to carve the nodes you want to use out of your cluster for this purpose.

 
Posted : 18/01/2024 9:47 am
(@4evercoding)
Posts: 6
Active Member
Topic starter
 

Tagging back to this. I wasn't able to work on it until yesterday. The biggest confusion before was the deployment method, but I now understand the Kube-VIP configuration modes. You can configure Kube-VIP either as a static pod or as a DaemonSet, and you can target master or worker nodes, for a total of four base configurations.

For HA we'd want the DaemonSet. Since MaxScale sits at the service layer, we only need the DaemonSet on the worker nodes. I have now successfully deployed a VIP in my environment pointing to MaxScale, through which I can access the Galera cluster across multiple nodes.

Great!
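For anyone following along, scoping the DaemonSet to the workers is just a matter of scheduling. Roughly like this (an excerpt only; the node-role.kubernetes.io/worker label is an assumption, so use whatever labels your nodes actually carry):

# Excerpt of the kube-vip DaemonSet spec
spec:
  template:
    spec:
      nodeSelector:
        node-role.kubernetes.io/worker: "true"   # assumed worker label
      # No toleration for the control-plane taint is added, so the pods
      # also stay off tainted master nodes by default.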

Now I do have a question about MaxScale. I currently have one instance deployed with a custom StatefulSet Helm chart I created. Is a StatefulSet appropriate, or should it be a single Deployment? To me, the StatefulSet made the most sense in a load-balanced configuration, since all the replicas would share the same configuration and point to the same VIP. That also makes expanding or shrinking the number of MaxScale instances straightforward. But I have doubts about how the VIP manages connections (or perhaps that is something kube-vip handles in the background by routing traffic to the correct MaxScale instance).

 
Posted : 03/02/2024 10:49 am
(@4evercoding)
Posts: 6
Active Member
Topic starter
 

Update:

So I think I answered my own question here. In a multi-instance MaxScale configuration it is appropriate to use a StatefulSet, since all the replicas share the same maxscale.cnf file. A StatefulSet allows the replica count to be increased or decreased, which is perfect for this purpose. The replicas all pin to one Service, detailed below:

MaxScale cluster Service (service.yaml):

Note: this Service is for MetalLB, which will pin the VIP. I believe the Kube-VIP version will be slightly different (something I'll look into once I get kube-vip into our environment).

apiVersion: v1
kind: Service
metadata:
  name: {{ include "maxscale-helm.fullname" . }}
  labels:
    {{- include "maxscale-helm.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - name: mariadb
      port: {{ .Values.service.mariadbProtocolPort }}
      targetPort: 3306
      protocol: TCP
    - name: maxscale-webgui
      port: {{ .Values.service.maxscaleWebGuiPort }}
      targetPort: 8989
      protocol: TCP
  selector:
    app: maxscale-cluster # label shared by all MaxScale replicas in the StatefulSet
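For context, the matching StatefulSet template looks roughly like this (a simplified sketch rather than my actual chart; the image tag, replica count, and ConfigMap name are placeholders):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: maxscale
spec:
  serviceName: maxscale
  replicas: 2
  selector:
    matchLabels:
      app: maxscale-cluster          # must match the Service selector above
  template:
    metadata:
      labels:
        app: maxscale-cluster
    spec:
      containers:
        - name: maxscale
          image: mariadb/maxscale:23.08   # placeholder tag
          ports:
            - containerPort: 3306         # MariaDB protocol listener
            - containerPort: 8989         # MaxScale web GUI / REST API
          volumeMounts:
            - name: maxscale-config
              mountPath: /etc/maxscale.cnf
              subPath: maxscale.cnf
      volumes:
        - name: maxscale-config
          configMap:
            name: maxscale-config         # shared maxscale.cnf for all replicas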

Here's my current deployment. My org cannot deploy kube-vip at the moment, so for now I am using MetalLB to host the VIP. MetalLB works well for failover, but I suspect it does not load-balance the service in the same way kube-vip would.
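For reference, the MetalLB side of this is roughly the following (assuming a newer, CRD-based MetalLB release, v0.13 or later; older versions are configured through a ConfigMap instead, and the address here is just a placeholder):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: maxscale-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.240   # single address reserved as the MaxScale VIP
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: maxscale-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - maxscale-pool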

MaxscaleClusterTopologyDiagram.drawio

Note: I'm using custom Helm charts I've made, but hopefully the above diagram properly depicts what's going on.

What are your thoughts? Are my assumptions correct?

 
Posted : 05/02/2024 12:15 pm
Brandon Lee
(@brandon-lee)
Posts: 409
Member Admin
 

@4evercoding It is great that you are sharing your experiences... it helps all of us learn from what you are seeing. I believe all of your assumptions are correct here. Using a StatefulSet, you should have the ability to scale the number of MaxScale instances up or down based on demand, while maintaining a consistent configuration across the replicas.

Your use of MetalLB as a stopgap until you can deploy Kube-VIP should work well. Both MetalLB and Kube-VIP can provide the VIP for high availability, but, as you know, they operate slightly differently:

  • MetalLB acts as a LoadBalancer in environments where external load balancers are not natively available (like bare metal clusters). It can assign external IP addresses (VIPs) to services and ensure that traffic reaches the correct node in the cluster.
  • Kube-VIP, on the other hand, can offer a similar LoadBalancer functionality but is also capable of providing virtual IP management directly on top of Kubernetes, often used for Kubernetes control plane high availability as I noted in the video you referenced.
 
Posted : 06/02/2024 9:14 am