Master Node Components: #
- API Server:
- Acts as the front-end for the Kubernetes control plane.
- Exposes the Kubernetes API.
- Controller Manager:
- Manages controllers that regulate the state of the system.
- Examples include Node Controller, Replication Controller, and Endpoints Controller.
- Scheduler:
- Assigns nodes to newly created pods based on resource requirements and other constraints.
- etcd:
- A distributed key-value store that stores the configuration data of the cluster.
- Represents the overall state of the cluster.
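On a self-managed cluster these control-plane components typically run as pods in the kube-system namespace; on managed offerings such as AKS the control plane is hosted by the provider and its pods are not visible. A quick way to look at what is running there:

kubectl get pods -n kube-system   # control-plane and add-on pods (self-managed clusters)
kubectl cluster-info              # addresses of the API server and core cluster services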
Worker Node Components: #
- Kubelet:
- Ensures that containers are running in a Pod on the node.
- Communicates with the API server to receive pod specifications and report node and pod status.
- Kube Proxy:
- Maintains network rules on nodes.
- Enables communication between different pods and services within the cluster.
- Container Runtime:
- The software responsible for running containers, such as Docker or containerd.
- Manages the containers on the worker node.
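The kubelet version and the container runtime in use can be checked per node with standard kubectl output (the column names come from kubectl itself):

kubectl get nodes -o wide           # VERSION shows the kubelet version; CONTAINER-RUNTIME shows the runtime
kubectl describe node <node-name>   # detailed capacity, conditions and runtime information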
Shared Components: #
- Pod:
- The smallest and simplest unit in the Kubernetes object model.
- Represents a single instance of a running process in a cluster.
- Service:
- An abstraction that defines a logical set of pods and a policy by which to access them.
- Enables network communication between different sets of pods.
- Volume:
- A directory accessible to all containers in a pod.
- Allows data to survive container restarts within the pod; persistent volume types can retain data beyond the pod's lifetime.
- Namespace:
- A way to divide cluster resources between multiple users or teams.
- Provides isolation and a scope for names.
- Deployment:
- A higher-level abstraction that allows declarative updates to applications.
- Describes the desired state for a set of pods and controls their lifecycle.
- ReplicaSet:
- Ensures that a specified number of replicas of a pod are running at all times.
- Often used by Deployments for managing the pod lifecycle.
- ConfigMap and Secret:
- ConfigMaps allow you to decouple configuration artifacts from container images.
- Secrets are used to store sensitive information such as passwords and API keys.
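A minimal sketch of these last two objects, with purely illustrative names and values (app-config, app-secret, and the LOG_LEVEL and API_KEY keys are assumptions, not taken from the text above):

apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config            # illustrative name
data:
  LOG_LEVEL: "info"           # plain-text configuration, kept out of the container image
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret            # illustrative name
type: Opaque
stringData:
  API_KEY: "changeme"         # stored base64-encoded in etcd; not encrypted by default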
Kubernetes Workflow
When you deploy a Kubernetes Deployment, several actions take place in the background, involving various components of the Kubernetes architecture. Here’s a step-by-step overview of what happens, with a concrete example after the list:
- Definition File Submission:
- You create a YAML or JSON file that defines your Deployment, specifying details such as the container image, desired replicas, labels, and other configuration options.
- kubectl Apply:
- You use the kubectl apply command to submit the Deployment definition to the Kubernetes API server.
- API Server:
- The API server receives the Deployment definition and stores it in the etcd datastore, which is the central configuration store for the Kubernetes cluster.
- Controller Manager:
- The Deployment controller (part of the Controller Manager) notices the new Deployment object via the API server.
- It creates a ReplicaSet based on the specifications in the Deployment; the ReplicaSet controller in turn creates the required pod objects.
- Scheduler:
- The Scheduler takes over to schedule the desired number of pods onto available nodes in the cluster.
- The selected nodes will run the pod replicas defined in the Deployment.
- etcd:
- The etcd datastore is updated with information about the newly created ReplicaSet and the associated pods.
- Kubelet:
- On each selected node, the Kubelet detects the pods that have been scheduled to it.
- The Kubelet communicates with the container runtime (e.g., Docker) to pull the specified container image and start the containers.
- Container Runtime:
- The container runtime pulls the specified container image and starts the containers within the pods.
- Kube Proxy:
- Kube Proxy ensures network connectivity between pods and services, configuring the necessary network rules.
- Controller Manager (Scaling):
- The ReplicaSet Controller continuously monitors the state of pods and ensures that the desired number of replicas is maintained.
- If there are any discrepancies between the desired state and the actual state, the ReplicaSet Controller takes corrective actions, such as creating or terminating pods.
- Service (Optional):
- If the Deployment is associated with a Service, the Service is created or updated to include the new pods. The Service provides a stable endpoint for accessing the pods.
Throughout this process, the various components work together to ensure that the application defined in the Deployment is deployed and maintained according to the specified configuration. The coordination between the API server, etcd, controllers, scheduler, and other components enables the automation of deployment and scaling operations in a Kubernetes cluster.
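As a concrete illustration of this workflow, here is a minimal, purely illustrative Deployment (the name web, the nginx image and the file name web-deployment.yaml are assumptions, not taken from the text) together with the commands used to watch each step unfold:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # illustrative name
spec:
  replicas: 3                 # desired number of pod replicas
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25

kubectl apply -f web-deployment.yaml    # submit the definition to the API server
kubectl get replicaset -l app=web       # ReplicaSet created by the Deployment controller
kubectl get pods -l app=web -o wide     # pods scheduled onto nodes and started by the kubelet
kubectl rollout status deployment/web   # wait until the desired replica count is available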
Helm: Deploy SonarQube on AKS
helm repo add sonarqube https://SonarSource.github.io/helm-chart-sonarqube
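After adding the repository, refresh the local chart index so the requested chart version is available:

helm repo update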
helm install my-sonarqube sonarqube/sonarqube --version 10.3.0+2009
kubectl get pods
NAME READY STATUS RESTARTS AGE
my-sonarqube-postgresql-0 1/1 Running 0 3m52s
my-sonarqube-sonarqube-0 0/1 Pending 0 3m52s
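The SonarQube pod above is still Pending. If it stays that way, the usual first step is to inspect its events and the chart's volume claims (the pod name is taken from the output above):

kubectl describe pod my-sonarqube-sonarqube-0   # the Events section shows why scheduling is blocked (e.g. insufficient CPU/memory, unbound PersistentVolumeClaim)
kubectl get pvc                                 # check whether the chart's volume claims are Bound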
kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes ClusterIP 10.0.0.1 <none> 443/TCP 6m15s
my-sonarqube-postgresql ClusterIP 10.0.85.37 <none> 5432/TCP 4m15s
my-sonarqube-postgresql-headless ClusterIP None <none> 5432/TCP 4m15s
my-sonarqube-sonarqube ClusterIP 10.0.172.170 <none> 9000/TCP 4m15s
Note: the SonarQube service has no external IP because its type is ClusterIP. Patch it to type LoadBalancer to expose it externally:
kubectl patch service my-sonarqube-sonarqube -p '{"spec": {"type": "LoadBalancer"}}'
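After the patch, Azure provisions a public load balancer for the service; the EXTERNAL-IP column typically changes from <pending> to a routable address within a minute or two:

kubectl get svc my-sonarqube-sonarqube --watch   # wait for EXTERNAL-IP to be assigned
# SonarQube is then reachable at http://<EXTERNAL-IP>:9000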
List the installed Helm releases:
helm list
NAME NAMESPACE REVISION UPDATED STATUS CHART APP VERSION
my-sonarqube default 1 2023-11-20 18:21:42.526633812 +0000 UTC deployed sonarqube-10.3.0+2009 10.3.0
Upgrade
helm upgrade --install my-sonarqube sonarqube/sonarqube --version 10.3.0+2009
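helm upgrade --install is idempotent: it installs the release if it does not exist and upgrades it otherwise. Chart values can be overridden in the same command; whether a given key (such as service.type below) exists depends on the chart, so treat it as an assumption and verify it first:

helm show values sonarqube/sonarqube             # inspect the chart's configurable values
helm upgrade --install my-sonarqube sonarqube/sonarqube --version 10.3.0+2009 \
  --set service.type=LoadBalancer                # assumed chart value, not confirmed by the text
helm history my-sonarqube                        # revision history of the release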
Delete the Helm release:
helm uninstall my-sonarqube
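helm uninstall removes the release's Kubernetes objects, but PersistentVolumeClaims created by the chart's StatefulSets (such as the PostgreSQL data volume here) are typically left behind and must be removed separately if the data is no longer needed:

kubectl get pvc                   # list leftover claims
kubectl delete pvc <claim-name>   # delete them explicitly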