Summary
In this article, you’ll learn the anatomy of Kubernetes Pods, walk through a NodePort Service example, and see how to run and connect to containers in Pods.
Introduction to Pods
Kubernetes revolves around pods, so you have to know what they are. A pod is the runtime environment in which we deploy applications, and it is the atomic unit of scheduling in Kubernetes. One or more containers deployed together on a single host are called a pod. We will see how pods are deployed and scaled inside a Kubernetes cluster. I will then explain Kubernetes Pods and the NodePort Service, including running containers in Pods and connecting to running containers in Kubernetes.
Cluster Setup
The Kubernetes cluster setup that I am using for this demo consists of one master node and three worker node VMs. All four VMs run CentOS on Google Cloud.
We will connect to these VMs using Cloud Shell. Let’s check the status of the nodes by running the kubectl get nodes command.
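The command and a representative sample of its output look like this (the node names, ages, and versions are illustrative, not taken from the original cluster):

```shell
$ kubectl get nodes
NAME       STATUS   ROLES    AGE   VERSION
master     Ready    master   12d   v1.18.6
worker-1   Ready    <none>   12d   v1.18.6
worker-2   Ready    <none>   12d   v1.18.6
worker-3   Ready    <none>   12d   v1.18.6
```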
As you can see from the command output, we have 1 master and 3 worker nodes. All 4 nodes are ready and running successfully.
Now we will review a pod manifest file with an nginx container. Go to the working directory where we keep the manifest file; I have created one manifest file for this demo.
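The original file is not reproduced here, so the following nginx-pod.yaml is a minimal reconstruction for illustration; the label, container name, and image tag are assumptions, but the structure matches what the demo uses:

```yaml
# nginx-pod.yaml -- minimal pod manifest (reconstructed for illustration;
# the label, container name, and image tag are assumptions)
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod
  labels:
    app: nginx
spec:
  containers:
  - name: nginx-container   # container name inside the pod
    image: nginx             # official nginx image from Docker Hub
    ports:
    - containerPort: 80      # nginx listens on port 80
```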
Like most Kubernetes object manifests, the above pod manifest consists of four required top-level fields: apiVersion, kind, metadata, and spec. So we should have all of them in our YAML config file.
Pod Config Creation:
We will deploy the pod using the kubectl create command. Let’s create the nginx pod. First, check whether any pods are already running inside the Kubernetes cluster.
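With nothing deployed yet, the check looks like this (the exact wording of the message varies slightly by kubectl version):

```shell
$ kubectl get pods
No resources found in default namespace.
```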
As you can see, there are no pods running. Now we will create the pod using the kubectl create command followed by the name of the manifest file mentioned above.
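The create step looks like this:

```shell
$ kubectl create -f nginx-pod.yaml
pod/nginx-pod created
```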
Here nginx-pod.yaml is our manifest file, and our pod was created after issuing the above command. Now let’s check again whether the pod was created by issuing the get pods command:
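A sample of the verification output (the age is illustrative):

```shell
$ kubectl get pods
NAME        READY   STATUS    RESTARTS   AGE
nginx-pod   1/1     Running   0          40s
```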
Here you can see that the nginx pod is running successfully. Now, since there are 4 nodes inside the Kubernetes cluster, how can we know which node this nginx pod is running on? We can get that info by running the get pods command with the wide output option.
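A sample of the wide output (the pod IP and age are illustrative; the node name matches the demo, where the pod was scheduled on worker 3):

```shell
$ kubectl get pods -o wide
NAME        READY   STATUS    RESTARTS   AGE   IP          NODE       NOMINATED NODE   READINESS GATES
nginx-pod   1/1     Running   0          2m    10.40.0.1   worker-3   <none>           <none>
```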
In the output above, you can see the IP address of the pod and that it is running on worker node 3.
Suppose we have lost or deleted our manifest file, but the nginx pod is still running inside our Kubernetes cluster. We can then print the running pod’s details in YAML format.
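The -o yaml output option prints the live object’s full configuration; redirecting it to a file (the file name here is my choice) recovers a usable manifest:

```shell
$ kubectl get pod nginx-pod -o yaml
# or save it so the pod can be recreated later:
$ kubectl get pod nginx-pod -o yaml > nginx-pod-recovered.yaml
```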
The output above shows the nginx pod configuration in YAML format. This is useful when we want to see every detail of a running object.
Now, to see the complete details of the pod, issue the command kubectl describe pod nginx-pod | more. You can then see all the details about the pod:
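An abbreviated sketch of what the describe output contains (the IP addresses are illustrative, and most sections are trimmed here):

```shell
$ kubectl describe pod nginx-pod | more
Name:         nginx-pod
Namespace:    default
Node:         worker-3/10.142.0.5
Status:       Running
IP:           10.40.0.1
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  5m    default-scheduler  Successfully assigned default/nginx-pod to worker-3
```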
You can see above the detailed info about the pod: it is running fine on the worker node, along with its IP address and event details. We can also expose this pod using a NodePort Service. We will discuss Kubernetes Service types such as NodePort, LoadBalancer, and ClusterIP in more detail in our other Kubernetes Services blog.
Expose the pod using a NodePort Service:
Once the app is exposed to the outside world on the internet, we will access a sample HTML web page, as well as the default nginx welcome page, using a node IP and the node port from any web browser.
Node IP: this is the external IP address of the master node or any worker node inside the Kubernetes cluster. Besides accessing the web page from the internet, we will also try to access the sample HTML page internally from the worker nodes using the pod IP.
To create a test.html page inside the nginx-pod web root directory, we first need to get inside the pod.
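A shell inside the pod can be opened with kubectl exec; note how the prompt’s hostname matches the pod name:

```shell
$ kubectl exec -it nginx-pod -- /bin/bash
root@nginx-pod:/#
```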
Here the hostname is the pod name I have given, i.e. nginx-pod. Now let’s create a sample test.html page inside the nginx root directory.
test.html:
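The original file is not reproduced here, so this is a stand-in; the heading text is an assumption, and any simple HTML will do:

```html
<!-- test.html: stand-in sample page; the heading text is an assumption -->
<html>
  <body>
    <h1>Welcome to the Kubernetes NodePort Service demo!</h1>
  </body>
</html>
```

Inside the pod it can be saved to nginx’s default web root, /usr/share/nginx/html/test.html.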
This is a very simple HTML file that just prints a header. Now let’s expose the nginx-pod through a NodePort Service by running the expose command.
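The exact command is not shown in the original; a kubectl expose invocation along these lines produces a NodePort Service (by default, expose names the service after the pod):

```shell
$ kubectl expose pod nginx-pod --type=NodePort --port=80
service/nginx-pod exposed
```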
So now we have successfully exposed the nginx pod to the outside world using a NodePort Service. Next, we need to know the node port number on which this nginx web page is exposed. We can get that info by running the describe command.
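An abbreviated sketch of the describe output, assuming the service was named after the pod (the cluster IP and endpoint values are illustrative; the node port 30758 is the one from this demo):

```shell
$ kubectl describe svc nginx-pod
Name:                     nginx-pod
Namespace:                default
Type:                     NodePort
IP:                       10.104.32.17
Port:                     <unset>  80/TCP
TargetPort:               80/TCP
NodePort:                 <unset>  30758/TCP
Endpoints:                10.40.0.1:80
```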
The svc argument takes the name of the NodePort Service for the nginx pod. You can see above that the node port is 30758. So, to access the web page from a web browser, we need a node IP and the node port. The node IP is the external IP address of the Kubernetes master or any worker node inside the cluster.
As we have already checked, the nginx pod is running on worker node 3, so let’s take worker node 3’s IP.
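The external IPs are visible in the wide node listing (output trimmed; the internal IP is illustrative, the external IP is the one used in this demo):

```shell
$ kubectl get nodes -o wide
NAME       STATUS   ROLES    INTERNAL-IP   EXTERNAL-IP
worker-3   Ready    <none>   10.142.0.5    35.196.57.193
```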
Here the external IP is 35.196.57.193, and our node port is 30758. So we have everything we need to access the nginx page in a browser.
Test the node IP and port:
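From any machine with internet access, both pages can be fetched in a browser or with curl by combining the node’s external IP and the node port:

```shell
# default nginx welcome page
$ curl http://35.196.57.193:30758/
# the sample page we created inside the pod
$ curl http://35.196.57.193:30758/test.html
```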
Pod Delete Operation:
In this demo, I created two objects: a pod and a NodePort Service. Now let’s delete both with the delete command followed by the pod name and the service name, as shown below:
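The deletion looks like this (assuming, as above, that the service took the pod’s name):

```shell
$ kubectl delete pod nginx-pod
pod "nginx-pod" deleted
$ kubectl delete svc nginx-pod
service "nginx-pod" deleted
```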
Now let’s verify whether they were deleted successfully with the commands below:
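After deletion, only the default kubernetes service should remain (cluster IP and age are illustrative):

```shell
$ kubectl get pods
No resources found in default namespace.
$ kubectl get svc
NAME         TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
kubernetes   ClusterIP   10.96.0.1    <none>        443/TCP   12d
```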
As you can see above, both objects were deleted successfully.
Kubernetes Services Anatomy:
Pods are ephemeral: when a pod dies and is recreated, it comes back with a new IP address, so other applications cannot reliably reach it by pod IP, and there is no built-in load balancing across pod replicas. We can address these problems using Kubernetes Services.
Kubernetes Service
A Service is a way of grouping pods that are running on a cluster. Services are inexpensive objects, and we can have as many of them as we need within our cluster. Services provide important features that are standardized across the cluster, such as load balancing, service discovery between apps, and support for zero-downtime application deployments.
One of the Service types that solves these problems is the NodePort Service.
NodePort Service real-time example:
Here I have created two YAML manifest files inside the working directory.
nginx-deploy.yaml:
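The original manifest is not reproduced, so this is a reconstruction for illustration; the replica count and label names are assumptions, though the pod label app: nginx-app matches the name used in this demo:

```yaml
# nginx-deploy.yaml -- reconstructed for illustration (replica count and
# label names are assumptions)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deploy
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx-app
  template:
    metadata:
      labels:
        app: nginx-app
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80   # the nginx-app pods serve on port 80
```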
This nginx-app pod runs on port 80.
nginx-svc.yaml:
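Again reconstructed for illustration; the selector must match the pod labels in nginx-deploy.yaml, and the fixed nodePort value is an assumption (if omitted, Kubernetes assigns one from the 30000-32767 range):

```yaml
# nginx-svc.yaml -- reconstructed for illustration; the nodePort value
# is an assumption
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc
spec:
  type: NodePort
  selector:
    app: nginx-app       # must match the Deployment's pod labels
  ports:
  - port: 80             # service port inside the cluster
    targetPort: 80       # container port on the pods
    nodePort: 30080      # port opened on every node
```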
Deployment
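Both manifests can then be applied, which is presumably the step this section performs (the pod name suffixes are generated by the Deployment, so yours will differ):

```shell
$ kubectl create -f nginx-deploy.yaml
deployment.apps/nginx-deploy created
$ kubectl create -f nginx-svc.yaml
service/nginx-svc created
$ kubectl get pods
NAME                            READY   STATUS    RESTARTS   AGE
nginx-deploy-5c689d88bb-7xk2p   1/1     Running   0          30s
nginx-deploy-5c689d88bb-9qv4m   1/1     Running   0          30s
nginx-deploy-5c689d88bb-tr8sn   1/1     Running   0          30s
```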
Let’s get inside one of the pods with the command below:
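Since Deployment pod names are generated, list the pods first and substitute one of your own pod names into the exec command (the name below is a placeholder):

```shell
$ kubectl get pods
$ kubectl exec -it nginx-deploy-5c689d88bb-7xk2p -- /bin/bash
```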
test.html:
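As before, the original file is not reproduced, so this is a stand-in with an assumed heading, saved to nginx’s default web root inside the pod (/usr/share/nginx/html/test.html):

```html
<!-- test.html: stand-in sample page; the heading text is an assumption -->
<h1>Hello from the nginx Deployment behind a NodePort Service!</h1>
```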
Conclusion
Here, I have explained the most important features of the NodePort Service. I have also demonstrated running containers in Pods and connecting to running containers in Kubernetes.