Workloads & Scheduling 15%
1. Understand deployments and how to perform rolling update and rollbacks
2. Use ConfigMaps and Secrets to configure applications
3. Know how to scale applications
4. Understand the primitives used to create robust, self-healing, application deployments
5. Understand how resource limits can affect Pod scheduling
6. Awareness of manifest management and common templating tools
1. Understand deployments and how to perform rolling update and rollbacks
Question:
Create a new deployment called web-prod-268 with image nginx:1.16 and 1 replica. Next, upgrade the deployment to version 1.17 using a rolling update. Make sure that the version upgrade is recorded in the resource annotation.
Use context: kubectl config use-context k8s-c1-H
Solution: First, we need to create a deployment “web-prod-268” with image “nginx:1.16” and 1 replica (kubectl create deployment defaults to 1 replica).
kubectl create deployment web-prod-268 --image=nginx:1.16
We can check the rollout history of this deployment.
kubectl rollout history deployment web-prod-268
In the question, we are asked to upgrade the deployment to the new image tag “1.17”. For the upgrade we can use the “set image” sub-command, and to record the upgrade we can add the “--record” option.
The syntax of the command is: kubectl set image deployment deployment_name container_name=new_image_name --record
How do we find the container name? We can inspect the deployment YAML:
[root@master1 ~]# kubectl get deployments.apps web-prod-268 -o yaml | grep -A 10 container
containers:
- image: nginx:1.16
imagePullPolicy: IfNotPresent
name: nginx   # <-- this is the container name
resources: {}
terminationMessagePath: /dev/termination-log
terminationMessagePolicy: File
dnsPolicy: ClusterFirst
restartPolicy: Always
schedulerName: default-scheduler
securityContext: {}
[root@master1 ~]#
kubectl set image deployment web-prod-268 nginx=nginx:1.17 --record
It’s time for post checks. Again, execute the rollout history command to see the output.
kubectl rollout history deployment web-prod-268
Also, we should check the image version.
kubectl describe deployments.apps web-prod-268 | grep -i "Image:"
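The objective also covers rollbacks, which the steps above do not show. A minimal sketch, assuming the deployment from this task (revision numbers depend on your cluster's history):

```shell
# Show recorded revisions of the deployment
kubectl rollout history deployment web-prod-268

# Undo the upgrade, returning to the previous revision (nginx:1.16 here)
kubectl rollout undo deployment web-prod-268

# Or target a specific revision explicitly
kubectl rollout undo deployment web-prod-268 --to-revision=1

# Wait for the rollback to finish
kubectl rollout status deployment web-prod-268
```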
Congratulations! You have completed the question successfully.
3. Know how to scale applications
Question: Scale the deployment web-app to 6 pods
kubectl config use-context k8s-c1-H
Solution:
First, we should check where this Deployment is running. In the question, no namespace is defined, so this deployment must be running in the default namespace under context k8s-c1-H.
kubectl config use-context k8s-c1-H
Check the deployment and identify how many replicas are defined, i.e. how many pods there are.
kubectl get deployments.apps web-app
For reference, my deployment has only 2 replicas.
Now, we can scale the existing deployment “web-app” to 6 replicas:
kubectl scale deployment web-app --replicas=6
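As a post check, the new replica count can be confirmed (a sketch; READY may take a moment to reach 6/6):

```shell
# The deployment should now report 6 desired replicas
kubectl get deployments.apps web-app

# Optionally wait until all new pods are ready
kubectl rollout status deployment web-app
```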
Congratulations! You have completed the question successfully.
5. Understand how resource limits can affect Pod scheduling
Question 1: Use context: kubectl config use-context k8s-c1-s
Schedule a pod as follows:
· Name: nginx-kusc00401
· Image: nginx
· Node selector: disktype=ssd
Solution: In this question, we are asked to use the nodeSelector parameter.
Use the correct context.
kubectl config use-context k8s-c1-s
Open the URL : https://kubernetes.io
Click on Documentation
Search for “nodeSelector”, which is what the question asks for.
Copy the below YAML into a file disktype.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: nginx-kusc00401   # updated
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  nodeSelector:
    disktype: ssd
kubectl apply -f disktype.yaml
Post checks / How to verify it? The pod must be in Running state.
kubectl get pods nginx-kusc00401 -o wide
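Instead of copying the manifest from the documentation, the skeleton can also be generated imperatively and then edited (a sketch; re-using the disktype.yaml file name from above):

```shell
# Generate a pod manifest without creating the pod
kubectl run nginx-kusc00401 --image=nginx --dry-run=client -o yaml > disktype.yaml

# Then edit disktype.yaml and add under spec:
#   nodeSelector:
#     disktype: ssd
kubectl apply -f disktype.yaml
```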
Question 2: Schedule Pod on Master Node
Create a Pod named prod-regis which can only be scheduled on a master node. Do not add new labels to any nodes.
Use context: kubectl config use-context k8s-c1-s
Solution:
Here we need to add a toleration for running on master nodes, but also a nodeSelector to make sure the Pod only runs on master nodes. If we only specify a toleration, the Pod can be scheduled on master or worker nodes.
Select the correct context.
kubectl config use-context k8s-c1-s
First, we need to check the taint on master node.
[root@master1 ~]# kubectl describe nodes master1.example.com | egrep -i taint
Taints: node-role.kubernetes.io/control-plane:NoSchedule
From the above output, we get to know that key=node-role.kubernetes.io/control-plane, no value is defined, and effect=NoSchedule.
After that, we need to create a pod “prod-regis” with this toleration. However, the Kubernetes scheduler could still place this pod on other nodes, so we also need to add a nodeSelector to the pod template.
Open the URL https://kubernetes.io, click on “Documentation”, search for “taint and toleration”, open the first result, and look for a pod template with a toleration. Modify the YAML file as per the question requirements.
cat <<EOF > selector.yaml
apiVersion: v1
kind: Pod
metadata:
  name: prod-regis   # modified
  labels:
    env: test
spec:
  containers:
  - name: nginx
    image: nginx
    imagePullPolicy: IfNotPresent
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"   # modified
    effect: "NoSchedule"
  nodeSelector:                                    # added
    node-role.kubernetes.io/control-plane: ""      # added
EOF
In the above YAML file, I have added the nodeSelector lines. The indentation of nodeSelector is the same as for tolerations. On the next line, add the master node label as the key, and the value must be blank.
kubectl apply -f selector.yaml
Post check: our newly created pod “prod-regis” must be running on the master node.
kubectl get pods prod-regis -o wide
If you want to learn the Taints and Tolerations topic from basic to advanced, you may watch my taint video.
What is taints and toleration in Kubernetes
6. Awareness of manifest management and common templating tools
Question: Use context: kubectl config use-context k8s-c1-H
There are two Pods named oodb-* in Namespace project-lab. Lab management asked you to scale the Pods down to one replica to save resources.
Solution: Select the correct context.
kubectl config use-context k8s-c1-H
First, we need to locate the pods under namespace “project-lab”.
kubectl get pods -n project-lab | grep oo
From the above output, it is not clear which deployment, statefulset, or daemonset these pods belong to. For this reason, we search across deployments, statefulsets, and daemonsets. See the reference for more options.
kubectl get deployments.apps,statefulsets.apps,ds -n project-lab
Now, it is clear that these pods belong to a statefulset. To fulfil the task we simply run:
kubectl -n project-lab scale statefulset oodb --replicas=1
Post checks / How to verify it? The output must show 1 pod.
kubectl get deployments.apps,statefulsets.apps,ds -n project-lab
Congratulations! You have completed the question successfully.
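This objective also names manifest management and common templating tools (e.g. Kustomize, Helm), which the task above does not exercise directly. As a minimal Kustomize sketch of the same scale-down, assuming the statefulset manifest lives in a local file statefulset.yaml (both file names are illustrative):

```yaml
# kustomization.yaml (apply with: kubectl apply -k .)
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
namespace: project-lab
resources:
- statefulset.yaml
# The built-in replicas transformer overrides the replica count
replicas:
- name: oodb
  count: 1
```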
Question: Use context: kubectl config use-context k8s-c1-t
There are a number of pods across all namespaces. Write a command into /var/log/find_pods_age.sh which lists all Pods sorted by their AGE (metadata.creationTimestamp).
Write a second command into /var/log/find_pods_uid.sh which lists all Pods sorted by the field metadata.uid. Use kubectl sorting for both commands.
Solution: If you do not remember the command, you may refer to the kubernetes.io website and search for “cheat sheet”.
Use the correct context.
kubectl config use-context k8s-c1-t
For part 1.
Most probably, in the exam the sort option “metadata.creationTimestamp” will be given. You just need to use “--sort-by”, or you can type “--sort” and then press the Tab key twice.
kubectl get pod -A --sort-by=.metadata.creationTimestamp
If the above command executes well, save it in the file “/var/log/find_pods_age.sh”. Please bear in mind that the question asks you to write the command into the file. Also, try to execute this file; you should see the same output.
echo "kubectl get pod -A --sort-by=.metadata.creationTimestamp" > /var/log/find_pods_age.sh
sh /var/log/find_pods_age.sh
For part 2.
One can also sort the pod list by UID.
kubectl get pod -A --sort-by=.metadata.uid
As per the question, we need to write this command into the file “/var/log/find_pods_uid.sh”:
echo "kubectl get pod -A --sort-by=.metadata.uid" > /var/log/find_pods_uid.sh