What is a Kubernetes Controller?
- Controllers are the brain of Kubernetes.
- They are the processes that monitor Kubernetes objects and respond accordingly.
Replication Controller
Why do we need this?
Monitor pods and replicate them if necessary: if for some reason our application crashes and the pod fails, users may lose access or information. We’d like to have more than one pod running to prevent this.
- Replication controller helps run multiple instances of a single pod in Kubernetes cluster.
- This provides higher availability.
- Replication controller ensures that the specified number of pods is running at all times.
- What happens if there are more pods than the number of replicas specified?
- The replica set terminates any additionally created pod with a matching label, so the total never exceeds the specified number of replicas.
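A quick way to see this behavior, as a sketch: it assumes a replica set selecting the label type=front-end-pod already exists and is at its desired count, and the pod name extra-pod is made up for illustration.
# create one more pod carrying the same label the replica set selects on
kubectl run extra-pod --image=nginx --labels="type=front-end-pod"
# the extra pod is terminated almost immediately, because the desired replica count is already met
kubectl get pods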
Load Balancing and Scaling:
- Replication Controller creates multiple pods to share the load across them
- e.g., when the number of users increases, we deploy an additional pod to balance the load across the two pods.
- If we run out of space on the first node, we can deploy additional pods on other nodes in the Kubernetes cluster.
Replication Controller vs Replica Set
- Replication controller is the older technology being replaced by replica set.
- Replica set is the new recommended way to set up replication.
How to create a replication controller
# rc-definition.yaml
apiVersion: v1
kind: ReplicationController
metadata:
  name: myapp-rc
  labels:
    app: myapp
    type: front-end
spec:
  # the most crucial part
  template:
    # same as what we have in pod-definition.yaml
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: Pod
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 3
Commands
to create a replication controller
kubectl create -f rc-definition.yaml
to view the created replication controllers
kubectl get replicationcontroller
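The output looks roughly like this (DESIRED, CURRENT, and READY reflect the replica count from the definition; AGE is illustrative):
NAME       DESIRED   CURRENT   READY   AGE
myapp-rc   3         3         3       19s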
Notes
- Pods created by a replication controller have names prefixed with the replication controller’s name (see the example output below).
- We can check the apiVersion of ReplicaSet via the command kubectl api-resources | grep replicaset (see the example output below).
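For example, kubectl get pods might print something like the following for the myapp-rc controller above (the random suffixes are illustrative):
NAME             READY   STATUS    RESTARTS   AGE
myapp-rc-4lvk9   1/1     Running   0          25s
myapp-rc-9tznb   1/1     Running   0          25s
myapp-rc-hwlr2   1/1     Running   0          25s
And kubectl api-resources | grep replicaset prints roughly this (exact columns vary slightly between kubectl versions):
replicasets   rs   apps/v1   true   ReplicaSet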
Replica Set
A process that monitors the pods.
# replicaset-definition.yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: myapp-replicaset
  labels:
    app: myapp
    type: front-end
spec:
  template:
    metadata:
      name: myapp-pod
      labels:
        app: myapp
        type: front-end-pod
    spec:
      containers:
        - name: nginx-container
          image: nginx
  replicas: 3
  selector:
    # matchLabels selector simply matches the labels specified under pod labels
    matchLabels:
      type: front-end-pod
selector section
Selector helps the replica set identify which pods fall under it. This is the major difference between ReplicationController and ReplicaSet: the selector is required in a ReplicaSet definition.
- A replica set can also manage existing pods that were not created as part of the replica set, as long as their labels match the selector (see the sketch below).
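As a sketch of that last point, a pod defined separately with a matching label is counted by the replica set above (the file name and pod name here are made up for illustration):
# standalone-pod.yaml (not created by the replica set)
apiVersion: v1
kind: Pod
metadata:
  name: standalone-pod
  labels:
    type: front-end-pod   # matches the replica set's matchLabels selector
spec:
  containers:
    - name: nginx-container
      image: nginx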
Commands
to create a replica set
kubectl create -f replicaset-definition.yaml
to get the list of replica sets
kubectl get replicaset
# alternative: rs is a shortname
kubectl get rs
to get replicaset information
kubectl describe replicaset <replicaset_name>
# alternative: rs is a shortname
kubectl describe rs <replicaset_name>
to delete replicaset
# This also deletes all underlying pods
kubectl delete replicaset <replicaset_name>
# alternative: rs is a shortname
kubectl delete rs <replicaset_name>
to update replicaset
# This opens a temporary replicaset configuration file in the terminal editor.
# It allows us to modify the configuration of a running replicaset.
# After you edit a replicaset, pods that were already created are not re-created automatically.
# We'll need to either delete and recreate the entire replicaset, or delete the existing pods so they are recreated from the new template (see the sketch below).
kubectl edit replicaset <replicaset_name>
# alternative: rs is a shortname
kubectl edit rs <replicaset_name>
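For example, after changing the container image with kubectl edit, one way to roll the change out is to delete the existing pods so the replica set recreates them from the updated template (pod names are placeholders):
# existing pods still run the old image after the edit
kubectl get pods
# deleting a pod makes the replica set create a replacement from the edited template
kubectl delete pod <pod_name>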
to get an explanation of replicaset
kubectl explain replicaset
# alternative: rs is a shortname
kubectl explain rs
Labels and Selectors
How does the replica set know which pods to monitor?
There could be hundreds of other pods in the cluster running different applications.
We can use labels as a filter for the replica set.
Under the selector section, we use the matchLabels filter and provide the same labels that we used while creating the pods.
This way, the replica set knows which pods to monitor.
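If the matchLabels do not match the labels in the pod template, kubectl rejects the definition. A minimal broken sketch (the file name is made up; the exact error text can vary by version):
# invalid-replicaset.yaml (excerpt): selector does not match the template labels
spec:
  selector:
    matchLabels:
      type: back-end-pod        # mismatch
  template:
    metadata:
      labels:
        type: front-end-pod
# kubectl create -f invalid-replicaset.yaml fails with an error along the lines of
# "`selector` does not match template `labels`"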
When one of the pods fails in the future, the replica set will use the template to create a new pod and maintain the desired number of pods.
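To observe this, a quick sketch (the pod name is whatever kubectl get pods lists for one of the replica set's pods):
# delete one of the replica set's pods
kubectl delete pod <pod_name>
# watch a replacement pod being created to keep the count at the desired replicas
kubectl get pods --watch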
Scale
- (a) Update the number of replicas in the definition file to a new number. (b) Then, use kubectl replace -f replicaset-definition.yaml to update the replica set.
- (a) Use kubectl scale --replicas=6 -f replicaset-definition.yaml to update.
  - This will not automatically update the replicaset-definition.yaml file with the new replicas number.
- (a) Use kubectl scale --replicas=6 replicaset <replicaset_name> instead of specifying a file.
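Putting these options together with the names used in this note (a sketch; myapp-replicaset comes from the definition file above, and the replica count 6 is just an example):
# option 1: after editing replicas in the file
kubectl replace -f replicaset-definition.yaml
# option 2: scale using the file as input (the file itself is not updated)
kubectl scale --replicas=6 -f replicaset-definition.yaml
# option 3: scale by resource type and name
kubectl scale --replicas=6 replicaset myapp-replicaset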