Kubernetes Mind Map

1. RBAC Authorization

1.1. Role

1.1.1. A Role contains rules that represent a set of permissions. Permissions are purely additive (there are no "deny" rules). A Role sets permissions within a particular namespace.

1.1.2. Grant access to resources in a single namespace

1.1.3. Example:
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: pod-reader
rules:
- apiGroups: [""] # "" indicates the core API group
  resources: ["pods"]
  verbs: ["get", "watch", "list"]

1.2. ClusterRole

1.2.1. Cluster-wide role

1.2.2. Cluster-scoped resources like Nodes

1.2.3. Namespaced resources (like Pods) across all namespaces

1.2.4. Example:
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  # "namespace" omitted since ClusterRoles are not namespaced
  name: secret-reader
rules:
- apiGroups: [""]
  resources: ["secrets"]
  verbs: ["get", "watch", "list"]

1.3. RoleBinding

1.3.1. A role binding grants the permissions defined in a role to a user or set of users

1.3.2. It holds a list of subjects (users, groups, or service accounts) and a reference to the role being granted

1.3.3. Permissions can be granted within a namespace with a RoleBinding

1.3.4. Example:
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "jane" to read pods in the "default" namespace.
kind: RoleBinding
metadata:
  name: read-pods
  namespace: default
subjects:
- kind: User
  name: jane # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role # this must be Role or ClusterRole
  name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
  apiGroup: rbac.authorization.k8s.io
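Subjects are not limited to users and groups; a sketch of a ServiceAccount subject for the same kind of binding (the account and namespace names here are illustrative, not from the examples above):

```yaml
subjects:
- kind: ServiceAccount
  name: my-app # hypothetical service account name
  namespace: default
```

Note that ServiceAccount subjects take a namespace field instead of the apiGroup used for User and Group subjects.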

1.4. ClusterRoleBinding

1.4.1. Permissions can be granted cluster-wide with a ClusterRoleBinding

1.4.2. A RoleBinding can also reference a ClusterRole, granting the ClusterRole's permissions only within the binding's namespace:
apiVersion: rbac.authorization.k8s.io/v1
# This role binding allows "dave" to read secrets in the "development" namespace.
kind: RoleBinding
metadata:
  name: read-secrets
  namespace: development # This only grants permissions within the "development" namespace.
subjects:
- kind: User
  name: dave # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io

1.4.3. Example:
apiVersion: rbac.authorization.k8s.io/v1
# This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
kind: ClusterRoleBinding
metadata:
  name: read-secrets-global
subjects:
- kind: Group
  name: manager # Name is case sensitive
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: secret-reader
  apiGroup: rbac.authorization.k8s.io
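A quick way to check what a binding actually grants is kubectl auth can-i; a sketch assuming the user and namespace from the examples above (these commands need a live cluster with those bindings applied):

```
# Should report "yes" if the read-secrets RoleBinding above is applied
kubectl auth can-i get secrets --namespace development --as dave

# Should report "no": the RoleBinding does not extend outside "development"
kubectl auth can-i get secrets --namespace kube-system --as dave
```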

2. Controllers

2.1. Deployments

2.1.1. Use cases: roll out a ReplicaSet; declare a new state of the Pods; roll back to an earlier Deployment revision; scale up the Deployment; pause the Deployment to apply multiple fixes to its PodTemplateSpec, then resume it to start a new rollout; use the status of the Deployment as an indicator that a rollout has stuck; clean up older ReplicaSets

2.1.2. Deployment status: a rollout can progress, complete, or fail; a failed Deployment is detected via the deadline parameter (.spec.progressDeadlineSeconds) and surfaced through status conditions
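As an illustration of how failure surfaces in status conditions (the field values below are typical, not captured from a real cluster; the ReplicaSet hash is a placeholder), the Deployment's status might contain:

```yaml
status:
  conditions:
  - type: Progressing
    status: "False"
    reason: ProgressDeadlineExceeded
    message: ReplicaSet "nginx-deployment-<hash>" has timed out progressing.
```

Such conditions can be inspected with kubectl describe deployment or kubectl get deployment -o yaml.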

2.1.3. Deployment Spec

2.1.4. Creating a Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:1.7.9
        ports:
        - containerPort: 80
This creates a Deployment named nginx-deployment with three replicated Pods, indicated by .spec.replicas: 3. .spec.selector defines how the Deployment finds which Pods to manage; in this case, simply select a label that is defined in the Pod template (app: nginx). .spec.template defines the Pods' metadata and specification.
kubectl apply -f manifest.yaml
kubectl get deployments
  NAME: lists the names of the Deployments in the cluster
  DESIRED: displays the desired number of replicas of the application
  CURRENT: displays how many replicas are currently running
  UP-TO-DATE: displays the number of replicas that have been updated to achieve the desired state
  AVAILABLE: displays how many replicas of the application are available to your users
  AGE: displays the amount of time that the application has been running
kubectl rollout status deployment.v1.apps/nginx-deployment
kubectl get rs
  ReplicaSet name format: [deployment-name]-[random-string]

2.1.5. Updating a Deployment: a Deployment's rollout is triggered if and only if the Deployment's Pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.
Examples:
kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record
kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record
kubectl edit deployment.v1.apps/nginx-deployment
See the rollout status:
kubectl rollout status deployment.v1.apps/nginx-deployment
kubectl get deployments
kubectl get rs
kubectl get pods
By default, a Deployment ensures that at least 75% of the desired number of Pods are up, and that at most 125% of the desired number of Pods are up.
kubectl describe deployments
Rollover (aka multiple updates in-flight): if you update a Deployment while an existing rollout is in progress, it creates a new ReplicaSet per the update and rolls over the ReplicaSet it was previously scaling up.
Label selector updates: it is generally discouraged to make label selector updates, and it is suggested to plan your selectors up front. In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped all of the implications.
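The 75%/125% defaults above come from the rolling-update strategy fields; a sketch of setting them explicitly (the values shown are the defaults):

```yaml
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%   # at least 75% of desired Pods stay up
      maxSurge: 25%         # at most 125% of desired Pods exist at once
```

Both fields also accept absolute Pod counts instead of percentages.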

2.1.6. Rolling back a Deployment: the Deployment's rollout history is kept in the system, so you can roll back anytime.
Checking rollout history of a Deployment: check the revisions of this Deployment, then see the details of each revision.
Rolling back to a previous revision:
kubectl rollout undo deployment.v1.apps/nginx-deployment
kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2
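The history checks mentioned above map to kubectl rollout history (revision 2 is just an example number):

```
# Check the revisions of this Deployment
kubectl rollout history deployment.v1.apps/nginx-deployment

# See the details of a specific revision
kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2
```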

2.1.7. Scaling a Deployment:
kubectl scale deployment.v1.apps/nginx-deployment --replicas=10
Horizontal Pod Autoscaling: set up an autoscaler for your Deployment and choose the minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods.
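A sketch of the autoscaler setup described above (the min/max and CPU threshold are illustrative values, not recommendations):

```
kubectl autoscale deployment.v1.apps/nginx-deployment --min=3 --max=10 --cpu-percent=80
```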

3. Tasks

3.1. Run Application

3.1.1. Horizontal Pod Autoscaler (HPA) concept: the Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, Deployment, ReplicaSet, or StatefulSet based on observed CPU utilization (or, with custom metrics support, on other metrics). HPA does not apply to objects that can't be scaled, such as DaemonSets. HPA is implemented as a Kubernetes API resource and a controller. The controller periodically adjusts the number of replicas in a replication controller or Deployment to match the observed average CPU utilization to the target specified by the user. How does HPA work? It is implemented as a control loop.
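A rough sketch of the scaling rule that control loop applies, desiredReplicas = ceil(currentReplicas × currentMetric / targetMetric), worked through with shell arithmetic; the variable names and sample numbers here are ours, for illustration only:

```shell
# Sample inputs (illustrative, not from a real cluster)
current_replicas=4
current_cpu=90   # observed average CPU utilization, percent
target_cpu=50    # target CPU utilization, percent

# ceil(a / b) via integer arithmetic: (a + b - 1) / b
desired=$(( (current_replicas * current_cpu + target_cpu - 1) / target_cpu ))
echo "desired replicas: $desired"   # ceil(4 * 90 / 50) = 8
```

Running above target (90% vs 50%) scales the Deployment out; a value below target would shrink it, subject to the configured min/max bounds.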