Mind Map: Kubernetes

1. RBAC Authorization

1.1. Role

1.1.1. Rules that represent a set of permissions

1.1.1.1. Additive, No Deny Rules

1.1.1.2. Within a namespace

1.1.2. Grant access to resources in a single namespace

1.1.3. Example Role manifest:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: default
      name: pod-reader
    rules:
    - apiGroups: [""] # "" indicates the core API group
      resources: ["pods"]
      verbs: ["get", "watch", "list"]

1.2. ClusterRole

1.2.1. Cluster-wide role

1.2.2. Cluster-scoped resources like Nodes

1.2.3. Namespaced resources (like Pods) across all namespaces

1.2.4. Example ClusterRole manifest:

    apiVersion: rbac.authorization.k8s.io/v1
    kind: ClusterRole
    metadata:
      # "namespace" omitted since ClusterRoles are not namespaced
      name: secret-reader
    rules:
    - apiGroups: [""]
      resources: ["secrets"]
      verbs: ["get", "watch", "list"]

1.3. RoleBinding

1.3.1. A role binding grants the permissions defined in a role to a user or set of users

1.3.2. It holds a list of subjects (users, groups, or service accounts) and a reference to the role being granted

1.3.3. Permissions can be granted within a namespace with a RoleBinding

1.3.4. Example RoleBinding manifest:

    apiVersion: rbac.authorization.k8s.io/v1
    # This role binding allows "jane" to read pods in the "default" namespace.
    kind: RoleBinding
    metadata:
      name: read-pods
      namespace: default
    subjects:
    - kind: User
      name: jane # Name is case sensitive
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role # this must be Role or ClusterRole
      name: pod-reader # this must match the name of the Role or ClusterRole you wish to bind to
      apiGroup: rbac.authorization.k8s.io

1.4. ClusterRoleBinding

1.4.1. Permissions can be granted cluster-wide with a ClusterRoleBinding

1.4.2. A RoleBinding can also reference a ClusterRole, granting that ClusterRole's permissions only within the binding's namespace:

    apiVersion: rbac.authorization.k8s.io/v1
    # This role binding allows "dave" to read secrets in the "development" namespace.
    kind: RoleBinding
    metadata:
      name: read-secrets
      namespace: development # This only grants permissions within the "development" namespace.
    subjects:
    - kind: User
      name: dave # Name is case sensitive
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: secret-reader
      apiGroup: rbac.authorization.k8s.io

1.4.3. Example ClusterRoleBinding manifest:

    apiVersion: rbac.authorization.k8s.io/v1
    # This cluster role binding allows anyone in the "manager" group to read secrets in any namespace.
    kind: ClusterRoleBinding
    metadata:
      name: read-secrets-global
    subjects:
    - kind: Group
      name: manager # Name is case sensitive
      apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: ClusterRole
      name: secret-reader
      apiGroup: rbac.authorization.k8s.io
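
The subject examples in this section use the User and Group kinds; subjects can also be service accounts. A minimal sketch of a ServiceAccount subject entry (the account and namespace names are illustrative):

```yaml
# Subject entry binding a role to the "default" service account
# in the "kube-system" namespace.
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
```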

2. Controllers

2.1. Deployments

2.1.1. Use Case

2.1.1.1. Roll out a ReplicaSet

2.1.1.2. Declare new state of Pods

2.1.1.3. Rollback to an earlier deployment revision

2.1.1.4. Scale up the Deployment

2.1.1.5. Pause the Deployment to apply multiple fixes to its PodTemplateSpec and resume it to start a new rollout

2.1.1.6. Use the status of the Deployment as an indicator that a rollout has stuck

2.1.1.7. Clean up older ReplicaSets
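
Cleanup of old ReplicaSets is governed by the Deployment's .spec.revisionHistoryLimit field (it defaults to 10). A minimal sketch, with an illustrative value:

```yaml
# Fragment of a Deployment spec: keep only the 3 most recent
# old ReplicaSets; anything older is garbage-collected.
spec:
  revisionHistoryLimit: 3
```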

2.1.2. Deployment Status

2.1.2.1. Failed Deployment

2.1.2.1.1. Factors

2.1.2.1.2. Deadline Parameter

2.1.2.1.3. Status Conditions
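
The deadline for declaring a rollout stuck is the Deployment's .spec.progressDeadlineSeconds field; once exceeded, the controller adds a status condition with Type=Progressing, Status=False, and Reason=ProgressDeadlineExceeded. A minimal sketch, with an illustrative value:

```yaml
# Fragment of a Deployment spec: report the rollout as failed
# if it makes no progress for 10 minutes (600 seconds).
spec:
  progressDeadlineSeconds: 600
```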

2.1.3. Deployment Spec

2.1.4. Creating a Deployment

2.1.4.1. Example Deployment manifest:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: nginx-deployment
      labels:
        app: nginx
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: nginx
      template:
        metadata:
          labels:
            app: nginx
        spec:
          containers:
          - name: nginx
            image: nginx:1.7.9
            ports:
            - containerPort: 80

2.1.4.1.1. .metadata.name: Deployment name

2.1.4.1.2. three replicated Pods, indicated by .spec.replicas: 3

2.1.4.1.3. .spec.selector defines how the Deployment finds which Pods to manage. In this case, simply select a label that is defined in the Pod template (app: nginx).

2.1.4.1.4. .spec.template defines Pods metadata and specification

2.1.4.2. kubectl apply -f manifest.yaml

2.1.4.3. kubectl get deployments

2.1.4.3.1. name: lists the names of the Deployments in the cluster

2.1.4.3.2. desired: displays the desired number of replicas of the application

2.1.4.3.3. current: displays how many replicas are currently running

2.1.4.3.4. up-to-date: displays the number of replicas that have been updated to achieve the desired state

2.1.4.3.5. available: displays how many replicas of the application are available to your users

2.1.4.3.6. age: displays the amount of time that the application has been running

2.1.4.4. kubectl rollout status deployment.v1.apps/deployment-name

2.1.4.5. kubectl get rs

2.1.4.5.1. ReplicaSet Name Format: [deployment-name]-[random-string]

2.1.5. Updating a Deployment

2.1.5.1. A Deployment’s rollout is triggered if and only if the Deployment’s Pod template (that is, .spec.template) is changed, for example if the labels or container images of the template are updated. Other updates, such as scaling the Deployment, do not trigger a rollout.

2.1.5.2. Example

2.1.5.2.1. kubectl set image deployment.v1.apps/nginx-deployment nginx=nginx:1.9.1 --record

2.1.5.2.2. kubectl set image deployment/nginx-deployment nginx=nginx:1.9.1 --record

2.1.5.2.3. kubectl edit deployment.v1.apps/nginx-deployment

2.1.5.3. See Rollout Status

2.1.5.3.1. kubectl rollout status deployment.v1.apps/nginx-deployment

2.1.5.4. kubectl get deployments

2.1.5.5. kubectl get rs

2.1.5.6. kubectl get pods

2.1.5.7. By default, Deployment ensures that at least 75% of the desired number of Pods are up (25% max unavailable)

2.1.5.8. By default, Deployment also ensures that at most 125% of the desired number of Pods are up (25% max surge)

2.1.5.8.1. kubectl describe deployments
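
These percentages come from the rolling update strategy's maxUnavailable and maxSurge fields; a sketch spelling out the defaults explicitly:

```yaml
# Fragment of a Deployment spec: the default rolling update parameters.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 25%  # at most 25% of desired Pods may be unavailable during the update
      maxSurge: 25%        # at most 25% more Pods than desired may exist during the update
```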

2.1.5.9. Rollover

2.1.5.9.1. aka multiple updates in-flight

2.1.5.10. Label Selector Updates

2.1.5.10.1. It is generally discouraged to make label selector updates and it is suggested to plan your selectors up front. In any case, if you need to perform a label selector update, exercise great caution and make sure you have grasped all of the implications.

2.1.6. Rolling Back a Deployment

2.1.6.1. the Deployment's rollout history is kept in the system

2.1.6.2. You can roll back at any time

2.1.6.3. Checking Rollout History of a Deployment

2.1.6.3.1. Check the revisions of this Deployment: kubectl rollout history deployment.v1.apps/nginx-deployment

2.1.6.3.2. See the details of each revision: kubectl rollout history deployment.v1.apps/nginx-deployment --revision=2

2.1.6.4. Rolling Back to a Previous Revision

2.1.6.4.1. kubectl rollout undo deployment.v1.apps/nginx-deployment

2.1.6.4.2. kubectl rollout undo deployment.v1.apps/nginx-deployment --to-revision=2

2.1.7. Scaling a Deployment

2.1.7.1. kubectl scale deployment.v1.apps/nginx-deployment --replicas=10

2.1.7.2. Horizontal Pod Autoscaling

2.1.7.2.1. Set up an autoscaler for your Deployment and choose the minimum and maximum number of Pods you want to run based on the CPU utilization of your existing Pods
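
The same can be declared as a HorizontalPodAutoscaler resource; a minimal sketch targeting the nginx-deployment above (the replica bounds and target utilization are illustrative):

```yaml
# Keep nginx-deployment between 1 and 10 replicas,
# aiming for 80% average CPU utilization.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-deployment
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-deployment
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

The equivalent imperative form is kubectl autoscale deployment.v1.apps/nginx-deployment --min=1 --max=10 --cpu-percent=80.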

3. Tasks

3.1. Run Application

3.1.1. Horizontal Pod Autoscale

3.1.1.1. Concept

3.1.1.1.1. The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment, replica set, or stateful set based on observed CPU utilization or, with custom metrics support, on other application-provided metrics

3.1.1.1.2. HPA does not apply to objects that can't be scaled such as DaemonSets

3.1.1.1.3. HPA is implemented as a Kubernetes API Resource and a controller

3.1.1.1.4. The controller periodically adjusts the number of replicas in a replication controller or deployment to match the observed average CPU utilization to the target specified by the user

3.1.1.2. How does HPA work?

3.1.1.2.1. Implemented as a control loop: each sync period the controller queries the metrics and computes desiredReplicas = ceil[currentReplicas × (currentMetricValue / desiredMetricValue)]