GCP & Kubernetes


1. Google Cloud Platform

1.1. Networking

1.1.1. https://cloud.google.com/compute/docs/networking

1.1.2. Different networks - even in the same project - cannot communicate directly with each other - they must communicate through the internet (or possibly through a common VPN)

1.1.3. Each network can have multiple subnets. Subnets can communicate with each other, given appropriate firewall rules

1.1.4. Even hosts on the same subnet cannot communicate without a firewall rule allowing it, giving much greater granularity than traditional networks offer.

1.1.5. Tags can be used for creating firewall rules, greatly simplifying granular firewall rule creation. The same tags can be used in multiple networks; however:

1.1.5.1. Tags are not recognized across networks. E.g., if I tag server A as "ping-from" on network X and server B as "ping-to" on network Y, pinging from A to B's external IP won't work if my rule on network Y allows "ping-from" to ping "ping-to". But I can create a rule on network Y allowing A's external IP to ping any "ping-to" systems, and then A can ping B.
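
The tag example above can be sketched with gcloud firewall-rule commands. All names here (network Y, the tags, the rule names) are the hypothetical ones from the note; adjust to your project:

```shell
# Same-network case: allow ICMP on network Y from instances tagged
# "ping-from" to instances tagged "ping-to". Source tags only match
# instances on the SAME network, which is why this fails cross-network.
gcloud compute firewall-rules create allow-ping-by-tag \
  --network Y --allow icmp \
  --source-tags ping-from --target-tags ping-to

# Cross-network workaround: allow server A's external IP explicitly.
gcloud compute firewall-rules create allow-ping-from-a \
  --network Y --allow icmp \
  --source-ranges <server-A-external-IP>/32 --target-tags ping-to
```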

1.1.6. commands

1.1.6.1. gcloud compute networks create <network_name> --subnet-mode auto   (older gcloud releases used --mode auto)

1.1.6.1.1. creates a new network with auto-created subnets (one per region)

1.2. regions & zones

1.2.1. gcloud config set compute/zone us-east1-d

1.3. Container Engine

1.3.1. built on Kubernetes

1.3.1.1. Kubernetes clusters

1.3.1.1.1. gcloud container clusters create <cluster-name> --network <network> --scopes "https://www.googleapis.com/auth/projecthosting,storage-rw"

1.3.1.1.2. gcloud container clusters delete <cluster-name>

1.3.1.1.3. gcloud container clusters list

1.3.1.1.4. gcloud container clusters get-credentials <cluster>

1.3.2. Docker

1.3.2.1. gcloud docker -- <docker command>   (deprecated; prefer gcloud auth configure-docker plus plain docker)

1.3.2.1.1. EG: gcloud docker -- push gcr.io/<project-name>/<image>

1.3.3. Container Registry

1.3.3.1. gcr.io/<project-name>

1.3.3.2. gcloud container images list

1.3.4. Good Intro

1.4. Compute Engine

1.4.1. gcloud compute images list

1.4.1.1. list all the images available

1.5. Cloud Shell

1.5.1. appears to be one instance per user - same instance across multiple projects

1.5.2. Appears to be independent of project (my k8s config shows clusters in multiple projects)

1.6. Cook Books

1.6.1. Take a standard image, add an application, make an image, deploy in a pod

1.7. Tutorials

1.7.1. Jenkins in GKE

1.7.1.1. See also

1.8. Projects

1.8.1. gcloud projects list

1.8.2. Guide to projects, permissions, & accounts

1.9. AAA

1.9.1. 2FA

1.9.1.1. Enforcement

1.9.1.1.1. After turning on enforcement, all new users need to be placed into an exception group so they can set up 2FA

1.9.2. Google Cloud Directory Sync

1.9.2.1. best practices

1.10. to authenticate in SDK:

1.10.1. gcloud auth application-default login

1.11. Documentation

1.11.1. Google Cloud Compute Tips

1.12. gcloud

1.12.1. config

1.12.1.1. gcloud config configurations list

1.12.1.1.1. lists all your configurations

1.12.1.2. gcloud config configurations activate <configuration-name>

1.12.1.2.1. change configurations

1.12.2. --format

1.12.2.1. table format, no labels

1.12.2.1.1. --format 'table(zone:label="")'

1.12.2.2. json format

1.12.2.2.1. --format json

2. Kubernetes (K8s)

2.1. Documentation

2.1.1. what is kubernetes

2.1.2. User Guide

2.1.3. Xoriant Blog - K Building Blocks

2.1.4. Network Design

2.1.5. Tutorials

2.1.6. Security Best Practices

2.1.6.1. good, only slightly TwistLock biased

2.1.7. 4-Day Docker & Kubernetes Training

2.1.8. KubeWeekly

2.1.8.1. TONS of K8s relevant info

2.2. has

2.2.1. Cluster - a group of nodes

2.2.1.1. Node - a physical or virtual machine

2.2.1.1.1. has

2.2.1.1.2. is

2.2.1.2. Namespaces - allow isolation between pods within a cluster - perhaps for different teams, perhaps by environment (dev, test, prod)

2.2.1.2.1. Default: within a namespace, all pods can talk to each other

2.2.1.2.2. DefaultDeny: with a default-deny policy, pods in the namespace will be inaccessible from any source except the pod's local Node
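
A minimal sketch of a default-deny policy for a namespace, assuming a NetworkPolicy-aware network plugin is installed (the namespace name is hypothetical):

```yaml
# Selects every pod in the namespace and allows no ingress,
# overriding the default "all pods can talk to each other" behavior.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: my-namespace   # hypothetical namespace name
spec:
  podSelector: {}           # empty selector = all pods in the namespace
  policyTypes:
  - Ingress
```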

2.2.1.3. A production cluster should have at least 3 nodes

2.2.1.4. disks

2.2.2. Master Controller (typically 1)

2.2.2.1. has

2.2.2.1.1. Deployments

2.2.2.1.2. Discovery Service

2.2.2.1.3. Replication Controller

2.2.2.1.4. Scheduling Manager

2.2.2.1.5. Heapster

2.2.2.1.6. GCE only: GLBC - GCE Load Balance Controller

2.2.2.1.7. KubeDNS

2.2.2.1.8. dashboard

2.2.2.1.9. API

2.2.3. command line utility

2.2.3.1. kubectl

2.2.3.1.1. has

2.2.4. Services

2.2.4.1. integrate w HashiCorp Vault?

2.2.4.2. single endpoint to multiple pods to provide consistent point of entry for service consumer

2.2.4.2.1. LoadBalancer

2.2.4.2.2. NodePort
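
A minimal Service sketch for the NodePort case (name, label, and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: bootcamp
spec:
  type: NodePort            # or LoadBalancer for an external load balancer
  selector:
    app: bootcamp           # routes traffic to all pods with this label
  ports:
  - port: 8080              # port the service listens on
    targetPort: 8080        # port the container listens on
```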

2.2.5. Networking

2.2.5.1. IP-per-Pod model: IP addresses applied at a Pod level

2.2.5.1.1. All containers within a Pod share the same IP and must use distinct ports

2.2.5.1.2. Pod's single IP is the same inside and outside the pod.

2.2.5.2. Google Compute Engine

2.2.5.2.1. Each VM

2.2.5.3. Service

2.2.5.3.1. pod load balancing

2.2.5.3.2. virtual IP for client access

2.2.6. namespaces

2.2.6.1. create subdomains for services. <service-name>.<namespace-name>.svc.cluster.local.

2.2.6.1.1. See https://kubernetes.io/docs/admin/namespaces/

2.2.7. Labels

2.2.8. Secrets

2.2.8.1. implemented in etcd

2.2.8.1.1. base64-encoded but not encrypted by default (etcd encryption at rest must be enabled separately)

2.2.8.2. namespaced - available to pods in the same namespace that reference them

2.2.8.3. Secrets Management (more here than just K8s)
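
A sketch of creating a secret and inspecting its storage format (the secret name and values are hypothetical; pods would consume it via env.valueFrom.secretKeyRef or a volume mount):

```shell
# Create a secret from literal values (stored base64-encoded in etcd)
kubectl create secret generic db-creds \
  --from-literal=username=admin --from-literal=password=s3cr3t

# Inspect it - note the data is only base64-encoded, not encrypted
kubectl get secret db-creds -o yaml
```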

2.2.9. contexts

2.2.9.1. seems to be

2.2.10. console

2.2.10.1. GUI

2.2.10.1.1. 127.0.0.1:8001/ui

2.2.10.2. can be used to explore API

2.2.10.2.1. 127.0.0.1:8001/api

2.3. kubectl commands

2.3.1. kubectl cheat sheet

2.3.2. kubectl cluster-info

2.3.2.1. gets info about the cluster

2.3.3. kubectl get

2.3.3.1. lists the objects in the cluster

2.3.3.1.1. kubectl get nodes

2.3.3.1.2. kubectl get services

2.3.3.1.3. kubectl get deployments

2.3.3.1.4. kubectl get pods -l <label-name>=<label-value>

2.3.4. kubectl proxy

2.3.4.1. create a route between the terminal and K8s cluster - allows access to the API

2.3.4.2. open a browser to http://localhost:8001/ui for the K8s GUI

2.3.5. kubectl expose

2.3.5.1. exposes deployment as a service externally

2.3.5.1.1. EG kubectl expose deployment/kubernetes-bootcamp --type="NodePort" --port 8080

2.3.5.1.2. how to determine if an exposed service requires authentication or not? How to require auth?

2.3.6. kubectl describe

2.3.6.1. describes object w a lot of details

2.3.6.1.1. kubectl describe deployment

2.3.6.1.2. kubectl describe services

2.3.6.1.3. kubectl describe services/kubernetes-bootcamp

2.3.7. kubectl run

2.3.7.1. creates a deployment

2.3.8. kubectl config

2.3.8.1. kubectl config get-contexts

2.3.8.1.1. list all the contexts available in the k8s config

2.3.8.2. kubectl config use-context <context-name>

2.3.8.2.1. sets current context

2.3.9. kubectl exec

2.3.9.1. run a command on container. Often used to get to a shell

2.3.9.1.1. kubectl exec -it <pod-name> -- bash

2.3.10. kubectl attach

2.3.10.1. attaches to the running process in a container (unlike exec, it does not start a new process)

2.3.10.1.1. kubectl attach nettools-3282871191-3m089 -c nettools -ti

2.3.11. kubectl top pods

2.3.11.1. show top pods by CPU load

2.4. k8s runs

2.4.1. deployments

2.4.2. jobs

2.4.2.1. if a job fails, it will try again

2.4.2.1.1. retry behavior is controlled by the Job's backoffLimit field (default 6) and the pod template's restartPolicy
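
Job retry behavior is configurable; a minimal sketch with backoffLimit (name and image are hypothetical):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: one-shot            # hypothetical job name
spec:
  backoffLimit: 3           # retry failed pods up to 3 times (default is 6)
  template:
    spec:
      restartPolicy: Never  # Jobs require Never or OnFailure
      containers:
      - name: worker
        image: gcr.io/my-project/worker:v1   # hypothetical image
```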

2.4.3. bare pod

2.4.3.1. if you want something to just terminate if it fails (eg, building new infrastructure)

2.4.4. Replication Controllers

2.5. DNS

2.5.1. creates its own dns

2.5.1.1. service.namespace.svc.cluster.local

2.6. deployments

2.6.1. deployment YAML

2.6.1.1. resources

2.6.1.1.1. limits

2.6.1.1.2. requests
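
A minimal deployment YAML sketch showing where limits and requests sit (all names and values are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: gcr.io/my-project/web:v1   # hypothetical image
        resources:
          requests:           # what the scheduler reserves for the pod
            cpu: 100m
            memory: 128Mi
          limits:             # hard caps enforced at runtime
            cpu: 500m
            memory: 256Mi
```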

3. Docker

3.1. sample Dockerfiles

3.2. Dockerfile commands

3.2.1. FROM

3.2.2. MAINTAINER (deprecated - use LABEL maintainer="..." instead)

3.2.3. RUN

3.2.4. ENTRYPOINT
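
The instructions above in a minimal, hypothetical Dockerfile (base image and package chosen for illustration):

```dockerfile
FROM ubuntu:22.04                      # base image
LABEL maintainer="me@example.com"      # modern replacement for MAINTAINER
# Build-time command; clean the apt cache to keep the image small
RUN apt-get update && apt-get install -y nmap \
    && rm -rf /var/lib/apt/lists/*
ENTRYPOINT ["nmap"]                    # fixed command; docker run args become nmap args
```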

3.3. commands

3.3.1. docker pull

3.3.1.1. pull an image from another repo

3.3.1.2. docker pull <tag>

3.3.2. docker push

3.3.2.1. push an image to a repo

3.3.3. docker images

3.3.3.1. list all images

3.3.4. docker ps

3.3.4.1. show currently running docker processes

3.3.4.2. -a

3.3.4.2.1. show current and finished processes

3.3.5. docker build

3.3.5.1. docker build -t <tag> <Dockerfile location>

3.3.5.1.1. EG docker build -t user/nmap .

3.3.6. docker run

3.3.6.1. docker run <tag> <params>

3.3.6.2. -it

3.3.6.2.1. interactive

3.3.6.3. -v <from>:<to>:<permissions>

3.3.6.3.1. share a volume or file

3.3.7. docker logs

3.3.7.1. docker logs <container name>

3.3.8. docker inspect

3.3.8.1. docker inspect <container name>

3.3.9. docker rm

3.3.9.1. docker rm <container name>

3.3.9.1.1. remove container

3.3.10. docker rmi

3.3.10.1. remove image <tag>

3.3.11. docker cp

3.3.11.1. docker cp <from> <to>

3.4. cookbooks

3.4.1. delete all images with a <none> tag (dangling images)

3.4.1.1. docker image prune   (or: docker images -f dangling=true -q | xargs -r docker rmi)

4. cookbooks

4.1. Download a docker image & put it in my Google Container Registry (GCR)

4.1.1. find image on dockerhub

4.1.1.1. docker search <search-text>

4.1.2. pull from dockerhub

4.1.2.1. docker pull <tag>

4.1.2.1.1. EG docker pull hello-world

4.1.3. check the list of images, get a tag

4.1.3.1. docker images

4.1.4. tag the image with my GCR info

4.1.4.1. docker tag <current-tag> <new-repo-specific-tag-and-version>

4.1.4.1.1. EG docker tag 48b5124b2768 gcr.io/my-project/hello-world:v1

4.1.5. push the image to my GCR

4.1.5.1. 1. gcloud auth configure-docker

4.1.5.2. 2. docker push <new-repo-specific-tag-and-version>

4.1.5.2.1. EG docker push gcr.io/my-project/hello-world:v1

4.1.5.3. DEPRECATED: gcloud docker -- push <new-repo-specific-tag-and-version>

4.1.5.3.1. EG gcloud docker -- push gcr.io/my-project/hello-world:v1

4.1.5.3.2. IMPORTANT: the gcloud wrapper is what applies your gcloud authentication to the push
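
Steps 4.1.2-4.1.5 as a single sketch, using the hello-world example and a hypothetical project name; assumes an already-authenticated gcloud:

```shell
docker pull hello-world                              # pull from Docker Hub
docker tag hello-world gcr.io/my-project/hello-world:v1   # retag for GCR
gcloud auth configure-docker                         # wire docker to gcloud credentials
docker push gcr.io/my-project/hello-world:v1         # push to the registry
```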

4.2. delete all exited & dead containers in docker

4.2.1. docker ps -f status=exited -f status=dead --format "{{.ID}}" | xargs docker rm

4.3. create a cluster

4.3.1. use the GUI, or gcloud container clusters create <cluster-name> from the command line

4.3.2. then, add the cluster to your kubectl config

4.3.2.1. gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-name>

4.4. add a container to the cluster, creating a pod along the way

4.4.1. make sure the image is in your local repository first!

4.4.1.1. docker images

4.4.2. use kubectl run to add the container

4.4.2.1. kubectl run <deployment-name> --image=<image-tag>

4.5. delete all clusters in your kubectl config (eg, the clusters have been deleted in GKE)

4.5.1. kubectl config get-clusters | grep -v NAME | xargs -n 1 kubectl config delete-cluster

4.6. get to a command line in a container. Replace "bash" with "sh" if bash is not supported in the container

4.6.1. If it's the only container in the pod

4.6.1.1. kubectl exec -it <pod-name> -- "bash"

4.6.2. If there are multiple containers in the pod

4.6.2.1. first find the container name for the container you want

4.6.2.1.1. kubectl describe pod <pod-name>

4.6.2.2. then exec the shell

4.6.2.2.1. kubectl exec -it <pod-name> -c <container-name> -- bash   (the -p flag is deprecated; pass the pod name directly)

4.7. list all the containers in all your clusters

4.7.1. kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].name}"

4.7.2. kubectl get pods --all-namespaces -o=jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.name}{", "}{end}{end}' | sort

4.8. list all your clusters

4.8.1. kubectl config view

4.8.1.1. and then look in the contexts section

4.9. delete a pod/deployment

4.9.1. kubectl get pods

4.9.1.1. list the pods to see your pod is there

4.9.2. kubectl get deployments

4.9.2.1. get the name of your pod's deployment

4.9.3. kubectl delete deployment <deployment-name>

4.9.3.1. you need to delete the deployment. If you delete the pod, kubernetes will recreate it

4.9.4. kubectl get deployments

4.9.4.1. make sure your deployment is gone

4.9.5. kubectl get pods

4.9.5.1. make sure your pod is gone

4.10. show all gke instances by name, zone, tags, & status

4.10.1. gcloud compute instances list --filter 'name~gke.*' --format "table(name:sort=1,zone,tags.items.list():label=TAGS,status)"

4.11. scaling

4.11.1. scale pods up and down

4.11.1.1. kubectl scale deploy <deployment> -n <namespace> --replicas <replica count>

4.11.2. scale nodes up and down

4.11.2.1. gcloud container clusters resize <cluster> --size <number of nodes per zone> --project <project> --zone <master zone>