1. OMG this is old and some may be wrong...
1.1. cookbooks
1.1.1. Download a docker image & put it in my Google Container Registry (GCR)
1.1.1.1. find image on dockerhub
1.1.1.1.1. docker search <search-text>
1.1.1.2. pull from dockerhub
1.1.1.2.1. docker pull <tag>
1.1.1.3. check the list of images, get a tag
1.1.1.3.1. docker images
1.1.1.4. tag the image with my GCR info
1.1.1.4.1. docker tag <current-tag> <new-repo-specific-tag-and-version>
1.1.1.5. push the image to my GCR
1.1.1.5.1. 1. gcloud auth configure-docker
1.1.1.5.2. 2. docker push <new-repo-specific-tag-and-version>
1.1.1.5.3. DEPRECATED: gcloud docker -- push <new-repo-specific-tag-and-version>
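Put together, the steps above look something like this (image and project names are hypothetical; a sketch, not verified against the current gcloud SDK):

```shell
# Hypothetical example: mirror the public nginx image into GCR
# ("my-project" is a placeholder project ID)
docker pull nginx:1.25
docker tag nginx:1.25 gcr.io/my-project/nginx:1.25
gcloud auth configure-docker      # one-time credential helper setup
docker push gcr.io/my-project/nginx:1.25
```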
1.1.2. delete all exited & dead containers in docker
1.1.2.1. docker ps -f status=exited -f status=dead --format "{{.ID}}" | xargs docker rm
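The second half of that pipeline just hands IDs to docker rm; here is the xargs step in isolation, with made-up container IDs and echo standing in for the real command:

```shell
# Made-up container IDs stand in for 'docker ps' output;
# 'echo' stands in for the real 'docker rm'.
printf '%s\n' 3f2a9c1d 8b1d0e77 | xargs echo docker rm
# → docker rm 3f2a9c1d 8b1d0e77
```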
1.1.3. create a cluster
1.1.3.1. use the gui. Look at command line if you want it.
1.1.3.2. then, add the cluster to your kubectl config
1.1.3.2.1. gcloud container clusters get-credentials <cluster-name> --zone <zone> --project <project-name>
1.1.4. add a container to the cluster, creating a pod along the way
1.1.4.1. make sure the image is in your local repository first!
1.1.4.1.1. docker images
1.1.4.2. use kubectl run to add the container
1.1.4.2.1. kubectl run <name> --image=<image-tag> (the first argument names the object created, not the image)
1.1.5. delete all clusters in your kubectl config (eg, the clusters have been deleted in GKE)
1.1.5.1. kubectl config get-clusters | grep -v NAME | xargs -n 1 kubectl config delete-cluster
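What grep -v is for: the real command's output starts with a NAME header line that must not reach xargs. Demonstrated here with canned output and echo standing in for the real delete:

```shell
# Canned 'kubectl config get-clusters' output (cluster names made up);
# grep -v strips the header, xargs -n 1 runs the delete once per cluster.
printf 'NAME\ncluster-a\ncluster-b\n' |
  grep -v NAME |
  xargs -n 1 echo kubectl config delete-cluster
# → kubectl config delete-cluster cluster-a
# → kubectl config delete-cluster cluster-b
```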
1.1.6. get to a command line in a container. Replace "bash" with "sh" if bash is not available in the container
1.1.6.1. If it's the only container in the pod
1.1.6.1.1. kubectl exec -it <pod-name> -- "bash"
1.1.6.2. If there are multiple containers in the pod
1.1.6.2.1. first find the container name for the container you want: kubectl get pod <pod-name> -o jsonpath='{.spec.containers[*].name}'
1.1.6.2.2. then exec the shell in that container: kubectl exec -it <pod-name> -c <container-name> -- "bash"
1.1.7. list all the containers in all your clusters (close, but not working yet)
1.1.7.1. kubectl get pods --all-namespaces -o jsonpath="{.items[*].spec.containers[*].name}"
1.1.7.2. kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{"\n"}{.metadata.name}{":\t"}{range .spec.containers[*]}{.name}{", "}{end}{end}' | sort
1.1.8. list all your clusters
1.1.8.1. kubectl config view
1.1.8.1.1. and then look in the contexts section
1.1.9. delete a pod/deployment
1.1.9.1. kubectl get pods
1.1.9.1.1. list the pods to see your pod is there
1.1.9.2. kubectl get deployments
1.1.9.2.1. get the name of your pod's deployment
1.1.9.3. kubectl delete deployment <deployment-name>
1.1.9.3.1. you need to delete the deployment. If you delete the pod, Kubernetes will recreate it
1.1.9.4. kubectl get deployments
1.1.9.4.1. make sure your deployment is gone
1.1.9.5. kubectl get pods
1.1.9.5.1. make sure your pod is gone
1.1.10. show all gke instances by name, zone, tags, & status
1.1.10.1. gcloud compute instances list --filter 'name~gke.*' --format "table(name:sort=1,zone,tags.items.list():label=TAGS,status)"
1.1.11. scaling
1.1.11.1. scale pods up and down
1.1.11.1.1. kubectl scale deploy <deployment> -n <namespace> --replicas <replica count>
1.1.11.2. scale nodes up and down
1.1.11.2.1. gcloud container clusters resize <cluster> --size <number of nodes per zone> --project <project> --zone <master zone>
1.1.12. restart a container without killing a pod
1.1.12.1. exec into the container and run
1.1.12.1.1. kill -HUP 1
1.1.12.2. eg, exec into the sidecar to restart nginx to pick up a new cert
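kill -HUP 1 only works if the container's main process actually handles SIGHUP (nginx does, reloading its config). The mechanism, demonstrated in plain shell with trap standing in for nginx's handler:

```shell
# Trap SIGHUP in this shell, then signal ourselves - the same thing
# 'kill -HUP 1' does to a container's main process.
trap 'echo config reloaded' HUP
kill -HUP $$
```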
1.1.13. check a certificate
1.1.13.1. in a running pod
1.1.13.1.1. openssl s_client -connect <domain-name>:<port> | openssl x509 -noout -text
1.1.13.2. in the secret for a pod
1.1.13.2.1. list all the certs first
1.1.13.2.2. then describe the cert
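The same openssl x509 inspection works on any PEM cert; eg, against a throwaway self-signed cert (all names here are made up):

```shell
# Generate a throwaway self-signed cert, then inspect it the same way
# you would inspect one pulled from a live endpoint or a secret.
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=example.test" \
  -keyout /tmp/demo-key.pem -out /tmp/demo-cert.pem
openssl x509 -noout -subject -enddate -in /tmp/demo-cert.pem
```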
1.2. Kubernetes (K8s)
1.2.1. Documentation
1.2.1.1. what is kubernetes
1.2.1.2. User Guide
1.2.1.3. Xoriant Blog - K Building Blocks
1.2.1.4. Network Design
1.2.1.5. Tutorials
1.2.1.6. Security Best Practices
1.2.1.6.1. good, only slightly TwistLock biased
1.2.1.7. 4-Day Docker & Kubernetes Training
1.2.1.8. KubeWeekly
1.2.1.8.1. TONS of K8s relevant info
1.2.2. has
1.2.2.1. Cluster - a group of nodes
1.2.2.1.1. Node - a physical or virtual machine
1.2.2.1.2. allow isolation between pods within a cluster - perhaps for different teams, perhaps by environment (dev, test, prod)
1.2.2.1.3. A production cluster should have at least 3 nodes
1.2.2.1.4. disks
1.2.2.2. Master Controller (typically 1)
1.2.2.2.1. has
1.2.2.3. command line utility
1.2.2.3.1. kubectl
1.2.2.4. Services
1.2.2.4.1. integrate w HashiCorp Vault?
1.2.2.4.2. single endpoint to multiple pods to provide consistent point of entry for service consumer
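A minimal Service manifest showing that single entry point (the name, label, and ports are made up):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-svc          # hypothetical name
spec:
  selector:
    app: web             # routes to every pod carrying this label
  ports:
    - port: 80           # stable port service consumers use
      targetPort: 8080   # port the pods actually listen on
```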
1.2.2.5. Networking
1.2.2.5.1. IP-per-Pod model: IP addresses applied at a Pod level
1.2.2.5.2. Google Compute Engine
1.2.2.5.3. Service
1.2.2.6. namespaces
1.2.2.6.1. create subdomains for services. <service-name>.<namespace-name>.svc.cluster.local.
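So a service named web-svc in namespace team-a (both names hypothetical) gets this cluster-internal FQDN:

```shell
# Constructing the cluster-internal FQDN from its parts
service=web-svc
namespace=team-a
echo "${service}.${namespace}.svc.cluster.local"
# → web-svc.team-a.svc.cluster.local
```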
1.2.2.7. Labels
1.2.2.8. Secrets
1.2.2.8.1. stored in etcd (base64-encoded, not encrypted by default)
1.2.2.8.2. scoped to a namespace; exposed to the containers that mount or reference them
1.2.2.8.3. Secrets Management (more here than just K8s)
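A minimal Secret manifest (the name is made up; values under data: must be base64-encoded - cGFzcw== is just "pass"):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-creds        # hypothetical name
type: Opaque
data:
  password: cGFzcw==    # base64("pass") - encoding, not encryption
```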
1.2.2.9. contexts
1.2.2.9.1. seems to be
1.2.2.10. console
1.2.2.10.1. GUI
1.2.2.10.2. can be used to explore API
1.2.3. kubectl commands
1.2.3.1. kubectl cheat sheet
1.2.3.2. kubectl cluster-info
1.2.3.2.1. gets info about the cluster
1.2.3.3. kubectl get
1.2.3.3.1. lists the objects in the cluster
1.2.3.4. kubectl proxy
1.2.3.4.1. creates a proxy between your local machine and the K8s API server - allows access to the API
1.2.3.4.2. open a browser to http://localhost:8001/ui for the K8s GUI
1.2.3.5. kubectl expose
1.2.3.5.1. exposes deployment as a service externally
1.2.3.6. kubectl describe
1.2.3.6.1. describes object w a lot of details
1.2.3.7. kubectl run
1.2.3.7.1. creates a deployment (kubectl 1.18+ creates a bare pod instead)
1.2.3.8. kubectl config
1.2.3.8.1. kubectl config get-contexts
1.2.3.8.2. kubectl config use-context <context-name>
1.2.3.9. kubectl exec
1.2.3.9.1. run a command in a container. Often used to get to a shell
1.2.3.10. kubectl attach
1.2.3.10.1. (look this up)
1.2.3.11. kubectl top pods
1.2.3.11.1. show top pods by CPU load
1.2.4. k8s runs
1.2.4.1. deployments
1.2.4.2. jobs
1.2.4.2.1. if a job's pod fails, it will be retried (up to the job's backoffLimit)
1.2.4.3. bare pod
1.2.4.3.1. if you want something to just terminate if it fails (eg, building new infrastructure)
1.2.4.4. Replication Controllers
1.2.5. DNS
1.2.5.1. creates its own dns
1.2.5.1.1. service.namespace.svc.cluster.local
1.2.6. deployments
1.2.6.1. deployment YAML
1.2.6.1.1. resources
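A skeletal deployment YAML with a resources block (names, image, and limits are all made up):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                 # hypothetical
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: gcr.io/my-project/web:1.0   # hypothetical image
          resources:
            requests:       # what the scheduler reserves
              cpu: 100m
              memory: 128Mi
            limits:         # hard caps enforced at runtime
              cpu: 500m
              memory: 256Mi
```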
1.3. Docker
1.3.1. sample Dockerfiles
1.3.2. Dockerfile commands
1.3.2.1. FROM
1.3.2.2. MAINTAINER (deprecated - use LABEL maintainer=... instead)
1.3.2.3. RUN
1.3.2.4. ENTRYPOINT
1.3.3. commands
1.3.3.1. docker pull
1.3.3.1.1. pull an image from another repo
1.3.3.1.2. docker pull <tag>
1.3.3.2. docker push
1.3.3.2.1. push an image to a repo
1.3.3.3. docker images
1.3.3.3.1. list all images
1.3.3.4. docker ps
1.3.3.4.1. show currently running docker processes
1.3.3.4.2. -a (include stopped containers)
1.3.3.5. docker build
1.3.3.5.1. docker build -t <tag> <Dockerfile location>
1.3.3.6. docker run
1.3.3.6.1. docker run <tag> <params>
1.3.3.6.2. -it (interactive, with a TTY attached)
1.3.3.6.3. -v <from>:<to>:<permissions> (mount a host path into the container)
1.3.3.7. docker logs
1.3.3.7.1. docker logs <container name>
1.3.3.8. docker inspect
1.3.3.8.1. docker inspect <container name>
1.3.3.9. docker rm
1.3.3.9.1. docker rm <container name>
1.3.3.10. docker rmi
1.3.3.10.1. remove image <tag>
1.3.3.11. docker cp
1.3.3.11.1. docker cp <from> <to>
1.3.4. cookbooks
1.3.4.1. delete all images with <none> tag (ie, dangling images)
1.3.4.1.1. docker image prune
1.3.4.1.2. or: docker rmi $(docker images -f dangling=true -q)
1.3.4.1.3. FRAGILE: docker images | grep '<none>' | cut -c 72-83 | xargs -n1 docker image rm (depends on column positions)
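The fragile part of the grep/cut approach is cut -c 72-83, which depends on exact column positions; matching fields is sturdier. Demonstrated against canned docker images output (repositories and IDs are made up):

```shell
# Canned 'docker images' output; awk selects the IMAGE ID (field 3)
# of rows whose REPOSITORY (field 1) is literally <none>.
sample='REPOSITORY   TAG      IMAGE ID       CREATED        SIZE
nginx        latest   605c77e624dd   2 weeks ago    141MB
<none>       <none>   b6f507652425   3 weeks ago    72.8MB'
printf '%s\n' "$sample" | awk '$1 == "<none>" { print $3 }'
# → b6f507652425
```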
1.3.5. tools
1.3.5.1. container diff
1.3.5.1.1. GoogleContainerTools/container-diff
1.4. Google Cloud Platform
1.4.1. Networking
1.4.1.1. https://cloud.google.com/compute/docs/networking
1.4.1.2. Different networks - even in the same project - cannot communicate directly with each other - they must communicate through the internet (or possibly through a common VPN)
1.4.1.3. Each network can have different subnets. Subnets can communicate with each other - given appropriate firewall rules
1.4.1.4. Even hosts on the same subnet cannot communicate without a firewall rule allowing it, giving much finer granularity than traditional networks.
1.4.1.5. Tags can be used for creating firewall rules, greatly simplifying granular firewall rule creation. The same tags can be used in multiple networks, however,
1.4.1.5.1. Tags are not recognized across networks. eg, if I tag server A "ping-from" on network X and server B "ping-to" on network Y, a rule on network Y allowing ping-from to ping ping-to will not let A ping B's external IP. But a rule on network Y allowing A's external IP to ping ping-to systems will.
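A sketch of the tag-based rule described above (rule, network, and tag names are hypothetical; not verified against the current gcloud SDK):

```shell
# Allow ICMP from instances tagged ping-from to instances tagged
# ping-to - within one network only, since tags do not cross networks.
gcloud compute firewall-rules create allow-ping \
  --network my-net \
  --allow icmp \
  --source-tags ping-from \
  --target-tags ping-to
```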
1.4.1.6. commands
1.4.1.6.1. gcloud compute networks create <network_name> --mode auto (newer SDKs use --subnet-mode auto)
1.4.2. regions & zones
1.4.2.1. gcloud config set compute/zone us-east1-d
1.4.3. Container Engine
1.4.3.1. built on Kubernetes
1.4.3.1.1. Kubernetes clusters
1.4.3.2. Docker
1.4.3.2.1. gcloud docker -- <docker command>
1.4.3.3. Container Registry
1.4.3.3.1. gcr.io/<project-name>
1.4.3.3.2. gcloud container images list
1.4.3.4. Good Intro
1.4.4. Compute Engine
1.4.4.1. gcloud compute images list
1.4.4.1.1. list all the images available
1.4.5. Cloud Shell
1.4.5.1. appears to be one instance per user, independent of project - the same instance across multiple projects (my k8s config shows clusters in multiple projects)
1.4.6. Cook Books
1.4.6.1. Take a standard image, add an application, make an image, deploy in a pod
1.4.7. Tutorials
1.4.7.1. Jenkins in GKE
1.4.7.1.1. See also
1.4.8. Projects
1.4.8.1. gcloud projects list
1.4.8.2. Guide to projects, permissions, & accounts
1.4.9. AAA
1.4.9.1. 2FA
1.4.9.1.1. Enforcement
1.4.9.2. Google Cloud Directory Sync
1.4.9.2.1. best practices
1.4.10. to authenticate in SDK:
1.4.10.1. gcloud auth application-default login
1.4.11. Documentation
1.4.11.1. Google Cloud Compute Tips
1.4.12. gcloud
1.4.12.1. config
1.4.12.1.1. gcloud config configurations list
1.4.12.1.2. gcloud config configurations activate <configuration-name>
1.4.12.2. --format
1.4.12.2.1. table format, no labels
1.4.12.2.2. json format
2. K8s
2.1. What version of docker/containerd am I running?
2.1.1. kubectl get nodes -o wide
3. GCP Service Accounts
3.1. Service agents | IAM Documentation | Google Cloud
4. gcloud
4.1. formatting
4.1.1. how to show the default formatting for a command: force a broken table
4.1.1.1. EG
4.1.1.1.1. g compute routes list --format="table("
4.1.1.1.2. ERROR: (gcloud.compute.routes.list) More tokens expected [table(name, network.basename(), destRange, firstof(nextHopInstance, nextHopGateway, nextHopIp, nextHopVpnTunnel, nextHopPeering, nextHopNetwork, nextHopHub).scope():label=NEXT_HOP, priority) table( *HERE*]
4.1.1.1.3. the table(...) expression shown in the error is the command's default format