AWS Certified DevOps Engineer - Professional


1. DevOps Core Concept

1.1. CI & CD

1.1.1. Continuous Integration

1.1.1.1. is the practice of automating regular code commits, followed by build and test processes designed to highlight integration problems early

1.1.2. Continuous Deployment

1.1.2.1. takes the form of a workflow-based process that accepts a tested software build payload from the CI server

1.2. Deployment

1.2.1. Single Target Deployment

1.2.2. All at once Deployment

1.2.3. Minimum In Service Style Deployment

1.2.4. Rolling Deployment

1.2.5. Blue/Green Deployment

1.3. Testing

1.3.1. A/B Testing

1.3.1.1. sends a percentage of traffic to the Blue environment and the remaining percentage to the Green environment

1.4. Bootstrapping

1.5. Immutable Architecture

1.5.1. means "Don't diagnose or fix; throw away and re-create"

2. Monitoring/Metric/Logging

2.1. CloudWatch

2.1.1. Metric Gathering Service

2.1.1.1. 2 weeks was originally the longest period of time for storing metrics

2.1.1.1.1. CloudWatch launched extended retention of metrics on November 1, 2016, extending storage of all metrics from the previous 14 days to 15 months. CloudWatch Metrics now supports the following three retention schedules: 1-minute datapoints are available for 15 days; 5-minute datapoints are available for 63 days; 1-hour datapoints are available for 455 days

2.1.1.2. Metrics fall under different namespaces

2.1.1.3. You can aggregate data across

2.1.1.3.1. ec2

2.1.1.4. Custom Metrics

2.1.1.4.1. can use custom namespace

2.1.2. Monitoring/Alert Service

2.1.2.1. Alarms

2.1.2.1.1. the alarm period must be equal to or greater than the metric frequency

2.1.2.1.2. an alarm can't invoke actions merely because it is in a particular state; only a change of state can invoke actions

2.1.2.1.3. an alarm action must be in the same region as the alarm

2.1.2.1.4. AWS resources don't send metric data under certain conditions. For example, an EBS volume that is not attached to an EC2 instance won't send data to CloudWatch

2.1.2.1.5. You can create an alarm before its metric exists

2.1.2.2. Alarm State

2.1.2.2.1. OK

2.1.2.2.2. ALARM

2.1.2.2.3. INSUFFICIENT_DATA

2.1.2.3. cli

2.1.2.3.1. mon-put-metric-alarm

2.1.2.3.2. mon-[enable/disable]-alarm

2.1.2.3.3. mon-describe-alarms

2.1.3. Graphing Service

2.2. CloudTrail

2.2.1. the service records all AWS API requests made in your account and delivers log files to you

2.2.2. Logs contain

2.2.2.1. identity of who made the request

2.2.2.2. time of call

2.2.2.3. The source IP of the call

2.2.2.4. request parameters

2.2.2.5. response elements returned by AWS

2.2.3. Purposes

2.2.3.1. Enable security analysis

2.2.3.2. Track changes in your account

2.2.3.3. provide compliance auditing

2.2.4. Two types of trail

2.2.4.1. All regions to one S3 bucket

2.2.4.2. One region to one S3 bucket

2.2.5. Storage

2.2.5.1. S3 Server Side Encryption

2.2.5.2. Store logs as long as you want, or apply archiving and lifecycle policies

2.2.5.3. logs are delivered within 15 min

2.2.5.4. New log files are published every 5 min

2.2.6. Notes

2.2.6.1. AWS CloudTrail records API requests made by a user, or on behalf of the user by an AWS service

2.2.6.2. There is an 'invokedBy' field in the logs

2.3. CloudWatch Logs

2.3.1. Agent for EC2

2.3.1.1. Ubuntu

2.3.1.2. Amazon Linux

2.3.1.3. Windows

2.3.2. Main Purposes

2.3.2.1. Monitor logs from EC2 instances in real time

2.3.2.2. Monitor CloudTrail logged events

2.3.2.3. Archive log data

2.3.3. Terminology

2.3.3.1. Log Event

2.3.3.1.1. Timestamp and message

2.3.3.2. Log stream

2.3.3.2.1. it is a sequence of log events that share a single event source

2.3.3.3. Log Group

2.3.3.3.1. it is a group of log streams that share retention period, monitoring, and access control settings

2.3.3.4. Metric Filters

2.3.3.4.1. define how the service extracts metrics from events

2.3.3.5. Retention Settings

2.3.4. Filter

2.3.4.1. will be applied ONLY to data received after the filter was created

2.3.4.2. it will return the first 50 results

2.3.5. Subscription Filter

2.3.5.1. Amazon Kinesis Stream

2.3.5.2. AWS Lambda

2.3.5.3. Amazon Kinesis Firehose

2.4. CloudWatch Events

2.4.1. it is similar in purpose to CloudTrail, but delivers events in near real time (much faster)

2.4.2. targets

2.4.2.1. Lambda

2.4.2.2. Kinesis Stream

2.4.2.3. SNS topics

2.4.2.4. built-in targets

2.4.2.5. can have multiple targets

3. Security/Governance/Validation

3.1. Delegation and Federation

3.1.1. Federation

3.1.1.1. allows users from an external Identity Provider (IdP) to access your AWS system

3.1.1.2. Two Types

3.1.1.2.1. Corporate/Enterprise Identity Federation

3.1.1.2.2. Web or Social Identity Federation

3.1.2. Roles

3.1.2.1. is an object that contains two policy documents

3.1.2.1.1. TRUST policy

3.1.2.1.2. ACCESS policy

3.1.3. Sessions

3.1.3.1. A session is a set of temporary credentials

3.1.3.2. an access key and secret key with an expiration

3.1.3.3. you obtain them using the Security Token Service (STS)

3.1.3.3.1. sts:AssumeRole

3.1.3.3.2. sts:AssumeRoleWithSAML

3.1.3.3.3. sts:AssumeRoleWithWebIdentity

3.1.3.4. It may or may not involve Cognito, depending on whether it is web identity federation

3.1.4. Corporate/Enterprise Identity Federation

3.1.4.1. Types of Request

3.1.4.1.1. Console - Assume Role

3.1.4.1.2. API - GetFederationToken

3.1.4.1.3. Console - AssumeRoleWithSAML

3.1.4.2. The federation proxy has sts:AssumeRole permissions

3.1.4.3. response

3.1.4.3.1. AccessKey

3.1.4.3.2. SecretKey

3.1.4.3.3. Session Token

3.1.4.3.4. ExpirationDate

3.1.5. Web Identity Federation

3.1.5.1. Cognito

3.1.5.1.1. can orchestrate unauthenticated identities

3.1.5.1.2. can merge an unauthenticated identity into an authenticated identity if both are provided

3.1.5.1.3. can merge multiple identities, e.g. Facebook, Twitter, etc.

3.1.5.1.4. when identities are merged, any synced data is merged

3.1.5.1.5. Types:

4. High Availability & Elasticity

4.1. EC2 AutoScaling

4.1.1. Lifecycle

4.1.1.1. Lifecycle Start

4.1.1.1.1. When you or the Auto Scaling group launch an instance

4.1.1.2. Lifecycle End

4.1.1.2.1. When you or the Auto Scaling group terminate an instance

4.1.1.3. On scale-in or a failed health check, the instance will be terminated

4.1.1.4. Enter StandBy

4.1.1.4.1. remove the instance from InService and troubleshoot it

4.1.1.4.2. instance is still managed by autoscaling group

4.1.1.5. Detach

4.1.1.5.1. the instance is removed from the AG but keeps running

4.1.2. Lifecycle Hooks

4.1.2.1. EC2_INSTANCE_LAUNCHING

4.1.2.1.1. for example, install a new app during launching

4.1.2.2. EC2_INSTANCE_TERMINATING

4.1.2.2.1. for example copy logs before terminating

4.1.2.3. work process

4.1.2.3.1. 1) The AG responds to a scale-out event by launching a new instance

4.1.2.3.2. 2) The AG puts the instance into the Pending:Wait state

4.1.2.3.3. 3) The AG sends a message to the notification target defined for the hook

4.1.2.3.4. 4) It waits until you tell it to continue or the timeout ends

4.1.2.3.5. 5) You can perform any action, e.g. install software

4.1.2.3.6. 6) By default, the instance waits for an hour, then changes state to Pending:Proceed and moves to the InService state

4.1.2.3.7. 7) If you need more time, you can restart the timeout period or change the timeout

4.1.2.4. Notes:

4.1.2.4.1. You can change the heartbeat timeout, or set it when you create the lifecycle hook using the `heartbeat-timeout` parameter

4.1.2.4.2. You can call `complete-lifecycle-action` to finish the hook

4.1.2.4.3. `record-lifecycle-action-heartbeat` adds more time to the timeout

4.1.2.4.4. 48 hours is the maximum time you can keep a server in the wait state

4.1.2.4.5. When a Spot Instance terminates, you must still complete the lifecycle actions

4.1.2.5. Cooldown

4.1.2.5.1. starts when the instance enters the InService state

4.1.2.6. Result Response of instance

4.1.2.6.1. Abandon

4.1.2.6.2. Continue

4.1.3. Launch Configuration

4.1.3.1. Note

4.1.3.1.1. When you create an AG you MUST specify an LC

4.1.3.1.2. One LC can be associated with many AGs

4.1.3.1.3. One AG can have only one LC

4.1.3.1.4. you can NOT update an LC, only create a new one

4.1.3.1.5. If you change the LC on an AG, new instances will use the new LC; old instances will NOT be recreated

4.1.3.2. Spot Instances

4.1.3.2.1. You CAN use lifecycle hooks with spot instances

4.1.3.2.2. LCs for on-demand and spot instances are different

4.1.3.2.3. set the bid price in your LC

4.1.3.2.4. If you want to change the bid price you need to create a new LC

4.1.3.2.5. If an instance is terminated, the AG will try to launch a new one

4.1.3.2.6. If your bid price is below the spot price, the AG will wait

4.1.3.3. LC from EC2 instance

4.1.3.3.1. you can create LC from running EC2 instance

4.1.3.3.2. some properties of EC2 instances are not supported by AGs

4.1.3.3.3. Differences between creating an LC from scratch and from an EC2 instance

4.1.3.4. Self Healing

4.2. RDS

4.2.1. Engines

4.2.1.1. MySQL

4.2.1.2. MariaDB

4.2.1.3. PostgreSQL

4.2.1.4. MSSQL

4.2.1.4.1. Storage

4.2.1.5. Oracle

4.2.1.6. AuroraDB

4.2.2. Managed Administration

4.2.2.1. RDS

4.2.2.1.1. Provisioning Infrastructure

4.2.2.1.2. Install Software

4.2.2.1.3. Automatic Backups

4.2.2.1.4. Automatic Patching

4.2.2.1.5. Synchronous Data Replication

4.2.2.1.6. Automatic Failover

4.2.2.2. You

4.2.2.2.1. Settings

4.2.2.2.2. Schema

4.2.2.2.3. Performance Tuning

4.2.3. Scaling

4.2.3.1. Vertical Scaling / Scale Up

4.2.3.1.1. Compute

4.2.3.1.2. Storage

4.2.3.2. Read Scale Out

4.2.3.2.1. Up to 5 Read Replicas

4.2.3.2.2. Read Replica can be promoted to Master

4.2.3.2.3. can create Read Replica of Read Replica

4.2.3.3. Sharding

4.2.3.3.1. Split your tables across multiple DBs

4.2.3.3.2. Split Tables that aren't joined by queries

4.3. Aurora

4.3.1. you can not encrypt an existing unencrypted DB; you need to create a new encrypted one

4.3.2. compatible with MySQL 5.6

4.3.3. Storage

4.3.3.1. Fault Tolerant and Auto Healing

4.3.3.2. Disk failures are repaired in the background

4.3.3.3. Detects database crashes and restarts

4.3.3.4. No crash recovery or cache rebuilding required

4.3.3.5. Automatic failover to one of up to 15 read replicas

4.3.3.6. Storage autoscaling from 10 GB up to 64 TB

4.3.3.6.1. without any disruption

4.3.4. Backups

4.3.4.1. Automatic, Continuous and incremental backups

4.3.4.2. Point-in-time recovery with per-second granularity

4.3.4.3. retention period Up to 35 Days

4.3.4.4. Stored in S3

4.3.4.5. no impact on database performance

4.3.5. Snapshots

4.3.5.1. Stored to S3

4.3.5.2. Keep until you delete them

4.3.5.3. Incremental

4.3.6. Data failures

4.3.6.1. 6 copies of your data

4.3.6.2. 3 availability zones

4.3.6.3. restore in healthy AZ

4.3.6.4. restore from point-in-time recovery and snapshots

4.3.7. Fault Tolerance

4.3.7.1. data is divided into 10 GB segments across many disks

4.3.7.2. Transparently handles data loss

4.3.7.3. Can lose up to 2 copies of your data without affecting writes

4.3.7.4. Can lose up to 3 copies of data without affecting reads

4.3.7.5. All storage is auto-healing

4.3.8. Replicas

4.3.8.1. Aurora Replica

4.3.8.1.1. Shares the same underlying volume as the primary instance

4.3.8.1.2. Updates made by the primary instance are immediately visible across all replicas

4.3.8.1.3. up to 15

4.3.8.1.4. Performance impact on the primary is low

4.3.8.1.5. A replica can be the failover target without data loss

4.3.8.2. MySQL Replica

4.3.8.2.1. replay by transaction

4.3.8.2.2. up to 5

4.3.8.2.3. high performance impact on primary

4.3.8.2.4. A replica can be the failover target, with potentially minutes of data loss

4.3.9. Security

4.3.9.1. all instances must be created in VPC

4.3.9.2. SSL

4.3.9.3. KMS data at rest

4.3.9.4. encrypted storage

4.3.9.5. encrypted backups

4.3.9.6. encrypted snapshots

4.3.9.7. encrypted replicas

4.3.9.8. you can not encrypt an existing unencrypted DB

4.4. DynamoDB

4.4.1. Primer

4.4.1.1. NoSQL

4.4.1.2. Fully managed

4.4.1.3. Predictable, fully manageable performance

4.4.1.4. seamless scalability

4.4.1.5. no visible servers

4.4.1.6. no practical storage limitations

4.4.1.7. Fully resilient and highly available

4.4.1.8. Performance scales in a linear way

4.4.1.9. Fully integrated with IAM

4.4.1.10. Structure

4.4.1.10.1. Tables

4.4.1.10.2. Items (Rows)

4.4.1.10.3. Attributes

4.4.1.10.4. Hash Key/Partition Key

4.4.1.10.5. Sort Key/Range Key

4.4.1.11. Data Types

4.4.1.11.1. String

4.4.1.11.2. Number

4.4.1.11.3. Binary

4.4.1.11.4. Boolean

4.4.1.11.5. Null

4.4.1.11.6. Document JSON (List or Map)

4.4.1.11.7. Set (array)

4.4.1.12. Write Capacity Unit (WCU)

4.4.1.12.1. Number of 1KB blocks per second

4.4.1.12.2. Writes go to at least 2 locations before the write completes

4.4.1.13. Read Capacity Unit (RCU)

4.4.1.13.1. Number of 4 KB blocks per second

4.4.1.13.2. Reads come from only 1 location

4.4.1.13.3. Eventually Consistent By Default

4.4.1.13.4. You can request strongly (immediately) consistent reads

4.4.1.13.5. If your items are smaller than 4 KB in size, each read capacity unit will yield one strongly consistent read per second or two eventually consistent reads per second.

4.4.1.14. Unlike SQL, the schema is not defined at the database level
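The WCU/RCU rules above can be sketched as a small capacity calculator (a sketch only; `wcu_for` and `rcu_for` are illustrative helper names, not an AWS API):

```python
import math

def wcu_for(item_size_kb: float, writes_per_sec: int) -> int:
    # One WCU = one 1 KB write per second; item size rounds up to whole 1 KB blocks
    return math.ceil(item_size_kb) * writes_per_sec

def rcu_for(item_size_kb: float, reads_per_sec: int, strongly_consistent: bool = False) -> int:
    # One RCU = one strongly consistent 4 KB read per second,
    # or two eventually consistent 4 KB reads per second
    blocks = math.ceil(item_size_kb / 4)
    rcu = blocks * reads_per_sec
    return rcu if strongly_consistent else math.ceil(rcu / 2)

print(wcu_for(3, 10))          # 30: 3 KB items at 10 writes/s
print(rcu_for(3, 100, True))   # 100: items under 4 KB, strongly consistent
print(rcu_for(3, 100, False))  # 50: eventual consistency halves the RCU cost
```

This matches the note above: for items under 4 KB, each RCU yields one strongly consistent read per second or two eventually consistent reads per second.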

4.4.2. Partition

4.4.2.1. Partitions are the underlying storage and processing nodes of DynamoDB

4.4.2.2. 1 partition

4.4.2.2.1. 10 GB

4.4.2.2.2. 3000 RCU

4.4.2.2.3. 1000 WCU

4.4.2.3. when > 10Gb or > 3000 RCU or > 1000 WCU

4.4.2.3.1. get new partition

4.4.2.3.2. data is automatically spread over time

4.4.2.4. distribution between partitions

4.4.2.4.1. based on HASH / Partition Key

4.4.2.5. Limitations

4.4.2.5.1. Partitions will automatically increase

4.4.2.5.2. Partitions don't decrease

4.4.2.5.3. Allocated WCU and RCU are split between partitions

4.4.2.5.4. Each partition key's throughput is limited

4.4.2.6. calculating

4.4.2.6.1. MAX(WCU/1000 + RCU/3000, Storage/10 GB), rounded up to a whole number
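The partition formula above can be sketched in code (the `partitions` helper is illustrative, implementing the MAX/round-up rule stated here):

```python
import math

def partitions(wcu: int, rcu: int, storage_gb: float) -> int:
    # MAX((WCU/1000 + RCU/3000), Storage/10 GB), rounded up to a whole number
    by_throughput = wcu / 1000 + rcu / 3000
    by_size = storage_gb / 10
    return math.ceil(max(by_throughput, by_size))

# 5000 WCU + 6000 RCU -> 5 + 2 = 7 partitions by throughput (size only needs 5)
print(partitions(5000, 6000, 50))  # 7
# A small 8 GB table with modest throughput fits in 1 partition
print(partitions(500, 1500, 8))    # 1
```

Note how the allocated WCU and RCU are then split between those partitions, which is why hot keys on a large table can throttle well below the table's total capacity.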

4.4.3. GSI/LSI

4.4.3.1. Global Secondary Indexes

4.4.3.1.1. Allows to use alternative partition key

4.4.3.1.2. As with an LSI, you can choose

4.4.3.1.3. A GSI doesn't share WCU and RCU with the table; it has its own

4.4.3.1.4. changes written Async

4.4.3.1.5. Supports ONLY eventual consistency

4.4.3.2. Local Secondary Indexes

4.4.3.2.1. Contain

4.4.3.2.2. Any data written to the table is copied asynchronously to any LSIs

4.4.3.2.3. Shares WCU and RCU with the table

4.4.3.2.4. sparse index

4.4.3.2.5. Operations

4.4.3.2.6. Concerns about LSI

4.4.3.2.7. Supports two consistencies

4.4.4. Stream & Replication

4.4.4.1. Stream

4.4.4.1.1. it is an ordered record of updates to a DynamoDB table

4.4.4.1.2. When stream is enabled

4.4.4.1.3. AWS guarantees that each change in the table appears in the stream only once

4.4.4.1.4. All changes to the table appear in the stream in near real time

4.4.4.1.5. The stream retains records from NOW back 24 hours (with the KEYS_ONLY view it contains only the key attributes)

4.4.4.1.6. can be used in

4.4.4.1.7. Stream View

4.4.4.1.8. aws dynamodb create-table \
    --table-name BarkTable \
    --attribute-definitions AttributeName=Username,AttributeType=S AttributeName=Timestamp,AttributeType=S \
    --key-schema AttributeName=Username,KeyType=HASH AttributeName=Timestamp,KeyType=RANGE \
    --provisioned-throughput ReadCapacityUnits=5,WriteCapacityUnits=5 \
    --stream-specification StreamEnabled=true,StreamViewType=NEW_AND_OLD_IMAGES

4.4.4.2. Replication

4.4.4.2.1. Stream + Kinesis

4.4.5. DeepDive

4.4.5.1. Performance

4.4.5.1.1. Write

4.4.5.1.2. Read

4.4.5.1.3. Add a random prefix to the partition key to distribute load across partitions

4.5. SQS

4.5.1. messages up to 256 KB

4.5.2. The SQS Extended Client Library allows handling messages > 256 KB using S3

4.5.3. Ensures delivery of each message at least once

4.5.4. Standard queues are not FIFO; ordering is not guaranteed

4.5.5. Messages can be retained up to 14 days

4.5.6. Long polling reduces the cost of frequent polling

4.5.6.1. a long-poll request waits up to 20 seconds if the queue is empty

4.5.7. $0.50 for every million requests, plus data transfer charges
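The pricing and long-polling figures above can be combined into a rough cost comparison (a sketch: the one-second short-polling interval is an assumed example for illustration, not from the source):

```python
# Compare monthly request counts for one idle consumer polling one empty queue.
SECONDS_PER_MONTH = 30 * 24 * 3600
PRICE_PER_MILLION = 0.50  # USD per million requests, per the figure above

def monthly_cost(poll_interval_s: float) -> float:
    # Each poll is one billable request; cost scales with request volume
    requests = SECONDS_PER_MONTH / poll_interval_s
    return requests / 1_000_000 * PRICE_PER_MILLION

short = monthly_cost(1)    # short polling once per second (assumed interval)
long_ = monthly_cost(20)   # long polling: each call waits up to 20 s on an empty queue
print(f"short polling: ${short:.2f}/month")  # ~$1.30
print(f"long polling:  ${long_:.2f}/month")  # ~$0.06
```

A 20-second wait per request means 20x fewer billable requests for the same idle consumer, which is the cost reduction the note above refers to.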

4.5.8. Use cases

4.5.8.1. prioritisation using two SQS queues

4.5.8.2. parallel processing: reading from their own queues, different components perform different roles

4.5.8.2.1. SNS -> to many SQS

4.6. Kinesis

4.6.1. Stream

4.6.1.1. Collect and process large streams of data in real time

4.6.1.2. Create data processing application

4.6.1.2.1. Read data from the stream

4.6.1.2.2. Process records

4.6.1.3. Scenarios

4.6.1.3.1. Fast log processing

4.6.1.3.2. Realtime metric and reporting

4.6.1.3.3. Realtime data analytics

4.6.1.3.4. Complex stream processing

4.6.1.4. Benefits

4.6.1.4.1. Realtime aggregation of data

4.6.1.4.2. Loading aggregated data into data warehouses / mapreduce cluster

4.6.1.4.3. Durability and Elasticity

4.6.1.4.4. Parallel application readers

4.6.2. Analytics

4.6.2.1. The easiest way to run SQL queries against streaming data

4.6.3. Firehose

4.6.3.1. The easiest way to load streaming data to

4.6.3.1.1. S3

4.6.3.1.2. Redshift

4.6.3.2. That enables near-realtime analytics

4.6.3.3. Batch size or interval controls how often data is uploaded to S3 or Redshift

4.6.3.4. Firehose API or Linux agent

4.6.3.5. compression

4.6.3.6. scalable

4.6.3.7. secure - encryption

4.6.3.8. monitoring through CloudWatch

4.6.3.9. Pay as you go

4.6.3.9.1. pay only for data and nothing for infrastructure

4.7. Kinesis vs SQS

4.7.1. Kinesis

4.7.1.1. Data can be processed at the same time, or within 24 hours, by different consumers

4.7.1.2. Data can be replayed within that time window

4.7.1.3. 24 hours of retention

4.7.2. SQS

4.7.2.1. You can write to multiple queues and then read from multiple sources, but you can not reuse the information later

4.7.2.2. up to 14 days of retention

5. CI/CD/Automation

5.1. CloudFormation

5.1.1. Stack

5.1.1.1. Stack Updating

5.1.1.2. Stack Creation

5.1.1.2.1. Upload Template to S3

5.1.1.2.2. Syntax Check

5.1.1.2.3. Stack Name

5.1.1.2.4. Parameter Verification

5.1.1.2.5. Processing the template and Stack creation

5.1.1.2.6. Resource Ordering

5.1.1.2.7. Stack Completion or Rollback

5.1.1.3. Stack Nesting

5.1.1.3.1. Why

5.1.2. Template

5.1.2.1. Anatomy

5.1.2.1.1. Parameters

5.1.2.1.2. Mappings

5.1.2.1.3. Conditions

5.1.2.1.4. Resources

5.1.2.1.5. Outputs

5.1.2.2. Intrinsic Functions

5.1.2.2.1. Base64

5.1.2.2.2. FindInMap

5.1.2.2.3. GetAtt

5.1.2.2.4. GetAZs

5.1.2.2.5. Join

5.1.2.2.6. Select

5.1.2.2.7. Ref

5.1.2.2.8. Condition Functions

5.1.2.3. DependsOn

5.1.2.4. DeletionPolicy

5.1.2.4.1. Specifies what happens to each resource when the stack is deleted

5.1.2.4.2. Delete

5.1.2.4.3. Retain

5.1.2.4.4. Snapshot

5.1.2.4.5. Default is Delete

5.1.2.5. Wait Conditions & Handlers

5.1.2.5.1. Wait conditions and handlers are two related components

5.1.2.5.2. A wait condition handler is a CloudFormation resource that has no properties but generates a presigned URL, which can be used to communicate SUCCESS or FAILURE

5.1.2.5.3. Wait condition generally has four components:

5.1.2.6. Custom Resources

5.1.2.6.1. backed by

5.1.2.6.2. Type = Custom::*

5.1.2.6.3. ServiceToken: Ref: LambdaFunction or SNS topic

5.1.2.6.4. Request

5.1.2.6.5. Respond

5.1.2.7. Creation Policy

5.1.2.7.1. Can ONLY be used with EC2 instances and Autoscaling groups (currently)

5.1.2.7.2. Two Main Components

5.1.2.7.3. When the number of signals is equal to or greater than the Count in the creation policy, the resource is marked as complete

5.1.3. Stack Policy

5.1.3.1. governs what can be changed and by whom

5.1.3.2. can't be deleted after creation; it can only be changed

5.1.3.3. Stack Policy

5.1.3.3.1. Can prevent resource updates

5.1.3.3.2. Evaluated first before changing resources

5.1.3.3.3. By default, the absence of a stack policy allows any updates

5.1.3.3.4. Once a stack policy is applied, it can not be removed

5.1.3.3.5. Once a policy is applied, by default ALL resources are protected: Update:* is denied

5.1.3.3.6. To remove the default protection you need to update the policy with an explicit "Allow" for one, more, or all resources

5.1.3.3.7. Update:*

5.1.3.3.8. Update Impact

5.2. OpsWorks

5.2.1. Two Part

5.2.1.1. CHEF Agent

5.2.1.1.1. Configuration of Machines

5.2.1.2. OpsWorks Automation Engine

5.2.1.2.1. Creates, updates, and deletes various AWS infrastructure components

5.2.1.2.2. Handling Load Balancing

5.2.1.2.3. AutoScaling

5.2.1.2.4. AutoHealing

5.2.1.2.5. LifeCycle Events

5.2.2. Chef

5.2.2.1. Cookbooks

5.2.2.2. Recipes

5.2.2.3. OpsWorks and Chef are declarative desired state engines

5.2.3. Instances require INTERNET access (default vpc, public subnets or private subnets with nat)

5.2.4. Linux and Windows machines can NOT be mixed

5.2.5. changes to OS etc. apply to new instances ONLY

5.2.6. An RDS instance can be associated with ONLY one Stack

5.2.7. The Stack Clone operation does NOT clone RDS instances

5.2.8. Specific Resources

5.2.8.1. EIP

5.2.8.2. RDS

5.2.8.3. Volumes

5.2.9. Layers

5.2.9.1. OpsWorks

5.2.9.1.1. Traditional Layer

5.2.9.2. ECS

5.2.9.2.1. Integrate with ECS cluster

5.2.9.3. RDS

5.2.10. Events

5.2.10.1. Setup

5.2.10.1.1. instance has been created

5.2.10.2. Configure

5.2.10.2.1. runs on all instances in the stack when an instance enters or leaves the online state, an EIP is associated or disassociated, or an ELB is attached to or detached from the layer

5.2.10.3. Deploy

5.2.10.3.1. when you run deploy command on the instance

5.2.10.4. Undeploy

5.2.10.4.1. occurs when you delete an app from the layer or run the undeploy command

5.2.10.5. Shutdown

5.2.10.5.1. runs when an instance is shut down, before it is terminated

5.2.11. Instances

5.2.11.1. Stack and Layer influence on Instances

5.2.11.2. Stack has default OS for instances

5.2.11.3. Layers contain recipes and general, network, and disk configs

5.2.11.4. Types

5.2.11.4.1. 24x7

5.2.11.4.2. Time based instances

5.2.11.4.3. Load based instances

5.2.11.5. parameters that can be overwritten

5.2.11.5.1. Subnet Id

5.2.11.5.2. SSH Key

5.2.11.5.3. os

5.2.11.5.4. root device type

5.2.11.5.5. volume type

5.2.11.6. Instance Auto Healing

5.2.11.6.1. Each Opswork instance has Opswork Agent

5.2.11.6.2. Heartbeat style health check

5.2.12. Applications

5.2.12.1. Creating

5.2.12.1.1. App Name

5.2.12.1.2. Document Root

5.2.12.1.3. Data source

5.2.12.1.4. App source

5.2.12.1.5. Env Variables

5.2.12.1.6. Domain Name

5.2.12.1.7. SSL Settings

5.2.12.2. Deployment

5.2.12.2.1. it executes the deployment of the app against instances via a command

5.2.12.2.2. The application id is passed to the command

5.2.12.2.3. App parameters are passed to the Chef environment

5.2.12.2.4. The deploy recipe accesses the app source and pulls it

5.2.12.2.5. 5 versions of the app are maintained

5.2.12.3. Deployment Commands

5.2.12.3.1. Creates an application deployment

5.2.12.3.2. a stack-level command to be executed against the stack

5.2.12.3.3. Commands

5.2.13. Berkshelf

5.2.13.1. was introduced in OpsWorks with Chef 11.10

5.2.14. Databags

5.2.14.1. is a global JSON object accessible from within the CHEF framework

5.2.14.2. There are multiple databags, including: STACK, LAYER, APP, INSTANCE

5.2.14.3. Data is accessed through Chef `data_bag_item` & `search` methods

5.2.15. events can be run manually

5.3. Elastic Beanstalk

5.3.1. deploy, monitor and scale an application

5.3.2. shifts focus away from infrastructure: you focus on components and performance, not configuration and specifications

5.3.3. it attempts to remove or simplify infrastructure management, allowing applications to be deployed into an infrastructure environment easily

5.3.4. components

5.3.4.1. Applications

5.3.4.1.1. your entire application is one EB application, OR

5.3.4.1.2. each logical component of your application is its own EB application

5.3.4.1.3. each application provides its own URL

5.3.4.2. Environments

5.3.4.2.1. Each application can have several environments

5.3.4.2.2. An environment is either a single instance or scalable

5.3.4.2.3. types

5.3.4.2.4. App URLs can be swapped between environments

5.3.4.3. Versions

5.3.4.3.1. app can have many versions

5.3.4.3.2. zip file

5.3.4.3.3. version of app can be deployed in environments

5.3.5. Configuration

5.3.5.1. Web Tier

5.3.5.1.1. Scaling

5.3.5.1.2. Instances

5.3.5.1.3. Notifications

5.3.5.1.4. Software Configuration

5.3.5.1.5. Updates and Deployments

5.3.5.1.6. Health

5.3.5.2. Network Tier

5.3.5.3. Data Tier

5.3.5.3.1. Types

5.3.5.3.2. one DB can be used in only one EB environment

5.3.5.3.3. Cloning EB doesn't clone DB

5.3.5.3.4. DB can be moved to other EB

5.3.6. Supported Environments

5.3.6.1. NodeJS

5.3.6.2. PHP

5.3.6.3. Python

5.3.6.4. Ruby

5.3.6.5. Go

5.3.6.6. Java

5.3.6.7. Tomcat

5.3.6.8. Microsoft IIS

5.3.6.9. Docker

5.3.6.9.1. Glassfish

5.3.6.9.2. Go

5.3.6.9.3. Python

5.3.6.9.4. Custom

5.3.7. ebextensions

5.3.7.1. it is a folder named .ebextensions

5.3.7.2. it allows granular configuration of the EB environment and customisation of the resources it contains (EC2, ELB, and others)

5.3.7.3. YAML format

5.3.7.4. files end with .config

5.3.7.5. Files contain key sections

5.3.7.5.1. option_settings

5.3.7.5.2. resources

5.3.7.5.3. other

5.3.7.6. files are processed in alphabetical order

5.3.7.7. leader_instance

5.3.7.7.1. the leader instance is picked during deployment; after deployment it becomes a general instance

5.3.7.7.2. used to make sure certain commands are executed only once

5.3.8. EB Docker

5.3.8.1. Application Source Bundle

5.3.8.1.1. Files

6. Operations

6.1. Instance Type Deepdive

6.1.1. Virtualization

6.1.1.1. Types

6.1.1.1.1. Para-Virtualisation

6.1.1.1.2. Hardware Virtualization (HVM)

6.1.1.2. An AMI supports either one virtualisation type or the other

6.1.1.2.1. Usually, older AMIs support para-virtualisation

6.1.1.2.2. Using HVM gives a much wider choice of instance types and sizes

6.1.1.2.3. Some families only support HVM (e.g. T2)

6.1.1.2.4. Some old instance types don't support HVM (e.g. T1)

6.1.2. Instance Types

6.1.2.1. T

6.1.2.1.1. General Burstable

6.1.2.2. M

6.1.2.2.1. General Purpose

6.1.2.3. C

6.1.2.3.1. CPU utilisation

6.1.2.4. R

6.1.2.4.1. RAM optimised

6.1.2.5. G

6.1.2.5.1. GPU, graphic

6.1.2.6. I

6.1.2.6.1. High IO instances

6.1.2.7. D

6.1.2.7.1. Dense Storage - Disk performance

6.1.3. Instance Size

6.1.4. Instance Generation

6.1.5. Instance Features

6.1.5.1. EBS optimisation

6.1.5.1.1. Dedicated bandwidth, consistently higher throughput

6.1.5.1.2. 500 Mbps - 4000 Mbps

6.1.5.2. enhanced networking

6.1.5.2.1. AWS-supported SR-IOV

6.1.5.2.2. less jitter

6.1.5.2.3. higher throughput

6.1.5.2.4. VM must have driver

6.1.5.3. Instance Store Volumes

6.1.5.4. GPU availability

6.2. EBS Deep Dives

6.2.1. Types

6.2.1.1. Magnetic

6.2.1.1.1. ~100 IOPS

6.2.1.1.2. 2-40 ms latency

6.2.1.1.3. ~10 MB/s throughput

6.2.1.1.4. no burst

6.2.1.2. General purpose SSD

6.2.1.2.1. 3 IOPS per Gb

6.2.1.2.2. Burst up to 3000

6.2.1.2.3. up to 160 MB/s throughput

6.2.1.2.4. Larger volumes can scale to 10,000 IOPS

6.2.1.2.5. IOPS limits assume a 256 KB block size

6.2.1.2.6. The IOPS burst pool starts with 5.4 million I/O credits

6.2.1.3. Provisioned IOPS

6.2.1.3.1. max IOPS 20,000

6.2.1.3.2. up to 320 MB/s throughput

6.2.1.3.3. 50 IOPS per GB of storage max

6.2.1.3.4. 99.9% performance consistency

6.2.2. Terms

6.2.2.1. Capacity

6.2.2.1.1. Amount of data in Gb

6.2.2.2. Throughput

6.2.2.2.1. Data throughput in MB/s for read/write operations

6.2.2.3. Block Size

6.2.2.3.1. Size of each write or read in KB

6.2.2.4. IOPS

6.2.2.4.1. Number of read and write operations per second

6.2.2.5. Latency

6.2.2.5.1. Delay between a read/write request and its completion, in ms

6.2.3. Performance element

6.2.3.1. Instance

6.2.3.2. IO

6.2.3.2.1. You can not maintain MAX IOPS and MAX throughput at the same time

6.2.3.3. network

6.2.3.4. ebs volumes

6.2.4. IOPS vs Throughput

6.2.4.1. e.g., you expect 3000 IOPS and 160 MB/s

6.2.4.1.1. 32 KB * 3000 IOPS = 96 MB/s

6.2.4.1.2. 256 KB * 750 IOPS = 192 MB/s

6.2.4.1.3. 256 KB * 3000 IOPS = 768 MB/s, which is beyond the 160 MB/s MAX

6.2.4.2. Throughput = IOPS * Block Size

6.2.4.3. you need to optimise the block size
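The IOPS-versus-throughput arithmetic above can be checked with a small sketch (using 1 MB = 1000 KB to match the figures above; function names are illustrative):

```python
def throughput_mb_s(iops: int, block_size_kb: int) -> float:
    # Throughput = IOPS * Block Size (1 MB = 1000 KB, as in the figures above)
    return iops * block_size_kb / 1000

def achievable_iops(target_iops: int, block_size_kb: int, max_mb_s: float = 160.0) -> int:
    # A volume delivers whichever limit it hits first: the IOPS cap or the throughput cap
    iops_cap = int(max_mb_s * 1000 / block_size_kb)
    return min(target_iops, iops_cap)

print(throughput_mb_s(3000, 32))   # 96.0  -> under the 160 MB/s cap
print(throughput_mb_s(3000, 256))  # 768.0 -> far beyond the cap
print(achievable_iops(3000, 256))  # 625   -> throughput-limited at 256 KB blocks
```

This is why block size needs optimising: with large blocks the throughput cap, not the IOPS figure, becomes the effective limit.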

6.2.5. RAID

6.2.5.1. We need more

6.2.5.1.1. A large EBS-optimised instance can deliver 32,000 IOPS @ 16 KB and 500 MB/s

6.2.5.1.2. Max 48,000 IOPS @ 16 KB over a 10 Gb network

6.2.5.2. RAID 0

6.2.5.3. LVM Stripe

6.2.5.4. You need to freeze filesystem during snapshotting

6.2.6. Burst Pool

6.2.6.1. MAX Burst is 3000 IOPS

6.2.6.2. The burst pool starts with 5.4 million I/O credits

6.2.6.3. you don't need to worry about provisioning a large volume to get better performance

6.2.7. Tips

6.2.7.1. Pre-warming is no longer required for new EBS volumes

6.2.7.2. Volumes created from snapshots are lazily restored from S3 and require pre-warming to get full performance

6.2.7.3. The first snapshot is a full copy; subsequent snapshots are incremental. Frequent snapshots are cheaper and faster