S3 / EC2 study notes (transcript)




https://www.slideshare.net/AmazonWebServices/aws-storage-and-content-delivery-services

1. Main features
   a. Extremely durable
   b. Highly available
   c. Infinitely scalable
   d. Object-based storage
   e. Very low cost
By default there are 100 buckets per account; this limit can be increased via a service-limit request in the console.

Object size: 0 bytes to 5 TB (this limit applies to a single object). The total volume of data is unlimited, and the number of objects you can store is unlimited.

a. Can be applied to objects larger than 100 MB.
b. After you initiate a multipart upload and upload one or more parts, you must either complete or abort the multipart upload in order to stop getting charged for storage of the uploaded parts. Only after you either complete or abort a multipart upload will Amazon S3 free up the parts' storage and stop charging you for it.
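To avoid paying indefinitely for abandoned uploads, you can attach a bucket lifecycle rule that aborts incomplete multipart uploads automatically. A sketch of such a rule in the JSON shape accepted by the S3 lifecycle API (the rule ID and the 7-day window are illustrative):

```json
{
  "Rules": [
    {
      "ID": "abort-stale-multipart-uploads",
      "Status": "Enabled",
      "Filter": { "Prefix": "" },
      "AbortIncompleteMultipartUpload": { "DaysAfterInitiation": 7 }
    }
  ]
}
```

This is the same mechanism as any other lifecycle rule, so it can be scoped to a prefix if only part of the bucket receives multipart uploads.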

Screen clipping taken: 05-08-2018 22:51

1. Multipart upload

Storage class comparison:

Class        Availability            Designed durability   Redundancy
Standard     99.99%                  99.999999999%         Stores data in a minimum of 3 AZs; multiple devices (disks) in multiple facilities (AZs)
Standard-IA  99.9%                   99.999999999%         Stores data in a minimum of 3 AZs; multiple devices (disks) in multiple facilities (AZs)
RRS          99.99%                  99.99%                Fewer redundant copies than Standard
Glacier      99.99% (after restore)  99.999999999%         Stores data in a minimum of 3 AZs
One Zone-IA  99.5%                   99.999999999%         Stores data in a SINGLE AZ, redundantly within that AZ; data is lost if the entire AZ is destroyed

Classes that store data in 3+ AZs are resilient to the destruction of one entire Availability Zone; One Zone-IA is not.

Object size limits:
- 0 bytes to 5 TB per object; up to 5 GB in a single PUT operation
- 5 MB to 5 TB can be uploaded through multipart upload
- Maximum number of parts in a multipart upload: 1 to 10,000 (an object is broken into at most 10,000 parts)
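The 10,000-part limit interacts with the 5 MB minimum part size; a small back-of-the-envelope sketch (the function name is mine, not an AWS API):

```python
import math

MiB = 1024 ** 2
TiB = 1024 ** 4
MAX_PARTS = 10_000   # multipart upload allows 1 to 10,000 parts
MIN_PART = 5 * MiB   # every part except the last must be at least 5 MB

def min_part_size(object_size):
    """Smallest part size (in bytes) that fits the object into 10,000 parts."""
    return max(MIN_PART, math.ceil(object_size / MAX_PARTS))

# A 5 TiB object (the maximum) forces parts of roughly 524 MiB each;
# a 100 MiB object can simply use the 5 MiB minimum.
print(min_part_size(5 * TiB) // MiB)
print(min_part_size(100 * MiB) // MiB)
```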

Screen clipping taken: 06-08-2018 09:47

30 July 2018 08:36

S3 Page 1



Latency: low; throughput: high (all classes).

Use:
- Standard: mobile, gaming, big-data analytics
- One Zone-IA: storing secondary backup copies of on-premises data, easily re-creatable data, or storage used as an S3 Cross-Region Replication target from another AWS Region

Pricing:
- Per-GB storage price: Standard-IA is lower than Standard; One Zone-IA is lower than Standard-IA (lowest)
- Per-GB retrieval fee: applies to Standard-IA and One Zone-IA (Standard has none)

Security: SSL for data in transit; encryption for data at rest (all classes).

Cost: One Zone-IA costs 20% less than storing the data in S3 Standard-IA.

Screen clipping taken: 06-08-2018 00:16

1. Object size can be 0 bytes to 5 TB
2. Can store unlimited objects
3. Designed durability: 11 9s (99.999999999%)
4. Availability: 99.99%
5. Encryption:
   a. Optional server-side encryption (using AWS SDKs)
   b. Customer-managed client-side encryption
6. REST and SOAP interfaces
7. Each account can have a maximum of 100 buckets
8. Control access at bucket and object level
9. Globally unique bucket names, regardless of the AWS Region
10. Amazon S3 creates a bucket in the Region you select
11. You can choose a Region to:
    a. Optimize latency
    b. Minimize costs
    c. Address regulatory requirements
12. You can control access to buckets and objects with:
    - Access Control Lists (ACLs)
    - Bucket policies
    - Identity and Access Management (IAM) policies

https://s3.ap-south-1.amazonaws.com/rad100bucket/Welcome.html

Retrieval fees distinguish S3 Standard-IA from S3 Standard. If you use this storage class for frequently accessed data, it becomes costlier than S3 Standard in the long run; it only pays off for infrequently accessed data.

S3 Page 2


https://s3.ap-south-1.amazonaws.com/rad100bucket/Welcome.html

Pattern: https://s3.<REGION>.amazonaws.com/<BUCKET_NAME>/<OBJECT_NAME>

When static website hosting is enabled, the following URL starts working:

http://rad100bucket.s3-website.ap-south-1.amazonaws.com/

Pattern: http://<BUCKET_NAME>.s3-website.<REGION>.amazonaws.com
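The two endpoint shapes above can be captured in a couple of helper functions (a sketch; the function names are mine):

```python
# Build the path-style REST endpoint and the static-website endpoint
# for a given region, bucket, and object key.
def rest_url(region, bucket, key):
    return "https://s3.{}.amazonaws.com/{}/{}".format(region, bucket, key)

def website_url(region, bucket):
    return "http://{}.s3-website.{}.amazonaws.com/".format(bucket, region)

print(rest_url("ap-south-1", "rad100bucket", "Welcome.html"))
print(website_url("ap-south-1", "rad100bucket"))
```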

• Logging
  ○ Server access logging
  ○ Object-level logging

S3 Page 3


Bucket names must be globally unique.

https://s3.<REGION>.amazonaws.com/<BucketName>

Consistency model (as of these notes):
• For new objects: read-after-write consistency
• For overwrite PUTs and DELETEs: eventual consistency

06 August 2018 08:10

S3 Page 4


○ Automatic, asynchronous copying of data across Regions.
○ Three prerequisites to enable it:
  - Versioning must be enabled on both the source and destination buckets
  - The buckets must be in different AWS Regions
  - Amazon S3 must have permission to replicate objects from the source bucket to the destination bucket on your behalf; you must have the iam:PassRole permission
○ You have to define a rule for replication:
  - so that you can replicate only specific types of files (only .pdf or .xml, etc.)
  - you may want to choose a different storage class for the replicas

CRR (CROSS-REGION REPLICATION)

Cross-region replication enables automatic and asynchronous copying of objects across buckets in different AWS Regions. These buckets can be owned by different AWS accounts.

The objects you want to replicate: you can request Amazon S3 to replicate all or a subset of objects by providing a key-name prefix in the configuration. For example, you can configure cross-region replication to replicate only objects with the key-name prefix Tax/. This causes Amazon S3 to replicate objects with a key such as Tax/doc1 or Tax/doc2, but not an object with the key Legal/doc3.

By default, Amazon S3 uses the storage class of the source object to create an object replica. You can optionally specify a storage class to use for object replicas in the destination bucket.

For cross-region replication to work, versioning must be enabled on the buckets (source as well as destination), along with the IAM pass role.

○ The replication rule defines the criteria for replication: you may choose to replicate all files, or filter for particular files and replicate only those.
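The rule described above (replicate only the Tax/ prefix and change the replica's storage class) looks roughly like this as a replication configuration; the role ARN and bucket names are placeholders:

```json
{
  "Role": "arn:aws:iam::123456789012:role/crr-replication-role",
  "Rules": [
    {
      "ID": "replicate-tax-docs",
      "Status": "Enabled",
      "Prefix": "Tax/",
      "Destination": {
        "Bucket": "arn:aws:s3:::destination-bucket",
        "StorageClass": "STANDARD_IA"
      }
    }
  ]
}
```

Versioning must already be enabled on both buckets, and the named role is what satisfies the iam:PassRole prerequisite.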

Cross-Region Replication 06 August 2018 08:16

S3 Page 5


Unless you make specific requests in the replication configuration, the object replicas in the destination bucket are exact replicas of the objects in the source bucket. For example:

Replicas have the same key names and the same metadata—for example, creation time, user-defined metadata, and version ID.

Amazon S3 stores object replicas using the same storage class as the source object, unless you explicitly specify a different storage class in the replication configuration.

Assuming that the object replica continues to be owned by the source object owner, when Amazon S3 initially replicates objects, it also replicates the corresponding object access control list (ACL).

S3 Page 6


• Few facts
  ○ You specify the storage class while uploading an object to a bucket, not while creating the bucket. This implies that it's an object-level property.
  ○ One bucket can hold objects of different storage classes.
  ○ There is no option to select Glacier as the storage class while uploading an object. You create GLACIER objects by first uploading objects using STANDARD, RRS, STANDARD_IA, or ONEZONE_IA as the storage class, then transitioning those objects to the GLACIER storage class using lifecycle management.
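The upload-then-transition flow to GLACIER is expressed as a lifecycle rule; a sketch (the prefix, rule ID, and 30-day delay are illustrative):

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Status": "Enabled",
      "Filter": { "Prefix": "logs/" },
      "Transitions": [
        { "Days": 30, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```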

Categories of storage class:

1. Frequently accessed
   ○ Types:
     - Standard
     - RRS (Reduced Redundancy)
   ○ Features: first-byte latency in milliseconds

2. Infrequently accessed (IA)
   ○ Types:
     - Standard-IA
       □ Standard-IA is designed for larger objects and has a minimum object size of 128 KB. Objects smaller than 128 KB incur storage charges as if they were 128 KB.
     - One Zone-IA

Infrequently vs Frequently Accessed Data 06 August 2018 08:51

S3 Page 7


3. Glacier storage class
   a. Also an infrequent-access class.
   b. First-byte latency is minutes to a few hours. This is because once data is stored in Glacier, you have to restore an object before you can access it, which adds latency.

Storage Class   Durability (designed for)   Availability (designed for)
STANDARD        99.999999999%               99.99%
RRS             99.99%                      99.99%
STANDARD_IA     99.999999999%               99.9%
ONEZONE_IA      99.999999999%               99.5%
GLACIER         99.999999999%               99.99% (after you restore objects)

S3 Page 8


Using an encryption client library, such as the Amazon S3 Encryption Client, you retain control of the keys and complete the encryption and decryption of objects client-side using an encryption library of your choice. Some customers prefer full end-to-end control of the encryption and decryption of objects; that way, only encrypted objects are transmitted over the Internet to Amazon S3.

From <https://www.udemy.com/aws-certified-solutions-architect-associate-practice-tests/learn/v4/t/quiz/330230/results/103622420>

Screen clipping taken: 06-08-2018 17:57

Server-Side Encryption

- SSE-S3: AWS manages the keys.
- SSE-KMS: AWS manages the keys through KMS. Choose this when you also need an audit trail, so you can see who used your key to access which object and when, as well as view failed attempts to access data by users without permission to decrypt it.
- SSE-C: the client manages the keys.
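On the wire, the three modes are selected per request with the x-amz-server-side-encryption* headers; a sketch of the relevant headers (host and values are placeholders, and only one of the three groups would be sent on a real request):

```
PUT /my-object HTTP/1.1
Host: examplebucket.s3.amazonaws.com

# SSE-S3: S3 manages the key
x-amz-server-side-encryption: AES256

# SSE-KMS: key managed in KMS (auditable)
x-amz-server-side-encryption: aws:kms
x-amz-server-side-encryption-aws-kms-key-id: <kms-key-arn>

# SSE-C: you supply the key with every request
x-amz-server-side-encryption-customer-algorithm: AES256
x-amz-server-side-encryption-customer-key: <base64-encoded-key>
x-amz-server-side-encryption-customer-key-MD5: <base64-encoded-md5>
```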

Encryption 06 August 2018 09:34

S3 Page 9


Screen clipping taken: 06-08-2018 09:35

S3 Page 10


Screen clipping taken: 06-08-2018 09:52

Lifecycle Rule 06 August 2018 09:52

S3 Page 11


Transfer Acceleration 06 August 2018 09:55

S3 Page 12


Screen clipping taken: 06-08-2018 10:34

When versioning is enabled, a simple DELETE cannot permanently delete an object.

Instead, Amazon S3 inserts a delete marker in the bucket, and that marker becomes the current version of the object with a new ID. When you try to GET an object whose current version is a delete marker, Amazon S3 behaves as though the object has been deleted (even though it has not been erased) and returns a 404 error.

To permanently delete versioned objects, you must use DELETE Object versionId. Deleting a specified object version permanently removes that object.

Deleting Versioned Objects 06 August 2018 10:33

S3 Page 13


Scenario 1: Suppose that you are hosting a website in an Amazon S3 bucket named website as described in Hosting a Static Website on Amazon S3. Your users load the website endpoint http://website.s3-website-us-east-1.amazonaws.com. Now you want to use JavaScript on the webpages that are stored in this bucket to be able to make authenticated GET and PUT requests against the same bucket by using the Amazon S3 API endpoint for the bucket, website.s3.amazonaws.com. A browser would normally block JavaScript from allowing those requests, but with CORS you can configure your bucket to explicitly enable cross-origin requests from website.s3-website-us-east-1.amazonaws.com.

Scenario 2: Suppose that you want to host a web font from your S3 bucket. Again, browsers require a CORS check (also called a preflight check) for loading web fonts. You would configure the bucket that is hosting the web font to allow any origin to make these requests.
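Scenario 1 translates into a CORS configuration on the bucket; a sketch in the XML form S3 accepts (the origin comes from the scenario, the other values are illustrative):

```xml
<CORSConfiguration>
  <CORSRule>
    <AllowedOrigin>http://website.s3-website-us-east-1.amazonaws.com</AllowedOrigin>
    <AllowedMethod>GET</AllowedMethod>
    <AllowedMethod>PUT</AllowedMethod>
    <AllowedHeader>*</AllowedHeader>
    <MaxAgeSeconds>3000</MaxAgeSeconds>
  </CORSRule>
</CORSConfiguration>
```

For scenario 2 (web fonts), AllowedOrigin would be * and AllowedMethod just GET.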

Screen clipping taken: 06-08-2018 13:37

CORS 06 August 2018 13:32

S3 Page 14


A pre-signed URL gives you access to the object identified in the URL, provided that the creator of the pre-signed URL has permissions to access that object.

The pre-signed URLs are useful if you want your user/customer to be able to upload a specific object to your bucket, but you don't require them to have AWS security credentials or permissions.

•When you create a pre-signed URL, you must provide your security credentials and then specify a bucket name, an object key, an HTTP method (PUT for uploading objects), and an expiration date and time. The pre-signed URLs are valid only for the specified duration.

•You can generate a pre-signed URL programmatically using the AWS SDK for Java or the AWS SDK for .NET. If you are using Microsoft Visual Studio, you can also use AWS Explorer to generate a pre-signed object URL without writing any code. Anyone who receives a valid pre-signed URL can then programmatically upload an object.

Screen clipping taken: 06-08-2018 13:58

Pre-signed URL to access S3 06 August 2018 13:49

S3 Page 15


There is no “data transfer in” costs to your EC2 instance if the data is coming from an S3 bucket in the same region

There is no “data transfer out” costs from your S3 bucket if the data is going to an EC2 instance in the same region

Data Transfer Cost 06 August 2018 17:01

S3 Page 16


• Two types of metadata for objects:
  • System-defined
    ○ See the table below (screen clipping: System-Defined Metadata)
  • User-defined
    ○ User-defined metadata keys start with x-amz-meta-
    ○ e.g. x-amz-meta-location

Metadata 06 August 2018 19:08

S3 Page 17


Static Website URL

<bucket-name>.s3-website-<AWS-region>.amazonaws.com

Screen clipping taken: 06-08-2018 17:07

URL 06 August 2018 17:06

S3 Page 18


Screen clipping taken: 06-08-2018 19:30

Read/write consistencies 06 August 2018 19:30

S3 Page 19


To ensure that your users access your objects using only CloudFront URLs, regardless of whether the URLs are signed, perform the following tasks:

1. Create an origin access identity (a special CloudFront user) and associate it with your distribution.
2. Change the permissions either on your Amazon S3 bucket or on the objects in your bucket so that only the origin access identity has read permission (or read and download permission).

Screen clipping taken: 06-08-2018 22:59

CloudFront 06 August 2018 22:54

S3 Page 20


Screen clipping taken: 09-08-2018 21:53

Storage Gateway 06 August 2018 23:05

S3 Page 21


To get the public IP (or hostname, or other information) of a running instance, you can use the following URL from within the instance:

[ec2-user]$ curl http://169.254.169.254/latest/meta-data/public-ipv4

EC2 (Elastic Compute Cloud):
• Reduces the time required to obtain and boot a new server to minutes
• Allows quick scaling up and down as requirements change
• You pay only for the capacity you use
• Gives developers tools to build failure-resistant applications and isolate themselves from common failure scenarios

EC2 purchasing options:

• On-Demand
  ○ No long-term commitment
  ○ Low cost and flexible
  ○ Useful for applications with short-term, spiky, or unpredictable workloads that cannot be interrupted
  ○ Applications being developed on Amazon EC2 for the first time
• Reserved
  ○ Reserved capacity
  ○ Applications with steady-state or predictable workloads
  ○ Standard RI (Reserved Instance)
  ○ Convertible RI: attributes can also be changed later
  ○ Scheduled RI: instances that need to be reserved for a particular time (end of month, a specific day, or a fraction of the day, e.g. a sale)
• Spot
  ○ Applications with flexible start and end times
  ○ To run at a very low compute price
• Dedicated Hosts
  ○ Bring Your Own License (BYOL)
  ○ Meet compliance and regulatory requirements
  ○ Great for licensing that does not support multi-tenancy

  An Amazon EC2 Dedicated Host is a physical server with EC2 instance capacity dedicated for your use and allows you to reliably launch EC2 instances on the same Dedicated Host over time. You have visibility over how your Dedicated Hosts are utilized and you can determine how many sockets and cores are installed on the server. These features allow you to minimize licensing costs in a bring-your-own-license (BYOL) scenario and help you address corporate compliance and regulatory requirements.

  From <https://ap-south-1.console.aws.amazon.com/ec2/v2/home?region=ap-south-1#Hosts:sort=hostId>

EC2 instance-type mnemonics:
• F - FPGA (Field-Programmable Gate Array)
• I - IOPS (storage-optimized)
• G - Graphics
• H - High disk throughput
• T - Cheap general purpose (e.g. t2.micro), low cost
• D - Density (dense storage)
• R - RAM (memory-optimized)
• M - Main choice for general-purpose apps (T + EBS optimization + good network throughput)
• C - Compute
• P - Graphics (think Pics)
• X - Extreme memory

EC2 instance families:
General purpose: T, M (apps)
Compute optimized: C
GPU compute: P
Memory optimized: R, X
Storage optimized: D, I

EC2 30 July 2018 10:16

EC2 Page 22



Stopping an EBS-backed on-demand instance will stop the charges and preserve the data.

From <https://www.udemy.com/aws-certified-solutions-architect-associate-practice-tests/learn/v4/t/quiz/330234/results/104127256>


4. EBS allows you to:
   a. Create storage volumes
   b. Attach them to EC2 instances
   c. Once attached, you can create a file system on top of the volume, run a DB, or use it any other way you would use a block device

5. EBS volumes are replicated on separate physical hardware within the same AZ, where they are automatically replicated to protect you from the failure of a single component.
6. An EC2 instance and its corresponding volumes remain in the same AZ. It is not possible to attach a volume to an EC2 instance in a different AZ or Region.

7. Snapshots are created for backup, and an image is created from a snapshot to boot an EC2 instance.
8. EBS snapshots are stored in S3, but you can't go and search for them; they are not visible there.

9. For any such action on a volume, create a snapshot first, then:
   a. Attaching to another instance in another AZ:
      i. Go to Actions
      ii. Create Volume
      iii. From the options you can select a different AZ
   b. Moving a volume to another Region:
      i. Create a volume snapshot
      ii. From Actions select Copy and choose the other Region

10. Two major categories:
    a. SSD based (high IOPS - input/output operations per second, read & write)
       i. Designed for transactional workloads
    b. HDD based (high throughput; throughput = IOPS x I/O size)
       i. For throughput-intensive and big-data workloads

11. EC2 instances and their attached EBS volumes should be in the same Availability Zone.
12. When you terminate an EC2 instance, the root volume gets deleted, but other volumes stay.
13. EBS volumes are replicated within a single Availability Zone only, so they may not survive an AZ failure. AWS therefore recommends always keeping EBS volume snapshots in an S3 bucket for high durability.
14. For io1 the ratio of IOPS to volume size is 50:1, so for a volume size of 8 GiB the max IOPS can be 400.
15. The best way to move EBS volumes from one AZ to another is:
    a. Create a snapshot of the volume
    b. Create a volume from the snapshot in the other AZ

EBS volume types:

• SSD
  ○ General Purpose SSD (gp2): can store up to 16 TB
    - Ratio of 3 IOPS/GB, up to 10,000 IOPS, and can burst up to 3,000 IOPS (e.g. an 80 GiB volume gives a 240 IOPS baseline, plus bursts up to 3,000 IOPS)
  ○ Provisioned IOPS SSD (io1): can store up to 16 TB
    - Gives you the option to define IOPS manually, at a ratio of up to 50:1 (i.e. with 8 GiB of storage, IOPS can be at most 8 x 50 = 400)
    - For highly I/O-intensive applications, relational or NoSQL DBs
    - Where you need more than 10,000 IOPS
    - Can provision up to 20,000 IOPS per volume
• HDD
  ○ Throughput Optimized HDD (st1) (minimum volume size 500 GB)
    - Not bootable
    - Big data
    - Data warehousing
  ○ Cold HDD (sc1)
    - Not bootable
    - Lowest-cost storage
    - Can be used for a file server
• Magnetic (Standard)
  - The only magnetic option with boot support on the volume
  - For data that is accessed infrequently
  - Used where lowest cost is important
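The gp2 and io1 sizing rules above are easy to sanity-check; a sketch using the limits quoted in these notes (the 100-IOPS gp2 floor is my assumption about the baseline minimum, and the function names are mine):

```python
def gp2_baseline_iops(size_gib):
    """gp2: 3 IOPS per GiB, a floor of 100 IOPS, capped at 10,000."""
    return min(max(100, 3 * size_gib), 10_000)

def io1_max_iops(size_gib):
    """io1: provision up to 50 IOPS per GiB, capped at 20,000 per volume."""
    return min(50 * size_gib, 20_000)

print(gp2_baseline_iops(80))   # 80 GiB -> 240 IOPS baseline
print(io1_max_iops(8))         # 8 GiB -> at most 400 IOPS
```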

Encryption:
- The root volume is not encrypted (by default).
- Add-on volumes can be encrypted; you need to select encryption when adding them.
- How do you detect which volume in the list is the root one? The one that has a snapshot ID displayed.

Elastic Block Storage (think of Disk in cloud) 30 July 2018 09:19

EC2 Page 23



EC2 Page 24


Security Group 31 July 2018 23:48

EC2 Page 25


RAID 0: striped, no redundancy
RAID 1: mirrored, gives redundancy
RAID 5: 3 disks or more, writes parity; Amazon discourages its use
RAID 10: striped & mirrored; good redundancy and performance
To create a RAID you have to use the OS's native features; AWS doesn't provide an option in the console.

• RAID is used with EBS to get increased I/O performance
• Create the RAID across different EBS volumes; the preferred level is RAID 10
• Use case: when you want to use a DB that is not natively supported in AWS, you can't use the built-in AWS features to increase performance. In such scenarios, RAID can be used to increase I/O.

• Taking a snapshot of a RAID array is tricky:
  • You have to stop the application from writing to disk
  • And flush all caches to the disk
• How can we do this?
  ○ Freeze the file system
  ○ Unmount the RAID array
  ○ Shut down the associated EC2 instance (this is the easiest way)

Windows RAID 02 August 2018 06:26

EC2 Page 26


• To have consistent data in snapshots, best practice is to shut down the EC2 instance and then take the snapshot.
• AMIs are not encrypted at rest.
• Encryption: by default the root volume is
  ○ NOT encrypted
  ○ Marked with "delete on termination". You may choose to deselect this so it is not deleted when the EC2 instance is terminated.
• The root volume is not encrypted. To encrypt a root volume:
  • Take a snapshot
  • Go to Snapshot --> Actions
  • Copy (select encryption during the copy)

Screen clipping taken: 02-08-2018 08:24

• That's the only way to
  ○ encrypt unencrypted volumes, and
  ○ share them across Regions to launch from.
• Note that Snapshot and AMI both provide the Copy option under the Actions menu. This gives you the liberty to copy them across Regions.
• Sharing with other AWS accounts:
  • Select the snapshot
  • Actions --> Modify Permissions --> make it public --> and sell it on the Marketplace
  • You can also add a particular account ID to share with specific people
• Just as a snapshot can be shared with another Region, it can also be shared with other AWS accounts, but remember it can be shared only when the snapshot is not encrypted, because encryption keys are local to the account. (You can still copy it to another Region, since the other Region is under your own account.)

Encrypt root device volume and taking snapshots 02 August 2018 06:46

EC2 Page 27


• EC2 instance --> Volume --> Snapshot --> Image (AMI) --> EC2

Volume --> Options --> 1. Add/modify volume settings 2. Auto-enable I/O setting 3. Attach/detach volumes
Snapshot --> Options --> 1. Create volume 2. Create image 3. Copy
Image --> Options --> 1. Launch 2. Copy

EC2 Page 28


Instance store based vs EBS based:

Instance store based:
- Cannot stop the instance (if the underlying host fails, you lose your data)
- Root volumes are created from a template stored in S3
- Root device volumes are NOT listed under Volumes in the console (that list is for EBS)
- Also called ephemeral storage
- You can reboot the instance without losing data
- On instance termination, the root volume is deleted

EBS based:
- Can be stopped, and you will not lose the data when the instance is stopped
- Root volumes are created (internally) from an Amazon EBS snapshot
- All volumes are listed in the console
- You can reboot without losing data here too
- On termination the root volume is deleted by default, unless you change the "Delete on termination" setting to false

During instance creation you can add multiple "instance store" based volumes as well as EBS based volumes. EBS volumes added (other than the root) are visible under Volumes in the console.

• All AMIs are categorized as either EBS backed or instance store backed.

Instance Store vs EBS 02 August 2018 09:48

EC2 Page 29


Screen clipping taken: 02-08-2018 15:18

When the load balancer is active but the underlying EC2 instance or DB is not available, you will receive the error below:

503 Service Temporarily Unavailable

From <http://myappelb-1077365746.ap-south-1.elb.amazonaws.com/>

• A load balancer can be internal or external
• Listeners help the ELB define:
  ○ What to listen for in incoming traffic
  ○ What action to take
• Health check:
  ○ Automatically performs health checks based on the criteria defined
  ○ If the criteria fail, the instance is removed from the load balancer
• Add EC2 instances

https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/how-elastic-load-balancing-works.html

Load Balancer 02 August 2018 12:06

EC2 Page 30


From <http://myappelb-1077365746.ap-south-1.elb.amazonaws.com/>

EC2 Page 31


Roles 02 August 2018 17:13

EC2 Page 32


http://169.254.169.254/latest/meta-data/public-ipv4

Available metadata paths include: ami-id, ami-launch-index, ami-manifest-path, block-device-mapping/, hostname, instance-action, instance-id, instance-type, local-hostname, local-ipv4, mac, metrics/, network/, placement/, profile, public-hostname, public-ipv4, public-keys/, reservation-id, security-groups

EC2 metadata (02 August 2018 17:58)


• SQS is the cornerstone of a decoupled application.

• SQS is accessible and usable from anywhere on the Internet; its only exposed interface is HTTP(S). In fact, from inside EC2, SQS is not accessible unless the EC2 instance actually has outbound access to the Internet. If you have an AWS account, you have credentials, and you can use SQS.

• SQS will deliver your message at least once, but cannot guarantee that it will not create duplicates of that message. Additionally, SQS cannot guarantee message order.

• Polling: Amazon SQS long polling is a way to retrieve messages from your Amazon SQS queues.
  ○ Regular short polling returns immediately, even if the message queue being polled is empty.
  ○ Long polling doesn't return a response until a message arrives in the message queue, or the long poll times out.
  ○ Long polling reduces the number of CPU cycles and empty responses, saving you money.
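The short-polling vs long-polling contrast can be sketched with Python's `queue` module as a stand-in for the real SQS ReceiveMessage call (illustrative only):

```python
import queue

q = queue.Queue()

# Short polling: returns immediately, even when the queue is empty.
def short_poll(q):
    try:
        return [q.get_nowait()]
    except queue.Empty:
        return []          # an "empty response" you still pay a request for

# Long polling: blocks until a message arrives or the wait times out.
def long_poll(q, wait_seconds):
    try:
        return [q.get(timeout=wait_seconds)]
    except queue.Empty:
        return []

print(short_poll(q))                 # → [] immediately, queue is empty
q.put("order-123")
print(long_poll(q, wait_seconds=2))  # → ['order-123'] without busy-waiting
```

In real SQS, the long-poll wait is the `WaitTimeSeconds` request parameter.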

• Using SQS, you can send, store, and receive messages between software components at any volume.

• Two types of message queue:
  ○ Standard queue
    - Maximum throughput, best-effort ordering, at-least-once delivery
    - Use as long as your application can process messages that arrive more than once and out of order, for example:
      · Decouple live user requests from intensive background work: let users upload media while resizing or encoding it
      · Allocate tasks to multiple worker nodes: process a high number of credit card validation requests
      · Batch messages for future processing: schedule multiple entries to be added to a database
  ○ FIFO queue
    - SQS FIFO queues are designed to guarantee that messages are processed exactly once, in the exact order that they are sent

• Use Amazon SQS to transmit any volume of data, at any level of throughput, without losing messages or requiring other services to be available.

• SQS lets you decouple application components so that they run and fail independently, increasing the overall fault tolerance of the system.

• Multiple copies of every message are stored redundantly across multiple Availability Zones.
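The difference between the two queue types can be shown with a toy model (this is not the SQS API): a FIFO queue deduplicates by ID and preserves order, while a standard queue may deliver duplicates out of order, so its consumers must be idempotent:

```python
# Toy contrast between SQS queue types (illustrative names only).

def fifo_receive(messages):
    """FIFO queue: exactly-once per deduplication id, original order kept."""
    seen, out = set(), []
    for dedup_id, body in messages:
        if dedup_id not in seen:
            seen.add(dedup_id)
            out.append(body)
    return out

# At-least-once delivery produced a duplicate of m1:
sent = [("m1", "charge card"), ("m2", "send email"), ("m1", "charge card")]
print(fifo_receive(sent))  # → ['charge card', 'send email']

# A standard queue could deliver all three messages, possibly reordered,
# which is why the consumer must tolerate duplicates and reordering.
```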

SQS (03 August 2018 09:05)


• Amazon SQS doesn't automatically delete a message after delivering it:
  ○ because SQS is a distributed system, there is no mechanism to detect whether a message was actually processed by the consumer, so SQS has to keep the message in the queue until the consumer deletes it
  ○ hence the consumer must delete the message after processing it

• Visibility timeout: default is 30 seconds, configurable up to a maximum of 12 hours
  ○ the timer starts when a consumer takes a message from the queue for processing
  ○ while the timer runs, the message is hidden so that no other consumer takes the same message, because it is already being processed
  ○ once the timeout expires without the message being deleted, it becomes available to other consumers again
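The timer described above can be modelled with an explicit clock (a minimal sketch, not the SQS API):

```python
# Minimal model of the SQS visibility timeout with an explicit clock.

class Message:
    def __init__(self, body):
        self.body = body
        self.invisible_until = 0.0   # visible from the start

def receive(messages, now, visibility_timeout=30):
    """Return the first visible message and hide it for the timeout."""
    for m in messages:
        if now >= m.invisible_until:
            m.invisible_until = now + visibility_timeout
            return m
    return None

msgs = [Message("resize image")]
assert receive(msgs, now=0).body == "resize image"   # consumer A takes it
assert receive(msgs, now=10) is None                 # hidden from consumer B
assert receive(msgs, now=31).body == "resize image"  # visible again: A never deleted it
```

The last line is exactly the at-least-once behaviour: if the consumer crashes before deleting the message, another consumer will receive it after the timeout.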


Visibility Timeout



SNS (03 August 2018 09:05)


WORKER
• the worker requests a task and processes it
• the worker returns the result after processing the task

DECIDER
1. the decider coordinates the tasks
2. the decider schedules the tasks according to the application logic

SWF (03 August 2018 09:05)


Triggers:
• S3
• SNS
• SQS
• Kinesis
• DynamoDB
• CloudWatch Logs
• CloudWatch Events
• CloudWatch
• API Gateway

Lambda (07 August 2018 13:00)


• Elastic Beanstalk can be used for:
  ○ servers
  ○ worker processes

Elastic Beanstalk (08 August 2018 12:54)


• AutoScaling groups are the cornerstone of any self-healing application on AWS.
• AutoScaling groups are not intended to handle sudden spikes in traffic; rather, they are intended to allow your applications to grow elastically as load increases over a short period of time.
  ○ Auto scaling is not really intended to respond to instantaneous spikes in traffic, as it will take some time to spin up the instances that will handle the additional traffic. For sudden traffic spikes, make sure your application issues a 503 Service Unavailable message.

• When should I use AWS Auto Scaling?
  ○ You should use AWS Auto Scaling if you have an application that uses one or more scalable resources and experiences variable load. A good example would be an e-commerce web application that receives variable traffic through the day. It follows a standard three-tier architecture with Elastic Load Balancing for distributing incoming traffic, Amazon EC2 for the compute layer, and DynamoDB for the data layer. In this case, AWS Auto Scaling will scale one or more EC2 Auto Scaling groups and DynamoDB tables that are powering the application in response to the demand curve.
  ○ You should use AWS Auto Scaling to manage scaling for multiple resources across multiple services.

• EC2 Auto Scaling
  ○ You should use EC2 Auto Scaling if you only need to scale Amazon EC2 Auto Scaling groups, or if you are only interested in maintaining the health of your EC2 fleet.

• Scaling policy types:
  ○ Step scaling policies
  ○ Target tracking scaling policies

• Application Auto Scaling API: supported scalable resources:
  ○ Amazon ECS services
  ○ Amazon EC2 Spot Fleets
  ○ provisioned read and write capacity for Amazon DynamoDB tables and global secondary indexes
  ○ Amazon Aurora Replicas
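A target tracking policy adjusts capacity in proportion to how far a metric is from its target. A sketch of that proportion (rounding up so the target is not breached; the function name is made up):

```python
import math

def desired_capacity(current_capacity, metric_value, target_value):
    """Target tracking: scale capacity in proportion to metric/target."""
    return math.ceil(current_capacity * metric_value / target_value)

# 4 instances averaging 75% CPU against a 50% target → scale out to 6.
print(desired_capacity(4, metric_value=75, target_value=50))  # → 6

# 10 instances averaging 40% CPU against a 50% target → scale in to 8.
print(desired_capacity(10, metric_value=40, target_value=50))  # → 8
```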

AutoScaling (08 August 2018 15:27)




1. Amazon RDS uses EBS volumes for database storage.
2. RDS is OLTP (Online Transaction Processing); six supported engines:
   1. SQL Server (Microsoft)
   2. MySQL (open source)
   3. PostgreSQL
   4. Oracle
   5. MariaDB
   6. Aurora (AWS's own engine)
      i. PostgreSQL-compatible (Aurora throughput is 3 times that of PostgreSQL)
      ii. MySQL-compatible (Aurora throughput is 5 times that of MySQL)
3. Redshift (OLAP, Online Analytical Processing):
   1. business intelligence
   2. when you really want to run complex queries
4. DynamoDB: NoSQL
5. ElastiCache: used to offload heavy DB workloads into a cache; the most common queries get cached:
   i. Memcached
   ii. Redis

Automated Backup and Snapshot

• Two terms:
  ○ Backup retention period: up to 35 days; point-in-time automated backup down to the second
  ○ Backup window (I/O slows down during the backup window): a daily time window when automated backups are created

• Automated backup:
  ○ a snapshot is taken on a daily basis
  ○ for point-in-time restore: last daily snapshot + the daily transaction logs since then
  ○ deleted automatically when the RDS instance is deleted

• Snapshot:
  ○ done manually (user initiated)
  ○ retained even after you delete the RDS instance

• Restoring a backup (automated or snapshot) creates a new RDS instance with a new DNS endpoint:
  ○ original.REGION.rds.amazonaws.com
  ○ restored.REGION.rds.amazonaws.com

For the RDS MySQL, MariaDB, PostgreSQL and Oracle database engines, when you elect to convert your RDS instance from Single-AZ to Multi-AZ, the following happens: a snapshot of your primary instance is taken; a new standby instance is created in a different Availability Zone from the snapshot; synchronous replication is configured between the primary and standby instances.

• Encryption at rest is supported for all DB types
• Done using KMS (Key Management Service)
• Once RDS is encrypted at rest, the data in the underlying storage, automated backups, snapshots and read replicas all get encrypted
• Encrypting an existing DB in place is not supported
  ○ to encrypt an existing DB:
    1. first create a snapshot
    2. make a copy of the snapshot
    3. encrypt the copy (and restore from it)
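The three steps above can be sketched against a boto3-style RDS client. The client is passed in as a parameter so the call sequence is visible without AWS credentials; the method and parameter names follow boto3's RDS API, but treat this as an unverified sketch, and note that the final restore step is implied by the notes rather than spelled out:

```python
def encrypt_existing_db(rds, db_id, kms_key_id):
    """Encrypt an existing RDS instance: snapshot, encrypted copy, restore.

    `rds` is a boto3-style RDS client, injected so the workflow can be
    exercised with a fake client instead of real AWS calls.
    """
    snap = f"{db_id}-snap"
    encrypted_snap = f"{db_id}-snap-encrypted"
    # 1. Snapshot the unencrypted instance.
    rds.create_db_snapshot(DBSnapshotIdentifier=snap,
                           DBInstanceIdentifier=db_id)
    # 2. Copy the snapshot; supplying a KMS key makes the copy encrypted.
    rds.copy_db_snapshot(SourceDBSnapshotIdentifier=snap,
                         TargetDBSnapshotIdentifier=encrypted_snap,
                         KmsKeyId=kms_key_id)
    # 3. Restore a new (now encrypted) instance from the encrypted copy.
    rds.restore_db_instance_from_db_snapshot(
        DBInstanceIdentifier=f"{db_id}-encrypted",
        DBSnapshotIdentifier=encrypted_snap)
```

With a real client this would be `rds = boto3.client("rds")`; the point here is only the order of operations.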

Encryption

RDS (30 July 2018 09:46)


- AWS automatically replicates the production DB to another AZ
- the DB is continuously synchronized with a standby DB in the other Availability Zone
- for disaster recovery only; also covers DB maintenance, DB instance failure and AZ failure
- automatic failover to the standby, so that operations can resume without administrative intervention
  - this helps when the production DB goes down and the standby must become active: AWS changes DNS to point to the standby
  - an RDS instance just has a public DNS endpoint, not an IP
- Multi-AZ is for disaster recovery only, not for performance improvement; for performance improvement use a READ REPLICA
- supported on all supported DB engines (see the 6 engines above)

Multi-AZ

• A read-only copy of the production DB
• Achieved using asynchronous replication
• Used for very heavy read workloads
• For performance improvement
• You can have a read replica in another AZ
• You can have read replicas in other regions too

Read Replica


Production DB → Read Replica → Replica of Read Replica


• Also used as storage for SESSION DATA (note that ElastiCache is also considered for session-data storage)

• DynamoDB is also a very efficient store (with indexing) for METADATA; often used in conjunction with S3, where images are stored in S3 and their metadata in DynamoDB

• It does not support complex relational queries (e.g. joins) or complex transactions

• NoSQL DB service
  ○ a NoSQL (originally referring to "non SQL" or "non relational") database provides a mechanism for storage and retrieval of data that is modeled in means other than the tabular relations used in relational databases

• Supports both data models:
  ○ document
  ○ key-value pair

• Uses SSD storage
• Spread across three geographically distinct data centres (mind that AWS is not saying 3 AZs, for some reason)

• Consistency models:
  ○ Eventually consistent reads (default)
    - consistency across all copies of data is usually reached within 1 second
    - best read performance
  ○ Strongly consistent reads
    - data can be read almost immediately after it is written

• Pricing model:
  ○ read capacity units (reads per second)
  ○ write capacity units (writes per second)
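The published sizing rules for capacity units can be turned into a small calculator. The formulas assumed here follow the DynamoDB documentation: 1 RCU covers one strongly consistent read/sec of an item up to 4 KB (or two eventually consistent reads), and 1 WCU covers one write/sec of an item up to 1 KB:

```python
import math

def read_capacity_units(item_kb, reads_per_sec, strongly_consistent=True):
    """RCUs needed: ceil(item size / 4 KB) per read; halved if eventual."""
    units = math.ceil(item_kb / 4) * reads_per_sec
    return units if strongly_consistent else math.ceil(units / 2)

def write_capacity_units(item_kb, writes_per_sec):
    """WCUs needed: ceil(item size / 1 KB) per write."""
    return math.ceil(item_kb / 1) * writes_per_sec

# 6 KB items read 10 times/sec: ceil(6/4) = 2 units per read → 20 RCU.
print(read_capacity_units(6, 10))          # → 20
print(read_capacity_units(6, 10, False))   # → 10 (eventually consistent)
print(write_capacity_units(1.5, 10))       # → 20 WCU
```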

• Fast (single-digit millisecond latency)

• Flexible and reliable
  ○ a fit for mobile, web, gaming, ad-tech, IoT and many other apps

• Fully managed DB

• Push-button scaling: very easy to scale compared to RDS
  ○ in RDS you have to create a snapshot, create a read replica and then adjust the instance size, which can also incur downtime, while in DynamoDB it is fairly easy on the fly
  ○ read capacity units and write capacity units scale independently

• Writes are expensive in DynamoDB compared to reads, so DynamoDB suits heavy-read workloads

DynamoDB (31 July 2018 12:29)


• Data warehousing service
• One tenth of the cost of other warehousing solutions

• Works on OLAP (Online Analytical Processing); RDS is OLTP. Used for:
  ○ business intelligence
  ○ when you really want to run complex queries, e.g. find the sum of RADIOs sold in EMEA and PACIFIC
  ○ unit cost of RADIOs sold in each region

• Data warehousing databases use a different type of architecture, from both:
  ○ a DB perspective
  ○ an infrastructure-layer perspective

• Configuration:
  ○ Single node (160 GB)
  ○ Multi node:
    - Leader node (manages client connections, receives queries)
    - Compute nodes (store data, process queries and computation), up to 128 nodes

• If Enhanced VPC Routing is not enabled, Amazon Redshift routes traffic through the Internet, including traffic to other services within the AWS network.
• To import data from S3 (an AWS resource outside the cluster's VPC), two actions are required:
  ○ enable Enhanced VPC Routing
  ○ create an S3 VPC endpoint

• Redshift is only available in 1 AZ; to replicate to another AZ you have to take a snapshot (mind that RDS uses EBS volumes)

• How it works (10 times faster):
  ○ Columnar data storage (organizes data by column, unlike row-based systems, which are ideal for OLTP only):
    - ideal for data warehousing and analytics
    - data is stored sequentially on storage
    - column-based systems require very few I/Os, which greatly improves query performance
    - uses a 1024 KB (1 MB) block size for columnar storage
  ○ Advanced compression:
    - significant compression compared to traditional DBs because of columnar storage
    - employs multiple compression techniques
    - when loading data into an empty table, Redshift automatically samples your data and selects the most appropriate compression scheme
    - does not require indexes or materialized views, so uses less space
  ○ Massively Parallel Processing (MPP):
    - distributes data and queries across all compute nodes
    - easy to add nodes to your data warehouse
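Why columnar storage cuts I/O can be shown with a toy column store: an aggregate touches only the columns it needs, while a row store would read every field of every row (names and data here are illustrative):

```python
# Toy column store: each column is held contiguously, so an aggregate
# over one column never touches the others (fewer I/Os than a row store).
sales = {
    "region":  ["EMEA", "PACIFIC", "EMEA"],
    "product": ["RADIO", "RADIO", "TV"],
    "units":   [120, 80, 30],
}

def sum_where(table, value_col, filter_col, filter_val):
    # Reads just two columns out of the whole table.
    return sum(v for v, f in zip(table[value_col], table[filter_col])
               if f == filter_val)

print(sum_where(sales, "units", "product", "RADIO"))  # → 200
```

A row-oriented layout would have to scan `region` too, even though the query never uses it; at warehouse scale that difference dominates query cost.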

Redshift (31 July 2018 12:55)


• Pricing:
  ○ no charge for the leader node
  ○ compute nodes: 1 unit per node per hour
  ○ backup is chargeable
  ○ data transfer (within the VPC, not outside it)

• Encryption:
  ○ encrypted in transit using SSL
  ○ encrypted at rest using AES-256 encryption
  ○ by default Redshift takes care of key management, but you can also manage keys through:
    - AWS KMS (Key Management Service)
    - your own keys using an HSM (Hardware Security Module)

• Only available in 1 AZ; it is not Multi-AZ
• In the event of an outage you can restore snapshots to another AZ
  ○ disaster recovery for a Redshift cluster
  ○ "Cross-Region Snapshot" automatically (or manually) copies snapshots to another region
    - note that S3 supports "Cross-Region Replication"
  ○ copying snapshots to another region incurs transfer charges


• Web service that makes it easy to deploy, operate and scale an in-memory cache in the cloud
• Often used to speed up dynamic database-driven websites by caching data and objects in RAM, reducing the number of times an external data source (such as a database or API) must be read
• Also used as storage for SESSION DATA (note that DynamoDB is also considered for session-data storage)
• Two engines:
  ○ Memcached
  ○ Redis

ElastiCache (31 July 2018 14:46)


• Amazon Aurora is a MySQL- and PostgreSQL-compatible enterprise-class relational database engine, starting at under $1/day
• Up to 5 times the throughput of MySQL and 3 times the throughput of PostgreSQL
• Up to 64 TiB of auto-scaling SSD storage (minimum 10 GB, growing in 10 GB increments)
• 6-way replication across three Availability Zones
• Up to 15 read replicas with sub-10 ms replica lag
• Automatic monitoring and failover in less than 30 seconds
• Cost: one tenth of a commercial DB

• Scaling (supports push-button scaling, so you can scale up without a maintenance window):
  ○ Storage: auto-scaling; starts with 10 GB, then scales in increments of 10 GB (when 10 GB is consumed, another 10 GB is provisioned automatically)
  ○ Compute: scales up to 32 vCPUs and 244 GB RAM

• Really, really highly available:
  ○ maintains two copies of data in EACH AZ, with a minimum of 3 AZs, so actually 6 COPIES of data
  ○ designed to handle the loss of:
    - 2 copies of data without affecting DB write availability
    - 3 copies of data without affecting DB read availability
  ○ Aurora storage is also self-healing: data blocks and disks are continuously scanned for errors and repaired automatically

Aurora (31 July 2018 15:06)


Comparison: DynamoDB vs Aurora vs RDS vs S3

• Storage size:
  ○ DynamoDB: the cumulative size of attributes per item must fit within the maximum DynamoDB item size (400 KB); useful for a large number of small records
  ○ Aurora: 10 GB to 64 TB
  ○ RDS: max up to 6 TB
  ○ S3: object size can be 0 bytes to 5 TB

• DB type:
  ○ DynamoDB: non-relational (NoSQL)
  ○ Aurora and RDS: relational DBs

• DynamoDB highlights:
  ○ fully managed
  ○ single-digit millisecond latency; typically useful for storing a large number of small records with low-latency access
  ○ built-in security, backup and restore, and in-memory caching (an in-memory cache can reduce response times from milliseconds to microseconds)
  ○ fits mobile, web, gaming, ad tech, IoT, and many other applications that need low-latency data access

• Snapshots can only be shared within a region with different accounts, not with other regions

DynamoDB vs Aurora (10 August 2018 08:58)


A DynamoDB stream is an ordered flow of information about changes to items in an Amazon DynamoDB table. When you enable a stream on a table, DynamoDB captures information about every modification to data items in the table.

• DynamoDB Streams guarantees the following:
  ○ each stream record appears exactly once in the stream
  ○ for each item that is modified in a DynamoDB table, the stream records appear in the same sequence as the actual modifications to the item

All data in DynamoDB Streams is subject to a 24 hour lifetime. You can retrieve and analyze the last 24 hours of activity for any given table; however, data older than 24 hours is susceptible to trimming (removal) at any moment


DynamoDB Streams (10 August 2018 09:36)


• Trusted Advisor support plans: https://aws.amazon.com/premiumsupport/compare-plans/

• Bastion host:
  ○ a host in a public subnet used as a proxy for external users to connect to your instances
  ○ this is required when AWS resources grow and so do the administrative access points; best practice is to use a special-purpose server instance that is designed to be the primary access point from the internet and acts as a proxy to your other instances

• You can sell reserved instances that are no longer required on the AWS Marketplace to recover your money

• CloudWatch: the metric dashboard has built-in metrics for CPU, disk and network; for everything else (e.g. memory) you have to create custom metrics

• SWF (Simple Workflow Service): https://docs.aws.amazon.com/amazonswf/latest/developerguide/swf-dg-intro-to-swf.html

• Autoscaling
• SNS, SQS

30 July 2018 23:30


• Reliability:
  ○ availability
  ○ durability

• Scalability:
  ○ throughput
  ○ elasticity

General (03 August 2018 08:40)


CloudFormation vs Elastic Beanstalk

• CloudFormation:
  ○ makes the system engineer's life easy
  ○ uses a template to describe all the AWS resources you want, and takes care of provisioning and configuration (infrastructure as code)
  ○ can duplicate your environment from one region to another using a script
  ○ you may create two or more EB environments (e.g. dev and staging) using it

• Elastic Beanstalk:
  ○ makes the developer's life easy
  ○ you can bring certain resources up and running, but you cannot duplicate the resources

CloudFormation vs Elastic Beanstalk (04 August 2018 06:25)


CloudWatch vs CloudTrail vs Trusted Advisor

• CloudTrail:
  ○ audit: who is doing what on AWS resources
  ○ an API-call-logging web service: AWS CloudTrail is an audit service; use it to get a history of AWS API calls and related events in your account, including calls made using the AWS Management Console, AWS SDKs, command line tools, and higher-level AWS services
  ○ AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account
  ○ so whenever a question mentions strict compliance and auditing purposes, it relates to CloudTrail

• CloudWatch:
  ○ performance monitoring
  ○ default metrics for:
    - CPU
    - disk
    - network
  ○ Basic monitoring: data is available automatically in 5-minute periods at no charge
  ○ Detailed monitoring: data is available in 1-minute periods for an additional cost; to get this level of data, you must specifically enable it for the instance
  ○ for RDS, detailed monitoring is enabled by default: Amazon RDS metric data is automatically sent to CloudWatch in 1-minute periods

• Trusted Advisor:
  ○ helps reduce cost, increase performance, and improve security by optimizing AWS resources, following AWS best practices
  ○ keeps a check on usage against service limits

CloudWatch vs CloudTrail vs Trusted Advisor (04 August 2018 06:29)


• Trusted Advisor keeps a check on:
  ○ active volumes
  ○ active snapshots
  ○ Elastic IP addresses, so you don't go beyond the service limit


• Clustered placement group
  ○ because of the low latency required, a cluster placement group can only exist within 1 Availability Zone
• Spread placement group

Placement Group (05 August 2018 17:22)



VM Import/Export enables you to easily import virtual machine images from your existing environment to Amazon EC2 instances and export them back to your on-premises environment

From <https://aws.amazon.com/ec2/vm-import/>

VM Import/Export, newer version: Server Migration Service (05 August 2018 18:45)


• Assume that a disaster affects the complete region. So in the case of a "potential disruption of service", i.e. a disaster situation, the solution must be provided in another region, not just another Availability Zone.

• Route 53 failover to a static website is considered the best option when the web application fails in a disaster-recovery situation.

Disaster Recovery (08 August 2018 13:54)


AWS Server Migration Service (SMS) is an agentless service which makes it easier and faster for you to migrate thousands of on-premises workloads to AWS. AWS SMS allows you to automate, schedule, and track incremental replications of live server volumes, making it easier for you to coordinate large-scale server migrations.

From <https://aws.amazon.com/server-migration-service/>

What kind of servers can be migrated to AWS using AWS Server Migration Service?

Currently, you can migrate virtual machines from VMware vSphere and Windows Hyper-V to AWS using AWS Server Migration Service.

From <https://aws.amazon.com/server-migration-service/faqs/>

What is the difference between EC2 VM Import and AWS Server Migration Service?

AWS Server Migration Service is a significant enhancement of EC2 VM Import. The AWS Server Migration Service provides automated, live incremental server replication and AWS Console support. For customers using EC2 VM Import for migration, we recommend using AWS Server Migration Service.

From <https://aws.amazon.com/server-migration-service/faqs/>

SMS, Server Migration Service (08 August 2018 22:12)


AWS CloudHSM is a security service that offers isolated hardware security module (HSM) appliances to give customers an extra level of protection for data with strict corporate, contractual and regulatory compliance requirements.

To decrease latency (and improve application performance), it's best to place your HSMs as close to your EC2 instances as possible.

CloudHSM (09 August 2018 05:50)


Decoupling: Synchronous vs Asynchronous (09 August 2018 08:47)


• Five pillars (S-CORP):
  ○ Security
  ○ Cost Optimization
  ○ Operational Excellence
  ○ Reliability
  ○ Performance Efficiency

• While AWS manages security OF the cloud, security IN the cloud is the responsibility of the customer. Customers retain control of what security they choose to implement to protect their own content, platform, applications, systems and networks, no differently than they would for applications in an on-site datacenter.

Well-Architected Framework (09 August 2018 10:28)


• A bastion host is used to SSH into an EC2 instance
• A bastion host is exposed to the internet and should be hardened
• A bastion host sits outside the private subnet and is used as a secure gateway into that internal network

Bastion host (09 August 2018 12:05)


• Routing policies include: Simple, Weighted, Geolocation

Route 53 (09 August 2018 21:55)


• Makes it easy to load and analyse streaming data, and provides the ability to build your own custom applications for your business needs

• Three types of Kinesis:
  ○ Kinesis Streams
  ○ Kinesis Firehose
  ○ Kinesis Analytics

• Kinesis Streams:
  ○ consists of shards; each shard supports:
    - 5 transactions per second for reads, up to 2 MB/sec
    - 1000 records per second for writes, up to 1 MB/sec
  ○ the total capacity of a stream = number of shards × capacity of an individual shard

• Very good link: https://www.slideshare.net/AmazonWebServices/bdt320-new-streaming-data-flows-with-amazon-kinesis-firehose
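The per-shard limits above give the whole stream's capacity by simple multiplication. A small sketch using the limits as quoted in these notes:

```python
def stream_capacity(shards):
    """Total Kinesis stream capacity = shards × per-shard limits."""
    return {
        "write_mb_per_sec": shards * 1,        # 1 MB/s writes per shard
        "write_records_per_sec": shards * 1000,
        "read_mb_per_sec": shards * 2,         # 2 MB/s reads per shard
        "read_tx_per_sec": shards * 5,         # 5 read transactions/s per shard
    }

print(stream_capacity(4))
# {'write_mb_per_sec': 4, 'write_records_per_sec': 4000,
#  'read_mb_per_sec': 8, 'read_tx_per_sec': 20}
```

To scale a stream you add (or merge) shards; each shard added buys exactly one more unit of each limit.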

Kinesis (03 August 2018 08:40)


Amazon's SLA guarantees a Monthly Uptime Percentage of at least 99.95% for Amazon EC2 and Amazon EBS within a Region.

• Uptime SLA: EC2 99.95%, EBS 99.95%

05 August 2018 17:24
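A 99.95% monthly uptime guarantee translates into a concrete downtime budget:

```python
def allowed_downtime_minutes(sla_percent, days_in_month=30):
    """Minutes of downtime permitted per month at a given uptime SLA."""
    minutes_in_month = days_in_month * 24 * 60   # 43,200 for a 30-day month
    return minutes_in_month * (1 - sla_percent / 100)

# 99.95% over a 30-day month → about 21.6 minutes of allowed downtime.
print(round(allowed_downtime_minutes(99.95), 1))  # → 21.6
```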


• Major differences between NAT and Internet Gateway:
	○ An Internet Gateway does not impose any bandwidth constraint.
	○ An Internet Gateway allows two-way communication (to and from the internet), while NAT allows inside-out only, not outside-in.

• A NAT Gateway resides in a public subnet (the subnet which has a route to the Internet Gateway in its route table).

Screen clipping taken: 08-08-2018 11:25

• NAT Instances:
	○ Must be in a public subnet.
	○ There must be a route out of the private subnet to the NAT instance in order for this to work.
	○ Disable the source/destination check (enabled by default on EC2 instances, which means the instance must be either the source or destination of any traffic it handles; a NAT instance instead passes traffic through on behalf of others).
	○ Can be used as a bastion server.
	○ The amount of traffic an instance can support depends on instance size; if you are bottlenecking, increase the instance size.

• NAT Gateways:
	○ Work on IPv4.
	○ Ideally one per AZ.
	○ Provide outbound internet connectivity to instances in a private subnet.

• Egress-Only Internet Gateway:
	○ Works on IPv6.
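The source/destination check is the key NAT-instance gotcha. A toy Python model of the rule (this is a simulation of the concept, not an AWS API call):

```python
def accepts_packet(instance_ip, src, dst, source_dest_check=True):
    """Model of the EC2 source/destination check.

    With the check enabled (the default), an instance only handles traffic
    it sends or receives itself. A NAT instance must have the check
    disabled so it can forward traffic on behalf of other instances."""
    if not source_dest_check:
        return True                       # NAT instance: forwards any traffic
    return instance_ip in (src, dst)      # ordinary instance: must be an endpoint

# Ordinary instance drops forwarded traffic; a NAT instance passes it
print(accepts_packet("10.0.0.5", "10.0.1.20", "93.184.216.34"))
print(accepts_packet("10.0.0.5", "10.0.1.20", "93.184.216.34",
                     source_dest_check=False))
```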

Screen clipping taken: 08-08-2018 04:56

• The NAT gateway allows instances in the private subnet (the Redshift one) to communicate with the internet.

• Also NOTE: an Internet Gateway is still required for the NAT gateway itself to communicate with the internet.

• You add a default route to the private subnet pointing to the NAT Gateway; the NAT Gateway is already in a public subnet (a subnet which has a default route to the Internet Gateway).

NAT - 07 August 2018 21:24

VPC-NAT Page 74


Screen clipping taken: 08-08-2018 04:41

VPC-NAT Page 75


Screen clipping taken: 07-08-2018 21:52

1. What makes a subnet public is the fact that it has a route on its route table to the Internet Gateway. If no route is created for the Internet Gateway, then the subnet is private.

2. A NAT instance is always placed in a public subnet, because it has to communicate with the internet. Considering the item 1, then the NAT instance must be in a subnet that has a route to an Internet Gateway.

3. The NAT instance allows instances on a private subnet (instances inside a subnet that has no route to the Internet Gateway) to access the internet. It does that by receiving the traffic from those instances and then propagating it to the internet. Once again, for the NAT instance to do this it has to communicate with the internet, which will happen only if the route table on its subnet has a route to an Internet Gateway.

4. You also have to disable source/destination check on the NAT instance, because on the case of a NAT instance, it will be sending and receiving traffic on behalf of other instances, so the source and/or destination might not be itself.

From <https://acloud.guru/forums/aws-certified-solutions-architect-professional/discussion/-L3eNSxl2dX-jmFr3DSf/Nat%20Instance%20vs%20Internet%20Gateway>
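Point 1 above is a simple rule you can express in code. A minimal sketch of the public/private test (route-table structure and the `igw-` prefix convention are illustrative):

```python
def is_public_subnet(route_table):
    """A subnet is public iff its route table contains a route whose
    target is an Internet Gateway (ids start with 'igw-')."""
    return any(route["target"].startswith("igw-") for route in route_table)

public_rt = [{"dest": "10.0.0.0/16", "target": "local"},
             {"dest": "0.0.0.0/0", "target": "igw-12345"}]
private_rt = [{"dest": "10.0.0.0/16", "target": "local"},
              {"dest": "0.0.0.0/0", "target": "nat-67890"}]

print(is_public_subnet(public_rt))   # has an IGW route -> public
print(is_public_subnet(private_rt))  # only a NAT route -> private
```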

VPC-NAT Page 76


• Think of a VPC as a Virtual Data Center in the cloud.
• VPC endpoints are used to enable private connectivity to AWS services from within your VPCs without using an Internet Gateway.

Screen clipping taken: 08-08-2018 08:10

VPC Endpoint - 08 August 2018 08:09

VPC-NAT Page 77


Security Group Rules

The rules of a security group control the inbound traffic that's allowed to reach the instances that are associated with the security group and the outbound traffic that's allowed to leave them.

The following are the characteristics of security group rules:

By default, security groups allow all outbound traffic.

You can't change the outbound rules for an EC2-Classic security group.

Security group rules are always permissive; you can't create rules that deny access.

Security groups are stateful — if you send a request from your instance, the response traffic for that request is allowed to flow in regardless of inbound security group rules. For VPC security groups, this also means that responses to allowed inbound traffic are allowed to flow out, regardless of outbound rules. For more information, see Connection Tracking.

GOLDEN Statement: Security Groups are stateful, with both INGRESS and EGRESS rules.

You can add and remove rules at any time. Your changes are automatically applied to the instances associated with the security group after a short period.
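Stateful behaviour is the point most worth internalising: a response to traffic you initiated is allowed even with no matching inbound rule. A toy simulation of connection tracking (not the real EC2 implementation, just the concept):

```python
class StatefulSecurityGroup:
    """Toy model of a stateful security group: all outbound is allowed by
    default, and return traffic for a tracked connection is allowed
    regardless of inbound rules."""

    def __init__(self, inbound_rules=None):
        self.inbound_rules = inbound_rules or set()  # allowed (proto, port)
        self.tracked = set()                         # connections we initiated

    def send(self, proto, port):
        self.tracked.add((proto, port))  # outbound allowed; track the connection
        return True

    def receive(self, proto, port):
        # Responses to tracked connections bypass the inbound rules (stateful)
        return (proto, port) in self.tracked or (proto, port) in self.inbound_rules

sg = StatefulSecurityGroup(inbound_rules={("tcp", 443)})
sg.send("tcp", 80)
print(sg.receive("tcp", 80))   # response to our own request -> allowed
print(sg.receive("tcp", 22))   # no rule, no tracked connection -> denied
```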

Security Group - 08 August 2018 08:31

VPC-NAT Page 78


Screen clipping taken: 08-08-2018 22:43

VPC-NAT Page 79


Screen clipping taken: 08-08-2018 08:42

NACL (Network Access Control List) - 08 August 2018 08:42

VPC-NAT Page 80


• An Elastic IP address is a:
	○ STATIC
	○ IPv4 address
	○ Designed for DYNAMIC cloud computing

• 5 Elastic IP addresses allowed per region.

• Masks the failure of an instance or software by rapidly remapping the address to another instance in your account.

Any restarts will have no effect on the elastic IP attached to your instance. When you restart/reboot your EC2 instance, it remains in the “running” state, and will retain the Elastic IP. The EIP is assigned to your account, and will remain assigned until you return it to the AWS pool.
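The "masking failure by remapping" idea is just re-pointing a fixed address at a different instance. A minimal sketch (the instance ids and mapping are hypothetical, not an AWS API):

```python
# The EIP belongs to the account; failover is just re-pointing it.
eip_map = {"54.1.2.3": "i-primary"}

def remap_eip(eip, new_instance):
    """Re-associate an Elastic IP with another instance to mask a failure."""
    eip_map[eip] = new_instance
    return eip_map[eip]

# Primary fails: traffic to 54.1.2.3 now reaches the standby instance
print(remap_eip("54.1.2.3", "i-standby"))
```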

Elastic IP Address - 08 August 2018 08:48

VPC-NAT Page 81


Golden Statements

Public & Private Subnets - 08 August 2018 11:09

VPC-NAT Page 82


• Captures IP traffic going to and from network interfaces in your VPC.
• Records all IP transactions for the VPC.
• Shows the IP addresses from which the resources are being accessed.

VPC Flow Logs - 08 August 2018 13:11

VPC-NAT Page 83


• A security group can be referenced as source or destination only if both VPCs are in the same region.

• In general, a DNS hostname resolves to the instance's public IP address in a peered VPC. However, if you enable "DNS hostname resolution" on the peering connection, it resolves to the private IP address. This only applies when the VPCs are in the same region.

• Manually adding routes is required in order to enable the flow of traffic between VPCs using private IP addresses.

• A placement group can span peered VPCs that are in the same region; however, you do not get full-bisection bandwidth between instances in peered VPCs.

Facts

Screen clipping taken: 08-08-2018 22:45

VPC Peering - 08 August 2018 22:24

VPC-NAT Page 84


https://www.slideshare.net/AmazonWebServices/aws-201-w

http://www.devopsschool.com/slides/aws/aws-certified-solutions-architect-associate/index.html#/194

08 August 2018 09:52

LINKS-Must READ Page 85


• Public Subnet: a subnet which has a route to the Internet Gateway in its route table.

• Private Subnet: a subnet which does NOT have a route to the Internet Gateway in its route table.

• One subnet = one Availability Zone.

AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.

• Compliance and Auditing == always implies CloudTrail (need to validate).

• Potential disruption of critical services == Disaster (Disaster Recovery solution).
	○ So always think of REGION in this case, not AZ.

Golden Statements - 08 August 2018 11:10

LINKS-Must READ Page 86


Almost everything in Python is an object, with its properties and methods.

A class is like an object constructor, or a "blueprint" for creating objects: the class describes the "shape", while the object is an actual thing you created from it.

• Class = blueprint for creating objects.
• A function is a block of code which only runs when it is called.

Difference between Method and Function in Python

Java is also an OOP language, but there is no concept of a standalone function in it. Python has both methods and functions.

Python Method:
1. A method is called by its name, but it is associated with an object (dependent).
2. A method is implicitly passed the object on which it is invoked.
3. It may or may not return any data.
4. A method can operate on the data (instance variables) contained by the corresponding class.

Basic method structure in Python:

# Basic Python method
class ClassName:
    def method_name(self):
        ...  # method body
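A small runnable example of the blueprint/object relationship (class name and strings are illustrative):

```python
class Dog:                       # the class is the blueprint ("shape")
    def __init__(self, name):
        self.name = name         # instance variable, unique per object

    def speak(self):             # method: implicitly receives the object (self)
        return f"{self.name} says woof"

rex = Dog("Rex")                 # the object is an actual thing built from it
print(rex.speak())               # Rex says woof
```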

Object And Class - 12 August 2018 13:40

Python Page 87


Python Page 88


Python Method vs Python Function

Python Method:
1. A method is called by its name, but it is associated with an object (dependent).
2. A method is implicitly passed the object on which it is invoked.
3. It may or may not return any data.
4. A method can operate on the data (instance variables) contained by the corresponding class.

Basic method structure in Python:

# Basic Python method
class ClassName:
    def method_name(self):
        ...  # method body

Python Function:
1. A function is a block of code that is also called by its name (independent).
2. The function can have parameters or none at all; if any data (parameters) is passed, it is passed explicitly.
3. It may or may not return any data.
4. A function does not deal with the class and instance concept.
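The contrast in one runnable snippet (names are illustrative): the function takes its data explicitly, while the method is bound to an object and gets `self` implicitly.

```python
def shout(text):                 # function: independent, data passed explicitly
    return text.upper() + "!"

class Greeter:
    def __init__(self, name):
        self.name = name

    def greet(self):             # method: bound to the object, receives self
        return shout("hello " + self.name)

print(shout("hi"))               # HI!
print(Greeter("sam").greet())    # HELLO SAM!
```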

Diff Function & Method - 12 August 2018 16:34

Python Page 89


"KEY" : "VALUE"•It’s a key- Value Pair •

JSON (JavaScript Object Notation) - 14 August 2018 21:59
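A quick round trip using Python's standard `json` module (the dict contents are just sample data):

```python
import json

note = {"service": "S3", "durability": "99.999999999%"}  # key-value pairs

text = json.dumps(note)            # dict -> JSON string
print(text)
print(json.loads(text)["service"])  # JSON string -> dict, read a key back
```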

JSON Page 90