
AWS Security and Compliance Crash Course

In this post we will give you a gist of the AWS security and compliance model.

Shared Security Model


AWS is responsible for securing the underlying infrastructure, while the customer is responsible for anything they put on the cloud or connect to the cloud.



Amazon is responsible for the security configuration of its products that are considered managed services, e.g. Amazon DynamoDB, Amazon RDS, Amazon Redshift, Amazon WorkSpaces and Amazon EMR.

IaaS :- Amazon EC2 and Amazon VPC are completely under the customer's control, so the customer has to take steps to make them secure and compliant.

Storage decommissioning :-

AWS uses the techniques detailed in DoD 5220.22-M and NIST 800-88 to destroy data as part of the decommissioning process.


AWS Services to secure the cloud 

  • AWS Config :- Manages configuration history and change notifications to enable security.
  • AWS Service Catalog :- Allows you to centrally manage commonly deployed IT services, enabling users to deploy only approved IT services in your organization.
  • Amazon GuardDuty :- Offers threat detection and continuous monitoring for malicious behavior in your AWS accounts.
  • AWS CloudHSM :- Protects your encryption keys with hardware security modules (HSMs).
  • Server-side Encryption :- Use this if you prefer S3 to manage the encryption process for you (see the sketch below).
  • AWS IAM :- Secures access through IAM users, groups and roles. IAM roles can also be mapped to AD groups.
  • Amazon Macie :- Uses machine learning to automatically discover and protect sensitive data.
  • AWS CloudTrail :- Records all API calls to your AWS account, whether made programmatically or through the console.
 
AWS Artifact :- Provides on-demand access to AWS compliance reports from third-party auditors.
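For illustration, here is a minimal boto3 (Python SDK) sketch of two of the services above: turning on default server-side encryption for an S3 bucket and looking up recent events in CloudTrail. The bucket name is a placeholder, not part of this post.

import boto3

s3 = boto3.client("s3")
cloudtrail = boto3.client("cloudtrail")

# Server-side encryption: ask S3 to encrypt new objects in this bucket by default.
s3.put_bucket_encryption(
    Bucket="my-example-bucket",  # placeholder bucket name
    ServerSideEncryptionConfiguration={
        "Rules": [
            {"ApplyServerSideEncryptionByDefault": {"SSEAlgorithm": "AES256"}}
        ]
    },
)

# CloudTrail: look up recent API/console events by event name.
events = cloudtrail.lookup_events(
    LookupAttributes=[{"AttributeKey": "EventName", "AttributeValue": "ConsoleLogin"}],
    MaxResults=10,
)
for event in events["Events"]:
    print(event["EventName"], event["EventTime"])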

Network security

  • You can connect to AWS access points (API endpoints) via HTTP or HTTPS using SSL/TLS.
  • AWS Direct Connect :- Provides private connectivity between your datacenter and AWS.
  • For customers who require additional security, Amazon provides Amazon VPC, which offers a private subnet within the AWS cloud and the ability to use an IPsec VPN (Virtual Private Network) device to provide an encrypted tunnel between the Amazon VPC and your datacenter.
  • Amazon corporate network segregation :- The Amazon production network is logically segregated from the Amazon corporate network by means of a complex set of network security/segregation devices.

Network Monitoring and Protection

Amazon protects against different types of attacks:-


DDoS :- A Denial of Service (DoS) attack is a malicious attempt to affect the availability of a targeted system, such as a website or application, to legitimate end users. Typically, attackers generate large volumes of packets or requests, ultimately overwhelming the target system. In a Distributed Denial of Service (DDoS) attack, the attacker uses multiple compromised or controlled sources to generate the attack.

Man-in-the-Middle attacks (MITM) :- In cryptography and computer security, a man-in-the-middle attack (often abbreviated MitM) is an attack where the attacker secretly relays and possibly alters the communication between two parties who believe they are directly communicating with each other.

IP spoofing :- IP spoofing is a technique used to gain unauthorized access to machines, whereby an attacker illicitly impersonates another machine by manipulating IP packets. It involves modifying the packet header with a forged (spoofed) source IP address so that the packet appears to come from another host.

Port Scanning :- A port scanner is an application designed to probe a server or host for open ports. It is often used by administrators to verify the security policies of their networks, and by attackers to identify services running on a host and exploit vulnerabilities.


AWS credential types:-

  • Passwords
  • Multi-factor authentication (MFA) – see the sketch after this list
  • AWS Microsoft AD
  • IAM roles
  • Access keys
  • Key pairs
  • X.509 certificates :- X.509 certificates are only used to sign SOAP-based requests. You can have AWS create an X.509 certificate and a private key that you can download, or you can upload your own certificate by using the Security Credentials page.
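Here is a hedged boto3 sketch combining two of the credential types above: an IAM role assumed with an MFA token via STS. The role ARN, MFA serial number and token code are placeholders you would replace with your own values.

import boto3

sts = boto3.client("sts")

response = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/example-admin-role",   # placeholder role
    RoleSessionName="mfa-protected-session",
    SerialNumber="arn:aws:iam::111122223333:mfa/example-user",     # placeholder MFA device
    TokenCode="123456",                                            # current MFA code
    DurationSeconds=3600,
)

# Temporary access key, secret key and session token returned by STS.
creds = response["Credentials"]
session = boto3.Session(
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
print(session.client("sts").get_caller_identity()["Arn"])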

Automation :-

Amazon Inspector :- It's an automated security assessment service. It can be very helpful in finding OS vulnerabilities and suggesting patches.


Source: https://aws.amazon.com/

AWS certification exam cheat sheets

AWS certification exams grill you on a vast range of topics and a lot of services. In this post we have consolidated the major services and topics of the different exams so that you can access them from a single location.

The links below will give you better information on which topics and services are important for each exam and how best to prepare for them.

AWS Crash Course - EMR

What is EMR?
  • AWS EMR (Elastic MapReduce) is a managed Hadoop framework.
  • It provides you an easy, cost-effective and highly scalable way to process large amounts of data.
  • It can be used for multiple things like indexing, log analysis, financial analysis, scientific simulation, machine learning etc.
Cluster and Nodes
  • The centerpiece of EMR is the cluster.
  • A cluster is a collection of EC2 instances, also called nodes.
  • All nodes of an EMR cluster are launched in the same Availability Zone.
  • Each node has a role in the cluster.
Type of EMR Cluster Nodes
Master Node :- It's the main boss which manages the cluster by running software components and distributing tasks to other nodes. The master node monitors task status and the health of the cluster.
Core Node :- It's a slave node which runs tasks and stores data in HDFS (Hadoop Distributed File System).
Task Node :- This is also a slave node, but it only runs tasks and doesn't store any data. It's an optional node.
Cluster Types
EMR has two types of clusters:
1) Transient :- These clusters are shut down once the job is done. They are useful when you don't need the cluster to be running all day and can save money by shutting it down.
2) Persistent :- Persistent clusters are those which need to be always available to process a continuous stream of jobs, or where you want the data to always be available on HDFS.
Different Cluster States
An EMR cluster goes through multiple states as described below:-
STARTING – The cluster provisions, starts, and configures EC2 instances.
BOOTSTRAPPING – Bootstrap actions are being executed on the cluster.
RUNNING – A step for the cluster is currently being run.
WAITING – The cluster is currently active, but has no steps to run.
TERMINATING – The cluster is in the process of shutting down.
TERMINATED – The cluster was shut down without error.
TERMINATED_WITH_ERRORS – The cluster was shut down with errors.


Types of filesystem in EMR
Hadoop Distributed File System (HDFS)
Hadoop Distributed File System (HDFS) is a distributed, scalable file system for Hadoop. HDFS distributes the data it stores across instances in the cluster, storing multiple copies of data on different instances to ensure that no data is lost if an individual instance fails. HDFS is ephemeral storage that is reclaimed when you terminate a cluster.
EMR File System (EMRFS)
Using the EMR File System (EMRFS), Amazon EMR extends Hadoop to add the ability to directly access data stored in Amazon S3 as if it were a file system like HDFS. You can use either HDFS or Amazon S3 as the file system in your cluster. Most often, Amazon S3 is used to store input and output data and intermediate results are stored in HDFS.
Local File System
The local file system refers to a locally connected disk. When you create a Hadoop cluster, each node is created from an Amazon EC2 instance that comes with a preconfigured block of preattached disk storage called an instance store. Data on instance store volumes persists only during the lifecycle of its Amazon EC2 instance.
Programming languages supported by EMR
  • Perl
  • Python
  • Ruby
  • C++
  • PHP
  • R
EMR Security
  • EMR integrates with IAM to manage permissions.
  • EMR has Master and Slave security groups for nodes to control the traffic access.
  • EMR supports S3 server-side and client-side encryption with EMRFS.
  • You can launch EMR clusters in your VPC to make it more secure.
  • EMR integrates with CloudTrail, so you will have a log of all activities done on the cluster.
  • You can login via ssh to EMR cluster nodes using EC2 Key Pairs.
EMR Management Interfaces
  • Console :- You can manage your EMR clusters from the AWS EMR Console.
  • AWS CLI :- The command line provides you a rich way of controlling EMR. Refer to the EMR CLI reference here.
  • Software Development Kits (SDKs) :- SDKs provide functions that call Amazon EMR to create and manage clusters. They are currently available only for the supported languages mentioned above. You can check some sample code and libraries here; a small sketch follows after this list.
  • Web Service API :- You can use this interface to call the web service directly using JSON. You can get more information from the API Reference Guide.
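As a hedged illustration of the SDK interface, here is a minimal boto3 sketch that launches a small transient cluster. The cluster name, key pair and release label are placeholders, and it assumes the default EMR roles already exist in the account. Task nodes are requested as Spot instances, and KeepJobFlowAliveWhenNoSteps=False makes the cluster transient.

import boto3

emr = boto3.client("emr")

response = emr.run_job_flow(
    Name="crash-course-demo",              # placeholder cluster name
    ReleaseLabel="emr-5.29.0",             # example release label
    Applications=[{"Name": "Hadoop"}, {"Name": "Spark"}],
    Instances={
        "InstanceGroups": [
            {"Name": "Master", "InstanceRole": "MASTER",
             "InstanceType": "m4.large", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m4.large", "InstanceCount": 2},
            {"Name": "Task", "InstanceRole": "TASK", "Market": "SPOT",
             "InstanceType": "m4.large", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": False,   # shut down once the steps finish
        "Ec2KeyName": "my-keypair",             # placeholder EC2 key pair
    },
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
    VisibleToAllUsers=True,
)
print("Cluster id:", response["JobFlowId"])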
EMR Billing
  • You pay for the EC2 instances used in the cluster plus an EMR charge.
  • You are charged per instance-hour.
  • EMR supports On-Demand, Spot, and Reserved Instances.
  • As a cost-saving measure it is recommended that task nodes be Spot instances.
  • It's not a good idea to use Spot instances for the master or core nodes as they store data, and you would lose that data once the node is terminated.
If you want to try some EMR hands-on, refer to this tutorial.

This AWS Crash Course series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.

AWS Crash Course - Redshift

Redshift is a data warehouse from Amazon. It’s like a virtual place where you store a huge amount of data.
  • Redshift is a fully managed, petabyte-scale system.
  • Amazon Redshift is based on PostgreSQL 8.0.2.
  • It is optimized for data warehousing.
  • It supports integrations and connections with various applications, including Business Intelligence tools.
  • Redshift provides custom JDBC and ODBC drivers.
  • Redshift can be integrated with CloudTrail for auditing purposes.
  • You can monitor Redshift performance from CloudWatch.
Features of Amazon Redshift
Supports VPC − Users can launch Redshift within a VPC and control access to the cluster through the virtual networking environment.
Encryption − Data stored in Redshift can be encrypted; encryption can be configured while creating tables in Redshift.
SSL − SSL encryption is used to encrypt connections between clients and Redshift.
Scalable − With a few simple clicks, you can choose vertical scaling (increasing instance size) or horizontal scaling (increasing compute nodes).
Cost-effective − Amazon Redshift is a cost-effective alternative to traditional data warehousing practices. There are no up-front costs, no long-term commitments, and pricing is on-demand.
MPP (Massively Parallel Processing) – Redshift leverages parallel processing, which improves query performance. Massively parallel refers to the use of a large number of processors (or separate computers) to perform coordinated computations in parallel, which reduces computing time.
Columnar Storage – Redshift uses columnar storage, so it stores data tables by column rather than by row. The goal of a columnar database is to efficiently write and read data to and from hard disk storage in order to speed up the time it takes to return a query.
Advanced Compression – Compression is a column-level operation that reduces the size of data when it is stored, thus helping save space.
Type of nodes in Redshift
Leader Node
Compute Node
What do these nodes do?
Leader Node :- A leader node receives queries from client applications, parses the queries, and develops execution plans, which are an ordered set of steps to process these queries. The leader node then coordinates the parallel execution of these plans with the compute nodes. The good part is that you will not be charged for leader node hours; only compute nodes incur charges. In a single-node Redshift cluster there is no separate leader node.
Compute Node :- Compute nodes execute the steps specified in the execution plans and transmit data among themselves to serve queries. The intermediate results are sent back to the leader node for aggregation before being sent back to the client applications. You can have 1 to 128 compute nodes.
From which sources can you load data into Redshift?
You can do it from multiple sources like :-
  • Amazon S3
  • Amazon DynamoDB
  • Amazon EMR
  • AWS Data Pipeline
  • Any SSH-enabled host on Amazon EC2 or on-premises
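One common way to load from S3 is the COPY command. Below is a hedged sketch that issues it through the Redshift Data API with boto3; the cluster identifier, database, user, table, bucket and IAM role ARN are all placeholders, and it assumes the Data API is available for your cluster.

import boto3

redshift_data = boto3.client("redshift-data")

copy_sql = """
    COPY sales
    FROM 's3://my-example-bucket/sales/'
    IAM_ROLE 'arn:aws:iam::111122223333:role/example-redshift-load-role'
    FORMAT AS CSV;
"""

response = redshift_data.execute_statement(
    ClusterIdentifier="my-redshift-cluster",   # placeholder cluster
    Database="dev",
    DbUser="awsuser",
    Sql=copy_sql,
)
print("Statement id:", response["Id"])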
Redshift Backup and Restores
  • Redshift can take automatic snapshots of the cluster.
  • You can also take manual snapshots of the cluster.
  • Redshift continuously backs up its data to S3.
  • Redshift attempts to keep at least 3 copies of the data.
Hope the above snapshot gives you a decent understanding of Redshift. If you want to try some hands-on, check this tutorial.
This series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.

AWS Crash Course - SQS

Today we will discuss an AWS messaging service called SQS.
  • SQS is Simple Queue Service.
  • It’s a messaging queue service which acts like a buffer between message producing and message receiving components.
  • Using SQS you can decouple the components of an application.
  • Messages can contain up to 256 KB of text in any format.
  • Any component can later retrieve the messages programmatically using the SQS API.
  • SQS queues are dynamically created and scale automatically so you can build and grow applications quickly – and efficiently.
  • You can combine SQS with Auto Scaling of EC2 instances, scaling the fleet out and in as the queue load warms up and cools down.
  • Used by companies like Vodafone, BMW, RedBus, Netflix etc.
  • You can use Amazon SQS to exchange sensitive data between applications using server-side encryption (SSE) to encrypt each message body.
  • SQS is a pull (or poll) based system, so messages are pulled from SQS queues.
  • Multiple copies of every message are stored redundantly across multiple Availability Zones.
  • Amazon SQS is deeply integrated with other AWS services such as EC2, ECS, RDS, Lambda etc.
Two types of SQS queues:-
  • Standard Queue
  • FIFO Queue
Standard Queue :-
  • Standard Queue is the default type offered by SQS.
  • Allows nearly unlimited transactions per second.
  • Guarantees that a message will be delivered at least once.
  • But it can also deliver a message more than once.
  • It provides best-effort ordering.
  • Messages can be kept from 1 minute to 14 days. The default is 4 days.
  • It has a visibility timeout window; if a message is not processed within that time, it becomes visible again and can be processed by another reader (see the sketch after the FIFO list below).
FIFO Queue :-
  • FIFO queue complements the standard queue.
  • It has a First-In-First-Out delivery mechanism.
  • Messages are processed only once.
  • Order of the message is strictly preserved.
  • Duplicates are not introduced in the queue.
  • Supports message groups.
  • Limited to 300 transactions per second.
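Here is a minimal boto3 sketch of the basic send / receive / delete cycle on a standard queue; the queue name is a placeholder.

import boto3

sqs = boto3.client("sqs")

# Standard queue with a 30-second visibility timeout and 4-day retention.
queue = sqs.create_queue(
    QueueName="crash-course-demo-queue",
    Attributes={"VisibilityTimeout": "30", "MessageRetentionPeriod": "345600"},
)
queue_url = queue["QueueUrl"]

sqs.send_message(QueueUrl=queue_url, MessageBody="hello from the producer")

# Consumer side: long-poll for a message, process it, then delete it so it is
# not delivered again after the visibility timeout expires.
messages = sqs.receive_message(
    QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10
)
for message in messages.get("Messages", []):
    print("Received:", message["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=message["ReceiptHandle"])

A FIFO queue would instead be created with a name ending in ".fifo" and Attributes={"FifoQueue": "true"}, and send_message would also need a MessageGroupId.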
Hope the above snapshot gives you a decent understanding of SQS. If you want to try some hands-on, check this tutorial.
This series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.

AWS Crash Course - RDS

Welcome to AWS Crash Course.
What is RDS?
  • RDS is Relational Database Service of Amazon.
  • It is part of its PaaS offering.
  • A new DB instance can easily be launched from AWS management console.
  • Complex administration processes like patching, backups etc. are managed automatically by RDS.
  • Amazon has its own relational database called Amazon Aurora.
  • RDS also supports other popular database engines like MySQL, Oracle, SQL Server, PostgreSQL and MariaDB.
RDS supports Multi-AZ (Availability Zone) failover.
What does that mean?
It means that if your primary DB goes down, service will automatically fail over to the secondary DB in another AZ.
  • Multi-AZ deployments for the MySQL, Oracle and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up to date with the primary.
  • Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server's native mirroring technology.
  • Both approaches safeguard your data in the event of a DB instance failure or loss of an AZ.
  • Backups are taken from the secondary DB, which avoids I/O suspension on the primary.
  • Restores are taken from the secondary DB, which avoids I/O suspension on the primary.
  • You can force a failover from one AZ to another by rebooting your DB instance.
But RDS Multi-AZ failover is not a scaling solution.
Read Replicas are for Scaling.
What are Read Replicas?
As we discussed above, Multi-AZ is synchronous replication of the DB, while read replicas are asynchronous replicas of the DB.
  • You can have 5 read replicas for both MySQL and PostgreSQL.
  • You can have read replicas in different regions but for MySQL only.
  • Read replica’s can be built off Multi-AZ’s databases.
  • You can have read replica’s of read replica’s , however only for MySQL and this will further increase latency.
  • You can use read replicas for generating reports. By this you won’t put load on the primary DB.
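Here is a hedged boto3 sketch (instance identifiers and class are placeholders) of enabling Multi-AZ on an existing MySQL instance and then adding a read replica for scaling reads.

import boto3

rds = boto3.client("rds")

# Turn on Multi-AZ for an existing instance (synchronous standby in another AZ).
rds.modify_db_instance(
    DBInstanceIdentifier="crash-course-mysql",   # placeholder instance
    MultiAZ=True,
    ApplyImmediately=True,
)

# Create an asynchronous read replica to offload reporting queries.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="crash-course-mysql-replica",
    SourceDBInstanceIdentifier="crash-course-mysql",
    DBInstanceClass="db.t2.medium",              # example instance class
)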
RDS supports automated backups.
But keep these things in mind.
  • There is a performance hit if Multi-AZ is not enabled.
  • If you delete an instance then all automated backups are deleted; however, manual DB snapshots will not be deleted.
  • All snapshots are stored on S3.
  • When you do a restore, you can change the engine type (e.g. SQL Server Standard to SQL Server Enterprise), provided you have enough storage space.
Hope the above snapshot gives you a decent understanding of RDS. If you want to try some hands-on, check this tutorial.
This series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.

AWS Crash Course - S3

Welcome to AWS Crash Course.
What is S3?
S3 is Simple Storage Service. It’s an object storage. That means, it’s used for storing objects like photos, videos etc.
  • S3 provides 11 9's of durability (99.999999999%), which corresponds to losing about 1 out of every 100 billion objects.
  • S3 provides 99.99% availability.
  • Files can be 1 byte to 5 TB in size
  • Read after Write consistency for PUTS of new objects – means you can immediately read what you have written.
  • Eventual consistency for overwrite PUTS and DELETES – if you overwrite or delete an existing object it takes time to propagate in S3 globally.
  • Secure your data using ACL and bucket policies.
  • S3 is designed to sustain the loss of 2 facilities concurrently i.e. 2 Availability Zone failures.

    S3 has multiple storage classes. One is S3 Standard, which we discussed above. The other classes are:-

S3-IA (Infrequently Accessed) 
  • S3-IA is for data which is not frequently accessed but still needs immediate access when required.
  • You get the same durability and availability as S3 Standard but at a reduced price.
  • Can sustain the concurrent loss of up to 2 facilities.
S3-RRS (S3- Reduced Redundancy Storage)
  • 99.99% durability and availability.
  • Use RRS if you are storing non-critical data that can be easily reproduced. Like thumbnails of images.
  • No Concurrent facility fault tolerance
S3 Glacier 
  • Data is stored in Amazon Glacier in "archives".
  • An archive can be any data such as a photo, video, or document.
  • A single archive can be as large as 40 terabytes.
  • You can store an unlimited number of archives in Glacier.
  • Amazon Glacier uses "vaults" as containers to store archives.
  • Under a single AWS account, you can have up to 1000 vaults.
S3 Supports versioning. 
What does that mean?
It means that if you change a file it can keep versions of both old and new files.
  • If you enable versioning in S3 it will keep all the versions even if you delete or update the old version.
  • Great backup tool
  • Once enabled, versioning cannot be disabled, only suspended.
  • Integrates with lifecycle rules.
  • Versioning's MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security.
  • Cross-region replication requires versioning to be enabled on the source bucket (see the sketch below).
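Here is a minimal boto3 sketch (bucket name is a placeholder) of turning on versioning for a bucket. MFA Delete can additionally be enabled, but that call must be made with the root account's MFA token supplied.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_versioning(
    Bucket="my-example-bucket",
    VersioningConfiguration={"Status": "Enabled"},
)

# To suspend (not disable) versioning later:
# s3.put_bucket_versioning(
#     Bucket="my-example-bucket",
#     VersioningConfiguration={"Status": "Suspended"},
# )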
S3  Supports Lifecycle Management
What does that mean?
It means you can move objects from one storage class to another after a set number of days, as per your schedule. This is used to reduce cost by moving less critical data to a cheaper storage class; a small sketch follows after the list below.
  • Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket
  • Can be used with versioning
  • Can be applied to current versions and previous versions
  • Transition from standard to infrequent access storage class can be done only after the data is in standard class storage for 30 days.
  • You can transition data directly from Standard to Glacier.
  • Lifecycle policy will not transition objects that are less than 128KB
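Here is a hedged boto3 sketch (bucket name and prefix are placeholders) of a lifecycle rule that moves objects to Infrequent Access after 30 days, to Glacier after 90 days, and expires them after one year.

import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-logs",
                "Filter": {"Prefix": "logs/"},   # apply only to this prefix
                "Status": "Enabled",
                "Transitions": [
                    {"Days": 30, "StorageClass": "STANDARD_IA"},
                    {"Days": 90, "StorageClass": "GLACIER"},
                ],
                "Expiration": {"Days": 365},
            }
        ]
    },
)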
If you want to try some hands-on, try this exercise.
This series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.

AWS Crash Course - Elastic Beanstalk

Welcome back to AWS Crash Course.
In the last section we discussed EBS.
In this section we will discuss AWS Elastic Beanstalk.
AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
  • You can push updates from Git, and only the modified files are transmitted to AWS Elastic Beanstalk.
  • Elastic Beanstalk supports IAM, EC2, VPC and RDS instances.
  • You have full access to the resources under Elastic Beanstalk.
  • Code is stored in S3.
  • Multiple environments are allowed to support version control. You can roll back changes.
  • Amazon Linux AMI and Windows Server 2008 R2 are supported.
    What are the supported Languages and Development Stacks?
  • Apache Tomcat for Java applications
  • Apache HTTP Server for PHP applications
  • Apache HTTP Server for Python applications
  • Nginx or Apache HTTP Server for Node.js applications
  • Passenger or Puma for Ruby applications
  • Microsoft IIS 7.5, 8.0, and 8.5 for .NET applications
  • Java SE
  • Docker
  • Go
How can you update Elastic Beanstalk?
  • You can upload new application code to AWS Elastic Beanstalk.
  • It supports multiple running environments like test, pre-prod, prod etc.
  • Each environment is independently configured and runs on its own separate AWS resources.
  • Elastic Beanstalk also stores and tracks application versions over time, so an existing environment can easily be rolled back to a prior version (see the sketch below).
  • A new environment can be launched using an older version to try to reproduce a customer problem.
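Here is a minimal boto3 sketch (application, environment, bucket and key names are placeholders) of deploying a new application version: register the version from a bundle in S3, then point an environment at it. Rolling back is the same update_environment call with an older VersionLabel.

import boto3

eb = boto3.client("elasticbeanstalk")

eb.create_application_version(
    ApplicationName="crash-course-app",
    VersionLabel="v2",
    SourceBundle={"S3Bucket": "my-example-bucket", "S3Key": "app/v2.zip"},
)

eb.update_environment(
    EnvironmentName="crash-course-prod",
    VersionLabel="v2",
)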
Fault Tolerance
  • Always design, implement, and deploy for automated recovery from failure
  • Use multiple Availability Zones for your Amazon EC2 instances and for Amazon RDS
  • Use ELB for balancing the load.
  • Configure your Auto Scaling settings to maintain your fleet of Amazon EC2 instances.
  • If you are using Amazon RDS, then set the retention period for backups, so that Amazon RDS can perform automated backups.
  What about Security?
  • Security on AWS is a shared responsibility
  • You are responsible for the security of data coming in and out of your Elastic Beanstalk environment.
  • Configure SSL to protect information from your clients.
  • Configure security groups and NACL with least privilege.
This short course was meant to give you an understanding of Elastic Beanstalk. If you want to try some hands-on, follow this AWS tutorial.

AWS Crash Course - EBS

In the last section we discussed VPC. In this section we will discuss EBS.
What is EBS?
  • EBS is Elastic Block Storage.
  • EBS volume is a durable, block-level storage. It’s similar to the hard disk that you have in your laptop or desktop.
  • EBS volumes can be used as primary storage for data that requires frequent updates.
  • EBS volume in an Availability Zone is automatically replicated within that zone to prevent data loss due to failure.
  • You can create encrypted EBS volumes with the Amazon EBS encryption feature or use 3rd party software for encryption.
  • To improve performance use RAID Groups e.g. RAID 0, RAID 1, RAID 10
What are the different types of EBS volumes?
  • General Purpose SSD (gp2) – Provides up to 10,000 IOPS (input/output operations per second) and can range in size from 1 GB to 16 TB. This is used for normal workloads and should be enough for your Dev or UAT setups.
  • Provisioned IOPS SSD (io1) – Provides up to 20,000 IOPS and can range in size from 4 GB to 16 TB. These are generally used for large SQL/NoSQL databases.
  • Throughput Optimized HDD (st1) – Provides up to 500 IOPS and can range in size from 500 GB to 16 TB. These are mostly useful for big data / data warehouses.
  • Cold HDD (sc1) – These are the cheapest kind of disks. They provide up to 250 IOPS and can range in size from 500 GB to 16 TB. These are commonly used for data archiving, as they provide low IOPS but are cheap for storing data which is not used frequently (a small sketch of creating volumes follows below).
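Here is a hedged boto3 sketch (Availability Zone and sizes are placeholders) of creating a gp2 volume and a provisioned-IOPS io1 volume with the EC2 API.

import boto3

ec2 = boto3.client("ec2")

# General Purpose SSD for a typical Dev/UAT workload.
gp2 = ec2.create_volume(AvailabilityZone="us-east-1a", Size=100, VolumeType="gp2")

# Provisioned IOPS SSD for an I/O-heavy database.
io1 = ec2.create_volume(
    AvailabilityZone="us-east-1a", Size=500, VolumeType="io1", Iops=10000
)
print(gp2["VolumeId"], io1["VolumeId"])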
You can take snapshots of EBS volumes.
So what is a snapshot?
  • You can back up the data on your EBS volumes to Amazon S3 by taking point-in-time snapshots
  • Snapshots are incremental backups – Saves time and storage costs
  • Snapshots support encryption
  • Snapshots exclude data that has been cached by any applications or the OS
  • You can share your unencrypted snapshots with others
  • You can use a copy of a snapshot for Migrations, DR, Data retention etc.
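And here is a minimal sketch (volume id and regions are placeholders) of taking a point-in-time snapshot and copying it to another region for DR.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

snapshot = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",            # placeholder volume id
    Description="Nightly backup of data volume",
)

# Copy the snapshot to another region once it completes.
ec2_west = boto3.client("ec2", region_name="us-west-2")
ec2_west.copy_snapshot(
    SourceRegion="us-east-1",
    SourceSnapshotId=snapshot["SnapshotId"],
    Description="DR copy",
)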
You can try some hands-on with EBS by using this exercise.

AWS Crash Course – Route 53

Route 53 is a DNS service that routes user requests.
  • Amazon Route 53 (Route 53) is a scalable and highly available Domain Name System (DNS) service.
  • The name Route 53 is a reference to UDP port 53, which is generally used for DNS.
  • Route 53's DNS service allows administrators to direct traffic by simply updating DNS records in the hosted zone.
  • The TTL (Time To Live) of resource records can be shortened, which allows record changes to propagate to clients faster.
  • One of the key features of Route 53 is programmatic access to the service, which allows customers to modify DNS records via web service calls.
Three Main functions of Route 53 are:-
Domain registration:- It allows you to register domain names from your AWS accounts.
DNS service :- This service is used for mapping your website IP to a name, e.g. 54.168.4.10 to example.com. It also supports many other record formats, which we will discuss below.
Health Monitoring:- It can monitor the health of your servers/VMs/instances and can route traffic as per the routing policy. It can also work as a load balancer for region level traffic management.
Route 53 supports different routing policies and you can use the one which is most suitable for your applications.
Routing Policies :-
  • Simple :- Route 53 responds to DNS queries using only the records in the record set.
  • Weighted :- This policy lets you split traffic based on the weights assigned, e.g. 10% of traffic goes to us-east-1 and 90% goes to eu-west-1.
  • Latency :- Allows you to route traffic based on the lowest network latency for your end user (i.e. which region will give the end user the fastest response time).
  • Failover :- This policy is used when you create an active/passive setup. Route 53 will monitor the health of your primary site using a health check.
  • Geolocation :- This routing lets you choose where your traffic will go based on the geographic location of end users. So a user requesting from France will be served from the server nearest to France.
Route 53 supports many DNS record formats:-
  • A Format :- Returns a 32-bit IPv4 address, most commonly used to map hostnames to an IP address of the host.
  • AAAA Format:-  Returns a 128-bit IPv6 address, most commonly used to map hostnames to an IP address of the host.
  • CNAME Format :- Alias of one name to another. So with a CNAME you can set example.com and www.example.com as aliases of each other.
  • MX Format :- Maps a domain name to a list of message transfer agents for that domain
  • NS Format:- Delegates a DNS zone to use the given authoritative name servers.
  • PTR Format :- Pointer to a canonical name. Unlike a CNAME, DNS processing stops and just the name is returned. The most common use is for implementing reverse DNS lookups, but other uses include such things as DNS-SD.
  • SOA Format:- Specifies authoritative information about a DNS zone, including the primary name server, the email of the domain administrator, the domain serial number, and several timers relating to refreshing the zone.
  • SRV Format:- Generalized service location record, used for newer protocols instead of creating protocol-specific records such as MX.
  • TXT Format :- Originally for arbitrary human-readable text in a DNS record.
Tip :- For the exam, understanding the A format and CNAME should be enough. A small sketch of creating records follows below.
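Here is a hedged boto3 sketch (hosted zone id, domain names and IPs are placeholders) of creating a simple A record and a weighted A record.

import boto3

route53 = boto3.client("route53")

route53.change_resource_record_sets(
    HostedZoneId="Z1234567890ABC",   # placeholder hosted zone id
    ChangeBatch={
        "Comment": "crash course demo records",
        "Changes": [
            {   # Simple routing: one record, one answer.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "www.example.com",
                    "Type": "A",
                    "TTL": 300,
                    "ResourceRecords": [{"Value": "54.168.4.10"}],
                },
            },
            {   # Weighted routing: this endpoint gets a share of the traffic.
                "Action": "UPSERT",
                "ResourceRecordSet": {
                    "Name": "app.example.com",
                    "Type": "A",
                    "SetIdentifier": "us-east-1",
                    "Weight": 10,
                    "TTL": 60,
                    "ResourceRecords": [{"Value": "203.0.113.10"}],
                },
            },
        ],
    },
)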
If you want to try some hands-on, try this exercise.
This series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.

AWS Crash Course – VPC

In the last section we discussed EC2. In case you missed it, you can check it here: AWS Crash Course – EC2.
In this section we will discuss VPC.
What is VPC?
  • VPC is Virtual Private Cloud.
  • VPC is like your own private cloud inside the AWS public cloud.
  • You can decide the network range.
  • Your VPC is not shared with others.
  • You can launch instances in VPC and restrict inbound/outbound access to them.
  • You can leverage multiple layers of security, including security groups and network access control lists.
  • You can create a Virtual Private Network (VPN) connection between your corporate datacenter and your VPC.
Components of Amazon VPC:-
  • Subnet: A segment of a VPC's IP address range. This is basically the network range of IPs which you assign to your resources, e.g. EC2 instances.
  • Internet Gateway: If you want an instance in your VPC to be able to access the public Internet, you create an internet gateway.
  • NAT Gateway: You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.
  • Hardware VPN Connection: A hardware-based VPN connection between your Amazon VPC and your datacenter, home network, or co-location facility.
  • Virtual Private Gateway: A virtual private gateway is the VPN concentrator on the Amazon side of the VPN connection.
  • Customer Gateway: A customer gateway is a physical device or software application on your side of the VPN connection.
  • Router: Routers act like mediators for your subnets in the VPC. They interconnect subnets and direct traffic between Internet gateways, virtual private gateways, NAT gateways, and subnets.
  • Peering Connection: A peering connection enables you to route traffic via private IP addresses between two peered VPCs. It is used for VPC peering, by which you can establish a connection between two different VPCs. A small sketch of creating a few of these components follows below.
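Here is a minimal boto3 sketch (CIDR ranges are placeholders) wiring up a few of the components above: a VPC, a subnet, and an internet gateway.

import boto3

ec2 = boto3.client("ec2")

vpc = ec2.create_vpc(CidrBlock="10.0.0.0/16")
vpc_id = vpc["Vpc"]["VpcId"]

subnet = ec2.create_subnet(VpcId=vpc_id, CidrBlock="10.0.1.0/24")

igw = ec2.create_internet_gateway()
ec2.attach_internet_gateway(
    InternetGatewayId=igw["InternetGateway"]["InternetGatewayId"], VpcId=vpc_id
)

# A route table entry pointing 0.0.0.0/0 at the internet gateway would make
# the subnet public; that step is left out here for brevity.
print("Created VPC", vpc_id, "with subnet", subnet["Subnet"]["SubnetId"])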
VPC has a few more components, but to avoid confusion we will discuss them in later sections.
This series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.

AWS Crash Course - EC2

We are starting this series on AWS to give you a decent understanding of different AWS services. These will be short articles which you can go through in 15-20 minutes every day.
You can check out the complete AWS Crash Course series here.
Introduction:-
  • AWS compute is part of its IaaS offerings.
  • With compute, you can deploy virtual servers to run your applications.
  • You don't have to wait for days or weeks to get your desired server capacity.
  • You can manage the OS or let AWS manage it for you.
  • It can be used for building mobile apps or running massive clusters.
  • You can even deploy applications serverless.
  • It provides high fault tolerance.
  • Easy scalability and load balancing.
  • You are billed as per your usage.
What is EC2?
  • EC2 is Elastic Compute Cloud.
  • It's a VM (virtual machine) in the cloud.
  • You can commission one or thousands of instances simultaneously, and pay only for what you use, making web-scale cloud computing easy.
  • Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
  • Amazon EC2 provides developers the tools to build failure resilient applications and isolate them from common failure scenarios.
What are EC2 pricing models?
  • On Demand – Pay by the hour with no long-term commitment.
  • Reserved – Yearly reservations, up to 75% cheaper compared to On Demand.
  • Dedicated – A dedicated physical server is provided to you. Up to 70% cheaper compared to On Demand.
  • Spot – Bid on spare Amazon computing capacity. Up to 90% cheaper compared to On Demand.
EC2 Instance Types:-
  • General Purpose (T2, M4 and M3) – Small and mid-size databases
  • Compute Optimized (C4 and C3) – High performance front-end fleets, web-servers, batch processing etc.
  • Memory Optimized (X1, R4 and R3) – High performance databases, data mining & analysis, in-memory databases
  • Accelerated Computing Instances(P2, G2 and F1) – Used for graphic workloads
  • Storage Optimized I3 – High I/O Instances – NoSQL databases like Cassandra, MongoDB
  • D2 – Dense-storage Instances – Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing
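Here is a hedged boto3 sketch (AMI id, key pair and security group are placeholders) of launching a single On-Demand t2.micro instance.

import boto3

ec2 = boto3.client("ec2")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",              # placeholder Amazon Linux AMI id
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
    KeyName="my-keypair",                         # placeholder EC2 key pair
    SecurityGroupIds=["sg-0123456789abcdef0"],    # placeholder security group
    TagSpecifications=[
        {"ResourceType": "instance",
         "Tags": [{"Key": "Name", "Value": "crash-course-demo"}]}
    ],
)
print("Launched:", response["Instances"][0]["InstanceId"])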
Check out more details in the next section: AWS Crash Course – VPC.
If you want to try some hands-on, you can follow this guide to launch an Amazon Linux instance, or this one for a Windows instance.