Solved: How to login to AWS EC2 Linux Instance

In this post we will discuss how you can log in to your AWS EC2 Linux instance using PuTTY.
Pre-requisites :-
Download PuTTY and PuTTYgen, and keep the .pem private key of your EC2 key pair handy.
Once you are done with the pre-requisites let's move ahead.
Convert .pem key to .ppk
  • First we will convert the .pem key to a .ppk key.
  • Open the PuTTYgen you downloaded.
  • Click on "Load". Browse and select your private key with the .pem extension.
Now click on "Save private key".
It will ask if you want to add a passphrase. A passphrase is like an additional password you enter when you log in. If you want, you can enter one in "Key passphrase".
For this exercise I skipped the passphrase and just clicked "Yes" on the warning.

  • Save the key with a name you like. Check that the new key file now has a .ppk extension.
Using the Key for Login
Now we will use the .ppk key we just created to log in to our EC2 instance.
  • Open the PuTTY that we downloaded earlier.
  • In the left pane click on Session.
In "Host Name" enter your server details, i.e. the user name and IP.
If you are using the Amazon Linux image the default user is ec2-user. So the entry will be like ec2-user@33.44.55.66 and Port 22.

  • In the left navigation pane click on "Connection" and expand it.
Next expand “SSH” and click on “Auth” (refer image below).

In the right pane click on Browse and select the .ppk key we created earlier.
  • Now in the left navigation pane click on "Session" again. In the right pane, under "Saved Sessions", give the session a name like "test" and click Save. This will save your session so that you don't have to repeat this setup again.
  • Finally select the session you created and click "Open". If everything is configured correctly you will now be logged in to your EC2 instance.
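PuTTY is Windows-specific. If you ever need to script the same login (for example from Linux, macOS or a CI job), a rough equivalent using the Python paramiko library and the original .pem key could look like the sketch below; the IP, user and key path are placeholders for your own values.

import paramiko

# Placeholders: replace with your instance's public IP and your .pem key path.
HOST = "33.44.55.66"
USER = "ec2-user"          # default user on Amazon Linux
KEY_FILE = "my-key.pem"    # paramiko uses the .pem key directly, no .ppk needed

client = paramiko.SSHClient()
# Accept the host key on first connection (PuTTY shows a similar prompt).
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname=HOST, port=22, username=USER, key_filename=KEY_FILE)

stdin, stdout, stderr = client.exec_command("uname -a")
print(stdout.read().decode())

client.close()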
Note:- If your SSH session gets timed out after being idle for a few minutes, check this post on how to set the PuTTY keep-alive time.

Solved: Using FileZilla for transferring files to AWS EC2 instances

FileZilla is a great free open-source tool for securely transferring files to and from Unix and Linux servers. It is also secure in comparison to doing plain FTP to a Linux server.
If not already downloaded, you can download FileZilla from here.
By default you can enter the user ID and password of the destination server to connect to it on port 22.
But in the case of AWS EC2 instances you don't get a password; instead you use the SSH key to connect.
We will need the .ppk key for this activity. If you have a .pem key and not a .ppk key, refer to this post to convert the .pem key to a .ppk key.
Once you have the .ppk key let's move on.
Create a new site in FileZilla
  • Click on File > Site Manager
  • In the left pane click on "New Site" and give a name to the site, e.g. devserver
  • In the right window enter the details as below:-
Host:- Mention the IP of the server
Protocol:- SFTP
Logon type:- Key file
User:- ec2-user (or the user name for which you uploaded the SSH key)
Key file:- Browse and select the .ppk key file
  • Finally click OK.
Try connecting to the server. It should work now.
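If FileZilla is not an option, or you want to automate the transfer, the same key-based SFTP connection can be scripted. Below is a minimal sketch using the Python paramiko library with the .pem key; the host, user, key path and file names are placeholders.

import paramiko

HOST = "33.44.55.66"       # placeholder: your EC2 public IP
USER = "ec2-user"
KEY_FILE = "my-key.pem"    # paramiko takes the .pem key, not the .ppk

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname=HOST, port=22, username=USER, key_filename=KEY_FILE)

# Open an SFTP session over the SSH connection and transfer files.
sftp = client.open_sftp()
sftp.put("report.txt", "/home/ec2-user/report.txt")           # local -> remote
sftp.get("/home/ec2-user/report.txt", "report_copy.txt")      # remote -> local
sftp.close()
client.close()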

If you are not allowed to install third-party software, please check this post to use SFTP for file transfer to EC2.

Solved: "Network error: Software caused connection abort"

Sometimes you may notice that your PuTTY session gets disconnected with the error "Network error: Software caused connection abort".
This can happen because of a timeout setting on the server or sometimes due to a firewall. To resolve this issue you will have to set a keep-alive time for the session.
After you set a keep-alive time, PuTTY will send a packet every specified number of seconds to keep the session alive.
Generally you can set it to 240 seconds, i.e. 4 minutes. But at times you may have to keep it lower. For example, when I connect from my home laptop to my AWS EC2 instance I have to keep it at 2 seconds.
To set it:
  • Open PuTTY.
  • Load the session for which you are facing the timeout issue.
  • Click on Connection in the left pane.
Here we have set "Seconds between keepalives" to 2 (refer image below).



  • Finally click on "Session" in the left pane and save the session.
If you already have other saved sessions in PuTTY, you will have to repeat the above steps for each saved session as needed.
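The keep-alive idea is not PuTTY-specific. If you connect from a script instead, you can ask the SSH transport to do the same thing; here is a small sketch with the Python paramiko library (host, user and key path are placeholders).

import paramiko

client = paramiko.SSHClient()
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect(hostname="33.44.55.66", username="ec2-user", key_filename="my-key.pem")

# Send a keep-alive packet every 120 seconds so idle sessions are not dropped
# by the server or an intermediate firewall (use a lower value if you need to).
client.get_transport().set_keepalive(120)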

Comparing AWS, Azure and Redhat exams

In the last couple of years I've taken certification exams from multiple cloud providers. While the AWS and Azure exams are more theoretical and based on multiple choice questions, the Redhat Openstack exam is a practical lab exam. In this article I'll discuss the pros and cons of the different exam patterns.
AWS
Passed AWS CSA – Associate and Professional
Pros:-
  • You can refer to any question any time you want during the exam.
  • The exam tests you on a wide range of topics.
  • Even if you have made mistakes in the beginning, you can recover by reviewing the questions later.
  • No datasheet-type questions like "How much RAM does a C3.xlarge offer?"
  • You get a 1-year free tier which is great for learning about AWS.
Cons:-
  • Exams are expensive in comparison to Microsoft Azure (at least in India).
  • Many questions just test your reading speed.
  • No versioning, so you don't know whether to answer as per a recent announcement or the older method.
Azure
Passed Architecting Microsoft Azure Solutions.
Pros:-
  • Azure exams are not very expensive in comparison to AWS or Red Hat.
  • The exam tests you on a wide range of topics.
  • Even if you have made mistakes in the beginning, you can recover by answering other questions correctly in later sections.
Cons:-
  • No versioning, so you don't know whether to answer as per a recent announcement or the older method.
  • It’s a race against time.
  • The most insane thing I found in the exam is that with each case study you get 5 to 6 questions, but for about 2 to 3 of those questions you can't refer back to the case study. I don't understand why Microsoft expects you to remember a whole two-page case study.
  • The 30-day free tier is too little to really get to know Azure.
Update:- Azure is now (Oct-17) offering a 12-month free trial.
Redhat Openstack
Passed RHCSA in Redhat Openstack
Pros:-
  • The difficulty level of the exam is medium.
  • You generally get questions on tasks which you will be doing in real life.
  • You get only 15-20 questions (tasks), and even if someone knows the questions beforehand they still have to do the tasks practically to pass the exam. So even if anyone has dumps, they are useless.
  • It has versioning, so you know whether you have to answer as per Redhat Openstack version 6 or 8.
  • You can play with Redhat Openstack by installing it on your desktop or laptop. Good learning!
Cons:-
  • Exams are expensive in comparison to Microsoft Azure.
  • If you make a mistake at the beginning or in the middle of the exam, chances are you will mess up the whole exam or waste a lot of time correcting it.
  • If your machine doesn’t work properly you may lose time. But generally examiners take care of this.
In the end, I'd like to say that professional exams should not be like college entrance exams, which mostly test your reading speed and cramming ability. That is OK for entrance exams, since undergrads and grads have limited practical experience.
Professional exams should be more practical; that way you can be sure that if a person has cleared the exam, they definitely know how to do the work in real life.
If you want to know how to prepare for these exams, refer to my posts for AWS, Azure and Redhat Openstack.

Azure Crash Course - WebJobs

Azure WebJobs is similar to AWS Lambda, i.e. a serverless way to run background code.
You can run programs or scripts in WebJobs in your Azure App Service web app in three ways:
On demand – You trigger it when you need it.
Continuously – It keeps running in the background all the time.
Scheduled – You can schedule it to run at a specific date and time.
The following file types are accepted:-
  • .cmd, .bat, .exe (using Windows cmd)
  • .ps1 (using PowerShell)
  • .sh (using Bash)
  • .php (using PHP)
  • .py (using Python)
  • .js (using Node.js)
  • .jar (using Java)
Interestingly, Azure WebJobs supports shell scripts, which is something AWS Lambda lacks.
The WebJobs SDK does not yet support .NET Core.
Typical use case for Azure WebJobs:-
  • Image processing or other CPU-intensive work.
  • Queue processing.
  • RSS aggregation.
  • File maintenance, such as aggregating or cleaning up log files.
  • Other long-running tasks that you want to run in a background thread, such as sending emails.
  • Any tasks that you want to run on a schedule, such as performing a back-up operation every night.
Do Note
  • Web apps in Free mode can time out after 20 minutes if there are no requests to the scm (deployment) site and the web app’s portal is not open in Azure. Requests to the actual site will not reset this.
  • Code for a continuous job needs to be written to run in an endless loop (see the sketch after this list).
  • Continuous jobs run continuously only when the web app is up. Check what a Web App is.
  • Basic and Standard modes offer the Always On feature which, when enabled, prevents web apps from becoming idle.
  • You can only debug continuously running WebJobs. Debugging scheduled or on-demand WebJobs is not supported.
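Since .py files are accepted, a continuous WebJob can be as simple as a Python script that loops forever and does its work on each pass. The sketch below is only illustrative (the work function is a placeholder); you would package a file like run.py and upload it as a continuous WebJob.

# run.py - a minimal sketch of a continuous Azure WebJob written in Python.
# A continuous job must run in an endless loop; App Service restarts it if it exits.
import time

def process_pending_work():
    # Placeholder for real work, e.g. reading messages from a queue,
    # resizing images or cleaning up log files.
    print("checking for work...")

while True:
    process_pending_work()
    time.sleep(60)   # wait a minute before the next pass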
This article was written to give you a quick snapshot of WebJobs. You can follow this Azure doc to check how to quickly deploy apps with WebJobs.
The Azure Crash Course series is created to give you a quick snapshot of Azure services. You can check out other services in this series over here.

AWS Crash Course - Redshift

Redshift is a data warehouse service from Amazon. It's like a virtual place where you store a huge amount of data.
  • Redshift is a fully managed, petabyte-scale system.
  • Amazon Redshift is based on PostgreSQL 8.0.2.
  • It is optimized for data warehousing.
  • It supports integrations and connections with various applications, including Business Intelligence tools.
  • Redshift provides custom JDBC and ODBC drivers.
  • Redshift can be integrated with CloudTrail for auditing purposes.
  • You can monitor Redshift performance from CloudWatch.
Features of Amazon Redshift
Supports VPC − Users can launch Redshift within a VPC and control access to the cluster through the virtual networking environment.
Encryption − Data stored in Redshift can be encrypted; encryption is configured when you create the cluster.
SSL − SSL encryption is used to encrypt connections between clients and Redshift.
Scalable − With a few simple clicks, you can choose vertical scaling (increasing instance size) or horizontal scaling (increasing the number of compute nodes).
Cost-effective − Amazon Redshift is a cost-effective alternative to traditional data warehousing practices. There are no up-front costs, no long-term commitments, and an on-demand pricing structure.
MPP (Massively Parallel Processing) – Redshift leverages parallel processing, which improves query performance. Massively parallel refers to the use of a large number of processors (or separate computers) to perform coordinated computations in parallel. This reduces computing time and improves query performance.
Columnar Storage – Redshift uses columnar storage, so it stores data tables by column rather than by row. The goal of a columnar database is to efficiently write and read data to and from hard disk storage in order to speed up the time it takes to return a query.
Advanced Compression – Compression is a column-level operation that reduces the size of data when it is stored, thus helping to save space.
Type of nodes in Redshift
Leader Node
Compute Node
What do these nodes do?
Leader Node:- A leader node receives queries from client applications, parses the queries and develops execution plans, which are an ordered set of steps to process these queries. The leader node then coordinates the parallel execution of these plans with the compute nodes. The good part is that you will not be charged for leader node hours; only compute nodes incur charges. If you run a single-node Redshift cluster, the single node acts as both leader and compute node.
Compute Node:- Compute nodes execute the steps specified in the execution plans and transmit data among themselves to serve these queries. The intermediate results are sent back to the leader node for aggregation before being sent back to the client applications. You can have 1 to 128 compute nodes.
From which sources can you load data into Redshift?
You can load data from multiple sources like:-
  • Amazon S3
  • Amazon DynamoDB
  • Amazon EMR
  • AWS Data Pipeline
  • Any SSH-enabled host on Amazon EC2 or on-premises
Redshift Backup and Restores
  • Redshift can take automatic snapshots of the cluster.
  • You can also take a manual snapshot of the cluster (see the sketch after this list).
  • Redshift continuously backs up its data to S3.
  • Redshift attempts to keep at least 3 copies of the data.
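For example, a manual snapshot can be taken with a couple of lines of boto3 (the AWS SDK for Python); the cluster and snapshot identifiers below are placeholders.

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

# Take a manual snapshot of an existing cluster (identifiers are placeholders).
redshift.create_cluster_snapshot(
    SnapshotIdentifier="testdw-manual-snapshot-1",
    ClusterIdentifier="testdw",
)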
Hope the above snapshot gives you a decent understanding of Redshift. If you want to try some hands-on work, check this tutorial.
This series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.

How to load data in Amazon Redshift Cluster and query it?

In the last post we checked How to build a Redshift Cluster?. In this post we will see how to load data into that cluster and query it.
Pre-requisite :-
Download SQL Workbench
Download Redshift Driver
Once you have downloaded the above-mentioned pre-requisites, let's move ahead.
First we will obtain the JDBC URL.
  • Login to your AWS Redshift Console .
  • Click on the cluster you have created. If you have followed the last post it will be "testdw" .
  • In the "Configuration" tab look for JDBC URL .
  • Copy the JDBC URL and save it in notepad.
Now open SQL Workbench. You just have to click on the 32-bit or 64-bit exe as per your OS version. In my case I am using Windows 10 64-bit, so the exe name is SQLWorkbench64.
  • Click on File > Connect window.
  • In the bottom left of the "Select Connection Profile" window click on "Manage Drivers"
  • In the "Manage Drivers" window click on the folder icon, browse to the location of the Redshift driver you downloaded earlier and select it.
Fill in the other details in the "Manage Drivers" window as below.
Name:- Redshiftdriver JDBC 4.2
Classname:- com.amazon.redshift.jdbc.Driver
  • Click OK
  • Now in the "Select Connection Profile" window, fill in the details as below. You can also refer to the image below.
Driver:- Select the Redshift driver you added.
URL:- Mention the JDBC URL you saved earlier.
Username:- The DB username you mentioned during cluster creation.
Password:- Enter the password of the DB user.

Check the Autocommit box.

  • Finally click on OK.
  • If everything is configured correctly, you will get connected to the DB.
  • Try executing the query
select * from information_schema.tables;
  • If your connection is successful you will see results in the window.
  • Now we will load some sample data which is provided by AWS and kept on S3. In SQL Workbench, copy/paste the below queries and execute them to create the tables.
create table users(
 userid integer not null distkey sortkey,
 username char(8),
 firstname varchar(30),
 lastname varchar(30),
 city varchar(30),
 state char(2),
 email varchar(100),
 phone char(14),
 likesports boolean,
 liketheatre boolean,
 likeconcerts boolean,
 likejazz boolean,
 likeclassical boolean,
 likeopera boolean,
 likerock boolean,
 likevegas boolean,
 likebroadway boolean,
 likemusicals boolean);

create table venue(
 venueid smallint not null distkey sortkey,
 venuename varchar(100),
 venuecity varchar(30),
 venuestate char(2),
 venueseats integer);

create table category(
 catid smallint not null distkey sortkey,
 catgroup varchar(10),
 catname varchar(10),
 catdesc varchar(50));

create table date(
 dateid smallint not null distkey sortkey,
 caldate date not null,
 day character(3) not null,
 week smallint not null,
 month character(5) not null,
 qtr character(5) not null,
 year smallint not null,
 holiday boolean default('N'));

create table event(
 eventid integer not null distkey,
 venueid smallint not null,
 catid smallint not null,
 dateid smallint not null sortkey,
 eventname varchar(200),
 starttime timestamp);

create table listing(
 listid integer not null distkey,
 sellerid integer not null,
 eventid integer not null,
 dateid smallint not null  sortkey,
 numtickets smallint not null,
 priceperticket decimal(8,2),
 totalprice decimal(8,2),
 listtime timestamp);

create table sales(
 salesid integer not null,
 listid integer not null distkey,
 sellerid integer not null,
 buyerid integer not null,
 eventid integer not null,
 dateid smallint not null sortkey,
 qtysold smallint not null,
 pricepaid decimal(8,2),
 commission decimal(8,2),
 saletime timestamp);
Now load the sample data. Ensure that in the below queries you replace "<iam-role-arn>" with your IAM role's ARN.
So, with the ARN filled in, the first copy command should look like:
copy users from 's3://awssampledbuswest2/tickit/allusers_pipe.txt' 
credentials 'aws_iam_role=arn:aws:iam::123456789123:role/redshiftrole' 
delimiter '|' region 'us-west-2';
copy users from 's3://awssampledbuswest2/tickit/allusers_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' region 'us-west-2';

copy venue from 's3://awssampledbuswest2/tickit/venue_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' region 'us-west-2';

copy category from 's3://awssampledbuswest2/tickit/category_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' region 'us-west-2';

copy date from 's3://awssampledbuswest2/tickit/date2008_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' region 'us-west-2';

copy event from 's3://awssampledbuswest2/tickit/allevents_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' timeformat 'YYYY-MM-DD HH:MI:SS' region 'us-west-2';

copy listing from 's3://awssampledbuswest2/tickit/listings_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' region 'us-west-2';

copy sales from 's3://awssampledbuswest2/tickit/sales_tab.txt'
credentials 'aws_iam_role=<iam-role-arn>'
delimiter '\t' timeformat 'MM/DD/YYYY HH:MI:SS' region 'us-west-2';
Once you have loaded the data you can run sample queries against these tables in SQL Workbench.

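If you prefer to query from a script instead of SQL Workbench, the same checks can be run from Python. The sketch below assumes the psycopg2 library is installed and uses placeholder connection details (Redshift listens on port 5439 and speaks the PostgreSQL protocol); the second query is a typical "top buyers" aggregate over the tables we just loaded.

import psycopg2

# Placeholders: use your cluster's endpoint, database, user and password.
conn = psycopg2.connect(
    host="testdw.xxxxxxxx.us-east-1.redshift.amazonaws.com",
    port=5439,
    dbname="testdb",
    user="testuser",
    password="your-password",
)

with conn.cursor() as cur:
    # Quick sanity check: how many rows did the copy command load?
    cur.execute("select count(*) from users;")
    print("users loaded:", cur.fetchone()[0])

    # Top 10 buyers by quantity of tickets sold.
    cur.execute("""
        select firstname, lastname, q.total_quantity
        from (select buyerid, sum(qtysold) as total_quantity
              from sales
              group by buyerid
              order by total_quantity desc
              limit 10) q, users
        where q.buyerid = userid
        order by q.total_quantity desc;
    """)
    for row in cur.fetchall():
        print(row)

conn.close()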
Congrats! You have finally created the Redshift cluster and run queries on it after loading data.
Refer this post if you want to reset the master user password.
Don't forget to clean up the cluster or you will be billed.
  • For deleting the cluster, just click on the cluster (in our case it's testdw) in the AWS console.
  • Click on the "Cluster" drop-down and select delete.
That will clean up everything.
Hope this guide was helpful to you! Do let me know in the comments section if you have any queries or suggestions.

How to create a multi-node Redshift cluster?

In this tutorial we will see how to create a multi-node Amazon Redshift cluster. If you are new to Redshift, refer to our AWS Crash Course – Redshift.
Pre-requisites :-
You will have to create an IAM role and Security Group for Redshift.
Refer How to create a Security Group and How to create an IAM role .
Once you have created the role and security group follow below steps:
  • Login to the Amazon Redshift Console. For this tutorial we will be launching the cluster in the US East region.
  • In the Dashboard click on Launch Cluster.
In the cluster details section fill in the details as below:-
Cluster identifier:- testdw
Database name:- testdb
Master user name:- testuser
Master user password:- <your password>
  • In the node configuration section select details as in below screenshot:-
Note:- We will be creating a 2-node cluster, so select multi-node.

  • Once done click continue.
  • In the additional configuration screen leave the top section as default (refer image below). Just change "VPC security groups" to "redshiftsg" and "Available roles" to "redshiftrole", which we created earlier.



  • Click on Continue and review everything. It will show you the approximate cost of running this cluster, but if you are using the "free tier" you may run it for free, provided you have not exhausted your "free tier" limits.

  • Finally click on launch cluster.
If you have followed everything as in the above tutorial, your cluster will be ready in around 15 minutes.
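The same multi-node cluster can also be created from code instead of the console. Below is a hedged sketch with boto3 (the AWS SDK for Python); the identifiers, password, node type, security group ID and role ARN are placeholders or assumptions, so substitute the values you created above.

import boto3

redshift = boto3.client("redshift", region_name="us-east-1")

redshift.create_cluster(
    ClusterIdentifier="testdw",
    DBName="testdb",
    MasterUsername="testuser",
    MasterUserPassword="YourPassword123",            # placeholder password
    NodeType="dc2.large",                            # assumption: whichever node type you picked in the console
    ClusterType="multi-node",
    NumberOfNodes=2,
    VpcSecurityGroupIds=["sg-0123456789abcdef0"],    # placeholder: the redshiftsg group ID
    IamRoles=["arn:aws:iam::123456789012:role/redshiftrole"],  # placeholder ARN
)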

Hope this tutorial was helpful to you. In the next post we will discuss how to load data into this cluster and query the data.

How to create an IAM role in AWS?

In this tutorial we will be creating an IAM role which we will use to launch an Amazon Redshift cluster.
  • Login to your IAM Console .
  • In the left Navigation Pane Click on “Roles”
  • Click on “Create New Role”
  • Under AWS Service Role, select "Amazon Redshift".
  • Under Policy Type, select "AmazonS3FullAccess" and click on "Next Step".
  • Enter details as below:-
Role Name:- redshiftrole
Role description:- Amazon Redshift Role
  • Finally click on Create Role.
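If you'd rather script it, the same role can be created with boto3. This is a sketch under the assumption that the role only needs Redshift as a trusted service plus the managed AmazonS3FullAccess policy used in this tutorial.

import json
import boto3

iam = boto3.client("iam")

# Trust policy letting the Redshift service assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "redshift.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(
    RoleName="redshiftrole",
    Description="Amazon Redshift Role",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

# Attach the AWS managed S3 full access policy.
iam.attach_role_policy(
    RoleName="redshiftrole",
    PolicyArn="arn:aws:iam::aws:policy/AmazonS3FullAccess",
)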

How to create a security group in AWS?

In this tutorial we will create a security group which we will use later while creating our Redshift cluster.
  • Login to your AWS VPC Console .
  • In the left Navigation Pane select Security Groups .
  • Once in Security groups section click on Create Security Group .
  • Enter details as below:
Name tag: redshiftsg
Group name: redshiftsg
Description: redshift security group
VPC: <Select your VPC or if you are not sure leave it as default>
  • Once the details are filled in, click on Yes, Create.
  • Now select the security group you just created.
  • In the bottom pane click on inbound rules.
Click Edit.
Select Redshift in Type and 0.0.0.0/0 as the Source. By this we are basically allowing inbound traffic to the Redshift port from any IP. You can also mention a specific IP range in Source instead.

Leave the outbound rule as “All traffic”
  • Finally click save.
Congrats!  You have successfully created a security group.
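For reference, here is a hedged boto3 equivalent of the same steps; the VPC ID is a placeholder, and the rule opens the default Redshift port 5439 to the world just like the console example (in practice you would normally restrict the CIDR to your own IP range).

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Create the group in your VPC (the VPC ID is a placeholder).
sg = ec2.create_security_group(
    GroupName="redshiftsg",
    Description="redshift security group",
    VpcId="vpc-0123456789abcdef0",
)

# Allow inbound traffic on the Redshift port (5439) from anywhere.
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 5439,
        "ToPort": 5439,
        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
    }],
)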

What is MFA and why is it important? - Security in Cloud

When you talk about cloud you may have heard about MFA.
Why is MFA important? Before I go into that, let me tell you a couple of true incidents.
You may have heard about a company called Code Spaces. But for those who don’t know Code Spaces “was” a code repository company like GitHub.
One day a hacker got control of the AWS Management Console of Code Spaces.
The hacker sent a mail to the company asking for a huge ransom to be paid within 12 hours, or he would delete all the data in their AWS account. Many of the Code Spaces engineers tried to regain access to their company's AWS account but failed, as the hacker had created many backdoor users in the account. At the end of the time limit the hacker deleted everything in the company's AWS account, including all servers, databases and even the backups. And within a day the company was almost wiped out. You can read the full story here.
Now you must be thinking that you are just a developer or sysops guy and there is not much of value in your personal AWS account, so you can live without MFA. Unfortunately you are wrong, my friend.
We have seen many incidents where developers put their Access Key ID and Secret Access Key in code so that they don't have to authenticate manually. But these developers may want to work with a friend on the code, so they upload it to GitHub. Now the problem is that there are hackers who search GitHub just to get hold of these keys. And once they get the keys they can easily log in to your AWS account. You may not have anything valuable in your AWS account, but what the hackers do is spin up huge instances in your account. They use this computing power for bitcoin mining. Once they are done with the mining they get the money and you get the bill from AWS. (At times we have seen AWS waive the bill in this situation as a one-time exception, but if you are not so lucky you may have to pay a huge bill of hundreds or thousands of dollars.) You can check Joe's detailed story here.
In both cases the hacking could have been avoided if MFA had been activated on the account.
So what is MFA?
MFA is Multi-Factor Authentication. It is like a second level of security for your account.

MFA can be activated in multiple ways, including SMS (text message) or an application like Google Authenticator on your mobile phone. By enabling this you add an additional authentication factor to your account. So once you enable MFA, you will enter an additional, constantly changing code along with your normal login ID and password.
For my account I use Google Authenticator. It’s a free App available on iOS and Android app stores.
How can you enable MFA?
It's simple; Amazon has given clear steps on how to do it. But if you are not sure, follow this post: How to enable MFA in AWS for free?
MFA is the easiest and quickest defense against hackers. You should also follow other security hardening measures to keep yourself safe in the cloud. I'll discuss them in the next post.
For now, it's recommended that you enable MFA on your account ASAP. Even if you are using Azure or any other cloud, you should enable MFA. Remember, security in the cloud is a shared responsibility.
Be Safe!

Solved: How to enable MFA in AWS for free?

MFA is extremely important to keep your account safe.
Below we will show you how to enable MFA using Google Authenticator. It's a free app available on both the iOS and Android stores.
  1. Login to your IAM console at https://console.aws.amazon.com/iam/
  2. In the IAM  Dashboard, below Security Status you will see an option of “Activate MFA on your root account”
  3. Click on “Activate MFA on your root account” .
  4. Click on "Manage MFA".
  5. In the pop-up select "A virtual MFA device" and click on Next.
  6. Now download "Google Authenticator" on your mobile from the iOS or Android app store.
  7. Once the app is installed, open the app and tap the + sign in the app.
  8. It will give you an option of “Scan a Barcode”. Click on that.
  9. Now go back to your AWS console and click next.
  10. You will now see a QR code in AWS. Scan it with your mobile using the app option from step 8.
  11. Now you will see a 6-digit code on your phone. Enter the code in AWS in "Authentication Code 1".
  12. After a few seconds you will see a new code on the phone; enter that new code in AWS in "Authentication Code 2". Ensure that both codes are generated in consecutive sequence.
  13. Click on "Activate Virtual MFA".
  14. That's all! Next time you log in to the AWS console you will need the code from "Google Authenticator" along with your user ID and password. This small activity will keep you safe.
If you want to enable MFA for a specific IAM user, check this post: MFA device for a user. A scripted alternative is sketched below.
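Enabling a virtual MFA device for an IAM user can also be done from code. The sketch below uses boto3 and is assumption-heavy: the user and device names are placeholders, and the two 6-digit codes must come from your authenticator app after you load the returned seed (or QR code) into it.

import boto3

iam = boto3.client("iam")

# Create a virtual MFA device; the response contains a Base32 seed and a QR code
# PNG that you load into Google Authenticator (names here are placeholders).
device = iam.create_virtual_mfa_device(VirtualMFADeviceName="my-user-mfa")
serial = device["VirtualMFADevice"]["SerialNumber"]
seed = device["VirtualMFADevice"]["Base32StringSeed"]
print("Seed to add in the authenticator app:", seed)

# After adding the seed to the app, supply two consecutive codes it generates.
iam.enable_mfa_device(
    UserName="my-user",               # placeholder IAM user name
    SerialNumber=serial,
    AuthenticationCode1="123456",     # first code shown by the app
    AuthenticationCode2="654321",     # the next consecutive code
)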
The next step should be to set a billing alert which will let you know if you are going above your billing limits. Check this post: AWS Billing Alert.

AWS Crash Course - SQS

Today we will discuss an AWS messaging service called SQS.
  • SQS is Simple Queue Service.
  • It’s a messaging queue service which acts like a buffer between message producing and message receiving components.
  • Using SQS you can decouple the components of an application.
  • Messages can contain up to 256 KB of text in any format.
  • Any component can later retrieve the messages programmatically using the SQS API.
  • SQS queues are dynamically created and scale automatically so you can build and grow applications quickly and efficiently.
  • You can combine SQS with Auto Scaling of EC2 instances, scaling out and in based on the queue depth.
  • Used by companies like Vodafone, BMW, RedBus, Netflix etc.
  • You can use Amazon SQS to exchange sensitive data between applications using server-side encryption (SSE) to encrypt each message body.
  • SQS is a pull (or poll) based system, so messages are pulled from SQS queues.
  • Multiple copies of every message are stored redundantly across multiple Availability Zones.
  • Amazon SQS is deeply integrated with other AWS services such as EC2, ECS, RDS, Lambda etc.
Two types of SQS queues:-
  • Standard Queue
  • FIFO Queue
Standard Queue :-
  • Standard Queue is the default type offered by SQS
  • Allows a nearly unlimited number of transactions per second.
  • Guarantees that a message will be delivered at least once.
  • But it can deliver the message more than once also.
  • It provides best effort ordering.
  • Messages can be kept from 1 minute to 14 days. The default is 4 days.
  • It has a visibility timeout window, and if a message is not processed within that time, it becomes visible again and can be processed by another reader.
FIFO Queue :-
  • FIFO queue complements the standard queue.
  • It has First in First Out delivery mechanism.
  • Messages are processed only once.
  • Order of the message is strictly preserved.
  • Duplicates are not introduced in the queue.
  • Supports message groups.
  • Limited to 300 transactions per second.
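To get a feel for the API, here is a small boto3 sketch that creates a standard queue, sends a message, then receives and deletes it (the queue name and message body are arbitrary).

import boto3

sqs = boto3.client("sqs", region_name="us-east-1")

# Create (or get) a standard queue.
queue_url = sqs.create_queue(QueueName="demo-queue")["QueueUrl"]

# Producer side: push a message (up to 256 KB of text).
sqs.send_message(QueueUrl=queue_url, MessageBody="order-12345 created")

# Consumer side: poll for messages, process them, then delete them so they are
# not redelivered once the visibility timeout expires.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=10)
for msg in resp.get("Messages", []):
    print("got:", msg["Body"])
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])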
Hope the above snapshot gives you a decent understanding of SQS. If you want to try some hands-on work, check this tutorial.
This series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.

Which AWS certification is suitable for me?

Many people have asked me which AWS certification they should do to help their career in the near future.
AWS provides the below certifications:
Beginner
AWS Certified Cloud Practitioner
Associate level
AWS Certified Developer – Associate
AWS Certified SysOps Administrator – Associate
AWS Certified Solutions Architect – Associate
Professional level
AWS Certified DevOps Engineer – Professional
AWS Certified Solutions Architect – Professional
Specialty Certifications
AWS Certified Big Data – Specialty
AWS Certified Advanced Networking – Specialty
Now coming to the point which AWS certification is best for you.
If you are a fresher you should go for either AWS Certified Cloud Practitioner or AWS Certified Developer – Associate. These are the easiest of all the AWS exams and will give you a good base.
If you have less than 5 to 6 years of experience in the IT industry and you are from a development background, you should go for the AWS Certified Developer – Associate certification.
Similarly, if you have less than 5 to 6 years of experience in the IT industry and you are from a system admin background, you should go for the AWS Certified SysOps Administrator – Associate certification.
If you have 8 to 9+ years of experience in the IT industry you can go for the AWS Certified Solutions Architect – Associate certification. The reason for requiring more experience for the Architect certification is that generally you can't put a person with 2-3 years of experience in front of a client as an Architect, because the client may not accept such a junior person being given responsibility for designing their environment. However, exceptions are always there; this is just the trend we have generally seen.
Also, if you are from pre-sales and sales background you can go for AWS Certified Cloud Practitioner and later to Architect exam.
If you are from Database Admin background then going for AWS Certified Big Data – Specialty certification will be a natural progression and a big plus in your resume.
If you are from networking background then AWS Certified Advanced Networking – Specialty certification will be a good choice and natural progression for you.
Once you have gained approximately 2 years of experience with AWS you can go for the professional certifications. Dev and SysOps folks can go for AWS Certified DevOps Engineer – Professional, and Architects can naturally move to AWS Certified Solutions Architect – Professional.
Learning about all of the above certifications will be a huge advantage, but to start with, the above approach should be a good beginning.
We have recently seen a trend where jobs are being posted only for a Redshift Architect or an IAM Architect, so things may change in the future when people look for specialists in only one or two AWS services.
If you want to have a quick snapshot of AWS services you can refer to our free AWS Crash Course.
Do let me know what you think about this and if you have any query or suggestions ask in comments section.

AWS Sample Exam Questions - Part 1

Below are some sample exam questions, and you can expect similar questions in the exam. You can get more such questions in the acloudguru sample exam questions for AWS Certified Developer – Associate and here for AWS Certified Solutions Architect – Associate.
Q1. A 3-tier e-commerce web application is currently deployed on-premises and will be migrated to AWS for greater scalability and elasticity. The web server currently shares read-only data using a network distributed file system. The app server tier uses a clustering mechanism for discovery and shared session state that depends on IP multicast. The database tier uses shared-storage clustering to provide database failover capability, and uses several read slaves for scaling. Data on all servers and the distributed file system directory is backed up weekly to off-site tapes.
Which AWS storage and database architecture meets the requirements of the application?
A. Web servers store read-only data in S3, and copy it from S3 to the root volume at boot time. App servers share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
B. Web servers store read-only data in S3, and copy it from S3 to the root volume at boot time. App servers share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web servers, app servers, and database backed up weekly to Glacier using snapshots.
C. Web servers store read-only data in S3 and copy it from S3 to the root volume at boot time. App servers share state using a combination of DynamoDB and IP unicast. Database: use RDS with Multi-AZ deployment. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
D. Web servers store read-only data on an EC2 NFS server, mounted to each web server at boot time. App servers share state using a combination of DynamoDB and IP multicast. Database: use RDS with Multi-AZ deployment and one or more Read Replicas. Backup: web and app servers backed up weekly via AMIs, database backed up via DB snapshots.
Answer:- A
B: You can't directly back up EBS snapshots to Glacier; they first need to be moved to S3 standard.
C: Doesn't have DB read replicas.
D: AWS does not support IP multicast.
So only A is correct.
Q2. Your customer wishes to deploy an enterprise application to AWS, which will consist of several web servers, several application servers and a small (50GB) Oracle database. Information is stored both in the database and in the file systems of the various servers. The backup system must support database recovery, whole server and whole disk restores, and individual file restores with a recovery time of no more than two hours. They have chosen to use RDS Oracle as the database.
Which backup architecture will meet these requirements?
A. Backup RDS using automated daily DB backups. Backup the EC2 instances using AMIs and supplement with file-level backups to S3 using traditional enterprise backup software to provide file-level restore.
B. Backup RDS using a Multi-AZ deployment. Backup the EC2 instances using AMIs, and supplement by copying file system data to S3 to provide file-level restore.
C. Backup RDS using automated daily DB backups. Backup the EC2 instances using EBS snapshots and supplement with file-level backups to Amazon Glacier using traditional enterprise backup software to provide file-level restore.
D. Backup the RDS database to S3 using Oracle RMAN. Backup the EC2 instances using AMIs, and supplement with EBS snapshots for individual volume restore.
Answer :- A
B:- Multi-AZ is for DR.
C:- Glacier restores take 3-4 hours, so the RTO of 2 hours can't be met.
D:- RDS doesn't support RMAN backups.
Q3. Is there a limit to the number of groups you can have?
A. Yes for all users except root
B. No
C. Yes unless special permission granted
D. Yes for all users
Answer :- D
As per this doc http://docs.aws.amazon.com/IAM/latest/UserGuide/reference_iam-limits.html the limits apply to all users.
Q4. What is the charge for the data transfer incurred in replicating data between your primary and standby?
A. No charge. It is free.
B. Double the standard data transfer charge
C. Same as the standard data transfer charge
D. Half of the standard data transfer charge
Answer:- A
You are not charged for the data transfer incurred in replicating data between your source DB Instance and Read Replica.
Q5. Through which of the following interfaces is AWS Identity and Access Management available?
A) AWS Management Console
B) Command line interface (CLI)
C) IAM Query API
D) Existing libraries
A. Only through the Command line interface (CLI)
B. A, B and C
C. A and C
D. All of the above
Answer:- B
IAM can be accessed through the Console, CLI and API.
All the questions posted here are taken from the public internet and the answers are marked as per our understanding. These are not exam dumps.
If you have any concern or suggestion, please mention it in the comments or contact us through the contact page.

OpenStack Crash Course - Neutron

OpenStack Neutron is the networking service. It is similar to AWS VPC or Azure VNET.
  • Manual and automatic management of networks and IP addresses.
  • Distinct networking models for different applications and user groups.
  • Flat networks or VLANs for separating servers and traffic.
  • Supports both static IP addresses and DHCP.
  • Floating IP addresses for dynamic rerouting to resources on the network.
  • Software-defined networking (SDN), e.g. OpenFlow, for multi-tenancy and scalability.
  • Management of intrusion detection systems (IDS), load balancing, firewalls, VPNs, etc.

OpenStack Crash Course - Glance

Glance is the image service of OpenStack. It's similar to AWS AMIs and Azure VM Images.
  • OpenStack Image Service for discovery, registration, and delivery services for disk and server images
  • Template-building from stored images
  • Storage and cataloging of unlimited backups
  • REST interface for querying disk image information
  • Streaming of images to servers
  • VMware integration, with vMotion Dynamic Resource Scheduling (DRS) and live migration of running virtual machines
  • All OpenStack OS images built on virtual machines
  • Maintenance of image metadata
  • Creation, deletion, sharing, and duplication of images

OpenStack Crash Course - Nova

Openstack Nova is the equivalent of AWS EC2 instances or Azure VMs.
  • It’s an Infrastructure as a Service (IaaS) offering.
  • Provides management and automation of pools of Virtual Machines
  • Bare metal and high-performance computing (HPC) configurations
  • It supports KVM, VMware, and Xen hypervisor virtualization
  • Hyper-V and LXC containerization
  • Python-based with various external libraries: Eventlet for concurrent programming, Kombu for AMQP communication, SQLAlchemy for database access, etc.
  • Designed to scale horizontally on standard hardware with no proprietary hardware or software requirements
  • Interoperable with legacy systems

AWS Crash Course - RDS

Welcome to AWS Crash Course.
What is RDS?
  • RDS is Relational Database Service of Amazon.
  • It is part of its PaaS offering.
  • A new DB instance can easily be launched from AWS management console.
  • Complex administration processes like patching, backups etc. are managed automatically by RDS.
  • Amazon has its own relational database called Amazon Aurora.
  • RDS also supports other popular database engines like MySQL, Oracle, SQL Server, PostgreSQL and MariaDB.
RDS supports Multi-AZ (Availability Zone) failover.
What does that mean?
It means that if your primary DB goes down, service will automatically fail over to the secondary DB in another AZ.
  • Multi-AZ deployments for the MySQL, Oracle and PostgreSQL engines utilize synchronous physical replication to keep data on the standby up-to-date with the primary.
  • Multi-AZ deployments for the SQL Server engine use synchronous logical replication to achieve the same result, employing SQL Server's native mirroring technology.
  • Both approaches safeguard your data in the event of a DB instance failure or loss of an AZ.
  • Backups are taken from the secondary DB, which avoids I/O suspension on the primary.
  • Restores are taken from the secondary DB, which avoids I/O suspension on the primary.
  • You can force a failover from one AZ to another by rebooting your DB instance.
But RDS Multi-AZ failover is not a scaling solution.
Read replicas are for scaling.
What are Read Replicas?
As we discussed above, Multi-AZ is synchronous replication of the DB, while read replicas are asynchronous replication.
  • You can have 5 read replicas for both MySQL and PostgreSQL.
  • You can have read replicas in different regions but for MySQL only.
  • Read replicas can be built off Multi-AZ databases.
  • You can have read replicas of read replicas, however only for MySQL, and this will further increase latency.
  • You can use read replicas for generating reports; this way you won't put load on the primary DB. A small scripted example follows below.
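Creating a read replica is a one-call operation in boto3; the sketch below assumes an existing source instance named mydb, and both identifiers are placeholders.

import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Create an asynchronous read replica of an existing DB instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-1",      # placeholder name for the replica
    SourceDBInstanceIdentifier="mydb",          # placeholder: your existing instance
)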
RDS supports automated backups.
But keep these things in mind.
  • There is a performance hit if Multi-AZ is not enabled.
  • If you delete an instance then all automated backups are deleted; however, manual DB snapshots will not be deleted.
  • All snapshots are stored on S3.
  • When you do a restore, you can change the engine edition (e.g. SQL Server Standard to SQL Server Enterprise) provided you have enough storage space.
Hope the above snapshot gives you a decent understanding of RDS. If you want to try some hands-on work, check this tutorial.
This series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.