AWS Crash Course - S3

Welcome to AWS Crash Course.
What is S3?
S3 is Simple Storage Service. It's object storage, which means it's used for storing objects like photos, videos etc.
  • S3 provides 11 9's (99.999999999%) durability, which means on average you could expect to lose 1 out of 100 billion objects.
  • S3 provides 99.99% availability.
  • Files can be 1 byte to 5 TB in size
  • Read-after-write consistency for PUTs of new objects – you can immediately read what you have written.
  • Eventual consistency for overwrite PUTs and DELETEs – if you overwrite or delete an existing object, it takes time for the change to propagate globally in S3.
  • Secure your data using ACL and bucket policies.
  • S3 is designed to sustain the loss of 2 facilities concurrently i.e. 2 Availability Zone failures.

    S3 has multiple classes. One is S3 Standard, which we discussed above. The other classes are:-

S3-IA (Infrequently Accessed) 
  • S3-IA is for data which is not frequently accessed but still needs immediate access when requested.
  • You get same durability and availability as S3 but at reduced price.
  • Can sustain the loss of 2 facilities concurrently, like S3 Standard.
S3-RRS (S3- Reduced Redundancy Storage)
  • 99.99% durability and availability.
  • Use RRS if you are storing non-critical data that can be easily reproduced. Like thumbnails of images.
  • No Concurrent facility fault tolerance
S3 Glacier 
  • Data is stored in Amazon Glacier in “archives”.
  • Archive can be any data such as a photo, video, or document.
  • A single archive can be as large as 40 terabytes.
  • You can store an unlimited number of archives in Glacier
  • Amazon Glacier uses “vaults” as containers to store archives.
  • Under a single AWS account, you can have up to 1000 vaults.
S3 Supports versioning. 
What does that mean?
It means that if you change a file, S3 can keep versions of both the old and the new file.
  • If you enable versioning in S3 it will keep all the versions even if you delete or update the old version.
  • Great backup tool
  • Once enabled, versioning cannot be disabled, only suspended.
  • Integrates with lifecycle rules
  • Versioning's MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security.
  • Cross-region replication requires versioning to be enabled on the source bucket. (A sample command to enable versioning follows this list.)
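If you want to enable versioning from the CLI, here is a minimal sketch; the bucket name my-bucket is a placeholder for your own bucket.
aws s3api put-bucket-versioning --bucket my-bucket --versioning-configuration Status=Enabled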
S3  Supports Lifecycle Management
What does that mean?
It means you can move objects from one storage class to another after a set number of days, as per your schedule. This is used to reduce cost by moving less critical data to a cheaper storage class.
  • Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket
  • Can be used with versioning
  • Can be applied to current versions and previous versions
  • Transition from Standard to Infrequent Access storage class can be done only after the data has been in Standard class storage for 30 days.
  • You can transition data directly from Standard to Glacier.
  • A lifecycle policy will not transition objects that are smaller than 128KB. (A sample configuration is sketched below.)
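As a rough sketch of a lifecycle configuration applied with the AWS CLI (the bucket name, rule ID and logs/ prefix are placeholders):
aws s3api put-bucket-lifecycle-configuration --bucket my-bucket --lifecycle-configuration file://lifecycle.json
Where lifecycle.json moves objects to Infrequent Access after 30 days and to Glacier after 90 days:
{
  "Rules": [
    {
      "ID": "move-old-data",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}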
If you want to try some hands-on practice, try this exercise.
This series is created to give you a quick snapshot of AWS technologies. You can check about other AWS services in this series over here.

Solved: Install ifconfig and ssh in Ubuntu

In this post we will see how to install ifconfig and ssh in Ubuntu 16.04 Xenial Xerus. You can install both, or either of them independently.
ifconfig
By default you can check IP details in Ubuntu using
ip addr
But if you are more comfortable with “ifconfig” then follow on.
Ensure that your Ubuntu instance is connected to internet or to your local package repository server from which it can pull packages.
If you need ifconfig on your Ubuntu server use the following commands.
sudo apt-get update
sudo apt-get install net-tools
The “ifconfig” command should now work:
ifconfig -a
ssh
If you are trying to get ssh, use the below command.
sudo apt-get install openssh-server
Once the ssh installation is done check the status of ssh service
sudo service ssh status
If the service is not running you will have to start it with following command.
sudo service ssh start
Once this is done you should get a message that the service started OK. You can check the service status again with
sudo service ssh status
The default configuration file for ssh is /etc/ssh/sshd_config. If you make any changes to this file you will have to restart the ssh service to make the changes effective.
sudo service ssh restart
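On systemd-based releases like Ubuntu 16.04 you can also make sure ssh starts automatically at boot; a small sketch:
sudo systemctl enable ssh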
If you are using Docker you can create a golden image after doing this installation, so that you don't have to repeat it in all new containers. To check how to create a golden image for a docker container check this post.

Solved: How to create an image of a docker container.

So you have completed all the installation on a docker container and now you want to keep it as golden image.
Golden images are useful when you want to create more containers with the same configuration. This also ensures that when you ship an image from Dev to UAT or Prod it will be exactly the same as it was when you tested it.
This also avoids problems during release.
So how do you create an image from a running container?
Let’s have a look. We have a running container with ID d885f4ea3cff.
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
d885f4ea3cff d355ed3537e9 "/bin/bash" 25 hours ago Up 53 minutes ansible
We already did all the installations in it. So we will commit and create an image “ansible-base”.
docker commit d885f4ea3cff ansible-base
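You can also commit with an explicit tag, which makes it easier to track image versions later; a small sketch, where the v1 tag is just an example:
docker commit d885f4ea3cff ansible-base:v1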
Now if you look at the image list you should see the new image.
docker image ls
REPOSITORY    TAG     IMAGE ID    CREATED       SIZE
ansible-base latest 1fbdr6f81e4c 39 minutes ago 159 MB
You can create a new container with this image
docker run -t -i -d 1fbdr6f81e4c /bin/bash
Notice the “-d” above it will run the container in detached mode so even if you logout of the container it will remain up.
List the containers that are running
docker container ls
Log in to the new container you just created (here the container ID is d985be2f2c3e).
docker exec -it d985be2f2c3e bash
If you want to rename the new container check this post.

Solved: Error when allocating new name - Docker

Error response from daemon: Error when allocating new name: Conflict. The container name "/webserver" is already in use by container 6c34a8wetwyetwy7463462d329c9601812tywetdyud76767d65f7dc7ea58d8541. You have to remove (or rename) that container to be able to reuse that name.
If you see the above error, it is because a container with the same name already exists.
Let’s check our running containers
docker container ls
If you don’t see any running container with that name, check the stopped containers.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b45bd14c8987 1fbd5d581e4c "/bin/bash" 2 minutes ago Up 2 minutes competent_keller
59b9f5a63ba0 ansible-base "/bin/base" 3 minutes ago Created wizardly_payne
6c34a8a6edb6 d355ed3537e9 "/bin/bash" 9 minutes ago Exited (0) 4 minutes ago webserver
Above we can see that we already have a container with name webserver.
So we will rename the old container.
docker rename webserver webserver_old
Now if we check again, our container is renamed to webserver_old.
docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
b45bd14c8987 1fbd5d581e4c "/bin/bash" 4 minutes ago Up 4 minutes competent_keller
59b9f5a63ba0 ansible-base "/bin/base" 5 minutes ago Created wizardly_payne
6c34a8a6edb6 d355ed3537e9 "/bin/bash" 10 minutes ago Exited (0) 5 minutes ago webserver_old
And if you don't need the old container you can also delete it to free up space.
docker rm webserver_old
Now if you try to create a container with the “webserver” name you should not get any error.

Solved: AWS4-HMAC-SHA256 encryption error while updating S3 bucket

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
The above error means that your client is signing requests with the older Signature Version 2, while the region hosting the bucket only accepts Signature Version 4 (AWS4-HMAC-SHA256).
Newer AWS regions support only Signature Version 4.
So if you have created the bucket in a newer region like Frankfurt or Mumbai, and you are using an older SDK, CLI or tool, you may see this error.
The cleanest fix is to update your SDK/CLI or configure it to use Signature Version 4 (see the sketch below). As a quick workaround, you can also create the bucket in one of the older US regions and continue working.
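For example, with the AWS CLI you can force Signature Version 4 for S3 with a one-time configuration setting; a small sketch:
aws configure set default.s3.signature_version s3v4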

Solved: Conflicting conditional operation error while creating S3 bucket

A conflicting conditional operation is currently in progress against this resource. Please try again.
You can get the above error when you are creating an S3 bucket.
This error generally comes if you have deleted an S3 bucket in one region and are immediately trying to create a bucket with the same name in another region.
The problem is that S3 name propagation is not instant across regions. It may take anywhere from 2 to 30 minutes for all S3 regions to learn that you have already deleted the bucket with that name.
You may get the same error even when you try creating a bucket with the same name in a different AWS account, for the same propagation reason.
So if you want the bucket name to stay the same, try creating it again after a coffee break.

AWS Crash Course - Elastic Beanstalk

Welcome back to AWS Crash Course.
In the last section we discussed EBS.
In this section we will discuss AWS Elastic Beanstalk.
AWS Elastic Beanstalk makes it even easier for developers to quickly deploy and manage applications in the AWS cloud. Developers simply upload their application, and Elastic Beanstalk automatically handles the deployment details of capacity provisioning, load balancing, auto-scaling, and application health monitoring.
  • You can push updates from Git, and only the modified files are transmitted to AWS Elastic Beanstalk.
  • Elastic Beanstalk supports IAM, EC2, VPC and RDS instances.
  • You have full access to the resources under Elastic Beanstalk.
  • Code is stored in S3.
  • Multiple environments are allowed, to support version control. You can roll back changes.
  • Amazon Linux AMI and Windows Server 2008 R2 are supported.
    What are the supported Languages and Development Stacks?
  • Apache Tomcat for Java applications
  • Apache HTTP Server for PHP applications
  • Apache HTTP Server for Python applications
  • Nginx or Apache HTTP Server for Node.js applications
  • Passenger or Puma for Ruby applications
  • Microsoft IIS 7.5, 8.0, and 8.5 for .NET applications
  • Java SE
  • Docker
  • Go
How can you update Elastic Beanstalk?
  • You can upload new application code to update AWS Elastic Beanstalk.
  • It supports multiple running environments like test, pre-prod and prod.
  • Each environment is independently configured and runs on its own separate AWS resources.
  • Elastic Beanstalk also stores and tracks application versions over time, so an existing environment can easily be rolled back to a prior version.
  • A new environment can be launched using an older version to try and reproduce a customer problem. (A typical CLI update workflow is sketched below.)
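As a rough sketch of a typical update workflow with the EB CLI (assuming you have it installed; my-env is a placeholder environment name):
eb init          # configure the application and region interactively
eb create my-env # create a new environment
eb deploy        # deploy the current code to the environment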
Fault Tolerance
  • Always design, implement, and deploy for automated recovery from failure
  • Use multiple Availability Zones for your Amazon EC2 instances and for Amazon RDS
  • Use ELB for balancing the load.
  • Configure your Auto Scaling settings to maintain your fleet of Amazon EC2 instances.
  • If you are using Amazon RDS, then set the retention period for backups, so that Amazon RDS can perform automated backups.
  What about Security?
  • Security on AWS is a shared responsibility
  • You are responsible for the security of data coming in and out of your Elastic Beanstalk environment.
  • Configure SSL to protect information from your clients.
  • Configure security groups and NACL with least privilege.
This short course was to give you an understanding of Elastic Beanstalk. If you want to try some hands-on practice, follow this AWS tutorial.

Solved: Share AMI with other AWS accounts

At times you may have to safely share an AMI (Amazon Machine Image) with another AWS account. You can do it without making the AMI public.
Here we will show you how you can do it easily.
  1. Log in to your EC2 Console.
  2. In the left navigation panel choose AMIs in Image section.
  3. Select the AMI you want to share.
  4. Click on Actions > Modify Image Permissions
  5. In the Modify Image Permissions box do the following :-

    a) The image is currently “Private”.
    b) Enter the AWS account number with which you want to share the AMI. Click Add Permissions.
    c) Check the box “Add 'create volume' permissions to the following associated snapshots when creating permissions”.
  6. Finally click on Save.
Once the above steps are done in the source account, go to the AMIs section of the EC2 Console in the destination account and in the filter select Private images. You should now be able to see the image you shared earlier.

If you want to do the same with the AWS CLI, use these two commands:-

Here we are granting launch permission on a specific AMI (ami-a2n4b68kl) to a specific AWS account number (123456789).
aws ec2 modify-image-attribute --image-id ami-a2n4b68kl --launch-permission "{\"Add\":[{\"UserId\":\"123456789\"}]}"
The below command will grant create volume permission for the snapshot (snap-try657hvndh909), as we did in Step 5(c):
aws ec2 modify-snapshot-attribute --snapshot-id snap-try657hvndh909 \
--attribute createVolumePermission --operation-type add --user-ids 123456789
After doing this the AMI should be visible in AMIs of the new account.
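To verify that the permissions were applied, you could describe the image attribute; a small sketch using the same example AMI ID:
aws ec2 describe-image-attribute --image-id ami-a2n4b68kl --attribute launchPermission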

AWS Crash Course - EBS

In the last section we discussed VPC. In this section we will discuss EBS.
What is EBS?
  • EBS is Elastic Block Storage.
  • An EBS volume is durable, block-level storage. It's similar to the hard disk you have in your laptop or desktop.
  • EBS volumes can be used as primary storage for data that requires frequent updates.
  • An EBS volume in an Availability Zone is automatically replicated within that zone to prevent data loss due to failure.
  • You can create encrypted EBS volumes with the Amazon EBS encryption feature, or use 3rd party software for encryption.
  • To improve performance use RAID groups, e.g. RAID 0, RAID 1, RAID 10.
What are the different types of EBS volumes?
  • General Purpose SSD (gp2) – Provides up to 10,000 IOPS (input/output operations per second) and can range in size from 1GB to 16TB. This is used for normal loads and should be enough for your Dev or UAT setups.
  • Provisioned IOPS SSD (io1) – Provides up to 20,000 IOPS and can range in size from 4GB to 16TB. These are generally used for large SQL/NoSQL databases.
  • Throughput Optimized HDD (st1) – Provides up to 500 IOPS and can range in size from 500GB to 16TB. These are mostly useful for Big Data/data warehouses.
  • Cold HDD (sc1) – These are the cheapest kind of disks. They provide up to 250 IOPS and can range in size from 500GB to 16TB. These are commonly used for data archiving, as they provide low IOPS but are cheap for storing data which is not used frequently.
You can take snapshots of EBS volumes.
So what is a snapshot?
  • You can back up the data on your EBS volumes to Amazon S3 by taking point-in-time snapshots
  • Snapshots are incremental backups – Saves time and storage costs
  • Snapshots support encryption
  • Snapshots exclude data that has been cached by any applications or the OS
  • You can share your unencrypted snapshots with others
  • You can use a copy of a snapshot for Migrations, DR, Data retention etc.
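If you want to try taking a snapshot from the CLI, here is a minimal sketch; the volume ID is a placeholder:
aws ec2 create-snapshot --volume-id vol-0abcd1234 --description "Snapshot before upgrade"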
You can try some hands-on practice with EBS by using this exercise.

Azure Crash Course - Web Apps

Azure is quickly adding services to its portfolio.
Three of the Azure services, “Web Apps”, “Cloud Services” and “API Apps”, provide services equivalent to AWS Elastic Beanstalk.
In this article we will be discussing Web Apps.
  • Web Apps is a PaaS offering of Azure.
  • Web Apps allows developers to quickly and easily build, deploy and manage websites.
  • It provides you shared or dedicated virtual machines.
  • These are managed VMs so you don’t have to worry about hardware or patching.
  • Languages supported are ASP.NET, Node.js, Java, PHP, or Python. These are basically the languages which are supported by Azure App Service.
  • It supports Scaling up or Scaling out both.
  • It supports High Availability.
  • You can select application templates from the Azure Marketplace and deploy using Web Apps. Some examples of supported templates are WordPress, Joomla and Drupal.
Web Apps can be deployed in three ways:-
  1. Azure CLI
  2. Azure ARM Portal
  3. Visual Studio
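For instance, with the Azure CLI (option 1 above) the basic flow looks roughly like this; the resource group, plan and app names below are placeholders, and the app name must be globally unique:
az group create --name myResourceGroup --location eastus
az appservice plan create --name myPlan --resource-group myResourceGroup --sku S1
az webapp create --name my-unique-app --resource-group myResourceGroup --plan myPlan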
For DevOps you can easily integrate Web Apps with GitHub, Bitbucket, or Visual Studio Team Services.
Web Apps has a great feature called Deployment Slots. So what are Deployment Slots?
They allow you to validate a change in Dev or UAT before pushing it to Production. Deploying a web app to a slot first and then swapping it into production ensures that all instances of the slot are warmed up before being swapped into production. This eliminates cold starts for your application.
It also gives you the flexibility to roll back.
You can also configure auto swap. When a deployment slot is configured for Auto Swap into production, every time you push your code update to that slot, App Service will automatically swap the app into production after it has already warmed up in the slot.
Pricing Plans:-
Web Apps supports different pricing plans.
  1. Free
  2. Shared
  3. Basic
  4. Standard
  5. Premium
Free and Shared plans don't have SLAs associated with them, while the remaining three provide a 99.95% SLA.
You can also buy custom domains and SSL certificates through Web Apps.
This article was written to give you a quick snapshot of Web Apps. You can follow this Azure doc to check how to quickly deploy apps with Web Apps.
This series is created to give you a quick snapshot of Azure services. You can check other services in this series over here.

AWS Crash Course – Route 53

Route 53 is a DNS service that routes user requests.
  • Amazon Route 53 (Route 53) is a scalable and highly available Domain Name System (DNS).
  • The name Route 53 is a reference to UDP port 53, which is generally used for DNS.
  • Route 53's DNS service allows administrators to direct traffic simply by updating DNS records in the hosted zone.
  • The TTL (Time to Live) of resource records can be shortened, which allows record changes to propagate faster to clients.
  • One of the key features of Route 53 is programmatic access to the service, which allows customers to modify DNS records via web service calls.
Three Main functions of Route 53 are:-
Domain registration:- It allows you to register domain names from your AWS accounts.
DNS service:- This service is used for mapping your website IP to a name, e.g. 54.168.4.10 to example.com. It also supports many other record formats, which we will discuss below.
Health monitoring:- It can monitor the health of your servers/VMs/instances and route traffic as per the routing policy. It can also work as a load balancer for region-level traffic management.
Route 53 supports different routing policies and you can use the one which is most suitable for your applications.
Routing Policies :-
  • Simple:- In this policy, Route 53 responds to DNS queries using only the records in the record set.
  • Weighted:- This policy lets you split the traffic based on the different weights assigned, e.g. 10% of traffic goes to us-east-1 and 90% goes to eu-west-1.
  • Latency:- Allows you to route your traffic based on the lowest network latency for your end user (i.e. which region will give the end user the fastest response time).
  • Failover:- This policy is used when you create an active/passive setup. Route 53 will monitor the health of your primary site using a health check.
  • Geolocation:- This routing lets you choose where your traffic will go based on the geographic location of your end users. So a user requesting from France will be served from the server which is nearest to France.
Route 53 supports many DNS record formats:-
  • A Format :- Returns a 32-bit IPv4 address, most commonly used to map hostnames to an IP address of the host.
  • AAAA Format:-  Returns a 128-bit IPv6 address, most commonly used to map hostnames to an IP address of the host.
  • CNAME Format:- Alias of one name to another. With a CNAME you can, for example, point www.example.com to example.com.
  • MX Format :- Maps a domain name to a list of message transfer agents for that domain
  • NS Format:- Delegates a DNS zone to use the given authoritative name servers.
  • PTR Format :- Pointer to a canonical name. Unlike a CNAME, DNS processing stops and just the name is returned. The most common use is for implementing reverse DNS lookups, but other uses include such things as DNS-SD.
  • SOA Format:- Specifies authoritative information about a DNS zone, including the primary name server, the email of the domain administrator, the domain serial number, and several timers relating to refreshing the zone.
  • SRV Format:- Generalized service location record, used for newer protocols instead of creating protocol-specific records such as MX.
  • TXT Format :- Originally for arbitrary human-readable text in a DNS record.
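As a practical sketch, you could create or update an A record with the AWS CLI; the hosted zone ID Z1EXAMPLE is a placeholder, and the IP is the example used earlier:
aws route53 change-resource-record-sets --hosted-zone-id Z1EXAMPLE --change-batch file://change.json
Where change.json contains:
{
  "Changes": [{
    "Action": "UPSERT",
    "ResourceRecordSet": {
      "Name": "www.example.com",
      "Type": "A",
      "TTL": 300,
      "ResourceRecords": [{ "Value": "54.168.4.10" }]
    }
  }]
}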
Tip:- For the exam, understanding the A format and CNAME should be enough.
If you want to try some hands-on practice, try this exercise.
This series is created to give you a quick snapshot of AWS technologies. You can check about other AWS services in this series over here.

Solved: Error while deleting docker network

When you try to delete a network you may get the below error.
C:\>docker network rm network_name
Error response from daemon: network network_name has active endpoints
This error comes when the network still has an active endpoint. First inspect the network to check if any container which uses the network is still running.
docker inspect network_name
Check running containers using
docker container ls
If you find any container running which is using the network that you are trying to delete, you will have to first stop and remove that container. Be sure that you don’t need that container.
docker stop container_name
docker rm container_name
If you don't see any running container with the name given in the inspect output, then it means that the container you deleted earlier was not removed properly and it has traces remaining in the network.
To remove the endpoint from network run this command.
docker network disconnect --force network_name container_name
Finally you should be able to remove the network now.
docker network rm network_name

Solved: How to download a complete S3 bucket or an S3 folder?

If you ever want to download an entire S3 bucket or folder you can do it with the CLI.
You can download the AWS CLI from this page. AWS CLI Download
Download the AWS CLI as per your system: Windows, Linux or Mac.
In our case we use Windows 64-bit. Once you download the .exe, simply double click on it to install the AWS CLI.
Once the AWS CLI is installed go to the Windows command prompt (CMD) and enter the command
aws configure
It will ask for the AWS user details with which you want to login and region name. Check this post to know How to create an IAM user.
You can get the AWS IAM user access details from IAM console .
Get Region name here .
Fill in the user details as below:

AWS Access Key ID: <Key ID of the user>
AWS Secret Access Key: <Secret key of the user>
Default region name: <us-east-1>
Default output format: None

Once you have downloaded and configured the AWS CLI on your machine, execute the “sync” command as shown below.
aws s3 sync s3://mybucket/dir  /local/folder
You can also do the same with the “cp” command. It will need the --recursive option to recursively copy the contents of subdirectories as well.
aws s3 cp s3://myBucket/dir /local/folder --recursive
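If you first want to preview what would be transferred without actually copying anything, both commands accept a --dryrun flag; a small sketch:
aws s3 sync s3://mybucket/dir /local/folder --dryrun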
Refer to this S3 cheat sheet to learn more tricks.

Solved: How to remove the AWS EC2 instance name from the URL?

After installing a website using an AWS image you may have noticed that the URL still has a reference to the EC2 hostname.
It can be in post URLs like
http://ec2-65-231-192-68.compute-1.amazonaws.com/your-post-name
How to get rid of that?
The easiest way to do it is through your WordPress dashboard.
In the dashboard go to Settings>General.
In the General Settings page you will see two parameters, WordPress Address (URL) and Site Address (URL). Change them to your website name, e.g. http://yourwebsite.com.
Finally save it. Your post URLs should now look like
http://yourwebsite.com/your-post-name
In the Bitnami WordPress image you will find that WordPress Address (URL) and Site Address (URL) are greyed out, and it won't allow you to modify them. In that case you will have to modify the wp-config.php file.
For the Bitnami image the file location is /opt/bitnami/apps/wordpress/htdocs/wp-config.php. Keep a copy of the old file and modify the current file:
cp -p /opt/bitnami/apps/wordpress/htdocs/wp-config.php /opt/bitnami/apps/wordpress/htdocs/wp-config.php.old
sudo vi /opt/bitnami/apps/wordpress/htdocs/wp-config.php
Modify the two lines which have entries for WP_HOME and WP_SITEURL. They should now look like:
define('WP_HOME','http://yourwebsite.com');
define('WP_SITEURL','http://yourwebsite.com');
If your website has an SSL certificate and you want all your posts and pages to use https, then the above entries should look like:

define('WP_HOME','https://yourwebsite.com');
define('WP_SITEURL','https://yourwebsite.com');
Finally save the file.
When you refresh the page it should now show your desired URL.
If the URL is still not showing correctly and you are sure that you have modified the file correctly, then restart Apache.
sudo /opt/bitnami/ctlscript.sh restart apache
This should get you done!

AWS Crash Course – VPC

In the last section we discussed EC2. In case you missed it, you can check it here: AWS Crash Course – EC2.
In this section we will discuss VPC.
What is VPC?
  • VPC is Virtual Private Cloud.
  • VPC is like your own private cloud inside the AWS public cloud.
  • You can decide the network range.
  • Your VPC is not shared with others.
  • You can launch instances in VPC and restrict inbound/outbound access to them.
  • You can leverage multiple layers of security, including security groups and network access control lists.
  • You can create a Virtual Private Network (VPN) connection between your corporate datacenter and your VPC.
Components of Amazon VPC:-
  • Subnet: A segment of a VPC's IP address range; this is basically the range of IPs which you assign to your resources, e.g. EC2 instances.
  • Internet Gateway: If you want your instance in VPC to be able to access Public Internet, you create an internet gateway.
  • NAT Gateway: You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the Internet or other AWS services, but prevent the Internet from initiating a connection with those instances.
  • Hardware VPN Connection: A hardware-based VPN connection between your Amazon VPC and your datacenter, home network, or co-location facility.
  • Virtual Private Gateway: A virtual private gateway is the VPN concentrator on the Amazon side of the VPN connection.
  • Customer Gateway: A customer gateway is a physical device or software application on your side of the VPN connection.
  • Router: The router acts as a mediator for your subnets in the VPC. It interconnects subnets and directs traffic between Internet gateways, virtual private gateways, NAT gateways, and subnets.
  • Peering Connection: A peering connection enables you to route traffic via private IP addresses between two peered VPCs. It is used for VPC peering, by which you can establish a connection/tunnel between two different VPCs. (A CLI sketch for creating a VPC and subnet follows this list.)
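If you want to see what creating these components looks like from the CLI, here is a minimal sketch; the CIDR blocks and the VPC ID are placeholders:
aws ec2 create-vpc --cidr-block 10.0.0.0/16
aws ec2 create-subnet --vpc-id vpc-0abcd1234 --cidr-block 10.0.1.0/24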
VPC has few more components but to avoid confusion we will discuss about them in later sections.
This series is created to give you a quick snapshot of AWS technologies. You can check about other AWS services in this series over here.

AWS Crash Course - EC2

We are starting this series on AWS to give you a decent understanding of different AWS services. These will be short articles which you can go through in 15-20 mins everyday.
You can check the complete series here of AWS Crash Course .
Introduction:-
  • AWS compute is part of its IaaS offerings.
  • With compute, you can deploy virtual servers to run your applications.
  • You don't have to wait for days or weeks to get your desired server capacity.
  • You can manage the OS or let AWS manage it for you.
  • It can be used for building mobile apps or running massive clusters.
  • You can even deploy applications serverless.
  • It provides high fault tolerance.
  • Easy scalability and load balancing.
  • You are billed as per your usage.
What is EC2?
  • EC2 is Elastic Compute Cloud
  • It's a VM (virtual machine) in the cloud.
  • You can commission one or thousands of instances simultaneously, and pay only for what you use, making web-scale cloud computing easy.
  • Amazon EC2 reduces the time required to obtain and boot new server instances to minutes, allowing you to quickly scale capacity, both up and down, as your computing requirements change.
  • Amazon EC2 provides developers the tools to build failure resilient applications and isolate them from common failure scenarios.
What are EC2 pricing models?
  • On Demand – Pay by the hour with no long-term commitment.
  • Reserved – Yearly reservations, up to 75% cheaper compared to On Demand.
  • Dedicated – A dedicated physical server is provided to you. Up to 70% cheaper compared to On Demand.
  • Spot – Bid on spare Amazon computing capacity. Up to 90% cheaper compared to On Demand.
EC2 Instance Types:-
  • General Purpose (T2, M4 and M3) – Small and mid-size databases
  • Compute Optimized (C4 and C3) – High performance front-end fleets, web-servers, batch processing etc.
  • Memory Optimized (X1, R4 and R3) – High performance databases, data mining & analysis, in-memory databases
  • Accelerated Computing Instances(P2, G2 and F1) – Used for graphic workloads
  • Storage Optimized I3 – High I/O Instances – NoSQL databases like Cassandra, MongoDB
  • D2 – Dense-storage Instances – Massively Parallel Processing (MPP) data warehousing, MapReduce and Hadoop distributed computing
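If you want to see what launching an instance looks like from the CLI, here is a minimal sketch; the AMI ID and key pair name are placeholders:
aws ec2 run-instances --image-id ami-0abcd1234 --instance-type t2.micro --count 1 --key-name my-key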
Check out more details in the next section: AWS Crash Course – VPC.
If you want to try some hands-on practice, you can follow this guide to launch an Amazon Linux instance, or this one for a Windows instance.

How to close AWS Free Tier account before expiry

You have been using your AWS account for around a year now and the free tier period is about to get over. If you created this account only for testing purposes and don't want the resources in it anymore, then it's best to close the account.
Closing the account will save you from unexpected AWS bills for resources which you may have started in some region and forgotten to stop/delete.
Here are the simple steps to close the AWS account.
  1. Go to your AWS Settings Page .
  2. Scroll Down to “Close Account”
  3. Tick on the check box.
  4. Click “Close Account”

    Once you click on “Close Account” you will get a confirmation mail from AWS for the account closure. If you have any unpaid bill for that month you will receive it as per your billing cycle.
    After clearing any unpaid bill you should not get any bill from the next month.

Solved: How to delete revisions of wordpress posts?

You may have noticed that WordPress keeps revisions of your old posts.
These revisions can be useful if you want to go back to an older version of a post. But if you are sure that you no longer need those old versions, then it's best to get rid of them.
Deleting the old revisions will give you precious space which you can use for posting new articles. It will also make your database queries faster.
Here we will discuss how to delete old revisions.
If you don’t like plugins you can follow these simple steps. Else, directly go to Step 4 to check plugins for doing this.
1) Login to your hosting server and get inside your SQL database. (If you are using AWS check this post on how to login.)
mysql -u root -p
2) Select the WordPress database
mysql> USE wordpress;
3) Delete the posts with type “revision”. This command will delete all revisions of all your posts, leaving only the latest version.
DELETE FROM wp_posts WHERE post_type = 'revision';
You can also run this DELETE query in your phpMyAdmin console.
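Before running the DELETE, you may first want to count how many revisions exist; a quick sanity-check query against the same table:
SELECT COUNT(*) FROM wp_posts WHERE post_type = 'revision';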
4) If you are not comfortable with the command line or coding, you can simply install a plugin called “Simple Revisions Delete” from your WordPress dashboard. It will provide a “Purge” option in each of your posts to purge old revisions.
Caution:- Before you decide to delete revisions, it's better to take a backup of the DB and be absolutely sure that you want to do this, as the only way to get the revisions back after deletion will be to restore them from backup.

Difference between Classic and Application Load Balancer - Which one is best?

People are often confused about the different types of load balancers in the cloud. Today we will discuss them.
Both AWS and Azure offer two types of load balancers.
1) Classic Load Balancers
2) Application Load Balancers
Both load balancers are obviously used to balance the load among multiple backend servers.
So what is the difference between the two?
Before we look into that, we have to take a crash course in communication standards.
The Open Systems Interconnection model (OSI model) is a standard for telecom or computing systems without regard to their underlying internal structure. To keep it simple, it has 7 layers. Each layer acts as a standard for communication at that layer.
Below are the 7 layers:-
Layer 1: Physical Layer
Layer 2: Data Link Layer
Layer 3: Network Layer
Layer 4: Transport Layer
Layer 5: Session Layer
Layer 6: Presentation Layer
Layer 7: Application Layer
We won’t go deeper into each of them as they all have books written on them separately.
Now coming back to our load balancers.

Classic Load Balancer:-

Amazon calls it “AWS Classic ELB” and Microsoft calls it “Azure Load Balancer”.
The Classic Load Balancer works on Layer 4 of the OSI network reference stack.
As we have seen above, OSI Layer 4 is the Transport Layer. The best known protocols for the transport layer are TCP and UDP. If you want to know more about the Transport layer you can check it out on wiki.
So the classic load balancer will mostly route traffic via IPs on port 80 (HTTP) or port 443 (HTTPS).
Now coming next to the other load balancer.

Application Load Balancer:-

Amazon calls it “AWS Application ELB” and Microsoft calls it “Azure Application Gateway” .
The Application load balancer works on Layer 7 of OSI, i.e. the Application Layer.
Main protocols in application layer are DHCP, DNS, FTP, HTTP, LDAP, SSH, Telnet etc.
You can check out more about Layer 7 on wiki .
The application load balancer has the ability to inspect the application-level content and route requests based on more than just the IP and port, which is all the classic LB can use.
So the rules on an Application LB can be more complex; for example, it can process a request not just by looking at the receiving port (say, 80) but also by checking the destination URL.
So it can process the request for different URLs like
www.cloudvedas.com/images
www.cloudvedas.com/login
The two URLs can be processed differently based on different Application LB rules.
You can also modify or add new rules later, as sketched below.
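For instance, with the AWS CLI a path-based routing rule on an Application LB looks roughly like this; $LISTENER_ARN and $TARGET_GROUP_ARN are placeholders for your own listener and target group ARNs:
aws elbv2 create-rule --listener-arn $LISTENER_ARN --priority 10 --conditions Field=path-pattern,Values='/images/*' --actions Type=forward,TargetGroupArn=$TARGET_GROUP_ARN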

So what is best for you?

Above we have given you the details of both load balancers. In most cases the classic ELB should be good enough, when you want requests to be routed based on IPs and ports only.
The application load balancer can be useful when you want to create complex applications based on Docker or other containerization technology.
Be Sociable. Share It. Happy Learning!

Solved: How to calculate number of available IPs in a Subnet

Many people are confused about how many usable IPs you can get in a subnet and how to calculate it.
So here I am giving you a simple way to calculate it.
Here is the formula.
Maximum Number of IPs = 2**(32 - netmask_length)
Let's say you have a subnet mask of /28; then the maximum number of IPs you can have is
Maximum Number of IPs= 2**(32-28) = 2**(4) = 2*2*2*2 = 16
So you can have a maximum of 16 IPs in a /28 subnet.
The first and last IPs of a subnet are reserved for the Network Address and Broadcast Address. So you are left with only 14 usable IPs in normal networks.
But cloud providers like AWS, Azure etc. generally reserve 5 IPs in each subnet instead of 2. Thus, the usable IPs available to you in AWS or Azure for a /28 subnet will be 11.
Similarly, you can calculate the usable IPs in each subnet when working in the cloud.
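As a quick sanity check you can run the same arithmetic in a bash shell; this sketch reproduces the AWS/Azure /28 case:
echo $(( 2 ** (32 - 28) - 5 ))    # prints 11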

For simplicity we have created an AWS Subnet Calculator which you can use.


Be Sociable. Share It. Happy Learning!

How to load database in mysql docker container?

After creating the mysql docker container I wanted to load a new database dump into it.
In case you are wondering how to create dockers on your Windows machine, you can refer to my post here.
If you are just testing it you can download a sample mysql database from here .
Once you have downloaded the sample DB unzip it in a folder.
First, copy the database into the container:
$ docker cp mydump.sql c598nvcvc190:/root  # Here c598nvcvc190 is the ID of the database container
Second, Connect to your docker container:
$ docker exec -it c598nvcvc190 /bin/bash
Finally, restore the database dump file into your database (creating the database first if it doesn't already exist):
# mysql -uroot -prootpassword < /root/mydump.sql
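If the restore fails because the database doesn't exist yet, you can create it first from inside the container; a sketch, where the database name mydb is a placeholder:
# mysql -uroot -prootpassword -e "CREATE DATABASE mydb;"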
Now you should be able to see the new database listed in mysql.
Be Sociable. Share It. Happy Learning!

How to become an AWS Certified Solution Architect in 30 days ?

In this post we will be discussing how to clear the “AWS Certified Solutions Architect – Associate” exam in 30 days.
AWS exams are not tied to a specific product version, the way exams like RHCSA on RHEL 6 are.
The syllabus is vast and keeps changing as AWS keeps adding new services. So hard work alone won't help. Also, prior experience with a few specific AWS services won't help you clear the exam easily, as the exam questions span a wide range of services.
Below I am listing a smart plan which can get you ready for the exam in 30 days.
First 7 days.
If you are looking for a very quick overview of all services, so that you can sound familiar with AWS, you can refer to this post.
You can also refer to our free AWS Crash Course if you want to go a little deeper. It will give you good knowledge of key topics in a short time.
Later, go through online training and videos. You can look at AWS re:Invent videos, but if you are new to AWS it's recommended that you buy an online course. The content of both acloudguru and linuxacademy is good, but I used the acloudguru course on Udemy as it provides lifetime access to the course. You can read my complete review of the acloudguru course here.
TIP:- We found that buying the acloudguru course from Udemy is cheaper in comparison to the acloud.guru website. It's the same course, AWS Certified Solutions Architect – Associate, on Udemy at a cheaper rate, as Udemy generally provides heavy discounts on courses.
Day 8 to 14
For the next seven days repeat the exercises in the course doing hands on in your own AWS account.
TIP:- Create a billing alert in the account. It will remind you if you are going above the free tier limits and save you from unpleasant surprises.
To see how to create a billing alert refer here.
Day 15 to 21
An online course will give you a good base, as you don't have to worry about the syllabus. The next step is to go through the listed AWS white papers.
  • Overview of Amazon Web Services
  • Storage Options in the Cloud
  • Overview of Security Processes
  • AWS Risk & Compliance Whitepaper
  • Architecting for the AWS Cloud: Best Practices
Here you can get all the AWS Whitepapers .
Day 22 to 30
Finally, go through the FAQs of AWS services. Here I am listing a few key services from which you can expect the most questions.
  • EC2
  • S3
  • EBS
  • RDS
  • VPC
  • ELB
  • Route 53
  • Glacier
  • Cloudfront
  • Direct Connect
    Tips for the exam:-
  • You won't get more than 3 minutes per question.
  • You may find some very long questions in the exam. The best strategy to tackle them is to read the answer options first and then check for the relevant info in the question.
  • If you find a question confusing, it's better to mark it for review and check it later.
  • Since it's an AWS exam, look for AWS-related options in the answers. Chances are high that a non-AWS-related option in an answer will be wrong.
  • AWS exams generally don't focus on memorizing datasheets, so you won't get a question like “How much RAM does a C3.xlarge offer?”.
  • For cost optimization, Spot instances are best. If you are confused between the dedicated and spot options, choose spot if the question talks about cost.
You can check out the exam blueprint here, and refer to sample exam questions here.
Once you are done with Associate level and you want to move to the next level check How to prepare for AWS Certified Solutions Architect – Professional .
Hope this post helps you in your preparation. Do let me know if you have any query.

Solved: How to Create AWS billing alert

  1. Sign in to the AWS Management Console and open the Billing and Cost Management console .
  2. On the left navigation pane, choose Preferences.
  3. Select the Receive Billing Alerts check box.
  4. Choose Save preferences.
Now we will create the alarm in cloudwatch.
At the top left of the AWS console choose “Services” and select “CloudWatch”.
  • In CloudWatch, on the left pane, select “Alarms”.
  • Now select Create Alarm.
  • In Billing Metrics select “Total Estimated Charges”.


  • A new window will open. Put a check mark on USD and click Select Metric.


  • In the section “Whenever charges for: Estimated charges” specify the threshold as 0.01 and click Next. Refer to the image below.


  • For “send a notification to”, specify an email address.
  • Finally click on Create Alarm. CloudWatch will send you a confirmation mail.
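If you prefer the CLI, an equivalent alarm could be created roughly like this; the SNS topic ARN is a placeholder, and note that billing metrics are only published in us-east-1:
aws cloudwatch put-metric-alarm --region us-east-1 --alarm-name billing-alert --namespace AWS/Billing --metric-name EstimatedCharges --dimensions Name=Currency,Value=USD --statistic Maximum --period 21600 --evaluation-periods 1 --threshold 0.01 --comparison-operator GreaterThanThreshold --alarm-actions arn:aws:sns:us-east-1:123456789012:billing-topic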
Be Sociable. Share It. Happy Learning!

PAN of all major Banks

It's tax season again and you must file your ITR claims. The Government of India has now made it compulsory that to claim tax benefits on a home loan you must provide the PAN of the lender bank. Below we are providing a list of banks with their PAN numbers.
PAN of Banks
  • Allahabad Bank – AACCA8464F
  • Andhra Bank – AABCA7375C
  • Axis Bank – AAACU2414K
  • Bank of Baroda – AAACB1534F
  • Bank of India – AAACB0472C
  • Bank of Maharashtra – AACCB0774B
  • BMW India Financial Services Private Limited – AADCB8986G
  • Canara Bank – AAACC6106G
  • Canfin Homes Limited – AAACC7241A
  • Central Bank of India – AAACC2498P
  • CITI Bank – AAACC0462F
  • City Union Bank – AAACC1287E
  • Corporation Bank – AAACC7245E
  • Dena Bank – AAACD4249B
  • DCB Bank – AAACD1461F
  • Deutsche Bank – AAACD1390F
  • DHFL – AAACD1977A
  • FEDERAL BANK – AABCT0020H
  • GIC Housing Finance Limited – AAACG2755R
  • GRUH FINANCE LTD. – AAACG7010K
  • HDFC Bank Limited – AAACH2702H
  • Housing & Urban Development Corporation Ltd. ( HUDCO )- AAACH0632A
  • ICICI Bank Limited – AAACI1195H
  • ICICI Home Finance Company Ltd – AAACI6285N
  • IDBI Bank – AABCI8842G
  • Indian Bank – AAACI1607G
  • India bulls – AABCI3612A
  • India Infoline Housing Finance Ltd – AABCI6154K
  • Indian Overseas Bank – AAACI1223J
  • Indusind Bank – AAACI1314G
  • Karur Vysya Bank – AAACH3962K
  • Kotak Mahindra Bank – AAACK4409J
  • Karnataka Bank Limited – AABCT5589K
  • L&T FinCorp Limited – AAACI4598Q
  • L&T Infrastructure Finance Company Limited – AABCL2283L
  • LIC Housing Finance Limited – AAACL1799C
  • Oriental Bank of Commerce – AAACO0191M
  • PNB Housing Finance Limited – AAACP3682N
  • Power Finance Corporation Limited – AAACP1570H
  • Punjab & Sind Bank – AAACP1206G
  • Punjab National Bank ( PNB ) – AAACP0165G
  • RBL Bank – AABCT3335M
  • Reliance Home Loan Finance Limited – AAECR0305E
  • State Bank of Hyderabad ( SBH )– AGCPB9929M
  • State Bank of India ( SBI )– AAACS8577K
  • State bank of Mysore ( SBM ) – AACCS0155P
  • Syndicate Bank – AACCS4699E
  • TATA Capital Housing Finance Ltd – AADCT0491L
  • TATA Capital Ltd – AADCP9147P
  • TATA Motors Finance Limited – AACCT4644A
  • The HongKong Shanghai Banking Corporation Limited (HSBC)– AAACT2786P
  • UCO Bank – AAACU3561B
  • Union Bank of India – AAACU0564G
  • United Bank of India – AAACU5624P
  • Vijaya Bank – AAACV4791J
  • YES Bank Limited – AAACY2068D
Note:- This list is for your reference only. Please cross-check with your bank for confirmation. We are not responsible for any discrepancies in the above data. All the information is collected from the details available on the public internet.

How to use WordPress in a Docker Container on your Windows Laptop.

Pre-requisites:-
I am using the “Docker for Windows” software to run dockers on my Windows 10 laptop. You can get “Docker for Windows” by clicking on this link.
If you have Windows 7, download Docker Toolbox for Windows with VirtualBox.
Once you’ve installed Docker Toolbox, install a VM with Docker Machine using the VirtualBox provider:
docker-machine create --driver=virtualbox default
docker-machine ls
eval "$(docker-machine env default)"


How to create docker containers?

Once you have installed pre-requisite software create a docker-compose.yml file in a text editor.
If you are using Windows, simply copy/paste the below into a text file and rename the file to docker-compose.yml. Ensure that the extension of the file is changed from .txt to .yml.

version: '3'

services:
   db:
     image: mysql:5.7
     volumes:
       - db_data:/var/lib/mysql
     restart: always
     environment:
       MYSQL_ROOT_PASSWORD: somewordpress
       MYSQL_DATABASE: wordpress
       MYSQL_USER: wordpress
       MYSQL_PASSWORD: wordpress

   wordpress:
     depends_on:
       - db
     image: wordpress:latest
     ports:
       - "8000:80"
     restart: always
     environment:
       WORDPRESS_DB_HOST: db:3306
       WORDPRESS_DB_USER: wordpress
       WORDPRESS_DB_PASSWORD: wordpress
       WORDPRESS_DB_NAME: wordpress
volumes:
    db_data: {}

After this, just go to the directory which has your docker-compose.yml file and run the below command in the CMD prompt, i.e. the Windows command line or docker shell prompt.
docker-compose up -d
The above command will run your containers in detached mode, so they won't shut down automatically and you can go inside and explore them.
Now run the below command to see the list of running containers. Here it shows that we have 2 containers and they have been up for 3 days.
docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
45dfdfd78306 wordpress:latest "docker-entrypoint..." 3 days ago Up 3 days 0.0.0.0:8000->80/tcp mywordpress_wordpress_1
c598nvcvc190 mysql:5.7 "docker-entrypoint..." 3 days ago Up 3 days 3306/tcp mywordpress_db_1
To check the list of networks created by docker run:-
docker network ls
NETWORK ID NAME DRIVER SCOPE
784dgfgda0bf bridge bridge local
09811ere912d host host local
51dddfdfd38a mywordpress_default bridge local
bbbdatyura8c none null local
To check the details of a particular network, run:
docker network inspect mywordpress_default
It will show you IP, subnets and MAC address associated with your containers.
Now if you want to log in to your Linux docker use the below command. With this you will log in to the docker, and after that you can work normally as you do in a normal Linux VM.
docker exec -it c598nvcvc190 bash
If you want to access the WordPress website from a browser, type localhost:8000 in the browser. It will show your website. You can access its admin page at localhost:8000/wp-admin.
If you want to check the connection to your mysql container from the php container, you can do so by creating a php file.
So create a file testmyconnection.php in the wordpress container, in the directory /var/www/html/, and paste the below content into it. You will have to put your own user ID and password in the file.
If you want to test with root user you can get the root user login credentials from docker-compose.yml file.
<?php
$con = mysqli_connect("db","youruser","yourpassword","wordpress");
if (!$con) {
  die('Could not connect: ' . mysqli_connect_error());
}
echo 'Connected successfully';
mysqli_close($con);
?>
Give the testmyconnection.php file execution rights using the below command. For testing purposes I am giving it full rights; you should restrict the rights as per your requirement.
chmod 777 testmyconnection.php
Now you can test the connection by opening the below link in a browser.
 http://localhost:8000/testmyconnection.php
It will show you “Connected successfully” message if all is fine.
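When you are done experimenting, you can stop and remove the whole stack from the same directory; a small sketch (the --volumes flag also deletes the db_data volume, so use it only if you no longer need the data):
docker-compose down
docker-compose down --volumes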
Hope this article helped you get started on dockers. Do let me know your opinions in comment section.

How to prepare for EX210 – RHCSA in Red Hat OpenStack Exam

The Red Hat Certified System Administrator in Red Hat OpenStack exam tests the OpenStack management skills of the candidate. Before you start preparing for this exam you should understand basic Linux. Red Hat recommends skills of at least the level of RHCSA-RHEL, and this level is important because it's a lab exam and you have to answer the questions by doing them practically.
If you are a newbie on Linux and want to quickly master it, you can check the Learn Linux course on Udemy.
Here I am listing a few Linux commands and topics which you should know before sitting for the Red Hat OpenStack exam.
  1. Be comfortable with the vi/vim editor. You should know how to edit a file because you will have to edit the configuration file for starting the OpenStack installation.
  2. Know how to navigate in Linux and view files using commands like cd, ls, more, cat etc.
  3. You can use commands like cp, mv to copy and move or rename a file.
  4. Should understand how a repository is created in linux. The repo file is generally saved in /etc/yum.repos.d .
  5. Know how to update the patches in linux using YUM or rpm tools.
  6. Understand basic networking using commands like “ifconfig -a” and “ip addr” . Network configuration files are generally stored in /etc/sysconfig/network-scripts directory. Check how to configure IP.
  7. Check the files where the server names are saved like /etc/hosts .
  8. Understand how the LVM(Logical Volume Manager) works in Linux. For details refer free LVM crash course .
  9. How to start/stop and enable/disable services in linux.
  10. How to sftp and ssh to and from the openstack server.
If you are comfortable up to the above-mentioned level of Linux, you can start checking the different components of OpenStack. Below is a listing of a few components on which questions will be asked in the exam.
  1. Compute (Nova)
  2. Networking (Neutron)
  3. Block storage (Cinder)
  4. Identity (Keystone)
  5. Image (Glance)
  6. Object storage (Swift)
  7. Dashboard (Horizon)
  8. Orchestration (Heat)
In the exam, you can do the installation in two ways: either install each component manually, or use the packstack package/tool to do the installation. If you are new to OpenStack I recommend using packstack. For a packstack installation you will generate an answer file and modify it as per the requirements mentioned in the exam questions. Once the file is ready, give it as input to packstack, and packstack will do all the installations for you (see the sketch below).
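As a rough sketch of that packstack flow (the answers.txt filename is just an example):
packstack --gen-answer-file=answers.txt
vi answers.txt    # edit as per the question requirements
packstack --answer-file=answers.txt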
Once the installation is done you can use the CLI or GUI for the remaining questions in the exam. If you are new to OpenStack, the GUI is the best option, as it gives you a graphical overview of what you are doing.
Some of the questions you can expect in exam post installation are:-
  1. Create Public and Private subnets
  2. Create Users and Projects
  3. Launch instance
  4. Create Volumes
  5. Attach volumes to instance.
Tip: The exercises given at the end of the OpenStack EX210 official training book are a good test of your skills. If you can do those exercises successfully without taking any help, you will easily be able to clear the exam.
Be Sociable. Share It. Happy Learning!

AWS VS AZURE VS OPENSTACK

Most of the services provided by different cloud providers are the same as what you run in an on-premises setup; they just have a different name in the cloud. Below is a comparison of major services offered by different cloud providers and what they mean in simple layman's terms. Hope this is helpful to you.

S3 CROSS ACCOUNT ACCESS WITH FOLDER RESTRICTION

 

This document is created to show you how to grant cross-account access to a user and restrict it to a folder in an S3 bucket. It can be a very useful cost-saving measure, as you don't have to duplicate the data in a QA bucket, while keeping the data safe since you are granting only read access.

Problem:- We want to allow the QA user (qauser) to get files which are in the Production bucket (prodbucket), but it should only be able to access folder1 in prodbucket. Also, both the Production user (produser) and qauser should be able to access the buckets in their own accounts.
The hierarchy of the prod bucket is prodbucket/folder1.

Solution:-
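One common approach, sketched below under assumptions: a bucket policy on prodbucket in the production account grants the QA user read-only access to folder1; the QA account ID 222222222222 is a placeholder, and the QA account must also allow qauser the matching s3:GetObject/s3:ListBucket actions in its own IAM policy.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowQAListFolder1",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:user/qauser" },
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::prodbucket",
      "Condition": { "StringLike": { "s3:prefix": "folder1/*" } }
    },
    {
      "Sid": "AllowQAReadFolder1",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::222222222222:user/qauser" },
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::prodbucket/folder1/*"
    }
  ]
}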

SMALLEST AND LARGEST SUBNET SIZE IN AZURE



What is the smallest and largest subnet in Azure?
The smallest subnet in Azure is /29. So theoretically you have 8 IPs in the subnet, but Azure reserves 5 IPs, so you will only get 3 usable IPs for yourself.
The largest subnet in Azure is /8; 5 IPs are reserved in this as well.
Refer to this post to know how to calculate the total number of IPs available in a subnet.
You can get more details on this page of the Azure site.

Max Hospital Gurgaon Review

Here I would like to share my experience at Max Hospital Gurgaon.
We visited Max first when we were looking for a good gynecologist.
We had already met two doctors in Gurgaon but didn't find their responses satisfactory. So while looking for another doctor, one of my friends suggested Dr. Deepa Dewan, who sits at Max Gurgaon and also at her private clinic.
Our first experience with Max was good. After taking a telephonic appointment we visited Max. While I made the payment for the doctor's fees, one of the nurses took my wife's stats like weight, BP etc. After waiting for around 15-20 mins we were seen by the doctor.
Regarding Dr. Dewan, what I found good was that she patiently listens to your queries and replies to them very calmly. She speaks to the point and straight. My wife was comfortable after the discussion with her.
Dr. Dewan gave us her number so that we can WhatsApp her if we have any doubt, or call her in case of emergency. This is very helpful and makes you feel that you are not on your own and the doctor is available to help you anytime.

Solved: HTTP ERROR 500 example.com currently unable to handle this request.

Problem:-
example.com currently unable to handle this request. HTTP ERROR 500
Solution:-
The above error is generic and can be related to multiple things. Here we will look at a step-by-step approach to troubleshoot it. Login to your Linux server. (If it's an AWS instance refer to How to login to AWS EC2 Linux Instance.)