AWS Crash Course – S3

Welcome to AWS Crash Course.

What is S3?

S3 stands for Simple Storage Service. It is an object store, which means it is used for storing objects such as photos, videos, documents, etc.

  • S3 provides 11 9’s of durability (99.999999999%), which works out to losing roughly 1 out of every 100 billion objects per year.
  • S3 provides 99.99% availability.
  • Objects can be from 1 byte to 5 TB in size.
  • Read-after-write consistency for PUTs of new objects – you can immediately read what you have written.
  • Eventual consistency for overwrite PUTs and DELETEs – if you overwrite or delete an existing object, the change takes time to propagate, so a read may briefly return the old data.
  • You can secure your data using ACLs and bucket policies.
  • S3 is designed to sustain the concurrent loss of 2 facilities, i.e. 2 Availability Zone failures.

S3 has multiple storage classes. One is S3 Standard, which we discussed above. The other classes are:

S3-IA (Infrequent Access)
  • S3-IA is for data which is accessed less frequently but still needs to be immediately available when requested.
  • You get the same durability as S3 Standard at a reduced price, with slightly lower (99.9%) availability.
  • Can sustain the loss of 2 facilities concurrently.
S3-RRS (Reduced Redundancy Storage)
  • 99.99% durability and 99.99% availability.
  • Use RRS if you are storing non-critical data that can be easily reproduced, like thumbnails of images.
  • No concurrent facility fault tolerance.
S3 Glacier
  • Data is stored in Amazon Glacier as “archives”.
  • An archive can be any data, such as a photo, video, or document.
  • A single archive can be as large as 40 terabytes.
  • You can store an unlimited number of archives in Glacier.
  • Amazon Glacier uses “vaults” as containers to store archives.
  • Under a single AWS account, you can have up to 1,000 vaults per region.
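As a quick illustration, you can pick a storage class per object when uploading with the AWS CLI. This is a sketch – the bucket and file names below are placeholders:

```shell
# Upload to the default S3 Standard class (bucket/file names are examples)
aws s3 cp photo.jpg s3://my-example-bucket/photo.jpg

# Upload directly to the Infrequent Access class instead
aws s3 cp photo.jpg s3://my-example-bucket/photo.jpg --storage-class STANDARD_IA
```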

Here is a quick comparison of the different S3 classes as per Amazon.

S3 supports versioning.

What does that mean?
It means that when you change a file, S3 can keep both the old and the new versions of it.

  • If you enable versioning in S3, it will keep all versions of an object even if you delete or update it.
  • It is a great backup tool.
  • Once enabled, versioning cannot be disabled, only suspended.
  • It integrates with lifecycle rules.
  • Versioning’s MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security.
  • Cross-region replication requires versioning to be enabled on the source bucket.
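Versioning is enabled per bucket. As a sketch with a placeholder bucket name, using the AWS CLI:

```shell
# Enable versioning on a bucket (bucket name is an example)
aws s3api put-bucket-versioning --bucket my-example-bucket \
    --versioning-configuration Status=Enabled

# Versioning cannot be disabled later, only suspended:
aws s3api put-bucket-versioning --bucket my-example-bucket \
    --versioning-configuration Status=Suspended
```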
S3 supports Lifecycle Management

What does that mean?
It means you can move objects from one storage class to another on a schedule, after a set number of days. This is used to reduce cost by moving less critical data to a cheaper storage class.

  • Lifecycle configuration enables you to specify the lifecycle management of objects in a bucket.
  • It can be used together with versioning.
  • It can be applied to both current and previous versions.
  • Transition from Standard to the Infrequent Access storage class can happen only after the data has been in Standard class storage for 30 days.
  • You can transition data directly from Standard to Glacier.
  • Lifecycle policies will not transition objects smaller than 128 KB to Infrequent Access.
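A lifecycle configuration is expressed as a JSON rule set. The fragment below is a sketch with a hypothetical prefix; it would move matching objects to Infrequent Access after 30 days and to Glacier after 90:

```json
{
  "Rules": [
    {
      "ID": "archive-old-logs",
      "Filter": { "Prefix": "logs/" },
      "Status": "Enabled",
      "Transitions": [
        { "Days": 30, "StorageClass": "STANDARD_IA" },
        { "Days": 90, "StorageClass": "GLACIER" }
      ]
    }
  ]
}
```

You would apply it with something like `aws s3api put-bucket-lifecycle-configuration --bucket my-example-bucket --lifecycle-configuration file://lifecycle.json` (bucket and file names are placeholders).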

If you want to try some hands-on practice, try this exercise. You can also refer to this AWS S3 CLI cheat sheet to get your hands dirty with the command line.

This series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services in this series over here.

Solved: Install ifconfig and ssh in Ubuntu

In this post we will see how to install ifconfig and ssh in Ubuntu 16.04 (Xenial Xerus). You can install both, or either of them independently.


By default you can check IP details in Ubuntu using

ip addr

But if you are more comfortable with “ifconfig”, then read on.

Ensure that your Ubuntu instance is connected to the internet, or to a local package repository server from which it can pull packages.

If you need ifconfig on your Ubuntu server, use the following commands.

sudo apt-get update
sudo apt-get install net-tools

The “ifconfig” command should now work.

ifconfig -a


If you are trying to get ssh, use the command below.

sudo apt-get install openssh-server

Once the ssh installation is done, check the status of the ssh service.

sudo service ssh status

If the service is not running, start it with the following command.

sudo service ssh start

Once this is done you should see a message that the service started OK. You can check the service status again with

sudo service ssh status

The default configuration file for ssh is /etc/ssh/sshd_config. If you make any changes to this file, you will have to restart the ssh service for the changes to take effect.

sudo service ssh restart
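For instance, a common hardening change (shown here purely as an illustration) is to disable direct root login by editing a directive in /etc/ssh/sshd_config and then restarting the service as above:

```
# /etc/ssh/sshd_config – example directive
PermitRootLogin no
```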

If you are using docker, you can create a golden image after doing this installation, so that you don’t have to repeat it in every new container. To check how to create a golden image of a docker container, check this post.

Solved: How to create an image of a docker container.

So you have completed all the installation on a docker container and now you want to keep it as a golden image.

Golden images are useful when you want to create more containers with the same configuration. They also ensure that when you ship an image from Dev to UAT or Prod, it will be exactly the same as it was when you tested it.

This also avoids problems during release.

So how do you create an image from a running container?

Let’s have a look. We have a running container with ID d885f4ea3cff.

docker container ls
d885f4ea3cff d355ed3537e9 "/bin/bash" 25 hours ago Up 53 minutes ansible

We have already done all the installations in it, so we will commit it and create an image called “ansible-base”.

docker commit d885f4ea3cff ansible-base

Now if you look at the image list you should see the new image.

docker image ls
ansible-base latest 1fbdr6f81e4c 39 minutes ago 159 MB

You can create a new container from this image.

docker run -t -i -d 1fbdr6f81e4c /bin/bash

Notice the “-d” above: it runs the container in detached mode, so even if you log out of the container it will remain up.

List the containers that are running

docker container ls

Log in to the container you just created, using its container id (d985be2f2c3e in our case).

docker exec -it d985be2f2c3e bash

If you want to rename the new container, check this post.

Solved: Error when allocating new name – Docker

Error response from daemon: Error when allocating new name: Conflict. The container name "/webserver" is already in use by container 6c34a8wetwyetwy7463462d329c9601812tywetdyud76767d65f7dc7ea58d8541. You have to remove (or rename) that container to be able to reuse that name.

If you see the above error, it is because a container with the same name already exists.

Let’s check our running containers

docker container ls

If you don’t see any running container with that name, check the stopped containers.

docker ps -a

b45bd14c8987 1fbd5d581e4c "/bin/bash" 2 minutes ago Up 2 minutes competent_keller
59b9f5a63ba0 ansible-base "/bin/bash" 3 minutes ago Created wizardly_payne
6c34a8a6edb6 d355ed3537e9 "/bin/bash" 9 minutes ago Exited (0) 4 minutes ago webserver

Above we can see that we already have a container named webserver.

So we will rename the old container.

docker rename webserver webserver_old

Now if we check again, our container has been renamed to webserver_old.

docker ps -a

b45bd14c8987 1fbd5d581e4c "/bin/bash" 4 minutes ago Up 4 minutes competent_keller
59b9f5a63ba0 ansible-base "/bin/bash" 5 minutes ago Created wizardly_payne
6c34a8a6edb6 d355ed3537e9 "/bin/bash" 10 minutes ago Exited (0) 5 minutes ago webserver_old

And if you don’t need the old container, you can also delete it to free up space.

docker rm webserver_old

Now if you try to create a container with the name “webserver”, you should not get any error.
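With the name freed up, recreating the container would look something like the sketch below, reusing the image name from the earlier example:

```shell
# Run a detached container with the now-available name "webserver"
docker run -d -t -i --name webserver ansible-base /bin/bash
```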

Solved: AWS4-HMAC-SHA256 encryption error while updating S3 bucket

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.

The above error means that you are performing an operation on a bucket in a region that supports only Signature Version 4 (AWS4-HMAC-SHA256), while your client or SDK is signing requests with an older signature version.

The newer AWS regions support only AWS4-HMAC-SHA256 (Signature Version 4).

So if you have created the bucket in regions like Frankfurt or Mumbai, you may see this error.

To fix it, upgrade your SDK or CLI to a version that supports Signature Version 4, or explicitly configure the signature version. Alternatively, you can create the bucket in one of the older US regions and continue working.
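For example, with a reasonably recent AWS CLI you can force Signature Version 4 for S3 requests (a sketch of one possible fix):

```shell
# Tell the AWS CLI to sign S3 requests with Signature Version 4
aws configure set default.s3.signature_version s3v4
```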