Solved: How to resize Docker Quickstart Terminal Window

By default on Windows 7 the Docker Quickstart Terminal window is quite small, which can be annoying to work in.
Here we will show you how to increase the window size as per your requirement.
  • Open the Docker Quickstart Terminal as an Administrator.
  • Right-click the blue whale icon at the top of the Docker Quickstart Terminal window.
  • Click “Properties” and select the “Layout” tab.
  • Increase the “Width” and “Height” under “Window Size” as per your requirement.
  • Finally click OK and re-open the terminal.
That’s all folks!

Solved: How to restart a docker container automatically on crash

In this post we will see how we can restart a container automatically if it crashes.
If you want a Docker container to always restart use:-
docker run -dit --name cldvds-always-restart --restart=always busybox
But if you want the container to always restart unless it is explicitly stopped or Docker itself is restarted, use:-
docker run -dit --name cldvds-except-stop --restart unless-stopped busybox
In case you want Docker to give up after 3 failed restart attempts, use the below command.
docker run -dit --name cldvds-restart-3 --restart=on-failure:3 busybox
You can see the logs of a container using
docker logs cldvds-restart-3
If you want to change the restart policy of a running container you can do it with “docker update”, e.g. here we are changing the maximum restart attempts from 3 to 4 for container cldvds-restart-3.
docker update --restart=on-failure:4 cldvds-restart-3
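To confirm which restart policy a container currently has, docker inspect can print it from the container's HostConfig (a quick check; the container name follows the examples above):
docker inspect -f '{{ .HostConfig.RestartPolicy.Name }}:{{ .HostConfig.RestartPolicy.MaximumRetryCount }}' cldvds-restart-3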

Solved: How to convert .usb file to .vmdk to .vdi to .vhd and vice versa

In this post we will see how we can convert a .usb file to .vmdk, .vdi, or .vhd.
We will be doing this conversion using a free tool Oracle Virtualbox.
Pre-requisites:-
Download the latest VirtualBox version.
Once you have downloaded and installed the latest version of VirtualBox, go to the directory in which VirtualBox is installed. There you will find a program called "VBoxManage.exe", which we will use for the conversion.
Convert .usb to .vmdk 
First let's see how to convert a .usb file to .vmdk. The syntax is simple:
VBoxManage.exe convertfromraw filename.usb filename.vmdk --format VMDK
In the below example we will convert the Solaris 11 USB file sol-11_2-openstack-x86.usb to VMDK.
E:\>"C:\VBoxManage.exe" convertfromraw sol-11_2-openstack-x86.usb sol-11_2_cldvds-openstack-x86_Copy.vmdk --format VMDK
Converting from raw image file="sol-11_2-openstack-x86.usb" to file="sol-11_2_cldvds-openstack-x86_Copy.vmdk"...
Creating dynamic image with size 8655764992 bytes (8255MB)...
Convert .usb to .vdi
Similarly you can convert a .usb file to .vdi. Refer to the below example.
E:\>"C:\VBoxManage.exe" convertfromraw sol-11_2-openstack-x86.usb sol-11_2_cldvds_openstack-x86_Copy.vdi --format VDI
Converting from raw image file="sol-11_2-openstack-x86.usb" to file="sol-11_2_cldvds_openstack-x86_Copy.vdi"...
Creating dynamic image with size 8655764992 bytes (8255MB)...
Convert .vdi to .vmdk
Now let's try converting a .vdi file to .vmdk
E:\>"C:\VBoxManage.exe" convertfromraw sol-11_2-openstack-x86_Copy.vdi sol-11_2_cldvds-openstack-x86_Copy.vmdk --format VMDK
Converting from raw image file="sol-11_2-openstack-x86_Copy.vdi" to file="sol-11_2_cldvds-openstack-x86_Copy.vmdk"...
Creating dynamic image with size 6624903168 bytes (6318MB)...
The same pattern can be used to go from .vmdk to .vdi (though see the note at the end of this post about clonemedium, the dedicated command for converting between virtual disk formats):
VBoxManage.exe convertfromraw filename.vmdk filename.vdi --format VDI
Convert .vdi to .vhd
You can use the same steps to convert from .vdi to .vhd.
E:\>"C:\VBoxManage.exe" convertfromraw sol-11_2-openstack-x86_Copy.vdi sol-11_2_cldvds-openstack-x86_Copy.vhd --format VHD
Converting from raw image file="sol-11_2-openstack-x86_Copy.vdi" to file="sol-11_2_cldvds-openstack-x86_Copy.vhd"...
Creating dynamic image with size 6624903168 bytes (6318MB)...
You can use similar steps to convert the other way round, e.g. .vmdk to .vdi or .vhd to .vdi.
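One caveat worth noting: convertfromraw treats its input as a raw disk image, which is why it suits .usb files. For converting between existing virtual disk formats, VBoxManage also provides a dedicated clonemedium command (clonehd on older VirtualBox releases). A hedged sketch with placeholder file names:
VBoxManage.exe clonemedium disk input.vmdk output.vdi --format VDI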
Hope this post is helpful to you. Do let me know if you have any query.

Review of 70-534 / 70-535 Architecting Microsoft Azure Solutions Certification course on Udemy by Scott Duffy

As I mentioned in my earlier post, I had a good experience with Udemy’s online courses while preparing for the AWS Certified Solutions Architect – Associate exam.
So for the Architecting Microsoft Azure Solutions certification too, I wanted to buy an online course.
When I started my search for a suitable course I zeroed in on the Architecting Microsoft Azure Solutions Certification course by Scott Duffy on Udemy.
Scott himself is a certified architect with rich industry experience. Going through the course content I found that it covered almost all the exam topics, and Scott keeps updating the content as the syllabus changes. Also, if you buy from Udemy you get free lifetime access to the course, so I went with this one.
Now coming to the course.
After buying the course and going through it, I found the content to be good, though Scott’s delivery is not as engaging as that of Ryan from acloudguru, whose AWS course I had completed earlier.
Scott mixes many developer-oriented topics into this course, which can be good if you are from a development background but is not essential for the exam. For the exam you only need a basic understanding of coding, enough to read and follow the code.
The course has many quizzes after the modules which are helpful for testing what you have learnt. It also includes hands-on exercises which you can follow in your own MS Azure free trial account.
It has mock exams at the end of sections, but when I took them I found that no matter how many times you retake a mock exam it keeps showing the score from your first attempt. I informed Scott about this on the forum page and he said he would get it sorted. Hopefully it’s fixed now.
Overall this course covers most of the exam topics, and if you are a beginner in Azure it will give you a good base. Also, as of Apr-18 this is the only course on Udemy which covers most of the topics and gets you exam ready.
If you already have a background in AWS then I recommend that you also go through the course Microsoft Azure for AWS Experts by Microsoft (Update: this is now a paid course). A few of the Azure terms are confusing, and relating them to AWS will help you understand them quickly. You can also refer to our post where we have compared the services of AWS vs Azure.
Read the post on the Azure exam pattern to know more about what’s asked in the exam. You can supplement your preparation with sample exam questions by Scott.
Scott also has a good course on 70-533 Implementing Microsoft Azure Infrastructure Solutions.
Do share your queries or opinion with us.

Parallel Patching in Solaris 10

When you patch a Solaris 10 server it applies each patch to each zone one at a time. So if you have 3 local zones and it takes 1 minute to apply a patch on the global zone, it will take another 1 minute each for the 3 local zones. In total you will spend around 4 minutes applying a single patch on the server. You can imagine the time it will take to apply a 300-patch bundle.
From Solaris 10 10/09 onward you have the option to patch multiple zones in parallel.
(For releases prior to Solaris 10 10/09, this feature is delivered in the patch utilities patch, 119254-66 or later revision (SPARC) and 119255-66 or later revision (x86). Check latest patch on My Oracle Support Website)
Parallel patching is really helpful as it applies patches to all the local zones at the same time, drastically reducing your patching time. In the scenario above, parallel patching can bring the total time for applying the patch to all zones down to around 2 to 2.5 minutes, since the global zone is still patched first and only then is the patch applied to the local zones in parallel.
The number of non-global (local) zones that can be patched in parallel is decided by the parameter num_proc= defined in /etc/patch/pdo.conf .
The value for num_proc= is decided based on the number of online CPUs in your system; the maximum allowed is 1.5 times the number of online CPUs.
For example :-
If number of online CPUs is 6
In /etc/patch/pdo.conf make the entry
num_proc=9
Thus, as per the above example, 9 zones can be patched in parallel. This will save a lot of downtime if you have 9 zones running on a server. Once the entry in pdo.conf is done you can continue with the normal patching process.
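If you are not sure how many CPUs are online, you can check from the global zone before setting num_proc (a quick check using the standard Solaris psrinfo utility):
psrinfo | grep -c on-line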
So just update the num_proc value in  /etc/patch/pdo.conf  as per the available CPUs in your system and enjoy some free time 🙂
Note:- The time estimates I mentioned above are from my own experience and I have not maintained any data for this, so please expect variations as per your system.
Do let me know if you have any query.

Review of acloudguru course AWS Certified Solutions Architect – Associate

In this post we are writing a review of the acloudguru course AWS Certified Solutions Architect – Associate on Udemy.
When I decided to go for the AWS Certified Solutions Architect – Associate exam I was confused about how to start the preparation. The syllabus of the exam is vast and covers a lot of topics, and I needed someone to guide me.
After going through many forums I came to the conclusion that I had two options.
  1. Attend the course at one of the training institutes. The official training costs range from $800 to $1500, and even the institutes offering non-official training quoted $400 to $900. This was a lot of money considering that my last company was not sponsoring the course and I had to pay for it myself.
  2. Look for online courses. These courses are cheap, ranging from $10 to $100, but I was not sure about the quality of the content and how much I could learn with just online training.
After going through multiple forums and blogs I shortlisted online courses from two vendors: Linux Academy and acloudguru.
Linux Academy has good material and people have written good feedback about them. The problem is that you have to buy a monthly subscription, with plans starting from $29, and I knew that with a full-time job I wouldn’t be able to complete the course properly in one month.
So I checked out the acloudguru course. Many people have given good feedback for this course too. It was listed on the acloud.guru website for $29, but I found the same course on Udemy for $10 in a special offer at the time, and Udemy also provides free lifetime access. This suited me best, so I purchased it. Also, once you buy the course from Udemy you can get free access to the same course on the acloud.guru website.
Now coming to the acloudguru AWS Certified Solutions Architect – Associate course itself.
As of Apr-2018 the course has around 22 hours of on-demand video.
The course is very comprehensive and gives you a good understanding of the topics. The instructor (Ryan) is very energetic and teaches each topic in a very simple way, so even if you don’t have a background in AWS you will understand the material without much effort.
It includes a lot of practical labs which you can follow by creating your own AWS free tier account, and it covers almost all topics of the exam, so you don’t have to worry about the syllabus.
It also has quizzes throughout the course for knowledge checks, and full-length mock exams to test your understanding.
One of the recommendations for exam preparation is to go through the AWS whitepapers. Each whitepaper is a big PDF of 50 to 200 pages, and for the associate level you are supposed to read around 6 to 7 of them. This can be overwhelming, but Ryan covers the gist of these whitepapers in his course.
The course is very useful for getting you started on the AWS journey, but the course alone is not enough to clear the exam. You need good practical knowledge of AWS to clear the exam, and this course gives you the right direction.
You can get good sample exam questions for practice from another acloudguru course on Udemy, Exam Questions – AWS Solution Architect Associate, or from Jon’s AWS Certified Solutions Architect Associate Practice Exams, for which most students have written great feedback.

Solved: How to plumb IP in a Solaris zone without reboot

In this post we will discuss how to add an IP to a running Solaris zone.

If you want to add a new IP address to a running local zone (zcldvds01), you can do it by plumbing the IP manually from the global zone.

root@cldvds-global()# ifconfig aggr1:2 plumb
root@cldvds-global()# ifconfig aggr1:2 inet 10.248.3.167 netmask 255.255.255.0 broadcast 10.248.3.255 zone zcldvds01 up

This change is not persistent across reboot. To make it permanent you will have to make an entry through zonecfg:-

root@cldvds-global()# zonecfg -z zcldvds01
zonecfg:zcldvds01> add net
zonecfg:zcldvds01:net> set physical=aggr1
zonecfg:zcldvds01:net> set address=10.248.3.167
zonecfg:zcldvds01:net> end
zonecfg:zcldvds01> verify
zonecfg:zcldvds01> commit
zonecfg:zcldvds01> exit

Now if you run "ifconfig -a" in the zone, you should see the new IP plumbed.
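You can also check this from the global zone itself with zlogin (a quick check, using the zone name from the example above):

root@cldvds-global()# zlogin zcldvds01 ifconfig -a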

Change IP in zone

If you want to change the IP address of a zone you can simply do it using "remove net". Taking the above example, to change the IP from 10.248.3.167 to 10.248.3.175 we do as below:-

root@cldvds-global()# zonecfg -z zcldvds01
zonecfg:zcldvds01> remove net address=10.248.3.167
zonecfg:zcldvds01> add net
zonecfg:zcldvds01:net> set physical=aggr1
zonecfg:zcldvds01:net> set address=10.248.3.175
zonecfg:zcldvds01:net> end
zonecfg:zcldvds01> verify
zonecfg:zcldvds01> commit
zonecfg:zcldvds01> exit

Solved: How to grow or extend ZFS filesystem in Solaris 10

Below are the steps to grow a ZFS filesystem.
  • Identify the zpool of the zfs filesystem.
df -h | grep -i sagufs
df -Z | grep -i sagufs
The above commands will give you the complete path of the filesystem and the zpool name, even if it is inside a zone.
  • Check that the pool doesn't have any errors.
root# zpool status sagu-zpool
 pool: sagu-zpool
 state: ONLINE
 scan: none requested
config:

 NAME                                      STATE   READ WRITE CKSUM
 sagu-zpool                                ONLINE     0     0     0
   c0t911602657A702A0004D339BDCF15E111d0   ONLINE     0     0     0
   c0t911602657A702A00BE158E94CF15E111d0   ONLINE     0     0     0
   c0t911602657A702A004CD071A9CF15E111d0   ONLINE     0     0     0

errors: No known data errors
  • Check the current size of the pool
root# zpool list sagu-zpool
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
sagu-zpool 249G 178G 71.1G 71% ONLINE -
  • Label the new LUN.
root# format c0t9007538111C02A004E73B39A155BE211d0
  • Add the LUN to the appropriate zpool. Be careful about the pool name.
root# zpool add sagu-zpool c0t9007538111C02A004E73B39A155BE211d0
  • Now let's say we want to increase the filesystem from 100GB to 155GB. To increase the FS, first increase its quota.
root# zfs set quota=155G sagu-zpool/sagufs
  • Finally increase the FS reservation
root# zfs set reservation=155G sagu-zpool/sagufs
  • Now you should be able to see the increased space.
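A quick way to confirm the new size and settings of the dataset (names follow the example above):
root# zfs list sagu-zpool/sagufs
root# zfs get quota,reservation sagu-zpool/sagufs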

How to create an IAM user in AWS

In this post we will see how to create an IAM user which can be used to access S3 using CLI.
  • Login to  AWS IAM console.
  • In the left pane click on “Users”
  • Click on “Add user”
a) User name: S3User
b) Access Type: Check Programmatic Access.
  • Click Next
a) Select Attach existing policies directly.
b) Search for AmazonS3FullAccess and select it.
  • Click Next.
  • Review everything and click “Create user”
It will show you the “Access key ID” and “Secret access key”. Save them, as you won’t be able to see the “Secret access key” again once you close this page.
In this tutorial we have selected an existing S3 policy, but you can also attach your own customized policy to make the access more secure.
Congrats, you have created an IAM user successfully. You can use it to access S3 using the CLI. Check this post for details.
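If you prefer the command line, the same user can be created with the AWS CLI. This is only a minimal sketch, assuming the CLI is already configured with credentials that have IAM permissions; the user name matches the console example above:
aws iam create-user --user-name S3User
aws iam attach-user-policy --user-name S3User --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-access-key --user-name S3User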

Solved: How to copy paste in Docker Quickstart Terminal

If you want to copy/paste content in the Docker Quickstart Terminal using the mouse, follow these steps.
  • Open the Docker Quickstart Terminal as an Administrator.
  • At the top of the terminal, right-click the blue whale icon and select “Defaults”.
  • In the “Options” tab of the new window, check “QuickEdit Mode” and click OK.
  • Now you can select content with a left-click drag and paste it with a right-click.

Solved: Getting nobody:nobody as owner of NFS filesystem on Solaris client

If the NFS Version 4 client does not recognize a user or group name from the server, the client is unable to map the string to its unique ID, an integer value. Under such circumstances, the client maps the inbound user or group string to the nobody user. This mapping to nobody creates varied problems for different applications.
Because of these ownership issues you may see that the filesystem is owned by nobody:nobody on the NFS client.
To avoid this situation you can mount the filesystem with NFS version 3 as shown below.
On the NFS client, mount the file system using NFS v3:
# mount -F nfs -o vers=3 host:/export/XXX /YYY
e.g.
# mount -F nfs -o vers=3 hostA:/home/cv/share_fs /tmp/test
If this works fine, then make the change permanent by modifying the /etc/default/nfs file: uncomment the variable NFS_CLIENT_VERSMAX and set it to 3.
vi /etc/default/nfs
NFS_CLIENT_VERSMAX=3
If you are still getting nobody:nobody permissions, then you have to share the filesystem on the NFS server with the anon option.
share -o anon=0 /home/cv/share_fs
Now try re-mounting on the NFS client:
mount -F nfs -o vers=3 hostA:/home/cv/share_fs /tmp/test
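To confirm which NFS version a mount is actually using, nfsstat on the Solaris client shows the mount options, including vers= (a quick check; the mount point follows the example above):
nfsstat -m /tmp/test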

How to Fix Crawl Errors in Google Search Console or Google Webmaster Tools

If you have recently moved or deleted your website links, then you may see crawl errors in Google Webmaster Tools, now renamed Google Search Console.
You will see two types of errors on the Crawl Errors page.
Site errors: A normally operating site generally won’t have these errors. As per Google, if they see a large number of site errors for your website they will try to notify you with a message, no matter how big or small your site is. These errors generally appear when your site is down for a long time or is not reachable by Google’s bots because of issues like DNS errors or excessive page load times.
URL errors: These are the most common errors you will find for a website. They can happen for multiple reasons, for example you moved or renamed pages, or you permanently deleted a post.
These errors may impact your search engine rankings, as Google doesn’t want to send users to pages that don’t exist.
So let’s see how you can fix these URL errors.
Once you login to the Google Webmaster Tools and select the verified property you should see the URL errors marked for your site.
It has two tabs, Desktop and Smartphone, that show the errors for the respective versions of your website.
Select an error and you will see the website link which is broken. It can be an old post which you have moved or deleted.
If you are a developer you can redirect the broken links to working pages yourself, as in the example below.
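For instance, on an Apache server a permanent (301) redirect can be added to the site's .htaccess file. A minimal sketch with placeholder paths:
Redirect 301 /old-post/ https://www.example.com/new-post/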
But if you don’t want to mess with the code you can install a free plugin called Redirection. Below we will see how you can install and use this plugin.
  • For installing the plugin go to Dashboard> Plugins> Add New
  • Search the plugin “Redirection” and click Install > Activate.
  • After you have installed the plugin go to Dashboard> Tools> Redirection.
  • Once on the Redirection settings page, select “Redirects” from the top.
  • In the “Source URL” copy/paste the URL for which you are getting error.
  • In the “Target URL” copy/paste the working URL.

  • Click Add Redirect.
You can also redirect all your broken URLs to your homepage, but if the post is available at a different link, it’s recommended that you redirect the broken link to the new working link of that post. This will enhance the user experience.
The last step is to go back to the Google Webmaster Tools page, select the URL you just corrected, and click “Mark as Fixed”.
Hope this post helps you. Do let me know your opinion in comments section.

In what sequence startup and shutdown RC scripts are executed in Solaris

We can use RC (Run Control) scripts present in /etc/rc* directories to start or stop a service during bootup and shutdown.
Each rc directory is associated with a run level. For example the scripts in rc3.d will be executed when the system is going in Run Level 3.
All the scripts in these directories must follow a special pattern so that they can be considered for execution.
Startup script names start with an “S” and kill script names start with a “K”. The uppercase letter is very important or the file will be ignored.
The sequence of these startup and shutdown scripts is crucial if the applications are dependent on each other.
For example, while booting up, a database should start before the application, while during shutdown the application should stop first, followed by the database.
Here we will see how we can sequence these scripts: the two-digit number after the S or K controls the order, as the sketch and example output below show.
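A minimal test script along these lines (the name, message, and log path are illustrative) could look like:
#!/bin/sh
# /etc/rc3.d/S90datetest.sh - record when this script runs during startup
echo "Time during execution of CloudVedas script S90datetest.sh is `date`" >> /var/tmp/rc_test.log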
So, as shown in the example below, during startup the S90datetest.sh script was executed first and then S91datetest.sh.
Time during execution of CloudVedas script S90datetest.sh is Monday, 23 September 2016 16:19:43 IST
Time during execution of CloudVedas script S91datetest.sh is Monday, 23 September 2016 16:19:48 IST
Similarly, during shutdown the K90datetest.sh script was executed first and then K91datetest.sh.
Time during execution of CloudVedas script K90datetest.sh is Monday, 23 September 2016 16:11:43 IST
Time during execution of CloudVedas script K91datetest.sh is Monday, 23 September 2016 16:11:48 IST
This sequencing is also a tricky interview question and it confuses many people.

AWS Crash Course - EMR

What is EMR?
  • AWS EMR (Elastic MapReduce) is a managed Hadoop framework.
  • It provides you an easy, cost-effective and highly scalable way to process large amounts of data.
  • It can be used for multiple things like indexing, log analysis, financial analysis, scientific simulation, machine learning, etc.
Cluster and Nodes
  • The centerpiece of EMR is the cluster.
  • A cluster is a collection of EC2 instances, also called nodes.
  • All nodes of an EMR cluster are launched in the same Availability Zone.
  • Each node has a role in the cluster.
Type of EMR Cluster Nodes
Master Node:- It’s the main boss, managing the cluster by running software components and distributing tasks to the other nodes. The master node also monitors task status and the health of the cluster.
Core Node:- A slave node which runs tasks and stores data in HDFS (Hadoop Distributed File System).
Task Node:- Also a slave node, but it only runs tasks and doesn’t store any data. Task nodes are optional.
Cluster Types
EMR has two types of clusters:
1) Transient :- Clusters which are shut down once the job is done. These are useful when you don’t need the cluster to run all day and can save money by shutting it down.
2) Persistent :- Clusters which need to be always available to process a continuous stream of jobs, or where you want the data to be always available on HDFS.
Different Cluster States
An EMR cluster goes through multiple states as described below:-
STARTING – The cluster provisions, starts, and configures EC2 instances.
BOOTSTRAPPING – Bootstrap actions are being executed on the cluster.
RUNNING – A step for the cluster is currently being run.
WAITING – The cluster is currently active, but has no steps to run.
TERMINATING – The cluster is in the process of shutting down.
TERMINATED – The cluster was shut down without error.
TERMINATED_WITH_ERRORS – The cluster was shut down with errors.


Types of filesystem in EMR
Hadoop Distributed File System (HDFS)
Hadoop Distributed File System (HDFS) is a distributed, scalable file system for Hadoop. HDFS distributes the data it stores across instances in the cluster, storing multiple copies of data on different instances to ensure that no data is lost if an individual instance fails. HDFS is ephemeral storage that is reclaimed when you terminate a cluster.
EMR File System (EMRFS)
Using the EMR File System (EMRFS), Amazon EMR extends Hadoop to add the ability to directly access data stored in Amazon S3 as if it were a file system like HDFS. You can use either HDFS or Amazon S3 as the file system in your cluster. Most often, Amazon S3 is used to store input and output data and intermediate results are stored in HDFS.
Local File System
The local file system refers to a locally connected disk. When you create a Hadoop cluster, each node is created from an Amazon EC2 instance that comes with a preconfigured block of preattached disk storage called an instance store. Data on instance store volumes persists only during the lifecycle of its Amazon EC2 instance.
Programming languages supported by EMR
  • Perl
  • Python
  • Ruby
  • C++
  • PHP
  • R
EMR Security
  • EMR integrates with IAM to manage permissions.
  • EMR has master and slave security groups for the nodes to control traffic access.
  • EMR supports S3 server-side and client-side encryption with EMRFS.
  • You can launch EMR clusters in your VPC to make them more secure.
  • EMR integrates with CloudTrail, so you will have a log of all activities done on the cluster.
  • You can log in via SSH to EMR cluster nodes using EC2 key pairs.
EMR Management Interfaces
  • Console :-  You can manage your EMR clusters from AWS EMR Console .
  • AWS CLI :-  The command line provides a rich way of controlling EMR (see the sketch after this list). Refer here to the EMR CLI .
  • Software Development Kits (SDKs) :- SDKs provide functions that call Amazon EMR to create and manage clusters. It’s currently available only for the supported languages mentioned above. You can check here some sample code and libraries.
  • Web Service API :- You can use this interface to call the Web Service directly using JSON. You can get more information from API reference Guide .
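As an illustration of the CLI, a basic cluster can be spun up with a single command. This is only a hedged sketch; the cluster name, release label, instance type, and key pair name are placeholders you would replace with your own values:
aws emr create-cluster --name "cldvds-test-cluster" --release-label emr-5.12.0 --applications Name=Hadoop --use-default-roles --instance-type m4.large --instance-count 3 --ec2-attributes KeyName=my-key-pair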
EMR Billing
  • You pay for the EC2 instances used in the cluster plus an EMR charge.
  • You are charged per instance-hour.
  • EMR supports On-Demand, Spot, and Reserved Instances.
  • As a cost-saving measure it is recommended that task nodes be Spot instances.
  • It’s not a good idea to use Spot instances for the master or core nodes, as they store data and you will lose that data once the node is terminated.
If you want to try some EMR hands-on, refer to this tutorial.

  • This AWS Crash Course series is created to give you a quick snapshot of AWS technologies. You can read about other AWS services in this series over here .

Solved: How to create a soft link in Linux or Solaris

In this post we will see how to create a softlink.
Execute the below command to create a softlink.
[root@cloudvedas ~]# ln -s /usr/interface/HB0 CLV
So now when you list using “ls -l”, the softlink thus created will look like this:
[root@cloudvedas ~]# ls -l
lrwxrwxrwx. 1 root root 18 Aug 8 23:16 CLV -> /usr/interface/HB0
[root@cloudvedas ~]#
Try going inside the link and list the contents.
[root@cloudvedas ~]# cd CLV
[root@cloudvedas CLV]# ls
cloud1 cloud2 cloud3
[root@cloudvedas CLV]#
You can see the contents of /usr/interface/HB0 directory.
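If you later want to remove the softlink itself without touching the target directory, remove just the link name (note there is no trailing slash):
[root@cloudvedas ~]# unlink CLV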

Solved: How to create a flar image in Solaris and restore it for recovery

A flar image is a good way to recover your system from crashes. In this post we will see how to create a flar image and use it to recover the system.
Flar Creation
  • It is recommended that you create the flar image in single-user mode. Shut down the server and boot it into single user.
# init 0
ok> boot -s
  • In this example, the FLAR image will be stored to a directory under /flash. The FLAR image will be named recovery_image.flar .
flarcreate -n my_bkp_image1 -c -S -R / -x /flash /flash/recovery_image.flar
  • Once the flar image is created, copy it to your repository system. Here we are using NFS.
cp -p /flash/recovery_image.flar /net/FLAR_recovery/recovery_image.flar
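You can also inspect the archive before copying it, to confirm its identification details (a quick sanity check using the standard flar utility):
flar info /flash/recovery_image.flar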
Flar Restoration
  • To restore a flar image start the boot process.
  • You can boot the server either from the Solaris CD/DVD or from the network.
  • Go to the ok prompt and run one of the below commands:-
To boot from the installation media (CD/DVD):
ok> boot cdrom
Or, to boot from the network:

ok> boot net
  • Provide the network, date/time, and password information for the system.
  • Once you reach the “Solaris Interactive Installation” part, select “Flash”.
  • Provide the path to the system with location of the FLAR image:
    /net/FLAR_recovery/recovery_image.flar
  • Select the correct Retrieval Method (HTTP, FTP, NFS) to locate the FLAR image.
  • At the Disk Selection screen, select the disk where the FLAR image is to be installed.
  • Choose not to preserve existing data. (Be sure you want to restore onto the selected disk.)
  • At the File System and Disk Layout screen, select Customize to edit the disk slices to input the values of the disk partition table from the original disk.
  • Once the system is rebooted the recovery is complete.

What are the maximum number of usable partitions in a disk in Linux

Linux disks generally fall into two types: IDE and SCSI.
IDE
By convention, IDE drives will be given device names /dev/hda to /dev/hdd. Hard Drive A (/dev/hda) is the first drive and Hard Drive C (/dev/hdc) is the third.
A typical PC has two IDE controllers, each of which can have two drives connected to it. For example, /dev/hda is the first drive (master) on the first IDE controller and /dev/hdd is the second (slave) drive on the second controller (the fourth IDE drive in the computer).
Maximum usable partitions: 63 for an IDE disk.
SCSI
SCSI drives follow a similar pattern; they are represented by ‘sd’ instead of ‘hd’. The first partition of the second SCSI drive would therefore be /dev/sdb1.
Maximum usable partitions: 15 for a SCSI disk.
A partition is labeled to host a certain kind of file system (not to be confused with a volume label). Such a file system could be the Linux standard ext2 file system or Linux swap space, or even foreign file systems like (Microsoft) NTFS or (Sun) UFS. There is a numerical code associated with each partition type. For example, the code for ext2 is 0x83 and Linux swap is 0x82.
To see a list of partition types and their codes, execute /sbin/sfdisk -T
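To see which partitions the kernel currently knows about on your system, you can also read them straight from procfs (works on any Linux box):
cat /proc/partitions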

Solved: How to cap memory on a Solaris 10 zone.

If you want to cap the usage of memory for a zone, follow below steps:-

Here we will ensure that the zone (zcldvdas) doesn't use more than 3072 MB of memory.

# zonecfg -z zcldvdas

zonecfg:zcldvdas> add capped-memory

zonecfg:zcldvdas:capped-memory> set physical=3072m

zonecfg:zcldvdas:capped-memory> end

zonecfg:zcldvdas> verify

zonecfg:zcldvdas> commit

zonecfg:zcldvdas> exit

Now, if you want to dedicate 3072 MB of memory to a zone so that it's always available only to this zone, follow the below steps:-

# zonecfg -z zcldvdas

zonecfg:zcldvdas> add capped-memory

zonecfg:zcldvdas:capped-memory> set locked=3072m

zonecfg:zcldvdas:capped-memory> end

zonecfg:zcldvdas> verify

zonecfg:zcldvdas> commit

zonecfg:zcldvdas> exit

You can also use a combination of physical and locked to assign max and min memory to a zone.

In the next example we set the maximum memory the zone can use to 3072 MB, while 1024 MB is the minimum that should always be available to the zone.

# zonecfg -z zcldvdas

zonecfg:zcldvdas> add capped-memory

zonecfg:zcldvdas:capped-memory> set physical=3072m

zonecfg:zcldvdas:capped-memory> set locked=1024m

zonecfg:zcldvdas:capped-memory> end

zonecfg:zcldvdas> verify

zonecfg:zcldvdas> commit

zonecfg:zcldvdas> exit

This change will be effective after reboot of the local zone.

zoneadm -z zcldvdas reboot

From Solaris 10 u4 onwards you can also cap the memory online using rcapadm.

rcapadm -z zcldvdas -m 3G

But remember, the changes made by rcapadm are not persistent across reboots, so you will still have to make the entry in zonecfg as discussed above.

You can view the configured cap using rcapstat from the global zone.

rcapstat -z 2 5

From the local zone you can check this with prtconf.

prtconf -vp | grep Mem

Solved: How to enable auditing of zones from the Global Zone in a Solaris 10 Server

Auditing is a good way to keep logs of all the activities happening on your Solaris server. In this post we will see how to enable auditing for both the global and local zones and store the logs for all of them in a single location in the global zone.

1) In the global zone create a new FS of 20GB and mount it.

mkdir /var/audit/gaudit
mount /dev/md/dsk/d100 /var/audit/gaudit
chmod -R 750 /var/audit/gaudit

2) Modify /etc/security/audit_control and set "lo,ex" for the flags and naflags entries as below.

vi audit_control
#
# Copyright (c) 1988 by Sun Microsystems, Inc.
#
# ident "@(#)audit_control.txt 1.4 00/07/17 SMI"
#
dir:/var/audit/gaudit
flags:lo,ex
minfree:20
naflags:lo,ex

3) Modify /etc/security/audit_startup and add +argv and +zonename entries as described below. This entry will create audit logs for all zones in /var/audit/gaudit .

vi audit_startup
#! /bin/sh
#
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)audit_startup.txt 1.1 04/06/04 SMI"

/usr/bin/echo "Starting BSM services."
/usr/sbin/auditconfig -setpolicy +cnt
/usr/sbin/auditconfig -conf
/usr/sbin/auditconfig -aconf
/usr/sbin/auditconfig -setpolicy +argv
/usr/sbin/auditconfig -setpolicy +zonename
#

4) Copy the audit_control file to /etc/security of each zone, or loopback-mount it in each zone.

5) Once all the zones are configured, enable the audit service by running /etc/security/bsmconv. This will require a reboot of the system.

6) Check the audit logs in /var/audit/gaudit using:

auditreduce 20170709091522.not_terminated.solaris1 | praudit

7) To check the logs of a specific zone, use the -z option as below:

root@solaris1 # auditreduce -z zone1 20170709091522.not_terminated.solaris1 | praudit
file,2017-07-09 16:26:00.000 +02:00,
zone,zone1
header,160,2,execve(2),,solaris1,2017-07-09 16:26:00.697 +02:00
path,/usr/sbin/ping
attribute,104555,root,bin,85,200509,0
exec_args,2,ping,127.0.0.1
subject,root,root,root,root,root,2164,2187,0 0 0.0.0.0
return,success,0
zone,zone1
file,2017-07-09 16:26:00.000 +02:00,
root@solaris1 #

Solved: How to take XSCF snapshot of M-Series server running Solaris

In this post we will see how to take an XSCF snapshot of an M-Series server.

Save snapshot on different server

  • First create a user "test" in the OS of the server on which you want to save the snapshot.
  • Next log in to the XSCF of the server whose snapshot you want to take.
  • Take the snapshot by giving the IP of the destination server on which you want to save the data, using the below syntax:
    snapshot -LF -t username@serverip:/full_path_to_data_location -k download

Here is an example. We created the test user on the destination server 192.168.99.10, and the snapshot will be saved in its /var/tmp directory.

XSCF> snapshot -LF -t test@192.168.99.10:/var/tmp -k download

Save snapshot on same server.

If you want to save the snapshot on the same server from which you are collecting it, use the below steps.

  • Login to XSCF and check the DSCP config to know the IP of each domain.
XSCF> showdscp

DSCP Configuration:

Network: 10.1.1.0
Netmask: 255.255.255.0

Location Address
---------- ---------
XSCF 10.1.1.1
Domain #00 10.1.1.2
Domain #01 10.1.1.3
Domain #02 10.1.1.4
Domain #03 10.1.1.5
  • Check the running domain
XSCF> showdomainstatus -a
DID Domain Status
00 Running
01 -
02 -
03 -
  • Ping to ensure you can connect to the network
    XSCF> ping 10.1.1.2
    
    PING 10.1.1.2 (10.1.1.2): 56 data bytes
    64 bytes from 10.1.1.2: icmp_seq=0 ttl=255 time=2.1 ms
    64 bytes from 10.1.1.2: icmp_seq=1 ttl=255 time=2.0 ms
  • Take snapshot after creating a user on the OS.
    XSCF> snapshot -LF -t test@10.1.1.2:/var/tmp -k download

Solved: How to scan new LUNs in Redhat Linux

In this post we will discuss how to scan new LUNs allocated by the storage team to a Red Hat Linux system.
There are two ways of scanning the LUNs.
Method 1:-
Find how many SCSI bus controllers you have

  • Go to the directory /sys/class/scsi_host/ and list its contents.

cd /sys/class/scsi_host/ 
[root@scsi_host]# ls
host0 host1 host2
[root@scsi_host]#
  • Here we can see that we have three SCSI bus controllers (host0, host1, host2). In the below command, replace hostX with these directory names.
Run the command:
echo "- - -" > /sys/class/scsi_host/hostX/scan 
[root@cloudvedas]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@cloudvedas]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@cloudvedas]# echo "- - -" > /sys/class/scsi_host/host2/scan
TIP:- Here the "- - -" denotes CxTxDx, i.e. Channel (controller), Target ID, and Disk or LUN number. This is also a common Linux admin interview question.
  • Repeat the above step for all three directories.
If you have FC HBA in the system you can follow the steps as below:-
  • First check the number of FC controllers in your system:
# ls /sys/class/fc_host
host0 host1 host2
  • To scan FC LUNs execute commands as
echo "1" > /sys/class/fc_host/host0/issue_lip
echo "1" > /sys/class/fc_host/host1/issue_lip
echo "1" > /sys/class/fc_host/host2/issue_lip

Tip :- Here the echo “1” operation performs a Loop Initialization Protocol (LIP), then scans the interconnect and causes the SCSI layer to be updated to reflect the devices currently on the bus. A LIP is, essentially, a bus reset, and will cause device addition and removal. This procedure is necessary to configure a new SCSI target on a Fibre Channel interconnect. Bear in mind that issue_lip is an asynchronous operation.
  • Verify if the new disk is visible now
fdisk -l |egrep '^Disk' |egrep -v 'dm-'
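On newer Red Hat releases you can also list block devices with lsblk, which gives a quick tree view of disks and their partitions:
lsblk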
Method 2 :-
  • The next method is to scan using the sg3_utils package. You can install it using:
yum install sg3_utils
  • Once installed, run the command:
/usr/bin/rescan-scsi-bus.sh

Solved: How to add swap space in Redhat or Ubuntu Linux

In this post we will see how we can add a file as swap space in Linux. The same steps apply to both Red Hat and Ubuntu Linux.
Type the following command to create a 100 MB swap file (block size 1024 bytes x 102400 blocks = 100 MB):
dd if=/dev/zero of=/swap1 bs=1024 count=102400
Secure swap file
Set the correct file permissions for security reasons; enter:
# sudo chown root:root /swap1
# sudo chmod 0600 /swap1
Set up a Linux swap area
Type the following command to set up a Linux swap area in a file:
# sudo mkswap /swap1
Activate /swap1 swap space :
# sudo swapon /swap1
Update /etc/fstab file to make it persistent across reboot.
vi /etc/fstab
Add the following line in file:
/swap1 swap swap defaults 0 0
Check that the swap file was added
Type the following swapon command:
# sudo swapon -s
Filename     Type       Size    Used  Priority
/dev/dm-0    partition  839676  0     -1
/swap1       file       102396  0     -2
It should show you the new file.
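free also confirms that the swap total has grown by roughly 100 MB:
# free -m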
If you want to add a logical volume for swap, please refer to how to add an LV for swap.

How to add logical volume for swap in Redhat Linux

In our last post we saw how to add a file for swap space.
In this post we will see how to add a LVM2 Logical Volume as swap.
Here we have a VG named VG1, in which we will create a volume LV1 of 1 GB.
# lvcreate VG1 -n LV1 -L 1G
Format the new swap space using mkswap:
# mkswap /dev/VG1/LV1
Update the /etc/fstab file with the below entry (do not prefix it with #, or it will be treated as a comment):
/dev/VG1/LV1 swap swap defaults 0 0
Activate the new swap volume:
# swapon -v /dev/VG1/LV1

Solved: How to change the keyboard layout for Redhat Linux

Nowadays we work in global teams, with people speaking different languages.
Sometimes you may face a situation where the Linux OS was installed with a preference for another language, e.g. French. The layout of a French keyboard is different from that of a US keyboard, so if you press “A” on a US keyboard it will actually print “Q”. (Here you can get images of the French keyboard layout.)
This can be very frustrating when you are already accustomed to the US keyboard layout.
The easiest way out of this situation is to run the below command once you log in to the Linux box.
loadkeys us
This simple command maps your session to the US keyboard layout, so now when you press “A” on your US-layout keyboard it will print “A”. It won’t change any language settings in the OS, as the mapping applies only to your session.
Do note that before you log in you will still have to type your user ID and password in the French layout, since the command can only be executed after you log in. The image link I shared above should help you get through the login stage.
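If you want the US layout to stick permanently, on systemd-based releases such as RHEL 7 the localectl utility can set the system keymap (a hedged aside; older releases configure this differently):
localectl set-keymap us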
Hope this post helps you.

Solved: How to change hostname in AWS EC2 instance of RHEL 7

In our last post we saw how to change the hostname of an RHEL server.
But if you are using the RHEL 7 AMI provided on the AWS Marketplace, the steps are slightly different.
First login to your EC2 instance. (Check this post to know How to login to AWS EC2 Linux instance.)
Once you login to your EC2 instance execute below command.
 sudo hostnamectl set-hostname --static cloudvedas
(Here “cloudvedas” is the new hostname.)
If you want to make it persistent across reboots, follow the remaining steps.
Now using vi or vim editor edit the file /etc/cloud/cloud.cfg
sudo vi /etc/cloud/cloud.cfg
At the end of file add the following line and save the file
preserve_hostname: true
Finally reboot the server
sudo reboot
Once the server is up, check the hostname.
ec2-user# hostname
cloudvedas
ec2-user#
It should now show you the new hostname.

Solved: How to change hostname in Redhat Linux

In this post we will see how to change hostname in Redhat Linux 4, 5, 6 and 7 .
For changing the hostname you should have root access. To switch to root do:-
sudo su -
If you want to change the hostname temporarily do as below:-
hostname <new-hostname>
e.g.
hostname cloudvedas
But the above change is not permanent and will revert to the old name after a reboot.
To make the change permanent you will have to make entries in a couple of files. If you are using RHEL 4, 5 or 6, use the below steps; otherwise go directly to the RHEL 7 section. If you don’t want to modify files manually, refer to this post to change the hostname using the nmtui or nmcli tools.
Red Hat based systems use the file /etc/sysconfig/network to read the saved hostname at system boot, so set the below entries in it.
NETWORKING=yes
HOSTNAME=cloudvedas
Next you have to modify the /etc/hosts file. The entry in the hosts file should look like:
172.25.31.65 cloudvedas
RHEL 7
If you are on RHEL 7 you just have to execute one command, which takes care of everything.
hostnamectl set-hostname --static cloudvedas
The above steps will make the hostname change persistent across reboot.
If you want, try to reboot the server with command “reboot” .
Once the server is up, check the hostname.
root# hostname
cloudvedas
root#
It should now show you the new hostname.
If you are using RHEL AMI from AWS marketplace the steps will be slightly different. For that refer our other post How to change hostname in AWS EC2 instance of RHEL 7 .

Solved: How to change the IAM role in Redshift cluster

In our last post we checked How to create a Redshift Cluster?
In this post we will see how to change the IAM role assigned to a Redshift cluster. (Check the earlier post to learn how to create an IAM role.)
Note: Be careful to test the new role first in dev. Don’t try this in prod directly.
  • First create a role as per your requirement.
  • Next log in to the Redshift console. Do check that you are in the correct region.
  • Select the cluster by clicking its check box.
  • At the top you will see an option “Manage IAM Roles”. Click on that.
  • Select the role you created from the drop-down and click “Apply Changes”.
  • It will take 3-5 minutes for the changes to be reflected.
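The same change can also be made from the AWS CLI with modify-cluster-iam-roles. This is only a hedged sketch; the cluster identifier and role ARNs are placeholders:
aws redshift modify-cluster-iam-roles --cluster-identifier testdw --add-iam-roles arn:aws:iam::123456789012:role/MyNewRedshiftRole --remove-iam-roles arn:aws:iam::123456789012:role/MyOldRedshiftRole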

Solved: How to reset the Master User Password of Redshift Cluster

In our last post we checked How to create a Redshift Cluster?
At times you may want to reset the password of the master user of the Redshift database.
In this post we will see how to reset the master user password of a Redshift cluster.
Note:- Ensure that you really want to change the password. Changing it in production can break apps which may have the old password stored somewhere.
  • Go to the Redshift console and click on the cluster for which you want to reset the password. In our case it was “testdw”.
  • Once you are inside the cluster configuration, click on the “Cluster” drop-down and select “Modify”.
  • You will see the field “Master user password”. Enter the new password in it.
  • Finally click “Modify” .
This will change the master user password. Try connecting to the cluster with the new password.
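The same reset can be done from the AWS CLI with modify-cluster. This is only a hedged sketch; the cluster identifier and password are placeholders, and the new password must meet Redshift’s complexity rules:
aws redshift modify-cluster --cluster-identifier testdw --master-user-password 'NewPassw0rd123'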

Install vagrant and create VM on your windows laptop or desktop

In this post we will show you how to install Vagrant on your laptop and start an Ubuntu Linux VM.
You will have to download the following software and install it on your laptop: VirtualBox, Vagrant, PuTTY and PuTTYgen.
  • Once they are downloaded:
First install VirtualBox: just run the exe and follow the installation instructions.
Next install Vagrant: just run the exe and follow the installation instructions.
PuTTY and PuTTYgen don’t need any installation; you can use them directly.
Once the pre-requisites are installed let’s move ahead.
  • Open the Windows command prompt (CMD).
  • Go to the directory where you want the Vagrant machines to be created, e.g.
cd C:\users
  • Once you are in the desired directory, create a new directory:
mkdir ProjectVagrant
cd ProjectVagrant
  • Create a new Vagrantfile:
vagrant init ubuntu/trusty64
  • The below command will download the Ubuntu Trusty 64 box and create the VM. Ensure you are connected to the network.
vagrant up
  • Set up SSH:
vagrant ssh
The above command will give you the IP, port and private key for the machine.
  • Using PuTTY you can connect to the machine. Add entries like those below.
Hostname:- vagrant@127.0.0.1
Port :- 2222
Default password :- vagrant


If you want to log in using the SSH key you will have to first convert the .pem key to .ppk. Follow this post on How to convert .pem key to .ppk and login to VM.
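When you are done with the VM, a few everyday Vagrant commands (run from the same ProjectVagrant directory) are worth knowing:
vagrant status   (check whether the VM is running)
vagrant halt     (shut the VM down)
vagrant up       (boot it again later)
vagrant destroy  (delete the VM entirely)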
That’s all in this post! Hope it’s useful for you. If you have any queries please write them in the comments section.