Solved: How to copy and paste in the Docker Quickstart Terminal

If you want to copy and paste content in the Docker Quickstart Terminal using the mouse, follow these steps.
  • Open the Docker Quickstart Terminal as an Administrator.
  • Right-click the blue whale icon at the top of the terminal window and select “Defaults”.
  • In the “Options” tab of the new window, check “QuickEdit Mode” and click OK.
  • Now you can select content with a left-click drag and paste it with a right-click.

Solved: Getting nobody:nobody as owner of NFS filesystem on Solaris client

If the NFS version 4 client does not recognize a user or group name from the server, the client is unable to map the string to its unique ID, an integer value. Under such circumstances, the client maps the inbound user or group string to the nobody user. This mapping to nobody creates varied problems for different applications.
Because of these ownership issues you may see the filesystem owned by nobody:nobody on the NFS client.
To avoid this situation you can mount the filesystem with NFS version 3, as shown below.
On the NFS client, mount the file system using NFSv3:
# mount -F nfs -o vers=3 host:/export/XXX /YYY
e.g.
# mount -F nfs -o vers=3 hostA:/home/cv/share_fs /tmp/test
If this works fine, then make the setting permanent by editing the /etc/default/nfs file: uncomment the NFS_CLIENT_VERSMAX variable and set it to 3.
# vi /etc/default/nfs
NFS_CLIENT_VERSMAX=3
If you are still getting ownership as nobody:nobody, then you have to share the filesystem on the NFS server with the anon option:
# share -o anon=0 /home/cv/share_fs
Now try re-mounting on the NFS client:
# mount -F nfs -o vers=3 hostA:/home/cv/share_fs /tmp/test
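To confirm which NFS version the client is actually using, you can check the mount with nfsstat; the mount point below matches the example above:
# nfsstat -m /tmp/test
Look for vers=3 in the Flags line of the output.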

How to Fix Crawl Errors in Google Search Console or Google Webmaster Tools

If you have recently moved or deleted pages on your website, you may see crawl errors in your Google Webmaster Tools or Google Search Console (the new name of the tool).
You will see two types of errors on the Crawl Errors page.
Site errors: A normally operating site generally won’t have these errors. Also, according to Google, if they see a large number of site errors for your website they will try to notify you in the form of a message, no matter how big or small your site is. These errors generally appear when your site has been down for a long time or is unreachable by Google’s bots because of issues like DNS errors or excessive page load time.
URL errors: These are the most common errors you will find for a website. They can occur for multiple reasons, for example because you moved or renamed pages, or permanently deleted a post.
These errors may impact your search engine rankings, as Google doesn’t want to send users to pages that don’t exist.
So let’s see how you can fix these URL errors.
Once you log in to Google Webmaster Tools and select the verified property, you should see the URL errors marked for your site.
It has two tabs, Desktop and Smartphone, which show the errors in the respective version of your website.
Select an error and you will see the broken website link. It can be an old post which you have moved or deleted.
If you are a developer you can redirect the broken links to working pages yourself, as in the sketch below.
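For example, if your site runs on Apache, a one-line 301 (permanent) redirect in the site’s .htaccess file does the job; the old path and new URL below are placeholders for your own links:
Redirect 301 /old-post/ https://www.example.com/new-post/
Add one such rule per broken URL; the 301 status tells search engines the move is permanent.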
But if you don’t want to mess with the code, you can install a free plugin called Redirection. Below we will see how you can install and use this plugin.
  • To install the plugin, go to Dashboard > Plugins > Add New.
  • Search for the plugin “Redirection” and click Install > Activate.
  • After installing the plugin, go to Dashboard > Tools > Redirection.
  • Once on the Redirection settings page, select “Redirects” from the top.
  • In “Source URL”, paste the URL for which you are getting the error.
  • In “Target URL”, paste the working URL.
  • Click Add Redirect.
You can also redirect all your broken URLs to your homepage. But if the post is available at a different link, it’s recommended that you redirect the broken link to the new working link of that post. This will enhance the user experience.
The last step is to go back to the Google Webmaster Tools page, select the URL you just corrected, and click “Mark as Fixed”.
Hope this post helps you. Do let me know your opinion in the comments section.

In what sequence are startup and shutdown RC scripts executed in Solaris?

We can use RC (Run Control) scripts present in the /etc/rc*.d directories to start or stop services during bootup and shutdown.
Each rc directory is associated with a run level. For example, the scripts in rc3.d are executed when the system enters run level 3.
All the scripts in these directories must follow a special naming pattern so that they are considered for execution.
Startup script names begin with an “S” and kill script names begin with a “K”. The uppercase letter is important, or the file will be ignored.
The sequence of these startup and shutdown scripts is crucial if applications depend on each other.
For example, during bootup a database should start before the application, while during shutdown the application should stop first, followed by the database.
Here we will see how these scripts are sequenced: within a directory, scripts run in ascending order of the two-digit number that follows the S or K prefix.
So, as shown in the example below, during startup the S90datetest.sh script was executed first and then S91datetest.sh.
Time during execution of CloudVedas script S90datetest.sh is Monday, 23 September 2016 16:19:43 IST
Time during execution of CloudVedas script S91datetest.sh is Monday, 23 September 2016 16:19:48 IST
Similarly, during shutdown the K90datetest.sh script was executed first and then K91datetest.sh.
Time during execution of CloudVedas script K90datetest.sh is Monday, 23 September 2016 16:11:43 IST
Time during execution of CloudVedas script K91datetest.sh is Monday, 23 September 2016 16:11:48 IST
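If you want to reproduce this test, a small script like the hypothetical one below can be placed in /etc/init.d and hard-linked into /etc/rc3.d with the desired sequence numbers (the script name and log file path are just examples):
#!/sbin/sh
# Hypothetical test script: append a timestamped line on every run
echo "Time during execution of CloudVedas script $0 is `date`" >> /var/tmp/rc_sequence.log
Link it in place as root:
# ln /etc/init.d/datetest.sh /etc/rc3.d/S90datetest.sh
# ln /etc/init.d/datetest.sh /etc/rc3.d/S91datetest.sh
Because both links point to the same script, each run logs its own name via $0, so the timestamps show the execution order.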
This sequencing is also a tricky interview question and it confuses many people.

AWS Crash Course - EMR

What is EMR?
  • AWS EMR (Elastic MapReduce) is a managed Hadoop framework.
  • It provides an easy, cost-effective and highly scalable way to process large amounts of data.
  • It can be used for many things like indexing, log analysis, financial analysis, scientific simulation, machine learning etc.
Cluster and Nodes
  • The centerpiece of EMR is the cluster.
  • A cluster is a collection of EC2 instances, also called nodes.
  • All nodes of an EMR cluster are launched in the same Availability Zone.
  • Each node has a role in the cluster.
Types of EMR Cluster Nodes
Master Node:- It’s the main boss, which manages the cluster by running software components and distributing tasks to the other nodes. The master node monitors task status and the health of the cluster.
Core Node:- It’s a slave node which runs tasks and stores data in HDFS (Hadoop Distributed File System).
Task Node:- This is also a slave node, but it only runs tasks and doesn’t store any data. It’s an optional node.
Cluster Types
EMR has two types of clusters.
1) Transient :- These clusters are shut down once the job is done. They are useful when you don’t need the cluster running all day and can save money by shutting it down. See the CLI sketch below.
2) Persistent :- Persistent clusters are those which need to be always available, either to process a continuous stream of jobs or to keep the data always available in HDFS.
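As a rough illustration with the AWS CLI, the difference comes down to the auto-terminate flag; the cluster name, release label and instance details below are placeholders:
aws emr create-cluster --name "transient-demo" \
    --release-label emr-5.20.0 \
    --applications Name=Hadoop \
    --instance-type m4.large --instance-count 3 \
    --use-default-roles \
    --auto-terminate
With --auto-terminate the cluster shuts down after finishing its steps (transient); without it the cluster stays in the WAITING state until you terminate it yourself (persistent).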
Different Cluster States
An EMR cluster goes through multiple states, as described below:-
STARTING – The cluster provisions, starts, and configures EC2 instances.
BOOTSTRAPPING – Bootstrap actions are being executed on the cluster.
RUNNING – A step for the cluster is currently being run.
WAITING – The cluster is currently active, but has no steps to run.
TERMINATING – The cluster is in the process of shutting down.
TERMINATED – The cluster was shut down without error.
TERMINATED_WITH_ERRORS – The cluster was shut down with errors.
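You can check which of these states a cluster is in from the AWS CLI; the cluster ID below is a placeholder:
aws emr describe-cluster --cluster-id j-XXXXXXXXXXXXX --query 'Cluster.Status.State' --output text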


Types of file systems in EMR
Hadoop Distributed File System (HDFS)
Hadoop Distributed File System (HDFS) is a distributed, scalable file system for Hadoop. HDFS distributes the data it stores across instances in the cluster, storing multiple copies of data on different instances to ensure that no data is lost if an individual instance fails. HDFS is ephemeral storage that is reclaimed when you terminate a cluster.
EMR File System (EMRFS)
Using the EMR File System (EMRFS), Amazon EMR extends Hadoop to add the ability to directly access data stored in Amazon S3 as if it were a file system like HDFS. You can use either HDFS or Amazon S3 as the file system in your cluster. Most often, Amazon S3 is used to store input and output data, while intermediate results are stored in HDFS.
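For example, EMR clusters ship with the s3-dist-cp tool, which can be run on the master node to copy data between HDFS and S3; the bucket name and paths below are placeholders:
s3-dist-cp --src hdfs:///output/logs --dest s3://my-demo-bucket/logs/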
Local File System
The local file system refers to a locally connected disk. When you create a Hadoop cluster, each node is created from an Amazon EC2 instance that comes with a preconfigured block of preattached disk storage called an instance store. Data on instance store volumes persists only during the lifecycle of its Amazon EC2 instance.
Programming languages supported by EMR
  • Perl
  • Python
  • Ruby
  • C++
  • PHP
  • R
EMR Security
  • EMR integrates with IAM to manage permissions.
  • EMR has master and slave security groups for the nodes to control traffic access.
  • EMR supports S3 server-side and client-side encryption with EMRFS.
  • You can launch EMR clusters in your VPC to make them more secure.
  • EMR integrates with CloudTrail, so you will have a log of all activities performed on the cluster.
  • You can log in via SSH to the EMR cluster nodes using EC2 key pairs, as shown in the example below.
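For instance, if a key pair was attached when the cluster was launched, you can reach the master node as the hadoop user (the key file and public DNS name below are placeholders):
ssh -i ~/mykey.pem hadoop@ec2-xx-xx-xx-xx.compute-1.amazonaws.com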
EMR Management Interfaces
  • Console :- You can manage your EMR clusters from the AWS EMR console.
  • AWS CLI :- The command line provides a rich way of controlling EMR. Refer to the EMR CLI reference here.
  • Software Development Kits (SDKs) :- SDKs provide functions that call Amazon EMR to create and manage clusters. They are currently available only for the supported languages mentioned above. You can check some sample code and libraries here.
  • Web Service API :- You can use this interface to call the web service directly using JSON. You can get more information from the API reference guide here.
EMR Billing
  • You pay for the EC2 instances used in the cluster plus an EMR charge.
  • You are charged per instance-hour.
  • EMR supports On-Demand, Spot, and Reserved Instances.
  • As a cost-saving measure it is recommended to run task nodes as Spot Instances, as sketched below.
  • It’s not a good idea to use Spot Instances for the master or core nodes since they store data; you would lose that data once the node is terminated.
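As a rough sketch with the AWS CLI, you can keep the master and core groups On-Demand and bid for Spot capacity only in the task group; the instance types, counts and bid price below are placeholders:
aws emr create-cluster --name "spot-task-demo" \
    --release-label emr-5.20.0 \
    --applications Name=Hadoop \
    --use-default-roles \
    --instance-groups InstanceGroupType=MASTER,InstanceType=m4.large,InstanceCount=1 \
                      InstanceGroupType=CORE,InstanceType=m4.large,InstanceCount=2 \
                      InstanceGroupType=TASK,InstanceType=m4.large,InstanceCount=2,BidPrice=0.05
Specifying BidPrice on the task group makes it a Spot group, while the groups without it stay On-Demand.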
If you want to try some EMR hands-on, refer to this tutorial.

  • This AWS Crash Course series is created to give you a quick snapshot of AWS technologies. You can check out other AWS services covered in this series over here.