How to create an IAM user in AWS

In this post we will see how to create an IAM user that can be used to access S3 using the CLI.
  • Log in to the AWS IAM console.
  • In the left pane click on “Users”.
  • Click on “Add user”.
a) User name: S3User
b) Access Type: Check Programmatic Access.
  • Click Next.
a) Select Attach existing policies directly.
b) Search for AmazonS3FullAccess and select it.
  • Click Next.
  • Review everything and click “Create user”.
It will show you the “Access key ID” and “Secret access key”. Save them, as you won’t be able to see the “Secret access key” again once you close this page.
In this tutorial we selected an existing S3 policy, but you can also attach your own customized policy to make the access more restrictive and secure.
Congrats, you have successfully created an IAM user. You can use it to access S3 using the CLI. Check this post for details.
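If you prefer the command line, here is a minimal sketch of the same steps using the AWS CLI (this assumes you already have credentials with IAM permissions configured on your machine):
aws iam create-user --user-name S3User
aws iam attach-user-policy --user-name S3User --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-access-key --user-name S3User
The create-access-key output contains the AccessKeyId and SecretAccessKey. You can feed them to “aws configure” and then test the access with “aws s3 ls”.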

Solved: How to copy paste in Docker Quickstart Terminal

If you want to copy and paste content in the Docker Quickstart Terminal using the mouse, follow these steps.
  • Open the Docker Quickstart Terminal as an Administrator.
  • At the top of the terminal, right-click on the blue whale icon and select “Defaults”.
  • In the “Options” tab of the new window, check “QuickEdit Mode” and click OK.
  • Now you can select content with a left click and drag, and paste it with a right click.

Solved: Getting nobody:nobody as owner of NFS filesystem on Solaris client

If the NFS Version 4 client does not recognize a user or group name from the server, the client is unable to map the string to its unique ID, an integer value. Under such circumstances, the client maps the inbound user or group string to the nobody user. This mapping to nobody creates varied problems for different applications.
Because of these ownership issues you may see the filesystem showing permissions of nobody:nobody on the NFS client.
To avoid this situation you can mount the filesystem with NFS version 3, as shown below.
On the NFS client, mount the filesystem using NFS v3:
# mount -F nfs -o vers=3 host:/export/XXX /YYY
e.g.
# mount -F nfs -o vers=3 hostA:/home/cv/share_fs /tmp/test
If this works fine, then make the change permanent by editing the /etc/default/nfs file: uncomment the variable NFS_CLIENT_VERSMAX and set it to 3.
# vi /etc/default/nfs
NFS_CLIENT_VERSMAX=3
If you are still getting permissions as nobody:nobody, then you will have to share the filesystem on the NFS server with the anon option. Setting anon=0 maps requests from unknown users to UID 0 (root) instead of nobody.
# share -o anon=0 /home/cv/share_fs
Now try re-mounting on the NFS client:
# mount -F nfs -o vers=3 hostA:/home/cv/share_fs /tmp/test
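To confirm that the client actually negotiated version 3, you can inspect the mount with nfsstat and look for vers=3 in the flags line (the exact output varies by Solaris release):
# nfsstat -m /tmp/test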

How to Fix Crawl Errors in Google Search Console or Google Webmaster Tools

If you have recently moved or deleted pages of your website, you may see crawl errors in Google Webmaster Tools, now renamed Google Search Console.
You will see two types of errors on the Crawl Errors page.
Site errors: On a normally operating site you generally won’t have these errors. Also, according to Google, if they see a large number of site errors for your website they will try to notify you in the form of a message, no matter how big or small your site is. These errors generally appear when your site has been down for a long time or is unreachable by Google’s bots because of issues like DNS errors or excessive page load time.
URL errors: These are the most common errors you will find for a website. They can occur for multiple reasons, for example because you moved or renamed pages, or permanently deleted a post.
These errors may impact your search engine rankings, as Google doesn’t want to send users to pages that don’t exist.
So let’s see how you can fix these URL errors.
Once you log in to Google Webmaster Tools and select the verified property, you should see the URL errors marked for your site.
It has two tabs, Desktop and Smartphone, that show the errors in the respective versions of your website.
Select an error and you will see the broken website link. It may be an old post which you have moved or deleted.
If you are a developer, you can redirect the broken links to working pages yourself, as in the example below.
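For instance, if your site runs on an Apache server, a permanent (301) redirect can be added to the site’s .htaccess file like this (the URLs here are hypothetical):
Redirect 301 /old-post/ https://www.example.com/new-post/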
But if you don’t want to mess with the code, you can install a free plugin called Redirection. Below we will see how you can install and use this plugin.
  • To install the plugin go to Dashboard > Plugins > Add New.
  • Search for the “Redirection” plugin and click Install > Activate.
  • After you have installed the plugin go to Dashboard > Tools > Redirection.
  • Once on the Redirection settings page select “Redirects” from the top.
  • In the “Source URL” copy/paste the URL for which you are getting the error.
  • In the “Target URL” copy/paste the working URL.
  • Click Add Redirect.
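You can quickly verify that a redirect works with curl; a 301 status and a Location header pointing at the target URL mean it is in place (the URL below is hypothetical):
curl -I https://www.example.com/old-post/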
You can also redirect all your broken URLs to your homepage. But if the post is available at a different link, it’s recommended that you redirect the broken link to the new working link of that post. This will enhance the user experience.
The last step is to go back to the Google Webmaster Tools page, select the URL you just corrected, and click “Mark as Fixed”.
Hope this post helps you. Do let me know your opinion in the comments section.

In what sequence are startup and shutdown RC scripts executed in Solaris

We can use RC (Run Control) scripts present in the /etc/rc*.d directories to start or stop services during bootup and shutdown.
Each rc directory is associated with a run level. For example, the scripts in /etc/rc3.d are executed when the system enters run level 3.
All the scripts in these directories must follow a special naming pattern to be considered for execution.
A startup script name starts with an “S” and a kill (shutdown) script name starts with a “K”, followed by a two-digit sequence number. The uppercase letter is very important, or the file will be ignored. Within a directory, scripts are executed in ascending order of the sequence number.
The sequence of these startup and shutdown scripts is crucial if the applications depend on each other.
For example, while booting up, a database should start before the application that uses it, while during shutdown the application should stop first, followed by the database. With hypothetical script names, that ordering could look like the listing below.
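# ls /etc/rc3.d
S90database  S95application
# ls /etc/rc0.d
K10application  K20database
Here S90database runs before S95application at startup, and K10application runs before K20database at shutdown; the numbers, not the names, decide the order.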
Here we will see how this sequencing behaves with two simple test scripts.
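The datetest scripts used in this test are simple loggers. Their exact contents are not shown here, but a minimal sketch (assuming the script logs to a file under /var/tmp) could look like this:
#!/sbin/sh
# Hypothetical contents of /etc/rc3.d/S90datetest.sh
echo "Time during execution of CloudVedas script S90datetest.sh is `date`" >> /var/tmp/rc_seq.log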
As shown in the example below, during startup the S90datetest.sh script was executed first and then S91datetest.sh.
Time during execution of CloudVedas script S90datetest.sh is Monday, 23 September 2016 16:19:43 IST
Time during execution of CloudVedas script S91datetest.sh is Monday, 23 September 2016 16:19:48 IST
Similarly, during shutdown the K90datetest.sh script was executed first and then K91datetest.sh.
Time during execution of CloudVedas script K90datetest.sh is Monday, 23 September 2016 16:11:43 IST
Time during execution of CloudVedas script K91datetest.sh is Monday, 23 September 2016 16:11:48 IST
This sequencing is also a tricky interview question that confuses many people.