
How to add logical volume for swap in Redhat Linux

In our last post we saw how to add a file as swap space.
In this post we will see how to add an LVM2 logical volume as swap.
Here we have a volume group named VG1, in which we will create a 1 GB logical volume LV1:
# lvcreate VG1 -n LV1 -L 1G
Format the new swap space using mkswap:
# mkswap /dev/VG1/LV1
Update the /etc/fstab file with the below entry (note: this is the entry itself, not a comment, so don't prefix it with #):
/dev/VG1/LV1 swap swap defaults 0 0
Activate the new swap volume:
# swapon -v /dev/VG1/LV1
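To confirm the new swap space is active, you can list the swap devices and check the memory totals (standard commands, shown here only as a quick verification step):
# swapon -s
# free -m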

Solved: How to change the keyboard layout for Redhat Linux

Nowadays we work in global teams, with people speaking different languages.
Sometimes you may face a situation where the Linux OS is installed with a preference for another language, e.g. French. The layout of a French keyboard is different from that of a US keyboard, so if you type "A" on a US keyboard it will actually print "Q". (Here you can get images of the French keyboard.)
This can be very frustrating since you are accustomed to the US keyboard layout.
The easiest way out of this situation is to log in to the Linux box and run the below command:
loadkeys us
This simple command maps your session to the US keyboard layout. So now when you type "A" on your US-layout keyboard, it prints "A". It won't change any language settings in the OS, since the mapping applies only to your session.
Do note that before you log in you will still have to type your user ID and password in the French layout, as the command can only be executed after you log in. So the image link shared above should help get you through the login stage.
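As an extra tip beyond the session-only fix above: on systemd-based releases such as RHEL 7, you can also make the US layout permanent with localectl.
localectl set-keymap us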
Hope this post helps you.

Solved: How to change hostname in AWS EC2 instance of RHEL 7

In our last post we saw how to change the hostname of an RHEL server.
But if you are using the RHEL 7 AMI provided on the AWS Marketplace, the steps are slightly different.
First login to your EC2 instance. (Check this post to know How to login to AWS EC2 Linux instance.)
Once you are logged in to your EC2 instance, execute the below command.
 sudo hostnamectl set-hostname --static cloudvedas
(Here “cloudvedas” is the new hostname.)
To make the hostname persistent across reboots, follow the further steps.
Now edit the file /etc/cloud/cloud.cfg using vi or vim:
sudo vi /etc/cloud/cloud.cfg
At the end of the file add the following line and save it:
preserve_hostname: true
Finally, reboot the server:
sudo reboot
Once the server is up, check the hostname.
ec2-user# hostname
cloudvedas
ec2-user#
It should now show you the new hostname.
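You can also verify it with hostnamectl, which should report the static hostname you set:
hostnamectl status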

Install vagrant and create VM on your windows laptop or desktop

In this post we will show you how to install vagrant on your laptop and start an Ubuntu Linux VM.
You will have to download the following software and install it on your laptop: VirtualBox, Vagrant, Putty, and PuttyGen.
Once they are downloaded:
First install VirtualBox: just run the exe and follow the installation instructions.
Next install Vagrant: again, run the exe and follow the installation instructions.
Putty and PuttyGen don't need any installation; you can use them directly.
Once the pre-requisites are installed let’s move ahead.
  • Open the Windows command prompt (CMD).
  • Go to the directory where you want the vagrant machines to be installed, e.g.
cd C:\users
  • Once you are in the desired directory, create a new directory and move into it:
mkdir ProjectVagrant
cd ProjectVagrant
  • Create a new Vagrantfile:
vagrant init ubuntu/trusty64
  • The below command will download the Ubuntu Trusty 64 OS image and create the VM. Ensure you are connected to the internet:
vagrant up
  • Set up ssh:
vagrant ssh
The above command logs you directly into the machine. To see the IP, port, and private key it uses, run vagrant ssh-config (sample output below).
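A typical vagrant ssh-config output looks roughly like the below; the exact identity-file path will differ on your machine:
Host default
  HostName 127.0.0.1
  User vagrant
  Port 2222
  IdentityFile C:/users/ProjectVagrant/.vagrant/machines/default/virtualbox/private_key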
  • Using putty you can connect to the machine. Add entries like below.
Hostname:- vagrant@127.0.0.1
Port :- 2222
Default password :- vagrant


If you want to log in using the ssh key, you will have to first convert the .pem key to .ppk. Follow this post on How to convert .pem key to .ppk and login to VM.
That’s all in this post! Hope it’s useful for you. If you have any query please write in comments section.

Solved: How to login to AWS EC2 Linux Instance

In this post we will discuss how you can login to your AWS EC2 Linux instance using Putty.
Pre-requisites:- Download Putty and PuttyGen (both run directly, with no installation needed).
Once you are done with the pre-requisites, let's move ahead.
Convert .pem key to .ppk
  • First we will convert the .pem key to a .ppk key.
  • Open the PuttyGen that you downloaded.
  • Click on “Load”. Browse and select your private key with .pem extension.
Now click on "Save private key".
It will ask if you want to add a passphrase, which works like an additional password when you log in. If you want one, enter it in "Key passphrase".
For this exercise I skipped the passphrase and just clicked "Yes" on the confirmation.

  • Save the key with a name you like. Check that the new key file now has the .ppk extension.
Using the Key for Login
Now we will use the .ppk key we just created to log in to our EC2 instance.
  • Open Putty that we downloaded earlier.
  • In the left Pane click on Session.
In hostname enter your server details, i.e. the user name and IP.
If you are using an Amazon Linux image the default user is ec2-user. So the entry will be like ec2-user@33.44.55.66 and Port 22.

  • In the left navigation pane click on "Connection" and expand it.
Next expand "SSH" and click on "Auth".

In the right pane click on Browse and select the .ppk key we created earlier.
  • Now in the left navigation pane click on “Session” again. In the right pane in the “Saved sessions”, name the session as “test” or whatever you like and click save. This will save your session so that you don’t have to do this activity again.
  • Finally select the session you created and click "Open". If all is configured correctly you will now be logged in to your EC2 instance.
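As a side note, if you are connecting from a machine with OpenSSH (macOS, Linux, or a modern Windows terminal), you can skip the .ppk conversion entirely and use the .pem key directly; mykey.pem below is a placeholder for your own key file:
chmod 400 mykey.pem
ssh -i mykey.pem ec2-user@33.44.55.66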
Note:- If your ssh session gets timed out after being idle for a few minutes, check this post on how to set the putty keep-alive time.

Solved: "Network error: Software caused connection abort"

Sometimes you may notice that your putty session gets disconnected with the error "Network error: Software caused connection abort".
This can happen because of a timeout setting on the server, or sometimes due to a firewall. To resolve this issue you will have to set a keep-alive time for the session.
After you set a keep-alive time, putty will send a packet every specified number of seconds to keep the session alive.
Generally you can set it to 240 seconds, i.e. 4 minutes. But at times you may have to keep it lower; for example, when I connect from my home laptop to my AWS EC2 instance I have to keep it at 2 seconds.
To set it:
  • Open Putty.
  • Load the session for which you are facing the timeout issue.
  • Click on Connection in the left pane.
Here we have set "Seconds between keepalives" to 2.

  • Finally click on "Session" in the left pane and save the session.
If you already have other saved sessions in putty, you will have to repeat the above steps for each of the saved sessions as needed.
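Incidentally, if you connect with command-line ssh instead of putty, the equivalent keep-alive setting is ServerAliveInterval, e.g.:
ssh -o ServerAliveInterval=240 ec2-user@33.44.55.66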

How to load data in Amazon Redshift Cluster and query it?

In the last post we checked how to build a Redshift cluster. In this post we will see how to load data into that cluster and query it.
Pre-requisites:-
Download SQL Workbench
Download Redshift Driver
Once you have downloaded the above-mentioned pre-requisites, let's move ahead.
First we will obtain the JDBC URL.
  • Login to your AWS Redshift console.
  • Click on the cluster you have created. If you followed the last post it will be "testdw".
  • In the "Configuration" tab look for the JDBC URL.
  • Copy the JDBC URL and save it in notepad.
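For reference, a Redshift JDBC URL generally has this shape (the cluster endpoint and database name below are placeholders, not your actual values):
jdbc:redshift://testdw.abc123xyz789.us-west-2.redshift.amazonaws.com:5439/dev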
Now open SQL Workbench. You just have to run the 32-bit or 64-bit exe as per your OS version; in my case I am using Windows 10 64-bit, so the exe name is SQLWorkbench64.
  • Click on File > Connect window.
  • In the bottom left of the "Select Connection Profile" window click on "Manage Drivers"
  • In the "Manage Drivers" window click on the folder icon, browse to the location of the Redshift driver you downloaded earlier and select it.
Fill the other details in the "Manage Drivers" window as below.
Name:- Redshift JDBC 4.2 driver
Classname:- com.amazon.redshift.jdbc.Driver
  • Click OK
  • Now in the "Select Connection Profile" window
Fill details as below. You can also refer the image below for this.
Driver:- Select the Redshift driver you added.
URL:- Mention the JDBC URL you saved earlier.
Username:- DB username you mentioned during cluster creation.
Password:- Enter the password of the DB user.

Check the Autocommit box.

  • Finally click on OK.
  • If everything is configured correctly, you will get connected to the DB.
  • Try executing the query:
select * from information_schema.tables;
  • If your connection is successful you will see results in the window.
  • Now we will load some sample data which is provided by AWS and kept on S3. In SQL Workbench, copy/paste the below queries and execute them to create the tables.
create table users(
 userid integer not null distkey sortkey,
 username char(8),
 firstname varchar(30),
 lastname varchar(30),
 city varchar(30),
 state char(2),
 email varchar(100),
 phone char(14),
 likesports boolean,
 liketheatre boolean,
 likeconcerts boolean,
 likejazz boolean,
 likeclassical boolean,
 likeopera boolean,
 likerock boolean,
 likevegas boolean,
 likebroadway boolean,
 likemusicals boolean);

create table venue(
 venueid smallint not null distkey sortkey,
 venuename varchar(100),
 venuecity varchar(30),
 venuestate char(2),
 venueseats integer);

create table category(
 catid smallint not null distkey sortkey,
 catgroup varchar(10),
 catname varchar(10),
 catdesc varchar(50));

create table date(
 dateid smallint not null distkey sortkey,
 caldate date not null,
 day character(3) not null,
 week smallint not null,
 month character(5) not null,
 qtr character(5) not null,
 year smallint not null,
 holiday boolean default('N'));

create table event(
 eventid integer not null distkey,
 venueid smallint not null,
 catid smallint not null,
 dateid smallint not null sortkey,
 eventname varchar(200),
 starttime timestamp);

create table listing(
 listid integer not null distkey,
 sellerid integer not null,
 eventid integer not null,
 dateid smallint not null  sortkey,
 numtickets smallint not null,
 priceperticket decimal(8,2),
 totalprice decimal(8,2),
 listtime timestamp);

create table sales(
 salesid integer not null,
 listid integer not null distkey,
 sellerid integer not null,
 buyerid integer not null,
 eventid integer not null,
 dateid smallint not null sortkey,
 qtysold smallint not null,
 pricepaid decimal(8,2),
 commission decimal(8,2),
 saletime timestamp);
Now load the sample data. Ensure that in the below queries you replace "<iam-role-arn>" with the ARN of your Redshift IAM role.
For example, with the ARN filled in, the first copy command should look like this:
copy users from 's3://awssampledbuswest2/tickit/allusers_pipe.txt' 
credentials 'aws_iam_role=arn:aws:iam::123456789123:role/redshiftrole' 
delimiter '|' region 'us-west-2';
Now run the below copy commands, each with your ARN substituted:
copy users from 's3://awssampledbuswest2/tickit/allusers_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' region 'us-west-2';

copy venue from 's3://awssampledbuswest2/tickit/venue_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' region 'us-west-2';

copy category from 's3://awssampledbuswest2/tickit/category_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' region 'us-west-2';

copy date from 's3://awssampledbuswest2/tickit/date2008_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' region 'us-west-2';

copy event from 's3://awssampledbuswest2/tickit/allevents_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' timeformat 'YYYY-MM-DD HH:MI:SS' region 'us-west-2';

copy listing from 's3://awssampledbuswest2/tickit/listings_pipe.txt' 
credentials 'aws_iam_role=<iam-role-arn>' 
delimiter '|' region 'us-west-2';

copy sales from 's3://awssampledbuswest2/tickit/sales_tab.txt'
credentials 'aws_iam_role=<iam-role-arn>'
delimiter '\t' timeformat 'MM/DD/YYYY HH:MI:SS' region 'us-west-2';
Once you have loaded the data you can run sample queries like the one below in your SQL Workbench.
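For example, this query (one suggestion; any join over the tables created above will work) totals the tickets sold on a given date:
select sum(qtysold)
from sales, date
where sales.dateid = date.dateid
and caldate = '2008-01-05';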

Congrats! You have finally created the Redshift cluster and run queries on it after loading data.
Refer this post if you want to reset the master user password.
Don't forget to clean up the cluster or you will keep getting billed.
  • For deleting the cluster, just click on the cluster (in our case it's testdw) in the AWS console.
  • Click on the "Cluster" drop-down and select delete.
That will clean up everything.
Hope this guide was helpful to you! Do let me know in the comments section if you have any queries or suggestions.

Solved: Error when allocating new name - Docker

Error response from daemon: Error when allocating new name: Conflict. The container name "/webserver" is already in use by container 6c34a8wetwyetwy7463462d329c9601812tywetdyud76767d65f7dc7ea58d8541. You have to remove (or rename) that container to be able to reuse that name.
If you see the above error, it is because a container with the same name already exists.
Let's check our running containers:
docker container ls
If you don’t see any running container with that name, check the stopped containers.
docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED         STATUS                     PORTS   NAMES
b45bd14c8987   1fbd5d581e4c   "/bin/bash"   2 minutes ago   Up 2 minutes                       competent_keller
59b9f5a63ba0   ansible-base   "/bin/base"   3 minutes ago   Created                            wizardly_payne
6c34a8a6edb6   d355ed3537e9   "/bin/bash"   9 minutes ago   Exited (0) 4 minutes ago           webserver
Above we can see that we already have a container with name webserver.
So we will rename the old container.
docker rename webserver webserver_old
Now if we check again, our container is renamed to webserver_old:
docker ps -a
CONTAINER ID   IMAGE          COMMAND       CREATED          STATUS                     PORTS   NAMES
b45bd14c8987   1fbd5d581e4c   "/bin/bash"   4 minutes ago    Up 4 minutes                       competent_keller
59b9f5a63ba0   ansible-base   "/bin/base"   5 minutes ago    Created                            wizardly_payne
6c34a8a6edb6   d355ed3537e9   "/bin/bash"   10 minutes ago   Exited (0) 5 minutes ago           webserver_old
And if you don't need the old container, you can also delete it to free up space:
docker rm webserver_old
Now if you try to create a container with the "webserver" name, you should not get any error.
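Alternatively, if you never needed the old container at all, you could remove it directly instead of renaming it (add -f to force-remove a running one):
docker rm -f webserver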

Solved: AWS4-HMAC-SHA256 encryption error while updating S3 bucket

The authorization mechanism you have provided is not supported. Please use AWS4-HMAC-SHA256.
The above error means that the bucket's region accepts only AWS Signature Version 4 (AWS4-HMAC-SHA256) for signing requests, while your client or tool is still signing with the older Signature Version 2.
Newer AWS regions support only AWS4-HMAC-SHA256.
So if you have created the bucket in a newer region like Frankfurt or Mumbai, you may see this error from older clients.
The cleanest fix is to update your client or SDK to sign with Signature Version 4; until then, you can create the bucket in one of the older US regions and continue working.
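For example, if the tool you are using is the AWS CLI, you can force Signature Version 4 for S3 instead of relocating the bucket:
aws configure set default.s3.signature_version s3v4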

Solved: Conflicting conditional operation error while creating S3 bucket

A conflicting conditional operation is currently in progress against this resource. Please try again.
You can get the above error when you are creating an S3 bucket.
This error generally comes if you have deleted an S3 bucket in one region and are immediately trying to create a bucket with the same name in another region.
The problem is that S3 syncing is not instant across regions. It may take anything between 2 and 30 minutes for all S3 regions to register that the bucket with that name has been deleted.
You may get the same error even when you try creating a bucket with the same name in a different AWS account, for the same syncing reason.
So if you want to keep the same bucket name, try creating it again after a coffee break.

Solved: Error while deleting docker network

When you try to delete a network you may get the below error.
C:\>docker network rm network_name
Error response from daemon: network network_name has active endpoints
This error comes when the network still has an active endpoint. First, inspect the network to check whether any container that uses it is still running:
docker inspect network_name
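To list just the names of the attached endpoints, you can also apply an inspect format filter (assuming a reasonably recent Docker version):
docker network inspect -f "{{range .Containers}}{{.Name}} {{end}}" network_name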
Check running containers using
docker container ls
If you find a running container using the network you are trying to delete, you will have to first stop and remove that container. Be sure that you don't need it anymore.
docker stop container_name
docker rm container_name
If you don't see any running container with the name given in the inspect output, it means the container you deleted earlier was not removed cleanly and traces of it remain in the network.
To remove the stale endpoint from the network, run this command:
docker network disconnect --force network_name container_name
Finally you should be able to remove the network now.
docker network rm network_name

Solved: How to remove the AWS EC2 instance name from the URL?

After installing a website using an AWS image, you may notice that the URL still has references to the EC2 hostname.
It can show up in post URLs like:
http://ec2-65-231-192-68.compute-1.amazonaws.com/your-post-name
How do you get rid of that?
The easiest way is through your WordPress dashboard.
In the dashboard go to Settings > General.
On the General Settings page you will see two parameters, WordPress Address (URL) and Site Address (URL). Change them to your website name, e.g. http://yourwebsite.com.
Finally save it. Your post URLs should now look like
http://yourwebsite.com/your-post-name
In the Bitnami WordPress image you will find that WordPress Address (URL) and Site Address (URL) are greyed out, and it won't allow you to modify them. In that case you will have to modify the wp-config.php file.
For the Bitnami image the file location is /opt/bitnami/apps/wordpress/htdocs/wp-config.php. Keep a copy of the old file, then modify the current one:
cp -p /opt/bitnami/apps/wordpress/htdocs/wp-config.php /opt/bitnami/apps/wordpress/htdocs/wp-config.php.old
sudo vi /opt/bitnami/apps/wordpress/htdocs/wp-config.php
Modify the two lines which have the entries for WP_HOME and WP_SITEURL. They should now look like:
define('WP_HOME','http://yourwebsite.com');
define('WP_SITEURL','http://yourwebsite.com');
If your website has an SSL certificate and you want all your posts and pages to use https, then the above entries should look like:

define('WP_HOME','https://yourwebsite.com');
define('WP_SITEURL','https://yourwebsite.com');
Finally save the file.
When you refresh the page it should now show your desired URL.
If the URL is still not showing correctly and you are sure that you have modified the file correctly, then restart Apache:
sudo /opt/bitnami/ctlscript.sh restart apache
This should get you done!

Solved: How to delete revisions of wordpress posts?

You may have noticed that WordPress keeps revisions of your old posts.
These revisions can be useful if you want to go back to an older version of a post. But if you are sure that you no longer need those old versions, it's best to get rid of them.
Deleting the old revisions will free up precious space which you can use for posting new articles. It will also make your database queries faster.
Here we will discuss how to delete old revisions.
If you don't like plugins you can follow these simple steps. Else, go directly to step 4 to see a plugin for doing this.
1) Login to your hosting server and connect to your SQL database. (If you are using AWS, check this post on how to login.)
mysql -u root -p
2) Select the wordpress database
mysql> USE wordpress;
3) Delete the posts with type "revision". This command will delete all revisions of all your posts, leaving only the latest version of each:
DELETE FROM wp_posts WHERE post_type = 'revision';
You can run this DELETE query in your phpmyadmin console also.
4) If you are not comfortable with the command line or coding, you can simply install a plugin called "Simple Revisions Delete" from your WordPress dashboard. It provides a "Purge" option in each of your posts to purge old revisions.
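As a related tweak (optional, and separate from the cleanup above), you can cap how many revisions WordPress keeps in the future by adding a line like this to wp-config.php; the number 5 is just an example:
define('WP_POST_REVISIONS', 5);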
Caution:- Before you decide to delete revisions, it's better to take a backup of the DB and be absolutely sure that you want to do this, as the only way to get revisions back after deletion will be to restore them from backup.

Solved: How to calculate number of available IPs in a Subnet

Many people are confused about how many usable IPs you get in a subnet and how to calculate it.
So here is a simple way to calculate it.
Here is the formula.
Maximum Number of IPs = 2**(32 - netmask_length)
Let's say you have the subnet mask /28. Then the maximum number of IPs you can have is:
Maximum Number of IPs = 2**(32-28) = 2**4 = 2*2*2*2 = 16
So you can have a maximum of 16 IPs in a /28 subnet.
The first and last IPs of a subnet are reserved for the network address and broadcast address, so in a normal network you are left with only 14 usable IPs.
But cloud providers like AWS and Azure generally reserve 5 IPs per subnet instead of 2. Thus the usable IPs available to you in AWS or Azure for a /28 subnet will be 11.
Similarly, you can calculate the usable IPs in each subnet when working in the cloud.
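If you want to sanity-check the formula quickly, bash can do the arithmetic directly; the -5 variant reflects the AWS/Azure reservation mentioned above:
echo $(( 2 ** (32 - 28) ))       # 16 total IPs in a /28
echo $(( 2 ** (32 - 28) - 5 ))   # 11 usable IPs on AWS/Azure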

For simplicity we have created an AWS Subnet Calculator which you can use.


Be Sociable. Share It. Happy Learning!

How to load database in mysql docker container?

After creating the MySQL docker container, I wanted to load a new database dump into it.
In case you are wondering how to create docker containers on your Windows machine, you can refer to my post here.
If you are just testing, you can download a sample MySQL database from here.
Once you have downloaded the sample DB, unzip it in a folder.
First, copy the database into the container:
$ docker cp mydump.sql c598nvcvc190:/root  # Here c598nvcvc190 is the ID of the database container
Second, connect to your docker container:
$ docker exec -it c598nvcvc190 /bin/bash
Finally, restore the database dump file into your database (creating the database first if needed; see the note below):
# mysql -uroot -prootpassword < /root/mydump.sql
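If the dump file doesn't contain a CREATE DATABASE statement, create the target database first and restore into it explicitly (mydb below is a hypothetical name):
# mysql -uroot -prootpassword -e "CREATE DATABASE mydb;"
# mysql -uroot -prootpassword mydb < /root/mydump.sql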
Now you should be able to see the new database listed in MySQL.
Be Sociable. Share It. Happy Learning!