
Solved: How to mount an ISO in a Linux VM

In this post we will discuss how to mount an ISO in a Linux VM running inside VirtualBox or VMware.
VirtualBox
You can mount an ISO in a Linux VM running on VirtualBox by following these steps:
  • Select the running machine window.
  • Click on Devices > Optical Devices.
  • Choose a disk image, then browse and select the ISO.
  • Now go to the Red Hat Linux server and execute the mount command:
 mount /dev/cdrom /mnt
VMware
  • If the Linux VM is in VMware, you can select the ISO image in the VMware console in a similar way to VirtualBox.
  • Right click on the machine > “Removable Devices” > “CD/DVD” > Settings. Browse and select the ISO, and check “Connected”.
  • Finally, execute the following command in the Linux VM:
    mount /dev/sr0 /mnt
  • If you run “df -h”, the ISO should now be mounted and visible as /mnt.
Tip: If you get a “/mnt busy” error, ensure that /mnt is not already mounted. If it is, either unmount /mnt first and try again, or create a new directory and mount the ISO on that directory.
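For example, a minimal sketch of the second option (the directory name /mnt/iso is just an illustration; use /dev/cdrom or /dev/sr0 depending on how the ISO is attached):
mkdir -p /mnt/iso
mount /dev/cdrom /mnt/iso
df -h /mnt/iso      # the ISO should now show up on /mnt/iso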

Solved: Reactivate hardware raid volume after system board replacement- Solaris

Replacing the system board/motherboard of a server which has hardware RAID enabled is a major activity. In this post we will discuss the steps that you should follow.
Caution:- You should be extremely careful while doing this activity and take a complete backup of all your data.
1) Collect the explorer output and take a full backup of all data, especially the OS.
2) Bring down the server.
3) Replace the motherboard and put the old disks back in the server.
4) Reset the SP settings to factory default, as many refurbished system boards may have an old LDOM config in them. It can be done as below:-
Reset the LDOM configuration through the Service Processor:
-> set /HOST/bootmode config="factory-default"

-> stop /SYS

Are you sure you want to stop /SYS (y/n)? y
Wait for machine to power off.
We can then power on the system.
-> start /SYS

Are you sure you want to start /SYS (y/n)? y

Starting /SYS

-> start /SP/console
5) Now upgrade the firmware so that it is the same as the firmware of the old board. This can be done using the sysfwdownload utility. Follow the readme of the firmware patch for detailed instructions.
6) If you cannot see the disks which are in hardware RAID from the OS, it may be because the volume is inactive. It can be activated from the ok prompt as follows:-
Go to the OBP prompt.
At the command line, set auto-boot? to false and fcode-debug? to true, then reset the system.
 ok setenv auto-boot? false
 auto-boot? = false
 ok setenv fcode-debug? true
 fcode-debug? = true
 ok reset-all
Find the path to the controller.
 ok show-disks
 a) /pci@0/pci@0/pci@2/LSILogic,sas@0/disk
 b) /pci@0/pci@0/pci@1/pci@0/usb@1,2/storage@1/disk
 q) NO SELECTION
 Enter Selection, q to quit: q
 ok
Note - You are looking for the path to the controller. It generally contains the phrase “LSILogic,sas@0” or the phrase “scsi@0”.
Select the controller.
 ok select /pci@0/pci@0/pci@2/LSILogic,sas@0
Show the volumes, look for any inactive volumes.
 ok show-volumes
Activate the inactive volumes. Repeat the command to activate all inactive volumes. For example, to activate volume number 1 type:
ok 1 activate-volume
Note - There might be more than two inactive RAID volumes, but you cannot activate more than two.
Deselect the controller.
ok unselect-dev
Set the auto-boot? and fcode-debug? variables to true and reset the system.
ok setenv auto-boot? true
 auto-boot? = true
 ok setenv fcode-debug? true
 fcode-debug? = true
 ok reset-all
7) Once the volume is enabled you should be able to see the LSI disk in probe-scsi-all and also in format once the server is booted.
Note:- This post is created just for your reference. Please try it first in your test server. We are not responsible for any loss caused by following this tutorial.

Solved: How to come out of VMware console

If you want to come out of the VMware console window, simply press Ctrl + Alt (Alt + Ctrl works too; either sequence releases the mouse and keyboard back to the host).
If you want the movement between your host machine and the VMware console to be seamless, install VMware Tools in the guest VM. This will ensure that you don’t have to press Ctrl + Alt every time.
Check the VMware website to download VMware Tools.

Solved: How to create hardware RAID in Solaris

If you want to create a hardware RAID in Solaris you can use the raidctl utility.
  • Let’s create a mirrored RAID volume. Before you start, remember that raidctl will destroy all data on the member disks, so be absolutely sure that you have selected the right disks.
# raidctl -c c0t0d0 c0t1d0
Creating RAID volume c0t0d0 will destroy all data on member disks, proceed (yes/no)? yes
...
Volume c0t0d0 is created successfully!
#
  • Once you have created the RAID volume, it can be in one of the four states below:-
OPTIMAL – Disks in the RAID are online and fully synchronized.
SYNC – Disk syncing is in progress.
DEGRADED – One of the disks in the RAID has failed.
FAILED – One or both disks are lost and you have to recreate the volume.
  • Let’s check the current status of our volume.
# raidctl -l c0t0d0
Volume                  Size    Stripe  Status   Cache  RAID
        Sub                     Size                    Level
                Disk
----------------------------------------------------------------
c0t0d0                  136.6G  N/A     SYNC     OFF    RAID1
        0.0.0           136.6G          GOOD
        0.1.0           136.6G          GOOD
  • In the above output we can see that the sync is in progress, while in the output below the disks are in sync and the RAID status is OPTIMAL. We can also see that it is RAID 1, which is mirroring.
# raidctl -l c0t0d0
Volume                  Size    Stripe  Status   Cache  RAID
        Sub                     Size                    Level
                Disk
----------------------------------------------------------------
c0t0d0                  136.6G  N/A     OPTIMAL  OFF    RAID1
        0.0.0           136.6G          GOOD
        0.1.0           136.6G          GOOD
If you need to change the system board of a system with hardware RAID, you will have to reactivate the volumes after the hardware replacement. Refer to the earlier post on the steps to follow to reactivate a hardware RAID volume after system board replacement.

Solved: Three options to convert ova to ovf files

Sometimes you may have to open OVA files, which are actually just tar archives of OVF files.
You can extract or convert an OVA to OVF files in the three ways mentioned below.
Option 1
Tar Command
If you have access to a Linux or Unix box you can use the tar command to extract the file.
tar -xvf cloudvedas-OpenStackHOL.ova
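If you only want to see what is inside the archive before extracting it, tar can also list the contents (same example file as above):
tar -tvf cloudvedas-OpenStackHOL.ova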
Option 2
In Windows you can use a tool like 7-Zip or WinRAR to extract the file.
Option 3
VMware ovftool
Download the VMware OVF Tool.
Once you have downloaded and installed the tool, go to the command prompt and change the directory to where you installed the OVF Tool. You should see an exe called “ovftool.exe”.
Execute the command:-
ovftool.exe "E:\Software\Openstack\cloudvedas-OpenStackHOL.ova" "E:\Software\Openstack\cloudvedas-OpenStackHOL.ovf"

Check out our post on Most useful Tar command examples .

Solved: How to resize Docker Quickstart Terminal Window

By default on Windows 7 the Docker Quickstart Terminal window is small, which can be very annoying to work in.
Here we will show you how to increase the window size as per your requirement.
  • Open the Docker Quickstart Terminal as an Administrator.
  • Right click on the blue whale icon at the top of the Docker Quickstart Terminal window.
  • Click “Properties” and Select “Layout” tab.
  • Increase the “Width” and “Height” of “Window Size” as per your requirement.
  • Finally Click OK and try re-opening the Terminal.
That’s all folks!

Solved: How to restart a docker container automatically on crash

In this post we will see how we can restart a container automatically if it crashes.
If you want a Docker container to always restart use:-
docker run -dit --name cldvds-always-restart --restart=always busybox
But if you want the container to always restart unless it is explicitly stopped, use:-
docker run -dit --name cldvds-except-stop --restart unless-stopped busybox
In case you want the container to be restarted at most 3 times on failure, use the below command.
docker run -dit --name cldvds-restart-3 --restart=on-failure:3 busybox
You can see the logs of a container using:
docker logs cldvds-restart-3
If you want to change the restart policy of a running container you can do it with “docker update”, e.g. here we are changing the restart attempts from 3 to 4 for the container cldvds-restart-3.
docker update --restart=on-failure:4 cldvds-restart-3
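To confirm which restart policy a container currently has, you can query it with docker inspect and a Go template (a quick check using the example container above):
docker inspect -f '{{.HostConfig.RestartPolicy.Name}}:{{.HostConfig.RestartPolicy.MaximumRetryCount}}' cldvds-restart-3      # should print on-failure:4 after the update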

Parallel Patching in Solaris 10

When you patch a Solaris 10 server it applies each patch to each zone one at a time. So if you have 3 local zones and it takes 1 minute to apply a patch on the global zone, it will take another 1 minute each to apply it on the 3 local zones. In total you will spend around 4 minutes applying a single patch on the server. You can imagine the time it will take to apply a 300-patch bundle.
From Solaris 10 10/09 onward you have the option to patch multiple zones in parallel.
(For releases prior to Solaris 10 10/09, this feature is delivered in the patch utilities patch, 119254-66 or later revision (SPARC) and 119255-66 or later revision (x86). Check the latest patch on the My Oracle Support website.)
Parallel patching is really helpful as it applies patches to all the zones in parallel, so all the zones on a server are patched at the same time, drastically reducing your patching time. In the scenario above, with parallel patching the total time for applying the patch in all zones can drop to around 2 to 2.5 minutes, as the global zone is still patched first and then the patch is applied to the local zones in parallel.
The number of local (non-global) zones that can be patched in parallel is decided by the parameter num_proc= which is defined in /etc/patch/pdo.conf.
The value for num_proc= is decided based on the number of online CPUs in your system. The maximum is 1.5 times the number of online CPUs.
For example :-
If number of online CPUs is 6
In /etc/patch/pdo.conf make the entry
num_proc=9
Thus as per the above example, 9 zones can be patched in parallel. This will save a lot of downtime if you have 9 zones running on a server. Once the entry in pdo.conf is done you can continue with the normal patching process.
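If you are not sure how many CPUs are online, psrinfo can tell you (a quick sketch; the figures simply repeat the example above):
psrinfo | grep -c on-line      # prints the number of online CPUs, e.g. 6; 6 x 1.5 = 9, hence num_proc=9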
So just update the num_proc value in /etc/patch/pdo.conf as per the available CPUs in your system and enjoy some free time 🙂
Note:- The time estimates I mentioned above are based on my own experience and I have not kept detailed data, so please expect variations as per your system.
Do let me know if you have any query.

Solved: How to plumb IP in a Solaris zone without reboot

In this post we will discuss how to add an IP to a running Solaris zone.

If you want to add a new IP address to a running local zone(zcldvds01) you can do it by plumbing the IP manually from the global zone.

root@cldvds-global()# ifconfig aggr1:2 plumb
root@cldvds-global()# ifconfig aggr1:2 inet 10.248.3.167 netmask 255.255.255.0 broadcast 10.248.3.255 zone zcldvds01 up

This change is not persistent across reboot. To make it permanent you will have to make an entry through zonecfg:-

root@cldvds-global()# zonecfg -z zcldvds01
zonecfg:zcldvds01> add net
zonecfg:zcldvds01:net> set physical=aggr1
zonecfg:zcldvds01:net> set address=10.248.3.167
zonecfg:zcldvds01:net> end
zonecfg:zcldvds01> verify
zonecfg:zcldvds01> commit
zonecfg:zcldvds01> exit

Now if you run "ifconfig -a" in the zone, you should see the new IP plumbed.
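You can also check it from the global zone without logging in interactively, by running the command through zlogin (a quick check using the example zone name):
root@cldvds-global()# zlogin zcldvds01 ifconfig -a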

Change IP in zone

If you want to change the IP address of a zone you can simply do it by using "remove". Taking the above example, to change the IP from 10.248.3.167 to 10.248.3.175 we do as below:-

root@cldvds-global()# zonecfg -z zcldvds01
zonecfg:zcldvds01> remove net address=10.248.3.167
zonecfg:zcldvds01> add net
zonecfg:zcldvds01:net> set physical=aggr1
zonecfg:zcldvds01:net> set address=10.248.3.175
zonecfg:zcldvds01:net> end
zonecfg:zcldvds01> verify
zonecfg:zcldvds01> commit
zonecfg:zcldvds01> exit

Solved: How to grow or extend ZFS filesystem in Solaris 10

Below are the steps to grow a ZFS filesystem.
  • Identify the zpool of the zfs filesystem.
df -h | grep -i sagufs
df -Z | grep -i sagufs
The above commands will give you the complete path of the filesystem and the zpool name, even if it is in a zone.
  • Check that the pool doesn't have any errors.
root# zpool status sagu-zpool
 pool: sagu-zpool
 state: ONLINE
 scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        sagu-zpool                               ONLINE       0     0     0
          c0t911602657A702A0004D339BDCF15E111d0  ONLINE       0     0     0
          c0t911602657A702A00BE158E94CF15E111d0  ONLINE       0     0     0
          c0t911602657A702A004CD071A9CF15E111d0  ONLINE       0     0     0

errors: No known data errors
  • Check the current size of the pool
root# zpool list sagu-zpool
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
sagu-zpool 249G 178G 71.1G 71% ONLINE -
  • Label the new LUN.
root# format c0t9007538111C02A004E73B39A155BE211d0
  • Add the LUN to the appropriate zpool. Be careful about the pool name.
root# zpool add sagu-zpool c0t9007538111C02A004E73B39A155BE211d0
  • Now let's say we want to increase the filesystem from 100GB to 155GB. To increase the FS, first increase its quota.
root# zfs set quota=155G sagu-zpool/sagufs
  • Finally increase the FS reservation
root# zfs set reservation=155G sagu-zpool/sagufs
  • Now you should be able to see the increased space.
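To confirm the new limits took effect you can query the dataset directly (a quick check using the example pool and filesystem names):
root# zfs get quota,reservation sagu-zpool/sagufs
root# zfs list sagu-zpool/sagufs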

Solved: Getting nobody:nobody as owner of NFS filesystem on Solaris client

If the NFS Version 4 client does not recognize a user or group name from the server, the client is unable to map the string to its unique ID, an integer value. Under such circumstances, the client maps the inbound user or group string to the nobody user. This mapping to nobody creates varied problems for different applications.
Because of these ownership issues you may see the filesystem has ownership of nobody:nobody on the NFS client.
To avoid this situation you can mount filesystem with NFS version 3 as shown below.
On the NFS client, mount the file system using NFS v3:
# mount -F nfs -o vers=3 host:/export/XXX /YYY
e.g.
# mount -F nfs -o vers=3 hostA:/home/cv/share_fs /tmp/test
If this works fine, then make the entry permanent by modifying the /etc/default/nfs file: uncomment the variable NFS_CLIENT_VERSMAX and set it to 3.
vi /etc/default/nfs
NFS_CLIENT_VERSMAX=3
If you are still getting permissions as nobody:nobody, then you have to share the filesystem on the NFS server as anon:
share -o anon=0 /home/cv/share_fs
Now try re-mounting on the NFS client:
mount -F nfs -o vers=3 hostA:/home/cv/share_fs /tmp/test
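To confirm which NFS version the mount actually negotiated, nfsstat can show the mount options (a quick check with the example mount point):
# nfsstat -m /tmp/test      # look for vers=3 in the Flags line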

How to Fix Crawl Errors in Google Search Console or Google Webmaster Tools

If you have moved your website links recently or deleted them, then you may see crawl errors in your Google Webmaster Tools or Google Search Console (the new name of the tool).
You will see two type of errors on the Crawl Errors page.
Site errors: On a normally operating site you generally won’t have these errors. Also, as per Google, if they see a large number of site errors for your website they will try to notify you in the form of a message, no matter how big or small your site is. These types of errors generally come up when your site is down for a long time or is not reachable by Google bots because of issues like DNS errors or excessive page load time.
URL errors: These are the most common errors you will find for a website. They can occur for multiple reasons, for example you moved or renamed pages, or you permanently deleted a post.
These errors may impact your search engine rankings, as Google doesn’t want to send users to pages that don’t exist.
So let’s see how you can fix these URL errors.
Once you login to the Google Webmaster Tools and select the verified property you should see the URL errors marked for your site.
It has two tabs Desktop and Smartphone that shows the errors in respective version of your website.
Select the error and you will see the website link which is broken. It can be your old post which you have moved or deleted.
If you are a developer you can redirect the pages of the broken links to working pages.
But if you don’t want to mess with the code you can install a free plugin called Redirection. Below we will see how you can install and use this plugin.
  • For installing the plugin go to Dashboard> Plugins> Add New
  • Search the plugin “Redirection” and click Install > Activate.
  • After you have installed the plugin go to Dashboard> Tools> Redirection.
  • Once on the Redirection settings pages select “Redirects” from the top.
  • In the “Source URL” copy/paste the URL for which you are getting error.
  • In the “Target URL” copy/paste the working URL.

  • Click Add Redirect.
You can also redirect all your broken URLs to your homepage. But if the post is available at a different link, it’s recommended that you redirect the broken link to the new working link of that post. This will enhance the user experience.
The last step is to go back to the Google Webmaster Tools page, select the URL you just corrected and click “Mark as Fixed”.
Hope this post helps you. Do let me know your opinion in comments section.

In what sequence startup and shutdown RC scripts are executed in Solaris

We can use RC (Run Control) scripts present in /etc/rc* directories to start or stop a service during bootup and shutdown.
Each rc directory is associated with a run level. For example the scripts in rc3.d will be executed when the system is going in Run Level 3.
All the scripts in these directories must follow a special pattern so that they can be considered for execution.
The startup script file name starts with an “S” and the kill script with a “K”. The uppercase is very important or the file will be ignored.
The sequence of these startup and shutdown script is crucial if the applications are dependent on each other.
For example while booting up a database should start before application. While during shutdown the application should shutdown first followed by Database.
Here we will see how we can sequence these scripts.
So as given in the example below, during startup the S90datetest.sh script was executed first and then S91datetest.sh.
Time during execution of CloudVedas script S90datetest.sh is Monday, 23 September 2016 16:19:43 IST
Time during execution of CloudVedas script S91datetest.sh is Monday, 23 September 2016 16:19:48 IST
Similarly, during shutdown the K90datetest.sh script was executed first and then K91datetest.sh.
Time during execution of CloudVedas script K90datetest.sh is Monday, 23 September 2016 16:11:43 IST
Time during execution of CloudVedas script K91datetest.sh is Monday, 23 September 2016 16:11:48 IST
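As a quick illustration, a minimal startup script along the lines of the example above could look like this (the file name, message and log path are only placeholders; the two-digit prefix in the name is what decides the order):
#!/bin/sh
# /etc/rc3.d/S90datetest.sh - minimal sample rc script
# Logs the time it was invoked so the execution order can be seen.
echo "Time during execution of CloudVedas script S90datetest.sh is `date`" >> /var/tmp/rc_sequence.log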
This sequencing is also a tricky interview question and it confuses many people.

Solved: How to create a soft link in Linux or Solaris

In this post we will see how to create a softlink.
Execute the below command to create a softlink.
[root@cloudvedas ~]# ln -s /usr/interface/HB0 CLV
So now when you list using “ls -l”, the softlink thus created will look like this:
[root@cloudvedas ~]# ls -l
lrwxrwxrwx. 1 root root 18 Aug 8 23:16 CLV -> /usr/interface/HB0
[root@cloudvedas ~]#
Try going inside the link and list the contents.
[root@cloudvedas ~]# cd CLV
[root@cloudvedas CLV]# ls
cloud1 cloud2 cloud3
[root@cloudvedas CLV]#
You can see the contents of /usr/interface/HB0 directory.
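If you just want to see where a link points without changing into it, readlink works on Linux (a quick check; on Solaris the “ls -l” output above already shows the target):
[root@cloudvedas ~]# readlink CLV
/usr/interface/HB0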

Solved: How to create a flar image in Solaris and restore it for recovery

A flar image is a good way to recover your system from crashes. In this post we will see how to create a flar image and use it for recovery of the system.
Flar Creation
  • It is recommended that you create the flar image in single user mode. Shut down the server and boot it into single user mode.
# init 0
ok> boot -s
  • In this example, the FLAR image will be stored to a directory under /flash. The FLAR image will be named recovery_image.flar .
flarcreate -n my_bkp_image1 -c -S -R / -x /flash /flash/recovery_image.flar
  • Once the flar image is created, copy it to your repository system. Here we are using NFS.
cp -p /flash/recovery_image.flar /net/FLAR_recovery/recovery_image.flar
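Before relying on the archive you can sanity-check it with flar info, which prints the archive identification section (a quick check using the example file name):
flar info /flash/recovery_image.flar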
Flar Restoration
  • To restore a flar image start the boot process.
  • You can boot server either with Solaris CD/DVD or Network
  • Go to the ok prompt and run one of the below commands:-
To boot from the installation CD/DVD:
ok> boot cdrom
If you want to boot from the network:
ok> boot net
  • Provide the network, date/time, and password information for the system.
  • Once you reach the “Solaris Interactive Installation” part, select “Flash”.
  • Provide the path to the system with location of the FLAR image:
    /net/FLAR_recovery/recovery_image.flar
  • Select the correct Retrieval Method (HTTP, FTP, NFS) to locate the FLAR image.
  • At the Disk Selection screen, select the disk where the FLAR image is to be installed.
  • Choose not to preserve existing data. (Be sure you want to restore on the selected disk.)
  • At the File System and Disk Layout screen, select Customize to edit the disk slices to input the values of the disk partition table from the original disk.
  • Once the system is rebooted the recovery is complete.

What are the maximum number of usable partitions in a disk in Linux

Linux can generally have two types of disks: IDE and SCSI.
IDE
By convention, IDE drives will be given device names /dev/hda to /dev/hdd. Hard Drive A (/dev/hda) is the first drive and Hard Drive C (/dev/hdc) is the third.
A typical PC has two IDE controllers, each of which can have two drives connected to it. For example, /dev/hda is the first drive (master) on the first IDE controller and /dev/hdd is the second (slave) drive on the second controller (the fourth IDE drive in the computer).
The maximum number of usable partitions is 63 for IDE disks.
SCSI
SCSI drives follow a similar pattern; they are represented by ‘sd’ instead of ‘hd’. The first partition of the second SCSI drive would therefore be /dev/sdb1.
The maximum number of usable partitions is 15 for SCSI disks.
A partition is labeled to host a certain kind of file system (not to be confused with a volume label). Such a file system could be the Linux standard ext2 file system or Linux swap space, or even foreign file systems like (Microsoft) NTFS or (Sun) UFS. There is a numerical code associated with each partition type. For example, the code for ext2 is 0x83 and Linux swap is 0x82.
To see a list of partition types and their codes, execute /sbin/sfdisk -T

Solved: How to enable auditing of zones from Global Zone in a Solaris10 Server

Auditing is a good way to keep logs of all the activities happening on your Solaris server. In this post we will see how to enable auditing of both global and local zones and store the logs for all of them in a single place in the global zone.

1) In the global zone create a new FS of 20GB and mount it.

mkdir /var/audit/gaudit
mount /dev/md/dsk/d100 /var/audit/gaudit
chmod -R 750 /var/audit/gaudit

2) Modify /etc/security/audit_control and add "lo,ex" to the flags and naflags lines as below.

vi audit_control
#
# Copyright (c) 1988 by Sun Microsystems, Inc.
#
# ident "@(#)audit_control.txt 1.4 00/07/17 SMI"
#
dir:/var/audit/gaudit
flags:lo,ex
minfree:20
naflags:lo,ex

3) Modify /etc/security/audit_startup and add the +argv and +zonename entries as described below. This will create audit logs for all zones in /var/audit/gaudit.

vi audit_startup
#! /bin/sh
#
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)audit_startup.txt 1.1 04/06/04 SMI"

/usr/bin/echo "Starting BSM services."
/usr/sbin/auditconfig -setpolicy +cnt
/usr/sbin/auditconfig -conf
/usr/sbin/auditconfig -aconf
/usr/sbin/auditconfig -setpolicy +argv
/usr/sbin/auditconfig -setpolicy +zonename
#

4) Copy the audit_control file to /etc/security of each zone, or loopback mount it in each zone.

5) Once all the zones are configured, enable the audit service by running /etc/security/bsmconv. This will require a reboot of the system.

6) Check the audit logs in /var/audit/gaudit using:

auditreduce 20170709091522.not_terminated.solaris1 | praudit

7) For checking the logs of a specific zone, use the -z option as below:

root@solaris1 # auditreduce -z zone1 20170709091522.not_terminated.solaris1 | praudit
file,2017-07-09 16:26:00.000 +02:00,
zone,zone1
header,160,2,execve(2),,solaris1,2017-07-09 16:26:00.697 +02:00
path,/usr/sbin/ping
attribute,104555,root,bin,85,200509,0
exec_args,2,ping,127.0.0.1
subject,root,root,root,root,root,2164,2187,0 0 0.0.0.0
return,success,0
zone,zone1
file,2017-07-09 16:26:00.000 +02:00,
root@solaris1 #

Solved: How to take XSCF snapshot of M-Series server running Solaris

In this post we will see how to take an XSCF snapshot of an M-Series server.

Save snapshot on different server

  • First create a user "test" in the OS of the server on which you want to save the snapshot.
  • Next, login to the XSCF of the server whose snapshot you want to take.
  • Take the snapshot by giving the IP of the destination server on which you want to save the data, using the below syntax.
    snapshot -LF -t username@serverip:/full_path_to_data_location -k download

Here is an example. We created the test user on the 192.168.99.10 destination server, and the snapshot will be saved in its /var/tmp directory.

XSCF> snapshot -LF -t test@192.168.99.10:/var/tmp -k download

Save snapshot on same server

If you want to save the snapshot on the same server whose snapshot you are collecting, use the below steps.

  • Login to XSCF and check the DSCP config to know the IP of each domain.
XSCF> showdscp

DSCP Configuration:

Network: 10.1.1.0
Netmask: 255.255.255.0

Location Address
---------- ---------
XSCF 10.1.1.1
Domain #00 10.1.1.2
Domain #01 10.1.1.3
Domain #02 10.1.1.4
Domain #03 10.1.1.5
  • Check the running domain
XSCF> showdomainstatus -a
DID Domain Status
00 Running
01 -
02 -
03 -
  • Ping to ensure you can connect to the network
    XSCF> ping 10.1.1.2
    
    PING 10.1.1.2 (10.1.1.2): 56 data bytes
    64 bytes from 10.1.1.2: icmp_seq=0 ttl=255 time=2.1 ms
    64 bytes from 10.1.1.2: icmp_seq=1 ttl=255 time=2.0 ms
  • Take snapshot after creating a user on the OS.
    XSCF> snapshot -LF -t test@10.1.1.2:/var/tmp -k download

Solved: How to scan new LUNs in Redhat Linux

In this post we will discuss how to scan new LUNs allocated by the storage team to a Redhat Linux system.
There are two ways of scanning the LUNs
Method 1:-
Find how many SCSI bus controllers you have

  • Go to the directory /sys/class/scsi_host/ and list its contents.

cd /sys/class/scsi_host/ 
[root@scsi_host]# ls
host0 host1 host2
[root@scsi_host]#
  • Here we can see we have three SCSI bus controllers. So in the below command replace hostX with these directory names.
Run the command:
echo "- - -" > /sys/class/scsi_host/hostX/scan
[root@cloudvedas]# echo "- - -" > /sys/class/scsi_host/host0/scan
[root@cloudvedas]# echo "- - -" > /sys/class/scsi_host/host1/scan
[root@cloudvedas]# echo "- - -" > /sys/class/scsi_host/host2/scan
TIP:- Here the "- - -" denotes CxTxDx, i.e. Channel (controller), Target ID and Disk or LUN number. This is asked in Linux admin interviews also.
  • Repeat the above step for all three directories.
If you have FC HBA in the system you can follow the steps as below:-
  • First check number of FC controllers in your system
# ls /sys/class/fc_host
host0 host1 host2
  • To scan FC LUNs execute commands as
echo "1" > /sys/class/fc_host/host0/issue_lip
echo "1" > /sys/class/fc_host/host1/issue_lip
echo "1" > /sys/class/fc_host/host2/issue_lip

Tip:- Here the echo "1" operation performs a Loop Initialization Protocol (LIP) and then scans the interconnect, causing the SCSI layer to be updated to reflect the devices currently on the bus. A LIP is, essentially, a bus reset, and will cause device addition and removal. This procedure is necessary to configure a new SCSI target on a Fibre Channel interconnect. Bear in mind that issue_lip is an asynchronous operation.
  • Verify if the new disk is visible now
fdisk -l |egrep '^Disk' |egrep -v 'dm-'
Method 2 :-
  • The next method is to scan using the sg3_utils package. You can install it using:
yum install sg3_utils
  • Once installed, run the command:
/usr/bin/rescan-scsi-bus.sh
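Whichever method you use, a quick before/after listing of the disks makes it obvious whether the new LUN has arrived (a simple check; lsblk is available on recent Redhat releases):
lsblk -d -o NAME,SIZE,TYPE      # compare with the list taken before the rescan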

Solved: How to add swap space in Redhat or Ubuntu Linux

In this post we will see how we can add a file as swap space in Linux. The same steps are to be followed for Redhat and Ubuntu Linux.
Type the following command to create a 100MB swap file (bs=1024 bytes x count=102400 blocks = 100MB):
dd if=/dev/zero of=/swap1 bs=1024 count=102400
Secure swap file
Setup correct file permission for security reasons, enter:
# sudo chown root:root /swap1
# sudo chmod 0600 /swap1
Set up a Linux swap area
Type the following command to set up a Linux swap area in a file:
# sudo mkswap /swap1
Activate the /swap1 swap space:
# sudo swapon /swap1
Update /etc/fstab file to make it persistent across reboot.
vi /etc/fstab
Add the following line in file:
/swap1 swap swap defaults 0 0
To check whether the swap file has been added, type the following swapon command:
# sudo swapon -s
Filename     Type        Size    Used  Priority
/dev/dm-0    partition   839676  0     -1
/swap1       file        102396  0     -2
It should show you the new file.
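The free command gives a quicker summary if you just want to confirm that the total swap has grown (a minimal check):
# free -m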
If you want to add a logical volume for swap, please refer to our post on how to add an LV for swap.