
Solved: How to change from EFI to SMI label and vice-versa

Hello!
In this post we will see how to change the disk label from EFI to SMI in a Solaris server.
Before you decide to change the disk label ensure that the disk doesn't have any important data. If needed take backup of the disk. As during label change all the data will be removed from the disk.
We will use "format -e" command to change the label on disk.
root@cldvds# format -e c0t6006023764A62A00C5H174886B5BC267d0

format> la
 [0] SMI Label
 [1] EFI Label
 Specify Label type[1]: 0
 Auto configuration via format.dat[no]?
 Auto configuration via generic SCSI-2[no]?

Geometry: 256 heads, 10 sectors 40960 cylinders result in 104857600 out of 104857600 blocks.
 Do you want to modify the device geometry[no]? yes
 format> p
The partition table should now show the SMI layout (slices 0 to 7) instead of the EFI slice layout.
If you want to change from SMI to EFI, follow the same steps and choose option 1 instead of option 0.
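If you want to double-check which label the disk ended up with, you can re-verify from the same format session or print the VTOC from the shell. A quick sketch using the same example disk:

format> verify
# prtvtoc /dev/rdsk/c0t6006023764A62A00C5H174886B5BC267d0s2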
That's all folks! Do comment if you have any concern or query!

Solved: Comparing Run Levels in Linux and Solaris and precautions with them

Many people get confused between run levels in Linux and Solaris, and one particular difference between them can be disastrous. In this post we will show the run levels in both operating systems and the precautions you should take while working with them.
Let’s first take a look at Run levels.
Linux Run Levels
ID | Name | Description
0 | Halt | No activity; the system can be safely shut down.
1 | Single-user mode | For administrative tasks only. Rarely used.
2 | Multi-user mode | Multiple users but no NFS (Network File System).
3 | Multi-user mode with networking | Multiple users, but command-line mode only.
4 | Not used / user-definable | For special purposes. User definable.
5 | Multi-user mode with GUI | Starts the system normally with the appropriate display manager; similar to run level 3 but with a GUI.
6 | Reboot | Reboots the system.
Solaris Run Levels
ID | Name | Description
0 | Power-down state | Power-down state (OBP level after POST). Brings the server to the OK prompt for maintenance.
s or S | Single-user state | Runs as a single user with all file systems mounted and accessible. Only the root user is allowed to log in.
1 | Single-user administrative state | Access to all available file systems, with user logins allowed.
2 | Multi-user mode | Multiple users but no NFS (Network File System), i.e. all daemons running except the NFS daemon.
3 | Multi-user mode with networking | All daemons running, including NFS, with GUI.
4 | Not used / user-definable | For special purposes. User definable.
5 | Power-off | Shuts down gracefully. The difference from level 0 is that you will not get the OBP (OK) prompt at level 5.
6 | Reboot | Reboots the system.
Precaution
Not sure if you have noticed, but there is a major difference in run level 5 between the two operating systems. For Linux, run level 5 means multi-user with GUI, all good. But for Solaris, run level 5 means power-off, ouch! Many Linux admins who start working on Solaris make the mistake of executing “init 5” on Solaris to get the GUI, but that actually brings down the Solaris server. Hope you never make this mistake on a production box.
Check current run level
who -r
The above command will tell you the current run level of your system.
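On Solaris the output looks something like this (values here are purely illustrative):

   .       run-level 3  Sep 23 10:15     3      0  S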

How to take zfs snapshot and rollback

In this post we will discuss how to take ZFS snapshot and restore data using that snapshot.

If you want to take snapshot of a ZFS filesystem the syntax is simple.

zfs snapshot pool/filesystem@somename

Let’s take an example, we have a zpool named “cvpool” and it has a filesystem "cldvdsfs".

  • If we want to take a snapshot of this filesystem on the weekend, we will give the snapshot a name, let’s say “sunday”.
zfs snapshot cvpool/cldvdsfs@sunday
  • Now if you do a “zfs list” you should see the zfs snapshot.
# zfs list
NAME                      USED AVAIL REFER MOUNTPOINT
cvpool                     500M 4.40G 22K /cvpool
cvpool/cldvdsfs             22K 500M  22K /cvpool/cldvdsfs
cvpool/cldvdsfs@sunday        0   -   22K   -
#
  • Have a look at the content of the filesystem. We can see 5 test files.
# cd /cvpool/cldvdsfs

# ls
test1 test2 test3 test4 test5
#

Rollback

  • If you want to do a rollback/restore of this snapshot on the filesystem you can simply do it by:-
zfs rollback pool/filesystem@sunday
  • Let's give it a try  by removing some files.
# rm test5
# rm test4
# ls
 test1 test2 test3
  • Now when we try rollback we should see all our data back.
# zfs rollback cvpool/cldvdsfs@sunday

# cd /cvpool/cldvdsfs
# ls
 test1 test2 test3 test4 test5
#

So we can see above we got our removed files back.

Remote backup

  • Let's try sending the snapshot on a different filesystem or on a remote server NFS filesystem. This is very useful for backup purposes where the data is stored on a different server.
# zfs send cvpool/cldvdsfs@sunday > /remoteNFS/sunday.snap
  • Let's check our snapshot size
-rw-r--r-- 1 root root 14K Sep 23 07:52 sunday.snap
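If the destination host has a zpool of its own, you can also stream the snapshot straight to it over ssh instead of writing a file to the NFS mount. A minimal sketch, where backuphost and backuppool are assumed names:

# zfs send cvpool/cldvdsfs@sunday | ssh root@backuphost "zfs receive backuppool/cldvdsfs_backup"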
  • Zip the snapshot

You also have the option to zip the snapshot to save space. In the example below the snapshot shrank by 94%.

# gzip -9 -v /remoteNFS/sunday.snap
 /remoteNFS/sunday.snap: 94.0% -- replaced with /remoteNFS/sunday.snap.gz
#

-rw-r--r-- 1 root root 899 Sep 23 07:52 sunday.snap.gz
  • Now let's create a new ZFS filesystem named sunday and try to restore the snapshot onto it.
# zfs create cvpool/sunday

# zfs list
 NAME USED AVAIL REFER MOUNTPOINT
 cvpool 500M 4.40G 22K /cvpool
 cvpool/cldvdsfs 22K 500M 22K /cvpool/cldvdsfs
 cvpool/cldvdsfs@sunday 0 - 22K -
 cvpool/sunday 21K 4.40G 21K /cvpool/sunday
  • Currently our new filesystem has nothing in it.
# cd /cvpool/sunday
# ls
#

Unzip the snapshot

Let's unzip and restore the snapshot.

# gzip -d -c /remoteNFS/sunday.snap.gz | zfs receive -F cvpool/sunday

Note:- When you restore the snapshot the filesystem /cvpool/sunday should not be in use else you will get device busy error.
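A quick way to check whether anything is holding the target filesystem open before the receive (a sketch, not part of the original steps) is fuser:

# fuser -c /cvpool/sunday    # lists processes using the mounted filesystem
# cd /                       # make sure your own shell is not sitting inside the mountpoint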

  • We can now see our files in the sunday filesystem.
 # cd /cvpool/sunday
 # ls
 test1 test2 test3 test4 test5
 #
  • You can make the restored snapshot as your main filesystem by renaming it.

So here we will first rename the current filesystem to old.

 # zfs rename cvpool/cldvdsfs cvpool/cldvdsfs.old
 # zfs list
 NAME USED AVAIL REFER MOUNTPOINT
 cvpool 500M 4.40G 24K /cvpool
 cvpool/cldvdsfs.old 40K 500M 22K /cvpool/cldvdsfs.old
 cvpool/cldvdsfs.old@sunday 18K - 22K -
 cvpool/sunday 22K 4.40G 22K /cvpool/sunday
 cvpool/sunday@sunday 0 - 22K -
  • Now we will make the filesystem which was restored from sunday snapshot as main.
 # zfs rename cvpool/sunday cvpool/cldvdsfs
 # zfs list
 NAME USED AVAIL REFER MOUNTPOINT
 cvpool 500M 4.40G 24K /cvpool
 cvpool/cldvdsfs 40K 4.40G 22K /cvpool/cldvdsfs
 cvpool/cldvdsfs@sunday 18K - 22K -
 cvpool/cldvdsfs.old 40K 500M 22K /cvpool/cldvdsfs.old
 cvpool/cldvdsfs.old@sunday 18K - 22K -
 #
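Once you have confirmed that the renamed filesystem has everything you need, you can optionally remove the old copy to reclaim space. This is irreversible, so be absolutely sure first; a sketch:

# zfs destroy -r cvpool/cldvdsfs.old    # -r also removes its snapshot cldvdsfs.old@sunday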

Solved: Error on labeling the disk

You may get below error when you are trying to label a new disk in Solaris.

format> la
 WARNING - This disk may be in use by an application that has
 modified the fdisk table. Ensure that this disk is
 not currently in use before proceeding to use fdisk.

Solution:-

Before you proceed with the solution, ensure that this is the correct disk that you want to label.

  • Once you are sure about the correct disk, select the disk from the format menu.
  • Before you try labeling the disk, execute "fdisk" while in the format menu.
format> fdisk
 No fdisk table exists. The default partition for the disk is:

a 100% "SOLARIS System" partition

Type "y" to accept the default partition, otherwise type "n" to edit the
 partition table.
  • Once you press "y" it will accept the default disk layout.
  • Now you can also change the disk layout format as per your requirement and finally label the disk.
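Putting it together, the remaining flow inside format would look roughly like this (the exact confirmation prompt may vary by Solaris release):

format> fdisk
Type "y" to accept the default partition
format> label
Ready to label disk, continue? y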

Solved: How to allow remote root login in Solaris

In this post we will show you how to allow root login to a Solaris server from a remote machine.
  • Login to the Solaris Server via console as root user.
  • Modify the /etc/ssh/sshd_config . Look for the PermitRootLogin entry in the file and change it from no to yes .
PermitRootLogin yes
  • Finally, refresh the ssh service so that it re-reads the configuration changes you made to sshd_config. For Solaris 10 you can refresh the ssh service using svcadm.
     svcadm refresh svc:/network/ssh:default
  • For Solaris 7, 8 and 9 you can restart the ssh service as below.
     /etc/init.d/sshd restart
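You can then test the change from any remote machine (the host name below is just a placeholder):

$ ssh root@your-solaris-server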

Solved: Reactivate hardware raid volume after system board replacement- Solaris

Replacing the system board/motherboard of a server which has hardware RAID enabled is a major activity. In this post we will discuss the steps that you should follow.
Caution:- You should be extremely careful while doing this activity and take complete backup of all your data.
1) Collect the explorer output and take a full backup of all data, especially the OS.
2) Bring down the server.
3) Replace the motherboard and put back the old disks in server.
4) Reset the SP settings to factory default, as many refurbished system boards may have an old LDOM config on them. It can be done as below:-
Resetting the LDOM config through the Service Processor:
-> set /HOST/bootmode config="factory-default"

-> stop /SYS

Are you sure you want to stop /SYS (y/n)? y
Wait for machine to power off.
We can then power on the system.
-> start /SYS

Are you sure you want to start /SYS (y/n)? y

Starting /SYS

-> start /SP/console
5) Now upgrade the firmware so that it matches the firmware of the old board. It can be done using the sysfwdownload utility. Follow the readme of the firmware patch for detailed instructions.
6) If you cannot see the disks which are in hardware RAID from the OS, it can be because the volume is inactive. It can be activated from the ok prompt as follows:-
Go to the OBP prompt.
At the command line, set the auto-boot? variable to false and the fcode-debug? variable to true, then reset the system.
 ok setenv auto-boot? false
 auto-boot? = false
 ok setenv fcode-debug? true
 fcode-debug? = true
 ok reset-all
Find the path to the controller.
 ok show-disks
 a) /pci@0/pci@0/pci@2/LSILogic,sas@0/disk
 b) /pci@0/pci@0/pci@1/pci@0/usb@1,2/storage@1/disk
 q) NO SELECTION
 Enter Selection, q to quit: q
 ok
Note - You are looking for the path to the controller. It generally contains the phrase “LSILogic,sas@0” or the phrase “scsi@0“.
Select the controller.
 ok select /pci@0/pci@0/pci@2/LSILogic,sas@0
Show the volumes, look for any inactive volumes.
 ok show-volumes
Activate the inactive volumes. Repeat the command to activate all inactive volumes. For example, to activate volume number 1 type:
ok 1 activate-volume
Note - There might be more than two inactive RAID volumes, but you cannot activate more than two.
Deselect the controller.
ok unselect-dev
Set the auto-boot? and fcode-debug? variables to true and reset the system.
ok setenv auto-boot? true
 auto-boot? = true
 ok setenv fcode-debug? true
 fcode-debug? = true
 ok reset-all
7) Once the volume is enabled you should be able to see the LSI disk in probe-scsi-all and also in format once the server is booted.
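Once the server is up, a quick sanity check from the OS (the volume name c0t0d0 below is only an example) would be:

# raidctl -l c0t0d0    # the volume status should show OPTIMAL (or SYNC while resyncing)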
Note:- This post is created just for your reference. Please try it first in your test server. We are not responsible for any loss caused by following this tutorial.

Solved: How to create hardware RAID in Solaris

If you want to create hardware RAID in Solaris you can use raidctl utility.
  • Let’s create a mirrored RAID volume. Before you start, remember that raidctl will destroy all data on the disks, so be absolutely sure that you have selected the right disks.
# raidctl -c c0t0d0 c0t1d0
Creating RAID volume c0t0d0 will destroy all data on member disks, proceed (yes/no)? yes
...
Volume c0t0d0 is created successfully!
#
  • Once you have created the RAID it can be in four below states:-
OPTIMAL – Disks in the RAID are online and fully synchronized.
SYNC – Disk syncing is in progress.
DEGRADED – One of the disks in the RAID has failed.
FAILED – One or both disks are lost and you have to recreate the volume.
  • Let’s check the current status of our volume.
# raidctl -l c0t0d0
Volume                  Size    Stripe  Status   Cache  RAID
        Sub                     Size                    Level
                Disk
----------------------------------------------------------------
c0t0d0                  136.6G  N/A     SYNC     OFF    RAID1
                0.0.0   136.6G          GOOD
                0.1.0   136.6G          GOOD
  • In the above output we can see that the sync is in progress, while in the output below we can see that the disks are now in sync and the RAID is optimal. We can also see that it is RAID 1, which is mirroring.
# raidctl -l c0t0d0
Volume                  Size    Stripe  Status   Cache  RAID
        Sub                     Size                    Level
                Disk
----------------------------------------------------------------
c0t0d0                  136.6G  N/A     OPTIMAL  OFF    RAID1
                0.0.0   136.6G          GOOD
                0.1.0   136.6G          GOOD
If you want to change the system board of a system with hardware RAID, you will have to reactivate the volumes after the hardware replacement. Refer to the earlier post on reactivating a hardware RAID volume after system board replacement.

Parallel Patching in Solaris 10

When you patch a Solaris 10 server, it applies each patch to each zone one at a time. So if you have 3 zones and it takes 1 minute to apply a patch on the global zone, it will take another minute each to apply it on the other 3 zones. In total you will spend around 4 minutes applying a single patch on the server. You can imagine the time it will take to apply a bundle of 300 patches.
From Solaris 10 10/09 onward you have got an option to patch multiple zones in parallel.
(For releases prior to Solaris 10 10/09, this feature is delivered in the patch utilities patch, 119254-66 or later revision (SPARC) and 119255-66 or later revision (x86). Check for the latest patch on the My Oracle Support website.)
Parallel patching is really helpful as it applies patches to all the zones in parallel, so all the zones on a server are patched at the same time, drastically reducing your patching time. In the scenario above, with parallel patching the total time to apply the patch in all zones can drop to around 2 to 2.5 minutes, since the global zone is still patched first and then the patch is applied to the local zones in parallel.
The number of non-global (local) zones that can be patched in parallel is controlled by the num_proc= parameter defined in /etc/patch/pdo.conf.
The value for num_proc= is decided based on the number of online CPUs in your system; the maximum is 1.5 times the number of online CPUs.
For example :-
If number of online CPUs is 6
In /etc/patch/pdo.conf make the entry
num_proc=9
Thus, as per the above example, 9 zones can be patched in parallel. This will save a lot of downtime if you have 9 zones running on a server. Once the entry in pdo.conf is made, you can continue with the normal patching process.
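To find the number of online CPUs you can count them with psrinfo, for example (the count shown is illustrative):

# psrinfo | grep -c on-line
6
# vi /etc/patch/pdo.conf    # then set num_proc=9 (6 x 1.5)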
So just update the num_proc value in  /etc/patch/pdo.conf  as per the available CPUs in your system and enjoy some free time 🙂
Note:- The time estimates I mentioned above are based on my own experience and I have not maintained any data for this, so please expect variations as per your system.
Do let me know if you have any query.

Solved: How to plumb IP in a Solaris zone without reboot

In this post we will discuss how to add an IP to a  running Solaris zone.

If you want to add a new IP address to a running local zone(zcldvds01) you can do it by plumbing the IP manually from the global zone.

root@cldvds-global()# ifconfig aggr1:2 plumb
root@cldvds-global()# ifconfig aggr1:2 inet 10.248.3.167 netmask 255.255.255.0 broadcast 10.248.3.255 zone zcldvds01 up

This change is not persistent across reboot. To make it permanent you will have to make an entry through zonecfg:-

root@cldvds-global()# zonecfg -z zcldvds01
zonecfg:zcldvds01> add net
zonecfg:zcldvds01:net> set physical=aggr1
zonecfg:zcldvds01:net> set address=10.248.3.167
zonecfg:zcldvds01:net> end
zonecfg:zcldvds01> verify
zonecfg:zcldvds01> commit
zonecfg:zcldvds01> exit

Now if you run "ifconfig -a" in the zone. You should see the new IP plumbed.
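You can also confirm it from the global zone without an interactive login, e.g.:

root@cldvds-global()# zlogin zcldvds01 ifconfig -a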

Change IP in zone

If you want to change the IP address of a zone you can simply do it by using "remove". Taking the above example, to change the IP from 10.248.3.167 to 10.248.3.175 we will do as below:-

root@cldvds-global()# zonecfg -z zcldvds01
zonecfg:zcldvds01> remove net address=10.248.3.167
zonecfg:zcldvds01> add net
zonecfg:zcldvds01:net> set physical=aggr1
zonecfg:zcldvds01:net> set address=10.248.3.175
zonecfg:zcldvds01:net> end
zonecfg:zcldvds01> verify
zonecfg:zcldvds01> commit
zonecfg:zcldvds01> exit

Solved: How to grow or extend ZFS filesystem in Solaris 10

Below are the steps to grow a zfs filesystem
  • Identify the zpool of the zfs filesystem.
df -h | grep -i sagufs
df -Z | grep -i sagufs
The above commands will give you the complete path of the filesystem and the zpool name, even if it is in a zone.
  • Check that  the pool doesn't have any errors.
root# zpool status sagu-zpool
 pool: sagu-zpool
 state: ONLINE
 scan: none requested
config:

NAME STATE READ WRITE CKSUM
 sagu-zpool ONLINE 0 0 0
 c0t911602657A702A0004D339BDCF15E111d0 ONLINE 0 0 0
 c0t911602657A702A00BE158E94CF15E111d0 ONLINE 0 0 0
 c0t911602657A702A004CD071A9CF15E111d0 ONLINE 0 0 0

errors: No known data errors
  • Check the current size of the pool
root# zpool list sagu-zpool
NAME SIZE ALLOC FREE CAP HEALTH ALTROOT
sagu-zpool 249G 178G 71.1G 71% ONLINE -
  • Label the new LUN.
root# format c0t9007538111C02A004E73B39A155BE211d0
  • Add the LUN to appropriate zpool. Be careful about pool name.
root# zpool add sagu-zpool c0t9007538111C02A004E73B39A155BE211d0
  • Now let's say we want to increase the filesystem from 100GB to 155GB. To increase FS first increase its quota.
root# zfs set quota=155G sagu-zpool/sagufs
  • Finally increase the FS reservation
root# zfs set reservation=155G sagu-zpool/sagufs
  • Now you should be able to see the increased space.
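For example, to confirm the new pool size and the updated quota/reservation (a quick sketch using the same names as above):

root# zpool list sagu-zpool
root# zfs get quota,reservation sagu-zpool/sagufs
root# df -h | grep -i sagufs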

Solved: Getting nobody:nobody as owner of NFS filesystem on Solaris client

If the NFS Version 4 client does not recognize a user or group name from the server, the client is unable to map the string to its unique ID, an integer value. Under such circumstances, the client maps the inbound user or group string to the nobody user. This mapping to nobody creates varied problems for different applications.
Because of these ownership issues you may see filesystem has permission of nobody:nobody on the NFS client.
To avoid this situation you can mount filesystem with NFS version 3 as shown below.
On the NFS client, mount a file system using the NFS v3
# mount -F nfs -o vers=3 host:/export/XXX /YYY
e.g.
# mount -F nfs -o vers=3 hostA:/home/cv/share_fs /tmp/test
If this works fine, then make the entry permanent by modifying the /etc/default/nfs file: uncomment the variable NFS_CLIENT_VERSMAX and set it to 3.
vi /etc/default/nfs
NFS_CLIENT_VERSMAX=3
If you are still getting permissions as nobody:nobody then you have to share the filesystem on NFS server as anon.
share -o anon=0 /home/cv/share_fs
Now try re-mount on NFS Client
mount -F nfs -o vers=3 hostA:/home/cv/share_fs /tmp/test
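To confirm which NFS version the client actually negotiated for the mount, you can check nfsstat and look for the vers= value in the flags of the mount you care about (using the example mount point from above):

# nfsstat -m /tmp/test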

In what sequence startup and shutdown RC scripts are executed in Solaris

We can use RC (Run Control) scripts present in /etc/rc* directories to start or stop a service during bootup and shutdown.
Each rc directory is associated with a run level. For example the scripts in rc3.d will be executed when the system is going in Run Level 3.
All the scripts in these directories must follow a special pattern so that they can be considered for execution.
The startup script file name starts with an “S” and the kill script name starts with a “K”. The uppercase letter is very important, otherwise the file will be ignored.
The sequence of these startup and shutdown scripts is crucial if the applications are dependent on each other. For example, during boot a database should start before the application, while during shutdown the application should stop first, followed by the database.
Here we will see how we can sequence these scripts.
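As an illustration, a tiny script like the hypothetical datetest.sh below, linked into the rc directories as S90datetest.sh, S91datetest.sh, K90datetest.sh and so on, is what would produce the log lines shown next. The log file path is just an assumption for the example:

#!/sbin/sh
# Illustrative RC script: logs its own name and the current time
# so the execution order of the S/K scripts can be seen in the log.
echo "Time during execution of CloudVedas script `basename $0` is `date`" >> /var/tmp/rc_sequence.log

exit 0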
So, as shown in the example below, during startup the S90datetest.sh script was executed first and then S91datetest.sh.
Time during execution of CloudVedas script S90datetest.sh is Monday, 23 September 2016 16:19:43 IST
Time during execution of CloudVedas script S91datetest.sh is Monday, 23 September 2016 16:19:48 IST
Similarly during shutdown K90datetest.sh script was executed first and then K91datetest.sh .
Time during execution of CloudVedas script K90datetest.sh is Monday, 23 September 2016 16:11:43 IST
Time during execution of CloudVedas script K91datetest.sh is Monday, 23 September 2016 16:11:48 IST
This sequencing is also a tricky interview question and it confuses many people.

Solved: How to create a soft link in Linux or Solaris

In this post we will see how to create a softlink.
Execute the below command to create a softlink.
[root@cloudvedas ~]# ln -s /usr/interface/HB0 CLV
So now when you list using “ls -l”, the softlink thus created will look like this:
[root@cloudvedas ~]# ls -l
lrwxrwxrwx. 1 root root 18 Aug 8 23:16 CLV -> /usr/interface/HB0
[root@cloudvedas ~]#
Try going inside the link and list the contents.
[root@cloudvedas ~]# cd CLV
[root@cloudvedas CLV]# ls
cloud1 cloud2 cloud3
[root@cloudvedas CLV]#
You can see the contents of /usr/interface/HB0 directory.

Solved: How to create a flar image in Solaris and restore it for recovery

A flar image is a good way to recover your system from crashes. In this post we will see how to create a flar image and use it for recovery of the system.
Flar Creation
  • It is recommended that you create the flar image in single-user mode. Shut down the server and boot it in single user.
# init 0
ok> boot -s
  • In this example, the FLAR image will be stored to a directory under /flash. The FLAR image will be named recovery_image.flar .
flarcreate -n my_bkp_image1 -c -S -R / -x /flash /flash/recovery_image.flar
  • Once the flar image is created, copy it to your repository system. Here we are using NFS.
cp -p /flash/recovery_image.flar /net/FLAR_recovery/recovery_image.flar
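Before relying on the archive you can optionally sanity-check it; flar info prints the archive identification section:

# flar info /net/FLAR_recovery/recovery_image.flar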
Flar Restoration
  • To restore a flar image start the boot process.
  • You can boot server either with Solaris CD/DVD or Network
  • Go to the ok prompt and run one of the below commands:-
For booting from the installation media (CD/DVD):

ok> boot cdrom

If you want to boot from the network:

ok> boot net
  • Provide the network, date/time, and password information for the system.
  • Once you reach the “Solaris Interactive Installation” part, select “Flash”.
  • Provide the path to the system with location of the FLAR image:
    /net/FLAR_recovery/recovery_image.flar
  • Select the correct Retrieval Method (HTTP, FTP, NFS) to locate the FLAR image.
  • At the Disk Selection screen, select the disk where the FLAR image is to be installed.
  • Choose not to preserve existing data.(Be sure you want to restore on selected disk)
  • At the File System and Disk Layout screen, select Customize to edit the disk slices to input the values of the disk partition table from the original disk.
  • Once the system is rebooted the recovery is complete.

Solved: How to enable auditing of zones from the Global Zone in a Solaris 10 Server

Auditing is a good way to keep logs of all the activities happening in your Solaris server. In this post we will see how to enable auditing of both global and local zones and store the logs of all in a single file in global zone.

1) In the global zone create a new FS of 20GB and mount it.

mkdir /var/audit/gaudit
mount /dev/md/dsk/d100 /var/audit/gaudit
chmod -R 750 /var/audit/gaudit

2) Modify /etc/security/audit_control and add "lo,ex" before flags and naflags as below.

vi audit_control
#
# Copyright (c) 1988 by Sun Microsystems, Inc.
#
# ident "@(#)audit_control.txt 1.4 00/07/17 SMI"
#
dir:/var/audit/gaudit
flags:lo,ex
minfree:20
naflags:lo,ex

3) Modify /etc/security/audit_startup and add the +argv and +zonename policy entries as shown below. These policies record the command arguments and the zone name in each audit record, so activity from all zones can be identified in the logs under /var/audit/gaudit.

vi audit_startup
#! /bin/sh
#
# Copyright 2004 Sun Microsystems, Inc. All rights reserved.
# Use is subject to license terms.
#
# ident "@(#)audit_startup.txt 1.1 04/06/04 SMI"

/usr/bin/echo "Starting BSM services."
/usr/sbin/auditconfig -setpolicy +cnt
/usr/sbin/auditconfig -conf
/usr/sbin/auditconfig -aconf
/usr/sbin/auditconfig -setpolicy +argv
/usr/sbin/auditconfig -setpolicy +zonename
#

4) Copy the audit_control file to /etc/security of each zone, or loopback mount it in each zone.

5) Once all the zones are configured, enable the audit service by running /etc/security/bsmconv. This will require a reboot of the system.
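After the reboot you can confirm that auditing is active; on Solaris 10 the audit daemon is managed by SMF, so for example:

# svcs auditd                 # should show the service online
# auditconfig -getpolicy      # should list the argv and zonename policies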

6) Check audit logs in /var/audit/gaudit using

auditreduce 20170709091522.not_terminated.solaris1 | praudit

7) For checking logs of a specific zone follow below

root@solaris1 # auditreduce -z zone1 20170709091522.not_terminated.solaris1 | praudit
file,2017-07-09 16:26:00.000 +02:00,
zone,zone1
header,160,2,execve(2),,solaris1,2017-07-09 16:26:00.697 +02:00
path,/usr/sbin/ping
attribute,104555,root,bin,85,200509,0
exec_args,2,ping,127.0.0.1
subject,root,root,root,root,root,2164,2187,0 0 0.0.0.0
return,success,0
zone,zone1
file,2017-07-09 16:26:00.000 +02:00,
root@solaris1 #

Solved: How to take XSCF snapshot of M-Series server running Solaris

In this post we will see how to take XSCF snapshot of an M-Series server

Save snapshot on different server

  • First, create a user "test" in the OS of the server on which you want to save the snapshot.
  • Next login to XSCF of server whose snapshot you want to take.
  • Take the snapshot by giving the IP of the destination server on which you want to save the data, using the below syntax.
    snapshot -LF -t username@serverip:/full_path_to_data_location -k download

Here is an example. We created the test user on the 192.168.99.10 destination server, and the snapshot will be saved in its /var/tmp directory.

XSCF> snapshot -LF -t test@192.168.99.10:/var/tmp -k download

Save snapshot on the same server

If you want to save the snapshot on the same server whose snapshot you are collecting, use the below steps.

  • Login to XSCF and check the DSCP config to know the IP of each domain.
XSCF> showdscp

DSCP Configuration:

Network: 10.1.1.0
Netmask: 255.255.255.0

Location Address
---------- ---------
XSCF 10.1.1.1
Domain #00 10.1.1.2
Domain #01 10.1.1.3
Domain #02 10.1.1.4
Domain #03 10.1.1.5
  • Check the running domain
XSCF> showdomainstatus -a
DID Domain Status
00 Running
01 -
02 -
03 -
  • Ping to ensure you can connect to the network
    XSCF> ping 10.1.1.2
    
    PING 10.1.1.2 (10.1.1.2): 56 data bytes
    64 bytes from 10.1.1.2: icmp_seq=0 ttl=255 time=2.1 ms
    64 bytes from 10.1.1.2: icmp_seq=1 ttl=255 time=2.0 ms
  • Take snapshot after creating a user on the OS.
    XSCF> snapshot -LF -t test@10.1.1.2:/var/tmp -k download