If you want to come out of the VMware console window, simply press Ctrl + Alt (or Alt + Ctrl); either key sequence will work.
If you want the movement between your host machine and the VMware console to be seamless, install VMware Tools in the guest VM. This ensures that you don't have to press Ctrl + Alt every time.
Check this site to download VMware tools.
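If the guest is a reasonably modern Linux distribution, you can usually skip the bundled installer and use the open source open-vm-tools package instead; a minimal sketch for an Ubuntu/Debian guest (package availability in your repositories is assumed):
sudo apt-get update
sudo apt-get install open-vm-tools
# For guests running a desktop environment, the desktop package adds the
# seamless mouse and clipboard integration:
sudo apt-get install open-vm-tools-desktop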
Solved: Install and use rsync in Linux and Solaris - Tips and Tricks
rsync is a very useful utility which can be used to take backups or copy data from one filesystem to another. The best part about rsync is that it copies data incrementally, and its syntax is easy.
rsync is popular on Linux and is included in most distributions by default. If you don't have rsync on your machine, you can simply install it with the following commands:
Ubuntu / Debian
sudo apt-get update
sudo apt-get install rsync
RHEL / CentOS
yum install rsync
Solaris
rsync is now available in Solaris as well; Oracle includes it in Solaris 10 and Solaris 11. You can also download it from the My Oracle Support website or from here.
Using rsync
Once you have installed rsync you can use it to copy data from one filesystem to another. Check the example here:
rsync -avzp --ignore-existing --exclude="lost+found" --exclude=".snapshot" /source/ /target/
The -p option is important as it preserves file permissions. Note that -p alone does not cover ACLs; add -A (which implies -p) if you also need ACLs preserved.
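If you first want to see what would be transferred without changing anything, add the -n (--dry-run) flag to the same command; the paths below are the same example source and target used above:
rsync -avzpn --ignore-existing --exclude="lost+found" --exclude=".snapshot" /source/ /target/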
rsync and NFS
If you want, you can also sync a filesystem which is on another server.
One way of doing it is to mount the source or target filesystem as NFS.
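For example, on the source server you might mount the target server's filesystem like this (target-server and /targetNFS are placeholders matching the rsync command below; on Solaris use -F nfs instead of -t nfs):
mkdir -p /targetNFS
mount -t nfs target-server:/target /targetNFS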
Let's say we have mounted the target filesystem as NFS on the source server; then the syntax will be:
rsync -avzp --ignore-existing --exclude="lost+found" --exclude=".snapshot" /source/ /targetNFS/
rsync and ssh
Or, if you don't want to use NFS, you can do the same with ssh.
- First exchange ssh keys between the source and destination server so that the servers can ssh to each other (a minimal sketch of this step follows this list).
- Once you have ensured that ssh is working fine between the source and destination servers, let's move ahead.
- In the below example we are initiating rsync from the target server, so we have mentioned the IP of the source server.
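A minimal sketch of the key exchange from the first step, run on the target server (Source_Server_IP_ADDRESS is the same placeholder used in the rsync command below; if ssh-copy-id is not available, append the public key to ~/.ssh/authorized_keys on the source server manually):
# Generate a key pair for the user that will run rsync (accept the defaults)
ssh-keygen -t rsa
# Push the public key to the source server so password-less ssh works
ssh-copy-id root@Source_Server_IP_ADDRESS
# Verify that login works without a password prompt
ssh root@Source_Server_IP_ADDRESS hostname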
rsync -avzp --ignore-existing --exclude="lost+found" --exclude=".snapshot" --rsh='ssh -p 22' root@Source_Server_IP_ADDRESS:/source/ /target/
Scheduling rsync in cron
If you want, you can also schedule rsync in cron. In the below example we will schedule the cron job to run at 3:30 AM daily.
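As a quick reference for the schedule fields, the entry shown below is added by editing the crontab of the user that should own the job (root in this example):
# Field order: minute hour day-of-month month day-of-week command
# so "30 3 * * *" means every day at 03:30
crontab -e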
30 3 * * * rsync -avzp --ignore-existing --exclude="lost+found" --exclude=".snapshot" --rsh='ssh -p 22' root@Source_IP:/source/ /target/ >/dev/null 2>&1
Options available in rsync
-v, --verbose increase verbosity
-q, --quiet suppress non-error messages
--no-motd suppress daemon-mode MOTD (see caveat)
-c, --checksum skip based on checksum, not mod-time & size
-a, --archive archive mode; equals -rlptgoD (no -H,-A,-X)
--no-OPTION turn off an implied OPTION (e.g. --no-D)
-r, --recursive recurse into directories
-R, --relative use relative path names
--no-implied-dirs don’t send implied dirs with --relative
-b, --backup make backups (see --suffix & --backup-dir)
--backup-dir=DIR make backups into hierarchy based in DIR
--suffix=SUFFIX backup suffix (default ~ w/o --backup-dir)
-u, --update skip files that are newer on the receiver
--inplace update destination files in-place
--append append data onto shorter files
--append-verify --append w/old data in file checksum
-d, --dirs transfer directories without recursing
-l, --links copy symlinks as symlinks
-L, --copy-links transform symlink into referent file/dir
--copy-unsafe-links only "unsafe" symlinks are transformed
--safe-links ignore symlinks that point outside the tree
-k, --copy-dirlinks transform symlink to dir into referent dir
-K, --keep-dirlinks treat symlinked dir on receiver as dir
-H, --hard-links preserve hard links
-p, --perms preserve permissions
-E, --executability preserve executability
--chmod=CHMOD affect file and/or directory permissions
-A, --acls preserve ACLs (implies -p)
-X, --xattrs preserve extended attributes
-o, --owner preserve owner (super-user only)
-g, --group preserve group
--devices preserve device files (super-user only)
--specials preserve special files
-D same as --devices --specials
-t, --times preserve modification times
-O, --omit-dir-times omit directories from --times
--super receiver attempts super-user activities
--fake-super store/recover privileged attrs using xattrs
-S, --sparse handle sparse files efficiently
-n, --dry-run perform a trial run with no changes made
-W, --whole-file copy files whole (w/o delta-xfer algorithm)
-x, --one-file-system don’t cross filesystem boundaries
-B, --block-size=SIZE force a fixed checksum block-size
-e, --rsh=COMMAND specify the remote shell to use
--rsync-path=PROGRAM specify the rsync to run on remote machine
--existing skip creating new files on receiver
--ignore-existing skip updating files that exist on receiver
--remove-source-files sender removes synchronized files (non-dir)
--del an alias for --delete-during
--delete delete extraneous files from dest dirs
--delete-before receiver deletes before transfer (default)
--delete-during receiver deletes during xfer, not before
--delete-delay find deletions during, delete after
--delete-after receiver deletes after transfer, not before
--delete-excluded also delete excluded files from dest dirs
--ignore-errors delete even if there are I/O errors
--force force deletion of dirs even if not empty
--max-delete=NUM don’t delete more than NUM files
--max-size=SIZE don’t transfer any file larger than SIZE
--min-size=SIZE don’t transfer any file smaller than SIZE
--partial keep partially transferred files
--partial-dir=DIR put a partially transferred file into DIR
--delay-updates put all updated files into place at end
-m, --prune-empty-dirs prune empty directory chains from file-list
--numeric-ids don’t map uid/gid values by user/group name
--timeout=SECONDS set I/O timeout in seconds
--contimeout=SECONDS set daemon connection timeout in seconds
-I, --ignore-times don’t skip files that match size and time
--size-only skip files that match in size
--modify-window=NUM compare mod-times with reduced accuracy
-T, --temp-dir=DIR create temporary files in directory DIR
-y, --fuzzy find similar file for basis if no dest file
--compare-dest=DIR also compare received files relative to DIR
--copy-dest=DIR ... and include copies of unchanged files
--link-dest=DIR hardlink to files in DIR when unchanged
-z, --compress compress file data during the transfer
--compress-level=NUM explicitly set compression level
--skip-compress=LIST skip compressing files with suffix in LIST
-C, --cvs-exclude auto-ignore files in the same way CVS does
-f, --filter=RULE add a file-filtering RULE
-F same as --filter=’dir-merge /.rsync-filter’
repeated: --filter=’- .rsync-filter’
--exclude=PATTERN exclude files matching PATTERN
--exclude-from=FILE read exclude patterns from FILE
--include=PATTERN don’t exclude files matching PATTERN
--include-from=FILE read include patterns from FILE
--files-from=FILE read list of source-file names from FILE
-0, --from0 all *from/filter files are delimited by 0s
-s, --protect-args no space-splitting; wildcard chars only
--address=ADDRESS bind address for outgoing socket to daemon
--port=PORT specify double-colon alternate port number
--sockopts=OPTIONS specify custom TCP options
--blocking-io use blocking I/O for the remote shell
--stats give some file-transfer stats
-8, --8-bit-output leave high-bit chars unescaped in output
-h, --human-readable output numbers in a human-readable format
--progress show progress during transfer
-P same as --partial --progress
-i, --itemize-changes output a change-summary for all updates
--out-format=FORMAT output updates using the specified FORMAT
--log-file=FILE log what we’re doing to the specified FILE
--log-file-format=FMT log updates using the specified FMT
--password-file=FILE read daemon-access password from FILE
--list-only list the files instead of copying them
--bwlimit=KBPS limit I/O bandwidth; KBytes per second
--write-batch=FILE write a batched update to FILE
--only-write-batch=FILE like --write-batch but w/o updating dest
--read-batch=FILE read a batched update from FILE
--protocol=NUM force an older protocol version to be used
--iconv=CONVERT_SPEC request charset conversion of filenames
--checksum-seed=NUM set block/file checksum seed (advanced)
-4, --ipv4 prefer IPv4
-6, --ipv6 prefer IPv6
--version print version number
(-h) --help show this help (see below for -h comment)
Solved: How to create hardware RAID in Solaris
If you want to create hardware RAID in Solaris, you can use the raidctl utility.
- Let's create a mirrored RAID volume. Before you start, remember that raidctl will destroy all data on the disks, so be absolutely sure that you have selected the right disks.
# raidctl -c c0t0d0 c0t1d0
Creating RAID volume c0t0d0 will destroy all data on member disks, proceed (yes/no)? yes
...
Volume c0t0d0 is created successfully!
#
- Once you have created the RAID it can be in one of the four states below:
OPTIMAL – Disks are in sync and the volume is healthy.
SYNC – Disk syncing is in progress.
DEGRADED – One of the disks in the RAID has failed.
FAILED – One or both disks are lost and you have to recreate the volume.
- Let’s check the current status of our volume.
# raidctl -l c0t0d0
Volume                  Size    Stripe  Status   Cache  RAID
        Sub                     Size                    Level
                Disk
----------------------------------------------------------------
c0t0d0                  136.6G  N/A     SYNC     OFF    RAID1
                0.0.0   136.6G          GOOD
                0.1.0   136.6G          GOOD
- In the above output we can see that the sync is still in progress, while in the output below the disks are in sync and the status is OPTIMAL. We can also see that it is RAID 1, which is mirroring.
# raidctl -l c0t0d0
Volume                  Size    Stripe  Status   Cache  RAID
        Sub                     Size                    Level
                Disk
----------------------------------------------------------------
c0t0d0                  136.6G  N/A     OPTIMAL  OFF    RAID1
                0.0.0   136.6G          GOOD
                0.1.0   136.6G          GOOD
If you want to change the system board of a system with hardware RAID, you will have to reactivate the volumes after the hardware replacement. Refer to this post for the steps to follow for hardware replacement and reactivating the RAID.
Solved: How to use nmcli for adding new IP and managing network in Redhat Linux
In this post we will see how to use nmcli for adding and configuring an IP. To do the same activity with nmtui, refer to this post.
"nmcli" is a command line interface. Benefit with both nmtui and nmcli is that you don't have to update the network configuration files manually. All the required changes are done by nmtui and nmcli themselves.
Good thing with nmcli is that you don't have to install any extra package like nmtui. nmcli is generally included with NetworkManager.
Connection name:- cldvds-2
Network Interface:- enp0s3
IP/Netmask:- 192.168.99.103/24
Gateway:- 192.168.99.1
Yes we can see the new IP up and running
If you want to add an IPv6 IP to the interface you can do it with ip6 and gw6 options.
"nmcli" is a command line interface. Benefit with both nmtui and nmcli is that you don't have to update the network configuration files manually. All the required changes are done by nmtui and nmcli themselves.
A good thing about nmcli is that you don't have to install any extra package as you do for nmtui; nmcli is generally included with NetworkManager.
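A quick way to confirm that nmcli is available on your system is to check its version:
nmcli --version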
- Let's create our new connection.
[root@cloudvedas ~]# nmcli con add type ethernet con-name cldvds-2 ifname enp0s3 ip4 192.168.99.103/24 gw4 192.168.99.1
Connection 'cldvds-2' (33de1c00-4ab4-4777-0a43-c004f0bd47ff) successfully added.
[root@cloudvedas ~]#
In the above example we are using these parameters:-
Connection name:- cldvds-2
Network Interface:- enp0s3
IP/Netmask:- 192.168.99.103/24
Gateway:- 192.168.99.1
- Check in nmcli if our new connection is present.
[root@cloudvedas ~]# nmcli con show
NAME            UUID                                  TYPE            DEVICE
System enp0s8   00cb8200-feb0-44b7-a378-3fdc720e0bc7  802-3-ethernet  enp0s8
enp0s3          1a03478c-0307-4f23-a7fa-247ad74c37bf  802-3-ethernet  --
cldvds-1        303ccf07-e77c-4770-b787-370407f73edc  802-3-ethernet  enp0s3
cldvds-2        33de1c00-4ab4-4777-0a43-c004f0bd47ff  802-3-ethernet  --
[root@cloudvedas ~]#
We can see that the new connection cldvds-2 has been created.
- Activate the new connection
nmcli con up cldvds-2
- If you are following on from the last post, we will first deactivate our old connection cldvds-1 and then activate our new connection cldvds-2.
[root@cloudvedas network-scripts]# nmcli con down cldvds-1 ; nmcli con up cldvds-2
- Let's check if our IP got plumbed using the command "ip addr".
Yes, we can see the new IP up and running.
- As I said earlier, "nmcli" automatically updates the config file. You can find the file in the directory "/etc/sysconfig/network-scripts". Cross-check the contents of the file:
cd /etc/sysconfig/network-scripts
cat ifcfg-cldvds-2
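For reference, the generated file typically contains key/value pairs like the sketch below; the exact keys vary by RHEL/NetworkManager version, so treat this as a representative example built from the values we used above, not the literal file contents:
TYPE=Ethernet
BOOTPROTO=none
NAME=cldvds-2
DEVICE=enp0s3
ONBOOT=yes
IPADDR=192.168.99.103
PREFIX=24
GATEWAY=192.168.99.1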
You can also modify the connection properties. Let's say you want to add a DNS server to the connection we created.
[root@cloudvedas ~]# nmcli con mod cldvds-2 ipv4.dns "192.168.99.254"
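Keep in mind that nmcli con mod only updates the stored profile; the change takes effect the next time the connection is activated, so reapplying it is a sensible follow-up (using the connection we created above):
nmcli con up cldvds-2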
If you want to assign an IP using DHCP, you can create a new DHCP connection.
[root@cloudvedas ~]# nmcli con add type ethernet con-name dhcp-1 ifname enp0s3
You can activate the new DHCP connection. (Be careful: if you are connected over the same interface that you are changing, you will lose the connection.)
nmcli con down cldvds-2 ; nmcli con up dhcp-1
Bonus:-
If you want to add an IPv6 address to the interface, you can do it with the ip6 and gw6 options.
[root@cloudvedas ~]# nmcli con add type ethernet con-name cldvds-ip6 ifname enp0s3 ip4 192.168.99.104/24 gw4 192.168.99.1 ip6 fe80::cafe gw6 2001:db8::1
Connection 'cldvds-ip6' (83fb7f17-fc2f-424c-b446-33c929abcb56) successfully added.
Activate the connection
[root@cloudvedas ~]# nmcli con down cldvds-2 ; nmcli con up cldvds-ip6
If you no longer need a connection, you can delete it after deactivating it.
[root@cloudvedas ~]# nmcli con down cldvds-ip6
[root@cloudvedas ~]# nmcli con delete cldvds-ip6
Hope this post is helpful to you. Do let me know if you have any queries.
Solved: How to change hostname using nmtui or nmcli in redhat linux
In our last post we saw how to change the hostname manually or with hostnamectl.
In this post we will discuss how to change the hostname with the nmtui and nmcli tools.
nmtui
Let's first try with the nmtui tool.
- If you don’t have the nmtui tool installed, you can install it using yum
yum install NetworkManager-tui
- Invoke the nmtui interface from the command line by executing as root user
nmtui
- In the menu select “Set system hostname” .
- Write the new hostname which you want to keep. We want to change it to “prodvedas”.
- To make it effective restart the “hostnamed” service
systemctl restart systemd-hostnamed
- Now if you log in with a new session or do "su -" in the same session, you should see the new hostname.
nmcli
Let's try changing the hostname with the nmcli tool in Redhat Linux.
- To check the current hostname
nmcli general hostname
- If you want to change the hostname
nmcli general hostname prodvedas
- To make it effective restart hostnamed service
systemctl restart systemd-hostnamed
- Now if you log in with a new session or do "su -" in the same session, you should see the new hostname.
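You can also verify the change immediately in the current session; both commands simply read the configured hostname:
hostname
hostnamectl status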
If you are trying to change the hostname of an AWS EC2 Linux instance, the process is slightly different. Refer to this post to change the AWS EC2 instance hostname.