
How to take a ZFS snapshot and roll back

In this post we will discuss how to take a ZFS snapshot and restore data using that snapshot.

If you want to take a snapshot of a ZFS filesystem, the syntax is simple:

zfs snapshot pool/filesystem@somename

Let’s take an example: we have a zpool named “cvpool” and it has a filesystem “cldvdsfs”.
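(If you want to reproduce this setup, the pool and filesystem can be created roughly as below. The disk name c0t1d0 is only an example, and the 500M quota/reservation is an assumption that matches the limits visible in the listings that follow.)

# zpool create cvpool c0t1d0        # disk name is illustrative
# zfs create cvpool/cldvdsfs
# zfs set quota=500M cvpool/cldvdsfs
# zfs set reservation=500M cvpool/cldvdsfs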

  • If we want to take a snapshot of this filesystem on the weekend, we will name the snapshot “sunday”.
zfs snapshot cvpool/cldvdsfs@sunday
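As a side note, if the pool held several filesystems, you could snapshot all of them in one shot with the recursive flag, which creates an @sunday snapshot for every dataset under cvpool:

zfs snapshot -r cvpool@sunday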
  • Now if you do a “zfs list” you should see the snapshot.
# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
cvpool                   500M  4.40G    22K  /cvpool
cvpool/cldvdsfs           22K   500M    22K  /cvpool/cldvdsfs
cvpool/cldvdsfs@sunday      0      -    22K  -
#
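Depending on the ZFS version, a plain “zfs list” may hide snapshots by default (on later releases this is controlled by the pool’s listsnapshots property). Listing them explicitly always works:

# zfs list -t snapshot
NAME                     USED  AVAIL  REFER  MOUNTPOINT
cvpool/cldvdsfs@sunday      0      -    22K  -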
  • Have a look at the contents of the filesystem. We can see five test files.
# cd /cvpool/cldvdsfs

# ls
test1 test2 test3 test4 test5
#

Rollback

  • If you want to do a rollback/restore of this snapshot on the filesystem, you can do it simply with:
zfs rollback pool/filesystem@somename
  • Let's give it a try by removing some files.
# rm test5
# rm test4
# ls
 test1 test2 test3
  • Now when we roll back, we should see all our data back.
# zfs rollback cvpool/cldvdsfs@sunday

# cd /cvpool/cldvdsfs
# ls
 test1 test2 test3 test4 test5
#

So we can see above that we got our removed files back.
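One caveat worth knowing: zfs rollback only rolls back to the most recent snapshot. If newer snapshots exist in between, the command fails unless you add the -r flag, which destroys those intervening snapshots, so use it with care:

# zfs rollback -r cvpool/cldvdsfs@sunday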

Remote backup

  • Let's try sending the snapshot to a different filesystem, or to an NFS filesystem mounted from a remote server. This is very useful for backup purposes, where the data is stored on a different server.
# zfs send cvpool/cldvdsfs@sunday > /remoteNFS/sunday.snap
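If the remote server also runs ZFS, you can skip the intermediate file entirely and pipe the stream over SSH straight into a receive on the other side. The hostname backuphost and the target dataset backuppool/cldvdsfs below are just examples:

# zfs send cvpool/cldvdsfs@sunday | ssh backuphost zfs receive backuppool/cldvdsfs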
  • Let's check the snapshot size:
# ls -lh /remoteNFS/sunday.snap
-rw-r--r-- 1 root root 14K Sep 23 07:52 sunday.snap
  • Zip the snapshot

You also have the option to gzip the snapshot to save space. In the example below the snapshot shrank by 94%.

# gzip -9 -v /remoteNFS/sunday.snap
 /remoteNFS/sunday.snap: 94.0% -- replaced with /remoteNFS/sunday.snap.gz
#

# ls -lh /remoteNFS/sunday.snap.gz
-rw-r--r-- 1 root root 899 Sep 23 07:52 sunday.snap.gz
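You can also compress on the fly while sending, so the uncompressed stream never touches the disk:

# zfs send cvpool/cldvdsfs@sunday | gzip -9 > /remoteNFS/sunday.snap.gz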
  • Now let's create a new ZFS filesystem “sunday” and try to restore the snapshot onto it.
# zfs create cvpool/sunday

# zfs list
NAME                     USED  AVAIL  REFER  MOUNTPOINT
cvpool                   500M  4.40G    22K  /cvpool
cvpool/cldvdsfs           22K   500M    22K  /cvpool/cldvdsfs
cvpool/cldvdsfs@sunday      0      -    22K  -
cvpool/sunday             21K  4.40G    21K  /cvpool/sunday
  • Currently our new filesystem has nothing in it.
# cd /cvpool/sunday
# ls
#

Unzip the snapshot

Let's unzip and restore the snapshot.

# gzip -d -c /remoteNFS/sunday.snap.gz | zfs receive -F cvpool/sunday

Note: when you restore the snapshot, the filesystem /cvpool/sunday should not be in use, otherwise you will get a “device busy” error.
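If you do hit the busy error, you can check what is holding the mount and unmount the filesystem before retrying. On Solaris that would look something like:

# fuser -c /cvpool/sunday        # shows processes using the filesystem
# zfs unmount cvpool/sunday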

  • We can now see our files in the sunday filesystem.
 # cd /cvpool/sunday
 # ls
 test1 test2 test3 test4 test5
 #
  • You can make the restored filesystem your main filesystem by renaming it.

So here we will first rename the current filesystem to cldvdsfs.old.

 # zfs rename cvpool/cldvdsfs cvpool/cldvdsfs.old
 # zfs list
 NAME                        USED  AVAIL  REFER  MOUNTPOINT
 cvpool                      500M  4.40G    24K  /cvpool
 cvpool/cldvdsfs.old          40K   500M    22K  /cvpool/cldvdsfs.old
 cvpool/cldvdsfs.old@sunday   18K      -    22K  -
 cvpool/sunday                22K  4.40G    22K  /cvpool/sunday
 cvpool/sunday@sunday           0      -    22K  -
  • Now we will make the filesystem that was restored from the sunday snapshot the main one.
 # zfs rename cvpool/sunday cvpool/cldvdsfs
 # zfs list
 NAME                        USED  AVAIL  REFER  MOUNTPOINT
 cvpool                      500M  4.40G    24K  /cvpool
 cvpool/cldvdsfs              40K  4.40G    22K  /cvpool/cldvdsfs
 cvpool/cldvdsfs@sunday       18K      -    22K  -
 cvpool/cldvdsfs.old          40K   500M    22K  /cvpool/cldvdsfs.old
 cvpool/cldvdsfs.old@sunday   18K      -    22K  -
 #
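Once you have verified the data in the new cldvdsfs, the old copy can be cleaned up. Note that "zfs destroy -r" is destructive and also removes the snapshot underneath it, so double-check the dataset name first:

 # zfs destroy -r cvpool/cldvdsfs.old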

Solved: How to grow or extend a ZFS filesystem in Solaris 10

Below are the steps to grow a ZFS filesystem:
  • Identify the zpool of the zfs filesystem.
df -h | grep -i sagufs
df -Z | grep -i sagufs
The above commands will give you the complete path of the filesystem and the zpool name, even if it's inside a zone.
  • Check that the pool doesn't have any errors.
root# zpool status sagu-zpool
  pool: sagu-zpool
 state: ONLINE
  scan: none requested
config:

        NAME                                     STATE     READ WRITE CKSUM
        sagu-zpool                               ONLINE       0     0     0
          c0t911602657A702A0004D339BDCF15E111d0  ONLINE       0     0     0
          c0t911602657A702A00BE158E94CF15E111d0  ONLINE       0     0     0
          c0t911602657A702A004CD071A9CF15E111d0  ONLINE       0     0     0

errors: No known data errors
  • Check the current size of the pool
root# zpool list sagu-zpool
NAME         SIZE  ALLOC   FREE  CAP  HEALTH  ALTROOT
sagu-zpool   249G   178G  71.1G  71%  ONLINE  -
  • Label the new LUN.
root# format c0t9007538111C02A004E73B39A155BE211d0
  • Add the LUN to the appropriate zpool. Be careful about the pool name.
root# zpool add sagu-zpool c0t9007538111C02A004E73B39A155BE211d0
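On Solaris 10 a plain data vdev cannot be removed from a pool once it has been added, so it's a good habit to do a dry run first with -n, which previews the resulting layout without changing anything:

root# zpool add -n sagu-zpool c0t9007538111C02A004E73B39A155BE211d0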
  • Now let's say we want to increase the filesystem from 100GB to 155GB. To increase the FS, first increase its quota:
root# zfs set quota=155G sagu-zpool/sagufs
  • Finally, increase the FS reservation:
root# zfs set reservation=155G sagu-zpool/sagufs
  • Now you should be able to see the increased space.
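To confirm, you can re-check the pool size and the dataset's quota and reservation (same names as in the steps above):

root# zpool list sagu-zpool
root# zfs get quota,reservation sagu-zpool/sagufs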