
Monday, May 9, 2011

ZFS Pool Versions

I was looking for an overview of the available ZFS pool versions. The OpenSolaris XWiki page of the ZFS group has a page for every version, but no complete overview.


For comparison, here is how the FreeBSD releases map to ZFS versions:

7.0+ - original ZFS import, ZFS v6; requires significant tuning for stable operation (no longer supported)
7.2 - still ZFS v6, improved memory handling, amd64 may need no memory tuning (no longer supported)
7.3+ - backport of new ZFS v13 code, similar to the 8.0 code
8.0 - new ZFS v13 code, lots of bug fixes - recommended over all past versions. (no longer supported)
8.1+ - ZFS v14
8.2+ - ZFS v15
9.0+ - ZFS v28


Here is a list of all pool versions released as of today, 9 May 2011.
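
If you just want to see which pool versions your own system supports, and which version a given pool currently runs, zpool can tell you directly. A minimal sketch (the pool name tank is only an example):

zpool upgrade -v          # list all pool versions this system supports, with descriptions
zpool get version tank    # show the version a specific pool is running
zpool upgrade tank        # upgrade the pool to the newest supported version

Keep in mind that an upgrade is one-way: an upgraded pool can no longer be imported on systems that only support older versions.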



This version includes support for improved 'zfs list' performance.

Pool version 31 is available in this release:

Nevada, build 150
The change record for the version 31 change is:

6980669 zfs rename -r is slow, causes appliance UI to hang



This version includes support for ZFS encryption.

Pool version 30 is available in this release:

Nevada, build 149
The change record for the version 30 change is:

4854202 ZFS data set encryption
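
On systems that support pool version 30, encryption is a per-dataset property that has to be chosen at creation time. A small sketch with hypothetical pool/dataset names:

zfs create -o encryption=on tank/secure    # prompts for a passphrase by default
zfs get encryption tank/secure             # verify the encryption property

An existing, unencrypted dataset cannot be switched to encryption later; the property is set only when the dataset is created.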



This version includes support for the RAID-Z/mirror hybrid allocator.

Pool version 29 is available in this release:

Nevada, build 148
The change record for the version 29 change is:

6977913 RAID-Z/mirror hybrid allocator



This version includes support for multiple virtual device replacements.

Pool version 28 is available in this release:

Nevada, build 147
The change record for the version 28 change is:

6782540 zpool cannot replace a replacing device



This version includes support for improved snapshot creation performance.

Pool version 27 is available in this release:

Nevada, build 145
The change record for the version 27 change is:

6844896 recursive snapshots take a long time



This version includes support for improved snapshot deletion performance.

Pool version 26 is available in this release:

Nevada, build 141
The change record for the version 26 change is:

6948890 snapshot deletion can induce pathologically long spa_sync() times



This version includes support for improved pool scrubbing and resilvering statistics.

Pool version 25 is available in this release:

Nevada, build 140
The change record for the version 25 change is:

6391915 provide interval arg to zpool status to monitor resilvering



This version includes support for system attributes.

Pool version 24 is available in this release:

Nevada, build 137
The change records for the version 24 change are:

6716117 ZFS needs native system attribute infrastructure
6516171 zpl symlinks should have their own object type



This version includes support for the slim ZIL.

Pool version 23 is available in this release:

Nevada, build 135
The change record for the version 23 change is:

6595532 ZIL is too talkative



This version includes support for zfs receive properties.

Pool version 22 is available in this release:

Nevada, build 128
The PSARC case for the version 22 change is:

PSARC/2009/510 ZFS Received Properties



This version includes support for ZFS deduplication properties.

Pool version 21 is available in this release:

Nevada, build 128
The PSARC case for the version 21 change is:

PSARC/2009/571 ZFS Deduplication Properties
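
Deduplication itself is switched on per dataset once the pool runs version 21 or later. A minimal sketch with hypothetical names:

zfs set dedup=on tank/data    # enable deduplication for new writes to this dataset
zpool get dedupratio tank     # show how much space dedup is saving pool-wide

Note that the dedup table wants to live in RAM/ARC, so plan memory accordingly.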



This version includes the zle compression algorithm that is needed to support the ZFS deduplication properties in ZFS pool version 21. Both pool versions are available in this release:

Nevada, build 128
The PSARC case for the version 20 change is:

PSARC/2009/571 ZFS Deduplication Properties



This version includes support for the following feature:

ZFS log device removal
This feature is available in:

Nevada, build 125
The related change record for the version 19 change is:

6574286 removing a slog doesn't work



This version includes support for the following feature:

ZFS snapshot holds
This feature is available in:

Nevada, build 121
The related change record for the version 18 change is:

6803121 want user-settable refcounts on snapshots
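
A snapshot hold prevents a snapshot from being destroyed until every hold on it is released. A short sketch with hypothetical names:

zfs hold keep tank/data@backup       # place a hold tagged "keep" on the snapshot
zfs holds tank/data@backup           # list the holds on the snapshot
zfs destroy tank/data@backup         # fails with "dataset is busy" while the hold exists
zfs release keep tank/data@backup    # release the hold; the snapshot can be destroyed again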



This version includes support for the following feature:

triple-parity RAID-Z
This feature is available in:

Nevada, build 120
The related change record for the version 17 change is:

6854612 triple-parity RAID-Z
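
Triple-parity is simply a new RAID-Z level when building a vdev. A hedged example with made-up disk names:

zpool create tank raidz3 c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0

A raidz3 vdev survives the loss of up to three disks at once.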



This version includes support for the following feature:

STMF property support
This feature is available in:

Nevada, build 116
The related bug for the version 16 change is:

6736004 ZFS volumes need an additional property for COMSTAR support



This version includes support for the following features:

userused@... and groupused@... properties
userquota@... and groupquota@... properties
These features are available in:

Nevada, build 114
The related bug and PSARC case for version 15 changes are:

6501037 want user/group quotas on ZFS
PSARC/2009/204 ZFS user/group quotas and space accounting
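
With version 15, quotas and space accounting work per user or group on a dataset. A minimal sketch (user, group and dataset names are made up):

zfs set userquota@alice=10G tank/home      # limit user alice to 10 GB on this dataset
zfs set groupquota@staff=100G tank/home    # limit the group staff to 100 GB
zfs get userused@alice tank/home           # show how much alice currently uses
zfs userspace tank/home                    # space and quota overview for all users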



This version includes support for the following feature:

passthrough-x aclinherit property support
This feature is available in:

OpenSolaris 2009.06, Nevada, build 103
The related bug and PSARC case for the version 14 change are:

6765166 Need to provide mechanism to optionally inherit ACE_EXECUTE
PSARC 2008/659 New ZFS "passthrough-x" ACL inheritance rules



This version includes support for the following features:

usedbysnapshots property
usedbychildren property
usedbyrefreservation property
usedbydataset property
These features are available in:

OpenSolaris 2008.11, Nevada, build 98
The related bug and PSARC case for version 13 change are:

6730799 want user properties on snapshots
PSARC/2008/518 ZFS space accounting enhancements



This version includes support for the following feature:

Snapshot properties
This feature is available in:

OpenSolaris 2008.11, Nevada build 96
The related bug for the version 12 change is:

6701797 want user properties on snapshot



This version includes support for the following feature:

Improved zpool scrub / resilver performance
This feature is available in:

OpenSolaris 2008.11, Nevada, build 94
The related bug for the version 11 change is:

6343667 scrub/resilver has to start over when a snapshot is taken
Note: this bug is fixed when using build 94 even with older pool versions. However, upgrading the pool can improve scrub performance when there are many file systems, snapshots, and clones.



This version includes support for the following feature:

Devices can be added to a storage pool as "cache devices." These devices provide an additional layer of caching between main memory and disk. Using cache devices provides the greatest performance improvement for random read-workloads of mostly static content.
This feature is available in Nevada, build 78.

The Solaris 10 10/08 release includes ZFS pool version 10, but support for cache devices is not included in this Solaris release.

The related bug for the version 10 change is:

6536054 second tier ("external") ARC
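
Cache devices (the "second tier" ARC, usually an SSD) are added to an existing pool like any other vdev. A small sketch with a hypothetical device name:

zpool add tank cache c1t5d0    # add an SSD as cache device
zpool iostat -v tank           # the cache device shows up in its own section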



This version includes support for the following features:

In addition to the existing ZFS quota and reservation features, this release includes dataset quotas and reservations that do not include descendent datasets, such as snapshots and clones, in the space consumption. ("zfs set refquota" and "zfs set refreservation".)
A reservation matching the size of the volume is automatically set when a non-sparse ZFS volume is created. This release provides an immediate reservation feature so that you can set a reservation on a non-sparse volume with enough space to take snapshots and modify the contents of the volume.
CIFS server support
These features are available in Nevada, build 77.

The related bugs for version 9 changes are:

6431277 want filesystem-only quotas
6483677 need immediate reservation
6617183 CIFS Service - PSARC 2006/715
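
The new "ref" variants limit or reserve only the space referenced by the dataset itself, so snapshots and clones do not count against it. A minimal sketch with hypothetical dataset names:

zfs set refquota=50G tank/home/alice          # quota on the dataset's own data only
zfs set refreservation=20G tank/home/alice    # guaranteed space, snapshots not counted
zfs get refquota,refreservation tank/home/alice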



This version now supports the ability to delegate zfs(1M) administrative tasks to ordinary users.

This feature is available in:

Nevada, build 69
Solaris 10 10/08 release
The related bug for the version 8 change is:

6349470 investigate non-root restore/backup
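
Delegation is done with zfs allow / zfs unallow. A short, hypothetical example:

zfs allow alice snapshot,mount,destroy tank/home/alice    # let alice manage her own snapshots
zfs allow tank/home/alice                                  # show the current delegations
zfs unallow alice destroy tank/home/alice                  # revoke a single permission again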



This version includes support for the following feature:

The ZFS Intent Log (ZIL) satisfies the need of some applications to know that the data they changed is on stable storage when a system call returns. The Intent Log holds records of those system calls, and they are replayed if the system loses power or panics before they have been committed to the main pool. When the Intent Log is allocated from the main pool, it allocates blocks that chain through the pool. This version adds the capability to specify a separate Intent Log device or devices.

This feature is available in:

Nevada, build 68
Solaris 10 10/08 release
The related bug for the version 7 change is:

6339640 Make ZIL use NVRAM when available
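
A separate (ideally fast, non-volatile) log device is added with the log keyword; it can also be mirrored. A sketch with made-up device names:

zpool add tank log c2t0d0                  # single dedicated log device
zpool add tank log mirror c3t0d0 c3t1d0    # or a mirrored log pair instead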



This version includes support for the following feature:

'bootfs' pool property
This feature is available in:

Nevada, build 62
Solaris 10 10/08 release
The related bugs for version 6 changes are as follows:

4929890 ZFS boot support for the x86 platform
6479807 pools need properties
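
The bootfs property just records which dataset the system should boot from. A hypothetical example for a root pool:

zpool set bootfs=rpool/ROOT/mybe rpool    # point the pool at a boot environment dataset
zpool get bootfs rpool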



This version includes support for the following feature:

gzip compression for ZFS datasets
This feature is available in:

Nevada, build 62
Solaris 10 10/08 release
The related bug for the version 5 changes is:

6536606 gzip compression for ZFS
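
gzip compression is enabled per dataset, optionally with an explicit level (gzip-1 through gzip-9). For example, with a hypothetical dataset:

zfs set compression=gzip tank/data      # default gzip level
zfs set compression=gzip-9 tank/data    # maximum compression, costs more CPU
zfs get compressratio tank/data         # check how well the data compresses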



This version includes support for the following feature:

zpool history
This feature is available in:

Nevada, build 62
Solaris 10 8/07 release
The related bugs for version 4 changes are as follows:

6529406 zpool history needs to bump the on-disk version
6343741 want to store a command history on disk
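
From version 4 on, every zpool/zfs command that changes pool state is recorded on disk and can be replayed later:

zpool history tank       # show the command history of the pool
zpool history -l tank    # long format, including user and hostname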



This version includes support for the following features:

Hot spares
Double-parity RAID-Z (raidz2)
Improved RAID-Z accounting
These features are available in:

Nevada, build 42
Solaris 10 11/06 release, (build 3)
The related bugs for version 3 changes are as follows:

6405966 Hot Spare support in ZFS
6417978 double parity RAID-Z a.k.a. RAID6
6288488 du reports misleading size on RAID-Z
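
Hot spares and double-parity RAID-Z are both specified when the pool is created (spares can also be added later with zpool add). A hedged example with made-up disk names:

zpool create tank raidz2 c1t0d0 c1t1d0 c1t2d0 c1t3d0 spare c1t4d0
zpool status tank    # shows the spare, which can take over when a disk fails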



This version includes support for "Ditto Blocks", or replicated metadata. Due to the tree-like structure of the ZFS on-disk format, an uncorrectable error in a leaf block may be relatively benign, while an uncorrectable error in pool metadata can result in an unopenable pool. This feature introduces automatic replication of metadata (up to 3 copies of each block) independent of any underlying pool-wide redundancy. For example, on a pool with a single mirror, the most critical metadata will appear three times on each side of the mirror, for a total of six copies. This ensures that while user data may be lost due to corruption, all data in the pool will be discoverable and the pool will still be usable. This will be expanded in the future to allow user data replication on a per-dataset basis.

This feature integrated on 4/10/06 with the following bug fix:

6410698 ZFS metadata needs to be more highly replicated (ditto blocks)

This feature is available in:

Nevada, build 38
Solaris 10 10/06 release (build 09)



This is the initial ZFS on-disk format as integrated on 10/31/05. During the next six months of internal use, there were a few on-disk format changes that did not result in a version number change, but resulted in a flag day since earlier versions could not read the newer changes. The first official releases supporting this version are:

Nevada, build 36
Solaris 10 6/06 release
Earlier releases may not support this version, despite being formatted with the same on-disk number. This is due to:

6389368 fat zap should use 16k blocks (with backwards compatibility)
6390677 version number checking makes upgrades challenging

Monday, June 7, 2010

A Closer Look at ZFS, Vdevs and Performance

Constantin gives a great introduction to ZFS, vdevs and performance here.

Tuesday, May 4, 2010

Lots of information about ZFS and storage

Saturday, December 5, 2009

The NEW Future of FreeNAS...

Olivier left me a comment here...

The FreeNAS project will continue to use FreeBSD in the future! This sounds excellent! Please read the original message on the blog and forum post.

Here is a copy of the message:

Hi all,

FreeNAS needs some big modifications to remove its present limitations (one of the biggest is the lack of support for easily adding user add-ons).
We think that a full rewrite of the FreeNAS base is needed. Starting from this idea, we will take two different paths:

  • Volker will create a new project called "OpenMediaVault", based on GNU/Linux, using all the experience he acquired during the nights and weekends spent improving FreeNAS over the last two years. He will still continue to work on FreeNAS (and try to share his time between the two projects).
  • And, a great surprise: iXsystems, a company specializing in professional FreeBSD services, offers to take FreeNAS under their wing as an open source, community-driven project. This means that they will involve their professional FreeBSD developers in FreeNAS! Their manpower will make a full rewrite of FreeNAS possible.
Personally, I am coming back to work actively on FreeNAS and will begin upgrading it to FreeBSD 8.0 (which is "production ready" for ZFS).

So we will see 'more' developers working on FreeNAS. Excellent!

Monday, November 23, 2009

Mac OSX Time Machine and FreeNAS 0.7

Since FreeNAS 0.7 it is easy to configure Time Machine to use an AFP share (like Apple's Time Capsule)... Here is a short howto :-)

First step: Configure FreeNAS

-> Enable AFP
-> Configure the share
  • Automatic disk discovery - Enable automatic disk discovery
  • Automatic disk discovery mode - Time Machine
Second step: Configure OS X Time Machine

-> Select System Preferences -> Time Machine
-> Select Backup Disk
-> and Authenticate
That's it!


Your FreeNAS will now work much like a Time Capsule. Enjoy! Details can be found here...

I've combined this with ZFS. With two clients and regular backups over the last two months, I now have a gzip-compressed volume holding 238 GByte. The compression ratio (compressratio) is 1.24x.

freenas:~# zfs get compressratio data0/timemachine
NAME               PROPERTY       VALUE  SOURCE
data0/timemachine  compressratio  1.24x  -


Instead of 295 GByte, only 238 GByte are used. IMHO this is great :-)


If you want to see how to do a complete restore of your system, please see my new blogpost "Mac OS X system restore using time machine and FreeNAS"

Friday, November 20, 2009

The Future of FreeNAS...

Please read here to get the latest news...

Read through this forum thread to see the current discussion. In short...

FreeNAS 0.8 will be based on Debian GNU/Linux! Volker (the core developer) started an intermediate project called CoreNAS. FreeNAS 0.8 will be based on that.

Here is a short list of pros by Volker:
- Text and graphical installer that can be customized. This means no more hand-written install scripts, which caused some problems in FreeNAS
- WOL works in Linux
- lmsensor - a WORKING sensor framework, a much-needed feature in FreeNAS to check the CPU/MB temps and fan speeds
- Better Samba performance
- Ability to implement HA features
- System can be updated via 'apt-get' or any other deb package manager
- Better driver support
- Maybe 'ZFS' over FUSE (there is already one commercial product available that uses this feature)
- NFS4
- ...

What does this mean to me? I definitely want an OS which supports native ZFS (ZFS on FUSE is not an option). I don't need an extra small footprint or all of these 'special' features that some FreeNAS users frequently requested (boinc, printserver...). I really appreciate the wonderful WebGUI of FreeNAS, but I'm not bound to it if necessary. As I have quite a lot of experience with Solaris, I will switch to OpenSolaris. It has all I need except AFP support, so I need to get more experience with Netatalk.
I am also really looking forward to the new features of ZFS like deduplication and crypto. If you are interested in my experiences, you will find some here...

Nevertheless I would like to say many, many thanks for this wonderful piece of software!

Friday, October 9, 2009

FreeNAS 0.7, ZFS snapshots and scrubbing

I've described here how I am doing snapshots. Cron and a little script work perfectly, but I also scrub the zpools from time to time. Unfortunately, the scrub never finishes :-(
The reason is described here: scrubbing or resilvering a pool starts over whenever a snapshot is taken.
So it is necessary to change the script that creates the snapshots.

snapshot_hourly.sh

#!/bin/sh

# If a scrub is running, don't do a snapshot (otherwise the scrub will restart at 0%)
pools=$(zpool list -H -o name)
for pool in $pools;
do
if zpool status -v $pool | grep -q "scrub in progress"; then
exit
fi
done

# Destroy the old snapshot and create the new
zfs destroy $1@hourly.`date "+%H"` > /dev/null 2>&1
zfs snapshot $1@hourly.`date "+%H"`

snapshot_daily.sh

#!/bin/sh

# If a scrub is running, don't do a snapshot (otherwise the scrub will restart at 0%)
pools=$(zpool list -H -o name)
for pool in $pools;
do
if zpool status -v $pool | grep -q "scrub in progress"; then
exit
fi
done

# Destroy the old snapshot and create the new
zfs destroy $1@daily.`date "+%a"` > /dev/null 2>&1
zfs snapshot $1@daily.`date "+%a"`

and snapshot_weekly.sh (please be aware that I use this script to keep snapshots for the last twelve weeks; change this if necessary...)

#!/bin/sh

# If a scrub is running, don't do a snapshot (otherwise the scrub will restart at 0%)
pools=$(zpool list -H -o name)
for pool in $pools;
do
if zpool status -v $pool | grep -q "scrub in progress"; then
exit
fi
done

# Destroy the oldest snapshot, rotate the other and create the new
zfs destroy $1@weekly.12 > /dev/null 2>&1
zfs rename $1@weekly.11 @weekly.12 > /dev/null 2>&1
zfs rename $1@weekly.10 @weekly.11 > /dev/null 2>&1
zfs rename $1@weekly.9 @weekly.10 > /dev/null 2>&1
zfs rename $1@weekly.8 @weekly.9 > /dev/null 2>&1
zfs rename $1@weekly.7 @weekly.8 > /dev/null 2>&1
zfs rename $1@weekly.6 @weekly.7 > /dev/null 2>&1
zfs rename $1@weekly.5 @weekly.6 > /dev/null 2>&1
zfs rename $1@weekly.4 @weekly.5 > /dev/null 2>&1
zfs rename $1@weekly.3 @weekly.4 > /dev/null 2>&1
zfs rename $1@weekly.2 @weekly.3 > /dev/null 2>&1
zfs rename $1@weekly.1 @weekly.2 > /dev/null 2>&1
zfs snapshot $1@weekly.1
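
Each of these scripts expects the dataset to snapshot as its only argument, so they can be driven from cron along these lines (the script paths and the schedule are just examples):

0 * * * *  /mnt/data0/scripts/snapshot_hourly.sh data0/timemachine
10 0 * * * /mnt/data0/scripts/snapshot_daily.sh data0/timemachine
20 0 * * 0 /mnt/data0/scripts/snapshot_weekly.sh data0/timemachine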

I've found a script from http://hype-o-thetic.com to run a scrub regularly. See his blogpost here...

Just for backup reasons, I post this script below...

#!/bin/bash

#VERSION: 0.2
#AUTHOR: gimpe
#EMAIL: gimpe [at] hype-o-thetic.com
#WEBSITE: http://hype-o-thetic.com
#DESCRIPTION: Created on FreeNAS 0.7RC1 (Sardaukar)
# This script will start a scrub on each ZFS pool (one at a time) and
# will send an e-mail or display the result when everything is completed.

#CHANGELOG
# 0.2: 2009-08-27 Code clean up
# 0.1: 2009-08-25 Make it work

#SOURCES:
# http://aspiringsysadmin.com/blog/2007/06/07/scrub-your-zfs-file-systems-regularly/
# http://www.sun.com/bigadmin/scripts/sunScripts/zfs_completion.bash.txt
# http://www.packetwatch.net/documents/guides/2009073001.php

# e-mail variables
FROM=from@devnull.com
TO=to@devnull.com
SUBJECT="$0 results"
BODY=""

# arguments
VERBOSE=0
SENDEMAIL=1
args=("$@")
for arg in "${args[@]}"; do
case $arg in
"-v" | "--verbose")
VERBOSE=1
;;
"-n" | "--noemail")
SENDEMAIL=0
;;
"-a" | "--author")
echo "by gimpe at hype-o-thetic.com"
exit
;;
"-h" | "--help" | *)
echo "
usage: $0 [-v --verbose|-n --noemail]
-v --verbose output display
-n --noemail don't send an e-mail with result
-a --author display author info (by gimpe at hype-o-thetic.com)
-h --help display this help
"
exit
;;
esac
done

# work variables
ERROR=0
SEP=" - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - "
RUNNING=1

# commands & configuration
ZPOOL=/sbin/zpool
PRINTF=/usr/bin/printf
MSMTP=/usr/local/bin/msmtp
MSMTPCONF=/var/etc/msmtp.conf

# print a message
function _log {
DATE="`date +"%Y-%m-%d %H:%M:%S"`"
# add message to e-mail body
BODY="${BODY}$DATE: $1\n"

# output to console if verbose mode
if [ $VERBOSE = 1 ]; then
echo "$DATE: $1"
fi
}

# find all pools
pools=$($ZPOOL list -H -o name)

# for each pool
for pool in $pools; do
# start scrub for $pool
_log "starting scrub on $pool"
zpool scrub $pool
RUNNING=1
# wait until scrub for $pool has finished running
while [ $RUNNING = 1 ]; do
# still running?
if $ZPOOL status -v $pool | grep -q "scrub in progress"; then
sleep 60
# not running
else
# finished with this pool, exit
_log "scrub ended on $pool"
_log "`$ZPOOL status -v $pool`"
_log "$SEP"
RUNNING=0
# check for errors
if ! $ZPOOL status -v $pool | grep -q "No known data errors"; then
_log "data errors detected on $pool"
ERROR=1
fi
fi
done
done

# change e-mail subject if there was error
if [ $ERROR = 1 ]; then
SUBJECT="${SUBJECT}: ERROR(S) DETECTED"
fi

# send e-mail
if [ $SENDEMAIL = 1 ]; then
$PRINTF "From:$FROM\nTo:$TO\nSubject:$SUBJECT\n\n$BODY" | $MSMTP --file=$MSMTPCONF -t
fi