Thursday, December 18, 2008

Backup FreeNAS using rsync to an rsync server

It is a little bit difficult to find out how to back up data from a FreeNAS server to a different rsync server. In my case I have a QNAP TS-209 configured as an rsync server, and I want to back up my data to this NAS device once a day.

The trick is to use the LOCAL rsync configuration of FreeNAS. The rsync server is called 'soulcube' here.


I've configured four backup jobs. Here is the example for my photo share.


Source share - the_directory_you_want_to_backup
Destination share - servername::rsync_share
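
Under the hood, such a job boils down to an rsync call against the rsync daemon on the target. A minimal sketch (the local path and the module name 'photos' are just examples, adjust them to your shares):

rsync -av /mnt/photos/ soulcube::photos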

Tuesday, December 9, 2008

FreeNAS 0.7 (rev. 3953) and ZFS

I've tried to use ZFS with the latest nightly build of FreeNAS 0.7 (rev. 3953). Unfortunately I was not able to import my ZFS pool.

The error message just showed the following...

kernel: KLD zfs.ko: depends on opensolaris - not available

Currently there is no solution for this problem :-(

Tuesday, October 14, 2008

Benchmarking FreeNAS Write Performance

Jonas left me a comment here about strange write performance of his FreeNAS 0.7. 
Jonas said...

I'm having issues with performance on raidz for FreeNAS. Right now I have four 500GB disks in a raidz1 pool.

Trying to dd a 1GB file to my pool is only giving a write performance of 47MB/s. If I test a single disk with diskinfo -tv I get write speeds of 48-82MB/s.

After some private mail and some testing, we found out that this has something to do with the dd options Jonas used.

freenas:/mnt/Media# dd if=/dev/zero of=mytestfile.out bs=1000 count=1000000 
1000000+0 records in 
1000000+0 records out 
1000000000 bytes transferred in 21.231106 secs (47100702 bytes/sec)


Around 47 MB/s is not enough for such a capable system (AMD Athlon 64 X2 4200+, 2200 MHz, 2 GByte RAM, 4x 500 GB hard disks configured as RAIDZ1).

The speed of a single disk was measured with diskinfo:

freenas:/mnt/Media# diskinfo -tv ad4 
ad4 
512 # sectorsize 
500107862016 # mediasize in bytes (466G) 
976773168 # mediasize in sectors 
969021 # Cylinders according to firmware. 
16 # Heads according to firmware. 
63 # Sectors according to firmware. 
ad:S13TJ1MQ702083 # Disk ident. 
 
Seek times: 
Full stroke: 250 iter in 5.368466 sec = 21.474 msec 
Half stroke: 250 iter in 3.997566 sec = 15.990 msec 
Quarter stroke: 500 iter in 6.592644 sec = 13.185 msec 
Short forward: 400 iter in 2.437786 sec = 6.094 msec 
Short backward: 400 iter in 1.262446 sec = 3.156 msec 
Seq outer: 2048 iter in 0.251187 sec = 0.123 msec 
Seq inner: 2048 iter in 0.243935 sec = 0.119 msec 
Transfer rates: 
outside: 102400 kbytes in 1.245477 sec = 82217 kbytes/sec 
middle: 102400 kbytes in 1.417240 sec = 72253 kbytes/sec 
inside: 102400 kbytes in 2.351795 sec = 43541 kbytes/sec


This looks OK. So what's the problem?

It is the block size (bs) dd used to write the test file! It's just 1000 bytes!

Use larger values with a 'binary prefix' (see http://en.wikipedia.org/wiki/Binary_prefix), something like 512k, 1024k, 2048k, 4096k or 8192k.

Jonas repeated the test with the following results:

freenas:/mnt/Media# dd if=/dev/zero of=testfile bs=8192k count=100
100+0 records in
100+0 records out
838860800 bytes transferred in 5.715633 secs (146766032 bytes/sec)

freenas:/mnt/Media# dd if=/dev/zero of=testfile bs=4096k count=200
200+0 records in
200+0 records out
838860800 bytes transferred in 5.663804 secs (148109079 bytes/sec)

freenas:/mnt/Media# dd if=/dev/zero of=testfile bs=2048k count=200
200+0 records in
200+0 records out
419430400 bytes transferred in 3.429654 secs (122295248 bytes/sec)

freenas:/mnt/Media# dd if=/dev/zero of=testfile bs=1024k count=200
200+0 records in
200+0 records out
209715200 bytes transferred in 0.357848 secs (586045588 bytes/sec)  << Too little data gives a strange result, probably because of the disk cache.

freenas:/mnt/Media# dd if=/dev/zero of=testfile bs=512k count=1000
1000+0 records in
1000+0 records out
524288000 bytes transferred in 2.576585 secs (203481736 bytes/sec)

freenas:/mnt/Media# dd if=/dev/zero of=testfile bs=256k count=10000
10000+0 records in
10000+0 records out
2621440000 bytes transferred in 16.343732 secs (160394212 bytes/sec)

freenas:/mnt/Media# dd if=/dev/zero of=testfile bs=25k count=100000
100000+0 records in
100000+0 records out
2560000000 bytes transferred in 20.552446 secs (124559383 bytes/sec)
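
If you want to repeat such a sweep yourself, here is a minimal sketch of a shell loop. It keeps the total volume constant at 1 GiB per run so the numbers stay comparable; the test file path is just an example:

#!/bin/sh
# Sweep dd block sizes; each run writes ~1 GiB in total.
TESTFILE=/mnt/Media/ddtest.out
for run in "8192k 128" "4096k 256" "2048k 512" "1024k 1024" "512k 2048" "256k 4096"; do
    set -- $run
    echo "=== bs=$1 count=$2 ==="
    dd if=/dev/zero of=$TESTFILE bs=$1 count=$2
    rm -f $TESTFILE
done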


So, it depends on how you measure the performance ;-)

BackupPC on FreeNAS - Part 3 (Issues with Samba)

I've shown you in two posts how to install BackupPC on FreeNAS (BackupPC on FreeNAS & BackupPC on FreeNAS - Part 2).

It looks like there is a problem with Samba if you install it as described in my previous posts. The FreeNAS binaries are compiled with different path settings than the original packages from FreeBSD.

So if you do not need Samba, don't install the packages! Skip the following steps:

freenas:~# pkg_add -r net/samba-libsmbclient-3.0.28

freenas:~# pkg_add -r net/samba-nmblookup-3.0.28.tbz

freenas:~# pkg_add -r net/samba-3.0.28,1


If you need Samba, please be aware that you might see some problems!

One issue I've seen is that the /var/etc/private/smbpasswd file was not updated during boot. My workaround is to add the following to /etc/rc.d/smbpasswd:

(/bin/echo "${_password}"; /bin/echo "${_password}") | ${command} -c /var/etc/smb.conf -s -a "${_username}" > /dev/null

Can anyone confirm similar problems?

Sunday, October 5, 2008

Tuning FreeNAS & ZFS

I have been running FreeNAS 0.7 (rev. 3514) for several weeks now as my 'production' server at home, on a Mini-ITX Atom-based system (see FreeNAS 0.7 on an Intel D945GCLF).

I've done some tuning of FreeBSD and ZFS with good results (good performance, no panics, etc.).

Here is what I've done...

First of all it is important to 'tune' ZFS. I've seen some panics on my systems without these parameters. ZFS needs lots of RAM; I have 2 GB in my little server...

I've followed the FreeBSD ZFS Tuning Guide and added the following lines to the /boot/loader.conf file:

vm.kmem_size_max="1073741824"
vm.kmem_size="1073741824"
vfs.zfs.prefetch_disable=1
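
After a reboot you can check that a setting took effect, e.g.:

freenas:~# sysctl vm.kmem_size
vm.kmem_size: 1073741824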

I am not sure about vfs.zfs.prefetch_disable=1, but as I said, my experience with these settings is good.

Editing the /boot/loader.conf file is easy :-) Use your Web-GUI, go to -> Advanced -> Edit File -> enter /boot/loader.conf as the file path and hit Load. Add the lines to the file and hit the Save button.

Please be aware that these changes are not saved to the 'XML system configuration' (-> System -> Backup/Restore -> Download configuration).


FreeBSD tuning... I've simply used the 'Tuning' option of FreeNAS. You can find that in the Web-GUI as well:
Go to -> System -> Advanced -> check the Tuning box and hit Save.

And last but not least, there are some nice tuning variables to be found in the TCP Tuning Guide for FreeBSD (here http://acs.lbl.gov/TCP-tuning/FreeBSD.html or here http://fasterdata.es.net/TCP-tuning/FreeBSD.html).

Add these variables via -> System -> Advanced -> sysctl.conf

Here are the variables in detail:

# ups spinup time for drive recognition
hw.ata.to=15
# System tuning - Original -> 2097152
kern.ipc.maxsockbuf=16777216
# System tuning
kern.ipc.nmbclusters=32768
# System tuning
kern.ipc.somaxconn=8192
# System tuning
kern.maxfiles=65536
# System tuning
kern.maxfilesperproc=32768
# System tuning
net.inet.tcp.delayed_ack=0
# System tuning
net.inet.tcp.inflight.enable=0
# System tuning
net.inet.tcp.path_mtu_discovery=0
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.recvbuf_auto=1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.recvbuf_inc=16384
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.recvbuf_max=16777216
# System tuning
net.inet.tcp.recvspace=65536
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.rfc1323=1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.sendbuf_auto=1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.sendbuf_inc=8192
# System tuning
net.inet.tcp.sendspace=65536
# System tuning
net.inet.udp.maxdgram=57344
# System tuning
net.inet.udp.recvspace=65536
# System tuning
net.local.stream.recvspace=65536
# System tuning
net.local.stream.sendspace=65536
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.sendbuf_max=16777216
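
If you want to try one of these values at runtime before making it permanent, you can set it from the shell; sysctl prints the old and the new value:

freenas:~# sysctl net.inet.tcp.delayed_ack=0
net.inet.tcp.delayed_ack: 1 -> 0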

I would really appreciate any comments on these tuning variables!

Tuesday, September 2, 2008

Installing Debian on QNAP TS-209

What to do with a QNAP TS-209? Install a REAL Linux... Install Debian...


I have a QNAP TS-209 at home, but I am not really happy with it. One of my concerns is security: with the firmware I used (2.0.1 Build 0324T) there was a massive security hole.

The file .ssh/authorized_keys is overwritten on each reboot with a key from an unknown user (admin@Richard-TS209). See authorized_keys overwritten at reboot.


I am not sure if QNAP has fixed this issue in the new firmware, but it showed me that my QNAP is not the device where I want to store my private data.


Some time ago I read a post saying that Martin Michlmayr is working on a QNAP port of Debian. You can find more details about Martin on his website and on Wikipedia.

Now there is an easy way to install Debian on your QNAP. See Martin's description. Read it carefully!


I want to show you my experiences with Debian on my QNAP TS-209.


The first step is to create a backup of the original firmware:


cd /share/HDA_DATA/public

cat /dev/mtdblock1 > mtd1

cat /dev/mtdblock2 > mtd2
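
It can't hurt to record checksums of the two dumps as well, so you can verify the copies later (assuming md5sum is available in the firmware):

md5sum mtd1 mtd2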


Save both of these files to a USB stick or download them to your workstation!


Now download the required installer images:


wget http://people.debian.org/~joeyh/d-i/armel/images/daily/orion5x/netboot/qnap/ts-209/flash-debian

wget http://people.debian.org/~joeyh/d-i/armel/images/daily/orion5x/netboot/qnap/ts-209/initrd.gz

wget http://people.debian.org/~joeyh/d-i/armel/images/daily/orion5x/netboot/qnap/ts-209/kernel


Run the following script to write the kernel and the initrd.gz to flash:


sh flash-debian


This will take some time...


Writing debian-installer to flash... done.

Please reboot your QNAP device.


When the TS-209 has rebooted, you can log in via ssh. The installer configures the IP address via DHCP, or falls back to 192.168.1.100 if no DHCP server is available.

ssh installer@192.168.1.238 (password: install)

Here are the screenshots of the installer
Choose a mirror

Choose your language

I want to use the manual partitioning method

After setting up the partitions and creating the filesystems, the installer starts the installation of the base system.
Setting up users and passwords

I just want to install the standard software and configure the system later

It will take some time to install the software

Flash memory will be configured

The installation is complete!

After the reboot you are able to log in via ssh...

Wednesday, August 20, 2008

FreeNAS 0.7 on an Intel D945GCLF

The Intel D945GCLF will be my new home server. Currently I am using a QNAP TS-209, but it is not optimal for my requirements. Since I often work with big files (videos from my HD camera) and have lots of photos, the QNAP is a little bit too slow.

I've installed FreeNAS 0.7 AMD64 (revision 3514) on that motherboard.

CPU: Intel Atom processor 230 @ 1.6GHz (standard)
Mem: 2 GB RAM (Kingston KVR667DSN5/2G)
Harddisk: 1x SAMSUNG HD103UJ (configured as a ZPOOL)
Network: Intel Pro/1000 GT

The HELIOS LAN Test showed up to 45-55 MByte/s read and write...

Friday, August 15, 2008

FreeNAS 0.7 and ZFS snapshots

Currently it is not possible to use the ZFS snapshot functionality via the Web-GUI, but you can use a script and cron to schedule snapshots.
You have to think about a schedule for the snapshots. I want to save my home directories every hour (and keep these snapshots for 24 hours). I'd also like a daily and a weekly snapshot (the daily snapshots are kept for 7 days, the weeklies for 4 weeks). After that time the snapshots are overwritten.

First you need a place where you can store the script(s) that will create the snapshots.

I'm using a ZFS dataset called 'datapool/opt'. In its 'bin' directory I have three scripts:


snapshot_hourly.sh
#!/bin/sh
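# $1 is the dataset; drop this hour's old snapshot, then take a fresh one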
zfs destroy $1@hourly.`date "+%H"` > /dev/null 2>&1
zfs snapshot $1@hourly.`date "+%H"`

snapshot_daily.sh
#!/bin/sh
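# $1 is the dataset; drop this weekday's old snapshot, then take a fresh one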
zfs destroy $1@daily.`date "+%a"` > /dev/null 2>&1
zfs snapshot $1@daily.`date "+%a"`

snapshot_weekly.sh

#!/bin/sh
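# $1 is the dataset; drop the oldest weekly snapshot and shift the others up by one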
zfs destroy $1@weekly.4 > /dev/null 2>&1
zfs rename $1@weekly.3 @weekly.4 > /dev/null 2>&1
zfs rename $1@weekly.2 @weekly.3 > /dev/null 2>&1
zfs rename $1@weekly.1 @weekly.2 > /dev/null 2>&1
zfs snapshot $1@weekly.1

Be aware that these scripts must be executable!


freenas:/mnt/datapool/opt/bin# chmod 744 snapshot_*

Now you have to schedule the cron-jobs. You should do this via the Web-GUI. Go to -> System -> Advanced -> Cron

Command: /mnt/datapool/opt/bin/snapshot_hourly.sh datapool/home
User: root
Description: Hourly snapshot of datapool/home
Schedule time:

minutes -> 0
hours -> All
days -> All
months -> All
week days -> All

Command: /mnt/datapool/opt/bin/snapshot_daily.sh datapool/home
User: root
Description: Daily snapshot of datapool/home @ 20:00
Schedule time:
minutes -> 0
hours -> 20
days -> All
months -> All
week days -> All

Command: /mnt/datapool/opt/bin/snapshot_weekly.sh datapool/home
User: root
Description: Weekly snapshot of datapool/home Sun @ 20:00
Schedule time:
minutes -> 0
hours -> 20
days -> All
months -> All
week days -> Sunday

After some time you will see something like this:

freenas:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
...
datapool/home 14.9G 266G 21K /mnt/datapool/home
datapool/home@daily.Sat 0 - 21K -
datapool/home@weekly.2 0 - 21K -
datapool/home@daily.Fri 0 - 21K -
datapool/home@daily.Sun 0 - 21K -
datapool/home@weekly.1 0 - 21K -
datapool/home@daily.Mon 0 - 21K -
datapool/home@daily.Tue 0 - 21K -
datapool/home@daily.Wed 0 - 21K -
datapool/home@hourly.14 0 - 21K -
datapool/home@hourly.15 0 - 21K -
datapool/home@hourly.16 0 - 21K -
datapool/home@hourly.17 0 - 21K -
datapool/home@hourly.18 0 - 21K -
datapool/home@hourly.19 0 - 21K -
datapool/home@hourly.20 0 - 21K -
datapool/home@daily.Thu 0 - 21K -
datapool/home@hourly.21 0 - 21K -
datapool/home@hourly.22 0 - 21K -
datapool/home@hourly.23 0 - 21K -
datapool/home@hourly.00 0 - 21K -
datapool/home@hourly.01 0 - 21K -
datapool/home@hourly.02 0 - 21K -
datapool/home@hourly.03 0 - 21K -
datapool/home@hourly.04 0 - 21K -
datapool/home@hourly.05 0 - 21K -
datapool/home@hourly.06 0 - 21K -
datapool/home@hourly.07 0 - 21K -
datapool/home@hourly.08 0 - 21K -
datapool/home@hourly.09 0 - 21K -
datapool/home@hourly.10 0 - 21K -
datapool/home@hourly.11 0 - 21K -
datapool/home@hourly.12 0 - 21K -
datapool/home@hourly.13 0 - 21K -
...

How it works...

I've created a 10MB test file:

freenas:/mnt/datapool/home# dd if=/dev/zero of=testfile bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes transferred in 0.159408 secs (65779346 bytes/sec)
freenas:/mnt/datapool/home# ls -l
total 10237
-rw-r--r-- 1 root wheel 10485760 Aug 15 15:13 testfile
freenas:/mnt/datapool/home# df -h
Filesystem Size Used Avail Capacity Mounted on
...
datapool/home 5.3G 10M 5.3G 0% /mnt/datapool/home

Now a snapshot is created... (either via cron or by running the script manually -> freenas:/mnt/datapool/home# /mnt/datapool/opt/bin/snapshot_hourly.sh datapool/home)

freenas:/mnt/datapool/home# zfs list
NAME USED AVAIL REFER MOUNTPOINT
...
datapool/home 10.0M 5.34G 10.0M /mnt/datapool/home
datapool/home@hourly.15 0 - 10.0M -
...

Now the file is deleted...

freenas:/mnt/datapool/home# rm testfile
freenas:/mnt/datapool/home# zfs list
NAME USED AVAIL REFER MOUNTPOINT
...
datapool/home 10.0M 5.34G 26.9K /mnt/datapool/home
datapool/home@hourly.15 10.0M - 10.0M -

As you can see, the snapshot grew by 10 MByte. But the file is gone...

freenas:/mnt/datapool/home# ls -al
total 4
drwxr-xr-x 2 root wheel 2 Aug 15 15:16 .
drwxrwxrwx 5 root wheel 5 Aug 15 13:58 ..

But how can you access a snapshot? Change to the .zfs/snapshot directory; there you can see the snapshots.

freenas:/mnt/datapool/home# cd .zfs/snapshot
freenas:/mnt/datapool/home/.zfs/snapshot# ls
...
hourly.15
...

freenas:/mnt/datapool/home/.zfs/snapshot# cd hourly.15
freenas:/mnt/datapool/home/.zfs/snapshot/hourly.15# ls -l
total 10237
-rw-r--r-- 1 root wheel 10485760 Aug 15 15:13 testfile

Be aware: snapshots are read-only. But you can copy the file back to its origin, or wherever you want.
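
For example, to restore the deleted test file from the snapshot:

freenas:/mnt/datapool/home/.zfs/snapshot/hourly.15# cp testfile /mnt/datapool/home/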

Tuesday, August 12, 2008

How to resize ZFS

Please also read the update to this post: How to resize ZFS - Part 2 (the real world)!

One of the frequently asked questions regarding ZFS: is it possible to resize a RAIDZ, RAIDZ2 or mirrored ZPOOL?

The answer is a little bit complicated...

If you want to change the 'geometry' of the ZPOOL (for example: change a mirrored pool to a raidz, simply add a disk to a raidz, or change a raidz to a raidz2), then the answer is no.

But it is possible to replace the disks of a pool with bigger ones and use the additional space.

Here is what I've tested with FreeNAS 0.7 (rev 3514) installed as a virtual machine.

I've used four 1 GByte HDs and four 2 GByte HDs. My mission was to grow a raidz from 4 GByte (3 GByte usable) to around 8 GByte (6 GByte usable). The initial setup was one raidz called 'datapool' built from the four 1 GByte HDs. The disks da0, da1, da2 and da3 are the 1 GByte drives; da4, da5, da6 and da7 are the 2 GByte drives.
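
In short, the whole procedure is one 'zpool replace' per disk, waiting for the resilver to finish before touching the next disk. A condensed sketch (pool and disk names match my test setup; the polling loop is just a crude example):

#!/bin/sh
# Swap each 1 GByte disk for a 2 GByte disk, one at a time.
for pair in "da0 da4" "da1 da5" "da2 da6" "da3 da7"; do
    set -- $pair
    zpool replace datapool $1 $2
    # wait until the resilver of this disk has finished
    while zpool status datapool | grep -q "resilver in progress"; do
        sleep 30
    done
done

Step by step it looks like this.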


Replace the first disk:

freenas:~# zpool replace datapool da0 da4

Check the status:

freenas:~# zpool status -v
pool: datapool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 16.44% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
replacing ONLINE 0 0 0
da0 ONLINE 0 0 0
da4 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

It will take a while until the pool is completely resilvered.

freenas:~# zpool status -v
pool: datapool
state: ONLINE
scrub: resilver completed with 0 errors on Tue Aug 12 16:03:34 2008
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

Proceed with the next disk...

freenas:~# zpool replace datapool da1 da5
freenas:~# zpool status -v
pool: datapool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 7.86% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
replacing ONLINE 0 0 0
da1 ONLINE 0 0 0
da5 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

freenas:~# zpool status -v
pool: datapool
state: ONLINE
scrub: resilver completed with 0 errors on Tue Aug 12 16:05:34 2008
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

Next one...

freenas:~# zpool replace datapool da2 da6
freenas:~# zpool status -v
pool: datapool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 6.01% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
replacing ONLINE 0 0 0
da2 ONLINE 0 0 0
da6 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

You can also monitor the progress with this command:

freenas:~# zpool iostat -v 5
capacity operations bandwidth
pool used avail read write read write
------------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 16 13 1.94M 912K
raidz1 715M 3.27G 16 13 1.94M 912K
da4 - - 11 11 968K 752K
da5 - - 5 20 440K 1.40M
replacing - - 0 45 0 1.56M
da2 - - 7 7 597K 399K
da6 - - 0 23 3.79K 1.60M
da3 - - 8 6 688K 313K
------------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
------------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 62 5 7.82M 19.6K
raidz1 715M 3.27G 62 5 7.82M 19.6K
da4 - - 10 0 893K 6.13K
da5 - - 23 1 1.87M 28.3K
replacing - - 0 67 0 2.62M
da2 - - 0 17 0 1.40M
da6 - - 0 33 0 2.64M
da3 - - 16 1 1.29M 26.2K
------------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
------------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 31 0 3.82M 2.29K
raidz1 715M 3.27G 31 0 3.82M 2.29K
da4 - - 38 1 3.09M 44.2K
da5 - - 26 1 2.09M 22.9K
replacing - - 0 31 0 1.28M
da2 - - 0 24 0 1.87M
da6 - - 0 17 0 1.30M
da3 - - 32 1 2.68M 22.9K
------------- ----- ----- ----- ----- ----- -----

freenas:~# zpool status -v
pool: datapool
state: ONLINE
scrub: resilver completed with 0 errors on Tue Aug 12 16:07:31 2008
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
da6 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

And this is the last one...

freenas:~# zpool replace datapool da3 da7
freenas:~# zpool status -v
pool: datapool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 3.02% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
da6 ONLINE 0 0 0
replacing ONLINE 0 0 0
da3 ONLINE 0 0 0
da7 ONLINE 0 0 0

errors: No known data errors

freenas:~# zpool iostat -v 5
capacity operations bandwidth
pool used avail read write read write
------------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 14 0 1.75M 0
raidz1 715M 3.27G 14 0 1.75M 0
da4 - - 13 0 1.12M 0
da5 - - 14 0 1.19M 510
da6 - - 13 0 1.13M 0
replacing - - 0 14 0 599K
da3 - - 0 19 0 1.58M
da7 - - 0 7 0 599K
------------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
------------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 68 0 8.51M 4.49K
raidz1 715M 3.27G 68 0 8.51M 4.49K
da4 - - 15 0 1.26M 1.70K
da5 - - 6 0 546K 1.20K
da6 - - 18 0 1.51M 1.40K
replacing - - 0 68 0 2.84M
da3 - - 0 21 0 1.75M
da7 - - 0 35 0 2.84M
------------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 21 11 2.62M 696K
raidz1 715M 3.27G 21 11 2.62M 696K
da4 - - 15 7 1.22M 430K
da5 - - 13 9 1.11M 583K
da6 - - 9 12 825K 834K
da7 - - 0 21 1007 1.59M
---------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 0 0 0 0
raidz1 715M 3.27G 0 0 0 0
da4 - - 0 0 0 0
da5 - - 0 0 0 0
da6 - - 0 0 0 0
da7 - - 0 0 0 0
---------- ----- ----- ----- ----- ----- -----

freenas:~# zpool status -v
pool: datapool
state: ONLINE
scrub: resilver completed with 0 errors on Tue Aug 12 16:09:45 2008
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
da6 ONLINE 0 0 0
da7 ONLINE 0 0 0

errors: No known data errors

Finally it is necessary to reboot:

freenas:~# reboot

And here you can see the result: it is possible to resize a ZPOOL if you have bigger disks...

freenas:~# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
datapool 7.97G 715M 7.27G 8% ONLINE -