Wednesday, August 20, 2008

FreeNAS 0.7 on an Intel D945GCLF

The Intel D945GCLF will be my new home server. Currently I am using a QNAP TS-209, but it is not optimal for my requirements. Since I often work with big files (videos from my HD camera) and I have lots of photos, the QNAP is a little bit too slow. 

I've installed FreeNAS 0.7 AMD64 (revision 3514) on that Motherboard.

CPU: Intel Atom processor 230 @ 1.6GHz (standard)
Mem: 2 GB RAM (Kingston KVR667DSN5/2G)
Harddisk: 1x SAMSUNG HD103UJ (configured as a ZPOOL)
Network: Intel Pro/1000 GT

The HELIOS LAN Test showed me up to 45-55 MByte/s Read and Write...

Friday, August 15, 2008

FreeNAS 0.7 and ZFS snapshots

Currently it is not possible to use the ZFS snapshot functionality via the Web-GUI, but you can use a script and cron to schedule snapshots.
You have to think about a schedule for the snapshots. I want to save my home directories every hour (and keep these snapshots available for 24 hours). I'd also like a daily and a weekly snapshot (the daily snapshots are kept for 7 days, the weeklies for 4 weeks). After that time the snapshots are overwritten.

First you need a place where you can store the script(s) that will create the snapshots.

I'm using a ZFS dataset called 'datapool/opt'. Under the directory 'bin' I'll have three scripts:


snapshot_hourly.sh
#!/bin/sh
# Usage: snapshot_hourly.sh <dataset>
# Destroy yesterday's snapshot for this hour (if any), then take a new one.
HOUR=`date "+%H"`
zfs destroy "$1@hourly.$HOUR" > /dev/null 2>&1
zfs snapshot "$1@hourly.$HOUR"

snapshot_daily.sh
#!/bin/sh
# Usage: snapshot_daily.sh <dataset>
# Destroy last week's snapshot for this weekday (if any), then take a new one.
DAY=`date "+%a"`
zfs destroy "$1@daily.$DAY" > /dev/null 2>&1
zfs snapshot "$1@daily.$DAY"

snapshot_weekly.sh

#!/bin/sh
# Usage: snapshot_weekly.sh <dataset>
# Rotate: drop the oldest weekly snapshot, shift the others up, take a new one.
zfs destroy "$1@weekly.4" > /dev/null 2>&1
zfs rename "$1@weekly.3" @weekly.4 > /dev/null 2>&1
zfs rename "$1@weekly.2" @weekly.3 > /dev/null 2>&1
zfs rename "$1@weekly.1" @weekly.2 > /dev/null 2>&1
zfs snapshot "$1@weekly.1"

Be aware that these scripts must be executable!


freenas:/mnt/datapool/opt/bin# chmod 744 snapshot_*

Now you have to schedule the cron jobs. You should do this via the Web-GUI: go to System -> Advanced -> Cron.

Command: /mnt/datapool/opt/bin/snapshot_hourly.sh datapool/home
User: root
Description: Hourly snapshot of datapool/home
Schedule time:

minutes -> 0
hours -> All
days -> All
months -> All
week days -> All

Command: /mnt/datapool/opt/bin/snapshot_daily.sh datapool/home
User: root
Description: Daily snapshot of datapool/home @ 20:00
Schedule time:
minutes -> 0
hours -> 20
days -> All
months -> All
week days -> All

Command: /mnt/datapool/opt/bin/snapshot_weekly.sh datapool/home
User: root
Description: Weekly snapshot of datapool/home Sun @ 20:00
Schedule time:
minutes -> 0
hours -> 20
days -> All
months -> All
week days -> Sunday
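For reference, the three GUI jobs above correspond roughly to these lines in crontab notation, run as root (just a sketch; on FreeNAS the Web-GUI writes the cron table itself, so configure the jobs there rather than editing files by hand):

```
# min  hour  dom  mon  dow   command
0      *     *    *    *     /mnt/datapool/opt/bin/snapshot_hourly.sh datapool/home
0      20    *    *    *     /mnt/datapool/opt/bin/snapshot_daily.sh  datapool/home
0      20    *    *    0     /mnt/datapool/opt/bin/snapshot_weekly.sh datapool/home
```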

After some time you will see something like this:

freenas:~# zfs list
NAME USED AVAIL REFER MOUNTPOINT
...
datapool/home 14.9G 266G 21K /mnt/datapool/home
datapool/home@daily.Sat 0 - 21K -
datapool/home@weekly.2 0 - 21K -
datapool/home@daily.Fri 0 - 21K -
datapool/home@daily.Sun 0 - 21K -
datapool/home@weekly.1 0 - 21K -
datapool/home@daily.Mon 0 - 21K -
datapool/home@daily.Tue 0 - 21K -
datapool/home@daily.Wed 0 - 21K -
datapool/home@hourly.14 0 - 21K -
datapool/home@hourly.15 0 - 21K -
datapool/home@hourly.16 0 - 21K -
datapool/home@hourly.17 0 - 21K -
datapool/home@hourly.18 0 - 21K -
datapool/home@hourly.19 0 - 21K -
datapool/home@hourly.20 0 - 21K -
datapool/home@daily.Thu 0 - 21K -
datapool/home@hourly.21 0 - 21K -
datapool/home@hourly.22 0 - 21K -
datapool/home@hourly.23 0 - 21K -
datapool/home@hourly.00 0 - 21K -
datapool/home@hourly.01 0 - 21K -
datapool/home@hourly.02 0 - 21K -
datapool/home@hourly.03 0 - 21K -
datapool/home@hourly.04 0 - 21K -
datapool/home@hourly.05 0 - 21K -
datapool/home@hourly.06 0 - 21K -
datapool/home@hourly.07 0 - 21K -
datapool/home@hourly.08 0 - 21K -
datapool/home@hourly.09 0 - 21K -
datapool/home@hourly.10 0 - 21K -
datapool/home@hourly.11 0 - 21K -
datapool/home@hourly.12 0 - 21K -
datapool/home@hourly.13 0 - 21K -
...

How it works...

I've created a 10 MB testfile:

freenas:/mnt/datapool/home# dd if=/dev/zero of=testfile bs=1024k count=10
10+0 records in
10+0 records out
10485760 bytes transferred in 0.159408 secs (65779346 bytes/sec)
freenas:/mnt/datapool/home# ls -l
total 10237
-rw-r--r-- 1 root wheel 10485760 Aug 15 15:13 testfile
freenas:/mnt/datapool/home# df -h
Filesystem Size Used Avail Capacity Mounted on
...
datapool/home 5.3G 10M 5.3G 0% /mnt/datapool/home

Now a snapshot will be created... (either via cron, or by running the script manually: freenas:/mnt/datapool/home# /mnt/datapool/opt/bin/snapshot_hourly.sh datapool/home)

freenas:/mnt/datapool/home# zfs list
NAME USED AVAIL REFER MOUNTPOINT
...
datapool/home 10.0M 5.34G 10.0M /mnt/datapool/home
datapool/home@hourly.15 0 - 10.0M -
...

If the file is deleted...

freenas:/mnt/datapool/home# rm testfile
freenas:/mnt/datapool/home# zfs list
NAME USED AVAIL REFER MOUNTPOINT
...
datapool/home 10.0M 5.34G 26.9K /mnt/datapool/home
datapool/home@hourly.15 10.0M - 10.0M -

As you can see, the snapshot grew by 10 MByte. But the file is gone...

freenas:/mnt/datapool/home# ls -al
total 4
drwxr-xr-x 2 root wheel 2 Aug 15 15:16 .
drwxrwxrwx 5 root wheel 5 Aug 15 13:58 ..

But how can you access the snapshots? Change into the .zfs/snapshot directory of the dataset; there you can see them.

freenas:/mnt/datapool/home# cd .zfs/snapshot
freenas:/mnt/datapool/home/.zfs/snapshot# ls
...
hourly.15
...

freenas:/mnt/datapool/home/.zfs/snapshot# cd hourly.15
freenas:/mnt/datapool/home/.zfs/snapshot/hourly.15# ls -l
total 10237
-rw-r--r-- 1 root wheel 10485760 Aug 15 15:13 testfile

Be aware! Snapshots are read-only. But you can copy files from a snapshot back to their original location, or wherever you want.
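Restoring is then just a copy out of the snapshot directory. For the testfile from this example ('-p' preserves permissions and timestamps):

```
freenas:/mnt/datapool/home# cp -p .zfs/snapshot/hourly.15/testfile .
```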

Tuesday, August 12, 2008

How to resize ZFS

Please also read the update to this post: How to resize ZFS - Part 2 (the real world)!

One of the frequently asked questions regarding ZFS: is it possible to resize a RAIDZ, RAIDZ2 or mirrored ZPOOL?

The answer is a little bit complicated...

If you want to change the 'geometry' of the ZPOOL (for example: change from a mirrored pool to a raidz, simply add a disk to a raidz, or change from raidz to raidz2), then the answer is no.

But it is possible to replace the disks of a pool with bigger ones and use the additional space.

Here is what I've tested with FreeNAS 0.7 (rev 3514) installed as a virtual machine.

I've used four 1 GByte HDs and four 2 GByte HDs. My mission was to grow a raidz from 4 GByte (3 GByte usable) to around 8 GByte (6 GByte usable). The initial setup was one raidz called 'datapool', built from the four 1 GByte HDs. The disks da0, da1, da2, da3 are the 1 GByte drives; the disks da4, da5, da6, da7 are the 2 GByte drives.


Replace the first disk:

freenas:~# zpool replace datapool da0 da4

Check the status:

freenas:~# zpool status -v
pool: datapool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 16.44% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
replacing ONLINE 0 0 0
da0 ONLINE 0 0 0
da4 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

It will take a while until the pool is completely 'resilvered':

freenas:~# zpool status -v
pool: datapool
state: ONLINE
scrub: resilver completed with 0 errors on Tue Aug 12 16:03:34 2008
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da1 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

Proceed with the next disk...

freenas:~# zpool replace datapool da1 da5
freenas:~# zpool status -v
pool: datapool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 7.86% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
replacing ONLINE 0 0 0
da1 ONLINE 0 0 0
da5 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

freenas:~# zpool status -v
pool: datapool
state: ONLINE
scrub: resilver completed with 0 errors on Tue Aug 12 16:05:34 2008
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
da2 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

Next one...

freenas:~# zpool replace datapool da2 da6
freenas:~# zpool status -v
pool: datapool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 6.01% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
replacing ONLINE 0 0 0
da2 ONLINE 0 0 0
da6 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

You can also monitor the status via this command:

freenas:~# zpool iostat -v 5
capacity operations bandwidth
pool used avail read write read write
------------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 16 13 1.94M 912K
raidz1 715M 3.27G 16 13 1.94M 912K
da4 - - 11 11 968K 752K
da5 - - 5 20 440K 1.40M
replacing - - 0 45 0 1.56M
da2 - - 7 7 597K 399K
da6 - - 0 23 3.79K 1.60M
da3 - - 8 6 688K 313K
------------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
------------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 62 5 7.82M 19.6K
raidz1 715M 3.27G 62 5 7.82M 19.6K
da4 - - 10 0 893K 6.13K
da5 - - 23 1 1.87M 28.3K
replacing - - 0 67 0 2.62M
da2 - - 0 17 0 1.40M
da6 - - 0 33 0 2.64M
da3 - - 16 1 1.29M 26.2K
------------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
------------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 31 0 3.82M 2.29K
raidz1 715M 3.27G 31 0 3.82M 2.29K
da4 - - 38 1 3.09M 44.2K
da5 - - 26 1 2.09M 22.9K
replacing - - 0 31 0 1.28M
da2 - - 0 24 0 1.87M
da6 - - 0 17 0 1.30M
da3 - - 32 1 2.68M 22.9K
------------- ----- ----- ----- ----- ----- -----

freenas:~# zpool status -v
pool: datapool
state: ONLINE
scrub: resilver completed with 0 errors on Tue Aug 12 16:07:31 2008
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
da6 ONLINE 0 0 0
da3 ONLINE 0 0 0

errors: No known data errors

And this is the last one...

freenas:~# zpool replace datapool da3 da7
freenas:~# zpool status -v
pool: datapool
state: ONLINE
status: One or more devices is currently being resilvered. The pool will
continue to function, possibly in a degraded state.
action: Wait for the resilver to complete.
scrub: resilver in progress, 3.02% done, 0h0m to go
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
da6 ONLINE 0 0 0
replacing ONLINE 0 0 0
da3 ONLINE 0 0 0
da7 ONLINE 0 0 0

errors: No known data errors

freenas:~# zpool iostat -v 5
capacity operations bandwidth
pool used avail read write read write
------------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 14 0 1.75M 0
raidz1 715M 3.27G 14 0 1.75M 0
da4 - - 13 0 1.12M 0
da5 - - 14 0 1.19M 510
da6 - - 13 0 1.13M 0
replacing - - 0 14 0 599K
da3 - - 0 19 0 1.58M
da7 - - 0 7 0 599K
------------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
------------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 68 0 8.51M 4.49K
raidz1 715M 3.27G 68 0 8.51M 4.49K
da4 - - 15 0 1.26M 1.70K
da5 - - 6 0 546K 1.20K
da6 - - 18 0 1.51M 1.40K
replacing - - 0 68 0 2.84M
da3 - - 0 21 0 1.75M
da7 - - 0 35 0 2.84M
------------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 21 11 2.62M 696K
raidz1 715M 3.27G 21 11 2.62M 696K
da4 - - 15 7 1.22M 430K
da5 - - 13 9 1.11M 583K
da6 - - 9 12 825K 834K
da7 - - 0 21 1007 1.59M
---------- ----- ----- ----- ----- ----- -----

capacity operations bandwidth
pool used avail read write read write
---------- ----- ----- ----- ----- ----- -----
datapool 715M 3.27G 0 0 0 0
raidz1 715M 3.27G 0 0 0 0
da4 - - 0 0 0 0
da5 - - 0 0 0 0
da6 - - 0 0 0 0
da7 - - 0 0 0 0
---------- ----- ----- ----- ----- ----- -----

freenas:~# zpool status -v
pool: datapool
state: ONLINE
scrub: resilver completed with 0 errors on Tue Aug 12 16:09:45 2008
config:

NAME STATE READ WRITE CKSUM
datapool ONLINE 0 0 0
raidz1 ONLINE 0 0 0
da4 ONLINE 0 0 0
da5 ONLINE 0 0 0
da6 ONLINE 0 0 0
da7 ONLINE 0 0 0

errors: No known data errors

Finally it is necessary to reboot

freenas:~# reboot

And here you can see the result. It is possible to resize a ZPOOL if you have bigger disks...

freenas:~# zpool list
NAME SIZE USED AVAIL CAP HEALTH ALTROOT
datapool 7.97G 715M 7.27G 8% ONLINE -
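If you don't want to type the four replace-and-wait rounds by hand, the whole procedure can be scripted. This is only a rough sketch (untested against a live pool; the disk pairs match this example, and it simply polls 'zpool status' until no resilver is reported anymore):

```
#!/bin/sh
# Replace each 1 GByte disk with its 2 GByte counterpart, one at a time,
# and wait for each resilver to finish before starting the next one.
POOL=datapool
for pair in da0:da4 da1:da5 da2:da6 da3:da7; do
    old=${pair%:*}
    new=${pair#*:}
    zpool replace "$POOL" "$old" "$new"
    while zpool status "$POOL" | grep -q "resilver in progress"; do
        sleep 60
    done
done
```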


Friday, August 8, 2008

FreeNAS 0.7 on an Intel DG965WH


I've installed FreeNAS 0.7 AMD64 (revision 3514) on an Intel DG965WH Mainboard.

CPU: Core2Duo 6300 @ 1.86GHz (standard)
Mem: 2 GB RAM (Kingston HyperX - KHX6400D2K2/2G)
Harddisks: 4x SAMSUNG HD501LJ (configured as RAIDZ)

The HELIOS LAN Test showed me up to 80 MByte/s Read and Write...

Friday, August 1, 2008

Time Machine Backups on FreeNAS

Please read this new post!

It is possible to use a NAS volume for Apple's Mac OS X 'Time Machine' backups! As described on many websites, you have to enable this function. But I've only found one how-to that really works!

Here are the steps...

1. Enable the function that allows Time Machine to back up to network volumes, in the Terminal (Applications -> Utilities -> Terminal):
defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1

2. Create a growing sparsebundle that will be used by Time Machine.

This sparsebundle must have a special name: COMPUTERNAME_SOMECHARACTERS.sparsebundle, where SOMECHARACTERS is the MAC address of the en0 network interface (without the colons).
For example: name.domain.something_001122334455.sparsebundle

The parameters for this sparsebundle:
- Format: Mac OS Extended (Journaled)
- Partitions: No partition map
- Volume size: as much as you want to use for your Time Machine backups (the image will grow over time up to this value)
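The naming rule can be scripted. The hostname and MAC address below are just the placeholder values from the example above; on your Mac you would take them from `hostname` and from the en0 line of `ifconfig` instead, and the commented hdiutil call shows one way to create the image itself:

```shell
#!/bin/sh
# Build the sparsebundle name Time Machine expects:
# COMPUTERNAME_MACADDRESS.sparsebundle, MAC address without the colons.
HOST="name.domain.something"          # placeholder: use `hostname` on the Mac
MAC_RAW="00:11:22:33:44:55"           # placeholder: en0 MAC from `ifconfig en0`
MAC=`echo "$MAC_RAW" | tr -d ':'`
BUNDLE="${HOST}_${MAC}.sparsebundle"
echo "$BUNDLE"
# Create the image with hdiutil (size = maximum the backup may grow to):
#   hdiutil create -size 100g -type SPARSEBUNDLE -fs HFS+J \
#       -volname "Time Machine" "$BUNDLE"
```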

3. After you've copied this sparsebundle to the network volume you want to use, open the Time Machine system preferences and choose that volume. It is not necessary to choose the sparsebundle itself.

I have been using this method on two Macs at home, on an AFP share, for several months without any problem!