Wednesday, December 16, 2009

Episode about the future of FreeNAS on bsdtalk

Here you will find an excellent talk with Josh Paetzel from iXsystems about the future of FreeNAS. Many thanks for this information!

Saturday, December 5, 2009

The NEW Future of FreeNAS...

Olivier left a comment here...

The FreeNAS project will continue to use FreeBSD in the future! This sounds excellent! Please read the original message on the blog and forum post.

Here is a copy of the message:

Hi all,

FreeNAS needs some big modifications to remove its present limitations (one of the biggest is the missing support for easy user add-ons).
We think that a full rewrite of the FreeNAS base is needed. From this idea, we will take 2 different paths:

  • Volker will create a new project called "OpenMediaVault", based on GNU/Linux, using all the experience he acquired during the nights and weekends he spent improving FreeNAS over the last 2 years. He will still continue to work on FreeNAS (and try to share his time between these 2 projects).
  • And, a great surprise: iXsystems, a company specializing in professional FreeBSD services, offers to take FreeNAS under their wing as an open-source, community-driven project. This means that they will dedicate their professional FreeBSD developers to FreeNAS! Their manpower will make a full rewrite of FreeNAS possible.
Personally, I am coming back to work actively on FreeNAS and am starting to upgrade it to FreeBSD 8.0 (which is "production ready" for ZFS).

So we will see 'more' developers working on FreeNAS. Excellent!

Monday, November 23, 2009

Mac OS X Time Machine and FreeNAS 0.7

Since FreeNAS 0.7 it is easy to configure Time Machine to use an AFP share (like Apple's Time Capsule)... Here is a short howto :-)

First step: Configure FreeNAS

-> Enable AFP
-> Configure the share
  • Automatic disk discovery - Enable automatic disk discovery
  • Automatic disk discovery mode - Time Machine
Second step: Configure OS X Time Machine

-> Select System Preferences -> Time Machine
-> Select Backup Disk
-> and Authenticate
That's it!


Your FreeNAS will now behave much like a Time Capsule. Enjoy! Details can be found here...
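
A small hint in case the AFP share does not show up as a backup disk: on some OS X versions you additionally have to allow Time Machine to use network volumes. This is the commonly used command for that (run it in a Terminal on the Mac, not on the FreeNAS box); with the automatic disk discovery mode above it may not be needed at all:

defaults write com.apple.systempreferences TMShowUnsupportedNetworkVolumes 1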

I've combined this with ZFS. With two clients and regular backups over the last two months, I now have a compression-enabled (gzip) volume holding 238 GByte. The compression ratio (compressratio) is 1.24x:

freenas:~# zfs get compressratio data0/timemachine
NAME               PROPERTY       VALUE  SOURCE
data0/timemachine  compressratio  1.24x  -


Instead of 295 GByte, only 238 GByte are used. IMHO this is great :-)
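
If you want to reproduce this, here is a minimal sketch of how such a dataset can be created, assuming a pool named data0 as in my setup (set the compression property before the first backup runs; ZFS only compresses blocks written after it is set):

freenas:~# zfs create data0/timemachine
freenas:~# zfs set compression=gzip data0/timemachine
freenas:~# zfs get compression,compressratio data0/timemachine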


If you want to see how to do a complete restore of your system, please see my new blogpost "Mac OS X system restore using time machine and FreeNAS"

Sunday, November 22, 2009

Overview of NAS Operating Systems

Do you have any experience with one of those?

Friday, November 20, 2009

The Future of FreeNAS...

Please read here to get the latest news...

Read through this forum thread to see the ongoing discussion. In short...

FreeNAS 0.8 will be based on Debian GNU/Linux! Volker (the core developer) started an intermediate project called CoreNAS. FreeNAS 0.8 will be based on that.

Here is a short list of pros by Volker:
- Text and graphical installer that can be customized. This means no more hand-written install scripts, which have caused some problems in FreeNAS
- WOL works in Linux
- lm-sensors - a WORKING sensor framework, which is a badly needed feature in FreeNAS to check the CPU/MB temperatures and fan speeds
- Better Samba performance
- Ability to implement HA features
- System can be updated via 'apt-get' or any other deb package manager
- Better driver support
- Maybe 'ZFS' over FUSE (there is already one commercial product available that uses this feature)
- NFSv4
- ...

What does this mean for me? I definitely want an OS that supports native ZFS (ZFS on FUSE is not an option). I don't need an extra small footprint or all of those 'special' features that some FreeNAS users frequently requested (BOINC, print server...). I really appreciate the wonderful WebGUI of FreeNAS, but if necessary, I'm not bound to it. As I have quite a lot of experience with Solaris, I will switch to OpenSolaris. It has everything I need except AFP support. So I need to get more experience with Netatalk.
I am also really looking forward to the new ZFS features like deduplication and crypto. If you are interested in my experiences, you will find some here...

Nevertheless I would like to say many, many thanks for this wonderful piece of software!

Wednesday, October 21, 2009

FreeNAS 0.7 - Samba tuning

Here is a nice blogpost from learnedbyerror about tuning Samba (and more...). The Samba/CIFS tweaks in particular should give you a performance boost.
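
Tweaks of this kind usually end up as auxiliary parameters in the FreeNAS CIFS/SMB settings. Just as a flavor, a typical set looks like the sketch below; these are standard smb.conf options, not the exact values from his post, so benchmark on your own network:

socket options = TCP_NODELAY SO_RCVBUF=65536 SO_SNDBUF=65536
read raw = yes
write raw = yes
max xmit = 65535
dead time = 15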

I don't recommend using the old ZFS tuning settings, as the latest FreeNAS versions are based on FreeBSD 7.2 (see http://wiki.freebsd.org/ZFSTuningGuide).

Anyway, good blogpost!

Friday, October 9, 2009

FreeNAS 0.7, ZFS snapshots and scrubbing

I've described here how I am doing snapshots. Cron and a little script work perfectly, but I also scrub the zpools from time to time. And the scrub never finishes :-(
The reason behind this is described here: scrubbing or resilvering a pool starts over whenever a snapshot is taken.
So it is necessary to change the scripts that create the snapshots:

snapshot_hourly.sh

#!/bin/sh

# If a scrub is running, don't take a snapshot (otherwise the scrub restarts at 0%)
pools=$(zpool list -H -o name)
for pool in $pools; do
    if zpool status -v $pool | grep -q "scrub in progress"; then
        exit
    fi
done

# Destroy the old snapshot and create the new one
zfs destroy $1@hourly.`date "+%H"` > /dev/null 2>&1
zfs snapshot $1@hourly.`date "+%H"`

snapshot_daily.sh

#!/bin/sh

# If a scrub is running, don't take a snapshot (otherwise the scrub restarts at 0%)
pools=$(zpool list -H -o name)
for pool in $pools; do
    if zpool status -v $pool | grep -q "scrub in progress"; then
        exit
    fi
done

# Destroy the old snapshot and create the new one
zfs destroy $1@daily.`date "+%a"` > /dev/null 2>&1
zfs snapshot $1@daily.`date "+%a"`

and snapshot_weekly.sh (please be aware that I use this script to keep the snapshots of the last twelve weeks; change this if necessary...)

#!/bin/sh

# If a scrub is running, don't take a snapshot (otherwise the scrub restarts at 0%)
pools=$(zpool list -H -o name)
for pool in $pools; do
    if zpool status -v $pool | grep -q "scrub in progress"; then
        exit
    fi
done

# Destroy the oldest snapshot, rotate the others and create the new one
zfs destroy $1@weekly.12 > /dev/null 2>&1
zfs rename $1@weekly.11 @weekly.12 > /dev/null 2>&1
zfs rename $1@weekly.10 @weekly.11 > /dev/null 2>&1
zfs rename $1@weekly.9 @weekly.10 > /dev/null 2>&1
zfs rename $1@weekly.8 @weekly.9 > /dev/null 2>&1
zfs rename $1@weekly.7 @weekly.8 > /dev/null 2>&1
zfs rename $1@weekly.6 @weekly.7 > /dev/null 2>&1
zfs rename $1@weekly.5 @weekly.6 > /dev/null 2>&1
zfs rename $1@weekly.4 @weekly.5 > /dev/null 2>&1
zfs rename $1@weekly.3 @weekly.4 > /dev/null 2>&1
zfs rename $1@weekly.2 @weekly.3 > /dev/null 2>&1
zfs rename $1@weekly.1 @weekly.2 > /dev/null 2>&1
zfs snapshot $1@weekly.1
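
For completeness, this is roughly how the scripts can be wired into cron (the script path is just an example; the first argument is the dataset to snapshot):

# root's crontab: min hour mday month wday command
0  *  *  *  *  /mnt/data0/scripts/snapshot_hourly.sh data0/timemachine
15 0  *  *  *  /mnt/data0/scripts/snapshot_daily.sh  data0/timemachine
30 1  *  *  0  /mnt/data0/scripts/snapshot_weekly.sh data0/timemachine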

I've found a script from http://hype-o-thetic.com to run a scrub regularly. See his blogpost here...

Just for backup reasons, I post this script below...

#!/bin/bash

#VERSION: 0.2
#AUTHOR: gimpe
#EMAIL: gimpe [at] hype-o-thetic.com
#WEBSITE: http://hype-o-thetic.com
#DESCRIPTION: Created on FreeNAS 0.7RC1 (Sardaukar)
# This script will start a scrub on each ZFS pool (one at a time) and
# will send an e-mail or display the result when everything is completed.

#CHANGELOG
# 0.2: 2009-08-27 Code clean up
# 0.1: 2009-08-25 Make it work

#SOURCES:
# http://aspiringsysadmin.com/blog/2007/06/07/scrub-your-zfs-file-systems-regularly/
# http://www.sun.com/bigadmin/scripts/sunScripts/zfs_completion.bash.txt
# http://www.packetwatch.net/documents/guides/2009073001.php

# e-mail variables
FROM=from@devnull.com
TO=to@devnull.com
SUBJECT="$0 results"
BODY=""

# arguments
VERBOSE=0
SENDEMAIL=1
# iterate over all command-line arguments
for arg in "$@"; do
    case $arg in
        "-v" | "--verbose")
            VERBOSE=1
            ;;
        "-n" | "--noemail")
            SENDEMAIL=0
            ;;
        "-a" | "--author")
            echo "by gimpe at hype-o-thetic.com"
            exit
            ;;
        "-h" | "--help" | *)
            echo "
usage: $0 [-v --verbose|-n --noemail]
  -v --verbose   output display
  -n --noemail   don't send an e-mail with result
  -a --author    display author info (by gimpe at hype-o-thetic.com)
  -h --help      display this help
"
            exit
            ;;
    esac
done

# work variables
ERROR=0
SEP=" - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - "
RUNNING=1

# commands & configuration
ZPOOL=/sbin/zpool
PRINTF=/usr/bin/printf
MSMTP=/usr/local/bin/msmtp
MSMTPCONF=/var/etc/msmtp.conf

# print a message: append it to the e-mail body and, in verbose mode, echo it
function _log {
    DATE="`date +"%Y-%m-%d %H:%M:%S"`"
    # add message to e-mail body
    BODY="${BODY}$DATE: $1\n"

    # output to console if verbose mode
    if [ $VERBOSE = 1 ]; then
        echo "$DATE: $1"
    fi
}

# find all pools
pools=$($ZPOOL list -H -o name)

# for each pool
for pool in $pools; do
    # start scrub for $pool
    _log "starting scrub on $pool"
    $ZPOOL scrub $pool
    RUNNING=1
    # wait until the scrub for $pool has finished running
    while [ $RUNNING = 1 ]; do
        # still running?
        if $ZPOOL status -v $pool | grep -q "scrub in progress"; then
            sleep 60
        # not running
        else
            # finished with this pool, log the status
            _log "scrub ended on $pool"
            _log "`$ZPOOL status -v $pool`"
            _log "$SEP"
            RUNNING=0
            # check for errors
            if ! $ZPOOL status -v $pool | grep -q "No known data errors"; then
                _log "data errors detected on $pool"
                ERROR=1
            fi
        fi
    done
done

# change e-mail subject if there was an error
if [ $ERROR = 1 ]; then
    SUBJECT="${SUBJECT}: ERROR(S) DETECTED"
fi

# send e-mail
if [ $SENDEMAIL = 1 ]; then
    $PRINTF "From:$FROM\nTo:$TO\nSubject:$SUBJECT\n\n$BODY" | $MSMTP --file=$MSMTPCONF -t
fi
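
To run the scrub regularly, a crontab entry along these lines will do (the path is again just an example; once a month is a common interval):

# root's crontab: scrub all pools on the 1st of every month at 03:00
0 3 1 * * /mnt/data0/scripts/zfs_scrub.sh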

Friday, July 17, 2009

M0n0wall and SixXS IPv6 - My first time...

I have a little firewall at home running on a VIA EPIA 5000 Mini-ITX board. It is m0n0wall, which has been running on this board for more than a year without any issues.
As I want to join the IPv6 community, I've searched for the best way to get 'connected'. And I've found sixxs.net.
The latest beta release (1.3b16) supports SixXS's AICCU. The upgrade from the stable version worked flawlessly...

After creating a user account at sixxs.net, I was able to request a tunnel. As I have a little router (a Speedport W 701V from T-Online) in front of the firewall, it was not possible to get the tunnel working in both directions: my firewall could connect to sixxs.net, but a ping6 to my firewall got no reply. So... no alive packet reached my firewall.
Currently I've enabled the PPPoE passthrough on this router (and disabled the router function by deleting the login information).
Everything works perfectly (I can ping IPv6 addresses from my firewall). Right now, I am waiting for more ISKs to request my own IPv6 subnet.
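
A quick check from the firewall's shell (the host name is just an example; any IPv6-reachable host will do):

ping6 -c 3 ipv6.google.com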

Stay tuned...

Thursday, June 25, 2009

Benchmark of FreeNAS 0.7 and a single SSD

As you can read in my previous blog post, I have a SuperTalent UltraDrive ME MLC 64GB. I was curious about the performance of this little drive. So here is what I've found out...

First of all, I've installed this SSD into a server for testing.

Server hardware specs
  • CPU Intel Core 2 Quad Q6600 (2.4 GHz)
  • 8 GB RAM
  • Mainboard Intel DG965WH
  • Network Card Intel Pro/1000 PT
FreeNAS 0.7 RC2 (AMD64, rev. 4744)

I've tuned the system a little bit :-)

-> Network|Interfaces|LAN -> Advanced Configuration -> MTU 9000
-> System|Advanced -> Tuning -> Enable tuning of some kernel variables

And -> System|Advanced|sysctl.conf

hw.ata.to = 15
# ATA disk timeout vis-a-vis power-saving

kern.coredump = 0
# Disable core dump

kern.ipc.maxsockbuf = 16777216
# System tuning - Original -> 2097152

kern.ipc.nmbclusters = 32768
# System tuning

kern.ipc.somaxconn = 8192
# System tuning

kern.maxfiles = 65536
# System tuning

kern.maxfilesperproc = 32768
# System tuning

net.inet.tcp.delayed_ack = 0
# System tuning

net.inet.tcp.inflight.enable = 0
# System tuning

net.inet.tcp.path_mtu_discovery = 0
# System tuning

net.inet.tcp.recvbuf_auto = 1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html

net.inet.tcp.recvbuf_inc = 524288
# http://fasterdata.es.net/TCP-tuning/FreeBSD.html

net.inet.tcp.recvbuf_max = 16777216
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html

net.inet.tcp.recvspace = 65536
# System tuning

net.inet.tcp.rfc1323 = 1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html

net.inet.tcp.sendbuf_auto = 1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html

net.inet.tcp.sendbuf_inc = 16384
# http://fasterdata.es.net/TCP-tuning/FreeBSD.html

net.inet.tcp.sendbuf_max = 16777216
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html

net.inet.tcp.sendspace = 65536
# System tuning

net.inet.udp.maxdgram = 57344
# System tuning

net.inet.udp.recvspace = 65536
# System tuning

net.local.stream.recvspace = 65536
# System tuning

net.local.stream.sendspace = 65536
# System tuning

net.inet.tcp.hostcache.expire = 1
# http://fasterdata.es.net/TCP-tuning/FreeBSD.html

Protocol, client and benchmark program

I've used AFP (Apple Filing Protocol), my new MacBook Pro (15", Core 2 Duo 2.4 GHz, 4 GB RAM, Unibody late 2008; I've lost my old one) and IOZONE.

IOZONE (Version 3.323 from macports)

Iozone is a wonderful tool to benchmark filesystems, but it is also a feature monster. After some investigation I've found some useful options for my tests.
iozone -e -i0 -i1 -i2 -+n -r 64k -s8g -t2 -c -x
  • -e -- Include flush (fsync,fflush) in the timing calculations
  • -i0 -- Test write/rewrite
  • -i1 -- Test read/reread
  • -i2 -- Test random read/write
  • -+n -- Disable retests
  • -r 64k -- Record or block size
  • -s8g -- Size of the file to test (2x RAM of the client)
  • -t2 -- Number of threads
  • -c -- Include close() in timing calculations
  • -x -- Turn off stone-walling (I received slightly better results with this turned off; see the iozone manpage for details)
First impression

The first tests didn't show the results I expected! diskinfo showed good results (140 MB/s read, low latency), but over the GbE network I was not able to get more than 60 MB/s read and 80 MB/s write!

Troubleshooting

I've identified that my network switch (a Longshine LCS-GS8208A) was not able to deliver more speed. That is why I connected the MBP directly to the FreeNAS server.

Screenshots

Write Performance - Traffic graph

A peak traffic of 890 Mbps isn't too bad for a single drive

Read performance - Traffic graph

The read performance shows a nearly constant bandwidth of more than 910 Mbps

IOZONE results

iozone -e -i0 -i1 -i2 -+n -r 64k -s8g -t2 -c -x

Iozone: Performance Test of File I/O
Version $Revision: 3.323 $
Compiled for 32 bit mode.
Build: macosx

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

Run began: Tue Jun 23 15:47:17 2009

Include fsync in write timing
No retest option selected
Record Size 64 KB
File size set to 8388608 KB
Include close in write timing
Stonewall disabled
Command line used: iozone -e -i0 -i1 -i2 -+n -r 64k -s8g -t2 -c -x
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 2 processes
Each process writes a 8388608 Kbyte file in 64 Kbyte records

Children see throughput for 2 initial writers = 79800.84 KB/sec
Parent sees throughput for 2 initial writers = 78177.66 KB/sec
Min throughput per process = 39088.91 KB/sec
Max throughput per process = 40711.93 KB/sec
Avg throughput per process = 39900.42 KB/sec
Min xfer = 8388608.00 KB

Children see throughput for 2 readers = 107705.98 KB/sec
Parent sees throughput for 2 readers = 107705.48 KB/sec
Min throughput per process = 53852.83 KB/sec
Max throughput per process = 53853.15 KB/sec
Avg throughput per process = 53852.99 KB/sec
Min xfer = 8388608.00 KB

Children see throughput for 2 random readers = 56735.84 KB/sec
Parent sees throughput for 2 random readers = 56735.20 KB/sec
Min throughput per process = 28367.65 KB/sec
Max throughput per process = 28368.19 KB/sec
Avg throughput per process = 28367.92 KB/sec
Min xfer = 8388608.00 KB

Children see throughput for 2 random writers = 24004.56 KB/sec
Parent sees throughput for 2 random writers = 23958.08 KB/sec
Min throughput per process = 11979.05 KB/sec
Max throughput per process = 12025.51 KB/sec
Avg throughput per process = 12002.28 KB/sec
Min xfer = 8388608.00 KB


iozone test complete.


Conclusion

You can see in the results a constant bandwidth of 77 MByte/s write and 105 MByte/s read, which is IMHO quite good for a single drive. I am not sure about the random read/write performance. I need to dig a bit deeper into this :-)


P.S.
Maybe you want to know 'Why AFP?'. First of all, I use Apple computers at home, and AFP showed the best throughput compared to NFS or Samba.

Friday, June 12, 2009

FreeNAS 0.7 - diskinfo of a SuperTalent UltraDrive ME MLC 64GB

I've bought a SuperTalent UltraDrive ME MLC 64GB to play around with SSDs. I will share my experiences here.


For the FreeNAS 0.7 ZFS tests I've installed this SSD into a PC with an Intel Core 2 Quad Q6600 (2.4 GHz), 8 GB RAM and an Intel DG965WH mainboard.


My first test is a quick diskinfo -ct


Here we go...


freenas:~# diskinfo -ct /dev/ad6
/dev/ad6
        512                     # sectorsize
        64023257088             # mediasize in bytes (60G)
        125045424               # mediasize in sectors
        124053                  # Cylinders according to firmware.
        16                      # Heads according to firmware.
        63                      # Sectors according to firmware.
        ad:P565045-BDIX-6029019 # Disk ident.

I/O command overhead:
        time to read 10MB block      0.068941 sec = 0.003 msec/sector
        time to read 20480 sectors   1.169508 sec = 0.057 msec/sector
        calculated command overhead               = 0.054 msec/sector

Seek times:
        Full stroke:      250 iter in  0.024970 sec = 0.100 msec
        Half stroke:      250 iter in  0.025027 sec = 0.100 msec
        Quarter stroke:   500 iter in  0.049870 sec = 0.100 msec
        Short forward:    400 iter in  0.039449 sec = 0.099 msec
        Short backward:   400 iter in  0.039230 sec = 0.098 msec
        Seq outer:       2048 iter in  0.114818 sec = 0.056 msec
        Seq inner:       2048 iter in  0.123779 sec = 0.060 msec

Transfer rates:
        outside:  102400 kbytes in 0.696010 sec = 147124 kbytes/sec
        middle:   102400 kbytes in 0.618375 sec = 165595 kbytes/sec
        inside:   102400 kbytes in 0.623304 sec = 164286 kbytes/sec


For comparison... here is the same command with a WD20EADS 2TB


quake:~# diskinfo -ct /dev/ad4
/dev/ad4
        512                     # sectorsize
        2000398934016           # mediasize in bytes (1.8T)
        3907029168              # mediasize in sectors
        3876021                 # Cylinders according to firmware.
        16                      # Heads according to firmware.
        63                      # Sectors according to firmware.

I/O command overhead:
        time to read 10MB block      0.072554 sec = 0.004 msec/sector
        time to read 20480 sectors   2.515795 sec = 0.123 msec/sector
        calculated command overhead               = 0.119 msec/sector

Seek times:
        Full stroke:      250 iter in  6.150143 sec = 24.601 msec
        Half stroke:      250 iter in  4.960021 sec = 19.840 msec
        Quarter stroke:   500 iter in  8.214024 sec = 16.428 msec
        Short forward:    400 iter in  2.523203 sec = 6.308 msec
        Short backward:   400 iter in  2.008208 sec = 5.021 msec
        Seq outer:       2048 iter in  0.298850 sec = 0.146 msec
        Seq inner:       2048 iter in  0.379041 sec = 0.185 msec

Transfer rates:
        outside:  102400 kbytes in 1.077248 sec = 95057 kbytes/sec
        middle:   102400 kbytes in 1.378671 sec = 74274 kbytes/sec
        inside:   102400 kbytes in 2.267264 sec = 45165 kbytes/sec


As you can see... the seek times of the SSD are impressive!