Showing posts with label tuning.

Wednesday, October 21, 2009

FreeNAS 0.7 - Samba tuning

Here is a nice blog post from learnedbyerror about tuning Samba (and more). Especially the Samba/CIFS tweaks should give you a performance boost.
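
I won't quote his exact values here, but the Samba/CIFS tweaks are typically smb.conf settings of this kind (the buffer sizes below are illustrative placeholders, not his recommendations):

[global]
# disable Nagle's algorithm and enlarge the socket buffers
socket options = TCP_NODELAY SO_SNDBUF=65536 SO_RCVBUF=65536
# allow large raw reads/writes and bigger SMB packets
read raw = yes
write raw = yes
max xmit = 65536

In FreeNAS 0.7, settings like these usually go into the 'Auxiliary parameters' field of the CIFS/SMB service page.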

I don't recommend using the old ZFS tuning settings, as the latest FreeNAS versions are based on FreeBSD 7.2 (see http://wiki.freebsd.org/ZFSTuningGuide).

Anyway, a good blog post!

Thursday, June 25, 2009

Benchmark of FreeNAS 0.7 and a single SSD

As you can read in my previous blog post, I have a SuperTalent Ultra Drive ME MLC 64GB. I was interested in the performance of this little drive, so here is what I've found out...

First of all, I've installed this SSD in a server for testing.

Server hardware specs
  • CPU Intel Core2Quad Q6600 (2.4 GHz)
  • 8 GB RAM
  • Mainboard Intel DG965WH
  • Network Card Intel Pro/1000 PT
FreeNAS 0.7 RC2 (AMD64, rev. 4744)

I've tuned the system a little bit :-)

-> Network|Interfaces|LAN -> Advanced Configuration -> MTU 9000
-> System|Advanced -> Tuning -> Enable tuning of some kernel variables

And -> System|Advanced|sysctl.conf

hw.ata.to = 15
# ATA disk timeout (give drives time to spin up from power-saving)

kern.coredump = 0
# Disable core dump

kern.ipc.maxsockbuf = 16777216
# System tuning - Original -> 2097152

kern.ipc.nmbclusters = 32768
# System tuning

kern.ipc.somaxconn = 8192
# System tuning

kern.maxfiles = 65536
# System tuning

kern.maxfilesperproc = 32768
# System tuning

net.inet.tcp.delayed_ack = 0
# System tuning

net.inet.tcp.inflight.enable = 0
# System tuning

net.inet.tcp.path_mtu_discovery = 0
# System tuning

net.inet.tcp.recvbuf_auto = 1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html

net.inet.tcp.recvbuf_inc = 524288
# http://fasterdata.es.net/TCP-tuning/FreeBSD.html

net.inet.tcp.recvbuf_max = 16777216
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html

net.inet.tcp.recvspace = 65536
# System tuning

net.inet.tcp.rfc1323 = 1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html

net.inet.tcp.sendbuf_auto = 1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html

net.inet.tcp.sendbuf_inc = 16384
# http://fasterdata.es.net/TCP-tuning/FreeBSD.html

net.inet.tcp.sendbuf_max = 16777216
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html

net.inet.tcp.sendspace = 65536
# System tuning

net.inet.udp.maxdgram = 57344
# System tuning

net.inet.udp.recvspace = 65536
# System tuning

net.local.stream.recvspace = 65536
# System tuning

net.local.stream.sendspace = 65536
# System tuning

net.inet.tcp.hostcache.expire = 1
# http://fasterdata.es.net/TCP-tuning/FreeBSD.html
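
If you want to check that the settings actually took effect, something like this works from a shell (the client address below is just a placeholder, not my setup):

# on the FreeNAS box: read a few of the values back
sysctl kern.ipc.maxsockbuf net.inet.tcp.sendbuf_max net.inet.tcp.recvbuf_max

# from the Mac client: check that jumbo frames survive the whole path;
# 9000 bytes MTU - 28 bytes IP/ICMP headers = 8972 bytes payload,
# -D sets the don't-fragment bit so oversized packets fail loudly
ping -D -s 8972 192.168.1.100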

Protocol, client and benchmark program

I've used AFP (Apple Filing Protocol), my new MacBook Pro (15", Core2Duo 2.4 GHz, 4 GB RAM, Unibody late 2008; I've lost my old one) and IOZONE.

IOZONE (version 3.323 from MacPorts)

Iozone is a wonderful tool to benchmark filesystems, but it is also a feature monster. After some investigation I've found some useful options for my tests.
iozone -e -i0 -i1 -i2 -+n -r 64k -s8g -t2 -c -x
  • -e -- Include flush (fsync,fflush) in the timing calculations
  • -i0 -- Test write/rewrite
  • -i1 -- Test read/reread
  • -i2 -- Test random read/write
  • -+n -- Disable retests
  • -r 64k -- Record or block size
  • -s8g -- Size of the file to test (2x RAM of the client)
  • -t2 -- Number of threads
  • -c -- Include close() in timing calculations
  • -x -- Turn off stonewalling (I got slightly better results with this turned off; see the iozone manpage for details)
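
If you want to repeat the run over several record sizes, a minimal wrapper sketch (assuming the AFP share is mounted at /Volumes/freenas, which is a placeholder path):

#!/bin/sh
# run the same iozone test over a few record sizes and keep the logs;
# iozone in throughput mode creates its temp files in the current directory
cd /Volumes/freenas || exit 1
for rs in 4k 64k 512k; do
    iozone -e -i0 -i1 -i2 -+n -r $rs -s8g -t2 -c -x > "$HOME/iozone-$rs.log"
done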
First impression

The first tests didn't show the results I expected! diskinfo showed good results (140 MB/s read, low latency), but over the GbE network I was not able to get more than 60 MB/s read and 80 MB/s write!

Troubleshooting

I've identified that the network switch I have here (Longshine LCS-GS8208A) was not able to deliver more throughput. That's why I connected the MBP directly to the FreeNAS server.

Screenshots

Write Performance - Traffic graph

A peak traffic of 890 Mbps isn't too bad for a single SSD

Read performance - Traffic graph

The read performance shows a nearly constant bandwidth of more than 910 Mbps

IOZONE results

iozone -e -i0 -i1 -i2 -+n -r 64k -s8g -t2 -c -x

Iozone: Performance Test of File I/O
Version $Revision: 3.323 $
Compiled for 32 bit mode.
Build: macosx

Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
Al Slater, Scott Rhine, Mike Wisner, Ken Goss
Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy,
Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root.

Run began: Tue Jun 23 15:47:17 2009

Include fsync in write timing
No retest option selected
Record Size 64 KB
File size set to 8388608 KB
Include close in write timing
Stonewall disabled
Command line used: iozone -e -i0 -i1 -i2 -+n -r 64k -s8g -t2 -c -x
Output is in Kbytes/sec
Time Resolution = 0.000001 seconds.
Processor cache size set to 1024 Kbytes.
Processor cache line size set to 32 bytes.
File stride size set to 17 * record size.
Throughput test with 2 processes
Each process writes a 8388608 Kbyte file in 64 Kbyte records

Children see throughput for 2 initial writers = 79800.84 KB/sec
Parent sees throughput for 2 initial writers = 78177.66 KB/sec
Min throughput per process = 39088.91 KB/sec
Max throughput per process = 40711.93 KB/sec
Avg throughput per process = 39900.42 KB/sec
Min xfer = 8388608.00 KB

Children see throughput for 2 readers = 107705.98 KB/sec
Parent sees throughput for 2 readers = 107705.48 KB/sec
Min throughput per process = 53852.83 KB/sec
Max throughput per process = 53853.15 KB/sec
Avg throughput per process = 53852.99 KB/sec
Min xfer = 8388608.00 KB

Children see throughput for 2 random readers = 56735.84 KB/sec
Parent sees throughput for 2 random readers = 56735.20 KB/sec
Min throughput per process = 28367.65 KB/sec
Max throughput per process = 28368.19 KB/sec
Avg throughput per process = 28367.92 KB/sec
Min xfer = 8388608.00 KB

Children see throughput for 2 random writers = 24004.56 KB/sec
Parent sees throughput for 2 random writers = 23958.08 KB/sec
Min throughput per process = 11979.05 KB/sec
Max throughput per process = 12025.51 KB/sec
Avg throughput per process = 12002.28 KB/sec
Min xfer = 8388608.00 KB


iozone test complete.


Conclusion

The results show a constant bandwidth of 77 MByte/s write and 105 MByte/s read, which is IMHO quite good for a single SSD. I am not sure about the random read/write performance; I need to dig a bit deeper into this :-)


P.S.
Maybe you want to know 'Why AFP?'. First of all, I use Apple computers at home, and AFP showed the best throughput compared to NFS or Samba.

Monday, January 12, 2009

Blank smbpasswd - FreeNAS


Since I'm hitting this issue again :-), I want to document it here...

My problem was a blank smbpasswd file. Even when I added the password manually with 'smbpasswd', the entries in /var/etc/private/smbpasswd were gone after a reboot.
By debugging /etc/rc.d/smbpasswd (with set -x) I found a variable called 'smbpasswd_minuid=1001'. My UIDs are below 1001 (I'm using the same ones as on my Mac).
After some searching, I saw that this feature ('Enable customizing of minimum UID for smbpasswd via rc.conf variable smbpasswd_minuid') was implemented in build 3177.

It is easy to set this variable in -> System -> Advanced -> rc.conf

Variable = smbpasswd_minuid  ; Value = 500


After applying this, running /etc/rc.d/smbpasswd works: the users and their encrypted passwords are stored in /var/etc/private/smbpasswd.
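
A quick way to verify it from a shell (the username below is a placeholder):

# regenerate the passdb as described above
/etc/rc.d/smbpasswd
# check that the user made it into the file
grep '^youruser:' /var/etc/private/smbpasswd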

Sunday, October 5, 2008

Tuning FreeNAS & ZFS

I have been running FreeNAS 0.7 (rev. 3514) for several weeks as my 'production' server at home, on a Mini-ITX Atom-based system (see FreeNAS 0.7 on an Intel D945GCLF).

I've done some tuning of FreeBSD and ZFS with good results (good performance, no panics, etc.).

Here is what I've done...

First of all, it is important to 'tune' ZFS; I've seen panics on my system without these parameters. ZFS needs lots of RAM. I have 2 GB in my little server...

I've followed the FreeBSD ZFS Tuning Guide and added the following lines to /boot/loader.conf:

vm.kmem_size_max="1073741824"
vm.kmem_size="1073741824"
vfs.zfs.prefetch_disable=1

I am not sure about the vfs.zfs.prefetch_disable=1, but as I said, my experience with these settings is good.

Editing /boot/loader.conf is easy :-) In the Web-GUI, go to -> Advanced -> Edit File, enter /boot/loader.conf as the file path and hit Load. Add the lines to the file and hit the Save button.

Please be aware that these changes are not saved to the 'XML system configuration' (-> System -> Backup/Restore -> Download configuration).
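
After the next reboot, you can confirm from a shell that the loader tunables took effect (the arcstats sysctl may not exist on every FreeBSD version):

# read the loader tunables back
sysctl vm.kmem_size vm.kmem_size_max vfs.zfs.prefetch_disable
# watch the ZFS ARC size while the box is under load
sysctl kstat.zfs.misc.arcstats.size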


FreeBSD tuning... I've simply used the 'Tuning' option of FreeNAS, which you can also find in the Web-GUI:
Go to -> System -> Advanced -> check the Tuning box and hit Save.

And last but not least, there are some nice tuning variables to be found in the TCP Tuning Guide for FreeBSD (http://acs.lbl.gov/TCP-tuning/FreeBSD.html or http://fasterdata.es.net/TCP-tuning/FreeBSD.html).

Add these variables via -> System -> Advanced -> sysctl.conf

Here are the variables in detail:

# increase the spinup time allowed for drive recognition
hw.ata.to=15
# System tuning - Original -> 2097152
kern.ipc.maxsockbuf=16777216
# System tuning
kern.ipc.nmbclusters=32768
# System tuning
kern.ipc.somaxconn=8192
# System tuning
kern.maxfiles=65536
# System tuning
kern.maxfilesperproc=32768
# System tuning
net.inet.tcp.delayed_ack=0
# System tuning
net.inet.tcp.inflight.enable=0
# System tuning
net.inet.tcp.path_mtu_discovery=0
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.recvbuf_auto=1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.recvbuf_inc=16384
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.recvbuf_max=16777216
# System tuning
net.inet.tcp.recvspace=65536
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.rfc1323=1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.sendbuf_auto=1
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.sendbuf_inc=8192
# System tuning
net.inet.tcp.sendspace=65536
# System tuning
net.inet.udp.maxdgram=57344
# System tuning
net.inet.udp.recvspace=65536
# System tuning
net.local.stream.recvspace=65536
# System tuning
net.local.stream.sendspace=65536
# http://acs.lbl.gov/TCP-tuning/FreeBSD.html
net.inet.tcp.sendbuf_max=16777216
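
By the way, you can try these values at runtime from a shell before committing them to sysctl.conf; anything that turns out to hurt is gone after the next reboot. For example:

# set a value for the running system only
sysctl net.inet.tcp.delayed_ack=0
# read the current values back
sysctl net.inet.tcp.delayed_ack net.inet.tcp.sendbuf_max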

I would really appreciate any comments about these tuning variables!