Newbie ZFS and 10GbE Questions and Help
posted by Brian Smith on June 11, 2014, 11:54 a.m.
Thread Tags: storage, zfs
Hi all, I have recently assembled a storage server to act as a nearline solution: jobs get moved onto it so they can be archived to tape, easing the strain on our "live" servers.
Supermicro SuperChassis CSE-846E26-R1200B
Supermicro MBD-X9DR3-LN4F+-O motherboard
2 x Xeon E5-2620 (6-core, 2.0 GHz)
64 GB DDR3-1600 RAM
ATTO NT12 10GbE card
Areca ARC-1882-16 RAID card
21 x 4 TB Hitachi SATA 6 Gb/s drives
LSI 9207-8i with 2 x 128 GB Intel SSDs attached in RAID 1, with CentOS 6.5 installed
I created the array as RAID 5 with one hot spare, giving me 76 TB via the Areca BIOS. When it came to partitioning/creating the filesystem, I decided to dabble a little in ZFS: I created a pool, configured it with appropriate permissions, created a share point called nearline, and then shared it onto our network via NFS. After a few glitches (mainly firewall related, and getting the share to automount on boot), the share is accessible from all the other servers on my network.

I ran two tests to check performance and have a few questions. From a Mac on the 10GbE network I ran the AJA System Test to look at disk read/write I/O. Although it seems to be giving adequate I/O, it doesn't seem fast enough to me: with a video frame size of 1920x1080 10-bit and a 1 GB file size, I get roughly 310 MB/s write and 300 MB/s read.
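For reference, my setup steps were roughly as follows (pool and device names are from memory, so treat this as a sketch of what I did rather than my exact commands):

# The Areca exports the whole RAID 5 array as a single block device,
# assumed here to be /dev/sdb, and ZFS sits on top of it.
zpool create nearline /dev/sdb

# Set the mount point and share it over NFS via the ZFS property
zfs set mountpoint=/nearline nearline
zfs set sharenfs=on nearline

# Verify the pool and the export
zpool status nearline
showmount -e localhost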
My question here: I created the ZFS pool on the already-built RAID 5 volume set (8 KB stripe, block size 512, cache attribute write-back) using the Areca BIOS. I chose a low stripe size (8 KB) thinking that a lot of the data moving to this server is going to be Maya files, JPEGs/TIFFs etc., and that a lower stripe size would increase performance. Is there a set size I should be using? Should HDD read-ahead cache be enabled?

I noticed on forums that a lot of people seem to just create a RAID 0, and then use raidz when creating the ZFS pool. Will this increase performance? I would rather trash the whole array now, while it's in the testing phase, and configure it correctly; see the sketch below for what I have in mind.
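If letting ZFS handle the redundancy directly is the better route, I imagine the rebuild would look something like this (disk names are placeholders, and this assumes the Areca can expose the drives individually, e.g. as JBOD/pass-through or single-disk volumes):

# Sketch of a raidz2 pool built directly on the disks: two 10-disk
# raidz2 vdevs plus one spare covers all 21 drives. Device names
# sdb..sdv below are placeholders.
zpool create nearline raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk
zpool add nearline raidz2 sdl sdm sdn sdo sdp sdq sdr sds sdt sdu
zpool add nearline spare sdv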
Secondly, related to 10GbE: all systems have ATTO NT11/NT12 10GbE cards with the same sysctl tuning applied, all with MTU=9000. If I run iperf between two of the other servers and to/from the new ZFS server, it seems to give me good speeds to the new server; however, from the new server to the others it's a little slow. The (truncated) output I captured:

[root@fas ~]# iperf -c 199.95.137.45
------------------------------------------------------------
[ 3] local 199.95.137.12 port 35556 connected with 199.95.137.45 port 5001

CHUNK SERVER (ZFS) to C3X & FAS SERVER
[ 4] local 199.95.137.12 port 5001 connected with 199.95.137.45 port 48837

C3X SERVER to FAS SERVER
^C[root@c3xlive ~]# iperf -c 199.95.137.12

FAS SERVER to C3X SERVER
Client connecting to 199.95.137.178, TCP port 5001
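To pin down the asymmetry, my plan is to rerun the tests with longer runs, parallel streams, and both directions in one go; something along these lines (same IPs as above):

# On the receiving server:
iperf -s

# From the ZFS server: a 30-second run with 4 parallel streams, then a
# tradeoff run (-r repeats the transfer in the reverse direction too).
iperf -c 199.95.137.12 -t 30 -P 4
iperf -c 199.95.137.12 -t 30 -r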
I am using this tuning in each sysctl.conf file:
#TUNING

Any help in fine-tuning the 10GbE card, and also fine-tuning NFS, would be appreciated. I had read that increasing txqueuelen to 10000 is helpful for NFS connections?
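For what it's worth, the tuning block I mean is the usual set of 10GbE buffer increases you see recommended on forums; roughly this kind of thing (the values below are commonly posted starting points, not necessarily my exact settings):

# /etc/sysctl.conf - commonly recommended 10GbE buffer sizes (example values)
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 30000

# And the txqueuelen change I read about, applied per interface
# (eth2 here is a placeholder for the 10GbE interface):
ifconfig eth2 txqueuelen 10000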
Thanks