Newbie ZFS and 10GB Questions and Help

posted by Brian Smith on June 11, 2014, 11:54 a.m.

Hi all,

I have recently assembled a storage server to act as a nearline solution: jobs get moved to it so they can be archived to tape, which also eases the strain on our "live" servers.

 

Supermicro SuperChassis CSE-846E26-R1200B
Supermicro MBD-X9DR3-LN4F+-O motherboard
2 x Xeon E5-2620 (6-core, 2.0 GHz)
64 GB DDR3-1600 RAM
ATTO NT12 10GbE card
Areca ARC-1882-16 RAID card
21 x 4 TB Hitachi SATA 6 Gb/s drives
LSI 9207-8i with 2 x 128 GB Intel SSDs attached in RAID 1 (CentOS 6.5 installed on these)

 

Via the Areca BIOS I created the RAID set as RAID 5 with one hot spare, giving me 76 TB.

When it came to partitioning and creating the filesystem, I decided to dabble a little with ZFS.

I created a pool, configured it with appropriate permissions, created a share point called nearline, and then shared it onto our network via NFS.
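
Roughly what I ran, from memory (the pool name "tank", the device path, and the user/group are just placeholders, not necessarily what's on the box):

# the Areca RAID 5 volume shows up as a single device, e.g. /dev/sdb
zpool create tank /dev/sdb
zfs create tank/nearline
chown -R render:render /tank/nearline      # "appropriate permissions" - user/group are examples
zfs set sharenfs=on tank/nearline          # or exported via /etc/exports + exportfs -a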

After a few glitches (mainly firewall related, plus getting the share to automount on boot) the share is accessible from all the other servers on my network.
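
For the automount piece, the clients just got an fstab entry along these lines (hostname and paths here are examples, not our real ones):

# /etc/fstab on each client
zfsserver:/tank/nearline   /mnt/nearline   nfs   defaults,_netdev   0 0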

I ran two tests to check performance and have a few questions.

From a Mac on the 10 GbE network I ran the AJA System Test to look at disk read/write performance. Although the I/O seems adequate, it doesn't seem fast enough to me:

Video frame size 1920x1080 10-bit, 1 GB file size: roughly 310 MB/s write, 300 MB/s read

On a much older server I am getting roughly the same speeds.

 

My question here: I created the ZFS pool on top of the already-built RAID 5 volume set (8 KB stripe size, 512-byte block size, write-back cache mode) configured via the Areca BIOS.

I used a low stripe size (8 KB), thinking that a lot of the data moving to this server is going to be Maya files, JPEGs/TIFFs, etc., and that a lower stripe size would increase performance. Is there a recommended size I should be using? Should the HDD read-ahead cache be enabled?

I noticed on forums that a lot of people seem to just create a RAID 0, and then use raidz when creating the ZFS pool. Will this increase performance?

I would rather trash the whole RAID now, while it's in the testing phase, and configure it correctly.
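
If I do rebuild it, my understanding is the disks get passed through individually (JBOD in the Areca BIOS) and the redundancy is handled in ZFS; something like the following, though the disk names and vdev widths are just one example layout for the 21 drives:

# Areca set to JBOD/pass-through so ZFS sees the raw disks, then e.g.:
zpool create tank raidz2 sdb sdc sdd sde sdf sdg sdh sdi sdj sdk \
                  raidz2 sdl sdm sdn sdo sdp sdq sdr sds sdt sdu \
                  spare sdv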

 

 

Secondly, related to 10 GbE: all systems have ATTO NT11/NT12 10 GbE cards with the same sysctl tuning added, and all are set to MTU 9000.
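
The MTU is set per interface (interface name below is just an example); on CentOS 6.5 that's the runtime command plus the ifcfg file so it survives a reboot:

ip link set dev eth2 mtu 9000
# plus MTU=9000 in /etc/sysconfig/network-scripts/ifcfg-eth2 to persist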

If I run iperf between two different servers to and from the new ZFS server, I get good speeds to the new server; however, from the new server to the others it's a little slow.

[root@fas ~]# iperf -c 199.95.137.45
------------------------------------------------------------
Client connecting to 199.95.137.45, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 199.95.137.12 port 35555 connected with 199.95.137.45 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 11.6 GBytes 9.92 Gbits/sec
[root@fas ~]# iperf -c 199.95.137.45 -fM -m -i5 -t25

[ 3] local 199.95.137.12 port 35556 connected with 199.95.137.45 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0- 5.0 sec 5922 MBytes 1184 MBytes/sec
[ 3] 5.0-10.0 sec 5911 MBytes 1182 MBytes/sec
[ 3] 10.0-15.0 sec 5908 MBytes 1182 MBytes/sec
[ 3] 15.0-20.0 sec 5906 MBytes 1181 MBytes/sec
[ 3] 20.0-25.0 sec 5912 MBytes 1182 MBytes/sec
[ 3] 0.0-25.0 sec 29560 MBytes 1182 MBytes/sec

CHUNK SERVER (ZFS) to C3X & FAS SERVER

[ 4] local 199.95.137.12 port 5001 connected with 199.95.137.45 port 48837
[ ID] Interval Transfer Bandwidth
[ 4] 0.0-10.0 sec 5.08 GBytes 4.36 Gbits/sec


C3X SERVER to FAS SERVER

[root@c3xlive ~]# iperf -c 199.95.137.12
------------------------------------------------------------
Client connecting to 199.95.137.12, TCP port 5001
TCP window size: 92.6 KByte (default)
------------------------------------------------------------
[ 3] local 199.95.137.178 port 39013 connected with 199.95.137.12 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 5.41 GBytes 4.65 Gbits/sec

 

FAS SERVER to C3X SERVER

Client connecting to 199.95.137.178, TCP port 5001
TCP window size: 64.0 KByte (default)
------------------------------------------------------------
[ 3] local 199.95.137.12 port 55683 connected with 199.95.137.178 port 5001
[ ID] Interval Transfer Bandwidth
[ 3] 0.0-10.0 sec 11.6 GBytes 9.92 Gbits/sec

 

I am using this tuning in each server's /etc/sysctl.conf file:

 

#TUNING
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.core.netdev_max_backlog = 250000
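
Applied with sysctl after editing, e.g.:

sysctl -p /etc/sysctl.conf      # reload the settings
sysctl net.ipv4.tcp_rmem        # spot-check that a value took effect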

Any help with fine-tuning the 10 GbE cards, and also with tuning NFS, would be appreciated.

I had read that increasing txqueuelen to 10000 is helpful for NFS connections?
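
If so, I assume the change itself is just something like this (interface name is an example):

ifconfig eth2 txqueuelen 10000
# or equivalently:
ip link set dev eth2 txqueuelen 10000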

 

Thanks

 

Thread Tags:
storage, zfs

