Calling All Hitachi Storage-Savvy Admins!
posted by Kevin McCartney on Feb. 5, 2015, 7:35 p.m.

Thread Tags: server

We are about to start a project with roughly 150 TB of data. We have purchased the disks and will be giving this project its own file system. I am used to the Isilon model of storage, so I am still trying to understand how the load is distributed here.

Running some very heavy test renders, we are seeing a large spike in the Bossock fiber count on the node the file system is running on. From some research, it looks like the disk is not able to keep up with the requests. Is it possible to spread the load of one really large file system over multiple heads? We have a lot of smaller file systems that I am able to spread between the two nodes, but I am wondering what the best course of action is when one large file system is getting hit with loads of traffic. It's already on its own EVS, and it is the only file system running on that node.
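For what it's worth, before changing the layout I've been trying to confirm that the disks (rather than the network or the head) are the bottleneck. A synthetic load roughly shaped like our render traffic can be generated with fio; the job file below is just a sketch, and the mount point, file size, and job counts are placeholders, not our actual setup:

```ini
; render-test.fio — hypothetical fio job approximating a heavy render read load
; run with: fio render-test.fio
[global]
directory=/mnt/bigfs      ; mount point of the large file system (placeholder)
size=10g                  ; per-job file size (placeholder)
runtime=300               ; run each job for 5 minutes
time_based
group_reporting           ; aggregate stats across jobs

[render-read]
rw=read                   ; sequential reads, as renders mostly stream assets
bs=1m                     ; large block size typical of media reads
numjobs=8                 ; simulate several render clients
iodepth=16
ioengine=libaio
direct=1                  ; bypass the client page cache
```

If the reported throughput flattens out well below what the spindle count should deliver while latency climbs, that points at the disk back end rather than the node.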