Channel: StudioSysAdmins Message Board

HTC Vive with SkyBox VR Player/Revive

posted by Sam Frankiel on Feb. 9, 2017, 5 p.m. (2 days ago)
Good Afternoon - Is anyone working with SkyBox VR Player and an HTC Vive? If you are, do you have any pointers? There were some articles we found that used Revive to inject Premiere (and SkyBox) into Steam to get output working, but we haven't had any luck getting it running. Many thanks in advance!

SAM FRANKIEL
ANTFARM
110 S. FAIRFAX AVENUE, SUITE 200
LOS ANGELES, CALIFORNIA 90036
323.850.0700 | ANTFARM.NET
Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 

NYC backup ISP w/ fast & flexible terms

posted by Rob LaRose on Feb. 9, 2017, 6:25 p.m. (2 days ago)
Hi friends!

I'm reviewing my backup ISP -- my current solution is business-class-cheapo, with asymmetrical bandwidth, no SLA and so forth.

I'd ideally like to have a service with some flexible terms: low-bandwidth (say, 100Mbps) commit, but with the ability to make a phone call and turn up to 500Mbps or 1Gbps immediately when needed, paying a surcharge rate for those days.

Anybody got a service in New York that does that?

--Rob


--
rob larose | engineer | rock paper scissors | 212-255-6446 | www.rockpaperscissors.com
Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 

Current state of "Linux on Windows"

posted by Michael Stein on Feb. 13, 2017, 1:40 p.m. (1 day ago)
Hi Hivemind!

I've got a client who is transitioning their pipelines from OS X to Windows 10. I looked in the archives and saw an old conversation regarding Bash/Ubuntu on Windows, but what's the current thinking/state of the art in terms of being able to handle environment vars, Linux shell commands, etc. under Windows? Is it still Cygwin, or does Ubuntu under Windows 10 (WSL) handle enough now? Or is there something else?
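For illustration, here is a minimal sketch of the kind of thing in question: dispatching a Linux shell command from Windows through either WSL's bash.exe ("Bash on Ubuntu on Windows") or a Cygwin bash, with pipeline environment variables exported on the Linux side. The Cygwin install path and the SHOW/SHOT variables are purely illustrative assumptions, not anything from the thread.

    import shlex
    import subprocess

    def run_linux_cmd(cmd, env=None, use_wsl=True):
        """Run a Linux shell command from Windows via WSL's bash.exe or a
        Cygwin bash (the Cygwin path below is an assumption; adjust to taste)."""
        bash = "bash.exe" if use_wsl else r"C:\cygwin64\bin\bash.exe"
        # Prepend exports so pipeline variables reach the Linux-side shell.
        exports = " ".join("export %s=%s;" % (k, shlex.quote(v))
                           for k, v in (env or {}).items())
        return subprocess.run([bash, "-c", exports + " " + cmd],
                              stdout=subprocess.PIPE, universal_newlines=True)

    if __name__ == "__main__":
        out = run_linux_cmd("echo $SHOW/$SHOT",
                            env={"SHOW": "demo_show", "SHOT": "sh010"})
        print(out.stdout.strip())   # -> demo_show/sh010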

I know this is a question with potentially a lot of details in the final answer, but any pointers in the right direction greatly appreciated.

thanks,
stein
Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 

NAB codes

posted by Dan Young on Feb. 13, 2017, 2:35 p.m. (1 day ago)
Who's got a good one?

I don't want some vendor's name on my badge, yet I also do not want to pay....

DY

--

Framestore
Dan Young Lead Systems Engineer
London | New York | Los Angeles | Montréal
T+1 212 775 0600
135 Spring Street, New York NY 10012
Twitter | Facebook | framestore.com
Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 

Maya 2017: Problems with viewport.

posted by Mark Ryan on Feb. 14, 2017, 1:04 p.m. (1 day ago)

I'm running Maya 2017 Update 2 on Linux, and when curves are cached to Alembic they disappear when changing camera angles.

Here's what we have determined so far:
- we tested on two graphics cards (Quadro K4200 and M4000) and the problem persisted.
- we tested three drivers on the two cards listed above with no luck.
- the issue is not user-profile related: the problem persists with a fresh profile.
- adjusting Viewport settings makes no difference.

 Anyone seen this before?


0 Responses   0 Plus One's   0 Comments  
 


SSA member Technology in M&E podcast

posted by Jean-Francois Panisset on Feb. 14, 2017, 3:10 p.m. (1 day ago)
Julian Firminger just did an interview with the Packet Pushers podcast talking about technology in M&E; I'm sure we will all recognize a lot of our daily challenges. And he put in a nice plug for SSA!

Definitely worth a listen.

JF

Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 

SuperMicro TwinBlades, are they good?

posted by Michael Miller on Feb. 15, 2017, 10:22 a.m. (1 day ago)

I am seriously considering building a renderfarm using one of these for density, space, and cost-savings reasons, but I don't see many people talk much about them in forums. I was wondering if anybody here has any first-hand experience using them, and how they compare to more expensive solutions from HP or Dell. Thank you in advance!

Thread Tags:
  render 

0 Responses   0 Plus One's   0 Comments  
 


gridFTP

posted by Greg Whynott on Feb. 15, 2017, 11:10 p.m. (1 day ago)
Thought this was worth sharing, at a geek level.

Black line is from server to DMZ VM, red line is from DMZ VM to Korea.

The VM is on a large array and the server is an HPC node with a single 7200 RPM drive, working on a job at the same time. Likely the bottleneck here is the drive/system itself, but still, it's amusing that it took less time to transfer the file to the other side of the earth.


Side note: if you have to compress large files or sets, check out pigz if you are not aware of it. It will use every core, even on single files; it splits the file into chunks equal to the core count and works on them in parallel. It's so fast.
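A tiny sketch of what that looks like when driven from a script. The thread count here defaults to the core count, the file path is just an example, and only pigz's -p (threads) and -k (keep original) flags are assumed.

    import multiprocessing
    import subprocess

    def pigz_compress(path, threads=None):
        """Compress one file with pigz, which chunks the input and compresses
        the chunks on all cores (unlike single-threaded gzip)."""
        threads = threads or multiprocessing.cpu_count()
        # -p sets the thread count, -k keeps the original file around.
        subprocess.check_call(["pigz", "-p", str(threads), "-k", path])
        return path + ".gz"

    print(pigz_compress("/mnt/jobs/big_sim_cache.tar"))  # example path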






take good care,

_g


Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 



All-Flash 30-40TB Windows/FreeBSD storage gotchas ?

posted by Philippe Chotard on Feb. 17, 2017, 4:15 p.m. (2 days ago)
Hi,
I'm about to build an all-flash storage server as our main production storage (Nuke/Maya/Houdini/players).
I'm currently leaning towards a "self-built" 24-bay 2.5" Supermicro server filled with Intel S3610-series drives.
I might go for NVMe instead, but I'm not comfortable doing software RAID with Windows Server, so I might choose a FreeBSD/ZFS solution in that case.

If you have already built something similar, either on Windows Server or FreeBSD, would you have any gotchas to share? I have never built a full-SSD storage server, so I'm trying not to miss anything before spending my whole storage budget on it :)
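As a rough sizing sanity check (not from the post itself: the 1.6TB drive size and the raidz2 layout below are assumptions that happen to land in the 30-40TB range the title mentions):

    # Assumed: 24 x 1.6TB Intel S3610 (the largest S3610), ZFS with 4 x 6-drive raidz2 vdevs.
    DRIVES, TB_PER_DRIVE = 24, 1.6
    raw_tb = DRIVES * TB_PER_DRIVE                  # 38.4 TB raw
    usable_tb = 4 * (6 - 2) * TB_PER_DRIVE          # 25.6 TB before ZFS overhead/reservations
    print("raw %.1f TB, roughly %.1f TB usable" % (raw_tb, usable_tb))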
Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 

All-Flash 30-40TB Windows/FreeBSD storage gotchas ?

posted by William Sandler on Feb. 17, 2017, 4:45 p.m. (2 days ago)
Investing in a SATA-based all-flash array in 2017 is probably a bad idea.

Assuming you go with the SATA drives you mentioned, I don't see too many gotchas to worry about with this setup in a FreeBSD ZFS server. I would just make sure the Supermicro server you choose has enough PCIe lanes to accommodate the HBA cards and NICs. Which SM chassis are you considering? You want to make sure you get one that is wired in a way that the HBA(s) are not the bottleneck. For instance, if you're only using a single HBA and it can process 12Gb/s, it will be a bottleneck for your 24x SSDs.
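Back-of-the-envelope numbers for that point (the ~500 MB/s per-drive figure is an assumption for a SATA SSD like the S3610, not something stated in the thread):

    DRIVES, MB_PER_DRIVE = 24, 500                       # assumed sequential MB/s per SATA SSD
    aggregate_gbps = DRIVES * MB_PER_DRIVE * 8 / 1000.0  # ~96 Gb/s across all drives
    hba_link_gbps = 12                                   # a single 12Gb/s link, as above
    print("drives can push ~%.0f Gb/s vs a %d Gb/s link: ~%.0fx oversubscribed"
          % (aggregate_gbps, hba_link_gbps, aggregate_gbps / hba_link_gbps))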

The newer NVMe SM chassis seems ideal for your intended use but I haven't worked with it yet so I can't speak to it. Saker posted the other day that he is using it so maybe he can shed some light.



William Sandler


Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 

Looking for a remote freelance linux sys admin

posted by Nelson Lim on Feb. 17, 2017, 5:12 p.m. (2 days ago)

Hi guys,

I'm looking for a Linux sysadmin guru who would be able to help us remotely with AD authentication, VPN, VMs, SANs, Samba, license servers, database servers, etc., and advise on our Linux infrastructure.

We are a new studio that's mainly Windows, but we have been adding more Linux infrastructure and now we need expert help to support our growing Linux needs.

 

Let me know if you know someone who might fit the bill and would like to work remotely on a freelance basis.

Nelson Lim

Lead Pipeline TD

Brazen Animation Studios

nelson.lim@brazenanimation.com

Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   1 Comments  
 


looking for remote freelance sysadmin work

posted by Chris Park on Feb. 17, 2017, 8:15 p.m. (2 days ago)

Hey, is anyone looking for a remote freelance sysadmin? If you are, please contact me. Below are my LinkedIn and email.

Cheers,

Chris

cpark16@gmail.com

https://ca.linkedin.com/in/chris-park-620604b

 

Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 


 

studiosysadmin irc server

posted by Chris Park on Feb. 17, 2017, 8:28 p.m. (2 days ago)

Hey,

Would anyone be interested in a studio sysadmin IRC server? I'm thinking about setting one up for all of us sysadmins in the industry to chat on and help each other out.

 
Kanpai from Japan,
Chris
Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   1 Plus One's   0 Comments  
 


Secure content processing.

posted by Julian Firminger on Feb. 18, 2017, 6:45 a.m. (1 day ago)
Grtz hivemind,

We're in the process of rolling out our first "MPAA-equivalent" secure content processing workflow, replete with physically secure rooms and registered personnel.

We've done a pretty good job of locking down the workstations, separating the servers that will do the processing, and creating an independent workflow for command and control, complete with its own authentication system and its own credential storage restricted to only the registered engineers for the projects. However, there are still holes.

I'm wondering how others deal with things like root access to the storage backend. I'm finding that there's often a disconnect between the security requirements of the project and the SLA requirements for the platforms supporting them. At some point, I'm likely to need to give the keys to a vendor engineer who, technically at least, then has (provisional) access to the secure data. This is regardless of whether the storage is physically in the secure rooms or not.

We haven't been asked to yet, but is it commonplace among you for your engineers, or your whole department, to sign NDAs as well as the front-office staff? Specifically, in operations where NDAs are signed on a provisional or project-by-project basis. We don't have facility-wide contracts.

And how do you deal with vendor engineers in this regard?

Julian Firminger

Snr. Systems Administrator / Attempted Full-Stack Engineer
United Broadcast Facilities
Amsterdam, The Netherlands
Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 

Trying to Find the Bottleneck of SVN Checkout and Commit Speeds

posted by William Sandler on Feb. 18, 2017, 12:30 p.m. (1 day ago)
Hey fellow SSAs.

Was wondering if anyone had any tips on speeding up SVN commits and checkouts.

The project I'm testing with is ~10GB and ~24,000 files, mostly PNG, MAT, and FBX files.

The SVN server is bare metal with an E5-2623 v3, 64GB RAM, and an Intel NVMe SSD. For the OS I've tried both Ubuntu with the latest SVN and Apache, and Windows with the latest VisualSVN Server. The client machines doing the committing and checking out are using TortoiseSVN and have last-generation i5s, 32GB RAM, and 500GB Samsung SSDs. The machines are connected over 10Gb Intel NICs.

Commits average ~20MB/s and checkouts average ~50MB/s. Is this just the nature of single threaded SVN or can something be done to speed up this process?




William Sandler
Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 

Trying to Find the Bottleneck of SVN Checkout and Commit Speeds

posted by Sam Edney on Feb. 18, 2017, 12:55 p.m. (1 day ago)

I suspect you have already seen this page, but there are a few tips here:

 

http://stackoverflow.com/questions/749337/best-practices-for-a-single-large-svn-project

 

It seems like separating out binary data may be the way to go.

 

If it was worth the investment, you could write some wrapper scripts around svn. They could build hashes of each file and keep the hashes as plain text in your repo. If the hashes change between checkouts, get the new binary file from a more suitable storage location. I suppose this gets away from the usefulness of svn.
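A minimal sketch of that wrapper idea. Everything here is hypothetical: the manifest name, the extension list, and the working-copy path are placeholders. The point is that only the small text manifest lives in svn, while the binaries come from separate storage whenever a hash changes.

    import hashlib
    import os

    BINARY_EXTS = (".png", ".mat", ".fbx")   # assumed binary asset types

    def sha256_of(path, chunk=1 << 20):
        """Hash a file in 1MB chunks so large assets don't need to fit in RAM."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    def build_manifest(root, manifest="binary_manifest.txt"):
        """Record a hash per binary asset; commit only this manifest to svn."""
        with open(os.path.join(root, manifest), "w") as out:
            for dirpath, _, names in os.walk(root):
                for name in sorted(names):
                    if name.lower().endswith(BINARY_EXTS):
                        full = os.path.join(dirpath, name)
                        out.write("%s  %s\n" % (sha256_of(full),
                                                os.path.relpath(full, root)))

    build_manifest(r"C:\work\project_wc")   # working-copy path is illustrative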

 

Separately, we noticed some huge speed increases when we moved to git. Our repos are much smaller than yours (between 10 and 400MB) with binary and text mixed. I hear git doesn't play nicely with large binaries either, but it may be worth running a test if it is likely you could ever switch.

 


 

Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 


 

AWS Glacier archives

posted by Grace Rante Thompson on Feb. 18, 2017, 11:40 p.m. (1 day ago)

Anybody here already using AWS Glacier or Google Coldline for archives? Would love to hear some feedback.

thanks,

-g
Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 


Supermicro SSD/NVMe Servers and MS Storage Spaces 2016

posted by Saker Klippsten on Feb. 20, 2017, 2:30 p.m. (1 day ago)
**This is a continuation from the post Titled "SuperMicro TwinBlades, are they good?"

Will reply back with more data as we collect it:) 

While we are lovers of Linux and open source, we are mostly a Windows desktop shop, and we have opted for now to go with Windows Storage Spaces built into Server 2016. If you have not checked it out, I highly recommend looking at these videos, doing some testing, and forming your own opinion on its performance, simplicity, ease of use, and all the included benefits: snapshots, quotas, scaling, storage tiers.

As many know, we run a pretty lean shop here at Zoic: ~430+ users and 5 full-time IT folk plus me (someone has to change the light bulbs) across 3 locations. So while we could use a Unix setup, we opted for Server 2016 because, well, we are all Windows on the desktop/domain/email side. We will be 100% Windows 10 by the end of the 3rd quarter as projects wrap up and we can roll artists to new environments. More on those specs later, but it's similar to POC1 below with only a single 800GB NVMe drive and a 1TB EVO system drive, a single 40Gig port, and 256GB of RAM.

We had a requirement in the last month for lots of sims and rendering with Houdini. Our Isilon clusters and our Qumulo were having a tough time keeping up in our tests, and the cost to scale those out was going to be too much for our budget. We had been testing our POC1 (below) with great success for serving out apps, acting as an RV box, and doing one-off sets of renders for Maya/V-Ray and Houdini. So we opted to invest in a larger setup.

These are all new systems for us, so more battle testing will ensue in the next few months as we hit them hard with 20 Houdini artists, 50+ dedicated sim nodes, and about 500 Mantra-based render nodes reading gigantic cache files.
POC2 and POC3 will be tiered together using Storage Spaces Direct 2016. We will be collecting lots of valuable data to share back here when everything is up and running, or burning up :) hopefully not the latter.

We have another fun project following the delivery of this one: a 30-minute 360-degree 4K projection setup with an obnoxious resolution. So we will then flip this system from Houdini users to Nuke/Nuke Studio, Maya/V-Ray rendering, and some Flame, all working off this storage.

If all goes well, we will scale this out and move off Isilon as our primary performance storage to these Supermicro systems by the 3rd quarter of this year. We have a great opportunity to R&D this kit in production. We still love Isilon and Qumulo, but currently their performance in the area of NVMe is lacking, and while many bag on MS (myself included), so far Storage Spaces has impressed me with its performance and simplicity. Only time will dictate the reliability.

Our next project is then tiering to cold storage using 90-bay Supermicro chassis and 10TB helium drives, but that's Q4 (if anyone has played with those, speak up!).

Here are some Videos to check out on SSD2016



POC1 is a 1U Supermicro with two dual-port 40Gig Intel XL710 cards. Remember that a dual-port 40GbE card can max out a PCIe 3.0 x8 slot, so we have the two cards in separate x16 slots for dual-port 40Gig or single-port 100Gig cards (rough arithmetic in the sketch after the parts list below).

(1) Super Server 1028U-TN10RT+

(2) Intel E5-2640 V4 

(8) 16GB DDR4-2400 ECC REG DIMM  

(6) Intel P3600 400GB NVME

(2) Intel XL710QDA2
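Rough bandwidth arithmetic behind the slot placement above. The ~985 MB/s-per-lane figure for PCIe 3.0 is an assumption, as is ignoring protocol overhead on the NIC side.

    PCIE3_LANE_MBPS = 985                              # assumed usable MB/s per PCIe 3.0 lane
    x8_gbps = 8 * PCIE3_LANE_MBPS * 8 / 1000.0         # ~63 Gb/s
    x16_gbps = 16 * PCIE3_LANE_MBPS * 8 / 1000.0       # ~126 Gb/s
    dual_port_40gbe_gbps = 2 * 40                      # line rate of one XL710-QDA2
    print("x8 ~%.0f Gb/s, x16 ~%.0f Gb/s, dual-port 40GbE %d Gb/s"
          % (x8_gbps, x16_gbps, dual_port_40gbe_gbps))
    # An x8 slot cannot carry both 40GbE ports at line rate, hence one card per x16 slot.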


POC2 is the 2U Supermicro with two Mellanox 100GbE cards.

(2) Intel E5-2680 V4

(8) 16GB DDR4-2400 ECC REG DIMM 

(2) Mellanox ConnectX-4 MCX456A-ECAT 
http://www.mellanox.com/related-docs/prod_adapter_cards/PB_ConnectX-4_VPI_Card.pdf

(24) Intel P3600 400GB NVME expandable to 48


POC3 is a 2U Supermicro like above, but using Samsung 4TB EVO SSDs.

(1) SuperStorage Server SSG-2028R-E1CR48L


(24) Samsung 4TB EVO expandable to 48
http://www.samsung.com/us/computing/memory-storage/solid-state-drives/ssd-850-evo-25-sata-iii-4tb-mz-75e4t0b-am












Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 












Supermicro SSD/NVMe Servers and MS Storage Spaces 2016

posted by Ali Poursamadi on Feb. 20, 2017, 3:35 p.m. (1 day ago)
Thanks Saker for sharing this, it's very valuable.
I was talking with the iXsystems guys about running FreeNAS on similar hardware (SSG-2028R-E1CR48L) and they mentioned it might have thermal issues. I was wondering what your experience is with that system in terms of heat/power.

-Ali



Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 

Should I stay or go? ESXi vs Oracle VM Server

posted by Greg Whynott on Feb. 22, 2017, 6 p.m.


Is anyone using Oracle VM Server? What do you like and dislike about it, and did you use ESX in the past?


ramblings:



We have a requirement for 5-6 dual-CPU host machines in a cluster, with potential to go to 7 or 8. Currently we have 3 using the 'Essentials' license.

The ESX enterprise license is, as everyone knows, expensive. I found this calculator that claims to show you the difference in pricing. Based on that, and the fact that I couldn't find any horror stories after a quick search, I decided to consider Oracle VM Server to replace ESXi entirely. I could buy 2 built hosts for the difference in pricing, if the calculator was accurate.


After thinking about things, I wondered if I needed to stay with VMware and what I'd be giving up if I moved away.

We use NFS to connect our datastores. In the 6 years this cluster has been in operation we have never lost a node, and there is no system here where the company would instantly start hemorrhaging dollars if it went down for a period of time.

I'm not even sure if Oracle has a failover-to-a-surviving-host feature for VMs (this idea is very new, like today!), but I'm thinking we could get away without it here. If they do, great.

I think having a solution where I can deploy VMs and migrate them off a host, either live or shut down, would meet our basic requirements.


My other concern is latency. Currently we have 4 internet connections from as many providers; some are gig, some are 100 meg. Because they are predominantly used for bulk data transfer, and due to the requirements passed on to me (each has to have a separate firewall, no two connections to any one firewall), I was considering virtualizing them to gain a level of redundancy, on a separate cluster. I could either do this using the 'orphaned' ESX license, or install Oracle VM Server there too. Wondering if anyone has any insight into latency through both solutions. I'll end up testing both but am curious.

I've been running firewalls in ESX for years, but always just to protect hosts/networks on the cluster itself; this would be the first attempt at deploying a corporate firewall solution. Crazy idea?


be well,
greg





Thread Tags:
  discuss-at-studiosysadmins 

0 Responses   0 Plus One's   0 Comments  
 






