Channel Description:

Latest Message Board Threads at StudioSysAdmins

    mac os x 10.9.5 update
    posted by Wayne Chang on Sept. 17, 2014, 9:10 p.m. (1 day ago)
    Has anyone updated to Mavericks 10.9.5 yet? One of the fixes is "Improves the reliability of accessing files located on an SMB server"... anyone notice any difference?

    To unsubscribe from the list send a blank e-mail to mailto:studiosysadmins-discuss-request@studiosysadmins.com?subject=unsubscribe
    Thread Tags:
      discuss-at-studiosysadmins 



    Square Hole Racks in Vancouver!
    posted by Joseph Boswell on Sept. 17, 2014, 10:40 p.m. (1 day ago)
    Anyone have one or two full-size square-hole racks here in Vancouver that they would like to exchange for currency or other potential favors/bribes?

    We have a vendor that had a shipping snafu, and it puts us in a little bit of a bind at the moment.

    As if I weren't asking for the world already, we could also use some vertical PDUs for said racks...

    Thank you and have a fun/safe evening everyone!

    Joe
    Thread Tags:
      discuss-at-studiosysadmins 



    do you have a cisco TAC contract?
    posted by Greg Whynott on Sept. 18, 2014, 2:05 p.m.
    At one time you could download firmware from Cisco for any device you wished, but that is no longer the case unless you have a SmartNet contract with them.

    I have a device which I do not have a contract for (this is not work related), and I need (want) a newer IOS image for it.

    If anyone can help, please do hit me up off list, and thank you very much!

    The SmartNet requirement is a bit much, I think.

    thanks again,
    -g





    Thread Tags:
      discuss-at-studiosysadmins 


    Maya - Workspace.mel - how do you set custom folders?
    posted by Todd Smith on Sept. 18, 2014, 3:55 p.m. (1 day ago)
    Hey Guys,

    We are currently trying to manipulate the custom folders available in the Maya File Open dialog (lower left-hand corner of the window).

    Usually we would do this with a custom workspace.mel file. For example:

    workspace -fr "scene" "scenes/3d";

    would set a folder pointing to the scenes/3d folder under whatever we've set the $MAYA_PROJECT env var to.

    Now, the workspace command documentation states:

    // The following example shows how to set multiple paths into a fileRule value
    workspace -fileRule "newMultiPathFileRuleName" "/h/userName/maya/projects/default;newFileRuleValue";

    and

    -fileRule(-fr) string string [create] [query]
    Set the default location for a file. The first parameter is the fileRule name (scenes, images, etc.) and the second is the location. When queried, it returns a list of strings. The elements of the returned list alternate between fileRule names and the corresponding locations. There is typically one file rule for each available translator. Environment variables are supported. You can set multiple paths for the file rule by separating them with semicolons (;) on Windows and colons (:) on macOS and Linux. Note that whitespace at the beginning and end of each item in the separated sequence is significant and will be included as part of the path name (which is not usually desired unless the pathname does actually start or end with spaces). A valid fileRule name cannot contain multi-byte characters.

    So ideally, we should be able to set multiple file paths for the "scenes" fileRule name by using "scenes/3d/anim:scenes/3d/model", or more precisely:

    workspace -fr "scene" "scenes/3d/anim:scenes/3d/model"

    Using this "should" do something, preferably populate that custom folder bar in the lower left-hand corner with two directories. Alas, it does not; in fact it does nothing, throws no errors, and is generally useless.

    So, the question is: how do we add project-relative paths to the file open dialog?

    Additionally, it looks like only certain fileRule names will populate this area, namely the default names (scenes, assets, images, sourceimages, renderData, clips, sound, scripts, data, movies, autosave). It would be great if the multi-path form worked, because we store Maya files in very specific directories and want to make those available.

    Is anyone out there populating this window using workspace.mel in a sensible way, or are you doing it some other way (i.e. overriding the np_get* files with your custom scripts folder)?
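    For what it's worth, the separator rule the docs describe (semicolons on Windows, colons on macOS/Linux) can be pinned down with a quick sketch; split_file_rule is a hypothetical helper for illustration, not a Maya API:

```python
# Sketch of the -fileRule multi-path separator rule from the Maya docs:
# values split on ";" on Windows and ":" on macOS/Linux.
# split_file_rule is a hypothetical helper, not part of Maya.
import platform

def split_file_rule(value, system=None):
    system = system or platform.system()
    sep = ";" if system == "Windows" else ":"
    # Per the docs, leading/trailing whitespace in each item is
    # significant, so we deliberately do not strip() here.
    return value.split(sep)

paths = split_file_rule("scenes/3d/anim:scenes/3d/model", system="Linux")
# two relative paths under the project root
```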

    Thanks,


    Todd Smith
    Head of Information Technology

    soho vfx 
    99 Atlantic Ave. Suite 303, Toronto, Ontario M6K 3J8

    Thread Tags:
      discuss-at-studiosysadmins 

    Mac Avids on Isilon
    posted by Michael Miller on Sept. 18, 2014, 4:56 p.m. (1 day ago)

    Anyone here using Mac Avids on Isilon? If so, can you share a little about your setup? I have 1GbE-connected Mac clients, and I can't seem to get Avid Media Composer to work with NFS shares on the Isilon. EMC & Avid support have been of little help, except to recommend SMB and/or MXFserver as possible solutions. SMB on Mac is not fast enough on a 1Gb connection, and my initial testing of SMB2 in Mavericks has not been a whole lot better. MXFserver seems like a pretty good solution, but I will need to get a fairly large budget first before I can actually implement it.

    I was thinking perhaps iSCSI might be a solution. Any thoughts or experiences with iSCSI on Mac? Anyway, if anyone else has been down this road already, I would love to hear how you set it up and how it is working for you. Thanks!

    Mike

    Budget rules of thumb?
    posted by Scott Allen on Sept. 18, 2014, 9 p.m. (1 day ago)
    Hey all,
    Tried to look this up, so forgive me if I missed it in the list. What is the yardstick you use to estimate future budgets? Other industries I've been involved in earmark some percentage of incoming revenue for IT.

    I have been told that is unrealistic for VFX.

    But this isn't the only industry with huge swings in income, so I know how to make do with smaller pieces of the pie in tough times. However, looking at the list of what is coming in the bids, matching it with what IT needs, and then adjusting the markup of the bid to cover it seems reactive and laissez-faire.

    So how do you look into your fiscal crystal ball? And how do you get managers to look beyond the next project?

    Cheers
    Scott A
    Thread Tags:
      discuss-at-studiosysadmins 


    Flame/finishing shared storage...
    posted by Brian Krusic on Sept. 19, 2014, 12:20 a.m.
    Hi,

    I have an idea, born from this:

    We had a need to employ Flares, but they needed more storage than we had time or money to buy. So we connected those Flares via 10Gb/jumbo to a Smoke's framestore via Wire.

    We were able to get ~80 fps, so happy about that.

    Each Flare had its own framestore dir on the shared Smoke framestore.

    At any rate, I would like to take this a step further by building ZFS-based storage hosted on a CentOS box running Wire.

    Then the Flames/Flares/Flame Assists would connect to it via Wire and have their own directories, all over 40Gb/jumbo to satisfy 4K etc...

    To move around, a station will connect to the appropriate dir on the shared storage, etc. No SAN, no NFS, just pure Wire access.
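    For the curious, a rough sketch of what the ZFS layer on that CentOS box might look like, assuming ZFS on Linux and hypothetical device names (the Wire/framestore configuration on top is its own exercise):

```
# hypothetical disks; one raidz2 pool, one dataset per station
zpool create framestore raidz2 sdb sdc sdd sde sdf sdg
zfs set recordsize=128K atime=off framestore
zfs create framestore/flare01
zfs create framestore/flare02
```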

    Has anyone already done this?

    - Brian

    Technologist...
    i.e., Purveyor of the ironic"

    Thread Tags:
      discuss-at-studiosysadmins 


    Anyone have an Isilon 12000X node in LA?
    posted by Brent on Sept. 19, 2014, 12:35 p.m.
    We had a node freak out during an update, and it needs to be replaced. I have one coming from Isilon, but it won't be here till Tuesday. Anyone have an Isilon 12000X node in LA I could borrow for a week or so, so I can start smartfailing the bad node out? I am willing to come get it, of course!

    Hit me up off-list!

    Thanks
    -Brent
    Thread Tags:
      discuss-at-studiosysadmins 

    VDI a la PCoIP
    posted by Brian Krusic on Sept. 19, 2014, 1:05 p.m.
    So I've a few initiatives going.

    One is PCoIP via ClearCubes. Special thanks to Rob Stine for being so patient with my barrage of cluelessness.

    At any rate, based on some preliminary tests, my new Cogent circuit end to end looks to be giving me ping times of ~58ms.

    How will PCoIP fare over such a circuit, which will be dedicated to PCoIP only? Also curious what network footprint it has. Latency is king here, but how much bandwidth per connection on average?

    I'm hoping Cogent will honor me marking these specific packets for priority across their cloud, to reduce latency if need be.

    Has anyone ever worked with Cogent in this respect, i.e. honoring your marked packets?

    I'm waiting for my eval units as we speak.

    I've also worked with Bell and ClearCubes, so Bell is an official vendor of their fromage. This offers a potential rent-to-own scenario which I find VERY attractive.

    At any rate, input welcome.

    - Brian

    Technologist...
    i.e., Purveyor of the ironic"

    Thread Tags:
      discuss-at-studiosysadmins 


    Best 4K Graphics Cards for HP Z820?
    posted by Rob Gestone on Sept. 19, 2014, 1:09 p.m.

    Hey all,

    We recently purchased a 4K LED monitor to use as a review station, driven by an HP Z820, 64GB RAM, NVIDIA K2000 card. We'd like to output 4K over HDMI using Hiero and RV, so I was wondering if anyone had any recommendations for graphics cards that can output 4K and would work well with our setup.

    I was testing the Blackmagic DeckLink 4K Extreme but can't get it to work properly, and Blackmagic hasn't been the most helpful.

    Any thoughts out there?

    Thread Tags:
      display 


    iWARP RDMA Solution for Superfast NFS-RDMA Rendering
    posted by Jorg-Ulrich Mohnen on Sept. 19, 2014, 1:47 p.m.

    We are implementing a new renderfarm here for R&D, looking heavily into RDMA over InfiniBand 56G and the associated iWARP technology. Standardization was completed in 2013, and it is now part of Red Hat and Debian, and of Windows Server 2012 via SMB.

    Anyone implementing the 2013 RDMA protocols?

    Initial tests show absolutely HUGE data speeds: with 10GbE or InfiniBand 56G, you can do memory-to-memory dumps of 40GB/sec without ANY CPU LOAD. Meaning all that handling and error checking and other shite found on all current renderfarms (NFS or SMB or AFS) is completely avoided. Hence pure speed from the CPU on the render node straight to NFS/RDMA storage. Render times look to be sped up by an order of magnitude.

    Looked at a lot of GIS and Remote Sensing / Seismology corps last night, and it seems that their data handling has been beefed up quite considerably with these new standards.
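    If anyone wants to kick the tires, a minimal sketch of an NFS-over-RDMA client mount on a RHEL-family box; the server name and export path are hypothetical, and NFS/RDMA conventionally listens on port 20049:

```
# client side, as root
modprobe xprtrdma
mount -t nfs -o rdma,port=20049 storage01:/export/render /mnt/render
```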

    Jorg

    Thread Tags:
      iWARP RDMA Solution f 


    iWARP RDMA Solution for Superfast NFS-RDMA Rendering
    posted by Brian Krusic on Sept. 19, 2014, 2:10 p.m.
    Have you messed around with an all-Mellanox solution end to end, which provides 56Gb Ethernet? I'm curious how that would play out.

    I assume you went IB due to the extra 16Gb over a standard 40Gb Ethernet solution?

    - Brian

    Technologist...
    i.e., Purveyor of the ironic"

    On Sep 19, 2014, at 10:47 AM, content@studiosysadmins.com wrote:

    We are implementing a new renderfarm here for R&D, looking heavily into RDMA over InfiniBand 56G and the associated iWARP technology. Standardization was completed in 2013, and it is now part of Red Hat and Debian, and of Windows Server 2012 via SMB.

    Anyone implementing the 2013 RDMA protocols?

    Initial tests show absolutely HUGE data speeds: with 10GbE or InfiniBand 56G, you can do memory-to-memory dumps of 40GB/sec without ANY CPU LOAD. Meaning all that handling and error checking and other shite found on all current renderfarms (NFS or SMB or AFS) is completely avoided. Hence pure speed from the CPU on the render node straight to NFS/RDMA storage. Render times look to be sped up by an order of magnitude.

    Looked at a lot of GIS and Remote Sensing / Seismology corps last night, and it seems that their data handling has been beefed up quite considerably with these new standards.

    Jorg


    Thread Tags:
      discuss-at-studiosysadmins 

    Shout out to LSI support
    posted by Brian Krusic on Sept. 19, 2014, 4:50 p.m.
    Hi,

    Long story short, the heat sink on one of my monster LSI 9206-16e cards popped off, due to the plastic tabs that keep it in place having burned off :)

    The chassis runs cool, server-class Intel; the card is just a hot chili, is all!

    At any rate, I waited a bit for its replacement due to a back order via RMA.

    Got the replacement today, and while I was looking at it, a chip popped off!

    Hey, I've been told that I've a face for radio, but still!

    At any rate, the RMA package had a phone number, Palco Inc. something-or-other in Georgia.

    I called and spoke to an RMA dude who had been pretty much beleaguered by today's previous callers.

    He was kind enough to give me contact info for a head inside-support tech, whom I called, and an advance replacement was dispatched, no charge.

    This sort of thing stopped happening back in the late 90s, so it was a real pleasure to have a real quality experience. I am happy.

    I'm also amused at the entire heat sink/chip thing as well.

    Pretty cool.

    - Brian

    Technologist...
    i.e., Purveyor of the ironic"

    Thread Tags:
      discuss-at-studiosysadmins 


    iWARP RDMA Solution for Superfast NFS-RDMA Rendering
    posted by Todd Smith on Sept. 21, 2014, 12:40 p.m. (1 day ago)
    Getting a ten-times saving out of a renderfarm by speeding up file read times is hyperbole, unless your assets are growing at an enormous rate and your compute power is aging drastically.
    In the render process (at least here) compute time outstrips load-in time by a large multiple in all but the smallest jobs; with localized asset caching, those numbers approach the lower limit of simply the application load-in time plus the reference checks to ensure the proper asset is localized.
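    To put rough numbers on that (illustrative assumptions, not measurements), Amdahl's law caps the win from faster I/O:

```python
# Amdahl's law: if I/O is only io_fraction of a frame's wall time,
# speeding I/O up by io_speedup gives a small overall gain.
def overall_speedup(io_fraction, io_speedup):
    return 1.0 / ((1.0 - io_fraction) + io_fraction / io_speedup)

# I/O at 10% of frame time, sped up 10x: frame is only ~1.1x faster.
print(round(overall_speedup(0.10, 10.0), 2))
# Only if I/O dominates (90% of frame time) does 10x I/O approach ~5.3x.
print(round(overall_speedup(0.90, 10.0), 2))
```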

    Additionally, having a big pipe to the renderwall doesn't solve the immediate issue of a big pipe to the storage backend; to truly enable this solution you would need a storage overhaul in addition to a render networking overhaul. This is a very costly proposition.

    All in all, I think the technological leap is awesome; it would ease network complexity and would offer minor improvements for applications that don't have a solid methodology for file IO on HPC. But from a cost/benefit perspective for an already established studio, it's untenable - the returns are minute in comparison to the cost.

    Cheers,
    Todd Smith
    Head of Information Technology

    soho vfx 
    99 Atlantic Ave. Suite 303, Toronto, Ontario M6K 3J8


    Yes, Mellanox vs. Chelsio, and at 56G. To reach 56G one must use InfiniBand and a certain set of adapters. That is the core concept behind really, really big render-time savings. One could go over Ethernet on Windows Server 2012 or Linux (Debian/Red Hat), but you need to be very sure that the PCIe cards support it. This is the next big thing, guys. I guess many of the hardware-procurement ramp-ups across this nation and overseas probably did not take anything like this into account. Too bad for those responsible sysadmins..... ;) Because it looks like this stuff is speeding things up quite a bit.

    You can go 10G if you buy the right PCIe adapter cards. But it's also a BIOS thing, as I have been reading up. Luckily we have ramped up over 800,000 USD in the newest HP hardware here, so I believe the Z820's and Z840's BIOS supports this next-gen technology. There are only a handful of IB cards and they are new, so don't buy used equipment for these tests; there are also only a handful of 10G Ethernet adapter cards ready and waiting.

    In my dealings with renderfarms, the golden rule is to speed things up, right? Well, if you can bump render times down from, say, 90 mins/frame to 9 mins/frame, you are doing your job.

    FYI - also try to always build your own storage ;) That's the easy part (and the cheaper part).

     

    Jorg Mohnen, M.Sc. MBA

     



    Thread Tags:
      discuss-at-studiosysadmins 

  • 09/22/14--15:15: DM-Multipath speed issues
  • DM-Multipath speed issues
    posted by Ali Poursamadi on Sept. 22, 2014, 6:15 p.m. (1 day ago)
    Hi Gurus,

    I'm experiencing an issue with DM-Multipath on CentOS 6.5: dd reads 2.5 times faster when reading from a device directly than when reading from the multipathed device.

    [root@JS2 ~]# multipath -l mpathaf
    mpathaf (--------) dm-89 Rorke,G4S-16L-4F8
    size=7.3T features='0' hwhandler='0' wp=rw
    `-+- policy='round-robin 0' prio=0 status=active
    |- 0:0:0:1 sdb 8:16 active undef running
    |- 0:0:10:2 sdbb 67:80 active undef running
    |- 1:0:11:1 sddl 71:48 active undef running
    `- 1:0:12:2 sddp 71:112 active undef running

    [root@JS2 ~]# dd if=/dev/sddl of=/dev/null bs=128k count=32000 skip=14000
    32000+0 records in
    32000+0 records out
    4194304000 bytes (4.2 GB) copied, 11.8166 s, 355 MB/s

    [root@JS2 ~]# dd if=/dev/dm-89 of=/dev/null bs=128k count=32000 skip=84000 # skipping 2GB array cache data
    32000+0 records in
    32000+0 records out
    4194304000 bytes (4.2 GB) copied, 31.4959 s, 133 MB/s


    This is on a machine connected over 2x8Gb/s links to a fiber switch and then to an HDX4 disk array, which exports the volume on 2 of its 4 8Gbps fiber ports.

    Multipath.conf looks like this:

    defaults {
        user_friendly_names yes
    }

    devices {
        device { # normal HDX4 disk arrays
            vendor "Rorke"
            product "G4S-16L-4F8"
            path_grouping_policy multibus
        }
    }

    blacklist {
        # blacklist almost everything
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z][0-9]*"
    }
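    One thing worth testing here (my own suggestion, not something from the thread): with `path_grouping_policy multibus`, the round-robin selector switches paths after a fixed number of I/Os, and switching too frequently can defeat the array's sequential read-ahead - which would show up exactly as a direct-path read beating the multipathed device. A hypothetical tuning sketch for the device section; the values are starting points to experiment with, not known-good numbers for this array:

    ```
    device { # normal HDX4 disk arrays
        vendor "Rorke"
        product "G4S-16L-4F8"
        path_grouping_policy multibus
        # Send more consecutive I/Os down one path before round-robining
        # to the next, preserving sequential read-ahead on the array.
        rr_min_io 128       # honored by BIO-based multipath
        rr_min_io_rq 32     # honored by request-based multipath (RHEL/CentOS 6)
    }
    ```

    After changing this, reloading the maps (e.g. `multipathd -k"reconfigure"`) and re-running the two dd tests should show whether path switching is the bottleneck.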

    Any input and help is highly appreciated.

    Thanks
    Ali Poursamadi




    Thread Tags:
      discuss-at-studiosysadmins 

    0 Responses     0 Plus One's     0 Comments  
     

    0 0

    iWARP RDMA Solution for Superfast NFS-RDMA Rendering
    posted by Todd Smith on Sept. 23, 2014, 9:35 a.m.
    Question about TCP and RoCE compat.

    So if I'm running RoCE to my renderfarm and TCP to my workstations, both connected to the same NAS, how does congestion control occur?  How do load balancing, QoS, or any other traffic management occur?

    Basically, is there any way to stop this super-fat pipe from dominating the storage infrastructure?  From my basic understanding and initial research, these don't appear to play nicely together, or at the very least they have independent congestion mechanisms.

    Cheers, 

    Todd Smith
    Head of Information Technology

    soho vfx 
    99 Atlantic Ave. Suite 303, Toronto, Ontario M6K 3J8


    Vendor addition

     

    From a vendor perspective I think you are all correct. It actually comes down to what the render workload is and what the network IO is as a ratio of the CPU workload.

     

    Some additional points:

     

    1.       RDMA can function on both InfiniBand and Ethernet (it's called RoCE on Ethernet). In fact a new RoCE version has recently come out allowing it to operate on L3 Ethernet, making it routable (i.e. Routable RoCE).

    2.       RDMA/RoCE will virtually remove all of the network overhead on your render CPUs (it's direct memory access), as it completely bypasses the kernel and TCP.

    3.       RDMA/RoCE is already baked into loads of upper-layer protocols (ULPs): NFS, SMB (called SMB Direct), iSCSI (called iSER), GPFS, SRP (SCSI over RDMA) and others. It's also native in the mainstream OSs, including both Linux and Windows; there are a few switches you need to flip, but it's relatively easy to use.

    4.       The entry point is 10GbE, but obviously the benefits increase as you go to higher speeds, since the delta over traditional TCP grows. It also interoperates, so you can run standard TCP and RoCE over the same network at the same time, which drives a single network and reduces cost - especially as a single network could potentially mean no FC.

    5.       With Mellanox the network is agnostic, and so is the speed (it's called VPI, Virtual Protocol Interconnect), so you can run RDMA/RoCE on both IB and Ethernet at up to 56Gb/56GbE. Yes, I know 56GbE is proprietary, but it's in the product set, so you might as well use the extra 40% horsepower.

    6.       You can also use commodity hardware - take a look at PixitMedia; their solution is built entirely on commodity hardware components and can scale to 30 simultaneous streams of 25fps 16-bit 4K (4x3) EXR. Sure, you won't need this storage horsepower for render, but it does scale out linearly and cost-effectively. Having a single namespace for the lot makes things a lot easier for people.

     

    In my opinion, go with what you need. If you are building out a render farm that doesn't have or need high network IO, go with 10GbE and use RoCE - this will give you CPU cycles back for no extra cost.  If your render does have high network IO, go with something more powerful. It's all workload dependent. Horses for courses, as they say!
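    Point 3 in practice: on a Linux client of this era, NFS over RDMA is a mount option rather than a separate stack. A minimal sketch - the server name and export path are made up, and it assumes the distro ships the xprtrdma module:

    ```shell
    # Load the NFS/RDMA client transport and mount an export over RDMA.
    # 20049 is the port conventionally used for NFS-RDMA.
    modprobe xprtrdma
    mount -t nfs -o rdma,port=20049 nas01:/export/renders /mnt/renders

    # Confirm which transport the mount is actually using
    grep rdma /proc/mounts
    ```

    The same export can simultaneously be served over plain TCP to other clients, which is what makes the mixed renderfarm/workstation setup discussed above possible.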

     

    Rich.

     

    PS. Tin-hat and flame suit are on. :-)

     

    From: studiosysadmins-discuss-bounces@studiosysadmins.com [mailto:studiosysadmins-discuss-bounces@studiosysadmins.com] On Behalf Of Nick Allevato
    Sent: 22 September 2014 05:10
    To: studiosysadmins-discuss@studiosysadmins.com
    Subject: Re: [SSA-Discuss] iWARP RDMA Solution for Superfast NFS-RDMA Rendering

     

    This seems like a heated topic already; I like it.

     

    I think I had a dirty dream about this once.

     

    Networked CPUs and RAM.

     

    It's like Open Compute 56.0

     

    Seems epic (and newish)

     

    -nick

     

     

    On Sun, Sep 21, 2014 at 9:40 AM, Saker Klippsten <sakerk@gmail.com> wrote:

    This is the next big thing guys. 

     

    Yes, the goal is to speed things up and also scale them up while maintaining that speed, all at a cost the project can handle so that you end up with a profit, not a loss - which is the opposite of where most projects these days seem to be trending. 

     

    What is the renderer / OS that you are testing? Software plays the largest role in overall render time.

     

    It's one thing to transfer the dependencies around a cluster at X speed, but there is still X amount of raw compute time to render the frame. If you are chopping a frame up into buckets and have 10 nodes render a single frame, there is going to be a speed advantage for the lookdev stage. If you are going to render a sequence, it's going to take just about the same time. 

     

    If I can use a GPU renderer with say 8 of them in a chassis, I don't have to worry about scaling as much, and even then the cost to transfer dependencies is small until the raw compute time drops below that of the network IO, which it very rarely does. Right now the CPU/GPU takes longer than transferring the scene file and textures for most projects. Any decent 10Gb network should be fine. 

     

    IMO you should be speeding up the render for GPU and CPU and looking at power costs for that giant cluster ;) 

     

    -s

     

    My 2 cents


    On Sep 21, 2014, at 8:57 AM, content@studiosysadmins.com wrote:

    Yes, Mellanox vs. Chelsio, and at 56G. To reach 56G one must use InfiniBand and a certain set of adapters. That is the core concept behind really, really big render-time savings. One could go over Ethernet in Windows 2012 or Linux Debian/Red Hat, but you need to be very sure the PCIe cards support it. This is the next big thing, guys. I guess many of the ramp-ups across this nation and overseas (hardware procurement) probably did not take anything like this into account. Too bad for those responsible sysadmins..... ;) Because it looks like this stuff is speeding things up quite a bit. 

    You can go 10G if you buy the right PCIe adapter cards. But it's also a BIOS thing, as I have been reading up. Luckily we have ramped up over 800,000 USD in the newest HP hardware here, so I believe the Z820's and Z840's BIOS supports this next-gen technology. There are only a handful of IB cards and they are new, so don't buy used equipment for these tests; there are also only a handful of Ethernet 10G adapter cards ready and waiting.

    In my dealings with renderfarms, the golden rule is to speed things up, right? Well, if you can bring render times down from say 90 mins/fr to 9 mins/fr, you are doing your job.

    FYI - also try and always build your own storage too ;) That's the easy part (and the cheaper part).

     

    Jorg Mohnen, M.Sc. MBA

     





     

    --

    Nicolas Allevato, Ops, 5th Kind



    Thread Tags:
      discuss-at-studiosysadmins 

    0 Responses     0 Plus One's     0 Comments  
     


    0 0
  • 09/23/14--10:30: TiVo type dvr for hdmi
  • TiVo type dvr for hdmi
    posted by Michael Oliver on Sept. 23, 2014, 1:30 p.m.

    Looking to add a TiVo-type experience to a mocap stage. Anyone have a recommendation on a piece of hardware? A couple of requirements:

    - HDMI signal in and out.
    - Ability to record/pause the feed, rewind, and fast-forward to live. Wireless remote preferred.
    - Ability to export clips (H.264 preferred) to a computer.

    Looking for something in the hardware arena... not driven by software on a host PC.

    Been digging but coming up short.

    Michael Oliver
    mcoliver@gmail.com
    858.336.1438

    Thread Tags:
      discuss-at-studiosysadmins 

    0 Responses     0 Plus One's     0 Comments  
     



    0 0

    Anyone Have Experience of GB Labs space storage ?
    posted by Dougal Matthews on Sept. 24, 2014, 4:35 a.m.

    hi all

     

    Does anyone have any experience of "Space" storage systems from GB Labs?

    On paper they look like they deliver a lot of speed for the price, but I'm keen to hear pros/cons and real-world experience.

    How well do they scale if you have more than one? etc.

    Thanks in advance for your time in responding.

    cheers Doug

    Thread Tags:
      storage 

    0 Responses     0 Plus One's     0 Comments  
     



    0 0

    New file server: advice on eliminating file locking
    posted by Fredrik Averpil on Sept. 24, 2014, 4:36 a.m.

    Hi,

    We have a file server today which we are going to throw out the window, so I'm currently investigating which route to take for a new system, and I would be grateful for any advice I can get.

    Right now we have a mixed OS environment (Windows 7, CentOS 6) and we have some issues with file locking. I'm not entirely sure whether it is the file server that is doing the locking, or software such as Nuke, or perhaps both. Is there any way at all of completely eliminating file locking on the share served by the file server?

    I'm hearing a lot of good things about OpenSolaris/ZFS and I'm downloading v11.2 now to set up a test file server. Do you have any general dos or don'ts I should take into consideration while setting it up for a general Maya/Nuke pipeline?
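    One place file locking commonly originates in a mixed Windows/Linux setup is SMB opportunistic locks (oplocks) granted by the server, independent of anything Nuke does. If the Windows 7 clients will mount the new box via Samba, oplocks can be disabled per share - a sketch, with a made-up share name and path:

    ```
    [projects]
        path = /tank/projects
        # Refuse oplocks so Windows clients never hold cached write leases
        # on files that Linux hosts are reading at the same time.
        oplocks = no
        level2 oplocks = no
        # Don't translate locks into kernel-level leases either.
        kernel oplocks = no
    ```

    This trades some Windows-side caching performance for predictable cross-platform access, which is usually the right trade in a render pipeline.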

     

    Regards,

    Fredrik

    Thread Tags:
      server 

    0 Responses     0 Plus One's     1 Comments  
     

    Hi,

    We have a file server today which we are going to throw out the window. So I'm right now investigating which route to take in terms of setting up a new system and I would be grateful for any advice I can get.

    Right now, we have a mixed OS environment (Windows 7, CentOS 6) and we have some issues with file locking. I'm not entirely sure if it is the file server that is doing the locking or if it is software such as Nuke, or perhaps both. IS there at all any way of completely eliminating file locking on the share which is served by the file server?

    I'm hearing a lot of good things of OpenSolaris/ZFS and I'm downloading v 11.2 now to set up a test file server. Do you have any general dos or don'ts for me which I should take into consideration while setting it up for a general Maya/Nuke pipeline?

     

    Regards,

    Fredrik


    0 0

    RV on linux - what do you do for audio?
    posted by Greg Whynott on Sept. 24, 2014, 10:45 a.m.
    Anyone running Tweak RV on linux?

    I ran into a problem where ALSA can only play one stream (it doesn't want to share the device amongst processes): if you open a review/video window it plays audio fine, but subsequently opened players have no audio.

    The manual suggests using PulseAudio or JACK. We have had issues with PulseAudio using large amounts of CPU and becoming non-responsive. It goes on to say to consider using JACK with PulseAudio.

    Anyway, I was just wondering what you decided on using, if you do use Tweak RV on Linux.
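    If staying on plain ALSA, one workaround worth trying (my suggestion, not from the RV manual) is the dmix plugin, which mixes multiple streams in software so several players can share one device without PulseAudio. A sketch of an ~/.asoundrc - the card numbers and rate are assumptions to adjust for the actual hardware:

    ```
    # Route the default PCM through dmix so multiple processes can play at once.
    pcm.!default {
        type plug
        slave.pcm "dmixer"
    }

    pcm.dmixer {
        type dmix
        ipc_key 1024          # any unique integer per card
        slave {
            pcm "hw:0,0"      # first card, first device
            rate 48000
        }
    }

    ctl.!default {
        type hw
        card 0
    }
    ```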

    -g


    Thread Tags:
      discuss-at-studiosysadmins 

    0 Responses     0 Plus One's     0 Comments  
     

