Latest Message Board Threads at StudioSysAdmins



    Apple blocks Linux on new hardware
    posted by Greg Whynott on Nov. 12, 2018, 5:30 p.m.
    Adobe is mixed into this somehow, I can feel it. lol

    Apple has made a conscious decision to prevent consumers from installing operating systems of their choosing onto its newer hardware. As a hardware vendor, why would you want that? When my conspiracy hat is on, the voices in my head say "they can't collect personal info or money from you if Linux is installed, silly..."



    -g






    Thread Tags:
      discuss-at-studiosysadmins 

    0 Responses     0 Plus One's     0 Comments  
     


    Help, can't change SSA password
    posted by Bruce Dobrin on Nov. 14, 2018, 5:50 p.m.

    I click the password change link and nothing arrives in my email. I can no longer find an admin contact for this sort of problem on the SSA site.

     

    -Bruce

     

    Thread Tags:
      discuss-at-studiosysadmins 

    0 Responses     0 Plus One's     0 Comments  
     




    The evolution of traditional drives, SAS, SATA, to SSD, M.2, NVMe: what is next ...
    posted by Ergi on Nov. 25, 2018, 12:27 p.m.
    Over the last two years I have seen a lot of storage vendors pushing NVMe upgrades or explicitly offering NVMe storage; it appears that NVMe, even in the ZFS high-performance NAS servers built for media and entertainment, is becoming the norm these days.

    We are living in an era of technology advancements: new hard drives, controllers, network cards and CPUs become faster, bigger and lower latency every day. That is all good, and companies are integrating them into their workflows, at least the companies that can afford it. What I have seen in the real world is that such solutions are still not enough for an average M&E production. Let me explain my thoughts.

    If you spec out a full NVMe solution, it is probably in the $200k+ range, and you may get 40TB usable. The system will be super fast and will have no issue keeping up with the IOPS and bandwidth of your ever-demanding workflow. Now that you have the system in, you have to figure out how to deliver that performance to the workstation: find network cards that work with your OS and have compatible drivers, then use the available open source protocols like NFS and SMB and figure out how to optimize them (a sketch follows below), or write your own agent/protocol to actually take advantage of that performance. OK, well, some of you have that figured out; now you have to see how your NLE will utilize such performance, and how you maintain that setup through the inevitable upgrades of OS versions and application versions.
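    On that protocol-tuning point, the post does not say which knobs matter; as one illustration (my assumption, not from the original), NFS client tuning for large media streams often starts with mount options. A minimal sketch, assuming a Linux edit bay and a hypothetical export zfs-600:/tank/media:

        # Hypothetical example: request 1 MiB transfers for streaming media over NFSv3.
        # rsize/wsize are negotiated with the server, so these act as upper bounds.
        mount -t nfs -o vers=3,tcp,hard,rsize=1048576,wsize=1048576 \
            zfs-600:/tank/media /mnt/media

    Whether NFS or SMB wins for a given NLE is workflow-specific; the point is only that client-side transfer size is one of the first things to check.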

     
     
    What if I told you that there is another way to get faster performance than NVMe flash arrays, at a more affordable cost?
     
    Well, let's get a bit technical now. What is the fastest piece of hardware for accessing data in a computer? Is it the hard drive? Is it the CPU? Is it the bus (PCI slots)? Is it the RAM?
    And which one is the M&E industry always demanding more of? Of course all of those matter in a traditional server, but how do you get that performance? One piece of hardware that I have seen under-utilized by traditional storage vendors is RAM. Some hardware manufacturers offer controllers that do some caching, or have some kind of caching mechanism built in, but not large enough to make a significant difference. They usually do not go into much detail about how it works; you have to dig that up yourself.
     
    RAM is the fastest way to access data: large data, media and entertainment data. Load that uncompressed movie into RAM; an average server these days can easily take 1TB of RAM. The challenge is how to tell the filesystem and the OS to load that data into RAM. I have been working with Solaris ZFS for the last 5 years, and no other OS/filesystem does it better or more efficiently. ZFS keeps the most frequently accessed data in RAM, in the "ARC"; low-latency DDR4 ECC at 2666MHz is what gets utilized here. When RAM is full, the system spills data into the L2ARC, the second-level read cache, and that is where fast NVMe drives come in handy. We used two 4TB Intel NVMe drives, striped, and get 8TB of read cache. I can tell you it is quite amazing to see both RAM and read cache maxed out on a production day when you have 60 edit bays hitting the server. Let's do some quick math: 1TB of data at RAM speed, plus 8TB across two NVMe drives on PCI slots. I would say that is fast enough for a traditional production workflow, with 9TB worth of media files ready to serve at any given time. I have posted some benchmarks, run from the CLI on a real production server, below so you can see some realistic numbers.
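    For anyone who wants to try this, the post does not show the setup commands; here is a minimal sketch of adding NVMe read cache to a ZFS pool, with hypothetical pool and device names (tank, nvme0n1, nvme1n1):

        # Attach two NVMe devices to the pool as L2ARC read cache.
        zpool add tank cache /dev/nvme0n1 /dev/nvme1n1

        # Cache devices now appear under a "cache" section in the pool layout.
        zpool status tank

        # Watch ARC (RAM) and L2ARC hit rates while the edit bays are busy.
        arcstat 5

    Note that ZFS treats cache devices as expendable: if one dies, reads simply fall back to the pool, which is why unprotected striping across them is acceptable here.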
     
    Moreover, all the editing, compositing, finishing and motion graphics applications being developed today are thirsty for RAM. You can see it in your system resources if you open 4K media in your timeline: the application process consumes nearly all of the available RAM. They do that to load the data into RAM for smoother playback. It does not matter how much RAM you install, 64GB, 128GB, 256GB; any modern release will use it, assuming you have a heavy 4K timeline.
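    If you want to watch this happen (my addition, not from the post), stock Linux tools are enough; the process name is whatever your NLE runs as:

        # Overall memory picture, refreshed every 5 seconds.
        free -h -s 5

        # Resident memory per process, biggest consumers first.
        ps -eo pid,rss,comm --sort=-rss | head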
     
     
     
    Here are some performance stats that we got from our latest server, the ZFS-600, which we just put into production. These are results from running CLI commands, not theoretical or cumulative bandwidth figures taken from product data sheets.
     
    RAM sustained bandwidth: 197GB/sec
    Command-line volume test with dd: 7.3GB/sec
    Command-line multithreaded test with iozone: 28GB/sec
    (Upper-case B means bytes, not bits.)
    The full data sheet and server specs for the ZFS-600 can be found here: http://ittemple.com/products/
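    The exact invocations are not given in the post; for anyone who wants to run comparable numbers, here is a minimal sketch of the kind of dd and iozone commands involved, with hypothetical paths and sizes:

        # Sequential write of 10 GiB in 1 MiB blocks; fdatasync makes dd wait
        # for the data to reach stable storage. Note: zeros compress, which can
        # flatter results on a ZFS dataset with compression enabled.
        dd if=/dev/zero of=/tank/media/testfile bs=1M count=10240 conv=fdatasync

        # iozone throughput mode: 16 threads, 1 GiB per file, 1 MiB records,
        # write (-i 0) and read (-i 1) phases only.
        iozone -t 16 -s 1g -r 1m -i 0 -i 1 -F /tank/media/f{1..16}

    Numbers from commands like these depend heavily on caching; a 28GB/sec iozone figure on a machine with 197GB/sec RAM bandwidth is plausibly being served mostly from ARC, which is exactly the effect the post is describing.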
     
    Thread Tags:
      fasterthannvme zfs ram fastestnas solaris editorialserver highperfomancenas 4k workflow vfx server 

    0 Responses     0 Plus One's     0 Comments  
     
     


    [Hiring] Technology Lead @ Toronto
    posted by Tham Nguyen on Dec. 3, 2018, 11:47 a.m.

    Hello All,

     

    We have a very exciting Technology Lead role at Rocket Science VFX, based in Toronto.

    If you are interested or if you know anyone who wants to explore further, feel free to reach out to me at tnguyen@rsvfx.com.

     

    Here are the job details for your convenience:

    https://rocketsciencevfx.applytojob.com/apply/IHRc4nmidY/Technical-Lead

     

    Thanks for reading my post!

     

    Tham

    Thread Tags:
      discuss-at-studiosysadmins 

    0 Responses     0 Plus One's     0 Comments  
     




    Mari, CentOS 7, Nvidia and Xorg crashes?
    posted by James Pearson on Dec. 6, 2018, 10:15 a.m.
    Just wondering if anyone else that happens to be running Mari on CentOS 7 with Nvidia graphics cards has had problems with Xorg SEGV'ing? It only appears to happen to some users that are using Mari (v3.x). The crashes happen with different cards (e.g. Quadro P4000, TITAN X) and all recent Nvidia driver versions (390.xx, 410.xx and 415.xx). Xorg.0.log output is below.

    We are using Mate (v1.16 from EPEL) as the desktop window manager. I have a bug report open with Nvidia, but they can not reproduce the problem ... so I'm wondering if this might be something we're doing 'wrong' somewhere. It would be useful to know if anyone else might be seeing something like this?

    Thanks

    James Pearson

    Xorg.0.log:

        [ 2950.322] (EE)
        [ 2950.322] (EE) Backtrace:
        [ 2950.322] (EE) 0: /usr/bin/X (xorg_backtrace+0x55) [0x56087a505645]
        [ 2950.322] (EE) 1: /usr/bin/X (0x56087a357000+0x1b23d9) [0x56087a5093d9]
        [ 2950.322] (EE) 2: /lib64/libpthread.so.0 (0x7fcfb8bae000+0xf6d0) [0x7fcfb8bbd6d0]
        [ 2950.322] (EE) 3: /usr/lib64/xorg/modules/drivers/nvidia_drv.so (0x7fcfb4837000+0xf9eac) [0x7fcfb4930eac]
        [ 2950.322] (EE) 4: /usr/lib64/xorg/modules/drivers/nvidia_drv.so (0x7fcfb4837000+0xdcb12) [0x7fcfb4913b12]
        [ 2950.322] (EE) 5: /usr/lib64/xorg/modules/drivers/nvidia_drv.so (0x7fcfb4837000+0x5142ba) [0x7fcfb4d4b2ba]
        [ 2950.322] (EE)
        [ 2950.322] (EE) Segmentation fault at address 0x333d
        [ 2950.322] (EE) Fatal server error:
        [ 2950.322] (EE) Caught signal 11 (Segmentation fault). Server aborting
        [ 2950.322] (EE)
        [ 2950.322] (EE) Please consult the The X.Org Foundation support at http://wiki.x.org for help.
        [ 2950.322] (EE) Please also check the log file at "/var/log/Xorg.0.log" for additional information.
        [ 2950.322] (EE)
        [ 2951.230] (EE) Server terminated with error (1). Closing log file.
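    One thing that can help when the vendor cannot reproduce (my suggestion, not from the thread): send Nvidia their own diagnostic bundle captured right after a crash, and pin down the exact driver build, since the backtrace offsets into nvidia_drv.so are only meaningful against a specific build:

        # Standard diagnostic bundle Nvidia requests with driver bug reports.
        sudo nvidia-bug-report.sh

        # Exact kernel-module driver build that produced the crash.
        cat /proc/driver/nvidia/version

    Attaching both to each new occurrence in the open bug report narrows things down considerably.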
    Thread Tags:
      discuss-at-studiosysadmins 

    0 Responses     0 Plus One's     0 Comments  
     
