Channel: StudioSysAdmins Message Board

SSA Sydney Social w/Avere Systems - 6:30pm Thursday 12th February 2015 @ Flying Bark Productions

posted by Kim Pearce on Jan. 15, 2015, 5:30 p.m. (1 day ago)
First event of 2015 for us here in Sydney!

StudioSysAdmins, Avere Systems, StormFX and Flying Bark would like to invite you to Flying Bark Productions, 62-68 Church Street, Camperdown, NSW 2050, on Thursday 12th February from 6:30pm to 9:30pm for an evening of socialising with local SSA colleagues, with libations and snacks.

Jim Thompson, Avere's Worldwide Senior Systems Engineer for Media & Entertainment, will give a brief talk about Avere's on-prem optimisation and aggregation technology. He will conclude his presentation with an introduction to Avere's cloud storage and cloud compute solutions. Please join us for a fun and informative evening.

Please register for this event at http://www.studiosysadmins.com/events/view/66/

Location:
Flying Bark Productions
62-68 Church Street
Camperdown

Date and Time:
6:30pm Thursday 12th February 2015
Thread Tags:
  discuss-at-studiosysadmins 


Exclude shotgun from reports?

posted by Greg Whynott on Jan. 15, 2015, 5:40 p.m. (1 day ago)

10 minutes before I leave to go home I start thinking about this.... sure to cause lost sleep too..


anyway....

We've had a system in place for years which monitors uploads to the internet via NetFlow data. Whenever the daily upload limit is reached we get an email alerting us to a 'large upload daily limit reached'.

Then we started using Shotgun online, and the reporting system started spamming us... I saw that many of the connections were to Amazon, going to the Shotgun servers. So I excluded that range. A week or so later Shotgun was using other IP blocks. So I wrote them a letter explaining what I was attempting to do and asking "could you provide a list of IP ranges we can expect you to use", and the answer was "no, Amazon takes care of that, we don't know what they would be from day to day". Which in all honesty sounds like a crock of dung, as that is not how it typically works. What if someone wants to reach your servers via IP? Anyway, maybe it is true; I'm not up to speed with how Amazon is running things.

So this is my new challenge. Any ideas? I'm thinking of having a script that does a lookup on our Shotgun address every hour and excludes those IP blocks from the reports each day... thinking there must be something similar out there already... not too keen on setting up an HTTPS proxy, but maybe some wire-snarfing routine...
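
A minimal sketch of that kind of hourly lookup (the hostname and exclusion-file path are placeholders, and a DNS lookup only sees whatever A records are being handed out at that moment):

#!/usr/bin/env python
# Rough sketch only: resolve our Shotgun hostname on a schedule (cron, hourly)
# and keep an exclusion file of /24 blocks for the NetFlow report to ignore.
# SHOTGUN_HOST and EXCLUDE_FILE are placeholders for whatever your site uses.
import socket
import ipaddress

SHOTGUN_HOST = "ourstudio.shotgunstudio.com"
EXCLUDE_FILE = "/etc/netflow/shotgun_exclude.txt"

def current_blocks(hostname):
    # collapse each A record into its /24 so small shuffles inside a block
    # don't generate new entries every hour
    blocks = set()
    for info in socket.getaddrinfo(hostname, 443, socket.AF_INET):
        addr = info[4][0]
        blocks.add(str(ipaddress.ip_network(addr + "/24", strict=False)))
    return blocks

def update_exclusions():
    try:
        with open(EXCLUDE_FILE) as f:
            known = set(line.strip() for line in f if line.strip())
    except IOError:
        known = set()
    new = current_blocks(SHOTGUN_HOST) - known
    if new:
        with open(EXCLUDE_FILE, "a") as f:
            for net in sorted(new):
                f.write(net + "\n")
    return new

if __name__ == "__main__":
    added = update_exclusions()
    if added:
        print("new Shotgun blocks added to exclusions: " + ", ".join(sorted(added)))

Run from cron hourly; the reporting side then only has to honour whatever is in the exclusion file.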

I'm sure I'll wake up at 3 AM with an "a-ha" solution; that's the way it usually works...

-g



Thread Tags:
  discuss-at-studiosysadmins 

Oracle and Studio Sys Admins

posted by Rob Giltrap on Jan. 15, 2015, 8:29 p.m. (1 day ago)

Hello SSA members!

Just wanted to say a quick hello and explain why you're starting to see Oracle turning up all over the SSA site. But first an introduction...

I'm Rob Giltrap and for a job I predominantly do server & storage technical pre-sales for Oracle, based out of Wellington, New Zealand. I love the challenge of big compute performance problems and if you google me, you'll see that I had some success in proving large Mersenne primes back in the day.

Working with VFX companies is right up my alley as the challenges are huge and the goal posts are constantly moving. It helps when you've got a great portfolio of products to help address those challenges.

Oracle are stepping up their game in the Media & Entertainment industry. We've recently acquired Front Porch Digital, with great MAM products like DIVArchive which perfectly complements our StorageTek tape libraries, ZFS Storage Appliance and FS1 Flash Storage.

We've also just become Platinum Sponsors of SSA which I'm really pleased about and I've already been to my first SSA event in Sydney late last year!

If you're from a production studio, or you're a production consultant, please feel free to give me a bell and I can tell you about our products or put you in contact with one of my peers around the globe to give you a hand. If you're from a reseller that doesn't have an existing relationship with Oracle, please also reach out to me and I'll put you in touch with the right people.

Cheers, Rob.

Thread Tags:
  storage, tape, nas, san, mam 

OFF TOPIC - funny fridays

JPEG 2K sequence player

posted by  on Jan. 16, 2015, 1:05 p.m. (1 day ago)
Hi All,

Does anyone have a preferred player for playing back JPEG 2K image sequences? Thanks!

-louai
Thread Tags:
  discuss-at-studiosysadmins 


Charles Poynton webinar on ACES 1.0 & Scene-linear workflow - Jan 21 & 22

posted by Tom Burns on Jan. 17, 2015, 12:45 p.m. (1 day ago)

I'm forwarding this email from Charles to the list – if you've ever attended one of these webinars they are well worth the time and $$. /Tom

---------------------------

Colleagues -

 

On Wednesday and Thursday next week, I'll present a four-hour webinar: ACES 1.0 & Scene-linear workflow. 

 

Historically, cinema production and post-production have been based upon the conceptual model of film acquisition, where the image coding scheme incorporates the technical parameters of film – in particular, the S-shaped tone response and the colour crosstalk. Such coding (e.g., Cineon/"DPX") made CGI and VFX difficult. Modern techniques acquire "scene-linear" data – that is, image data is linearly related to the scene elements. Any required "look" is imposed during the DI process. The technique has been under development and refinement at the Academy for several years. It has been released at the 1.0 level, and is likely to see quite wide commercial deployment in the next year.

 

In this course, I will discuss the technical and visual requirements for acquisition and processing using the scene-linear model, and its log codings ACESproxy and ACEScc. I will introduce the basic technical parameters of various camera encodings. I will explain the conceptual and technical differences between power-function based video coding and log coding, and I'll describe the associated dynamic range and noise properties. I will outline how picture rendering must be imposed in the DI pipeline (for example, by the AMPAS reference rendering transform, RRT) and I will explain how CGI/VFX can be integrated into the process. I will describe how IDTs can be computed, and along the way I'll explain why camera filter/sensor combinations do not impose any colour gamut limitation. (Gamut may be limited by signal processing, but not by optics.)
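
(A rough illustration of that distinction: a generic power-function encode next to a generic log encode of scene-linear values. The constants are arbitrary, not the ACEScc/ACESproxy parameters from the spec.)

# Illustrative only: generic power-function ("gamma") coding versus generic
# log coding of scene-linear values.  Constants are arbitrary, not ACEScc.
import math

def gamma_encode(lin, gamma=2.4):
    # power-function coding: code values allotted per stop vary across the range
    return max(lin, 0.0) ** (1.0 / gamma)

def log_encode(lin, per_stop=0.08, mid_grey=0.18):
    # log coding: every stop of scene exposure gets the same share of code values
    return 0.5 + per_stop * math.log(max(lin, 1e-6) / mid_grey, 2)

# doubling scene exposure moves the log code by a constant step,
# but moves the gamma code by a varying amount
for lin in (0.045, 0.09, 0.18, 0.36, 0.72):
    print("lin %.3f  gamma %.4f  log %.4f" % (lin, gamma_encode(lin), log_encode(lin)))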

 

I'll provide course note handouts. In the webinar, you'll hear me but you won't see me. Instead, you'll see my coloured pens scrawling over viewgraph and handout material. Questions and discussion will be encouraged.

 

The webinar has two, two-hour sessions:

 

    Wed. Jan. 21, 2015, 13:00–15:00 EST, and

    Thu. Jan. 22, 2015, 13:00–15:00 EST.

 

The registration fee is USD 200. You can register at this URL:

 

    <https://attendee.gototraining.com/r/8445886838026018562>

 

After registering you will receive a confirmation email containing information about joining the sessions.

 

Should you wish to receive no further messages from me, please drop me an email message saying so.

 

- C

 

Charles Poynton

<http://www.poynton.com/>

<mailto:charles@poynton.com>

<skype:cpoynton>

<twitter:@momaku>

+1 416 535 7187

 

 

Thread Tags:
  discuss-at-studiosysadmins 

Charles Poynton webinar on ACES 1.0 & Scene-linear workflow - Jan 21 & 22

posted by Jathavan Sriram on Jan. 17, 2015, 1:50 p.m. (1 day ago)
Thank you Tom. Yes, they are really worth it. Charles is one of the few people in our industry with a very, very deep knowledge of colour science. Just signed up.

Thread Tags:
  discuss-at-studiosysadmins 


Framestore hiring.

posted by Steve MacPherson on Jan. 19, 2015, 4:50 a.m. (2 days ago)
Hi all,

We have immediate openings for Systems Engineers.

In LA, we're looking for someone to bring a video engineering background to our Systems team. Montreal and London are more traditional Systems roles. We're privately owned, which in practice means the people who own the company are people you work with. In my experience, one of the things about Framestore is that we can move fairly fast on good ideas, and it's this entrepreneurial aspect which makes Framestore an interesting and challenging place.

Commercials, LA:
https://ldd.tbe.taleo.net/ldd01/ats/careers/requisition.jsp?org=FRAMESTORE&cws=39&rid=253
Cheers,
-s

--
Framestore | Steve MacPherson | CTO
London - New York - Los Angeles - Montréal

19-23 Wells Street, London, W1T 3PQ
T: +44 (0) 20 7344 8000

framestore.com

Thread Tags:
  discuss-at-studiosysadmins 


BlueArc CNS vs. HDS CNS

posted by Dan Young on Jan. 19, 2015, 3:35 p.m. (2 days ago)
Hello everyone,

Wondering if you and your facility are currently using Hitachi's CNS links at any level of your organization, and how you are using them.

Currently, on our old(er) Mercury, we have 4200 CNS links and they work wonderfully, such that when we purchased our new 4080 from HDS we had (potentially foolishly) assumed we would be able to utilize CNS links in the same fashion as before. Naturally, this isn't the case. There is now a hard limit of 520 CNS links for these HNAS units, which, for better or for worse, is about to cost us quite a bit of development in our facility. Our options seem to be:

- Re-write our pipeline utilities, and the heart of the pipeline, to wean us off CNS links (lots of work)
- Lean on HDS for a custom version of HNAS that removes the hard limit on CNS links (least work, but potentially unstable, as mentioned by Hitachi)
- Go with an interim solution like Avere to buy us time to re-dev the pipeline, and have it preserve our linkfarm

The reason we really require these is that we have 2 tiers, each with 8 filesystems on them. If one filesystem is used up, or nearly used, we can migrate shots, whole shows, et al. from filesystem to filesystem, tier to tier, without the operators/creatives/end users seeing a difference in pathing. As a lovely added bonus, it's not an OS symlink, so crappier software does not attempt to "resolve" the paths that it gives out.

Anyone out there hitting this limit? Is anyone out there working around it in a more clever fashion than we are here? Has anyone got a limitless CNS link version of HNAS running in their facility?

Thanks everyone, good to be back in the production saddle.

Cheers,
DY (now with more California)
Thread Tags:
  discuss-at-studiosysadmins 


Spring cleaning

posted by Matthew Smyj on Jan. 21, 2015, 2:50 p.m. (1 day ago)

Our facilities manager, awesome fellow that he is, has some crew hours available for us to use on activities our systems team hasn't gotten around to. One of the things that needs addressing is vacuuming the intakes of the desktop workstations on the floor, and possibly the innards of any idle machines.

So my questions for you my comrades are...

1.) Recommended vacuum model?

2.) What are your experiences/suggestions on the process, if any?

Any input appreciated and thanks in advance!

-Matt Smyj

Systems Admin, Tippett Studio

 


Thread Tags:
  workstation, hardware 


Dreamworks / PDI shut down


Deduping redundant data to save space from redundant files

posted by Will Rosecrans on Jan. 23, 2015, 8 p.m. (1 day ago)
So, we all hate duplicate data hogging all the bits. I know some of you have deployed awesome commercial solutions, or possibly in-house stuff. Anybody else using rdfind?

http://rdfind.pauldreik.se/

I recently discovered it, and it certainly seems to work at least as well as anything similar I could bang out in my spare time. It flagged about 400 GB for me that could be cleaned up, so for a free utility it certainly seems to be worth the price. I don't know why I didn't run across it sooner. Surely I am late to the game on this one?

The format it outputs is a little odd, and not directly super useful, so I wrote a little Python script to do some of the work for me. If anybody else is using rdfind, maybe you'll find it useful if you haven't already put something together:
I'll probably improve on it at some point, but first I have a bunch of deleting to do! (And probably a cron job to set up for the weekends...)
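
A minimal sketch of that kind of helper, assuming the results.txt columns are duptype, id, depth, size, device, inode, priority and name (check the header comment in your own file before trusting the numbers):

#!/usr/bin/env python
# Sketch of a results.txt summariser for rdfind.  Assumes the columns are
# "duptype id depth size device inode priority name"; verify against the
# header comment your rdfind version writes before deleting anything.
import sys

def parse_results(path):
    reclaimable = 0
    duplicates = []
    with open(path) as f:
        for line in f:
            if not line.strip() or line.startswith("#"):
                continue
            fields = line.rstrip("\n").split(None, 7)
            duptype, size, name = fields[0], int(fields[3]), fields[7]
            if duptype != "DUPTYPE_FIRST_OCCURRENCE":
                # anything that isn't the first occurrence is a removal candidate
                reclaimable += size
                duplicates.append((size, name))
    return reclaimable, duplicates

if __name__ == "__main__":
    total, dupes = parse_results(sys.argv[1] if len(sys.argv) > 1 else "results.txt")
    for size, name in sorted(dupes, reverse=True)[:20]:
        print("%14d  %s" % (size, name))
    print("reclaimable: %.1f GB across %d duplicates" % (total / 1e9, len(dupes)))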

I'll try to be more trivial the next time I post on a Friday. Sorry about that.
Thread Tags:
  discuss-at-studiosysadmins 


Deduping redundant data to save space from redundant files

posted by Greg Whynott on Jan. 24, 2015, 6:25 p.m. (3 days ago)
Nice little find. I'm going to run it against our file servers next week.

I think there are only a very few use cases where the amount of duplicate data justifies the cost of some of the dedup solutions out there. The trade-offs with 'built-in' solutions on storage products make them not worth using most of the time.

I've heard horror stories from almost every implementation of dedup services running on storage systems; it just seems like one more thing that can break, and every year data sets get larger, making dedup'n more challenging.

It was a cool idea in the era of 900 gig drives, but for the aforementioned reasons I don't think I'll ever run dedup processes on my file server again. I'll consider buying some 8TB drives for a few hundred dollars instead of spending 50-200k.

-g




Thread Tags:
  discuss-at-studiosysadmins 


Deduping redundant data to save space from redundant files

posted by Brian Krusic on Jan. 24, 2015, 6:50 p.m. (3 days ago)
Well, I don't mind duplicate data; it's a sort of very basic copy in case things go south.

Besides, disk space is getting cheaper.

We're running ZFS on almost all servers and I keep dedup off.

- Brian

Sent from my stupid POS iPhone on the craptaculous AT&T network.

On Jan 24, 2015, at 3:35 PM, Jathavan Sriram <sriram@harvest-postproduction.com> wrote:

Like most simple solutions, rdfind is file-based and runs checksums on the entire file at some point. This might be fun to run against 100 TB of production data ;)

I wonder how much better the grown-up solutions are that actually compare chunks of data on the filesystem - something like the Deduplication Tech from Quantum? Anyone using something like that?

//ok. we now need to do something stronger. read a few bytes.
  const int nreadtobuffermodes=4;
  Fileinfo::readtobuffermode lasttype=Fileinfo::NOT_DEFINED;
  Fileinfo::readtobuffermode type[nreadtobuffermodes];
  type[0]=Fileinfo::READ_FIRST_BYTES;
  type[1]=Fileinfo::READ_LAST_BYTES;
  type[2]=(usemd5 ? Fileinfo::CREATE_MD5_CHECKSUM : Fileinfo::NOT_DEFINED);
  type[3]=(usesha1 ? Fileinfo::CREATE_SHA1_CHECKSUM : Fileinfo::NOT_DEFINED);

  for (int i=0;i<nreadtobuffermodes;i++){    
    if(type[i]!=Fileinfo::NOT_DEFINED) {
      string description;

      switch(type[i]){
      case Fileinfo::READ_FIRST_BYTES:description="first bytes";break;
      case Fileinfo::READ_LAST_BYTES:description="last bytes";break;
      case Fileinfo::CREATE_MD5_CHECKSUM:description="md5 checksum";break;
      case Fileinfo::CREATE_SHA1_CHECKSUM:description="sha1 checksum";break;
      default:description="--program error!!!---";break;
      }
      cout<<dryruntext<<"Now eliminating candidates based on "<<description<<":";

      cout.flush();
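
A toy sketch of what chunk-level dedup does differently from rdfind's whole-file matching: hash fixed-size blocks rather than whole files, so partially identical files can still share storage (not how ZFS or Quantum's appliances actually implement it, and the file names below are just examples):

# Toy illustration of chunk-level dedup: hash fixed-size blocks rather than
# whole files and count how much space unique blocks would actually need.
# Real systems do this inline, with better hashing and chunking than this.
import hashlib

CHUNK = 128 * 1024  # arbitrary 128 KiB block size

def dedup_estimate(paths):
    seen = set()
    logical = unique = 0
    for path in paths:
        with open(path, "rb") as f:
            while True:
                block = f.read(CHUNK)
                if not block:
                    break
                logical += len(block)
                digest = hashlib.sha1(block).hexdigest()
                if digest not in seen:
                    seen.add(digest)
                    unique += len(block)
    return logical, unique

# example paths: two versions of a render that differ only in a few blocks
logical, unique = dedup_estimate(["beauty_v001.exr", "beauty_v002.exr"])
print("%.2fx dedup ratio" % (float(logical) / max(unique, 1)))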

Thread Tags:
  discuss-at-studiosysadmins 


Isilon backup?

posted by Chris M on Jan. 25, 2015, 2:52 p.m. (2 days ago)

I'm interested in opinions about backup solutions for Isilon OneFS.

We're currently using Bacula over NFS, which was originally chosen because it didn't cost much. But our Isilon cluster and the tapelibs are growing, and I'd like to see what other options there are. The entry-level solution with NDMP support seems to be Backup Exec, but I'm scared of its rather bad reputation. There's also a Bacula fork called Bareos which has NDMP support. And there's EMC NetWorker, which is of course EMC's official recommendation. I've also heard that some big Isilon customers are using Commvault and NetBackup.

What do you use? Are you happy?

Thread Tags:
  discuss-at-studiosysadmins 


Anyone have an old SCSI 2 / 3 controller card with cables in Sydney

posted by James Bourne on Jan. 26, 2015, 11 p.m. (1 day ago)
Hi all,

I need to mount tapes from an old HP Colorado T4000s... I have the tape drive but no SCSI controller. If someone has a PCI one that I could borrow this week that would be awesome, or does anyone know where I could buy such a thing on short notice?

The tape drive has an IDC1 SCSI-1 interface (low-density 50-pin male).

I also have a couple of very old SCSI hard disks (1GB!!) to hook up too, also low-density 50-pin male, and one which is 4GB (!!!) with a high-density 68-pin SCSI interface.

TIA,

James
Thread Tags:
  discuss-at-studiosysadmins 


APC UPS diagnose

posted by Matt Daly on Jan. 27, 2015, 11:45 a.m. (1 day ago)
We have a few APC SUA3000RM2U units, and one of them will not power on at all. Dumb question - gut check - it should normally turn on with no battery installed, correct? Breakers are on, but no lights. Besides the batteries, are there any replaceable parts or fuses inside?
MD
--

MATT DALY
chief scientist//LEVIATHAN
------------------------------------------------

Thread Tags:
  discuss-at-studiosysadmins 


Anyone have an old SCSI 2 / 3 controller card with cables in Sydney

posted by Oliver Timm on Jan. 27, 2015, 2:15 p.m. (1 day ago)
I only have Ultra320 - think that's too new, right?

Oli
Thread Tags:
  discuss-at-studiosysadmins 


GHOST

Render problem slowly spreading across stack...

posted by Jeremy Lang on Jan. 27, 2015, 7:30 p.m. (1 day ago)
Deadline 6.2, 3dsMax 2014.

Machines go from rendering normally to getting stuck at "starting up" status.

Once they start doing that they never seem to render in Max successfully again. It does not seem to be Deadline- or network-related, because they continue to render fine in other applications.

Looking at one right now, last message was:
0: STDOUT: - 16:16:20.123 INFO: End registering loaded plugins

It'll only be a couple of machines out of dozens of identical ones, connected the same way and imaged at the same time.

Anybody run into something like this, or have any ideas?

______________
Jeremy M. Lang
it4vfx
Thread Tags:
  discuss-at-studiosysadmins 
