Trying to Find the Bottleneck of SVN Checkout and Commit Speeds
posted by Sam Edney on Feb. 18, 2017, 12:55 p.m.

Thread Tags: discuss-at-studiosysadmins
I suspect you have already seen this page, but there are a few tips here: http://stackoverflow.com/questions/749337/best-practices-for-a-single-large-svn-project

It seems like separating out the binary data may be the way to go. If it were worth the investment, you could write some wrapper scripts around svn: build a hash of each file and keep the hashes as plain text in your repo, and if a hash changes between checkouts, fetch the new binary file from a more suitable storage location. I suppose this gets away from the usefulness of svn. (There is a rough sketch of the idea at the end of this message.)

Separately, we noticed some huge speed increases when we moved to git. Our repos are much smaller than yours (between 10 and 400 MB), with binary and text mixed. I hear git doesn't play nicely with large binaries either, but it may be worth running a test if there is any chance you could ever switch.

From: William Sandler

Hey fellow SSAs. I was wondering if anyone had any tips on speeding up SVN commits and checkouts. The project I'm testing with is ~10 GB and ~24,000 files, mostly PNG, MAT, and FBX files. The SVN server is bare metal with an E5-2623 v3, 64 GB RAM, and an Intel NVMe SSD. For the OS I've tried both Ubuntu with the latest SVN and Apache, and Windows with the latest VisualSVN Server. The client machines doing the committing and checking out are running TortoiseSVN and have last-generation i5s, 32 GB RAM, and 500 GB Samsung SSDs. The machines are connected over 10Gb Intel NICs. Commits average ~20 MB/s and checkouts average ~50 MB/s. Is this just the nature of single-threaded SVN, or can something be done to speed up this process?

William Sandler
All Things Media, LLC
Office: 201.818.1999 Ext. 158
william.sandler@allthingsmedia.com
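For what it's worth, here is a minimal sketch of the hash-pointer wrapper mentioned above, assuming Python 3 on the clients and a shared binary store mounted somewhere like /mnt/binstore (the store path, script name, and extension list are all placeholders for whatever fits your setup). The idea is that svn only ever versions a small plain-text .sha256 pointer per binary, while the real bytes live in a content-addressed store:

#!/usr/bin/env python3
# binwrap.py (hypothetical): keep large binaries out of svn by versioning
# small pointer files instead, with the real bytes in a shared store.
import hashlib
import shutil
import sys
from pathlib import Path

STORE = Path("/mnt/binstore")          # assumed shared storage location
EXTENSIONS = {".png", ".mat", ".fbx"}  # binary types to divert from svn

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def stash(root: Path) -> None:
    """Before 'svn commit': copy binaries to the store, leave pointers."""
    for path in root.rglob("*"):
        if ".svn" in path.parts or not path.is_file():
            continue  # skip svn's own admin area
        if path.suffix.lower() in EXTENSIONS:
            digest = sha256(path)
            blob = STORE / digest
            if not blob.exists():
                shutil.copy2(path, blob)
            path.with_suffix(path.suffix + ".sha256").write_text(digest + "\n")
            path.unlink()  # svn only ever sees the small pointer file

def restore(root: Path) -> None:
    """After 'svn checkout/update': materialize binaries from pointers."""
    for pointer in root.rglob("*.sha256"):
        if ".svn" in pointer.parts:
            continue
        digest = pointer.read_text().strip()
        target = pointer.with_suffix("")  # drop the .sha256 suffix
        if not target.exists() or sha256(target) != digest:
            shutil.copy2(STORE / digest, target)

if __name__ == "__main__":
    # usage: binwrap.py stash|restore <working-copy-root>
    cmd, root = sys.argv[1], Path(sys.argv[2])
    stash(root) if cmd == "stash" else restore(root)

You would run the stash step before each commit and the restore step after each checkout or update. At that point you are most of the way to reinventing pointer-file tooling by hand, which is really my point above about getting away from the usefulness of svn.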