neilt0 Posted October 28, 2013

Hi,

I had a bit of a brain wave today -- I remembered from somewhere that /tmp/ was a magical, expanding RAM disk, and I figured I could use it to speed up nzbget... in fact I am currently doing so, and holy poop, does it work! But I figured I should post a query here to make sure I don't blow anything up.

Here's my config:

* HP Microserver N54L (2.2GHz dual core)
* 8GB RAM
* 7x 4TB array drives
* 200GB Seagate 7200.2 2.5" cache drive (relatively slow)
* 105 megabit/sec cable internet (soon to be 120-133 megabit/sec)
* SuperNews News Service Provider
* nzbget 12 usenet client (this tweak may also apply to SABnzbd)

The main issue is that with such a fast internet connection, I need 28 NSP connections to utilise it fully, so there are 28 simultaneous writes totalling 12MB/sec to a relatively slow cache drive. The drive can almost keep up, but not quite, and of course you can also get a simultaneous download and par repair, or a simultaneous download and unpack, hitting the drive even harder. I used to use one of the 4TB drives as the cache drive, and it was pretty quick, but I moved that to the array and switched to my spare 7200rpm 2.5" drive to save power and heat, and also so that if it died there couldn't be too much data on it.

The way nzbget works (as do virtually all usenet clients) is that it retrieves the segments of an individual rar to a temp folder (temp). Once that rar is complete, it assembles the segments into an intermediate folder (inter), and when all the rars are done, it unpacks to a destination folder (dst). Like so:

temp
UbuntuAnimalName.part001.rar.001 1.3MB
UbuntuAnimalName.part001.rar.002 1.3MB
UbuntuAnimalName.part001.rar.003 1.3MB
UbuntuAnimalName.part001.rar.004 1.3MB
...

inter
UbuntuAnimalName.part001.rar 100MB
UbuntuAnimalName.part002.rar 100MB
UbuntuAnimalName.part003.rar 100MB
...

dst
UbuntuAnimalName.ISO 4.5GB

I have set dst to be on an array drive, which speeds up unpacking, but the temp folder was on the cache drive and that was really slowing things down. So I set nzbget to use /tmp/nzbget-temp as the temp folder, and boy, oh boy has it sped things up! The download speed used to fluctuate all over the place, from 11,000KB/sec down to 8,000KB/sec, then 10,000KB/sec, then 12,500KB/sec (the line speed). Now it's pretty much locked at 12,500KB/sec.

The temp folder holds the parts of a rar as it's being downloaded; the rar is then built (joined) into the inter dir. So the space required for temp is a function of the size of the split RARs. I've seen .partxxx.rar RARs up to 1GB, and right now I'm testing with 500MB rars. During the joining phase, nzbget downloads the next RAR, so assuming the join completes before the second RAR finishes downloading, you need 2x the RAR size. So far, with 500MB rars, I've not seen the temp folder go over about 1.2GB. With 8GB of RAM, I'm seeing 2.5GB utilised.

So, my question is: is there anything inherently bad about using the tmpfs (if that is what it's called)? I can't see it ever using more than 2GB even in a worst-case scenario, as it's always emptying itself to the cache drive. Secondly, is there any danger of using "too much" lowmem, or does the RAM disk not use lowmem? And finally, is there a way of limiting the size of the RAM disk -- I'd be happier if I could limit it to, say, 4GB.

Cheers!
Neil.
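
Edit: in case anyone wants to try this, here are the directory settings I changed, roughly as they look in my nzbget.conf. The paths are just examples from my box (your cache and array shares will differ), and I'm going from memory on the option names -- they may vary between nzbget versions, so double-check against your own config rather than taking this as gospel:

    # nzbget.conf -- directory settings for this tweak (paths are examples from my box)
    # temp: rar segments now land in RAM (tmpfs) instead of on the cache drive
    TempDir=/tmp/nzbget-temp
    # inter: completed rars are assembled here, on the cache drive
    InterDir=/mnt/cache/.nzbget/inter
    # dst: the final unpack goes straight to the array
    DestDir=/mnt/user/Downloads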
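
Edit 2: on my own last question, from what I've read a tmpfs mount can be capped with the size= mount option, so something like the below should work. I haven't actually tried it yet, so treat it as a sketch and correct me if I'm wrong:

    # give nzbget its own size-capped RAM disk instead of relying on /tmp
    mkdir -p /mnt/nzbget-temp
    mount -t tmpfs -o size=4g tmpfs /mnt/nzbget-temp

    # or, if /tmp is a separate tmpfs mount on your box, cap it in place
    mount -o remount,size=4g /tmp

    # check how much of the RAM disk is actually in use
    df -h /tmp

If that works, pointing the temp dir at the capped mount should mean a runaway download just fills the 4GB and errors out, rather than eating all 8GB of RAM.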