neilt0 Posted October 28, 2013

Hi, I had a bit of a brain wave today -- I remembered from somewhere that /tmp/ was a magical, expanding RAM disk and I figured I could use it to speed up nzbget... ...in fact I am currently doing so, and holy poop, does it work! But I figured I should post a query here to make sure I don't blow anything up.

Here's my config:
* HP Microserver N54L (2.2GHz dual core)
* 8GB RAM
* 7x 4TB array drives
* 200GB Seagate 7200.2 2.5" cache drive (relatively slow)
* 105 megabits/sec cable internet (soon to be 120-133 megabits/sec)
* SuperNews News Service Provider
* nzbget 12 usenet client (this tweak may also apply to SABnzbd)

The main issue is that with such a fast internet connection, to utilise it fully, I am using 28 NSP connections, so you have 28 simultaneous writes at a total of 12MB/sec to a relatively slow cache drive. The drive can almost keep up, but not quite, and then of course you may see a simultaneous download and par repair, or a simultaneous download and unpack, hitting the drive even harder.

I used to use one of the 4TB drives as a cache drive, and it was pretty quick, but I decided to move that to the array and use my spare 7200rpm 2.5" drive to save power and heat, and also so that if it died there couldn't be too much data on it.

The way that nzbget works (as do virtually all usenet clients) is that it retrieves segments of an individual RAR to a temp folder (temp). Once that RAR is complete, it assembles those segments into an intermediate folder (inter), then when all the RARs are done, it unpacks to a destination folder (dst). Like so:

temp
  UbuntuAnimalName.part001.rar.001 1.3MB
  UbuntuAnimalName.part001.rar.002 1.3MB
  UbuntuAnimalName.part001.rar.003 1.3MB
  UbuntuAnimalName.part001.rar.004 1.3MB
  ...

inter
  UbuntuAnimalName.part001.rar 100MB
  UbuntuAnimalName.part002.rar 100MB
  UbuntuAnimalName.part003.rar 100MB
  ...
dst
  UbuntuAnimalName.ISO 4.5GB

I have set dst to be on an array drive, and that speeds up unpacking, but I had the temp folder on the cache drive and that was really slowing things down. So, I set nzbget to use /tmp/nzbget-temp as the temp folder and boy, oh boy, has it sped things up! The download speed used to fluctuate all over the place, from 11,000KB/sec down to 8,000KB/sec, then 10,000KB/sec, then 12,500KB/sec (the line speed). Now it's pretty much locked on at 12,500KB/sec.

The temp folder holds the parts of a RAR as it's being downloaded; then it's built (joined) into the inter dir. So, the space required for temp is a function of the size of the split RAR. I've seen .partxxx.rar RARs up to 1GB, and right now I'm testing with 500MB RARs. During the joining phase, nzbget will download the next RAR, so assuming the join completes before the second RAR is downloaded, you need 2x the RAR size. So far, with 500MB RARs, I've not seen the temp folder go over about 1.2GB. With 8GB of RAM, I'm seeing 2.5GB utilised.

So, my question is, is there anything inherently bad about using the tmpfs (if that is what it's called)? I can't see it ever using more than 2GB even in a worst-case scenario, as it's always emptying itself to the cache drive. Secondly, is there any danger of using "too much" lowmem, or does the RAM disk not use lowmem? And finally, is there a way of limiting the size of the RAM disk -- I'd be happier if I could limit it to, say, 4GB.

Cheers!
Neil.
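On the size-limit question: tmpfs takes a size= mount option, so instead of pointing nzbget at the shared /tmp you can give it its own capped mount. A minimal sketch -- the path, the 4G cap and the mount name are example values, not from this post, and the mount itself needs root:

```shell
# Create a dedicated tmpfs for nzbget's temp dir, capped at 4 GB.
# Path, size and mount name below are example values.
SIZE=4G
MNT=/tmp/nzbget-temp
OPTS="size=${SIZE},mode=1777"
mkdir -p "$MNT"
# The mount syscall needs root, so only attempt it when we have it:
if [ "$(id -u)" -eq 0 ]; then
    mount -t tmpfs -o "$OPTS" nzbget-tmp "$MNT"
fi
```

Once mounted, writes beyond the cap fail with "No space left on device" instead of eating all the RAM, which answers the worst-case worry above.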
technologiq Posted October 29, 2013

Could the same be accomplished by using an SSD for TMP?
neilt0 Posted October 29, 2013 (Author)

Could the same be accomplished by using an SSD for TMP?

Yes, but the problems with using an SSD are a) cost and b) limited writes. Using RAM plus a (spinning) hard drive is faster (the RAM part, at least) and not really limited when it comes to writes (the hard drive part). Because the assembly process requires two writes, and I'm writing a lot of data to the drive each month, an SSD could be dead in about a year, according to the specs.
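The "dead in about a year" estimate is easy to sanity-check with a back-of-envelope calculation. All three figures below are assumptions for illustration (3 TB downloaded per month, each byte written twice because of the temp-then-inter assembly, and a rated endurance of roughly 70 TB written for a cheap consumer SSD), not numbers from the post:

```shell
# Rough SSD lifetime estimate; every figure here is an assumption.
awk 'BEGIN {
    monthly_tb = 3 * 2        # 3 TB/month downloaded, written twice
    endurance_tb = 70         # assumed cheap-SSD rated endurance (TBW)
    printf "~%.0f months\n", endurance_tb / monthly_tb   # prints ~12 months
}'
```

With heavier usage or a lower-endurance drive the number shrinks fast, which is the point being made.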
neilt0 Posted October 29, 2013 (Author)

Reading up on tmpfs, it appears to limit the size to 1/2 the RAM by default (which is a good thing). Does that still apply to Slackware/unRAID?

http://en.wikipedia.org/wiki/Tmpfs#Linux

"Linux tmpfs is supported by the Linux kernel from version 2.4 and up. tmpfs (previously known as shmfs) is based on the ramfs code used during bootup and also uses the page cache, but unlike ramfs it supports swapping out less-used pages to swap space as well as filesystem size and inode limits to prevent out of memory situations (defaulting to half of physical RAM and half the number of RAM pages, respectively). These options are set at mount time and may be modified by remounting the filesystem."
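As the quoted text says, the size limit is set at mount time and can be changed later by remounting, without unmounting or losing the files already there. A sketch -- the mountpoint is hypothetical and the remount needs root:

```shell
# Resize an already-mounted tmpfs in place (hypothetical mountpoint).
MNT=/tmp/nzbget-temp
if [ "$(id -u)" -eq 0 ] && mountpoint -q "$MNT"; then
    mount -o remount,size=2G "$MNT"
fi
# tmpfs support can be confirmed from the kernel's filesystem list:
grep -w tmpfs /proc/filesystems
```

This applies to stock Slackware/unRAID kernels too, since the limits are a kernel tmpfs feature, not a distro patch.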
neilt0 Posted October 29, 2013 (Author)

I had a discussion with hugbug (the author of nzbget) and we worked out that this tweak is only valid if you are using a FS that works best without DirectWrite switched on -- which is the case with ReiserFS: http://nzbget.sourceforge.net/forum/viewtopic.php?f=3&t=957&p=6113#p6113
ivez Posted October 14, 2014

I'm interested, then -- is this a good solution for nzbget?
neilt0 Posted October 14, 2014 (Author)

My hack has been incorporated (in a way) into nzbget as a RAM cache, so the hack is not needed. Also, I switched my cache drive to BTRFS, so writes are faster.
MuppetRules Posted June 29, 2015

neilt0, can you clarify how this hack has been incorporated into NZBGet? I've tried using RAM cache from within NZBGet without much success. The only way I can get it to work is if I specifically point the TempDir to /tmp in an environment variable (Docker).

Here's a quick look at my NZBGet configuration:

Docker Template: gfjardim's nzbget (14.2)
DOWNLOAD QUEUE
  ArticleCache: 1024MB (Would it help to increase this to 2048?)
  DirectWrite: No
  WriteBuffer: -1

The end goal is to reduce frequent writes, or at least minimize them.
neilt0 Posted June 29, 2015 (Author)

If you want to minimise writes and/or speed up downloading, I'd use the built-in "article cache" feature rather than changing the location of temp. Here are my settings:

Full size: http://i.imgur.com/3ykt0iK.png

1024MB (1GB) is plenty, I think. I have 8GB RAM in my server. You want to set it so that nzbget can download around 1-2 RARs into RAM and not hit the hard drive. It's rare to see 500MB RARs. I have seen 1GB RARs, but that's even rarer, so I set mine to 1GB to allow a maximum of 2 RARs to be in RAM until they are written to disk. If you have a crazy-fast download speed, you could increase it, but at 152 megabits/sec, that's enough for me.

You can look at nzbget's stats to see how much RAM it's using at a time. In this case, the NZB has 200MB RARs and nzbget is using 200MB of article cache for the first RAR in memory, and then about 100MB of cache until the first RAR is written. A slower drive and fast internet means you will need more cache. If you are using an SSD, then 2x maximum RAR size should be plenty -- so a maximum of 1 or 2GB.
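The settings discussed above boil down to a few lines of nzbget.conf. A sketch using the values from this thread (your numbers may differ; ArticleCache is in MB and WriteBuffer in KB):

```
# nzbget.conf excerpt: cache whole RARs in RAM, then write each one once.
ArticleCache=1024     # MB; aim for ~2x the largest RAR you typically see
DirectWrite=no        # DirectWrite bypasses the cache, so switch it off
WriteBuffer=1024      # KB per file; fewer, larger writes to the disk
```

Note DirectWrite=no matches the earlier finding in this thread that the tweak only makes sense on filesystems that work best without DirectWrite.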
dgaschk Posted June 30, 2015

There should be no problem using tmpfs in this manner, as RAM allows. The more the better. How long is the warranty on a hybrid drive?
neilt0 Posted June 30, 2015 (Author)

You can, but there's no point any more - using RAM to build the RAR before writing to disk is now built in as articlecache.
Lev Posted February 10, 2018

On 6/29/2015 at 5:07 PM, neilt0 said:
"You can, but there's no point any more - using RAM to build the RAR before writing to disk is now built in as articlecache."

I like this old thread; it shows how far things have progressed, such that we have so much RAM to ask the question... what if we never want to write the RAR file to SSD or disk? We want it to remain in RAM, such that the only thing ever written to SSD or disk is the final contents of the RAR.
neilt0 Posted February 10, 2018 (Author)

That's what article cache does.
Lev Posted February 10, 2018

8 minutes ago, neilt0 said:
"That's what article cache does."

Maybe I'm asking in the wrong way, or I'm missing something, because here's what I'm observing, having tested this for the last hour and getting a bit frustrated before I searched and found your thread.

Article cache, based on what I'm observing, keeps all of the individual pieces of a RAR in RAM, like you have in your example here -- all of these guys are in RAM:

On 10/28/2013 at 2:43 PM, neilt0 said:
"temp
  UbuntuAnimalName.part001.rar.001 1.3MB
  UbuntuAnimalName.part001.rar.002 1.3MB
  UbuntuAnimalName.part001.rar.003 1.3MB
  UbuntuAnimalName.part001.rar.004 1.3MB
  ..."

Only once all of the pieces of a RAR have been downloaded is it moved out of the article cache (RAM) and written to disk (/InterDir) as the complete single RAR file, just like you explained here:

On 10/28/2013 at 2:43 PM, neilt0 said:
"inter
  UbuntuAnimalName.part001.rar 100MB
  UbuntuAnimalName.part002.rar 100MB
  UbuntuAnimalName.part003.rar 100MB
  ..."

What I'm trying to do is keep all those completed RAR files in RAM rather than have them written to the /InterDir disk; therefore I'm trying to make InterDir be a ramdisk. I think this is the next logical step beyond what you were doing in 2013 (glad you're still here!) using temp (/tmp). You're right that article cache solves the problem you were curious about; however, based on my tests, it does not also keep the completed RARs in memory. I'm still observing those written to /InterDir until they are all downloaded and finally unpacked and moved to /DestDir. Does this align with what you know?

Expect to be called crazy for wanting this, as it means gigabytes of RARs stored in memory that could easily be lost and have to be redownloaded in the event of a server reboot.
trurl Posted February 10, 2018

28 minutes ago, Lev said:
"What I'm trying to do is keep all those completed RAR files in RAM rather than written to the /InterDir disk, therefore I'm trying to make InterDir be a ramdisk."

You should be able to map a volume for the docker to /tmp and point NZBget to it.
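Sketched as commands, that suggestion looks like the following. The image name and both paths are hypothetical placeholders; the key point is that on unRAID the host's /tmp lives in RAM, so anything the container writes to the mapped path never touches a disk:

```shell
# Map a RAM-backed host dir into the container, then point NZBGet's
# InterDir at the container-side path. Names and paths are placeholders.
HOST_DIR=/tmp/nzbget-inter   # RAM-backed on unRAID
CTR_DIR=/inter               # set InterDir to this inside NZBGet
mkdir -p "$HOST_DIR"
VOL="-v ${HOST_DIR}:${CTR_DIR}"
# Example invocation (substitute your actual image name):
echo "docker run -d --name nzbget $VOL <your-nzbget-image>"
```

The same mapping can be entered as a path pair in unRAID's Docker container settings page instead of on the command line.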
neilt0 Posted February 10, 2018 (Author)

I typically download 50GB+ files and don't have that much RAM, so I write the RARs to an SSD that's only used for this purpose. It's a 120GB drive I bought for cheap, so I don't care if it dies. Then I unrar to the array. My article cache is 2GB, to hold 2x 1GB RARs before writing to the SSD.
Lev Posted February 11, 2018

4 hours ago, trurl said:
"You should be able to map a volume for the docker to /tmp and point NZBget to it."

Yes, I was thinking the same thing, but so far I have failed miserably in my attempts to try it. I've tried multiple different ways of mounting /tmp or tmpfs, and that part works, best I can tell. From the bash shell within the NZBget container I can successfully see the mapped mount point and even create and edit files, and see them back on the host. A+, I'm solid here.

Where the trouble lies is getting the app NZBget to use that mount point. I've edited the appropriate /InterDir in the 'Paths' section of settings. I've double-checked the nzbget.conf file to ensure it matches, but no matter what, NZBget ignores it and falls back to $MainDir/intermediate.

I've yet to enable debug logging for NZBget, but that's where I'll look next. I expect it must be some permission problem with the mount and tmpfs as a device type. All I know is it shouldn't be this painful; I must be missing something painfully simple.
Lev Posted February 11, 2018

4 hours ago, neilt0 said:
"I typically download 50GB+ files and don't have that much RAM, so write the RARs to an SSD that's only used for this purpose. It's a 120GB drive I bought for cheap, so I don't care if it dies. Then I unrar to the array. My article cache is 2GB to hold 2x 1GB RARs before writing to the SSD."

So far I've killed one SSD every 1.5 years, and the cheap ones die. It's not the cost -- like you said, they are cheap -- but ugh, I'd rather be spending my time on so many other projects than replacing them. RAM may be my answer.
digiblur Posted February 11, 2018

Definitely not a bad idea. I have 24 gig in my box and rarely see over 1/3 used, even with Plex RAM transcoding going on. Might have to play around with the mounts and get that inter folder to be in RAM too. Nothing wrong with actually using RAM for those temp files.
Lev Posted February 11, 2018

Update: got it working. It was, as I thought, "something painfully simple"... It works as you'd expect.

What caught me up was having an existing queue in NZBget. I now know that each item in the queue is set to the paths configured in NZBget at the time it is added to the queue. So with my existing queue, it wasn't until it caught up to the point where I had made the path changes that I saw log messages of new downloads hitting my container path mapping for /InterDir (nzbget.conf), which was mapped to /tmp (unRAID Docker container path settings for the NZBget container).

Thanks @neilt0, this thread continues to deliver over 4 years after your OP! @trurl, thanks for helping me keep my sanity that what I was doing all along was correct.
Kenny111 Posted November 6, 2021

Hey. I get an error in SABnzbd when using the /tmp folder (RAM) for the downloads file. It won’t let me set permissions in the SAB settings (like 755) for the /tmp directory. Anyone else run into this?