Using /tmp/ as a large RAM disk (for nzbget/SABnzbd)?



Hi,

 

I had a bit of a brain wave today -- I remembered from somewhere that /tmp/ was a magical, expanding RAM disk and I figured I could use it to speed up nzbget...

 

...in fact I am currently doing so, and holy poop, does it work!

 

But, I figured I should post a query here to make sure I don't blow anything up.

 

Here's my config:

 

* HP Microserver N54L (2.2GHz dual core)

* 8GB RAM

* 7x 4TB array drives

* 200GB Seagate 7200.2 2.5" cache drive (relatively slow)

* 105 megabits/sec cable internet (soon to be 120-133 megabits/sec)

* SuperNews News Service Provider

 

* nzbget 12 usenet client (this tweak may also apply to SABnzbd)

 

The main issue is that, to utilise such a fast internet connection fully, I am using 28 NSP connections -- so that's 28 simultaneous writes, totalling around 12MB/sec (105 megabits/sec works out to roughly 13MB/sec), hitting a relatively slow cache drive.

The drive can almost keep up, but not quite, and of course you may also see a par repair or an unpack running at the same time as a download, hitting the drive even harder.

 

I used to use one of the 4TB drives as a cache drive, and it was pretty quick, but I decided to move that to the array and use my spare 7200rpm 2.5" drive instead -- to save power and heat, and also so that if it died, there couldn't be too much data on it.

 

The way that nzbget works (as do virtually all usenet clients) is that it retrieves the segments of an individual RAR into a temp folder (temp). Once that RAR is complete, it assembles the segments into an intermediate folder (inter); then, when all the RARs are done, it unpacks them to a destination folder (dst).

 

Like so:

 

temp

UbuntuAnimalName.part001.rar.001 1.3MB

UbuntuAnimalName.part001.rar.002 1.3MB

UbuntuAnimalName.part001.rar.003 1.3MB

UbuntuAnimalName.part001.rar.004 1.3MB

...

 

inter

UbuntuAnimalName.part001.rar 100MB

UbuntuAnimalName.part002.rar 100MB

UbuntuAnimalName.part003.rar 100MB

...

 

dst

UbuntuAnimalName.ISO 4.5GB

 

I have set dst to be on an array drive, and that speeds up unpacking, but I had the temp folder on the cache drive and that was really slowing things down.

 

So, I set nzbget to use /tmp/nzbget-temp as the temp folder and boy, oh boy, has it sped things up! The download speed used to fluctuate all over the place, from 11,000KB/sec down to 8,000KB/sec, then 10,000KB/sec, then 12,500KB/sec (the line speed). Now it's pretty much locked on at 12,500KB/sec.
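
For reference, the relevant bit of nzbget.conf ends up looking something like this (a sketch only -- /tmp/nzbget-temp is just the folder I created, and the other paths are examples, not a complete config):

# nzbget.conf -- PATHS section (illustrative values)
TempDir=/tmp/nzbget-temp   # RAR segments now land in RAM instead of on the cache drive
InterDir=/mnt/cache/inter  # example path -- assembled RARs still go to the cache drive
DestDir=/mnt/disk1/done    # example path -- final unpack goes to an array drive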

 

The temp folder holds the parts of a RAR as it's being downloaded; once a RAR is complete, those parts are joined and the result goes into the inter dir.

So, the space required for temp is a function of the size of the split RARs. I've seen .partxxx.rar files of up to 1GB, and right now I'm testing with 500MB RARs.

During the joining phase, nzbget is already downloading the next RAR, so assuming the join completes before that next RAR finishes downloading, you need roughly 2x the RAR size. So far, with 500MB RARs, I've not seen the temp folder go over about 1.2GB. With 8GB of RAM, I'm seeing about 2.5GB utilised in total.
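
If you want to watch it for yourself, something like this (assuming the same /tmp/nzbget-temp path as above) shows what the temp folder and RAM are doing while a download runs:

# Size of the temp folder, refreshed every 5 seconds
watch -n 5 du -sh /tmp/nzbget-temp

# Overall memory picture -- tmpfs usage shows up in the cached figure here
free -m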

 

So, my question is: is there anything inherently bad about using tmpfs (if that is what it's called)?

I can't see it ever using more than 2GB, even in a worst-case scenario, as it's always emptying itself to the cache drive.

 

Secondly, is there any danger of using "too much" lowmem, or does the RAM disk not use lowmem?

 

And finally, is there a way of limiting the size of the RAM disk? I'd be happier if I could cap it at, say, 4GB.

 

Cheers!

 

Neil.


Could the same be accomplished by using an SSD for TMP?

 

Yes, but the problem with using an SSD is a) cost and b) limited writes

 

Using RAM and a (spinning) hard drive is faster (the RAM part, at least) and not really limited when it comes to writes (the hard drive part). Because the assembly process requires two writes, and I'm writing a lot of data to the drive each month, an SSD could be dead in about a year, according to the specs.


Reading up on tmpfs, it appears to limit the size to 1/2 the RAM by default (which is a good thing). Does that still apply to Slackware/unRAID?

 

http://en.wikipedia.org/wiki/Tmpfs#Linux

Linux

tmpfs is supported by the Linux kernel from version 2.4 and up.[3] tmpfs (previously known as shmfs) is based on the ramfs code used during bootup and also uses the page cache, but unlike ramfs it supports swapping out less-used pages to swap space as well as filesystem size and inode limits to prevent out of memory situations (defaulting to half of physical RAM and half the number of RAM pages, respectively).[4] These options are set at mount time and may be modified by remounting the filesystem.
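
So by default it should already be capped at half of RAM (which would be 4GB on this 8GB box), assuming unRAID doesn't override that. To check, or to set an explicit cap, the standard commands look roughly like this (mount point and size are just examples):

# See how big the /tmp tmpfs currently is (if /tmp is a tmpfs mount on your system)
df -h /tmp

# Cap an existing tmpfs mount at 4GB without rebooting
mount -o remount,size=4g /tmp

# Or give nzbget its own dedicated tmpfs with an explicit cap
mkdir -p /mnt/nzbtemp
mount -t tmpfs -o size=4g tmpfs /mnt/nzbtemp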

  • 11 months later...
  • 8 months later...

neilt0, can you clarify how this hack has been incorporated into NZBGet?

 

I've tried using the RAM cache from within NZBGet without much success. The only way I can get it to work is if I specifically point TempDir to /tmp via an environment variable (Docker).

 

[Screenshot: y7VZIIH.jpg]
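
In other words, the only approach that has worked for me is to give the container a RAM-backed path and point NZBGet's temp at it. Conceptually it's something like this (a rough sketch only -- the image name, ports and paths here are placeholders, not my actual unRAID template):

# Map a RAM-backed host folder into the container
# (on unRAID the host's /tmp lives in RAM, which is the whole point of this thread)
docker run -d --name nzbget \
  -v /tmp/nzbget-temp:/ramtemp \
  -v /mnt/user/appdata/nzbget:/config \
  -p 6789:6789 \
  your-nzbget-image
# ...then set TempDir to /ramtemp (the container-side path) in the NZBGet settings.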

 

 

Here's a quick look at my NZBGet configuration:

 

Docker Template:

gfjardim's nzbget (14.2)

 

DOWNLOAD QUEUE

ArticleCache: 1024MB (Would it help to increase this to 2048?)

DirectWrite: No

WriteBuffer: -1

 

The end goal is to avoid frequent writes, or at least minimize them.


If you want to minimise writes and/or speed up downloading, I'd use the built-in "article cache" feature rather than changing the location of temp.

 

Here are my settings:

 

[Screenshot: 3ykt0iK.png]

 

Full size: http://i.imgur.com/3ykt0iK.png

 

1024MB (1GB) is plenty, I think. I have 8GB RAM in my server. You want to set it so that nzbget can download around 1-2 RARs into RAM and not hit the hard drive.

 

It's rare to see 500MB RARs. I have seen 1GB RARs, but that's even rarer, so I set mine to 1GB to allow a maximum of 2 RARs to be in RAM until they are written to disk.

 

If you have a crazy fast download speed, you could increase it, but at 152mbps, that's enough for me. You can look at nzbget's stats to see how much RAM it's using at a time:

 

[Screenshot: HTTO50x.png]

 

In this case, the NZB has 200MB RARs, so nzbget is using 200MB of article cache to hold the first RAR in memory, plus about another 100MB of cache until that first RAR is written out.

 

A slower drive and a fast internet connection mean you will need more cache. If you are using an SSD, then 2x the maximum RAR size should be plenty -- so a maximum of 1 or 2GB.
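
For anyone who'd rather read config than screenshots, these knobs live in nzbget.conf and look something like this (values here are illustrative, not a prescription -- tune them to your RAM and typical RAR size):

# nzbget.conf -- DOWNLOAD QUEUE section (illustrative values)
ArticleCache=1024   # in-RAM article cache, in MB; aim for roughly 2x your largest RAR
DirectWrite=no      # whether decoded articles are written straight into the output file
WriteBuffer=1024    # write buffer, in KB, used when flushing data to disk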

  • 2 years later...
On 6/29/2015 at 5:07 PM, neilt0 said:

You can, but there's no point any more - using RAM to build the RAR before writing to disk is now built in as articlecache.

 

I like this old thread -- it shows how far things have progressed, to the point where we have enough RAM to ask the question: what if we never want to write the RAR files to SSD or disk at all, and keep them in RAM, so that the only thing ever written to SSD or disk is the final unpacked contents of the RARs?

 

 

8 minutes ago, neilt0 said:

That's what article cache does.

 

Maybe I'm asking in the wrong way, or I'm missing something, because here's what I'm observing after testing this for the last hour and getting a bit frustrated -- which is how I searched and found your thread. :)

 

Article cache, based on what I'm observing, keeps all of the individual pieces of a RAR in RAM, like you have in your example here -- all of these guys are in RAM:

 

On 10/28/2013 at 2:43 PM, neilt0 said:

temp

UbuntuAnimalName.part001.rar.001 1.3MB

UbuntuAnimalName.part001.rar.002 1.3MB

UbuntuAnimalName.part001.rar.003 1.3MB

UbuntuAnimalName.part001.rar.004 1.3MB

...

 

 

Only once all of the pieces of a RAR have been downloaded is it moved out of the article cache (RAM) and written to disk (/InterDir) as the complete single RAR file, just like you explained here:

 

On 10/28/2013 at 2:43 PM, neilt0 said:

inter

UbuntuAnimalName.part001.rar 100MB

UbuntuAnimalName.part002.rar 100MB

UbuntuAnimalName.part003.rar 100MB

...

 

What I'm trying to do is keep all of those completed RAR files in RAM rather than have them written to the /InterDir disk, so I'm trying to make InterDir itself a RAM disk. I think this is the next logical step beyond what you were doing in 2013 (glad you're still here!) with temp (/tmp). You're right that article cache solves the problem you were originally curious about, but based on my tests it does not also keep the completed RARs in memory -- I'm still observing them being written to /InterDir until everything is downloaded, then finally unpacked and moved to /DestDir. Does this align with what you know?

 

Expect to be called crazy for wanting this, as it means gigabytes of RARs stored in memory that could easily be lost and would have to be redownloaded in the event of a server reboot. :)

 

 


I typically download 50GB+ files and don't have that much RAM, so I write the RARs to an SSD that's only used for this purpose. It's a 120GB drive I bought for cheap, so I don't care if it dies. Then I unrar to the array.

My article cache is 2GB to hold 2x 1GB RARs before writing to the SSD.

4 hours ago, trurl said:

You should be able to map a volume for the docker to /tmp and point NZBget to it.

 

Yes, I was thinking the same thing, but so far I have failed miserably in my attempts to get it working.

 

I've tried multiple different ways of mounting /tmp or tmpfs, and that part works, as best I can tell. From the bash shell within the NZBget container, I'm able to successfully see the mapped mount point, create and edit files, and see them back on the host. A+, I'm solid here.

 

Where the trouble lies is getting the NZBget app itself to use that mount point. I've edited the appropriate /InterDir in the 'Paths' section of the settings, and I've double-checked the nzbget.conf file to ensure it matches, but no matter what, NZBget ignores it and falls back to $MainDir/intermediate.

 

I've yet to enable debug logging for NZBget, but that's where I'll look next. I expect it must be some permission problem with the mount and tmpfs as a device type. All I know is it shouldn't be this painful -- I must be missing something painfully simple. :S

4 hours ago, neilt0 said:

I typically download 50GB+ files and don't have that much RAM, so I write the RARs to an SSD that's only used for this purpose. It's a 120GB drive I bought for cheap, so I don't care if it dies. Then I unrar to the array.

My article cache is 2GB to hold 2x 1GB RARs before writing to the SSD.

 

So far I've killed about one SSD every 1.5 years -- and it's the cheap ones that die.

 

It's not the cost -- like you said, they're cheap -- but ugh, I'd rather spend my time on so many other projects than on replacing them. RAM may be my answer. :D


Update: got it working. It was, as I thought, "something painfully simple"...

 

It works as you'd expect. What caught me up was having an existing queue in NZBget. I now know that each item in the queue is locked to the paths that were configured in NZBget at the time it was added to the queue. So, with my existing queue, it wasn't until it caught up to the point where I made the path changes that I saw log messages showing new downloads hitting my container path mapping for /InterDir/ (set in nzbget.conf), which is mapped to /tmp (in the unRAID Docker container path settings for the NZBget container).
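
To spell out the working setup for anyone who lands here later (names and paths below are just my examples -- use whatever your template already has):

# unRAID Docker template for the NZBget container: add a volume mapping, e.g.
#   Container path: /inter     Host path: /tmp/nzbget-inter
# On unRAID, /tmp lives in RAM, so completed RARs never touch a disk.
#
# nzbget.conf (inside the container) then points InterDir at the container-side path:
InterDir=/inter
# Remember: items already in the queue keep whatever paths were set when they were queued.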

 

Thanks @neilt0 -- this thread continues to deliver, over 4 years on from your OP!

 

@trurl, thanks for helping me keep my sanity -- what I was doing all along was correct.

 

 

 
