Lev last won the day on April 13

Lev had the most liked content!

Community Reputation

57 Good

About Lev

  • Rank
    Advanced Member


  • Personal Text
    Supermicro X9, 128GB ECC DDR3 @ 1600 MHz, 90TB (2x 6TB WD, 8x 8TB WD), 1x LSI 9308; Cache: 2TB Crucial SSD

Recent Profile Visitors

694 profile views
  1. Lev

    Dynamix - V6 Plugins

    @bonienl I couldn't find this previously reported... a very small cosmetic nit in Dynamix System Stats that I've noticed over the years; it's more noticeable now that you've done such a great job polishing the rest of the UI: the vertical alignment of the Dynamix Stats plug-in.
  2. @limetech is there a specific list of test cases you'd like to see performed to give confidence toward your goal?
  3. Lev

    CyberPower ok to use with Unraid?

    If it's a choice between having a UPS or no UPS, forget compatibility concerns! I'd take whatever I could get my hands on in Alaska. Do it, brother!
  4. Lev

    CyberPower ok to use with Unraid?

    Thanks! This makes me want to give it another try someday with the USB cable and see if I get results like yours. I've only had luck going over the network with the optional remote-access add-on card CyberPower offers.
  5. Lev

    CyberPower ok to use with Unraid?

    @jonathanm I finally got mine working a year ago using network SNMP, after many hours wasted trying the direct-connect USB cable. Is that the same method you're describing? Did you ever try the USB cable? Did it work for you?
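For reference, the network-SNMP route described above maps onto an apcupsd configuration roughly like the excerpt below. The address, port, MIB name, and community string are placeholders, so check them against your add-on card's settings rather than copying them verbatim:

```
# /etc/apcupsd/apcupsd.conf (excerpt) -- all values are placeholders
UPSCABLE ether
UPSTYPE snmp
# DEVICE format for the SNMP driver: hostname:port:vendor:community
DEVICE 192.168.1.50:161:RFC:public
```

After editing, restarting the apcupsd service and checking `apcaccess status` is the usual way to confirm the UPS is actually being polled.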
  6. Lev

    CyberPower ok to use with Unraid?

    Great decision to add a UPS! I also think CyberPower's products and their customer support are excellent! However, the APC library in Linux (and thus the one included in Unraid) doesn't marry well with CyberPower units, which is the point of your question. I'd advise you to stick to tried-and-proven compatibility recommendations from others.
  7. Lev

    unRAID OS version 6.5.1-rc3 available

    Love it! Works fantastic! Thanks @bonienl! I did a 'new config' just to try it out.
  8. Thank you, bonienl! That's a very nice improvement!
  9. My bad, I really should have included a screenshot in the OP. Thanks for the reply, @bonienl.
  10. With a large number of unassigned disks, the list in the pull-down menu when assigning them can be quite long. There does not appear to be any logical order to how the disks are listed. Any logical order would be an improvement. Unless there's a better sort method preferred by the developers, it seems like the list should be in ascending order of the disk letter, e.g. sdb, sdc, sdd... @bonienl @Squid just FYI if this is something to add into the on-going GUI improvements.
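The ascending ordering suggested above can be sketched with a plain lexical sort; the device names here are illustrative, not taken from a real system:

```shell
# Sort a hypothetical set of unassigned device names ascending (sdb, sdc, ...).
# A plain lexical sort is enough while names stay within sda..sdz.
printf '%s\n' sdd sdb sdf sdc | sort
```

One caveat: past 26 disks the kernel moves to two-letter names (sdaa, sdab, ...), and a plain lexical sort would place sdaa before sdb; comparing name length before the letters would preserve the natural order in that case.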
  11. I built this big. My advice: don't trouble yourself with those items you listed that you want to reuse. Build for where you're going, not where you've been. If all drives start empty, write speed becomes an issue for filling them in a reasonable amount of time; look at some of my previous posts and you might find ideas on how to speed it up. You've already spent a lot of money on drives, so now is not the time to be cheap. Finish the job. Let's see some pics!
  12. Update: got it working. It was, as I suspected, "something painfully simple"... It works as you'd expect. What caught me up was having an existing queue in NZBGet. I now know that each item in the queue is bound to the paths configured in NZBGet at the time it is added to the queue. So with my existing queue, it wasn't until it caught up to the point where I made the path changes that I saw log messages of new downloads hitting my container path mapping for /InterDir/ (nzbget.conf), which was mapped to /tmp (unRAID Docker container path settings for the NZBGet container). Thanks @neilt0, this thread continues to deliver over 4 years after your OP! @trurl thanks for helping me keep my sanity that what I was doing all along was correct.
  13. So far I've killed one SSD every 1.5 years, and the cheap ones die. It's not the cost; like you said, they're cheap. But ugh, I'd rather spend my time on so many other projects than replacing them. RAM may be my answer.
  14. Yes, I was thinking the same thing, but so far I've failed miserably in my attempts to try it. I've tried multiple ways of mounting /tmp or tmpfs, and that part works, best I can tell. From a bash shell inside the NZBGet container I can successfully see the mapped mount point, even create and edit files, and see them back on the host. A+, I'm solid here. Where the trouble lies is getting the NZBGet application to use that mount point. I've edited the appropriate /InterDir in the 'Paths' section of settings and double-checked the nzbget.conf file to ensure it matches, but no matter what, NZBGet ignores it and falls back to $MainDir/intermediate. I've yet to enable debug logging for NZBGet, but that's where I'll look next. I expect it must be some permission problem with the mount and tmpfs as a device type. All I know is it shouldn't be this painful; I must be missing something painfully simple.
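A tmpfs can also be attached at the Docker level rather than by mapping /tmp from the host. This is only a sketch: the container name, image, paths, and the 8 GB size cap are assumptions for illustration, not a tested Unraid recipe:

```shell
# Illustrative only: attach a size-capped ramdisk inside the container,
# then point NZBGet's InterDir setting at /intermediate.
docker run -d --name nzbget \
  --tmpfs /intermediate:rw,size=8g \
  -v /mnt/user/appdata/nzbget:/config \
  -v /mnt/user/downloads:/downloads \
  linuxserver/nzbget
```

Capping the tmpfs size matters here: without it, a deep queue of large RARs could eat host RAM until the kernel's OOM killer steps in.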
  15. Maybe I'm asking in the wrong way, or I'm missing something, because here's what I'm observing after testing this for the last hour and getting a bit frustrated that I searched and found your thread.

    The article cache, based on what I'm observing, keeps all of the individual pieces of a RAR in RAM, like in your example here. Only once all of the pieces of a RAR have been downloaded is it moved out of the article cache (RAM) and written to disk (/InterDir) as the complete single RAR file, just like you explained.

    What I'm trying to do is keep those completed RAR files in RAM as well, rather than written to the /InterDir disk; therefore I'm trying to make InterDir a ramdisk. I think this is the next logical step beyond what you were doing in 2013 (glad you're still here!) using temp (tmp). You're right that the article cache solves the problem you were curious about; however, based on my tests it does not also keep the completed RARs in memory. I'm still observing those written to /InterDir until they are all downloaded and finally unpacked and moved to /DestDir. Does this align with what you know?

    Expect to be called crazy for wanting this, as it means gigabytes of RARs stored in memory that could easily be lost and have to be redownloaded in the event of a server reboot.
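For anyone trying the same thing, the relevant nzbget.conf knobs look roughly like the excerpt below; the paths and cache size are illustrative, not a recommendation:

```
# nzbget.conf (excerpt) -- paths and sizes are illustrative
MainDir=/downloads
InterDir=/intermediate      # point at a ramdisk to keep in-progress RARs in RAM
DestDir=${MainDir}/completed
ArticleCache=700            # MB of RAM used to assemble article pieces
```

With InterDir on a ramdisk, the article cache assembles pieces in RAM and the completed RARs then land on the ramdisk instead of spinning disk, at the cost of losing everything not yet unpacked to DestDir on a reboot.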