neilt0

Members · 1509 posts

Everything posted by neilt0

  1. 14 doesn't use libpar2. As for the rest, I have no idea. Either Needo or hugbug may be able to help.
  2. Yup, I run htop to keep an eye on what it's doing. When you see the compiling is done, and the CPU is idle, open up the web interface for nzbget and the new version should be running. P.S. The fix is now also in release version 14.1
  3. makes a good server for preclearing and testing drives. Or just a good server, period!
  4. We think that has been fixed in the latest build of nzbget. It only affects unraring on certain platforms, including Docker/unRAID. To get that new version, if you are running Needo's Docker, stop the Docker, change the EDGE variable to 1171 and it will update to that version with the fix. http://nzbget.net/forum/viewtopic.php?f=3&t=1603&p=10877#p10877
  5. I didn't see a speed increase using Turbo Write. I think I enabled it on the N36L as a test, and not the N54L. I get up to about 45MB/sec writes without it, which is usually fast enough for general use. Parity on the N54L is a 4TB HGST Deskstar 7K4000. Maybe I should try it out when I have to write a lotta data.
  6. Get the biggest drives you can. Especially if you have a Microserver, as it has limited slots.
  7. A small number of large drives is much better for safe recovery than a large number of small drives. I run 7x 4TB drives in my Microserver.
  8. 128,000TB. I believe that's the current 48-bit LBA BIOS limit.
  9. I'm pretty sure it doesn't supply power. Just split off the Molex power in the top bay.
  10. xfs seems better in that respect. I migrated two of my 4TB ReiserFS drives to xfs as one was getting so slow to write to, it was essentially unusable. Even after deleting 500GB from that drive, it was still too slow to use.
  11. No, they run extensive tests, and have documented what they do. It used to take days, but I think it's faster now, since they switched to faster HBAs.
  12. My hack has been incorporated (in a way) into nzbget as a RAM cache, so the hack is not needed. Also, I switched my cache drive to BTRFS, so writes are faster.
  13. 6TB WD Red for £204.99: http://www.dabs.com/products/wd-6tb-red-sata-6gb-s-64mb-3-5--hard-drive-9MMM.html?q=6tb%20red&src=16&awc=3044_1413210008_9e8317f8d09a325e9364b0f1c4af4840&referrerid=AW&utm_source=awin&utm_medium=affiliates&utm_content=AW00
  14. Get a fill server (and use it!). There's virtually nothing I can't get (that I personally want), going back to late 2008. Torrents, especially for older material, are a waste of time for me - slow, never complete. I use SuperNews (per my sig), with an Astraweb block. I've used about 500GB of the AW block in 2 years, and I pull down about 2 to 4TB a month via SuperNews.
  15. That's pretty much what I did. I bought 7x 4TB drives a year ago, replacing a huge pile o' drives. Selling my old drives on Fleabay, I got close to the then-current per-TB price for them.
  16. From that FlexRAID guy again? No, thanks!
  17. It'd still be nice to get an nzbget plugin (ideally one that can update itself with testing and stable versions). I don't know if overbyrn is able to pass on his code for the plugin he used to maintain? We have the Docker versions, but there is an outstanding problem with unraring when nzbget is running in a Docker. It doesn't look like the Docker is being maintained?
  18. A 4TB drive that's dead after a year is not in my plan! 3 years, yes.
  19. I'm happy paying extra for the warranty alone. You can pay a little extra and get a 4th year's warranty for the WD Reds. Compare that to the cheap Seagate DM I bought a year ago with a 1 year warranty, which do you think is the better deal?
  20. I couldn't get that to work, posted about it, but I don't think there was a response. I used needo's nzbget Docker, which works very well, but there's a problem with unraring that's specific to Docker. I spent a few days debugging that issue with the author of nzbget (hugbug), and we concluded it's specific to running nzbget in a Docker. I posted that problem, and again no replies. I kind of follow the idea of merging Dockers, rewriting them or writing your own, but I'm not a programmer, so I don't know how well that'd go. I think the issue is that we rely too heavily on volunteers for what are becoming "core" uses for our servers. Here's an idea: Limetech picks up the top ten applications used on unRAID and adopts them as official plugins or Dockers. That would mean they code the plugin or Docker. This is the Synology approach, AIUI. Or, they sponsor a programmer (one or more of the existing people here) and pay them to maintain the app. Or, we all crowdfund the development/maintenance of the core apps - we each pay $10 a year or something. Well, it's an idea!
  21. Nowadays, of the files I get, about 90% are RAR5, as it's quite a bit more efficient than the older format. I download up to 1TB a week. This is not a RAR5 problem -- unraring in nzbget works outside of Docker. It's an "nzbget inside Docker" problem.
  22. Because everything to be unpacked is packed as a RAR? Also, no RAR5 support in 7zip? http://sourceforge.net/p/sevenzip/discussion/45797/thread/0500cb75/?page=1
  23. Thanks! We did try unrar 5.11 and it didn't fix the problem, unfortunately: http://nzbget.net/forum/viewtopic.php?f=3&t=1489&start=40#p9737
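The EDGE-variable update described in post 4 above can be sketched as a shell sequence. This is a hypothetical example: the container name, image name, port, and volume path (`nzbget`, `needo/nzbget`, `6789`, `/mnt/cache/appdata/nzbget`) are assumptions, so substitute the values from your own unRAID Docker settings.

```shell
# Hypothetical sketch: recreate needo's nzbget container with the EDGE
# variable set to 1171 so it pulls the testing build with the unrar fix.
# Names, port, and volume path below are assumptions -- adjust to your setup.
docker stop nzbget
docker rm nzbget
docker run -d --name nzbget \
  -e EDGE=1171 \
  -p 6789:6789 \
  -v /mnt/cache/appdata/nzbget:/config \
  needo/nzbget
```

On unRAID you'd normally make the same change by editing the container's EDGE variable in the web UI and letting it recreate the container.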
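The 48-bit LBA ceiling quoted in post 8 above can be sanity-checked with a little arithmetic, assuming classic 512-byte sectors: 2^48 addressable sectors works out to 128 PiB, which matches the "128,000TB" figure if you read TB loosely as TiB.

```python
# 48-bit LBA: 2**48 addressable sectors of 512 bytes each.
max_bytes = 2 ** 48 * 512

print(max_bytes)              # 144115188075855872 bytes
print(max_bytes // 10 ** 12)  # 144115 decimal terabytes
print(max_bytes // 2 ** 40)   # 131072 TiB, i.e. 128 PiB
```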