About AgentXXL


  1. New backplanes? Unless I've missed something, Norcotek is now out of business. A few months ago I emailed and tried calling (the number is disconnected) to see if I could get replacement backplanes for my 4220. No response to the email, and I've seen other mentions that Norcotek is dead. Not sure what kind of speeds you're seeing, but I've got 18 drives in mine and the speed maxes out at about 150 MB/s for writes to the array; more commonly it's around 80 - 110 MB/s. The SATA/SAS controller and the motherboard+CPU combo play into this as well. 16 of my drives are connected to my LSI 9201-16i, which is a PCIe 2.0 card installed in a PCIe x8 slot. The max speed of this LSI is 6 Gbps (SATA3) per port, but it's also limited by the rest of the system and how many PCIe lanes are in use and/or dedicated to other hardware. I'm looking at a Supermicro enclosure to eventually replace my 4220, but for now I've removed the defective backplanes and direct-wired each drive using miniSAS SFF-8087 to 4x SATA forward breakout cables. And of course separate power for each drive too. Definitely a LOT more mess than using the backplanes, but at least my system is no longer throwing the random UDMA CRC errors it did when using the backplanes. I may look at upgrading the LSI to a PCIe 3.0 version with 12 Gbps capability, but not until after I get a new motherboard/CPU. I'm budgeting to eventually pick up a Threadripper setup so I can run a few more VMs and still have some CPU core headroom. Dale
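For a rough sense of where the ceiling sits in a setup like the one described above, the arithmetic can be sketched in a few lines. The figures below are nominal spec numbers (not measurements), and real-world throughput will be noticeably lower:

```python
# Back-of-envelope bandwidth check for an LSI 9201-16i in a PCIe 2.0 x8 slot.
# All numbers are theoretical maximums, not measured values.

PCIE2_LANE_MBS = 500                      # PCIe 2.0: ~500 MB/s usable per lane
lanes = 8                                 # card sits in a PCIe x8 slot
host_link_mbs = PCIE2_LANE_MBS * lanes    # total bandwidth to the card

SATA3_MBS = 600                           # 6 Gbps SATA3 ~ 600 MB/s per port
drives = 16                               # drives hanging off the card

per_drive_share = host_link_mbs / drives  # if all 16 drives stream at once

print(f"Card uplink: {host_link_mbs} MB/s")
print(f"Per-drive share with all {drives} active: {per_drive_share:.0f} MB/s")
```

So even in theory the card's uplink, not SATA3, becomes the bottleneck once many drives are active at once, which fits the observation that the rest of the system limits things.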
  2. No problem. Glad that's helping, but you don't have to delete all files and re-download - just delete the bad/renamed files for each affected download and attempt a re-postprocess from nzbget. For example, delete all files that have been renamed with the extension '.1', or any files that have had an extra leading '0' attached to the part identifier. Occasionally I'll have to ask nzbget to 'download remaining files' and let it attempt another repair before the unpack even tries to start. For some older content that often has more missing articles, I wish I could find a way to tell nzbget to download ALL remaining files, as sometimes it stops downloading the next parity file and just marks the download as failed. Some older (and sometimes even new) content needs the full set of parity files for par to successfully repair the archive. Note that on the failed 7zip extracts that hang, I will sometimes just stop the nzbget Docker container and then use my Windows box with 7zip installed to do the extract manually. This is rare, as most times I can clean up the intermediate download folder and nzbget will then successfully call 7zip and proceed with the extract. Dale
  3. I and other users are seeing the same issue. I've discovered a few problems that seem to be related. First, the par check/repair stage seems to fail randomly. Sometimes nzbget reports 'PAR Success', but no matter how many times I try to re-postprocess the download, the unpack fails or gets stuck. If I run QuickPAR from Windows using the same PAR set, it often finds 1 or 2 files that have all blocks present but need to be re-joined. Once QuickPAR has re-joined these blocks/files, nzbget can successfully unpack. The second issue is that some PAR repairs leave the renamed damaged files in the source folder. I find this confuses nzbget's unpack processing, especially when the first file in the archive set has a renamed copy. For example, if nzbget's PAR does a repair/rejoin, it sometimes seems to create a file with one more leading '0' in the filename, i.e. xxxxxxxxxxxxxxxxx.7z.001 is repaired/rejoined but there is a copy of the bad file named xxxxxxxxxxxxxxxxx.7z.0001. The same can happen with rar archives - the filename might be xxxxxxxxxxxxxxxxx.part001.rar, and after the repair/rejoin there's a 2nd file named xxxxxxxxxxxxxxxxx.part0001.rar. If you go into the source folder (the 'intermediate' folder for most, depending on how you have nzbget configured), delete all the 'bad' files that have been renamed, and then do a re-postprocess, the unpack will usually succeed. The 3rd failure case I've found is a complete 'halt' of the extract/unpack process, which seems to be a bug in the way 7zip is called to process .7z archives. The logs show the unpack request is calling 7zip, but the unpack hangs for some reason the logs don't identify. Hope these findings help others and maybe even help the nzbget team further refine their post-processing routines. Note that I've also found these same issues when using the Linuxserver.io build of the nzbget Docker container. This means the issues are likely inherent to the nzbget app and/or the par/unrar/7zip extensions. Dale
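The cleanup these two posts describe (removing '.1' renames and extra-leading-zero duplicates before re-postprocessing) could be scripted. Below is a rough Python sketch: the filename patterns are inferred from the examples above, and `find_leftovers` is a hypothetical helper I've made up, so review what it matches before actually deleting anything:

```python
import re
from pathlib import Path

# Patterns based on the renamed leftovers described above:
#   foo.7z.0001      left beside the repaired foo.7z.001
#   foo.part0001.rar left beside the repaired foo.part001.rar
#   any file renamed with a trailing '.1'
EXTRA_ZERO = re.compile(r"^(?P<stem>.+?)(?P<num>0\d{3})(?P<ext>(\.rar)?)$")

def find_leftovers(folder: str) -> list[str]:
    """Return filenames that look like pre-repair leftovers (review before deleting)."""
    base = Path(folder)
    names = {p.name for p in base.iterdir() if p.is_file()}
    leftovers = []
    for name in names:
        # Case 1: a '.1' rename sitting beside the repaired original
        if name.endswith(".1") and name[:-2] in names:
            leftovers.append(name)
            continue
        # Case 2: a duplicate with one extra leading zero in the part number,
        # where the version without the extra zero also exists
        m = EXTRA_ZERO.match(name)
        if m and m.group("stem") + m.group("num")[1:] + m.group("ext") in names:
            leftovers.append(name)
    return sorted(leftovers)
```

A cautious workflow would be to print the list first, then delete only after eyeballing it, and finally trigger the re-postprocess from nzbget's history view.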
  4. I was happy when I bought it over 6 years ago, and I used it with FreeNAS for many years with only 8 of the 20 bays populated with drives. When I moved to unRAID about 9 months ago, I had major issues with the hot-swap SATA backplanes that Norcotek had installed in the case. I eventually had to remove all the backplanes, and now the drives are direct cabled - no more easy hot-swap, but I never really needed that anyway. And as far as I know, Norcotek is now out of business: they haven't responded to multiple emails asking about replacement backplanes, and their phone number has been disconnected. This means you'll have to look for something else - I'm considering a Supermicro 24-disk enclosure myself, but I also picked up a Rosewill 4500 so I can do a 2nd unRAID setup with up to 15 x 3.5" drives (again, all direct cabled).
  5. Try the Krusader docker container.... it's quite full-featured as a file/directory utility. Just make sure to add the paths to the mountpoints for your UD device(s) so you can copy to the array.
  6. @TechMed So if you have UPS units and they're correctly configured to do shutdowns, why does this problem happen? If the remote shares on the other systems are also UPS protected, you just need to tweak your UPS shutdown sequence so that unRAID shuts down before the other systems do. This should prevent UD lockups as the remote shares should still be valid during the unRAID shutdown. The other thing to remember is to have your network gear all UPS protected as well. If your router/switches go down, that could cause the same issue where the remote shares/systems are no longer reachable until they restart. I have 2 remote mounts and haven't encountered a UD lockup like you describe. Hopefully it's just setting the shutdown times so that unRAID shuts down before the others.
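One way to stagger the shutdown order is via the UPS daemon's thresholds, assuming the boxes use apcupsd for UPS monitoring. The directives below are real apcupsd.conf settings, but the specific values are purely illustrative, not recommendations:

```
# /etc/apcupsd/apcupsd.conf on the unRAID box (illustrative values only)
# Shut unRAID down early, while the peers exporting the remote
# shares (and the network gear) are still up.
BATTERYLEVEL 50     # start shutdown when 50% battery remains
MINUTES 20          # ...or when ~20 minutes of runtime are left

# On the boxes serving the remote shares, use lower thresholds
# so they outlast the unRAID shutdown, e.g.:
# BATTERYLEVEL 20
# MINUTES 8
```

With thresholds staggered like this, unRAID unmounts its UD remote shares while they are still reachable, which is the ordering described above.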
  7. If your power is failing regularly (and even if it's stable), consider adding a UPS to protect your systems from 'instant shutdowns'. UPS units are relatively inexpensive and very beneficial when you have irregular power. If the outages are brown-outs (short duration) then a UPS will prevent the problem you're encountering. And longer duration outages can use UPS signalling to the OS to do controlled shutdowns if the power level on the UPS drops too low. Instead of trying to make it work under UD, the real answer is correcting/alleviating your power issues.
  8. Likely the cause - the single USB connection is identifying the 3 drives incorrectly via its internal controller. This happens with port multiplier setups too. There's no easy way to correct this other than putting each of the 3 drives into a separate USB enclosure. Or just attach them via SATA to your unRAID box if possible, and then transfer the data from them.
  9. Possibly.... note that the 4th drive that is mounting properly also has the same ending sequence as the 3 drives that appear identical in their identification info. It could be a compatibility issue with the 1TB hard drives having a model/serial number that's longer and doesn't differ until later in the sequence. The only way to know is to try. If the user removes the forgotten devices at the bottom of the UD section, that might help eliminate the potential for them to be identified as the same drive.
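A toy illustration of the truncation theory above - the model/serial strings below are made up, and `reported_id` is a hypothetical stand-in for whatever the USB bridge's controller actually does:

```python
# Hypothetical illustration: if a USB bridge exposes only the first N
# characters of a combined "model_serial" string, long identifiers that
# differ only near the end will collide.

def reported_id(model: str, serial: str, limit: int = 20) -> str:
    """What a (hypothetical) bridge might report: a truncated combined ID."""
    return f"{model}_{serial}"[:limit]

drives = [
    ("ST1000LM048-2E7172", "WKP1AAAA"),   # made-up 1TB model/serial strings
    ("ST1000LM048-2E7172", "WKP1BBBB"),
    ("ST1000LM048-2E7172", "WKP1CCCC"),
    ("ST500LT012-1DG142",  "W3P2DDDD"),   # shorter model string differs earlier
]

ids = [reported_id(m, s) for m, s in drives]
print(ids)
# The three 1TB drives collide because their IDs only differ past the
# truncation point; the 4th drive's ID differs within the limit.
```

On a live system, comparing the full strings under /dev/disk/by-id against what the enclosure reports would confirm or rule out this kind of truncation.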
  10. Simple work-around is to disconnect 2 of the USB drives and only do the rename of the mountpoint with one drive attached. Then remove it and attach the next and lastly do the 3rd.
  11. Not sure they made a huge difference, but the full parity rebuild on my dual parity drives took 24 hrs, about 3 - 4 hrs less than previously. That's 18 data drives and 2 parity drives. Regardless of time, the more important result is that there were zero errors after replacing the SATA cable. My 168TB+ unRAID array is running quite nicely now.
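As a quick sanity check on that 24-hour figure, assuming the rebuild pace is set by the largest (~10TB, per the later posts) drives being read/written in parallel:

```python
# Back-of-envelope: a parity rebuild touches every drive in parallel,
# so elapsed time is roughly the largest drive's capacity divided by
# its average sequential speed.
largest_drive_bytes = 10e12          # ~10 TB drives (decimal bytes)
rebuild_seconds = 24 * 3600          # the 24-hour rebuild reported above

avg_mb_per_s = largest_drive_bytes / rebuild_seconds / 1e6
print(f"Implied average throughput: {avg_mb_per_s:.0f} MB/s")
```

That works out to a little over 100 MB/s sustained per drive, which is plausible for 7200 rpm drives averaged across inner and outer tracks, so 24 hrs is about as good as this hardware could do.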
  12. If you have another PC to use you could boot from a USB Linux distro, mount the image and then share it over your network for copying to the unRAID array. Or copy the data onto another disk that's formatted in a format that UD can use. Mounting it in the QNAP and transferring from there is probably just as simple though.
  13. No port multipliers: 6 x SATA from the motherboard (all Intel SATA) and 16 from the LSI 9201-16i in IT mode. The old HGST 4TB drives were also 5400 rpm, whereas the rest of the drives (and the 4 new 10TB replacements) are all 7200 rpm. I know rotational speed doesn't always translate to higher performance, but having all drives the same won't hurt. After replacing the cable I'm now 5% into the complete parity rebuild (used the Tools -> New Config method) with no errors (CRC or otherwise). As I said above, the data and the drives themselves are fine - it was just a bad SATA cable. That's one of the disadvantages of the LSI cards - you can't just replace the cable for a single drive, since you need the SFF-8087 miniSAS to SATA breakouts (4 drives per cable). At least I have spare new cables on hand. Thanks again! Dale
  14. That's my suspicion too... the full parity rebuild after doing a Tools -> New Config took about 27 hrs but I've had my monthly non-correcting parity checks take up to 45 hrs. I'll do the full parity rebuild again as soon as I shutdown and replace the cabling. Thanks!
  15. While I realize I could have let each 4TB drive replacement rebuild from parity, it took less time to move data off the remaining 4TB drives than it would have to rebuild each one onto the new 10TB replacements. Plus I used the opportunity to use the unBalance plugin to gather certain folders so that all of their content is on one drive only (an OCD thing of mine). As for the preclear, I mentioned doing it only as a way to do an initial test of the drives before shucking them. Regardless, the drives (and the data on them) appear to be fine. I'm certain that the issues reported after the new config are cabling related so I'll go ahead and replace it and then run another parity check.... I assume just leaving the 'Write corrections to parity' option checked? Or am I better to do the Tools -> New Config route again to completely rebuild parity?