AgentXXL

Members
  • Content Count: 148
  • Joined
  • Last visited

Community Reputation: 5 Neutral

About AgentXXL
  • Rank: Advanced Member


  1. I've been reading through this thread about the random unpacking failures with nzbget. I too started experiencing them about 2 weeks ago, at the time using the linuxserver.io container. After reading through their support thread, it appears the issue affects both it and the binhex container. I've been using the binhex container for a few days to try and troubleshoot, but now the issue is occurring very frequently. Scheduling restarts of the container every 30 or 60 minutes hasn't helped, as one of the downloads is stuck repairing with a time estimate of more than 1 hour; every time the container restarts, the repair process kicks off again. I'm about to try the repair manually with the par commands on my Windows or Mac box, but in the meantime other downloads have failed unpacking and also look like they need repair. There were no health issues reported during the downloads, so I wonder why the par checks are failing when they shouldn't, and whether that's part of the cause of the stuck unpacks. I'm thinking of trying a SABnzbd container next, but thought I'd at least post my comments. I can post some logs if required.
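     For reference, this is roughly what I mean by running the par commands manually - a minimal par2cmdline sketch, assuming the .par2 files sit in the same folder as the rar set (the archive name here is just an example):

        # Verify the download against its PAR2 recovery data (read-only check)
        par2 verify MyDownload.par2

        # Attempt a repair if verification reports damaged or missing blocks
        par2 repair MyDownload.par2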
  2. Although I'm not one of the experts here, 46 hours for one full pass of the preclear (pre-read, zero and post-read) isn't unreasonable. I get similar numbers for my 10TB drives, and my 8TB preclears usually take about 40 hrs. I've seen similar errors in my syslog as well, but as the drives passed the preclear and no SMART errors were of concern, I've used them with no issues. I suspect you are fine, but I too would be interested to know why random sectors report errors that don't affect the preclear. Perhaps those are dealt with through the drive's sector reallocation list, but SMART has shown no reallocated sectors for any of the drives that saw those errors. In any case, I think you're safe to use the drive without being concerned. Hope it all goes well... now that my connection issues have been resolved for a month, I'm finding the system to be extremely stable. I haven't even seen a single UDMA CRC error; those are fully recoverable but do tend to indicate connection issues.
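     If you want to double-check, this is the kind of SMART query I run after a preclear - just a sketch; replace /dev/sdX with your actual device, and note that attribute names can vary slightly between drive vendors:

        # Full SMART report for the drive (run from the unRAID console)
        smartctl -a /dev/sdX

        # Just the attributes most relevant here
        smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector|UDMA_CRC_Error_Count'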
  3. Thanks @DanielCoffey and @Pauven for the quick responses. I guess I hadn't read up enough on what the tool does, but it was suggested to me since adjusting the tunables is what's usually recommended to try to improve write performance. As it's only doing reads, I do feel a lot safer using it. My 'server room' is air conditioned, so I might just set the air conditioner to run cooler than normal to combat the heat generated by the test. Looking forward to the results!
  4. I have yet to try this tool, but as more drives have been added to my system, write speeds have definitely taken a hit. I did change a couple of the tunables from the defaults, but that was when the system was running on fewer than 8 drives. I'm now at 16 drives, so I realize the parity calc will slightly slow down writes. That said, I'd like to optimize and try this tool, but one thing I haven't seen is any mention of the safety of using it with array drives that are all populated with data. As the tool must do writes, how much danger is there that it could cause disk corruption like @dalben had happen? I haven't been able to read the whole thread and don't want to, since the latest version is a far cry from what it was when first released. I'm sure it's very safe to use, but before I try it I just wanted to ask how often users have reported drive failure/corruption while running the tool. Also, should I reset the tunables to default before trying the tool? Thanks!
  5. One other bug/minor annoyance. I use the 'Synchronize Folders...' function under the Tools menu to migrate data from outside the array. It works great and is easy to resume if you have to reboot or there's some sort of restart of Krusader/Docker. However, the window that opens always has the 'Name' field for the left pane expanded far out to the right, off the edge of the window. I grab the horizontal scroll bar at the bottom, slide it all the way to the right, and then resize the left pane 'Name' field so that both it and the right pane headings are visible. I've tried resetting this and closing/exiting Krusader properly to save the settings (like it does for other Krusader windows), but the change never sticks. This does appear to be an issue with the Docker container, as Krusader on my Ubuntu system works fine. Similarly, any copy/move windows also open in a default location at the center of the screen, whereas I would like them to open off at the far right edge so my main Krusader window isn't obscured. Very minor issues, but it would be nice if they were fixable.
  6. I'm having issues with this method for extracting files from a rar set. The extract starts fine, but at some random point it stops, popping up an 'Information - Krusader' error dialog saying 'access denied' for a file within the rar set. Sometimes it works with no issues, but most of the time it fails on a rar set larger than 4GB, for example one containing an iso file. These are my Krusader log entries from an attempt made just a few minutes ago:

     2019-08-22 13:19:53,732 DEBG 'start' stderr output:
     kdeinit5: Got EXEC_NEW '/usr/lib/qt/plugins/kf5/kio/file.so' from launcher.
     kdeinit5: preparing to launch '/usr/lib/qt/plugins/kf5/kio/file.so'
     2019-08-22 13:19:53,740 DEBG 'start' stderr output:
     kdeinit5: Got EXEC_NEW '/usr/lib/qt/plugins/kio_krarc.so' from launcher.
     kdeinit5: preparing to launch '/usr/lib/qt/plugins/kio_krarc.so'
     2019-08-22 13:20:28,795 DEBG 'start' stderr output:
     kdeinit5: PID 225 terminated.

     This has happened with many different rar sets in the past, but I got around it by copying the rar set to another system, extracting there, and then copying the extracted files back to my unRAID. Note that this occurs whether I try to extract to the array, to the cache, or to an unassigned device (SSD or hard drive). Any suggestions?
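     As a possible workaround, I'm considering extracting from the container's console instead of going through the kio_krarc layer - just a sketch, assuming unrar is available inside the Krusader container (the container name and paths here are examples):

        # Open a shell inside the Krusader container
        docker exec -it binhex-krusader bash

        # Extract the rar set directly, bypassing the kio_krarc plugin;
        # 'x' keeps the folder structure and -o+ overwrites existing files
        unrar x -o+ /mnt/user/downloads/example.rar /mnt/user/media/extracted/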
  7. It's all good... your knowledge on other unRAID issues has helped me and many others countless times. You can't be expected to know about things you don't use or deal with yourself, at least until you get informed. I don't have a wife, but my pets often remind me that I'm ignoring them... not quite the same as ignorant, but similar! Ahh, so a Neil Young or Highlander fan are you? Of course that phrase has appeared in a few other pop culture references, including Kurt Cobain's suicide note and songs from many. Now that I've essentially been forced into retirement from my main career, I've joined the 'burn-out' culture in the form of my medical cannabis usage. Though not entirely true... I've found some of the sativa strains really do give me a burst of energy, focus and creativity! 🤣
  8. A zero-only pass of the plugin still takes about the same amount of time as the pre-read and post-read stages would, so it's not just writing a signature... it's actually writing zeroes to every sector of the drive before it writes the signature. I agree that if all it did was write a signature, which would take less than a minute, you'd never be sure the drive had actually been zeroed.
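     For anyone curious, the zero stage boils down to something like this - a rough sketch only, not the plugin's actual code, and obviously destructive, so the device name is a placeholder:

        # DESTRUCTIVE: writes zeroes over the entire drive - double-check the device!
        dd if=/dev/zero of=/dev/sdX bs=1M status=progress

        # Flush buffers so all writes actually reach the disk
        sync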
  9. If it does stop the addition of the new drive, then that's great. For now I'll continue to use the plugin for the basic stress test on new drives, and for a simple zero pass on older but still functional drives. I'm potentially going to build a 2nd unRAID server to keep my media server isolated from my personal/work/backup data. I know I could just make a separate share with different credentials/access, but a 2nd unRAID box that I keep at a friend's place would give me an offsite backup.
  10. So in the long run, I've removed the SAS/SATA backplanes from the Norco 4220. I'm still using it as my enclosure, although I'm watching eBay for a decent deal on a Supermicro chassis with at least 24 bays. As for my drives, I'm now direct-cabled to them in the Norco enclosure. I purchased some new miniSAS (SFF-8087 host) to 4 x SATA forward (HD targets) breakout cables. I've essentially lost the hot-swap capability, but for a home server like this that isn't an issue. The bigger concern was ensuring that the mess of cabling doesn't block airflow to and through the drive bays. The Norco isn't great for this anyhow, as there's next to no airspace between drives, and because of this the drive temps run hotter than I'd like. I replaced the standard fans that Norco supplied with 2 x 140mm fans, which both move a lot more air and are much quieter than the stock fans. It required some metal work to open up the mounting plate to accommodate the larger fans. If you're fine with direct cabling and can live with a 15-bay enclosure, the Rosewill RSV-L4500 4U case is an inexpensive option from the Rosewill official store on eBay. Note that I also ordered a certified genuine LSI 9201-16i, as the initial one was a Chinese knock-off. That said, the knock-off is now performing well with the drives direct-cabled, so in the end I didn't need to order the genuine one. I went the inexpensive direct-cable route as I'm on disability income and just don't have enough disposable income to buy a Supermicro enclosure yet. I'm saving up for one, as that seems to be one of the most recommended options for those needing space for a lot of drives. The other solution is to replace the 4TB and 6TB drives with more 10TB+ drives so I can reduce the drive count. Right now I'm sitting at 114TB with 6 x 10TB, 4 x 8TB, 1 x 6TB and 4 x 4TB. And I still have 3 bays left in the Norco for more array drives - it would be 5, but I've used the first 2 bays for my dual parity drives (which are IronWolf NAS drives, vs the rest of my drives which are shucked from USB enclosures).
  11. For new drives I still prefer to do at least one pass (all 3 stages) of the preclear script/plugin. For drives that I've had in operation for a while, I'll typically only do a zero pass, as long as the SMART report from the drive shows nothing concerning. Of course, since unRAID will do that itself, using the plugin isn't required; I just prefer being able to zero the drive before adding it to the array. It takes the same amount of time whether you do the zero pass with the plugin or by adding the drive to the array, but if by chance there's a failure during the zero, I'd rather know about it before the drive is added. One question about just adding a drive and letting unRAID do the zero: if there are errors or a drive failure while unRAID is zeroing it, does the drive still get added to the array? If so, that means obtaining a replacement drive right away. If you do the zero pass with the plugin, you have the option of delaying the addition of the drive until it's replaced/repaired. Most of us tend to buy drives only when they're on sale, so using the plugin seems safer to me, although I do have dual parity in use.
  12. I'm by no means an expert like some of the others helping you, but if the drives that you want to recover data from were all part of the unRAID parity-protected array, you could do as I mentioned a few posts back: get yourself some decent data recovery software that supports XFS-formatted drives. As mentioned above, I went with UFS Explorer Standard edition and it allowed me to recover data from a drive that was unmountable but otherwise functioning, i.e. no electronic/mechanical issues, just lost partition and directory tables. As for removing drive 6, you could use the 'Shrink Array' procedure: https://wiki.unraid.net/Shrink_array
  13. Sorry, not wanting to hijack the thread, but is this still an issue with the new 3rd gen Ryzens? Or the 2nd gen Threadrippers? I'm saving up some cash to replace the older i7-6700K based system that unRAID runs on. I've definitely been considering a 3rd gen Ryzen setup, but might also go with a 2nd gen Threadripper setup.
  14. I just went through something similar - failure of my unRAID array due to old/defective SATA backplanes on my Norco RPC-4220 storage enclosure. I ended up with one drive that was repaired by the Check Filesystems procedure, but the other didn't recover properly. I had made a disk image using dd/ddrescue before trying Check Filesystems in maintenance mode. Glad I did, as I was able to use data recovery software (UFS Explorer Standard) on the unmountable drive (image) and it let me recover everything I needed/wanted. I looked at assorted recovery platforms but chose UFS Explorer Standard as it runs natively on my Ubuntu main system. There are a few more details in this thread:
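     For anyone facing the same situation, this is roughly the order I followed - a sketch only, with placeholder device names and paths; the image destination needs at least as much free space as the source disk:

        # 1. Image the failing disk first (ddrescue keeps a map file so it can resume);
        #    -n skips the slow scraping phase on the first pass
        ddrescue -n /dev/sdX /mnt/disks/recovery/disk6.img /mnt/disks/recovery/disk6.map

        # 2. Optional extra pass to retry the bad areas a few times
        ddrescue -r3 /dev/sdX /mnt/disks/recovery/disk6.img /mnt/disks/recovery/disk6.map

        # 3. Only once the image exists, run a read-only filesystem check on the
        #    array device in maintenance mode (the drive number is a placeholder)
        xfs_repair -n /dev/mdX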
  15. So it appears all my issues have now been resolved. I was able to recover the data from the 4TB disk image I made with ddrescue. I went with UFS Explorer Standard for the recovery since the ddrescue image was unmountable. Both it and Raise Data Recovery are made by the same developer, SysDev. I chose UFS Explorer as it has a native Linux version, whereas the others are primarily Windows-based. Now the only thing left is to finish migrating the rest of my data from UD-attached and mounted drives into the protected array: 2 x 10TB and 1 x 4TB drives are being precleared, with 3 x 10TB worth of data left to import. In any case, thanks again to all those who helped me along the way!
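     For the migration itself I'll be using something along these lines - just a sketch with example paths; the Unassigned Devices mount point and share name will differ on your system:

        # Copy from an Unassigned Devices mount into a protected array share,
        # preserving attributes and showing progress; re-running it resumes/updates
        rsync -avh --progress /mnt/disks/old10tb/Media/ /mnt/user/Media/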