Everything posted by JonathanM

  1. Keep an eye on the pending and reallocated counts as the check progresses. An extended SMART test isn't a bad idea, but it can still pass even if the drive is steadily getting worse. What you are looking for is stability in the SMART attributes that indicate health: increasing pending and reallocated counts mean the drive is dying. If the progression stops, you can assume the current bad spot has been fully dealt with, and the drive may stay OK for a while. It would probably be smart to replace it with a much higher capacity, more efficient model. Perhaps you can copy the data from the other old drives to the new replacement and reduce your spindle count. Fewer drives = fewer failure points. How healthy are the rest of your drives? (There's a smartctl sketch after this list for watching those counts.)
  2. The most likely solution is to append the tag for the specific version you want instead of the default latest tag, until you can upgrade Nextcloud using the internal updater (see the tag example after this list). You will need to look at the support thread for the specific Nextcloud container you are using to determine the exact syntax of that tag. There are many different Nextcloud containers, each with its own support thread; please post in that thread instead of general support for issues with that container.
  3. A pending sector should be reallocated or removed from the pending list when new data is written to that spot. It's difficult to determine which files occupy which sectors, so the easiest way is to run a non-correcting parity check, which reads from all the disks. When the pending sector fails to read, it should (hopefully) be overwritten with good data calculated from the rest of the disks plus parity, causing the Unraid error count to increase and moving that sector from pending to reallocated. The risk is that if another drive fails before this disk is healthy, the pending sectors will likely fail to read, causing errors at those sectors on any disk being rebuilt, and the rebuilt disk will probably be corrupt as well. Poor power or other environmental conditions can also cause pending sectors, so the drive isn't always at fault when these show up, but if sectors keep showing up as pending or reallocated, chances are the drive is dying. Because drives can fail without warning, it's always prudent to replace drives you can't trust as soon as possible, in case one of the "good" drives suddenly dies unexpectedly. Dual parity reduces the stress a little since it can tolerate 2 drive failures, but it's not bulletproof. Unraid's ability to rebuild drives is NOT backup, it's high availability. Always keep current backups of any files you can't afford to lose.
  4. Only if you had the syslog server already set up to mirror to the USB flash drive.
  5. Yes, rapidly increasing sector error counts are bad, especially if another drive fails. If you can get the pending count back to zero with no increase in reallocated sectors, the drive may be OK, but you need to be ready to replace it.
  6. Should this old thread be locked?
  7. There should be a previous folder on the flash drive that contains the files that were replaced in the root. Either copy them back, or if you can't find a previous folder, download the 6.11.5 zip and get the root files from there (see the sketch after this list). Don't overwrite anything in the config folder.
  8. Probably time to think about releasing a new beta version under your name and splitting away from this old thread. I must confess I don't know what is needed to get this listed with CA, but I can move a new support thread you create elsewhere to this plugin support area. You can create a thread in the lounge or something, and when you are ready to have it moved here just report your own post asking for it to be moved.
  9. Seconded, that's the first suggestion that felt like it fit without being clumsy.
  10. Hopefully @iXNyNe will jump in and correct me if I'm wrong, but I believe you need to change the tag to an older version, then run the updater.phar command in the container console, repeating the upgrade action until you get "current" with that image. After that you can use later images to automatically update the Nextcloud application version (there's a rough example after this list). This is described in detail in the info article you linked.
  11. You need to whitelist the remote address you are connecting from. The allowed addresses need to be put into the LAN_NETWORK value of the container (example after this list).
  12. It's your baby, name it whatever you want. 😁
  13. Since unfolding is a verb, unfolders sounds like the thing doing the unfolding. Not the worst suggestion I've seen so far, but it doesn't jump out and grab me. UnCategories? I don't like that any better than Unfolders though. UnCategorizer?
  14. 20GB of RAM is a strange number; it would imply either a pair of 8's plus a pair of 2's, or a single 16 and a single 4. Are you sure all the RAM is actually working as intended? Which physical sticks are plugged into which slots?
  15. Removing a drive slot completely always means copying the data off the drive to be removed, then either rebuilding parity without the drive, or filling the drive being removed with zeroes so it doesn't affect parity when it's pulled. Rebuilding parity is safer; the steps to wipe a data drive with zeroes are inherently dangerous, because accidentally specifying the wrong drive to zero results in irreversible data loss on whichever drive is zeroed (see the sketch after this list for why). Are all your drives perfectly healthy?
  16. I'm not familiar with the exact commands required, but I'm sure a Truenas pool doesn't import automatically YET. It's something to do with the partition structure on Truenas using the second partition instead of the first. You should be able to import from the command line after each reboot (see the zpool sketch after this list), and automatic importing should be handled sometime in the nearish future.
  17. Sort of, but it sounds like you are thinking the parity disk is more important than it actually is. Rebuilding a disk requires all the other drives to be healthy, so if disk1 isn't healthy, the rebuild of both disks will have corrupt areas wherever disk1 can't be read correctly. This very likely results in unmountable file systems on both disks, which may or may not be recoverable depending on where the corruption is. Unraid will fail the rebuild of disk4 as soon as it encounters an unreadable sector on disk1. So, the only way forward with any chance of success that I can think of is to use ddrescue to clone disk1 to a healthy drive of exactly the same size (see the sketch after this list). That allows Unraid to read all the sectors, even if the data there isn't accurate, so at least the rebuild of disk4 can complete, albeit with data corruption in the areas that failed to read during the clone. It would be much better to simply recover the data from your backups.
  18. That screenshot shows "Replaceable: Anytime" so it doesn't appear to be a too soon issue. Have you tried installing and connecting the "Unraid Connect" plugin? I seem to remember something about key management being moved there.
  19. Are you planning on keeping up with this project? If so (please?) I think it would be best if you start a new thread, since the first 30 pages of this thread will be totally irrelevant to your code. On the plus side, if this plugin becomes essential and used by enough people, it's possible Limetech would roll it into the main project. I think a rename would be welcome as well, since this is no longer just containers, it's also VMs. Dunno what to call it though, maybe throw out some suggestions?
  20. Definitely agree, but it's hard to fix the issue when you can't consistently recreate it. Thousands of Unraid setups go many years without any USB boot stick errors.
  21. It will show an error count if there are any differences between what is read from the parity disk and what is calculated from the data drives. The messaging is currently the same whether those errors are simply noted or corrected. Correct, parity is kept up to date in real time as writes are done. As long as the drives' internal caches have finished their writes before power is lost, there should be no sync errors. If there are writes in progress, the data drives are higher priority, so it's entirely possible for a write to complete on the data drive while the parity write has yet to be fully committed, causing a sync error; a correcting parity check will complete that process. It's also possible for a power cut to corrupt a data drive if a write in progress gets cut off at just the wrong moment. In that case parity can't help, because it will have even older data. Most of the time a file system check can at least get back to a readable state, at the expense of a corrupted file or several.
  22. What location are you downloading to inside Deluge, and where is that location mapped on the server? Post the docker run command and a screenshot of the Deluge path settings if you are unclear (there's a mapping example after this list).
  23. Edit the container and set the options you want (an example follows after this list). https://github.com/ytdl-org/youtube-dl/blob/master/README.md#output-template
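
Sketches for the posts above (all device names, paths, pool names, and version numbers are placeholders, adjust to your own system).

For post 1, a minimal way to watch the two attributes that matter, assuming the drive shows up as /dev/sdb:

      # Full SMART report for the drive (replace sdb with your device)
      smartctl -a /dev/sdb

      # Just the two counts to keep an eye on:
      #   5   Reallocated_Sector_Ct
      #   197 Current_Pending_Sector
      smartctl -A /dev/sdb | grep -E 'Reallocated_Sector|Current_Pending'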
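
For post 2, pinning a tag instead of running latest looks like this in the container's Repository field. The image name and tag below are examples only; check your container's support thread for the tags it actually publishes:

      # default, follows whatever the maintainer last pushed
      lscr.io/linuxserver/nextcloud:latest

      # pinned to a specific application version until you're ready to move on
      lscr.io/linuxserver/nextcloud:27.1.11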
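
For post 7, restoring the stock root files by hand might look like this, assuming the flash drive is mounted at /boot and the zip name matches what you actually downloaded:

      cd /tmp
      unzip unRAIDServer-6.11.5-x86_64.zip -d unraid-6.11.5

      # copy only the root-level boot files, leave the config folder alone
      cp unraid-6.11.5/bz* /boot/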
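
For post 10, the manual update loop run from the container console is roughly this; the user and path differ between Nextcloud images, so verify against your image's documentation before running it:

      # repeat until it reports you are already on the latest version
      # (linuxserver image, from memory: user abc, web root under /config/www/nextcloud)
      sudo -u abc php /config/www/nextcloud/updater/updater.phar

      # official image typically uses: user www-data, path /var/www/html
      # sudo -u www-data php /var/www/html/updater/updater.phar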
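
For post 11, LAN_NETWORK takes one or more CIDR ranges, comma separated. The subnets below are examples; use the ones you actually connect from:

      # as a docker run parameter, or the matching variable in the container template
      -e LAN_NETWORK=192.168.1.0/24,10.10.0.0/16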
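
For post 15, the clearing step is normally done against the md device so parity stays in sync while the drive is zeroed. This is only a sketch to show why the procedure is dangerous; the device number is a placeholder and md naming varies between Unraid versions, so triple-check before running anything like it:

      # DANGER: writes zeroes over the entire contents of disk 3. Wrong number = data gone.
      dd bs=1M if=/dev/zero of=/dev/md3 status=progress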
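
For post 16, the manual import from the command line would be along these lines; the pool name is a placeholder:

      # list pools that are visible but not yet imported
      zpool import

      # import the pool, forcing since it was last used on the Truenas box
      zpool import -f tank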
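
For post 17, a ddrescue clone looks roughly like this. sdX is the failing disk1, sdY the healthy same-size replacement, and the map file lets you stop and resume. Getting source and destination backwards overwrites the only copy of your data, so check twice:

      # first pass, copy everything it can read
      ddrescue -f /dev/sdX /dev/sdY /boot/disk1_rescue.map

      # optional second pass, retry the bad areas a few more times
      ddrescue -f -r3 /dev/sdX /dev/sdY /boot/disk1_rescue.map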
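
For post 22, the thing to check is that the download path configured inside Deluge is a container path that is actually mapped to server storage. Purely as an example:

      # host side (where files really land)   container side (what Deluge sees)
      #   /mnt/user/downloads           <-->   /downloads
      docker run ... -v /mnt/user/downloads:/downloads ...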
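
For post 23, the output template from that README is set with -o; the path and URL here are placeholders:

      # save as <uploader>/<title>.<ext> under the mapped downloads folder
      youtube-dl -o '/downloads/%(uploader)s/%(title)s.%(ext)s' 'https://www.youtube.com/watch?v=...'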