Everything posted by JonathanM

  1. Each disk is a separate single-device zfs filesystem. Parity array disks are not pooled at the bit level; each one can have a different filesystem if desired, is independent of the others, and can be read separately if needed, as the sketch below illustrates.
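     As a hedged illustration of that independence, a single zfs-formatted array disk can be imported by itself on any Linux system with zfs installed (the device name, pool name, and mount point here are examples, not Unraid's actual naming):

       # List importable pools found on one specific partition
       zpool import -d /dev/sdX1
       # Import that disk's pool read-only under an alternate root for inspection
       zpool import -d /dev/sdX1 -o readonly=on -R /mnt/recovery poolname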
  2. Unraid, like any RAID, isn't backup; it's high availability, where a failed drive can be emulated and replaced while your data is still accessible. Backup implies the ability to recover a corrupted or deleted file, which requires a second copy somewhere besides the array. You need to keep a second copy, away from the array, of any data you can't afford to lose; one simple way is sketched below.
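     A minimal sketch of such a second copy using rsync (the share and destination paths are hypothetical; any destination off the array works):

       # Sync a share to a disk outside the array; run this on a schedule.
       # No --delete flag, so files removed from the source stay recoverable here.
       rsync -a /mnt/user/important/ /mnt/disks/backupdrive/important/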
  3. Multiple pools are a relatively recent addition that hasn't been fully integrated into mover actions. Hopefully this will all be taken care of in 6.13; at least that's the stated intent. There aren't any public previews of 6.13 yet, as 6.12 hasn't totally settled out, but it's looking hopeful.
  4. Looks fixed now. When I posted that, the text editor buttons were missing in the dark theme, and the light theme was completely blown out with text overlapping, icons full screen, nothing working.
  5. Still massively messed up here. Only marginally usable in dark mode, and light mode is completely screwed. @SpencerJ, any timeline for getting this fixed?
  6. They are in the docker image, which is not normally browsable. If you don't care about the files, since you said they can be deleted, just delete and recreate the docker image and they will be gone. You can reinstall all your containers quickly using the Previous Apps section. After you get that sorted, and before you start uploading more files into the image again, examine the container config path mappings. The host side is where the files will go on the array; the container side is where you point the application. So if the mapping shows host = /mnt/user/share/files and container = /home/nobody, then files synced to /home/nobody in the container will appear in the \\share\files folder. See the sketch below.
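     The same mapping expressed as a plain docker run, for illustration (the paths are the example values from above, and alpine is just a throwaway demo image):

       # Host path (left of the colon) is bound to the container path (right).
       # Anything written to /home/nobody inside the container lands in the share.
       docker run --rm -v /mnt/user/share/files:/home/nobody alpine \
         sh -c 'echo hello > /home/nobody/test.txt'
       # The file is now visible on the host / network share side
       ls /mnt/user/share/files/test.txt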
  7. The diskspeed container might be able to pinpoint which disk is causing the slowdown, diagnostics taken after an event may also be helpful.
  8. Yes. After the array has been started once and the config has been committed, you can later put a new drive into the disk1 slot and Unraid will clear it to keep parity valid. But if you plan to replace the drive, why bother zeroing it out? Just do a normal drive replacement.
  9. You can't do it yourself; an admin must do it. Reply back with the username you want, and hopefully @SpencerJ or one of the other admins will change it.
  10. Docker containers can only see the paths internal to the container unless you map a host path to a corresponding container path. Those paths are mapped in the container template. Sometimes host paths pointing to unassigned device locations need to be set to slave r/w instead of just r/w; see the sketch below.
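     A hedged sketch of that difference in plain docker run terms (the unassigned device path is hypothetical, and alpine is just a throwaway demo image):

       # Plain read/write bind mount
       docker run --rm -v /mnt/disks/usbdrive:/data alpine ls /data
       # Slave propagation: mounts appearing under the host path after the
       # container starts (e.g. a disk mounted later) propagate into the container
       docker run --rm -v /mnt/disks/usbdrive:/data:rw,slave alpine ls /data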
  11. Not likely. One of the features of Unraid is that the boot stick is easily readable and editable in virtually any OS, FAT32 is still the lowest common denominator for file systems. I'm not saying it will never happen, but until Microsoft Windows can natively read and write a ZFS mirror I don't think it's going to be considered.
  12. Every BTRFS RAID level besides single and RAID0 has redundancy, so if the pool were RAID1, RAID10, or RAID5 you could recover from a single drive failure. BTRFS RAID5 is not a recommended profile. If you're not sure which profile a pool is using, see the sketch below.
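     A quick way to check which profile a pool is actually using (the mount point is an example):

       # Prints the data and metadata profiles in use, e.g. "Data, RAID1: ..."
       btrfs filesystem df /mnt/cache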
  13. Just keep in mind that if any one of the 4 disks fails you will lose all the data on the whole pool, not just the disk that failed.
  14. What part of the instructions linked in Squid's post are you having a problem with?
  15. Keep an eye on the pending and reallocated counts as the check progresses. An extended SMART test isn't a bad idea, but it could still pass even if the drive is steadily getting worse. What you are looking for is stability in the SMART attributes indicating health; a command-line way to watch the two key counts is sketched below. Increasing pending and reallocated counts mean the drive is dying. If the progression stops, you can assume the current bad spot has been fully taken care of, and the drive may possibly stay ok for a while. It's probably smart to replace it with a much higher capacity, more efficient model. Perhaps you can copy the data from the other old drives to this new replacement and reduce your spindle count. Fewer drives = fewer failure points. How healthy are the rest of your drives?
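     A sketch of watching those two attributes directly (the device name is an example, and attribute names can vary slightly by vendor):

       # Attribute 5 = Reallocated_Sector_Ct, attribute 197 = Current_Pending_Sector
       smartctl -A /dev/sdX | grep -E 'Reallocated_Sector_Ct|Current_Pending_Sector'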
  16. The most probable solution is to append the tag for the specific version you want instead of the latest tag that is used by default, until you can upgrade Nextcloud using the internal updater; see the sketch below. You will need to look at the support thread for the specific Nextcloud container you are using to determine the exact syntax of that tag. There are many different Nextcloud containers, and each one has its own support thread; you need to post in that thread instead of general support for issues with that container.
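     As an illustration only (the image name and version tag shown are examples; check your container's support thread for the valid names):

       # Default: floats to whatever the maintainer last published
       docker pull nextcloud:latest
       # Pinned: stays on a specific release until you change the tag yourself
       docker pull nextcloud:27.1.4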
  17. A pending sector should be reallocated or removed from the pending list when new data is written to that spot. It's difficult to determine which files occupy which sectors, so the easiest approach is to run a non-correcting parity check, which reads from all the disks. When the pending sector fails to read, it should (hopefully) be overwritten with good data calculated from the rest of the disks plus parity, causing the Unraid error count to increase and moving that sector from pending to reallocated. The risk is that if another drive happens to fail before this disk is healthy, the pending sectors will likely fail to read, causing errors at those sectors on any other disk being rebuilt, which will probably be corrupt itself. Poor power or other environmental conditions can also cause pending sectors, so the drive isn't always at fault when these show up, but if more sectors keep showing up as pending or reallocated, chances are the drive is dying. Because drives can fail without warning, it's always prudent to replace drives you can't trust as soon as possible, in case one of the "good" drives suddenly dies unexpectedly. Having dual parity reduces the stress a little, since it can tolerate two drive failures, but it's not bulletproof. Unraid's ability to rebuild drives is NOT backup, it's high availability. Always keep current backups of any files you can't afford to lose.
  18. Only if you had the syslog server already set up to mirror to the USB flash drive.
  19. Yes, rapidly increasing sector error counts are bad, especially if another drive fails. If you can get the pending count back to zero with no increase in reallocated sectors, the drive may be ok, but you need to be ready to replace it.
  20. Should this old thread be locked?
  21. There should be a previous folder on the flash drive that contains the files that were replaced in the root. Either copy them back (sketched below), or if you can't find a previous folder, download the 6.11.5 zip and get the root files from there. Don't overwrite anything in the config folder.
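     A hedged sketch of the copy, assuming the standard flash mount point and that the previous folder exists:

       # Restore the prior release's boot files; this touches only the bz* files,
       # so nothing in the config folder is overwritten
       cp /boot/previous/bz* /boot/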
  22. Probably time to think about releasing a new beta version under your name and splitting away from this old thread. I must confess I don't know what is needed to get this listed with CA, but I can move a new support thread you create elsewhere to this plugin support area. You can create a thread in the lounge or something, and when you are ready to have it moved here just report your own post asking for it to be moved.