Everything posted by JorgeB

  1. It's normal for the xfs filesystem, about 1GB per TB.
  2. If you're willing, you could try redoing your flash drive to make sure the problem isn't related to your config or some customization, which IMO is highly likely. Back up the current flash drive, create a new install on it, and copy just super.dat (disk assignments) and your key file from the current flash, both in the config folder (see the sketch after this list); if the array starts normally you can then start reconfiguring the rest one thing at a time.
  3. You can have 24 disks (or more) without bottlenecks, you just need to use the right hardware; if/when you do this, just ask for some ideas.
  4. Yes, and I also forgot to say, since it's kind of obvious, that instead of removing the disk and cloning it on another system you can just rebuild it; in that case, save the old disk and run xfs_repair on the rebuilt one, and like itimpi mentioned you can use the GUI. Not really, unclean shutdowns are the number one reason; I'm assuming you checked SMART for that disk and all looks good, but it might also be a good idea to run an extended SMART test even if everything looks fine (see the SMART test sketch after this list). I use btrfs on all my servers, mostly because of the checksums, and I believe it's stable, especially for single-disk usage like unRAID data disks; still, I won't say that btrfs is more stable than xfs, the consensus is the opposite, so maybe a fluke, but it's difficult to guess, especially without seeing long-term diagnostics. Are these ATA errors or cache-full errors? Maybe post an excerpt from the syslog; if they are ATA errors, they are most likely the reason for the problem.
  5. Type diagnostics on the terminal and upload the resulting zip.
  6. It's an old bug: unassign parity, start the array with both data disks, stop, re-assign parity and start again.
  7. Rebuilding can't fix filesystem corruption. You should store the new cloned disk and repair the old one using unRAID, doing it on the mdX device to keep parity synced (see the xfs_repair sketch after this list); if you try to add the cloned disk, unRAID will complain it's the wrong disk, and if it was checked on Ubuntu, parity will become invalid.
  8. You should try booting using a different flash drive with a clean v6.4 install, just to confirm whether the issue is related to v6.4 or to some config or customization you're using.
  9. Not bad, I was expecting more problems after seeing that SMART report; good result.
  10. I don't see any attempt in the log to start the docker service; since the image is currently on a UD device, can you test after creating a new image on your cache device?
  11. Also, check that the docker image is on a device with available space; it's best to delete and recreate it to avoid any issues.
  12. The docker image is always btrfs, but it can be stored on an xfs disk or cache device.
  13. The docker image shows write errors; these could be hardware related or caused by not enough space on the device where the docker image is stored. It's also advisable to uninstall both the preclear and the S3 sleep plugins, as they have known issues with v6.4.
  14. See here: https://lime-technology.com/forums/topic/65494-unraid-os-version-640-stable-release-available/?do=findComment&comment=619388
  15. Is the docker service still stopped since rebooting? It appears to be working in the diagnostics posted.
  16. Please post your diagnostics: Tools -> Diagnostics
  17. Sometime during these last updates, not sure exactly which one, the load in watts stopped being displayed; the UPS is an APC Back-UPS Pro 900.
  18. If you don't mind sharing, post how many files were corrupt when you finish checking them, just out of curiosity, since that disk looks to be in really bad shape. Good luck!
  19. Definitely, ddrescue is a last resort attempt only, since some/many files will be corrupt when it skips the read errors.
  20. If no backups are available, most just copy what they can by mounting the disk normally, since any file that won't copy because of a read error would be corrupt if copied with ddrescue; others use dd with conv=noerror,sync, but ddrescue is optimized for reading a bad disk and can be a valid solution in some cases (see the sketch after this list), hence why I said it would be a good addition to the nerd tools.
  21. I'm still not clear on what you are doing and which disk is having errors; whatever it is, there's something very wrong, so you could post your diagnostics. They replace the failing disk with a new one and let it rebuild; even for large disks it should take less than a day, e.g., my server with 8TB disks takes around 15 hours to do a rebuild (see the quick calculation after this list).
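
A minimal sketch for the flash-redo suggestion in item 2, assuming the freshly recreated flash is mounted at /boot and the old flash was backed up to /mnt/user/flashbackup (both paths are examples only, not unRAID defaults):

      # restore just the disk assignments and the license key onto the new install
      cp /mnt/user/flashbackup/config/super.dat /boot/config/
      cp /mnt/user/flashbackup/config/*.key     /boot/config/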
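
For the extended SMART test mentioned in item 4, a minimal sketch assuming the disk in question is /dev/sdX (replace with the real device name):

      smartctl -t long /dev/sdX    # starts the extended self-test, it runs in the background
      smartctl -a /dev/sdX         # afterwards, check "Self-test execution status" and the self-test log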
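
For the filesystem repair in items 4 and 7, a minimal sketch assuming the affected drive is disk 1 and the array is started in maintenance mode, so the repair runs against the md device and parity stays in sync:

      xfs_repair -n /dev/md1    # check only, reports problems without writing anything
      xfs_repair /dev/md1       # actual repair; only add -L if it asks you to zero the log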
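
For item 20, a minimal sketch of the two approaches, assuming the failing disk is /dev/sdX and the destination is /dev/sdY (example names only, double-check them since the copy overwrites the destination):

      ddrescue -f /dev/sdX /dev/sdY /root/rescue.map         # the map file allows resuming and retrying bad areas
      dd if=/dev/sdX of=/dev/sdY bs=64K conv=noerror,sync    # dd alternative, pads unreadable blocks with zeros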
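
For item 21, a quick sanity check of the rebuild speed implied by the figures in that post (8TB in roughly 15 hours):

      echo $(( 8000000 / 54000 ))    # 8,000,000 MB over 54,000 seconds, about 148 MB/s average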