Everything posted by JorgeB

  1. v6.4 includes a newer kernel, so it also likely includes some xfs changes; whether that's related is difficult to say. That's what we usually recommend, and it has been used successfully before, mostly to recover data from accidentally formatted xfs disks.
  2. Though rare, I've seen at least 3 or 4 cases where xfs_repair couldn't fix the filesystem. Yours looks very damaged: there's metadata corruption and the primary superblock is damaged as well. xfs_repair has very few options and is usually very simple to use, but when it fails there's really nothing a normal user can do. If you can't restore from backups, IMO your best option would be the xfs mailing list, and even if they can help you'll likely end up with a lot of corrupt/lost files.
  3. It's difficult to help without the diagnostics and the output of xfs_repair, but if -L doesn't work your best bet is to ask an xfs maintainer for help on the xfs mailing list.
  4. Oops, I was on the phone and confused the posts. Yes, yours should work; palmio's doesn't support trim, if it really is a SAS2008.
  5. What HBA are you using? There's another report below yours with a 9207, which supports trim, that also stopped working on v6.4.
  6. Change filesystem to reiser or btrfs, start the array, format, change back to xfs and format again.
  7. Also let me add that the 2TB disk was probably formatted with an earlier xfs version, and that's the likely reason it was emptier; xfs started using more space on newer kernels, IIRC a couple of years ago or so.
  8. Don't know, ask an xfs maintainer; it's likely some metadata reserved for filesystem housekeeping. I can only tell you that it's normal.
  9. It's normal for the xfs filesystem, about 1GB per TB.
  10. If you're willing, you could try redoing your flash drive to make sure the issue isn't related to your config or some customization, which IMO is highly likely. Back up the current flash drive, create a new install on it, and copy just super.dat (disk assignments) and your key file from the current flash, both in the config folder (see the sketch after this list); if the array starts normally you can then start reconfiguring the rest one thing at a time.
  11. You can have 24 disks (or more) without bottlenecks, you just need to use the right hardware; if/when you do this, just ask for some ideas.
  12. Yes, and I also forgot to say, since it's kind of obvious, that instead of removing the disk and cloning it on another system you can just rebuild it; in that case, save the old disk and run xfs_repair on the rebuilt one, and as itimpi mentioned you can use the GUI. Not really, unclean shutdowns are the number one reason; I'm assuming you checked SMART for that disk and all looks good, but it may also be a good idea to run an extended SMART test even if everything looks fine. I use btrfs on all my servers, mostly because of the checksums, and I believe it's stable, especially for single-disk usage like unRAID data disks; still, I won't say that btrfs is more stable than xfs, the consensus is the opposite, so maybe a fluke, but it's difficult to guess, especially without seeing long-term diagnostics. Are these ATA errors or cache-full errors? Maybe post an excerpt from the syslog; if they are ATA errors, they are most likely the reason for the problem.
  13. Type diagnostics on the terminal and upload the resulting zip.
  14. It's an old bug, unassign parity, start the array with both data disks, stop, re-assign parity and start again.
  15. Rebuilding can't fix filesystem corruption. You should store the new cloned disk and repair the old one using unRAID, and do it on the mdX device to keep parity synced (see the xfs_repair sketch after this list); if you try to add the cloned disk, unRAID will complain it's the wrong disk, and if it was checked on Ubuntu, parity will become invalid.
  16. Looks good, but after this there will be an automatic balance; wait for cache activity to stop before changing the profile (see the balance check after this list).
  17. You should try booting using a different flash drive with a clean v6.4 install, just to confirm if the issue is related to v6.4 or some config or customization you're using.
  18. It's not, but a backup before starting is always a good idea. https://lime-technology.com/forums/topic/46802-faq-for-unraid-v6/?do=findComment&comment=480421
  19. Not bad, I was expecting more after seeing that SMART report, good result.
  20. I don't see any attempt in the log to start the docker service; since the image is currently on an UD device, can you test after creating a new image on your cache device?
  21. Also, check that the docker image is on a device with available space, and it's best to delete and recreate it to avoid any issues.
  22. The docker image is always btrfs, but it can be stored on an xfs disk or cache device.
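
For the xfs_repair steps mentioned in posts 3 and 15 above, here is a minimal sketch of the commands, assuming the array is started in maintenance mode and the affected disk is disk1 (adjust the md device number for your slot):

      xfs_repair -n /dev/md1    # check only, reports problems without modifying anything
      xfs_repair /dev/md1       # actual repair, run on the md device so parity stays in sync
      xfs_repair -L /dev/md1    # last resort: zero the log if it cannot be replayed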
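For the flash rebuild in post 10, a rough sketch of the copy step, assuming the backup of the old flash is available at /mnt/oldflash and the new install is mounted at /boot (both paths are just examples):

      cp /mnt/oldflash/config/super.dat /boot/config/    # disk assignments
      cp /mnt/oldflash/config/*.key /boot/config/        # registration key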
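For post 16, one way to check from the terminal whether the automatic balance has finished before changing the profile; /mnt/cache is the usual cache mount point, adjust if yours differs:

      btrfs balance status /mnt/cache    # shows a running balance, or "No balance found" when done
      btrfs filesystem df /mnt/cache     # shows the current data/metadata profiles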