Everything posted by JorgeB

  1. Before that you can also try btrfs check using the backup superblock: btrfs check -s 1 /dev/sdn1 Also try lowmem mode, which might give a different result: btrfs check -s 1 --mode=lowmem /dev/sdn1
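     To check whether a given superblock copy is readable at all, btrfs-progs can also dump it (a sketch, assuming /dev/sdn1 is still the device in question):
     btrfs inspect-internal dump-super -s 1 /dev/sdn1
     A copy that prints sane values (magic, fsid) is a good candidate to pass to -s.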
  2. Can't help then, it's not usual for an unclean shutdown to damage a superblock, let alone all 3, but in any case you'll need to reformat the disk or ask for help from a btrfs maintainer on the mailing list: https://btrfs.wiki.kernel.org/index.php/Btrfs_mailing_list
  3. Try the other backup superblock: btrfs restore -v -u 1 /dev/sdn1 /mnt/user/Backup/cachebck
  4. Looks like the superblock is damaged, you can try restore using a backup superblock: btrfs restore -v -u 2 /dev/sdn1 /mnt/user/Backup/cachebck
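     As a sketch, restore also supports a dry run, so you can see what it would recover before writing anything (same device and destination assumed):
     btrfs restore -v -D -u 2 /dev/sdn1 /mnt/user/Backup/cachebck
     -D/--dry-run only lists the files it would copy; drop it to do the actual restore.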
  5. Nothing jumps out from the syslog, but I find it very strange that none of the recovery options work; try again after rebooting.
  6. Try rebooting, but grab and post your diagnostics first, before you reboot.
  7. That's usually a fatal error, see here to try and recover your data before reformatting cache: https://lime-technology.com/forums/topic/46802-faq-for-unraid-v6/?do=findComment&comment=543490
  8. Those can sometimes be fixed by moving the offending controller to a different PCIe slot (ideally changing from a CPU slot to a PCH slot or vice versa) or with a BIOS update.
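     If you're not sure which slots are CPU-connected and which are PCH-connected, a sketch from the console: lspci -tv shows which bridge each controller sits behind, which together with the board manual tells you whether its slot hangs off the CPU or the chipset.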
  9. You just need to connect all 4, and FYI device order isn't important in a pool, as long as all are present it's fine.
  10. Yes, obviously this limit will only be noticed when all disks are used simultaneously.
  11. Yes, the Intel expander can be powered by the PCIe slot or by a Molex connector.
  12. lol, I quoted from your quote and that's how it appeared, it's an IPS bug
  13. You can use a single link, and you'll have 2200MB/s total usable for the disks on that link.
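     For reference, the math behind that number: a SAS2 wide link is 4 lanes x 6Gb/s = 24Gb/s, which after 8b/10b encoding comes to 2400MB/s, or roughly 2200MB/s usable once protocol overhead is accounted for.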
  14. You can, but pre-v6.4.1 it would be more complicated than replacing the pool; the procedure for doing it is in the FAQ if you want to try.
  15. With v6.4.1-rc1 you can easily replace them one at a time using the GUI; for any prior release it's better to use the procedure below: https://lime-technology.com/forums/topic/46802-faq-for-unraid-v6/?do=findComment&comment=511923
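     For the curious, on a plain btrfs filesystem the manual operation is roughly a device replace, something like (device paths assumed; on unRAID just use the GUI or the FAQ procedure):
     btrfs replace start /dev/sdX1 /dev/sdY1 /mnt/cache
     btrfs replace status /mnt/cache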
  16. This is still happening, just had another post disappear from the unread list.
  17. That's good news and thanks for posting, it might help someone in the future with a similar issue. Sounds good to me, definitely agree on the format, you should be fine on the latest release, and if there's a problem with xfs_repair it should be fixed soon on an upcoming kernel.
  18. If you can still edit the wiki, please change the 9207-8i to working OOTB (out of the box).
  19. https://lime-technology.com/wiki/Troubleshooting#Re-enable_the_drive Exactly, so if it fails again there could really be an issue with the disk; SMART is a good indication, but a healthy SMART doesn't always equal a healthy disk.
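     If you want more than the SMART attributes, an extended self-test reads the whole disk surface (a sketch, substitute the right device):
     smartctl -t long /dev/sdX
     smartctl -a /dev/sdX
     The second command shows the self-test log once the test finishes.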
  20. I never did this on v6.4, looks like something changed, will have a look when I have the time.
  21. Without a pre-reboot syslog we can't see what happened, but the disk looks healthy, so you'll need to rebuild, either to a new disk or to the old one. Since you have dual parity it's not so risky to use the old disk; just make sure the contents of the emulated disk look correct, since whatever's there is what will be on the rebuilt disk. I'd also recommend swapping/replacing the cables/backplane slot, just to rule those out in case the same disk fails again.
  22. That was to the OP, already responded to you on your thread.
  23. Please post the complete diagnostics, ideally after the disk was disabled and before rebooting: Tools -> Diagnostics
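     If the GUI isn't reachable, the same zip can, as far as I know, also be generated by running the diagnostics command from the console, which saves it to the logs folder on the flash drive.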
  24. That's one of the reasons RAID controllers are not recommended. You can do a new config, assign all disks as data drives, and start the array; you should get one unmountable disk, and that will be your parity (parity has no filesystem, so it can't mount). If you get more than one, grab and post your diagnostics. Then do another new config, now with parity assigned, and check "parity is already valid" before starting the array. Finally run a parity check, since a few sync errors are expected.