
JorgeB

Moderators
  • Posts: 67,530
  • Joined
  • Last visited
  • Days Won: 707

Everything posted by JorgeB

  1. Please post the diagnostics, you can get them by typing "diagnostics" on the console.
  2. This is a different problem from the one above; try to format the device in the cache slot, and if it fails please post the diagnostics: Tools -> Diagnostics
  3. Please post the diagnostics: Tools -> Diagnostics
  4. Do you mean 170Mb/s or do you have 10Gb Ethernet?
  5. Diags are from after rebooting, so there's not much to see; if it happens again, post new diags before rebooting.
  6. You didn't post the persistent syslog as asked; that might give a clue. If there's nothing there, it's likely a hardware problem.
  7. You could have used this thread, but I already replied on the other one.
  8. Run it again without -n or nothing will be done; if it asks for -L, use it.
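The -n and -L flags in the post match xfs_repair, the tool Unraid's filesystem check runs for XFS disks; a minimal sketch, assuming that tool and a hypothetical device path (substitute the partition of the disk being repaired):

```shell
# Hypothetical device path -- substitute the actual disk partition.
DEV=/dev/md1

# -n is check-only (no modify), so a run with it changes nothing; rerun
# without -n to actually repair. Guarded so this is inert off the server.
if [ -e "$DEV" ] && command -v xfs_repair >/dev/null 2>&1; then
    xfs_repair "$DEV"
    # If it refuses to run and asks for -L, zero the log and retry:
    # xfs_repair -L "$DEV"
fi
```

Zeroing the log with -L discards any pending metadata transactions, which is why the tool only asks for it when the log can't be replayed.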
  9. No, only some SAS2116 based models are affected.
  10. Yep. That will likely have the docker and libvirt images; docker can be recreated, and libvirt doesn't matter if you don't have VMs. Then everything should be on the array, assuming mover has been running without issues.
  11. If there's nothing else you need on cache you can wipe the devices and re-format, then restore from backup.
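A sketch of wiping the pool members before re-formatting, assuming hypothetical device names (these commands are destructive, so double-check the paths against your system):

```shell
# Hypothetical pool members -- wiping destroys all data on them.
for DEV in /dev/sdd1 /dev/sdf1; do
    # wipefs -a removes all filesystem signatures so the device shows up
    # as unformatted and can be re-formatted from the GUI. Guarded so the
    # sketch is inert on machines where these devices don't exist.
    if [ -e "$DEV" ] && command -v wipefs >/dev/null 2>&1; then
        wipefs -a "$DEV"
    fi
done
```

After the wipe, re-assign the devices to the pool, start the array, format them, and then restore from backup.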
  12. There's a checkbox next to array start button to allow starting without cache assigned.
  13. Oh, and this is enough to cause a few sync errors (unless done in read-only mode), so you'll need to run a parity check.
  14. /mnt/rescue would be in RAM, just unassign the cache devices, start the array normally and use it to restore the data.
  15. You can try btrfs restore (option 2 here), but as mentioned, at least the metadata is only available on the failing disk, so data loss is likely. For the future, it's good practice to have a backup of any important data.
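A minimal btrfs restore sketch, with hypothetical source and destination paths (the destination must have enough free space and must not be on the failing pool):

```shell
# Hypothetical paths: SRC is a member of the failing pool, DEST is a
# location on the array with enough free space.
SRC=/dev/sdd1
DEST=/mnt/disk1/rescue

# Guarded so this is inert on machines without the device or btrfs-progs.
if [ -e "$SRC" ] && command -v btrfs >/dev/null 2>&1; then
    mkdir -p "$DEST"
    # -D is a dry run: lists what would be recovered without writing anything.
    btrfs restore -D -v "$SRC" "$DEST"
    # -i ignores errors and keeps going, salvaging whatever is still readable.
    btrfs restore -i -v "$SRC" "$DEST"
fi
```

Running the dry run first shows how much is recoverable before committing to the copy.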
  16. This missing device can be from some time earlier, and the pool has never been able to rebalance due to the errors on sdd; you were also having hardware errors on all devices, so after this is solved look here for some more info.

      Sep 30 08:29:39 unRAID kernel: BTRFS info (device sdd1): bdev (null) errs: wr 2337, rd 206, flush 49, corrupt 0, gen 0
      Sep 30 08:29:39 unRAID kernel: BTRFS info (device sdd1): bdev /dev/sdd1 errs: wr 1, rd 5717, flush 0, corrupt 0, gen 0
      Sep 30 08:29:39 unRAID kernel: BTRFS info (device sdd1): bdev /dev/sdf1 errs: wr 0, rd 9, flush 0, corrupt 0, gen 0
  17.              Data      Metadata   System
      Id Path      RAID1     RAID1      RAID1    Unallocated
      -- --------- --------- ---------- -------- -----------
       1 missing    62.00GiB    1.00GiB 32.00MiB   -63.03GiB
       2 /dev/sdd1 185.00GiB    1.00GiB 32.00MiB     3.46TiB
       3 /dev/sdf1 123.00GiB          -        -     3.52TiB
      -- --------- --------- ---------- -------- -----------
         Total     185.00GiB    1.00GiB 32.00MiB     6.91TiB
         Used      178.58GiB  332.02MiB  48.00KiB

      DevId #1 is missing, sdd is failing.
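A per-device table like the one quoted above resembles the tabular output of btrfs filesystem usage; a sketch, assuming a hypothetical pool mount point:

```shell
# Hypothetical pool mount point -- substitute your actual pool.
MNT=/mnt/cache

# -T prints the per-device tabular view (Id, Path, per-profile usage,
# Unallocated). Guarded so this is inert where the pool isn't mounted.
if [ -d "$MNT" ] && command -v btrfs >/dev/null 2>&1; then
    btrfs filesystem usage -T "$MNT"
fi
```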
  18. It means it's still crashing even in ro mode; try btrfs restore, also on that link.
  19. You have a 3-device cache pool with one device missing and another failing, so it's beyond its redundancy; you can try to manually copy everything you can, but there will likely be some data loss, especially since the metadata was on the missing device and on the failing device.
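The manual copy might look like this sketch: mount the pool degraded and read-only from a surviving member, then copy off whatever is readable (device, mount point, and destination paths are hypothetical):

```shell
# Hypothetical paths -- a surviving pool member and a destination on the array.
POOLDEV=/dev/sdf1
MNT=/mnt/rescue
DEST=/mnt/disk1/cache-copy

# Guarded so this is inert on machines without the device or btrfs-progs.
if [ -e "$POOLDEV" ] && command -v btrfs >/dev/null 2>&1; then
    mkdir -p "$MNT" "$DEST"
    # degraded lets btrfs mount with a device missing; ro avoids further damage.
    mount -o degraded,ro "$POOLDEV" "$MNT"
    # rsync reports read errors but keeps copying the files it can reach.
    rsync -av "$MNT"/ "$DEST"/
fi
```

Mounting read-only matters here: a degraded read-write mount would try to write to a pool that has already lost redundancy.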
  20. There was a recent case of a similar drive (SN750) not formatting correctly on a gen2 slot but formatting on a gen3 slot, doesn't make much sense but worth trying if it's the case.
  21. Some sync errors after an unclean shutdown are normal, even expected; run a correcting check. You also had some issues with your cache; the syslog is missing some time due to spam, and it appears you already dealt with it, but it's still good to take a look here.
  22. btrfs starts crashing right after cache mount:

      Sep 27 15:56:57 Skippy kernel: BTRFS info (device sdc1): enabling ssd optimizations
      Sep 27 15:56:57 Skippy kernel: BTRFS info (device sdc1): start tree-log replay
      Sep 27 15:56:57 Skippy kernel: ------------[ cut here ]------------
      Sep 27 15:56:57 Skippy kernel: kernel BUG at fs/btrfs/extent-tree.c:6862!

      Yep