Everything posted by johnnie.black

  1. Yes, but parity sync will re-start from the beginning.
  2. Unrelated, but there are issues with both disks connected to the Marvell controller; Marvell controllers have not been recommended for Unraid for a long time. The LSI HBA doesn't appear to be very happy:

        Sep 18 18:58:54 SERVERUS kernel: mpt2sas_cm0: fault_state(0x2622)!
        Sep 18 18:58:54 SERVERUS kernel: mpt2sas_cm0: sending diag reset !!
        Sep 18 18:58:55 SERVERUS kernel: mpt2sas_cm0: diag reset: SUCCESS

     Upgrade to the latest firmware since it's running one with known issues, and also make sure it's well seated and sufficiently cooled. Something is interfering with the mount points:

        Sep 20 09:54:32 SERVERUS emhttpd: error: get_filesystem_status, 6481: Operation not supported (95): getxattr: /mnt/user/vmicons

     Rebooting in safe mode should get the shares back; if it does, you'll need to find which plugin or setting is causing the problem.
  3. As long as there were no array device changes since last backup you can use that.
  4. It can't help if, for example, both copies are corrupt. Try btrfs restore (also in the FAQ); it might be able to copy the file, though some corruption is possible.
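     A minimal sketch of btrfs restore from the console, assuming the pool device is /dev/sdX1 (hypothetical, replace with the real device) and /mnt/recovery is an empty destination on a different disk:

        # btrfs restore only reads from the source device, so it can't make the corruption worse
        mkdir -p /mnt/recovery
        btrfs restore -v /dev/sdX1 /mnt/recovery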
  5. The filesystem is still crashing; there are some recovery options in the FAQ, start by mounting the pool read-only.
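     A minimal sketch of the read-only mount, assuming the pool device is /dev/sdX1 (hypothetical) and /temp as a temporary mount point:

        mkdir -p /temp
        # ro avoids any further writes; usebackuproot falls back to an older tree root if the current one is damaged
        mount -o ro,usebackuproot /dev/sdX1 /temp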
  6. The filesystem is corrupt; try rebooting after disabling the Docker and VM services.
  7. No, and according to Crucial that's normal, so don't expect one.
  8. Flash drive problems:

        Sep 19 19:57:38 unRAID-1 kernel: usb 1-4: USB disconnect, device number 2
        Sep 19 19:57:38 unRAID-1 kernel: sd 0:0:0:0: [sda] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00
        Sep 19 19:57:38 unRAID-1 kernel: sd 0:0:0:0: [sda] tag#0 CDB: opcode=0x2a 2a 00 00 05 bb db 00 00 01 00
        Sep 19 19:57:38 unRAID-1 kernel: print_req_error: I/O error, dev sda, sector 375771
        Sep 19 19:57:38 unRAID-1 kernel: Buffer I/O error on dev sda1, logical block 373723, lost async page write
        Sep 19 19:57:38 unRAID-1 emhttpd: error: put_config_idx, 609: No such file or directory (2): fopen: /boot/config/shares/DUMP.cfg
        Sep 19 19:57:38 unRAID-1 kernel: FAT-fs (sda1): Directory bread(block 29592) failed
        Sep 19 19:57:38 unRAID-1 kernel: FAT-fs (sda1): Directory bread(block 29593) failed
        Sep 19 19:57:38 unRAID-1 kernel: FAT-fs (sda1): Directory bread(block 29594) failed
        Sep 19 19:57:38 unRAID-1 kernel: FAT-fs (sda1): Directory bread(block 29595) failed
  9. The cache filesystem is fully allocated, resulting in ENOSPC; see here.
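     A minimal sketch of checking and reclaiming allocated space, assuming the cache is a btrfs pool mounted at /mnt/cache:

        # show how much space is allocated to chunks versus actually used
        btrfs filesystem usage /mnt/cache
        # rewrite data chunks that are at most 75% used so their free space returns to unallocated
        btrfs balance start -dusage=75 /mnt/cache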
  10. Device is not initializing correctly:

        Sep 20 00:49:42 MainCondo kernel: nvme nvme0: controller is down; will reset: CSTS=0xffffffff, PCI_STATUS=0x10
        Sep 20 00:49:42 MainCondo kernel: nvme 0000:01:00.0: enabling device (0000 -> 0002)
        Sep 20 00:49:42 MainCondo kernel: nvme nvme0: Removing after probe failure status: -19
        Sep 20 00:49:42 MainCondo kernel: nvme nvme0: failed to set APST feature (-19)

     Can't tell if it's the board or the NVMe device; try it in another board if possible.
  11. I have all my backup servers connected directly by 10GbE to the server they back up and have never had any issues; the backup script connects by IP to make sure traffic goes over the 10GbE NIC.
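     A minimal sketch of a backup command that targets the 10GbE link by IP; the share names and the 10.10.10.2 address are hypothetical:

        # connecting by IP rather than hostname forces the transfer over the 10GbE interface
        rsync -avh --delete /mnt/user/Media/ root@10.10.10.2:/mnt/user/Backups/Media/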
  12. You can have a raid0 cache pool, with up to 24 devices.
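     A minimal sketch of converting an existing multi-device btrfs pool to raid0 from the console, assuming it's mounted at /mnt/cache:

        # convert data to raid0; keeping metadata as raid1 is a common choice
        btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
        # confirm the new profiles once the balance finishes
        btrfs filesystem usage /mnt/cache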
  13. That's normal with encryption; the first time it doesn't know it needs the key. Speed seems OK; if there are any issues, grab and post new diags.
  14. Yes, use only the same disks, though data disk order is not important with single parity. Which SATA port each disk is on doesn't matter.
  15. It's a hardware problem: it can be the device or the board (assuming it's not on a PCIe adapter), and it can also be a firmware/compatibility problem, but once the device drops, Unraid can't do anything about it.
  16. Correct. If the SMART test is successful, assign all disks as before, check "parity is already valid" and start the array. If all disks mount correctly, run a correcting parity check; if not, post new diags.
  17. Assuming all disks are available and disk3 is indeed OK and passes the extended SMART test, after the new config you just need to assign all disks as before and start the array to begin the parity sync; you can even trust parity and then run a correcting check. If I misunderstood and there is one missing disk, then this isn't the way to go; in that case use the invalid slot command, but you'd need a new disk of the same size to replace the missing one.
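     A minimal sketch of running the extended SMART test from the console, assuming the disk is /dev/sdX (hypothetical); it can also be started from the disk's page in the GUI:

        # start the extended (long) self-test; keep the disk spun up until it finishes
        smartctl -t long /dev/sdX
        # when it's done, check the self-test log for the result
        smartctl -l selftest /dev/sdX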
  18. NVMe device dropped offline:

        Sep 18 23:51:01 Mozart kernel: nvme nvme0: I/O 136 QID 2 timeout, aborting
        Sep 18 23:51:01 Mozart kernel: nvme nvme0: I/O 137 QID 2 timeout, aborting
        Sep 18 23:51:01 Mozart kernel: nvme nvme0: I/O 138 QID 2 timeout, aborting
        Sep 18 23:51:01 Mozart kernel: nvme nvme0: I/O 139 QID 2 timeout, aborting
        Sep 18 23:51:31 Mozart kernel: nvme nvme0: I/O 136 QID 2 timeout, reset controller
        Sep 18 23:52:01 Mozart kernel: nvme nvme0: I/O 0 QID 0 timeout, reset controller
        Sep 18 23:53:33 Mozart kernel: nvme nvme0: Device not ready; aborting reset
        Sep 18 23:53:33 Mozart kernel: print_req_error: I/O error, dev nvme0n1, sector 14888064
        Sep 18 23:53:33 Mozart kernel: print_req_error: I/O error, dev nvme0n1, sector 14888320
        Sep 18 23:53:33 Mozart kernel: print_req_error: I/O error, dev nvme0n1, sector 14895648
        Sep 18 23:53:33 Mozart kernel: print_req_error: I/O error, dev nvme0n1, sector 14895904
        Sep 18 23:53:33 Mozart kernel: nvme nvme0: Abort status: 0x7
  19. It's not that uncommon for newer boards to have issues with older controllers. That's probably the best option, and it should work.
  20. The automatic check after an unclean shutdown is non-correcting. Aborting a check won't make the GUI show an unclean shutdown, so not sure what you mean; next time grab the diags, they might show something.
  21. I would recommend using the invalid slot command if you want to use a new disk; rebuilding on top of the older one might be a mistake. Instead, run an extended test on it, and if all is OK re-sync parity.
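     A rough sketch of the console step only, assuming Unraid 6.x; the slot number 3 is just an example, and the full procedure (it is normally combined with Tools -> New Config and the "parity is already valid" checkbox) should be double-checked against the forum instructions for your version:

        # mark slot 3 as the one to be rebuilt while keeping parity treated as valid
        mdcmd set invalidslot 3 29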