
JorgeB

Moderators

Everything posted by JorgeB

  1. Cache device dropped offline:

     Nov 3 09:04:18 NAS kernel: sd 12:0:3:0: attempting task abort!scmd(0x000000005fd2b71d), outstanding for 33986 ms & timeout 30000 ms
     Nov 3 09:04:18 NAS kernel: sd 12:0:3:0: [sdn] tag#1415 CDB: opcode=0x28 28 00 03 95 eb e8 00 01 00 00
     Nov 3 09:04:18 NAS kernel: scsi target12:0:3: handle(0x000a), sas_address(0x4433221102000000), phy(2)
     Nov 3 09:04:18 NAS kernel: scsi target12:0:3: enclosure logical id(0x5782bcb03838b400), slot(1)
     Nov 3 09:04:18 NAS kernel: sd 12:0:3:0: No reference found at driver, assuming scmd(0x000000005fd2b71d) might have completed
     Nov 3 09:04:18 NAS kernel: sd 12:0:3:0: task abort: SUCCESS scmd(0x000000005fd2b71d)
     Nov 3 09:04:18 NAS kernel: XFS (sdn1): xfs_do_force_shutdown(0x2) called from line 1196 of file fs/xfs/xfs_log.c. Return address = 000000001e8ac67c
     Nov 3 09:04:18 NAS kernel: XFS (sdn1): Log I/O Error Detected. Shutting down filesystem
     Nov 3 09:04:18 NAS kernel: XFS (sdn1): Please unmount the filesystem and rectify the problem(s)
  2. Back up the current one, re-create it using the USB tool, then restore only the config folder.
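     The backup/restore part can be sketched as below; /boot is the usual Unraid flash mount point, and the backup destination path is just an example:

     ```shell
     # Back up only the config folder before re-creating the flash drive.
     # /boot is the standard Unraid flash mount point (assumption);
     # the destination path is hypothetical, adjust to taste.
     DEST="/mnt/user/backups/flash-config-$(date +%F)"
     mkdir -p "$DEST"
     cp -a /boot/config/. "$DEST/"

     # After re-creating the flash drive with the USB tool, copy it back:
     # cp -a "$DEST/." /boot/config/
     ```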
  3. I've never had any issues setting the MTU to 9000, and have done it on multiple servers; maybe it's a NIC/hardware-specific issue?
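     If it helps narrow it down, a jumbo-frame MTU can be tried from the command line before committing it in the network settings (eth0 is an assumed interface name; the change does not persist across reboots):

     ```shell
     # Temporarily set the MTU to 9000 on eth0 (assumed name; list with `ip link`).
     ip link set dev eth0 mtu 9000
     # Verify; the output should include "mtu 9000".
     ip link show dev eth0
     ```

     If the NIC or driver can't handle 9000, the `ip link set` call fails outright, which would point at a hardware limit rather than an Unraid problem.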
  4. Either: you can just try a different flash drive without a key to see if it boots, or try redoing the current one.
  5. Didn't even notice that the emulated disk4 was now mounting, so maybe disk1 can still be fixed, but that's kind of surprising given all the errors during the rebuild.
  6. Once the sync finishes, run it again without -n or nothing will be done. Did disk4 mount correctly?
  7. No, check filesystem. Tools -> New Config -> Retain current configuration: All -> Apply. Then start the array to begin a parity sync (or check "Parity is already valid" before array start and then run a correcting check).
  8. The disk was already disabled at boot, so we can't see what happened; it probably got disabled during the last shutdown.
  9. I don't think there would be much point in rebuilding disk4: since disk1 is corrupt, a rebuilt disk4 would also be corrupt. Your best bet is to do a new config to re-enable disk4; as for disk1, you can run a filesystem check, but most likely there won't be much of use there.
  10. That looks like a flash drive problem; make sure you're using a USB 2.0 port and/or try a different one.
  11. Nov 2 19:47:09 GLaDOS emhttpd: device /dev/sdf problem getting id
      Nov 2 19:47:09 GLaDOS emhttpd: device /dev/sdc problem getting id
      Nov 2 19:47:10 GLaDOS emhttpd: device /dev/sdg problem getting id
      Nov 2 19:47:10 GLaDOS emhttpd: device /dev/sdd problem getting id

      These are usually the result of having multipath to the enclosure enabled; Unraid doesn't support that, so use a single cable from the HBA to the enclosure. The btrfs errors are the result of the device dropping offline; try a different cable/slot or controller for the cache device.
  12. Filesystem corruption, as should be evident from the post above:
  13. Everything looks normal to me: all disks are mounting and none of them is empty, and it certainly didn't have anything to do with the scheduled check. Where are you missing data from?
  14. That doesn't make much sense; IIRC, if a parity check is scheduled to start during a rebuild/re-sync, the operation Unraid was already doing just starts over. Please post the diagnostics.
  15. Run it again without -n or nothing will be done; if it asks for -L, use it.
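      As a sketch of that sequence, assuming an XFS array disk checked with the array started in maintenance mode (the /dev/md1 device name here is an example for disk1; use the md device matching the slot):

      ```shell
      # Dry run first: -n only reports problems, nothing is changed.
      xfs_repair -n /dev/md1

      # Real repair: run again WITHOUT -n so the fixes are actually written.
      xfs_repair /dev/md1

      # If it refuses because of a dirty log and asks for -L, zero the log.
      # Note: -L can lose the most recent metadata updates.
      xfs_repair -L /dev/md1
      ```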
  16. No, the Dynamix UI is part of Unraid, not a plugin; you were given the same info in the plugin support thread.
  17. This is a plugin, not part of stock Unraid; any issues should be reported in the existing plugin support thread.
  18. CA Backup ran last night:

      Nov 2 01:00:01 BB8 CA Backup/Restore: #######################################
      Nov 2 01:00:01 BB8 CA Backup/Restore: Community Applications appData Backup
      Nov 2 01:00:01 BB8 CA Backup/Restore: Applications will be unavailable during
      Nov 2 01:00:01 BB8 CA Backup/Restore: this process. They will automatically
      Nov 2 01:00:01 BB8 CA Backup/Restore: be restarted upon completion.
      Nov 2 01:00:01 BB8 CA Backup/Restore: #######################################
  19. Because the other device is missing, Unraid is not trying to mount the pool degraded (assuming it's raid1). You can do that manually; if successful, back up the pool contents and re-create the pool after replacing the cables on the device that caused issues before.
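      Mounting the pool degraded manually can be sketched as below; /dev/sdX1 is a placeholder for the remaining pool member and /mnt/temp is an example mount point:

      ```shell
      # Mount a btrfs raid1 pool with one member missing.
      # Read-only is the safer choice while copying data off.
      mkdir -p /mnt/temp
      mount -o degraded,ro /dev/sdX1 /mnt/temp

      # If it mounts, back up the contents before re-creating the pool, e.g.:
      # cp -a /mnt/temp/. /mnt/user/pool_backup/
      ```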