Everything posted by JorgeB

  1. Try clearing the browser cache and/or using a different browser; if that fails, boot in safe mode.
  2. Please post the diagnostics and the output of: ls -la /mnt/user
  3. Sep 23 11:23:35 BIGDADDY kernel: BTRFS: device fsid b22551d5-574f-45b3-a984-7648d094271c devid 1 transid 867196 /dev/sdb1 scanned by udevd (667)
     Sep 23 11:23:35 BIGDADDY kernel: BTRFS: device fsid b22551d5-574f-45b3-a984-7648d094271c devid 2 transid 862906 /dev/sdd1 scanned by udevd (663)
     The pool was already out of sync at boot; note the different transids. This suggests one of the devices dropped offline before, and looking at the stats they both did:
     Sep 23 11:24:30 BIGDADDY kernel: BTRFS info (device sdb1): bdev /dev/sdb1 errs: wr 479443, rd 6903, flush 6054, corrupt 193330, gen 810
     Sep 23 11:24:30 BIGDADDY kernel: BTRFS info (device sdb1): bdev /dev/sdd1 errs: wr 21307930, rd 58098292, flush 590499, corrupt 2129794, gen 5010
     Hopefully not at the same time, but the pool might still be a mess. Since one of the devices was wiped after removing it, you first need to recover that; with the array stopped, type:
     btrfs-select-super -s 1 /dev/sdb1
     Then unassign all pool devices, start the array, stop the array, re-assign both pool devices, start the array, and post new diags (the first sketch after this list shows how to confirm the superblock was restored).
  4. I don't see a reason for the disk mounting read-only; try asking in the UD plugin support thread.
  5. Looks like the HBA is losing connection with all the disks at the same time. I would suspect an issue with the enclosure; it could also be the HBA-to-enclosure cable. Try adding one disk at a time.
  6. The server wasn't rebooted since last time, so the cables were not replaced; did you at least check them? Though that's not easy to really do with the server on. Swap cables/slots between that disk and another one, then see if the problem follows the disk or not.
  7. There's some discussion about that here: https://forums.unraid.net/bug-reports/stable-releases/6100-vnc-no-longer-connects-r1902/
  8. Should be fixed once -rc6 is released.
  9. On a Windows computer, if you're not familiar with the command line, use the GUI: right-click on the disk, Tools, repair disk (there's a command-line sketch after this list).
  10. You need to remove this to be able to use them in Unraid:
      Formatted with type 2 protection
      So try re-formatting them without it (see the sketch after this list); more info below:
      https://forums.unraid.net/topic/93432-parity-disk-read-errors/?do=findComment&comment=864078
      https://forums.unraid.net/topic/110835-help-with-a-sas-drive/
  11. Now re-assign disk4, start array and post new diags.
  12. Use the UD plugin to connect a USB disk/device to copy what you need (see the copy sketch after this list).
  13. Clear the browser cookies and try again.
  14. The parity disk looks healthy; re-sync it, and if there are issues post new diags before rebooting.
  15. x32 is bogus, likely virtualization related; run the diskspeed docker to test total controller bandwidth.
  16. Now stop the array, re-assign disk3 and start the array to begin rebuilding. During the rebuild, monitor the log for any disk related errors (see the monitoring sketch after this list), similar to these:
      Sep 22 14:12:46 wowserver02 kernel: sd 18:0:4:0: attempting task abort!scmd(0x00000000b2e31524), outstanding for 15103 ms & timeout 15000 ms
      Sep 22 14:12:46 wowserver02 kernel: sd 18:0:4:0: [sdm] tag#7355 CDB: opcode=0x85 85 06 20 00 00 00 00 00 00 00 00 00 00 40 e5 00
      Sep 22 14:12:46 wowserver02 kernel: scsi target18:0:4: handle(0x000d), sas_address(0x4433221105000000), phy(5)
      Sep 22 14:12:46 wowserver02 kernel: scsi target18:0:4: enclosure logical id(0x500605b00b101dd0), slot(6)
      Sep 22 14:12:46 wowserver02 kernel: scsi target18:0:4: enclosure level(0x0000), connector name( )
      Sep 22 14:12:48 wowserver02 kernel: sd 18:0:4:0: [sdm] tag#3975 UNKNOWN(0x2003) Result: hostbyte=0x0b driverbyte=DRIVER_OK cmd_age=18s
      Sep 22 14:12:48 wowserver02 kernel: sd 18:0:4:0: [sdm] tag#3975 CDB: opcode=0x88 88 00 00 00 00 03 a3 81 2a 78 00 00 00 08 00 00
      Sep 22 14:12:48 wowserver02 kernel: I/O error, dev sdm, sector 15628053112 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 0
      Sep 22 14:12:48 wowserver02 kernel: sd 18:0:4:0: task abort: SUCCESS scmd(0x00000000b2e31524)
      Sep 22 14:12:49 wowserver02 kernel: sd 18:0:4:0: Power-on or device reset occurred
      If you see them there are still issues, probably power/connection related.
  17. I would try running with just one DIMM at a time; that would basically rule the RAM out if it keeps crashing with either one, and then the next suspect would be the board/CPU.
  18. Run xfs_repair again without -n, and if it asks for -L use it (see the sketch after this list).
  19. Nothing relevant that I can see; this usually suggests a hardware problem. One more thing you can try is to boot the server in safe mode with all docker containers/VMs disabled and let it run as a basic NAS for a few days. If it still crashes it's likely a hardware problem; if it doesn't, start turning on the other services one by one.
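
For item 3, a minimal sketch of how you might confirm the superblock recovery, assuming the pool devices are still /dev/sdb1 and /dev/sdd1 (adjust to your system); this only reads the superblocks, so it is safe to run with the array stopped:

    # Dump each device's superblock and note the generation (transid) fields;
    # a device whose superblock was wiped will not return a readable superblock
    # until the backup copy is restored.
    btrfs inspect-internal dump-super /dev/sdb1 | grep -i generation
    btrfs inspect-internal dump-super /dev/sdd1 | grep -i generation

    # Restore the backup superblock on the wiped device, as described in item 3:
    btrfs-select-super -s 1 /dev/sdb1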
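
For item 9, the command-line equivalent of the GUI repair on Windows is chkdsk; a minimal sketch, assuming the disk shows up as drive E: (a placeholder, use the actual drive letter) and is run from an elevated Command Prompt:

    rem E: is an example drive letter - replace it with the affected disk
    chkdsk E: /f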
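
For item 10, one common way to re-format a SAS drive without protection information is sg_format from the sg3_utils package (this is a general sketch, not necessarily the exact steps in the linked threads); /dev/sdX is a placeholder and the format destroys all data on the drive:

    # Low-level format with protection information disabled (fmtpinfo=0).
    # WARNING: this erases the whole drive and can take many hours to complete.
    sg_format --format --fmtpinfo=0 /dev/sdX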
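
For item 12, once the USB disk is mounted with the UD plugin, a minimal copy sketch; the share name and the mount point /mnt/disks/backup are placeholders, substitute whatever UD shows for your disk:

    # Copy a share to the UD-mounted USB disk, preserving attributes and showing progress;
    # adjust the source share and destination mount point to match your system.
    rsync -avh --progress /mnt/user/important_share/ /mnt/disks/backup/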
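
For item 16, a minimal sketch for watching the syslog during the rebuild; sdm is just the device from the example log above, substitute the disk being rebuilt:

    # Follow the syslog and show only lines mentioning the disk or typical error markers
    tail -f /var/log/syslog | grep -iE 'sdm|i/o error|task abort|reset'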
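
For item 18, a minimal sketch, assuming the affected array disk is disk 3 and the array is started in maintenance mode; the device path is an assumption, on newer Unraid releases it may be /dev/md3p1 instead of /dev/md3:

    # Repair pass (no -n, which would be check-only); -v for verbose output
    xfs_repair -v /dev/md3

    # Only if xfs_repair itself asks for it, zero the log and retry
    xfs_repair -vL /dev/md3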