Everything posted by trurl

  1. As already explained, the disk is disabled because it is out of sync. Typically a failed write will update parity anyway, so no data is lost. The whole point of parity is that things can continue to operate when a disk gets "kicked out".
  2. That is all wrong. Any idea how it got that way? Should be:

     drwxr-xr-x 12 root root 240 Oct 6 14:11 ./
     drwxr-xr-x 20 root root 420 Oct 23 06:14 ../
     drwxrwxrwx 1 nobody users 62 Oct 25 04:40 cache/
     drwxrwxrwx 3 nobody users 16 Oct 25 04:40 disk1/
     drwxrwxrwx 3 nobody users 27 Oct 25 04:40 disk2/
     drwxrwxrwx 6 nobody users 75 Oct 25 04:40 disk3/
     drwxrwxrwx 9 nobody users 119 Oct 25 04:40 disk4/
     drwxrwxrwt 3 nobody users 60 Oct 6 14:11 disks/
     drwxrwxrwx 1 nobody users 16 Oct 25 04:40 user/
     drwxrwxrwx 1 nobody users 16 Oct 25 04:40 user0/
  3. From the Unraid command line, what do you get with this?

       ls -lah /mnt
  4. According to your diagnostics, you do have user shares. All disks are mounted, as well as the user shares. Not seeing shares in the webUI is commonly a symptom of a browser problem, such as an adblocker or something else interfering; whitelist your server. Not sure about not seeing them on the network; that could be a separate issue with the client computer or the network. Post screenshots showing the problems.
  5. Are any of these disks on the add-in controller board? Try reseating it, and double-check all connections.
  6. Looks like you skipped reading the first post in this thread, in particular
  7. I think you've got it right. Where I often see this is when helping users with their Docker setup who are trying to go the other way, with cache-prefer data. Somehow they get the system share on both cache and array, probably by starting their dockers with cache missing. This can result in a duplicate docker.img, for example.
  8. SMART looks OK, but it doesn't look like an extended SMART test has ever been done on that disk. This seems likely. From syslog it looks like a connection problem to me. You should always double-check all connections, all disks, power and SATA, including splitters, any time you are mucking about inside. Here are some relevant excerpts from syslog:

     Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#37 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00 cmd_age=9s
     Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#37 CDB: opcode=0x88 88 00 00 00 00 00 38 26 11 50 00 00 00 20 00 00
     Oct 25 13:21:44 NAS kernel: blk_update_request: I/O error, dev sde, sector 942018896 op 0x0:(READ) flags 0x0 phys_seg 4 prio class 0
     Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018832
     Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#33 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00 cmd_age=9s
     Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018840
     Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#33 CDB: opcode=0x88 88 00 00 00 00 00 38 26 0f 50 00 00 02 00 00 00
     Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018848
     Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#91 UNKNOWN(0x2003) Result: hostbyte=0x01 driverbyte=0x00 cmd_age=9s
     Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018856
     Oct 25 13:21:44 NAS kernel: blk_update_request: I/O error, dev sde, sector 942018384 op 0x0:(READ) flags 0x0 phys_seg 64 prio class 0
     Oct 25 13:21:44 NAS kernel: sd 1:1:3:0: [sde] tag#91 CDB: opcode=0x88 88 00 00 00 00 00 38 26 0d 50 00 00 02 00 00 00
     Oct 25 13:21:44 NAS kernel: blk_update_request: I/O error, dev sde, sector 942017872 op 0x0:(READ) flags 0x0 phys_seg 64 prio class 0
     Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018320
     Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942017808
     Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018328
     Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942017816
     Oct 25 13:21:44 NAS kernel: blk_update_request: I/O error, dev sde, sector 942017840 op 0x0:(READ) flags 0x0 phys_seg 4 prio class 0
     Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942018336
     ...
     Oct 25 13:21:44 NAS kernel: md: disk3 read error, sector=942011224
     Oct 25 13:21:44 NAS rc.diskinfo[12723]: SIGHUP received, forcing refresh of disks info.
     Oct 25 13:21:44 NAS kernel: md: disk3 write error, sector=942022904
     Oct 25 13:21:44 NAS kernel: md: disk3 write error, sector=942022912
     Oct 25 13:21:44 NAS kernel: md: disk3 write error, sector=942022920
     Oct 25 13:21:44 NAS kernel: md: disk3 write error, sector=942022928

     It looks like read failures are what really started it. When Unraid can't read a disk, it tries to write the emulated data back to it, and if that write fails the disk gets disabled. It is mostly guesswork, but the disk may not be far out of sync if nothing was really writing to it and the emulated data is what was already on the disk. If you want to take a chance, you could unassign the disk and then mount it read-only as an Unassigned Device to check its contents (a command-line sketch follows below). If it looks OK you could do New Config / Trust Parity. If it doesn't look OK you could reassign and rebuild. In either case a parity check should be done, either to confirm parity wasn't out of sync or to confirm the rebuild went well. If you really want to take a chance you could even postpone that so you can use the server, but if anything is out of sync, rebuilding a real failure later could be compromised.
Do you have good (enough) backups?
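
     For reference, a minimal command-line sketch of that read-only check, assuming the disk still shows up as sde with its data on an XFS partition at /dev/sde1 (device name, filesystem, and mount point are all assumptions; the Unassigned Devices plugin does the same from the webUI):

       # Assumed device /dev/sde1 and mount point /mnt/check; adjust to your system
       mkdir -p /mnt/check
       mount -o ro,norecovery /dev/sde1 /mnt/check   # read-only; norecovery skips XFS log replay so nothing on the disk changes
       ls -lah /mnt/check                            # spot-check that the contents look sane
       umount /mnt/check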
  9. Since you were apparently having some hardware issue that caused it to be disabled, diagnostics might give a better idea of what needs to be fixed so the rebuild will be successful.
  10. The fact that it is out of sync is exactly why it is disabled. Unraid disables a disk when a write to it fails, and it won't use the disk again until it has been rebuilt. Meanwhile the disk is emulated from all the other disks, and this emulation includes writes to the emulated disk. The initial failed write, and any subsequent writes to the emulated disk, update parity just as if the physical disk had been written, so all of those writes can be recovered by rebuilding. As you can see, the disk is out of sync with parity from the moment it is disabled.
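
      To make that concrete, here is a worked single-parity example (illustrative only, in the usual XOR notation; $D_1, D_2, D_3$ are the data disks and $P$ is parity):

        $P = D_1 \oplus D_2 \oplus D_3$     (parity is the XOR of all data disks)
        $D_2 = P \oplus D_1 \oplus D_3$     (a disabled disk2 is emulated from all the others)
        $P' = P \oplus D_2 \oplus D_2'$     (writing new data $D_2'$ to the emulated disk2 updates parity only)

      Rebuilding then recomputes disk2 as $P' \oplus D_1 \oplus D_3 = D_2'$, that is, the data including every write made while the disk was disabled.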
  11. You should post Diagnostics so we can see if more can be determined about what you might need to fix before rebuilding.
  12. I don't use that particular radarr. Why do you have two different mappings for /mnt/user?
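
      For illustration, a hedged sketch of what I mean (the paths and the linuxserver/radarr image are hypothetical examples, not taken from your template). Mapping the same host location into the container under two different paths means the container sees one file under two names, which confuses the app and breaks hardlinking across the two bind mounts:

        # Hypothetical: /mnt/user mapped twice, under two container paths
        docker run -d --name=radarr \
          -v /mnt/user:/data \
          -v /mnt/user:/media \
          linuxserver/radarr

        # Cleaner: a single mapping, with the app pointed at subfolders of /data
        docker run -d --name=radarr \
          -v /mnt/user:/data \
          linuxserver/radarr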
  13. Almost certainly to cache, though I know a few people have all-SSD arrays.
  14. Delete the orphans. That may not fix anything, but they need to be deleted anyway. You might have to recreate docker.img and use Previous Apps on the Apps page to reinstall your containers.
  15. Post your docker run command for radarr, as explained at the very first link in the Docker FAQ:
  16. Yes, I know you said you are using turbo write, but did you read about that? Here is a wiki link that explains how parity updates are done in each mode, so you can see why parity updates affect writing speed. https://wiki.unraid.net/UnRAID_6/Storage_Management#Array_Write_Modes
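
      In short (the operation counts below are the standard ones for these two techniques, summarizing that page; $D$ is the sector being written and $P$ is parity):

        read/modify/write:            $P_{new} = P_{old} \oplus D_{old} \oplus D_{new}$
          (read the old data and old parity, then write new data and new parity: four operations, but only two disks need to spin)
        reconstruct write ("turbo"):  $P_{new} = D_{new} \oplus (\text{XOR of all other data disks})$
          (read every other data disk once, then write new data and new parity: faster sustained writes, but every disk must be spinning)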
  17. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  18. Has the disk10 rebuild completed yet? Since a rebuild has to read all the other disks, it might be better to wait until after that rebuild to try the repair.
  19. You can but it isn't necessary. In any case, we should leave that for later until you get your disk2 filesystem fixed.
  20. Not likely to be the cause of your problem, but your appdata and system shares have files on the array, and there are other warnings from FCP. You also seem to be having some problem with Unassigned Device sdu. Go to the Docker page and set the slider at the upper right to Advanced Mode. Do you have any orphan containers?
  21. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.