
trurl

Moderators
  • Posts: 43,999
  • Days Won: 137

Everything posted by trurl

  1. Nov 13 15:49:58 Tower kernel: ata4.00: ATA-9: ST5000DM000-1FK178, W4J0GWNW, CC47, max UDMA/133
     ...
     Nov 13 15:50:20 Tower kernel: md: import disk0: (sde) ST5000DM000-1FK178_W4J0GWNW size: 4883770532
     ...
     Nov 13 15:51:07 Tower kernel: ata4.00: exception Emask 0x10 SAct 0x0 SErr 0x400000 action 0x6 frozen
     Nov 13 15:51:07 Tower kernel: ata4.00: irq_stat 0x08000000, interface fatal error
     Nov 13 15:51:07 Tower kernel: ata4: SError: { Handshk }
     Nov 13 15:51:07 Tower kernel: ata4.00: failed command: WRITE DMA EXT
     Nov 13 15:51:07 Tower kernel: ata4.00: cmd 35/00:08:80:00:00/00:01:00:01:00/e0 tag 26 dma 135168 out
     Nov 13 15:51:07 Tower kernel: res 50/00:00:87:01:00/00:00:00:01:00/e0 Emask 0x10 (ATA bus error)
     Nov 13 15:51:07 Tower kernel: ata4.00: status: { DRDY }
     Nov 13 15:51:07 Tower kernel: ata4: hard resetting link

     No, this is a connection issue with ST5000DM000-1FK178_W4J0GWNW, parity (sde). (See the syslog grep sketch after this list.)
  2. Still not getting SMART for disk1, though it looks like it's mounted. Looks like docker.img is corrupt and spamming syslog. You shouldn't have the system share on the array anyway. Disable dockers in Settings, then reboot and post new diagnostics.
  3. Not the question you asked about, but you don't want to use a RAID controller with Unraid. Your situation isn't entirely clear. Can you access your server on the LAN?
  4. SMART for disk1 looked OK in those first diagnostics, but disk1 is not connected in those latest diagnostics. Shut down and check all connections, power and SATA, both ends, including splitters. Reboot and post new diagnostics.
  5. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  6. OP has 1x2TB parity + 5x2TB data to start; disk1 has issues but no data. Wants to replace parity, disk1, and another disk with new 14TB disks.
     New Config with two of the new 14TB disks assigned as parity and disk1, all other disks assigned as before, then let parity rebuild. After parity rebuilds, it will be in sync with all the disks in the array, including new disk1.
     Format new disk1, which writes a new empty filesystem to disk1. This write operation updates parity accordingly, just like any other write operation in the array (see the parity sketch after this list).
     Replace/rebuild another disk with the remaining new 14TB disk.
     For simplicity, I have left out any discussion of preclear. In this scenario, preclear would be strictly for testing purposes, since Unraid only requires a clear disk when ADDING a disk to a NEW slot in an array that already has valid parity. No disks are being ADDED, just REPLACED. I personally don't bother with preclear, but I have good (enough) backups, am careful (with connections), diligent (Notifications), and only put a single new untested disk into my array occasionally. Since there will be 3 new untested disks going in, you could make an argument for testing.
  7. And here's where we need to clarify terminology. I was under the impression you wanted to replace, not add.
  8. Mover ignores shares set to cache:no or cache:only, simple as that (see the share config sketch after this list).
  9. (Tried to post this last night but apparently there were forum problems) If the disk has no data, there's no need to include it in the rebuilds. Shut down, install new parity and new disk1, reboot, then go to Tools - New Config, keeping all other assignments, and assign new parity and new disk1. Start the array to begin the parity rebuild. After it finishes you can format new disk1. Then proceed with rebuilding each of the other disks with larger disks, one at a time.
  10. You might try some recovery software running on Windows. If that is all it had on it those should be easy enough to download again.
  11. Have you tried eliminating that from the network to see if you still have problems? Are you sure this isn't a problem with your ISP?
  12. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  13. Worked for me. None of your array disks are mounted. Have you formatted them yet?
  14. According to that screenshot and diagnostics (and previous diagnostics now that I look again), docker.img is now /dev/loop4; I don't know why /dev/loop2 is still hanging around. Maybe try rebooting. (See the loop device sketch after this list.)
  15. Doesn't look like you deleted the corrupt docker.img
  16. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  17. Not the way to fix that problem. You almost certainly have an app writing to a path that isn't mapped. You should always Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  18. The problem is often a path specified within the application not matching a mapping.
  19. Why have you allocated 100G for docker.img? 20G is usually much more than enough, but I see you have already used 39 of the 100G. I have 17 dockers and they are using less than half of a 20G docker.img.
     Making docker.img larger won't fix problems with filling and corrupting it. It will only make it take longer to fill. And your docker.img is indeed corrupt. You will have to recreate it (set it to use only 20G) and reinstall your dockers using the Previous Apps feature.
     But reinstalling your dockers won't be enough, since you obviously have one or more of your docker applications misconfigured. The usual reason for filling docker.img is an application writing to a path that isn't mapped to Unraid storage. Typical mistakes are specifying a path within the application using a different upper/lower case than in the mappings (Linux is case-sensitive, so /downloads is different from /Downloads), or specifying a relative path (not beginning with /). See the mapping sketch after this list.
     Probably the best idea after you get your cache fixed is to recreate docker.img at 20G, and instead of reinstalling your containers, see if we can figure out what you have done wrong one application at a time.
  20. You should always Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
  21. Yes. Not clear these are causing your problem, but your config/docker.cfg has 12 for the docker.img size while system/df.txt is showing 20; 20 is probably a more reasonable setting. More importantly, config/docker.cfg has the docker.img path as /mnt/user/docker.img. It isn't clear which disk, if any, this would be on, since it isn't technically within any user share. The "standard" setup is to put docker.img in a (cache-prefer) user share named "system" (see the docker.cfg sketch after this list).
     Also, it looks like you were getting parity sync errors on a non-correcting parity check after an unclean shutdown. You must run a correcting parity check to correct those. The only acceptable result is exactly zero sync errors, and until you get there you still have work to do. Those diagnostics are a few days old now, so I don't know whether you fixed those or not.
  22. If you started the array with dockers / VMs enabled, but with no cache installed, then you probably have had your docker / VM related shares (appdata, domains, system) recreated on the array (see the share-location check after this list).
  23. That sentence wasn't well written. Many native English speakers don't do that well writing it.😉
  24. I only see one comment in that thread about converting, and the wording doesn't suggest to me that plex is doing the conversion.
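
A few sketches expanding on the technical posts above.

On post 1 (the ATA bus errors): to pull those events out of a diagnostics zip yourself, something like this works, assuming the zip's usual layout with logs/syslog.txt inside (the zip name here is illustrative):

    # unpack the diagnostics and list every event on the ata4 link (the port parity is on)
    unzip tower-diagnostics-*.zip -d diag
    grep -E 'ata4(\.[0-9]+)?:' diag/*/logs/syslog.txt

Repeated "interface fatal error" / "hard resetting link" entries like the ones quoted point at the cable or power connection rather than the disk itself, which is why the advice is to check connections instead of replacing the drive.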
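
On posts 6 and 9: formatting the new disk1 after the parity rebuild is safe because single parity in Unraid is the XOR of every data disk, and every write (including writing an empty filesystem) updates parity as it goes. A toy bash sketch with made-up byte values:

    # one byte position across three hypothetical data disks
    d1=0x00; d2=0x5A; d3=0xC3
    p=$(( d1 ^ d2 ^ d3 ))                 # parity byte = XOR of all data disks
    printf 'parity:         0x%02X\n' "$p"

    # "formatting" disk1 writes new data there; parity is updated like any other write
    new_d1=0xFF
    p=$(( p ^ d1 ^ new_d1 ))              # fold out the old value, fold in the new
    printf 'updated parity: 0x%02X\n' "$p"

    # any single missing disk is still recoverable from parity plus the others
    printf 'rebuilt disk1:  0x%02X\n' $(( p ^ d2 ^ d3 ))

So after the New Config rebuild and the format, parity stays valid and the remaining old disks can still be replaced and rebuilt one at a time.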
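
On post 8: the cache setting Mover looks at is stored per share on the flash drive. Assuming the usual config/shares/<sharename>.cfg layout, it is the shareUseCache line, roughly:

    # /boot/config/shares/downloads.cfg (hypothetical share)
    shareUseCache="no"    # "no" and "only" are ignored by Mover;
                          # "yes" moves cache -> array, "prefer" moves array -> cache

Only "yes" and "prefer" shares give Mover anything to do.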
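
On post 14: to see which loop device is currently backing docker.img, losetup (a standard util-linux tool) shows the mapping:

    # list every active loop device and the file backing it
    losetup -a

A leftover /dev/loop2 that no longer corresponds to the current docker.img is usually just a stale device from before the image was recreated, which is why a reboot is the easy fix.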
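
On posts 17-19: a concrete, hypothetical example of the mapping mistake described there (the image name and paths are made up). A container only reaches Unraid storage through the paths you map; anything written elsewhere lands in the container's writable layer inside docker.img:

    # hypothetical container with one host mapping:  /mnt/user/downloads -> /downloads
    docker run -d --name example -v /mnt/user/downloads:/downloads some/image

    # inside the container:
    #   writing to /downloads   -> lands on /mnt/user/downloads (mapped, fine)
    #   writing to /Downloads   -> not mapped (Linux is case-sensitive), fills docker.img
    #   writing to downloads/   -> relative path, also unmapped, also fills docker.img

Making the path configured inside each application match one of its container mappings exactly, case and all, is what keeps docker.img from growing.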
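
On post 21: those settings live in config/docker.cfg on the flash. Assuming the usual key names, a more conventional configuration would look something like this (the exact path is illustrative):

    # /boot/config/docker.cfg (excerpt)
    DOCKER_ENABLED="yes"
    DOCKER_IMAGE_FILE="/mnt/user/system/docker/docker.img"   # inside the cache-prefer "system" share
    DOCKER_IMAGE_SIZE="20"                                    # in GiB; 20 is normally plenty

rather than an image file sitting at /mnt/user/docker.img outside any user share. These are normally changed from Settings - Docker with the service stopped, not by editing the file by hand.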
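
On post 22: a quick way to check whether appdata, domains, and system ended up on array disks (the /mnt/diskN mount points) rather than on the cache:

    # any hits under /mnt/disk* mean the shares were recreated on the array
    ls -ld /mnt/disk*/appdata /mnt/disk*/domains /mnt/disk*/system 2>/dev/null
    # compare with the cache pool, if one is assigned
    ls -ld /mnt/cache/appdata /mnt/cache/domains /mnt/cache/system 2>/dev/null

Anything found on the array disks will need to be moved back to the cache once it is installed and assigned again.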