
trurl

Moderators
  • Posts: 43,989
  • Joined
  • Last visited
  • Days Won: 137

Everything posted by trurl

  1. Nov 6 12:21:31 Tower kernel: ata6.00: ATA-8: ST2000DL003-9VT166, 5YD56M8Y, CC32, max UDMA/133
     ...
     Nov 6 12:21:31 Tower kernel: ata2.00: ATA-9: ST5000DM000-1FK178, W4J0GWNW, CC47, max UDMA/133
     ...
     Nov 6 12:32:22 Tower kernel: ata6.00: configured for UDMA/25
     Nov 6 12:32:22 Tower kernel: ata6: EH complete
     Nov 6 12:32:54 Tower kernel: ata6: lost interrupt (Status 0x50)
     Nov 6 12:32:54 Tower kernel: ata6.00: limiting speed to PIO4
     Nov 6 12:32:54 Tower kernel: ata6.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
     Nov 6 12:32:54 Tower kernel: ata6.00: failed command: READ DMA EXT
     Nov 6 12:32:54 Tower kernel: ata6.00: cmd 25/00:00:68:1d:69/00:02:02:00:00/e0 tag 0 dma 262144 in
     Nov 6 12:32:54 Tower kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
     Nov 6 12:32:54 Tower kernel: ata6.00: status: { DRDY }
     Nov 6 12:32:54 Tower kernel: ata6: soft resetting link
     Nov 6 12:32:54 Tower kernel: ata6.00: configured for PIO4
     Nov 6 12:32:54 Tower kernel: ata6: EH complete
     Nov 6 12:42:32 Tower kernel: ata2.00: exception Emask 0x10 SAct 0x0 SErr 0x400001 action 0x6 frozen
     Nov 6 12:42:32 Tower kernel: ata2.00: irq_stat 0x48000001, interface fatal error
     Nov 6 12:42:32 Tower kernel: ata2: SError: { RecovData Handshk }
     Nov 6 12:42:32 Tower kernel: ata2.00: failed command: WRITE DMA EXT
     Nov 6 12:42:32 Tower kernel: ata2.00: cmd 35/00:40:e0:04:00/00:05:1e:00:00/e0 tag 14 dma 688128 out
     Nov 6 12:42:32 Tower kernel: res 51/84:40:e0:05:00/00:04:1e:00:00/e0 Emask 0x10 (ATA bus error)
     Nov 6 12:42:32 Tower kernel: ata2.00: status: { DRDY ERR }
     Nov 6 12:42:32 Tower kernel: ata2.00: error: { ICRC ABRT }
     Nov 6 12:42:32 Tower kernel: ata2: hard resetting link
     Check connections.
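     If you want to pull these lines out of the full syslog yourself, something like this works (a minimal sketch; adjust the path if you are reading a saved copy rather than the live log):

        # show the ATA error handling for ports ata2 and ata6
        grep -E 'ata[26]' /var/log/syslog | less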
  2. The syslog has timestamps that you can look at to find those specific lines.
  3. Post new diagnostics or at least syslog so we can see what is happening now.
  4. Yes, we can't tell whether the disks will mount until you start the array.
  5. Looks like connection problems with this cache disk:
     Nov 6 07:29:24 phoenix kernel: ata3.00: ATA-10: SPCC Solid State Disk, P1601544000000009646, V2.7, max UDMA/133
     ...
     Nov 6 07:30:29 phoenix kernel: ata3.00: exception Emask 0x10 SAct 0x4000000 SErr 0x400001 action 0x6 frozen
     Nov 6 07:30:29 phoenix kernel: ata3.00: irq_stat 0x08000000, interface fatal error
     Nov 6 07:30:29 phoenix kernel: ata3: SError: { RecovData Handshk }
     Nov 6 07:30:29 phoenix kernel: ata3.00: failed command: WRITE FPDMA QUEUED
     Nov 6 07:30:29 phoenix kernel: ata3.00: cmd 61/08:d0:40:00:02/00:00:00:00:00/40 tag 26 ncq dma 4096 out
     Nov 6 07:30:29 phoenix kernel: res 40/00:d4:40:00:02/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
     Nov 6 07:30:29 phoenix kernel: ata3.00: status: { DRDY }
     Nov 6 07:30:29 phoenix kernel: ata3: hard resetting link
     Not directly related, except for the amount of cache space you are wasting: why do you have a 100G docker.img? 20G is usually more than enough unless you have some app misconfigured so it is writing into docker.img instead of to mapped storage. Have you had problems filling docker.img? I have 17 dockers and they are using less than half of a 20G docker.img.
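     If you want to see where the docker.img space is actually going, standard Docker commands will show it (a minimal sketch; these are stock Docker CLI commands, run from the Unraid console):

        # summary of image, container, and volume usage inside docker.img
        docker system df
        # per-container writable-layer sizes; a large number here usually means
        # an app is writing inside the image instead of to mapped storage
        docker ps -a --size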
  6. Yes, the old filesystem still works, but it is not recommended going forward. Formatting to the new filesystem IS converting it, but of course format makes the disk empty, so as mentioned you will need room for the data elsewhere for each disk as you convert it. The Unassigned Devices plugin makes it easier to work with disks outside the array for this.
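     For the copy-off step, something like this is typical (a minimal sketch; "disk1" and the /mnt/disks/spare mount point are hypothetical examples, with the spare disk mounted by Unassigned Devices):

        # copy everything off the disk being converted, preserving attributes
        rsync -avX /mnt/disk1/ /mnt/disks/spare/disk1-backup/
        # after formatting disk1 to the new filesystem, copy it back
        rsync -avX /mnt/disks/spare/disk1-backup/ /mnt/disk1/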
  7. Current Unraid stable is 6.8.3, and I doubt 6.0 is available anywhere. In any case you should go to 6.8.3. The upgrade wiki: https://wiki.unraid.net/Upgrading_to_UnRAID_v6 Note that eventually you will need to reformat your data drives to one of the new filesystems, so you will need some space somewhere for the data for each as you reformat, but we can get into those details later. Here is a fairly recent thread with good info:
  8. After doing this, post new diagnostics if you need us to take another look.
  9. Or you could keep the mobo/cpu and get compatible RAM. Many people run without ECC; I did for years and only got some recently when I needed to rebuild.
  10. Parity is realtime. There is nothing for you to do to "update" parity, because it gets updated whenever any data disk is changed. An unclean shutdown can result in some parity updates not completing, though, and if you had multiple unclean shutdowns, as it seems you did, then you could expect even more sync errors. You will have to correct them with a correcting parity check, then run another, non-correcting parity check to verify you have exactly zero sync errors. Until you get that result you haven't finished fixing things.
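      For reference, both checks can also be started from the command line (a sketch, assuming the mdcmd syntax current releases use; the checkbox on the Main page does the same thing):

         # correcting check: writes corrections to parity
         mdcmd check
         # non-correcting check, to verify exactly zero sync errors afterwards
         mdcmd check NOCORRECT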
  11. Assuming you aren't concerned about maintaining at least single parity during all this: if you are rebuilding both parities anyway, you can just New Config and assign any disks to any slots, regardless of what is on them or whether they are cleared or formatted. Parities will be calculated from the bits of all data disks, and so will be valid for whatever is on the disks.
      Any disk assigned to a parity slot will be completely overwritten with parity, and any disk assigned to a data slot will not be changed. Any that have a mountable Unraid filesystem on them will be mounted. If you want, you can format those, and format any that don't mount. Parity will be maintained if you format any data disks; in fact, parity will be updated as needed even during the parity rebuild, so those bits of it remain valid when you format or otherwise begin using the data disks.
      You already got your answer on preclear, but I thought I would add that preclear isn't necessary; many use it just to test new disks. There is only one scenario where Unraid requires a clear disk: adding a data disk to a new data slot in an array that already has valid parity. This is so parity remains valid, since a clear disk is all zeros and so has no effect on existing parity. In that one scenario, Unraid will clear the disk itself if it hasn't been precleared.
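      If you ever want to verify for yourself that a disk really is clear before adding it in that one scenario, you can compare it against zeros (a minimal sketch; replace sdX with the actual device, and note this reads the entire disk, so it takes a long time):

         # "cmp: EOF on /dev/sdX" with no differences reported means all zeros
         cmp /dev/sdX /dev/zero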
  12. Since this is the Unraid forum I assume they want to install it on Unraid.
  13. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
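      If the webUI isn't responding, the same ZIP can be captured from the console or SSH instead (assuming a release recent enough to include the command; it saves the ZIP to the logs folder on flash):

         diagnostics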
  14. If you are referring to total capacity, I don't think there is any warning based on that; why would there be? Each disk is independent. If an array data disk dropped out while being written, it would become disabled and emulated at that point, and nothing about its free space would change.
  15. It won't cause any data or parity issues, but obviously other disk access will affect parity check speed, and parity check will affect other disk access speed.
  16. Are you sure the switch isn't to blame? Can you try another?
  17. Don't understand why you think that might have an effect. Are you sure you haven't set a different value for specific disks?
  18. You have been a member of this forum since 2007, so your answer of "default" still doesn't tell us which filesystem you have. If your data disks were created before v6, the only choice back then was ReiserFS.
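      One quick way to check from the console what each array disk is formatted as (a minimal sketch using standard tools; the Main page shows the same per-disk info when the array is started):

         # the Type column will show reiserfs, xfs, or btrfs for each disk
         df -T /mnt/disk*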
  19. Get us diagnostics with the array started. Do you mean it didn't get any logs on the flash drive, or do you just mean that you didn't see anything you thought was important?
  20. Is this a correcting parity check? After it finishes and before rebooting, Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread. In fact, don't reboot at all if you can help it while we are trying to track this down. We need to be able to compare this parity check with the next one in syslog, and syslog resets when you reboot.
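      If you absolutely must reboot before then, at least save the current log to the flash drive first so it survives (a minimal sketch; the dated filename is just a suggestion):

         cp /var/log/syslog /boot/syslog-$(date +%Y%m%d).txt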
  21. Yes, not clear that it will help with the problem that started this thread though. Frankly your description of that problem seems impossible since parity check doesn't do anything to any files. Possibly a hardware problem as suggested by itimpi. Parity check does cause all disks to run at the same time, so maybe a power or controller issue is causing a problem. Also
  22. The Unraid OS, including all the usual linux folders such as /etc, is in RAM. The flash drive only contains the archive of the OS. It is unpacked fresh into RAM at each boot. No changes are applied to that archive. It only changes when you update Unraid to a new version. Any change you make to the OS folders must be reapplied at boot or it won't persist. The User Scripts plugin can help with this. All this is just provided as general information. Not sure what your specific problems are. Most people don't require these hacks.
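      Besides the User Scripts plugin, the traditional place to reapply such changes is the go file on the flash drive (a minimal sketch; the sed line is a purely hypothetical example of a tweak, not something you need):

         #!/bin/bash
         # /boot/config/go runs at every boot, after the OS is unpacked into RAM
         # hypothetical example of reapplying a tweak that otherwise lives only in RAM:
         # sed -i 's/^#example/example/' /etc/example.conf
         # start the Management Utility (this line is in the stock go file)
         /usr/local/sbin/emhttp &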
  23. You should start your own thread and post Diagnostics