Everything posted by trurl

  1. Nov 3 07:48:26 Tower autofan: Highest disk temp is 51C, adjusting fan speed from: 130 (50% @ 1295rpm) to: FULL (100% @ 1294rpm)
     Nov 3 08:58:37 Tower autofan: Highest disk temp is 43C, adjusting fan speed from: FULL (100% @ 1293rpm) to: 205 (80% @ 1284rpm)
     Nov 3 09:03:42 Tower autofan: Highest disk temp is 45C, adjusting fan speed from: 205 (80% @ 1294rpm) to: FULL (100% @ 1295rpm)
     Nov 3 09:08:47 Tower autofan: Highest disk temp is 43C, adjusting fan speed from: FULL (100% @ 1280rpm) to: 205 (80% @ 1285rpm)
     Nov 3 09:13:52 Tower autofan: Highest disk temp is 46C, adjusting fan speed from: 205 (80% @ 1294rpm) to: FULL (100% @ 1288rpm)
     Nov 3 09:18:58 Tower autofan: Highest disk temp is 43C, adjusting fan speed from: FULL (100% @ 1289rpm) to: 205 (80% @ 1285rpm)
     Nov 3 09:24:03 Tower autofan: Highest disk temp is 45C, adjusting fan speed from: 205 (80% @ 1286rpm) to: FULL (100% @ 1284rpm)
     And lots more like that. Also, if you look at SMART for each of your disks in those diagnostics you can see the max lifetime temp in the reports.
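     For reference, smartmontools ships with Unraid, so you can pull the same temperature data at the console. A minimal sketch; /dev/sdb is a placeholder for whichever disk you want to check:

         # Current temperature is SMART attribute 194 in the attribute table
         smartctl -A /dev/sdb | grep -i temperature

         # Extended output adds lifetime min/max temperatures on drives that support SCT
         smartctl -x /dev/sdb | grep -i lifetime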
  2. That old version won't update to any current version from the GUI. You will have to follow the upgrade wiki and do it manually: https://wiki.unraid.net/Upgrading_to_UnRAID_v6
  3. Also see this in system/lspci:
     04:00.0 RAID bus controller [0104]: HighPoint Technologies, Inc. RocketRAID 640L 4 Port SATA-III Controller [1103:0641] (rev 01)
     I don't think that controller is recommended for recent versions of Unraid. See for example
  4. That last screenshot seems to imply that you have mounted as Unassigned a disk that was assigned as disk11. Did you do that on purpose? I guess it doesn't matter because it needs to be rebuilt anyway, but mounting an array disk outside the array makes it out-of-sync with parity.

     Good news: it looks like all assigned disks are mounted. Disk11 is emulated, of course, but still mounted, so your data looks like it should be OK. Disks 3 and 9 appear to have little if any data though. Don't know if that is as expected or not.

     Not sure what to make of that first screenshot. If all you are building is parity2, then that disk should have a lot of writes with few on any other disks, but those numbers in the write column don't seem to agree.

     Why are all your disks still on ReiserFS?

     All this is almost certainly hardware related, especially as it happened after mucking about installing a new disk for parity2. Bad connections, bad power, maybe an unseated controller, etc. Shut down and check all connections, power and SATA, both ends, including splitters. Check controller card seating, etc. You have a lot of disks; maybe your PSU isn't adequate or working well. What is the exact model of your power supply?
  5. docker.img is corrupt. Why have you allocated 50G for docker.img anyway? 20G is usually much more than enough; I am running 17 dockers and they use less than half of a 20G docker.img. Have you had problems filling docker.img? Making it larger won't fix anything, it will just make it take longer to fill.

     Why are your disks so hot? Maybe you affected cooling in some way when you added the parity disk.
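     If you want to see what is actually consuming space inside docker.img, stock Docker tooling can show it from the console; nothing here is Unraid-specific:

         # Space used by images, containers, local volumes, and build cache
         docker system df

         # Per-container sizes; a large writable layer usually means an app
         # is writing inside the container instead of to mapped storage
         docker ps -s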
  6. Why are you using such an old version of Unraid (6.0.1 is over 5 years old), yet only joined the forum today? It is very difficult to support versions we haven't seen in years, and there is a lot of information we can't get from diagnostics on that old version. What controller(s) are these disks on?
  7. Nov 6 12:21:31 Tower kernel: ata6.00: ATA-8: ST2000DL003-9VT166, 5YD56M8Y, CC32, max UDMA/133
     ...
     Nov 6 12:21:31 Tower kernel: ata2.00: ATA-9: ST5000DM000-1FK178, W4J0GWNW, CC47, max UDMA/133
     ...
     Nov 6 12:32:22 Tower kernel: ata6.00: configured for UDMA/25
     Nov 6 12:32:22 Tower kernel: ata6: EH complete
     Nov 6 12:32:54 Tower kernel: ata6: lost interrupt (Status 0x50)
     Nov 6 12:32:54 Tower kernel: ata6.00: limiting speed to PIO4
     Nov 6 12:32:54 Tower kernel: ata6.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
     Nov 6 12:32:54 Tower kernel: ata6.00: failed command: READ DMA EXT
     Nov 6 12:32:54 Tower kernel: ata6.00: cmd 25/00:00:68:1d:69/00:02:02:00:00/e0 tag 0 dma 262144 in
     Nov 6 12:32:54 Tower kernel: res 40/00:01:01:4f:c2/00:00:00:00:00/00 Emask 0x4 (timeout)
     Nov 6 12:32:54 Tower kernel: ata6.00: status: { DRDY }
     Nov 6 12:32:54 Tower kernel: ata6: soft resetting link
     Nov 6 12:32:54 Tower kernel: ata6.00: configured for PIO4
     Nov 6 12:32:54 Tower kernel: ata6: EH complete
     Nov 6 12:42:32 Tower kernel: ata2.00: exception Emask 0x10 SAct 0x0 SErr 0x400001 action 0x6 frozen
     Nov 6 12:42:32 Tower kernel: ata2.00: irq_stat 0x48000001, interface fatal error
     Nov 6 12:42:32 Tower kernel: ata2: SError: { RecovData Handshk }
     Nov 6 12:42:32 Tower kernel: ata2.00: failed command: WRITE DMA EXT
     Nov 6 12:42:32 Tower kernel: ata2.00: cmd 35/00:40:e0:04:00/00:05:1e:00:00/e0 tag 14 dma 688128 out
     Nov 6 12:42:32 Tower kernel: res 51/84:40:e0:05:00/00:04:1e:00:00/e0 Emask 0x10 (ATA bus error)
     Nov 6 12:42:32 Tower kernel: ata2.00: status: { DRDY ERR }
     Nov 6 12:42:32 Tower kernel: ata2.00: error: { ICRC ABRT }
     Nov 6 12:42:32 Tower kernel: ata2: hard resetting link
     Check connections.
  8. The syslog has timestamps that you can look at to find those specific lines.
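     For example, run against the syslog file from the diagnostics zip (the ata port numbers here come from the excerpt above; substitute your own):

         # All kernel messages for the two ports that errored
         grep -E 'ata(2|6)' syslog

         # Or narrow to a time window; note raw syslog pads single-digit days ('Nov  6')
         grep 'Nov  6 12:4' syslog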
  9. Post new diagnostics or at least syslog so we can see what is happening now.
  10. Yes, we can't tell if the disks will mount until you start the array.
  11. Looks like connection problems with this cache disk:
      Nov 6 07:29:24 phoenix kernel: ata3.00: ATA-10: SPCC Solid State Disk, P1601544000000009646, V2.7, max UDMA/133
      ...
      Nov 6 07:30:29 phoenix kernel: ata3.00: exception Emask 0x10 SAct 0x4000000 SErr 0x400001 action 0x6 frozen
      Nov 6 07:30:29 phoenix kernel: ata3.00: irq_stat 0x08000000, interface fatal error
      Nov 6 07:30:29 phoenix kernel: ata3: SError: { RecovData Handshk }
      Nov 6 07:30:29 phoenix kernel: ata3.00: failed command: WRITE FPDMA QUEUED
      Nov 6 07:30:29 phoenix kernel: ata3.00: cmd 61/08:d0:40:00:02/00:00:00:00:00/40 tag 26 ncq dma 4096 out
      Nov 6 07:30:29 phoenix kernel: res 40/00:d4:40:00:02/00:00:00:00:00/40 Emask 0x10 (ATA bus error)
      Nov 6 07:30:29 phoenix kernel: ata3.00: status: { DRDY }
      Nov 6 07:30:29 phoenix kernel: ata3: hard resetting link
      Not directly related, except for the amount of cache space you are wasting: why do you have a 100G docker.img? 20G is usually more than enough unless you have some app misconfigured so it is writing into docker.img instead of to mapped storage. Have you had problems filling docker.img? I have 17 dockers and they are using less than half of a 20G docker.img.
  12. Yes, the old filesystem still works but is not recommended going forward. Formatting to the new filesystem IS converting it, but of course format makes the disk empty, so as mentioned you will need room for the data elsewhere for each disk as you convert. The Unassigned Devices plugin makes it easier to work with disks outside the array.
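      The usual pattern is copy off, format, copy back. A rough sketch, assuming disk1 is the disk being converted and a spare disk is mounted by Unassigned Devices at /mnt/disks/scratch (both names are placeholders):

          # 1. Copy everything off the array disk, preserving permissions and attributes
          rsync -avX /mnt/disk1/ /mnt/disks/scratch/

          # 2. Stop the array, set disk1's filesystem to XFS in its settings,
          #    start the array, and format disk1 when prompted on the Main page

          # 3. Copy the data back to the freshly formatted disk
          rsync -avX /mnt/disks/scratch/ /mnt/disk1/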
  13. Current Unraid stable is 6.8.3, and I doubt 6.0 is available anywhere. In any case you should go to 6.8.3. The upgrade wiki: https://wiki.unraid.net/Upgrading_to_UnRAID_v6

      Note that eventually you will need to reformat your data drives to one of the new filesystems, so you will need some space somewhere for the data from each as you reformat, but we can get into those details later. Here is a fairly recent thread with good info:
  14. After doing this, post new diagnostics if you need us to take another look.
  15. Or you could keep the mobo/CPU and get compatible RAM. Many people run without ECC; I did for years, and only got some recently when I needed to rebuild.
  16. Parity is realtime. There is nothing for you to do to "update" parity because it gets updated whenever any data disk is changed. An unclean shutdown can result in some parity updates not completing, though, and if you had multiple unclean shutdowns, as it seems you did, then you could expect even more sync errors.

      You will have to correct them with a correcting parity check, then run another, non-correcting parity check to verify you have exactly zero sync errors. Until you get that result you haven't finished fixing things.
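      Both checks are normally started from the Main page (the "Write corrections to parity" checkbox decides which kind you get). If you prefer the console, my understanding is Unraid's mdcmd can do the same; treat the exact syntax below as an assumption and verify it against your version:

          # Correcting check (writes corrections to parity) -- syntax assumed
          mdcmd check CORRECT

          # Non-correcting check, to confirm exactly zero sync errors afterwards
          mdcmd check NOCORRECT

          # Progress and sync status also show here
          mdcmd status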
  17. Assuming you aren't concerned about maintaining at least single parity during all this: if you are rebuilding both parities anyway, you can just New Config and assign any disks to any slots, regardless of what is on them or whether they are cleared or formatted. Parities will be calculated from the bits of all data disks and so will be valid for whatever is on the disks.

      Any disk assigned to a parity slot will be completely overwritten with parity, and any disk assigned to a data slot will not be changed. Any that have a mountable Unraid filesystem on them will be mounted. If you want, you can format those, and format any that don't mount. Parity will be maintained if you format any data disks; in fact, parity will be updated as needed even during the parity rebuild, so those bits of it remain valid when you format or otherwise begin using the data disks.

      You already got your answer on preclear, but I thought I would add that preclear isn't necessary; many just use it to test new disks. There is only one scenario where Unraid requires a clear disk: adding a data disk to a new data slot in an array that already has valid parity. This keeps parity valid, since a clear disk is all zeros and so has no effect on existing parity. In that one scenario, Unraid will clear the disk itself if it hasn't been precleared.
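      For reference, "clear" just means every byte on the disk is zero. Clearing by hand is nothing more than the sketch below; it is destructive and shown only as illustration (/dev/sdX is a placeholder, and in practice Unraid or the preclear plugin does this for you):

          # DESTRUCTIVE: writes zeros over the entire disk -- triple-check the device name
          dd if=/dev/zero of=/dev/sdX bs=1M status=progress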
  18. Since this is the Unraid forum I assume they want to install it on Unraid.
  19. Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.
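      If the GUI is unreachable, I believe recent Unraid versions can also generate the same zip from the console; verify on your version:

          # Writes the same diagnostics zip to the logs folder on the flash drive
          diagnostics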
  20. If you are referring to total capacity, I don't think there is any warning based on that; why would there be? Each disk is independent. If an array data disk dropped out while being written, it would become disabled and emulated at that point, and nothing about its free space would change.
  21. It won't cause any data or parity issues, but obviously other disk access will affect parity check speed, and parity check will affect other disk access speed.
  22. Are you sure the switch isn't to blame? Can you try another?