danktankk

Members
  • Content Count: 99
  • Joined
  • Last visited
  • Community Reputation: 5 Neutral

About danktankk
  • Rank: Advanced Member

  1. Thank you for the reply. They weren't being removed or anything; this just happened while the drives were in the chassis. They would show up in both the array and Unassigned Devices at the same time as well. The one odd thing I can't shake is why it is only these 4 drives. I am going to try to rebuild them from parity one at a time to see if that helps. I have found that putting these drives in an external eSATA enclosure allows them to work without these strange errors.
  2. No luck there either. I have uploaded the requested diagnostic files, but no feedback on them yet.
  3. I guess the backplane could be the problem, but why don't any of the other drives have any issues at all? I have 8 14TB drives, 1 12TB drive, and 4 2TB drives running on this same backplane and haven't had a single error. I don't know if this is important or not, but this is the error from the drive when it fails:
        Dec 10 01:28:19 baconator kernel: sd 11:0:9:0: [sdk] Synchronizing SCSI cache
        Dec 10 01:28:30 baconator kernel: sd 11:0:9:0: [sdk] Synchronize Cache(10) failed: Result: hostbyte=0x01 driverbyte=0x00
     Here is the array: And here are the 6TB also somehow si...
  4. I have removed it. It is still there though. It must be from lsio nvidia unraid. baconator-diagnostics-20201210-1213.zip
  5. I am seeing this on the drives that are not "sticking" to the array:
        Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] 11721045168 512-byte logical blocks: (6.00 TB/5.46 TiB)
        Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] 4096-byte physical blocks
        Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Write Protect is off
        Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Mode Sense: 7f 00 10 08
        Dec 10 00:58:35 baconator kernel: sd 11:0:9:0: [sdk] Write cache: enabled, read cache: enabled, supports DPO and FUA
        Dec 10 00:58:35 baconator kernel: sdk: sdk1
        Dec 10 00:58:35 baconator k...
  6. I have had the UnRAID Nvidia build the entire time. That most definitely isn't the issue, and you can look that error up. It's from a plugin I just uninstalled called [PLUGIN] GPU Statistics. And perhaps Nvidia Unraid itself? It's annoying, but not the reason for drives dropping.
  7. Here is a new diagnostics file, along with some images of these same 4 hard drives being in 2 places at the same time in the UnRAID GUI (screenshots attached: array, unassigned devices). I have no idea why this is happening or why it is only these 4 drives. *Any* help would be very much appreciated! baconator-diagnostics-20201209-2336.zip
  8. I am currently running an extended self-test (a brief smartctl sketch follows this list) but have added the requested files. Thank you for the effort. Let me know if you need anything else. baconator-diagnostics-20201209-0153.zip
  9. I have a bit of an oddball situation. Today I noticed that all 4 of the 6TB drives in my array are giving read errors. They all have the same number of errors, and it doesn't matter where in the chassis I move them. If I reboot Unraid, the drives are fine for a time and then they start with these errors again... always these 4 drives and always the same number of errors for each of them. The hardware that attaches these drives to my HBA is a BPN-SAS-846EL1 backplane. Any ideas or comments would be appreciated. I am kind of astonished. lol EDIT: when the 4 drives get the...
  10. I didn't know there was a difference between -h and -H. That did the trick. Thank you!
  11. Is there a command that will display, from the CLI, the same array space that Unraid reports?
  12. Can anyone please explain how Unraid calculates the total HDD space in an array? When using df -h, it produces a nice even number, just not the same one that Unraid reports. In addition, I use a Grafana dashboard that has a total hard drive space panel, and it reports a total that agrees with Unraid's; it is pulled from /mnt/user0. I am just curious how the hard drive capacity is reported in Unraid when almost, if not every, other utility knocks off just over 7% of the total hard drive space (a quick unit-conversion sketch follows after this list).
  13. When I used a Quadro P2000, I was able to get the power readings from nvidia-smi, which would pass that information along to wherever I wanted to view it - in this case Grafana. When I tried to do the same with a Quadro P1000, I get N/A for this field. That seems odd, but it may be that the card doesn't support this field? I wasn't able to find a definitive answer, so I was wondering if anyone could confirm this. A friend mentioned it may be the driver itself? (A sample nvidia-smi query is sketched after this list.) Thanks for any help! I am currently using Unraid Nvidia 6.8.3 and the driver version is 440
  14. I never meant to physically remove it, just to remove it from the array. No. I paused it and it lost its progress. I rebooted on Sunday, yes, but the parity check was not running then that I know of. There was a parity check that started after an unclean shutdown. Some of the docker containers weren't behaving correctly and I was forced to reboot unclean. I have 2 14TB shucks coming in tomorrow, supposedly. One-day shipping is iffy at best. I'll just wait until then to rebuild from parity. It will take almost 2 days to rebuild...
  15. @trurl I know the spam to which you are referring. I don't know of a way to get rid of it, as it is related to Nvidia Unraid, but I forget how. Here is the parity check schedule and corrections setting. Thank you for your time on this! @jonathanm The reason I would remove the drive is because Unraid has effectively shut it down due to the I/O errors it was experiencing - my guess is from a faulty cable. The format, which was just explained to me, is unnecessary and I would not be doing that now. I hope this helps.
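
A minimal sketch of the extended self-test mentioned in item 8, assuming smartctl from smartmontools and /dev/sdk as a stand-in device name (substitute the actual drive):

    # Start a long (extended) SMART self-test; it runs inside the drive's
    # firmware and can take many hours on a large disk
    smartctl -t long /dev/sdk

    # Check progress and the self-test log later (results appear near the bottom)
    smartctl -a /dev/sdk

    # Some drives behind SAS HBAs/backplanes need an explicit device type, e.g.
    smartctl -a -d sat /dev/sdk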
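
On the array-size question in items 10-12, a likely explanation is binary versus decimal units: df -h reports powers of 1024 (GiB/TiB), while df -H and the Unraid total use powers of 1000 (GB/TB), which is where the roughly 7% (at GiB scale) to 9% (at TiB scale) gap comes from. A minimal sketch, assuming the array is reachable at /mnt/user0:

    # Binary units (GiB/TiB) -- the smaller numbers
    df -h /mnt/user0

    # Decimal units (GB/TB) -- matches the total Unraid shows
    df -H /mnt/user0

    # The arithmetic behind the gap:
    #   1 GiB = 1024^3 bytes ≈ 1.074 GB  (GiB figures look ~6.9% smaller)
    #   1 TiB = 1024^4 bytes ≈ 1.100 TB  (TiB figures look ~9.1% smaller)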
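
On the power-draw question in item 13, one way to check what the card actually exposes is to query the field directly; when the GPU or driver does not support the sensor, nvidia-smi returns [N/A] (or [Not Supported]) rather than a number. A sketch assuming a standard nvidia-smi install:

    # CSV query of the power fields (the values typically passed along to a dashboard)
    nvidia-smi --query-gpu=name,power.draw,power.limit --format=csv

    # Full power section of the human-readable report
    nvidia-smi -q -d POWER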