teedge77

Members
  • Content Count: 60
  • Joined

  • Last visited

Community Reputation

0 Neutral

About teedge77

  • Rank: Advanced Member

Converted

  • Gender: Undisclosed


  1. Has anyone else had an issue with there being two instances of port 9022 in the template, which keeps the docker from starting? I originally fixed it by removing one, but then Fix Common Problems complained, so I just changed one to 9023. I just wanted to check whether this is truly how the original docker template ships or whether my template was somehow misconfigured by me. (A quick check is sketched at the end of this list.)
  2. Again, it's what I have now. "Likely the source of the problem" isn't actually all that helpful. I've had the same controllers since I started using Unraid in 2010. This problem just started. It may be the controllers, it may not. Do you have a suggestion for a replacement? I would love to replace them with something that works great and can handle more drives. Then I could move everything off of the Unraid box and into boxes of drives.
  3. So, I noticed the following and assume that is the problem; I don't exactly understand why it suddenly can't read the drive correctly. (A rough log filter for these messages is sketched at the end of this list.)

     Sep 27 17:35:19 ZION kernel: sas: sas_form_port: phy2 belongs to port6 already(1)!
     Sep 27 17:35:21 ZION kernel: drivers/scsi/mvsas/mv_sas.c 1435:mvs_I_T_nexus_reset for device[2]:rc= 0
     Sep 27 17:35:21 ZION kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 0 tries: 1
     Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] Read Capacity(16) failed: Result: hostbyte=0x04 driverbyte=0x00
     Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] Sense not available.
     Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] Read Capacity(10) failed: Result: hostbyte=0x04 driverbyte=0x00
     Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] Sense not available.
     Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] 0 512-byte logical blocks: (0 B/0 B)
     Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] 4096-byte physical blocks
     Sep 27 17:35:21 ZION kernel: sdv: detected capacity change from 8001563222016 to 0
     Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Read Capacity(16) failed: Result: hostbyte=0x04 driverbyte=0x00
     Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Sense not available.
     Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Read Capacity(10) failed: Result: hostbyte=0x04 driverbyte=0x00
     Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Sense not available.
     Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Write Protect is off
     Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Mode Sense: 00 00 00 00
     Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Read Capacity(16) failed: Result: hostbyte=0x04 driverbyte=0x00
     Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Sense not available.
     Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Read Capacity(10) failed: Result: hostbyte=0x04 driverbyte=0x00
     Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Sense not available.
     Sep 27 17:35:21 ZION kernel: program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO

     Is this a case of the crappy controller just "losing" drives?
  4. So, for the second time in a week, I had two drives suddenly become disabled. Last time, I made the mistake of rebooting and then getting diagnostics. This time, I knew better and grabbed the diagnostics first. I was able to get them going again last time by stopping the array, disabling them, starting/stopping the array, re-enabling them, and starting the array to rebuild. It took about 28 hours. Any advice on what to look for, and any insight into what you can see wrong, is greatly appreciated. Thanks for any help. zion-diagnostics-20190927-2310.zip
  5. Yeah, they are both rebuilding now.
  6. Oh, I'm sorry about that. I thought there might still be something in the diagnostics saying why they start up "disabled". I should have grabbed them first. As far as the SAS2LP cards go, I am hoping to build a whole new host system that doesn't use them and that is smaller. I would like to have a host machine and two boxes of disks. Right now, I have two boxes of disks and a host machine with 15 disks in it as well. Anyway, thanks for having a look. I will try getting it going again.
  7. I was just sitting here, moving some things around in the cache pool, when I noticed a drive go red...and then another. Currently, they are both still red, and I was wondering if I could get some help determining what happened and what my next steps should be. I am including the diagnostics, and it would be great if people could review them to see what happened, but it would also be immensely helpful (like the "throw you a donation" kind of helpful) if someone could give me an idea of what to look for. Should I just be grepping for the word "warning"? Is there something that should be more obvious in a disk-related section of the diagnostics or somewhere? (A rough filter is sketched at the end of this list.) My current thought on the next step would be to change the disks to "no device", start the array, stop the array, change it back to the right disks, and then start the array again...hopefully starting the data rebuild. Before all that, I figured I should probably know if there is an underlying issue that should be remedied first. Thanks for any help. zion-diagnostics-20190924-1805.zip
  8. Yeah, that was exactly it. I hadn't thought about it because I had it switched to advanced in the regular dockers tab. Once I hit edit, it went back to basic. Thanks so much.
  9. So, I recently moved all of my dockers (all of "appdata") off of my cache drive. I moved them to an SSD connected via USB using Unassigned Devices and saw significant improvements in responsiveness. I also now use the cache drive as it is actually intended. However, I also started getting errors in Fix Common Problems about "volumes being passed that are mounted by Unassigned Devices, but they are not mounted with the slave option." (What that slave option looks like on a plain docker run is sketched at the end of this list.) The ones that have the error are dockers where the option isn't there to change how they're mounted (e.g., log storage path, config storage path, etc.). So, I guess my questions are: 1. Should I add the SSD to the array and just keep all of the other shares off it? My concern is reading/writing and the impact on the dockers. I guess there wouldn't be writing, but could the reading for parity impact the dockers like the cache drive did? (I don't necessarily care about protection by the array for my dockers.) 2. Would it be better to just leave it in Unassigned Devices? (Ignoring the errors.) 3. Is there some other, more appropriate, way of fixing this? (Some sort of "best practice" I haven't seen.) Grateful for any help I can get with this. Thanks.
  10. Sorry, I also responded to your other post. I live in Spring. Feel free to PM me with whatever help you are looking for.
  11. I got to this a little late, but if you still need help, I live in Spring.
  12. It stays between 16 and 19 the entire time. Here are some diags without any CSRF errors too. zion-diagnostics-20171220-1053.zip
  13. Oh, OK. I am trying to be proactive, but not doing a very good job, I see. Ha. I saw them highlighted in red, and only on a few of the disks, so I thought they were errors. Well, let me know if you spot the problem.
  14. OK, so I see a few of the following errors, and I am trying to find the corresponding drives. Given that they are ata9/11/13/14, could it be a controller problem or a loose cable? (A quick way to map the ata numbers to drive letters is sketched at the end of this list.)

      Dec 20 10:21:20 ZION kernel: sas: ata14: end_device-1:5: cmd error handler
  15. OK, I checked netstat and saw my hackintosh was logged in. It's off now.
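
A quick check for #1, assuming the template lives in the default Unraid user-template folder; the file name below is a placeholder, not the real one.

    # Count how often 9022 appears in the saved template ("my-app.xml" is a placeholder):
    grep -c '9022' /boot/config/plugins/dockerMan/templates-user/my-app.xml
    # See whether anything on the host is already listening on 9022:
    ss -ltn | grep ':9022'

A count higher than you expect suggests the duplicate really is in the template as shipped rather than something added by hand.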
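
A rough log filter for #3, assuming the live log is at /var/log/syslog (where Unraid writes it).

    # Pull out the mvsas / Read Capacity failures:
    grep -E 'sas_form_port|mvsas|Read Capacity.*failed|detected capacity change' /var/log/syslog
    # Count hits per SCSI address and device name, to see whether the failures
    # cluster on drives behind one controller:
    grep -oE 'sd [0-9]+:[0-9]+:[0-9]+:[0-9]+: \[sd[a-z]+\]' /var/log/syslog | sort | uniq -c | sort -rn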
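
A rough filter for #7: rather than grepping only for "warning", one starting point is to unpack the diagnostics zip and search any syslog inside it for disk events. The layout inside the zip may differ, so treat the paths as a guess to adjust.

    unzip -o zion-diagnostics-20190924-1805.zip -d diag
    # Search any syslog in the extracted tree for events that tend to precede a disabled disk:
    grep -r --include='syslog*' -hiE 'error|disabled|offline|reset' diag | less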
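
For #9, this is roughly how the slave option that Fix Common Problems asks about looks on a plain docker run; the paths, image, and container name are placeholders rather than the actual setup, and in the Unraid template the same thing is normally chosen per path as the access mode in the advanced view.

    docker run -d --name example \
      -v /mnt/disks/ssd-appdata/example:/config:rw,slave \
      example/image
    # Long form of the same propagation setting:
    #   --mount type=bind,source=/mnt/disks/ssd-appdata/example,target=/config,bind-propagation=rslave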
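
For #14, a quick way to map the ata numbers (and the end_device names the log uses) back to drive letters is to read each block device's sysfs path; this is plain Linux sysfs, nothing Unraid-specific.

    for d in /sys/block/sd*; do
      link=$(readlink -f "$d")
      printf '%s -> %s\n' "$(basename "$d")" \
        "$(echo "$link" | grep -oE 'ata[0-9]+|end_device-[0-9:]+' | head -1)"
    done
    # /dev/disk/by-path/ gives a similar controller/port view:
    ls -l /dev/disk/by-path/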