teedge77

Everything posted by teedge77

  1. I think I messed that one up. When I look at the other containers, they have the port set correctly. I think I'll probably just give up. Maybe I'll try to figure it out again someday, when I get tired of it again. Thanks for the help.
  2. This is how they are formatted: http://[IP]:[PORT]. Is that correct?
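     For anyone comparing, this is roughly how I double-checked what a container is actually mapped to from the console (just a sketch; the container name is a placeholder, not one of mine):

         # print the port mappings Docker applied to one container
         docker port ContainerName
         # or list every container with its mappings at once
         docker ps --format '{{.Names}}: {{.Ports}}'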
  3. Nothing that I can see in the diagnostics. This also happens for every container, not just that one. I've included my diagnostics, since I'm definitely no expert. zion-diagnostics-20220609-1601.zip
  4. OK, I've checked here: https://wiki.unraid.net/Manual/Docker_Management and here: https://forums.unraid.net/topic/57181-docker-faq/ and have not found where to configure it. Could you elaborate a little?
  5. I used to be able to click a docker icon, get a dropdown, and from that dropdown I could click the web GUI link to open the docker's web site. That doesn't appear anymore. Is there a place to enable that, or is this an isolated issue for me? I've attached an image of what I now see.
  6. Has anyone had an issue with there being two instances of port 9022 in the template, which keeps the docker from starting? I originally fixed it by removing one, but then Fix Common Problems complained, so I just changed one to 9023. Just wanted to check whether the duplicate is really in the original docker template or whether I somehow misconfigured mine.
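     For reference, this is roughly how I went looking for the duplicate from the console (a sketch; I'm assuming the usual Unraid location for user templates, and the filename is whatever your container is called):

         # show every line of the template that mentions the port
         grep -n '9022' /boot/config/plugins/dockerMan/templates-user/my-ContainerName.xml
         # and check whether something on the host is already listening on it
         netstat -tlnp | grep 9022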
  7. Again, it's what I have now. "Likely the source of the problem" isn't actually all that helpful. I've had the same controllers since I started using Unraid in 2010, and this problem just started. It may be the controllers; it may not. Do you have a suggestion for a replacement? I would love to replace them with something that works great and can handle more drives. Then I could move everything off of the Unraid box and into boxes of drives.
  8. So, I noticed the following and assume that it's the problem; I don't exactly understand why it suddenly can't read the drives right.

         Sep 27 17:35:19 ZION kernel: sas: sas_form_port: phy2 belongs to port6 already(1)!
         Sep 27 17:35:21 ZION kernel: drivers/scsi/mvsas/mv_sas.c 1435:mvs_I_T_nexus_reset for device[2]:rc= 0
         Sep 27 17:35:21 ZION kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 0 tries: 1
         Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] Read Capacity(16) failed: Result: hostbyte=0x04 driverbyte=0x00
         Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] Sense not available.
         Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] Read Capacity(10) failed: Result: hostbyte=0x04 driverbyte=0x00
         Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] Sense not available.
         Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] 0 512-byte logical blocks: (0 B/0 B)
         Sep 27 17:35:21 ZION kernel: sd 12:0:6:0: [sdv] 4096-byte physical blocks
         Sep 27 17:35:21 ZION kernel: sdv: detected capacity change from 8001563222016 to 0
         Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Read Capacity(16) failed: Result: hostbyte=0x04 driverbyte=0x00
         Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Sense not available.
         Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Read Capacity(10) failed: Result: hostbyte=0x04 driverbyte=0x00
         Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Sense not available.
         Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Write Protect is off
         Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Mode Sense: 00 00 00 00
         Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Read Capacity(16) failed: Result: hostbyte=0x04 driverbyte=0x00
         Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Sense not available.
         Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Read Capacity(10) failed: Result: hostbyte=0x04 driverbyte=0x00
         Sep 27 17:35:21 ZION kernel: sd 12:0:5:0: [sdu] Sense not available.
         Sep 27 17:35:21 ZION kernel: program smartctl is using a deprecated SCSI ioctl, please convert it to SG_IO

     Is this a case of the crappy controller just "losing" drives?
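     For anyone digging through the same kind of diagnostics, this is roughly how I pulled those lines back out afterwards (just a sketch; I'm assuming the syslog sits under logs/ inside the zip, which is where my diagnostics put it):

         # unpack the diagnostics and filter for the SAS/capacity errors above
         unzip zion-diagnostics-20190927-2310.zip -d diags
         grep -E 'sas_form_port|mvsas|Read Capacity|capacity change' diags/*/logs/syslog*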
  9. So, for the second time in a week, I had two drives suddenly become disabled. Last time, I made the mistake of rebooting and then getting diagnostics. This time, I knew better and have the diagnostics appropriately ready. I was able to get them going again last time by stopping the array, disabling them, starting/stopping the array, re-enabling them, and starting the array to rebuild. It took like 28 hours. Any advice on what to look for and any insight into what you can see wrong is greatly appreciated. Thanks for any help. zion-diagnostics-20190927-2310.zip
  10. Yeah, they are both rebuilding now.
  11. Oh, I'm sorry about that. I thought there might still be something in the diagnostics saying why they start up "disabled". I should have grabbed them first. As far as the SAS2LP goes, I am hoping to build a whole new host system that doesn't use those and that is smaller. I would like to have a host machine and two boxes of disks. Right now, I have two boxes of disks and a host machine with 15 disks in it as well. Anyway, thanks for having a look. I will try getting it going again.
  12. I was just sitting here, moving some things around in the cache pool, when I noticed a drive go red...and then another. Currently, they are both still red and I was wondering if I could get some help determining what happened and what my next steps should be. I am including the diagnostics, and it would be great if people could review them to see what happened, but it would also be immensely helpful (like the "throw you a donation" kind of helpful) if someone could give me an idea on what to look for. Should I just be grepping for the word "warning"? Is there something that should be more obvious in a disk-related section of the diagnostics or somewhere? My current thought on the next step would be to change the disks to "no device", start the array, stop the array, change it back to the right disks, and then start the array again...hopefully starting the data rebuild. Before all that, I figured I should probably know if there is an underlying issue that should be remedied first. Thanks for any help. zion-diagnostics-20190924-1805.zip
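     For what it's worth, the naive version of that grep would be something like this (just a sketch; the keyword list is my own guess at what matters):

         # unpack the diagnostics and do a case-insensitive pass for the usual disk-trouble words
         unzip zion-diagnostics-20190924-1805.zip -d diags
         grep -iE 'warning|error|disabled|offline|reset|timeout' diags/*/logs/syslog*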
  13. Yeah, that was exactly it. I hadn't thought about it because I had it switched to advanced in the regular dockers tab. Once I hit edit, it went back to basic. Thanks so much.
  14. So, I recently moved all of my dockers (all of "appdata") off of my cache drive. I moved them to an SSD connected via USB using "unassigned devices" and saw significant improvements in responsiveness. I also now use the cache drive as it is actually intended. However, I also started getting errors in Fix Common Problems about "volumes being passed that are mounted by Unassigned Devices, but they are not mounted with the slave option." The ones that have the error are dockers where the option isn't there to change how they're mounted (e.g., log storage path, config storage path, etc.). So, I guess my questions are:
        1. Should I add the SSD to the array and just keep all of the other shares off it? My concern is reading/writing and the impact on the dockers. I guess there wouldn't be writing, but could the reading for parity impact the dockers like the cache drive did? (I don't necessarily care about protection by the array for my dockers.)
        2. Would it be better to just leave it in unassigned devices? (Ignoring the errors.)
        3. Is there some other, more appropriate, way of fixing this? (Some sort of "best practice" I haven't seen.)
      Grateful for any help I can get with this. Thanks.
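     From what I can tell so far, the "slave option" the warning refers to is Docker's mount-propagation flag. Where a path does expose the setting, the equivalent docker run mount looks something like the following (a sketch only; the paths, name, and image are made up for illustration):

         # hypothetical example: pass an Unassigned Devices path with slave propagation,
         # so remounts on the host side propagate into the container
         docker run -d --name=example \
           -v /mnt/disks/usb-ssd/appdata/example:/config:rw,slave \
           example/image

     In the Unraid template editor the same thing shows up as the path's access mode (e.g. "RW/Slave") when the advanced view is on, from what I've seen.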
  15. Sorry, I also responded to your other post. I live in Spring. Feel free to PM me with whatever help you are looking for.
  16. I got to this a little late, but if you still need help, I live in Spring.
  17. It stays between 16 and 19 MB/s the entire time. Here are some diags without any CSRF errors too. zion-diagnostics-20171220-1053.zip
  18. Oh, OK. I am trying to be proactive, but not doing a very good job, I see. Ha. I saw them highlighted in red and only a few of the disks, so I thought they were errors. Well, let me know if you spot the problem.
  19. OK, so I see a few errors like the following and I am trying to find the corresponding drives:

          Dec 20 10:21:20 ZION kernel: sas: ata14: end_device-1:5: cmd error handler

      Since they are on ata9/11/13/14, could it be a controller problem or a loose cable?
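     In case it helps, this is roughly how I tried to match those ata numbers back to actual drives (a sketch; it relies on the boot-time identify lines still being in the syslog):

         # boot-time lines like "ata9.00: ATA-8: WDC WD60EFRX-..., max UDMA/133"
         # tie each ATA port number to a drive model
         grep -E 'ata(9|11|13|14)\.00' /var/log/syslog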
  20. OK, I checked netstat and saw my hackintosh was logged in. It's off now.
  21. Oh, actually, I see there are some CSRF errors before the array starts. It's just that there's less going on after it starts, and the CSRF token errors keep on going. (I think.)
  22. OK, so I noticed those after I uploaded them and was trying to get new ones when you posted. I restarted, closed/reopened the browser I was using and everything looked fine. Then, when I started the array, the messages came back. So, I restarted and used a completely different browser. Again, everything looked fine, until I started the array and the CSRF token errors started again. Is there something wrong with one of the plugins that is causing that? Here are the new diags with a few CSRF token errors towards the end. Thanks for the help with this. zion-diagnostics-20171220-0955.zip
  23. I have recently replaced my 6TB parity drive with two 8TB drives. The 8TB drives pre-cleared at roughly 185MB/second. When I started the parity sync, they started at about 16MB/second and the speed has not changed. I tried canceling and rebooting to see if it would have any effect; it did not. One thing I have considered is that I don't know which cards the drives are on right now. Could a bus be overburdened? I am down to three 6TB drives and the two 8TB parity drives now, and the speed has not changed. I can't believe that would be too much bandwidth, so I am under the impression something else has gone wrong somewhere. Is there anything in the logs that provides any insight into why things are going so slowly? Thanks for any help. zion-diagnostics-20171220-0858.zip
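     For the bus question, my back-of-the-envelope check looks like this (the ~800 MB/s ceiling is an assumption about my controllers, not a measured number):

         # five drives at full pre-clear speed versus an assumed shared-bus ceiling
         drives=5; per_drive=185; bus=800
         echo "aggregate $((drives * per_drive)) MB/s vs ~${bus} MB/s ceiling"
         # 925 vs 800: saturation might shave the top end a bit,
         # but it is nowhere near enough to explain a drop to 16MB/second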