jcamer

Members
  • Posts: 14
  • Joined
  • Last visited

jcamer's Achievements: Noob (1/14)

Reputation: 4

  1. I see on that link that it was fixed in version 6.2.4, and that there is a 6.2.9 out now as well as 6.3 RCs. Did you try moving to 6.2.4 or newer? I just ran into this on 6.2. Not sure what to do, as the Docker website shows the official image as only being released up to 6.2 so far.
  2. So glad I found this. I've been having issues with my dockers and then noticed this, although it's unrelated. I thought there was something bigger going on. Same thing for me: I closed Brave and reopened it, and my drives/array now show again.
  3. I am having the exact same issue. I haven't touched it, except maybe for updating it when there is an update. I don't really use it all that much, so I am not sure when it happened, but like you, it is unreachable. I am using OpenVPN with NordVPN. If I set VPN to "no", it works just fine: downloads, opens the web UI, etc. Of course, no VPN. Here's a snippet from the console.. Well, I downloaded a new cert from NordVPN and replaced the old one, and that fixed everything for me (a rough sketch of the swap is after this list). I went back a few pages and saw someone else had the same issue. All good now.
  4. What commands do you use to get them up and running?
  5. This solution has so far worked for me as well, with just the serial entry (sketched after this list). Going from 10.1 to 10.2 broke it again and I had to redo it, so I was nervous going to 10.3, but that went smoothly. I do wish just adding the USB device would work as expected (or as I expect it to), but I guess if the serial way works for now, I'll go with it.
  6. Same here.. I did the 6.10 RCs and am now on the final release. I hadn't put it together that this was the cause, but I am also thinking it must be. Same as the others: a Home Assistant VM with a ConBee II. I also tried updating the firmware on mine from a Windows machine, and while it did update to the latest successfully, it has made no difference.
  7. I updated the firmware and it's been fine for a few days now with spin down enabled. Thanks!
  8. I updated the firmware and it's been fine for a few days now. Thanks!
  9. Well, it's been running a couple of days with no issues after updating the firmware and keeping spin down turned off. Now, I'll enable disk spin down again and see how it goes.
  10. I updated the firmware to the latest from the Supermicro site. I'm nervous to tell the drives to spin down yet; I might give it a day or so. No issues since telling them not to spin down. I'll see how the newer firmware does, then re-enable spin down. Thanks again, appreciate it. updated: original:
  11. Here they are, taken shortly after. understonekeep-diagnostics-20210603-1507-anon.zip
  12. How has this been after updating the firmware? I'm in the same boat and have the exact same issues you originally had. Thanks, John
  13. I have been searching and this seems to be exactly what I am experiencing. In the logs I also see the exact same error across multiple drives on the exact same sector. Like that poster, I also have a Supermicro chassis with 12 drive bays. I am going to try what they tried (disabling spin down) and see how that works. I wouldn't think it's a power issue, as the chassis has dual 1000W power supplies. I'll also try updating the firmware on my card (a Supermicro card, LSI3008-IT, currently running firmware 6.00.00.00-IT; a quick way to check the card's firmware from the console is sketched after this list). The card and cables haven't been moved, so nothing has come unseated. My question now is: how do I get my two disabled drives back without having to stop the array, remove them, start the array, and add them back? Is there an easy way to tell Unraid to trust they're good and re-enable them? Edit: I stopped the array and removed both drives, restarted the array, added them back, etc. Rebuilding now. Thanks again, John
  14. I said "drive failures" because I think my drives are actually fine; I just didn't know if anyone else has run across this. Yesterday my server emailed me saying it had array errors: 3 disks with read errors (Parity disk, Parity disk 2, and Disk 3). It disabled the Parity drive as well as Disk 3, but things kept running. The chances of three drives having errors all at once out of nowhere seemed low, so I doubted it was actually bad drives.
      Turns out what follows below is not the problem. The problem just came back and now shows 11 drives with errors. I'm at a loss, and I have diagnostics if someone smarter than me can make sense of them.
      To get to it: I am wondering if this is somehow the problem. I use the ShinySDR docker container for an SDR dongle. I unplugged the dongle from my server a day prior, as I was getting a longer antenna cable for it. The Docker template had the USB device set as /dev/bus/usb/003/002 (which was correct prior to me unplugging it), and the container was set to start automatically. Somewhere in there I rebooted the server, and I think this is where the issues started. I rebooted numerous times, shut down all docker containers, shut down the one VM I run, and tried removing all the plugins I felt I didn't need, trying to find what the issue might be. I also forced the server off a couple of times as it was unresponsive. The server emailed me yesterday afternoon saying I had 9 disks with read errors.
      Well, I opened the terminal and ran lsusb to see what was connected, and /dev/bus/usb/003/002 was now "Bus 003 Device 002: ID 058f:6387 Alcor Micro Corp. Flash Drive" - this is my Unraid USB boot drive... I am wondering if this could have been the cause. I don't know if the docker container could be trying to access the USB drive in such a way as to spew out all of these read errors and disable my drives (see the note on USB bus/device numbering after this list).
      I did run Tools -> Diagnostics several times, but I now know that every time you reboot you might miss something important. Those files, along with the syslog, did show errors, but I'm hesitant to believe them, as the array has since rebuilt the parity drive and is currently 66% through rebuilding Disk 3. The syslog currently shows only the errors for the disabled drives, prior to me removing them and adding them back. Thoughts? Thanks, John
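
A rough sketch of the NordVPN config swap from post 3, assuming one of the typical Unraid VPN containers that reads its .ovpn file from an openvpn folder under appdata. The container name, server file, and download URL below are illustrative, not taken from the original post:

```bash
# Sketch only -- container name, paths, server file, and URL are assumptions.

# Stop the container before touching its config
docker stop binhex-delugevpn                     # hypothetical container name

# Swap in a fresh NordVPN OpenVPN config/cert
cd /mnt/user/appdata/binhex-delugevpn/openvpn/   # typical Unraid appdata layout
mv us1234.nordvpn.com.udp.ovpn us1234.ovpn.bak   # keep the old one around
wget https://downloads.nordcdn.com/configs/files/ovpn_udp/servers/us1234.nordvpn.com.udp.ovpn

# Start it back up so it picks up the new config
docker start binhex-delugevpn
```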
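The "serial entry" workaround mentioned in post 5, roughly sketched for a ConBee II attached to a Home Assistant VM on Unraid. The device ID and VM name are placeholders:

```bash
# Find the stable by-id path for the stick (it survives reboots and re-plugs,
# unlike /dev/bus/usb/BBB/DDD numbering)
ls -l /dev/serial/by-id/
# e.g. usb-dresden_elektronik_ingenieurtechnik_GmbH_ConBee_II_DE1234567-if00 -> ../../ttyACM0

# Then edit the VM (VM tab -> XML view, or virsh edit) and add a serial device
# pointing at that path -- something along these lines:
#
#   <serial type='dev'>
#     <source path='/dev/serial/by-id/usb-dresden_elektronik_ingenieurtechnik_GmbH_ConBee_II_DE1234567-if00'/>
#     <target type='usb-serial' port='1'>
#       <model name='usb-serial'/>
#     </target>
#   </serial>
#
virsh edit "Home Assistant"   # VM name is a placeholder
```

Pinning the by-id path rather than the plain USB device is what keeps the stick attached across reboots, which seems to be why the serial entry keeps working between versions.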
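For post 13, a quick way to check what firmware the LSI 3008 card is actually running from the Unraid console. sas3flash is Broadcom's SAS3 flashing utility and is not bundled with Unraid, so its availability here is an assumption:

```bash
# The mpt3sas driver prints the controller firmware version at boot
dmesg | grep -i mpt3sas | grep -i fwversion

# If Broadcom's sas3flash utility is present, it lists adapter, firmware,
# and BIOS versions (assumption: the tool has been obtained separately)
sas3flash -list
```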
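And for post 14, a short note on why /dev/bus/usb/003/002 ended up pointing at the Unraid boot stick: those numbers are just enumeration order, so they can shift whenever devices are unplugged or the server reboots. The commands are standard Linux; the RTL-SDR vendor:product ID is an assumption, not from the post:

```bash
# See what currently owns each bus/device slot
lsusb
# Bus 003 Device 002: ID 058f:6387 Alcor Micro Corp. Flash Drive   <- the Unraid boot stick

# Find where the SDR dongle actually is now by its vendor:product ID
# (0bda:2838 is a common RTL-SDR ID -- check your own device with plain lsusb)
lsusb -d 0bda:2838

# Then update the container's --device mapping to whatever lsusb reports,
# or pass the whole bus so renumbering inside it no longer matters:
#   --device=/dev/bus/usb/003/004          # illustrative numbers
#   -v /dev/bus/usb:/dev/bus/usb           # broader, but avoids stale paths
```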