ryoko227

Everything posted by ryoko227

  1. Stop and edit the docker. Change Repository: to this... linuxserver/openvpn-as:2.8.8-cbf850a0-Ubuntu18-ls122 Click Apply and it will rebuild the image. You should be good to go after that, once it finishes loading up. o/
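     (If you'd rather do it from the command line, a rough equivalent of what the GUI does on Apply is just pulling that pinned tag; a quick sketch is below, assuming nothing beyond docker itself.)
       # pull the pinned image tag manually (the unRAID template does this when you hit Apply)
       docker pull linuxserver/openvpn-as:2.8.8-cbf850a0-Ubuntu18-ls122
       # confirm the tag is now present locally
       docker images | grep openvpn-as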
  2. Unfortunately I tried that, as well as connecting from different machines that normally do not have access. No joy. I ended up reverting back to 6.8.3 due to other issues, but won't be able to mess with the docker till next Tuesday. UPDATE (solved) - Seems to have been a corrupted image. Moved the appdata to another share, removed the image, reinstalled via the template, copied the appdata back over, and used the controller's autobackup file from the beginning of the month to restore everything. 100% back up and running.
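     (In case anyone wants to repeat the shuffle, a minimal sketch is below; the share path and container name are just placeholders for whatever yours are called.)
       # stop the container, then move the appdata off to another share
       docker stop unifi-controller
       mv /mnt/user/appdata/unifi-controller /mnt/user/otherShare/unifi-controller
       # remove the image and reinstall from the template in the unRAID GUI,
       # then copy the saved appdata back before starting the new container
       cp -a /mnt/user/otherShare/unifi-controller/. /mnt/user/appdata/unifi-controller/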
  3. Updated unRAID 6.8.3 -> 6.9.2 and now get an "ERR_CONNECTION_REFUSED" when trying to access my unifi 5.14.23 docker webUI. The docker log shows no error messages. So I tried: removing and redownloading the docker, completely deleting the docker image, reverting to 6.8.3, etc. Tried playing with the webUI ports in the docker, but it just spits out the same error. The tunnels I have to offsite USGs are still up and, according to their webUIs, "The UniFi Controller is currently unreachable." TBH, not sure if this is a 6.9.2 issue, or an underlying issue that happened to pop off when I tried the update. Has anyone else run into this or have a suggestion on where/what I can try?
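     (For anyone hitting the same symptom, these are the kinds of checks I've been running from the unRAID console; the container name and port are just whatever yours happens to be mapped to.)
       # is the container actually running and publishing its ports?
       docker ps --filter name=unifi-controller
       # anything useful at the tail of the container log?
       docker logs --tail 50 unifi-controller
       # is anything on the host listening on the controller's HTTPS port?
       ss -tlnp | grep 8443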
  4. Out of curiosity, which video did you find? What was the exact solution for your problem? I am also having a similar issue; it worked fine in 6.8.3, but not with 6.9.2. @awediohead
  5. Unfortunately, based on your described symptoms, this sounds like a coincidental failure at the same time you happened to be updating/rebooting your unRAID. If you are not getting POST of any kind, then this points to your hardware/BIOS. At minimum, when you start a PC you should get a POST screen. If you are getting nothing on a confirmed working monitor, then you need to dig deeper. Firstly, pull your unRAID USB out and set it off to the side; you won't need it right now. Your goal is to get the PC to properly boot, then worry about whether your unRAID works. There are many things that could be wrong when you get a solid black screen/no signal. If your MB displays POST codes, check your manual to see what they mean and follow the troubleshooting steps. If the MB is older and doesn't have this feature (nor a PC speaker), you might want to look into the following...
     BIOS – may have gotten corrupted. Pull all power, hit the BIOS/CMOS reset button (if your MB has one) or hold in the power button, plug it back in, and see if it will boot.
     GPU – may have died, or gotten unseated while you have been working on this. Try pulling it out and reseating it; additionally, you could try it in a different slot. If you have another PC nearby, verify the GPU works by tossing it into the other machine to see if you can get a POST screen.
     Memory – reseat all. Still nothing? Pull all out and try each DIMM one at a time in different slots (to see if you have a bad DIMM).
     This is a process of elimination to try and find the fault in your PC. I think the above is enough for now; if none of the above solves it, let us know and we will try to go from there. PS – Truth be told, I had a CMOS battery die on me once, and after a simple reboot it borked my BIOS and I had to reset it like I mentioned above. With constant power, that shouldn't be possible, but meh, tech stuff so.... Best of luck.
  6. Smooth update on the Home server from 6.8.2->6.8.3. The new kernel fixed my nvme IO_PAGE_FAULT issue, which is awesome!! I was also able to easily get the AMD cache thing running on my VMs with a simple Edit->Update to each of them. Awesome work as always Limetech! EDIT/UPDATE: Work server also updated from 6.8.2->6.8.3 with no issues.
  7. Sorry I haven't gotten back to you on this!!! I agree with you on the kernel patching (way outside my breadth), lol. I haven't had a chance to update unRAID to 6.8.3 yet, but the kernel for that is... Linux kernel: version 4.19.108 (CVE-2020-2732) kernel-firmware: version 20200207_6f89735 oot: wireguard: version 0.0.20200215 I hopped back over to bugzilla, and according to Comment 111, it's been committed. Here's a quote for brevity... So, I would think (haven't tested yet) that it *should* be fixed with an unRAID update. When I get around to updating the system with the issue, I'll edit/post the results! EDIT/UPDATE - No more errors with the 6.8.3 update on mine 🥰
  8. Thank you for this!! I can confirm this works and got Manjaro 18.1.5 KDE Plasma running using binhex's method.
  9. Updated from 6.7.2 to 6.8.2 on 2 servers and seem to have no issues. Dockers are up and running and VMs with passthrough all seem good too. *cheers*
  10. ※Related to the "unexpected GSO type" issue on unRAID 6.8※ Yes, related to unRAID's specific workload, hence the reason I worded it the way I did and linked what Tom wrote on that topic. For clarification, Tom in his own words... "This implies the bug was introduced with some code change in the initial 5.0 kernel. The problem is that we are not certain where to report the bug; it could be a kernel issue or a docker issue. Of course, it could also be something we are doing wrong, since this issue is not reported in any other distro AFAIK." That link is related to passthrough of an NVMe device and was a separate issue that has been resolved, to my knowledge. The problem that we are discussing is related to IO_PAGE_FAULT being thrown when an NVMe is trimmed, specifically for users who need to have IOMMU enabled. The current kernel patch information is located here: https://bugzilla.kernel.org/show_bug.cgi?id=202665#c105 Awesome! It looks like, from the latest update on https://bugzilla.kernel.org/show_bug.cgi?id=202665#c107, that you need to build a custom kernel with the patch applied to it. I've never attempted that, and to be completely honest, I don't know how one would apply that to unRAID, lol.
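     (For my own notes, the generic way to apply a patch like that to a kernel tree looks roughly like the below; the version and patch filename are placeholders, and getting the resulting bzimage onto the unRAID flash drive is exactly the part I'm unsure about.)
       # grab kernel source matching the running kernel (version is a placeholder)
       wget https://cdn.kernel.org/pub/linux/kernel/v4.x/linux-4.19.107.tar.xz
       tar xf linux-4.19.107.tar.xz
       cd linux-4.19.107
       # apply the fix posted on the bugzilla thread (filename is a placeholder)
       patch -p1 < ../iommu-trim-fix.patch
       # drop in a .config matching the stock unRAID kernel, then build
       make olddefconfig
       make -j$(nproc) bzImage modules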
  11. I actually never got around to trying it, TBH. The newer versions of unRAID were slated to jump to the 5.x kernels, so I was just holding off hoping that it would just come down the pipe, lol. Unfortunately 6.8, with the 5.x kernel, is having some new, weird, unRAID-only error that was mentioned in the post "UNRAID OS VERSION 6.8 RELEASE PLAN"... so... I will have to look into this patching again when I get some free time.
  12. I know this is marked SOLVED, and that johnnie.black linked the bugzilla bug ID in which it is being talked about, but since some people might find this topic who DO need to have IOMMU enabled, I wanted to give a quick update on the IO_PAGE_FAULT issue. From the bug ID in question, it seems this is not specific to AMD, but more to the controller. There are, from what I gathered, 2 different patches, based on the Phison controllers & the Silicon Motion controllers. This stuff isn't my forte, but they seem to be kernel patches. https://bugzilla.kernel.org/show_bug.cgi?id=202665#c87 I also have an ASX8200PNP, like someone on bugzilla, that kicks the IO_PAGE_FAULT when trimmed. So I plan to try out the patching they are talking about, and if it clears up my issue I guess I'll ask unRAID to add the patch till this gets sorted on the kernel side of things.
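     (If you want to check whether your own drive is affected, the quickest reproduction I know of is trimming the filesystem and watching the kernel log; /mnt/cache is just the usual unRAID cache mount, so adjust to wherever your NVMe lives.)
       # manually trim the NVMe-backed filesystem
       fstrim -v /mnt/cache
       # then look for AMD-Vi / IO_PAGE_FAULT lines in the kernel log
       dmesg | grep -iE "io_page_fault|amd-vi"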
  13. Just a gentle reminder, the support link in the docker tab still takes you to the incorrect forum link listed above.
  14. ^ agree with everything above. Unifi is the one docker I rarely update; it just sits there staring at me while I watch the utter S-Storm when new "updates" (if you can call them that) come out over on their forums. Awesome hardware, terrible (most often broken) software. Still sitting on 5.8.3 since that's the last stable I've had 0 issues with. Love LSIO for their hard work trying to keep everything up and running for us, though. o/
  15. Not sure if I'm the only one, but when I click support in the Docker drop down menu, it takes me to the old support page located here https://forums.unraid.net/topic/41631-support-linuxserverio-openvpn-as/ Just a heads up
  16. ^ that is basically what I ended up doing. Luckily I can pass through the USB controllers on my MB. Never any issues anymore.
  17. Home Server running AMD 1920x updated from 6.6.7 => 6.7.2 with no issues or modifications needed o/ Thank you as always!
  18. Couldn't update to 6.7.0 without hosing my system (in that update thread). Decided to try again, adding ... iommu=pt before updating to 6.7.2, and was able to move over from 6.6.7 with no issues! System seems snappier and looks great. Happy to finally be on the 6.7.~ line. \o/
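     (For anyone wondering where that goes: on my box it's the append line in the flash drive's syslinux config, editable via Main -> Flash -> Syslinux Configuration. The snippet below is just a typical default entry with the parameter added, so your other append options may differ.)
       label Unraid OS
         menu default
         kernel /bzimage
         append iommu=pt initrd=/bzroot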
  19. Oh, dang, I was hoping for some juicy tidbits, sorry for the confusion. Thank you though, I can also confirm that the APs don't seem to have any issue when "Override inform host with controller hostname/IP" is set to the FQDN, just the USG, whose WAN IP is what the FQDN resolves to. I'm gonna see if I can find a way to json the inform information and hard-lock them to something that works. ---------SOLVED---------- https://www.reddit.com/r/Ubiquiti/comments/bxrzoh/usg_setinform_fqdn_gives_local_usg_adopt_loop/ This is how I made the config.gateway.json...
     {
       "system": {
         "static-host-mapping": {
           "host-name": {
             "FQDN": {
               "inet": "My_unRAID_Local_IP"
             }
           }
         }
       }
     }
     Since it's running in unRAID, I could simply place this into... /mnt/cache/appdata/Unifi_Container_Name/data/sites/Site_Name_Being_Used
  20. Sounds like something specific on my side then. So I checked the USG's config.boot and it's showing "hairpin-nat enable" in the port forwarding section. I then verified that the FQDN resolves internally, pointing to my WAN IP, which it does. So I checked my firewall and port forwarding to make sure I don't have anything silly in there. Everything is default, aside from my port forwarding stuff for the inform, stun, and an old OpenVPN docker I had set up. From there I started searching and found something about forwarding ports 443 and 10443, but that also didn't work. I even forced a DNS entry for that FQDN => local unRAID IP, which worked for all devices except the USG, which would still come back with the WAN IP, and then adopt loop. Thinking something was off, I created another docker container completely from scratch, no backup restore, set-default all devices, this time using 5.8.30 on the controller. Same issue: if I don't "Override inform host with controller hostname/IP", when the docker/unRAID server restarts, the devices go into an adoption loop. One thing I did notice that I thought was odd is that if you SSH into the devices and use "info" while it's looping, it's showing an inform set to the docker container IP, not the previously set/working inform information. I really don't have any idea why the USG isn't liking the FQDN as its inform. I have pulled power, set-default, used the reset button, everything back to factory, and it still hates it. If I set-inform to the local unRAID server address, it immediately connects. Did you have to do anything special to get yours working with an overridden FQDN? Outside of that, I am extremely curious if there is a .json way to hard-lock the inform information on a per-device basis. That's kind of the last-ditch effort, as this is looking like my only/easiest option.
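     (For reference, the manual workaround I keep falling back on is SSHing into the looping device and pointing it at the controller by hand; the IP below is only an example, so swap in your own controller address.)
       # on the AP/USG, after SSHing in with the device credentials
       info                                          # shows the current inform URL and adoption status
       set-inform http://192.168.1.10:8080/inform    # point it back at the local controller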
  21. Coming off of the deprecated 5.6.40 controller which had no issues, I am now seeing the same adoption issues as many others have mentioned on 5.6.42, 5.8.30, etc. Unfortunately, I cannot use the "Override inform host with controller hostname/IP" workaround, as I have USGs in other sites that use an FQDN to inform the controller, which I have found the above workaround does not work with. Things I have tried and noticed:
     Override inform host with controller hostname/IP – using FQDN
     ・Offsite: no issues connecting; local devices adopt loop
     ・From ^, SSH into the local devices and set-inform manually to the controller's local IP; it will connect, then when it re-provisions those local devices to the FQDN-forced hostname, it starts looping again
     Not using override – after restarting unRAID
     ・Offsite devices that had previously been set with the FQDN inform connect with no issues
     ・Local devices adopt loop. Like above, setting the inform manually via SSH to the controller's local IP works immediately
     I am using a site-to-site tunnel, and according to Unifi, the tunnel should auto-update the WAN address, keeping the connection alive. So I'm wondering if forcing the inform to the local IP of the controller will work or not. I don't know; even if it does, I don't like the idea of having only a single point of failure, with the probability of losing access to that offsite site. I'm curious if maybe making a json file, setting the local devices' informs to the controller's local IP address, would work like SSH'n into them manually and stop the loop or not. Just an idea that popped into my head, but I wouldn't know how to do that, even following the unifi "make a json" instructions. If anyone has any other ideas, I am open to them. Otherwise, I might have to try and get the 5.6.40 controller back up and running again (if that is even possible with it being deprecated). Updated---- Tried rolling back to 5.6.40; same issue there now. Not sure what's causing it. I mean, everything still works (aside from the controller) after a reboot, so I suppose it will just be a minor annoyance of manually set-inform'n for a while until things get sorted out.
  22. Unfortunately, upgrading from 6.6.7 to 6.7 grenaded my unRAID. In reference to the system in my signature named Yokohama Server, after updating I got something similar to...
     DMAR: [DMA Read] Request device [04:00.0] fault addr
     DMAR: [fault reason 06] PTE Read access is not set
     can't find bond device
     It assigns the server a completely out-of-range IP, and the webUI isn't even available via the GUI boot option's localhost. Thinking this was an issue with IOMMU based off previous posts, I decided to try rebooting with VT-d disabled in the BIOS. The server came right up: no errors in the logs, no issues at all (aside from not being able to pass through devices to my VMs). I checked with MSI, and they haven't released a new BIOS for this MB since 2018 (already on the current version). I require passthrough on this server, so I have had to revert back to 6.6.7.
  23. Sorry, quick question, how does one go about doing this? (also having the same issue as the previous "update") EDIT- I removed the image, then tried: docker pull linuxserver/openvpn-as:2.6.1-ls11, which seemed like it gave me the previous version (just fumbling about using Google here). However, I still have the same issue. EDIT 2- Removed that image, added the container again via the unRAID GUI, changed the repository to "linuxserver/openvpn-as:2.6.1-ls11", worked like a charm! Back up and running ^_^v