Posts posted by Ryoko

  1. Couldn't update to 6.7.0 without hosing my system (in that update thread). Decided to try again adding ...

    iommu=pt

    before updating to 6.7.2, and was able to move over from 6.6.7 with no issues!
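
    For anyone wondering where that flag goes: the usual spot is the append line of the default boot entry in /boot/syslinux/syslinux.cfg (editable via Main > Flash > Syslinux Configuration in the webui). Something like this, assuming an otherwise stock entry:

    label unRAID OS
      menu default
      kernel /bzimage
      append iommu=pt initrd=/bzroot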

     

    System seems snappier and looks great. Happy to finally be on the 6.7.~ line. \o/

  2. On 6/4/2019 at 7:36 PM, jonathanm said:

    I don't use any USG's, I only have wifi radios in my sites. Sorry to mislead you, I didn't see any mention of a USG in your post, and didn't think to go back in your post history to see which devices you had.

    Oh, dang, I was hoping for some juicy tidbits, sorry for the confusion. Thank you though. I can also confirm that the APs don't seem to have any issue when “Override inform host with controller hostname/IP” is set to the FQDN; it's just the USG, whose WAN IP is what the FQDN resolves to.

     

    I'm gonna see if I can find a way to json the inform information and hard-lock them to something that works.

     

     

    ---------SOLVED----------

    https://www.reddit.com/r/Ubiquiti/comments/bxrzoh/usg_setinform_fqdn_gives_local_usg_adopt_loop/

    This is how I made the config.gateway.json...
    
    
    {
        "system": {
            "static-host-mapping": {
                "host-name": {
                    "FQDN": {
                        "inet": "My_unRAID_Local_IP"
                    }
                }
            }
        }
    }
    
    
    
    Since it's running in unRAID, I could simply place this into...
    
    /mnt/cache/appdata/Unifi_Container_Name/data/sites/Site_Name_Being_Used
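
    One caveat worth mentioning: a malformed config.gateway.json can itself throw the USG into a provision loop, so it's worth validating the JSON before forcing a provision. A quick check from any box that has Python available (stock unRAID doesn't necessarily ship it):

    python -m json.tool config.gateway.json

    If that prints the JSON back instead of an error, force a provision on the USG from the controller and the static host mapping should take effect.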

     

  3. 21 hours ago, jonathanm said:

    Using the override with FQDN works fine internally for me. Are you sure your NAT reflection / loopback is working properly in your router? I have 3 external sites and 1 internal site managed with this container, all working fine, with all devices using FQDN.

    Sounds like something specific on my side then. So I checked the USG's config.boot, and it's showing "hairpin-nat enable" in the port forwarding section. I then verified that the FQDN resolves internally to my WAN IP, which it does. Next I checked my firewall and port-forwarding rules to make sure I didn't have anything silly in there. Everything is default, aside from my port forwards for the inform, STUN, and an old OpenVPN docker I had set up. From there I started searching and found something about forwarding ports 443 and 10443, but that also didn't work. I even forced a DNS entry for that FQDN => local unRAID IP, which worked for all devices except the USG, which would still come back with the WAN IP and then adopt-loop.

     

    Thinking something was off, I created another docker container completely from scratch, no backup restore, set-default on all devices, this time using 5.8.30 on the controller. Same issue: if I don't “Override inform host with controller hostname/IP” when the docker/unRAID server restarts, the devices go into an adoption loop. One thing I did notice that I thought was odd: if you SSH into the devices and use “info” while they're looping, it shows an inform set to the docker container IP, not the previously set/working inform information.

     

    I really don’t have any idea why the USG isn’t liking the FQDN as its inform. I have pulled power, set-default, used the reset button, everything back to factory, and it still hates it. If I set-inform to the local unRAID server address, it immediately connects.
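
    For anyone else stuck in the loop, the manual workaround looks roughly like this (8080 being the default inform port, and the placeholders being your own addresses):

    ssh admin@USG_Local_IP
    set-inform http://My_unRAID_Local_IP:8080/inform
    info

    "info" should then show the inform URL you just set and whether the device has (re)connected.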

     

    Did you have to do anything special to get yours working with an overridden FQDN?

     

    Outside of that, I am extremely curious if there is a .json way to hard-lock the inform information on a per-device basis. That's kind of the last-ditch effort, as this is looking like my only/easiest option.

  4. Coming off of the deprecated 5.6.40 controller, which had no issues, I am now seeing the same adoption issues many others have mentioned on 5.6.42, 5.8.30, etc. Unfortunately, I cannot use the "Override inform host with controller hostname/IP" workaround, as I have USGs at other sites that use an FQDN to inform the controller, and I have found the workaround does not work with that.


     

    Things I have tried and noticed:


     

    Override inform host with controller hostname/IP – using FQDN

    ・Offsite devices connect with no issues; local devices adopt-loop

    ・From there, if I SSH into the local devices and manually set-inform to the controller’s local IP, they will connect; then, when the controller re-provisions those local devices back to the forced FQDN hostname, they start looping again


     

    Not using override - after restarting unRAID

    ・Offsite devices that had previously been set with the FQDN inform connect with no issues

    ・Local devices adopt-loop. Like above, manually setting the inform via SSH to the controller’s local IP works immediately


     

    I am using a site-to-site tunnel, and according to unifi, the tunnel should auto-update the WAN address, keeping the connection alive. So I’m wondering whether forcing the inform to the controller’s local IP will work or not. I don’t know; even if it does, I don’t like the idea of having a single point of failure, with the probability of losing access to that offsite location.


     

    I’m curious whether making a json file that sets the local devices’ informs to the controller’s local IP address would work like SSHing into them manually and stop the loop. Just an idea that popped into my head, but I wouldn’t know how to do that, even following the unifi “make a json” instructions.


     

    If anyone has any other ideas, I am open to them. Otherwise, I might have to try and get the 5.6.40 controller back up and running again (if that is even possible with it being deprecated).

     

    Updated----

     

    Tried rolling back to 5.6.40; same issue there now. Not sure what's causing it. I mean, everything still works (aside from the controller) after a reboot, so I suppose it will just be a minor annoyance of manually set-inform'ing for a while until things get sorted out.

  5. Unfortunately, upgrading from 6.6.7 to 6.7 grenades my unRAID.

    In reference to the system in my signature named Yokohama Server, after updating I got something similar to...

     

    DMAR:[DMA Read] Request device [04:00.0] fault addr

    DMAR:[fault reason 06] PTE Read access is not set

    can't find bond device.

     

    It assigns the server a completely out-of-range IP, and the webui isn't even available via the GUI boot option's localhost. Thinking this was an IOMMU issue based on previous posts, I decided to try rebooting with VT-d disabled in the BIOS. The server came right up: no errors in the logs, no issues at all (aside from not being able to pass devices through to my VMs). I checked with MSI, and they haven't released a new BIOS for this MB since 2018 (I'm already on the current version).

     

    I require passthrough on this server, so I have had to revert to 6.6.7.

  6. 3 hours ago, aptalca said:

    you can change your image to "linuxserver/openvpn-as:2.6.1-ls11" and it should work

    Sorry, quick question, how does one go about doing this? (also having the same issue as the previous "update")

     

    EDIT-

    I removed the image, then tried

    docker pull linuxserver/openvpn-as:2.6.1-ls11

    Which seemed to give me the previous version (just fumbling about using Google here). However, I still have the same issue.

     

    EDIT 2-

    Removed that image, added the container again via the unRAID GUI, changed the repository to "linuxserver/openvpn-as:2.6.1-ls11", and it worked like a charm! Back up and running ^_^v (Makes sense in hindsight: pulling the tag only downloads the image, while the container template has to actually reference that tag.)

  7. Just a follow-up to this, since I thought it was weird, and also so I can check back later when I forget what I did, lol

     

    So I never got around to running the memtest, partially out of laziness, mostly out of not wanting to stay after hours.

    I noticed yesterday that the mount point for the drive no longer seemed to have any files or folders in it.

    So I tried copying the backups over just to see, and it said the disk was full...

     

    I decided I would try reformatting the drive (again, just to see), pulled the mount commands from the go file, and rebooted.

    Drive showed up totally fine in Unassigned Devices, could mount with no issues, and all the files and folders were still there.

    So, I set it to automount, changed my VM settings to point to this new mount point, and profit.

    EDIT - Even with multiple reboots nothing has changed... weird

     

    Everything seems to be running fine and I have had 0 checksum errors since.

    Keeping in mind, I did the 6.6.3 update right before all of this. So TBH, I don't know what "fixed" it, or even what was ultimately wrong with it.

     

    I know I should still run the memtest to verify.

  8. Should I try running memtest or something to troubleshoot it further? I'm thinking that might help sort out which piece of hardware is causing it. I'm also going to check and see if there is a new BIOS for this specific motherboard. Thank you as always for your help johnnie!

     

    EDIT-

    Updated the BIOS, as it specifically listed increased memory compatibility. Also took the time to clean the dust out and reseat everything. Still popping the error. I'll run memtest on it later, when people aren't using it, and see if that identifies the issue.

  9. This and multiple variants of this error have been getting spammed across my system log since updating to 6.6.0:

    BTRFS warning (device sdb1): csum failed root 5 ino 274 off 62905892864 csum 0x2d8eafc5 expected csum 0xea417956 mirror 1

    The device in question is a single SSD mounted outside the array via the go file at startup.
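
    For context, the go-file mount is nothing fancy; roughly along these lines, with the mount point name being arbitrary:

    # in /boot/config/go
    mkdir -p /mnt/ssd
    mount -t btrfs /dev/sdb1 /mnt/ssd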

     

    Running stats on the device results in this:

    [/dev/sdb1].write_io_errs    0
    [/dev/sdb1].read_io_errs     0
    [/dev/sdb1].flush_io_errs    0
    [/dev/sdb1].corruption_errs  0
    [/dev/sdb1].generation_errs  0

    Scrubbing the device also gives 0 errors.

    scrub started at Thu Sep 27 14:24:13 2018 and finished after 00:05:14
    total bytes scrubbed: 153.88GiB with 0 errors
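
    (For anyone wanting to check their own drive, those outputs come from the standard btrfs tools; something along these lines, with the device/mount point swapped for your own:

    btrfs device stats /dev/sdb1
    btrfs scrub start -B /mnt/ssd

    The -B flag keeps the scrub in the foreground and prints the summary when it finishes.)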

    I have copied all of the data off and reformatted the device. (This did have the added benefit of the UD plug-in now showing the FS, temp, and capacity.)

    However, the errors continue.

     

     

    I've read quite a few posts related to this, but still haven't found a solution on my own.

    I admittedly don't know enough about the btrfs file system to understand which file(s)/metadata are causing the error.

    Any help would be greatly appreciated.

    Thank you in advance o/

     

     

    yes-mediaserver-diagnostics-20180927-1436.zip

  10. On 6/25/2018 at 9:14 PM, Rich said:

     

    Thanks for the heads up ryoko. I gave the three commands a shot and it did sort out the syslog flooding, but sadly didn't solve the single thread at 100% or allow the VM to boot, so looks like i'll be continuing with UEFI disabled for the moment.

    That's too bad. Not sure what might be causing the other issue you're running into; sorry about that.

  11. 8 hours ago, Rich said:

    I'm seeing this as well now. With UEFI boot enabled, a VM with iGPU passthrough doesn't boot, maxes out a CPU thread and totally fills the syslog with, 

    
    kernel: vfio-pci 0000:00:02.0: BAR 2: can't reserve [mem 0xc0000000-0xcfffffff 64bit pref]

    Disabling UEFI boot stops the problem and allows the VM and passthrough to return to working as expected.

     

    Rich

     

    diagnostics-20180624-1740.zip

     

    Most likely what is happening is that the efi-framebuffer is being loaded into the area of memory that the GPU is also trying to use when unRAID is booted in UEFI mode. This thread explains the who, what, why, and how to fix it, if your issue is the same as mine was.
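
    In short, the generic recipe for this symptom is to unbind the virtual consoles and the EFI framebuffer before starting the VM, which frees the memory region the GPU needs (vtcon numbering can vary per system, so treat this as a sketch rather than verbatim steps):

    echo 0 > /sys/class/vtconsole/vtcon0/bind
    echo 0 > /sys/class/vtconsole/vtcon1/bind
    echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind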

  12. On 5/22/2018 at 3:25 PM, bonienl said:

     

    A possible way of doing this is changing the general setting which applies to all disks and next change each individual disk in the array/cache to their desired values.

     

     

    Hmm, *thinking* interesting way to work around it! I'll take a look at that when I get home tonight and let you know how it goes. Thank you for the suggestion!

     

     

    EDIT- Seems to have worked out :) Thank you again bonienl!

  13. I have an M.2 NVMe drive that is mounted in the go file and resides in Unassigned Devices. I love that I can now see temps and all the work done in this area (thank you for that!). Previously, I got help from johnnie.black solving Array and Cache drive temp-warning spam on another machine, but the method of clicking into the drive name and setting thresholds individually doesn't seem to be available under UD. Specifically, there aren't any temp or utilization threshold options/textboxes. I'm sure I could probably set this globally in Disk Settings, but since this drive can regularly run at/around/over 50C, I am not comfortable with the other drives having to get that hot before sending me a notification (not that they do, but in case of an issue).

     

    Is there any way to add this into UD? Or some other way of doing this that I am missing?

     

  14. The way I am handling this is by having my SSDs set up in a btrfs RAID-0 cache pool, then syncing the data off onto the array nightly for redundancy. Of course, between syncs/mover runs the data on this cache pool is vulnerable, but having 1TB of space available at SSD access speeds is worth it to me. I really only use it for caching and my Steam games library. So if you are just using it for your audio and video files, something like this might be a viable option for you.
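
    A nightly sync like that can be as simple as a scheduled rsync (cron or the User Scripts plugin); a minimal sketch, with the paths being examples rather than my actual shares:

    #!/bin/bash
    # mirror the cache pool's games folder onto the array (/mnt/user0 writes to the array only)
    rsync -a --delete /mnt/cache/games/ /mnt/user0/backup/games/

    Note that --delete makes the array copy an exact mirror, so anything removed from the pool disappears from the backup too.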

  15. On 2018/1/19 at 9:11 PM, Djoss said:

     

    Can you try to run the following command to see if it helps?  Restart the container after running it.

    
    docker exec CrashPlanPRO add-pkg wqy-zenhei --repository http://dl-cdn.alpinelinux.org/alpine/edge/testing

     

     

    Hey Djoss, thank you for the response. I ran the above, and while it seemed to install, I still have the "?" marks after a restart of the docker. Even if I change the application language to Japanese in the options, the file names still show up as "?".

     

    EDIT - ps, the blank file/folder names are because I redacted them from the image

    キャプチャ2.PNG

  16. I was just looking for a file (someone deleted something they shouldn't have) and noticed that none of my files/folders written in Japanese are showing up in the backup. I went into Add Backup Set, and any folder/file with Japanese shows up with a "?" for each character. How do I add Japanese language support to the docker? Or, could I request that Japanese character support be added to the docker?

     

     

    The files look like this

    5a615d0a134bf_.PNG.56d0b7ad7eb75b0f387ab71e5dcf2e46.PNG

  17. You can find the plugin I reference in Community Apps - Apps tab, search for usb. It will be the only thing that shows up.

    Its full name is "Libvirt Hotplug USB".

     

    Ya, mine is still pretty much off and on. Sometimes it works on its own. Sometimes I can get it to connect on the first try with the above plugin. Sometimes I have to attach and detach (via the VM panel) like 5 times to get it to work. It's super hit and miss.

     

     
