_rogue

Members
  • Posts

    26
  • Joined

  • Last visited

Posts posted by _rogue

  1. On 7/30/2022 at 5:31 PM, AgentXXL said:

    UPDATE: Rolled back to 4.4.2-2-01 and it's now moving completed torrents off the temporary download disk. I'll stick with this version until the next release.

     

    Start of Original Message:

     

    Is anyone else experiencing issues with qBt moving files from the 'incomplete' folder on my scratch disk to the 'Complete' share on unRAID? It does have an SSD cache pool for the 'Complete' share, and the folders get created with 'placeholder files'. When I do a binary compare between the scratch disk and the share, the share files are not shown as equal. If I try to play one of the media files from the share, it won't play, but the one on the scratch disk does. If I manually move them from scratch to the share, then qBt reports them as missing files.

     

    This seems to have started after I upgraded to unRAID 6.11 rc2. I've tried running the 'Docker Safe New Permissions' tool, but that didn't work. Even if I manually move the completed torrents from the scratch disk to the share, qBt still sees them as missing even though the 'Save Path' shows the correct location. A 'Force Recheck' sets them to 0% completed and they start downloading again. If I copy the files from the share back to the scratch disk and do another 'Force Recheck', they are shown as complete again.

     

    I've opened the console for the container and verified that my path mountpoints are all pointing to the correct locations. qBt reports the Save Path as 'Complete', and even with the files in both locations, the moment they are removed from the scratch disk they show as missing files and start to re-download.

     

    I've still been running the 4.4.x builds after the big glitch with 4.4.0, where it lost settings and reset the default network to LAN instead of VPN. Once I corrected those issues, the 4.4 releases were working fine up until the unRAID upgrade. I also just noticed that I somehow switched to an old template using OpenVPN instead of WireGuard. I'll try re-configuring with WireGuard, but I doubt that will make a difference to the save path.

     

    Thoughts? Ideas on what to try next? Any help appreciated!

     

    I am experiencing the exact same issue on the latest version and unRAID 6.11 rc2. I just rolled back to :4.4.2-2-01 and immediately all my files began moving to the correct location. I have no idea why this is happening.
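
    For anyone hitting the same thing, a quick way to double-check the two points above (that the container's path mappings really point where you think, and that a file on the share matches the scratch copy) from the unRAID console might look like this; the container name and file paths are just examples, so substitute your own:

    # List the container's host path -> container path mappings
    docker inspect --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{"\n"}}{{end}}' qbittorrentvpn

    # Compare checksums of one file on the scratch disk and on the share
    md5sum /mnt/disks/scratch/incomplete/example.mkv /mnt/user/Complete/example.mkv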

    • Like 1
  2. On 11/20/2021 at 10:48 AM, Squid said:

    Does it work if you create the image within the array and not on a ZFS device?

    Sorry for the late reply... I came across my own post while googling, trying to fix this issue once again.

     

    Yes, I can create the image if I set docker to use the array rather than ZFS.

     

    I have tried completely rebuilding my unRAID USB and the issue persists. It's 100% related to ZFS. It's also worth noting that since the last time I had this issue, I have rebuilt my ZFS pool with a completely new topology.

     

    Any ideas?

    Hello, I was trying out the docker image folder but realized it was causing odd issues (probably due to my using ZFS). I switched back to the regular .img file, but now I cannot install any docker containers without my whole server freezing up. I receive the following message:

     

    Quote

    Unable to find image '<image>:latest' locally

     

    I have tried on both the latest stable release (6.9.2) and the latest 6.10-RC2. I have attached diagnostics. If anyone has any ideas on how to repair my docker service, please help.
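
    A couple of harmless sanity checks that can be run from the console before rebuilding anything (these are generic Docker commands, nothing unRAID-specific, and hello-world is just a tiny test image):

    # Confirm the daemon responds and show the storage driver / data root in use
    docker info

    # Test whether the daemon can pull and store a small image at all
    docker pull hello-world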

    unraid-diagnostics-20211120-0815.zip

  4. 1 minute ago, ich777 said:

    What is the same issue then?

    Have you read the second recommended post in this thread at the top?

    Disregard me. I was seeing the same log entries and it seemed like my server was not booting. It turns out the issue was Public Server == 0.

     

    Don't know how it got changed, but setting it to 1 fixed the issue.
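
    For context, as I understand it that template setting maps to the Valheim dedicated server's -public flag; a launch line based on the example start script shipped with the dedicated server (not necessarily the exact command this container runs) looks something like:

    # -public 1 lists the server in the community browser; 0 hides it (join by IP only)
    ./valheim_server.x86_64 -name "MyServer" -port 2456 -world "Dedicated" -password "secret" -public 1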

    • Like 1
  5. 1 hour ago, blure007 said:

    First of all, ich777, thank you for all your effort in creating, maintaining, and supporting these!

     

    I'm wondering if anyone has been able to run multiple Valheim servers on the same box. I'm trying to run two Valheim instances, one running Valheim and another running Valheim Plus. They are both pointed at the same appdata/steamcmd source. Each has its own Game Port and Game Port Range: 2456-2458 and 2466-2468.

     

    The server running on the standard 2456-2458 ports works like a champ: no connection issues, and it shows up in the public server browser. Peachy.

    For the server running on 2466-2468, when I try to connect manually using IP:PORT I get the connecting screen for about 5 seconds before being disconnected, and I never get to the server password prompt. The server appears to be listening on the correct port, since I do get the connecting window. Log below.

     

    Any ideas on a possible cause?

    
    Connecting anonymously to Steam Public...Logged in OK
    Waiting for user info...OK
    Success! App '896660' already up to date.
    ---Prepare Server---
    ---Server ready---
    ---Starting Backup daemon---
    ---Start Server---
    [S_API] SteamAPI_Init(): Loaded local 'steamclient.so' OK.
    CAppInfoCacheReadFromDiskThread took 1 milliseconds to initialize
    CApplicationManagerPopulateThread took 0 milliseconds to initialize (will have waited on CAppInfoCacheReadFromDiskThread)
    RecordSteamInterfaceCreation (PID 58): SteamGameServer013 /
    RecordSteamInterfaceCreation (PID 58): SteamUtils009 /
    Setting breakpad minidump AppID = 892970
    RecordSteamInterfaceCreation (PID 58): SteamGameServer013 / GameServer
    RecordSteamInterfaceCreation (PID 58): SteamUtils009 / Utils
    RecordSteamInterfaceCreation (PID 58): SteamNetworking006 / Networking
    RecordSteamInterfaceCreation (PID 58): SteamGameServerStats001 / GameServerStats
    RecordSteamInterfaceCreation (PID 58): STEAMHTTP_INTERFACE_VERSION003 / HTTP
    RecordSteamInterfaceCreation (PID 58): STEAMINVENTORY_INTERFACE_V003 / Inventory
    RecordSteamInterfaceCreation (PID 58): STEAMUGC_INTERFACE_VERSION014 / UGC
    RecordSteamInterfaceCreation (PID 58): STEAMAPPS_INTERFACE_VERSION008 / Apps
    [S_API FAIL] Tried to access Steam interface SteamNetworkingUtils003 before SteamAPI_Init succeeded.
    RecordSteamInterfaceCreation (PID 58): SteamNetworkingUtils003 /
    RecordSteamInterfaceCreation (PID 58): SteamNetworkingSockets008 /

     

    I am having this exact issue, but I am just trying to run a single server on the default ports.
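
    In case it helps anyone else debugging this, one quick check from the unRAID console is to confirm the published port mappings actually match what the template says (the container name here is just an example):

    # Show the host-side port mappings for the Valheim container
    docker port valheim-server
    # For the default setup you would expect something like:
    #   2456/udp -> 0.0.0.0:2456
    #   2457/udp -> 0.0.0.0:2457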

  6. 18 hours ago, brentdog said:

    Was this ever resolved? I am having the same issue and the same bad luck with Google. The only thing I am trying to proxy right now is a self-hosted Bitwarden container. The only conf I changed was the Bitwarden proxy one, renaming it to bitwardenrs. I followed Spaceinvader One's videos for the most part and am using a user-defined bridge network. Everything works as far as clients accessing Bitwarden through the domain. But all access through nginx gets reported as coming from my Unraid server's IP address, and everything in the Bitwarden log is either from the Unraid server's IP address or the address of the SWAG docker. Is this just how it works inside Docker? I was previously using nginx directly (not in a container) on an Arch VM and always got the real internet IP addresses in the logs. But I was really hoping to ditch that VM and go with an all-container solution.

     

    I'm not sure what other information to provide. Any help would be greatly appreciated.

     

    So I gave up trying to figure it out. What I think is happening is that the applications we are proxying are showing the client IP they "discovered" rather than the one they are told about. You cannot change the actual source IP, because the app has to respond to the proxy at the proxy's IP. Make sense?

    Basically, the backend apps are not following the expression "Do as I say, not as I do" when it comes to logging. pfSense shows both the proxy IP and the client IP, LibreSpeed shows just the client IP, and Tautulli shows just the proxy IP. The issue is not with SWAG but with the backend app.
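
    To illustrate: the proxy does pass the original client address along, just in request headers rather than as the TCP source IP. Roughly what a backend behind SWAG receives (addresses made up for illustration):

    X-Forwarded-For: 203.0.113.45      # the client's address; each proxy in a chain appends the address it saw
    X-Real-IP: 203.0.113.45            # set directly to the client's address

    The TCP connection itself still comes from the SWAG container's IP, so an app that ignores those headers logs the proxy instead, which lines up with the pfSense/LibreSpeed/Tautulli behaviour above.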

  7. On 2/2/2021 at 9:44 PM, Noah Tatum said:

    @_rogue, sorry to reply to an older comment, but did you ever figure this issue with pfSense out? I have essentially the same setup, but I'm using binhex-sabnzbdvpn instead of qBittorrent. Completely at a loss, myself.

    Also sorry to reply to an older comment (bunch of Canadians here). I never figured it out. I switched over to WireGuard on the binhex container and the issue was gone, so I just left it that way.

    So I have been banging my head against the wall trying to figure this out. I have searched this thread and Google as much as I can; I think I might just not have the right search terms to get the info I need (or something is not working right).

     

    I am trying to get nginx to pass the real client IP to the backend, and I cannot figure out for the life of me why it does not work. My proxy.conf is set to the default right now, but I have tried every combination of settings I can think of. It appears that I am passing a list of IPs to the backend that includes both the reverse proxy and the client IPs, but the apps are only reading the reverse proxy IP. I need to get it to pass just the client IP. How do I do this?
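
    For reference, the nginx side of this boils down to a handful of proxy_set_header directives along these lines (SWAG's stock proxy.conf should already contain equivalents, so this is more of a sanity check than something that needs changing):

    # Inside the location block that proxies to the backend app
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Proto $scheme;

    Whether the backend actually honours X-Real-IP / X-Forwarded-For instead of the socket address is then up to the app itself.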

     

  9. On 10/13/2020 at 3:49 PM, binhex said:

    Ahh, I've spotted the issue! You cannot use a custom bridge with a fixed IP in the same range as your LAN network, so you could either use a fixed IP in another range that is different from the LAN network, or simply use the default 'bridge'.

    Hey binhex, I think I am having a similar issue to dnLL. I am using PIA and I have switched to the new network already as part of my troubleshooting.

     

    For the longest time I have had all my Docker containers on one independent VLAN, so qbittorrentvpn has IP 10.15.1.57 and my unRAID host is on another VLAN with IP 10.15.0.30. Since a few days ago I can no longer access qBittorrent from my other containers on the 10.15.1.0 VLAN (Sonarr, Radarr, reverse proxy), though I can access it from my other subnets without issue. Like dnLL, if I turn off the VPN I can access qBittorrent from the 10.15.1.0 VLAN without issue.

     

    Looking at pfSense, I am getting an entry like this (10.15.1.50 is my reverse proxy):

    [screenshot of the pfSense firewall log entry]

    Google-fu tells me that TCP:SA is related to asymmetric routing, but configuring floating rules does nothing to help. This kind of makes sense, because my reverse proxy would be reaching qBittorrent over the "switch" within unRAID/Docker, yet for some reason qBittorrent is sending its reply to the default gateway. That doesn't explain why this only started with the 4.3.0 update, but even if I downgrade it no longer works. I even tried a whole new container and it is still not working.

     

    I'm stumped. Is this the same/similar issue as dnLL?
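
    One thing that might narrow it down (assuming the iproute2 tools are present inside the binhex container, which I believe they are since the VPN scripts use them; the container name below is just an example):

    # Show the container's routing table, to see whether traffic back to the
    # 10.15.1.0/24 VLAN goes out the VPN tunnel or stays on the local bridge
    docker exec qbittorrentvpn ip route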

    I have been debating what to do next with my setup. I really like unRAID, but I want ZFS for my storage. I want to make use of this plugin, and while I am comfortable with the CLI, I really wish there was a GUI. Any chance we can see that happen? Does something already exist?

     

    Thanks

    I switched to this plugin this past weekend. I used to tar my backups manually anyway, so it's nice to have it integrated. Question though: can we get an option where the dockers are updated and restarted before the verification? I just like to minimize downtime for my services as much as possible. Thanks.

    • Like 1
  12. 47 minutes ago, dlandon said:

    Remote mounts are mounted when the array starts if they are set to auto mount.  UD does not attempt another mount if the remote mount comes on-line.

     

    You could set up a User Script on a cron to auto mount remote mounts with the following script:

    
    /usr/local/sbin/rc.unassigned mount auto

    This will mount any devices that are not mounted if they are set to auto mount.  It doesn't affect any devices already mounted, but could add a lot of entries to the log.  It will also mount any disks set to auto mount that are not mounted.

    This is exactly what I needed! I can make a quick script that checks whether the mount point exists and, if it does not, runs this command (rough sketch below).

     

    Thank you!
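
    For anyone else wanting the same thing, a rough sketch of that User Script (the mount point path is just an example, and mountpoint comes from util-linux, which unRAID ships as far as I know):

    #!/bin/bash
    # If the remote share's mount point is not currently mounted, ask
    # Unassigned Devices to mount everything flagged for auto mount.
    MOUNTPOINT="/mnt/remotes/nas_backup"   # example path - substitute your own

    if ! mountpoint -q "$MOUNTPOINT"; then
        /usr/local/sbin/rc.unassigned mount auto
    fi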

    I have not been able to find an answer as to whether this is a bug or by design.

     

    When I have an NFS mount configured for auto mount and it is not available when unRAID boots, the device remains unmounted. Once it becomes available, the mount button turns green/orange and allows me to mount it. Should it not just auto-mount the share when it sees it become available?

     

    I don't know if this is intended or a bug, but it would be really nice if NFS shares always auto-mounted when they become available.
