_rogue
Posts: 26
Posts posted by _rogue
-
On 11/20/2021 at 10:48 AM, Squid said:
Does it work if you create the image within the array and not on a ZFS device?
Sorry for the late reply... I came across my own post while googling, trying to fix this issue once again.
Yes, I can create the image if I set docker to use the array rather than ZFS.
I have tried completely rebuilding my unRAID USB and the issue persists. It's 100% related to ZFS. Also worth noting that since the last time I had this issue, I rebuilt my ZFS pool with a completely new topology.
Any ideas?
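For anyone else hitting this, the failure can be narrowed down outside the GUI by doing by hand roughly what the docker service does at startup: create a loopback image file and format it, once on an array share and once on the ZFS pool. This is a hedged sketch only; both paths are placeholders, and the btrfs format step is an assumption based on unRAID's default docker.img layout.

```shell
#!/bin/bash
# Hedged repro sketch: create a loopback image the way unRAID's docker
# service does. Both paths below are placeholders -- substitute your own
# array share and ZFS pool mount points.
make_img() {
    truncate -s 1G "$1"        # sparse image file
    mkfs.btrfs -q -f "$1"      # docker.img is btrfs-formatted by default
}

# Only attempt this where the tool and the unRAID paths actually exist
if command -v mkfs.btrfs >/dev/null 2>&1 && [ -d /mnt/user/system ]; then
    make_img /mnt/user/system/docker-test.img   # array path (placeholder)
    make_img /mnt/zfspool/docker-test.img       # ZFS path (placeholder)
fi
```

If the array-side image formats cleanly and the ZFS-side one fails, that would point at the pool rather than the docker service itself.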
-
Hello, I was trying out the docker image folder but realized it was causing odd issues (probably due to me using ZFS). I switched back to the regular img file but now I cannot install any docker containers without my whole server freezing up. I receive the following message.
Quote: Unable to find image '<image>:latest' locally
I have tried on both the latest release (6.9.2) and latest 6.10-RC2. I have attached diagnostics. If anyone has any ideas on how to repair my docker service, please help.
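In case it helps anyone who lands here with a similarly broken docker service: the usual recovery is to stop docker under Settings -> Docker, delete the image, and let unRAID recreate it on the next service start. A hedged sketch follows; the image path is the common default and may differ on your system.

```shell
#!/bin/bash
# Hedged sketch: remove a corrupt docker.img so unRAID rebuilds it when the
# service restarts. Verify the path under Settings -> Docker first, and stop
# the docker service before running this.
IMG="/mnt/user/system/docker/docker.img"

if [ -f "$IMG" ]; then
    rm "$IMG"    # containers can be re-added afterwards via Previous Apps
fi
```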
-
1 minute ago, ich777 said:
What is the same issue then?
Have you read the second recommended post in this thread at the top?
Disregard me. I was seeing the same log entries and it seemed like my server was not booting. Turns out the issue was Public Server == 0
Don't know how it got changed, but setting it to 1 fixed the issue.
-
-
1 hour ago, blure007 said:
First of all ich777 thank you for all your effort on creating, maintaining and support these!
I'm wondering if anyone has been able to run multiple Valheim servers on the same box. I'm trying to run two Valheim instances, one running Valheim and another running Valheim Plus. They are both pointed at the same appdata/steamcmd source. Each has its own Game Port and Game Port Range: 2456-2458 and 2466-2468.
The server running on the standard 2456-2458 ports works like a champ, no connection issues and shows up in the public Server browser, peachy.
For the server running on 2466-2468, I can try to connect manually using IP:PORT, and I get the connecting screen for about 5 seconds before I get disconnected; I never reach the server password prompt. The server appears to be listening on the correct port, since I do get to the connecting window. Log below. Any ideas on a possible cause?
Connecting anonymously to Steam Public...Logged in OK
Waiting for user info...OK
Success! App '896660' already up to date.
---Prepare Server---
---Server ready---
---Starting Backup daemon---
---Start Server---
[S_API] SteamAPI_Init(): Loaded local 'steamclient.so' OK.
CAppInfoCacheReadFromDiskThread took 1 milliseconds to initialize
CApplicationManagerPopulateThread took 0 milliseconds to initialize (will have waited on CAppInfoCacheReadFromDiskThread)
RecordSteamInterfaceCreation (PID 58): SteamGameServer013 /
RecordSteamInterfaceCreation (PID 58): SteamUtils009 /
Setting breakpad minidump AppID = 892970
RecordSteamInterfaceCreation (PID 58): SteamGameServer013 / GameServer
RecordSteamInterfaceCreation (PID 58): SteamUtils009 / Utils
RecordSteamInterfaceCreation (PID 58): SteamNetworking006 / Networking
RecordSteamInterfaceCreation (PID 58): SteamGameServerStats001 / GameServerStats
RecordSteamInterfaceCreation (PID 58): STEAMHTTP_INTERFACE_VERSION003 / HTTP
RecordSteamInterfaceCreation (PID 58): STEAMINVENTORY_INTERFACE_V003 / Inventory
RecordSteamInterfaceCreation (PID 58): STEAMUGC_INTERFACE_VERSION014 / UGC
RecordSteamInterfaceCreation (PID 58): STEAMAPPS_INTERFACE_VERSION008 / Apps
[S_API FAIL] Tried to access Steam interface SteamNetworkingUtils003 before SteamAPI_Init succeeded.
RecordSteamInterfaceCreation (PID 58): SteamNetworkingUtils003 /
RecordSteamInterfaceCreation (PID 58): SteamNetworkingSockets008 /
I am having this exact issue, but I am just trying to run a single server on the default ports.
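One thing worth checking in the two-instance setup quoted above: publishing a different host port range only helps if the instance behind it is also told to listen there. If the second container still listens on the default 2456 internally, the extra host range has to map back onto that. A hedged sketch of the layout (the image name and variables are placeholders, not ich777's actual template keys):

```shell
#!/bin/bash
# Hedged sketch: two Valheim instances on one host. The second instance
# either listens on 2466 itself, or its host range must map back to the
# container's default 2456-2458. Image name below is a placeholder.
IMAGE="your/valheim-image"                # placeholder -- use the template's image
MAP_A="-p 2456-2458:2456-2458/udp"        # first instance, defaults
MAP_B="-p 2466-2468:2456-2458/udp"        # second instance, remapped to defaults

if command -v docker >/dev/null 2>&1; then
    docker run -d --name valheim      $MAP_A "$IMAGE"
    docker run -d --name valheim-plus $MAP_B "$IMAGE"
fi
```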
-
18 hours ago, brentdog said:
Was this ever resolved? I am having the same issue and the same bad luck with Google. The only thing I am trying to proxy right now is a self-hosted Bitwarden container. The only conf I changed was the Bitwarden proxy one, renaming it to bitwardenrs. I followed Spaceinvader One's videos for the most part and am using a user-defined bridge network. Everything works as far as clients accessing Bitwarden through the domain. But all access through nginx gets reported as coming from my Unraid server's IP address, and everything in the Bitwarden log is either from the Unraid server's IP address or the address of the Swag docker. Is this just how it works inside docker? I was previously using nginx directly (not in a container) on an Arch VM and always got the real internet IP addresses in the logs. But I was really hoping to ditch that VM and go with an all-container solution.
I'm not sure what other information to provide. Any help would be greatly appreciated.
So I gave up trying to figure it out. What I think is happening is that the applications we are proxying log the client IP they "discovered" rather than the one they are told about in the headers. You cannot change the actual source IP, because the app has to respond to the proxy at the proxy's IP. Make sense?
Basically the backend apps are not following the expression "Do as I say, not as I do" when it comes to logging. pfSense shows both the proxy IP and the client IP, Librespeed shows just the client IP, and Tautulli shows just the proxy IP. The issue is not with SWAG but with the backend app.
-
On 2/2/2021 at 9:44 PM, Noah Tatum said:
@_rogue, sorry to reply to an older comment, but did you ever figure this issue with pfSense out? I have essentially the same setup, but I'm using binhex-sabnzvpn instead of qbittorrent. Completely at a loss, myself.
Also sorry to reply to an older comment. (Bunch of Canadians here). I never figured it out. I switched over to wireguard on the binhex container and the issue was gone so I just left it that way.
-
So I have been banging my head against the wall trying to figure this out. I have searched this thread and Google as much as I can. I think I might just not have the right search terms to get the info I need. (Or something is not working right.)
I am trying to get nginx to pass the real client IP to the backend. I cannot for the life of me figure out why it does not work. My proxy.conf is set to default right now, but I have tried every combination of settings I can think of. It appears that I am passing a list of IPs to the backend that includes both the reverse proxy and the client IPs, but apps are only reading the reverse proxy IP. I need to get it to pass just the client IP. How do I do this?
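For reference, these are the standard nginx directives for forwarding the client address, which is what the default proxy.conf is doing; the backend then has to read the header rather than the socket's peer address. A minimal sketch (the location and upstream name are placeholders):

```nginx
# Minimal sketch of forwarding the real client IP to a proxied backend.
# "app" and the location path are placeholders for your own upstream.
location / {
    proxy_pass http://app:8080;

    # The TCP peer address, as a single value
    proxy_set_header X-Real-IP $remote_addr;
    # Appends $remote_addr to any incoming X-Forwarded-For chain
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header Host $host;
}
```

As noted further up the page, even with both headers set correctly, each app decides for itself which address it logs.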
-
With unraid 6.9 Unassigned Devices that are marked as passthrough now show their temp on the dashboard. Is it now possible to have it shown on the main page too? Read and write stats are showing on main now.
-
I just switched from a 2x E5-2660v2 system to a Ryzen 5 5600X system. Consuming close to 200W less under load.
-
On 10/13/2020 at 3:49 PM, binhex said:
Ahh, I've spotted the issue! You cannot use a custom bridge with a fixed IP in the same range as your LAN network, so you could use a fixed IP in another range that is different from the LAN network, or simply use the default 'bridge'.
Hey binhex, I think I am having a similar issue to dnLL. I am using PIA and I have switched to the new network already as part of my troubleshooting.
For the longest time I always had all my dockers on one independent VLAN so qbittorrentvpn has IP 10.15.1.57 and my unRAID host would be on another VLAN with IP 10.15.0.30. Since a few days ago I can no longer access qbittorrent from my other containers on the 10.15.1.0 VLAN (sonarr, radarr, reverse proxy). I can access it from my other subnets without issue. Like dnLL if I turn off the VPN I can access qbittorrent without issue from the 10.15.1.0 VLAN.
Looking at pfsense I am getting an entry like this (10.15.1.50 is my reverse proxy):
Google-fu tells me that TCP:SA is related to asymmetric routing, but configuring the floating rules does nothing to help. This kinda makes sense, because my reverse proxy would be accessing qbittorrent over the "switch" within unraid/docker, but for some reason qbittorrent is sending its reply to the default gateway. That does not explain why this issue only started with the 4.3.0 update, though; even if I downgrade, it still does not work. I even tried a whole new container and it is still not working.
I'm stumped. Is this the same/similar issue as dnLL?
-
7 minutes ago, dlandon said:
I mis-read the screen shot. You have those two disks marked as "Pass Thru". Temperatures don't show when a disk is passed through. The only thing shown is the file system and disk size.
Yeah. So if I turned off passthrough I would get the temp, but run the risk of mounting the device and corrupting it?
-
Just now, dlandon said:
Normally those are partitions on the same drive and temperatures would be redundant. This is not a typical UD disk setup.
Those are two separate physical disks.
-
8 minutes ago, dlandon said:
I need more information on what you are requesting. Please show more of the UD page. I'm assuming these are individual drives?
Yes, these are individual drives as far as UD is concerned. They are in a ZFS mirror. Basically, is it possible to get SMART attribute 194 to show under the temp column above?
-
I did a quick search of this thread and didn't see anything relevant so sorry if this has been asked before.
Can we have UD show the temp for zfs_member drives? It shows in the SMART info and just needs to be passed to the main page.
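Since the value is already in the SMART data, this is just attribute 194. A hedged sketch of pulling it from the CLI, i.e. the number the UD temp column would show (the device node is a placeholder, and the field position follows smartctl's standard attribute table):

```shell
#!/bin/bash
# Hedged sketch: read SMART attribute 194 (Temperature_Celsius) for a disk.
# /dev/sdb below is a placeholder for the zfs_member device.
read_temp() {
    # Column 10 of the attribute row is the raw value (degrees C)
    smartctl -A "$1" | awk '$1 == 194 { print $10 }'
}

if [ -b /dev/sdb ] && command -v smartctl >/dev/null 2>&1; then
    read_temp /dev/sdb
fi
```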
-
I have also been having issues with my network since enabling Host access to custom networks. Today I had both my switch and unRAID completely lock up, which has never happened before. (Not ruling out it being a different issue.) Another issue I have been having is Host access to custom networks being enabled but the shim interfaces never being created.
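For what it's worth, you can at least verify from the console whether the shim interfaces were created. A hedged one-liner (the shim-* prefix is an assumption based on how unRAID names these interfaces):

```shell
#!/bin/bash
# Hedged check: list shim interfaces that "Host access to custom networks"
# should create (e.g. shim-br0). The shim-* prefix is an assumption.
ip -o link show 2>/dev/null | awk -F': ' '{ print $2 }' | grep '^shim' \
    || echo "no shim interfaces found"
```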
-
I have been debating what to do next with my setup. I really like unRAID but I want ZFS for my storage. I want to make use of this plugin and while I am comfortable with the CLI I really wish there was a GUI. Any chance we can see that happen? Does something already exist?
Thanks
-
I switched to this plugin this past weekend. I used to tar my backups manually anyway, so it's nice to have it integrated. Question though: can we get an option where the dockers are updated and restarted before the verification? I just like to minimize downtime for my services as much as possible. Thanks
-
-
47 minutes ago, dlandon said:
Remote mounts are mounted when the array starts if they are set to auto mount. UD does not attempt another mount if the remote mount comes on-line.
You could set up a User Script on a cron to auto mount remote mounts with the following script:
/usr/local/sbin/rc.unassigned mount auto
This will mount any devices that are not mounted if they are set to auto mount. It doesn't affect any devices already mounted, but could add a lot of entries to the log. It will also mount any disks set to auto mount that are not mounted.
This is exactly what I needed! I can make a quick script that checks whether the mount point exists and, if it does not, runs this command.
Thank you!
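For anyone copying this later, a hedged sketch of the User Scripts entry described above: checking the mount point first means the helper only runs (and only logs) when a mount is actually missing. The share path is a placeholder; the helper path comes from dlandon's post above.

```shell
#!/bin/bash
# Hedged sketch: only invoke UD's auto-mount helper when the remote share's
# mount point is actually missing, keeping the log quiet otherwise.
ensure_mounted() {
    local mount_point="$1"
    if ! mountpoint -q "$mount_point"; then
        # Mounts every UD device flagged for auto mount
        /usr/local/sbin/rc.unassigned mount auto
    fi
}

# Share path is a placeholder -- run this from User Scripts on a cron
if [ -x /usr/local/sbin/rc.unassigned ]; then
    ensure_mounted "/mnt/remotes/my_nfs_share"
fi
```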
-
I have not been able to find an answer to if this is a bug or by design.
When I have an NFS mount configured for auto mount and it is not available at unRAID boot the device will remain unmounted. Once it becomes available the mount button will become green/orange and allow me to mount it. Should it not just auto mount the share when it sees it available?
I don't know if this is intended or a bug but it would be really nice if NFS shares would always auto connect when they become available.
[Support] binhex - qBittorrentVPN
in Docker Containers
Posted
I am experiencing the exact same issue on the latest version and unRAID 6.11-rc2. I just rolled back to :4.4.2-2-01 and immediately all my files began moving to the correct location. I have no idea why this is happening.