Caldorian

Everything posted by Caldorian

  1. Thanks Squid. For some reason, I could have sworn there was actual "Done" text at the bottom of the script window when the script completed before.
  2. Just updated my unraid system from 6.11.5 to 6.12.8. Now every time I perform a management action with my plugins (install a new one, upgrade an existing one, remove/delete one), the script window does its thing, and then at the end gets stuck on the line "Executing hook script: post_plugin_checks". If I click Done, everything responds as if the action completed successfully. I've attached the diagnostics from the system. unevan-diagnostics-20240302-1642.zip
  3. Trying to do some cleanup of files, and when I tried deleting a folder with 1000s of files in it, I started getting I/O errors: After that, I couldn't access the folder above the one I tried to delete, or the share that the folder was mounted in. I then found that I couldn't even get into one of the disks in my array from the terminal: I stopped the array, started it in maintenance mode, and tried running a disk check with no options. It told me there were outstanding logs. So I restarted the array fully, stopped it again, started in maintenance mode, and ran the disk check again with no options. Below is the log from that disk check run (edited to remove redundancy): I've started the array back up and things seem functional. I can access data on my other disks without issue, and data outside the folder I was trying to delete seems intact. Are there any suggestions on how I should proceed?
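     For reference, the checks I'm describing look roughly like this from the terminal, assuming the disk is XFS (disk number and device name are examples; newer Unraid releases use /dev/md1p1-style names):

        xfs_repair -n /dev/md1    # check only, makes no modifications
        # An "outstanding log" complaint generally means the journal needs
        # replaying first; starting the array normally once lets the mount
        # replay it, after which the check can be re-run clean.
        # Last resort, discards un-replayed metadata changes:
        # xfs_repair -L /dev/md1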
  4. Does this plugin implementation still cause issues when you have docker containers that run on br0 with their own IP addresses? On my server, I have several containers (swag, pihole, etc.) that I run on br0 with their own IPs. Swag in particular needs to be able to connect to other docker containers running on a custom docker network. The other interesting configuration on my server is a wireguard outbound VPN, with a couple of dockers routed through it. I'm also still running 6.11.5, if that matters.
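     For what it's worth, the custom-network piece of my setup is equivalent to something like this (container names are from my own setup; the exact commands Unraid issues may differ):

        # A user-defined bridge lets swag reach other containers by name
        # through Docker's embedded DNS, independent of the br0 containers.
        docker network create proxynet
        docker network connect proxynet swag
        docker network connect proxynet sonarr
        # From inside swag, "sonarr" now resolves to that container.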
  5. Thanks. Not quite sure what you're trying to say with the first part of that sentence, but I was able to convert all my connections over to a different username in a standard format. Seems to be working better now than when I first tried to do it several years ago.
  6. Several years ago when I first set up my unraid server, I followed these instructions to create a user on my server so that my Windows clients, where I use an MS Account for login, could access the unraid SMB shares directly without having to manually enter other credentials. It's been working fine for several years now. I upgraded from 6.10 to 6.11.1 this week, and this user can no longer access shares. Other user credentials whose usernames are "normal" work fine, but the user whose name is in [email protected] format doesn't work. And it's not just from Windows clients. I would also use this user credential to access the server from my iPhone (either browsing shares with VLC to play some videos, or with the Files app), and neither of those options works any more either. I have to edit my connections to use basic usernames. Can this please be investigated? Happy to supply whatever logs you want/need.
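     If it helps reproduce, authentication for the account can be tested from any Linux box with something like this (server and account names here are placeholders, not my real ones):

        # lists the shares if the email-style username authenticates
        smbclient -L //TOWER -U 'someuser@example.com'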
  7. Makes sense. To update them all, would I just change the Repository on my existing container to the synonymous lscr.io repository, or would I need to remove and re-create/re-install the containers with the right one?
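     My rough understanding of what changing the Repository field amounts to under the hood (a sketch only; image and container names are examples, and Unraid re-applies the rest of the template settings):

        docker pull lscr.io/linuxserver/sonarr
        docker rm -f sonarr
        # recreated with the same template settings, new repository
        docker run -d --name sonarr lscr.io/linuxserver/sonarr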
  8. Thanks. Definitely something I should have thought of on my own, so it's a good reminder that I could do that. Unfortunately, I've also got other containers where I don't have that control, or where it doesn't work (e.g. binhex-sabnzbd).
  9. A bit of a general question here: I've currently got a setup with this container where I've routed other binhex containers through it, as specified by Q24-27 of the VPN FAQ. One issue I've run into with this setup is that if I have multiple instances of a container I want routed through it (e.g. radarr), I can't, because I can't use alternate port mappings. I've also started looking at the new Wireguard tunnels available directly in unraid and routing dockers through them. How do these two methods compare to each other? Option 2 seems simpler and more flexible in being able to support multiple container instances, but is there something I'd be losing by going with it?
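     To illustrate the port-mapping limitation (container names assumed, sketch only):

        # With --net=container:, all published ports must live on the VPN container:
        docker run -d --name binhex-delugevpn -p 8112:8112 -p 7878:7878 binhex/arch-delugevpn
        docker run -d --name radarr --net=container:binhex-delugevpn linuxserver/radarr
        # A second radarr instance can't be remapped (e.g. 7879 -> 7878):
        # both would listen on 7878 inside the single shared network namespace.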
  10. Going through and doing some reviews of my system after updating from 6.9.2 to 6.10.3, and I'm noticing that my linuxserver containers seemingly come from multiple repositories now: linuxserver, lscr.io, & ghcr.io. Looking in CA, any of the ones that aren't from lscr.io aren't recognized as being installed. Is there any explanation of what's going on? Should I remove and re-install them all so they're all from the lscr.io repository? IMO, the linuxserver repository is the more desirable one, as those all have an easy-to-read format of linuxserver/<container>. The other two get truncated when looking at them in Unraid, as the names are too long (e.g. "ghcr.io/linuxse...rseerr").
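     As far as I can tell, these are the same images published to multiple registries, e.g. (image name illustrative):

        docker pull linuxserver/sonarr          # Docker Hub
        docker pull lscr.io/linuxserver/sonarr  # LinuxServer's registry
        docker pull ghcr.io/linuxserver/sonarr  # GitHub Container Registry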
  11. Just throwing my experience out there for people: I found that after I upgraded from 6.9.2 to 6.10.3, most of my binhex container /config folders were set to root:root, and the containers refused to run. The only ones that were set to nobody:users were the ones I've spun up in the last few months (the others are a couple of years old now). Things seem to be running normally on 6.10.3 after I chown'ed them all to nobody:users. So this was probably related to something that changed in these containers over the years, combined with some new docker permission/setting in 6.10 that caused things to flip out.
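     For anyone hitting the same thing, the fix was along these lines (appdata paths are examples from my system):

        # Unraid's default ownership is nobody:users (uid 99, gid 100)
        chown -R nobody:users /mnt/user/appdata/binhex-delugevpn
        chown -R 99:100 /mnt/user/appdata/binhex-sabnzbd   # same thing, by id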
  12. Does anyone have any thoughts on how Cornelious' container compares to Hotio's? The one big thing I notice is all the extra mappings pre-defined in Cornelious' template, but most of them are within the standard container appdata folder and seem redundant. I'm also curious how quickly each one updates when the base docker image updates.
  13. Sorry, I guess I wasn't clear; my SWAG/nginx configuration works fine, and I can access the webUI through the reverse proxy without issue. What I'm trying to update is my container-to-container configuration (e.g. sonarr to sabnzbd) so that the DNS name/port sonarr uses goes through the reverse proxy rather than communicating with the target container directly. Mostly so that I can save myself a small bit of configuration in the future, where I only need to update the RP rather than every container that might use the target.
  14. I was recently playing with @binhex's VPN containers, and following along with Q24-Q27 of his FAQ, managed to route a couple of my containers through the VPN container. Awesome. But one thing I noticed as I was removing port mappings on some containers and adding them to others was that modified port mappings weren't showing up on either the overall docker view, or the "Show docker allocations" section. So I was curious, what's the actual trigger for mappings to show up in those sections?
  15. So a quick follow up: I managed to google and find the VPN docker FAQ, and was able to set up my sabnzbd dockers with the container:delugevpn option. Since the network type on them is set to None, I'm guessing I can reliably assume their outbound connections are going through the VPN tunnel, without having to test further to guarantee it? I've got one quick follow-up question: I've got swag in front of things as an internal-only reverse proxy so I can access my containers' web UIs with nice DNS names (e.g. https://sabnzbd.mydomain.com). I'd love to be able to use those same DNS names for the app connections as well. E.g. in Sonarr, my download client config for SAB is set with the local hostname and port (8080) of the docker container. If I try using the SWAG reverse proxy name and port (443, Use SSL), it doesn't work. Is this possible?
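     For reference, how I've been testing the two paths (IP, port, and hostname are placeholders for my setup):

        # direct: sabnzbd's port is published on the delugevpn container
        curl -I http://192.168.1.10:8080
        # via the swag reverse proxy (-k because of the internal cert)
        curl -kI https://sabnzbd.mydomain.com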
  16. So I've been using delugeVPN for a while now, but decided to start getting into usenet as well. Got SABnzbdvpn set up with the same VPN creds as my delugeVPN container, and it works, but it seems redundant to have both containers separately logged in. Is there any way I could set up the vpn-less sabnzbd container to route things through the privoxy setup on delugevpn?
  17. There are a couple of bug fixes in 1.8.5 that could help with the stability of the connection (assuming I'm interpreting them correctly):
  18. Any updates for the client coming? The current docker version is 1.8.2, but the Linux version is at 1.8.6.
  19. Anyone else find that their unraid zerotier client drops offline for hours on end at random times? I've got the client set up on my unraid server, as well as on a raspberry pi and other clients. On my pi, I've also set it up as an inbound gateway for the same network my unraid server is on. What I keep finding is that my pi and other clients are up and running without any issue, able to talk to each other, etc. But at random times my unraid zerotier client will go offline and stay that way for hours, then randomly come back online. And unfortunately, I haven't been able to find any logs or such in the container to identify what the issue might be. Any help would be appreciated.
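     For reference, what I've checked so far when it drops (container name assumed to be zerotier):

        docker exec zerotier zerotier-cli info    # node status: ONLINE/OFFLINE
        docker exec zerotier zerotier-cli peers   # reachability of roots/peers
        docker logs --tail 100 zerotier           # recent container output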
  20. Perfect, I see it now. So all the configs are stored as XML files. Awesome. Thanks for that.
  21. Where in the flash/mount is the configuration then? I.e., if I wanted to see all the paths/variables/etc. that are set up for each docker?
  22. Is there any way that I can export my docker configurations? CA Backup only backs up the appdata folder. I'm looking for a way to back up the actual definitions so I could wipe/restore them if necessary. (In my case, I ended up with a corrupted docker image directory and had to wipe it out. I was able to restore everything from Previous Apps, but if that hadn't been available, it would have been a royal pain trying to remember all the configurations and enter them in again.)
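     For anyone searching later: the definitions turned out to be XML template files on the flash drive, so a manual export is something like this (path as found on my system; verify on yours):

        # each container's repository/ports/paths/variables live in one XML file
        ls /boot/config/plugins/dockerMan/templates-user/
        # copy them somewhere safe off the flash drive
        mkdir -p /mnt/user/backups/docker-templates
        cp /boot/config/plugins/dockerMan/templates-user/*.xml /mnt/user/backups/docker-templates/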
  23. Thanks. Upgraded to 1.8.2; the issue is still there, but things are working correctly. I don't know anything about building docker images, but this guy seems to have cleared it when he was getting the same error for a different container he was building.
  24. Just installed this on my unraid server. Any time I try to run a zerotier-cli command from the docker instance, I get a dozen lines of the following before the command results: Any fix for this? Other than that, things look to be working well. I also installed it on my raspberry pi/pi-hole instance to use as a bridge to get access to my whole internal network. The plan is to do the same on a second Pi at my parents' for remote access to their stuff.
  25. Ended up clearing the cookies for my home domain to resolve the issue. Weird...