Everything posted by aglyons

  1. I started a parity swap procedure last night around 11pm. I've been watching it for a while now, and that's when I noticed the UI bug: the "Current Operation Started" date and the running time displayed are out of whack. I started the operation at 12/12/23 11:41 pm. The current date today is 12/13/2023 8:36 am, yet the system thinks this process started "twenty-eight days ago". I suspect this is purely a UI issue and I doubt it has any effect on anything serious, but who knows better than the devs, which brings me here!
  2. Me too. And then TA shuts down after a series of connection attempts. I'm really not a fan of multiple container operations for a single service to function.
  3. AFAIK these are set in the config. What are the read/write permissions on the ..../appdata/Nginx-Proxy-Manager-Official/data/logs files? Make sure that they are set to this
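If it helps, here's a rough sketch of how to inspect and normalize permissions on that logs folder from the Unraid console. The leading part of the path and the target mode (664, owner 'nobody') are assumptions about a typical Unraid appdata layout, not something confirmed in the thread:

```shell
# Fill in your actual appdata root (e.g. under /mnt/user) - placeholder here
LOGDIR="/path/to/appdata/Nginx-Proxy-Manager-Official/data/logs"

# Show current owner, group, and mode for every log file
stat -c '%U:%G %a %n' "$LOGDIR"/*

# Normalize (needs root): Unraid's usual owner/group, rw for owner+group
chown nobody:users "$LOGDIR"/*.log
chmod 664 "$LOGDIR"/*.log
```

If NPM still complains, compare the mode of the parent directory too, since that's what the "insecure permissions" check looks at.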
  4. @Kilrah BINGO! That fixed it. Running on port 80! Thx very much for the tip!
  5. That would make sense in normal situations, but Unraid's docker networking is a little.....unique from what I understand. The default bridge is hooked into the primary NIC of the server. If you want to run your docker containers on a secondary NIC (like me), you have to run that NIC in bridge mode with VLANs. Each container on that network gets its own IP on the host network. I've set up a number of other containers that use 8080 or the like, but when you flip to the custom bridge network, it ignores the container default and uses the defined variable as the port. In my case, 80. In this template, the reverse seems to be the case: anything other than bridge forces the predefined container port and ignores the provided port variable. This is the first time I've come across a container that acts like this.
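The distinction being described can be sketched with plain docker commands. This is illustrative only: the network name, IP, and image are placeholders, and whether any given template honours a port variable on a custom network is exactly what's in question here:

```shell
# On the default bridge, -p maps a host port to the container's port,
# so the app can keep listening on 8080 internally:
docker run -d -p 80:8080 some/image     # reachable on <host-ip>:80

# On a custom macvlan/ipvlan network the container gets its own IP,
# so -p mappings are ignored and the app's own listen port is what
# clients hit directly:
docker run -d --network br0.2 --ip 192.168.202.10 some/image
# reachable on 192.168.202.10:<whatever port the app itself binds>
```

That's why, on a dedicated-IP network, changing the port usually means changing what the app binds to (via its own config or an env var the image supports) rather than a port mapping.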
  6. Revisiting the port concerns. What if you are not running the container in bridge mode? I have a dedicated IP assigned and the container is stuck at port 8080. Could this be down to the template config? I see that line 30 is setting 8080 hardcoded.
  7. NPM v2.10.3, NPM > Settings > Default Site. I have this set to redirect to google.com. I tested it by hitting my public IP, and the redirect works for 443 but not for 80. I then used an old subdomain I had set up which was still pointing to my IP. If I hit the domain naked, I get the unknown-domain redirection. If I add the protocol 'http', I get the redirection. If I add the protocol 'https', I don't get the redirection. This may be something I need to submit to Git
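For anyone wanting to reproduce this check from a terminal, curl makes the comparison easy (the subdomain is a placeholder; `-I` fetches only the response headers so you can see whether NPM answers with a 3xx status and a Location header on each port):

```shell
# Port 80: expect an HTTP 3xx plus a "Location:" header if the
# default-site redirect is working
curl -sI http://sub.example.com | head -n 3

# Port 443: same check over TLS (-k skips cert validation, useful
# when the old subdomain no longer has a valid cert)
curl -skI https://sub.example.com | head -n 3
```

Capturing both outputs would also make a cleaner bug report for the NPM GitHub issue.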
  8. That's interesting. How long have you had your TA installed for? I installed mine about 3 months ago and that was already part of the community template, if I recall correctly. I already had that variable in place since installing. Maybe that's why I haven't been seeing these errors crop up. I also installed the Docker patch a while ago when I noticed the Unknown Version issue pop up. Has there been any movement on a docker image that is self contained? I read somewhere a while back that this was something that was being looked into to simplify the TA setup process. One container to rule them all!
  9. Would be nice if there was the best of both worlds: write a DL to the cache but have the mover put it on the array, while still bypassing FUSE for the write.
  10. Came across this video the other day. I'm wondering if this is even a good idea or not. Would this have any effect on shares marked for cache that 'should' be moved to the array? Are they still moved or not?
  11. Hey Ford! Thanks for the deep dive. I think I can follow this. I've been swamped with the other stuff that pays the bills. I'll go through this with a fine tooth magnifying glass and see if I can put 2 and 2 together. Thx A. PS ......and always carry a towel.
  12. Hey Ford! So networking gear is Unifi, so VLANs are not a problem there. By default Unifi allows inter-VLAN traffic; you have to block it if you don't want it. But the majority of what you were talking about flew right past me lol. I went back to using MACVLAN as, being a geek, I like to see all the servers and PCs on the network. IPVLAN use plays havoc with Unifi as clients pop up and drop off randomly. The MACVLAN issue was when the primary NIC was used for bridging, creating br0, while using MACVLAN. Using a second NIC alleviates that problem. Thanks for jumping in and trying to help out. If you could dumb it down a bit for a lunkhead, I'd appreciate the translation!
  13. So I've searched around like mad trying to find some tutorials that would happen to show my 'possible' use case. So far, no luck.

My UR has 3 NICs in it; I am currently using only two of them. NIC1: 10gbe is primary - 192.168.200.0/24. NIC2: 1gbe - 192.168.202.0/24 (tagged vlan2 on Unifi). I currently have bridging turned off on both. Docker network mode is set to MACVLAN.

The default bridge network in UrDocker is hooked into NIC1, and I do have some containers on there that I want to keep there for the higher bandwidth. Other containers I have on NIC2 to keep them somewhat separate from the primary network and route traffic through NPM (also on NIC2). But there are containers on the bridge network that I would rather have on the 202.0/24 network.

I've tried pulling the IP assigned to NIC2 and setting up a VLAN-ID2 with the 202.0/24 network and assigning the IP manually there. I also added another Unifi network as VLAN-ID201 201.0/24 and assigned an IP on that network in the event I want to put my HA VM on there (that's another puzzle, VM networking in UR). But here's the thing: once I add VLAN201 in the networking settings, the gateway for VLAN2 disappears in the Docker settings, and any container assigned to br1.2 can't get out and nobody can access the services.

My thought was I wanted to have the default bridge (200.0/24) running as a bridge, 202.0/24 running as a MACVLAN, but also a 202.0/24 bridge that is accessed from NIC2's assigned IP. So I would have: Bridge (200.0 > 172.1), eth1-bridge (202.2 > 172.2), eth1-MACVLAN-2 (202.0/24), eth1-MACVLAN-201 (201.0/24). Is this setup even possible using manual custom Docker networks?

Also, if anyone knows of any video series that plainly lays out Docker networks, please share! I've seen a bunch so far, but when I try to implement what they've shown I don't get the same results.
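For what it's worth, the macvlan half of that layout can be created manually from the command line. This is only a sketch: the gateway addresses (.1), the parent interface names (eth1 and a tagged eth1.201 sub-interface), and whether Unraid keeps manually created networks across Docker restarts are all assumptions to verify, not facts from the thread:

```shell
# macvlan on the second NIC for the 192.168.202.0/24 network
docker network create -d macvlan \
  --subnet=192.168.202.0/24 --gateway=192.168.202.1 \
  -o parent=eth1 eth1-macvlan-2

# macvlan on the tagged VLAN 201 sub-interface for 192.168.201.0/24
docker network create -d macvlan \
  --subnet=192.168.201.0/24 --gateway=192.168.201.1 \
  -o parent=eth1.201 eth1-macvlan-201

# Attach a container with a static IP on the 202 network
docker run -d --network eth1-macvlan-2 --ip 192.168.202.50 some/image
```

One known macvlan caveat: the host itself cannot reach containers on a macvlan network through the same parent interface, which may explain some "can't access the services" symptoms independently of the missing-gateway issue.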
  14. I figured "But you do you." would be the end of that convo.
  15. Setting your IMG to 20GB and having it fill up can crash every container you have running. A 2tb ssd will take a heck of a lot longer to fill up and you would probably notice this before any damage happens. But you do you. Best bet would be to file a post on the Github for the project. - https://github.com/NginxProxyManager/nginx-proxy-manager/issues
  16. On the advice of someone else, I switched from using docker.img to a DIR. It's more flexible with how much data can be written, as the IMG approach has a fixed virtual disk size; the DIR route is the capacity of the share it's being stored in. You can change that in the Docker settings after you shut down Docker. You will have to reinstall all your containers, but that's fairly simple as they are all listed in 'Previous Apps' in the CA.
  17. First up, I tried looking at the Unraid docs to try and figure it out myself, but there is nothing there! I followed along, like everyone else. But I ran into something that confuses the heck out of me.

I have a second NIC, always have, and on that NIC I had bridging turned off. Each container assigned to eth1 would be on network 168.202.x, defined as vlan2 on my UDMProSE, with an IP assigned manually to each container. This is the situation for any container that I want to expose to the internet via NPM. But some containers are not exposed and don't need to have a dedicated IP, so those I stuck on the bridge for local and VPN access only (Radarr, Sonarr etc).

But once I turned on bridging for eth1, br1 showed up and eth1 disappeared! All the containers I had set up on eth1 were offline.

On a side note: 'Bridge' is still listed in the networks, but choosing that uses the eth0 NIC network 168.200.x even though bridging is disabled for eth0. Why isn't that bridge using the eth1 NIC for bridge mode, since bridging IS enabled there? Same for 'Host'.
  18. Regarding the "because parent directory has insecure permissions" errors that were discussed in earlier threads. As I understand it, NPM is looking at the permissions of the parent folder. That folder is in the appdata folder on UR. UR wants the ownership and permissions of its contents as 'nobody' and 777; there's even the docker permissions tool that fixes this in one click. But if UR sets the permissions to what it needs, NPM will complain that this is insecure. How can we get around this problem? I'm sorry if this is a rehash and has already been resolved; I've spent the last 30 minutes searching the web for answers with no luck.
  19. Read through this thread on the GitHub https://github.com/ytdl-org/youtube-dl/issues/31530
  20. So I'm sure I'm not the only one that has had some pains with some Docker containers not assigning the right ownership or permissions on files that they create in the shared folders. Or if an SMB user puts files onto a network share. In my case, I couldn't change that in the Docker container I was running. So while I am not a scripter, I did stay at a Holiday-Inn Express last night! I fired up ChatGPT and got to work on writing a script that would fix this problem in real-time.

I think it's pretty self-explanatory if you go through the code. This will monitor files/folders for whenever a file or folder is touched in some way; it will then check the permissions and ownership and fix them accordingly. I added array variables so that you can define which targets you want to monitor, plus an array variable to exclude certain file extensions from being changed. Add the share paths you want to monitor, each in quotes, then add the file extensions you want to exclude, each in quotes. The script monitors recursively, so you only have to enter the top level; everything contained will be monitored.

Pop it into the userscripts add-on and set it to start at array start. So far in my testing it's been working a treat! But do your own testing to make sure that it is working how you want it to before you set it to start automatically.
#!/bin/bash

# Monitored directories
monitored_dirs=("/mnt/user/path1/" "/mnt/user/path2/")

# Excluded extensions
excluded_exts=("ext1" "ext2")

# Function to check ownership and permissions of files and directories
check_obj() {
    if [[ -e "$1" ]]; then # File or directory exists
        # Exclude files with excluded extensions
        for ext in "${excluded_exts[@]}"; do
            if [[ "$1" == *".$ext" ]]; then
                return
            fi
        done
        # Check ownership
        owner=$(stat -c '%U' "$1")
        if [[ "$owner" != "nobody" ]]; then
            # Set ownership to nobody
            chown nobody "$1"
        fi
        # Check permissions
        if [[ -d "$1" ]]; then
            # Directory
            perms=$(stat -c '%a' "$1")
            if [[ "$perms" != "777" ]]; then
                # Set permissions to 'drwxrwxrwx'
                chmod 777 "$1"
            fi
        else
            # File
            perms=$(stat -c '%a' "$1")
            if [[ "$perms" != "666" ]]; then
                # Set permissions to '-rw-rw-rw-'
                chmod 666 "$1"
            fi
        fi
    fi
}

# Watch for changes to files and directories
inotifywait -r -m -e create,modify,move,attrib "${monitored_dirs[@]}" | while read path action file; do
    # Check ownership and permissions of changed object
    check_obj "$path/$file"
done
  21. So, being that I am not a scripter, and that ChatGPT is so popular and awesome, I had ChatGPT write a bash script! Perhaps someone with some more coding experience can review it and make sure this would work.

#!/bin/bash

# Function to check ownership and permissions of files and directories
check_obj() {
    # Check ownership
    owner=$(stat -c '%U' "$1")
    if [[ "$owner" != "nobody" ]]; then
        # Set ownership to nobody
        chown nobody "$1"
    fi
    # Check permissions
    if [[ -d "$1" ]]; then
        # Directory
        perms=$(stat -c '%a' "$1")
        if [[ "$perms" != "777" ]]; then
            # Set permissions to 'drwxrwxrwx'
            chmod 777 "$1"
        fi
    else
        # File
        perms=$(stat -c '%a' "$1")
        if [[ "$perms" != "666" ]]; then
            # Set permissions to '-rw-rw-rw-'
            chmod 666 "$1"
        fi
    fi
}

# Watch for changes to files and directories
# (note: -r must come before -e, otherwise inotifywait would read '-r'
# as an event name)
inotifywait -m -r -e create,delete,modify,move,attrib /path/to/watch | while read path action file; do
    # Check ownership and permissions of changed object
    check_obj "$path/$file"
done

With the following instructions:
  22. Periodically, using the file manager plugin, select the appdata share and reset the ownership to 'nobody' and the permissions to 'read/write' for all three levels. This is the default Unraid's built-in permissions system needs to function. Some Docker containers don't have explicit settings for GUID and PUID, or UMASK support; as such, they write files with internal container permissions, and it messes with Unraid on the host side. I wish there was an automation plugin that could watch for this happening and correct it in real-time; all my searches have led to people telling me to script it myself. There is a plugin specifically for this purpose, "Docker Safe New Perms", in the tools section, top row. But it is a manual process, and it hits all shares across the whole server. Probably smart to do now and then as well.
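For anyone curious what that reset amounts to under the hood, here's a rough shell sketch. The path is illustrative, the chown/chmod need root, and "Docker Safe New Perms" remains the supported way to do this server-wide on Unraid:

```shell
# Hypothetical manual version of the permissions reset described above.
# TARGET is illustrative; on Unraid this would be a share under /mnt/user.
TARGET="/mnt/user/appdata"

chown -R nobody:users "$TARGET"               # Unraid's default owner/group
find "$TARGET" -type d -exec chmod 777 {} +   # directories: rwxrwxrwx
find "$TARGET" -type f -exec chmod 666 {} +   # files: rw-rw-rw-
```

Scoping it to one share like this avoids the plugin's all-shares behaviour, at the cost of having to run it per share.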
  23. Screenshot your SS template settings and post them here. Also, screenshot the settings for the share where you are saving DLs to.