Caldorian

Members • 70 posts

Posts posted by Caldorian

  1. Just updated my unraid system from 6.11.5 to 6.12.8. Now every time I perform a management action with my plugins (install a new one, upgrade an existing one, remove/delete one), the script window does its thing and then, at the end, gets stuck on the line "Executing hook script: post_plugin_checks". If I click Done, everything behaves as if the action completed successfully. I've attached the diagnostics from the system.


    unevan-diagnostics-20240302-1642.zip

  2. While doing some file cleanup, I tried deleting a folder with thousands of files in it and started getting I/O errors:

    Quote

    rm: cannot remove 'Backups/Files/more folders/IMG_2373.JPG': Structure needs cleaning
    rm: cannot remove 'Backups/Files/more folders/IMG_2472.JPG': Input/output error
    <dozens more I/O error listings>

    After that, I couldn't access the folder above the one I tried to delete, or the share that the folder was mounted in. I then found that I couldn't even get into one of the disks in my array from the terminal:

    Quote

    root@MyServer:/mnt/disk3# ls
    /bin/ls: cannot open directory '.': Input/output error

    I stopped the array, started it in maintenance mode, and tried running a disk check with no options. It told me there was an outstanding log to replay, so I restarted the array fully, stopped it again, started it in maintenance mode, and ran the disk check again with no options. Below is the log from that disk check run (edited to remove redundancy):

    Quote

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
            - scan filesystem freespace and inode maps...
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
            - agno = 6
            - agno = 7
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 0
            - agno = 3
            - agno = 5
            - agno = 2
            - agno = 1
            - agno = 7
            - agno = 6
            - agno = 4
    Phase 5 - rebuild AG headers and trees...
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
    Metadata CRC error detected at 0x463fa8, xfs_dir3_free block 0x3a3812a40/0x1000
    free block 16777216 for directory inode 15511924341 bad header
    rebuilding directory inode 15511924341
    Metadata CRC error detected at 0x463fa8, xfs_dir3_free block 0x3a3812a38/0x1000
    free block 16777216 for directory inode 15511924372 bad header
    rebuilding directory inode 15511924372
    Metadata CRC error detected at 0x4619e8, xfs_dir3_leaf1 block 0x3a3812a30/0x1000
    leaf block 8388608 for directory inode 15514174026 bad CRC
    rebuilding directory inode 15514174026
    Warning: recursive buffer locking at block 15628053040 detected
    Metadata CRC error detected at 0x4619e8, xfs_dir3_leaf1 block 0x3a3812a28/0x1000
    leaf block 8388608 for directory inode 15608717125 bad CRC
    rebuilding directory inode 15608717125
    Warning: recursive buffer locking at block 15628053032 detected
    Warning: recursive buffer locking at block 15628053040 detected
    Warning: recursive buffer locking at block 15628053040 detected
    (Repeated 50 times)
    Warning: recursive buffer locking at block 15628053040 detected
    Metadata CRC error detected at 0x4619e8, xfs_dir3_leaf1 block 0x3a3812a20/0x1000
    leaf block 8388608 for directory inode 15608782647 bad CRC
    rebuilding directory inode 15608782647
    Warning: recursive buffer locking at block 15628053024 detected
    Warning: recursive buffer locking at block 15628053032 detected
    Warning: recursive buffer locking at block 15628053032 detected
    (Repeated 65 times)
    Warning: recursive buffer locking at block 15628053032 detected
    Metadata CRC error detected at 0x4619e8, xfs_dir3_leaf1 block 0x3a3812a18/0x1000
    leaf block 8388608 for directory inode 15608955416 bad CRC
    rebuilding directory inode 15608955416
    Warning: recursive buffer locking at block 15628053016 detected
    Warning: recursive buffer locking at block 15628053024 detected
    Warning: recursive buffer locking at block 15628053024 detected
    (Repeated 64 times)
    Warning: recursive buffer locking at block 15628053024 detected
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...
    cache_purge: shake on cache 0x509600 left 4 nodes!?
    cache_purge: shake on cache 0x509600 left 4 nodes!?
    cache_zero_check: refcount is 1, not zero (node=0x152714237a10)
    cache_zero_check: refcount is 1, not zero (node=0x15271422c610)
    cache_zero_check: refcount is 1, not zero (node=0x152733386410)
    cache_zero_check: refcount is 1, not zero (node=0x152720b5f810)
    done

    I've started the array back up and things seem functional. I can access data on my other disks without issue, and data outside the folder I was trying to delete seems to be intact. Are there any suggestions on how I should proceed?
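
    For reference, the terminal equivalent of what I ran is roughly the following. This is only a sketch: it assumes disk3 maps to /dev/md3 (it may appear as /dev/md3p1 on newer releases) and that the array is started in maintenance mode; the GUI disk check is just a front end for xfs_repair.

    xfs_repair -n /dev/md3    # check only: report problems without changing anything
    xfs_repair /dev/md3       # actual repair; if it complains about a dirty log,
                              # start the array normally once to replay the log, then re-run
    xfs_repair -L /dev/md3    # last resort only: zeroes the log and can lose recent metadata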

  3. Does this plugin implementation still cause issues when you have docker containers that run on br0 with their own IP addresses? On my server, I have several containers (swag, pihole, etc.) set up that way. Swag in particular needs to be able to connect to other docker containers running on a custom docker network (there's a sketch of that network setup at the end of this post).

     

    The other interesting part of my configuration is an outbound WireGuard VPN, with a couple of dockers that route through it. I'm also still running 6.11.5, if that matters.
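
    For context, the custom docker network side of it is just a user-defined bridge, created along these lines (the network and container names here are illustrative, not necessarily what's on my server):

    docker network create proxynet              # user-defined bridge for the proxied containers
    docker network connect proxynet swag        # attach swag so it can reach backends by container name
    docker network connect proxynet sonarr      # example backend behind the proxy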

  4. Several years ago, when I first set up my unraid server, I followed these instructions to create a user on my server so that my Windows clients, where I log in with a Microsoft account, could access the unraid SMB shares directly without having to manually enter separate credentials. It's been working fine for several years.

     

    I upgraded from 6.10 to 6.11.1 this week, and this user can no longer access shares. Other users whose usernames are "normal" work fine, but the user whose name is in email-address format doesn't. And it's not just from Windows clients: I also use this user's credentials to access the server from my iPhone (either browsing shares with VLC to play videos, or with the Files app), and neither of those works any more either. I have to edit my connections to use the basic usernames.

     

    Can this please be investigated? Happy to supply whatever logs you want/need.
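
    In the meantime, if it helps with reproducing: the difference should also be visible from any Linux box with smbclient. The usernames below are placeholders, but the pattern is the point.

    smbclient -L //MyServer -U someuser               # plain username: lists shares fine
    smbclient -L //MyServer -U 'someone@example.com'  # email-style username: fails after the 6.11.1 upgrade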

  5. On 8/17/2022 at 7:33 AM, Squid said:

    You really want to use the lscr one as that is the preferred one. Docker Hub (what you're referring to as linuxserver) still works and is identical to lscr, but may at some point in the distant future disappear.

    Makes sense. To update them all, would I just change the Repository on each existing container to the synonymous lscr.io repository, or would I need to remove and re-create/re-install the containers with the right one?
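
    For my own notes, the change being suggested amounts to swapping the repository prefix on each container, e.g. (sonarr used purely as an illustration):

    docker pull linuxserver/sonarr            # Docker Hub form currently in my templates
    docker pull lscr.io/linuxserver/sonarr    # equivalent lscr.io form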

  6. 8 hours ago, JonathanM said:

    Since the app controls the listening IP, you should simply change the IP in radarr itself.

    Thanks. Definitely something I should have thought of on my own, so a good reminder that I could do that. Unfortunately, I've also got other containers where I don't have that control, or where it doesn't work (e.g. binhex-sabnzbd).

  7. A bit of a general question here: I've currently got a setup with this container where I've routed other binhex containers through it, as specified by Q24-27 of the VPN FAQ. One issue I've run into with this setup is that if I have multiple instances of a container I want routed through it (e.g. radarr), I can't, because I can't use alternate port mappings. I've also started looking at the new WireGuard tunnels available directly in unraid and routing dockers through them.

    How do these two methods compare to each other? Option 2 seems simpler and more flexible in being able to support multiple container instances, but is there something I'd be losing by going with it?
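
    For anyone wondering what the port-mapping limitation in option 1 looks like in practice: docker refuses to publish ports for a container that joins another container's network namespace. A rough illustration (container names and ports are just examples):

    # with the FAQ approach the routed container joins the VPN container's network,
    # so trying to remap a second radarr instance to another host port is rejected:
    docker run -d --name radarr2 --net=container:binhex-delugevpn -p 7879:7878 linuxserver/radarr
    # -> rejected with: "conflicting options: port publishing and the container type network mode"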

  8. I'm going through and reviewing my system after updating from 6.9.2 to 6.10.3, and I'm noticing that my linuxserver containers now seemingly come from multiple repositories: linuxserver, lscr.io, and ghcr.io. Looking in CA, the ones that aren't from lscr.io aren't recognized as being installed.

    Is there any explanation of what's going on? Should I remove and re-install them all so they're all from the lscr.io repository?

    IMO, the plain linuxserver repository is the more desirable one, as all of those have the easy-to-read format linuxserver/<container>. The other two get truncated when viewed in Unraid because the names are too long (e.g. "ghcr.io/linuxse...rseerr").
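
    For anyone else wanting to see the mix on their own system, it shows up with a one-liner from the terminal:

    docker ps -a --format 'table {{.Names}}\t{{.Image}}'    # container name vs. the repository its image came from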

  9. On 5/3/2022 at 8:49 AM, binhex said:

    Indeed! All files and folders in the /config folder should be set to owner nobody, group users. The ONLY exception to this is the supervisord.log files, which are written as root, as supervisor runs as the root user.

    Just throwing my experience out there for people: I found that after I upgraded from 6.9.2 to 6.10.3, most of my binhex containers' /config folders were set to root:root, and those containers refused to run. The only ones that were set to nobody:users were the ones I'd spun up in the last few months (the others are in the range of a couple of years old now). Things seem to be running normally on 6.10.3 after I chowned them all to nobody:users.

     

    So this was probably related to something that changed in these containers over those couple of years, combined with new docker permissions/settings in 6.10 that then caused things to flip out.
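
    For anyone hitting the same thing, the chown I ran was simply recursive per container. The appdata path below is a typical layout, not necessarily yours; adjust it to wherever your /config folders actually live.

    chown -R nobody:users /mnt/user/appdata/binhex-delugevpn
    chown -R nobody:users /mnt/user/appdata/binhex-sabnzbdvpn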

  10. Does anyone have any thoughts on how Cornelious' container compares to Hotio's? The one big thing I notice is all the extra mappings pre-defined in Cornelious' template, but most of them sit within the standard container appdata folder and seem redundant. I'm also curious how quickly each one updates when the base docker image updates.

  11. 2 hours ago, JonathanM said:

    If you can access the webui with an ip and port, use that in the reverse proxy. I.E. if you can use http://192.168.0.2:8989 to access sonarr, that is the entry you would use in the nginx configuration to point to sonarr.mydomain.com

    Sorry, I guess I wasn't clear; my SWAG/nginx configuration works fine, and I can access the web UI through the reverse proxy without issue. What I'm trying to update is my container-to-container configuration (e.g. sonarr to sabnzbd) so that it uses the reverse proxy DNS name/port rather than communicating with the target container directly. Mostly so that I can save myself a bit of configuration in the future: I'd only need to update the RP rather than every container that might use the target.
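
    For what it's worth, a quick way to test whether a given container can even reach the proxy name is an exec'd curl. This assumes curl exists in the image and that sabnzbd.mydomain.com resolves to the swag host from inside the container's network:

    docker exec -it sonarr curl -skI https://sabnzbd.mydomain.com    # -I: headers only, -k: ignore the internal cert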

  12. I was recently playing with @binhex's VPN containers and, following along with Q24-Q27 of his FAQ, managed to route a couple of my containers through the VPN container. Awesome. But one thing I noticed as I was removing port mappings from some containers and adding them to others was that the modified port mappings weren't showing up in either the overall docker view or the "Show docker allocations" section.

     

    So I was curious, what's the actual trigger for mappings to show up in those sections?

  13. On 3/25/2022 at 1:12 PM, Caldorian said:

    So I've been using delugeVPN for a while now, but decided to start getting into usenet as well. I got SABnzbdvpn set up with the same VPN creds as my delugeVPN container, and it works, but it seems redundant to have both containers separately logged in.

     

    Is there any way I could set up the vpn-less sabnzbd container to route things through the privoxy setup on delugevpn?

    So a quick follow-up: I managed to google my way to the VPN docker FAQ and was able to set up my sabnzbd docker with the container:delugevpn option. Since the network type on it is set to None, I'm guessing I can reliably assume that its outbound connections are going through the VPN tunnel without having to test it further to guarantee it? (For anyone who does want to verify, there's a quick check at the end of this post.)

     

    I've got one more follow-up question: I've got swag in front of things as an internal-only reverse proxy so I can access my containers' web UIs with nice DNS names (e.g. https://sabnzbd.mydomain.com). I'd love to be able to use those same DNS names for the app-to-app connections as well. For example, in Sonarr, my download client config for SAB is set with the local hostname and port (8080) of the docker container. If I try using the SWAG reverse proxy name and port (443, Use SSL), it doesn't work. Is this possible?
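
    For anyone who does want to verify rather than assume, a cheap sanity check is to compare public IPs from the host and from inside the routed container. ifconfig.io is just one of many IP-echo services, and this assumes curl is present in the container image:

    curl -s ifconfig.io                                  # from the unraid terminal: my real WAN IP
    docker exec -it binhex-sabnzbd curl -s ifconfig.io   # from the routed container: should show the VPN endpoint instead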

  14. So I've been using delugeVPN for a while now, but decided to start getting into usenet as well. I got SABnzbdvpn set up with the same VPN creds as my delugeVPN container, and it works, but it seems redundant to have both containers separately logged in.

     

    Is there any way I could set up the vpn-less sabnzbd container to route things through the privoxy setup on delugevpn?

  15. There are a couple of bug fixes in 1.8.5 that could help with the stability of the connection (assuming I'm interpreting them correctly):
     

    Quote
    • Merge a series of changes by Joseph Henry (of ZeroTier) that should fix some edge cases where ZeroTier would "forget" valid paths.
    • Minor multipath improvements for automatic path negotiation.

     

  16. Anyone else find that their unraid zerotier client drops offline for hours on end at random times?

    I've got the client set up on my unraid server as well as on a raspberry pi and other clients. The pi is also configured to act as an inbound gateway for the same network my unraid server is on.

    What I keep finding is that my pi and other clients are up and running without any issue, able to talk to each other, etc. But at random times my unraid zerotier client will go offline and stay that way for hours, then randomly come back online. Unfortunately, I haven't been able to find any logs or the like in the container to help identify what the issue might be.

     

    Any help would be appreciated.
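
    For reference, what I've been checking when it drops is limited to the container log (which hasn't shown much) and the standard zerotier-cli status commands, with ZeroTier being the container name from this template:

    docker logs --tail 100 ZeroTier                   # whatever the container has logged recently
    docker exec ZeroTier zerotier-cli info            # node status: ONLINE / OFFLINE
    docker exec ZeroTier zerotier-cli listnetworks    # per-network status (OK, ACCESS_DENIED, etc.)
    docker exec ZeroTier zerotier-cli peers           # reachable peers and the paths in use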

  17. Is there any way that I can export my docker configurations? CA Backup only backs up the appdata folder. I'm looking for a way to back up the actual container definitions so I could wipe and restore them if necessary.

     

    (In my case, I ended up with a corrupted docker image directory and had to wipe it out. I was able to restore everything from Previous Apps, but if that hadn't been available, it would have been a royal pain trying to remember all the configurations and enter them again.)
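
    (Partially answering my own question after some digging: the per-container definitions that Previous Apps restores from appear to be the XML templates stored on the flash drive, so copying that folder off the flash should cover it. The path below is what I see; verify it on your own system, and the backup destination is just an example.)

    ls /boot/config/plugins/dockerMan/templates-user/    # one XML template per container
    mkdir -p /mnt/user/backups/docker-templates
    cp -r /boot/config/plugins/dockerMan/templates-user /mnt/user/backups/docker-templates/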

  18. On 11/4/2021 at 3:12 PM, Dmitry Spikhalskiy said:

    Not that I'm aware of, but I haven't spent any time resolving it. Stuff works just fine and it's safe to ignore.

    If it bothers you to the extent of looking for a solution, contributions are welcome!

    Thanks. Upgraded to 1.8.2; the issue is still there, but things are working correctly.
     

    Quote

    root@UnEVAN:~# docker exec -it ZeroTier zerotier-cli status
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)
    200 info 233970ce34 1.8.2 ONLINE

    I don't know anything about building docker images, but this guy seems to have cleared it when he was getting the same error for a different container he was building.

  19. Just installed this on my unraid server. Any time I run a zerotier-cli command inside the docker container, I get a dozen lines of the following before the command output:
     

    Quote

    zerotier-cli: /usr/lib/libstdc++.so.6: no version information available (required by zerotier-cli)

     

    Any fix for this? Other than that, things look to be working well. I also installed it on my raspberry pi / pi-hole box to act as a bridge into my whole internal network. The plan is to do the same on a second Pi at my parents' place for remote access to their stuff.

  20. 19 hours ago, Caldorian said:

    Okay, this is a weird one. Just updated to the latest Firefox on Windows 10 (v93.0), and now I can't log into my Deluge instance. The page loads and the password dialog comes up, but entering the password doesn't take, and it keeps prompting me again. Testing in Edge and Chrome shows no issues.

    Ended up clearing the cookies for my home domain, which fixed it. Weird...