Posts posted by Meles Meles

  1. run.Webui != "" && (run.Name != "Docker-WebUI" || run.Name != os.Getenv("HOST_CONTAINERNAME"))

     

     

    That's always going to display it, isn't it (whenever the env var isn't "Docker-WebUI")?

     

    run.Name != "Docker-WebUI" OR run.Name != EnvVar

     

     

    Some logic along these lines would be better:

    checkName = os.Getenv("HOST_CONTAINERNAME")
    
    if checkName is null then checkName = 'Docker-WebUI'
    
    if run.Webui != "" and run.Name != checkName then display it
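    A quick runnable sketch of that suggestion in shell (run_webui / run_name here are just stand-ins for the Go fields in the snippet above, so treat it as an illustration, not the actual app code):

```shell
# Illustration of the suggested fallback logic; run_webui / run_name are
# stand-ins for the Go fields in the snippet above.
unset HOST_CONTAINERNAME                            # demo: pretend the env var isn't set
check_name="${HOST_CONTAINERNAME:-Docker-WebUI}"    # fall back to the hard-coded name

should_display() {
    run_webui="$1"; run_name="$2"
    [ -n "$run_webui" ] && [ "$run_name" != "$check_name" ]
}

should_display "http://x:8080" "Docker-WebUI" && echo shown || echo hidden   # hidden
should_display "http://x:8080" "pihole"       && echo shown || echo hidden   # shown
```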

     

    Screenshot 2021-11-05 065115.png

  2. On 9/7/2021 at 3:38 PM, Kameleon83 said:

    If you are in version 6.10-rc2 (or newer), you can change the container name by adding an environment variable to each container (HOST_CONTAINERNAME).

     

     

    Sorry, I wasn't clear enough....

     

    From 6.10-rc2 the HOST_CONTAINERNAME env variable is automatically assigned to ALL Docker containers on creation, so you'll be able to change your code to look for that variable and exclude the value stored in it.

     

    This is an example of a "docker run" command as generated by 6.10:

     

    docker run -d --name='Docker-WebUI' --net='downloads'
    -e TZ="Australia/Perth" -e HOST_OS="Unraid" -e HOST_HOSTNAME="skynet" -e HOST_CONTAINERNAME="Docker-WebUI"
    -e 'CIRCLE'='no'
    -l net.unraid.docker.managed=dockerman -l net.unraid.docker.webui='http://[IP]:[PORT:8080]'
    -l net.unraid.docker.icon='https://raw.githubusercontent.com/Olprog59/unraid-templates/main/docker-webui/docker-webui.png'
    -p '1111:8080/tcp' -v '/var/run/docker.sock':'/var/run/docker.sock':'rw' 
    -v '/var/local/emhttp/plugins/dynamix.docker.manager':'/data':'ro' 
    'olprog/unraid-docker-webui'

     

     

    The HOST_HOSTNAME and HOST_CONTAINERNAME environment variables are new as of 6.10-rc2. They help our containers be "self-aware" when they need to be!
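    Inside any 6.10-created container they're just ordinary env vars, e.g. (the fallback defaults below are faked so the snippet runs anywhere; in a real container Unraid has already set them):

```shell
# In a real 6.10 container HOST_OS / HOST_HOSTNAME / HOST_CONTAINERNAME are
# already set by Unraid; the fallback defaults just let this run anywhere.
: "${HOST_OS:=Unraid}"
: "${HOST_HOSTNAME:=skynet}"
: "${HOST_CONTAINERNAME:=Docker-WebUI}"
echo "I am ${HOST_CONTAINERNAME} on ${HOST_OS} host ${HOST_HOSTNAME}"
```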

  3. On 9/7/2021 at 3:38 PM, Kameleon83 said:

     

      - Do not change the name of the application (Docker-WebUI). I have excluded this name from the list. If you change it then you will see it listed.

     

     

     

    Once you are on 6.10-rc2 (or newer), containers get created with an environment variable called HOST_CONTAINERNAME which contains the name of the container ("Docker-WebUI" if left unchanged). If you check for the existence of this env var, you can remove the hard-coding for "Docker-WebUI".

  4. 5 hours ago, gStone82 said:

    Also would be great if we were able to fix the webui's in place rather than having them randomly appear in different locations on every reload.

     

     

    Even an alphabetically sorted (case-insensitive!) list would be a start!

  5. On 9/28/2021 at 10:49 PM, Ford Prefect said:

    ...this is a nice case... similar ones I know only come in greater depths.

    Ask the distributor in the UK whether they can forward it directly to your location in DE... they don't have any fuel for the logistics anyway now ;-)

     

     

     

    bloody foreigners, staying over there - not driving our trucks....

     

    🤣

     

  6. Put a container onto a macvlan network and assign it an IP in the LAN range - or even have the Docker network make a DHCP assignment. Here's a "docker network inspect" for my network:

     

    [
        {
            "Name": "br0",
            "Id": "6c8a8d37276c8d82f047ccfb156aba833629db9e2d166cb9e8229463aac1d6ac",
            "Created": "2021-09-06T08:38:55.45040253+08:00",
            "Scope": "local",
            "Driver": "macvlan",
            "EnableIPv6": false,
            "IPAM": {
                "Driver": "default",
                "Options": {},
                "Config": [
                    {
                        "Subnet": "10.1.0.0/22",
                        "IPRange": "10.1.2.64/27",
                        "Gateway": "10.1.2.1",
                        "AuxiliaryAddresses": {
                            "server": "10.1.2.2"
                        }
                    }
                ]
            },
            "Internal": false,
            "Attachable": false,
            "Ingress": false,
            "ConfigFrom": {
                "Network": ""
            },
            "ConfigOnly": false,
            "Containers": {
                "d84731c04a826aeaa63fa0e88e2582e9875de2292904acdfe96f1cb2bd2aca01": {
                    "Name": "pihole",
                    "EndpointID": "bd20b00120acf20bdb6d0e2a27056104f0ec05ae0437297e985f992b485c51a0",
                    "MacAddress": "02:42:0a:01:02:03",
                    "IPv4Address": "10.1.2.3/22",
                    "IPv6Address": ""
                }
            },
            "Options": {
                "parent": "br0"
            },
            "Labels": {}
        }
    ]

     

     

     

    So you can see, I have a container called "pihole" with a manually assigned IP of 10.1.2.3, which is accessible on my LAN proper.
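    For reference, a network like the one inspected above can be created with something along these lines (the subnet, range, gateway and parent interface are lifted straight from the inspect output, so they're specific to my LAN - adjust before using any of this):

```shell
# Roughly the command behind a macvlan network like the one shown above.
# All addresses and the parent interface are from my setup - change them.
docker network create -d macvlan \
  --subnet=10.1.0.0/22 \
  --ip-range=10.1.2.64/27 \
  --gateway=10.1.2.1 \
  --aux-address="server=10.1.2.2" \
  -o parent=br0 \
  br0

# Then attach a container with a fixed LAN IP (image name just an example):
docker run -d --name=pihole --network=br0 --ip=10.1.2.3 pihole/pihole
```

    (Unraid normally creates this br0 custom network for you from the Docker settings page, so this is mainly to show what's going on under the hood.)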

  7. Because I was interested to have a play (and just for S&G), I've modded my /usr/local/emhttp/plugins/dynamix.docker.manager/include/Helpers.php file:

     

     

    274   // Add HOST_HOSTNAME variable
    275   $Variables[]   = 'HOST_HOSTNAME=$HOSTNAME';
    276   // Add HOST_CONTAINERNAME variable
    277   $Variables[]   = 'HOST_CONTAINERNAME='.escapeshellarg($xml['Name']);

     

     

    All seems to work fine

     

    Why would I want this, I hear you cry?

     

    Well, in my "Firefox" container the tab name being "Guacamole Client" gave me the poops (being polite!) - so I stuck the following in appdata/firefox/custom-cont-init.d/00.set.guac.page.title.sh:

     

    filename="/gclient/rdp.ejs"
    echo
    echo ----------------------------------------------------------------------------------
    echo
    echo "Set the Guacamole Client page title in \"$(unknown)\""
    echo
    echo "Before"
    echo "------"
    grep "<title>" "$filename"
    sed -i "s+<title>Guacamole Client</title>+<title>${HOST_OS} ${HOST_HOSTNAME} ${HOST_CONTAINERNAME}</title>+g" "$filename"
    echo
    echo "After"
    echo "-----"
    grep "<title>" "$filename"
    echo
    echo ----------------------------------------------------------------------------------
    echo


     

     

    So now my tab is titled "Unraid skynet firefox" - much more OCD-friendly....

  8. Very handily, the Docker code automatically creates a HOST_OS="Unraid" env var in each container. Can we make it automatically create a HOST_HOSTNAME=$HOSTNAME (i.e. the Unraid server's hostname) as well?

     

    And yes, I know I can just do it in "Extra Parameters" for each container - but it'd be handy to have it there automatically.

     

    -e HOST_HOSTNAME=$HOSTNAME

     

     

    Making the container name available within all containers would probably be useful too.....

     

     

  9. 14 minutes ago, itimpi said:

    Most of the time it is easier to restore from backups (assuming you have them).

     

     

    but surely RAID is a backup? :P

     

    Yeah, I'll take a look at the files that are there. If it's all too hard I'll trash them - I'm pretty sure they're all downloaded media files anyway.

     

    I "normally" have a backup on a second unRAID server, but as "luck" would have it I've trashed it this week and haven't yet redone my backup of the stuff I care about. It's all on OneDrive as well anyway.

  10. One of my data disks has decided not to mount. 

     

    UNMOUNTABLE: NOT MOUNTED

     

    I've tried

    1. Stopping the array, removing the disk, starting the array, stopping the array, adding the disk back in, starting the array, and rebuilding from parity. 24 hrs later (once the rebuild had happened), still the same error. Grrrrr.

     

    2. xfs_repair (from the GUI, to save any potential confusion) with the -L option:
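    (For reference, the GUI button amounts to roughly the following, run with the array started in maintenance mode - the md device number here is an assumption, substitute the one for the affected disk:)

```shell
# Roughly what the GUI check/repair runs for an XFS array disk in
# maintenance mode. "md1" is an assumption - use the device for your disk.
xfs_repair -L /dev/md1
```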

     

    Phase 1 - find and verify superblock...
    Phase 2 - using internal log
            - zero log...
            - scan filesystem freespace and inode maps...
            - found root inode chunk
    Phase 3 - for each AG...
            - scan and clear agi unlinked lists...
            - process known inodes and perform inode discovery...
            - agno = 0
            - agno = 1
            - agno = 2
            - agno = 3
            - agno = 4
            - agno = 5
    Metadata CRC error detected at 0x459c09, xfs_dir3_block block 0x280000010/0x1000
    bad directory block magic # 0x1e0d0000 in block 0 for directory inode 10737418375
    bad bestfree table in block 0 in directory inode 10737418375: repairing table
    Metadata CRC error detected at 0x459c09, xfs_dir3_block block 0x280000018/0x1000
    bad directory block magic # 0xa770470 in block 0 for directory inode 10737418387
    bad bestfree table in block 0 in directory inode 10737418387: repairing table
            - agno = 6
            - agno = 7
    Metadata CRC error detected at 0x45c929, xfs_dir3_data block 0x40/0x1000
    bad directory block magic # 0xa770270 in block 0 for directory inode 15149353946
    bad bestfree table in block 0 in directory inode 15149353946: repairing table
            - agno = 8
            - agno = 9
            - agno = 10
            - process newly discovered inodes...
    Phase 4 - check for duplicate blocks...
            - setting up duplicate extent list...
            - check for inodes claiming duplicate blocks...
            - agno = 1
            - agno = 3
            - agno = 5
            - agno = 2
            - agno = 4
            - agno = 7
            - agno = 0
            - agno = 6
    bad directory block magic # 0x1e0d0000 in block 0 for directory inode 10737418375
    bad bestfree table in block 0 in directory inode 10737418375: repairing table
    bad directory block magic # 0xa770470 in block 0 for directory inode 10737418387
    bad bestfree table in block 0 in directory inode 10737418387: repairing table
    bad directory block magic # 0xa770270 in block 0 for directory inode 15149353946
            - agno = 8
            - agno = 9
            - agno = 10
    Phase 5 - rebuild AG headers and trees...
            - reset superblock...
    Phase 6 - check inode connectivity...
            - resetting contents of realtime bitmap and summary inodes
            - traversing filesystem ...
    bad directory block magic # 0x1e0d0000 for directory inode 10737418375 block 0: fixing magic # to 0x58444233
    bad directory block magic # 0xa770470 for directory inode 10737418387 block 0: fixing magic # to 0x58444233
    bad directory block magic # 0xa770270 for directory inode 15149353946 block 0: fixing magic # to 0x58444433
            - traversal finished ...
            - moving disconnected inodes to lost+found ...
    Phase 7 - verify and correct link counts...
    Metadata corruption detected at 0x45c7c0, xfs_dir3_data block 0x40/0x1000
    libxfs_bwrite: write verifier failed on xfs_dir3_data bno 0x40/0x1000
    Metadata corruption detected at 0x459aa0, xfs_dir3_block block 0x280000010/0x1000
    libxfs_bwrite: write verifier failed on xfs_dir3_block bno 0x280000010/0x1000
    Metadata corruption detected at 0x459aa0, xfs_dir3_block block 0x280000018/0x1000
    libxfs_bwrite: write verifier failed on xfs_dir3_block bno 0x280000018/0x1000
    Maximum metadata LSN (8:2325105) is ahead of log (1:2).
    Format log to cycle 11.
    xfs_repair: Releasing dirty buffer to free list!
    xfs_repair: Releasing dirty buffer to free list!
    xfs_repair: Releasing dirty buffer to free list!
    xfs_repair: Refusing to write a corrupt buffer to the data device!
    xfs_repair: Lost a write to the data device!
    
    fatal error -- File system metadata writeout failed, err=117.  Re-run xfs_repair.

     

    Bah....

     

    Any more ideas?

     

    This is a brand-new (22 days power-on) 12TB IronWolf Pro which is pretty much full.

     

     

    Diagnostics attached.......

     

     

     

    skynet-diagnostics-20210811-1952.zip

  11. I'm pretty certain this relates to my disk rather than to anything in the container - but can anyone shed any light on what this error actually means?

     

    From the log

    == Disk /dev/sdg has NOT been successfully precleared
    == Postread detected un-expected non-zero bytes on disk==
    == Ran 1 cycle
    ==
    == Last Cycle's Zeroing time   : 14:22:04 (154 MB/s)
    == Last Cycle's Total Time     : 33:17:25
    ==
    == Total Elapsed Time 33:17:25

     

     

    and from the noVNC window (line feeds added for my own sanity)

    00000728FCD22FA0 - 58
    00000728FCD22FA1 - F8
    00000728FCD22FA2 - 15
    00000728FCD22FA3 - 2C
    00000728FCD22FA4 - 81
    00000728FCD22FA5 - 88
    00000728FCD22FA6 - FF
    00000728FCD22FA7 - FF
    0000072F9077CFA0 - 98
    0000072F9077CFA1 - 59

     

    The command was:

    preclear_binhex.sh -A -W -f /dev/sdg

     

  12. 8 minutes ago, OmgImAlexis said:


    Being on a trial shouldn't affect it at all.

    If you go to Settings -> Management Access and scroll to the bottom,
    is "Allow Remote Access" set to "Yes"? If not, then that'll be the issue.

     


    Yeah, I even tried the "turn it off and on again" trick with "Allow Remote Access" and that made no difference.

     

    It's odd - "My Servers" knows all about it (including its Docker containers), so it's got some concept of what's going on.

     

     

    EDIT - bloody typical.... it's started working now.....

  13. Any idea why a server would show as "Online" but not be remotely accessible? Its config is (seemingly) identical to my other server's, which is playing nicely. The only difference is that this one (as you can see) is still a trial version - is that a thing?

    2021-07-22_9-35-57.png

  14. Because I'm a big fan of healthchecks in my Docker containers, I've created a healthcheck script for my rsnapshot container.

     

     

    I've got it located in

    /mnt/user/appdata/rsnapshot/healthcheck.sh

     

    and have set "Extra Parameters" in the template to

     --health-cmd /config/healthcheck.sh --health-interval 5m --health-retries 3 --health-start-period 1m  --health-timeout 30s

     

     

     

    Now if my container is unhappy, it'll mark itself as such, and the autoheal container (willfarrell/autoheal:1.2.0) I've got running will restart it automatically (which may or may not fix the cause of the unhealthiness).
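    The attached script isn't reproduced here, but for anyone curious, a healthcheck of this general shape could look something like the sketch below (the stamp-file idea and the 24-hour threshold are purely illustrative, not necessarily what the attachment does):

```shell
# Illustrative only - the real attached healthcheck.sh may differ.
# Idea: healthy (exit 0) if a "last run" stamp file is fresh enough.
is_healthy() {
    stamp="$1"; max_age="$2"
    [ -f "$stamp" ] || return 1
    now=$(date +%s)
    mtime=$(date -r "$stamp" +%s) || return 1
    [ $(( now - mtime )) -lt "$max_age" ]
}

# Demo with a temporary stamp file instead of a real /config path:
stamp=$(mktemp)
is_healthy "$stamp" 86400 && echo healthy || echo unhealthy   # healthy
rm -f "$stamp"
is_healthy "$stamp" 86400 && echo healthy || echo unhealthy   # unhealthy
```

    In the container, the script would just end with the is_healthy result as its exit code, which is what --health-cmd reports on.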

     

    healthcheck.sh

  15. On 6/27/2021 at 2:41 PM, majestic said:

    I am attempting to use the "Gmail Account with OAuth2 Verification" option to configure my Email Server Settings. When I click on the "Setup Gmail Account as E-Mail Server" button, the webpage just spins and no activity can be completed on the page until I restart the docker container. 

     

    I followed all the instructions (https://github.com/janeczku/calibre-web/wiki/Setup-Mailserver) to set up the Google OAuth2 portion, mapped the resulting gmail.json file to the container, and added the additional environment variable (OAUTHLIB_RELAX_TOKEN_SCOPE=1) mentioned in a recent post (https://discourse.linuxserver.io/t/docker-calibre-web-problem-enabling-gmail-account-with-oauth2-verification/3052).

     

    Here's the docker create command used:

    
    
    /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker create --name='calibre-web' --net='bridge' 
    	-e TZ="America/Los_Angeles" 
    	-e HOST_OS="Unraid" 
    	-e 'DOCKER_MODS'='linuxserver/calibre-web:calibre' 
    	-e 'OAUTHLIB_RELAX_TOKEN_SCOPE'='1' 
    	-e 'PUID'='99' 
    	-e 'PGID'='100' 
    	-p '8083:8083/tcp' 
    	-v '/mnt/user/Media/Books/Calibre/Library/':'/books':'rw' 
    	-v '/mnt/user/appdata/calibre-web/gmail.json':'/app/calibre-web/gmail.json':'rw' 
    	-v '/mnt/user/appdata/calibre-web':'/config':'rw' 
    	'linuxserver/calibre-web'

     

    There doesn't appear to be any detail as to why the webpage just spins in the calibre-web.log or in the docker logs.

     

    Has anyone else been able to get this working?

     

     

    I've just been through the same thing. The first time you use your OAuth JSON file, it pops up a browser window (on the host OS - so presumably somewhere INSIDE the docker container) asking you to confirm you're happy for this app to do stuff on your behalf. Obviously this doesn't go so well...

     

    I installed calibre-web locally on my PC (plus the optional requirements.txt) and then the popup appeared on my desktop machine. Now that this is done it should be good (the popup is just for the first time).

     

    Although.... it's only the first time for each user - so how does the docker container know who I'm logged in as (Google-wise)? It's still just spinning for me.... To be continued....

     

     

    EDIT - I have a suspicion I'll need to reboot my unRAID server into GUI mode and the window will pop up there. But not this evening.... (GMT+8)
