Posts posted by guruleenyc

  1. Update - if the array can't stop due to "Retry unmounting shares" in 6.12.0 - 6.12.2, the quick fix is to open a web terminal and type:
    umount /var/lib/docker

    The array should then stop and prevent an unclean shutdown.
     
    (It is possible the array won't stop for other reasons, such as having a web terminal open to a folder on the array. Make sure to exit any web terminals or SSH sessions in this case)
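    If the umount itself reports the target is busy, it can help to first see what is still holding the mount open; a minimal sketch (fuser is part of psmisc; lsof is an alternative if fuser is absent):

```shell
# Show processes still holding files under the Docker loop mount
fuser -vm /var/lib/docker

# Then detach the mount so the array can stop
umount /var/lib/docker
```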
     
    The fix is in the 6.12.3 release, available here:
    Folks who were having trouble stopping the array have confirmed the issue is resolved in this version. Thanks to everyone who helped track this down!

    This issue hit me yesterday after two months of uptime while on 6.12.6.

    Thank you for the quick fix to get the array stopped and restarted without needing to reboot.



  2. On 8/30/2021 at 6:13 PM, Osiris said:

    my two cents (but I'm a noob). I had to do this a few times as the docker stop commands didn't result in an actual container stop.

     

    Get your docker container ID using

    docker container list

    then 

    ps auxw | grep yourcontainerid

    to get the pid

    then 

    kill -9 yourpid


    If that doesn't work, you've got a zombie process and I'm afraid you'll need a reboot to unlock it

    I encountered the issue too, with one docker in a started state that was not accessible via its webUI. I tried killing it in this manner by PID, but it was not clear which integer in the grep output was the PID. Either way, I was not able to cleanly stop or restart the docker.

    I ultimately had to stop all other containers, stop the array (which hung), and eventually perform a soft reboot of Unraid to resolve the issue with the one docker. All my other dockers were healthy.

    I'm on Unraid 6.12.6
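    As an alternative to picking the PID out of ps output, docker inspect can report a container's main PID directly; a sketch with a hypothetical container name:

```shell
# Hypothetical container name for illustration
CONTAINER=mycontainer

# Ask Docker for the container's main process ID
PID=$(docker inspect --format '{{.State.Pid}}' "$CONTAINER")

# Last-resort kill, as in the quoted post
kill -9 "$PID"
```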

  3. On 7/4/2023 at 6:41 PM, jsk said:

    I experienced the same issue after upgrading to 6.12. For some reason the Unraid nginx process now binds to 443 even if SSL is disabled. You can change the https port manually in Settings/ManagementAccess as a quick workaround.

    THANK YOU!

  4. Greetings friends!

    Since plugins (e.g. Community Apps) will not update unless we're on 6.12.x, I was forced to upgrade unRAID to the bleeding edge, which is not how I typically like to upgrade.

     

    I am relieved to share that I successfully upgraded from 6.11.5 to 6.12.6.
    The only issue so far is that my Swag docker is not starting; it returns an 'Execution server error' popup. There was also a pending Swag docker update, which I then completed successfully...
    However, it still does not start, with the same error as above.
     
    UPDATE:
    I had to go into Settings > Management Access and change the HTTPS port from 443 to 4434, even though 'Use SSL/TLS' for the webGUI was already set to 'No'.
    When I applied the change, it still tried to redirect me to the webGUI on 4434 with TLS for some odd reason...
    I had to force my browser to use HTTP/non-TLS to get back to the webGUI.

    Swag docker starts fine now.
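    Since the root cause here was Unraid's own nginx binding to 443 even with SSL disabled, a quick way to confirm which process holds the port before and after the change (a sketch; ss ships with iproute2 on most Linux systems):

```shell
# List listening TCP sockets with owning processes, filtered to 443
# (run as root to see process names)
ss -tlnp | grep ':443'
```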
     
     
     

  5. On 12/5/2023 at 5:59 PM, ConnerVT said:

     

    Along with a flash drive backup, I usually:

    • Grab a diagnostic (full of useful info if things go sideways)
    • A printout of the Main tab (array and drive assignments)
    • Set array, docker and VM to not start automatically (I'll start them up manually at first, once I see things are working properly)
    • Manually stop the array before updating (avoids any unclean shutdown)

     

    When all works perfectly, none of this is needed.  But better to have it and not need it than...

    So smart and a great reminder. Thank you

  6. On 6/2/2023 at 5:35 AM, tomwhi said:

    I found a way to progress past this error. 

     

    I added a tag onto my repo referenced in my container. 

     

    In my case I'm on v24, so I used this tag; however, all the tags can be found here: https://hub.docker.com/r/linuxserver/nextcloud/tags

     

    (This is assuming you're using the LinuxServer container, adjust for your own image creator of choice). 

     


     

     

    I then started my container up and ran the following command in the Console for the Nextcloud container (I'm assuming the user is abc, but I've seen it called other things in different OSes):

     

    sudo -u abc php /config/www/nextcloud/updater/updater.phar

     

    I kept doing this until my application was up to date (I tried running this command without changing the tag first, and I found it wouldn't progress past the error). 

     

    At one point during the process I couldn't update any more... I tried reverting back to the "latest" tag (by not inputting a tag), but I still got the error.

     

    I changed the tag on the repo to a v25 tag (25.0.4) and was able to get back into the GUI and keep updating. I was then presented with an update to v26.0.2 inside both the GUI and the CLI updater.

     

     

    Once I completed that I removed the tag off my application and restarted the docker again. 

     

     

    This is probably a good time to think about the way we look after our Nextcloud instances. It's an amazing app, but it's not "set and forget"... I think what I'll start doing is setting the tag to the version I'm on, so the container can update within the major version (i.e. now I'm on v26) but can't jump to v27 when that comes out - and I'll do the in-app updates first, before the container updates.

     

     

     

    Ref: https://github.com/linuxserver/docker-nextcloud/issues/288

    Ref: https://www.reddit.com/r/unRAID/comments/13xlxyz/nextlcoud_stuck_in_step_3/

     


    This also resolved the issue for me. I was already on 25.x, and I had to upgrade to 27.x via the docker's console CLI and then revert the repo tag afterwards to finally get back into the GUI. THANK YOU!

    I also set my docker repo to 'lscr.io/linuxserver/nextcloud:version-27.0.0' to prevent this from happening again and have a controlled upgrade in the future.
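    The pinning approach above can be sketched as follows; the tag is an example, so check the Docker Hub tags page for current versions:

```shell
# Pin the repository to a major-version tag so a container update
# can't jump past the app's own upgrade path (example tag)
docker pull lscr.io/linuxserver/nextcloud:version-27.0.0

# In the container console, run the built-in updater repeatedly until it
# reports the app is current ('abc' is the linuxserver.io default user)
sudo -u abc php /config/www/nextcloud/updater/updater.phar
```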

  7. I'm trying to get the Swag Dashboard working. I have the Swag docker ports configured to 81 and 443 respectively, and it has been working fine for my other subdomains. However, after enabling the dashboard mod and configuring an A record for 'dashboard.domain.com' to point to the server IP that Swag is listening on, I get an nginx 504 timeout error when trying to go to https://dashboard.mydomain.com. I'm coming from an internal LAN subnet that is within the allowed IP ranges stated in the domain.subdomain.conf file as well.

    The Swag docker logs are not reporting any issues either.

    Any ideas on what I may be missing or doing wrong?

  8. I was able to resolve the 'socket failed to connect' error in Swag/nginx using a proxy-conf file like the one below. I was not successful when trying to use a site-conf.

     

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name <YOURSUBDOMAIN>.*;
    
        include /config/nginx/ssl.conf;
        add_header X-Frame-Options "SAMEORIGIN" always;
        add_header Strict-Transport-Security "max-age=15768000; includeSubDomains; preload";
        add_header X-Content-Type-Options "nosniff" always;
        client_max_body_size 0;
    
        location / {
            include /config/nginx/proxy.conf;
            include /config/nginx/resolver.conf;
            set $upstream_app <IPADDRESS>;
            set $upstream_port 13378;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    
            proxy_hide_header X-Frame-Options;
            proxy_max_temp_file_size 2048m;
        }
    }
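    With a conf like this dropped into Swag's proxy-confs folder, a quick end-to-end check can look like the following; the container name and hostname are placeholders:

```shell
# Reload nginx inside the Swag container so it picks up the new conf
docker exec swag nginx -s reload

# Test the route; -k skips certificate validation in case the cert
# hasn't been issued for this subdomain yet
curl -vk https://yoursubdomain.example.com/
```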

     

  9. I was able to get this working by extracting the VMDK from the Wazuh OVA and converting it to IMG using this command:

    qemu-img convert -p -f vmdk -O raw /mnt/user/<the location of your vmdk file> /mnt/user/<the location of your new file>.img

    Then I built a VM in unraid using i440fx-4.2 and SeaBIOS, with the IMG as an IDE disk.
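    Before booting the VM, the converted image can be sanity-checked with qemu-img; the path below is a hypothetical example standing in for wherever the new file was written:

```shell
# Verify the format and virtual size of the converted raw image
qemu-img info /mnt/user/domains/wazuh/vdisk1.img
```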
