
Soldius


Posts posted by Soldius

  1. Thank you for the safe mode suggestion. I didn't want to turn it off again unless absolutely necessary, as I may not even be able to get it to start my array next time... Shortly after the array started, I got this alert sent to my email: "Timed out waiting for /boot/changes.txt!".

     

     In addition, I caught a line in the logs before I lost the WebUI: "👋 Farewell. UNRAID API shutting down!".

     

    This is really odd :(. I really hope someone can help.

  2. My server was working perfectly until I decided to give it a restart. Now the WebUI stops working after a few minutes, whether I start the array or not. I read around the forum and decided to upgrade to the latest version using the CLI. It worked, and I was able to get back into the WebUI long enough to start my encrypted array. After that, the WebUI crashed again and I was locked out.

     

     SSH'ing still works, and I was able to see that all my shares and dockers are up. I could reach my dockers through NGINX without any issues. I restarted the WebUI using a command, but it did not help. I'm a little stumped because I don't know what is causing this... Reading around, it sounds like something could be using port 80, but I don't recall installing anything new that uses port 80, and I did not make any changes to my network either. Everything else seems to work, and this only started after my restart. I attached my diagnostics file. I hope someone can shed some light on this for me.
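
     For reference, here is roughly what I ran over SSH to check port 80 and bounce the WebUI (a sketch; double-check the rc script path on your Unraid version):

     # see what is actually listening on port 80, where the WebUI should be
     netstat -tlnp | grep ':80 '

     # restart the Unraid web server (Slackware-style rc script)
     /etc/rc.d/rc.nginx restart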

    ntserver-diagnostics-20231009-1224.zip

  3. 9 hours ago, Soldius said:

     Unfortunately, turning the debug log off did not help. Any clue on how to trace down what exactly is filling my docker.img? "docker system df" only gives an overview, and spaceinvader's script gives more detail but only shows that the container is taking up a large amount of space, not what that space actually is. I also went into the container's console to look at the structure and was not able to see any folders of a large size. I'm stumped 🤔

     Nevermind, I figured it out. Apparently it uses the /temp dir to download the image files and then packs them into the cbz in the "library" folder. This leaves all the images inside the temp folder without cleaning them up afterwards 😑

     

     Anyway, I mapped the temp dir like this (remember to change the permissions of the folders using Krusader, or Tachidesk won't be able to write downloads in there):

     [screenshot: temp dir volume mappings]
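
     As a rough docker-run equivalent of that mapping (a sketch only: the host path is a placeholder, and the container path came from the screenshot, so substitute your own values):

     # map the container's temp dir to persistent storage on the array,
     # so leftover image files don't pile up inside docker.img
     docker run -d \
       --name=tachidesk \
       -v /mnt/user/appdata/tachidesk/tmp:/tmp \
       ghcr.io/suwayomi/tachidesk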

     

    I may end up just mapping the whole temp folder, since it seems to be creating other things in there. 

     

     Another thing I noticed is that each time I restart the container, the download list disappears, so I have to re-add the downloads in the GUI. Not sure if this is related.

  4. 12 hours ago, C3004 said:

    Weird. I don't see anything wrong.

     

     Do the container logs currently show any errors?

     

     If the container is working correctly, set the debug variable to false so the logs don't fill up the docker.img.

     

     Just for fun, you could also run the docker system df command (https://docs.docker.com/engine/reference/commandline/system_df/).

     

    But the debug variable probably filled up the logs.

     Unfortunately, turning the debug log off did not help. Any clue on how to trace down what exactly is filling my docker.img? "docker system df" only gives an overview, and spaceinvader's script gives more detail but only shows that the container is taking up a large amount of space, not what that space actually is. I also went into the container's console to look at the structure and was not able to see any folders of a large size. I'm stumped 🤔
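
     For anyone else chasing docker.img growth, this is the kind of drill-down I mean (a sketch; the container name is an example, and the -d1 flag assumes the container's du supports it):

     # size of each container's writable layer
     docker ps --size

     # look for large directories from inside the suspect container
     docker exec tachidesk du -xh -d1 /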

  5. 3 hours ago, C3004 said:

    Weird. I don't see anything wrong.

     

     Do the container logs currently show any errors?

     

     If the container is working correctly, set the debug variable to false so the logs don't fill up the docker.img.

     

     Just for fun, you could also run the docker system df command (https://docs.docker.com/engine/reference/commandline/system_df/).

     

    But the debug variable probably filled up the logs.

    Ohhhhhh, I didn't think the logs were the culprit. Good catch! I will turn it off and try. Thank you!😂

  6. 34 minutes ago, theangelofspace15 said:

     Did you try my container? That should fix it as well.

     I did! But for some strange reason, nothing loaded. I wiped the appdata and the container data and tried again, but had the same issue: the WebUI was not loading. I gave up after that lol 😆

  7. 1 hour ago, C3004 said:

    Hi Soldius,

     

    That variable is new and currently only in the preview containers.

    Please note that the fix only works with the develop container.

     

     I wrote about the fix before the Tachidesk guys made a full rewrite of the docker container. The rewrite happened two weeks ago. The variable you want to use is part of that new rewrite.

     As the fix only works in the develop container, and the latest update of that one was 18 days ago, the variable is currently not available in it.

     

    Best Regards

     I followed up on this with a roundabout fix. I switched the docker to the "preview container" and used Krusader to go into appdata and change the permissions of the whole "Tachidesk-Docker" folder to:

     

     [screenshot: Krusader permissions settings]

     

     Basically, I just right-clicked and went into Properties > Permissions.
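
     If you prefer SSH over Krusader, a rough CLI equivalent would be the following (assuming the usual Unraid nobody:users, i.e. 99:100, and the default appdata path):

     # hand the whole appdata folder to nobody:users (uid 99, gid 100)
     chown -R 99:100 /mnt/user/appdata/Tachidesk-Docker

     # read/write for owner and group, execute only on directories
     chmod -R u+rwX,g+rwX /mnt/user/appdata/Tachidesk-Docker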

     

     I also did the same thing for the destination download folder to make sure the docker can write into it. This seems to work for me, as I am downloading chapters in cbz just fine. Hope this helps someone.

  8. On 5/6/2023 at 8:10 AM, C3004 said:

     New way to get rid of the file permission error.

     

     I don't know if this will stick or if it will get added to the default container.

     Create a backup before adding or removing these changes.

     Don't use this if you've made the WebUI publicly available.

     

    They added a function to the development container to change the running user. (https://github.com/Suwayomi/docker-tachidesk/issues/22)

     

     Activate advanced view to make these changes:

    Change **Repository** from "ghcr.io/suwayomi/tachidesk" to "ghcr.io/suwayomi/tachidesk:develop"

     Add "-u 99:100" (without quotations) in **Extra Parameters**

    It should look like this afterwards:

     [screenshot: template showing the develop repository and "-u 99:100" in Extra Parameters]

     This fix works for me, but even though I set the environment variable DOWNLOAD_AS_CBZ to true, it does not download as cbz. It keeps the structure as Manga Series > Chapter > image files.

     

     [screenshot: download folder structure showing Manga Series > Chapter > image files]

     

    Does anyone know of a fix for this? Thank you.
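
     For reference, the quoted template changes plus my variable boil down to roughly this docker run invocation (a sketch; the port and container name are placeholders, while the tag and -u values are from the quote above):

     # develop tag, running as 99:100, with the CBZ variable set
     docker run -d \
       --name=tachidesk \
       -u 99:100 \
       -e DOWNLOAD_AS_CBZ=true \
       -p 4567:4567 \
       ghcr.io/suwayomi/tachidesk:develop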

  9. Just in case anyone wants to get metadata for their downloads: I tried the recommended Manga-Tagger but did not like how it uses the original language's title. So I switched to Komga to serve my manga needs, plus the docker version of Komf, the metadata agent. FMD-WINE + Komga + Komf is perfect for hosting your own manga 😀

  10. Hi All, 

     I know this is an old thread, but I am trying to get any version of the DeepStack docker to install on my unraid, and there does not seem to be any template available. Could anyone guide me through how to install it? There are some command lines on the DeepStack AI website, but when I ran them, nothing happened on my unraid and I did not see any new docker created for it. Hopefully someone can shed some light on this for me, as having to boot into Windows 10, run the exe, and turn on the DeepStack software is too unreliable. Thank you so much.
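
     For what it's worth, the commands on the DeepStack site are along these lines (a sketch from memory, so treat the image name and flags as assumptions and check their docs):

     # run the CPU build and expose the detection API on host port 5000
     docker run -d \
       -e VISION-DETECTION=True \
       -p 5000:5000 \
       deepquestai/deepstack

     A container started this way over SSH should still show up on the Docker tab, just without a template behind it.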

  11. On 3/25/2018 at 1:55 PM, binhex said:

    note /transcode can be named anything you want

     

     I want to transcode to RAM - create a new volume mapping, host path /tmp, container path /transcode, then define TRANS_DIR so that it points at the RAM drive, e.g.:

    
    TRANS_DIR=/transcode
    
    /transcode maps to host path /tmp

    Hi binhex and unRAID community,

     

     I just wanted to confirm something regarding this config. I am using binhex-plexpass and do have a Plex Pass subscription, so I just want to confirm whether I still have to do this in order to transcode to RAM instead of to the cache, or does it know to do that automatically with the default /config/transcode setting? Please let me know. I have been searching for a while but am not too clear on this. Thank you very much ahead of time.
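
     As I read the quoted instructions, the relevant pieces would be roughly these flags (a sketch, not the full template; the image name is binhex's repository, the rest follows the quote):

     # host /tmp (RAM-backed on unRAID) mounted as the container's /transcode,
     # with TRANS_DIR pointing Plex's transcoder at it
     docker run -d \
       --name=binhex-plexpass \
       -v /tmp:/transcode \
       -e TRANS_DIR=/transcode \
       binhex/arch-plexpass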

  12. 2 minutes ago, Cessquill said:

     That script (which looks familiar 😉) is a patch for Plex and is required until Plex formally releases hardware decoding. Unless I've missed a big change, it has nothing to do with the Nvidia plugin (as in, the Nvidia version of Unraid has nothing to do with the apps you may be using).

     

    So, until Plex is upgraded, and you want hardware DEcoding, and you've got a Plex pass, you'll still need to run the script.

     Wow, OK. I was a little confused about that. Thank you for the thorough explanation. So Plex hardware ENcoding (from the unraid server) should still work with the Nvidia plugin on unRAID 6.6.7?

  13. On 3/10/2019 at 6:04 PM, CHBMB said:

    v6.6.7 and v6.7.0rc5 uploaded.  If anyone pings me or @bass_rock and mentions the word Nvidia in the next week, we'll probably murder you and dispose of your body so well you'll never be discovered.

     

    It's been a slog for both of us.  To say between us we've compiled this at least 50 times would be a conservative estimate, and the theories and conversations we've had have been numerous.

     

    Bottom line, we're not really sure how we got it to work for so many successive versions before hitting this wall.

    Hi guys,

     

     So with the new release of 6.6.7 NVIDIA, is applying the script still necessary? I am asking because I am still having issues after I upgraded to Unraid NVIDIA 6.6.7, and a lot of my movies are not playing when they need to be transcoded. It ranges from 4K HEVC (x265) movies to regular 1080p x264 movies. Even playing at original quality is not working on a mobile device that I know can play HEVC just fine; it just freezes the player. Any help/guidance would be appreciated. Thank you ahead!

  14. On 3/6/2018 at 2:01 AM, dazzathewiz said:

     

     So for Krusader - I got it to work (fixed the rolling gear) by copying the settings under the Nginx section in https://guacamole.apache.org/doc/gug/proxying-guacamole.html

    (Note the docker runs guacamole)

    
    location /guacamole/ {
        proxy_pass http://HOSTNAME:8080/guacamole/;
        proxy_buffering off;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
        access_log off;
    }

     

     Hi, could you post your config file? I tried doing this and it is not bringing me to anything, just the default nginx page saying I need to configure more. Could you also let me know what filename you gave the config? Currently mine is krusader.subdomain.conf, and below is the content:

     

     server {
         listen 443 ssl;

         server_name ntkrusader.*;

         include /config/nginx/ssl.conf;

         client_max_body_size 0;

         # enable for ldap auth, fill in ldap details in ldap.conf
         #include /config/nginx/ldap.conf;

         location /guacamole/ {
             proxy_pass http://192.168.29.250:6080/guacamole/;
             proxy_buffering off;
             proxy_http_version 1.1;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection $http_connection;
             access_log off;
         }
     }

     

    Thanks ahead.

  15. Hello Binhex,

     

     There is a new version of Jackett that fixes the Horrible Subs tracker. Do you know when you will be able to update the docker? The fix is referenced here:

     

    https://github.com/Jackett/Jackett/issues/3957

     

    Thank you for your hard work.

     
