
yendi

Members
  • Posts

    118
  • Joined

  • Last visited

Posts posted by yendi

  1. On 5/8/2024 at 11:33 AM, itimpi said:

    You could already do that manually, I think, as far as basic booting is concerned. However, the USB stick would still be used for storing all settings, so I'm not sure you would gain much, as currently they are always stored on the device holding the licence.

    I understand the use of USB boot to authenticate the installation, but I don't understand why you could not replicate that on an SSD, which is far more reliable. What prevents Unraid from using an SSD UUID instead of the USB UUID?

  2. Hello everyone,

     

    On a fresh Unraid server I can't get the flash backup to work (the server has been online for 10 days).
    In my settings I can see the feature is activated and the server can be reached from the internet, but the backup is never updated: “Activated: Not up-to-date”

     

    My network setup: 

    • ISP router set up with a DMZ (no bridge mode allowed) pointing to my UniFi Dream Machine Pro
    • My DNS servers are Cloudflare (1.1.1.1) primary and Google (8.8.8.8) secondary
    • I can resolve backup.unraid.net, but pings are 100% lost (see the connectivity check at the end of this post)

    [screenshot]

     

    • On the UDM Pro I have forwarded port 9010 to port 443 on the Unraid server.

    [screenshot]

     

    • When I click CHECK next to the WAN port, I get a success message.

    [screenshot]

     

    However, the flash backup stays “Not up-to-date”.

     

    [screenshot]

     

    I tried forcing an update, deactivating/reactivating, and reinitializing, but nothing works.

    On the Unraid “My Server” page the flash backup appears as Unavailable.

     

    [screenshot]

     

    Does anyone have a clue how to solve this?
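    For reference, ping uses ICMP, which many hosts simply block, so a TCP-level check is more meaningful than ping loss. A minimal sketch (port 443 is only an assumption; the backup service may listen on a different port):

    # ICMP may be blocked even when the service is reachable; test the TCP port instead
    # (port 443 is an assumption, the flash backup service may use another port)
    curl -v --connect-timeout 5 telnet://backup.unraid.net:443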
     

    Thanks

  3. I love having the ability to use Docker containers. I was actually using 10 drives and reduced that to only 4, and I keep Unraid mainly for the Docker support.

    In 2020 I would love to have a branch for critical operations: updates delayed so we can be sure of getting a bug-free, stable release. Let's be honest, the 6.7 version was a major failure and we cannot afford that again.

    Also, a built-in backup system.

  4. Thanks for the input guys.

    For the Sonarr log, thanks, I added the extra parameter for the log size. I had log rotation enabled, but I did not know that I had to remove and recreate the container for it to take effect.
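    For reference, the extra parameter in question is Docker's json-file log options, along these lines (the values are just an example; on Unraid they go in the container's Extra Parameters field):

    --log-opt max-size=50m --log-opt max-file=1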

    Regarding Radarr, I don't understand how it can fill up, as all the paths are mapped correctly. I updated the container and a few hours later it was 21 GB again, so I removed it completely and reinstalled it.

    The strange thing is that my config for all containers has been the same for almost 8 months, and I have downloaded around 10 TB of data using Radarr, so I don't think it's a misconfiguration. If the issue happens again, is there a way to list all the files inside a container, to see exactly what is filling it?
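    For reference, two generic ways to see what is taking up space inside a container (the container name radarr is just an example):

    # show files added or changed in the container's writable layer
    docker diff radarr

    # or list the largest paths inside the running container (-x stays on the container's own filesystem)
    docker exec radarr du -ahx / 2>/dev/null | sort -rh | head -20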

     

    Thanks again and sorry for the post in the FAQ.

  5. EDIT 2: I solved the issue, easy...

    I clicked on the "Container size" button at the bottom of the Docker page.

    I saw that Radarr was 50GB...

    I updated it to the latest version and my issue was solved: 11% docker image utilization...

    Don't know why Radarr grew that much; I will monitor the situation.

    [screenshot]

     

    Hello Everyone,

     

    I had Plex DB corruption yesterday after a reboot. I tried to restore all my appdata and the Plex auto-backups, but they all appear to be corrupted as well (my server had not restarted for 2 months).

    I ended up installing a different Plex container (binhex instead of linuxserver) to be able to test solutions without touching the other one.

    I finally got a working Plex container by repairing the SQLite DB (the blobs DB was faulty) and had to refresh all the metadata (40 TB library...).
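    For anyone hitting the same thing, a generic SQLite check/repair looks roughly like this (a sketch, not my exact commands; run from the Databases folder in appdata with Plex stopped, and the blobs DB gets the same treatment):

    # check the database for corruption
    sqlite3 com.plexapp.plugins.library.db "PRAGMA integrity_check;"

    # if errors are reported, dump whatever is readable and rebuild a fresh file
    sqlite3 com.plexapp.plugins.library.db ".dump" | sqlite3 repaired.db
    mv com.plexapp.plugins.library.db corrupt.db.bak
    mv repaired.db com.plexapp.plugins.library.db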

    Today I see that my docker image (100 GB) is filling up abnormally. I have everything configured correctly, so I don't understand what could be filling it.

    I changed Plex's /transcode directory, as it was still the default one, but looking at cAdvisor the problem is not Plex-related; it is widespread.

     

    Could you please help me find what is wrong?

    Here are some screenshots:

     

    Docker config:

    [screenshot]

     

    Docker Usage:

    [screenshot]

     

    CAdvisor:

    [screenshot]

     

    [screenshot]

     

    Dockers page:

    [screenshot]

     

    NZBGet config:

    [screenshot]

     

    Transmission:

    [screenshot]

     

     

    Thanks !

     

     

    EDIT:

    I had a 17 GB Sonarr log (debug logging was enabled...). I was able to clear it with the command below, but that does not solve the issue; I still have 50 GB used...

     

    echo "" > $(docker inspect --format='{{.LogPath}}' fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6)
    root@Mercure:~# du -ah /var/lib/docker/containers/ | grep -v "/$" | sort -rh | head -60
    17G     /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6-json.log
    17G     /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6
    114M    /var/lib/docker/containers/8d60b1c98afbd3f2c45d1d4525bc5901716d62399e4e163e32a8680ce4de2bdb/8d60b1c98afbd3f2c45d1d4525bc5901716d62399e4e163e32a8680ce4de2bdb-json.log
    114M    /var/lib/docker/containers/8d60b1c98afbd3f2c45d1d4525bc5901716d62399e4e163e32a8680ce4de2bdb
    58M     /var/lib/docker/containers/b25cbbb806a80473049080db470be6f6b4755ed674521a5e26cf697f2befde5e/b25cbbb806a80473049080db470be6f6b4755ed674521a5e26cf697f2befde5e-json.log
    58M     /var/lib/docker/containers/b25cbbb806a80473049080db470be6f6b4755ed674521a5e26cf697f2befde5e
    51M     /var/lib/docker/containers/3aae22fe6c7ff0a935390e8523b2bf23a5a06afe5c620c27b7aabacb84771333/3aae22fe6c7ff0a935390e8523b2bf23a5a06afe5c620c27b7aabacb84771333-json.log
    51M     /var/lib/docker/containers/3aae22fe6c7ff0a935390e8523b2bf23a5a06afe5c620c27b7aabacb84771333
    32M     /var/lib/docker/containers/d053aefdd74c8caceccd18d2f4f6568cb9ea3a3a3c974571adb90efc65c84d40/d053aefdd74c8caceccd18d2f4f6568cb9ea3a3a3c974571adb90efc65c84d40-json.log
    32M     /var/lib/docker/containers/d053aefdd74c8caceccd18d2f4f6568cb9ea3a3a3c974571adb90efc65c84d40
    30M     /var/lib/docker/containers/35230c29fd65bde430becab1b53035b1b670aaa40381892bde04b79c68da5c8a/35230c29fd65bde430becab1b53035b1b670aaa40381892bde04b79c68da5c8a-json.log
    30M     /var/lib/docker/containers/35230c29fd65bde430becab1b53035b1b670aaa40381892bde04b79c68da5c8a
    29M     /var/lib/docker/containers/57abb32f88a8b5042e936181993f71308b715090d5dfe6702633b196773107f4/57abb32f88a8b5042e936181993f71308b715090d5dfe6702633b196773107f4-json.log
    29M     /var/lib/docker/containers/57abb32f88a8b5042e936181993f71308b715090d5dfe6702633b196773107f4
    17M     /var/lib/docker/containers/072b0050b358893efadfccc65ce0d865abd297283697a99d1ed1d52096c31005/072b0050b358893efadfccc65ce0d865abd297283697a99d1ed1d52096c31005-json.log
    17M     /var/lib/docker/containers/072b0050b358893efadfccc65ce0d865abd297283697a99d1ed1d52096c31005
    11M     /var/lib/docker/containers/bba2c290af99408729b3abc0c7734e8be90128bbaea4f08021f406f1738ab284/bba2c290af99408729b3abc0c7734e8be90128bbaea4f08021f406f1738ab284-json.log
    11M     /var/lib/docker/containers/bba2c290af99408729b3abc0c7734e8be90128bbaea4f08021f406f1738ab284
    4.9M    /var/lib/docker/containers/9c6fcf52bfbe97a6d87634f9e76ee5accedc18a44d2c1911f7c622d025eaa446/9c6fcf52bfbe97a6d87634f9e76ee5accedc18a44d2c1911f7c622d025eaa446-json.log
    4.9M    /var/lib/docker/containers/9c6fcf52bfbe97a6d87634f9e76ee5accedc18a44d2c1911f7c622d025eaa446
    4.5M    /var/lib/docker/containers/4218bcedf79528f70c0a0cee86649d81330145a4cdfd2f3abb0791bb5b0f7057/4218bcedf79528f70c0a0cee86649d81330145a4cdfd2f3abb0791bb5b0f7057-json.log
    4.5M    /var/lib/docker/containers/4218bcedf79528f70c0a0cee86649d81330145a4cdfd2f3abb0791bb5b0f7057
    4.4M    /var/lib/docker/containers/9d325450554d9997aa09b124149548f4639292db04461e46243b56a7a4b611ec
    4.3M    /var/lib/docker/containers/9d325450554d9997aa09b124149548f4639292db04461e46243b56a7a4b611ec/9d325450554d9997aa09b124149548f4639292db04461e46243b56a7a4b611ec-json.log
    316K    /var/lib/docker/containers/a8f0ea007e705e59d8303cbcbbfa42f51c64cbd4ee36e8855792ce8a9f5ed176
    292K    /var/lib/docker/containers/a8f0ea007e705e59d8303cbcbbfa42f51c64cbd4ee36e8855792ce8a9f5ed176/a8f0ea007e705e59d8303cbcbbfa42f51c64cbd4ee36e8855792ce8a9f5ed176-json.log
    208K    /var/lib/docker/containers/c8c2035c5328d0b6efbcfe41e55b8aa272b3f5cfdf029898585eeb12cd01594f
    180K    /var/lib/docker/containers/c8c2035c5328d0b6efbcfe41e55b8aa272b3f5cfdf029898585eeb12cd01594f/c8c2035c5328d0b6efbcfe41e55b8aa272b3f5cfdf029898585eeb12cd01594f-json.log
    96K     /var/lib/docker/containers/434c82b18a17f1431d28807cd55b046bda41c223bb84a9797fc08b9476916123
    72K     /var/lib/docker/containers/434c82b18a17f1431d28807cd55b046bda41c223bb84a9797fc08b9476916123/434c82b18a17f1431d28807cd55b046bda41c223bb84a9797fc08b9476916123-json.log
    52K     /var/lib/docker/containers/665670b96e0320862f1c70e1ce6e83349df691b3d84791d9ca6cd25c8501a569
    36K     /var/lib/docker/containers/9b10cb71ecb9a7af213655649568cf5d3733c001ab89ee1714fef9db0ddba0f2
    32K     /var/lib/docker/containers/3b6eca5c6348d76dfb3776358ac7dbeb7b2a2c60977170c58e87d8d41ea69f20
    28K     /var/lib/docker/containers/fba15cd5e5e96d9cc68e405a3ef2921043cab05746ae9cd60811041625554c38
    28K     /var/lib/docker/containers/665670b96e0320862f1c70e1ce6e83349df691b3d84791d9ca6cd25c8501a569/665670b96e0320862f1c70e1ce6e83349df691b3d84791d9ca6cd25c8501a569-json.log
    24K     /var/lib/docker/containers/bcc3cda2e8ccd7fa57e83ee11dcc26a47604c8dd6a0c4d9155690cce42db14e5
    16K     /var/lib/docker/containers/9b10cb71ecb9a7af213655649568cf5d3733c001ab89ee1714fef9db0ddba0f2/9b10cb71ecb9a7af213655649568cf5d3733c001ab89ee1714fef9db0ddba0f2-json.log
    8.0K    /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6/config.v2.json
    8.0K    /var/lib/docker/containers/c8c2035c5328d0b6efbcfe41e55b8aa272b3f5cfdf029898585eeb12cd01594f/config.v2.json
    8.0K    /var/lib/docker/containers/bcc3cda2e8ccd7fa57e83ee11dcc26a47604c8dd6a0c4d9155690cce42db14e5/config.v2.json
    8.0K    /var/lib/docker/containers/b25cbbb806a80473049080db470be6f6b4755ed674521a5e26cf697f2befde5e/config.v2.json
    8.0K    /var/lib/docker/containers/3b6eca5c6348d76dfb3776358ac7dbeb7b2a2c60977170c58e87d8d41ea69f20/mounts/shm
    8.0K    /var/lib/docker/containers/3b6eca5c6348d76dfb3776358ac7dbeb7b2a2c60977170c58e87d8d41ea69f20/mounts
    8.0K    /var/lib/docker/containers/35230c29fd65bde430becab1b53035b1b670aaa40381892bde04b79c68da5c8a/config.v2.json
    4.0K    /var/lib/docker/containers/fba15cd5e5e96d9cc68e405a3ef2921043cab05746ae9cd60811041625554c38/resolv.conf.hash
    4.0K    /var/lib/docker/containers/fba15cd5e5e96d9cc68e405a3ef2921043cab05746ae9cd60811041625554c38/resolv.conf
    4.0K    /var/lib/docker/containers/fba15cd5e5e96d9cc68e405a3ef2921043cab05746ae9cd60811041625554c38/hosts
    4.0K    /var/lib/docker/containers/fba15cd5e5e96d9cc68e405a3ef2921043cab05746ae9cd60811041625554c38/hostname
    4.0K    /var/lib/docker/containers/fba15cd5e5e96d9cc68e405a3ef2921043cab05746ae9cd60811041625554c38/hostconfig.json
    4.0K    /var/lib/docker/containers/fba15cd5e5e96d9cc68e405a3ef2921043cab05746ae9cd60811041625554c38/fba15cd5e5e96d9cc68e405a3ef2921043cab05746ae9cd60811041625554c38-json.log
    4.0K    /var/lib/docker/containers/fba15cd5e5e96d9cc68e405a3ef2921043cab05746ae9cd60811041625554c38/config.v2.json
    4.0K    /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6/resolv.conf.hash
    4.0K    /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6/resolv.conf
    4.0K    /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6/mounts/shm/mono.226
    4.0K    /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6/mounts/shm
    4.0K    /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6/mounts
    4.0K    /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6/hosts
    4.0K    /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6/hostname
    4.0K    /var/lib/docker/containers/fb959334cdbb75a5c400f2e319c35da75251ef9b070f43ff7a9c454b98171bd6/hostconfig.json
    4.0K    /var/lib/docker/containers/d053aefdd74c8caceccd18d2f4f6568cb9ea3a3a3c974571adb90efc65c84d40/resolv.conf.hash

    The container ID corresponds to Sonarr:

    root@Mercure:~# docker ps
    CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS                                     NAMES
    3b6eca5c6348        binhex/arch-plex           "/usr/bin/tini -- /b…"   33 minutes ago      Up 33 minutes                                     binhex-plex
    072b0050b358        titpetric/netdata          "/run.sh"                2 days ago          Up 46 hours                                     Netdata
    a8f0ea007e70        linuxserver/bazarr         "/init"                  2 days ago          Up 46 hours         0.0.0.0:6767->6767/tcp                             bazarr
    fba15cd5e5e9        google/cadvisor:latest     "/usr/bin/cadvisor -…"   2 days ago          Up About an hour    0.0.0.0:8082->8080/tcp                             cadvisor
    bba2c290af99        linuxserver/jackett        "/init"                  10 days ago         Up 46 hours         0.0.0.0:9117->9117/tcp                             jackett
    434c82b18a17        linuxserver/ddclient       "/init"                  4 weeks ago         Up 46 hours                                     ddclient
    d053aefdd74c        linuxserver/hydra2         "/init"                  4 weeks ago         Up 46 hours         0.0.0.0:5076->5076/tcp                             hydra2
    3aae22fe6c7f        linuxserver/transmission   "/init"                  4 weeks ago         Up 46 hours         0.0.0.0:9091->9091/tcp, 0.0.0.0:51413->51413/tcp   transmission
    8d60b1c98afb        linuxserver/ombi           "/init"                  4 weeks ago         Up 46 hours         0.0.0.0:3579->3579/tcp                             ombi
    fb959334cdbb        linuxserver/sonarr         "/init"                  4 weeks ago         Up 46 hours         0.0.0.0:8989->8989/tcp                             sonarr
    57abb32f88a8        linuxserver/nzbget         "/init"                  4 weeks ago         Up 46 hours         0.0.0.0:6789->6789/tcp                             nzbget
    b25cbbb806a8        linuxserver/radarr         "/init"                  4 weeks ago         Up 46 hours         0.0.0.0:9898->7878/tcp, 0.0.0.0:7979->9899/tcp     radarr
    9d325450554d        eafxx/traktarr:latest      "/init"                  6 weeks ago         Up 46 hours                                     Traktarr
    4218bcedf795        linuxserver/tautulli       "/init"                  6 weeks ago         Up 46 hours         0.0.0.0:8181->8181/tcp                             tautulli
    35230c29fd65        binhex/arch-radarr         "/usr/bin/tini -- /b…"   6 weeks ago         Up 46 hours         0.0.0.0:9797->7878/tcp                             radarr-uhd-binhex

     

  6. 48 minutes ago, nuhll said:

    I got it working:

    https://github.com/rclone/rclone-webui-react/issues/38

     

    These are the 2 commands I currently use; you need to add another script which only runs at first start... now you can see what your mount is doing!

    So to sum up, you have to create a new user script that runs at startup with:

    rclone rcd --rc-web-gui --rc-addr :5555 &

     

    and then add this to your mount command:

    --rc --rc-addr=YOURIP:YOURPORT --rc-web-gui --rc-user=test --rc-pass=test --rc-web-gui-update --stats=24h

    and this to your upload script:

    --rc --rc-addr=YOURIP:YOURPORT --rc-web-gui --rc-user=test --rc-pass=test --rc-web-gui-update --stats=24h

    Am I correct?

     

    I don't understand the purpose of the first script (rclone rcd --rc-web-gui --rc-addr :5555 &).

    What is it for? Why a different port from the others?
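    From reading the rclone docs, my understanding (possibly wrong) is that they do two different jobs, roughly:

    # standalone remote-control daemon: only serves the web GUI, on its own port
    rclone rcd --rc-web-gui --rc-addr :5555 &

    # a mount started with --rc serves its own stats endpoint, so each process needs its own port;
    # the remote name, mount point and ports here are just placeholders
    rclone mount --rc --rc-addr :5572 --rc-user test --rc-pass test gdrive: /mnt/disks/gdrive &

    Is that right?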

  7. 2 hours ago, Bolagnaise said:

    For the last 2 days I've been getting a constant issue. I mount a crypt gdrive and everything works fine; I can access the folders through Windows Explorer and Plex can read them. After about 30 minutes the mount appears to drop and I can no longer access any share on Unraid, not even local ones. If I unmount and then remount using user scripts, the shares instantly reappear and everything works again.

    Any ideas?

    Same as for the other guy: do you have enough RAM? I had those issues because I only had 8 GB, and Plex scanning was creating up to 8 simultaneous connections IIRC. With no available memory, rclone crashed just like yours does.

  8. 1 minute ago, jamesac2 said:

    I'm hoping I'm not hijacking the thread here, but I am after some help with rclone and Plex on my Unraid box.

     

    I have created a VPS to re-download a lot of my shows, as it has cheap 100/100 unlimited data transfer. I've followed SpaceInvader One's guide to mounting the storage, which works perfectly up to a point.

     

    When Plex is playing a file, if I go to rewind or fast-forward it, it just crashes Plex and in turn locks Docker up completely. The only way for me to resolve this is to restart my server, which is a pain as I run pfSense on it. This has happened a few times, and it's noticeable when a program is around 75% through (it's happened to my wife on both occasions, so I've got to fix this!!!).

     

    My internet connection is not great, but I have 40/10, hence my need for a VPS (quicker than moving my content to gdrive via upload).

     

    I am using the rclone beta on Unraid 6.7.2, but this also happened on 6.5.3, which is what I was on before the upgrade. Playing files is OK, and I have only had a streaming issue from Plex twice when playing back a 1080p TV show that is around 5 GB for 45 minutes.

     

    These are my mount options for rclone, which I mount manually with a user script (when I power down the server I have to uninstall the plugin and reinstall it when it boots, which is a pain):

     

    mkdir -p $mntpoint
    rclone mount --allow-other --buffer-size 1G $remoteshare $mntpoint &

     

    My remote is mounted in disks/cloud and I have read/write access for tidying up files and moving the odd file up to gdrive from my server via Dolphin.

     

    I have moved a lot of my mechanical disks out of my server and gone pure SSD & M.2 storage for files I want quick access to, but the films and TV can be considered cold storage and it wouldn't be the end of the world if I lost them (like what happened when I had Amazon Drive a couple of years back).

     

    I'm using Google Drive now.

     

     

    I've seen a lot of options tailored towards cache drives etc., but I'm not sure if that's what I need; I am only playing media to three clients in my house, and mostly two at a time. I'm not sure what would help resolve this rewind/fast-forward issue; I don't mind waiting a couple of seconds for a program to start as long as, once it's playing, it doesn't crash my system.

     

     

     

    I had rclone crash because of RAM issues. With a 1G buffer, I'm pretty sure it's this!

    Lower your buffer to something like 128M and try again!
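    For example, the quoted mount command with a smaller buffer would look like this (same variables as in the original script):

    mkdir -p $mntpoint
    # --buffer-size is allocated per open file, so 1G per stream eats RAM quickly
    rclone mount --allow-other --buffer-size 128M $remoteshare $mntpoint &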
