gshlomi

Members · 338 posts

Posts posted by gshlomi

  1. On 11/20/2021 at 11:26 AM, aarontry said:

    I had the same issue after moving vDisk from cache to array. The problem for me was when I point the vDisk path to the new location the type of vDisk changed from "qcow2" to "raw". So after I change this back to qcow2 everything works perfectly.

    Thanks!!!!

    Was about to reinstall my HomeAssistant from scratch; this saved me a day or two of retesting everything afterwards.

     

    Does anyone know why this happens?
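A guess at the why, for what it's worth: the vDisk type field in the VM template isn't re-detected from the file, so editing the path can leave it at (or reset it to) "raw" even though the file is still qcow2. The two formats are easy to tell apart, because a qcow2 image starts with the magic bytes 0x51 0x46 0x49 0xFB ("QFI") while raw has no header. A quick sketch (the demo file stands in for a real vDisk path):

```shell
# qcow2 images begin with the 4-byte magic "QFI" + 0xFB; raw images
# have no header. A fake header file stands in for a real vDisk here:
printf 'QFI\373' > /tmp/demo-vdisk.img
head -c 4 /tmp/demo-vdisk.img | od -An -tx1 | xargs
# prints: 51 46 49 fb  (qcow2); anything else suggests raw/other
```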

  2. Hi,
    I've got a few files saved at the root of my Appdata folder, with spaces and dashes in the filenames (e.g. "frigate - config.yml").
    There seems to be a problem backing up these files: tar returns an error as if it were trying to back up "frigate", "-" and "config.yml" as three different files…


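The symptom above matches plain shell word-splitting: if a backup script expands a filename unquoted, tar receives "frigate", "-" and "config.yml" as three separate arguments. A minimal reproduction (demo path, not the actual plugin code):

```shell
mkdir -p /tmp/appdata-demo && cd /tmp/appdata-demo
touch "frigate - config.yml"
f="frigate - config.yml"

# Unquoted expansion splits on spaces -> tar sees three arguments
tar -cf bad.tar $f </dev/null 2>/dev/null || echo "unquoted: tar failed"

# Quoted expansion keeps it one argument -> one file archived
tar -cf good.tar "$f"
tar -tf good.tar    # prints: frigate - config.yml
```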

  3. On 9/8/2022 at 5:33 PM, Cessquill said:

    Absolutely, yes. Thought of that as I was typing - apologies.  Fix Common Problems would still warn against a share across multiple pools though, yes?

     

    EDIT: I've now gone and set all dockers to point to /mnt/docker/.appdata...
    Turns out I had already pointed Plex to /mnt/plex/appdata, so I'd obviously started at some point.

    Also to note, stopped the docker service and set the default docker location to /mnt/docker/appdata

    Sorry for hijacking this thread, but what's the benefit of splitting the Plex appdata from the general Appdata folder?

  4. 3 minutes ago, JorgeB said:

    Like mentioned, either the rebuild finished and the disk is green, or it didn't and it's orange (invalid); Unraid only changes the status to green after the rebuild finishes successfully.

    Thanks, that's reassuring.

    Powering back on now...

  5. 11 minutes ago, JorgeB said:

    Just boot the server and check the drive status: a green ball means the rebuild finished before the crash, an orange triangle means it didn't, and it will rebuild from the beginning at the next array start.

    My concern is that it did not finish the rebuild process, yet it will try to do a parity check using the (incomplete) replacement drive, leading to data loss.

    Are you sure just booting the server won't lead to data loss?

    Thanks

  6. Hi,

    So one of my data drives failed. I replaced it and started the rebuild process, but woke up this morning to discover my server had crashed during the night (so I assume the rebuild did not finish).

    Now what? I guess force-rebooting the server will start a parity check (due to the unclean shutdown), but what about my data? Is it lost?

     

    Thanks

  7. Hi all.

    Set up Tdarr a few days ago; it works great mostly, but I'm having a strange issue with my TV Shows library.

    Transcoding works fine, but I'm getting a "Copy failed" error afterwards.

    Checking the handling info, I'm getting:

    Quote

    Cache file /temp/Show - S01E07-TdarrCacheFile-HjYJU-2Xm.mkv (1679520663 bytes) does not match size of new cache file /mnt/media/TV Shows - English/Show/Season 01/Show - S01E07-TdarrCacheFile-k3p7_wY55.mkv (25427968 bytes)

    I've tried setting all permissions to 777; no change.
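For context on what the message itself means: the error is a size comparison between the transcoded file in Tdarr's cache and the copy at the destination, so 25 MB versus 1.6 GB points at a truncated copy (destination free space, path, or permissions) rather than a transcode problem. The check amounts to something like this (stand-in paths and sizes):

```shell
# Simulate the size check from the error above: a truncated copy at
# the destination fails the comparison (both paths are stand-ins):
src=/tmp/tdarr-cache.mkv; dst=/tmp/tdarr-dest.mkv
head -c 1048576 /dev/zero > "$src"   # finished transcode in the cache
head -c 65536   /dev/zero > "$dst"   # copy that stopped short
if [ "$(stat -c %s "$src")" -eq "$(stat -c %s "$dst")" ]; then
  echo "sizes match: safe to replace"
else
  echo "copy failed: sizes differ"
fi
```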

     

    Any idea anyone?

    Thanks

  8. 9 hours ago, corgan said:

    Hello

     

    I'm using an app called CompreFace on my Unraid server. I'm running this atm via docker-compose, which runs fine.

    https://github.com/exadel-inc/CompreFace

    Github: https://github.com/exadel-inc/CompreFace#getting-started-with-compreface

    DockerHub: https://hub.docker.com/u/exadel

     

    docker-compose.yaml:

    version: '3.4'
    
    volumes:
      postgres-data:
    
    services:
      compreface-postgres-db:
        image: postgres:11.5
        container_name: "compreface-postgres-db"
        environment:
          - POSTGRES_USER=postgres
          - POSTGRES_PASSWORD=postgres
          - POSTGRES_DB=frs
        volumes:
          - postgres-data:/var/lib/postgresql/data
    
      compreface-admin:
        image: exadel/compreface-admin:0.6.0
        container_name: "compreface-admin"
        environment:
          - POSTGRES_USER=postgres
          - POSTGRES_PASSWORD=postgres
          - POSTGRES_URL=jdbc:postgresql://compreface-postgres-db:5432/frs
          - SPRING_PROFILES_ACTIVE=dev
          - ENABLE_EMAIL_SERVER=false
          - EMAIL_HOST=smtp.gmail.com
          - EMAIL_USERNAME=
          - EMAIL_FROM=
          - EMAIL_PASSWORD=
          - ADMIN_JAVA_OPTS=Xmx8g
        depends_on:
          - compreface-postgres-db
          - compreface-api
    
      compreface-api:
        image: exadel/compreface-api:0.6.0
        container_name: "compreface-api"
        depends_on:
          - compreface-postgres-db
        environment:
          - POSTGRES_USER=postgres
          - POSTGRES_PASSWORD=postgres
          - POSTGRES_URL=jdbc:postgresql://compreface-postgres-db:5432/frs
          - SPRING_PROFILES_ACTIVE=dev
          - API_JAVA_OPTS=Xmx8g
          - SAVE_IMAGES_TO_DB=true
    
      compreface-fe:
        image: exadel/compreface-fe:0.6.0
        container_name: "compreface-ui"
        ports:
          - "8000:80"
        depends_on:
          - compreface-api
          - compreface-admin
    
      compreface-core:
        image: exadel/compreface-core:0.6.0
        container_name: "compreface-core"
        environment:
          - ML_PORT=3000

     

    But the app creates 5 different containers, and they have no icons.

     

    Is there a way to transform these into an Unraid Docker Template?

    Can you please create a step-by-step guide for using the above on unRAID?

     

    Thanks

  9. On 4/15/2021 at 5:56 PM, trurl said:

    How would you define "appropriate disk" if not based on the settings for the user share?

     

    Mover has gotten a bit more complicated over the years, especially with the new multiple-pools feature, but it is still basically moving between each /mnt/pool and /mnt/user0.

    Thanks @itimpi and @trurl - I forgot about /mnt/user0, so now it makes sense.

    I guess it’s time to manually move some files around...

  10. On 4/10/2021 at 3:23 PM, itimpi said:


    The problem is that the logic for selecting a drive to use sits at a much lower level in unRaid than the level at which mover runs, so the file size is not known at the point where the target drive is selected.

     

    I agree it would be nice if the size WAS taken into account before even attempting to move a file, but I suspect that the changes required to achieve this may be non-trivial. Still worth asking for, in case someone at Limetech has a brainwave on an easy way to implement this.

    My assumption is that the Mover script moves files from the cache drive/pool directly to the appropriate disk, not to the array share (which includes the cache itself), so the move operation can check all the target prerequisites (including target free space, file sizes & split levels) before moving a file, but I might be wrong...

    Please note that I'm referring to the Mover script - it moves files already on Unraid from the cache pool/drive to the array-protected disks, so the file sizes should already be known.

    If I set a share's Minimum Free restriction to 4GB, the movie file is 4.5GB and the subtitles file is 40KB, with a split level that forces the subtitles & movie files onto the same disk, and Mover moves the subtitles file to a disk with 4.1GB free, the movie file will be stuck on the cache drive.

    If the Mover script moved the files ordered from largest to smallest, the movie file would move first to a disk with enough free space, and the subtitles file would be moved to the same disk...

  12. Hi.

    I've noticed that many times, when the Mover script needs to move a movie folder (which contains a large MKV file and a small SRT subtitles file) from my cache drive to the array, it moves the small subtitles file first, then fails to move the MKV file to the same drive due to insufficient free space, so the MKV gets stuck on the cache drive.

    Is it possible for Mover to check the folder size & destination allocation restrictions (method + split level + free space) when deciding which drive it should move the folder to?

    If that's too much to ask, maybe just make Mover move the files off the cache drive ordered by size? Starting with the biggest files would, I believe, resolve the problem.
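The largest-first idea is simple to sketch: list the pending files by size, descending, and move in that order so the big MKV claims a disk with enough space before the small SRT follows it there (demo paths stand in for /mnt/cache):

```shell
# Build a demo "cache" folder: one big file, one small sidecar
mkdir -p /tmp/cache-demo/Movie
head -c 4500 /dev/zero > /tmp/cache-demo/Movie/movie.mkv
printf 'sub' > /tmp/cache-demo/Movie/movie.srt

# GNU find prints "<size> <path>"; sort -rn puts the biggest first
find /tmp/cache-demo -type f -printf '%s %p\n' | sort -rn
# movie.mkv (4500 bytes) lists before movie.srt (3 bytes)
```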

     

    Thanks

  13. On 1/31/2021 at 3:20 AM, mlapaglia said:

    removing `height: 6%` from the `gpu-image` css worked for me on chrome.

    [screenshot]

     

    I have one plex stream and one ethereum miner but they both show up as plex:

    [screenshot]

    Can you share how you removed the 'height' so the icons appear as they should?

  14. 4 hours ago, blaine07 said:

    Would that be related to seeing this in my NC instance today? Looks like the App Store opens and such today, though. Also seeing an update to 20.0.7; maybe that fixes the conundrum?
     
    [screenshots]

    Also, with the apps server maybe being broken, is it even safe to attempt to update to 20.0.7 today at all?

    It seems (at least on my end) that everything is back to normal, so I've disabled the above settings and updated my instance to 20.0.7 just fine...

  15. Hi folks.

    It appears that the repository at http://apps.nextcloud.com/ has been down for a while, so many are facing problems upgrading or freshly installing Nextcloud.

    After a lot of googling, I found a Docker image someone made to self-host the apps, and this is what I did to solve it on my installation:

    1. The image is not available on Docker Hub or CA, so I had to run it manually using:

    docker run -d --name='nc-cache' --net='lsio' -e TZ="Asia/Jerusalem" -e HOST_OS="Unraid" -p '8000:8000/tcp' registry.r3ktm8.de/sealife-docker/nextcloud-cache:nightly

    2. Added the following to "/mnt/user/appdata/SWAG/nginx/proxy-confs/nc-cache.subdomain.conf":

    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name apps.nc.*;
        access_log /config/log/nginx/nc_apps.access.log;
        error_log /config/log/nginx/nc_apps.error.log;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        location / {
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app nc-cache;
            set $upstream_port 8000;
            set $upstream_proto http;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        }
    }

    3. Added "apps.nc" to my list of subdomains on the SWAG container and restarted SWAG.

    4. Added the following to "/mnt/user/appdata/Nextcloud/www/nextcloud/config/config.php":

      'appstoreenabled' => true,
      'appstoreurl' => 'https://apps.nc.mydomain.org/api/v1',

    and restarted Nextcloud.
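If you'd rather not hand-edit config.php, the same two values can be set through Nextcloud's occ tool (the container name "Nextcloud" and the domain here are assumptions; some images need `sudo -u abc php /config/www/nextcloud/occ` instead of a plain `occ` wrapper):

```shell
# Same change as step 4, via occ instead of editing config.php
# ("Nextcloud" container name and the domain are assumptions):
docker exec -it Nextcloud occ config:system:set appstoreenabled --value=true --type=boolean
docker exec -it Nextcloud occ config:system:set appstoreurl --value='https://apps.nc.mydomain.org/api/v1'
```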

     

    Hope this helps. Unfortunately I don't know how to add it as a template to CA with automatic updates, but this is a start.

    It looks as if the RDP session times out if the window is left open without any user interaction:

    Quote

    guacd[353]: ERROR: User is not responding.
    guacd[353]: INFO: User "@5065b63a-ecc8-46de-bda6-9939f02548e0" disconnected (0 users remain)
    guacd[353]: INFO: Last user of connection "$b5d0262b-19d1-40c5-86f6-b05a14205cec" disconnected
    guacd[332]: INFO: Connection "$b5d0262b-19d1-40c5-86f6-b05a14205cec" removed.

    Any way of removing this limitation?

    There's no way to access the app again without restarting the container.

     

    Thanks

  17. Hi.

    Thanks - I was just about to install aptalca's version, but I already have some containers from linuxserver.io (sharing the same base image?).

    Anyway, just a quick question/suggestion - is it possible to change the networking from "host" to "bridge"? What ports would be needed for that?

    Another suggestion: add a dedicated "Pictures" folder mapping to the settings, so as not to use the appdata folder (the program suggests /config/Pictures by default).

    BTW - is there a known reverse proxy configuration for SWAG to make it available over WAN?

     

     

    Thanks 🙂

  18. 13 hours ago, gshlomi said:

    Hi.

    Just a quick question - can't the RTMP module be integrated into LinuxServer's LetsEncrypt nginx container?

    Thanks

    Answering my own question - it's already integrated, just needs some work to enable it.

    Downloaded the complete project as a ZIP file from https://github.com/arut/nginx-rtmp-module and extracted the "stat.xsl" to my "appdata/LetsEncrypt/www" folder.

    Then edited "appdata/LetsEncrypt/nginx/nginx.conf", changing:

    worker_processes 4;

    to:

    worker_processes auto;

    and adding:

    rtmp {
    	server {
    		listen [::]:1935;
    		chunk_size 4096;
    		application live {
    			live on;
    			record off;
    			push 'rtmp://url.twitch.tv/app/<StreamKey>';
    			push 'rtmp://url.youtube.com/<StreamKey>';
    		}
    	}
    }

    after the "events" block.

     

    Additionally, edited "appdata/LetsEncrypt/nginx/site-confs/default" to add:

    server {
        listen 8080;
    
        location / {
            root /config/www;
        }
    
        location /stat {
            rtmp_stat all;
            rtmp_stat_stylesheet stat.xsl;
        }
    
        location /stat.xsl {
            root /config/www;
        }
    }

    as a separate server block.

     

    The last step was adding port mappings for 1935 (TCP) & 8080 (TCP) to the container template (I had to map internal 8080 to 8083 on the host, due to other containers already using 8080) and saving.
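With the ports mapped, the stat page served by the block above can be sanity-checked from another machine (the hostname "tower" and the 8083 host port are examples from my mapping):

```shell
# The /stat location should answer with XML rendered through stat.xsl:
curl -s http://tower:8083/stat | head -n 5
```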

     

    Works like a charm, no need for another container just for RTMP streaming 🙂


     

  19. On 8/31/2016 at 8:31 PM, Squid said:

    Turn on Docker Hub searches within CA settings, then search for dvdgiessen. You'll have to add the port 1935, and a volume mapping of /path/to/my/custom/nginx.conf mapped to /etc/nginx/nginx.conf (it's not an automated build, so CA won't be able to populate those fields)

    Hi.

    Just a quick question - can't the RTMP module be integrated into LinuxServer's LetsEncrypt nginx container?

    Thanks