crazygambit

Members · 71 posts

Posts posted by crazygambit

  1. I've also been having this issue. Has anyone found a solution, or at least a way to permanently increase the log size? 128M seems pretty small, and for some reason when the logs roll they're saved in the same directory, which kinda defeats the point of rolling (keeping the log from filling up the disk in the first place!).

  2. I've been racking my brain to come up with a way to have the plugin automatically start the mover if the cache fills past a certain threshold (say 90%), but also run normally once a day at a set time.

     

    I don't want to take the performance hit of running it every hour if it's not needed, but I'm willing to take it when the cache is getting full. I know the plugin won't detect on its own that it's over 90% full, so I'm running it hourly with a threshold set, and using the Cron Schedule option to force-move all files at 6 am. The problem is that the hourly move never triggers, even when the cache is full. Is it possible to do what I want, or am I completely misunderstanding what this plugin does?
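    For anyone else trying this, the threshold check can also be scripted outside the plugin, e.g. run hourly from the User Scripts plugin or cron. A minimal sketch, assuming the stock Unraid paths /mnt/cache and /usr/local/sbin/mover (verify both on your system before relying on it):

```shell
#!/bin/bash
# Start the mover only when cache usage passes a threshold.
# /mnt/cache and /usr/local/sbin/mover are the stock Unraid
# locations; adjust if your cache pool is named differently.
THRESHOLD=90
CACHE=/mnt/cache

usage_pct() {
  # Print the "Use%" column of `df` for a mount point as a bare number.
  df --output=pcent "$1" 2>/dev/null | tail -1 | tr -dc '0-9'
}

pct=$(usage_pct "$CACHE")
if [ -n "$pct" ] && [ "$pct" -ge "$THRESHOLD" ]; then
  /usr/local/sbin/mover
fi
```

    The plugin's 6 am schedule can then stay as the normal daily run, with this script only kicking in when the cache actually fills up.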

  3. 8 hours ago, TexasDave said:

     

    Yes - I am having the same issue, but trying to get reverse proxy (using Nginx Reverse Proxy) to work before I tackle the email issue.

    Anyone have this working with Nginx Reverse Proxy on Cloudflare? Any tricks? Thanks!

    I got reverse proxy working with Letsencrypt (now called SWAG) and nginx just by modifying the provided subdomain sample. The install instructions mention that subfolder doesn't work, so I made a new subdomain using duckdns.org. I've never used Cloudflare, so no clue there.
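    In case it helps anyone, the subdomain approach is essentially the stock sample with the names filled in. A rough sketch modeled on the Swag/Letsencrypt proxy-conf samples (the server name and upstream address below are placeholders, not my actual setup):

```nginx
# Placeholder subdomain config based on the Swag sample files;
# "myapp" and the upstream IP:port are hypothetical.
server {
    listen 443 ssl;
    server_name myapp.*;

    include /config/nginx/ssl.conf;

    location / {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.0.11:8080;
    }
}
```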

    • Like 1
  4. Ah yes, I'm indeed using a couple of AOC-SASLP-MV8 cards. If anyone is wondering, the "irq: nobody cared" issue is triggered by connecting either an HDMI cable to a monitor or a USB mouse and keyboard. Simply not doing that (or rebooting after, say, changing BIOS settings) avoids the problem altogether. Otherwise those cards seem to work fine; I've had them for almost 10 years now.

  5. 33 minutes ago, Squid said:

    Suggestion by Johnnie Black, and when it does happen it very rarely impacts performance, so at the end of the day it was removed to stop the answers on the forum that basically said "don't worry about it"

    Really? For me it's a complete performance killer for certain stuff. Like the mover slows to a crawl for example. I've never understood what exactly the problem is.

  6. I replaced an old 2TB HDD with a brand new 10TB one. The rebuild was going along smoothly until about halfway through (4.8TB), when I got an error. Now the read check in progress is paused. I think it's unlikely the drive is bad and a cable issue is more likely, but then it probably would have failed earlier, right? I can't make heads or tails of the log, so I was hoping some of the more experienced members might shed some light on how to proceed. I still have the 2TB drive, but I don't know if I can just plug it right back in, since parity has certainly changed (the server was still in use during that time, which may have been a mistake).

     

    I'd appreciate any insight.

    tower-diagnostics-20190510-1919.zip

  7. On 2/16/2018 at 10:11 PM, Djoss said:
    On 2/15/2018 at 6:18 PM, ice pube said:

    Is there a limit to the clipboard for this docker? Sometimes I have a bunch of links I need to send to the clipboard, but when I enter them and press "submit" it just sits there. If I copy less links it works fine. Not the end of the world but would be nice to fix if possible.

    I will check this.

    There are also some browser extensions that allow you to interact with JDownloader. Instead of copy/pasting your links, they could be a better alternative.

     

    Did you ever find the answer to this? I find the docker dies with even a moderate number of links (~100). If assigning more resources to it would let it handle more, that would be great.

  8. I have 2 sticks of 8GB RAM; this one, if it makes any difference: Kingston HyperX Fury Red 8GB 2400 MHz DDR4,

    https://www.amazon.com/Kingston-Technology-HyperX-2400MHz-HX424C15FR2/dp/B06XKSPTH7

     

    I'm looking to add more RAM to my unraid server, and for some reason in my neck of the woods 2666 MHz DDR4 is cheaper at the moment. I'm not super concerned with RAM performance, but I want to know if it's safe to mix and match. I'm OK with the new RAM running at the lower speed; I'd just rather not pay more for 2400 RAM.

     

    The Kingston page says the RAM automatically overclocks to 2666. I honestly have no idea what speed it's currently running at; I just plugged it in and turned the machine on. I'm a complete novice when it comes to motherboard settings, timings, overclocking and all that.

     

    Does anyone have any insight they would like to share regarding this?

  9. Does anyone know why the webgui doesn't work for me in Chrome while it works in MS Edge? It's not critical, but I'd rather not run Edge.

     

    Edit: Also, how do I get it to save the settings? Having to reset the download speed limit each time is particularly annoying.

  10. On 5/9/2018 at 2:25 PM, GabeB said:

    HA! I figured something out. I was a little apprehensive about having to use the command line. I was poking around in Krusader and saw that I could browse to a given folder and then open a terminal window at that spot, which I did. Now in the same directory as the RAR file I didn't need to mess with a lengthy absolute or relative reference to point to the file. So I just tried 'unrar e name.of.torrent.rar' and voila, it worked. 

     

    Not quite as easy as a GUI because the torrent names are a pain to type, but doable. 

    I'd been having exactly the same issue for a while and this worked perfectly! Though I did learn the hard way that "unrar x" respects the rar file's folder structure, while "unrar e" just dumps everything into your current directory.
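    To spell out the two commands for anyone landing here (the archive name is the hypothetical one from the quote above, and unrar has to be installed):

```shell
# Hypothetical archive name; requires unrar to be installed.
ARCHIVE=name.of.torrent.rar

# "unrar x" preserves the folder structure stored in the archive;
# "unrar e" dumps every file flat into the current directory.
if command -v unrar >/dev/null && [ -f "$ARCHIVE" ]; then
  unrar x "$ARCHIVE"
fi
```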

    • Like 1
  11. I finally managed to get this working with Sonarr, Radarr, SABnzbd and Tautulli, using reverse proxy and authentication for all those apps, and I have to say I'm pretty pleased with the result (using the Organizr v2 container).

     

    I was always wary of using a reverse proxy on Sonarr and Radarr, since if they fall into the wrong hands basically your entire server could be deleted, and I felt that simple username and password protection was insufficient. Now, on top of that, I have Organizr's login, so I'm thinking that's sufficient.

     

    Seeing how well all this works, I'm starting to get greedy and thinking that putting the unraid GUI behind a reverse proxy under Organizr might not be such a terrible idea.

     

    I know a VPN is the accepted way to access the server from outside, but how risky would this approach be? You'd be behind at least 2 layers of auth (Organizr's own and the Letsencrypt password).

     

    I'm already taking a risk by opening up those applications, which have permission to delete all my media, so how much worse could opening up the GUI be? Does anyone have any thoughts on this approach? Is it even possible to put the GUI behind a reverse proxy?

  12. I uninstalled this plugin, but today is the day it was scheduled to run (the first of the month) and my server is going crazy, so I highly suspect it's still running.

     

    I found this in my system log today:

     

    Mar 1 05:44:39 Tower inotifywait[5734]: Failed to watch /mnt/disk4; upper limit on inotify watches reached!

    Mar 1 05:44:39 Tower inotifywait[5734]: Please increase the amount of inotify watches allowed per user via `/proc/sys/fs/inotify/max_user_watches'.

     

    Does that mean it is in fact running? If so, how do I disable it? Do I need to install it again and clear the schedule or something?
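    Those two log lines mean some process was still registering inotify watches and hit the per-user cap. To check whether an inotifywait process is actually still running, and to inspect or raise the limit, something like the following works (the 524288 value is just a commonly used choice, not an official recommendation):

```shell
# Show the current per-user inotify watch limit.
cat /proc/sys/fs/inotify/max_user_watches

# List any inotifywait processes still running (prints nothing if none).
pgrep -a inotifywait || true

# Raise the limit for the running system (root required; resets on reboot):
# sysctl fs.inotify.max_user_watches=524288
```

    If pgrep shows a leftover inotifywait process, killing it (or rebooting) should stop the activity without reinstalling the plugin.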

  13. 14 hours ago, wedge22 said:

    I am using a 5930k cpu which has no Quick sync built in. I have a Nvidia Shield tv that is used for my media playback. I have issues playing certain files as they require transcoding and are very large rips of 4K content. How can I improve the transcoding abilities of the Plex server? I currently have 4 CPUs pinned to Plex and my unraid server has a total of 16GB ram along with an SSD cache drive.

     

    Your CPU should be more than enough to transcode one 4K rip of any size (though the Nvidia Shield should be able to direct play anything you throw at it without transcoding anyway). I have an i5-8400, which has a similar Passmark score to yours, and it can almost, but not quite, handle two 4K remux transcodes. With one, CPU usage is around 55%. I don't pin any cores to Plex though; I just let it do its thing.

  14. It's funny I just came here to make the same request.

     

    When doing a parity check my server sometimes becomes a bit unresponsive since the I/O volume is high. As HDD sizes continue to increase, parity checks keep taking longer and longer. Maybe it wasn't a problem when unraid first came out, but with 10TB HDDs, having your system not fully operational for 24 hours is less than ideal.

     

    Apps like Sonarr become unresponsive at some points, and sometimes you just need them to do one little thing, but having to scrap all the hours you've already spent on parity is also painful.

     

    Alternatively, make parity super low priority. I wouldn't care if it took twice as long if it didn't impact my day to day usage in any relevant way.

  15. On 8/13/2018 at 11:54 PM, tential said:

     

    It's been over a year, so probably need to figure out a solution on your own like something to autorestart or limit the ram usage of the docker.  I plan on upgrading to 32 gigs of ram eventually, but in the meantime I restart every 1-2 days or anytime I'm at my PC.

     

    Still makes my life way better with radarr than without, so can't complain.

    I've only recently started experiencing issues with Radarr. It's using over 5GB of RAM (out of 16), making all my dockers completely unresponsive, and CPU utilization hits 100%. So basically, until this is fixed, I can no longer use it. Has anyone had any success limiting the amount of RAM the docker can use?
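    On the limiting question: Docker itself can cap a container's memory. In Unraid the flags would go in the container template's Extra Parameters field; the 4g value and the "radarr" container name below are placeholder assumptions, not tested recommendations:

```shell
# Assumed 4 GB cap; paste into the Unraid template's "Extra Parameters":
EXTRA_PARAMS="--memory=4g --memory-swap=4g"
echo "$EXTRA_PARAMS"

# The same limits can be applied to an already-running container
# (container name "radarr" assumed):
#   docker update --memory 4g --memory-swap 4g radarr
```

    Setting --memory-swap equal to --memory disables extra swap for the container, so the cap is a hard one rather than spilling into swap.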

  16. 2 hours ago, CHBMB said:

    You've put https somewhere where it should be http

     

    I don't think that's quite it. In fact, if I change it from http to https in the Ombi section I immediately get a 502 Bad Gateway error and don't even get to see the login page. I suspect it has to do with Ombi requiring the blocks of code before and after the typical proxy_pass stuff. Here's my current attempt; I'd love to see how you have it set up if you're using Ombi.

     

    location /ombi {
        return 301 $scheme://$host/ombi/;
    }
    
    location /ombi/ {
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.0.11:3579;
    }
    
    if ($http_referer ~* /ombi/) {
        rewrite ^/dist/(.*) $scheme://$host/ombi/dist/$1 permanent;
    }

     

  17. 1 hour ago, CHBMB said:

     

    As far as I know the way to view logs is by clicking the icon on the far right.

     

    That's weird, I'm 100% sure that option was there yesterday. Indeed, clicking the icon on the far right works; I hadn't noticed it because I was using advanced view.

     

    I finally managed to get Tautulli working well, but I'm having some issues with Ombi. I get to the login page fine, but after I log in I get "400 Bad Request: The plain HTTP request was sent to HTTPS port". I'm following the readme's template, but not using the custom docker network, so I put in my IP address.

  18. Thanks to @CHBMB and @bonienl for their very clear responses. That clears a lot of stuff up.

     

    Now I have yet another question. Yesterday, when I clicked the docker icon on the Docker page of the GUI, I had the option to look at the logs, but now for some reason it's gone, and I can't find the logs in the appdata folder either. Does anyone know how I can get that option back?