noja's Achievements

Apprentice (3/14)

  1. Ran into an issue today where I had a Synology share mounted under UD. I had manually shut down the Synology but forgot to unmount the shares from Unraid first. UD would not allow me to unmount the share until I turned the Synology back on. Additionally, I got impatient and decided to just reboot the server; however, a graceful shutdown and reboot were blocked by that unmount. Turning the Synology back on finally allowed the server to reboot. Weird stuff.
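In case anyone else hits this: a mount whose server has gone offline tends to hang every process that touches it, which is what blocks both the UD unmount and the graceful shutdown. A small sketch (the share path in the comments is a placeholder, not my actual mount) for probing a mount without hanging, plus the usual force/lazy unmount escape hatches:

```shell
# Probe whether a remote mount is still responsive; 'timeout' stops the
# stat call from hanging forever when the server is dead.
check_mount() {
  local path="$1"
  if timeout 5 stat "$path" > /dev/null 2>&1; then
    echo "responsive"
  else
    echo "stale"
  fi
}

# If the server is already offline, a stuck mount can usually still be detached:
#   umount -f /mnt/remotes/synology_share   # force unmount
#   umount -l /mnt/remotes/synology_share   # lazy: detach now, clean up when unused
```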
  2. Don't think so. I think I understand what trapexit is arguing, but there has to be a better solution. I've been using autofs on an Ubuntu setup and it still ends up with stale file handles all the time. I should note that I have hard links off and no cache for the share. The only other option I can see is the Tunable (fuse_remember) setting under Settings->NFS, but the warning about out-of-memory errors makes me a little skittish about setting it to -1.
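For context, the autofs setup I'm referring to is just the standard indirect map (host and paths here are examples, not my real config); the --timeout is what unmounts idle shares, which in theory should avoid stale handles:

```
# /etc/auto.master
/mnt/nfs  /etc/auto.nfs  --timeout=60

# /etc/auto.nfs (indirect map: /mnt/nfs/media -> the NFS export)
media  -fstype=nfs4,soft  synology.local:/volume1/media
```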
  3. Hey! So I've basically decided to give up on cloning the contents of the SSD. What I've done is remapped the folder where Plex itself stores its automated backups to an NFS share from my backup location. Essentially, I added an NFS share to the container called "/databasebackups", then in Plex itself I told it to back up to that folder. I now have a backup of the database, including watch stats, libraries, etc. - but if that SSD ever takes a dive, I'll have to re-download all the images, posters, and metadata. Does that make sense? Given that I'm kinda comfortable with that, I've also started using the same method for all my programs that have an integrated backup feature - Sonarr, Radarr, etc. While those are also covered by CA Appdata Backup, I like the idea of being able to restore one item at a time, rather than the entire appdata folder, which is all CA lets you do.
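To illustrate the remap: container-side it's just one extra volume mapping. A sketch (the host path is a placeholder; only the /databasebackups container path is from my actual setup — on Unraid you'd add it as a Path in the container template rather than via compose):

```yaml
services:
  plex:
    image: lscr.io/linuxserver/plex
    volumes:
      - /mnt/remotes/backupnas/plexbackups:/databasebackups  # host path is a placeholder
```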
  4. Yep - I'll add another one to the list. It was timing out on the web UI, and enabling Privoxy solved that. After getting it up and running, I tried the old "turn it off and on again" on Privoxy, and having it off results in no web UI. Funky.
  5. Happy to do that. However, looking at it currently, I've had zero errors overnight now. Thinking about this, the errors do seem to coincide with a massive file transfer that I was doing over the last 4-5 days. I moved all my movies to a second server (9ish TB), and since the transfer finished, there have been zero errors. During that time, I was using Krusader and DoubleCommander for the transfers. Could there be a larger networking issue on my router's side, where it couldn't handle the load? pfSense logs aren't showing me anything, and I have an SG-3100, so it should be good, but I guess that doesn't mean anything.
  6. So I keep getting a ton of errors spamming the log, mostly disk-IO related, I think. Specifically, I keep seeing these three:
     kernel: traps: lsof[4210] general protection...
     unassigned.devices: Error: shell_exec(/usr/bin/lsof '/mnt/disks/plexappdata' 2>/dev/null | /bin/sort -k8 | /bin/uniq -f7 | /bin/grep -c -e REG) took longer than 5s!
     nginx: 2020/10/08 14:52:15 [error] 9129#9129: *311962 upstream timed out (110: Connection timed out) while reading response header from upstream... (from the main server dashboard)
     Finally, the server GUI itself is incredibly slow to complete any tasks. Just downloading the diagnostics for this post timed out and failed twice. A weird note regarding this part: I've been using Chromium Edge mostly, and the diagnostics only downloaded once I used Firefox. I have no idea what's killing me on all this.
  7. I'm currently also getting a ton of these errors.
  8. Are there any general tips out there for increasing the performance and speed of Nextcloud? I found this Synology-specific guide that is a little bit in the direction of what I'm looking for; however, I'm not sure how to translate it into an Unraid environment. My NC instance is based on the LSIO docker container running behind the LSIO Letsencrypt reverse proxy. My appdata sits on an NVMe drive connected through a PCIe adapter. The vast majority of my NC storage data is on the array, though. I have 20Mbps upload through a pfSense SG-3100. Generally, I feel like NC should be pretty snappy; however, moving between pages is definitely not snappy. A discussion on reddit notes that this behaviour is heavily influenced by the client. I am currently using the latest Edge Chromium (I honestly love it), but I've experienced the same general speed issues on Firefox and Vivaldi. I'm really hoping there's a write-up somewhere already about performance tuning from an Unraid-specific perspective. Thanks for any direction!
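For anyone landing here from search: the generic (not Unraid-specific) tunings I keep seeing recommended are APCu for local memory caching and Redis for file locking. A hedged sketch of the config.php additions, assuming a separate redis container reachable on the same docker network as "redis":

```php
// Additions inside the $CONFIG array of Nextcloud's config/config.php
// (in the LSIO container: appdata/nextcloud/www/nextcloud/config/config.php).
// The redis host/port are assumptions about your own setup.
'memcache.local' => '\OC\Memcache\APCu',
'memcache.locking' => '\OC\Memcache\Redis',
'redis' => [
  'host' => 'redis',
  'port' => 6379,
],
```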
  9. Hi - sorry if this has been asked before, but my search hasn't found anything close enough to my own scenario. I currently have my Plex appdata hosted on an unassigned device (SSD) and I would like to find the best backup method. All of my other docker containers are on my cache drive and are backed up by CA Appdata Backup. I feel like I have a couple of options for backing up and I'm not sure what will work best: either 1) use Syncthing or 2) write a custom rsync user script. If I go with Syncthing, I can't seem to find a way to tell it to only sync once a day at 4am. Also, is it a major issue for Plex if the container doesn't shut down while the sync is running? Finally, is Syncthing the best third-party option for a backup in this scenario? If I go with option 2, I'm mostly concerned that I'm going to F* it up, but I don't know if my concerns are worthwhile. I found an example of a script here - just not sure if it has been written correctly. Thanks for any insight!
  10. Hey @cyberfreakde - did you find a solution? p.wrangles doesn't seem to work for me.
  11. Huh - you learn something new everyday. Force Update did it for me. Thanks again!
  12. Hey @binhex, thanks again! Just wanted to note though, when using the "latest" tag, the container doesn't seem to work. Oddly though, "2.21.1-4-02" works perfectly. I've restarted the container with both tags twice just to be sure. Thanks again for all your efforts!
  13. Thank you! I can confirm that it's downloading to the correct place now.
  14. I'm having an issue with YouTubeDL-Material. Everything set up super easily, but the download paths don't seem to work. Once I get a video downloaded, it shows up in the webui, but it does not show up in the folder path that has been specified. It's fairly small, so I'm wondering if it is somehow in the dockerfile where I can't find it?
  15. Plex authentication is down.