Cpt. Chaz

Everything posted by Cpt. Chaz

  1. Unraid 6.8.0. Hi guys, this past Saturday, out of the blue, I noticed the dashboard on my server wasn't showing any CPU stats. Then I noticed none of my docker stats were working, and then realized my syslog was at 100%. The whole system was very slow and unresponsive. Fix Common Problems showed nothing out of sorts, and the dockers themselves (Plex, for example) all seemed to be working fine. I tried to use the diagnostics zip tool and even that wouldn't work; it was unresponsive. I used the terminal to capture the syslog to flash manually (the wiki said it would be ready to post to the forums using this method, but if it's not anonymized, let me know and I'll take it down straight away). That log file is attached below as "syslog-2020-01-11.zip".
     After a clean reboot, everything seemed to return to normal. Fast forward a couple of days to just a few short moments ago: I logged in to find my syslog back up to 77%. FCP is still not showing any problems. The diagnostics zip tool worked this time, so I grabbed the logs and attached them below as well, as "kal-el-diagnostics-20200113-1320.zip". I glanced at the logs from today and saw a few red lines, but unfortunately the rest is Greek to me.
     I have a couple of suspicions. I installed Folding@home last week; even with CPU and memory parameters set, it caused problems, so I uninstalled it (no good deed... lol). The other possibility is the semi-permanent WireGuard connection I now have to another remote Unraid server, but that has been set up for weeks with no issues (until now, I guess). Other than that, I can't think of any changes that might be the culprit, and I really don't know where to start. Thanks in advance. -Chaz
     Attachments: kal-el-diagnostics-20200113-1320.zip, syslog-2020-01-11.zip
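     For anyone wanting to do the same manual capture, a minimal sketch of the idea, assuming the stock Unraid locations (/var/log lives in RAM, /boot is the flash drive); the exact wiki commands aren't reproduced here:

       df -h /var/log                                    # see how full the RAM-backed log filesystem is
       cp /var/log/syslog /boot/syslog-$(date +%F).txt   # copy the live syslog onto the flash drive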
  2. Radarr and Sonarr handle media files differently. Sonarr actually inspects the files and determines whether they're x264 or x265; Radarr (at least v2) does not. It's therefore imperative to have Radarr rename the files (not the folders) with the MediaInfo tag, like so:
     {Movie Title} ({Release Year}) -{Quality Title}-{MediaInfo VideoCodec}
     This will output a (Plex-compatible) file name like:
     Batman (1992) -Bluray-1080p-x265
     If you don't have this tag set but you do have an x265 quality profile, Radarr will re-download the file from a source with an x265 tag if it senses a file change, e.g. from Unmanic. So add the file tag under the Radarr settings. Then you'll need to rename all the movies in your library to apply the naming scheme, which can be done with Radarr or FileBot. Then, when Unmanic converts "Batman (1992) -Bluray-1080p-x264" to x265, you'll need to manually rename it to "Batman (1992) -Bluray-1080p-x265" (see the sketch below). Radarr will then see that it meets the quality profile and won't attempt to re-download it (assuming the other quality profile criteria are met). PS: Radarr v3 may have file analysis functionality, which would render this whole exercise moot. Might also be worth looking into.
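     A minimal sketch of that manual rename, assuming a hypothetical library path under /mnt/user/media:

       # after Unmanic transcodes the file, update the codec tag in the name that Radarr matches against
       cd "/mnt/user/media/Movies/Batman (1992)"
       mv "Batman (1992) -Bluray-1080p-x264.mkv" "Batman (1992) -Bluray-1080p-x265.mkv"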
  3. Ah, crap. I fiddled with it for a while back when I was trying to use the Mac Pro for Unraid, but I never could get there with it. I tried 6.7.2. Too bad 6.8 didn't get it. I wonder if the RC with the new Linux kernel would be worth a shot? 6.8-rc7 or whatever it was (I'd have to go back and look, unless someone knows offhand).
  4. @jonathanm I think josh.5 may be clocked out for a few months, per this post here.
  5. I wanted to share what I think is another awesome way of utilizing Unmanic. I don't mean to hijack the thread here, but this certainly seems like a good place to share with fellow Unmanic users. It requires two different Unraid servers, so if you only have one, I don't see this being applicable for you. Essentially, I've "found" a way to use two instances of Unmanic on one library. I'll just give some bullet points; if anyone is actually interested in the details, feel free to ask any questions.
     For my setup, I have an Unraid server at "home" and one at a separate location at my "office", so they do not share a LAN. Unraid 6.8 introduced support for WireGuard VPN, and that's what makes this all possible. I won't go into the details of setting that up, but for anyone that missed the intro, look here for setup info. The "home" server is where all my media server content is housed and where my primary Unmanic instance has been running for the past few months. The "office" server is a way over-powered Ryzen 2700 file server.
     Reference:
       "home" = 192.168.1.121
       "office" = 192.168.0.121
     Setup:
       - Set up the "home" WireGuard VPN tunnel and add "office" as a LAN-to-LAN peer with an active WireGuard connection.
       - On "office", use the Unassigned Devices plugin to add a remote SMB path pointing to "192.168.1.121/media" (the Plex media library location on "home") and mount it.
       - Install the Unmanic docker on "office". The library paths will now look something like this:
           /library/movies -> /mnt/disks/192.168.1.121_media/Plex/Movies/
           /library/tv shows -> *intentionally left blank, see explanation below*
       - Change the access mode for both paths to "RW/Slave".
       - As I mentioned earlier in the thread here, I've changed my cache directory to use RAM instead of disk, so it's now either /tmp or /dev/shm if you go this route.
       - Apply the settings and let the container install. Access the GUI and make sure the TV path is left empty.
     The reason I've not mapped the TV path is that I don't want the containers fighting over the same files. Doing it this way, I've got "home" converting TV shows and "office" remotely converting movies. Your own configuration may vary here, but it's important not to point both containers at the same media folders simultaneously; I can see that causing problems. (A rough command-line equivalent of the "office" setup is sketched below.)
     So far, I'm 24 hours into this experiment. "home" WireGuard is showing 59.9 GB up and 45.9 GB down, with 22 successful movie conversions. I've not had a single file failure yet, and I've just cut my overall library conversion time (more or less) in half. I've got my dual Xeon server at home and my Ryzen 2700 8-core both crunching away as we speak. Of course, I have CPU pinning and prioritization in place on both servers. Hopefully somebody can find this useful. YMMV. Good luck!
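     For the curious, here is roughly what the "office" side boils down to at the command line. This is a hedged sketch, not the actual steps: the GUI (Unassigned Devices plus the docker template) handles all of it, and the share credentials, the josh5/unmanic image name, and the 8888 web UI port are assumptions on my part:

       # mount the "home" media share on "office" (Unassigned Devices does this for you from the GUI)
       mount -t cifs //192.168.1.121/media /mnt/disks/192.168.1.121_media -o username=USER,password=PASS

       # run Unmanic with only the movies path mapped, RW/Slave propagation, and a RAM-backed cache
       docker run -d --name unmanic \
         -v /mnt/disks/192.168.1.121_media/Plex/Movies/:/library/movies:rw,slave \
         -v /dev/shm/unmanic:/tmp/unmanic \
         -p 8888:8888 \
         josh5/unmanic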
  6. Yeah, that's what I'm going for. Otherwise, it leaves countless empty folders behind that have to be cleared out. It also appears that these folders aren't integral to the log/history, so they serve no purpose that I can see.
  7. Hard to pick just one thing I love about Unraid, but if I had to pick...
     - I love the docker integration and UI. Before I knew anything else about Unraid, I knew I liked the way it handled dockers, and I red-pilled after that.
     - One thing I'd really like to see added is a more comprehensive and user-friendly way to do a server-to-server backup, instead of relying on third-party services or my shitty terminal skills. (I think this ability would even make Unraid more business friendly, as I run a server for my small business as well as my home.) Haha!
  8. Sure, yeah. In the container settings, under "Encoding Cache Directory": the docker mapping is /tmp/unmanic, and the host mapping is the file path right above it. Different people map it to different places; that's what piqued my curiosity about mapping it to RAM instead of disk, which leaves the empty folders behind. A RAM mapping clears them out.
  9. Someone else earlier in this thread mentioned it, so I tried it, and it's worked awesome for me: using "--cpu-shares=2" in the docker's Extra Parameters (sketched below).
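     In plain docker terms, the Extra Parameters field just appends that flag to the container's run command. A hedged sketch of what that amounts to, and how to confirm it took effect (image name assumed, as above):

       # give the Unmanic container a very low CPU weight so other containers win under contention
       docker run -d --name unmanic --cpu-shares=2 josh5/unmanic
       # confirm the setting on the running container
       docker inspect --format '{{.HostConfig.CpuShares}}' unmanic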
  10. Awesome! Glad it worked. I got to thinking about it at first, not knowing if it would even work, but also not seeing why it wouldn't. It seems good so far for me too. It always bugged me, the empty folders left behind...
  11. I un-prioritized core usage in the docker's Extra Parameters; is that what you're talking about? Also, /dev/shm is allocated just 50% of your total RAM, a handy safeguard. I've got 192 GB of RAM in the arsenal, so it shouldn't be a problem for me either way, but for folks with less, I'd definitely use that over /tmp.
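      To see what that safeguard looks like on your own box, a quick check (the 64G resize is purely a hypothetical example, and usually unnecessary):

        df -h /dev/shm                       # tmpfs size defaults to half of installed RAM
        mount -o remount,size=64G /dev/shm   # it can be resized on the fly if you really need to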
  12. Good to know! I'm utilizing 3 workers for TV episodes at the moment, nowhere near 25 GB file sizes, so I should be in good shape there. Once I get to the movies, I may keep a closer eye on it.
  13. /dev/shm is allocated to only use 50% of RAM, whereas /tmp has no limit. I wasn't sure what to expect for RAM consumption trying this out, so I erred on the side of caution. The history part hadn't occurred to me, though. I pretty much leave the container running 24/7, except for updates, etc. I'm not overly concerned with long history, but the next time I restart the docker, I'll report back.
  14. Has anybody tried setting the /tmp cache directory to /dev/shm? I wouldn't advise it if you're low on RAM, but so far I'm not experiencing any performance issues, and it's been great for keeping my file system clean.
  15. A failed conversion trying to go from a 2160p remux HEVC MKV to a 2160p HEVC MP4. FFmpeg log here.
  16. I'm getting that same message on every successful completion of mine, too. 3000+ files so far are showing it, all but 34 successful.
  17. That's exactly what it is. I got the remote SSH part working, but then trying to follow along with the OP's terminal commands was not producing the same output for me at all, which then had me googling almost every single step trying to figure out the delta between my results and his. I don't usually skip ahead too much in tutorials; if anything, I tend to get bogged down in the details so as not to miss anything. But if there's clarification down the line, maybe I'll give it one last gander!
  18. Hey hoop, I just wanted to say thanks again for your help with all this. I'm going to table this method for the time being. I think given enough time I could get there, but I'm out of my depth here, I'm afraid, without being spoon-fed through the entire process. When things slow down and I can dedicate a little more time to it, I'll look at giving it another shot. Thanks again.
  19. How stable is 6.8 right now? I'm running Server 1 in my office and can't afford any downtime at the moment. That does sound like a perfect solution, though.
  20. Quick update: I'm still working on the initial setup of this; I didn't want you to think I fell off after you PM'd the scripts. I'm trying to get this to work across a VPN. Here's what I've got: I'm planning to back up "Server 1" to "Server 2". I've also got a Mac ("Mac 1") on the same LAN as Server 1, and Mac 1 is VPN'd to Server 2. My naive hope had been that Server 1 would be able to access Server 2 via Mac 1's VPN, since Mac 1 and Server 1 share a LAN connection. I couldn't SSH in this way, and pinging Server 2 from Server 1 over Mac 1's VPN returned nada. So tonight I'm going to work on getting an OpenVPN client installed on Server 1, so I can connect it directly to Server 2 and try again. Unless... I can SSH directly over to Server 2 using my DNS address instead of a local IP, but it looks like this requires port 22 to be open. Is it safe to do that? I'm guessing I'd at least need to change Unraid's root user password? It sure would be nice to not have to rely on a VPN, even though that's not a deal killer.
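      If port 22 does end up exposed, the usual advice is key-based login rather than just a changed root password. A minimal sketch, with server2.example.com standing in for the DNS address (a placeholder, not my actual setup):

        # on Server 1: generate a key pair and copy the public half to Server 2
        ssh-keygen -t ed25519 -f ~/.ssh/server2_backup
        ssh-copy-id -i ~/.ssh/server2_backup.pub root@server2.example.com
        # subsequent logins use the key instead of the root password
        ssh -i ~/.ssh/server2_backup root@server2.example.com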
  21. Wow, OK, let's give it a shot. Can you send me a PM to get me started?
  22. The end result of your first suggestion looks pretty much like what I'm looking for, but the steps to get there may be out of my league. Following along with the first step, the first rsync command-line prompt didn't work for me 😂 Then trying to figure out whether "'rsync' and 'ssh' should be in thisuser's path (use 'which ssh' and 'which rsync'), 'rsync' should be in remoteuser's path, and 'sshd' should be running on remotehost" completely went over my head. It also leads me to think that with any sort of troubleshooting I'd encounter in this configuration, I'd be completely in the dark. IT unfortunately is not my profession, as much as I enjoy it (especially being self-employed), which leads me to think I need something a little more user friendly here. I wish there were, because I hate the monthly fee of a cloud service when I have plenty of my own offsite storage, you know?
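      For what it's worth, the checks that passage describes boil down to something like this; a hedged sketch, with remotehost and the backup paths as placeholders rather than anything from the actual scripts:

        # confirm the local side has both tools on its PATH
        which ssh rsync
        # confirm the remote side has rsync and is accepting SSH connections
        ssh root@remotehost 'which rsync'
        # the basic server-to-server copy those prerequisites enable
        rsync -avz -e ssh /mnt/user/backups/ root@remotehost:/mnt/user/backups/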
  23. Thanks for the replies, hoopster! I'm going to give your first suggestion a good look tomorrow (work was crazy today), and I may be back with some questions. Thanks again, talk soon.