
Cpt. Chaz

Posts posted by Cpt. Chaz

  1. On 12/30/2019 at 4:22 PM, jonathanm said:

    @Josh.5, could you please give a brief overview of what operations you do on /tmp/unmanic inside the container? I was experimenting with allowing my second server to help with the load, and I ran into some permission issues, it seemed to indicate it couldn't move the file from /tmp/unmanic to the final destination. So, I mapped /tmp/unmanic to a spot on the same mount point, and it appears to have recursively deleted my media files out of the folders in that mount, leaving the folders empty.

     

    I assumed with a description of "encoding cache directory" that I could safely map that anywhere I wanted. That appears not to be the case.

@jonathanm I think Josh.5 may be clocked out for a few months, per this post here.

2. Wanted to share what I think is another awesome way of utilizing Unmanic. I don't mean to hijack the thread here, but this certainly seems like a good place to share with fellow Unmanic users.

     

It requires two different Unraid servers, so if you only have one, I don't see this being applicable for you. But essentially, I've "found" a way to use two instances of Unmanic on one library. I'll just give some bullet points; if anyone is actually interested in the details, feel free to ask any questions.

     

For my setup, I have an Unraid server at "home" and one at a separate location at my "office", so they do not share a LAN. Unraid 6.8 introduced support for WireGuard VPN, and that's what makes this all possible. I won't go into the details of setting that up, but for anyone that missed that intro, look here for setup info. The "home" server is where all my media server content is housed, and where my primary Unmanic instance has been running the past few months. The "office" server is a way over-powered Ryzen 2700 file server.

     

Reference: "home"   = 192.168.1.121
           "office" = 192.168.0.121

1. Set up the "home" WireGuard VPN tunnel and add "office" as a LAN-to-LAN peer.
2. With an active WireGuard connection, use the Unassigned Devices plugin to add a remote SMB path on the "office" server pointing to "//192.168.1.121/media" (the Plex media library location on "home") and mount the path.
3. Install the Unmanic docker on "office". The library paths will now look something like this:
  1. /library/movies -> /mnt/disks/192.168.1.121_media/Plex/Movies/
  2. /library/tv shows -> *intentionally left blank, see explanation below*
  3. Change the access mode for both paths to "RW/Slave".
4. As I mentioned earlier in the thread here, I've changed my cache directory to use RAM instead of disk, so the host path is either /tmp or /dev/shm/ if you go this route.
5. Apply the settings and let the container install. Access the GUI, and make sure the TV path is left empty (see the sketch just below).
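
    For anyone who prefers the command line, the template settings above translate to roughly the following docker run. This is only a sketch: the image name, web UI port, and exact Unassigned Devices mount point are assumptions, so check them against your own template.

      # rough docker-run equivalent of the "office" container settings above
      docker run -d --name=unmanic-office \
        -p 8888:8888 \
        -v /mnt/disks/192.168.1.121_media/Plex/Movies/:/library/movies:rw,slave \
        -v /tmp/unmanic:/tmp/unmanic \
        josh5/unmanic
      # /library/tv is deliberately not mapped, so this instance never touches
      # the shows the "home" instance is working on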

     

The reason I've not mapped the TV path is that I don't want the containers fighting over the same files. Doing it this way, I've got "home" converting TV shows and "office" remotely converting movies. Your own configuration may vary here, but it's important not to point both containers at the same media folders simultaneously; I can see that causing problems.

     

So far, I'm 24 hours into this experiment: "home" WireGuard is showing 59.9GB up and 45.9GB down, with 22 successful movie conversions. I've not had a single file failure yet, and I've just cut my overall library conversion time (more or less) in half. I've got my dual-Xeon server at home and my Ryzen 2700 8-core both crunching away as we speak. Of course, I have CPU pinning and prioritization in place on both servers. Hopefully somebody can find this useful. YMMV. Good luck!


  3. 31 minutes ago, Zer0Nin3r said:

    In my quick tests in the past, folders inside /tmp are cleared after a reboot. *shrugs*

Yeah, that's what I'm going for. Otherwise, it leaves countless empty folders behind that have to be cleared out. It also appears that these folders aren't integral to the log/history, so they serve no purpose that I can see.

4. Hard to pick just one thing I love about Unraid, but if I had to pick...

     

- I love the docker integration and UI. Before I knew anything else about Unraid, I knew I liked the way it handled dockers, and I red-pilled after that.
 

    - One thing I'd really like to see added is a more comprehensive and user-friendly way to do a server-to-server backup, instead of relying on third-party services or my shitty terminal skills. (I think this ability would even make Unraid more business-friendly, as I run a server for my small business as well as my home.) Haha!

  5. 6 minutes ago, TexasDave said:

    @Cpt. Chaz you say:

     

     

    Can you tell me where this is happening? Want to check (and clean) on my system as needed. Thanks!

Sure, yeah - in the container settings under "encoding cache directory". The docker mapping is /tmp/unmanic; the host mapping is the file path right above it. Different people map it to different places, which is what piqued my curiosity about mapping it to RAM instead of disk, so it doesn't leave the empty folders behind. RAM mapping will clear them out.
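
    In template terms it looks something like this (the /dev/shm host path is just the RAM-backed option discussed below; substitute whatever you use):

      # the mapping in question, as it appears in the container template
      #   Container path: /tmp/unmanic
      #   Host path:      /dev/shm/unmanic   <- RAM-backed, cleared on reboot
      # anything written here lives in RAM, so no stale empty folders pile up on disk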

  6. 5 hours ago, letrain said:

I would think it would use the same amount of RAM that it would for disk space, i.e. an 8GB transcode would need 8GB of RAM. It fully converts, then moves the file to overwrite the original. I tried it out with /tmp (I have 96GB of RAM so I'm not worried at all), and so far so good. And my history was there; it must be kept somewhere else (logs?). The file folders it uses to store the transcoded file before moving it must just be for while it's converting, but they never get deleted by the program, so I assumed it was history of some sort.

I am glad you brought this up. I've used /tmp for Plex transcoding and this is a great option. I have an SSD in my cache pool that complains about errors (CRC errors) if I let Unmanic run for over a day, but is happy any other time. So this means I can leave Unmanic plugging along and not have to worry about overworking that drive. It also means the mover and parity checks won't be slowed down when Unmanic is running.

Thanks again.

     

Awesome! Glad it worked. I got to thinking about it at first, not knowing if it would even work, but also not knowing why it wouldn't. Seems good so far for me too. The empty folders left behind always bugged me...

  7. 9 hours ago, ijuarez said:

I haven't tried it because I'm afraid it would consume all of my RAM. I have a lot, but if you don't configure it correctly out of the box it will eat all of your cores; imagine if you let it loose on all of your RAM.

I de-prioritized core usage in the docker's Extra Parameters, is that what you're talking about?
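
    For reference, this is the sort of thing I mean by Extra Parameters; --cpu-shares and --cpuset-cpus are standard Docker flags, and the values here are just examples:

      # example Extra Parameters for the Unmanic container (values are examples only)
      --cpu-shares=512    # lower scheduler priority relative to other containers
      --cpuset-cpus=4-7   # and/or pin the container to specific cores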

     

Also, /dev/shm is allocated just 50% of your total RAM by default, a handy safeguard. I've got 192GB of RAM in the arsenal, so it shouldn't be a problem for me either way. For folks with less, though, I'd definitely use it over /tmp.
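
    A quick way to confirm that cap on your own box (standard coreutils, nothing Unmanic-specific):

      # tmpfs for /dev/shm defaults to half of physical RAM
      df -h /dev/shm /tmp
      free -h    # compare against total RAM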

  8. 3 hours ago, jonathanm said:

    I map /tmp to /tmp, works great. HOWEVER...

    I only have 2 workers. Each worker keeps /tmp occupied with the conversion file, so if you are converting 25GB movie files and have 5 workers running, you are probably going to have a bad time unless you are swimming in excess RAM.

     

    YMMV, etc.

Good to know! I'm utilizing 3 workers for TV episodes at the moment, nowhere near 25GB file sizes, so I should be in good shape there. Once I get to the movies I may keep a closer eye on it.
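
    A back-of-the-envelope check for anyone sizing workers against RAM, assuming the worst case of every worker holding one full-size source file in the cache at once:

      # worst-case cache use ~= workers x largest source file
      WORKERS=3
      MAX_FILE_GB=25
      echo "worst case: $((WORKERS * MAX_FILE_GB))GB of RAM-backed cache"
      # 3 x 25GB = 75GB is fine with 192GB of RAM, but 5 such workers on a
      # 64GB box would blow straight past the 32GB /dev/shm cap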

  9. 6 hours ago, letrain said:

    How does this work for history? I noticed there are files in the temp folder for every conversion that's been done. Wouldn't these be cleared out of temp on reboot? Does it matter?

     

Edit: I think I'm confused. So /dev/shm is RAM? Are you changing this in the docker itself? Or mapping it to there? I know on Plex, mapping transcode to RAM maps /transcode to /tmp... I thought /tmp was RAM in Unraid?

/dev/shm is allocated only 50% of RAM, whereas /tmp has no limit. I wasn't sure what to expect for RAM consumption trying this out, so I erred on the side of caution.
     

The history part had not occurred to me, though. I pretty much leave the container running 24/7, except for updates, etc. I'm not overly concerned with long history, but the next time I restart the docker I'll report back.

  10. 30 minutes ago, Hoopster said:

    No problem.  Are you hung up on properly generating and testing the SSH keys, or something else?  For me, it was the keys. 

     

I needed to do things exactly opposite to what the OP did, so I had to generate the keys differently. He initiated the backup from the backup/destination server to pull from the source. I needed to initiate from the source (where the script runs) to power on and push to the destination (backup) server. The OP's method also left out a few important steps for me getting it right, and Ken-ji jumped in and helped me sort it all out. That's all documented later in the thread.

That's exactly what it is. I got the remote SSH part working, but following along with the OP's terminal commands wasn't producing the same output for me at all, which had me googling almost every single step trying to figure out the delta between my results and his.

     

But I don't usually skip ahead too much in tutorials; if anything, I tend to get bogged down in the details so as not to miss anything. But if there's clarification down the line, maybe I'll give it one last gander!

11. Hey hoop - I just wanted to say thanks again for your help with all this. I'm going to table this method for the time being - I think given enough time I could get there, but I'm out of my depth here, I'm afraid, without just being spoonfed through the entire process. When things slow down and I can dedicate a little more time to it, I'll look at giving it another shot.

12. Quick update: I'm still working on the initial setup of this - didn't want you to think I fell off after you PM'd the scripts.

     

I'm trying to get this to work across VPN. So here's what I've got: I'm planning to back up "Server 1" to "Server 2". I've also got a Mac (Mac 1) on the same LAN as "Server 1". Mac 1 is VPN'd to "Server 2".

     

My naive hope had been that Server 1 would be able to access Server 2 via Mac 1's VPN, since Mac 1 and Server 1 share a LAN connection. Couldn't SSH in this way, and pinging Server 2 from Server 1 with Mac 1's VPN up returned nada. In hindsight that makes sense: a VPN client only routes its own traffic unless it's configured to forward for other machines on the LAN. So tonight I'm going to work on getting the OpenVPN client installed on Server 1, so I can connect it directly to Server 2 and try again.

     

Unless... I can SSH directly over to Server 2 using my DNS address instead of a local IP, but it looks like this requires port 22 to be open. Is it safe to do that? I'm guessing I'd at least need to change Unraid's root user password? It sure would be nice not to have to rely on a VPN, even though that's not a deal-killer.
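
    For what it's worth, if anyone does end up exposing SSH instead of using a VPN, the usual baseline is key-only auth. A sketch of the relevant sshd_config lines on the destination server (general advice, not Unraid-specific):

      # /etc/ssh/sshd_config (sketch only)
      PermitRootLogin prohibit-password   # root may log in with keys, never a password
      PasswordAuthentication no           # keys only, so password brute-forcing is pointless
      # reload sshd and verify key login from the source server before closing your session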

13. The end result of your first suggestion looks pretty much like what I'm looking for. But the steps to get there may be out of my league.

     

Following along in the first step, the first rsync command-line prompt didn't work for me 😂 Then trying to figure out whether 'rsync' and 'ssh' should be in thisuser's path (using "which ssh" and "which rsync"), whether 'rsync' should be in remoteuser's path, and whether 'sshd' should be running on remotehost completely went over my head.
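
    For context, this is the kind of one-liner the OP's tutorial builds up to, with everything spelled out. The user, host, and paths here are placeholders, not my real setup:

      # pull /mnt/user/data from the remote server into a local backup share
      # -a archive mode, -v verbose, -z compress, -e runs rsync over ssh
      rsync -avz -e ssh root@remotehost:/mnt/user/data/ /mnt/user/backups/data/
      # "which rsync" / "which ssh" just confirm both binaries are on the PATH
      which rsync ssh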

     

It also leads me to think that with any sort of troubleshooting I'd encounter in this configuration, I'd be completely in the dark. IT unfortunately is not my profession, as much as I enjoy it (especially being self-employed). That makes me think I need something a little more user-friendly here.

     

Wish there was, because I hate the monthly fee of a cloud service when I have plenty of my own offsite storage, you know?

  14. Hi guys,

     

Basically what I'd like to do is back up my Unraid office server to my Unraid home server. Not to be used on a regular basis, just as an offsite backup in the event of some kind of "act of god" scenario.

     

Wondering what would be the best way to do this? I currently have VPNs set up between both locations, but would prefer something a little more automated. Seems like FTP may be a good way to go? Although I have zero experience in that realm, so I would need a good guide or someone to help get me started if possible.

     

Also open to any other suggestions; the main objective here is simply to back up digital office files at least once a week, automated and remote.
