Cpt. Chaz

Posts posted by Cpt. Chaz

  1. Hi everyone! Over the past few months I've put together a "how-to" playlist for the different functions of Unmanic. For anyone that hasn't seen it, you can find that playlist here.

     

    More to my point: to wrap up the playlist, I thought it would be neat to visit with @Josh.5 for a few minutes over Zoom, and he kindly obliged. If you'd like to see our short conversation, you can check that out here. Hope you enjoy!

    • Like 1
  2. 24 minutes ago, Masterwishx said:

     

    Thanks for the great videos and your backup script. I'm now using the backup script to an unassigned device and to an SMB share from my main computer, but only mounting the SMB share by source IP works; when I use the source NAME, the share is visible, but it can't be found when mounting... 😞

    Thanks for your kind words, @Masterwishx. If you don't want to use the IP address to mount the remote SMB share, your best bet will be to ask for help troubleshooting the Unassigned Devices plugin in its dedicated support forum here.

    • Like 2
  3. On 4/8/2021 at 2:46 AM, Masterwishx said:

    If you mount GDrive via SMB to some IP, then it needs to be online when running the backup, right? Is there any way to mount a GDrive shared folder to an Unraid share and then back up to GDrive?

    Thanks

    Hi @Masterwishx! That's correct: when mounting GDrive over remote SMB, the host machine running the Google Drive application must be online for Unraid to find and use it. There are some containers in Community Applications that let you access your GDrive, but after looking at one or two of them, I was not impressed with the security workarounds they incorporate. In fact, one container author even offered full disclosure that his private server handles some of the traffic involved. While I appreciate that disclosure, I felt it was too insecure. For now, I still think this is the best/safest method.

    • Like 1
  4. 22 hours ago, matt.shepker said:

    @Josh.5 - I was able to get everything up and running and it processed through around 1000 files without issue.  Now it is sitting with a bunch of stuff in the queue and it won't kick off.  Looking at the log, I see this:
     

    
    [2021-04-06 06:09:10,842 pyinotify WARNING] Event queue overflowed.
    [W 210406 06:09:10 pyinotify:929] Event queue overflowed.

     

    Any ideas on what I should do to resolve this?

    I recently had a similar issue. Josh will be able to confirm, but it looks like the file watcher overloaded. Try disabling it and restarting.
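
    For what it's worth, that pyinotify warning means the kernel's inotify event queue filled up faster than it could be drained. If disabling and restarting doesn't clear it, one general Linux tweak you could try on the host (this is an assumption on my part, not an Unmanic setting) is raising the queue limit:

    # Check the current inotify queue limit (default is usually 16384)
    sysctl fs.inotify.max_queued_events
    # Temporarily raise it for this boot; on Unraid you'd re-apply it from your go file to persist
    sysctl -w fs.inotify.max_queued_events=65536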

  5. On 3/29/2021 at 6:03 PM, Yekul said:

    Having a quick play with this, and the foundations look real solid and almost just what I'm after. I am curious though, any plans to add a manual scan button to the front page, and perhaps a 'start paused' type option? So we could select a single folder from the scan and start it before bed, for example. I prefer to be more in control of what is happening is all, so being able to scan and encode a folder at a time without -everything- being encoded automatically would be great.

     

    @Yekul There is also a way to do this manually. Disable the "scan on start" option in the general settings, save and submit, then restart the container. Then go back to the settings and point the library path at whatever specific folder you want. Go back to the dashboard and, at your discretion, hit the options tab and "rescan library now".

     

    This will keep the entire library path from being scanned automatically, and offer a little more control over which specific folders get scanned and transcoded manually, along with any cron schedules you apply.

  6. On 3/20/2021 at 12:02 AM, ich777 said:

    Currently only my Jellyfin container supports transcoding with AMD hardware as far as I know.

    The description explains how to do that, but yes, '/dev/dri' is the device to add.

    Please also note that I recommend adding it via the button at the bottom of the template, 'Add another Path, Port, Variable, Label or Device', and then selecting 'Device' from the drop-down menu.

    Cool, thanks!

    • Like 1
  7. 14 hours ago, Knightwolf said:

    This is my first post here. I am really enjoying the app. A question I have: has anyone been able to get a watch folder working for an unassigned USB drive? In my case, I had my library already on an external USB 3.0 drive that I connected to Unraid using the Unassigned Devices plugin. I was able to add it under the Unmanic settings as a watch folder, but when I looked at the logs it showed it could not see the folder, and no files were started. It did create a folder on the drive, Movies, but I already had a folder for TV in which my files were. Just wondering if anyone had tested this further than I have. Thank you for any information.

    @Knightwolf Have you tried mapping the folder directly to the container in the template settings? Add a path, something like "/usb --> /mnt/remotes/usbstickpath", and then inside the Unmanic container, change the watch folder path to "/usb/movies" (see the sketch below). That would be the next thing I would try. Also worth noting: if your USB drive is in exFAT format, you'll need to be using Unassigned Devices Plus in addition to the regular UD plugin.
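
    To illustrate the mapping (the /usb name and the UD mount path here are just placeholders for whatever your drive actually mounts as), that template path entry is equivalent to something like this on the docker command line:

    # Host path (Unassigned Devices mount) exposed inside the container as /usb
    -v '/mnt/remotes/usbstickpath':'/usb':'rw'

    With that in place, the Unmanic library/watch folder inside the container would then point at /usb/movies.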

  8. I’d also like to chime in and say something similar: on either the stable or staging branch, I cannot get GPU transcoding working with Intel Quick Sync either. I’ve added /dev/dri to the extra parameters and enabled it in the container settings. I know my GPU transcoding works because I use it for my Plex container (even with Plex off, I cannot get Unmanic to use the GPU).
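
    For anyone comparing notes, adding /dev/dri as an extra parameter typically looks like the standard Docker device flag below (a sketch of my understanding; your template may pass the device differently):

    # Added to the "Extra Parameters" field of the container template
    --device=/dev/dri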

  9. Cool deal, hope all goes well. I'll keep an eye on the user script forum and see what others have to say as well. Feel free to reach back out with anything else. Cheers!

    • Thanks 2
  10. The user scripts logs don't persist after a reboot, so that's OK.

     

    I was about to suggest taking this over to the user script forum, but it looks like you're already there. I think a better post there would be to ask why the bottom lines in the script aren't working on your server. They're basic Linux commands, and the syntax works perfectly for me on my servers, so after our troubleshooting I'm stumped as to why they're not working for you. When you make your post there, you can link to this topic for context for users that may be able to help. You may also want to post your script and logs there directly, too. I'll be following and keeping up with what's going on.
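
    For anyone following along, the lines in question are roughly of this shape (the path variable and notification text here are placeholders, not the exact contents of the script):

    # Prune backups older than the chosen number of days, then pop an Unraid GUI notification
    find $pdest -mtime +1 -exec rm -rfd {} \;
    /usr/local/emhttp/webGui/scripts/notify -s "Plex Backup" -d "Backup complete"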

    • Thanks 1
  11. OK, I'm starting to suspect something else may be going on with your system. Copying and pasting your script line into my terminal (just adjusting paths) on two different servers of mine works the way it should. Try typing this into a terminal window:

     

    find /mnt/user/Backup/Plex/ -mtime +1 -exec rm -rfdv {} \;

     

    If there are no visible errors in the terminal, check the Plex backup folder and see if older files were removed, then report back with your findings. However, you may get output in the terminal showing an error similar to what's in your logs, like find: unknown predicate `-mtime +1' (this is what I'm expecting). The source of this problem may also help explain why your GUI notification is not working at the end, as your script line for that works for me but obviously isn't working for you.

     

    Then:

     

    If you do get an error, let's try the following: install the Nerd Pack plugin from Community Applications if you don't already have it. Once it's installed, open it up and make sure you have the following items "turned on":

    • ncurses-terminfo-6.1.20191130-x86_64-1.txz
    • perl-5.32.0-x86_64-1.txz

     

    Once those are turned on, reboot the server, try running the command above again, and see if anything changes. If you still get an error after enabling them, post the terminal output of the error back here. At that point we'll need to call in some bigger guns to help us figure out what's going on with your server.

  12. Actually, I think your problem may be an error in the syntax of the script. I checked my logs and they were reporting a similar error. In addition to removing the quotes you added around mtime, try taking out the asterisk so it looks like this:

     

    #ENTER NUMERIC VALUE OF DAYS AFTER "-MTIME +"
    find $pdest -mtime +1 -exec rm -rfd {} \;

     

    This seemed to fix it for me. If that also fixes it for you, I'll publish a correction. Thanks for your input on this!
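
    For anyone hitting the same thing later: the "unknown predicate" error shows up because the quotes make the shell hand find the single argument "-mtime +1", which find doesn't recognize as its -mtime option. Side by side (using the same $pdest variable as above):

    # Broken: find receives "-mtime +1" as one argument and errors out
    find $pdest "-mtime +1" -exec rm -rfd {} \;
    # Working: -mtime and +1 are passed as separate arguments
    find $pdest -mtime +1 -exec rm -rfd {} \;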

  13. Hi again @coblck, happy to help. It looks like your script has quotes around the mtime entry that shouldn't be there. Here is how it should look instead:

     

    #ENTER NUMERIC VALUE OF DAYS AFTER "-MTIME +"
    find $pdest* -mtime +1 -exec rm -rfd {} \;

     

    Try that and see how it does. If it still doesn't remove the old backups, post a copy of the logs and let's take a look. On the User Scripts page, look towards the right and you should see an icon on your script's line that you can click to view the log for that script:

     

    [Screenshot (2021-02-24): User Scripts page, showing the log icon for a script]

     

    Keep me posted

  14. Hi @coblck! Thanks for your kind words. As for your issue, now that I can see a little more, my first idea for troubleshooting would be to see if you can mount one of the other folders listed on that same machine, e.g. Downloads. If you can map and mount it, that at least narrows the problem down specifically to the Google Drive folder.
     

    If you cannot mount one of the other folders, that would tell us it's a broader networking issue. Hopefully this will make the answer clearer, and if we still hit a wall we can link over to the UAD support thread with your logs and let dlandon take a look. Keep us posted!