Cpt. Chaz


Everything posted by Cpt. Chaz

  1. Nothing wrong with using the IP address! Glad you got it working and are enjoying the videos. Cheers!
  2. Hi @coblck! Thanks for your kind words. As for your issue, now that I can see a little more, my first idea for troubleshooting would be to see if you can mount one of the other folders listed on that same machine, i.e. Downloads. If you can map it and mount it, that at least narrows the problem down specifically to the google drive folder. If you cannot mount one of the other folders, that would tell us it's a broader networking issue. Hopefully this will make the answer clearer, and if we still hit a wall we can link over to the UAD support thread with your logs and let dlandon take a look. Keep us posted!
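If it helps, here's a rough way to run the same test from the unraid terminal instead of the UAD GUI (the address, share name, and credentials below are placeholders for illustration, not your actual setup):

# Placeholder host/share/credentials - substitute your own.
# Try mounting a known-good share (e.g. Downloads) from the same machine:
mkdir -p /mnt/disks/test
mount -t cifs //192.168.1.50/Downloads /mnt/disks/test -o username=youruser,password=yourpass

# If this mounts and lists fine but the google drive share doesn't,
# the problem is specific to that share rather than the network:
ls /mnt/disks/test
umount /mnt/disks/test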
  3. I thought that too. It only goes away if I enter the exact url of the hosting page, including the port number. Here's a picture of the page at http://youtransfer.myserver.com. It tells me the base url should be http://youtransfer.myserver.com:443. When I add the port number to the base url in full, the message goes away. So I guess in practice this would be the fix, but it breaks from the conventional use of any base url in my experience. Now I'm wondering if I'm the only one experiencing this.
  4. Very cool app, thanks for adding it to CA @FlippinTurt. Wondering if you or anyone else is running this behind a reverse proxy (swag)? While I can access it outside my network just fine, I get an error on the main UI about the base url. I've tried leaving the base url blank (which is what I usually do for reverse proxy) and it does the same thing. Anybody else dealing with this?
  5. I’d have to agree with @Energen. The spaces in the directory path were the first thing that caught my eye. I think it’s time for a revision with a mkdir section, too.
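To illustrate the two suggestions (the path below is just an example), quoting the variable keeps a path with spaces intact, and mkdir -p makes the revision safe on a first run:

# Example path containing spaces - the quotes keep it from being
# split into multiple arguments when the variable is expanded:
backup_dir="/mnt/user/Backups/flash backups"

# -p creates the directory (and any missing parents) only if it
# doesn't already exist, so repeat runs won't error:
mkdir -p "$backup_dir"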
  6. The native Unraid script creates a symlink in the /usr/local/emhttp/ directory each time it's run. For the purposes of this script, the symlink is not needed and is therefore removed.
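That cleanup is the single find in Section 2 of the script, scoped with -maxdepth 1 so it only deletes the leftover backup symlink at the top of that directory and nothing deeper:

find /usr/local/emhttp/ -maxdepth 1 -name '*flash-backup-*.zip' -delete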
  7. If you have an external desktop computer on your server's network, you can use the google drive desktop app on it, and then mount that folder as a remote smb share. Much less work than many of the cloud containers I've seen (though I don't know about the rclone plugin), and you don't have to run it through someone's private server to use it! Also, CA Backup is great, but it doesn't back up a zipped flash version or do time stamping. I wrote a script that might be helpful for your use case here.
  8. I wasn't able to tell what's causing the problem, but I'm definitely seeing the error in your syslog. Plex is often the culprit. A couple of ways to handle it: next time it happens, type htop in the terminal, take a look at the memory usage of the running processes, and see if you can identify the culprit. Another way would be to disable all the containers, then re-enable them one at a time and see which one is using up all your mem. Once you find the container causing problems, go into the container settings page, and in the extra parameters you can manually limit the amount of memory a container can use by entering something similar to the following: --memory=8G More info on this can be found here. You may have to play around with the amount of ram you allocate. If plex is the problem, limiting ram may cause problems on a live stream transcode and commercial removal (hopefully not, but in the best case it will most likely increase the amount of time it takes to process these files). Hopefully the problem is handbrake, in which case limiting its memory usage will also increase process times, but that's better than the alternative. It also looks like you have two empty DIMM slots; filling those will almost certainly help your problem 😉
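If you'd rather not disable containers one by one, here's a rough sketch of the same hunt from the terminal (the container name is just an example). Note that on unraid, a docker update won't survive the container being recreated, so the extra parameters field is still the permanent fix:

# One-shot snapshot of per-container memory usage:
docker stats --no-stream --format "{{.Name}}: {{.MemUsage}}"

# Temporarily cap a hypothetical "plex" container at 8 GiB - the same
# effect as the --memory=8G extra parameter above:
docker update --memory=8g --memory-swap=8g plex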
  9. Excellent, keep us posted! And if you have any questions about deluge, let me know. I use the binhex vpn container
  10. Wish I could answer that, but I honestly don't know. Try reaching out to Art of Server and see what he says.
  11. Sorry for the delay @thefozzybear, somehow I missed the notification response on here. Anyway, I think I see the problem, based on comparisons with my mappings. I'm using deluge, but the principle should be the same. Here are my screenshots for you to compare with: I think your remote path should be /mnt/disks/Sonarr/, and your local path should just be /downloads/ instead of /downloads/complete/. Try that and see if it makes any difference.
  12. If your budget allows it, I'd say nip this in the bud now since you're starting out, and get an H310 controller. I looked at the specs page here for the R520 and it looks like it can take the H310, although I'm not sure if that includes the mini H310 as well. This isn't a plug, but there's a guy on ebay called Art of Server (also on youtube) for whom reflashing and selling these controllers is a specialty. I bought an H310 mini flashed to IT mode from him last year. Out of the box it's ready for unraid - a straight plug-and-play drop-in. Never had an issue since. Granted, I'm running an R720, but he's very friendly; I reached out to him to confirm my specs before I bought and he was happy to respond. I bet you could do the same to confirm for your R520.
  13. Can you show us the actual container mappings with a screenshot of the docker container settings for sonarr and qbt? Are sonarr and qbt on the same host? Do you have remote path mappings set up under Sonarr > Download Clients > Remote Path Mappings? If so, screenshot those too.
  14. Sure thing. Also worth noting: unmanic is still under active development. It's also still in beta, so there are things being ironed out, but Josh.5 has pushed a lot of new updates recently.
  15. Not exactly sure what you're trying to accomplish - video conversion or compression? There are other, arguably better containers that can help with that. Handbrake and tdarr offer the most customizable options, but my favorite and preference is Unmanic.
  16. Looks like you have some incorrect settings in your container. They repopulated when you reinstalled because the container template persists even if you remove the appdata folder (templates can be removed by going to the docker tab -> add container -> select the template to delete -> press the red X to delete). I think if you just fix/change your ports it should solve your issue. Here are pictures of my current working and proxied setup for anyone who might find them helpful. The WebUI port can be changed by clicking "advanced view" in the top right of the container settings page.
  17. I use it your way for internal network access, but the other method is needed for reverse proxy, which provides external network access for other users to make requests. Edit: thanks for this container btw, very cool!
  18. Has anyone gotten the proxy conf working for this? I can't seem to get it working per this example from the github page. I've added the cname into cloudflare and I'm running SWAG from lsio. My reverse proxy is working fine for all my other proxy confs, so I don't think this is a SWAG issue. Edit: duh, forgot to put the container network type on the proxynet! 🤦‍♂️
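In case anyone else trips on the same thing, here's a quick hedged check from the terminal (the container and network names are examples); the permanent fix is still changing the Network Type in the container template so it persists across updates:

# List the networks a hypothetical "youtransfer" container is attached to:
docker inspect --format '{{range $k, $v := .NetworkSettings.Networks}}{{$k}} {{end}}' youtransfer

# Attach it to the custom proxy network (SWAG must be on the same one):
docker network connect proxynet youtransfer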
  19. Thanks! Using unassigned devices, I mounted a folder from my google drive, and now have daily syncs of my flash backups stored on google drive. Super handy.
  20. In my previous post found here, I outlined a multi-step process with 2 different scripts and 2 cron jobs for automating the unraid flash zip backup process using the User Scripts plugin. Thanks to @nitewolfgtr and @sreknob, I've refined it down to one simple "fill in the blanks" script, and I've also added an optional UI notification. This script backs up a zipped copy of your unraid flash disk to a location of your choice, removes old backups after a specified number of days, and sends the optional notification upon completion. Due to the changes, I've marked the other OP outdated and now refer to this script.

While I have tested this (version 6.8.3 only) and currently run it on all 3 of my unraid servers with reasonable confidence, I stress for anyone who decides to try it: MANUALLY BACK UP YOUR FLASH DRIVE FIRST. If you are at all nervous or unfamiliar, please don't try this - I don't assume any liability here for data loss. Use at your own risk; this is still a work in progress.

Here's the script, with the explanation underneath, currently for unraid versions 6.8.3 AND 6.9.0-rc2 (for version 6.9b30, see below):

#!/bin/bash

#### SECTION 1 ####------------------------------------------------------------------------------------------------------
# dir = WHATEVER FOLDER PATH YOU WANT TO SAVE TO
dir="/insert/your/path/here"

echo 'Executing native unraid backup script'
/usr/local/emhttp/webGui/scripts/flash_backup

#### SECTION 2 ####------------------------------------------------------------------------------------------------------
echo 'Remove symlink from emhttp'
find /usr/local/emhttp/ -maxdepth 1 -name '*flash-backup-*.zip' -delete
sleep 5

#### SECTION 3 ####------------------------------------------------------------------------------------------------------
if [ ! -d "$dir" ] ; then
    echo "Making directory as it does not yet exist"
    mkdir -vp "$dir"
else
    echo "As $dir exists, continuing."
fi

#### SECTION 4 ####------------------------------------------------------------------------------------------------------
echo 'Move Flash Zip Backup from Root to Backup Destination'
mv /*-flash-backup-*.zip "$dir"
sleep 5

#### SECTION 5 ####------------------------------------------------------------------------------------------------------
echo 'Deleting Old Backups'
# ENTER NUMERIC VALUE OF DAYS AFTER "-mtime +"
# (matching only the backup zips, so the backup folder itself is never deleted)
find "$dir" -name '*-flash-backup-*.zip' -mtime +10 -exec rm -rfv {} \;
echo 'All Done'

#### SECTION 6 ####------------------------------------------------------------------------------------------------------
# UNCOMMENT THE NEXT LINE TO ENABLE GUI NOTIFICATION UPON COMPLETION
#/usr/local/emhttp/webGui/scripts/notify -e "Unraid Server Notice" -s "Flash Zip Backup" -d "A copy of the YOURTOWERNAME unraid flash disk has been backed up" -i "normal"

exit

Here's the slightly modified script if you're running unraid version 6.9b30.

Copy and paste the appropriate version above into a new blank script in the User Scripts plugin, then edit it as follows. I broke the script down into sections for an easier explanation:

Section 1 - enter the path where you want to save your zipped flash copy (local or remote).
Section 5 - enter the number of days you want to keep old flash backups; the default is 10 days.
Section 6 - optional; to enable notifications, uncomment the bottom line and replace "YOURTOWERNAME" with your actual tower name (this affects the notification text only).

And that's it, all there is to it. Just copy/paste the script into the User Scripts plugin, set how often you want it to run, and you're done. If you have any questions or suggestions, let me know. Enjoy.

EDIT 1: 12/21/20 - Merged the two previous scripts into one script for the aforementioned versions
EDIT 2: 12/21/20 - Added quotes around $dir in Section 4; without them, directories with spaces could get lost
EDIT 3: 2/5/21 - Step-by-step instructions can be found on my youtube channel here
EDIT 4: 2/8/21 - Inserted a new Section 3 with a mkdir command to prevent errors if the directory does not yet exist (no user interaction required) and bumped subsequent sections down; also added quotes to the primary directory path to help prevent syntax errors
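If you want to sanity-check a run before putting it on a schedule, here's a minimal hedged example (the backup path is a placeholder; substitute whatever you set for dir in Section 1, and note it assumes unzip is available, as it is on stock unraid). Run the script once from User Scripts, then:

# Placeholder path - use the same value you set for dir in Section 1:
dir="/mnt/user/backups/flash"

# Find the most recent flash backup zip:
latest=$(ls -t "$dir"/*-flash-backup-*.zip | head -n 1)

# unzip -t walks the archive and verifies every member is intact:
unzip -t "$latest" | tail -n 3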