DZMM

Members
  • Posts: 2,801
  • Joined
  • Last visited
  • Days Won: 9
Everything posted by DZMM

  1. Torrents with unionfs:
     • torrent gets downloaded
     • torrent gets copied to the unionfs folder - a disk write (time + wear) plus 2x the torrent's space taken up
     • copied torrent gets uploaded while the original is seeding
     • delete the seed whenever

     Torrents with mergerfs:
     • torrent gets downloaded
     • hardlink created in the mergerfs folder - no disk write, no noise, no time to copy, no 2x torrent space taken up
     • hardlinked torrent gets uploaded while the original is seeding
     • delete the seed whenever
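     To make the copy-vs-hardlink difference concrete, a quick shell sketch - the paths and file name are illustrative only:

     # unionfs flow: a full disk write, and the file now exists twice
     cp /mnt/user/downloads/movie.mkv /mnt/user/mount_unionfs/movies/movie.mkv

     # mergerfs flow: instant and no extra space - both names point at the
     # same inode, so the seeding copy and the library copy share one file
     ln /mnt/user/mount_mergerfs/downloads/movie.mkv /mnt/user/mount_mergerfs/movies/movie.mkv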
  2. My unRAID server's local address is 172.30.12.2. Is it possible to add a 2nd IP address for my server? I want to do this so that I can route certain traffic (rclone) over the 2nd address for better monitoring/shaping. At the moment it's all mixed up in 172.30.12.2, and if I could assign another IP, e.g. 172.30.12.3 or 192.168.30.91, to unRAID then I think I could solve my problem. My setup at the moment:
     - pfSense VM with dual NIC assigned, connected to my ISP modem and switch (various VLANs and LAN on 172.30.12.x)
     - switch connected to the unRAID LAN port and other devices
     Thanks in advance for any help.
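     For reference, a minimal sketch of testing a second address from the unRAID console - br0 as the interface name is an assumption, and this isn't persistent across reboots:

     # add a second IPv4 address to the main bridge (illustrative)
     ip addr add 172.30.12.3/24 dev br0

     # verify it took
     ip addr show br0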
  3. Glad you got it working, but looking again at my post I'm not sure if doing:

     /media/downloads <-> /mnt/user/mount_unionfs/google_vfs/downloads/
     /media <-> /mnt/user/mount_unionfs/google_vfs/

     will work. I'm no expert, and to make sure I don't mess up when moving stuff around in dockers, I just use these mappings for all of them:

     /user <-> /mnt/user/
     /disks <-> /mnt/disks/ (RW Slave)

     That way all dockers are consistent and I don't have to remember mappings.
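     As an illustration, what those two consistent mappings look like on a docker run line - the container name and image are placeholders:

     docker run -d --name='example' \
       -v '/mnt/user':'/user':'rw' \
       -v '/mnt/disks':'/disks':'rw,slave' \
       some/image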
  4. I'm glad it's working perfectly for you. I've been doing this for about 20 months now and I've only had one issue, when Google had problems with rclone user_IDs for about 2 days. I moved home recently and lost my 1Gbps connection, but even on my 360/180 line with lots of users I've had no buffering, even with a lot of other traffic occurring at the same time.
  5. Using both /data and /media is your problem - your dockers think these are two separate disks, so you lose the hardlinking and move-instead-of-copy benefits. Within your containers, point nzbget etc. to /media/downloads.
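     The underlying rule: hardlinks (and instant moves) only work within a single mount point. A quick illustration, with hypothetical file names:

     # works: both paths are under the same /media mapping, same filesystem underneath
     ln /media/downloads/movie.mkv /media/movies/movie.mkv

     # fails with "Invalid cross-device link": /data and /media look like
     # two different disks to the container
     ln /data/movie.mkv /media/movies/movie.mkv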
  6. Last post - I just noticed in my stats that the most-watched movie is 'Concurrent Streams'. I've checked in Tautulli and it's correctly reporting The Force Awakens, so I don't think the problem is there.
  7. Can somebody share their reverse proxy config please, as I can't crack it? I'm using the letsencrypt docker. Thanks
  8. @ninthwalker got it all working now - my fault for not reading the info boxes properly. It's working great - thanks for building it. I'm sure it's been asked before - in future versions, would it be possible to link from the media items to the Plex server page for fast playback, i.e. next to the IMDB link and trailer link?
  9. I'm trying to use this for the first time, but I'm getting no data via email (no email sent) or added to the web, pulled from either Tautulli or Plex. Announcement emails arrive OK, so my SMTP settings are fine. What am I doing wrong? Thanks in advance.

     docker run -d --name='NowShowingv2' --net='br0.55' --ip='192.168.50.15' \
       --cpuset-cpus='1,8,9,17,24,25' --log-opt max-size='50m' --log-opt max-file='3' \
       -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_6878'='6878' \
       -e 'PUID'='99' -e 'PGID'='100' \
       -v '/mnt/cache/appdata/dockers/NowShowingv2':'/config':'rw' \
       'ninthwalker/nowshowing:v2'

     I do see this error, but I'm not sure what it means:

     /usr/lib/ruby/2.3.0/uri/rfc3986_parser.rb:67:in `split': bad URI(is not URI?): http://192.168.30.90:32400:32400/library/sections (URI::InvalidURIError)
         from /usr/lib/ruby/2.3.0/uri/rfc3986_parser.rb:73:in `parse'
         from /usr/lib/ruby/2.3.0/uri/common.rb:227:in `parse'
         from /usr/lib/ruby/gems/2.3.0/gems/httparty-0.13.1/lib/httparty/request.rb:58:in `uri'
         from /usr/lib/ruby/gems/2.3.0/gems/httparty-0.13.1/lib/httparty/request.rb:149:in `setup_raw_request'
         from /usr/lib/ruby/gems/2.3.0/gems/httparty-0.13.1/lib/httparty/request.rb:90:in `perform'
         from /usr/lib/ruby/gems/2.3.0/gems/httparty-0.13.1/lib/httparty.rb:521:in `perform_request'
         from /usr/lib/ruby/gems/2.3.0/gems/httparty-0.13.1/lib/httparty.rb:457:in `get'
         from /var/lib/nowshowing/plex.rb:35:in `get'
         from /usr/local/sbin/combinedreport:110:in `getMovies'
         from /usr/local/sbin/combinedreport:396:in `main'
         from /usr/local/sbin/combinedreport:460:in `<main>'

     My config:

     ---
     email:
       title: "New This Week"
       image: "http://i.imgur.com/LNTSbFl.png"
       footer: "Thanks for watching!"
       language: "en"
     web:
       title_image: "img/nowshowing.png"
       logo: "img/logo.png"
       headline_title: 'Just added:'
       headliners: "Laughs,Screams,Thrills,Entertainment"
       footer: "Thanks for watching!"
       language: "en"
     plex:
       plex_user_emails: "no"
       libraries_to_skip:
         - "Photos"
         - '# Movies - IMDB Top 250'
         - '# Movies - Kids Recommended'
         - '# Movies -Recommended'
         - '# Movies - Trending'
         - '# TV - Trending'
         - '## Movies - 4K'
         - '## TV - 4K'
       server: "192.168.30.90:32400"
     mail:
       from: "Highlander Plex"
       subject: "Now Showing"
       recipients_email:
         - "xxxxxxxxxxxxxxxxx"
       recipients:
         - ""
       provider: "gmail"
       address: "smtp.gmail.com"
       port: "587"
       username: "xxxxxxxxxxxxxx"
       password: "xxxxxxxxxxxxxxxxxxxxxx"
     report:
       interval: "7"
       report_type: "both"
       email_report_time: '30 10 * * 5'
       web_report_time: '30 23 * * *'
       extra_details: "yes"
       test: "disable"
     tautulli:
       server: "192.168.50.91"
       port: "8181"
       https: "no"
       httproot: "tautulli"
       api_key: "xxxxxxxxxxxxxxxxxxxxxxxxxxx"
       title: 'Statistics:'
       stats: "LBmvadDutTcAS"
     token:
       api_key: "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
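     One observation on that trace: the bad URI shows the port doubled (32400:32400) while the config already has server: "192.168.30.90:32400", which suggests - an assumption based on this trace alone - that combinedreport appends :32400 itself. If so, dropping the port from the config might be worth a try:

     plex:
       server: "192.168.30.90"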
  10. I currently run two scripts that I'd like to combine into one. They check how full my cache disk is (% used):
      - if it's under a certain amount, they move files that I don't need on the cache to the array, so that files I do want on the cache can stay there longer
      - if it's over a certain amount, they start the mover (I've disabled the scheduled mover)

      Here's the first script:

      #!/usr/bin/php
      <?PHP
      $min = 70;
      $max = 90;
      $moveAt = 90;

      $diskTotal = disk_total_space("/mnt/cache");
      $diskFree = disk_free_space("/mnt/cache");
      $percentUsed = ($diskTotal - $diskFree) / $diskTotal * 100;

      if ( $percentUsed > $moveAt ) {
          echo 'Starting mover';
          echo "<br>";
          exec("/usr/local/sbin/mover");
      } else {
          echo 'Not running mover';
          echo "<br>";
      }

      if (($min <= $percentUsed) and ($percentUsed <= $max)) {
          echo 'starting diskmv_all';
          echo "<br>";
          echo "<br>";
          exec("bash /boot/config/plugins/user.scripts/scripts/diskmv_all/script");
      } else {
          echo 'Not running diskmv_all';
          echo "<br>";
      }
      ?>

      and the second, which runs the diskmv script:

      #!/bin/bash

      ####### Check if script already running ##########
      echo "$(date "+%d.%m.%Y %T") INFO: Creating Tracker file."
      if [[ -f "/mnt/user/appdata/other/rclone/diskmv_running" ]]; then
          echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
          exit
      else
          touch /mnt/user/appdata/other/rclone/diskmv_running
      fi

      if [ -f /var/run/mover.pid ]; then
          if ps h `cat /var/run/mover.pid` | grep mover ; then
              echo "$(date "+%d.%m.%Y %T") INFO: mover already running. Removing Tracker file."
              rm /mnt/user/appdata/other/rclone/diskmv_running
              exit 0
          fi
      fi

      # /usr/local/sbin/mdcmd set md_write_method 1
      # echo "Turbo write mode now enabled"

      echo "$(date "+%d.%m.%Y %T") INFO: moving backup."
      bash /boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/backup" cache disk2

      echo "$(date "+%d.%m.%Y %T") INFO: moving local."
      bash /boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/local/tdrive_vfs/downloads/complete" cache disk1

      echo "$(date "+%d.%m.%Y %T") INFO: moving media."
      bash /boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/media/music" cache disk2
      bash /boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/media/other_media/books" cache disk2
      bash /boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/media/other_media/photos" cache disk2
      bash /boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/media/other_media/videos" cache disk2

      # /usr/local/sbin/mdcmd set md_write_method 0
      # echo "Turbo write mode now disabled"

      echo "$(date "+%d.%m.%Y %T") INFO: Removing Tracker file."
      rm /mnt/user/appdata/other/rclone/diskmv_running
      exit

      Is it possible to combine them? Thanks in advance.
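      For reference, a minimal sketch of how the two could be merged into a single bash script - assuming df's output for /mnt/cache and the same thresholds, lock file and paths as above; untested:

      #!/bin/bash
      # Combined cache-check sketch: run mover above 90% used,
      # run the diskmv moves when usage is between 70% and 90%.
      min=70
      max=90
      move_at=90
      lock="/mnt/user/appdata/other/rclone/diskmv_running"

      # percentage of /mnt/cache currently used, from df (e.g. "85")
      used=$(df --output=pcent /mnt/cache | tail -1 | tr -dc '0-9')

      if [ "$used" -gt "$move_at" ]; then
          echo "Starting mover"
          /usr/local/sbin/mover
      elif [ "$used" -ge "$min" ] && [ "$used" -le "$max" ]; then
          # skip if a previous run or the mover is still going
          if [ -f "$lock" ]; then
              echo "Exiting as script already running."
              exit 0
          fi
          if [ -f /var/run/mover.pid ] && ps h "$(cat /var/run/mover.pid)" | grep -q mover; then
              echo "mover already running"
              exit 0
          fi
          touch "$lock"
          echo "Starting diskmv moves"
          bash /boot/config/plugins/user.scripts/scripts/diskmv/script -f -v "/mnt/user/backup" cache disk2
          # ...remaining diskmv calls as in the second script above...
          rm "$lock"
      else
          echo "Nothing to do"
      fi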
  11. @testdasi and @yendi I understand. But it's worth it - this isn't a small improvement, it's a major one with worthwhile performance gains.
      (I) Simplest migration: just swap your unionfs mount command for the new mergerfs one. This alone is worth doing - you can test it works first by adding the mergerfs command to a new blank script and mounting in a new location.
      (II) Best migration: do (I) + move all your mappings to within /user/mount_mergerfs, i.e. /user/mount_mergerfs/downloads, and point your download dockers at this location to get the hardlinking, file transfer benefits etc.
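      For (I), a sketch of what the swap looks like - the paths match my setup and the option set follows the mergerfs docs, but treat it as a starting point rather than the exact command:

      # old unionfs mount (typical form)
      # unionfs -o cow,allow_other /mnt/user/local/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

      # mergerfs replacement (sketch): local branch first so new files land locally
      mergerfs /mnt/user/local/google_vfs:/mnt/user/mount_rclone/google_vfs /mnt/user/mount_mergerfs/google_vfs \
        -o rw,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true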
  12. In this script it checks whether a parity check is running, and I'd like to do a similar check in a different script for whether the mover is running. Can anyone help please, i.e. what's the mover equivalent of:

      $vars = parse_ini_file("/var/local/emhttp/var.ini");
      if ( $vars['mdResync'] ) {

      Thanks in advance.

      Update: found the answer: https://gist.github.com/fabioyamate/4087999

      if [ -f /var/run/mover.pid ]; then
          if ps h `cat /var/run/mover.pid` | grep mover ; then
              echo "mover already running"
              exit 0
          fi
      fi
  13. You can ignore that and just ensure all your dockers, including nzbget etc., use mappings within the mergerfs folder and not the local folders, to fully benefit from mergerfs's better file transfer capabilities. I decided to ditch rclone_upload to make it harder for users to mess up and to clean the flows up. E.g. my flow before was:

      /mnt/user/downloads/movie --> /mnt/user/import/movie --> /mnt/user/mount_unionfs/movies/movie (via /mnt/user/rclone_upload/movies/movie)

      i.e. 3 different shares. Now I have:

      /mnt/user/mount_mergerfs/downloads/movie --> /mnt/user/mount_mergerfs/complete/movie --> /mnt/user/mount_mergerfs/movies/movie

      where any local files are in /mnt/user/local, and I exclude /mnt/user/local/downloads, /mnt/user/local/complete and /mnt/user/local/seeds in the upload script. This makes my life a hell of a lot easier, and by including my 'pre-Plex' files in the mount I get the full transfer benefits of mergerfs. I also think it will be easier for anyone new to the scripts.
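      A sketch of those exclusions in the upload command - the remote name tdrive_vfs: and the exact source path are assumptions about my layout; the --exclude patterns are relative to the source folder:

      rclone move /mnt/user/local/tdrive_vfs/ tdrive_vfs: \
        --exclude downloads/** \
        --exclude complete/** \
        --exclude seeds/** \
        --delete-empty-src-dirs --min-age 30m -vv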
  14. --dir-cache-time can be as large as you want - uploads flush the cache. No real reason, I just decided to put a larger number in for the (rare) days when no new content is added.
      --fast-list - yep, that shouldn't be there. I forgot to delete it when I removed the rc command.
  15. Everything is self-contained in the script - no need to touch CA, nerd tools etc. except to install the rclone plugin. Re the mergerfs docker: I'm not an expert, but it builds mergerfs directly from the author's repo, so the script will only need changing if he updates his build options, which I think is unlikely: https://github.com/trapexit/mergerfs#build-options
  16. unRAID is based on Slackware, and mergerfs doesn't support Slackware. Luckily the author has built a docker version we can use.
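      For reference, a sketch of pulling a static mergerfs binary out of that docker build - the image name follows the author's repo, but treat the exact invocation and paths as assumptions:

      mkdir -p /mnt/user/appdata/other/mergerfs
      # run the author's static-build image, dropping the binary into /build (assumed image name)
      docker run -v /mnt/user/appdata/other/mergerfs:/build --rm trapexit/mergerfs-static-build
      # move the resulting binary somewhere on the PATH
      mv /mnt/user/appdata/other/mergerfs/mergerfs /bin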
  17. The Shield is the best client for Plex - you won't find better for the price, and probably not even if you're OK spending a bit more. Everything works flawlessly 99.999% of the time for me, and Plex+Shield is a great combo. Re Plex vs Kodi: the only reason I sometimes use Kodi is that its TVH/DVB-T/PVR functionality/support is better, but Plex is catching up fast and is good enough to be my main PVR now, as I hate switching apps.
  18. Easy. Either use rclone config to change the name of your remote to gdrive_media_vfs:, or change line 45 of the mount script:

      rclone mount --allow-other --buffer-size 256M --dir-cache-time 720h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-cache-mode writes gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

      to:

      rclone mount --allow-other --buffer-size 256M --dir-cache-time 720h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-cache-mode writes NAME_OF_UNENCRYPTED_REMOTE: /mnt/user/mount_rclone/google_vfs &

      and line 43 of the upload script:

      rclone move /mnt/user/local/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude downloads/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --bwlimit 9500k --tpslimit 3 --min-age 30m

      to:

      rclone move /mnt/user/local/google_vfs/ YOUR_REMOTE: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude downloads/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --bwlimit 9500k --tpslimit 3 --min-age 30m
  19. Thanks - updated. As I said earlier, my scripts are a bit different, so it's hard for me to test the github version. You're probably right about the docker changes. I set up my mergerfs in PuTTY and haven't rebooted yet, and I cobbled the command together from the one I use to edit dockers, so the interactive bit definitely isn't needed in a script and is probably a bad idea.
  20. Thanks. The previews (Sonarr and Radarr) have been stable for the last couple of months for me. Radarr V3 in particular has been a godsend, as my library/indexers/something was too big for it before - sometimes it wouldn't open for days and, if it did, it was almost impossible to use.
  21. I've been running Sonarr v3 using lsiodev/sonarr-preview for quite a while, but I ran into some problems over the last couple of days, which I finally fixed by switching to linuxserver/sonarr:preview as mentioned on the Sonarr page: https://sonarr.tv/#downloads-v3-docker
      Looking at the versions, lsiodev/sonarr-preview was at 3.0.0.348 and linuxserver/sonarr:preview was at 3.0.3.679, so the former is probably quite old.
      1. Should I use linuxserver/sonarr:preview from now on?
      2. Was it safe to just switch the repository in docker? It worked, but I kept a backup just in case.
      Thanks in advance.
  22. Thanks for confirming the migration works. My scripts are quite different in places, so I wasn't sure. I've added:

      mkdir -p /mnt/user/appdata/other/mergerfs/mergerfs

      to my post and github to see if that helps. Glad you got there.
  23. ooohhh, never seen that one before! I really need to read up on the advanced config options one day. https://rclone.org/drive/#drive-server-side-across-configs
      I wish this had been available last year when I was trying to do some major moving. Added. Thanks
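      A sketch of the flag in use - the remote names gdrive: and tdrive: are placeholders:

      # server-side move between two Google Drive remotes: no download/re-upload,
      # Google shuffles the files internally
      rclone move gdrive:media tdrive:media --drive-server-side-across-configs -vv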