Everything posted by DZMM

  1. My current 360/180 ISP sent me an email saying my upload had been running at 100% for a few months and they wondered if I'd been hacked. I thanked them for their concern and said I was ok and knew what the traffic was. I use bwlimits, so my upload now runs at about 80Mbps average over the course of a day as my big upload days are over. My previous 1000/1000 ISP didn't say anything despite my upload running at about 60-70% for over a year. I keep a copy of the most recent VM and appdata backups locally. If there was an accidental deletion I'd probably just write off all the content, as 1) I can't build a 0.5PB array and 2) it'd probably be easier to replace the content I want than spend weeks/months downloading it from gdrive. I did look into backing up my tdrives on another user's setup (he currently backs his up to mine), but I stopped, as the actual process of downloading off his tdrive would face the same problems.
  2. It's the logical next step. I've ditched my parity drive (I back up to gdrive using duplicati) and sold all but 2 of my HDDs, which store seeds, pending uploads and my work/personal documents. I don't really use the mass storage functionality anymore other than pooling the 2 HDDs - it's kinda impossible, and would be mega expensive, to store 0.5PB+ of content locally... My unRAID server's main purpose is to power VMs (3 x W10 VMs for me and the kids + a pfSense VM) and dockers (Plex server with remote storage, Home Assistant, UniFi, Minecraft server, Nextcloud, Radarr etc).
  3. If you want some partitioning, you could do:

     /mergerfs --> /mnt/user/gdrive_mergerfs

     and then within your dockers use the following paths:

     /mergerfs/downloads for /mnt/user/gdrive_mergerfs/downloads/
     /mergerfs/media/tv for /mnt/user/gdrive_mergerfs/tv/

     The trick is that your torrent, radarr, sonarr etc dockers have to be moving files around within the mergerfs mount, i.e. within /mergerfs. If you map:

     /mergerfs --> /mnt/user/gdrive_mergerfs
     /downloads --> /mnt/user/gdrive_mergerfs/downloads
     /downloads_local (adding for another example) --> /mnt/user/gdrive_local/downloads

     then when you ask the docker to hardlink a file from /downloads or /downloads_local to /mergerfs it won't work. It has to be from /mergerfs/downloads to /mergerfs/media/tv - within /mergerfs. To be clear, when I say I do /user --> /mnt/user it's because it just makes my life easier when I'm setting up all my dockers to talk to each other (I'm lazy) - within my media dockers I still only use paths within /user/mount_mergerfs, e.g. /user/mount_mergerfs/downloads and /user/mount_mergerfs/tv_shows.
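     To see why the single mapping matters, here's a minimal sketch (the temp paths are made up for illustration) of the underlying rule: a hardlink is just a second directory entry for the same inode, so source and target must sit on the same mount - which is what one /mergerfs mapping guarantees inside the docker.

     ```shell
     # Hypothetical layout standing in for a single /mergerfs mapping
     tmp=$(mktemp -d)
     mkdir -p "$tmp/mergerfs/downloads" "$tmp/mergerfs/media/tv"
     echo "episode" > "$tmp/mergerfs/downloads/file.mkv"

     # Same mount, so the hardlink succeeds instantly and no data is copied
     ln "$tmp/mergerfs/downloads/file.mkv" "$tmp/mergerfs/media/tv/file.mkv"

     # Both names now share one inode with a link count of 2; across two
     # separate docker mappings the same ln fails with "Invalid cross-device link"
     nlinks=$(stat -c '%h' "$tmp/mergerfs/downloads/file.mkv")
     echo "link count: $nlinks"
     rm -rf "$tmp"
     ```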
  4. You're messing up your mappings somewhere, as hardlinks do work.
  5. If you want full hardlink support, map all docker paths /user --> /mnt/user, then within each docker set all locations to a sub-path of /user/mount_mergerfs. Behind the scenes, unRAID and rclone will behave as normal and manage where the files really reside.
  6. I just spotted that Document server 0.1.7 is out - has anyone successfully upgraded from 0.1.5?
  7. I may be misunderstanding what you are trying to do, but wouldn't it be simpler to just use the mover or the custom mover script? i.e. set your download/mergerfs local share to prefer cache, and then the mover will move files to the array when the cache fills up. If you need torrents to move off your cache faster than your mover settings, or on a different schedule, you could do something like what I do. I use diskmv to move certain shares and folders off my cache when the cache gets to a certain capacity - that way I can keep, say, work files on the cache longer (almost forever) and 'archive' files where I don't need the fast access. Below you'll see I move .../downloads/complete (torrents that have completed but haven't been imported, i.e. seeding) and .../downloads/seeds (seeds that have been imported) to the array when my cache gets to a certain utilisation.

     ########################################
     #######   Move Cache to Array   ########
     ########################################

     # check if mover running
     if [ -f /var/run/mover.pid ]; then
       if ps h `cat /var/run/mover.pid` | grep mover ; then
         echo "$(date "+%d.%m.%Y %T") INFO: mover already running. Not moving files."
       else
         # move files
         ReqSpace=150000000
         AvailSpace=$(df /mnt/cache | awk 'NR==2 { print $4 }')
         if [[ "$AvailSpace" -ge "$ReqSpace" ]]; then
           echo "$(date "+%d.%m.%Y %T") INFO: Space ok - exiting"
         else
           echo "$(date "+%d.%m.%Y %T") INFO: Cache space low. Moving Files."
           # /usr/local/sbin/mdcmd set md_write_method 1
           # echo "Turbo write mode now enabled"
           echo "$(date "+%d.%m.%Y %T") INFO: moving backup."
           bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/backup" cache disk2
           echo "$(date "+%d.%m.%Y %T") INFO: moving local."
           bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/local/tdrive_vfs/downloads/complete" cache disk2
           bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/local/tdrive_vfs/downloads/seeds" cache disk2
           echo "$(date "+%d.%m.%Y %T") INFO: moving media."
           bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/books" cache disk2
           bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/calibre" cache disk2
           bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/magazines" cache disk2
           bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/photos" cache disk2
           bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/media/other_media/videos" cache disk2
           echo "$(date "+%d.%m.%Y %T") INFO: moving software."
           bash /boot/config/plugins/user.scripts/scripts/unraid-diskmv/script -f -v "/mnt/user/software" cache disk2
           # /usr/local/sbin/mdcmd set md_write_method 0
           # echo "Turbo write mode now disabled"
         fi
       fi
     fi

     Edit: I've disabled turbo write as I don't have a parity drive anymore.
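     The core of the script above is just a free-space guard in front of the diskmv calls. A stripped-down, runnable sketch of only that check (using / and an example threshold, not my real cache drive or value):

     ```shell
     # df -P prints available KiB in column 4 of the second line; compare it
     # to a threshold and only trigger the moves when space is actually low.
     ReqSpace=150000000   # example threshold in KiB (~150GB)
     AvailSpace=$(df -P / | awk 'NR==2 { print $4 }')

     if [ "$AvailSpace" -ge "$ReqSpace" ]; then
         action="skip"    # space ok - nothing to do
     else
         action="move"    # space low - run the diskmv calls here
     fi
     echo "decision: $action (available: ${AvailSpace}KiB)"
     ```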
  8. Yes, but if you upload a file from the local folder the mount won't register the change. And it'd be a one-stop solution.
  9. You have to install it at every boot. Once you've got Python installed via NerdPack, just add this to a script that runs at array start or similar: pip install plexapi
  10. I had time to play at the weekend and unfortunately there are a few showstoppers: 1) it doesn't poll the local folder for changes and 2) it doesn't poll the remotes either. 2) might be because 1) isn't working, so if 1) can be fixed, 2) might be fixed too. @ncw from rclone is having a look, so we might have a fix/workaround. I hope he finds one, as playback is definitely better than with mergerfs - I guess from removing the middleman.
  11. @watchmeexplode5 Are you having any joy with union? I just had another go and I've encountered 2 show-stopping issues:

     1. Changes to the local folder aren't picked up in a timely manner
     2. Changes to remotes, EVEN if made via rclone (e.g. via the mount), aren't picked up

     Have you got a link for this please?
  12. I'm curious how many people are now running this setup, whether unionfs or mergerfs. The thread has got quite big over nearly 2 years. It's a shame I can't add a poll to the first post.
  13. I have a similar problem. Whilst trying to do the same upgrade, I'm now stuck on the Maintenance page and can't access Nextcloud.
  14. Make sure you are running the latest version of the unmount script - at array start, or manually now to fix your problem. There was an error in an earlier version.
  15. Could be - we're not sure. At the moment my system has been shutting down ok. Don't know - but if /mnt/user/local is working, I'd just run with it! #1 I have no idea what's going on there, as I see negligible performance difference, and I think @watchmeexplode5 has said the same. #2 I got my first API ban in over a year a few weeks ago, but I was doing some big scans in Plex as well as in Radarr, Sonarr etc all at the same time. It sounds like something is wrong with your Radarr mappings, not rclone - rclone just uploads anything it sees in the local share; Radarr controls where files are moved.
  16. @Tecneo I haven't had a chance to play with union this week - have you made any progress before I start?
  17. ncw just said they should work - let me know how you get on. I'm going to focus on fixing the local polling tonight.
  18. It doesn't appear on your /dockers page - it's a bit of an odd case. It has to be re-installed every time unRAID starts, as part of the script. That's why I want to remove it if possible - because unRAID can't support it natively, e.g. via a plugin or a 'normal' docker, it's a bit confusing for unRAID users. Not mergerfs' problem, but it just makes things a bit clunky. Having everything in rclone will be a lot cleaner.
  19. I've just read the full post you linked to and saw Nick's comment - after work I'm going to try removing --dir-cache-time and having poll set to maybe 1s. Need to read up a bit first/have a refresher on what both are doing.
  20. Nope, but it defaults to 1m so it wouldn't help. I think rclone looks for new changes based on --dir-cache-time - when I had this set to 720h as usual, changes weren't getting picked up (I waited about 5 mins). With --dir-cache-time 1m they got picked up pretty quickly.
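     For anyone following along, these are the two flags in play: --poll-interval (default 1m) is how often rclone asks the backend for changes, while --dir-cache-time is how long rclone trusts its cached directory listing before re-reading it. A sketch of how they'd sit together on a mount (paths and values are illustrative only):

     ```shell
     # Illustrative flags only - since union wasn't honouring polling at this
     # point, a short --dir-cache-time was the workaround for spotting changes.
     rclone mount tdrive_union: /mnt/user/mount_mergerfs/union_test \
         --poll-interval 1m \
         --dir-cache-time 1m \
         --allow-other
     ```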
  21. @watchmeexplode5 @Kaizac @testdasi and everyone else - I need help testing rclone union please, which landed yesterday as part of rclone 1.52. https://forum.rclone.org/t/rclone-1-52-release/16718

     I've created a test union ok and playback seems good - better than mergerfs, although I've only tried a few files. It'd be great if we can get this running, as I think it'll be easier to support than mergerfs, which has been brilliant but must be installed via a docker. We'd also be using just one app. I've encountered one problem so far, in that --dir-cache-time applies to local folder changes as well, so a small number is needed to spot any changes made to /mnt/user/local. I've asked if there's a way to have a long cache for just the remotes, here: https://forum.rclone.org/t/rclone-union-cache-time/16728/1

     My settings so far:

     [tdrive_union]
     type = union
     upstreams = /mnt/user/local/tdrive_vfs tdrive_vfs: tdrive_uhd_vfs: tdrive_t_adults_vfs: gdrive_media_vfs:
     action_policy = all
     create_policy = ff
     search_policy = ff

     rclone mount --allow-other --buffer-size 256M --dir-cache-time 1m --drive-chunk-size 512M --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --vfs-cache-mode writes tdrive_union: /mnt/user/mount_mergerfs/union_test
  22. @axeman #1 If you delete locally a file that rclone has synced, then rather than deleting it on the remote, it is moved to the backup dir for your chosen number of days. #2 It should show your files. Are you using Krusader? If so, restart it, as it has problems. Or try ssh or Windows Explorer.
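     On #1, that behaviour comes from rclone's --backup-dir flag. A hedged sketch of the kind of upload command involved (the remote name and paths are placeholders, not the actual script's config):

     ```shell
     # Instead of deleting remote files that have gone locally, rclone moves
     # them into a dated backup dir, which a later cleanup can purge after
     # the chosen number of days.
     rclone move /mnt/user/local/gdrive_vfs gdrive_vfs: \
         --backup-dir "gdrive_vfs:old/$(date +%Y-%m-%d)" \
         --min-age 15m \
         --log-level INFO
     ```

     Note --backup-dir must point at the same remote as the destination, which is why it reuses gdrive_vfs: here.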
  23. Yes - the upload script doesn't run if the mount is not running.