Everything posted by DZMM

  1. DZMM

    Plexdrive

    One problem I haven't been able to overcome is slow writes by radarr and sonarr to my unionfs mount - anyone else having this problem?
  2. DZMM

    Plexdrive

    It is cool, isn't it? It makes you rethink how to use storage when unlimited cloud storage is so cheap via this method and pretty much indistinguishable from local storage. If and when any of my local drives die, it's going to be an interesting decision what to do. I've already removed one (small) HDD to free up space for an SSD, and I'm contemplating removing another to make way for a second SSD that I'm currently not using.
  3. I've added a /scripts mapping pointing to /boot/config/plugins/user.scripts/scripts/ in radarr and sonarr to run a couple of my scripts. Do I need to do anything funky like RO/Slave etc., or can I just leave it as Read/Write? Thanks
  4. DZMM

    Plexdrive

    hmm, I hadn't considered that - I'm searching the forum to see if anyone's ever added a user script to a docker before, and how.
  5. DZMM

    Plexdrive

    Brilliant - what kind of launch times are you getting? How does it compare to PD? I never really got PD working, and whilst investigating the rclone bits I realised I could reduce the number of moving parts I needed to figure out. The support on the rclone forum is as good as on this one, whereas with PD I couldn't see anywhere to go.
  6. DZMM

    Plexdrive

    I install the rclone plugin via a script because I have a pfsense VM, so I can't use the main plugin as it needs connectivity when unraid starts, whereas I don't have connectivity for a minute or two until my pfsense VM kicks in. The install check is there to make sure I don't run the script twice - it creates a dummy file when the script starts, checks that the file isn't there before starting, and removes it when the script stops. Once things settle down a bit, I'm going to set the script to run, say, every 5 mins so that if for some reason the mount drops, it will re-mount it. In radarr/sonarr you can run scripts in Settings/Connect and then add 'Custom Script'. I just created an extra docker mapping for /scripts --> /boot/config/plugins/user.scripts/scripts/. I did it this way as I might have extra scripts in the future.
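
    Something like this is roughly what I have in mind for the scheduled check (just a sketch I haven't tested yet - the mountcheck file is the same dummy-file idea as in my full install script below, and the paths/flags would need adjusting to your setup):

    #!/bin/bash
    # Untested sketch: run every 5 mins via the user scripts plugin and remount
    # the rclone vfs mount if the dummy mountcheck file has disappeared.
    if [[ -f "/mnt/disks/rclone_vfs/mountcheck" ]]; then
        echo "rclone mount looks fine - nothing to do"
    else
        echo "rclone mount missing - remounting"
        fusermount -uz /mnt/disks/rclone_vfs
        rclone mount --allow-other --dir-cache-time 24h --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 256M gdrive_media_vfs: /mnt/disks/rclone_vfs &
    fi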
  7. DZMM

    Plexdrive

    Sorry, I thought I'd removed the bits that weren't relevant. I was previously using rclone's cache, which is great at merging local files that haven't been uploaded yet with cloud files a la unionfs, and uploading the local files automatically. But media launches were shocking, so I moved to vfs in tandem with unionfs. My /mnt/disks/rclone_cache_old mount is where I've mounted and decrypted the cache files that hadn't been uploaded yet, so that I can manually upload them. Here's my rclone config with the irrelevant bits removed this time:

    [gdrive]
    type = drive
    client_id = xxxxxxxxxxxxxxxx.apps.googleusercontent.com
    client_secret =
    scope = drive
    root_folder_id =
    service_account_file =
    token =

    [gdrive_media_vfs]
    type = crypt
    remote = gdrive:crypt
    filename_encryption = standard
    directory_name_encryption = true
    password =
    password2 =

    [upload_gdrive_media]
    type = crypt
    remote = gdrive:crypt
    filename_encryption = standard
    directory_name_encryption = true
    password =
    password2 =

    [backup]
    type = crypt
    remote = gdrive:backup
    filename_encryption = standard
    directory_name_encryption = true
    password =
    password2 =

    Correct - just create a vfs mount with your uploadcrypt: remote and you're good to go. Mount it at the same location as gdrivecrypt: and you shouldn't have to update anything else.
  8. DZMM

    Plexdrive

    Ignore anything you've seen me posting here or on the rclone forums previously, as most of it was from when I really didn't know what I was doing or asking - now I'm at about 50%. I use the user scripts plugin to do most of the work, so I'll just post my scripts.

    rclone install - mounts rclone and creates the unionfs mounts with a few checks built in. Runs at array start:

    #!/bin/bash

    mkdir -p /mnt/disks/rclone_vfs
    mkdir -p /mnt/disks/rclone_cache_old

    ####### Check if script already running ##########
    if [[ -f "/mnt/user/software/rclone_install_running" ]]; then
        exit
    else
        touch /mnt/user/software/rclone_install_running
    fi

    ####### Check if rclone vfs mount is mounted ##########
    if [[ -f "/mnt/disks/rclone_vfs/tv_adults_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/tv_kids_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/movies_adults_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/movies_kids_gd/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone_vfs mounted success."
    else
        ####### Check if internet / pfsense VM has started else add some pauses before installing rclone ##########
        if ping -q -c 1 -W 1 google.com >/dev/null; then
            echo "The network is up - installing rclone"
            plugin install https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
        else
            echo "The network is down - pausing for 5 mins"
            sleep 5m
            if ping -q -c 1 -W 1 google.com >/dev/null; then
                echo "The network is now up - installing rclone"
                plugin install https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
            else
                echo "The network is still down - pausing for another 5 mins"
                sleep 5m
                plugin install https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
            fi
        fi

        # Mount rclone vfs mount
        rclone mount --allow-other --dir-cache-time 24h --cache-dir=/tmp/rclone/vfs --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 256M --log-level INFO --stats 1m gdrive_media_vfs: /mnt/disks/rclone_vfs &
    fi

    sleep 5

    if [[ -f "/mnt/disks/rclone_vfs/tv_adults_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/tv_kids_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/movies_adults_gd/mountcheck" ]] && [[ -f "/mnt/disks/rclone_vfs/movies_kids_gd/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: rclone_vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone_vfs mount failed - please check for problems."
        rm /mnt/user/software/rclone_install_running
        exit
    fi

    ####### Mount unionfs ##########
    # check if mounted
    if [[ -f "/mnt/disks/unionfs_tv_adults/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_tv_kids/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_adults/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_kids/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_uhd/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
        rm /mnt/user/software/rclone_install_running
        exit
    else
        # Unmount before remounting
        fusermount -uz /mnt/disks/unionfs_movies_adults
        fusermount -uz /mnt/disks/unionfs_movies_kids
        fusermount -uz /mnt/disks/unionfs_movies_uhd
        fusermount -uz /mnt/disks/unionfs_tv_adults
        fusermount -uz /mnt/disks/unionfs_tv_kids
        unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/movies_adults_upload=RW:/mnt/disks/rclone_vfs/movies_adults_gd=RO /mnt/disks/unionfs_movies_adults
        unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/movies_kids_upload=RW:/mnt/disks/rclone_vfs/movies_kids_gd=RO /mnt/disks/unionfs_movies_kids
        unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/movies_uhd_upload=RW:/mnt/disks/rclone_vfs/movies_uhd_gd=RO /mnt/disks/unionfs_movies_uhd
        unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/tv_adults_upload=RW:/mnt/disks/rclone_vfs/tv_adults_gd=RO /mnt/disks/unionfs_tv_adults
        unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/tv_kids_upload=RW:/mnt/disks/rclone_vfs/tv_kids_gd=RO /mnt/disks/unionfs_tv_kids
        if [[ -f "/mnt/disks/unionfs_tv_adults/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_tv_kids/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_adults/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_kids/mountcheck" ]] && [[ -f "/mnt/disks/unionfs_movies_uhd/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Movies & Series Remount failed."
        fi
    fi
    ####### End Mount unionfs ##########

    rm /mnt/user/software/rclone_install_running
    exit

    rclone upload - radarr, sonarr etc. add files to the unionfs (unionfs_****) mounts, with the files actually getting added to the **_upload folders, not the ***_gd folders that are the rclone folders. This script moves files to gd, i.e. to the _gd folder, and removes them from the _upload folder. bwlimit is there to try not to upload more than 750GB/day. I added the exclusions because hidden unionfs folders were getting uploaded as well from the _upload folders. I'm running this 24/7 at the moment - I'll probably schedule it every couple of hours once the backlog is cleared. I created a new remote upload_gdrive_media: for the background upload, with the same username, password and location gdrive:crypt as my gdrive_media_vfs: remote.

    #!/bin/bash

    ####### Check if script already running ##########
    if [[ -f "/mnt/user/software/rclone_upload" ]]; then
        exit
    else
        touch /mnt/user/software/rclone_upload
    fi

    # set folders
    uploadfolderTVKids="/mnt/user/tv_kids_upload"
    uploadfolderTVAdults="/mnt/user/tv_adults_upload"
    uploadfolderMoviesKids="/mnt/user/movies_kids_upload"
    uploadfolderMoviesAdults="/mnt/user/movies_adults_upload"
    uploadfolderMoviesUHD="/mnt/user/movies_uhd_upload"

    # move files
    rclone move $uploadfolderTVKids upload_gdrive_media:/tv_kids_gd -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle/** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k
    rclone move $uploadfolderTVAdults upload_gdrive_media:/tv_adults_gd -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle/** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k
    rclone move $uploadfolderMoviesKids upload_gdrive_media:/movies_kids_gd -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle/** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k
    rclone move $uploadfolderMoviesAdults upload_gdrive_media:/movies_adults_gd -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle/** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k
    rclone move $uploadfolderMoviesUHD upload_gdrive_media:/movies_uhd_gd -vv --drive-chunk-size 512M --delete-empty-src-dirs --checkers 10 --fast-list --transfers 4 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle/** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k

    rm /mnt/user/software/rclone_upload
    exit

    unionfs cleanup - unionfs hides deleted mount (RO) files rather than deleting them, which would cause major problems if you ever mounted gd differently. This script cleans up the unionfs folders and actually deletes the old mount files, e.g. upgraded or deleted files. I run this script overnight and I also run it from radarr and sonarr whenever they upgrade a file.

    #!/bin/bash

    ###########TV_KIDS##############
    find /mnt/disks/unionfs_tv_kids/.unionfs -name '*_HIDDEN~' | while read line; do
        oldPath=${line#/mnt/disks/unionfs_tv_kids/.unionfs}
        newPath=/mnt/disks/rclone_vfs/tv_kids_gd${oldPath%_HIDDEN~}
        rm "$newPath"
        rm "$line"
    done
    find "/mnt/disks/unionfs_tv_kids/.unionfs" -mindepth 1 -type d -empty -delete

    ###########TV_ADULTS##############
    find /mnt/disks/unionfs_tv_adults/.unionfs -name '*_HIDDEN~' | while read line; do
        oldPath=${line#/mnt/disks/unionfs_tv_adults/.unionfs}
        newPath=/mnt/disks/rclone_vfs/tv_adults_gd${oldPath%_HIDDEN~}
        rm "$newPath"
        rm "$line"
    done
    find "/mnt/disks/unionfs_tv_adults/.unionfs" -mindepth 1 -type d -empty -delete

    ###########movies_KIDS##############
    # find /mnt/disks/unionfs_movies_kids/.unionfs -name '*_HIDDEN~' | while read line; do
    #     oldPath=${line#/mnt/disks/unionfs_movies_kids/.unionfs}
    #     newPath=/mnt/disks/rclone_vfs/movies_kids_gd${oldPath%_HIDDEN~}
    #     rm "$newPath"
    #     rm "$line"
    # done
    # find "/mnt/disks/unionfs_movies_kids/.unionfs" -mindepth 1 -type d -empty -delete

    ###########movies_ADULTS##############
    find /mnt/disks/unionfs_movies_adults/.unionfs -name '*_HIDDEN~' | while read line; do
        oldPath=${line#/mnt/disks/unionfs_movies_adults/.unionfs}
        newPath=/mnt/disks/rclone_vfs/movies_adults_gd${oldPath%_HIDDEN~}
        rm "$newPath"
        rm "$line"
    done
    find "/mnt/disks/unionfs_movies_adults/.unionfs" -mindepth 1 -type d -empty -delete

    ###########movies_UHD##############
    find /mnt/disks/unionfs_movies_uhd/.unionfs -name '*_HIDDEN~' | while read line; do
        oldPath=${line#/mnt/disks/unionfs_movies_uhd/.unionfs}
        newPath=/mnt/disks/rclone_vfs/movies_uhd_gd${oldPath%_HIDDEN~}
        rm "$newPath"
        rm "$line"
    done
    find "/mnt/disks/unionfs_movies_uhd/.unionfs" -mindepth 1 -type d -empty -delete

    exit

    rclone backup - backs up my local folders to a new remote backup: . It syncs files to backup: and moves deleted files to backup:old, with old files deleted after 365 days (rclone delete --min-age 365d backup:old). I'm not quite sure what happens with versioning - will check one day. I run this daily at the moment and I've excluded the bigger shares until my vfs uploads have finished.

    #!/bin/bash

    ####### Check if script already running ##########
    if [[ -f "/mnt/user/software/rclone_backup_running" ]]; then
        exit
    else
        touch /mnt/user/software/rclone_backup_running
    fi

    ######## ENABLED ############
    rclone sync /mnt/user/dzs backup:dzs --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
    rclone sync /mnt/user/nextcloud backup:nextcloud --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
    rclone sync /mnt/user/public backup:public --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
    rclone sync /mnt/disks/sm961/iso backup:sm961/iso --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k

    ######## DISABLED ############
    # rclone sync /mnt/user/backup backup:backup --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
    # rclone sync /mnt/user/movies_adults backup:movies_adults --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
    # rclone sync /mnt/user/movies_kids backup:movies_kids --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
    # rclone sync /mnt/user/movies_uhd backup:movies_uhd --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
    # rclone sync /mnt/user/other_media backup:other_media --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
    # rclone sync /mnt/user/software backup:software --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k
    # rclone sync /mnt/user/tv_recordings backup:tv_recordings --backup-dir backup:old -v --drive-chunk-size 512M --checkers 4 --fast-list --transfers 4 --bwlimit 6000k

    rclone delete --min-age 365d backup:old

    rm /mnt/user/software/rclone_backup_running
    exit

    rclone uninstall - runs at array stop to make sure everything is ready to start again at array start:

    #!/bin/bash

    fusermount -uz /mnt/disks/rclone_vfs
    fusermount -uz /mnt/disks/unionfs_movies_adults
    fusermount -uz /mnt/disks/unionfs_movies_kids
    fusermount -uz /mnt/disks/unionfs_movies_uhd
    fusermount -uz /mnt/disks/unionfs_tv_adults
    fusermount -uz /mnt/disks/unionfs_tv_kids

    plugin remove rclone.plg
    rm -rf /tmp/rclone

    if [[ -f "/mnt/user/software/rclone_install_running" ]]; then
        rm /mnt/user/software/rclone_install_running
        echo "install running - removing dummy file"
    else
        echo "Passed: install already exited properly"
    fi

    if [[ -f "/mnt/user/software/rclone_upload" ]]; then
        echo "upload running - removing dummy file"
        rm /mnt/user/software/rclone_upload
    else
        echo "rclone upload already exited properly"
    fi

    if [[ -f "/mnt/user/software/rclone_backup_running" ]]; then
        echo "backup running - removing dummy file"
        rm /mnt/user/software/rclone_backup_running
    else
        echo "backup already exited properly"
    fi

    exit
  9. DZMM

    Plexdrive

    I'm doing exactly that - loading the less vital content: stuff I don't really care about if it gets nuked at a later date by google, or content I could replace if I was really bothered. I've still got my local array that can hold about 30TB for content I can't afford to lose (I'm also backing this up to gd), although if a drive fails in the future I will have to consider whether to replace it or just load the content online. Given some of the insane amounts of storage I've seen being stored on the rclone forums, including people using GD for seedboxes, I don't think this is something google are currently bothered about. The bigger risk, I think, is that they will one day enforce the 5-user requirement for unlimited storage, rather than letting people like me through with one account for £6/pm.
  10. DZMM

    Plexdrive

    I had the same launch problems, which went away when I moved to the new rclone vfs feature, which allows you to directly mount your encrypted remote without incurring any API hits, i.e. you don't need plexdrive. My launches are only a second or two longer than local files, i.e. I can't really tell. Assuming your encrypted remote is gdrive as in the guide, try:

    rclone mount --allow-other --dir-cache-time 24h --cache-dir=/tmp/rclone/vfs --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 1G --buffer-size 256M --log-level INFO gdrive: /mnt/disks/crypt

    I'm not sure that --cache-dir=/tmp/rclone/vfs is needed as nothing seems to get written to a cache, but I've left it in to ensure it goes to ram if it does. If you can be bothered to try and speed up launch times, try playing with --vfs-read-chunk-size 64M. This is the size of the first chunk rclone reads, which keeps doubling until the limit is reached - 1G in my case, i.e. it reads 64M, 128M, 256M, 512M, then 1G chunks. A lower value could mean faster launches - it didn't seem to work for me, and 64M seems to be the currently recommended number over on the rclone forums. The buffer is there just in case of any connectivity problems, and in my scenario it gets filled quickly between the second and third chunk being downloaded. If you're worried about API calls you could increase --dir-cache-time, as rclone mounts now poll every min for updates, so you could set an insanely high time. I've kept mine lowish as I'm still uploading content, and I have had a few problems where, if a new series is added, the polling hasn't picked it up. I don't think this is a real concern, as I've uploaded 20TB of my content so far and I've done numerous plex library updates, restarts etc. while I was removing duplicates and getting it all working. You can write direct to the mount, but it's not recommended as, if it fails, you lose the file. So, best to keep the scheduled upload job as in the guide and the unionfs mount, so that plex can still see files that haven't been uploaded yet.
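
    To make the doubling concrete, here's a little illustration of how the chunk size grows per request (just my understanding of the behaviour, so treat the numbers as a sketch rather than gospel):

    #!/bin/bash
    # Illustration only: assumes each read request doubles the chunk size until
    # --vfs-read-chunk-size-limit is hit (64M start, 1G limit as in my mount).
    chunk=64      # MiB, --vfs-read-chunk-size
    limit=1024    # MiB, --vfs-read-chunk-size-limit (1G)
    total=0
    for request in 1 2 3 4 5 6; do
        total=$((total + chunk))
        echo "request $request: ${chunk}M chunk, ${total}M requested so far"
        chunk=$((chunk * 2))
        if [ "$chunk" -gt "$limit" ]; then chunk=$limit; fi
    done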
  11. I'm still trying to get my head around this. If radarr hardlinks a file and I later delete it from deluge or from the media location, is that safe? I.e. do I have to delete it from both locations for it to really be deleted?
  12. Thx - that's a good spec.
  13. How much do you pay for your server and what's the spec? I might have to go down this road in the future if I move somewhere that doesn't have good upstream bandwidth - at the moment I can handle the download and then upload to remote locations as I have a good upstream, but in the future I might have to 'dedicate' my upstream to loading content onto the dedicated server rather than to remote streams. I was nervous about placing so much content in the cloud, particularly after the shitty move ACD made. But, seeing how much content some people have loaded onto google, I'm hoping 'fingers crossed' that google won't do the same!
  14. plexguide does look good - I had everything set up already except for rclone, so I went the DIY route. It's really changed 'overnight' the way I use my storage - I don't think I'll be adding any more HDDs to my system, other than to replace my 'working' drives when they die, and even then I think I'll be focussing on bigger SSDs. In fact, I've already removed one of my HDDs and most of the others will become pretty redundant soon.
  15. @Waseh Have you had a chance to contact Squid? It'd be nice not to need to install the plugin at every boot, which would also protect against situations like last month when the rclone site went down.
  16. Is anybody else using the new vfs mount? I finally got it working nicely this weekend with a google drive mount - I can't tell the difference 90% of the time in plex whether a file is being played locally or from gd. I tried the cache mount, but my plex starts were averaging around 30s (around 4s now - same as local files) and I was having problems uploading files. I'm using a rclone move job to upload files on a schedule to gd and unionfs to merge the gd files and files not yet uploaded, so that plex can always play something. 17TB uploaded so far and no problems.
  17. I store a lot of my content on google drive, so my flow was /mnt/user/downloads ---> /mnt/user/import ---> /mnt/disks/google_media for files I'm adding to gd via an rclone mount. I couldn't hardlink because, for those files, docker has to copy between the /mnt/user and /mnt/disks mappings. I've just added the following to my array start script:

    mount --bind /mnt/user/downloads/ "/mnt/disks/downloads"
    mount --bind /mnt/user/import/ "/mnt/disks/import"

    so my flow is now /mnt/disks/downloads ---> /mnt/disks/import ---> /mnt/disks/google_media, i.e. I've used these for my docker mounts, e.g. unraid_disks as my mapping for /mnt/disks on all relevant dockers. Files are still actually stored in /mnt/user/downloads and /mnt/user/import. I believe this will let me hardlink, as docker now sees the files as all being on the same filesystem /unraid_disks. Before I do this, can anyone see any problems? Thanks

    Edit: I think I've come up with one problem. If I understand hardlinks properly, for my flow, files that appear to be in /mnt/disks/google_media will actually stay in /mnt/disks/import to avoid unnecessary IO - that's no good for me as I need the file to actually move to /mnt/disks/google_media, i.e. off my server. Actually, not a problem I think, as my rclone move script moves files from the mount, so I think the actual file does get moved.
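
    In the meantime, here's how I plan to sanity-check that hardlinks actually work across the bind mounts before I switch the dockers over (rough sketch, the test file name is just a placeholder):

    #!/bin/bash
    # Hardlinks only work within a single filesystem, so if this succeeds the
    # bind-mounted paths really do resolve to the same underlying storage.
    touch /mnt/disks/downloads/hardlink_test
    if ln /mnt/disks/downloads/hardlink_test /mnt/disks/import/hardlink_test 2>/dev/null; then
        echo "hardlink OK - same filesystem"
    else
        echo "hardlink failed - still crossing filesystems"
    fi
    rm -f /mnt/disks/downloads/hardlink_test /mnt/disks/import/hardlink_test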
  18. Thanks - I've set mine to 10 so it only kicks in when around 90% of memory is used. I've got 64GB, so this means it's there just as insurance. The default is 60, which is too 'aggressive' for my use case.
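
    For anyone else wanting to change it, this is roughly how it's set (I'm assuming the standard sysctl knob here - it needs re-applying after a reboot, e.g. from the go file, unless the plugin already handles that for you):

    #!/bin/bash
    # Set swappiness to 10 for the current boot and confirm the value.
    sysctl -w vm.swappiness=10
    cat /proc/sys/vm/swappiness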
  19. I'm so glad I found this post, as this has been driving me mad for the last couple of weeks. When I had a slow internet connection I never saw the problem, but now I've got a fast one my disks have been getting 'backed up' with slow transfers. Re 'Use hardlinks instead of copy': when I've finished seeding a file, do I delete it via Deluge, which will just remove it from Deluge's folder and not delete the file that radarr/plex etc. use? Also, is the real file at that point moved from Deluge's completed folder /mnt/downloads/complete to my /mnt/movies folder, or was it already there when radarr did the hardlink? I'm trying to gauge whether I need to assign more disks to /mnt/downloads/ if files could potentially be staying there for months before moving to /mnt/movies. Thanks
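
    In the meantime, I'll probably check a finished download like this (the file paths below are just placeholders for my setup):

    # Same inode number and a link count of 2 means the two paths are hardlinks
    # to the same file, so it only takes up space once.
    stat -c '%i %h %n' "/mnt/downloads/complete/Some.Movie.2018.mkv" "/mnt/movies/Some.Movie.2018/Some.Movie.2018.mkv"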
  20. That would still mess up my flow or lead to inefficiencies, as I can't fully split my flows - e.g. I download a tv show via /mnt/user0/downloads ---> /mnt/user0/tv_shows, but then I want to manually import a file in sonarr from /mnt/user0/downloads and I want it to happen fast, so I want to use the cache drive, not /mnt/user0/tv_shows. Also, pooling my writes (normally a max of two big writes per day) means my array drives spin up fewer times so I get less noise, and the turbo write plugin kicks in nicely to make the array write go quickly in one big chunk, rather than drives spinning up and down all day. I'm happy with my flow, I've just been caught out a few times when doing a big download session.
  21. I understand what you're saying - that's exactly how I use my cache 90% of the time. The problem I have is that if I suddenly have a lot of writes to my cache, because my download speed is fast it can fill up quickly. With normal usage the mover won't run more than once a day, but if the cache does fill fast I have a problem at the moment - I just want a bit of insurance by being able to poll more often. It's also hard for me to separate 'background writes' from 'foreground writes', i.e. yes, I'm ok with movies getting downloaded and written to /movies_adults slowly, but when I'm manually ripping or moving a movie to the same directory I want it to happen quickly.
  22. Is there a way to check manually via a script more frequently whether the threshold has been passed, e.g. every 30 or 15 minutes, and if so run mover? Every hour is too late for me as I have a fast connection. If I install the old script will they conflict? Thanks
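
    If there isn't a built-in way, this is the sort of thing I was thinking of running every 15-30 mins via user scripts (untested sketch - the cache path, threshold and mover location are assumptions for my box):

    #!/bin/bash
    # Untested sketch: kick off mover early if the cache pool passes a threshold.
    threshold=90   # percent used
    used=$(df --output=pcent /mnt/cache | tail -1 | tr -dc '0-9')
    if [ "$used" -ge "$threshold" ]; then
        echo "cache ${used}% full - starting mover"
        /usr/local/sbin/mover
    else
        echo "cache ${used}% full - below threshold, nothing to do"
    fi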
  23. Ok, I've been using this for a few days - can I ask a few questions please? I've created a 32GB swapfile as I've added an extra unassigned SSD that has spare space. It's been live for around 1.5 days and it's using 8GB so far, even though I've still got free memory. When does it move data to the swap? I thought it only did that when physical ram was getting low - what controls it? And if the swap data is accessed, is it moved back to ram? Thanks
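
    For anyone wondering, creating a swapfile manually would look roughly like this (the SSD path is just a placeholder, and the plugin may well do all of this for you anyway):

    #!/bin/bash
    # Create and enable a 32GB swapfile on an unassigned SSD, then confirm.
    dd if=/dev/zero of=/mnt/disks/ssd/swapfile bs=1M count=32768
    chmod 600 /mnt/disks/ssd/swapfile
    mkswap /mnt/disks/ssd/swapfile
    swapon /mnt/disks/ssd/swapfile
    swapon --show
    free -h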