Everything posted by DZMM

  1. Plexdrive

    Fast enough that I can't tell if it's local or remote. The main benefit: the occasional pause at the start, which required a pause/play to resume, seems to have gone.
  2. Thanks - it'll be good for LT to take it on, as I agree it's become core functionality like the rest of your plugins.
  3. Hi, in a future update could you add the read/write info like the array drives have, please?
  4. Plexdrive

    I'm trying a new mount which makes sense based on what I learnt last night in this thread: https://forum.rclone.org/t/my-vfs-sweetspot-updated-21-jul-2018/6132/77?u=binsonbuzz and here: https://github.com/ncw/rclone/pull/2410

    Apparently for vfs, only the buffer is stored in memory. The chunk isn't stored in memory - the chunk settings control how much data rclone requests, and it isn't 'downloaded' to the machine until plex or rclone requests it. To stop the buffer asking for extra chunks at the start, you need to make sure the first chunk is bigger than the buffer - this keeps API hits down.

    Plex will use memory to cache data depending on whether you are direct playing (no), streaming (yes) or transcoding (yes) - if transcoding, it's controlled by the transcoder's time-to-buffer setting in plex. I've got 300 seconds for my transcoder, but other users go higher, even 900, which seems excessive to me.

    So, I'm going with:

    rclone mount --allow-other --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --buffer-size 100M --log-level INFO gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m
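    A rough back-of-envelope sketch of why the first chunk should be bigger than the buffer (this models my reading of rclone's documented vfs behaviour - each ranged request starts at --vfs-read-chunk-size and doubles on subsequent requests when the limit is off; the function is mine, not anything rclone ships):

```shell
# Model how many ranged API requests it takes to read a whole file when the
# chunk size doubles on every request (--vfs-read-chunk-size-limit off).
chunk_requests() {     # usage: chunk_requests <file size MiB> <first chunk MiB>
    local size=$1 chunk=$2 offset=0 n=0
    while (( offset < size )); do
        n=$(( n + 1 ))             # one more ranged request
        offset=$(( offset + chunk ))
        chunk=$(( chunk * 2 ))     # next request asks for double
    done
    echo "$n"
}

chunk_requests 10240 128   # a 10GiB file with a 128M first chunk -> prints 7
```

    With a 128M first chunk and a 100M buffer, opening a file costs a single request; if the buffer were bigger than the first chunk, it would immediately pull a second chunk and double the API hits at every file open.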
  5. Pity, that would have been cool. My plex usage can get high at times, so I'm nervous about not giving it access to all my cores, but I really want to try and fix the audio lag in my W10 VM. I think I'll try isolcpus as a test to see if it solves the problem first. Thanks
  6. Can I combine --cpuset= with append isolcpus= to block unraid from using certain cores, but still allow certain dockers to use those cores? E.g. could I do the following at the same time:

     # stop unraid using the cores assigned to my VM
     append isolcpus=8,9,10,22,23,24

     # within the VM xml, assign the same cores, i.e. the VM can use them
     <cputune>
       <vcpupin vcpu='0' cpuset='8'/>
       <vcpupin vcpu='1' cpuset='22'/>
       <vcpupin vcpu='2' cpuset='9'/>
       <vcpupin vcpu='3' cpuset='23'/>
       <vcpupin vcpu='4' cpuset='10'/>
       <vcpupin vcpu='5' cpuset='24'/>
       <emulatorpin cpuset='0,14'/>
     </cputune>

     # allow priority dockers e.g. Plex to access all cores
     --cpuset-cpus=0-27

     At the moment I stop most dockers using cores 8,9,10,22,23,24, but I'm wondering if I can also stop unraid, while letting other dockers through. Thanks

     Edit: If I do append isolcpus=8,9,10,22,23,24, does this automatically exclude dockers from using these cores unless I do --cpuset-cpus=0-27 in the docker settings?
  7. I found this deluge thread and I'm going to see if adding ignore_resume_timestamps via the ltConfig plugin helps: https://dev.deluge-torrent.org/ticket/3044
  8. I ran xfs_fsr -v /dev/md6 out of curiosity for a bit and then quit using CTRL+C, and I've now got a /.fsr directory on that drive. Do I have to do a full defrag to remove it? Edit: Decided to restart it and will run it to completion (will probably take a few days) to see if the /.fsr directory goes at the end.
  9. A quick update as my system approaches its 2nd anniversary. Overall, even after 2 years my machine is still coping well with my needs without breaking a sweat, or any real upgrades so far. In fact, it's feeling like a more beefed-up machine - mainly down to #5 below.

     • I've successfully set up a rclone vfs mount thanks to a new 200/200 connection (previously an awful 18/1 service) that allows me to stream files from an unlimited google drive account. It's working very well and so far I've uploaded over 30TB - I've now got more content in the cloud than on my local server, which feels like a major milestone
     • #1 has taken the pressure off maxing out my sata slots and needing to remove my pfsense nic to make room for a sata expansion card. I don't think I'll be adding any more local drives, and I might not even replace the ones that die
     • I've actually found a way to free up an expansion slot anyway by only running my nic at pcie 2.0 x1 rather than x4 - more than fast enough for my gigabit lan. This has also allowed me to pass through my USB 3.1 controller to my main VM, which has solved webcam issues I was having and means I don't have to build a dedicated pfsense box
     • I only use kodi now for live tv, and will shift this to plex once they start rolling out the TV guide to my clients (I think it's just the web app at the moment)
     • I've deliberately filled the slow 8TB archive drive so that it doesn't hold up future transfers
     • I've ditched the cache pool and just gone with one 500GB xfs drive which moves at least once a day, so I'll never lose more than that much content (key stuff is stored on other drives via nextcloud). The other 500GB drive is now unassigned, for appdata (plex, deluge, nzbget) and nzbget downloads. I've also taken the 2TB out of the array and it's another unassigned drive, for torrents. Moving all of this IO off my array, and using my cache drive for all transfers to the array, has really improved my transfer rates. Also, sorting out my shares, split levels and hardlinking has helped.
  10. @dlandon I know this is a bit old, but I'm trying to troubleshoot some VM freezes I get. I have 3x W10 VMs that are always on (one used all day by me, and two used by my kids a couple of hours a day, concurrently) and a pfsense VM. I have 14 cores and I used to pin two VMs to 0,14 and two VMs to 1,15 - my thinking was to share the load. After reading your post, I've been thinking: should the emulator pin for the W10 VMs at least be on the same cores, i.e. the windows emulator is the same 'process', so using different pinning will cause resource problems/conflicts? And should all VMs be pinned to the first core regardless of how many are running concurrently? Thanks
  11. Plexdrive

    I realised it was easier just to write the files directly to /mnt/mount_unionfs to have them in /unionfs! I switched my upload remote to my vfs remote - the old upload remote was a carryover from when I used a rclone cache remote, before I switched to a vfs remote.
  12. Plexdrive

    Easy to do if you want one folder view, e.g. for plex. I don't think having the local media as RW, i.e. 2x RW folders, would work, as I'm not sure how unionfs would know which RW folder to add content to.

    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/upload_folder/=RW:/mnt/user/local_media/=RO:/mnt/user/google_media=RO /mnt/user/unionfs_mount

    I've got a couple of TBs queued, so at the moment it works: the upload is running constantly, so over the course of a day it never uploads more than 750GB. It runs the rclone move commands sequentially, so it'll never go over the 750GB as each job goes no faster than 8MB/s.

    On my low-priority to-do list is finding a way to do one rclone move that doesn't remove the top-level folders if they are empty.

    Edit: the upload script was simple - I used to have rclone delete empty directories, which was the problem. I also had a separate rclone remote for uploading because I used to use rclone cache. Now that I'm mounting vfs, not rclone cache, that's no longer needed. Now I've just got one move line, uploading to the rclone vfs remote:

    #!/bin/bash

    ####### Check if script already running ##########
    if [[ -f "/mnt/user/mount_rclone/rclone_upload" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
        exit
    else
        touch /mnt/user/mount_rclone/rclone_upload
    fi
    ####### End check if script already running ##########

    ####### Check if rclone installed ##########
    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
        rm /mnt/user/mount_rclone/rclone_upload
        exit
    fi
    ####### End check if rclone installed ##########

    # move files
    rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 10 --fast-list --transfers 4 --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 8500k

    # delete dummy file
    rm /mnt/user/mount_rclone/rclone_upload
    exit
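    The touch-file check above can leave a stale dummy file behind if the script dies mid-run. An alternative sketch using flock (not what my script uses, just an option) - the kernel releases the lock automatically when the script exits, even after a crash:

```shell
#!/bin/bash
# Single-instance guard using flock instead of a touch-file: the lock is
# tied to file descriptor 9 and released automatically on exit or crash,
# so there's no dummy file to clean up after an unclean shutdown.
exec 9>/tmp/rclone_upload.lock       # open (or create) the lock file on fd 9
if ! flock -n 9; then                # -n: fail immediately if already held
    echo "upload already running - exiting"
    exit 0
fi
echo "lock acquired"
# ... rclone move commands would go here ...
```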
  13. Plexdrive

    Hmm, I can't understand why, when I have everything processing on UDs and my cache like you, I can't get fast speeds. Is this from cache to array?
  14. Plexdrive

    I've got the same directory structure/folder names in /mnt/user/rclone_upload/google_vfs as in /mnt/user/mount_rclone/google_vfs, so that I only need one unionfs mount, to make my life easier. I.e. /mnt/user/mount_unionfs/google_vfs/tv_kids_gd (or the docker mapping /unionfs/tv_kids_gd) is a union of /mnt/user/mount_rclone/google_vfs/tv_kids_gd and /mnt/user/rclone_upload/google_vfs/tv_kids_gd.
  15. Plexdrive

    Not quite following you... My unionfs mount is at level 2 in my user share, i.e. /mnt/user/mount_unionfs/google_vfs - within the google_vfs folder are all my merged google and local files. I've created the docker mapping /unionfs at the top level, /mnt/user/mount_unionfs, because I have other things in the /mnt/user/mount_unionfs user share - maybe my naming is confusing you.
  16. Plexdrive

    It's not a folder - it's an empty file. Create it via the command line: touch /mnt/user/wherever_you_want/mountcheck
  17. Plexdrive

    All dockers, as per the image - so that they are all referencing the same mapping, which is important for good comms between dockers. Within each docker I added the relevant sub-folders, e.g. here's one of my plex libraries:
  18. Plexdrive

    Rather than doing several unionfs mounts/merges for my different local media and google folders, I've just done one, and then for each docker I've pointed them to the relevant /unionfs sub-folders within the unionfs mount/merge.

    For the upload folders I'm still doing individual uploads from each media-type sub-folder. I'm doing this as I'm playing it safe for now, because I don't want rclone to accidentally move the top-level folders. Once I've researched how rclone deletes empty folders a bit more, I'll probably just have one upload job as well.

    Playing it safe again: hardlinking doesn't work between mappings, e.g. you can't hardlink from /import to /media, so I'm trying to see if I can get unraid to hardlink by placing my torrents and media in /unionfs. I've used a bind mount so that my local media, which is located somewhere else on my server, appears to the dockers as being located at /unionfs.

    Dockers move files with no io within a single docker mapping - but if a file is moved within a docker from /import/file.mkv to /media/file.mkv across mappings, the file is actually copied, even on the same disk. I'm trying to avoid copying by having all file references based on /unionfs/... (because my import folders are now e.g. /unionfs/import_usenet, I want my local folders to be at /unionfs too).

    I use nzbget, but yes - radarr, sonarr, plex, nzbget etc all use /unionfs/..., e.g. the nzbget docker moves movies to /unionfs/import_usenet/movies (/mnt/user/mount_unionfs/import_usenet/movies being the real location).

    Edit: made a few edits
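    A quick way to see why this matters - a hardlink only works (and costs no copy) when both names live on the same filesystem/mapping. The sketch below just demonstrates the shared inode; the temp paths are illustrative, not from my setup:

```shell
# Demo: a hardlink shares the source file's inode, so "moving" a file this
# way inside one mapping copies no data - which is why torrents/imports and
# media need to sit under the same /unionfs mapping for hardlinks to work.
d=$(mktemp -d)
echo data > "$d/file.mkv"
ln "$d/file.mkv" "$d/linked.mkv"           # instant, no extra disk space used
stat -c %i "$d/file.mkv" "$d/linked.mkv"   # both lines print the same inode
rm -r "$d"
```

    Across two separate mounts the same ln call fails with "Invalid cross-device link", and tools like radarr/sonarr silently fall back to a full copy.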
  19. Plexdrive

    I added /unionfs mapped to /mnt/user/mount_unionfs (RW slave) in the docker mappings, and then within sonarr etc added the relevant folders, e.g. /unionfs/google_vfs/tv_kids_gd and /unionfs/google_vfs/tv_adults_gd, and in radarr /unionfs/google_vfs/movies_kids_gd, /unionfs/local_media/movies_hd/kids, etc.
  20. Plexdrive

    Have you got any apps other than plex looking at your mounts, e.g. kodi? Or maybe one of your dockers is not configured correctly and is mapped directly to the vfs mount rather than the unionfs folder.
  21. Plexdrive

    Edit: 08/10/2018 - Updated rclone mount, upload script, uninstall script
    Edit: 11/10/2018 - Tidied up and updated scripts

    Sharing below what I've got in case it helps anyone else. I use the rclone plugin, the custom user scripts plugin, and unionfs via Nerd Pack to make everything below work.

    docker mapping: for my dockers I create two mappings, /user ---> /mnt/user and /disks --> /mnt/disks (RW slave).

    rclone vfs mount: /mnt/user/mount_rclone/google_vfs

    So, my rclone mount below is referenced within dockers at /user/mount_rclone/google_vfs. I don't think it's safe in the top-level folder, and I also created the google_vfs folder in case I do other mounts in the future.

    rclone mount --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m &

    Local files awaiting upload: /mnt/user/rclone_upload/google_vfs

    A separate script uploads these to gdrive on my preferred schedule using rclone move.

    unionfs mount: /mnt/user/mount_unionfs/google_vfs

    Unionfs combines the gdrive files with local files that haven't been uploaded yet. My unionfs mount below is referenced within dockers at /user/mount_unionfs/google_vfs. All my dockers (Plex, radarr, sonarr etc) look at the movie and tv_shows sub-folders within this mount, which masks whether files are local or in the cloud:

    unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

    My full scripts are below, annotated a bit.

    rclone install - I run this every 5 mins so it remounts automatically (hopefully) if there's a problem:

    #!/bin/bash

    ####### Check if script already running ##########
    if [[ -f "/mnt/user/mount_rclone/rclone_install_running" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Exiting - script already running."
        exit
    else
        touch /mnt/user/mount_rclone/rclone_install_running
    fi
    ####### End check if script already running ##########

    mkdir -p /mnt/user/mount_rclone/google_vfs
    mkdir -p /mnt/user/mount_unionfs/google_vfs

    ####### Start rclone_vfs mount ##########
    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs mount success."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: installing and mounting rclone."
        # install via script as no connectivity at unraid boot
        /usr/local/sbin/plugin install https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/plugin/rclone.plg
        rclone mount --allow-other --buffer-size 1G --dir-cache-time 72h --drive-chunk-size 32M --fast-list --log-level INFO --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m &
        # pause briefly to give the mount time to initialise
        sleep 5
        if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check rclone vfs mount success."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: rclone_vfs mount failed - please check for problems."
            rm /mnt/user/mount_rclone/rclone_install_running
            exit
        fi
    fi
    ####### End rclone_vfs mount ##########

    ####### Start mount unionfs ##########
    if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
    else
        # unmount before remounting
        fusermount -uz /mnt/user/mount_unionfs/google_vfs
        unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs
        if [[ -f "/mnt/user/mount_unionfs/google_vfs/mountcheck" ]]; then
            echo "$(date "+%d.%m.%Y %T") INFO: Check successful, unionfs Movies & Series mounted."
        else
            echo "$(date "+%d.%m.%Y %T") CRITICAL: unionfs Movies & Series remount failed."
            rm /mnt/user/mount_rclone/rclone_install_running
            exit
        fi
    fi

    ############### Start dockers that need the unionfs mount or connectivity ######################
    # only start dockers once
    if [[ -f "/mnt/user/mount_rclone/dockers_started" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: dockers already started"
    else
        touch /mnt/user/mount_rclone/dockers_started
        echo "$(date "+%d.%m.%Y %T") INFO: Starting dockers."
        docker start plex
        docker start letsencrypt
        docker start ombi
        docker start tautulli
        docker start radarr
        docker start sonarr
        docker start radarr-uhd
        docker start lidarr
        docker start lazylibrarian-calibre
    fi
    ############### End dockers that need the unionfs mount or connectivity ######################
    ####### End mount unionfs ##########

    rm /mnt/user/mount_rclone/rclone_install_running
    exit

    rclone uninstall - run at array shutdown. Edit 08/10/18: also run at array start, just in case of an unclean shutdown:

    #!/bin/bash

    fusermount -uz /mnt/user/mount_rclone/google_vfs
    fusermount -uz /mnt/user/mount_unionfs/google_vfs
    plugin remove rclone.plg
    rm -rf /tmp/rclone

    if [[ -f "/mnt/user/mount_rclone/rclone_install_running" ]]; then
        echo "install running - removing dummy file"
        rm /mnt/user/mount_rclone/rclone_install_running
    else
        echo "Passed: install already exited properly"
    fi

    if [[ -f "/mnt/user/mount_rclone/rclone_upload" ]]; then
        echo "upload running - removing dummy file"
        rm /mnt/user/mount_rclone/rclone_upload
    else
        echo "rclone upload already exited properly"
    fi

    if [[ -f "/mnt/user/mount_rclone/rclone_backup_running" ]]; then
        echo "backup running - removing dummy file"
        rm /mnt/user/mount_rclone/rclone_backup_running
    else
        echo "backup already exited properly"
    fi

    if [[ -f "/mnt/user/mount_rclone/dockers_started" ]]; then
        echo "removing docker run-once dummy file"
        rm /mnt/user/mount_rclone/dockers_started
    else
        echo "docker run-once file already removed"
    fi

    exit

    rclone upload - I run this every hour. Edit 08/10/18: (i) exclude the .unionfs/ folder from the upload; (ii) I also run against my cache first to try and stop files going to the array, aka 'google mover'. I also make it cycle through one array disk at a time, to stop multiple disks spinning up for the 4 transfers and to increase the odds of the uploader moving files off the cache before the mover moves them to the array:

    #!/bin/bash

    ####### Check if script already running ##########
    if [[ -f "/mnt/user/mount_rclone/rclone_upload" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: Exiting as script already running."
        exit
    else
        touch /mnt/user/mount_rclone/rclone_upload
    fi
    ####### End check if script already running ##########

    ####### Check if rclone installed ##########
    if [[ -f "/mnt/user/mount_rclone/google_vfs/mountcheck" ]]; then
        echo "$(date "+%d.%m.%Y %T") INFO: rclone installed successfully - proceeding with upload."
    else
        echo "$(date "+%d.%m.%Y %T") INFO: rclone not installed - will try again later."
        rm /mnt/user/mount_rclone/rclone_upload
        exit
    fi
    ####### End check if rclone installed ##########

    # move files
    # echo "$(date "+%d.%m.%Y %T") INFO: Uploading cache then array."
    # echo "$(date "+%d.%m.%Y %T") INFO: Temp clearing each disk"
    rclone move /mnt/cache/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/disk1/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/cache/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/disk2/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/cache/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/disk3/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/cache/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/disk4/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/cache/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/disk5/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/cache/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    rclone move /mnt/disk6/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 5 --fast-list --transfers 3 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --bwlimit 9000k --tpslimit 6
    # end clearing each disk

    # remove dummy file
    rm /mnt/user/mount_rclone/rclone_upload
    exit

    unionfs cleanup - run daily, and manually. I don't run it from the dockers anymore, as that ran too often and was overkill:

    #!/bin/bash

    ################### Clean up unionfs folder #########################
    echo "$(date "+%d.%m.%Y %T") INFO: starting unionfs cleanup."
    find /mnt/user/mount_unionfs/google_vfs/.unionfs -name '*_HIDDEN~' | while read line; do
        oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}
        newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}
        rm "$newPath"
        rm "$line"
    done
    find "/mnt/user/mount_unionfs/google_vfs/.unionfs" -mindepth 1 -type d -empty -delete

    ########### Remove empty upload folders ##################
    echo "$(date "+%d.%m.%Y %T") INFO: removing empty folders."
    find /mnt/user/rclone_upload/google_vfs -empty -type d -delete
    # recreate key folders in case they were deleted, so future mounts don't fail
    mkdir -p /mnt/user/rclone_upload/google_vfs/movies_adults_gd/
    mkdir -p /mnt/user/rclone_upload/google_vfs/movies_kids_gd/
    mkdir -p /mnt/user/rclone_upload/google_vfs/movies_uhd_gd/adults/
    mkdir -p /mnt/user/rclone_upload/google_vfs/movies_uhd_gd/kids/
    mkdir -p /mnt/user/rclone_upload/google_vfs/tv_adults_gd/
    mkdir -p /mnt/user/rclone_upload/google_vfs/tv_kids_gd/

    ###################### Clean up import folders #################
    echo "$(date "+%d.%m.%Y %T") INFO: cleaning usenet import folders."
    find /mnt/user/mount_unionfs/import_usenet/ -empty -type d -delete
    mkdir -p /mnt/user/mount_unionfs/import_usenet/movies
    mkdir -p /mnt/user/mount_unionfs/import_usenet/movies_uhd
    mkdir -p /mnt/user/mount_unionfs/import_usenet/tv
    exit
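    The path juggling in the cleanup loop is just bash prefix/suffix stripping - a standalone sketch of the same translation (the example filename is made up):

```shell
# How the cleanup script maps a unionfs whiteout file back to the real file
# on the rclone mount: strip the .unionfs prefix, re-prefix with the rclone
# mount path, and drop the _HIDDEN~ marker unionfs appends to deletions.
line="/mnt/user/mount_unionfs/google_vfs/.unionfs/tv_kids_gd/old.mkv_HIDDEN~"
oldPath=${line#/mnt/user/mount_unionfs/google_vfs/.unionfs}   # -> /tv_kids_gd/old.mkv_HIDDEN~
newPath=/mnt/user/mount_rclone/google_vfs${oldPath%_HIDDEN~}  # strip suffix, add mount prefix
echo "$newPath"   # /mnt/user/mount_rclone/google_vfs/tv_kids_gd/old.mkv
```

    Deleting $newPath removes the cloud copy, and deleting the whiteout file itself stops unionfs hiding the (now gone) entry.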
  22. Plexdrive

    I think somehow you are writing directly to the vfs mount, and the error is showing the write failing - which means you lose the file, or at least it can't be retried: https://rclone.org/commands/rclone_mount/#file-caching
  23. Plexdrive

    Your mount scripts look fine, although I don't know what the implications of having two vfs mounts are, e.g. whether they behave properly and don't interfere with each other. Post your move scripts as well - those are what used to cause my unraid server to run out of memory.
  24. Plexdrive

    Your problem is different. My mounts are fine, as I can browse files via putty, SMB etc, but my dockers won't see anything. I used to run into memory problems with my move jobs - post your rclone lines. I've tried rw and that didn't help. Are they supposed to be rw slave even if mounted at /mnt/user? I've been importing from array-->array or ud-->array. I'm going to try some ud-->cache imports today to see if it's been an io problem on my array. I'm pretty sure it's not, but that will confirm it either way.
  25. Plexdrive

    I've just added a 2TB unassigned drive for my usenet and torrent downloads - hopefully this will speed up my imports to my unionfs folders, as I'm only getting around 5MB/s. It's going to take a while for me to know, as I need to wait for old torrents to finish downloading before moving them. I wonder sometimes if my unionfs mount at /mnt/disks rather than /mnt/user/somewhere is the culprit. If the UD doesn't work, I think I'll give /mnt/user/somewhere another crack. I aborted my last attempt because, for some weird reason, sonarr/radarr/plex etc sometimes wouldn't see the mount's files - anyone else come across this? I could see the files via putty/SMB etc, but the dockers would be temperamental.