markrudling

Posts posted by markrudling

  1. 25 minutes ago, DZMM said:

    Post your mount settings from the script

    Pretty much stock

     

    # REQUIRED SETTINGS
    RcloneRemoteName="gdrive" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    LocalFilesShare="/mnt/user/local" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/user/rclone_cache" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="3000G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    DockerStart="netdata Plex-Media-Server tautulli sabnzbd binhex-prowlarr radarr radarr4k sonarr overseerr" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
    MountFolders=\{"Movies,Movies 4k,Series"\} # comma separated list of folders to create within the mount

     

  2. Hi

     

    I frequently get failed mount attempts on first start; I have to keep re-running the script, and then it will catch and run flawlessly.

     

    Reboots don't happen often, but I'm keen to get it running on the first attempt.

     

    2022/01/26 12:18:28 DEBUG : 4 go routines active
    26.01.2022 12:18:28 INFO: *** Creating mount for remote gdrive
    26.01.2022 12:18:28 INFO: sleeping for 20 seconds
    2022/01/26 12:18:28 NOTICE: Serving remote control on http://localhost:5572/
    26.01.2022 12:18:48 INFO: continuing...
    26.01.2022 12:18:48 CRITICAL: gdrive mount failed - please check for problems.  Stopping dockers

     

    Anyone solved this yet?
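
    In the meantime, a sketch of the workaround I'm considering (it assumes the stock script's mountcheck file and my mount path; adjust for yours): poll for the mount a few times instead of relying on the single 20-second sleep.

        # poll for the rclone mount instead of one fixed sleep
        # (assumes the stock script's mountcheck file and my mount path)
        for attempt in {1..6}; do
            if [[ -f "/mnt/user/mount_rclone/gdrive/mountcheck" ]]; then
                echo "INFO: mount verified on attempt $attempt"
                break
            fi
            echo "INFO: mount not ready, retrying in 10s"
            sleep 10
        done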
     

  3. I have been running with no buffer and no read-ahead. Playback is still pretty much instant on all files, including 4k. Skipping backward is a dream, skipping forward is, well, good enough.

     

    The read-ahead when using Plex is kind of mitigated anyway if it's set fairly small, as Plex does its own magic with transcoding, buffering, etc.

     

    I'm quite happy like this. Fewer disk reads/writes, stable, fast. Me like :)
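
    For reference, the relevant part of my mount line looks roughly like this (a sketch, with my own remote name and path filled in):

        # mount with no RAM buffer and no read-ahead (sketch)
        rclone mount \
            --buffer-size 0 \
            --vfs-read-ahead 0 \
            --vfs-cache-mode full \
            gdrive: /mnt/user/mount_rclone/gdrive &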

  4. 22 minutes ago, DZMM said:

    I think I'm going to try raising my --buffer-size to 256MB or 512MB, rather than using --vfs-read-ahead as I'm nervous about multiple disk writes wiping out any potential benefit.

    --buffer-size will be how much of the file is stored in RAM?

    And the cache will still write whatever was buffered in memory out to disk?

     

    I agree, though, that read-ahead can evict items from the cache prematurely, unless you have the space for a large cache.

     

    I think where I'm landing is that the cache is great for rewinds, with a bonus for content that is watched more than once. The buffer will help a bit with forward skips, but only a few seconds, and waiting 2-5 seconds for a long skip is more than acceptable. :)

    I think the right value for --vfs-read-ahead will vary largely based on your use case, as well as your hardware and internet connection.

     

    In my testing, it does not add any extra load time to the start of videos. I'm in SA, and it takes between 2-5 seconds to start most videos: faster than spinning up a sleeping local drive.

     

    I have a 1 Gb internet connection, but single-threaded downloads from Google Drive are limited to about 150-220 megabits. So even if you set the read-ahead value very high, it will still take a few minutes to complete the file and have it in the cache; rough numbers below. Most users will skip ahead during the first few moments of loading a new video. How many times will you skip when you are 15 minutes into your content?
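
    To put numbers on that (assuming roughly 200 megabits of single-stream throughput, i.e. about 25 MB/s):

        10 GB of read-ahead ≈ 10,000 MB ÷ 25 MB/s ≈ 400 s ≈ 7 minutes

    So a large read-ahead finishes well after the opening moments where most skipping actually happens.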

     

    If download speeds from Google maxed out my line, I would be inclined to read ahead 10-15 GB and have 99% of my content pre-downloaded. At full speed it would not take long, and the forward-skipping buffer issue would be mitigated.

     

    I will leave it at my current setting (1 GB), because the cache is going to manage itself fairly well and I have fast enough internet to support many streams at the same time. HDD speed will become an issue if there are a lot of reads and writes to the single cache disk, though, so perhaps turning it off completely is best (again, it depends on your connection).

     

    I think the cache as it is, though, is fantastic for the more likely use case. Most users skip back a few seconds when they miss something; the cache shines here, and backward skips are instant. I'm very happy with the setup at the moment!

  6. 18 hours ago, DZMM said:

    Can I get some help testing please.  V1.5.3 of rclone (remember you have to remove and reinstall the plugin to update it) now supports better caching where files can be cached locally.  I'll add a variable in for setting the cache location once it's all working, but for now can a few people try these settings in the mount script:

     

    
    # create rclone mount
    	rclone mount \
    	--allow-other \
    	--dir-cache-time 720h \
    	--log-level INFO \
    	--poll-interval 15s \
    	--cache-dir=/mnt/user/downloads/rclone/tdrive_vfs/cache \
    	--vfs-cache-mode full \
    	--vfs-cache-max-size 500G \
    	--vfs-cache-max-age 336h \
    	--bind=$RCloneMountIP \
    	$RcloneRemoteName: $RcloneMountLocation &

    set the cache-dir to wherever is convenient.   The settings above will keep up to 500GB of files downloaded from gdrive for up to 2 weeks, with the oldest removed first when full.  I think this will work well with my kids who keep stopping and starting the same file, or when plex is indexing or doing other operations.  However, I don't think it will help majorly with playback for my setup, unless a user tries to open the same file within a few hours.  Dunno.

     

    There's another new setting --vfs-read-ahead that could potentially help with forward skipping/smoother playback by downloading more data ahead of the current stream position, that we can play with as well.

     

    Edit: poll-interval shortens the default 1m, so should hopefully add a bit more butter to updates.

     

    Edit 2: Initial launch times are much faster even before the cache kicks in!!

    I am testing this now and it seems to work really well.

    I also have a few users with kids that repeat the same shows all the time.

     

    Does it do a full read-ahead of the file into the cache, or does it only cache the content downloaded on request? I.e., if a user watches the first 5 minutes of a movie, will only those 5 minutes be cached, or will the internet connection go wild and pull the full file? Hopefully the former.

     

    I had wanted to implement a smart read-ahead process that would monitor the last episode watched, pre-fetch the next one and store it locally, but this makes managing that file afterwards so much easier. I can pre-read the next file (see the sketch at the end of this post) and it will then be stored in the cache, be read from there as required, or be removed if not consumed in x days.

     

    Anything in particular you want me to test? It seems to be working out of the box.
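
    For anyone wanting to try the pre-read idea, any sequential read through the mount should pull the file into the VFS cache. A sketch with a made-up path:

        # warm the VFS cache with the next episode (path is hypothetical)
        nohup cat "/mnt/user/mount_mergerfs/gdrive/Series/Show/S01E02.mkv" > /dev/null 2>&1 &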

  7. @watchmeexplode5

     

    Thanks for taking the time to read my ramble. 

     

    Looks like two things are at play. 1. My Ethernet card is broken: very intermittent behaviour, slow then fast then slow. Really frustrating to deal with, because it shows no real signs of fault and no errors, just really bad behaviour. Using a USB3 gigabit adaptor, things are a LOT better.

     

    2. Windows, Plex and remote shares do not seem to play well together. I think Plex asks for the first few MB of the file, but Windows tries to be smart and asks for more. Or perhaps, because rclone is slower, Plex/Windows asks for more. Not sure.

     

    Anyway, I reinstalled Ubuntu on the second machine, and with the new Ethernet adaptor things are running well. The scan is not fast, but it's acceptable: 3,000 movies take about 6 hours.

     

    When looking at network activity on Unraid, I see many small bursts during the scan; I assume it's fetching a few MB of each file to scan it.

     

    What I'm confused about is the mount settings and rclone. Does the buffer or chunk size determine how much of the file to fetch? I.e., if Plex asks for the first 10 MB but the mount is configured for 256 MB, will all 256 MB come through before rclone delivers the 10 MB to Plex? When a Plex scan runs locally there is very little network activity, so I'm assuming it only pulls a small portion of each file.

     

    The reason I ask is to try to optimise the scanning process. I have 1 Gb internet, so 256 MB comes through pretty fast, but 32 MB or so may be a lot better. Changing the values in the mount script doesn't really make any difference to scan speeds so far; the sketch below is what I plan to experiment with next.
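
    A sketch of the chunk-related flags (my values are guesses for a 1 Gb line, and the defaults noted in the comments are from memory, so treat them as assumptions):

        # --vfs-read-chunk-size: size of the first ranged read (default 128M)
        # --vfs-read-chunk-size-limit: chunks double in size up to this cap (default off)
        rclone mount \
            --vfs-read-chunk-size 32M \
            --vfs-read-chunk-size-limit 2G \
            --buffer-size 32M \
            gdrive: /mnt/user/mount_rclone/gdrive &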

     

    Anyway, thanks for reading.

  8. Hi everyone.

     

    I am looking for some assistance. I get very slow scan speeds in Plex when it runs via SMB on another computer; Plex running in Docker on my Unraid machine is acceptable.

     

    I have quite a few users with slow connections, so I have a second i7 machine running Windows 10 and Plex that I send them to, leaving the Unraid server with some headroom to do everything else it does.

     

    The Windows PC has a read-only mapped network drive pointing at the gdrive folder in the mount_mergerfs share on Unraid. Browsing this share can be slow, though sometimes it's fairly fast. Copying from this share can be fast, but it is very intermittent; most of the time I get my full 200 Mb copy speed though, so this is acceptable.

     

    When running the scan, network activity on the Windows PC is as expected, fairly low. However, network activity on Unraid and my router goes nuts. What seems to be happening is that Plex on the Windows PC is scanning the directory and asking for just a bit of each file, while rclone/Unraid attempts to serve much more of the file, meaning each file takes a long time to scan.

     

    I have tested the Windows PC with RaiDrive mounting the drive instead, and scans through that are VERY fast; only 1-3 Mb of my line is used.

     

    I think Windows and Unraid are not playing well in this configuration.

     

    Can anyone offer some settings or advice? My mount settings are stock.


  9. 10 minutes ago, nuhll said:

    I've checked both, and in Radarr it was off. In Sonarr I can't find this setting.

     

    Where/what's the best way to enable verbose logging? It feels like it's downloading something 24/7...

    In Sonarr, under Media Management with Advanced Settings shown, it's "Change File Date" under File Management.

     

    In your rclone mount script, change --log-level INFO to --log-level DEBUG.
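
    A sketch of the relevant part of the mount line (the log path is just an example; put it wherever is convenient):

        # mount with debug logging written to a file (sketch)
        rclone mount \
            --log-level DEBUG \
            --log-file /mnt/user/appdata/other/rclone/rclone.log \
            gdrive: /mnt/user/mount_rclone/gdrive &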

  10. 25 minutes ago, nuhll said:

    So everything works perfectly; I'm just not sure if there is a problem, because I am seeing high download traffic when Plex, Radarr, Sonarr and NZBGet are doing nothing.

     

    Is there any way to check what rclone is doing? (I've looked via SSH and it's definitely coming from rclone.)

    Radarr or Sonarr may be updating the modified date on the file. To do this, the entire file is pulled locally, the time is updated, and then the file is uploaded again. I turned this functionality off in Radarr/Sonarr.

     

    To answer your question, though: enable verbose logging in rclone.

  11. 3 hours ago, brasi said:

    Anyone experiencing playback issues with the Nvidia Shield Plex client with this setup? I keep getting "Your connection to the server is not fast enough to stream this video".

     

     

    I get this too. It's nothing to do with rclone and this setup, as local files do the same. I'm wired with a gigabit connection. There seems to be quite a bit on the forums about the Shield, Plex and this issue. If I watch my network usage, I see a good spike while it buffers to the Shield, with bursts every few seconds; then all of a sudden the bursts fall away and you get a low throughput rate until you get the message and playback stops. The same video will work sometimes, other times not. I can go weeks with it being OK, then I get it a few times a day. Search the Nvidia and Plex forums; it's well documented with no consistent fix.

  12. 9 minutes ago, DZMM said:

    If you are renaming files you've already uploaded, unfortunately this is what unionfs does.  Hopefully rclone union in a future release will allow unraid users to ditch unionfs

    OK, great. I can live with that, knowing it's not due to a setup issue on my side. I will create clones of my dockers and have them point at the rclone mount to perform cleanup of my now-uploaded media.

     

    Thank you again, looking forward to seeing how rclone will manage unions.

     

    Cheers

    Mark

  13. Firstly, massive thank you for this guide. I migrated from rclone cache to this today and streaming is far better. 👏

     

    I'm having one issue. In Sonarr/Radarr I have the renamer section set to update the file time to the airdate of the media. This is fine for new files, but when doing a scan/library update it seems to pull the entire file into my rclone_upload folder, set the time, and then wait for it to be uploaded again by the mover script.

     

    Has anyone else seen this behavior?

  14. Hi there

     

    I'm having issues getting this to work with Google Drive (G Suite). I have sorted out the permission issues on the mount.

     

    If I ls via Docker from the terminal,

    docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" ls gdrive:

     

    I can see my files on Google Drive, but when I access the mount it is empty. I also cannot add files to the mount.

     

    root@InternetBackup:~# docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" ls gdrive:
      6035089 Photos/20170614_095401.jpg
            0 Media/Series/series.txt
            0 Media/Movies/movies.txt

     

    In the Docker log I get the following:

    [services.d] starting services
    Executing => rclone mount --config=/config/.rclone.conf --allow-other --allow-non-empty gdrive: /data
    [services.d] done.
    2018/10/26 22:41:59 ERROR : Error traversing cache "/root/.cache/rclone/vfs/gdrive": lstat /root/.cache/rclone/vfs/gdrive: permission denied
    2018/10/26 22:42:59 ERROR : Error traversing cache "/root/.cache/rclone/vfs/gdrive": lstat /root/.cache/rclone/vfs/gdrive: permission denied
    2018/10/26 22:43:59 ERROR : Error traversing cache "/root/.cache/rclone/vfs/gdrive": lstat /root/.cache/rclone/vfs/gdrive: permission denied
    2018/10/26 22:44:59 ERROR : Error traversing cache "/root/.cache/rclone/vfs/gdrive": lstat /root/.cache/rclone/vfs/gdrive: permission denied

    Any help would be greatly appreciated. 
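
    One thing I will try in the meantime (an assumption on my part: /config is the container's writable mapped volume, so the cache may be happier there than under /root/.cache):

        # sketch: point rclone's cache at the writable /config volume
        rclone mount --config=/config/.rclone.conf \
            --allow-other \
            --allow-non-empty \
            --cache-dir=/config/cache \
            gdrive: /data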

  15. Hi guys

     

    I am the proud owner of a brand new HP N36L MicroServer with 3x3 TB WD drives. Yay!

     

    From initial reading, I found that you can run VMware Server with unRAID, so I can put the server to good use. This is where I get mixed up...

     

    I'm new to Linux, so there is a lot to learn. What I would like to know for now is: can unRAID 5 play ball with ESXi 5?

     

    Here is what I would like from the setup: unRAID running the 3x3 TB drives (more to come), booting from the USB stick. VMware also running from the USB within unRAID, as a plug-in I guess you would call it. My VM disks would be saved on the 250 GB drive and would not be part of the array at all.

     

    I have done quite a bit of reading and can follow instructions without too much difficulty, but I need a bit of assistance with putting it all together: where do I start, what do I need, and can I run the latest beta with the latest version of VMware Server?

     

    Looking forward to the support. Thanks, guys.

    Mark