markrudling

Members · 18 posts

  1. I have been running with no buffer and no read-ahead. Playback is still pretty much instant on all files, including 4K. Skipping backward is a dream; skipping forward is, well, good enough. The read-ahead when using Plex is kinda mitigated if it's set fairly small, as Plex does its own magic with transcoding and buffering. I'm quite happy like this: fewer disk reads/writes, stable, fast. Me like.
  2. --buffer-size will be how much of the file is stored in RAM? And the cache will still write whatever was buffered in memory onto disk? I agree, though, about read-ahead removing items from the cache prematurely, unless you have the space for a large cache. I think where I'm landing is that the cache is great for rewinds, with a bonus for content that is watched more than once. The buffer will help a bit with forward skips, but only a few seconds, and 2-5 seconds to wait for a long skip is more than acceptable.
  3. I think the value of --vfs-read-ahead will vary largely based on your use case as well as your hardware and internet connection. In my testing it does not add any extra load time to the start of videos. I'm in SA and it takes between 2-5 seconds to start most videos, faster than spinning up a sleeping local drive.

     I have a 1 Gbit internet connection, but single-threaded downloads from Google Drive are limited to about 150-220 Mbit. So even if you set the read-ahead value very high, it will still take a few minutes to complete the file and have it in cache. Most users will skip ahead during the first few moments of a new video; how many times will you skip when you are 15 minutes into your content? If downloads from Google maxed out my line, I would be inclined to pre-read 10-15 GB and have 99% of my content pre-downloaded. At full speed it would not take long, and the forward-skipping buffer issue would be mitigated.

     I will leave it at my current setting (1 GB), because the cache is going to manage itself fairly well, and I have fast enough internet to support many streams at the same time. HDD speed will become an issue if there are a lot of reads and writes to a single cache disk, though, so perhaps turning it off completely is best (again, depends on your connection). The cache as it is, though, is fantastic for the more likely use case: most users skip back a few seconds when they miss something, and those backward skips are instant from the cache. I'm very happy with the setup at the moment!
  4. I am testing this now and it seems to work really well. I also have a few users with kids that repeat the same shows all the time. Does it do a full read-ahead of the file to cache, or does it only cache the content downloaded per request? I.e., if a user watches the first 5 minutes of a movie, will only those 5 minutes be cached, or will the internet connection go wild and pull the full file? Hopefully the former. I had wanted to implement a smart read-ahead cache process that would monitor the last episode watched, pre-fetch the next episode and store it locally, but this makes managing that file afterwards so much easier: I can pre-"read" the next file and it will then be stored in the cache, read from there as required, or removed if not consumed in x days. Anything in particular you want me to test? It seems to be working out of the box.
  5. @watchmeexplode5 Thanks for taking the time to read my ramble. Looks like two things are at play:

     1. My Ethernet card is broken. Very intermittent behaviour, slow then fast then slow. Really frustrating to deal with because it shows no real signs of fault, no errors, just really bad behaviour. Using a USB3 gigabit adaptor, things are a LOT better.

     2. Windows, Plex and remote shares do not seem to play well together. I think Plex asks for the first few MB of the file, but Windows tries to be smart and asks for more. Perhaps because rclone is slower, Plex/Windows asks for more. Not sure.

     Anyway, I reinstalled Ubuntu on the second machine and, with the new Ethernet adaptor, things are running well. The scan is not fast, but it's acceptable: 3k movies takes about 6 hours. Looking at network activity on Unraid, I see many small bursts during the scan; I assume it's fetching a few MB of each file to scan.

     What I'm confused about is the mount settings and rclone. Does the buffer or chunk size determine how much of the file to bring? I.e., if Plex asks for the first 10 MB but rclone is configured for 256 MB, will all 256 MB come through before rclone delivers the 10 MB to Plex? When Plex scans locally there is very little network activity, so I assume it only fetches a smaller portion of the file. The reason I ask is to try to optimise the scanning process. I have 1 Gbit internet, so 256 MB comes through pretty fast, but 32 MB or so may be a lot better. Changing any of the values in the mount script doesn't really make any difference to scan speeds. Anyway, thanks for reading.
  6. Hi everyone. I am looking for some assistance: I have very slow scan speeds in Plex over SMB on another computer. Plex running in Docker on my Unraid machine is acceptable. I have quite a few users with slow connections, so I have a second i7 machine running Windows 10 and Plex that I send them to, leaving the Unraid server with some headroom to do everything else it does. The Windows PC has a read-only mapped network drive to the gdrive folder in the mount_mergerfs share from Unraid. Browsing this share can be slow; sometimes it's fairly fast. Copying from this share can be fast, but it is very intermittent. Most of the time I get the full 200 Mbit copy speed, though, so this is acceptable. When running the scan, network activity on the Windows PC is as expected, fairly low. However, network activity on Unraid and my router goes nuts. What seems to be happening is that Plex on the Windows PC scans the directory, asking for just a bit of each file, and rclone/Unraid attempts to serve much more of the file, meaning each file takes a long time to scan. I have tested the Windows PC with RaiDrive and a mounted drive, and scans through there are VERY fast, using only 1-3 Mbit of my line. I think Windows and Unraid are not playing well in this configuration. Can anyone offer some settings or advice? My mount settings are stock.
  7. In Sonarr, under Media Management with Advanced Settings shown, it's "Change File Date" under File Management. In your rclone mount script, change --log-level INFO to --log-level DEBUG.
  8. Radarr or Sonarr may be updating the modified date on the file. To do this, the entire file is pulled locally, the time is updated, and then the file is uploaded again. I turned this functionality off in Radarr/Sonarr. To answer your question, though: enable verbose logging in rclone.
  9. I get this too. It's nothing to do with rclone or the setup, as local files do the same, and I'm wired with a gigabit connection. There seems to be quite a bit on the forums about the Shield and Plex and this issue. If I watch my network usage, I see good spikes while it buffers to the Shield, with bursts every few seconds; then all of a sudden the bursts fall away and you get a low throughput rate until you get the message and playback stops. The same video will work sometimes, other times not. I can go weeks with it being OK, then get it a few times a day. Search the Nvidia and Plex forums; it's well documented with no consistent fix.
  10. OK, great. I can live with that, knowing it's not due to a setup issue on my side. I will create clones of my Dockers and have them point at the rclone mount to perform cleanup of my now-uploaded media. Thank you again; looking forward to seeing how rclone will manage unions. Cheers, Mark
  11. Firstly, a massive thank you for this guide. I migrated from rclone cache to this today and streaming is far better. 👏 I'm having one issue: with Sonarr/Radarr I have the renamer section set to update the file time to the air date of the media. This is fine for new files, but when doing a scan/library update it seems to pull the entire file to my rclone_upload folder, then sets the time and waits for it to upload again with the mover script. Has anyone else seen this behavior?
  12. Hi there, I'm having issues getting this to work with Google Drive (G Suite). I have sorted out the permission issues on the mount. If I ls via Docker from the terminal, I can see my files on Google Drive, but when I access the mount it is empty. I also cannot add files to the mount.

     root@InternetBackup:~# docker exec -it Rclone-mount rclone --config="/config/.rclone.conf" ls gdrive:
     6035089 Photos/20170614_095401.jpg
           0 Media/Series/series.txt
           0 Media/Movies/movies.txt

     In the Docker log I get the following:

     [services.d] starting services
     Executing => rclone mount --config=/config/.rclone.conf --allow-other --allow-non-empty gdrive: /data
     [services.d] done.
     2018/10/26 22:41:59 ERROR : Error traversing cache "/root/.cache/rclone/vfs/gdrive": lstat /root/.cache/rclone/vfs/gdrive: permission denied

     with the same ERROR line repeating every minute thereafter. Any help would be greatly appreciated.
  13. Hello good people. Is it possible to run Leia headless on this Docker? Thanks, Mark
  14. BUMP. I would like to get this working. I have ezcacit and would love to monitor the disk usage, transfer rates, network, CPU, etc. Thanks
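
The buffer, read-ahead and chunk-size flags discussed in posts 1-3 and 5 above all live on the rclone mount command line. A minimal sketch follows; the remote name, mount point and sizes are placeholders, not the guide's actual settings, so tune them to your own connection and disk:

```shell
# Hypothetical mount illustrating the flags discussed above.
#   --buffer-size:          per-open-file RAM buffer (helps small forward skips)
#   --vfs-cache-mode full:  cache read data on disk (makes rewinds instant;
#                           required for --vfs-read-ahead to take effect)
#   --vfs-read-ahead:       extra data pulled into the disk cache beyond the buffer
#   --vfs-read-chunk-size:  initial range-request size; smaller values can cut
#                           wasted transfer during Plex scans
rclone mount gdrive: /mnt/user/mount_rclone/gdrive \
  --allow-other \
  --buffer-size 256M \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --vfs-cache-max-age 168h \
  --vfs-read-ahead 1G \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 2G
```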
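
The "pre-read the next episode" idea from post 4 can be sketched as a plain sequential read through the mount: with --vfs-cache-mode full, whatever is read lands in the local VFS cache, so later playback is served from disk. This is only an illustration, not part of the guide; the function name and any paths you pass it are hypothetical:

```python
def prewarm(path, max_bytes=None, chunk_size=1 << 20):
    """Sequentially read a file (or just its first max_bytes) and discard
    the data, returning the number of bytes read.

    Reading a file on an rclone mount started with --vfs-cache-mode full
    pulls the read ranges into the local VFS cache, so a later playback of
    the same file is served from disk instead of the remote.
    """
    total = 0
    with open(path, "rb") as f:
        while True:
            want = chunk_size
            if max_bytes is not None:
                # Cap the read at the remaining byte budget.
                want = min(want, max_bytes - total)
                if want <= 0:
                    break
            data = f.read(want)
            if not data:  # end of file
                break
            total += len(data)
    return total
```

For example, a wrapper cron job could call `prewarm("/mnt/user/mount_rclone/gdrive/Media/Series/next_episode.mkv")` for whatever "next episode" your own logic selects; the VFS cache's max-age/max-size settings then handle eviction if it is never watched.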
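
Posts 7-8 suggest debug logging to see which process is pulling files down. Assuming a mount script like the ones above, the change is just the logging flags; the log-file path here is a placeholder:

```shell
# Add to the existing rclone mount command (replacing --log-level INFO)
# to see every open/read/write against the mount:
#   --log-level DEBUG \
#   --log-file /mnt/user/appdata/other/rclone/rclone.log
```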
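
The repeated "Error traversing cache ... permission denied" in post 12 indicates rclone inside the container cannot read its default cache location under /root/.cache. One possible workaround, sketched here with a placeholder path, is to point the cache at a volume the container user can write to:

```shell
# Same mount as in the log above, plus --cache-dir aimed at a writable
# location (/config/cache is a placeholder for a mapped volume):
rclone mount --config=/config/.rclone.conf \
  --allow-other \
  --cache-dir /config/cache \
  gdrive: /data
```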