Everything posted by mgutt

  1. Test results: Cache the first 200 MB of all movies in folder "09":

     ls /mnt/user/Movie/09/*/*.mkv | xargs head -c 200000000 > /dev/null

     Benchmark:

     echo "$(time ( head -c 200000000 "/mnt/disk4/Movie/09/12 Monkeys (1995)/12 Monkeys (1995) FSK16 DE EN IMDB8.0.mkv" ) 2>&1 1>/dev/null )"
     real 0m0.063s
     user 0m0.015s
     sys 0m0.047s

     While executing the benchmark, disk4 still sleeps. Starting the movie through Plex... the disk spins up. Does Plex use O_DIRECT, which bypasses any caching? Let's check that. We clean the cache:

     sync; echo 1 > /proc/sys/vm/drop_caches

     RAM stats before starting a movie:

     free -m
                  total   used    free   shared  buff/cache  available
     Mem:         64358   1022   61371      761        1964      61974
     Swap:            0      0       0

     Started a movie in Plex. While the movie is playing, the cache usage rises:

     free -m
                  total   used    free   shared  buff/cache  available
     Mem:         64358   1050   60141      761        3165      61945
     Swap:            0      0       0

     free -m
                  total   used    free   shared  buff/cache  available
     Mem:         64358   1050   59400      761        3907      61944
     Swap:            0      0       0

     free -m
                  total   used    free   shared  buff/cache  available
     Mem:         64358   1051   57769      761        5537      61942
     Swap:            0      0       0

     After stopping it:

     free -m
                  total   used    free   shared  buff/cache  available
     Mem:         64358   1043   59559      761        3755      61951
     Swap:            0      0       0

     Hmmm... looks like it uses the cache. Spun down the disk. Play the movie again from the beginning. Aha, the disk is still sleeping. So Plex should be cacheable. But why didn't it work at first? Ah, I think I forgot something: the external subtitle file. Ok, let's cache all files on the disk:

     ls /mnt/user/Movie/09/*/*.* | xargs head -c 200000000 > /dev/null

     Let's stop the disk and start a movie again. Nope. Still spinning up first. Ok, clean the cache and cache one full movie:

     sync; echo 1 > /proc/sys/vm/drop_caches
     cat "/mnt/disk4/Movie/09/127 Hours (2010)/127 Hours (2010) FSK12 DE EN TR IMDB7.6.mkv" > /dev/null
     cat "/mnt/disk4/Movie/09/127 Hours (2010)/127 Hours (2010) FSK12 DE EN TR IMDB7.6.ger.forced.srt" > /dev/null

     Spin down the disk, start the movie in Plex and... ha, it starts directly and the disk stays sleeping. Ok, maybe we need to cache more of the movie leader? 1 GB... 2 GB... 3 GB... 4 GB... 5 GB... nothing works. Does Plex read something from the end of the file? Let's read 5 GB of the beginning and 5 GB of the end of the file... aha, the movie starts directly. Phew... we are on the right track. Now 100 MB from the beginning and 100 MB from the end:

     head -c 100000000 "/mnt/disk2/Movie/WX/Wall Street (1987)/Wall Street (1987) FSK12 DE EN IMDB7.4.mkv" > /dev/null
     head -c 100000000 "/mnt/disk2/Movie/WX/Wall Street (1987)/Wall Street (1987) FSK12 DE EN IMDB7.4.ger.forced.srt" > /dev/null
     tail -c 100000000 "/mnt/disk2/Movie/WX/Wall Street (1987)/Wall Street (1987) FSK12 DE EN IMDB7.4.mkv" > /dev/null

     Ding ding ding ding! It works. And the disk spins up while the movie plays and... no buffering. Nice! Ok, now let's find out how much is really needed from the end of the file (values are MB from the beginning / MB from the end). 100/10... works. 100/1... works. 100/0.1... does not work. Ok, we need 1 MB of the end of the file. Let's check with a 4K movie. 100/1... works. Yeah, baby! Ok, let's check how much we need to read from the beginning before the buffer runs empty. We still use the 4K movie. 50/1... works. 30/1... works. 10/1... complete fail ^^ 20/1... buffers. 25/1... works. Let's test WiFi... 25/1... buffers. Wait... my phone has only a 65 Mbit/s WiFi link and the movie has 107 Mbit/s. Can't work ^^ Better position... now an 866 Mbit/s connection. 25/1... buffers. 30/1... works. 25/1... buffers. This time I waited 2 minutes after each disk spindown. 30/1... buffers. Right, I need to wait longer until the disk has completely stopped. 40/1... buffers. 50/1... buffers. 60/1... works. Ok, now let's test 4K to 1080p transcoding. 60/1... works. Good. So this would be the way to put our movies into the cache, but it only works if there is enough RAM for all movies:

     find /mnt/user/Movies/ -iname '*.mkv' -print0 | xargs -0 head -c 60000000 > /dev/null
     find /mnt/user/Movies/ -iname '*.mkv' -print0 | xargs -0 tail -c 1000000 > /dev/null
     find /mnt/user/Movies/ -iname '*.srt' -print0 | xargs -0 cat > /dev/null

     At the moment I'm stuck creating a command that sorts and uses head and tail at the same time, so that the cache is filled with the most recent movies. head alone works:

     find /mnt/user/Movies/ -iname '*.mkv' -printf "%T@ %p\n" | sort -n | sed -r 's/^[0-9]+\.[0-9]+ //' | tr '\n' '\0' | xargs -0 head -c 60000000 > /dev/null

     But adding tail does not work. Piping isn't my favorite ^^

     find /mnt/user/Movies/ -iname '*.mkv' -printf "%T@ %p\n" | sort -n | sed -r 's/^[0-9]+\.[0-9]+ //' | tr '\n' '\0' | tee >(xargs -0 head -c 60000000 > /dev/null) >(xargs -0 tail -c 1000000 > /dev/null)
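     Since tee just duplicates the whole stream into both branches, a simpler route may be a single loop that reads each path once and runs head and tail in turn. A minimal sketch, assuming GNU find and sort (oldest first like the head-only pipeline above, so the newest movies are read last and survive longest in the cache; the byte counts are the 60 MB / 1 MB values found above):

     find /mnt/user/Movies/ -iname '*.mkv' -printf '%T@\t%p\0' | sort -zn | \
     while IFS=$'\t' read -r -d '' mtime file; do
         head -c 60000000 "$file" > /dev/null   # cache the movie leader
         tail -c 1000000  "$file" > /dev/null   # cache the file end that Plex reads first
     done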
  2. Only as a reminder for myself:

     Monitor user: Check if it's possible to detect that the user has opened a movie's detail page in the Plex client, so we can spin up the HDD and preload the movie file before the user presses play.

     Cache movie leader on SSD: Maybe we could even use the SSD instead of the RAM as cache. This plugin should help, together with a different vm.swappiness value. The target would be to add the first 100 MB of every movie to the RAM page cache, and Linux hopefully moves it out to the SSD (swap).
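     A minimal sketch of that setup, assuming a swap file on the SSD cache pool (path and size are placeholders; whether the kernel really pushes the cached movie leaders there is exactly what would need testing):

     dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=32768   # 32 GB swap file on the SSD
     chmod 600 /mnt/cache/swapfile
     mkswap /mnt/cache/swapfile
     swapon /mnt/cache/swapfile
     sysctl vm.swappiness=100   # let the kernel swap as aggressively as possible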
  3. Cool idea, but it would only work with the RAM cache and needs testing to find the best size, as Plex fills the client's buffer and we need to bridge up to 14 seconds of HDD spinup. @BRiT It should work. Test:

     # create a random file
     dd if=/dev/urandom iflag=fullblock of=/mnt/disk8/Marc/1GB.bin bs=100M count=10
     # give it time to write the file from the write cache to the HDD
     sleep 90
     # clean the read cache
     sync; echo 1 > /proc/sys/vm/drop_caches
     # wait for the cache cleaning
     sleep 10
     # benchmark the time to read the first 500 MB of the file; this fills the cache
     echo "$(time ( head -c 500000000 /mnt/disk8/Marc/1GB.bin ) 2>&1 1>/dev/null )"
     # wait for all I/O to be processed
     sleep 30
     # additional test, to check if the disk spins up
     mdcmd spindown 8
     # wait for the full spindown (watch the dashboard!)
     sleep 10
     # benchmark the read time again
     echo "$(time ( head -c 500000000 /mnt/disk8/Marc/1GB.bin ) 2>&1 1>/dev/null )"

     First benchmark:
     real 0m1.964s
     user 0m0.077s
     sys 0m0.200s

     Second benchmark:
     real 0m0.139s
     user 0m0.062s
     sys 0m0.077s

     Conclusion: It is possible to cache the first x MB of a file in the RAM. And after the second benchmark the disk is still sleeping. That means if the client starts a movie and its buffer is filled from the cached movie leader, the HDD will spin up while the client is already draining its buffer. As far as I know, the Plex client's buffer has a total size of 75 MB. Let's say our movies have a bitrate of 50 Mbit/s. Multiplied by 15 seconds (the maximum HDD spinup time) that gives 750 Mbit ≈ 94 MB, so roughly 100 MB. This means a sleeping HDD could be a problem if it spins up very slowly. But if it's active and/or fast, this trick should work. I will test that with movies on one drive and compare it with the latency of movies on an uncached drive.
  4. Is there a bug in the recent sysstat package? iostat reports traffic although there isn't any (this still worked last month). I uninstalled/reinstalled the package, but it does not help. EDIT: No, it's not related to the version. I tried manually installing V11 and the recent V12 and they all return the same. EDIT2: Hmm... now I'm confused. Does iostat display an outdated value on the first request? Because if I choose an interval, the results of the following refreshes are correct. EDIT3: Ok, sometimes I just need to read the documentation ^^ A really good example of how to use it: https://askubuntu.com/a/669025/227119
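     For the record, this is documented iostat behavior rather than a bug: the first report always shows averages since system boot, and only the subsequent interval reports show current activity. A minimal example:

     # two device reports, one second apart; ignore the first one,
     # which only contains the averages since boot
     iostat -d 1 2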
  5. Correct. Now you should ask yourself: if you were an attacker who wants to earn some Bitcoins, would you leave some dirs in the victim's cloud untouched, especially if one contains "backup" in its name? ^^ This backup dir is a normal subdir in the Google Drive. There is nothing special about it. Google Drive does not know that rclone created it for backup purposes.
  6. Didn't you read my post? The ransomware would have full access to your drive, so it would overwrite all the changed files, too. Regarding your question, use "date": https://forum.rclone.org/t/rclone-copy-sync-with-backup-dir-setup-question/7832/2
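     A hedged example of that approach (remote and folder names are only placeholders):

     # replaced or deleted files are moved into a dated archive folder
     # instead of being lost; 'gdrive:' is an example remote
     rclone sync /mnt/user/Backup gdrive:data --backup-dir "gdrive:archive/$(date +%Y-%m-%d)"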
  7. Also a nice idea. Do you move it manually into a cache-only share, or how do you do this? Why not use rsync? By default it does not delete files from the target, so it will only add the new movies. You would only need to exclude files that are being transcoded at the moment (older than x minutes), as sketched below.
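     A sketch of how that could look, assuming typical share paths; rsync itself cannot filter by file age, so find preselects only files untouched for at least 10 minutes:

     # copy only files that were not modified in the last 10 minutes; rsync
     # does not delete anything from the target by default (paths are examples)
     cd /mnt/cache/Movies && find . -type f -mmin +10 -print0 | \
         rsync -av --files-from=- --from0 . /mnt/user/Movies/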
  8. No. As my Plex config folder is located on the NVMe and I use the direct-access tweak (/mnt/cache instead of /mnt/user) for both the config folder AND the docker.img, everything feels really fast. The next step would be locking the covers into the RAM through vmtouch, but as the server has plenty of free RAM, they should already be cached, thanks to Linux.
  9. It seems you don't know it yet: https://amp.reddit.com/r/DataHoarder/comments/j61wcg/g_suite_becomes_google_workspace_12month/
  10. Many websites claim that Google Drive is safe against ransomware, as a deleted file can be restored through the web GUI, and if a file became encrypted, you can restore the original because Google saves 100 versions of each file. But I'm not sure. What happens if the attacker replaces the file with a 1 kB file and re-uploads it 101 times? The Google FAQ only uses the word "may", not "can recover files". This means if your server has been attacked, it has an authenticated connection to your Google Drive, and by that the attacker is able to overwrite all the data 101 times and the files are gone (feel free to contradict me).

     As it is not possible to mount only a specific subfolder, the only ransomware-safe solution would be to share the data with a second Google Drive account which automatically moves the data out of the shared folder (this could be done by a password-protected VM or an RPi in a different network area). Depending on the size of the files, this could be done with a free account (the upload) and a paid account (the one that moves).

     But maybe it's even better to use Google Drive only as a disaster backup against theft, earthquake, fire, etc. and not rely on ransomware-safety at all. Against ransomware you should guard your Unraid server itself. Suggestions:

     - Use only one client to log into Unraid through your root account; this client should run a different OS than your other clients, and in addition it could be kept without internet access.
     - User shares should be backed up inside the server to a different share which is not accessible to users (rsync).
     - Client backups should be picked up, not uploaded (see the sketch below). As an example: I add a share on my Windows PC for the folder "users"; this share has only read rights and is password protected. Through Unassigned Devices this share is mounted on my Unraid server and rsync picks up the files. By that, Unraid has only read access to the client and the client has no access at all (except to the user share, which is on a different disk).

     But in the end, the only truly ransomware-safe protection is an offline backup.
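     A hedged sketch of that pickup approach (the mount point and share names are only examples and may differ depending on how Unassigned Devices mounts the share):

     # the client's "users" share is mounted read-only via Unassigned Devices;
     # the server pulls the files, the client never touches the backup share
     rsync -av /mnt/remotes/WINPC_users/ /mnt/user/ClientBackups/winpc/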
  11. As the JSON files are overwritten by Docker itself and not by the container, it shouldn't be an issue for the maintainer. I'll try my luck ^^
  12. Yes, you are right. Some people use this state inside the container to check the run status or internet access. That's not how it's meant to be used; the internet-access check, for example, is absolutely crazy, as 1 million users of the container would result in 1 million google.com requests every x seconds ^^ I'm also not sure why Plex is doing this every 5 seconds: https://github.com/plexinc/pms-docker/blob/master/root/healthcheck.sh They connect to http://tower:32400/identity, which returns an XML file, and the content of this file is thrown away. This means they only check whether the internal webserver is running. And this is needed every 5 seconds? Are they crazy? ^^ Besides that: are these JSON files really part of a usual Docker installation, or is this a special Unraid thing? I wonder why only Unraid users are complaining about those permanent writes. Ok... maybe Unraid simply has the most transparent traffic monitoring. If these writes are caused by Docker alone, we should open an issue here. Because only updating a timestamp (or nothing at all) inside a config file does not really sound like it's working as it's meant to.
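     If disabling the check entirely is too drastic, the interval can at least be raised. A hedged example using a standard docker run flag (in Unraid this would go into the container's "Extra Parameters"):

     # run the container's own healthcheck every 15 minutes instead of every 5 seconds
     docker run -d --health-interval=15m plexinc/pms-docker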
  13. @Dmitry Spikhalskiy Do you know why /mnt/user/appdata/zerotier/zerotier-one/networks.d/*.conf is updated every minute? This totally prevents disk spindown / sleep states. I compared the recent file version with one that is 3 minutes old and the content is different. But what is so important that it needs to be updated every minute? Or is this an issue I should post on ZeroTier's GitHub page?
  14. This solved it!!! 🥳 I had this suspicion, too. But renaming healthcheck.sh to healthcheck.delete didn't help, so I gave up back then. Added --no-healthcheck to the Plex container. After that I verified it as follows. Docker -> Plex icon -> Console:

     find /config/*/ -mmin -1 -ls > /config/recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt

     Result: empty, which means no modified files in /mnt/user/appdata/Plex-Media-Server. WebGUI -> web terminal:

     inotifywait -e create,modify,attrib,moved_from,moved_to --timefmt %c --format '%T %_e %w %f' -mr /var/lib/docker > /mnt/user/system/docker/recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt

     Result: empty, which means no writes in docker.img at all!!! What's next? How should @limetech use this information to solve the issue? Add --no-healthcheck to all containers by default? I think something like that is needed, as most people do not know how to monitor their disk traffic. They will only conclude "hey, Unraid does not send this disk to sleep!" or "Unraid produces an awful lot of traffic on my SSD" ^^
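     To verify the flag took effect, docker inspect can show that no healthcheck remains configured (a hedged example; the container name "plex" is an assumption):

     # prints the healthcheck config of the running container; with
     # --no-healthcheck it returns {"Test":["NONE"]}
     docker inspect --format '{{json .Config.Healthcheck}}' plex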
  15. Hmm... are you sure that your system never ran out of RAM? And isn't /dev/shm cleaned up somehow by the OS? Maybe it's better to create your own ramdisk as described in my post. This would be exclusive to Plex alone.
  16. As I said: possibly a race condition. It depends on how you start your script and how long the sync takes before the next execution starts. That's the reason why you need an atomic execution.
  17. Maybe a race condition, which means your script is somehow executed twice and your docker start/stop gets mixed up while rsync is running? Create a lock mechanism to be sure that your script is not executed in parallel. I use this for my CA User Scripts:

     # make the script race-condition safe
     if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then
         exit 1
     fi
     trap 'rmdir "/tmp/${0///}"' EXIT

     But finally you need to think about whether it's a good idea to put everything into the RAM. Isn't it sufficient to put only the database or the covers into the RAM? Depending on the collection size and the settings (like video preview thumbnails), the folder can become bigger than the free RAM. I would add only the database to a ramdisk as follows (which doesn't require changing the path to /dev/shm).

     Initial setup:
     - check if Plex is idle
     - stop the Plex container
     - mv "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases" "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases_backup"
     - mkdir "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
     - mount -t tmpfs -o size=50% tmpfs "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
     - cp -av "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases_backup/." "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
     - start the Plex container

     Now only the ".../Databases" subdir is a ramdisk (which is allowed to use up to 50% of the total RAM).

     Backup:
     - as you're already doing it (idle check, stop container, rsync, etc.); the target would be ".../Databases_backup" as used by the initial setup.

     On Unraid reboot (see the sketch below):
     - check if ".../Databases_backup" exists
     - if not, run the initial setup
     - if yes, stop the Plex container (Plex does not work, as ".../Databases" is empty)
     - delete the content of ".../Databases" (to be sure Plex didn't create something)
     - create the ramdisk with tmpfs as in the initial setup
     - cp -av from ".../Databases_backup"
     - start the Plex container

     But this is still dangerous, as the user could change the path, or Unraid could crash while rsync is running, etc. Maybe vmtouch is the better idea. It allows preloading complete folders into the RAM and is able to lock them there. This means you could preload all the covers while they still exist on the disk. But I'm not sure about file modifications, as with the database. This would need further investigation. I asked for it through the Nerd Pack.
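     A minimal sketch of the reboot logic above, with the long path in a variable and the container name "plex" assumed:

     db="/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Plug-in Support/Databases"
     if [[ -d "${db}_backup" ]]; then
         docker stop plex                        # Plex cannot run with an empty Databases dir
         rm -rf "${db:?}/"*                      # be sure Plex didn't create anything
         mount -t tmpfs -o size=50% tmpfs "$db"  # recreate the ramdisk
         cp -av "${db}_backup/." "$db"           # restore the last backup
         docker start plex
     fi
     # if the backup dir does not exist yet, run the initial setup instead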
  18. I would like to have vmtouch, which allows preloading files into the RAM and locking them there. As an example, you could preload and lock all Plex movie/music covers. I found only this build: https://github.com/lotabout/my-slackbuilds/tree/master/vmtouch
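     A hedged usage example (the metadata path is only an illustration):

     # -t touches every page into the page cache, -l locks the pages in RAM,
     # -d keeps vmtouch running as a daemon so the lock stays active
     vmtouch -t -l -d "/mnt/cache/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Metadata"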
  19. key1 can not be correct. Where did you find that variable? According to the description you can only set two paths: https://hub.docker.com/r/eroz/airvideo

     "/Movies", which already points to your "/mnt/user/bigassnas/plex-data", and "/TVShows", which you didn't set (could be optional).

     Please post the docker command that is generated after you edit the container and it gets started. It should start with "root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d ..."

     While the container is running you can click on its icon -> Console and enter this command:

     ls -l /Movies

     Does it return the contents of your movie folder?
  20. Why did you set port 45633? Isn't the default 45631?
  21. Yes, true, but how do they finance themselves? I mean, look at TMDB. Their site doesn't look as if they were short of capable developers. EDIT: Aha, the money comes from TiVo (source), and they license the data on to other companies. So much for "community". In that case I'd like to be paid for my updates to the database, please. Must have been about 10 entries or so ^^
  22. I had already quoted that ^^ (via the link). That's what I meant by this "backdoor". So the question is whether Plex already pays for it while Emby, for example, does not, because they can't prove how many requests are caused by Emby. Or they charge a flat rate. They certainly won't see any money from Kodi.
  23. By that I meant how TMDB and TVDB feel about Emby and Kodi users causing massive traffic on their APIs without paying anything for it. With Plex it's simple: the entire traffic comes from Plex (though reduced by its proxy cache, which is why only Plex knows how many users Plex has). That way the metadata providers can tell exactly how much usage comes from Plex. With Emby and Kodi they can't. But they could change their terms of use so that calling the API directly is not allowed. Then again, they could already enforce that today simply with a key that every developer has to request in order to use the API.
  24. No, but I know about the re-alignment issue. It reduces the traffic if the SSD was BTRFS-formatted and/or had a "wrong" blocksize/alignment. But unless I missed something in the various beta announcements, it does not prevent the writes themselves, which are the main problem here.
  25. Found the source of the writes (they still exist, even if the SSD has been reformatted to XFS):