Everything posted by mgutt

  1. @Valerio found this out first, but never received an answer. Today I ran into it too, and it has been present since 2019 (or even longer). I would say it's a bug, because:

- it prevents HDD/SSD spindown/sleep (depending on the location of docker.img)
- it wears out the SSD in the long run (if docker.img is located there) - see this bug, too
- it prevents the CPU from reaching its deep sleep states

What happens:

- /var/lib/docker/containers/*/hostconfig.json is updated every 5 seconds with the same content
- /var/lib/docker/containers/*/config.v2.json is updated every 5 seconds with the same content except for some timestamps (which shouldn't be part of a config file, I think)

Which docker containers: verified are Plex (Original) and PiHole, but maybe this is a general behaviour.

As an example, the source of hostconfig.json, which was updated 17,280 times yesterday (once every 5 seconds for a whole day) with the same content:

find /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44 -ls -name hostconfig.json -exec cat {} \;
2678289 4 -rw-r--r-- 1 root root 1725 Oct 8 13:46 /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/hostconfig.json
{
  "Binds": ["/mnt/user/tv:/tv:ro", "/mnt/cache/appdata/Plex-Media-Server:/config:rw", "/mnt/cache/appdata/Plex-Transcode:/transcode:rw", "/mnt/user/movie:/movie:ro"],
  "ContainerIDFile": "",
  "LogConfig": {"Type": "json-file", "Config": {"max-file": "1", "max-size": "50m"}},
  "NetworkMode": "host",
  "PortBindings": {},
  "RestartPolicy": {"Name": "no", "MaximumRetryCount": 0},
  "AutoRemove": false,
  "VolumeDriver": "",
  "VolumesFrom": null,
  "CapAdd": null,
  "CapDrop": null,
  "Capabilities": null,
  "Dns": [],
  "DnsOptions": [],
  "DnsSearch": [],
  "ExtraHosts": null,
  "GroupAdd": null,
  "IpcMode": "private",
  "Cgroup": "",
  "Links": null,
  "OomScoreAdj": 0,
  "PidMode": "",
  "Privileged": false,
  "PublishAllPorts": false,
  "ReadonlyRootfs": false,
  "SecurityOpt": null,
  "UTSMode": "",
  "UsernsMode": "",
  "ShmSize": 67108864,
  "Runtime": "runc",
  "ConsoleSize": [0, 0],
  "Isolation": "",
  "CpuShares": 0,
  "Memory": 0,
  "NanoCpus": 0,
  "CgroupParent": "",
  "BlkioWeight": 0,
  "BlkioWeightDevice": [],
  "BlkioDeviceReadBps": null,
  "BlkioDeviceWriteBps": null,
  "BlkioDeviceReadIOps": null,
  "BlkioDeviceWriteIOps": null,
  "CpuPeriod": 0,
  "CpuQuota": 0,
  "CpuRealtimePeriod": 0,
  "CpuRealtimeRuntime": 0,
  "CpusetCpus": "",
  "CpusetMems": "",
  "Devices": [{"PathOnHost": "/dev/dri", "PathInContainer": "/dev/dri", "CgroupPermissions": "rwm"}],
  "DeviceCgroupRules": null,
  "DeviceRequests": null,
  "KernelMemory": 0,
  "KernelMemoryTCP": 0,
  "MemoryReservation": 0,
  "MemorySwap": 0,
  "MemorySwappiness": null,
  "OomKillDisable": false,
  "PidsLimit": null,
  "Ulimits": null,
  "CpuCount": 0,
  "CpuPercent": 0,
  "IOMaximumIOps": 0,
  "IOMaximumBandwidth": 0,
  "MaskedPaths": ["/proc/asound", "/proc/acpi", "/proc/kcore", "/proc/keys", "/proc/latency_stats", "/proc/timer_list", "/proc/timer_stats", "/proc/sched_debug", "/proc/scsi", "/sys/firmware"],
  "ReadonlyPaths": ["/proc/bus", "/proc/fs", "/proc/irq", "/proc/sys", "/proc/sysrq-trigger"]
}
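If you want to verify this on your own server, here is a minimal sketch (run from the Unraid console as root; the 5-second interval and the use of md5sum are my own choice, not part of the original analysis). It prints the modification time and a content checksum for every hostconfig.json - the checksum stays identical while the mtime keeps moving forward:

# watch mtime and content checksum of all hostconfig.json files every 5 seconds
while true; do
  for f in /var/lib/docker/containers/*/hostconfig.json; do
    echo "$(date +%T)  mtime=$(stat -c %Y "$f")  md5=$(md5sum "$f" | cut -d' ' -f1)  $f"
  done
  sleep 5
done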
  2. Ok, I did not solve the big "JSON" issue, but I did solve the "loopback / DEBUG" one. Through:

nano "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Preferences.xml"

I added the following to the XML:

logDebug="0"

Now the "Plex Media Server.log" isn't updated every 5 seconds; instead it has stayed unchanged for the last 5 minutes. Good.

EDIT: Ok, I found the same setting in the Plex WebGUI, which needs to be disabled. As the "JSON" issue seems to be a bug, I opened an issue.
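For those who prefer to script this change, a minimal sketch using sed instead of nano (assumptions: the container is named Plex-Media-Server as in the example above, the attribute is not already present, and you keep a backup of the file):

docker stop Plex-Media-Server
cp "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Preferences.xml"{,.bak}
# insert logDebug="0" as an attribute of the <Preferences .../> element
sed -i 's/<Preferences /<Preferences logDebug="0" /' "/mnt/user/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Preferences.xml"
docker start Plex-Media-Server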
  3. We monitor the docker.img writes as follows:

inotifywait -mr /var/lib/docker

This returns a huge amount of changed files inside the docker image. So I executed the following command, which I manually interrupted after one minute:

inotifywait --timefmt %c --format '%T %_e %w %f' -mr /var/lib/docker > /mnt/user/system/docker/recent_modified_files_$(date +"%Y%m%d_%H%M%S").txt

Attached is this log file. After viewing it, I would say we can ignore all non-writing events, so I removed them with PSPad and this regex:

.* (OPEN|ACCESS|CLOSE_NOWRITE_CLOSE|OPEN_ISDIR|CLOSE_NOWRITE_CLOSE_ISDIR) .*

which leaves the following:

Thu Oct 8 11:48:00 2020 CREATE /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-config.v2.json453644074
Thu Oct 8 11:48:00 2020 MODIFY /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-config.v2.json453644074
Thu Oct 8 11:48:00 2020 CREATE /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-hostconfig.json420902017
Thu Oct 8 11:48:00 2020 MODIFY /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-hostconfig.json420902017
Thu Oct 8 11:48:00 2020 CLOSE_WRITE_CLOSE /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-hostconfig.json420902017
Thu Oct 8 11:48:00 2020 ATTRIB /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-hostconfig.json420902017
Thu Oct 8 11:48:01 2020 MOVED_FROM /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-hostconfig.json420902017
Thu Oct 8 11:48:01 2020 MOVED_TO /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ hostconfig.json
Thu Oct 8 11:48:01 2020 CLOSE_WRITE_CLOSE /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-config.v2.json453644074
Thu Oct 8 11:48:01 2020 ATTRIB /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-config.v2.json453644074
Thu Oct 8 11:48:01 2020 MOVED_FROM /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-config.v2.json453644074
Thu Oct 8 11:48:01 2020 MOVED_TO /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ config.v2.json
Thu Oct 8 11:48:06 2020 CREATE /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-config.v2.json583460332
Thu Oct 8 11:48:06 2020 MODIFY /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-config.v2.json583460332
Thu Oct 8 11:48:06 2020 CREATE /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-hostconfig.json231622235
Thu Oct 8 11:48:06 2020 MODIFY /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-hostconfig.json231622235
Thu Oct 8 11:48:06 2020 CLOSE_WRITE_CLOSE /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-hostconfig.json231622235
Thu Oct 8 11:48:06 2020 ATTRIB /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-hostconfig.json231622235
Thu Oct 8 11:48:06 2020 MOVED_FROM /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-hostconfig.json231622235
Thu Oct 8 11:48:06 2020 MOVED_TO /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ hostconfig.json
Thu Oct 8 11:48:06 2020 CLOSE_WRITE_CLOSE /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-config.v2.json583460332
Thu Oct 8 11:48:06 2020 ATTRIB /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-config.v2.json583460332
Thu Oct 8 11:48:06 2020 MOVED_FROM /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ .tmp-config.v2.json583460332
Thu Oct 8 11:48:06 2020 MOVED_TO /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/ config.v2.json

... constantly repeating every 5 seconds.

So let's check what's inside this "40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44" container directory:

ls -l /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/
total 32
-rw-r----- 1 root root 7082 Oct 8 10:51 40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44-json.log
drwx------ 1 root root 0 Oct 8 09:27 checkpoints/
-rw------- 1 root root 4746 Oct 8 12:00 config.v2.json
-rw-r--r-- 1 root root 1725 Oct 8 12:00 hostconfig.json
-rw-r--r-- 1 root root 6 Oct 8 10:51 hostname
-rw-r--r-- 1 root root 77 Oct 8 10:51 hosts
drwx------ 1 root root 0 Oct 8 09:27 mounts/
-rw-r--r-- 1 root root 170 Oct 8 10:51 resolv.conf

And what is inside this config.v2.json and hostconfig.json? (Attached is a formatted version.)

cat /var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/*.json
{"StreamConfig":{},"State":{"Running":true,"Paused":false,"Restarting":false,"OOMKilled":false,"RemovalInProgress":false,"Dead":false,"Pid":24422,"ExitCode":0,"Error":"","StartedAt":"2020-10-08T08:51:13.276579046Z","FinishedAt":"2020-10-08T08:47:39.688294924Z","Health":{"Status":"healthy","FailingStreak":0,"Log":[{"Start":"2020-10-08T12:02:42.459650529+02:00","End":"2020-10-08T12:02:42.569505318+02:00","ExitCode":0,"Output":""},{"Start":"2020-10-08T12:02:47.69109973+02:00","End":"2020-10-08T12:02:47.801361267+02:00","ExitCode":0,"Output":""},{"Start":"2020-10-08T12:02:52.904547907+02:00","End":"2020-10-08T12:02:53.012702276+02:00","ExitCode":0,"Output":""},{"Start":"2020-10-08T12:02:58.126848656+02:00","End":"2020-10-08T12:02:58.235984475+02:00","ExitCode":0,"Output":""},{"Start":"2020-10-08T12:03:03.338230958+02:00","End":"2020-10-08T12:03:03.451949041+02:00","ExitCode":0,"Output":""}]}},"ID":"40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44","Created":"2020-10-08T07:27:54.088945211Z","Managed":false,"Path":"/init","Args":[],"Config":{"Hostname":"Black","Domainname":"","User":"","AttachStdin":false,"AttachStdout":false,"AttachStderr":false,"ExposedPorts":{"1900/udp":{},"3005/tcp":{},"32400/tcp":{},"32410/udp":{},"32412/udp":{},"32413/udp":{},"32414/udp":{},"32469/tcp":{},"8324/tcp":{}},"Tty":false,"OpenStdin":false,"StdinOnce":false,"Env":["claim-Bwfnkm7ZKn5-DZMcP9tz=Insert Token from https://plex.tv/claim","PLEX_UID=99","PLEX_GID=100","VERSION=latest","TZ=Europe/Berlin","HOST_OS=Unraid","PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin","TERM=xterm","LANG=C.UTF-8","LC_ALL=C.UTF-8","CHANGE_CONFIG_DIR_OWNERSHIP=true","HOME=/config"],"Cmd":null,"Healthcheck":{"Test":["CMD-SHELL","/healthcheck.sh || exit 
1"],"Interval":5000000000,"Timeout":2000000000,"Retries":20},"Image":"plexinc/pms-docker:latest","Volumes":{"/config":{},"/transcode":{}},"WorkingDir":"","Entrypoint":["/init"],"OnBuild":null,"Labels":{}},"Image":"sha256:e5e4fc9b0413a8a7ca4f1defea1dc07e4a419d1d4313350fbb7c58ced2e01767","NetworkSettings":{"Bridge":"","SandboxID":"0dae3e7d2a79a03eae10a401a686966a9f3398f556930775b6e2af1b15258c41","HairpinMode":false,"LinkLocalIPv6Address":"","LinkLocalIPv6PrefixLen":0,"Networks":{"host":{"IPAMConfig":null,"Links":null,"Aliases":null,"NetworkID":"e57bd2ed83e24be957361c8e471ebf39700dbe54102ccde5a182314532a5781e","EndpointID":"5823d8a0fcd4cc06699b1666baa42cc9b0f37d470d4e21fb95743655c656ea68","Gateway":"","IPAddress":"","IPPrefixLen":0,"IPv6Gateway":"","GlobalIPv6Address":"","GlobalIPv6PrefixLen":0,"MacAddress":"","DriverOpts":null,"IPAMOperational":false}},"Service":null,"Ports":{},"SandboxKey":"/var/run/docker/netns/default","SecondaryIPAddresses":null,"SecondaryIPv6Addresses":null,"IsAnonymousEndpoint":false,"HasSwarmEndpoint":false},"LogPath":"/var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44-json.log","Name":"/Plex-Media-Server","Driver":"btrfs","OS":"linux","MountLabel":"","ProcessLabel":"","RestartCount":0,"HasBeenStartedBefore":true,"HasBeenManuallyStopped":false,"MountPoints":{"/config":{"Source":"/mnt/cache/appdata/Plex-Media-Server","Destination":"/config","RW":true,"Name":"","Driver":"","Type":"bind","Relabel":"rw","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/mnt/cache/appdata/Plex-Media-Server","Target":"/config"},"SkipMountpointCreation":false},"/movie":{"Source":"/mnt/user/movie","Destination":"/movie","RW":false,"Name":"","Driver":"","Type":"bind","Relabel":"ro","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/mnt/user/movie","Target":"/movie","ReadOnly":true},"SkipMountpointCreation":false},"/transcode":{"Source":"/mnt/cache/appdata/Plex-Transcode","Destination":"/transcode","RW":true,"Name":"","Driver":"","Type":"bind","Relabel":"rw","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/mnt/cache/appdata/Plex-Transcode","Target":"/transcode"},"SkipMountpointCreation":false},"/tv":{"Source":"/mnt/user/tv","Destination":"/tv","RW":false,"Name":"","Driver":"","Type":"bind","Relabel":"ro","Propagation":"rprivate","Spec":{"Type":"bind","Source":"/mnt/user/tv","Target":"/tv","ReadOnly":true},"SkipMountpointCreation":false}},"SecretReferences":null,"ConfigReferences":null,"AppArmorProfile":"","HostnamePath":"/var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/hostname","HostsPath":"/var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/hosts","ShmPath":"","ResolvConfPath":"/var/lib/docker/containers/40b4197fdea122178139e9571ae5f4040a2ef69449acf14e616010c7e293bb44/resolv.conf","SeccompProfile":"","NoNewPrivileges":false} 
{"Binds":["/mnt/user/tv:/tv:ro","/mnt/cache/appdata/Plex-Media-Server:/config:rw","/mnt/cache/appdata/Plex-Transcode:/transcode:rw","/mnt/user/movie:/movie:ro"],"ContainerIDFile":"","LogConfig":{"Type":"json-file","Config":{"max-file":"1","max-size":"50m"}},"NetworkMode":"host","PortBindings":{},"RestartPolicy":{"Name":"no","MaximumRetryCount":0},"AutoRemove":false,"VolumeDriver":"","VolumesFrom":null,"CapAdd":null,"CapDrop":null,"Capabilities":null,"Dns":[],"DnsOptions":[],"DnsSearch":[],"ExtraHosts":null,"GroupAdd":null,"IpcMode":"private","Cgroup":"","Links":null,"OomScoreAdj":0,"PidMode":"","Privileged":false,"PublishAllPorts":false,"ReadonlyRootfs":false,"SecurityOpt":null,"UTSMode":"","UsernsMode":"","ShmSize":67108864,"Runtime":"runc","ConsoleSize":[0,0],"Isolation":"","CpuShares":0,"Memory":0,"NanoCpus":0,"CgroupParent":"","BlkioWeight":0,"BlkioWeightDevice":[],"BlkioDeviceReadBps":null,"BlkioDeviceWriteBps":null,"BlkioDeviceReadIOps":null,"BlkioDeviceWriteIOps":null,"CpuPeriod":0,"CpuQuota":0,"CpuRealtimePeriod":0,"CpuRealtimeRuntime":0,"CpusetCpus":"","CpusetMems":"","Devices":[{"PathOnHost":"/dev/dri","PathInContainer":"/dev/dri","CgroupPermissions":"rwm"}],"DeviceCgroupRules":null,"DeviceRequests":null,"KernelMemory":0,"KernelMemoryTCP":0,"MemoryReservation":0,"MemorySwap":0,"MemorySwappiness":null,"OomKillDisable":false,"PidsLimit":null,"Ulimits":null,"CpuCount":0,"CpuPercent":0,"IOMaximumIOps":0,"IOMaximumBandwidth":0,"MaskedPaths":["/proc/asound","/proc/acpi","/proc/kcore","/proc/keys","/proc/latency_stats","/proc/timer_list","/proc/timer_stats","/proc/sched_debug","/proc/scsi","/sys/firmware"],"ReadonlyPaths":["/proc/bus","/proc/fs","/proc/irq","/proc/sys","/proc/sysrq-trigger"]} Ok, this is obviously a plex configuration file. But why is it updated every 5 seconds 🤨 I'll do some research about these files. Maybe I found something out. EDIT: @Valerio found this out, too. And he confirmed, that this setting is also set of the pihole docker. This means, it's not a bug of Plex, instead its a bug of Unraid?! recent_modified_files_20201008_114759.zip cat_mnt_disks_docker_containers_json.zip
  4. Since 2019 there seems to be a bug which causes constant writes:

https://forums.plex.tv/t/pms-docker-unraid-is-constantly-writing-to-its-docker-home-library/419895
https://www.reddit.com/r/unRAID/comments/gw4k6x/plex_help_it_keeps_writing/
https://www.reddit.com/r/unRAID/comments/beaavt/plex_docker_is_constantly_writing_to_my_cache_ssd/
https://www.reddit.com/r/PleX/comments/bece3f/official_plex_docker_on_unraid_server_is/

Some people suggest changing the SSD cache filesystem from BTRFS to XFS, which seems to be related to an Unraid bug. But in the end this only reduces the write size; the writes still happen. This is Plex on my backup server with a BTRFS cache drive: And this is my main server with an XFS cache drive:

To investigate this problem I went to Docker -> Plex Icon -> Console and entered this command on both servers to obtain all files that were changed in the last minute:

find / -mmin -1 -ls > /config/recent_modified_files$(date +"%Y%m%d_%H%M%S").txt

Attached is the result. I searched for /config/ and found this changed file on the backup server:

996794 172 -rw-r--r-- 1 plex users 176120 Oct 8 10:12 /config/Library/Application\ Support/Plex\ Media\ Server/Logs/Plex\ Media\ Server.log

I checked the log file first. These changes were added in the last minute:

Oct 08, 2020 10:11:46.176 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45102 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:11:46.176 [0x154d45512700] DEBUG - Completed: [127.0.0.1:45102] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:11:51.389 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45104 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:11:51.389 [0x154d45512700] DEBUG - Completed: [127.0.0.1:45104] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:11:56.622 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45106 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:11:56.623 [0x154d45512700] DEBUG - Completed: [127.0.0.1:45106] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:12:01.835 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45108 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:12:01.836 [0x154d45512700] DEBUG - Completed: [127.0.0.1:45108] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:12:07.070 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45110 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:12:07.071 [0x154d45713700] DEBUG - Completed: [127.0.0.1:45110] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:12:12.293 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45112 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:12:12.293 [0x154d45713700] DEBUG - Completed: [127.0.0.1:45112] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:12:17.502 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45114 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:12:17.502 [0x154d45512700] DEBUG - Completed: [127.0.0.1:45114] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:12:22.703 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45116 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:12:22.703 [0x154d45713700] DEBUG - Completed: [127.0.0.1:45116] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:12:27.915 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45118 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:12:27.915 [0x154d45512700] DEBUG - Completed: [127.0.0.1:45118] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:12:33.130 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45120 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:12:33.130 [0x154d45713700] DEBUG - Completed: [127.0.0.1:45120] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:12:38.331 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45122 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:12:38.331 [0x154d45713700] DEBUG - Completed: [127.0.0.1:45122] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)
Oct 08, 2020 10:12:43.548 [0x154d2f1f8700] DEBUG - Request: [127.0.0.1:45124 (Loopback)] GET /identity (2 live) Signed-in
Oct 08, 2020 10:12:43.548 [0x154d45713700] DEBUG - Completed: [127.0.0.1:45124] 200 GET /identity (2 live) 0ms 398 bytes (pipelined: 1)

Does anyone know why those "loopback" requests are happening? I also wonder about those "DEBUG" entries, as my Plex debug level is disabled:

But those small log entries can't be the main problem. As we can read in this thread, the main problem is the writes to the docker.img file. I'm not sure if this is really a bug or a misconfigured container. For example, I made a mistake in the past and set the wrong transcoding path, so Plex wrote all temporary transcoding files into the docker.img itself. Maybe we only need to add a path so the docker.img is left untouched?!

To find out, we need to investigate the files inside the docker.img, but first I want to test whether it helps to change the Docker vDisk location path from:

/mnt/user/system/docker/docker.img

to:

/mnt/cache/system/docker/docker.img

By that we bypass Unraid's SHFS overhead and enable direct disk access (use with caution, read this tweak).

EDIT: Nope. It didn't help to reduce the traffic. But I will keep this tweak nevertheless, as it reduces CPU load. So now, let's check the docker.img. This follows in the next post.

recent_modified_files20201008_101243.zip
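To put a number on it over a longer period, a minimal sketch that repeats the find command above once per minute from the container console and only logs a counter (it writes to /tmp inside the container so the counting itself does not add writes to /config; the file name is just an example):

while true; do
  count=$(find / -mmin -1 -type f 2>/dev/null | wc -l)
  echo "$(date +"%Y-%m-%d %H:%M:%S") $count files changed in the last minute" >> /tmp/changed_files_per_minute.log
  sleep 60
done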
  5. People having problems with the official Plex docker container can reply in this thread. If you use the Linuxserver.io version, post here. I hope we can help each other.
  6. With the NUCs, that can certainly vary from unit to unit.
  7. No idea whether this is interesting for you, but the Celeron possibly supports SATA port multipliers. Then you could attach 2 to 5 additional drives to a single SATA port (of course this reduces the bandwidth). Edit: it actually works and is even mentioned in the wiki ^^
  8. Where does your Emby get its metadata from, or have you created an NFO file yourself for every movie? Emby uses the same usual suspects like TMDB, Fanart etc. if you don't create all metadata yourself and cut the internet connection. I mean, great if that works, but does anyone actually do that? I'm lazy; I want it to happen automatically ^^ I know that Plex officially stated in 2017 that they collect data, but don't they all? Even with Kodi you only get the data through an API. Granted, with Kodi the data comes directly from the respective provider, so the requests don't go through a central domain as they do with Plex, but Kodi doesn't earn money either, so the API operators can't send Kodi a bill. With Plex it's different, and the number of API requests from Plex is probably much higher. For that reason Plex caches the API requests and results (which annoys some people, because the data is sometimes outdated). But it also has the advantage that Plex doesn't make itself dependent on the servers of the external providers. For a company that earns money, that's understandable. I couldn't find out how this works with Emby. Edit: ok, with Emby the requests go directly to the respective API, e.g. TVDB, just like with Kodi: https://emby.media/community/index.php?/topic/79269-tvdb-api-issues/ No idea how that works financially. TMDB apparently does allow commercial use after all, but leaves itself a back door to make access paid: https://www.themoviedb.org/documentation/api/terms-of-use?language=de-DE#:~:text=TMDb is committed to free,have real costs for TMDb. But basically it's true: Plex collects more data than the others.
  9. I'm still a complete novice when it comes to VMs, but the last time I installed Windows 7 I used OS Install CDRom Bus "SATA", VirtIO Drivers CDRom Bus "SATA" and Primary vDisk Bus "VirtIO". Isn't that 10x faster than "IDE"? You know, IDE is the old stuff with the wide ribbon cables; back then Windows 10 didn't even exist ^^
  10. In Unraid there is no RAID. If you are referring to the parity... if you select "Parity Slots" and "Cache Slots" in New Config, I think it should be preserved and won't have to be recalculated. No guarantee, though.
  11. If the ID is different, you have to start a "new config" and assign the disks manually accordingly. Changing the ID is not possible.
  12. No, not at all. I use rsync and copy the complete image every day (whenever something has changed, which is practically every day). Bad luck ^^ But there is enough space on the HDD, and x days/weeks are enough as a backup.
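For reference, a minimal sketch of such a daily copy, assuming hypothetical paths for the VM image and the backup share (the real paths and schedule are not mentioned in the post; on Unraid this could be scheduled via the User Scripts plugin):

# keep one dated full copy of the image per day
mkdir -p "/mnt/user/Backup/vm/$(date +%Y%m%d)"
rsync -a /mnt/user/domains/vm.img "/mnt/user/Backup/vm/$(date +%Y%m%d)/"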
  13. Did you mean the screenshot of the Unraid dashboard? No, of course not. That was only meant to show the high CPU load and I/O wait. But you are right, it could be confusing, so I removed it from my first post.
  14. Ok, as expected, I was able to transcode even more.

Settings:
- Enabled hardware acceleration (requires Plex Pass)
- Transcoding to an 8GB RAM disk (4GB was not sufficient for more than 3x 4K streams; I didn't test 6GB)
- Direct disk access for the Plex config
- Direct disk access to 4K movies located on the NVMe

Results: Transcoding 5x 4K streams without judder. I'm not sure if even 6x 4K would be possible. The only limitation seems to be I/O wait.

EDIT: Ok, I think it's not really I/O wait; instead the iGPU reached its limit. This output is generated through the Intel GPU Tools:

5x 4K streams = 100% video core load
4x 4K streams = 99% video core load
3x 4K streams = 74% video core load
2x 4K streams = 35% video core load
1x 4K stream = 24% video core load

As you can see, the Plex CPU dashboard isn't useful for hardware transcoding, as it only shows the CPU load and not the video core load. But hey, we were able to transcode 5x 4K streams in parallel. Should be enough, I think. I think it would even be possible to get 6x 4K streams if the iGPU's maximum frequency were the 1,150 MHz of the i3-8300, or the 1,200 MHz of the i5-10600 and many 8th to 10th gen i7 CPUs.
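For reference, the video core load above is read with intel_gpu_top from the Intel GPU Tools; a minimal usage sketch (how the tools get onto the Unraid host - plugin or manual install - is an assumption and not part of the post):

intel_gpu_top
# the "Video" engine row shows how busy the iGPU's transcoding engine is;
# this is the number that matters for Plex hardware transcoding, not the CPU load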
  15. Many people on Reddit ask me how well the 8th/9th/10th gen Intel iGPU performs, and most of them don't believe me when I say "better than a Quadro P2000". Next time I will link to these screenshots.

Settings:
- Enabled hardware acceleration (requires Plex Pass)
- Transcoding to an 8GB RAM disk (4GB was not sufficient for more than 3x 4K streams; I didn't test 6GB)
- Enabled direct disk access for the Plex config

Results: Transcoding 4x 4K streams without judder. If I transcode 5x 4K, it judders from time to time in one (random) stream. For my next benchmark I will copy some 4K movies to my SSD cache. Test results will follow...
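A minimal sketch of how such a RAM disk for the transcoding directory can be created on the host (mount point and size are assumptions; the container's /transcode path then has to be mapped to this directory):

# create an 8GB tmpfs and use it as the transcode target
mkdir -p /tmp/plex-transcode
mount -t tmpfs -o size=8g tmpfs /tmp/plex-transcode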
  16. Well, my collection (roughly 2,500 Blu-rays) did not fall off the back of a truck. 🤨 And only 1 TB? I wish ^^
  17. Google is going to change its business model and reduce the unlimited storage to 2TB. According to the comments you can apparently still switch to a more expensive enterprise plan, which supposedly costs around €20 per month?! Quite a few Plex users, especially in the USA, will cry out over this. Over there it was very popular to watch your whole movie collection through a mounted Google Drive, and over time some people stopped running a NAS at all, or only kept one with less storage. That will probably change again soon. Maybe Unraid will see more newcomers in the coming days. Time to buy WD shares ^^
  18. That has no disadvantage. Only the smallest disk size counts; the remaining space of the larger disks is ignored. Unfortunately I have no experience with the new pools yet. Otherwise I have simply always reset the config (Tools -> New Config). That way I could, for example, change the order of the disks; the data on the disks was always preserved. For example, with the new mainboard I had connected two disks via USB, which is why Unraid no longer recognized them. So I did a New Config, assigned the two disks and then had the parity recalculated (it is of course lost with a New Config). As I said: if you snapshot, then only within a RAID or, if there is no RAID, only within one disk. Snapshots are so fast because they are basically just a kind of hardlink to already existing data, and linking to data simply doesn't work outside the existing partition. Conclusion: if you want to export them, it's no longer a snapshot but a backup (which of course also takes much longer).
  19. The writing speed depends on the size of the RAM (vm.dirty_ratio) and on the size of the files that are written, but reads would be faster. However, as @itimpi correctly pointed out, there was a flaw in my thinking: if you add files to a mounted image, those files are written sequentially into the image. And as we have 4 huge image parts across 4 disks, it will only write to the first part as long as it's not filled and never touch the other disks involved. So to realize this idea, you would need many more parts across the disks, so that even a small/medium file is written to multiple disks. This means the parts would need a size of 1MB or so. ^^
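For context, the RAM write cache mentioned above is controlled by the kernel's dirty page settings; a minimal sketch to inspect and (with an assumed value of 75%, not taken from this post) temporarily raise them:

# show the current values
sysctl vm.dirty_ratio vm.dirty_background_ratio
# allow up to 75% of memory to hold unwritten (dirty) data before writers are blocked; reverts on reboot
sysctl -w vm.dirty_ratio=75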
  20. This is only an idea. I do not know if software exists that would make this possible. Feedback is welcome.
1.) We create a share across multiple disks.
2.) We use a software (I don't know if it exists) to create a VHD, DMG, RAW disk image, etc. which consists of as many files as the share uses disks. Let's say 4 disks and a 1TB image means 4x 250GB image parts.
3.) We copy the 4 parts to the 4 different disks.
4.) We mount the image through the client.
Now we would theoretically have a read and write speed 4 times higher than with only one disk (see the sketch below).
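A minimal sketch with made-up paths: four sparse image parts, one per data disk, each attached as a loop device. The missing piece remains the software that presents these parts to a client as one image (e.g. by striping the loop devices together), which is exactly the open question of this idea:

# create a 250GB sparse part on each of the 4 data disks and attach it as a loop device
for i in 1 2 3 4; do
  mkdir -p "/mnt/disk${i}/images"
  truncate -s 250G "/mnt/disk${i}/images/part${i}.img"        # sparse: takes no space until written
  losetup --find --show "/mnt/disk${i}/images/part${i}.img"   # prints the assigned /dev/loopN
done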
  21. 1.) You cannot create a snapshot onto an external medium (which the HDD would be in this case).
2.) You have to use a separate tool like BTRBK to get the snapshot onto the HDD.
3.) BTRBK can also back up to non-BTRFS disks, so they wouldn't necessarily have to be formatted as BTRFS.
And why no RAID5? That should give 480GB in total with double read and write speed, so depending on the model around 1000 MB/s overall. It will also be clearly more performant than a single HDD in Unraid, but in my opinion you have to think about which data actually needs to be available quickly and which doesn't. For example, I have 64GB of RAM installed and use 75% of my free RAM as a write cache. That means I always have around 40GB free during an upload, and no matter where the data goes - even to an HDD - I push it onto the NAS at 1 GB/s. Uploads and downloads to and from the SSD also run at 1 GB/s in general (up to 1TB). The only limitation is downloading from the HDDs, which is naturally limited to the speed of a single disk. So when I download a movie rip, it goes at "only" 100 to 200 MB/s. And that is what I mean by hot and cold: this data is cold for me, because I need it rarely, if at all. That's what I meant. There are both different versions and two different types, which differ slightly. The CA User Scripts plugin helps you with scripts that should run on a schedule. A GUI for BTRFS exists, but not for Unraid; it's called Snapper. But it is available neither as a Docker container nor as a Slackware build (the Linux that Unraid runs on).
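Regarding points 1.) to 3.): a minimal manual sketch of the snapshot-and-send workflow that a tool like BTRBK automates, assuming appdata is a BTRFS subvolume on the cache pool (paths and names are examples, not taken from the post):

# 1. read-only snapshot on the same BTRFS filesystem (instant, no copy)
btrfs subvolume snapshot -r /mnt/cache/appdata /mnt/cache/.snapshots/appdata_20201008
# 2. send it to a BTRFS disk in the array ...
btrfs send /mnt/cache/.snapshots/appdata_20201008 | btrfs receive /mnt/disk1/backups
# 3. ... or, for a non-BTRFS target, write the stream into a plain file
btrfs send /mnt/cache/.snapshots/appdata_20201008 > /mnt/disk2/backups/appdata_20201008.btrfs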
  22. My next test should cover downloading from the HDDs. I don't think this is the best we could get: I will test transferring the file to the server's NVMe to check the highest possible speed without SMB. Then I'll test FTP, and maybe I'll find a way to boot Ubuntu on my client machine to test NFS as well. The most interesting part is these fluctuations.
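A minimal sketch of that local test: copy a large file from an array HDD to the NVMe cache directly on the server, so SMB is out of the picture (file names and paths are examples):

# drop the page cache first so the file is really read from the HDD
sync; echo 3 > /proc/sys/vm/drop_caches
# copy with a visible transfer rate
dd if=/mnt/disk1/movie/sample.mkv of=/mnt/cache/speedtest.mkv bs=1M status=progress
rm /mnt/cache/speedtest.mkv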
  23. Why BTRFS? Do you want to keep snapshots on individual HDDs? That really only makes sense for VM images, and those sit on the SSDs anyway. What is the added value? A RAID5 made of three SSDs should be faster than two separate pools?! You wrote the following about it, but I don't see why the VM can be cached "now" and not before: For me, all VMs sit on one SSD and that's it. That's also such a thing: I'd rather buy large SSDs than play RAID games with the HDDs again. For me it has been clear since Unraid: there is only hot and cold storage, and the cold storage is simply "slow". It is really strange that you can't get that to work. Unfortunately I have no experience with it, but with my VMs I have noticed again and again that it makes a big difference which chipset (machine type) and BIOS you use. Have you tried switching those as well? Of course it's always great when you're the only person in the world with a certain error message ^^ According to the various tips in the forum, you simply choose "Restore" during the update, or something like that. However, it's questionable what happens to the SSD pools, since those have only existed since the beta.
  24. This discussion started here. Use Geizhals to find dual cards. The QNAP QXG-10G2T-107 seems to be the cheapest, but I can not guarantee that the drivers will work in Unraid, so order it with a return option so you can test it. The Intel X540-T2 could be a choice, too, but beware: many of them are fakes (although those seem to work without problems). This is my research result regarding power consumption (for single cards):

X557-AT: 3.4W TDP (not sure if that is the total or only the controller) https://ark.intel.com/content/www/de/de/ark/products/series/82749/intel-ethernet-connection-x557-series.html
X550-AT: 8W TDP (an X550-T1 network card should consume 8.4W) https://ark.intel.com/content/www/de/de/ark/products/series/75020/intel-ethernet-controller-x550-series.html
82599: 7.1W TDP (specification page 988) https://ark.intel.com/content/www/de/de/ark/products/series/32609/intel-82599-10-gigabit-ethernet-controller.html
Marvell/Aquantia AQC107: ~6W (measured by myself, in idle)

Finally, I don't know whether the Intel values are idle or maximum values. Onboard 10G ports also need a heatsink, but it seems the TDP has become smaller with the newer Intel controller generations. While the X540 had a really big heatsink, you will find a really small one on this board (with dual Intel X557): https://geizhals.de/supermicro-x11sdv-16c-tln2f-retail-mbd-x11sdv-16c-tln2f-o-a1808467.html But at the moment I was not able to find any X557 network adapters, only this mezzanine module: https://www.idealo.de/preisvergleich/OffersOfProduct/6042536_-ethernet-network-connection-ocp-x557-t2-intel.html This "monster" uses two different controllers ^^ https://geizhals.de/supermicro-aoc-stg-i4t-a1711988.html Conclusion: Sadly, nobody really seems to test the power consumption of 10GbE cards, but the heatsink size looks promising for the X557, as the Aquantia single-port card has the same heatsink size as the dual Intel X557.