Moz80

Members · 9 posts

  1. Such excellent news! I really look forward to upgrading! Thank you so much!
  2. Thanks for this, I really do appreciate it. It certainly seems I glossed over this one (the remount option) in a hurry and may have missed it. Also, as he said, "But like mentioned it's not a complete fix" ... I'm really kind of hanging on for an actual "complete fix", without breaking anything in the meantime. I was happy to reduce the writes as much as I could by changing a few containers (I shouldn't have defined that as a 'fix', I guess), as I was comfortable with this, but I'm not personally very comfortable with many of the other changes proposed beyond that without a guarantee they won't break something if an actual fix is pushed out. But I'm absolutely not trying to diminish the efforts you and many others here have made in finding these options for other users! I probably didn't need to make another post, I guess; I should have continued to observe from the sidelines. It's just frustration that my poor little SSD's life is ticking away quite quickly with this issue present. And still no idea if it's been officially acknowledged and a fix is in the works? [Edit] My (basic) calculations put my SSD's life expectancy at 258 days' worth of "percent lifetime remain".
  3. So back on June 9 I was showing 89% life left on my SSD. Fast forward to today:
     9 Power on hours 0x0032 100 100 000 Old age Always Never 820 (1m, 3d, 4h)
     202 Percent lifetime remain 0x0030 086 086 001 Old age Offline Never 14
     And I've lost another 3%, now down to 86% in 1 month and 3 days of use, after applying the "fixes" of changing a couple of Docker containers (I haven't reformatted my cache drive to another filesystem). I've tried to keep up to date on this thread (reading all the notifications) and I may have missed it, but it would still be REALLY beneficial to know that a real, actual fix is coming for this one.
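A straight-line projection from the numbers in this post (89% remaining on June 9, 86% a month and three days later) can be sketched as follows. The helper name is mine, and the constant-wear-rate assumption is a simplification, since write rates vary day to day:

```python
# Linear extrapolation of SSD wear from SMART "percent lifetime remain".
# Hypothetical helper; assumes wear continues at the observed rate.
def days_until_worn_out(pct_remaining_then, pct_remaining_now, elapsed_days):
    rate = (pct_remaining_then - pct_remaining_now) / elapsed_days  # % lost per day
    return pct_remaining_now / rate

# 89% -> 86% over 34 days (1 month, 3 days), as reported above
print(round(days_until_worn_out(89, 86, 34)))  # ~975 days at this rate
```

A percent-based projection like this will generally differ from estimates built on rated TBW and measured write volume, which may explain differing life-expectancy figures in the thread.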
  4. Yeah, not great, but it's a considerable relief that it wasn't worse news! Do we know if this problem is LIKELY to have a solution that lets us continue using BTRFS, or is reformatting the cache drive to XFS really the best option at this point?
  5. I have just now looked at the SMART data for my SSD in Unraid. It has a line that says:
     202 Percent lifetime remain 0x0030 089 089 001 Old age Offline Never 11
     ... Does that really mean there is only 11% life left on my SSD that's less than a month old? Popping the LBAs written, 57527742008, into a calculator shows me 26.79TB. The Crucial data sheet for the SSD states 180TB written as the endurance of the drive (so I wasn't as worried) ... but the SMART data says only 11%, so I'm freaking out a little now! Should I be worried?
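The calculator step above can be sketched like this, assuming the usual 512-byte logical sector size that SATA SSDs report for Total_LBAs_Written. (Note also that for this attribute the normalized column, 089, is typically the percentage of life remaining, while the raw value, 11, is the percentage consumed.)

```python
# Convert the SMART Total_LBAs_Written raw value to terabytes.
# Assumes 512-byte logical sectors (standard for SATA SSDs).
LBA_SIZE = 512  # bytes per logical sector

lbas_written = 57527742008              # raw value from the post above
bytes_written = lbas_written * LBA_SIZE
print(round(bytes_written / 2**40, 2))  # -> 26.79 (TiB, matching the post)
```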
  6. Another person with this issue signing in. Had 26TB written to my cache drive in 21 days, when I finally decided to investigate the high-temperature notifications etc. Changed from the official Plex container to the binhex Plex container, which brought me down from TB/day to about 120GB-ish per day. Really looking forward to finding a resolution to this one.
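As a rough sanity check on how long a drive lasts at a given write rate, a budget against the data sheet's rated endurance can be sketched like this. The 180TBW rating and ~120GB/day figure come from the posts; the helper name and the constant-rate assumption are mine:

```python
# Rough endurance budget: days left before reaching the rated TBW,
# assuming a constant daily write rate (a simplification).
def days_of_endurance_left(tbw_rating_tb, already_written_tb, gb_per_day):
    remaining_tb = tbw_rating_tb - already_written_tb
    return remaining_tb * 1000 / gb_per_day  # decimal units throughout

# 180 TBW rated, ~26 TB already written, ~120 GB/day after the container swap
print(round(days_of_endurance_left(180, 26, 120)))  # -> 1283 days
```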
  7. Hey mate, absolutely. I found and bookmarked that last night on my iPad; I guess that's where the discussion on this is happening now. I'll have a good read through and see where things are up to! Thanks for all your efforts thus far!
  8. So now, with 24 hours of iotop data, I have 120GB on loop2 (much better than the TBs I would have had on the official Plex container). Interestingly, I can see a few Plex Media Server entries that seem consistent with some watch activity that occurred overnight. 44GB of transmission activity seems a bit up there; there was a single 8GB torrent overnight. But if all this activity is separate to the loop2 entry, what exactly is going on with loop2?
     Actual DISK READ: 0.00 B/s | Actual DISK WRITE: 350.07 K/s
     TID   PRIO USER   DISK READ  DISK WRITE SWAPIN  IO>     COMMAND
     5041  be/0 root   1936.26 M  119.93 G   0.00 %  0.31 %  [loop2]
     5062  be/4 root   1968.00 K  3.21 G     0.00 %  0.08 %  [btrfs-transacti]
     5110  be/4 root   0.00 B     65.55 M    0.00 %  0.04 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5090  be/4 root   4.00 K     75.10 M    0.00 %  0.03 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5112  be/4 root   4.00 K     72.13 M    0.00 %  0.03 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5115  be/4 root   16.00 K    79.57 M    0.00 %  0.03 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5114  be/4 root   24.00 K    67.71 M    0.00 %  0.03 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5109  be/4 root   212.00 K   80.83 M    0.00 %  0.03 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5086  be/4 root   0.00 B     67.98 M    0.00 %  0.03 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     9424  be/4 root   484.43 M   54.46 M    0.00 %  0.03 %  python Tautulli.py --datadir /config
     4706  be/4 root   0.00 B     0.00 B     0.00 %  0.03 %  [unraidd4]
     5075  be/4 root   48.00 K    58.54 M    0.00 %  0.03 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5120  be/4 root   20.00 K    81.87 M    0.00 %  0.03 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5123  be/4 root   52.00 K    64.18 M    0.00 %  0.02 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5119  be/4 root   224.00 K   51.89 M    0.00 %  0.02 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5107  be/4 root   12.00 K    60.95 M    0.00 %  0.02 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     5087  be/4 root   48.00 K    55.74 M    0.00 %  0.02 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     7538  be/4 nobody 44.87 G    1104.00 K  0.00 %  0.02 %  transmission-daemon -g /config -c /watch -f
     8870  be/4 nobody 488.26 M   2.27 M     0.00 %  0.02 %  mono --debug /usr/lib/sonarr/NzbDrone.exe -nobrowser -data=/config
     5113  be/4 root   0.00 B     26.85 M    0.00 %  0.01 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     8267  be/4 nobody 3.07 G     20.00 K    0.00 %  0.00 %  Plex Media Server
     8268  be/4 nobody 3.02 G     12.00 K    0.00 %  0.00 %  Plex Media Server
     8869  be/4 nobody 3.51 G     27.51 M    0.00 %  0.00 %  mono --debug /usr/lib/sonarr/NzbDrone.exe -nobrowser -data=/config
     5108  be/4 root   0.00 B     4.16 M     0.00 %  0.00 %  dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
     4865  be/4 root   252.00 K   67.11 M    0.00 %  0.00 %  [btrfs-transacti]
     8952  be/4 nobody 86.22 M    7.18 M     0.00 %  0.00 %  mono --debug /usr/lib/radarr/Radarr.exe -nobrowser -data=/config
     28612 be/4 root   90.55 M    0.00 B     0.00 %  0.00 %  homebridge
     31972 be/4 root   15.64 M    1756.00 K  0.00 %  0.06 %  shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
     32221 be/4 root   11.69 M    1528.00 K  0.00 %  0.06 %  shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
     8868  be/4 nobody 78.38 M    2040.00 K  0.00 %  0.00 %  mono --debug /usr/lib/sonarr/NzbDrone.exe -nobrowser -data=/config
     8953  be/4 nobody 79.08 M    1432.00 K  0.00 %  0.00 %  mono --debug /usr/lib/radarr/Radarr.exe -nobrowser -data=/config
     3728  be/4 root   4.11 M     352.00 K   0.00 %  0.14 %  shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
     5076  be/4 root   76.00 K    252.00 K   0.00 %  0.37 %  shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
     8160  be/4 nobody 298.57 M   624.00 K   0.00 %  0.00 %  Plex Media Server
     9133  be/4 nobody 781.49 M   1476.00 K  0.00 %  0.00 %  mono --debug /usr/lib/lidarr/Lidarr.exe -nobrowser -data=/config
     9134  be/4 nobody 317.68 M   1300.00 K  0.00 %  0.00 %  mono --debug /usr/lib/lidarr/Lidarr.exe -nobrowser -data=/config
     9135  be/4 nobody 212.10 M   1568.00 K  0.00 %  0.00 %  mono --debug /usr/lib/lidarr/Lidarr.exe -nobrowser -data=/config
     5077  be/4 root   236.00 K   140.00 K   0.00 %  0.28 %  shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
     28633 be/4 root   16.68 M    0.00 B     0.00 %  0.00 %  homebridge-config-ui-x
     8954  be/4 nobody 77.48 M    1108.00 K  0.00 %  0.00 %  mono --debug /usr/lib/radarr/Radarr.exe -nobrowser -data=/config
     8638  be/4 nobody 2.34 M     32.00 K    0.00 %  0.00 %  tint2 -c /home/nobody/tint2/theme/tint2rc
     4324  be/4 root   6.19 M     8.28 M     0.00 %  0.12 %  qemu-system-x86_64 -name guest=Windows 10,debug-thread~n=deny,resourcecontrol=deny -msg timestamp=on [worker]
     4931  be/4 root   5.45 M     128.00 K   0.00 %  0.18 %  shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
     28613 be/4 root   428.00 K   0.00 B     0.00 %  0.00 %  tail -f /dev/null
     18758 be/4 root   948.00 K   11.84 M    0.00 %  0.00 %  [kworker/u12:3-btrfs-scrub]
     4922  be/4 root   1608.00 K  104.00 K   0.00 %  0.15 %  shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
     9423  be/4 root   2.70 M     764.00 K   0.00 %  0.00 %  python Tautulli.py --datadir /config
     27586 be/4 root   110.12 M   68.00 K    0.00 %  0.00 %  python Tautulli.py --datadir /config
     9441  be/4 root   6.93 M     244.00 K   0.00 %  0.00 %  python Tautulli.py --datadir /config
     8114  be/4 root   0.00 B     304.00 K   0.00 %  0.00 %  containerd --config /var/run/docker/containerd/containerd.toml --log-level error
     8415  be/4 root   9.79 M     544.00 K   0.00 %  0.00 %  python /usr/bin/supervisord -c /etc/supervisor.conf -n
     24792 be/4 root   110.03 M   64.00 K    0.00 %  0.00 %  python Tautulli.py --datadir /config
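To total up a dump like the one above without eyeballing it, per-command write totals can be aggregated from iotop's accumulated batch output (e.g. `iotop -b -a -o -P`). This parser is a sketch based on the column layout shown above, and would need adjusting if your iotop's output format differs:

```python
from collections import defaultdict

# Multipliers for iotop's size suffixes (B, K, M, G, T)
UNITS = {"B": 1, "K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40}

def writes_per_command(lines):
    """Sum cumulative DISK WRITE bytes per command from iotop rows laid out as:
    TID PRIO USER READ unit WRITE unit SWAPIN % IO % COMMAND..."""
    totals = defaultdict(float)
    for line in lines:
        parts = line.split()
        if len(parts) < 12 or not parts[0].isdigit():
            continue  # skip headers and non-process lines
        write_bytes = float(parts[5]) * UNITS[parts[6][0]]
        command = parts[11]  # first token of the COMMAND column
        totals[command] += write_bytes
    return dict(totals)

sample = [
    "5041 be/0 root 1936.26 M 119.93 G 0.00 % 0.31 % [loop2]",
    "5062 be/4 root 1968.00 K 3.21 G 0.00 % 0.08 % [btrfs-transacti]",
]
for cmd, total in writes_per_command(sample).items():
    print(cmd, round(total / 2**30, 2), "GiB")
```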
  9. Only just realised yesterday that this was happening to me. My 21-day-old 500GB SATA SSD has 26TB written already (yikes!). I can only account for about 6TB of that, from installing and not turning off the cache on a share during an initial copy of data. Last night I did some trial and error with different running containers and definitely saw the official Plex one writing A LOT of data. Pi-hole seems to be using a fair bit for me as well, although nowhere near as much. So having switched now to binhex-plex but still running Pi-hole, I've got 59.6GB written in the last 8 hours... still seems a little high. I'm wondering if the linuxserver container will be better than the binhex one?