chanrc

Members
  • Content Count

    4
  • Joined

  • Last visited

Community Reputation

0 Neutral

About chanrc

  • Rank
    Newbie

  1. Anyone tried out 6.90-beta22 yet? I'm assuming, since we haven't heard anything from the LT guys, this is still probably an issue.
  2. Isn't the absolute easiest way just to run a SMART test on the drive and look at the report? If you click on the name of the drive from the main menu you can download it from there. I just did that, ran an iotop -oa -d 3600, and averaged 10 hours of usage to get a rough figure for how much data loop2 was generating. Multiply that by the power-on hours in the drive attributes and you get an approximation of how badly this bug is killing the drive and how much of the TBW it is responsible for. In my case my drive reported 64.1TBW total in the SMART data; I measured 8.6MB/s from loop2 averaged over a ten-hour period with users on Plex, access to my server, etc., or about 30.2GB/hour. My drive attributes showed 1064 power-on hours, so by rough napkin math loop2 has generated ~31.4TBW by itself (basically halving the life of my drive). The rest will be from transferring about 12TB from my old NAS (I stupidly had cache enabled for the initial transfer), downloads, heavy HandBrake H265 transcoding, a couple of VM installs, and futzing around with my server in general. For comparison, after converting to XFS I'm generating ~9MB/minute from loop2, which would be about ~0.6TBW over the same number of power-on hours.
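As a sanity check, the napkin math above can be reproduced in a few lines. The figures are the ones reported in this post; the sketch assumes binary-unit (1024) conversions and that the averaged loop2 rate held constant over all power-on hours, which is only a rough approximation:

```python
# Estimate how much of a drive's total writes come from loop2,
# using an averaged iotop write rate and SMART power-on hours.
# All input figures are taken from the post above.

loop2_rate_mb_s = 8.6    # averaged loop2 write rate from `iotop -oa -d 3600`
power_on_hours = 1064    # SMART attribute 9 (Power_On_Hours)
total_tbw = 64.1         # total TB written, per the SMART report

gb_per_hour = loop2_rate_mb_s * 3600 / 1024
loop2_tbw = gb_per_hour * power_on_hours / 1024

print(f"loop2 rate: ~{gb_per_hour:.1f} GB/hour")        # ~30.2 GB/hour
print(f"loop2 share: ~{loop2_tbw:.1f} TBW of {total_tbw} TBW total "
      f"({loop2_tbw / total_tbw:.0%})")                 # ~31.4 TBW (49%)
```

Substitute your own iotop average, power-on hours, and SMART TBW figure to repeat the estimate for a different drive.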
  3. I'm 3 days now since the switch to a single un-encrypted XFS cache and consistently getting better results. loop2 is producing only ~9MB/min of writes during idle with all my dockers started (including binhex Plex, sonarr, radarr, sabnzbd, deluge, mariaDB, nextcloud, letsencrypt, cloudflare-ddns, pihole, ombi, grafana, teleconf, influxDB), compared to the ~8MB/s I was seeing before even after stopping all my dockers, with docker merely enabled on my un-encrypted BTRFS cache. Not sure what the trigger for @nas_nerd's XFS issue is, but I can't repro it with mariaDB and nextcloud enabled (no user connects in the last 3 days though, maybe I should try to upload something). Measured over 10 minutes using iotop -oa -d 600 and over four hours using iotop -oa -d 14400, with several small uploads to nextcloud and a couple of downloads.
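To put the before/after rates on the same footing, here is the conversion, using only the two rates reported in this post (~8MB/s idle on BTRFS vs. ~9MB/min idle on XFS):

```python
# Rough comparison of idle loop2 write rates before/after the
# BTRFS -> XFS cache conversion, using the rates from this post.

btrfs_rate_mb_min = 8.0 * 60   # ~8 MB/s on BTRFS, converted to MB/minute
xfs_rate_mb_min = 9.0          # ~9 MB/minute on XFS

print(f"reduction: ~{btrfs_rate_mb_min / xfs_rate_mb_min:.0f}x fewer writes")
# reduction: ~53x fewer writes
```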
  4. I have the same issue. Testing with all dockers stopped, loop2 by itself was still writing data at 5-15MB/s in iotop to my single unencrypted BTRFS cache SSD. I tried converting my cache drive to XFS and now it's down to 20MB over the past 10 minutes with no dockers running, and 100MB over 10 minutes with all my dockers up (binhex sonarr, radarr, tautulli, sabnzbd, deluge, ombi, pihole, nextcloud). A huge improvement with XFS over BTRFS, though still a problem when there is really no usage in any of those dockers. My month-and-a-half-old cache SSD was already at 66TBW (of the 640TBW my manufacturer rates the drive for) before I noticed this. Can devs look at this as urgent instead of a minor issue? It has probably cratered a lot of people's SSDs already.
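To see how fast that endurance is being consumed, a quick linear extrapolation from the figures in this post (66TBW used of a 640TBW rating in roughly a month and a half) gives a ballpark; this assumes the write rate stays constant, which won't hold after switching filesystems:

```python
# Project how quickly the rated endurance is being consumed,
# using the figures reported in this post.

tbw_used = 66.0      # TB written so far (from SMART)
tbw_rated = 640.0    # manufacturer endurance rating
age_months = 1.5     # approximate drive age

used_fraction = tbw_used / tbw_rated
months_to_exhaustion = age_months / used_fraction

print(f"endurance used: {used_fraction:.1%}")                          # 10.3%
print(f"projected exhaustion at this rate: ~{months_to_exhaustion:.0f} months")
```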