maust

Everything posted by maust

More or less I have followed the same steps as OP, with similar "results" but no permanent fix. The issue became much more apparent after upgrading from 6.9 to 6.11.5: IOWAIT consistently sits between 5-10%. Any time qBittorrent (or really any other container) does large-scale file operations, IOWAIT shoots up to 30-50%, sometimes sitting there for hours at a time. This causes all network traffic to grind to a halt.

Some of what I have tried:

Swapped cache drives
Tried adding more cache drives
Tried splitting the workloads between cache drives
Switched all cache drives from BTRFS to XFS (greatly improved the baseline IOWait, but the issue persists)
Switched docker.img from BTRFS to XFS (again, improved IOWait, but the issue persists)
Rebuilt docker.img from 150GB -> 50GB after fixing naughty containers (no performance change)
Ensured docker containers were not writing to docker.img after build (no performance change)
Switched docker.img from XFS to a directory (no change)
Tried adding better, faster pool drives (no perceived difference)
Replaced both CPUs (E5-2650 -> E5-2650v2)

What I am working to try:

Replacing all RAM with higher-capacity sticks (128GB -> 384GB)

Things that really trigger the IOWait:

qBittorrent cache flushing (IO and overall system performance improve if all qBittorrent caching is disabled)
Mover (with or without nice)
Radarr/Sonarr file analysis
Sonarr's Finished Download Check (runs every 30 seconds; typically causes 5-6% IOWait for ~10 seconds each time)
SABnzbd (no longer an issue once nice was adjusted)
Unzip/unrar (any kind; I have to be incredibly harsh with the nice values to keep it from choking the server)
NFSv3 (full stop: any remote NFSv3 activity causes massive IOWait, upwards of 40-50% on read-only alone)
BTRFS (literally anything BTRFS causes issues on my R720XD; I do not see this on my other servers)

Specs:

R720XD
E5-2650v2
128GB DDR3-1600MHz
Parity - 2 drives: 16TB
WD Red, 18TB WD Gold
Array (not including parity) - 16 drives, 236TB usable, all tested with DiskSpeed and monitored:
Seagate 16TB Exos x7
WD x2
14TB x4
12TB x3
Cache pools:
Team 1TB (weekly appdata backups)
P31 1TB (appdata)
1TB WD Black NVMe (blank)
4TB Samsung 870 EVO (download caching)
Dell Compellent SC200
Dell 165T0 Broadcom 57800S quad-port SFP+
Dell H200 6Gbps HBA
LSI 9211

Working hypothesis: I am monitoring with Netdata, and the IOWait jumps typically correlate with memory writeback, specifically dirty-memory writeback. All my research comes back to either bad/insufficient RAM (which I will be swapping out entirely, going to 384GB) or tunables that need further adjustment.
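On the unzip/unrar trigger above: the harsh nice values can be paired with ionice so the job is deprioritized on disk as well as CPU. A minimal sketch (the throttle wrapper name is my own; note the idle I/O class is only honored by I/O schedulers that support priorities, such as BFQ, and is a no-op under mq-deadline or none):

```shell
# throttle: run a command at the lowest CPU priority (nice 19)
# and in the idle I/O class (ionice -c 3), so it only gets disk
# time when nothing else is asking for it.
throttle() {
  nice -n 19 ionice -c 3 "$@"
}

# Example with a hypothetical archive path:
# throttle unrar x /mnt/cache/downloads/archive.rar /mnt/user/media/
```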
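On the dirty-writeback hypothesis: the tunables usually meant here are the kernel's vm.dirty_* knobs, which control how much dirty data may pile up in RAM before writeback kicks in. A sketch of how to watch the buildup and where the knobs live; the byte values in the comments are illustrative guesses, not recommendations:

```shell
# Watch dirty pages accumulate while reproducing an IOWait spike:
grep -E '^(Dirty|Writeback):' /proc/meminfo

# Current thresholds (the defaults are a percentage of total RAM,
# so a 128GB-384GB box can buffer tens of GB of dirty pages before
# a writeback storm):
cat /proc/sys/vm/dirty_ratio /proc/sys/vm/dirty_background_ratio

# Illustrative only: absolute byte limits make writeback start
# earlier and in smaller bursts. Applying them requires root,
# e.g. via /etc/sysctl.conf:
#   vm.dirty_background_bytes = 268435456   # async writeback begins at 256MB
#   vm.dirty_bytes = 1073741824             # writers throttled at 1GB dirty
```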