
About CowboyRedBeard


  1. Yeah, all that is beyond me... But hopefully you've found something that can be addressed by the team here. 🤞
  2. Yeah... I mean, to me this seems like a significant issue. Also doesn't seem to be a configuration issue. ¯\_(ツ)_/¯ Anyone?
  3. Bumping this back up... Can someone help me get a resolution?
  4. I can see that angle... but SSDs are cheap these days too. I'm running sabnzbd, a few VMs, and a few other operations on it. I guess I could migrate it to XFS to test and see if that fixes it... but honestly I'd prefer the BTRFS / cache issue be fixed, if that's what's going on here. What's the best way to do that? Copy everything off, start in maintenance mode, and then copy back? Has anyone from Limetech looked into this situation at all, since it seems I'm not the only one?
  5. You can't have a redundant cache (pool) with XFS... https://wiki.unraid.net/UnRAID_6/Storage_Management#Switching_the_cache_to_pool_mode That's a problem... I suppose this is something I could try, but I don't see lack of a pool as a viable option going forward.
  6. No, it did not happen when copying to the Optane drive; however, that drive is formatted as XFS. Maybe that's part of the issue? I wonder if there's an easy way for me to convert my cache pool to XFS and then try?
  7. So this is my latest test: I shut the system down, installed a PCIe SATA controller card, and moved both cache drives over to it (from the motherboard SATA3 ports). Here's an 8GB file: More or less the same issue with IO wait, though the speed might be a little better... hard to say, as this file was half the size of the previous tests.
  8. Like here is that same file coming off the same array disk to the Optane drive at 190MB/s, vs. the 85MB/s the cache pool was able to manage.
  9. I mean, I'm not seeing speeds as limited as others are... although I'm unable to reach the speeds I did in the past. The larger issue for me is that when I DO write to the cache as fast as it can go... it crushes the server and other services basically stop. As seen in the first post.
  10. And the IO during that cache write operation
  11. For a test, here's me copying an 83GB file from the array to the cache drive (via mc in the shell): And where this drive drops off in utilization, you can scroll down to the other cache drive and watch it pick up where this one left off. Now this is writing at only 85MB/s (+/-), and it's pulling from a spinning disk... which should be able to feed it more than that. But there's the backlog you asked about. And as a test, I have a PCIe Intel Optane drive in the box that I copied this file to FROM cache: And this is the Optane, which typically seems to take data as fast as you can give it: Hopefully this sheds some light on the issue?
  12. I appreciate the help, but it isn't the drives. Check out the netdata graphs I posted at the start of this thread. As you can see from my first posts, I'm not getting anywhere near these speeds: https://ssd.userbenchmark.com/SpeedTest/667965/Samsung-SSD-860-QVO-1TB https://www.pcworld.com/article/3322947/samsung-860-qvo-ssd-review.html The issue was the same with the Crucial drives. So... I'm pretty sure it's not the drives themselves. I'm not on the same level as most of you guys with this stuff, but given the discussion thus far, it seems clear to me that this is something unique to unRAID and the cache.
  13. And with the MX500s I'd get 500MB/s transfers...
  14. The Crucial drives were MX500s... and the part you might have missed is that this issue hasn't always happened. In fact, part of the reason I put the Samsung drives in (apart from regularly running out of space on the MX500s) was that I wondered if they might have been part of the issue. So the MX500 drives didn't have the issue... then at some point it started. I'm pretty sure it started right when I went to 6.7, but I mostly had these operations happening late at night.
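
The copy-off / copy-back migration asked about in posts 4–6 can be sketched roughly as below. This is my outline, not an official unRAID procedure: `SRC` and `BACKUP` default to demo paths so the snippet runs anywhere, but on the actual server they would be `/mnt/cache` and a share on the array (both assumptions, adjust to taste).

```shell
# Hypothetical sketch of the BTRFS -> XFS cache migration discussed above.
SRC=${SRC:-/tmp/demo_cache}          # on the server: /mnt/cache (assumption)
BACKUP=${BACKUP:-/tmp/demo_backup}   # on the server: an array share (assumption)
mkdir -p "$SRC" "$BACKUP"
echo "appdata" > "$SRC/example.txt"  # stand-in for the real cache contents

# Step 1: stop Docker containers and VMs, then copy everything off the pool
cp -a "$SRC/." "$BACKUP/"

# Step 2: reformat the pool as XFS from the unRAID GUI (maintenance mode)

# Step 3: copy everything back
cp -a "$BACKUP/." "$SRC/"
```

For the real copy, `rsync -avh` would be the more common choice since it can resume an interrupted transfer; `cp -a` is used here only to keep the sketch dependency-free.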
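
For comparing the cache pool against the Optane drive with something more repeatable than a file copy through mc, a quick dd write test works. A minimal sketch, under my own assumptions: `TARGET` defaults to `/tmp` so it runs anywhere, but `/mnt/cache` (or the Optane mount) would be the interesting targets on the server.

```shell
# Sequential write test: conv=fdatasync forces a flush to the device before
# dd reports, so the MB/s figure reflects the disk rather than RAM caching.
TARGET=${TARGET:-/tmp}               # assumption: point at /mnt/cache on the server
dd if=/dev/zero of="$TARGET/write_test.bin" bs=1M count=32 conv=fdatasync
rm -f "$TARGET/write_test.bin"       # clean up the test file
```

The small `count=32` is just for the demo; on the server you'd want several gigabytes to get past any SLC/DRAM write caching on the SSDs, which matters especially for QLC drives like the 860 QVO.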
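
Several of the posts above describe IO wait crushing the server during cache writes. One dependency-free way to put a number on that on Linux (without netdata or sysstat) is to read the iowait counter straight out of `/proc/stat`; the field order below follows the kernel's documented proc layout.

```shell
# Read the aggregate CPU line from /proc/stat.
# Fields after "cpu": user nice system idle iowait irq softirq ...
read -r cpu user nice system idle iowait rest < /proc/stat
total=$((user + nice + system + idle + iowait))
echo "iowait: $iowait of $total ticks since boot"
```

Sampling this twice a few seconds apart and differencing gives the iowait share over the interval, which is roughly what the netdata graphs in the first post are plotting.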