CowboyRedBeard

Members
  • Posts
    227
  • Joined
  • Last visited

About CowboyRedBeard

  • Birthday February 5

Converted

  • Gender
    Male

CowboyRedBeard's Achievements

Explorer (4/14)

7 Reputation

  1. Hey fam, I'm considering upgrading now that CA tells me I don't meet the minimum build version. I've been running unraid for 6+ years on this same box, and in the past had a few strange issues that went away when I upgraded to 6.10.3, so I've been apprehensive about upgrading. I run about 25 different Docker containers and 4 VMs full time, in addition to general services like shares. The motherboard is a Supermicro X9DRi-LN4+, and I have the NVIDIA pass-through situation going on (via the plugin). What is some general advice on finding a safe upgrade path? Also, I've had this same SanDisk Cruzer Fit USB for unraid for 6 years now, and the server has been basically powered on that entire time (a few power outages & maintenance windows for additional drives). Should I swap that out at some point? What's the best way to back up my unraid install? (There's a rough flash-backup sketch after this list.) I guess I'd go to the latest stable, which is 6.12.4 at this time; anything to look out for there? Thanks for the help!
  2. I keep getting a "not available" update status on the Docker tab; is this still the right URL? https://hub.docker.com/r/netdata/netdata
  3. Any chance this could have something to do with my memory configuration or something like that? I am using the SCU cable for most of the array drives, although the parity disk and cache drive are on the onboard SATA3 ports.
  4. Do you think this is a product of the configuration? I can't see how it would be a hardware issue, so maybe reinstalling unRAID would help? Also, I'm more than happy to post any logs / test data needed to solve this.
  5. I have another new Crucial MX500 (CT1000MX500SSD1) I can replace it with to try... but I had this same problem with the MX500 500GB drives I had prior. Those were 2 drives in a pool at first; I split the cache down to a single drive and it was still a problem. I tried both BTRFS and XFS. I'll do whatever tests you guys think make sense. Let me know. THANK YOU FOR THE HELP! (yes, that's caps 🙃)
  6. Yeah, I had an Intel Optane drive in there that I was using with VMs. I was able to write to it at a pretty crazy clip, but the SSDs won't achieve anything even approaching what the benchmarks say they can do for more than a few minutes.
  7. Both SSDs are connected via SATA3 ports on the motherboard (SuperMicro), and I've also tried having them connected to a SATA card plugged in via PCIe.
  8. But shouldn't the behavior below hold basically all the time? I mean, those SSDs should be able to sustain 300MiB/s and more for extended periods of time, right? I'm pretty sure this doesn't have anything to do with the network, since it exhibits the same behavior even disk to disk. The I/O wait has got to be a byproduct of whatever the problem is. This wasn't the case for a long time prior to 6.7 on this very same server, with all the same hardware. These current SSD drives were even an attempt to rule out the previous SSDs.
  9. The posts on this page are probably a good depiction of the problem as it appears currently. But essentially, with any file write process to cache I end up with high I/O wait times. I will see the cache drive write at around 300MiB/s for just a minute or two, and after that it will only give around 80MiB/s. This shows up in netdata and on the unraid dashboard as in the following posts. And in that second one you can even see the CPU temps rise, which, as was mentioned here, was thought to be odd since it's just "waiting"... but I monitor CPU temp / fan speed with IPMI and then send that data to InfluxDB where I can trend it (which is that graph in the second post; there's a rough sketch of that pipeline after this list). Happy to conduct any tests you think are meaningful and post the results here (a simple sustained-write test sketch is also after this list). But primarily I see this with cache drives only (spinning disks don't reach the same sort of speeds, so I guess the system can keep up with them). And I also see this whether it's Sab downloading / unpacking a file, or transferring data to or from a non-array / non-cache SSD. Also, earlier in this thread, I had an Intel Optane NVMe drive in the box on a PCIe slot and was able to get crazy sustained write speeds to it without this issue occurring. I've since pulled that out, but could put it back in for testing if needed.
  10. Supermicro X9DRi-LN4+, the onboard controller. The cache drive is connected to a SATA3 port on the motherboard. The other SSD that's not assigned to the array (VMs on this) is on a SATA3 port also. I've done tests to/from those, and I've even tried a PCIe SATA controller that I have in the machine. The network is the onboard ethernet from the motherboard, which doesn't seem to be a bottleneck at all; I can copy at a full 1Gbps on that.
  11. Hi, welcome to the club! Haha. I'm currently on 6.9.2 but have had this issue since 6.7.
  12. I have Sab using a cache-enabled share. I guess pausing during post-processing may help... but that's really just another way to mask the issue without fixing it. I'd love to figure out why this is happening.
  13. So this is last night; downloads start at 1:30... you can see it goes up to 430-ish MiB/s... and then I/O wait goes up as high as 22% and write speeds level off at 80MiB/s. Then this is when the mover starts:
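
On the flash-backup question in post 1: the flash drive is mounted at /boot on a running Unraid server, so backing it up is essentially copying that tree somewhere safe (recent versions also offer a flash backup download from the Main > Flash page). Below is a minimal sketch of the copy approach, assuming a destination share at /mnt/user/backups; that path is a placeholder, not something from these posts.

```python
#!/usr/bin/env python3
"""Minimal sketch: archive the Unraid flash drive (mounted at /boot) into a
dated tar.gz on an array share. The destination share path is a placeholder."""

import tarfile
import time
from pathlib import Path

FLASH = "/boot"                          # Unraid keeps its config and key file on the flash, mounted here
DEST_DIR = Path("/mnt/user/backups")     # hypothetical share; point this at one you actually have

def backup_flash() -> Path:
    DEST_DIR.mkdir(parents=True, exist_ok=True)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    archive = DEST_DIR / f"unraid-flash-{stamp}.tar.gz"
    with tarfile.open(archive, "w:gz") as tar:
        # Grab the whole /boot tree so config/ and the key file come along
        tar.add(FLASH, arcname="boot")
    return archive

if __name__ == "__main__":
    print(f"Flash backup written to {backup_flash()}")
```

As I understand it, restoring is then the usual drill: recreate the stick with the USB creator tool and copy the saved config/ folder back over it.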
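On the fast-then-slow write pattern in posts 8, 9 and 13: one way to reproduce it without Sab, the mover, or the network in the path is to write a big file straight onto the cache mount and watch the rate over time. This is only a sketch under assumptions: the target path is a placeholder, and since it writes through the page cache the first few seconds will look faster than the disk really is, which is fine because the interesting number is the sustained rate after a minute or two.

```python
#!/usr/bin/env python3
"""Sustained-write test sketch: write zeroed 1 MiB chunks to a file on the
cache pool and print throughput every few seconds. If the numbers start high
and settle around 80 MiB/s, the problem reproduces with nothing but the disk
involved. The target path below is a placeholder."""

import os
import time

TARGET = "/mnt/cache/write-test.bin"   # hypothetical path on the cache pool
CHUNK = b"\0" * (1024 * 1024)          # 1 MiB per write
TOTAL_MIB = 20 * 1024                  # 20 GiB total, enough to outlast any short-lived burst
REPORT_EVERY = 5.0                     # seconds between throughput reports

def sustained_write() -> None:
    written = 0
    last_report = time.monotonic()
    with open(TARGET, "wb") as f:
        for _ in range(TOTAL_MIB):
            f.write(CHUNK)
            written += 1
            now = time.monotonic()
            if now - last_report >= REPORT_EVERY:
                print(f"{written / (now - last_report):7.1f} MiB/s")
                written = 0
                last_report = now
        f.flush()
        os.fsync(f.fileno())           # make sure the tail of the file actually reaches the disk

if __name__ == "__main__":
    try:
        sustained_write()
    finally:
        if os.path.exists(TARGET):
            os.remove(TARGET)          # clean up the test file
```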
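Post 9 mentions polling CPU temperature and fan speed over IPMI and pushing the readings into InfluxDB for trending. A rough sketch of that kind of pipeline is below, assuming ipmitool is available on the server and an InfluxDB 1.x instance accepts line-protocol writes at its /write endpoint; the URL, database name, and sensor names are placeholders, not details from the original setup.

```python
#!/usr/bin/env python3
"""Rough sketch of the IPMI -> InfluxDB trending described above. Assumes
`ipmitool sensor` works locally and an InfluxDB 1.x server accepts
line-protocol writes; the endpoint, database, and sensor names are placeholders."""

import subprocess
import time
import urllib.request

INFLUX_URL = "http://influxdb.local:8086/write?db=ipmi"   # hypothetical endpoint and database
SENSORS = ("CPU1 Temp", "CPU2 Temp", "FAN1")              # example sensor names; list yours instead

def read_sensors() -> dict:
    """Parse `ipmitool sensor` output, which is pipe-delimited: name | value | unit | ..."""
    out = subprocess.run(["ipmitool", "sensor"], capture_output=True, text=True, check=True)
    readings = {}
    for line in out.stdout.splitlines():
        fields = [f.strip() for f in line.split("|")]
        if len(fields) >= 2 and fields[0] in SENSORS:
            try:
                readings[fields[0]] = float(fields[1])
            except ValueError:
                pass   # sensor currently unreadable ("na")
    return readings

def write_to_influx(readings: dict) -> None:
    """Send one line-protocol point per sensor reading."""
    if not readings:
        return
    ts = int(time.time() * 1e9)            # line protocol uses nanosecond timestamps by default
    lines = []
    for name, value in readings.items():
        tag = name.replace(" ", "\\ ")     # spaces in tag values must be escaped
        lines.append(f"ipmi,sensor={tag} value={value} {ts}")
    req = urllib.request.Request(INFLUX_URL, data="\n".join(lines).encode(), method="POST")
    urllib.request.urlopen(req)

if __name__ == "__main__":
    write_to_influx(read_sensors())
```

Run from cron every 30-60 seconds, this gives the same sort of temperature-over-time graph referenced in that post.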