Hikakiller

Members
  • Content Count: 21
  • Joined
  • Last visited

Community Reputation: 0 Neutral

About Hikakiller
  • Rank: Member


  1. @limetech any insight? @Benson @johnnie.black Changing my vm dirty-cache numbers didn't change my network transfer speed, but it did take my average parity check from 180 Mb/s to 970 Mb/s.
  2. Right, but I'm seeing a slowdown before the 5% cap is reached. So even if it were forcing a hard stop, the slowdown I'm seeing isn't happening at that hard cache flush.
  3. Sure, but from my understanding, background ratio is when it starts flushing the cache, and dirty ratio is when it hard-stops writers and forces a flush. So even with a 1% hard flush I should be getting 2 Gb/s of transfer on 10GbE, and I'm not getting that even with the hard cap at 90%.
  4. I kind of suspect this isn't related to network cache size. Look at this post. @johnnie.black, are you sure these are the correct values? There's even a sysctl value for TCP file caching over the network.
  5. Ah, sorry. My main Windows computer is virtualized on my Unraid machine.
  6. Sorry, this is a VM on the Unraid array.
  7. Yeah, I do. Thanks, I'll try that. Alright, I'll do that.
  8. I tried this. I set background ratio to 5% and dirty ratio to 90% and transferred a 5 GB file. It took 25 seconds. I also set the flush time to 8 minutes for good measure; it didn't change anything. I'm quite confused by the naming scheme here. Wouldn't these values mainly be used by VMs?
  9. Are there any logs I could include to help diagnose this?
  10. I suppose I could be getting less than 10GbE, but I think that's just my cache drives maxing out. I've got dual WD Blacks in RAID 0, and most of my array is HDDs. If I throw my 970 Evo Plus in there, or my 850 Pro, I can write to them faster than 200 MB/s. But that's not the main issue: I have gone above 1 Gb/s, measured by Windows Task Manager, and both sides of the link show 10GbE. I'll try raising and lowering the caching ratios, but I don't know which would be ideal. I'm on a dual-UPS backup with about 20 minutes of runtime each, so I guess I could raise them. But how often do you copy something you have only a single copy of? I'm guessing that if I calculate the max amount of RAM given to caching by multiplying my disk speed by my battery-backup runtime, then subtracting about 20% for safety, the OS should have enough time to destage the cache.
  11. Isn't iperf for network speed testing? I'm getting the full 10GbE. I've also tried it between two VMs on the array, with two separate SSDs.
  12. Hmm, even at the default ratios, those amounts would be hugely larger than my file transfer sizes. Maybe something else is the issue? Even transferring World of Warcraft (Classic) over the network takes minutes, not tens of seconds. It's a 4.5 GB file, and I'm seeing a peak of 200 MB/s, then a drop to 5, and so on. At the default values it should basically load into RAM instantly.
  13. I've noticed Unraid doesn't seem to be using my RAM cache, or it's using it in small bursts. When writing to my array over the 10GbE network, transfers have deep troughs in speed: it will start out writing, say, a movie at 200 MB/s, but basically stop every 2 seconds, and during each stop I see disk usage go up on the server. Why not either write the whole file directly to the disks, or write it all to RAM and complete the move in the background? I have 256 GB of RAM and two 2643v2 CPUs, so I doubt it's a lack of server resources. On top of that, the files are usually <10 GB, which is small compared to my available RAM. Total RAM allocated to VMs is 20 GB (4 GB for Ubuntu and 16 GB for Windows), with however much a deluge-vpn Docker needs being allocated dynamically. Is there anything I can do to increase performance? Does it have anything to do with settings such as nr_requests, md_sync_window, etc.? Thanks.
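For reference, the vm.dirty_background_ratio / vm.dirty_ratio semantics debated in these posts translate into byte thresholds like this. A minimal sketch in Python; the 256 GiB RAM figure and the 1%/90% ratios come from the thread, everything else is illustrative:

```python
def dirty_thresholds(total_ram_bytes, background_ratio, dirty_ratio):
    """Return (background_bytes, hard_limit_bytes).

    Background writeback starts once dirty pages exceed
    background_ratio percent of RAM; writers are throttled (blocked)
    once dirty pages exceed dirty_ratio percent.
    """
    background = total_ram_bytes * background_ratio // 100
    hard_limit = total_ram_bytes * dirty_ratio // 100
    return background, hard_limit

ram = 256 * 1024**3  # 256 GiB, as described in post 13
bg, hard = dirty_thresholds(ram, 1, 90)
print(f"background flush starts at {bg / 1024**3:.2f} GiB dirty")
print(f"writers throttled at {hard / 1024**3:.2f} GiB dirty")
```

Even at a 1% background ratio, that is roughly 2.5 GiB of dirty data before background flushing starts, which is consistent with post 12's point that a 4.5 GB file should mostly land in RAM.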
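The sizing rule sketched in post 10 (disk speed times UPS runtime, minus roughly 20% for safety) works out as follows. The 200 MB/s and 20-minute figures are the thread's numbers; the function name and safety margin are illustrative assumptions:

```python
def max_cache_bytes(disk_mb_per_s, runtime_minutes, safety=0.2):
    """Bytes the array can destage within the UPS runtime, derated by a safety margin."""
    raw = disk_mb_per_s * 1024**2 * runtime_minutes * 60
    return int(raw * (1 - safety))

# ~200 MB/s sustained array write speed, ~20 minutes of UPS runtime
budget = max_cache_bytes(200, 20)
print(f"safe write-cache budget: {budget / 1024**3:.1f} GiB")
```

Under those assumptions the budget comes out well above the file sizes being transferred, so UPS runtime would not be the limiting factor here.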
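One way to check whether the RAM cache is actually filling during a transfer is to watch the Dirty and Writeback counters in /proc/meminfo while copying. A small parsing sketch; the field names are real kernel fields, but the sample text below is made up for illustration:

```python
def parse_meminfo(text):
    """Map /proc/meminfo field names to their values in kiB."""
    fields = {}
    for line in text.splitlines():
        name, _, rest = line.partition(":")
        parts = rest.split()
        if parts:
            fields[name.strip()] = int(parts[0])
    return fields

# Illustrative sample; on the server you would read open("/proc/meminfo").read()
sample = """MemTotal:       263842812 kB
Dirty:            1048576 kB
Writeback:         204800 kB"""

info = parse_meminfo(sample)
print(f"dirty: {info['Dirty'] // 1024} MiB, "
      f"writeback: {info['Writeback'] // 1024} MiB")
```

If Dirty stays near zero during the stalls described in post 13, the writes are bypassing or rapidly draining the page cache rather than buffering in RAM.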