LennyNero

Members
  • Posts: 5
  • Joined
  • Last visited


  1. Assuming that you're using Windows, I'm afraid I can't help... as I'm just using the built-in firewall of macOS.
  2. Update: with the firewall disabled it at first looked like an improvement - but when I re-tested I couldn't confirm the result. The fluctuation, it seems, was just within the margin of error.
  3. Hello all, I just ran an iperf check to reconfirm that the issue is not the network itself. Here are the results: Looks pretty solid, I think. I'll check performance with different firewall settings next...
  4. I did read that mixing user shares and disk shares is unwise. Is that no longer true? For the stats shown in the first post I adjusted the y-axis of the network diagram to show MB/s rather than Mbit/s, so it's directly comparable. In the good old days of 1 Gbit/s networking I could easily get a transfer speed of 110 MB/s. Applying the same ratio to a 10 Gbit/s network, I should get a transfer speed of 1100 MB/s - especially since I'm using 4 NVMe SSDs: even a single one of these should be able to saturate the network more than twice over. The poor performance of the SSD cache is what concerns me most.
  5. Greetings! I've been experimenting with Unraid for almost two weeks now, and my new server is more or less ready to be deployed. However, I've got some questions about my findings: there's probably still something wrong with the settings... or perhaps I just have the wrong expectations?

     To test the performance I copied a 70.62 GB file back and forth a few times, timed each transfer with a stopwatch, and calculated the MB/s rate accordingly. Docker and VMs are disabled; this is intended to be a pure file server. My desktop computer is equipped with a fast SSD, so that should not be a limiting factor.

     1. Write test directly into the array (user share, no cache): 7:57 min -> 152 MB/s
        A slightly disappointing result; even my old external USB drive was faster than this. But I think the reason is that parity needs to be updated at the same time (right?). The disk setting (md_write_method) is set to reconstruct write - I understand this should be the fastest option? As you can see in the picture above, there is no constant stream of data over the network; it looks more like pulsing. Same for the drive activity. Is this considered normal?

     2. Read test directly from the array (user share, no cache): 4:37 min -> 261 MB/s
        As the read speed relates directly to the performance of a single drive, I think this result is acceptable. As seen above, network and drive activity look the same - a constant stream of data. Looks OK, right?

     3. Write test to a cache-only user share: 2:05 min -> 579 MB/s
        Now this result leaves me very disappointed. As far as I understand, parity is not updated when writing to the SSD cache? What setting could have an influence here? (Cache pool of 4 Samsung 970 EVO.) Again there is no constant stream of data - is that normal?

     4. Read test from a cache-only user share: 1:24 min -> 861 MB/s
        Again a slightly disappointing result. The stats above seem to be OK... it's just too slow.
I was expecting to get at least >1 GB/s out of a 10GBASE-T network. I set the MTU to 9000, and in Global Share Settings I enabled Direct IO. Is there any other setting that I've overlooked so far? Thank you for taking the time to read all this, and have a nice day!
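The rates quoted in the tests above can be reproduced with a few lines of Python - a small sketch assuming the 70.62 GB figure means binary gigabytes (70.62 × 1024 MB), which is the interpretation that matches the numbers in the first post:

```python
# Throughput arithmetic for the stopwatch tests above.
# Assumption: file size is 70.62 GiB, expressed in MB as 70.62 * 1024.
FILE_SIZE_MB = 70.62 * 1024  # ~72,315 MB

def rate_mb_s(minutes: int, seconds: int) -> float:
    """Average transfer rate in MB/s for a timed copy of the test file."""
    return FILE_SIZE_MB / (minutes * 60 + seconds)

print(round(rate_mb_s(7, 57)))  # array write -> 152
print(round(rate_mb_s(4, 37)))  # array read  -> 261
print(round(rate_mb_s(2, 5)))   # cache write -> 579
print(round(rate_mb_s(1, 24)))  # cache read  -> 861
```

The same arithmetic shows why 1100 MB/s is the natural expectation for 10 GbE: it is just the 110 MB/s seen on 1 GbE scaled by ten.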
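For anyone wanting to repeat the raw-network check from post 3, here is a minimal iperf3 sketch; the address 192.168.1.10 is a placeholder, so substitute your server's IP:

```shell
# On the Unraid server: start iperf3 in server mode
iperf3 -s

# On the desktop: 30-second test with 4 parallel streams
iperf3 -c 192.168.1.10 -P 4 -t 30

# Same test in reverse (server sends, desktop receives)
iperf3 -c 192.168.1.10 -P 4 -t 30 -R
```

Testing both directions matters here, since the write and read results above differ so much.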