stuoningur

Members · 11 posts
Everything posted by stuoningur

  1. Thank you for your reply! Time zones are just a social construct; I'm sure we would manage somehow if needed. I also had a feeling the test might be skewed, but my reasoning was: he gets this performance with those settings, so I should land somewhere in the same range. Wendell has bigger disks (but the same number of them), and who knows about the rest of his system, but he reports ~160 MB/s with the same test command and the same ZFS settings, while I was around 20. I'm not sure how to test the performance more rigorously or how to compare it.
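(Not from the original posts, but for anyone following along:) one way to narrow down where such a gap comes from is to watch the pool while the benchmark runs and to check the dataset properties that most affect throughput. A minimal sketch, assuming a standard OpenZFS install and using `dumpster` as the pool name from the fio command quoted later in this thread:

```shell
# Per-vdev throughput and IOPS, refreshed every second while fio runs;
# uneven numbers across disks can point at one slow drive or a bad cable
zpool iostat -v dumpster 1

# Properties that commonly explain large throughput differences
zfs get recordsize,compression,atime,sync dumpster
```

Comparing these live numbers against what fio reports shows whether the disks themselves are slow or whether the loss happens above them.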
  2. Sorry, I didn't see the message, but I really appreciate the offer. In the meantime I made some progress, and I think at least one issue is CPU performance: it doesn't boost as high as it could. Performance increased a lot with the CPU governor set to performance, but it is still slower than I would expect. I will experiment a bit more and may come back to your offer. Another comment rightly pointed out that I didn't provide much information, so here it is: a Ryzen 7 5700G system with 32 GB DDR4 and 4x 4 TB Seagate IronWolfs on an LSI Broadcom 9201-8i HBA. I tried RAID-Z1 and 2x two-drive mirrors. I basically followed the Level1Techs forum guide for the zpool, the dataset, and the test command.
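(For reference, not from the original post:) the governor change described above can be made through the Linux cpufreq sysfs interface, assuming the kernel exposes it; the exact tool varies by distro, so this is a sketch of the generic path:

```shell
# Show the current scaling governor for every core
cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Switch all cores to the "performance" governor (requires root)
echo performance | sudo tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Sanity check: current per-core clock speeds
grep MHz /proc/cpuinfo
```

Note that this change does not persist across reboots; it has to be reapplied at boot or set via the platform's own tuning mechanism.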
  3. I finally tried this plugin and set everything up: 3x 4 TB HDDs in a raidz1. I ran a command from the Level1Techs forum to test the speed, and the performance is abysmal: around 20 MB/s for both reads and writes, and I'm not sure how to troubleshoot it. In the example given in the forum he got around 150 MB/s with 4x 8 TB HDDs. The command is: fio --direct=1 --name=test --bs=256k --filename=/dumpster/test/whatever.tmp --size=32G --iodepth=64 --readwrite=randrw
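(My comment, not from the post:) `--readwrite=randrw` with a deep queue is close to a worst case for spinning disks, so a sequential run gives a useful upper bound to compare against. A minimal sketch reusing the path from the command above, and assuming the filesystem accepts `--direct=1` as the original command does:

```shell
# Sequential write, then sequential read, at 1 MiB blocks; HDDs should
# approach their streaming throughput here, unlike with random I/O
fio --direct=1 --name=seqwrite --bs=1M --filename=/dumpster/test/whatever.tmp \
    --size=8G --iodepth=8 --readwrite=write
fio --direct=1 --name=seqread --bs=1M --filename=/dumpster/test/whatever.tmp \
    --size=8G --iodepth=8 --readwrite=read
```

If the sequential numbers look healthy but the random-mixed run stays at ~20 MB/s, the pool is likely fine and the benchmark profile is simply punishing for HDDs.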
  4. Disabling bonding seems to have done the trick, thank you. It seems the newer drivers don't play nicely with the default settings.
  5. Attached the new diagnostics file, along with a comparison of read speeds between 6.9.2 and 6.10.0-RC3 (screenshots for each version). Both transfers were of a similarly sized video file. pantimos-diagnostics-20220311-1839.zip
  6. A quick test with RC3 shows the same behaviour. I will test in more detail later and provide new diagnostics.
  7. I will wait for RC3 then and update this report accordingly.
  8. I foolishly believed reporting in the thread was the way to go, but I reported it now separately as well.
  9. When running the current prerelease 6.10.0-RC2, I noticed my reads are limited to about 750 Mb/s, regardless of whether they come from the array or the SSD cache, while writes are normal and max out my 1 Gb/s network. I can reliably reproduce the issue: returning to the stable release brings reads back to 1 Gb/s, and switching back to the prerelease brings back the limited speed. Diagnostics attached. pantimos-diagnostics-20220307-2154.zip
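(A suggestion from me, not part of the report:) a quick way to separate a network regression from a storage one is to benchmark the raw link with iperf3 in both directions, bypassing the disks entirely. A sketch, with `<server-ip>` as a placeholder for the server's address:

```shell
# On the server (the machine being tested):
iperf3 -s

# On a client on the same network: forward direction (client sends),
# then -R to reverse it so the server sends, matching the "read" path
iperf3 -c <server-ip>
iperf3 -c <server-ip> -R
```

If the reversed run also tops out around 750 Mb/s on the RC while the forward run reaches line rate, the regression is in the network stack or driver rather than in the array or cache.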
  10. It seems that running RC2 limits reads from my array and cache to about 750 Mb/s. Rolling back to 6.9 gets me back to 1 Gb/s. I went back and forth a few times and can consistently reproduce it. No issues with writes, though.
  11. Is there a reason some default settings, like the interface and advanced DNS settings, have been changed? Especially "permit all origins"?