Everything posted by JorgeB

  1. This was the result (PCIe 3.0 x4):

         ndk_sh t u v w x y z aa
         /dev/sdt: 409.95 MB/s
         /dev/sdu: 409.91 MB/s
         /dev/sdv: 409.88 MB/s
         /dev/sdw: 410.22 MB/s
         /dev/sdx: 410.31 MB/s
         /dev/sdy: 410.54 MB/s
         /dev/sdz: 412.00 MB/s
         /dev/sdaa: 410.20 MB/s
         Total = 3283.01 MB/s

     I ran it 3 times and this was the best of the 3, so strangely a little slower than an Unraid read check. Do you mind sharing that one also? I have 4 NVMe devices in a bifurcated x16 slot but no good way of testing them, since an Unraid read check produces much slower than expected speeds with those. That's a good result, but I expect NVMe devices will be a little more efficient: with NVMe the controller on the device goes directly to the PCIe bus, while with SAS/SATA each device has its own SAS/SATA controller, then the HBA, and only then the PCIe bus, so I believe it can never be quite as fast as NVMe. I have no problem now acknowledging that the PCIe 3.0 bus itself can reach around 7GB/s with an x8 link, but IMHO those speeds will only be possible with NVMe devices; I still believe a SAS/SATA HBA will always be a little slower, around 6.6-6.8GB/s. P.S. What adapter are you using for 5 NVMe devices in one slot? I mostly find 4-device adapters to use on bifurcated slots; I guess yours will be more expensive due to the PCIe bridge, but it still might be worth it to get one more NVMe device in a single slot while maintaining good performance.
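     For reference, a minimal sketch of how a concurrent read test like that could be scripted (the device names, test size and the use of dd are my assumptions, not the actual script used above):

         #!/bin/bash
         # Read 1GiB from each device in parallel and report per-device throughput.
         # Device list is an assumption, adjust to match your system; run as root.
         DEVICES="sdt sdu sdv sdw sdx sdy sdz sdaa"
         for d in $DEVICES; do
             # iflag=direct bypasses the page cache so results reflect the device/bus
             dd if=/dev/$d of=/dev/null bs=1M count=1024 iflag=direct 2>"/tmp/dd_$d.log" &
         done
         wait
         total=0
         for d in $DEVICES; do
             # dd prints "... copied, 2.6 s, 410 MB/s" to stderr; grab the figure
             # before the final field (assumes dd reports in MB/s)
             speed=$(awk '/copied/ {print $(NF-1)}' "/tmp/dd_$d.log")
             echo "/dev/$d: $speed MB/s"
             total=$(awk -v t="$total" -v s="$speed" 'BEGIN {print t+s}')
         done
         echo "Total = $total MB/s"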
  2. Interesting, I feel the other way, i.e. that they did a good job with it. I have an older HP expander that is SAS2 but SATA2 only, so with SATA devices it can only do 1100MB/s per wide link; without Databolt (or the PMC equivalent) we'd only be able to get 2200MB/s per wide link with a SAS3 HBA+expander, so 4000MB/s (and around 5500MB/s with dual link) seems good to me. I think it's unrealistic to expect a 6G link to have the exact same performance as a native 12G link. It's not exclusive to SATA, it's link speed related: using SAS2 devices with a SAS3 HBA+expander will behave the same, since they also link at 6G, and because there's no 12Gb/s SATA nothing much can really be done about it; SAS2 users can upgrade to SAS3 devices if they really want max performance. Some more interesting info I found on this: read speed aligns well with my results with dual link and the expander, and write speeds are even better, basically the same as native 12G, which is something I can never replicate since the SSDs I have for testing are fast at reading but much slower at writing. The expander can be seen for example with lsscsi -v:

         [1:0:12:0] enclosu LSI SAS3x28 0601 -
           dir: /sys/bus/scsi/devices/1:0:12:0 [/sys/devices/pci0000:00/0000:00:01.0/0000:01:00.0/host1/port-1:0/expander-1:0/port-1:0:12/end_device-1:0:12/target1:0:12/1:0:12:0]
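     On the per-wide-link numbers above, my back-of-the-envelope arithmetic (assuming the usual protocol overhead on top of 8b/10b encoding): a wide link is 4 lanes, a 3Gb/s SATA2 lane carries roughly 275MB/s usable and a 6Gb/s lane roughly 550MB/s, so 4 x 275 = 1100MB/s and 4 x 550 = 2200MB/s, matching the figures above.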
  3. Again, that's not an Unraid issue, it's usually board/BIOS related. Those diags are from after rebooting; if you need help diagnosing the unclean shutdown, post the diags that are automatically saved to the flash drive after one, in the logs folder.
  4. It doesn't, but if there were any sparse files on the source (like vdisks) they won't be sparse on the destination unless you use the appropriate flag.
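     For example, assuming the copy is being done with rsync (cp has an equivalent --sparse=always option; the paths are placeholders):

         # -a preserves attributes, -S (--sparse) recreates sparse files on the destination
         rsync -aS /mnt/cache/domains/ /mnt/disk1/domains/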
  5. Not clear what you mean, do you want to use the new DL380 as a new server or as an enclosure?
  6. Start by checking network bandwidth by running a single stream iperf test.
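     A minimal sketch, assuming iperf3 is available on both ends (the IP is a placeholder):

         iperf3 -s                  # on the server
         iperf3 -c 192.168.1.10     # on the client; a single stream is the default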
  7. Yep, so that it's moved to the array.
  8. I remembered that while I don't have the hardware to test the real max for an x8 PCIe 3.0 slot, I can test with the HBA in an x4 slot, so this is the same 9300-8i with 8 directly connected SSDs as above, but this time with bifurcation enabled on the slot, effectively turning it into an x4 slot:

         02:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 (rev 02)
           LnkSta: Speed 8GT/s (ok), Width x4 (downgraded)

     So an x8 slot should be able to do around 6800MB/s, possibly a little more with different hardware. I still think 7000MB/s+ will be very difficult, but I won't say it's impossible, though again I don't think the PCIe bandwidth is what's limiting the OP's speed; IMHO it's more likely the expander being used with non-12G devices.
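     For anyone wanting to check their own negotiated link speed/width, that status line comes from lspci (the 02:00.0 address is from my system, yours will differ; run as root):

         lspci -s 02:00.0 -vv | grep -E 'LnkCap|LnkSta'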
  9. Looks like it's not possible to just expose the disks individually with that enclosure unless you create individual RAIDs, so it's not ideal for Unraid: https://www.dell.com/community/PowerVault/Dell-MD3200-quot-Enhanced-JBOD-quot-IT-mode/td-p/3804722
  10. This is usually a board/BIOS issue. The unclean shutdown part means your timeouts are likely too low; you can analyze or post the diags saved on the flash drive at shutdown time to look for what the problem is.
  11. For the amount of data you have it's probably best to back up the pool to the array, create the new pool, and restore. But if you prefer to keep dockers/VMs online, it's also possible to add the new device and then remove the other ones one by one; just note that direct device replacement is broken on v6.9.x and can't be used unless done manually.
  12. It should be. Another common issue with Ryzen is RAM speed, so it's good to check you're within the max official specs.
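     A quick way to check the currently running speed against those specs, assuming dmidecode is available (run as root):

         # shows the rated and the currently configured speed for each DIMM
         dmidecode -t memory | grep -i speed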
  13. This is usually a flash drive problem, try recreating it or using a different one.
  14. You can either run a correcting check and then a non-correcting one, and the 2nd one should result in 0 errors, or alternatively run two non-correcting checks and see if you get the same results; if the number of errors is the same but low, it's good to check that they are the same blocks.
  15. No, if memtest didn't detect errors with both DIMMs it's very unlikely to with just one. Remove one DIMM and run two consecutive parity checks; if the second one still finds errors, do the same with the other DIMM. If there are still errors with either one alone, I would try a different board/CPU next.
  16. You can, if you don't need a pool, but that's not the main issue; the main issue is finding out what is corrupting the data.
  17. You can reboot before or after, it's just to clear the errors on the GUI, it won't affect anything else.
  18. Forgot to mention: to clear the read errors, reboot.
  19. The test passed, so the disk is good for now. You need to run a correcting parity check because, due to the previous read errors, parity won't be 100% in sync; then keep monitoring that disk.
  20. It's not a device problem, it's a filesystem problem; most likely the corruption happened for the same reason you're getting the sync errors. Not that I can see; you could try with just one DIMM at a time and see if the sync errors stop like that. Note that the first check after the problem is fixed may still find errors.
  21. No reason it shouldn't; SAS expanders are transparent to the OS. Depending on the number of disks you think you might need, you could also consider the Intel RES2SV240 or the RES2CV360; both are SATA3 and can be found considerably cheaper on eBay. Either one will give you a nice bandwidth bump; for more you'd need a PCIe 3.0 HBA.
  22. Booting in safe mode disables plugins, so worth trying.
  23. With 16 drives the bottleneck is going to be the expander: since it's SATA2 only it will be limited to 2200MB/s usable max, assuming dual link, which is around 140MB/s max per disk when all are used concurrently (see here for some more numbers). Just upgrading the HBA won't help here; you'd need a SAS2/SATA3 expander, and to improve even more, also a PCIe 3.0 HBA to go above the PCIe 2.0 limit.
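     To make the per-disk math explicit: 2200MB/s ÷ 16 drives ≈ 137MB/s per drive with all of them reading at once, hence the ~140MB/s figure above.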