frodr

Members
  • Posts: 526
  • Joined
  • Last visited


frodr's Achievements

Enthusiast (6/14)

Reputation: 38

Community Answers: 1

  1. Very nice and informative session. I kind of feel at home with Unraid. Imagine that we can get 2-3 Pro licences for the price of a hard drive. Really looking forward to version 7.0.
  2. It would be nice if anybody could confirm or deny this. Cheers,
  3. Creating a dataset through the plugin makes it appear as a user folder in the Array. Is this as it should be?
  4. Some runs: Testing the receiving server1 main storage (10x raidz2) write performance: about 1.3 GB/s. Testing the sending server2 (6x raidz2) read performance: about 500 MB/s. Server2's (6x hdd raidz2) write speed was about 700 MB/s. This is quite surprising to me. I thought the write speed on the receiving server1 (10x raidz2) would be the limiting factor, but it's the read speed of the 6x hdd raidz2 on server2 that limits the most. (A local read benchmark like the sketch after this list is one way to confirm that.)
  5. I'm running two servers with the same user data on both. If one dies, I have the other server on hand. The first server is online 24/7, except when I mess around. The second server also serves as a gaming VM. Every now and then (every 2-3 months) I rebuild the main storage on Server1 due to expanding the pool. Write speed is a priority. My goal is to saturate the 2 x 10Gb ports in Server1. As of today: Server1: 10 x sata 8 TB ssd raidz2 pool. Server2: 6 x sata 18 TB hdd raidz2 pool. What would be the best pool setup on Server1 for max write speed over the 2 x 10Gb nics with FreeFileSync (or similar)? Server2 has a 25Gb nic. (Some rough arithmetic for that target is sketched after this list.) Current speed: Happy for any feedback. //F
  6. I removed the HBA card and mounted the drives in the three M.2 slots on the mobo to get a baseline. cache: 2 x ZFS nvme ssd mirror; tempdrive: 1 x btrfs nvme ssd; kjell_ern: 10 x sata ssd in raidz2. The nvme drives seem to be ok. The reason for introducing the HBA in the first place was to rebuild the storage pool "kjell_ern" (where the media library lives) to include nvme ssds as a metadata cache to improve write speeds. Unless Supermicro Support comes up with a fix, I'll abandon the idea of an HBA and save 30W on top. We can kinda call this case closed. Thanks for holding my hand.
  7. Ok, I know a little bit more. The HBA only works with a pcie card in the slot that is paired with the x16 slot, and the x16 slot then only runs at x8: LnkSta: Speed 16GT/s, Width x8 (downgraded). Removing the pcie card from the paired slot, the mobo hangs at code 94 before bios. I have talked to Supermicro Support, which by the way responds very quickly these days; HighPoint products are not validated, but they are consulting a bios engineer. This means the HBA is running 6 x M.2s at x8, and there is some switching between the slots as well. The copy test was done between 4 of the M.2s; the last 2 are only sitting as unassigned devices. (A quick check of every device's negotiated link width is sketched after this list.)
  8. Yes, Rocket 1508. Great that it's running x16. I thought the mobo's slot7 only runs at x8 when slot4 is populated. I will try Ubuntu or Debian and move the M.2s around once a ZFS scrub test is done, tomorrow it seems. I have 6 x M.2 in the HBA, the same as when tested. Slot7 is the only slot available for x16. (Dear Intel, why can't we have a proper HEDT motherboard with iGPU CPUs?)
  9. I will test moving the M.2s around. Strange thing: the mobo hangs on code 94 with the HBA in slot7 and nothing in the paired slot4. When I add a NIC in slot4, the mobo does not hang. Tried resetting CMOS, no change. Also tried a few bios settings without luck. I will have to contact Supermicro Support.
  10. The HBA should run at x8 (slot7), not x16, as I have a pcie card in the paired pcie slot4. With the pcie card removed from the paired slot (slot4), the HBA in slot7 should run at x16, but then the mobo startup stops before bios with code 94. I will try to solve this issue.
  11. I might have an idea. Intel Core/W680 supports 20 pcie lanes from the CPU, right? In the server there are the HBA (x16), a NIC (x4) and a SATA card (x4). I guess the HBA might effectively drop down to x8. (The lane and bandwidth arithmetic is sketched after this list.)
  12. Ok, the test shows 420-480 MB/s from 2 x Kingston KC3000 M.2 2280 NVMe SSD 2TB in a ZFS mirror to 2 x WD Black SN850P NVMe 1TB in btrfs raid0, tested in both directions. What I forgot to tell you, and to remind myself of, is that the NVMe drives sit on an HBA, a HighPoint Rocket 1508; it is a pcie 4.0 HBA with 8 M.2 ports. Well, now I know the penalty for being able to populate 8 NVMes with good cooling on a W680 chipset mobo. Thanks for following along. My use case isn't that dependent on max NVMe speed. (But I would quickly change the mobo if Intel included an iGPU in higher-I/O CPUs like the Xeon 2400/3400.)
  13. Sorry, it's local speed, not over smb. The single-threaded performance of the i7-13700 is quite good. Can you see/suggest any reason this performance is well below 1-2 GB/s?
  14. Can anybody shed some light on this topic, please? If the speed above is what's to be expected, well, then I know that, and no need to chase improvements. Cheers,
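
For the bottleneck reasoning in item 4 (and the local-speed question in item 13), a quick way to separate pool read speed from network and protocol effects is to time a plain sequential read on each pool locally. Below is a minimal Python sketch, assuming a test file several times larger than RAM so the ZFS ARC cannot serve most reads from cache; the path is a made-up placeholder.

```python
import time

# Hypothetical test file on the pool being measured; it should be several
# times larger than RAM so most reads cannot be served from the ZFS ARC.
TEST_FILE = "/mnt/kjell_ern/benchmark/testfile.bin"
CHUNK = 1024 * 1024  # read in 1 MiB chunks


def sequential_read_speed(path: str) -> float:
    """Read the whole file sequentially and return throughput in MB/s."""
    total = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            total += len(data)
    elapsed = time.monotonic() - start
    return total / elapsed / 1_000_000


if __name__ == "__main__":
    print(f"Sequential read: {sequential_read_speed(TEST_FILE):.0f} MB/s")
```

Running this on server2's hdd pool and comparing with the ~500 MB/s seen during the transfer would show whether the sending side really is the ceiling.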
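
For the pool-layout question in item 5, it helps to pin down the target number first. The Python sketch below is back-of-the-envelope arithmetic only: the payload-efficiency factor and the ~450 MB/s per-disk sequential write figure are assumptions rather than measurements, and "raidz2 writes scale with disks minus two parity" is a rough rule of thumb for large sequential writes.

```python
# Rough check: can a 10-disk SATA SSD pool keep up with 2 x 10 GbE?
LINK_GBPS = 10              # per NIC port, gigabits per second
PORTS = 2
PAYLOAD_EFFICIENCY = 0.94   # assumed allowance for TCP/SMB overhead

target_mb_s = LINK_GBPS * PORTS * 1000 / 8 * PAYLOAD_EFFICIENCY
print(f"Target write throughput: ~{target_mb_s:.0f} MB/s")

# Example layouts for 10 SATA SSDs, assuming ~450 MB/s sustained per disk.
PER_DISK_MB_S = 450
layouts = {
    "1 x raidz2 (10 disks)":     (10 - 2) * PER_DISK_MB_S,
    "2 x raidz2 (5 disks each)": 2 * (5 - 2) * PER_DISK_MB_S,
    "5 x 2-way mirrors":         5 * PER_DISK_MB_S,
}
for name, est in layouts.items():
    print(f"{name}: ~{est} MB/s theoretical sequential write")
```

On these assumed numbers every layout clears the ~2.3 GB/s target on paper, which is consistent with item 4's finding that the sending side, not the receiving pool, was the limit.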
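
The "Width x8 (downgraded)" line quoted in item 7 comes from lspci's verbose output; comparing LnkCap (what a device can negotiate) with LnkSta (what it actually negotiated) for every device is a quick way to spot downgraded links. A rough Python wrapper around `lspci -vv` is sketched below. It usually needs root to see the link capability blocks, and some devices legitimately train down when idle, so treat mismatches as hints rather than faults.

```python
import re
import subprocess

# Parse `lspci -vv` and flag devices whose negotiated link (LnkSta)
# differs from their advertised capability (LnkCap).
out = subprocess.run(["lspci", "-vv"], capture_output=True, text=True).stdout

link_re = re.compile(r"Speed ([\d.]+GT/s).*?Width x(\d+)")
device, cap = None, None
for line in out.splitlines():
    if line and not line[0].isspace():
        device, cap = line.strip(), None      # new PCI device header
    elif "LnkCap:" in line and (m := link_re.search(line)):
        cap = m.groups()                      # (speed, width) the device supports
    elif "LnkSta:" in line and (m := link_re.search(line)):
        sta = m.groups()                      # (speed, width) actually negotiated
        if cap and sta != cap:
            print(f"{device}\n  LnkCap {cap} vs LnkSta {sta}")
```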
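
Items 11 and 12 both boil down to PCIe arithmetic: how the CPU's 20 lanes might be split, and what a given link width is worth on paper. PCIe 3.0/4.0 use 128b/130b encoding, so a lane tops out around 0.98 and 1.97 GB/s respectively; the sketch below prints only those spec-sheet ceilings. Whether the NIC and SATA card actually draw from the CPU's 20 lanes or from the W680 chipset depends on how the board routes its slots, so the lane budget here is an assumption.

```python
def pcie_gb_s(gen: int, lanes: int) -> float:
    """Theoretical one-direction bandwidth in GB/s for a PCIe link."""
    gt_per_lane = {3: 8, 4: 16, 5: 32}[gen]        # transfer rate in GT/s
    return gt_per_lane * lanes * (128 / 130) / 8   # 128b/130b encoding, bits -> bytes

# Item 12: the Rocket 1508 uplink at Gen4 x8 vs x16.
print(f"PCIe 4.0 x8 : ~{pcie_gb_s(4, 8):.1f} GB/s")
print(f"PCIe 4.0 x16: ~{pcie_gb_s(4, 16):.1f} GB/s")

# Item 11: lane budget if the HBA, NIC and SATA card all had to share the
# CPU's 20 lanes (an assumption; chipset-attached slots use a separate budget).
cpu_lanes = 20
cards = {"HBA": 16, "NIC": 4, "SATA card": 4}
print(f"Requested: {sum(cards.values())} lanes, available from CPU: {cpu_lanes}")
```

Even at x8, the Gen4 ceiling is well above the 420-480 MB/s measured in item 12, so raw link width by itself would not cap the copy at that level.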