frodr

Everything posted by frodr

  1. Very nice and informative session. I kind of feel at home with Unraid. Imagine that we can get 2-3 Pro licences for the price of a hard drive. Really looking forward to version 7.0.
  2. It would be nice if anybody could confirm/deny this. Cheers,
  3. Creating a dataset through the plugin -> it appears as a user folder in the Array. Is this as it should be?
  4. Some runs: Testing the receiving server1 main storage (10x raidz2) write performance: about 1.3 GB/s. Testing the sending server2 (6x raidz2) read performance: about 500 MB/s. Server2 (6x HDD raidz2) write speed was about 700 MB/s. This is quite surprising to me. I thought the write speed on the receiving server1 (10x raidz2) would be the limiting factor, but it is the read speed of the 6x HDD raidz2 on server2 that is the bottleneck. (The local write test I use is sketched after this list.)
  5. I'm running two servers with the same user data on both. If one dies, I have the other server on hand. The first server is online 24/7, except when I mess around. The second server also serves as a gaming VM. Every now and then (every 2-3 months) I rebuild the main storage on Server1 due to expanding the pool. Write speed is a priority. My goal is to saturate the 2 x 10 Gb ports in Server1 (a quick target calculation is included after this list). As of today: Server1: 10 x SATA 8 TB SSD raidz2 pool. Server2: 6 x SATA 18 TB HDD raidz2 pool. What will be the best pool setup on Server1 for max write speed over 2 x 10 Gb NICs with FreeFileSync (or similar)? Server2 has a 25 Gb NIC. Current speed: Happy for every feedback. //F
  6. I removed the HBA card and mounted the drives in the three M.2 slots on the mobo to get a baseline. cache: 2 x NVMe SSD in ZFS mirror; tempdrive: 1 x btrfs NVMe SSD; kjell_ern: 10 x SATA SSD in raidz2. The NVMe drives seem to be OK. The reason for introducing the HBA in the first place was to rebuild the storage pool "kjell_ern" (where the media library lives) to include NVMe SSDs as a metadata cache to improve write speeds. Unless Supermicro Support comes up with a fix, I'll abort the idea of an HBA and save 30 W on top. We can kinda call this case closed. Thanks for holding my hand.
  7. Ok, I know a little bit more. The HBA only works with a PCIe card in the slot that is connected to the x16 slot, and the x16 slot then only runs at x8: LnkSta: Speed 16GT/s, Width x8 (downgraded). Removing the PCIe card from the switched slot, the mobo hangs at code 94 before the BIOS. I have talked to Supermicro Support, which by the way responds very quickly these days; HighPoint products are not validated, but they are consulting a BIOS engineer. This means the HBA is running 6 x M.2s at x8, and there is some switching between the slots as well. The copy test was done between 4 of the M.2s; the last 2 are only sitting as unassigned devices. (A quick way to re-check the negotiated link width is sketched after this list.)
  8. Yes, Rocket 1508. Great that it's running x16. I thought that mobo slot 7 only runs x8 when slot 4 is populated. I will try Ubuntu or Debian and move the M.2s around once a ZFS scrub test is done, tomorrow it seems. I have 6 x M.2 in the HBA, same as when tested. Slot 7 is the only slot available for x16. (Dear Intel, why can't we have a proper HEDT motherboard with iGPU CPUs?)
  9. I will test moving the M.2s around. Strange thing: the mobo hangs on code 94 with the HBA in slot 7 and nothing in the connected slot 4. When adding a NIC in slot 4, the mobo does not hang. Tried resetting CMOS, no change. Also tried a few BIOS settings without luck. I will have to contact Supermicro Support.
  10. The HBA should run at x8 (slot 7), not x16, as I have a PCIe card in the connecting PCIe slot 4. Removing the PCIe card from the connecting slot (slot 4), the HBA in slot 7 should run at x16. Doing so, the mobo startup stops before the BIOS screen with code 94. I will try to solve this issue.
  11. I might have an idea. Intel Core/W680 supports 20 PCIe lanes, right? In the server there is an HBA (x16), a NIC (x4) and a SATA card (x4). I guess the HBA might effectively drop down to x8 (a quick lane-budget sum is included after this list).
  12. Ok, the test shows 420-480 MB/s from 2 x Kingston KC3000 M.2 2280 NVMe SSD 2TB in ZFS mirror to 2 x WD Black SN850P NVMe 1TB in btrfs RAID 0, tested in both directions. What I forgot to tell you, and to remember myself, is that the NVMe drives sit on an HBA, a HighPoint Rocket 1508. It is a PCIe 4.0 HBA with 8 M.2 ports. Well, I now know the penalty for being able to populate 8 NVMe drives with good cooling on a W680 chipset mobo. Thanks for following along. My use case isn't that dependent on max NVMe speed. (But I would quickly change the mobo if Intel included an iGPU in higher-I/O CPUs like the Xeon 2400/3400.)
  13. Sorry, it's local speed, not over SMB. The single-threaded performance of the i7-13700 is quite good. Can you see/suggest any reason this performance is well below 1-2 GB/s?
  14. Can anybody shed some light on this topic, please? If the speed above is what's to be expected, well, then I know that and there is no need to chase improvements. Cheers,
  15. The copy speed on my servers is generally quite poor. For Server1 in the signature: copying a single file from 10 x SATA SSD raidz2 to 2 x NVMe SSD in btrfs RAID 0 should be significantly better than below 400 MB/s. That's the speed of one single SATA SSD. To my understanding the speed should be roughly: 10 x 550 MB/s x 50% = 2750 MB/s. Right/wrong? (A rough estimate is worked through after this list.) Very nice to get views on this topic. // kjell-diagnostics-20240305-1820.zip
  16. The files are all .mkv. The progress bar also stops moving.
  17. Syncing with DirSyncPro suddenly "stops", then some files are marked yellow with their size blown up. Happy for any ideas of what's wrong. helmaxx-diagnostics-20240226-1554.zip
  18. I can't destroy a dataset. Tried several datasets right after a restart. Not aware of any services or containers using them. How do I go about destroying datasets? And how do I find out if they are in use? (A small check is sketched after this list.) Happy for any help. kjell-diagnostics-20240211-1925.zip
  19. OK. I will test this during the coming days. It seems to me that nginx is restarting; if I wait 20-40 sec., I usually can log in.
  20. It did not happen when booting into safe mode, tested 2 times.
  21. I will check tomorrow. A bit busy on the server now.
  22. Most of the time I log in I get this one: After several refreshes and re-entering user/pass, I get in. This time I had this: This came up because I entered the other server's password. I can't add any diagnostics as of now. But how do I fix the first issue? The second as well, I guess. The 503 issue is also on my other server. Cheers, Frode
  23. How can I destroy a dataset? Through ZFS Master I get this: ZFS Master settings are in destructive mode.
  24. It was just Safari having a hiccup. Strange, because it worked on the other server.
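
For the numbers in post 4: a minimal sketch of the kind of local sequential-write test worth running on a pool before blaming the network. The target path is only an example; point it at the pool's mount point and keep the file large enough to get past RAM caching.

```python
# Rough sequential-write benchmark for a pool mount point (path is an example).
import os, time

TARGET = "/mnt/user/test/benchfile"   # example path: put it on the pool under test
BLOCK = 1024 * 1024                   # 1 MiB per write
COUNT = 8 * 1024                      # 8 GiB total, enough to get past RAM caching

buf = os.urandom(BLOCK)               # random data, so ZFS compression doesn't flatter the result
start = time.monotonic()
with open(TARGET, "wb") as f:
    for _ in range(COUNT):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())              # make sure the data actually hit the pool
elapsed = time.monotonic() - start
print(f"wrote {COUNT // 1024} GiB in {elapsed:.1f} s -> {COUNT / elapsed:.0f} MiB/s")
os.remove(TARGET)
```

dd or fio will give the same answer with less typing; this is just a dependency-free version.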
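
For the 2 x 10 Gb goal in post 5, a back-of-the-envelope for the write speed the receiving pool has to sustain. The protocol-overhead figure is an assumption, not a measurement.

```python
# Target write throughput needed to saturate two 10 Gb links.
link_gbit   = 2 * 10                    # two 10 Gb ports in parallel
raw_MBps    = link_gbit * 1000 / 8      # 2500 MB/s line rate
overhead    = 0.07                      # assumed SMB/TCP overhead, roughly 5-10%
target_MBps = raw_MBps * (1 - overhead)
print(f"receiving pool needs to sustain ~{target_MBps:.0f} MB/s of writes")  # ~2325 MB/s
```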
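
For the "LnkSta ... x8 (downgraded)" reading in post 7: the negotiated link speed and width can also be read per device from sysfs, which makes it easy to re-check after moving cards between slots. A small sketch using the standard Linux sysfs attributes; run it as root on the console.

```python
# Print negotiated vs. maximum PCIe link speed/width for every PCI device,
# i.e. the same information lspci -vv reports as LnkSta/LnkCap.
import glob, os

def attr(dev, name):
    with open(os.path.join(dev, name)) as f:
        return f.read().strip()

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        cur = f"x{attr(dev, 'current_link_width')} @ {attr(dev, 'current_link_speed')}"
        cap = f"x{attr(dev, 'max_link_width')} @ {attr(dev, 'max_link_speed')}"
    except OSError:
        continue                      # not every PCI function exposes link attributes
    print(f"{os.path.basename(dev)}: {cur}, capable of {cap}")
```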
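
For post 11, the lane budget summed up. Whether all three cards actually compete for the CPU's 20 lanes depends on which slots are CPU-attached and which hang off the W680 chipset, so treat this only as the worst case.

```python
# Worst-case lane-budget check for the cards mentioned in post 11.
cpu_lanes = 20                                # 16 (x16 slot) + 4, as per the platform
requested = {"HBA": 16, "NIC": 4, "SATA": 4}  # slot widths each card asks for
total = sum(requested.values())
print(f"cards ask for {total} lanes, CPU provides {cpu_lanes}")
if total > cpu_lanes:
    print("-> something must run narrower, e.g. the HBA dropping to x8")
```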
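
For the estimate in post 15: a common rule of thumb is that raidz streaming throughput scales with the number of data disks (total minus parity), which makes the 50% figure in the post the more conservative estimate. A quick calculation:

```python
# Rough streaming-read ceilings for a 10-disk raidz2 of SATA SSDs.
disks         = 10
parity        = 2        # raidz2
per_disk_MBps = 550      # typical SATA SSD sequential read
rule_of_thumb = (disks - parity) * per_disk_MBps   # scales with data disks -> ~4400 MB/s
post_estimate = disks * per_disk_MBps * 0.50       # the 50% figure from the post -> ~2750 MB/s
print(f"rule of thumb: ~{rule_of_thumb} MB/s, post's estimate: ~{post_estimate:.0f} MB/s")
# Either way, a single-file copy below 400 MB/s is far under both numbers.
```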
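
For post 18: before destroying a dataset it helps to see what still has files open on it (Docker containers and shares are the usual suspects). A minimal sketch, with an example dataset name and mount point, assuming fuser is available on the console.

```python
# Check what is using a dataset, then destroy it once it is free.
import subprocess

DATASET = "pool/share/old"        # example dataset name
MOUNT   = "/mnt/pool/share/old"   # its mount point

# fuser -vm lists the processes that still have the filesystem in use
subprocess.run(["fuser", "-vm", MOUNT])

# once nothing holds it open, destroy it; -r also takes child datasets/snapshots
subprocess.run(["zfs", "destroy", "-r", DATASET], check=True)
```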