David Dernovoj

  1. I finally managed to take down my array for maintenance. A direct connection doesn't help at all: same results. I have also tried various fixes from the frequently referenced 10G SMB guide, but nothing helps. I have noticed that I can get a good 6-8 Gbps of throughput using parallel copies via Explorer, rather than individual files. I'd rather not start 10 processes, though, if I can just do one batch; see the commands below. It doesn't help when I'm moving one large file, either. Would diagnostics help at this point?
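     One thing I still want to try: robocopy can do the parallelism in one batch instead of 10 Explorer windows. A minimal sketch, assuming the server is reachable as \\tower and "share" stands in for the real share name:

         robocopy C:\source \\tower\share /E /MT:8

     /MT runs multiple copy threads inside one process, so it should behave like the parallel Explorer copies without the window juggling. I also want to confirm that SMB Multichannel is actually active on the client:

         Get-SmbClientConfiguration | Select-Object EnableMultiChannel
         Get-SmbMultichannelConnection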
  2. I guess I'm not going to be able to test this for a while then, because I'd need to move the 10G NIC accordingly. However, I am getting the appropriate download speeds (copying from the server); it's the upload that suffers. I have also measured 4-6 Gbit/s via iperf, so I feel like something's wrong with SMB specifically. But since I can't test this right now, I guess we're just going in circles.
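     For when I can test again, the plan is to rerun iperf with parallel streams and in reverse, to see whether the upload/download asymmetry already shows up at the TCP level (iperf3 assumed; 10.0.0.2 is a placeholder for the server's address):

         iperf3 -s                    # on the Unraid server
         iperf3 -c 10.0.0.2 -P 4      # client -> server, 4 parallel streams
         iperf3 -c 10.0.0.2 -P 4 -R   # reversed: server -> client

     If the upload direction is slow even in iperf, it's the network or NIC; if iperf is symmetric, that points back at SMB.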
  3. Sadly that's not the solution; I already had it disabled. I even exited the GlassWire process and it didn't change transfer speeds. I strongly believe a firewall would block the traffic entirely, not just slow it down.
  4. That's true. However, I am not sure how to proceed on this SMB problem. I can copy from the server relatively fast (and I'd be happy even with 500-700 MB/s), but writing to it I am barely scratching 300 MB/s, and only sometimes: it starts at around 90, climbs to the mid-200s, and then keeps jumping between 100 and 300. That is in the optimal scenario with large files, not with many smaller files. A write test that bypasses Explorer is sketched below.
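     To take Explorer out of the equation entirely, I might also measure the share with diskspd (a separate Microsoft download, not built into Windows; the UNC path is a placeholder):

         diskspd -c8G -d30 -w100 -b1M -t1 -o8 -Sh \\tower\share\test.dat

     That runs 30 seconds of 1 MiB sequential writes with caching disabled, which should show whether the jumping speeds come from the copy tool or from the share itself.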
  5. I might have been a bit unclear in general: I am seeing 70% CPU load on the client side, on a Windows machine.
  6. No top on Windows, I'm afraid, unless I missed something ^^
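     The closest thing I've found is PowerShell:

         Get-Process | Sort-Object CPU -Descending | Select-Object -First 10 Name, CPU

     The CPU column is cumulative processor seconds rather than a live percentage like top, but running it during a copy still shows which process is burning the time.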
  7. Sorry, I had to put this down for a while and want to revive it. I was finally able to do some maintenance and try disk shares. For a few specs, on the server (which runs Unraid) I am using:
       • 8x 14 TB HDD in a hardware RAID with a MegaRAID SAS 9361-8i controller (array)
       • 4x 4 TB Western Digital Ultrastar DC SN630 (cache)
       • AMD Epyc 7702P
       • 512 GB DDR4-2933 ECC
       • Supermicro H11SSL-i mobo (I believe)
       • Intel 82599ES dual 10 Gbit SFP+ NIC
     On the client, I am using:
       • i9 [email protected] GHz
       • 128 GB DDR4-3200 (running at 2800)
       • Samsung 960 PRO 2 TB
       • 2x Samsung 970 EVO 1 TB
       • 2x Samsung 860 EVO 2 TB
       • RTX 2080 Ti
       • Intel X520 dual 10 Gbit SFP+ NIC
     Both machines are connected via an Ubiquiti UniFi 48-port switch; it has 2 SFP+ ports, one going to each machine. On Unraid, I have made the 10 Gbit NIC my primary network device, and on my desktop it is, too. I tried a disk share, and my writes to the cache were very inconsistent (100-350 MB/s); writing to the storage array, I got about 200-300 MB/s. Not a lot of difference between that and writing to a user share. I noticed that every time I copy, my CPU usage is quite high on all 18 cores / 36 threads, up to 70% at times. Is SMB that taxing? I wasn't even copying small individual files; I am trying big files (>4 GB each) to get the best sequential speed out of it. If I can provide any more information, or should dump anything I can from Unraid itself, please let me know.
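     Regarding the CPU load, one thing I want to rule out is SMB signing or encryption, since both are CPU-hungry. On the Windows side this shows what each connection negotiated, and whether the X520 is seen as RSS-capable (without RSS, SMB work can pile onto a single core):

         Get-SmbConnection | Select-Object ServerName, ShareName, Dialect, Signed
         Get-SmbClientNetworkInterface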
  8. I can't just take my array down at random every time to change settings, but I'll give this a try tonight and report back. Thanks!
  9. I have a RAID controller hooked into my array as storage, and 4x 4 TB NVMe drives set up as cache. I have a 10 GbE (SFP+) network card in both my workstation and my Unraid server. iperf showed about 6.5 Gbps, which is a bit lackluster but shouldn't affect me that much. The share I'm copying to is set to cache:yes (I have also tried a different share with cache:prefer), and I am getting subpar speeds of 100-150 MB/s. To my understanding, everything I'm copying should go to the cache drives and later be moved onto the storage array. So what exactly am I doing wrong? Even though the network throughput is subpar (6.5 of 10 Gbps), I should be getting copying speeds somewhere around that, not the equivalent of a Gbit connection. If I can provide any further information to help resolve this, please let me know what exactly that would entail.
     Edit#1: I just copied a file from the server and got 700-950 MB/s, so about what I'd expect / hope to obtain. That means something is wrong when I'm writing to the server.
     Edit#2: I am copying from a local NVMe disk to the server (which, as mentioned, uses NVMe drives for cache), so my local read speeds aren't the bottleneck.
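     One way to split the problem in half, assuming the cache pool is mounted at /mnt/cache as is standard on Unraid, would be to write to it locally from the server's console and take the network out of the picture:

         dd if=/dev/zero of=/mnt/cache/ddtest bs=1M count=8192 oflag=direct
         rm /mnt/cache/ddtest

     If that reports well over 1 GB/s, the cache drives are fine and the slow writes are somewhere in the network/SMB path.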
  10. Thanks itimpi and trurl, that helps a lot. I marked the topic as solved by editing the title; I hope that's correct.
  11. Ah, alright, that explains it... I guess I shouldn't have just pressed add and run.
  12. Okay, thanks. I have one last question: why is my cache only 50% of the combined capacity of my 4 SSDs? Is it mirroring between them, effectively something comparable to a RAID 10?
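     (In case it helps anyone later: the pool profile can be checked from the Unraid console, assuming the pool is mounted at /mnt/cache:

         btrfs filesystem usage /mnt/cache

     If the Data line reports RAID1, every block is mirrored once across the devices, which would explain seeing only half the raw capacity.)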
  13. So is there any specific way I should be setting this up? I don't really want to toss the RAID controller. How or where exactly should I be setting up the cache? I don't really see a lot to configure in the storage array. Sorry if I'm being a pain and just need to RTFM; if that's the case, please point me to where I should read up and I'll go ahead and do that. I've tried looking into the documentation, but haven't quite found what I'm looking for so far. Edit: are we talking about this?
  14. I just set up my Unraid earlier (thanks to the help from the forum; I had UEFI boot issues) and started messing around with it. I have one huge logical HDD from my RAID controller (which runs RAID 6) and 4x 4 TB NVMe SSDs. My initial instinct was to use the huge logical HDD as the storage pool and the 4 NVMe SSDs as a cache. However, is there any way to specifically address the SSDs as actual storage, so they work fast all the time? Or am I just not getting the mentality of this storage approach and need to rethink? Here's my storage array configuration right now. I still haven't set up anything of value, so I can reformat and start over without any hassle.
  15. Wow, if I had known it could be so easy... I thought it was something about UEFI, but I wasn't sure what to do. Renaming did the trick, but how would I enable it in the flash creator? Customize -> Allow UEFI Boot? Thank you rachid for your suggestions, too. Looks like it was as simple as getting EFI boot to work. Now I just need to get it up and running, but that's for me to play around with again. Edit: I'm new here, as you can probably tell; can I somehow mark this as resolved?
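     For anyone who lands here later, the rename itself is trivial. From a Windows command prompt, assuming the flash drive mounts as E: (the drive letter is an assumption):

         ren E:\EFI- EFI

     After renaming the EFI- folder to EFI, the stick shows up as a UEFI boot option.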