Everything posted by ryann

  1. Parity check finished. Reads are right at 1GB/s, a small improvement over the 800MB/s I was seeing, but still short of the ~1.2GB/s of a fully saturated 10Gb link. I can live with that. Writes, on the other hand... still around 300MB/s. For those asking about the UDMP: I'm now on the same VLAN as the server, so traffic is not getting routed through the UDMP.
  2. Sorry for the confusion... Recycle Bin is an Unraid plugin. Docker hung as I took my array down, and because I had to do a hard reset, my array got flagged as dirty. You can edit smb-extra.conf from /boot/config, or from the Settings -> SMB page. Where I'm thinking I may have a bad config is where the plugin updated and seemed to goof up my config a bit:

        veto files = /._*/.DS_Store/

        [global]
        # SMB Performance & Security Tweaks
        min protocol = SMB2
        server multi channel support = yes
        aio read size = 1
        aio write size = 1
        use sendfile = yes
        nt acl support = no
        # min receivefile size = 16384
        # socket options = IPTOS_LOWDELAY TCP_NODELAY IPTOS_THROUGHPUT SO_RCVBUF=131072 SO_SNDBUF=131072

        #vfs_recycle_start
        #Recycle bin configuration
        [global]
        syslog only = Yes
        syslog = 0
        logging = 0
        log level = 0 vfs:0
        #vfs_recycle_end

     It's easy enough to fix, but I cannot try it until the parity check finishes (probably sometime tomorrow morning).
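     One way the fix might look, merging the plugin's second [global] block into the first (a sketch; keep whatever Recycle Bin settings your plugin version actually wrote):

        veto files = /._*/.DS_Store/

        [global]
        # SMB Performance & Security Tweaks
        min protocol = SMB2
        server multi channel support = yes
        aio read size = 1
        aio write size = 1
        use sendfile = yes
        nt acl support = no
        #vfs_recycle_start
        #Recycle bin configuration
        syslog only = Yes
        syslog = 0
        logging = 0
        log level = 0 vfs:0
        #vfs_recycle_end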
  3. I'm not sure if I successfully got this enabled or not. When I took my array down, Docker hung, so the array never went fully offline and I had to force power-cycle the server. Now it's going through a parity check, and I found out the Recycle Bin plugin modified the SMB config, so I have two [global] sections that I can't change until the parity check finishes. If two [global] sections is valid, then I don't see any improvement; it's either about the same or a little slower than before.
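     A quick way to see how Samba interprets the duplicate sections is testparm, which ships with Samba and dumps the effective merged configuration (path assumed; Unraid includes smb-extra.conf from its main smb.conf):

        # Parse the config and print the resulting settings; duplicated
        # [global] sections are merged, with later values overriding earlier ones
        testparm -s /etc/samba/smb.conf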
  4. I did try SMB3 but didn't see any performance improvement. That said, I misunderstood the original comment; I have it set to a minimum of SMB3 now. For Jumbo Frames, I enabled 9014 on the server and 9014 on my desktop and see no change in performance. As I mentioned in my last experiment, transferring files between my desktop and the VM behaves the way I would expect a 10Gbps connection to work, and the VM is reading and writing the same SSD on my server that the SMB share uses.
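     It's worth confirming that jumbo frames survive the whole path end to end, not just on each NIC. A sketch with a no-fragment ping (desktop IP hypothetical); 8972 is a 9000-byte IP MTU minus 28 bytes of IP+ICMP headers, and the "9014" in Windows driver settings already includes the 14-byte Ethernet header:

        # From the Unraid server to the desktop; fails if any hop drops jumbo frames
        ping -M do -s 8972 192.168.1.50
        # From the Windows desktop the equivalent is: ping -f -l 8972 <server-ip>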
  5. This is the best combo so far:

        # SMB Performance & Security Tweaks
        min protocol = SMB2
        server multi channel support = yes
        aio read size = 1
        aio write size = 1

     Read speeds are around 850MB/s. Writes are still pretty low at 320MB/s.
     [screenshot: read from share to desktop (not perfect, but I can accept it)]
     [screenshot: write from desktop to share]
     I retested transferring a file from my desktop PC (Windows 10) to the VM (Windows Server 2016) running on Unraid and hit my network bottleneck:
     [screenshot: read]
     [screenshot: write]
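     A rough server-side check that multichannel is actually negotiating: with it active, a single Windows client opens several TCP connections to port 445 instead of one.

        # List established SMB connections on the server
        ss -tn state established '( sport = :445 )'
        # On the Windows client, the PowerShell cmdlet
        # Get-SmbMultichannelConnection reports the negotiated channels directly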
  6. Looks like I can add this to my SMB extra parameters?

        #disable SMB1 for security reasons
        [global]
        min protocol = SMB2
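     After adding it, one way to confirm SMB1 is really refused is to force smbclient's maximum dialect down to NT1 (server name hypothetical):

        # Should fail with a protocol negotiation error once SMB1 is disabled
        smbclient -L //tower -m NT1 -N
        # The default (SMB2/3) listing should still work
        smbclient -L //tower -N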
  7. I'll admit... this isn't my day job, so I don't have a bunch of expertise; I'm just relying on what I can read. I actually have my MTU set to 1500. I found issues getting some of my Dockers running with Jumbo Frames enabled (I don't recall if it was 9000 or 9014). I know the lower MTU can affect speeds, but everything I've read so far says Jumbo Frames won't really fix the transfer issue. Let me figure out how to disable SMBv1 support; that does sound promising.
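     For reference, checking and temporarily changing the MTU from the shell looks roughly like this (interface name assumed; on Unraid it's normally set under Settings -> Network Settings):

        # Show the current MTU
        ip link show eth0 | grep -o 'mtu [0-9]*'
        # Raise it for jumbo frames (reverts on reboot unless set in the GUI)
        ip link set dev eth0 mtu 9000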
  8. That's where I'm at now. What's funny is I can set up a share in my VM running on the server and transfer at near 10Gb speeds, but using an Unraid SMB share I'm hitting around 1.2Gbps, just as you are. With iperf3 on a single thread I can hit around 9Gbps, and multi-stream hits 9.8Gbps. So I think my target should be close to 9Gbps for SMB transfers (w/ NVMe SSDs).
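     For anyone reproducing this, the iperf3 baseline is just (server IP hypothetical):

        # On the Unraid server
        iperf3 -s
        # On the desktop: four parallel streams against it
        iperf3 -c 192.168.1.10 -P 4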
  9. Last update: my network runs on Ubiquiti UniFi equipment. For the 10Gb network, I have a US-16-XG and a Dream Machine Pro. The US-16-XG is a layer 2 switch, so no routing table lives on the switch; to cross VLANs, traffic has to go back to the gateway (the Dream Machine Pro).
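     An easy way to see that hairpin is a traceroute from a host on one VLAN to the server on the other; the first hop should be the UDMP's gateway address (target IP hypothetical):

        traceroute 192.168.20.10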
  10. Yea, that was the NIC I first noticed it on. When I found out today that the 82599ES didn't support SR-IOV, I switched to the X722. I fought the 82599ES for several days, though. Great idea to try a simple network first: the VM and my PCs are all on the same VLAN, while the server is on a separate one, and that's exactly what is going on... on the same VLAN I can get much closer to my 10Gbit ceiling. So now I need to figure out how to fix that... Thanks so much! It's been driving me nuts.
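      To check SR-IOV support from the Unraid shell (interface name assumed):

         # Number of virtual functions the NIC can expose; 0 or a missing
         # file means no usable SR-IOV on this interface
         cat /sys/class/net/eth0/device/sriov_totalvfs
         # The PCI capability list also names SR-IOV explicitly
         lspci -v | grep -i 'SR-IOV'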
  11. It affects file transfers as well:
      [screenshot: desktop to VM running on the server (on cache drive)]
      [screenshot: desktop to SMB share on the server (also on cache drive)]
  12. New experiment: I ran iperf3 on the VM running on the server and get about 10Gbps throughput... so the hardware seems to work.
  13. Forgot to mention... I get about the same speeds testing from a VM running on the server against the server's own iperf3 instance. So it's something local to the server itself.
  14. Hey everyone, I've had this issue for months (although I put it on the back burner for a bit). I have 10Gb networking set up at my home. I can easily saturate the link between my normal desktop and laptop using iperf3 w/ -P4:
      [iperf3 output]
      Unraid, however... I cannot get it to run even close to 10Gb with iperf3 w/ -P4:
      [iperf3 output]
      Details:
      - I tried setting my MTU to 9014 across the board. It works between my laptop and desktop, so they're not the issue. This goes through my US-16-XG switch, so that's also not the issue.
      - If I set my MTU to anything over 1500, I can no longer access the Web UI or my drives. It sounds like jumbo frames aren't supported... but they should be (Intel X722).
      - I found, using ethtool, that my RX/TX buffers default to 512. Setting them to 4096 doesn't change anything (see the ethtool sketch after this post). I'm somewhat confused that RX Jumbo max is 0, but looking online, that doesn't seem all too uncommon.
      I know my Unraid is unregistered (fun story there... my USB drive died this morning during a reboot to change the interfaces around; I'm running on a freebie drive until I get a new trusted USB drive tomorrow).
      Other info: nazaretianrack-diagnostics-20200801-1224.zip
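      The ring buffer inspection mentioned above looks roughly like this (interface name assumed):

         # Show current and maximum RX/TX ring sizes
         ethtool -g eth0
         # Raise both rings to 4096 (lost on reboot unless reapplied)
         ethtool -G eth0 rx 4096 tx 4096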
  15. Count me in for this issue. Attached are my diagnostics. Besides the error, everything seems normal. nazaretianrack-diagnostics-20190528-1341.zip
  16. The upgrade from 6.5.1 did not go smoothly. I ran the CA Upgrade Checker, which showed I was good to go. I'll have to debug when I get home; it's not up locally at all, and I'll need to get a monitor & keyboard connected to debug further. ☹️
      Update: the upgrade itself was fine. I believe the system got stuck in the BIOS or at shutdown; the power button immediately powered the system down, and it booted up like normal afterward.