Kyle Boddy

Everything posted by Kyle Boddy

  1. Huh, interesting solution. Unfortunately this wouldn't work for my small business's use case and deployment, but it would be interesting to hear from the devs why sshfs is so much faster than SMB, if that result can indeed be replicated.
  2. Shrug. I'm already planning on moving back to Windows Server 2019, primarily because of the FUSE filesystem, but also because of unRAID's generally poor support on this and other issues (like my Mellanox thread). It's insanely slow for large numbers of files. I only wish I had figured that out before I installed unRAID, paid for a license, and spent 3 weeks migrating data and configs. It's a shame, because I love the dashboard, Docker ecosystem, and tooling, but the core product is extremely flawed for this use case.
  3. From what I've pieced together from others' reports (general support from unRAID has not been very good, I might add), this seems to be an intractable problem with the FUSE filesystem that unRAID uses. I'll likely be switching back to Windows Server 2019, which is unfortunate.
  4. Good advice - I do have this set along with other settings on client-side computers to improve the access speed, and it's still 200x+ slower in unRAID.
  5. I might try that in the future, but I'd probably just switch back to Windows if I was going to remigrate the data. Is there any way to cache the folder structure in RAM (I have 40 GB available) to reduce this issue? Is the Folder Caching plugin supposed to do this?
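A way to experiment with this by hand, as a sketch: walking the share tree once pulls the directory metadata into the kernel's dentry/inode cache, which is essentially what the Folder Caching plugin's periodic "find" is doing. The share path below is a placeholder, not an actual share from this thread.

```shell
#!/bin/sh
# Sketch: pre-warm the kernel's dentry/inode cache by walking a share once.
# This is essentially what the Folder Caching plugin's periodic "find" does.
# /mnt/user/myshare is a placeholder path; substitute a real share.
SHARE="${SHARE:-/mnt/user/myshare}"

# Listing every entry pulls directory metadata into RAM; later listings are
# then served from cache until memory pressure evicts the entries again.
find "$SHARE" > /dev/null 2>&1

echo "walked: $SHARE"
```

Whether the cached entries survive until the nightly scripts run depends on memory pressure, which is what the plugin's cache_pressure setting is meant to influence.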
  6. This is an excellent guide - very easy to follow, and I did just that! Unfortunately, browsing directories with thousands of files in them is still very slow. File transfer speeds are acceptable, but indexing/browsing is incredibly slow, and on top of that I can't seem to get Folder Caching to work. This was not a problem when the server ran Windows 10, but on unRAID, browsing/indexing speeds are intolerably slow. Any idea whether RSS should help here, or any other mods?
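For context, a sketch of the kind of Samba settings such guides typically have you add (e.g. in unRAID's Samba extra configuration). The IP address and link speed below are placeholders for a 10G interface, not values from this thread; the extended `interfaces` syntax is how Samba is told an interface is RSS-capable for SMB multichannel.

```
# Sketch of typical SMB multichannel / RSS settings; IP and speed are
# placeholders for your own 10G interface.
[global]
    server multi channel support = yes
    interfaces = "10.0.0.2;capability=RSS,speed=10000000000"
    aio read size = 1
    aio write size = 1
```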
  7. Ran a Passmark bench just for the heck of it: https://www.passmark.com/baselines/V10/display.php?id=502030445134
  8. I am running the latest unRAID 6.10.3 with Folder Caching installed and activated, with cache_pressure set to 1. I have a few directories/shares that contain thousands of files at the top level. Formerly these shares were on a Windows Server box; SMB browsing of the directories was a bit slow in Windows Explorer, but fine after indexing. From the command line, after a single file listing (to presumably cache the file structure), access was nearly instant. This is not the case on unRAID. I have batch scripts that need to list the files in these directories nightly, and they are now somewhere between 200x and 1000x slower despite the Folder Caching plugin, a 10G-SFP connection, local access, and RSS support enabled (along with a client reboot and a server Samba restart). The Folder Caching log doesn't show much activity, just an "Executed find" message repeating every ten seconds or so while a client machine requesting a directory listing hangs for several minutes: I applied these fixes without much change: Any suggestions? Screenshots of my Folder Caching settings are uploaded along with Diagnostics. And heck, while you're here, if you can look into why I can't bind my Mellanox 10G-SFP card to eth0, feel free to have a look at this thread that has been dying a slow death: morra-diagnostics-20220807-1625.zip
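For anyone following along: the plugin's cache_pressure setting corresponds to the kernel's vm.vfs_cache_pressure tunable. A sketch of setting and verifying it by hand (needs root to change; the value 1 matches the setting described above):

```shell
#!/bin/sh
# Sketch: the plugin's cache_pressure setting maps to this kernel tunable.
# Lower values bias the kernel toward keeping dentry/inode caches in RAM
# (the default is 100); changing it requires root, so failure is tolerated.
sysctl -w vm.vfs_cache_pressure=1 2>/dev/null || true

# Read back what is actually in effect:
cat /proc/sys/vm/vfs_cache_pressure
```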
  9. Got it. Yeah, it's a weird one. Not sure how else to diagnose or try to fix it on my end. I recently patched the BIOS, updating it to the latest version (2019), but it had no effect.
  10. Just in case anyone asks, no, there are no other MAC addresses in the pulldown menu under Interface eth0 in Interface Rules:
  11. I was able to successfully bind eth2-4 to vfio-pci, which removed them from unRAID. However, the Mellanox card still does not show up in the Interface Rules list, as shown in the attached images. I have also attached anonymized Diagnostics. Can someone please look into this or advise me whether to file a bug ticket? I can try binding the last 1G Ethernet port to vfio-pci, rebooting, and seeing what happens, but I doubt that would fix the problem. ifconfig on the command line clearly shows eth1 = Mellanox card, and when it isn't bridged to br1, it gets the same IP (a static IP of 172.16.0.124 assigned from the router). Thanks for reviewing. This seems like a pretty serious bug. @JorgeB @bonienl tower-diagnostics-20220731-1837.zip
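For reference, a sketch of the generic sysfs mechanism that sits behind "bind to vfio-pci" (unRAID wraps this in its own tooling). It deliberately refuses to act unless a PCI address is passed in, since unbinding the wrong device can drop your network connection; the invocation shown in the comment uses a placeholder address.

```shell
#!/bin/sh
# Sketch: bind a PCI device to vfio-pci via sysfs. Run as root with the
# NIC's PCI address, e.g.:  DEV=0000:06:00.0 ./bind-vfio.sh
# (0000:06:00.0 is a placeholder, not an address from this thread.)
if [ -z "$DEV" ]; then
    echo "usage: DEV=<pci-address> $0"
else
    SYSFS="/sys/bus/pci/devices/$DEV"
    # Detach the device from its current driver, if it is bound to one:
    [ -e "$SYSFS/driver" ] && echo "$DEV" > "$SYSFS/driver/unbind"
    # Pin this specific device to vfio-pci, then bind it:
    echo vfio-pci > "$SYSFS/driver_override"
    echo "$DEV" > /sys/bus/pci/drivers/vfio-pci/bind
    echo "bound $DEV to vfio-pci"
fi
```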
  12. Would like to know as well if this was merged into the main branch.
  13. I'm going to try the vfio-pci method next week when I have some downtime I can schedule, but wanted to bump this one more time to see if anyone had any ideas before I do so. Thanks.
  14. I'm not sure I can on this machine - it's a Supermicro X8DT3-LN4F motherboard, and I looked at doing this a few days ago and saw no options to do that. I pulled the motherboard manual from the website and disabling onboard LAN ports is not featured in it. https://www.supermicro.com/manuals/motherboard/5500/MNL-1062.pdf
  15. Wondering if there was an update here, thanks. My fallback plan will be to run 4x1GBE bonded, but would really prefer to use the 10G-SFP Mellanox card I have.
  16. Got it thanks. I submitted hardware information via the webGUI to unRAID if that helps.
  17. I had issues with getting this to show up in the Interface List - does yours show up there? If so, did you do anything specific to get it there? I am trying to bind it to eth0 and running into this issue:
  18. I suppose this topic is relevant: However, multiple people still seem to have a related issue. Are there fixes available in the RC branch? Any idea what the known regression is? I cannot downgrade, as I only started on the recent production version of unRAID, so this is a relatively important issue for me. Thanks.
  19. There are many posts saying to move the MAC address of an eth3 or similar to eth0 to get the DNS settings and features of eth0. For example, I have my Mellanox 10G-SFP card connected via DAC and working over a 10G link, pulling an IP with no issue in unRAID. It's marked as eth4, and when it is connected while eth0 (1GBE) is not, DNS issues occur, because unRAID does not allow eth4 to get the DNS server from the DHCP server. No problem - reassign eth4's MAC to eth0 in the Interface List; I've read 20 posts on that subject. However, I haven't found anyone with this issue: the Mellanox card simply does not exist in the Interface List despite being in the Network Settings. Take a look at the attached image - it shows Interface eth0, eth1, eth2, and eth3, which are all onboard 1GBE NICs. eth4 is nowhere to be found despite appearing directly above that section, and working, to boot. At the moment I either have to set the DNS server by manually editing /etc/resolv.conf every boot, adding nameserver 172.16.0.1 to the file, or keep eth0 connected at 1GBE alongside eth4 connected via 10G-SFP, which causes the Fix Common Problems plugin to yell about how two NICs should not be on the same subnet (which is true). Furthermore, on booting with just eth4 plugged in, the unRAID console says that the IPv4 address is not set, even though it definitely is, via DHCP (another artifact of not having eth0 up, I'm sure). Any resolution to get this to show up? EDIT: The card is indeed an Ethernet controller: Mellanox Technologies MT27500 Family [ConnectX-3], but the mode is correctly set to Ethernet (eth), not InfiniBand (IB), as evidenced by the following command's output: root@Tower:~# cat /sys/bus/pci/devices/0000\:07\:00.0/mlx4_port1 eth tower-diagnostics-20220711-2134.zip
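One way to at least stop redoing the manual edit every boot, as a sketch: unRAID executes /boot/config/go at startup, so an idempotent append can live there. It is demonstrated on a scratch file below so the snippet is safe to run anywhere; on the server, RESOLV would point at /etc/resolv.conf instead.

```shell
#!/bin/sh
# Sketch: persist the manual DNS fix by running it at boot (unRAID runs
# /boot/config/go at startup). Demonstrated on a scratch file; on the
# server, set RESOLV=/etc/resolv.conf instead.
RESOLV="${RESOLV:-$(mktemp)}"

# Append the nameserver line only if it is not already there (idempotent):
grep -qx "nameserver 172.16.0.1" "$RESOLV" 2>/dev/null \
    || echo "nameserver 172.16.0.1" >> "$RESOLV"

cat "$RESOLV"
```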