
Kyle Boddy

Members
  • Posts: 23
  • Joined
  • Last visited


Kyle Boddy's Achievements

Noob (1/14)

1 Reputation

  1. Huh, interesting solution. Unfortunately it wouldn't work with my small business's use case and deployment, but it would be interesting to hear from the devs why sshfs is so much faster than SMB, if the result can indeed be replicated.
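For anyone who wants to try the sshfs comparison on their own setup, a minimal sketch; the host, share, and mount point below are placeholders I made up, not details from this thread, and the script only prints the commands since they need a live server:

```shell
#!/bin/sh
# Hypothetical sshfs mount of an unRAID user share.
# HOST, SHARE, and MOUNTPOINT are placeholders -- substitute your own.
HOST="tower.local"
SHARE="/mnt/user/myshare"
MOUNTPOINT="$HOME/mnt/myshare"

# Printed rather than executed, since running them requires a reachable
# server with sshd and a client with sshfs installed:
echo "mkdir -p $MOUNTPOINT"
echo "sshfs -o reconnect,kernel_cache root@$HOST:$SHARE $MOUNTPOINT"
```

The `kernel_cache` option lets FUSE keep page-cache contents between opens, which is one plausible reason sshfs can feel faster than SMB for repeated metadata access.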
  2. Shrug. I'm already planning on moving back to Windows Server 2019, primarily because of the FUSE filesystem, and also because of unRAID's generally poor support on this and other issues (like my Mellanox thread). It's insanely slow for large numbers of files. I only wish I had figured that out before I installed unRAID, paid for a license, and spent three weeks migrating data and configs. It's a shame, because I love the dashboard, Docker ecosystem, and tooling, but the core product is deeply flawed for this use case.
  3. From what I've pieced together from others' reports (general support from unRAID has not been very good, I might add), this seems to be an intractable problem with the FUSE filesystem that unRAID uses. I'll likely be switching back to Windows Server 2019, which is unfortunate.
  4. Good advice - I do have this set, along with other settings on client-side computers to improve access speed, and it's still 200x+ slower on unRAID.
  5. I might try that in the future, but I'd probably just switch back to Windows if I was going to remigrate the data. Is there any way to cache the folder structure in RAM (I have 40 GB available) to reduce this issue? Is the Folder Caching plugin supposed to do this?
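For what it's worth, the Folder Caching plugin's basic trick is to keep walking the shares with `find` so the kernel's dentry/inode cache stays warm in RAM; the same idea can be tested by hand. A sketch against a throwaway directory (on a real server you would point `DIR` at `/mnt/user/<share>` instead):

```shell
#!/bin/sh
# Sketch of manually warming the Linux dentry/inode cache the way the
# Folder Caching plugin does: walk the tree so directory entries stay in RAM.
# Uses a throwaway directory here; DIR would be a real share path in practice.
DIR=$(mktemp -d)
i=0
while [ $i -lt 100 ]; do
    touch "$DIR/file_$i"
    i=$((i + 1))
done

# Walking the tree loads every dentry into the kernel cache; repeated
# listings are then served from RAM until memory pressure evicts them.
find "$DIR" > /dev/null

# Lowering vm.vfs_cache_pressure (default 100) makes the kernel hold on to
# those cached entries longer; 1 strongly favors retaining them.
# Shown as a comment only, since it needs root:
#   sysctl vm.vfs_cache_pressure=1
COUNT=$(find "$DIR" -type f | wc -l)
echo "warmed listing of $COUNT files"
rm -rf "$DIR"
```

The RAM cost is modest: cached dentries are small, so even shares with hundreds of thousands of files fit comfortably in a few hundred MB, well within 40 GB.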
  6. This is an excellent guide - very easy to follow, and I did! Unfortunately, browsing directories with thousands of files in them is still very slow. File transfer speeds are acceptable, but indexing/browsing is incredibly slow, and I can't seem to get Folder Caching to work, to boot. This was not a problem when the server ran Windows 10, but on unRAID, browsing/indexing speeds are intolerably slow. Any idea whether RSS should help here, or any other tweaks?
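For reference, when RSS comes up in this context it usually means SMB3 multichannel on the Samba side; the smb.conf knobs generally involved look like the fragment below. The IP and speed are made-up examples, and it's worth checking that your Samba version supports these options before relying on them:

```ini
[global]
    # enable SMB3 multichannel so Windows clients can spread I/O across
    # multiple connections / RSS queues
    server multi channel support = yes
    # advertise the interface's RSS capability and speed (10 Gb/s here)
    interfaces = "192.168.1.10;capability=RSS,speed=10000000000"
```

Note this mostly helps bulk transfer throughput; it does little for the metadata-heavy directory-listing path, which is why it may not fix the browsing slowness.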
  7. Ran a Passmark bench just for the heck of it: https://www.passmark.com/baselines/V10/display.php?id=502030445134
  8. I am running the latest unRAID 6.10.3 with Folder Caching installed and active, and a cache_pressure of 1 set. I have a few directories/shares with thousands of files at the top level. These shares formerly lived on a Windows Server box; SMB browsing of the directories in Windows Explorer was a bit slow at first, but fine after indexing. From the command line, after a single file listing (to presumably cache the file structure), access was nearly instant. That is not the case on unRAID. I have batch scripts that list the files in these directories nightly, and they are somewhere between 200x and 1000x slower, despite the Folder Caching plugin, a 10G-SFP connection, local access, and RSS support enabled (along with a client reboot and a Samba restart on the server). The Folder Caching logs suggest it isn't doing much, just repeating an "Executed find" message every ten seconds or so while a client machine requesting a directory listing hangs for several minutes: I applied these fixes without much change: Any suggestions? Screenshots of my Folder Caching settings are uploaded along with Diagnostics. And heck, while you're here, if you can look into why I can't bind my Mellanox 10G-SFP card to eth0, feel free to have a look at this thread, which has been dying a slow death: morra-diagnostics-20220807-1625.zip
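To put a number on a slowdown claim like this, the measurement itself is simple: time a first listing of a large directory against an immediate repeat listing of the same directory. A self-contained sketch (it builds a throwaway directory; on the real server `DIR` would be the exported share path):

```shell
#!/bin/sh
# Rough way to quantify the cold-vs-warm directory listing gap.
# Creates a throwaway directory with many files; on a real server, point
# DIR at the SMB-exported share instead.
DIR=$(mktemp -d)
i=0
while [ $i -lt 2000 ]; do
    : > "$DIR/f$i"
    i=$((i + 1))
done

# Time a first listing and an immediate repeat (nanoseconds via GNU date).
# -f skips sorting, so this measures raw directory enumeration.
START=$(date +%s%N)
ls -f "$DIR" > /dev/null
MID=$(date +%s%N)
ls -f "$DIR" > /dev/null
END=$(date +%s%N)

COLD=$(( (MID - START) / 1000000 ))
WARM=$(( (END - MID) / 1000000 ))
echo "first listing: ${COLD} ms, repeat listing: ${WARM} ms"
NFILES=$(ls -f "$DIR" | wc -l)   # includes . and .., so 2002 here
rm -rf "$DIR"
```

Running the same timing from a Windows client against the share, versus locally on the server, would also show whether the slowness lives in SMB/FUSE or in the client's enumeration.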
  9. Got it. Yeah, it's a weird one. Not sure how else to diagnose or fix it on my end. I recently updated the BIOS to the latest version (2019), but it had no effect.
  10. Just in case anyone asks, no, there are no other MAC addresses in the pulldown menu under Interface eth0 in Interface Rules:
  11. I was able to bind eth2-4 to vfio-pci, which removed them from unRAID. However, the Mellanox card still does not show up in the Interface Rules list, as shown in the attached images. I have also attached anonymized Diagnostics. Can someone please look into this or advise me to file a bug ticket? I can try binding the last 1G Ethernet port to vfio-pci, rebooting, and seeing what happens, but I doubt that would fix the problem. ifconfig on the command line clearly shows eth1 = Mellanox card, and when it isn't bridged to br1 it gets the same static IP assigned by the router (172.16.0.124). Thanks for reviewing. This seems like a pretty serious bug. @JorgeB @bonienl tower-diagnostics-20220731-1837.zip
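For anyone following along: my understanding (worth verifying, it isn't confirmed in this thread) is that unRAID persists vfio-pci bindings in a config file on the flash drive, keyed by the device's PCI address as reported by `lspci -Dnn`. A sketch that only generates the binding line rather than writing anything, with a made-up PCI address:

```shell
#!/bin/sh
# Sketch of generating a vfio-pci binding entry for unRAID's boot config.
# ADDR is a placeholder PCI address -- find the real one with `lspci -Dnn`.
# NOTE: the /boot/config/vfio-pci.cfg path and BIND= format reflect my
# understanding of recent unRAID builds; verify before relying on them.
ADDR="0000:01:00.0"
LINE="BIND=$ADDR"
echo "$LINE"
# On the actual server you would append this to the flash config:
#   echo "$LINE" >> /boot/config/vfio-pci.cfg
# then reboot so the device is claimed by vfio-pci instead of its NIC driver.
```

In recent unRAID versions the same binding can be toggled from Tools > System Devices, which writes this config for you at the next boot.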
  12. Would like to know as well if this was merged into the main branch.
  13. I'm going to try the vfio-pci method next week when I have some downtime I can schedule, but wanted to bump this one more time to see if anyone had any ideas before I do so. Thanks.
  14. I'm not sure I can on this machine - it's a Supermicro X8DT3-LN4F motherboard. I looked into doing this a few days ago and saw no option for it. I pulled the motherboard manual from the website, and disabling the onboard LAN ports is not covered in it. https://www.supermicro.com/manuals/motherboard/5500/MNL-1062.pdf
  15. Wondering if there was an update here, thanks. My fallback plan will be to run 4x1GBE bonded, but would really prefer to use the 10G-SFP Mellanox card I have.