_whatever

Everything posted by _whatever

  1. Yeah, I get that. I tend to like to leave my Finder windows uniform in alphabetical sort in list view, just like Explorer in Windows, Nemo in Mint, etc. Apple has an article that explains this setting but they don't go out of their way to provide details. https://support.apple.com/en-us/HT208209
  2. Finder uses those files to remember sort order, view options, icon positions, etc., for each folder. That may be desirable for some, but the problem is that on network shares it adds latency while Finder parses the file contents and enumerates the settings, which on top of all of the other Apple file metadata and attributes is just bloat on top of standard SMB. In my testing, getting rid of them did help speed Finder up when traversing and showing the contents of folders with a lot of items (500+). It's hard to quantify exactly how much it helps, but it is especially noticeable in folders with thousands of objects. https://en.wikipedia.org/wiki/.DS_Store
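     If you're curious how many of these files have piled up on your shares, something like this from the Unraid terminal will count them (point it at a specific share instead of /mnt/user if you prefer):
        # count the .DS_Store and AppleDouble (._*) files under the user shares
        find /mnt/user \( -name ".DS_Store" -o -name "._*" \) | wc -l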
  3. After more testing I was able to get some decent performance with a few changes. It still isn't as fast as a Windows 10 or 11 client, or any of the Linux distros I've tested with (Ubuntu, Mint, Pop!_OS), but it's better than the default Samba config settings. I know it has to do with Apple's implementation of SMB, so I think this is likely as good as it gets. I would recommend against vetoing files in your Samba config since that will cause slowness when using Finder to browse shares. You're better off using find on Unraid to delete all .DS_Store and ._ files and then telling macOS to stop creating them on network shares.
     To delete all .DS_Store and ._ files, run the following as root from the terminal on Unraid:
        find /mnt/user \( -name ".DS_Store" -or -name "._*" \) -delete
     I ran the following on my MacBook to stop .DS_Store and ._ files from being created:
        defaults write com.apple.desktopservices DSDontWriteNetworkStores true
     This is what I settled on in my smb-extra.conf:
        [global]
        fruit:metadata = stream
        fruit:posix_rename = yes
        readdir_attr:aapl_max_access = no
        readdir_attr:aapl_finder_info = no
        readdir_attr:aapl_rsize = no
     And this is /etc/nsmb.conf on my MacBook (12.5.1):
        [default]
        mc_prefer_wired=yes
     Good luck!
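     If you want to double-check that the macOS setting took, reading the preference back should echo the value you set (it errors out if the key was never written):
        # read back the preference written above
        defaults read com.apple.desktopservices DSDontWriteNetworkStores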
  4. Adding the following does seem to help Finder show folder contents faster. I'll play around with it more tomorrow, but I'm curious what others' experience would be adding this:
        [global]
        readdir_attr:aapl_max_access = no
        readdir_attr:aapl_finder_info = no
        readdir_attr:aapl_rsize = no
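     In case anyone wants to try it: I add the block above to SMB Extras (which lands in /boot/config/smb-extra.conf on the flash drive) and then reload Samba so it takes effect without rebooting:
        # tell the running Samba daemons to re-read their config
        smbcontrol all reload-config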
  5. As much as I'd like to blame Apple because of the dumb crap they do, I don't think it's on them. I copied the same folder that takes minutes to load in Finder from a share on Unraid to a Windows 10 and a Windows 11 machine and shared it there. In both cases, when hitting the shares on the Windows machines, Finder doesn't have any lag, nor do I see the GetInfo Request for each file in the folder when doing a Wireshark capture. I suspect it's either how Samba was compiled or one of the default config settings. Unfortunately I think it's going to require a process of elimination: getting the Samba config using testparm -v, changing settings, and testing.
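     For the process-of-elimination part, dumping the full effective config (defaults included) to a file makes it easier to diff against a stock Samba setup:
        # -s skips the prompt, -v prints every parameter including built-in defaults
        testparm -sv > /tmp/samba-effective.conf 2>/dev/null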
  6. I think there are three distinct problems that tend to get conflated. One is the speed with which Finder traverses and lists folder contents on an SMB share. The second is throughput when copying from the server. The third is copying from the client to the server.
     On average it takes anywhere from 2-10 minutes for Finder to populate the contents of a folder with 1-2 thousand files in it, and sometimes it goes completely unresponsive and I get beachballs. Contrast this with a Windows endpoint, which immediately shows the contents of folders with zero lag.
     As far as transfer speeds from the server to the client, it really depends on the number of files and their sizes, but I can get about 130MB/s if I copy a single, large file from the server to the client. Copying many files of various sizes can be a mixed bag regardless of whether the client endpoint is Mac, Windows or Linux, and that's just par for the course in my experience. Even in corporate environments when copying files from flash storage arrays over 10GbE or faster I see this.
     The third issue is speed writing to the Unraid server over SMB, where the speed suddenly plummets after writing at a reasonable speed. This I think is more of an issue with SHFS and has been well documented elsewhere.
     To be clear, I think issues 1 and 2 combine to make the slowness worse when using Mac endpoints. My particular issue is Finder being slow to list folder contents, but it also shows up when I copy a folder with many objects, especially with many nested folders. When doing Wireshark captures I can see the Mac do a GetInfo Request to get the file size and file attributes for EVERY FILE! Doing the same capture from a Windows endpoint while browsing a share does not show this. It is unclear to me at this point if it is due to something stupid Apple is doing in their SMB implementation and/or something with how Samba is compiled on Unraid. I'm still doing additional testing but haven't had a lot of time to dedicate to this. At first blush I do think it's something to do with Samba, since I don't see this behavior when I copy the same folder to a share on a Windows 10 PC and then browse that share from my Mac. I need to do more Wireshark captures to see if I can find anything definitive.
     Apologies if this is scatterbrained. I'm sick with Covid and the cold/flu medicine I took makes me feel like I drank a pot of coffee. 🥴
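     If anyone wants to reproduce the capture comparison, this is roughly what I use to count the per-file lookups (SMB2 QUERY_INFO is command 16; the capture file name is just an example):
        # count SMB2 GetInfo requests sent while browsing the share
        tshark -r finder-browse.pcapng -Y "smb2.cmd == 16 && smb2.flags.response == 0" | wc -l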
  7. I'm on 12.5.1 as well and I have extremely slow performance in Finder just showing folder contents, especially in folders with more than a thousand objects. When I first got my MacBook it was running Big Sur and I don't recall having any slowness issues at all, but I was also on a 6.9.x build of Unraid. I've implemented https://support.apple.com/en-us/HT208209 and deleted all of the .DS_Store files that had been previously created, I've tried turning off caching per https://support.apple.com/en-us/HT207520, and I've tried setting multichannel to prefer wired connections and even turning off multichannel support per https://support.apple.com/en-us/HT212277. If I use a Windows PC or a Windows VM running in Parallels on my MacBook, the same folders on the share that are slow in Finder populate immediately. And I've used iperf and confirmed that I get full gigabit between my MacBook and the Unraid server when docked. I've tried a variety of Samba configurations including https://wiki.samba.org/index.php/Configure_Samba_to_Work_Better_with_Mac_OS_X and the behavior is the same.
     Edit - I wanted to mention that vetoing files in the config can cause slowness. You might try doing 'find /mnt/user/data \( -name '.DS_Store' \) -delete' on Unraid to remove any existing files and run 'defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool TRUE' on your Mac to stop it from creating .DS_Store files. Maybe that will help with performance for you?
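     For reference, the /etc/nsmb.conf entries I tried for the multichannel tweaks (one at a time, reconnecting the share in between) were mc_prefer_wired=yes to prefer the wired interface and mc_on=no to turn multichannel off entirely, e.g.:
        [default]
        mc_prefer_wired=yes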
  8. While this docker container works great on Unraid, if you're using it on a drive formatted with Btrfs, you might be killing your drive(s). A couple of months ago I gave my son my TP-Link OC200 for his apartment setup and decided to replace it with this docker container on my Unraid server. The other day I noticed that my cache drives, where the docker data lives, had unusually high numbers of writes accumulated, and after using iotop I saw the biggest contributor was mongod, followed closely by btrfs-transacti. Then a lightbulb went off in my head. I remembered reading some time ago about the gotchas of Btrfs and workloads with a lot of random writes (databases) causing excessive fragmentation and thrashing disks. This excellent post on the MongoDB community forums explains the issue very well.
     So I was left with 3 choices in that moment:
     1. Upgrade my Unraid license so I could add an XFS formatted drive for the Omada controller data.
     2. Remove one of the drives from my mirrored Btrfs array and make it a standalone XFS drive for the Omada controller.
     3. Figure out a way to retroactively turn off CoW for the MongoDB data.
     Since I was pressed for time (and money), option 3 seemed best, so that's the route I went, using these great instructions on the Arch Wiki. Because I had been previously bitten by the loopback write amplification bug in Unraid 6.8, I had switched from using a docker image to having docker write directly to disk. This made things a little bit easier since I had direct access to the database. So I stopped the docker service, went to the terminal, renamed the db folder in /mnt/cache/appdata/omada/data to 'dbold', created a new folder named 'db' and added the +C attribute, then used cp to recursively copy everything from dbold to db using the --reflink=never switch. Started the docker service back up and kept my fingers crossed.
     So far the number of writes does seem to be down considerably. I'll keep an eye on it over the next few weeks to be sure. My long term plan is to rebuild my server soon anyway b/c these drives are REALLY old SATA SSDs from like 2017. So once I get to that point I'll probably switch to NVMe for cache and dedicate a cheap SATA drive formatted XFS for anything with a database workload, just to play it safe. YMMV.
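     In case it helps anyone else, the sequence I ran looked roughly like this (paths match my setup, adjust to yours; stop the Docker service first, from the GUI or the command line):
        # run with Docker stopped so mongod isn't writing to the db
        cd /mnt/cache/appdata/omada/data
        mv db dbold
        mkdir db
        # +C marks the new directory NOCOW, so files created inside it inherit the attribute
        chattr +C db
        # plain copy with no reflinks, so the data is actually rewritten into NOCOW files
        cp -a --reflink=never dbold/. db/
        # start Docker back up, confirm Omada is healthy, then remove dbold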
  9. It looks like this container has quite a few vulnerabilities, including log4j. I was just curious if @Djoss had plans to update the container before I try to update the contents of my container to mitigate the issues, and hopefully not break anything. Thanks. https://try.trivy.dev/results/mLA3h91Jqqx14ZibyMYnx3mG
  10. For others coming from 3.2, you can use the link below to migrate your config. Note, you'll have to reconfigure any ACLs or captive portals you have set up since they won't migrate. You also have to make sure that the paths for your data, work and logs folders all point to a single disk and not a share. In my case I'm using /mnt/cache/. https://www.tp-link.com/us/omada-sdn/controller-upgrade/#content-5_1_2 Skip to bullet 2, step 1.
  11. I just kept the stock default config and ran through the wizard and then imported my config from the old controller. Seems to have worked fine. Thanks for your work on this container so I can use a much newer version!
  12. Just out of curiosity, do you know if on first run this will upgrade a 3.2 database if I copy over my config and db from a 3.2 install and use it with this container? I was going to try it later but thought I'd ask first in case it's a waste of time.
  13. I'm not sure why, but after disabling the onboard NIC, setting an IP on eth0 which was now an interface on my 4 port card, and then re-enabling the onboard NIC (which then became eth4) I was able to get LACP working with all 5 ports again and they seem to be stable. I'm just chalking this up to either something in the new kernel or something with the e1000 and/or bonding driver.
  14. FYI, it appears to only be an issue when you try to create a bond using eth0, no matter which interface is assigned eth0. I can bond all of the interfaces on my 4 port NIC and it works fine as long as none of them are eth0.
  15. I've been having issues with LACP since beta 25, and still have them after upgrading to beta 29. When logging into my switch after the beta 25 upgrade I noticed that my 4 port Intel NIC wasn't being used, even though I had it and my onboard Intel NIC bonded using 802.3ad (5 NICs total, all using e1000). For whatever reason the 4 ports showed as bonded members in unRAID, but on the switch they showed as connected at 10Mbps Full. In playing around with it I broke the LACP LAG in unRAID and on my switch, but no matter what I try I can't get it to work. I disabled the onboard NIC thinking maybe it didn't like bonding with the PCI-e card. They've been configured using 802.3ad on the same switch (and firmware) and server hardware for around 6 months, and the individual interfaces all work fine, negotiate 1Gbps full, and test fine with iperf.
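     For anyone chasing something similar, the kernel's view of the bond is worth comparing against what the switch reports:
        # shows bonding mode, LACP partner info, and per-slave link speed/duplex
        cat /proc/net/bonding/bond0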
  16. I'll check and see. I'm just using File Explorer to copy the files from a local folder to a cached user share on unRAID. The throughput seems to be consistent with what I get when copying the same files from a Windows 10 machine to another Windows 10 machine, both on 1GbE connections. When my scheduled image backups run using Macrium Reflect, I get pretty much full gigabit transfer rates for the duration of the backup. I'm also going to try copying with some multi-threaded programs, like robocopy, Unstoppable Copier, etc. It seems to me like the throughput I get using Explorer is about what I should expect with many small files, but to your point, the issue people experience may have something to do with multiple concurrent threads/connections.
  17. I tested this using about 2500 4KB files and I saw no discernible difference between nt acl support = no and nt acl support = yes. Transfer rates bounced between 95KB/s and 108KB/s with either value.
  18. I have not had a chance to test but I plan to later today while copying a lot of 4KB files.
  19. Well, that isn't the only use case. If you have another share that is private and that multiple users have access to, you can use the filesystem permissions to stop certain users from being able to traverse certain folders within the share. If you're not doing this, you'll be okay.
  20. ACL stands for access control list. It's a way of controlling permissions. On a Windows machine you have NTFS filesystem permissions with an ACL and you also have the share level permissions. Typically you control permissions using the filesystem since it offers more control. On *nix with Samba, it will try to map the filesystem permissions to NT filesystem permissions. Since unRAID really only relies on share level permissions to control access, disabling this may eliminate some extra overhead. I'm going to test it myself to see if I notice any improvement. Unless you want to restrict access to individual folders beneath a share using filesystem permissions, you don't really need this enabled.
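     For reference, the setting being discussed goes in the global section of your Samba config (SMB Extras on unRAID):
        [global]
        # skip mapping POSIX permissions to NT ACLs over the wire
        nt acl support = no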
  21. When copying large files to my Unraid server over gigabit Ethernet I saturate the client NIC at around 120MB/s, and I see no difference between copying directly to disk or using a share in /mnt/user. This is the same speed at which I can copy files to a Windows 10 share. However, if I copy a lot of small files, I only get around 60KB/s to a share in /mnt/user, but around 120KB/s direct to a disk share. The latter is the same performance I get when copying to a Windows 10 share. I know I should expect a performance hit copying small files, but that's a pretty big difference and really the only way I'm able to reproduce any example of slowness. Even though my server has a 4x1Gbps port NIC configured using 802.3ad, I don't have a client with a NIC faster than 1Gbps that can get anywhere near saturating my cache drive. When doing packet captures while copying a few thousand 4k files, I notice that Unraid returns a lot of errors that indicate it's waiting for an I/O operation to complete, so the client actually waits for a timeout to elapse. I don't see these when doing a capture while copying the same files to/from a share on an Ubuntu box or Windows 10 machine. Yes, FUSE is slowing things down, but the SMB protocol is adding additional slowness that makes it more pronounced. I've been playing around with various Samba settings but haven't had a lot of time to dedicate to it. I need to get a capture when copying directly to a disk share to see what it looks like.
  22. If I copy a single large file 1GB in size I can max out my client NIC at around 125MB/s. However, if I copy a bunch of small 4K files my transfer speeds are greatly reduced to around 65KB/s. I don't have any clients with NICs more capable than 1Gbps to really test saturating my storage or the NIC on my server, though, so this isn't as good a test as others have done previously. Still, that's pretty bad. With 'case sensitive' set to true I saw no difference. I'm going to test copying the same files to a Windows 10 machine just for good measure.
  23. I read the entire thread and saw your posts (and blog) explaining this. I have not noticed the large performance issues others have reported, though I have not bothered to compare a disk share vs. a user share. I may try that to see what difference I get on my setup. However, I do think there is a problem with SMB, based on @TexasUnraid's packet captures. I don't think it's an SMB problem directly; it's just being affected by something else under the hood, to your point, likely FUSE. SMB is trying to verify writes but can't, which explains the "STATUS_OBJECT_NAME_NOT_FOUND" errors, which are the equivalent of "file not found". This, I believe, is adding additional overhead and latency. I'm actually playing around with this now myself to see what, if any, difference I see in performance. I typically don't deal with lots of small files, but I've generated a bunch now to play with.
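     If anyone wants to look for the same thing in their own captures, this is roughly the filter I'm using (the capture name is just an example); 0xc0000034 is STATUS_OBJECT_NAME_NOT_FOUND:
        # list SMB2 responses that came back "file not found" during the copy
        tshark -r smallfile-copy.pcapng -Y "smb2.nt_status == 0xc0000034"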
  24. That is interesting. Can you do the following for testing? From a shell on your Unraid server:
      1. dd if=/dev/urandom bs=1024 count=10240 | split -a 3 -b 4k - /mnt/disk1/share/path/to/test/file.
         This will create a bunch of 4k files named file.xxx. Be sure to keep the "/file." on the end of the path.
      2. time rm -rf /mnt/disk1/share/path/to/test/file.*
         Be sure the path is accurate so you don't delete anything important! This will return the time required to execute.
      Repeat these steps again, but this time in step 2 (and only in step 2) replace disk1 with user, so /mnt/user/share/path/to/test. When I do this it takes about .3 seconds to delete directly from the disk, but when deleting from /mnt/user, deleting the same number of files of the same size, it takes 5.3 seconds. This is from a 7200RPM SATA drive formatted with XFS. Now, I'm sure some additional latency is expected with the way unRAID references files within the array, so I don't think this is a smoking gun by any means. But I'm curious to see what the difference is for you.
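      If it's easier, here's a rough one-shot version of the same test (the share/path names are placeholders; it creates the files on disk1 both times and only changes which path the delete goes through):
         TESTDIR=/mnt/disk1/share/path/to/test
         mkdir -p "$TESTDIR"
         # pass 1: create ~2500 4k files, then delete them via the direct disk path
         dd if=/dev/urandom bs=1024 count=10240 2>/dev/null | split -a 3 -b 4k - "$TESTDIR/file."
         echo "delete via /mnt/disk1:"; time rm -f "$TESTDIR"/file.*
         # pass 2: recreate the files, then delete them via the /mnt/user (FUSE) path
         dd if=/dev/urandom bs=1024 count=10240 2>/dev/null | split -a 3 -b 4k - "$TESTDIR/file."
         echo "delete via /mnt/user:"; time rm -f "${TESTDIR/disk1/user}"/file.*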