_whatever
Members · 31 posts
  1. Yeah, I get that. I like to keep my Finder windows uniform, sorted alphabetically in list view, just like Explorer in Windows, Nemo in Mint, etc. Apple has an article that explains this setting, though they don't go out of their way to provide details. https://support.apple.com/en-us/HT208209
  2. Finder uses those files to remember sort order, view options, icon positions, etc., for each folder. That may be desirable for some, but on network shares it adds latency while Finder parses the file contents and enumerates the settings, which on top of all of the other Apple file metadata and attributes is just bloat on top of standard SMB. In my testing, getting rid of them did help speed Finder up when traversing and showing the contents of a folder with a lot (500+) of items. It's hard to quantify exactly how much it helps, but it is especially noticeable in folders with thousands of objects. https://en.wikipedia.org/wiki/.DS_Store
  3. After more testing I was able to get some decent performance with a few changes. It still isn't as fast as a Windows 10 or 11 client, or any of the Linux distros I've tested with (Ubuntu, Mint, Pop!_OS), but it's better than the default Samba config settings. I know it has to do with Apple's implementation of SMB, so I think this is likely as good as it gets. I would recommend against vetoing files in your Samba config, since that will cause slowness when using Finder to browse shares. You're better off using find on Unraid to delete all .DS_Store and ._ files, then telling macOS to stop creating them on network shares.
     To delete all .DS_Store and ._ files, run the following as root from the terminal on Unraid (note the straight quotes; curly quotes will break the command):
     find /mnt/user \( -name ".DS_Store" -or -name "._*" \) -delete
     Run the following on the Mac to stop .DS_Store and ._ files from being created on network shares:
     defaults write com.apple.desktopservices DSDontWriteNetworkStores true
     This is what I settled on in my smb-extra.conf:
     [global]
     fruit:metadata = stream
     fruit:posix_rename = yes
     readdir_attr:aapl_max_access = no
     readdir_attr:aapl_finder_info = no
     readdir_attr:aapl_rsize = no
     And /etc/nsmb.conf on my MacBook (macOS 12.5.1):
     [default]
     mc_prefer_wired=yes
     Good luck!
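For anyone nervous about pointing a recursive delete at /mnt/user, the find invocation can be rehearsed on a scratch directory first. A minimal sketch, using throwaway paths under /tmp:

```shell
# Rehearse the .DS_Store/._ cleanup on a scratch tree before pointing it
# at /mnt/user. All paths here are throwaway stand-ins.
mkdir -p /tmp/smbtest/sub
touch /tmp/smbtest/.DS_Store /tmp/smbtest/sub/._resource /tmp/smbtest/sub/keep.txt

# Same predicate as the real command, just scoped to the scratch dir.
find /tmp/smbtest \( -name ".DS_Store" -o -name "._*" \) -delete

# Only keep.txt should survive.
find /tmp/smbtest -type f
```

Once the rehearsal removes only the junk files, swap /tmp/smbtest for /mnt/user and run it as root.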
  4. Adding the following does seem to help Finder show folder contents faster. I'll play around with it more tomorrow, but I'm curious what others' experience would be adding this:
     [global]
     readdir_attr:aapl_max_access = no
     readdir_attr:aapl_finder_info = no
     readdir_attr:aapl_rsize = no
  5. As much as I'd like to blame Apple because of the dumb crap they do, I don't think it's on them. I copied the same folder that takes minutes to load in Finder from a share on Unraid to a Windows 10 and a Windows 11 machine and shared it there. In both cases, when hitting the shares on the Windows machines, Finder doesn't have any lag, nor do I see a GetInfo Request for each file in the folder when doing a Wireshark capture. I suspect it's either how Samba was compiled or one of the default config settings. Unfortunately I think it's going to require a process of elimination: dump the running Samba config with testparm -v, change settings, and test.
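A sketch of that process of elimination, assuming testparm is available on the server (it ships with Samba, so it should be on Unraid): snapshot the effective config, change one setting, snapshot again, and diff so you know exactly what moved.

```shell
# Snapshot the effective Samba config before and after a change, then diff.
# Guarded so it degrades gracefully on machines without Samba installed.
if command -v testparm >/dev/null 2>&1; then
    testparm -sv >/tmp/smb-before.txt 2>/dev/null
    # ... edit smb-extra.conf and restart Samba here ...
    testparm -sv >/tmp/smb-after.txt 2>/dev/null
    diff /tmp/smb-before.txt /tmp/smb-after.txt || true
else
    echo "testparm not found; install/locate Samba first"
fi
```

Changing one setting per iteration keeps the diff readable and makes it obvious which knob actually affected Finder.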
  6. I think there are three distinct problems that tend to get conflated: (1) the speed at which Finder traverses and lists folder contents on an SMB share, (2) throughput when copying from the server, and (3) throughput when copying from the client to the server.
     On average it takes anywhere from 2-10 minutes for Finder to populate the contents of a folder with 1,000-2,000 files in it, and sometimes it goes completely unresponsive and I get beachballs. Contrast this with a Windows endpoint, which shows the contents of the same folders immediately, with zero lag.
     As for transfer speeds from the server to the client, it really depends on the number of files and their sizes, but I can get about 130 MB/s copying a single large file from the server to the client. Copying many files of various sizes can be a mixed bag regardless of whether the client endpoint is Mac, Windows, or Linux; that's just par for the course in my experience. I see it even in corporate environments when copying files from flash storage arrays over 10GbE or faster.
     The third issue is write speed to the Unraid server over SMB, where throughput suddenly plummets after writing at a reasonable speed for a while. I think this is more of an issue with SHFS and has been well documented elsewhere.
     To be clear, I think issues 1 and 2 combine to make the slowness worse when using Mac endpoints. My particular issue is Finder being slow to list folder contents, but also copying folders with many objects, especially with many nested folders. In Wireshark captures I can see the Mac do a GetInfo Request to fetch the file size and file attributes for EVERY FILE! Doing the same capture from a Windows endpoint browsing the share shows no such requests. It's unclear to me at this point whether it's something stupid Apple is doing in their SMB implementation and/or something with how Samba is compiled on Unraid. I'm still doing additional testing but haven't had a lot of time to dedicate to this.
     At first blush I do think it's something to do with Samba, since I don't see this behavior when I copy the same folder to a share on a Windows 10 PC and then browse that share from my Mac. I need to do more Wireshark captures to see if I can find anything definitive. Apologies if this is scatterbrained. I'm sick with Covid and the cold/flu medicine I took makes me feel like I drank a pot of coffee. 🥴
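For anyone repeating the capture, a Wireshark display filter along these lines should isolate that chatter. SMB2 command 16 is QUERY_INFO, which Wireshark labels GetInfo; the field names are from Wireshark's SMB2 dissector, so verify them against your version:

```
smb2.cmd == 16 && smb2.flags.response == 0
```

Comparing the hit count for this filter between a Mac capture and a Windows capture of the same folder listing makes the per-file GetInfo behavior easy to demonstrate.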
  7. I'm on 12.5.1 as well, and I have extremely slow performance in Finder just showing folder contents, especially in folders with more than a thousand objects. When I first got my MacBook it was running Big Sur and I don't recall having any slowness issues at all, but I was also on a 6.9.x build of Unraid. I've implemented https://support.apple.com/en-us/HT208209 and deleted all of the .DS_Store files that had previously been created, I've tried turning off caching per https://support.apple.com/en-us/HT207520, and I've tried setting multichannel to prefer wired connections and even turning off multichannel support per https://support.apple.com/en-us/HT212277. If I use a Windows PC, or a Windows VM running in Parallels on my MacBook, the same folders on the share that are slow in Finder populate immediately. And I've used iperf to confirm that I get a full gigabit between my MacBook and the Unraid server when docked. I've tried a variety of Samba configurations, including https://wiki.samba.org/index.php/Configure_Samba_to_Work_Better_with_Mac_OS_X, and the behavior is the same.
     Edit - I wanted to mention that vetoing files in the config can cause slowness. You might try running find /mnt/user/data -name ".DS_Store" -delete on Unraid to remove any existing files, then defaults write com.apple.desktopservices DSDontWriteNetworkStores -bool TRUE on your Mac to stop it from creating .DS_Store files. Maybe that will help with performance for you?
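The client-side tweaks from those Apple notes all land in /etc/nsmb.conf. A sketch of what the combined file might look like; the key names are taken from Apple's support articles, so double-check them against your macOS version before relying on this:

```
[default]
# Turn off Finder's SMB directory caching (per HT207520)
dir_cache_off=yes
# Prefer wired NICs for SMB multichannel (per HT212277)
mc_prefer_wired=yes
```

The file doesn't exist by default; create it as root and reconnect the share for the settings to take effect.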
  8. While this docker container works great on Unraid, if you're using it on a drive formatted with Btrfs, you might be killing your drive(s). A couple of months ago I gave my son my TP-Link OC200 for his apartment setup, and I decided to replace it with this docker container on my Unraid server. The other day I noticed that my cache drives, where the docker data lives, had unusually high numbers of accumulated writes, and after using iotop I saw the biggest contributor was mongod, followed closely by btrfs-transacti. Then a lightbulb went off in my head: I remembered reading some time ago about the gotchas of Btrfs with random-write-heavy workloads (databases) causing excessive fragmentation and thrashing disks. This excellent post on the MongoDB community forums explains the issue very well.
     So I was left with three choices in that moment:
     • Upgrade my Unraid license so I could add an XFS-formatted drive for the Omada controller data.
     • Remove one of the drives from my mirrored Btrfs pool and make it a standalone XFS drive for the Omada controller.
     • Figure out a way to retroactively turn off CoW for the MongoDB data.
     Since I was pressed for time (and money), option 3 seemed best, so that's the route I went, using these great instructions on the Arch Wiki. Because I had previously been bitten by the loopback write amplification bug in Unraid 6.8, I had switched from using a docker image to having docker write directly to disk. This made things a little easier since I had direct access to the database. So I stopped the docker service, went to the terminal, renamed the db folder in /mnt/cache/appdata/omada/data to 'dbold', created a new folder named 'db' and added the +C attribute, then used cp to recursively copy everything from dbold to db with the --reflink=never switch. Started the docker service back up and kept my fingers crossed. So far the number of writes does seem to be down considerably. I'll keep an eye on it over the next few weeks to be sure.
     My long-term plan is to rebuild my server soon anyway b/c these drives are REALLY old SATA SSDs from like 2017. So once I get to that point I'll probably switch to NVMe for cache and dedicate a cheap SATA drive formatted XFS to anything with a database workload, just to play it safe. YMMV.
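The steps above can be rehearsed on a scratch directory before touching the real appdata path. A minimal sketch: /tmp/appdata stands in for /mnt/cache/appdata/omada/data, and note that chattr +C only actually disables CoW on Btrfs, and only for files created after the flag is set (which is why the copy comes after the flag):

```shell
# Stand-in for /mnt/cache/appdata/omada/data; on a real run, stop the
# docker service first so MongoDB isn't writing while you do this.
mkdir -p /tmp/appdata/db
echo "fake mongo data" > /tmp/appdata/db/collection.wt

# Move the old db aside, make a fresh dir, and mark it No-CoW.
mv /tmp/appdata/db /tmp/appdata/dbold
mkdir /tmp/appdata/db
chattr +C /tmp/appdata/db 2>/dev/null || echo "chattr +C needs Btrfs; no-op here"

# Copy everything back without reflinks so the new extents honor +C.
cp -a --reflink=never /tmp/appdata/dbold/. /tmp/appdata/db/
```

Files created inside a directory with +C inherit the attribute, so the copied database ends up No-CoW. On the real server, delete dbold only after the controller comes back up cleanly.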
  9. It looks like this container has quite a few vulnerabilities, including log4j. I was just curious if @Djoss had plans to update the container before I try to update the contents of my container to mitigate the issues, and hopefully not break anything. Thanks. https://try.trivy.dev/results/mLA3h91Jqqx14ZibyMYnx3mG
  10. For others coming from 3.2, you can use the link below to migrate your config. Note, you'll have to reconfigure any ACLs or captive portals you have set up, since they won't migrate. You also have to make sure that the paths for your data, work and logs folders all point to a single disk and not a share; in my case I'm using /mnt/cache/. https://www.tp-link.com/us/omada-sdn/controller-upgrade/#content-5_1_2 Skip to bullet 2, step 1.
  11. I just kept the stock default config and ran through the wizard and then imported my config from the old controller. Seems to have worked fine. Thanks for your work on this container so I can use a much newer version!
  12. Just out of curiosity, do you know if on first run this will upgrade a 3.2 database if I copy over my config and db from a 3.2 install and use it with this container? I was going to try it later but thought I'd ask first in case it's a waste of time.
  13. I'm not sure why, but after disabling the onboard NIC, setting an IP on eth0 which was now an interface on my 4 port card, and then re-enabling the onboard NIC (which then became eth4) I was able to get LACP working with all 5 ports again and they seem to be stable. I'm just chalking this up to either something in the new kernel or something with the e1000 and/or bonding driver.
  14. FYI, it appears to only be an issue when you try to create a bond using eth0, no matter which interface is assigned eth0. I can bond all of the interfaces on my 4 port NIC and it works fine as long as none of them are eth0.
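For anyone debugging similar bonding weirdness, the kernel exposes per-bond state under /proc/net/bonding, which makes it easy to confirm the mode and which members are actually up. A quick guarded check; bond0 is the usual Unraid default name, so adjust if yours differs:

```shell
# Show bonding mode, member NICs, and link state for bond0.
# Guarded because /proc/net/bonding only exists when the bonding
# driver is loaded with at least one bond configured.
if [ -r /proc/net/bonding/bond0 ]; then
    grep -E "Bonding Mode|Slave Interface|MII Status" /proc/net/bonding/bond0
else
    echo "no bond0 on this machine"
fi
```

With LACP (802.3ad) you'd expect "Bonding Mode: IEEE 802.3ad Dynamic link aggregation" and "MII Status: up" for every member interface.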