About _whatever

  1. For others coming from 3.2, you can use the link below to migrate your config. Note that you'll have to reconfigure any ACLs or captive portals you have set up, since they won't migrate. You also have to make sure that the paths for your data, work, and logs folders all point to a single disk and not a share; in my case I'm using /mnt/cache/. https://www.tp-link.com/us/omada-sdn/controller-upgrade/#content-5_1_2 Skip to bullet 2, step 1.
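To make the single-disk requirement above concrete, here is a hedged sketch of the volume mappings. The image name is a placeholder and the container-side paths are assumptions based on TP-Link's default install location, so adjust both to match the actual template:

```shell
# Sketch only: the image name and container paths are assumptions, not
# the template's actual values. The key point from the post above is that
# data, work, and logs all map to one real disk (/mnt/cache here),
# never to a /mnt/user share.
docker run -d --name omada-controller \
  -v /mnt/cache/appdata/omada/data:/opt/tplink/EAPController/data \
  -v /mnt/cache/appdata/omada/work:/opt/tplink/EAPController/work \
  -v /mnt/cache/appdata/omada/logs:/opt/tplink/EAPController/logs \
  omada-controller-image
```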
  2. I just kept the stock default config and ran through the wizard and then imported my config from the old controller. Seems to have worked fine. Thanks for your work on this container so I can use a much newer version!
  3. Just out of curiosity, do you know if on first run this will upgrade a 3.2 database if I copy over my config and db from a 3.2 install and use it with this container? I was going to try it later but thought I'd ask first in case it's a waste of time.
  4. I'm not sure why, but after disabling the onboard NIC, setting an IP on eth0 (which was now an interface on my 4-port card), and then re-enabling the onboard NIC (which then became eth4), I was able to get LACP working with all 5 ports again, and they seem to be stable. I'm chalking this up to either something in the new kernel or something in the e1000 and/or bonding driver.
  5. FYI, it appears to be an issue only when you try to create a bond that includes eth0, no matter which physical interface is assigned eth0. I can bond all of the interfaces on my 4-port NIC and it works fine as long as none of them is eth0.
  6. I've been having issues with LACP since beta 25, and I still have them after upgrading to beta 29. When logging into my switch after the beta 25 upgrade, I noticed that my 4-port Intel NIC wasn't being used even though I had it and my onboard Intel NIC bonded using 802.3ad (5 NICs total, all using e1000). For whatever reason the 4 ports showed as bond members in unRAID, but on the switch they showed as connected at 10 Mbps full duplex. In playing around with it I broke the LACP LAG in unRAID and on my switch, and no matter what I try I can't get it to work. I disabled the onboard NIC…
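When chasing link-speed mismatches like the one above, the bonding driver's status file is the first thing to check. This is a hedged sketch that parses a made-up sample of /proc/net/bonding/bond0 (interface names and speeds are illustrative, not from the poster's system); the field names match the Linux bonding driver's status format:

```shell
# Hypothetical sample of /proc/net/bonding/bond0 output for an
# 802.3ad bond with one degraded member.
sample='Ethernet Channel Bonding Driver: v3.7.1

Bonding Mode: IEEE 802.3ad Dynamic link aggregation

Slave Interface: eth0
MII Status: up
Speed: 10 Mbps
Duplex: full

Slave Interface: eth1
MII Status: up
Speed: 1000 Mbps
Duplex: full'

# Flag any bond member negotiating below gigabit -- the symptom
# described above (link up, but stuck at 10 Mbps on the switch side).
echo "$sample" | awk '
  /^Slave Interface:/ { slave = $3 }
  /^Speed:/ && $2 < 1000 { print slave " is degraded at " $2 " Mbps" }
'
```

On a live box, replace the echo with `cat /proc/net/bonding/bond0` to run the same filter against the real status file.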
  7. I'll check and see. I'm just using File Explorer to copy the files from a local folder to a cached user share on unRAID. The throughput seems to be consistent with what I get when copying the same files from one Windows 10 machine to another, both on 1 GbE connections. When my scheduled image backups run using Macrium Reflect, I get pretty much full gigabit transfer rates for the duration of the backup. I'm also going to try copying with some multi-threaded programs, like robocopy, Unstoppable Copier, etc. It seems to me like the throughput I get using Explorer is about…
  8. I tested this using about 2,500 4 KB files and I saw no discernible difference between nt acl support = no and nt acl support = yes. Transfer rates bounced between 95 KB/s and 108 KB/s with either value.
  9. I have not had a chance to test but I plan to later today while copying a lot of 4KB files.
  10. Well, that isn't the only use case. If you have another share that is private and to which multiple users have access, you can use the filesystem permissions to stop certain users from traversing certain folders within the share. If you're not doing this, you'll be okay.
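As a minimal sketch of the traversal point above (the path here is a throwaway temp directory, not a real share): clearing the execute bit for "other" on a directory denies traversal into it, so only the owner and group can enter.

```shell
# Throwaway directory standing in for a folder inside a private share,
# e.g. something under /mnt/user/<share> on a real unRAID box.
DIR=$(mktemp -d)/restricted
mkdir -p "$DIR"

# rwx for owner and group, nothing for others; without the execute bit,
# users outside the group cannot cd into or list through the directory.
chmod 770 "$DIR"

stat -c '%a' "$DIR"   # prints 770
```

On a real share you would also set ownership (chown) so the right group maps to the users who should keep access.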
  11. ACL stands for access control list; it's a way of controlling permissions. On a Windows machine you have NTFS filesystem permissions with an ACL, and you also have the share-level permissions. Typically you control permissions using the filesystem, since it offers more control. On *nix, Samba will try to map the filesystem permissions to NT filesystem permissions. Since unRAID really only relies on share-level permissions to control access, disabling this mapping may eliminate some extra overhead. I'm going to test it myself to see if I notice any improvement. If you're not interested in having a…
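For reference, the mapping discussed above is controlled by a Samba global parameter; a minimal fragment looks like the following (on unRAID this would typically go in the SMB Extras box under Settings, though where your setup keeps Samba overrides is an assumption):

```ini
# Disable Samba's mapping of POSIX permissions onto NT-style ACLs;
# clients then see synthesized ACLs instead of per-file mapped ones.
[global]
    nt acl support = no
```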
  12. When copying large files to my unRAID server over gigabit Ethernet I saturate the client NIC at around 120 MB/s, and I see no difference between copying directly to a disk or using a share in /mnt/user. This is the same speed at which I can copy files to a Windows 10 share. However, if I copy a lot of small files, I only get around 60 KB/s to a share in /mnt/user, but around 120 KB/s direct to a disk share. The latter is the same performance I get when copying to a Windows 10 share. I know that when copying small files I should expect a performance hit, but that's a pretty big difference…
  13. If I copy a single large file 1 GB in size, I can max out my client NIC at around 125 MB/s. However, if I copy a bunch of small 4 KB files, my transfer speeds drop greatly, to around 65 KB/s. I don't have any clients with NICs more capable than 1 Gbps, so this isn't as good a test as others have done previously for saturating the storage or NIC on my server. Still, that's pretty bad. With 'case sensitive' set to true I saw no difference. I'm going to test copying the same files to a Windows 10 machine just for good measure.
  14. I read the entire thread and saw your posts (and blog) explaining this. I have not noticed the large performance issues others have reported, though I haven't bothered to compare a disk share vs. a user share; I may try that to see what difference I get on my setup. However, I do think there is a problem with SMB, based on @TexasUnraid's packet captures. I don't think it's an SMB problem directly; it's just being affected by something else under the hood, to your point, likely FUSE. SMB is trying to verify writes but can't, which explains the "STATUS_OBJECT_NAME_NOT_FOUND"…
  15. That is interesting. Can you do the following for testing? From a shell on your unRAID server, run:

      dd if=/dev/urandom bs=1024 count=10240 | split -a 3 -b 4k - /mnt/disk1/share/path/to/test/file.

      This will create a bunch of 4 KB files named file.xxx. Be sure to keep the "/file." on the end of the path. Then run:

      time rm -rf /mnt/disk1/share/path/to/test/file.*

      Be sure the path is accurate so you don't delete anything important! This will return the time required to execute. Repeat these steps again, but this ti…
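The test above can be sketched as a self-contained script that works against a throwaway directory under /tmp, so nothing important is at risk; to reproduce the actual comparison, point TESTDIR at a path under /mnt/disk1 vs. /mnt/user instead:

```shell
# Throwaway directory; substitute a path under /mnt/disk1 or /mnt/user
# to compare direct-disk vs. FUSE (user share) performance.
TESTDIR=$(mktemp -d)

# 10 MiB of random data split into 4 KiB pieces named file.aaa, file.aab, ...
# iflag=fullblock guards against short reads so the file count is exact.
dd if=/dev/urandom bs=1024 count=10240 iflag=fullblock 2>/dev/null |
    split -a 3 -b 4k - "$TESTDIR/file."

ls "$TESTDIR" | wc -l   # 10240 KiB / 4 KiB = 2560 files

# Time the mass delete; repeat on the other mount type and compare.
time rm -rf "$TESTDIR"
```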