Supacon

Everything posted by Supacon

  1. That works going to https://192.168.0.4/, (with SSL errors, obviously). I think this used to be 0.2, so that's likely what caused some DNS issues. I think I can probably sort it out now (or just turn off SSL, since I'm really only using it locally anyways).
  2. Suddenly I'm finding myself unable to connect to the web GUI on my Unraid server today. Everything else seems to work. I can SSH into the server, all the Docker containers are running, etc. I believe I'm using the unraid.net plugin as when I connect to my local IP or local hostname, I'm redirected to https://(long random hex number).unraid.net. Could it be that there's an issue with this server that is preventing me from logging in? I could test connecting directly via http port 80 if there's a way to disable that SSL feature from the command line.
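For the record, there is a way to do this from an SSH session. A sketch, assuming the GUI's SSL setting lives in /boot/config/ident.cfg as a USE_SSL variable (worth verifying against your Unraid version before running):

```shell
# Assumption: Unraid keeps the web GUI's SSL toggle in /boot/config/ident.cfg.
# Flip it to "no", then restart the web server so the change takes effect:
sed -i 's/^USE_SSL="yes"/USE_SSL="no"/' /boot/config/ident.cfg
/etc/rc.d/rc.nginx restart
# The GUI should then answer on plain http://<server-ip> (port 80)
# instead of redirecting to the *.unraid.net hash address.
```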
  3. Interesting, thanks for the post. I've had some serious performance issues with SMB on macOS, but my "workaround" is just to not use SMB, as I can access the data in other ways for most of what I'm doing. I mainly only use SMB for occasional archival, but when I do use it, especially with lots of small files, it's extremely painful - slow almost to the point where I question whether it's working at all, and I end up restarting my Mac because some low-level process gets hung up. Not sure if this workaround will help my issue, but it's something I'll look into the next time I try to do some performance tuning.
  4. No, I don’t think I ever got this working at a level I’m satisfied with. In practice it doesn’t affect me all that much, because I only use SMB occasionally to write large files. For keeping some things synced automatically I use Resilio in a Docker container on Unraid, and it’s very fast compared to SMB.
  5. @rhard it seems like you've really evaluated the bulk of the other options on the market. I've got some experience with QNAP, but I liked the idea of building something on my own for less money with much better specs. Linus Sebastian of LTT speaks highly of Unraid, which is how I discovered this OS. I found an old server locally for cheap that seemed to be a perfect fit, and everything came together easily in my initial testing: pop the installer on a USB drive, plug it in, boot, and you're good to go! I really like the Unraid web-based UI (which is high praise from me, as a web developer), the OS is familiar enough to me as I used to administer Linux systems, Unraid has a host of great capabilities, an excellent community of helpful users and developers (@SpaceInvaderOne is awesome), and a broad selection of handy plugins that are easy to install. I'd say that Unraid would be damned near perfect if it weren't for the alarming fact that (in its current state, in my configuration) it can't seem to reliably and efficiently do the primary thing for which it is designed: share files over SMB! All this might not necessarily be a deal-breaker, though, since it mainly serves as a backup for things I rarely access and as a self-contained media server that can handle BitTorrent and Plex, and later it may host a video surveillance VM. So far it seems to work very well at these things. The worst part of actually using it, for me, was getting a few TB of data moved over to it, which required many attempts, a lot of waiting, and many restarts. One would think that dragging and dropping a few folders from macOS's Finder shouldn't completely kill a high-end computer on a super fast network connection. Hopefully there's just some kind of bug (in my configuration, in macOS, in Samba, or in Unraid itself) that will become apparent and be fixed in time, but man, this experience with moving large numbers of files is a major disappointment.
  6. Something else I thought I'd try was to create a share on a Windows 10 VM running on Unraid (which is stored on the SSD). My 100MB small file test copied in 1m50s, peaking around 8MB/s. This is by far the fastest of any test I've run so far with this set of files. On an Unraid share that lives exclusively on cache, the same copy took 5m31s, three times longer. What does this tell you? I guess the problems aren't really on the Mac's side after all. Obviously the hardware on my machine is capable of great speed, even through a virtualization layer into Unraid, but there's something about Unraid or Samba severely limiting the performance. I'm not sure what this tells me about what I can actually do to solve it, however.
  7. I had an issue like this, and eventually I did get the Time Machine pane in System Preferences to pick up the share... it seems that you’ll have to mount it first (Cmd+K in Finder, smb://[email protected]/ShareName) and have macOS remember the credentials before it will show up. This alone might not be enough... maybe try restarting Finder from the Force Quit dialog and reconnecting the share. Does any of this help?
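If the share still refuses to appear in the Time Machine pane, the destination can also be set directly from Terminal. A minimal sketch (the server address, username, and share name here are placeholders, not from the original thread):

```shell
# Mount the share first (same effect as Cmd+K in Finder):
open 'smb://username@192.168.0.4/ShareName'
# Once it's mounted under /Volumes, point Time Machine at it explicitly:
sudo tmutil setdestination "/Volumes/ShareName"
# Confirm Time Machine accepted it:
tmutil destinationinfo
```

`tmutil setdestination` bypasses the System Preferences picker entirely, which can help when the GUI simply doesn't list an otherwise-working share.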
  8. That appears to be the command to start the server as a daemon - doesn't that just mean it runs in the background? I ran the server with that parameter and it seems to make no difference. When I run the client with -k 8 I get an extremely fast speed that claims 20 Gbit/s, which seems like a dubious result. Edit: Ah, I think I figured out what you meant - -P 8 runs with 8 parallel streams. Doing that didn't make it any faster for me; I only got 6.6 Gb/s. At any rate, although my network speed isn't quite the full 10 Gbit, I don't think the network is the bottleneck.
  9. Not sure what iperf3 -D 8 does; that command doesn't make sense to me, but a regular run of iperf3 -c (imac) with the iMac as the server typically gets me up to 7Gb/s. Interestingly, just now when I ran it, I had the hardware settings in System Preferences > Network set to Auto and I was only getting 2.6Gb/s. Turning jumbo frames back on made it go back up to 7. In the past, turning jumbo frames on maybe got me from 6 to 7Gb/s, so that's odd. Maybe because jumbo frames were on in Unraid but not on my Mac? When I connect with Unraid as the server and use the Mac as the client, I get slower speeds, usually around 6.5Gb/s with jumbo frames. In the past I'd get 6Gb/s without jumbo frames.
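To untangle the flags from the two posts above: in iperf3, -D is the server-side daemon flag and takes no argument (so "iperf3 -D 8" isn't a meaningful invocation), while -P is the client-side option that opens parallel streams. A typical run looks like this ("imac" stands in for whatever hostname or IP the server has):

```shell
# On the machine acting as the server (runs in the background):
iperf3 -s -D

# On the client: 8 parallel streams against that server.
# -P spreads the load across multiple TCP connections, which can help
# saturate a 10Gb link that a single stream can't fill.
iperf3 -c imac -P 8
```

The -k flag, by contrast, limits the test to a fixed number of blocks rather than a fixed duration, which can produce the kind of inflated, burst-only numbers mentioned above.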
  10. I have tried Jumbo Frames (it is probably off right now, partly because I have to keep swapping ports because I don't have a 10Gb switch) and it makes no appreciable difference. If you read my post following tjb_altf4's, you'll see I have tried Case Sensitive names before and it did seem to make a minor difference. The weird thing is that all the tests I tried at that time aren't repeatable - if I try the exact same tests now under the same configuration I'll almost certainly get worse results. It's like this whole system just gets slower and slower over time. When I first set it up I was getting 400MB/s transfers on large files and now there's no way I can even get a consistent 200MB/s with large files. And small file transfers are horrendously slow, maybe 100KB/s on a good day. The transfer to the SAS drive might have been limited by the SAS drive itself, but I don't know why it was slow and stuttery. I'd think that it'd be a smooth, consistent speed, but it kept pausing and looked like it was choking at times.
  11. Another quick test I just ran was to run the array with the Parity disks removed. I thought maybe a potential bottleneck was in calculating and writing parity, but it seems that the performance (writing only to cache, which doesn't even really involve parity) was the same. So I don't think Parity is the issue here.
  12. I installed the Unassigned Devices plugin and tried enabling a share on an old 32GB 15K RPM SAS drive I have. The drive probably only does around 80MB/s writes or so, so I wasn't expecting much. Here are my quick and crude test results: a large file write (10GB over 10Gbit) to the SAS HDD was kind of sluggish and bursty, taking a few minutes, where it would probably write to the SSD in a minute. Writing a folder with 100MB of small files only took about 3 minutes, however... doing the same write directly to the NVMe SSD cache took about 4 times as long. It's fascinating that the copy is 4 times faster to a drive that runs at a tenth of the speed. Something is clearly not right in Unraid. Is this possibly because I'm using two parity disks? Would I expect a performance increase by using only one?
  13. This is interesting and sounds plausible. How does one create a share on an unassigned drive like this? I don’t happen to have another SSD lying around to test this with, however. It did seem like I got much better performance in a small test using only an SSD cache and a single hard drive with no parity, back when I was still evaluating Unraid. I get quite decent performance with something like Resilio Sync (averaging 150-200MB/s), but trying to transfer files over SMB to Unraid is painful - either sub-1MB/s speeds or hangups and pauses that require me to reboot something.
  14. Following up on my last issue with the Mac becoming unresponsive and having long pauses in the middle of transfers, one change I made that seems to have helped was for me to turn off “Enhanced Mac Interoperability” from the Settings->SMB section and start the array again. I’m not sure if this was a fluke or not but things seemed to go way more smoothly with this off. I’m not necessarily noticing speeds that are different in general, but not having to reboot my Mac every time I try doing a big transfer is a welcome improvement. I will update this thread if I learn more about this, continue to see these issues, or find that this setting isn’t what made the difference. The drawback to this change is that now I won’t be able to use Time Machine (which didn’t work well on my iMac Pro, but somehow worked quite well on my MacBook Pro).
  15. Well... this just gets worse and worse. Apparently whatever I have done or am doing is causing even more issues. First, I’m noticing that I basically never can hit 400MB/s anymore like I used to... at best I can hit 200MB/s, but usually much less than that, and only on very large files over a few GB. Significantly worse than that, however, is that my iMac basically just craps out after copying anything for a while to the point where no network transfers at all work, and the only way to get Finder responding is to reboot the whole machine. This happens almost every time. It’s pretty miserable. The transfer just gets more and more sporadic, taking longer breaks between little spurts of transferring files until finally it stops altogether. If I’m lucky it might error, but more often Finder is completely frozen and I can’t even just restart Finder to continue on. I’m going to have to probably start over and undo any changes I made to see if I can get this working reliably again. I’ll start by turning AFP back off in Unraid and maybe resetting the SMB security and ACK delay settings. I’m pretty unhappy with how badly this is going right now :(
  16. Another problem I've been seeing sometimes is that when I'm writing a lot of files, the copy job just "freezes" and nothing is written for a while - all writing and network activity pauses, seemingly for no reason. I can't think of why this would happen. I used iStat Menus to show a graph of network utilization during these times. I wonder if the server is busy for some reason, but it shouldn't be doing anything else intensive, and it has 8 CPU cores, 32GB of RAM, and a very fast SSD. So weird. Using Dynamix System Statistics I'm not seeing substantial CPU use, nor any disk or network activity during this time, so I don't think it's even doing parity activity or anything of the sort.
  17. I did more testing, this time comparing Unraid's AFP implementation to its SMB implementation. For my 120MB small file test, AFP transferred the files in 3:00 and SMB did the transfer in 6:43. Quite different results! The 1GB photo transfer took 20–70 seconds over AFP and 30–60 seconds over SMB. Finally, a 10GB movie transfer took 63 seconds over AFP and 37 seconds over SMB. It seems that AFP has some performance advantages for smaller files but performs worse for large transfers. Perhaps I'll start using it instead of SMB when I know I'm transferring lots of small files... (despite the warnings about it being deprecated/obsolete). This isn't a terribly satisfying solution, but it's an option.
  18. I just attempted another change, from this article discussing how to disable the "client signing" requirement, which can apparently help speed things up if you don't care so much about security: https://mackonsti.wordpress.com/2016/12/21/speed-up-smb-transfers-el-capitan-mac-os-10-11/ Doing this involves the following command on the Mac: printf "[default]\nsigning_required=no\n" | sudo tee /etc/nsmb.conf >/dev/null Although I couldn't see any difference with smbutil statshares -a, trying my 1GB photo test again managed to transfer in 22 seconds, which is way faster than before. The 100MB small file test took 6 minutes. It seems there are some slight improvements, but still not close to where it could be.
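For anyone repeating this, a sketch of what the file should end up containing and how to check whether the setting actually took (the share must be unmounted and remounted before the session picks it up; the grep pattern assumes the signing attributes that `smbutil statshares` reports on recent macOS versions):

```shell
# /etc/nsmb.conf should now contain exactly:
#   [default]
#   signing_required=no
cat /etc/nsmb.conf

# After unmounting and remounting the share, inspect the active session.
# Look for the signing-related attributes to confirm it's off:
smbutil statshares -a | grep -i signing
```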
  19. Hmm, interesting. I set "Case-sensitive names" to "Yes" for one share and tried my 1GB photo copy (only 350 files), and the copy job went down to 40 seconds, so that was a slight but promising improvement. I then tried an 800MB folder of emails (many tiny files), and this wasn't so promising... parts of the copy job ran at almost old-school dial-up speed, measured in KB/s. In this case I'd probably be better off just zipping it up first. Copying this folder around between SSDs on my Mac is practically instant, maybe 10 seconds max. Edit: Tried again with a 100MB subset of this data and it took 7 minutes. Pretty lame. The same copy on a low-end Windows laptop over Gigabit took 3:40, so nearly twice as fast with slower hardware and a slower network.
  20. Sorry if I was unclear, but in this case I'm talking about editing /etc/sysctl.conf on the Mac, not on Unraid.
  21. Good advice; it probably should have been more obvious that there was something Mac-specific happening. I did find this article, which talked about one thing that helped a fair bit: http://www.techkaki.com/slow-samba-file-copying-speeds-in-mac-os-x/ It suggests turning off TCP delayed ACK on the offending Mac with the following command: sudo sysctl -w net.inet.tcp.delayed_ack=0 This resets on reboot, so to make it permanent, /etc/sysctl.conf needs the line net.inet.tcp.delayed_ack=0 Performance increased significantly: copying a folder with 1GB of photos went from around 90 seconds to 50, which is not insignificant but still seems suboptimal. I'll see what else I can dig up. Incidentally, I had actually found and tried this solution before but saw no difference - in that case, though, I was trying to see if it could improve iperf3 performance (to get from 7Gb/s to 10Gb/s). My chief issue is with small files, and it makes sense that this setting would affect many small transfers more than one big one.
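Putting the two steps above together in order (the persistence step assumes the macOS version in use still reads /etc/sysctl.conf at boot, which has varied across releases):

```shell
# Takes effect immediately but resets on reboot:
sudo sysctl -w net.inet.tcp.delayed_ack=0

# Persist across reboots by appending to /etc/sysctl.conf:
echo 'net.inet.tcp.delayed_ack=0' | sudo tee -a /etc/sysctl.conf

# Verify the running value:
sysctl net.inet.tcp.delayed_ack
```

Delayed ACK batches acknowledgements, which is harmless for one big stream but adds per-request latency to the thousands of small SMB operations a small-file copy generates, which matches the pattern described here.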
  22. I'm trying to set up a new Windows 10 VM in Unraid 6.8.3 and the machine wouldn't boot into the Windows installer... it would default to the UEFI prompt unless you hit a key, but even if you do, it just gets stuck at the customized "TianoCore" boot logo. The trick that got it working for me seemed to be changing the CD-ROM buses from IDE to SATA. Has anyone else observed this? I'm wondering why i440fx defaults to this old bus... it doesn't seem to work for me.
  23. I purchased Unraid Plus for an HP ProLiant P4300 G2 server with dual quad-core CPUs, 32GB RAM, six 6GB SAS disks, and a 500GB NVMe cache that tests at around 800MB/s. I installed a 10GbE NIC and have it directly connected to an iMac Pro's 10Gb interface. The server is set up with a bridged interface so the iMac can access the rest of the network. In iperf between the iMac Pro and the NAS I can hit 7Gb/s. Not quite 10Gb, but decent performance. (Maybe it's slow because I'm using a bridge interface?) I'm using SMB shares mounted on macOS, and pretty much everything is peachy... except that I'm often very disappointed in the performance I see in some scenarios. I can transfer large files at decent speeds, about 400 megabytes per second; I can copy a 16GB movie in a minute or so. Awesome. But not everything is awesome... if I copy a folder with a lot of smaller files over the network, I see dismal performance. A backup with a lot of text files and documents took 3-4 hours to copy 2GB. I copied a 70GB collection of photos and it took several hours. I'm not sure if it's related, but on the iMac Pro a Time Machine backup is taking over a week to back up 500GB of data, while my MacBook Pro connected over Gigabit (not even 10Gb!) did this in about five hours - that might be something else screwy specific to this desktop. Obviously there's more overhead when copying a lot of small files versus streaming one big one (TCP sliding window size, protocol overhead, and so on), but this seems like an outrageous discrepancy. This raises the question: is there anything I can do to improve performance with these smaller transfers? I've already got jumbo frames turned on both on the Unraid box and the iMac Pro, and I'm just using SMB for most things. I considered something like iSCSI, but I'm not sure if it performs better - I know virtually nothing about it, and I don't think Unraid supports it anyways.
For doing these large backups or initial copies, it almost seems like it would be faster to just tar everything and transfer big files over to the NAS. But are there any performance optimizations that I might be missing, protocols I could try, or other ways of connecting to the NAS that might be faster for these sorts of tasks? I feel like something isn't right here; I'm getting a fraction of the performance I should on this beast of a configuration with a 10Gb network connection and an NVMe SSD cache. I'm really enjoying the Unraid OS but would like to see if I could improve the speed of these smaller transfers. Does anyone have pro-tips or thoughts on things to check or try to make things a bit faster? Advice would be much appreciated!
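The tar idea above is worth sketching out, since bundling thousands of small files into one stream sidesteps the per-file protocol round-trips that make small-file copies slow. A minimal sketch (the paths and hostname are examples, not from this thread; `push_tar` is just an illustrative helper name):

```shell
# push_tar SRC DEST: copy the contents of SRC into DEST as a single tar
# stream instead of one protocol operation per file. DEST can be a locally
# mounted SMB share (e.g. something under /Volumes on macOS).
push_tar() {
  tar -cf - -C "$1" . | tar -xf - -C "$2"
}
```

To skip SMB entirely, the extracting side can run over SSH instead, e.g. `tar -cf - -C ~/Photos . | ssh root@tower 'tar -xf - -C /mnt/user/backup'`, which turns the whole transfer into one sequential stream.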
  24. I just ran into this myself on Catalina 10.15.3. Very inconvenient for Mac users. I ended up installing from a Windows 10 VM and it seemed to work fine.