SixClover

Members
  • Posts: 13
  • Joined
  • Last visited

SixClover's Achievements

Noob (1/14)

Reputation: 1

Community Answers

  1. Every single time I update this docker it adds the default ports back in. That is, it appends redundant port mappings on top of my configuration rather than just reverting my existing ones. I then have to go into the docker settings and delete the two newly added default ports to get this working again. I have custom ports configured and they work; leave them alone and stop appending new default ports, then this won't break on every single update. All of the other dockers manage to do this: if they can update without screwing with the ports, then so can you.
  2. Solved. My critical error was assuming my source would perform as expected; I never benchmarked the disks on my PC because I was too focused on the server. The disks in my PC are supposed to do at least 500 MB/s read, and while they can momentarily hit 500 MB/s+, it seems they crap out real fast and slow down. The relationship between memory filling up and the transfer slowing down was a coincidence. I started doing copies off of a low-capacity NVMe SSD I had, and transfer speeds flew high and consistently. So I'll be replacing my PC's HDDs with higher-capacity SSDs that can maintain high read speeds for the transfers. Thanks for helping out, JorgeB; you did mention the source could be the issue, it just took me a minute to get there. Happy ending.
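     For anyone hitting the same wall, this is roughly the sustained-read check I should have run on the source up front (a sketch; the device and file paths are placeholders, and if your source is a Windows box a tool like CrystalDiskMark does the same job):

        # Sequential read of the raw source device, bypassing the page cache.
        dd if=/dev/sdX of=/dev/null bs=1M count=10000 iflag=direct status=progress

        # Or read back a large existing file after dropping caches first.
        sync && echo 3 > /proc/sys/vm/drop_caches
        dd if=/path/to/big.file of=/dev/null bs=1M iflag=direct status=progress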
  3. I'm not talking about transfer speeds to the system at a high level. Can you specifically confirm with iotop and htop that 1) memory is clearing out rapidly to disk and 2) IO to disk is > 100 MB/s? If you have enough memory to cache the file size you're transferring, you might never notice the difference; I can transfer a small file at 700 MB/s and be none the wiser. I believe it's because I'm testing with a file substantially larger than the total memory I have available for caching that I'm hitting this mystery bottleneck. I can get 3-4 GB into a transfer at pretty high speeds, after that things crawl, and my memory utilization is such that about that much is typically free. I'm sure your system is actually writing faster to disk; something's just off on my NAS.
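     This is the sort of check I mean, run on the server while copying a file bigger than RAM (a sketch, nothing exotic):

        # Show only processes actually doing disk IO, with accumulated totals.
        iotop -o -a

        # In a second shell: dirty pages waiting to be flushed, refreshed every second.
        watch -n 1 'grep -E "Dirty|Writeback" /proc/meminfo'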
  4. I definitely learned a lot of interesting details on this last run. Writing to a new HDD still slows down to 100 MB/s. /dev/shm is apparently a built-in ram disk, very convenient; I tested a transfer with that as the destination, which started out at 1.0 GB/s (!!) but quickly slowed down to 100 MB/s as well. (FTP memory -> (slow) -> ram disk memory); seriously, how does that one make any sense? Interesting fact: iotop does not capture IO operations to /dev/shm.

     Something makes absolutely no sense. Can anyone confirm SMB writing to disk faster than 100 MB/s? What if this whole time people have been transferring into huge amounts of RAM and never noticed the crazy slow transfer rates from system memory to NVMe? I might be the only person to set up a 10gbe network to a NAS with only 8 GB of memory, so I'm the first person to see this behavior. (I'm just kidding. Or am I?! 😟)

     I boosted MTU across my network to 9k (jumbo frames on the NICs at both ends and on each SFP+ switch port). That did get my network throughput up to a consistent 9.5 Gbps in iperf3, quite nice, but it didn't help anywhere else. I ran strace on vsftpd; ~92% of the time is spent on reads/writes, so I don't see any obvious signs of interference with memory flushing.

     To clarify, SMB and FTP in unraid behave identically on the transfer slowdowns; the common factors between them are, of course, network, memory and storage. I feel 100% sure about storage and network performance at least; it's the memory I'm not entirely sure of, I suppose, or any other system buffers/caches between memory and storage.

     I tried moving a 2GB file from /dev/shm (ram disk) to /mnt/cache/Media/Movies (NVMe SSD) and the file transferred just about instantly. I suppose that confirms data moving from memory to disk at GB/s speeds. Notice that this transfer involves no transfer protocol applications (SMB, FTP, etc); remove the software and things are lightning fast again *** interesting ***. I'm borderline ready to write my own TCP file transfer software to really investigate what the heck is going on here; hypothetically I should see the same behavior in my own software as in FTP and SMB, which I could then analyze in fine detail. I don't know what else to really do.
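     For reference, the ram disk test and the strace run were along these lines (a sketch of the commands; the 2 GB size and the paths are just what I happened to use):

        # Stage a 2 GB test file on the built-in ram disk.
        dd if=/dev/urandom of=/dev/shm/test.bin bs=1M count=2048

        # Ram disk -> NVMe cache with no SMB/FTP in the path; this finished near-instantly.
        time cp /dev/shm/test.bin /mnt/cache/Media/Movies/test.bin

        # Per-syscall time summary for the FTP daemon while a transfer is running.
        strace -c -f -p "$(pidof vsftpd)"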
  5. I do have a cold spare in the event of a disk failure (I forgot about this, had it buried away); I could use that in a test. I suppose I could also create a ram disk. I'll explore some options there and continue with testing. It also occurred to me that I can use strace to record all the system calls with timestamps; I might be able to figure out specifically where shfs/vsftpd are spending most of their time while data transfers are occurring.
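     The strace recording would look something like this (a sketch; the output paths are arbitrary):

        # Every syscall with wall-clock timestamps (-ttt) and per-call durations (-T),
        # following forked children (-f), while a transfer is in progress.
        strace -f -ttt -T -p "$(pidof vsftpd)" -o /tmp/vsftpd.trace
        strace -f -ttt -T -p "$(pidof shfs)" -o /tmp/shfs.trace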
  6. Before I test: the disks are not empty, is that a bust? All of my drives are nearly at capacity at the moment and I don't have an empty HDD to test with. I'm not really sure this is going to address the NVMe issue regardless, so I kept doing research and found something awesome.

     For everyone's records, here is a really clever trick for testing write performance with vast amounts of non-zero data. Check out this line; it uses openssl to encrypt zeros in memory, generating a very large amount of non-zero data to write to disk.

        dd if=<(openssl enc -aes-256-ctr -pass pass:"$(dd if=/dev/urandom bs=128 count=1 2>/dev/null | base64)" -nosalt < /dev/zero) of=/mnt/cache/Media/Movies/test.file bs=1M count=15000 iflag=fullblock
        15000+0 records in
        15000+0 records out
        15728640000 bytes (16 GB, 15 GiB) copied, 17.0936 s, 920 MB/s

     Here's the proof it wrote non-zero data at 920 MB/s; as you can see the data is a cipher stream, varying data.

        cat test.file | head -c 100 | xxd
        00000000: 0e19 44e2 e693 258d a1e6 2da7 6e11 31e9  ..D...%...-.n.1.
        00000010: 0523 df56 c2bb 77be 68d8 5e15 87e4 4ae7  .#.V..w.h.^...J.
        00000020: b15d 3773 0d67 bb7f 2702 9c0f 3b91 8182  .]7s.g..'...;...
        00000030: fba6 6ea5 8572 85b5 39f6 4e7f cb01 f5e3  ..n..r..9.N.....
        00000040: b8e9 9b8a 799a 00b2 0845 e71a 34a6 e53c  ....y....E..4..<
        00000050: 1a20 52f5 6dd7 2e3d 90d4 9e3c b757 7672  . R.m..=...<.Wvr
        00000060: b365 3ad4                                .e:.

     I think the NVMe performance is quite clear. It's also been independently tested and verified at 1,100 MB/s, and the official specs are far higher. This NVMe is specifically built for durability and speed, which is why I bought it.
  7. If you saw my last post, I misunderstood your comment. I just now tested on a 6TB WD Red 7200 RPM non-SMR drive (verified it's not one of the undisclosed SMR drives from the WD SMR drama last year). The parity disk is the same model/size/speed. Over FTP (no encryption), the transfer settled down to an average of 75 MB/s.
  8. It's a Corsair Force MP510; I benchmarked it at 950+ MB/s sustained write speed. I've run the following command a number of times with similar results:

        dd if=/dev/zero of=/mnt/cache/Media/Movies/test.file bs=64M count=220 oflag=dsync
        220+0 records in
        220+0 records out
        14763950080 bytes (15 GB, 14 GiB) copied, 15.3636 s, 961 MB/s

     I also tried unraid's FTP, and in iotop I can see vsftpd averaging 100 MB/s after a while, briefly 400 MB/s+. The Unraid web GUI shows the SSD writing over 400 MB/s briefly as well, so I know it can actually write to the SSD faster. It's very confusing; I don't understand how the NVMe SSD can sustain 950 MB/s with dd while every other transfer protocol I've tried so far slows down so much. It does seem to correlate with memory utilization, but I don't know of any way to analyze memory IO to see what's happening in more detail.
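     The closest thing I've found to watching "memory IO" is sampling the kernel's own counters while a transfer is running (a sketch, one-second samples):

        # One-second samples: 'bo' is blocks written out to block devices per second,
        # while free/buff/cache show memory filling up and draining as the copy runs.
        vmstat 1

        # Memory totals repeated every second, to line up against the slowdown.
        free -m -s 1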
  9. I'm sure everyone clicked on this and thought "oh great, another SMB performance thread", lol. OK, but I can at least assure you I have dived quite deep into this rabbit hole and emerged with an extremely specific question/issue on the subject.

     I've set up a 10gbe network and I can transfer data at about ~8.5 Gbps right now without having done any tuning, confirmed by numerous and extensive iperf3 tests. So we have a baseline for my network throughput. In additional tests I've written 15 GB files to my /mnt/cache/Media/Movies location with dd, and I can tell you with absolute confidence my NVMe SSD cache can sustain 950 MB/s write throughput. Absolutely. For memory I have 8 GB of DDR4 2666, a single stick.

     And here is where I'm seeing a performance issue I can't make sense of. I will begin a file transfer which starts off at 700 MB/s over SMB and within moments falls to 100 MB/s and stays there, or worse. As I monitor htop I can see a very obvious and consistent behavior: my memory fills up and the transfer slows down. This seems to make sense at face value, but it really doesn't make any sense at all. My components are capable of dumping data from memory to disk at 950 MB/s; I've tested the throughput of my NVMe, and I bought this thing specifically for these speeds. And yet memory clears out at a very slow rate. In htop, Samba is using maybe 10% of CPU after transfers slow down, and shfs is writing maybe 100 MB/s to disk at most. Why are these processes moving so slowly?

     I'm pulling my hair out trying to understand why memory fills up so fast and does not drain to disk faster. Why does memory seem to be a bottleneck when there is plenty of unused CPU to do the work and so much throughput available on my SSD to get the data written? I am only using a single stick of memory and have ordered a second stick to see if dual channel offers any improvement, but I'm not optimistic that will solve this. Any ideas, insight or guidance would be greatly appreciated.
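     For reference, those baselines came from tests along these lines (a sketch; the server address and sizes are placeholders):

        # Network baseline from the desktop to the NAS (run 'iperf3 -s' on the NAS first;
        # 192.168.1.10 stands in for the NAS address).
        iperf3 -c 192.168.1.10 -t 30

        # Cache write baseline on the NAS: ~15 GB of synchronous writes straight to the NVMe pool.
        dd if=/dev/zero of=/mnt/cache/Media/Movies/test.file bs=64M count=220 oflag=dsync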
  10. Hello! I had been dealing with an issue and finally found a solution. I don't know if this is the best solution, but it does get the job done. I really couldn't find anything when searching Google or this forum that addresses what I was experiencing below, so I thought I would create this post so it's out here on the internet for others.

     I'm using User Scripts to send me notifications in Slack when my array is stopped and started for any reason; I specifically wanted to know this for reboots. The events I configured my scripts for are "At Startup of Array" and "At Stopping of Array". There's also an "At First Array Start Only" if you want to narrow this down to a boot-up-only alert.

     The issue I ran into is that the notifications, which send over the internet, worked in every case where the array was stopped and started except on system boot up. I finally realized this is because it's not guaranteed that internet connectivity will be available at the moment "At Startup of Array" triggers in User Scripts.

     The solution I implemented in my startup event is a bash 'until' loop which pings google.com once every second and greps for a positive result; when the script confirms DNS resolution and IPv4 connectivity are both working, it continues. I added some extra logic so that if the server ever boots without internet connectivity (ISP actually down, etc.) it won't try to ping forever.

     Script for "At Stopping of Array" is straightforward:

        /usr/local/emhttp/webGui/scripts/notify -s "unRAID Array STOPPED." -i "alert" -m "unRAID array has stopped, plex is down." -d "unRAID array has stopped, plex is down."

     Script for "At Startup of Array" requires extra steps. The 'maxpings' variable is the number of times the script will check for internet connectivity before it exits, with a 1 second sleep between each try. 300 checks means it'll try for about 5 minutes; based on my own system it could take up to 1m 45s to fully boot with the web server up, so it seemed like a decent default, adjust as needed.

        maxpings=300
        numpings=0

        until [[ $(ping -q -W 1 -c 1 google.com | grep -o '1 received') == "1 received" ]]
        do
            sleep 1s
            ((numpings=numpings+1))
            if [ $numpings -gt $maxpings ]
            then
                exit 1
            fi
        done

        /usr/local/emhttp/webGui/scripts/notify -s "unRAID Array STARTED." -i "alert" -m "unRAID array has started, plex will be online momentarily." -d "unRAID array has started, plex will be online momentarily."

     Victory!
  11. I have been wanting a home NAS for more than a couple of years now. I must have spec'd out half a dozen possible DIY NAS builds and researched every out-of-the-box solution there is. Striking a balance between cost and accomplishing my goals was frustrating. One day I was watching LTT and Unraid came up; I had never heard of it, but holy ****, it only took a few days to convince me that I had found what my NAS needed to be built around. A month later I had everything purchased and put together (today). I was able to reuse so much hardware I already had lying around, most importantly my existing set of assorted HDDs (20TB). I saved so much money; I only had to spend 1/3 of what I was already planning and wound up with a better solution. Like, what?! One of the main things I really wanted to do is hardware-accelerated Plex transcoding, and I couldn't believe how easy it was to set that up in Unraid with the Nvidia patches for unlimited streams. I must have done something wrong, no way it was really that easy, maybe I just need to give it more time. But right now everything I set up just works. Thank you for making this amazing product with clearly a great community around it.