jayseejc Posted April 23, 2018 (edited)

Strange performance issues I seem to be encountering. Testing via ssh has led me to believe this is some sort of issue in shfs, but I could be wrong. Simply put, writing to the array gives me about 20 MB/s write speed (cached share), while writing to any of /mnt/diskX or /mnt/cache gives the expected speed. Writing to /mnt/user writes to the cache drive, as expected. Tested with dd to get some hard numbers, but the issue is also observed via SMB and 9p mounting in a VM. Any ideas? Observed in unRAID 6.5.

jon@core:/mnt$ for i in disk{1..6} cache user user0; do echo $i; dd if=/dev/zero of=/mnt/$i/misc/test-$i bs=1024 count=10240000; done
disk1
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 125.09 s, 83.8 MB/s
disk2
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 114.943 s, 91.2 MB/s
disk3
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 74.9314 s, 140 MB/s
disk4
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 93.4166 s, 112 MB/s
disk5
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 109.873 s, 95.4 MB/s
disk6
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 155.933 s, 67.2 MB/s
cache
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 107.349 s, 97.7 MB/s
user
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 387.066 s, 27.1 MB/s
user0
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 517.551 s, 20.3 MB/s

Edited April 23, 2018 by jayseejc
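[Editor's note: one caveat with the dd command above, offered here as a suggestion rather than something tested in the thread. Without a flush, dd reports the speed of writing into the page cache, not to disk. A variant with conv=fdatasync and a larger block size gives a more honest per-target number; TARGET is a placeholder for whichever mount you want to test.]

```shell
# Hedged sketch: conv=fdatasync makes dd flush data before reporting,
# so the RAM page cache doesn't inflate the result; bs=1M reduces
# syscall overhead compared with bs=1024.
# TARGET defaults to a temp dir for a dry run; point it at /mnt/diskX,
# /mnt/cache or /mnt/user for a real measurement.
TARGET=${TARGET:-$(mktemp -d)}
dd if=/dev/zero of="$TARGET/ddtest" bs=1M count=256 conv=fdatasync 2>&1 | tail -n 1
rm -f "$TARGET/ddtest"
```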
jayseejc Posted April 23, 2018 (Author)

The array is fully encrypted. I don't think that would make much difference, though, as writing to the specific disks runs at expected speeds.
John_M Posted April 23, 2018

2 minutes ago, Benson said:
> Relate encrypted ?

No. I'm seeing the same with unencrypted XFS volumes. I'm just running my tests again.
John_M Posted April 23, 2018

@jayseejc I suggest rebooting into Safe mode and not starting any dockers or VMs, then running your test from the command line again. If you still see the same, then post your findings and diagnostics zip in a new thread in the Defect Report area of the board. There have been reports of slowness and high CPU usage in shfs, but Limetech has been struggling to reproduce them. I have unencrypted XFS array disks, read/modify/write dual parity and a BTRFS RAID-1 cache pool, and I believe I'm seeing a similar effect to yours. I'm in normal boot mode with plugins running, so it isn't a "clean" test, but I'll post my results when they're complete.
John_M Posted April 23, 2018

I hope other people run this test, but if they do they'll first need to create the misc user share and then run

for i in disk{1..6} cache; do mkdir /mnt/$i/misc; done

before running the test for the first time - and tidy up afterwards, of course. Here are my results:

root@Mandaue:~# for i in disk{1..6} cache user user0; do echo $i; dd if=/dev/zero of=/mnt/$i/misc/test-$i bs=1024 count=10240000; done
disk1
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 137.52 s, 76.2 MB/s
disk2
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 154.189 s, 68.0 MB/s
disk3
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 159.984 s, 65.5 MB/s
disk4
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 163.882 s, 64.0 MB/s
disk5
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 154.006 s, 68.1 MB/s
disk6
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 160.713 s, 65.2 MB/s
cache
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 79.6297 s, 132 MB/s
user
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 368.695 s, 28.4 MB/s
user0
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 366.692 s, 28.6 MB/s
root@Mandaue:~#
jayseejc Posted April 23, 2018 (Author)

Yup... Nearly identical in safe mode. I did see shfs spike occasionally when writing to /mnt/user, though only for a few seconds at a time. I'll post in the Defect Reports tomorrow evening.
JorgeB Posted April 23, 2018

If Direct IO is disabled, try again with it enabled (Settings -> Share Settings).
jayseejc Posted April 23, 2018 (Author)

Tried again changing Direct IO from Auto (which allegedly just turns it off) to On, and I get about the same write performance with worse read performance.
JorgeB Posted April 23, 2018

That's weird. If anything it should be faster, but I'm not seeing any performance issues with v6.5 and reads/writes, and I always use /mnt/user, so no idea what the problem is.
John_M Posted April 23, 2018

4 hours ago, johnnie.black said:
> If Direct IO is disabled, try again with it enabled (Settings -> Share Settings).

I ran the test again with Direct IO enabled (and parity calculation set to reconstruct write this time) and got this with unRAID 6.5.1-rc6:

root@Mandaue:~# for i in disk{1..6} cache user user0; do echo $i; dd if=/dev/zero of=/mnt/$i/misc/test-$i bs=1024 count=10240000; done
disk1
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 72.6232 s, 144 MB/s
disk2
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 80.7624 s, 130 MB/s
disk3
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 80.8881 s, 130 MB/s
disk4
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 79.746 s, 131 MB/s
disk5
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 79.3907 s, 132 MB/s
disk6
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 81.6692 s, 128 MB/s
cache
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 50.9043 s, 206 MB/s
user
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 269.639 s, 38.9 MB/s
user0
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 210.128 s, 49.9 MB/s
root@Mandaue:~#

It would be interesting to see more people's results.
John_M Posted April 23, 2018

I'm curious to know what Direct IO actually does. I'll try to find what Tom wrote about it at the time.
John_M Posted April 23, 2018

For completeness, my results with Direct IO enabled but parity calculation reset to read/modify/write, which I normally use:

root@Mandaue:~# for i in disk{1..6} cache user user0; do echo $i; dd if=/dev/zero of=/mnt/$i/misc/test-$i bs=1024 count=10240000; done
disk1
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 135.155 s, 77.6 MB/s
disk2
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 149.988 s, 69.9 MB/s
disk3
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 150.553 s, 69.6 MB/s
disk4
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 157.497 s, 66.6 MB/s
disk5
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 155.788 s, 67.3 MB/s
disk6
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 161.483 s, 64.9 MB/s
cache
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 66.8645 s, 157 MB/s
user
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 270.437 s, 38.8 MB/s
user0
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 212.621 s, 49.3 MB/s
root@Mandaue:~#

I haven't done any read testing with Direct IO enabled. @jayseejc do you have a specific read test you would like me to try?
jayseejc Posted April 23, 2018 (edited, Author)

For a read test we can just use the same files we're creating with the write test, dd'ing them to /dev/null. Here are my most recent tests: safe mode, with the mover, Docker and virtual machines disabled. For some reason I'm now getting better speeds all around. I'll try again in safe mode with Direct IO turned off, and update this post with the results. I should also note that misc is an actual share on my system (the place to put stuff when there's not a place to put it). It's a pretty basic share: use the cache and move when applicable, all the usual defaults.

Script used for testing:

#!/bin/bash
for i in disk{1..6} cache user user0; do
    echo "$i write:"
    dd if=/dev/zero of=/mnt/$i/misc/test-$i bs=1024 count=10240000
    # Needed to flush the ram cache and not get ridiculous read speeds
    echo 3 > /proc/sys/vm/drop_caches
    echo "$i read:"
    dd if=/mnt/$i/misc/test-$i of=/dev/null bs=1024 count=10240000
done

Read and write test, Direct IO enabled:

disk1 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 83.4614 s, 126 MB/s
disk1 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 88.6699 s, 118 MB/s
disk2 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 43.989 s, 238 MB/s
disk2 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 66.3664 s, 158 MB/s
disk3 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 36.161 s, 290 MB/s
disk3 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 64.9237 s, 162 MB/s
disk4 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 36.0347 s, 291 MB/s
disk4 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 63.0174 s, 166 MB/s
disk5 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 51.0516 s, 205 MB/s
disk5 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 90.1883 s, 116 MB/s
disk6 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 141.119 s, 74.3 MB/s
disk6 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 118.275 s, 88.7 MB/s
cache write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 108.233 s, 96.9 MB/s
cache read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 23.9585 s, 438 MB/s
user write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 192.514 s, 54.5 MB/s
user read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 111.867 s, 93.7 MB/s
user0 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 159.209 s, 65.9 MB/s
user0 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 124.317 s, 84.3 MB/s

Read and write test, Direct IO disabled:

disk1 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 97.0911 s, 108 MB/s
disk1 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 66.7662 s, 157 MB/s
disk2 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 36.0981 s, 290 MB/s
disk2 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 65.3774 s, 160 MB/s
disk3 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 61.8678 s, 169 MB/s
disk3 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 54.0103 s, 194 MB/s
disk4 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 61.2027 s, 171 MB/s
disk4 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 56.765 s, 185 MB/s
disk5 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 75.509 s, 139 MB/s
disk5 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 76.2456 s, 138 MB/s
disk6 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 151.803 s, 69.1 MB/s
disk6 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 124.251 s, 84.4 MB/s
cache write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 79.4928 s, 132 MB/s
cache read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 37.0443 s, 283 MB/s
user write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 341.036 s, 30.7 MB/s
user read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 21.237 s, 494 MB/s
user0 write:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 342.749 s, 30.6 MB/s
user0 read:
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 68.6197 s, 153 MB/s

Looking at the results, Direct IO seems to have SOME effect, but I'm still seeing about half the write speed vs writing to the disk directly.

Edited April 23, 2018 by jayseejc
Add results for Direct IO disabled.
jayseejc Posted April 24, 2018 (Author)

I will add that enabling Direct IO seems to break a bunch of my docker containers. Stuff like Plex and InfluxDB just won't start.
JonathanM Posted April 24, 2018

1 hour ago, jayseejc said:
> I will add that enabling direct io seems to break a bunch of my docker containers. Stuff like Plex and influxdb just won't start.

If you rebuild the containers with direct disk references instead of user share references, they will work. Instead of /mnt/user/appdata, set it to /mnt/cache/appdata. This assumes that the appdata folder in question resides on your cache disk.
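[Editor's note: the remapping described above is mechanical - strip the /mnt/user prefix and re-root the path under /mnt/cache. A minimal sketch, with example path names; it only applies when the files really live on the cache disk, which is worth verifying first.]

```shell
# Translate a user-share container mapping to its cache-disk equivalent.
# "/mnt/user/appdata" is an example; substitute your actual mapping.
user_path="/mnt/user/appdata"
cache_path="/mnt/cache/${user_path#/mnt/user/}"   # strip prefix, re-root
echo "$cache_path"   # prints /mnt/cache/appdata
```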
John_M Posted April 24, 2018

On 23/04/2018 at 2:57 PM, John_M said:
> I'm curious to know what Direct IO actually does. I'll try to find what Tom wrote about it at the time.

Well, it was first introduced in unRAID 6.2.0-rc5 and was quite controversial at the time. It seems the aim was to improve write performance to the cache when using 10 Gb/s Ethernet. It was tested and questioned. The only information about it is in the GUI help and here. Specifically:

> It only should be used if you are using 10gbps networking and aren't getting full transfer rates when copying data to user shares.

I've turned the option back to Auto because it doesn't seem to fix the issue. Never mind 10 Gb/s speeds, we're not seeing 1 Gb/s speeds. It would be nice if some other people were to test this and comment.

9 hours ago, jayseejc said:
> I will add that enabling direct io seems to break a bunch of my docker containers. Stuff like Plex and influxdb just won't start.

Further to what @jonathanm pointed out, there's a discussion here.
JorgeB Posted April 25, 2018

12 hours ago, John_M said:
> It would be nice if some other people were to test this and comment.

I already mentioned this earlier, but I didn't notice any slowdown on v6.5 on any of my servers; on some of them I write to a user share at 150 MB/s+.
jayseejc Posted April 25, 2018 (Author)

@johnnie.black any chance you could post the results of our little dd test? As a way of verifying that it is actually measuring the correct write speed and not being limited by something else.
JorgeB Posted April 26, 2018

I didn't have a lot of time to look into it, but I did run the test on one of my servers and the write speeds in the test were much lower than the actual write speeds I get copying files. There were some write overlaps, i.e. it started writing to the next disk while still writing (from RAM) to the previous one, resulting in simultaneous parity writes. @John_M do you actually have a slow write speed to user shares (especially using turbo write) like the OP, or is it just the test?
John_M Posted April 26, 2018

4 hours ago, johnnie.black said:
> @John_M do you actually have a slow write speed to user shares (especially using turbo write) like the OP, or is it just the test?

That's interesting. I must say that in normal use I haven't felt that writes to user shares were especially slow, and for that reason I've never questioned them or tried to test them until I saw this thread. It was @Benson's comment about the use of encryption that prompted me to run the test, because in another thread the discussion was about the overhead incurred by using encryption. Perhaps the dd test needs to sleep for a while before moving on to the next disk to avoid the overlapping writes, but I don't see how that would account for writes to /mnt/user and /mnt/user0 being so slow in this test when compared with writes to /mnt/diskX. I'll try to find time to do some more testing - I want to test read speed as well to see if Direct IO is worth leaving enabled.
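[Editor's note: the pause John_M suggests can be made deterministic with sync, which blocks until dirty pages reach disk, so each dd's timing window no longer overlaps the previous target's writeback. A hypothetical tweak to the earlier loop, untested on unRAID; BASE and COUNT are parameters added here for illustration - the thread used /mnt and a 10 GB count.]

```shell
BASE=${BASE:-/mnt}          # mount root; override for a dry run
COUNT=${COUNT:-10240000}    # 10 GB at bs=1024, as used in the thread
for i in disk1 disk2 cache user; do    # example subset of targets
  echo "$i write:"
  dd if=/dev/zero of="$BASE/$i/misc/test-$i" bs=1024 count="$COUNT"
  sync   # block until writeback finishes before timing the next target
done
```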
JorgeB Posted April 26, 2018

3 minutes ago, John_M said:
> but I don't see how that would account for writes to /mnt/user and /mnt/user0 being so slow in this test when compared with writes to /mnt/diskX

Yes, and I did test just /mnt/user after all activity stopped and got a very low speed, like 20 or 30 MB/s, but copying from a desktop I get around 90 to 100 MB/s on this server (with turbo write on), so I'm not sure what's going on; in actual use I don't notice any issues.
John_M Posted April 26, 2018 (edited)

54 minutes ago, johnnie.black said:
> copying from a desktop I get close to around 90/100MB/s on this server (with turbo write on)

I just did a test from the Windows 10 desktop, dragging and dropping a folder containing two 4.6 GB files to a user share, over 1 Gb/s Ethernet. Direct IO was Off.

Cache: No, Turbo write: Off -> 66 MB/s
Cache: No, Turbo write: On -> 109 MB/s
Cache: Yes -> 113 MB/s

I'm happy with those results. I don't have 10 Gb/s Ethernet so don't need Direct IO. It would be interesting to see what @jayseejc sees from the desktop.

EDIT: Speeds as measured by Windows Explorer. Reads are at 113 MB/s regardless of settings and whether the files are on the cache or array disks.

Edited April 26, 2018 by John_M
JorgeB Posted April 26, 2018

31 minutes ago, John_M said:
> I'm happy with those results.

Yes, those are normal and similar to what I'm seeing. For some reason the test is not accurate. It still doesn't mean the OP doesn't have a problem, but I doubt it's a general 6.5 issue.
salcio Posted June 18, 2020

@johnnie.black @John_M did you ever find out why there is a difference between writes to /mnt/diskX and /mnt/cache? I'm seeing significant differences in speeds - using your tests:

cache
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 51.2255 s, 205 MB/s
user
^C1225556+0 records in
1225556+0 records out
1254969344 bytes (1.3 GB, 1.2 GiB) copied, 177.72 s, 7.1 MB/s

Writing to the same share, which is set up as cache-only. I'm using unRAID Version 6.8.3.
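[Editor's note: for anyone landing here later, as salcio did, a quick way to tell whether the FUSE layer is the bottleneck is to watch the shfs process while a slow transfer to /mnt/user is running. This is only a diagnostic sketch - it assumes shfs is the process name behind /mnt/user, as it is on unRAID; a single core pegged near 100% points at shfs overhead rather than disk speed.]

```shell
# Take one batch-mode snapshot of process activity. On unRAID, run this
# while a transfer to /mnt/user is in progress and look at shfs's %CPU.
# On systems without shfs the grep simply matches nothing.
snap=$(top -b -n 1)
printf '%s\n' "$snap" | head -n 7               # load, tasks, CPU summary
printf '%s\n' "$snap" | grep shfs || echo "no shfs process found"
```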