user Posted June 29, 2019

Hello. I am new to Unraid and have a question about write speeds to different mounts. I am running some performance tests to characterize my system. My array has only one disk (a 10TB WD Red) and no parity disk, using an encrypted XFS filesystem. I am noticing a big difference in write speed between /mnt/disk1 and /mnt/user.

Writing to /mnt/disk1 with the following command from the Unraid terminal, I get 171 MB/s:

root@Tower:~# dd if=/dev/zero of=/mnt/disk1/test/test1 bs=1024 count=10240000
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 61.2586 s, 171 MB/s

Writing to /mnt/user with the following command, I get 52.4 MB/s:

root@Tower:~# dd if=/dev/zero of=/mnt/user/test/test2 bs=1024 count=10240000
10240000+0 records in
10240000+0 records out
10485760000 bytes (10 GB, 9.8 GiB) copied, 199.97 s, 52.4 MB/s

I've run the test multiple times with the same results, and I also get the same results when using reconstruct write. A few questions:

1. Is this expected?
2. Is /mnt/user slower because it goes through shfs?
3. If I want to transfer data from Unassigned Devices, can I transfer it to /mnt/disk1, or should I always go through /mnt/user?
4. Once I add a parity drive, will parity still be computed if I write directly to /mnt/disk1?

Thanks in advance!
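A caveat about dd numbers like the ones above: writing /dev/zero without a sync flag partly measures the Linux page cache rather than the disk itself. Adding conv=fdatasync makes dd flush the file before reporting throughput, which gives a more honest device-side number. A minimal sketch (the target path is a placeholder, not from the posts; point it at a /mnt/disk1/... or /mnt/user/... path to compare):

```shell
# Sketch: measure write throughput with a flush before dd reports.
# TARGET is a placeholder path; substitute an array or user-share path.
TARGET="${TARGET:-/tmp/ddtest}"

# conv=fdatasync forces an fdatasync() on the output file before dd
# prints its summary line, so the page cache can't inflate the number.
dd if=/dev/zero of="$TARGET" bs=1M count=64 conv=fdatasync 2>&1 | tail -n1

rm -f "$TARGET"
```

The same caution applies to read tests: without iflag=direct (or dropping caches first), a repeated read of the same file mostly measures RAM.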
Squid Posted June 29, 2019

Writes to a user share will always be slower than writes directly to a disk share: even though they may end up at the exact same destination, the user share has to go through an extra layer (shfs) to get there. Your speeds, though, look anomalous to me. The difference shouldn't be that extreme, and should only account for roughly 1-2 MB/s, unless your processor is severely underpowered or there was other activity on the array at the time.
user Posted June 29, 2019

Thanks for the reply, Squid. I have a quad-core Xeon CPU, and during the tests there is no other activity on the array or the system. When copying data to /mnt/user I see the shfs process using 83.7% CPU, though overall system load is under 15%. I'm wondering if the shfs mount is the bottleneck.

3014 root      20   0  347208   2772    908 S  83.7  0.0   9:13.08 shfs

Thanks!
user Posted June 29, 2019

I restarted the system, re-ran the tests, and got the same results. I am on Unraid version 6.7.2.
user Posted June 29, 2019 (edited)

There should be a logical explanation. I just want to know if other people on 6.7 see the same speeds. For example, what write speed do you see when running the following command from the Unraid terminal?

dd if=/dev/zero of=/mnt/user/appdata/test1 bs=1024 count=10240000

Edited June 29, 2019 by user
testdasi Posted June 29, 2019

I don't have the same problem. My array is parityless, and writing to disk or user shows no discernible difference. Potentially it has to do with encryption.
Squid Posted June 29, 2019

Actually, it is huge, depending upon the drive. Writing to the array, /mnt/user is 10 MB/s slower than writing directly to a disk. Writing to my NVMe, however, yields 99 MB/s to a user share versus 997 MB/s directly to the cache. Enabling Direct IO yields 200 MB/s to /mnt/user (cache).
user Posted June 29, 2019

6 minutes ago, testdasi said:
I don't have the same problem. My array is parityless, and writing to disk or user shows no discernible difference. Potentially it has to do with encryption.

Do you see speeds higher than 52.4 MB/s when writing to user?
user Posted June 29, 2019

6 minutes ago, Squid said:
Actually, it is huge, depending upon the drive. Writing to the array, /mnt/user is 10 MB/s slower than writing directly to a disk. Writing to my NVMe, however, yields 99 MB/s to a user share versus 997 MB/s directly to the cache. Enabling Direct IO yields 200 MB/s to /mnt/user (cache).

What write speed do you see when writing to the array under /mnt/user?
Squid Posted June 29, 2019 (edited)

With Direct IO, 70 MB/s; without, 40 MB/s. But that is exactly where I expect it to be on a parity-enabled system.

Edited June 29, 2019 by Squid
user Posted June 29, 2019

52 minutes ago, Squid said:
With Direct IO, 70 MB/s; without, 40 MB/s. But that is exactly where I expect it to be on a parity-enabled system.

OK, I just tested writing to /mnt/user with Direct IO. Without Direct IO I was getting 52.4 MB/s; with Direct IO I am getting 111 MB/s. These numbers look proportional to yours. Is everyone using Direct IO? Also, are there any downsides to writing directly to a disk when using a single disk?
Squid Posted June 29, 2019

1 minute ago, user said:
Also, are there any downsides to writing directly to a disk when using a single disk?

None. When you have multiple disks, though, never mix /mnt/user and /mnt/diskX in the same operation when moving files around. Always use one or the other for both source and destination.
user Posted June 30, 2019 (edited)

51 minutes ago, Squid said:
None. When you have multiple disks, though, never mix /mnt/user and /mnt/diskX in the same operation when moving files around. Always use one or the other for both source and destination.

Would parity still work when using the /mnt/diskX mount, or do I need to copy files through the /mnt/user share for parity to work?

Edited June 30, 2019 by user
user Posted June 30, 2019

6 minutes ago, BRiT said:
Parity still works.

Thanks!
Vr2Io Posted June 30, 2019

Disk share and user share shouldn't be much different. I ran a test on the same 3TB disk at 84% full (which means writing in the slower inner area), with results as below.

root@U12:~# dd if=/dev/zero of=/mnt/user/D1/test bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 63.5459 s, 169 MB/s
root@U12:~# dd if=/dev/zero of=/mnt/disk1/D1/test bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 70.514 s, 152 MB/s

BTW, if I use bs=1024 I get the same result as you; that's why I never noticed such a big difference.
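The block-size effect Vr2Io describes can be reproduced with a small loop. The per-write() overhead of shfs (FUSE) is roughly constant, so bs=1024 issues about a thousand times more syscalls than bs=1M for the same amount of data, and the gap between /mnt/user and /mnt/diskX widens accordingly. A sketch, with the target directory as a placeholder:

```shell
# Sketch: compare dd write throughput at several block sizes.
# DIR is a placeholder; set it to /mnt/user/<share> or /mnt/disk1/<share>
# and compare the two runs.
DIR="${DIR:-/tmp}"
SIZE_MB=64   # total data written per run

for bs in 1024 4096 1M; do
    # pick a count so every run writes the same total amount of data
    case "$bs" in
        1024) count=$((SIZE_MB * 1024)) ;;
        4096) count=$((SIZE_MB * 256))  ;;
        1M)   count=$SIZE_MB            ;;
    esac
    printf 'bs=%-5s ' "$bs"
    dd if=/dev/zero of="$DIR/ddtest" bs="$bs" count="$count" \
        conv=fdatasync 2>&1 | tail -n1
    rm -f "$DIR/ddtest"
done
```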
user Posted June 30, 2019

30 minutes ago, Benson said:
Disk share and user share shouldn't be much different. I ran a test on the same 3TB disk at 84% full (which means writing in the slower inner area), with results as below. BTW, if I use bs=1024 I get the same result as you; that's why I never noticed such a big difference.

Interesting. With bs=1M I also see very different results, and the gap disappears:

root@Tower:/mnt# dd if=/dev/zero of=/mnt/user/test/test1 bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 58.5442 s, 183 MB/s
root@Tower:/mnt# dd if=/dev/zero of=/mnt/disk1/test/test2 bs=1M count=10240
10240+0 records in
10240+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 60.0663 s, 179 MB/s

What is the block size used by Unraid?
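On that closing question: Unraid itself doesn't impose a dd block size. The bs= argument only sets how much data each write() call hands to the kernel (and, for /mnt/user, to shfs over FUSE); the filesystem's own block size, 4096 bytes by default for XFS, is a separate thing and can be checked with stat. A sketch with placeholder paths:

```shell
# Sketch: report the filesystem block size for each mount.
# The paths are placeholders; substitute /mnt/disk1 and /mnt/user.
# stat -f queries the filesystem; %S is its fundamental block size.
for m in / /tmp; do
    printf '%s: block size %s bytes\n' "$m" "$(stat -f -c '%S' "$m")"
done
```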