jsmontague Posted January 10, 2021

I upgraded my 2x 240GB SSD cache pool yesterday to 2x 1.6TB PCIe NVMe drives and I'm not getting the expected speeds out of them; in fact, they are much slower than the SSDs I swapped out. I verified through the browser that test.file exists on the cache drives prior to the tests, and that the share "media" is set to "Use cache = Yes".

Test 1 - copy /mnt/user/media/testfolder1/test.file --> /mnt/user/media/testfolder2/test.file
Result: 60-70 MB/s

Test 2 - copy /mnt/cache/media/testfolder1/test.file --> /mnt/cache/media/testfolder2/test.file
Result: 990 MB/s

Why are speeds so slow using the standard user share path vs the cache path? Running version 6.8.3.
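A repeatable way to run this kind of comparison from the console is a `dd` write test against each path. The snippet below is only a sketch: the two `/tmp` directories are placeholders, and on an Unraid box you would substitute the user-share path (e.g. `/mnt/user/media`) and the direct cache path (e.g. `/mnt/cache/media`).

```shell
# Illustrative throughput comparison; the /tmp paths are placeholders.
# On Unraid, substitute e.g. /mnt/user/media and /mnt/cache/media.
for dir in /tmp/path_a /tmp/path_b; do
  mkdir -p "$dir"
  echo "== $dir =="
  # Write 100 MiB and fsync, so the figure reflects real device throughput
  # rather than just the page cache; dd reports the rate on its last line.
  dd if=/dev/zero of="$dir/test.file" bs=1M count=100 conv=fsync 2>&1 | tail -n 1
  rm -f "$dir/test.file"
done
```

Running the same command against both paths back to back removes most other variables (source file, block size, caching behavior) from the comparison.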
jsmontague Posted January 10, 2021 (Author)

Anyone got any ideas? I've attached diagnostics in case they help. I've had to move all my VMs off the array onto unassigned devices due to the slowness.

alexandria-diagnostics-20210110-1530.zip
jsmontague Posted January 11, 2021 (Author)

Creating a new share with cache=prefer shows the same slow speeds when using "/mnt/user/test/" vs "/mnt/cache/test". Is there another avenue of support I can reach out to on this issue?
jsmontague Posted January 11, 2021 (Author)

Bump. Would love some ideas from anyone on what to try to fix this.
Pixel5 Posted January 11, 2021

Which SSDs did you buy exactly? The speeds you are getting look a lot like what you get out of QLC NAND once its write cache is full.
jsmontague Posted January 11, 2021 (Author)

Intel P3600 PCIe NVMe. The transfers in the image immediately follow each other, and I can do the cache -> cache transfer over and over with no slowdown. I've transferred a few hundred GB of data on and off the cache, as well as duplicated files already on the cache, and as long as I do it from within /mnt/cache I get near 1 GB/s speeds.
hawihoney Posted January 11, 2021

On 1/10/2021 at 4:53 PM, jsmontague said: "Why are speeds so slow using the standard user/share directory vs cache directory?"

Nothing you want to hear: I had to bypass user shares to get the usual speed out of my PCIe NVMe M.2 cache disks. In fact, I changed all the Docker and VM locations from /mnt/user to /mnt/cache. No big deal to do, and it results in no more speed problems.
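The remap described here is just a path-prefix swap, since a share that lives entirely on the cache pool exposes the same files under both prefixes. As a hedged illustration (the helper and the appdata path are made up for this example; on Unraid you would change the host-path fields in each Docker/VM template), the transformation looks like:

```shell
# Hypothetical helper (bash): rewrite a user-share path to its direct cache
# equivalent. Only valid for shares stored entirely on the cache pool --
# for array-resident files the /mnt/cache path would not exist.
to_cache_path() {
  # ${1/#pattern/string} replaces the prefix /mnt/user with /mnt/cache
  printf '%s\n' "${1/#\/mnt\/user//mnt/cache}"
}

# Example: an appdata mapping like the one described above
to_cache_path /mnt/user/appdata/plex   # -> /mnt/cache/appdata/plex
```

The underlying files are identical either way; only the access path changes, which is what skips the user-share layer.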
jsmontague Posted January 11, 2021 (Author)

Is that then the "way" with faster-than-SSD devices? I hate doing one-offs, but if that's what everyone does, I guess that's what I'm doing too. Thanks for the response, @hawihoney!
Energen Posted January 12, 2021

I could be entirely wrong here, but I suspect that the share-to-share transfer is slower because Unraid also has to build parity at the same time. Cache-to-cache involves no parity, so it is speedier; also, NVMe-to-NVMe is not bottlenecked in any way by HDD-to-HDD (a theory, anyway).
hawihoney Posted January 12, 2021

8 hours ago, Energen said: "I could be entirely wrong here but I suspect that the share->share transfer is slower because Unraid also has to build parity at the same time."

The file OP copied from /mnt/user to /mnt/user was already stored on the cache, so parity was not involved. I don't know what it is, but the user share layer seems to eat performance. That's my personal experience; I never measured it, but it was a huge difference here. Plex, with its over one million metadata directories stored on the PCIe NVMe, has been snappy since I changed the directory entries from /mnt/user to /mnt/cache. As I wrote, I haven't used user shares at all for some months.
JorgeB Posted January 12, 2021

It's somewhat of a mystery, but some users see a very large performance difference between user shares and disk shares. Some overhead is normal, but not this much; your best bet is to use disk shares when possible.
jsmontague Posted January 12, 2021 (Author)

I got a response back from support with the same feedback and a little explanation. After migrating my Dockers and their directories to use the cache path instead of the user path, I'm getting the expected performance. It works, though it's a workaround rather than a complete fix.

Support: "User shares go through a union file system that has been shown to impact speed when using extremely high-performance storage devices. We will continue to work to improve Unraid's performance, but in the meantime, bypassing the user share file system and writing directly to the disks/cache does resolve this issue."
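The "union file system" support mentions is the FUSE layer that presents /mnt/user on top of the array and cache. A quick way to see that the two paths really are different filesystems, assuming the standard Unraid mount points, is to compare their reported filesystem types: /mnt/user typically reports a fuse type, while /mnt/cache reports the pool's native btrfs or xfs.

```shell
# Sketch: show that /mnt/user (FUSE union) and /mnt/cache (native pool FS)
# are distinct filesystems. Paths are Unraid defaults; on a non-Unraid
# machine they simply won't exist and the fallback message is printed.
for p in /mnt/user /mnt/cache; do
  printf '%s -> ' "$p"
  stat -f -c %T "$p" 2>/dev/null || echo "not present on this machine"
done
```

Every read and write through /mnt/user pays a round trip through that FUSE process, which is why the overhead only becomes obvious on devices fast enough for it to be the bottleneck.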
jsmontague Posted January 13, 2021 (Author)

I went back and moved all my data back to the array, replaced the NVMe drives with SSDs, and my speeds were back to normal. I then upgraded to 6.9-RC2, used my NVMe drives as a second cache pool, and their speeds are much improved. I'm still 20-30% slower than raw disk speeds, but nowhere near as slow as before, so I'm leaving them in and using the user path for now; we'll see how it works going forward.
tjb_altf4 Posted January 13, 2021

18 minutes ago, jsmontague said: "I then upgraded to 6.9 RC2 and used my NVMe drives as 2nd cache pool and their speeds are much improved."

The new partition alignment for SSDs would have been beneficial, but there should be a good chunk of performance improvement when we move to a newer kernel, as the current kernel has a BTRFS performance regression bug.