Cull2ArcaHeresy Posted April 16, 2020

My array is getting write speeds of ~3 MB/s, the cache pool ~0.5 MB/s, and an Unassigned Devices SSD the full 333 MB/s. Speeds were measured with runs of "dd if=/dev/zero of=test.img bs=1G count=1 oflag=dsync && rm test.img", with the same results at bs of 1M/10M/100M/1G. Windows gets full speeds, though. For a while I enabled scheduled SSD trim, but the results didn't change; then I realized the speeds were bad for the HDDs as well, and that Windows was unaffected. Speaking of SSD trim, should that be running? It has never been enabled here, and the posts I found about Unraid and trim aren't current.

raza-diagnostics-20200416-0126.zip
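For reference, the block-size sweep described above amounts to something like this (a rough sketch; the share path is a placeholder for whichever mount is being tested):

for bs in 1M 10M 100M 1G; do
  # write a single block synchronously, keep only dd's summary line, then clean up
  dd if=/dev/zero of=/mnt/user/some-share/test.img bs=$bs count=1 oflag=dsync 2>&1 | tail -1
  rm /mnt/user/some-share/test.img
done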
itimpi Posted April 16, 2020

25 minutes ago, Cull2ArcaHeresy said:
Speaking of SSD trim, should that be running? It has never been enabled here, and the posts I found about Unraid and trim aren't current.

You probably want to install the Dynamix SSD Trim plugin to get this done at regular intervals.
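As far as I understand, the plugin just schedules a TRIM pass; a one-off manual run looks roughly like this (assuming the SSD pool is mounted at /mnt/cache and the filesystem and controller support TRIM):

# report how much space was trimmed on the cache mount
fstrim -v /mnt/cache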
Cull2ArcaHeresy Posted April 16, 2020 Author

7 minutes ago, itimpi said:
Dynamix SSD Trim

It has been installed for as long as I can remember; I just never scheduled it (except briefly while testing this issue). Just set it to run daily.
Cull2ArcaHeresy Posted April 25, 2020 Author

The server has been updated to 6.8.3 and rebooted. After the reboot speeds were back to full, and a bit later they weren't, so I tested again and got the same results as before: the unassigned SSD is great, the cache pool <1 MB/s, and the array ~3 MB/s. The issue is not the disks being used by something else, as there is little to no I/O shown on Main while running the tests.

cd /mnt/disks/Samsung_SSD_860_EVO_500GB_S3YANB0K808920W/ && dd if=/dev/zero of=test.img bs=100M count=1 oflag=dsync && rm test.img && cd /mnt/user/cache-only/ && dd if=/dev/zero of=test.img bs=100M count=1 oflag=dsync && rm test.img && cd /mnt/user/experiments-no-cache/ && dd if=/dev/zero of=test.img bs=100M count=1 oflag=dsync && rm test.img

1+0 records in
1+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 0.36781 s, 285 MB/s
1+0 records in
1+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 204.271 s, 513 kB/s
1+0 records in
1+0 records out
104857600 bytes (105 MB, 100 MiB) copied, 36.9284 s, 2.8 MB/s

raza-diagnostics-20200425-0351.zip
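The same three-way comparison written as a loop, so the paths and sizes are easy to change (just a sketch; the paths are the ones from the test above):

for d in /mnt/disks/Samsung_SSD_860_EVO_500GB_S3YANB0K808920W /mnt/user/cache-only /mnt/user/experiments-no-cache; do
  echo "== $d =="
  # single 100M synchronous write, keep only dd's summary line
  dd if=/dev/zero of="$d/test.img" bs=100M count=1 oflag=dsync 2>&1 | tail -1
  rm "$d/test.img"
done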
testdasi Posted April 25, 2020

The issue is with your testing method. dsync will create artificially and unrealistically slow results. Try testing with something like rsync --progress or rsync --info=progress2 instead. You also need to use a test file that is large enough: 8GB at a minimum, and preferably larger than your RAM. Remember that dd is 45 years old; if you introduce any modern intermediary layer (e.g. Unraid's /mnt/user, mergerfs, etc.), it tends to crap itself.
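A minimal sketch of that kind of test, with placeholder paths and a placeholder 16 GB size (ideally the file should be larger than the 128 GB of installed RAM; /dev/urandom as a source would avoid any compression effects but is much slower to generate):

# create a large source file on the fast device (no dsync, just to have data to copy)
dd if=/dev/zero of=/mnt/disks/fast-ssd/big.bin bs=1M count=16384
# copy it to the share under test and watch the live throughput
rsync --info=progress2 /mnt/disks/fast-ssd/big.bin /mnt/user/cache-only/
# clean up both copies afterwards
rm /mnt/disks/fast-ssd/big.bin /mnt/user/cache-only/big.bin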
Cull2ArcaHeresy Posted April 26, 2020 Author

On 4/25/2020 at 4:19 AM, testdasi said:
The issue is with your testing method. dsync will create artificially and unrealistically slow results. Try testing with something like rsync --progress or rsync --info=progress2 instead. You also need to use a test file that is large enough: 8GB at a minimum, and preferably larger than your RAM. Remember that dd is 45 years old; if you introduce any modern intermediary layer (e.g. Unraid's /mnt/user, mergerfs, etc.), it tends to crap itself.

CrystalDiskMark results (Windows 10 over a direct 10-gig line): M is a cache-enabled array share, S is a no-cache share. Not sure how RAM would have any impact here, since under normal use or heavy disk I/O I have never seen it above 50% (128 GB installed), but I tested at 256g and at 64 (the max CDM allows). So my testing method was problematic, but there is still an issue.

The remote-speed problems seem to be VM-based. I know the VM could be headless, but I installed the desktop version because it is still a relatively new addition (a few months old) and it made testing easier. On an Ubuntu VM I manually run autossh -M port1 -R port2:localhost:22 user@seedbox (another place that needs to be automated eventually, but the Unraid server is not rebooted often; a rough sketch follows this post). The VM has the movie and TV shares mounted via the VM options in Unraid. The VM can still access all server shares, so the whole point of doing it this way for security isn't really achieved, but it is still better than the Unraid server itself making the connection (and the hassle of getting autossh running on Unraid again). On the seedbox side I have sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,cache_timeout=3600 [email protected]:/mnt/TV/ /home/user/RazaTVdir/ -p port2, plus an equivalent command for the movie share.

My internet is 200/20, and Sonarr used to import over this tunnel at ~80 Mbit/s, which I now think might have been the VM limiting it rather than a Sonarr limitation (based on the ~10 MB/s speeds below). The older method (before the VM) was to manually download files on the desktop with FTP/FileZilla, which would get ~170-220 Mbit/s, with 220 pegged when using segmented, parallel downloads. [I know 10 GB is small here, but this VM is on the VM SSD, which only has about 30 GB free.] I have seen other posts about br0 being an issue for some VMs' speeds, but there was no network spike during the test (which is why the monitor is behind the terminal). I rebooted the VM and tested again at 6.80, so about the same. I'm not sure this is a br0 issue though, as I used to get 80 Mbit/s (~10 MB/s) and now I get 20 Mbit/s when lucky, most of the time 5 to 10. Using the desktop/FTP I still get the same speeds from the seedbox, but the SSH tunnel through the VM is slow.

The issue(s) fall into three categories:
1. VM speeds
2. an Unraid- or seedbox-side problem
3. I'm way off and have a different problem
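A rough sketch of automating the tunnel mentioned above, keeping port1/port2, user, and seedbox as placeholders from the post; -f and -N are additions so autossh runs unattended, and the sshfs line runs on the seedbox, not in the VM:

# on the Ubuntu VM: keep the reverse tunnel up in the background
autossh -f -M port1 -N -R port2:localhost:22 user@seedbox

# on the seedbox: remount the TV share over the tunnel once it is up
sshfs -o reconnect,ServerAliveInterval=15,ServerAliveCountMax=3,cache_timeout=3600 \
      [email protected]:/mnt/TV/ /home/user/RazaTVdir/ -p port2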