Shamalamadindong Posted December 31, 2015

Hi, sorry to bother people with this, but I just started using Unraid and I was wondering if there is any way to crank up my transfer speeds. I'm in the process of copying all my data back and it's taking a long time. When I'm mass-transferring data to the server I get about 70-80 MB/s until the cache drive fills up, and then it falls back to about 50 MB/s. Do you have any suggestions to increase speeds, or is there anything wrong?

Some context on the strange mix of disks: most of them are second hand. All the 3TB Hitachis came from Iomega's Dutch repair center, where they were used to test RMA'd NAS devices. I got them for 70/80/90 euros a piece with barely any hours on them.

Specs:
Intel Core i7 860
12GB 1333MHz non-ECC RAM
Gigabyte P55-UD6 with just the cache SSD hooked up
Dell H200 with all the data drives hooked up

HITACHI_HDS724040ALE640 - Parity
HITACHI_HUA723030ALA640 - Disk1
HITACHI_HUA723030ALA640 - Disk2
HITACHI_HUA723030ALA640 - Disk3
HITACHI_HDS723030ALA640 - Disk4
HITACHI_HUA723030ALA640 - Disk5
HITACHI_HUA723030ALA640 - Disk6
SEAGATE_ST3000DM001-1CH166 - Disk7
SAMSUNG_MZ7WD120HAFV - Cache

To test raw write speed of the individual drives I used the attached script, which WeeboTech posted in another thread; it writes a 10GB test file to any disk.
Example for disk1: Write_speed_test.sh /mnt/disk1/test.dat

Disk 1 HITACHI_HUA723030ALA640 - Low: 56.3 MB/s 5GB: 62.0 MB/s High: 228 MB/s | Time: 181.958s
Disk 2 HITACHI_HUA723030ALA640 - Low: 26.5 MB/s 5GB: 30.5 MB/s High: 214 MB/s | Time: 386.404s
Disk 3 HITACHI_HUA723030ALA640 - Low: 23.5 MB/s 5GB: 22.9 MB/s High: 177 MB/s | Time: 435.594s
Disk 4 HITACHI_HDS723030ALA640 - Low: 39.3 MB/s 5GB: 45.5 MB/s High: 195 MB/s | Time: 260.361s
Disk 5 HITACHI_HUA723030ALA640 - Low: 37.3 MB/s 5GB: 42.8 MB/s High: 191 MB/s | Time: 274.516s
Disk 6 HITACHI_HUA723030ALA640 - Low: 36.3 MB/s 5GB: 40.9 MB/s High: 217 MB/s | Time: 282.377s
Disk 7 SEAGATE_ST3000DM001-1CH166 - Low: 39.6 MB/s 5GB: 45.4 MB/s High: 232 MB/s | Time: 258.750s
Cache SAMSUNG_MZ7WD120HAFV - Low: 254 MB/s 5GB: 270 MB/s High: 368 MB/s | Time: 040.292s

With sysctl vm.dirty_ratio=50 (so about 6GB of RAM used as write cache) I re-tested the slowest disk, the fastest disk, and the cache:

Disk 1 HITACHI_HUA723030ALA640 - Low: 52.9 MB/s 5GB: 77.5 MB/s High: 447 MB/s | Time: 193.688s
Disk 3 HITACHI_HUA723030ALA640 - Low: 49.7 MB/s 5GB: 75.1 MB/s High: 405 MB/s | Time: 206.137s
Cache SAMSUNG_MZ7WD120HAFV - Low: 297 MB/s 5GB: 364/418 MB/s High: 475 MB/s | Time: 034.503s

I also did some network transfer tests. As you can see, the RAM cache didn't really make any difference there; transfers pretty much always averaged 70-75 MB/s.

Test path: copying an 11.8GB movie from a Crucial SSD (desktop, Windows 10 Pro Build 10240) > Intel I217V > RB2011UiAS-2HnD-IN > Zyxel GS-1100-24 > RTL8111D > RAM cache > SSD cache (server).

Diagnostics: https://mega.nz/#!mtcFEIbY!OOR6Eg9ebFROz_H7dXoiEb59zbS_VDPrkeRsBfrE_AY

And just for good measure, some disk read tests: https://mega.nz/#!zs0jDCrJ!IEqJaobhRvgNnFD1uXnmkAV6IsW4jI56RAqG2u1PwWw
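The attachment itself isn't reproduced in this thread, but a minimal stand-in using dd behaves similarly; the default target path and the small default size below are mine, for illustration (the original wrote a 10GB file):

```shell
#!/bin/bash
# Hypothetical stand-in for WeeboTech's Write_speed_test.sh:
# writes a zero-filled test file and reports the sustained write rate.
# Usage: write_speed_test.sh <target-file> [size-in-MB]
TARGET=${1:-/tmp/speedtest.dat}
SIZE_MB=${2:-64}        # pass 10240 for a 10GB run like the original script
# conv=fdatasync forces a flush before dd exits, so the reported rate
# reflects the disk itself rather than the RAM write cache
RESULT=$(dd if=/dev/zero of="$TARGET" bs=1M count="$SIZE_MB" conv=fdatasync 2>&1 | tail -n1)
echo "$RESULT"
rm -f "$TARGET"         # clean up the test file
```

For the 10GB run on disk1 as above, that would be `write_speed_test.sh /mnt/disk1/test.dat 10240`.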
gundamguy Posted December 31, 2015

You could try unassigning the parity disk for now, doing all your initial population without parity (which should remove a big bottleneck), and then, once all your data is in its new home, reassigning the parity drive and rebuilding parity. (This does mean your array is unprotected during the initial population.) I don't know exactly how much this speeds up the process, but it reduces the amount of time you have to spend copying and pasting.
JorgeB Posted December 31, 2015

A couple of thoughts. First, confirm the network is working at gigabit speed, e.g., if you copy from Unraid to your desktop, do you get gigabit speed?

If yes, you could try this. It happened to me more than once, without any reason I could find, that copying to the cache disk or a cached share was limited to 70-80MB/s. Format the cache disk with a different filesystem, ReiserFS for example, and try a new copy; if speed improves, you can format back to XFS and the improvement should hold, at least it did for me. Obviously, if you do this you'll lose all data on the cache disk.
Shamalamadindong (Author) Posted December 31, 2015

I don't know what happened, but suddenly the individual disk write tests are 20 MB/s faster. The slowest result I'm getting now is faster than the fastest one from the first round of tests. I did restart the Plex docker between the first test and the ones I'm running now. Any chance the indexing Plex was doing last night had a residual effect on speeds today?

> First, confirm the network is working at gigabit speed, e.g., if you copy from Unraid to your desktop, do you get gigabit speed?

Unraid > desktop (to SSD) just gave me the most consistent transfer ever: it never dipped below 63 MB/s and never went over 64 MB/s, on a single 11.8GB movie file. I also tried three 1080p Castle episodes, about 350MB a piece, and those also did a very consistent 63-64 MB/s.

> Obviously, if you do this you'll lose all data on the cache disk.

I assume I can back up the Plex folder and copy it back afterwards, so I don't have to re-index everything?
JorgeB Posted December 31, 2015

Looks to me like your problem is network related; you're not getting full gigabit speed, which should be ~114MB/s. All your disk speeds look normal.
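The ~114 figure is simple arithmetic: gigabit line rate, minus TCP/IP and Ethernet framing overhead, expressed in MiB/s (the units file managers usually report). A back-of-envelope sketch, assuming a standard 1500-byte MTU and IPv4/TCP without options:

```shell
LINK=1000000000                 # gigabit line rate, bits/s
MTU=1500                        # standard Ethernet MTU
PAYLOAD=$((MTU - 20 - 20))      # minus IPv4 + TCP headers = 1460 bytes
WIRE=$((MTU + 38))              # plus Ethernet header, FCS, preamble, IFG = 1538
awk -v l="$LINK" -v p="$PAYLOAD" -v w="$WIRE" \
    'BEGIN { printf "%.0f MiB/s\n", l / 8 * p / w / 1048576 }'
# prints: 113 MiB/s
```

Anything consistently well below ~110 MiB/s on large sequential copies points at the network rather than the disks, which matches the 63-64 MB/s seen here.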
Shamalamadindong (Author) Posted December 31, 2015

Confirmed that it's a network issue.

As a small side question: as I understand it, docker appdata/img goes in /mnt/cache/, but every time I do that the docker img seems to go read-only after a while. However, I also have a /mnt/ssd/ share where it doesn't do that.
Squid Posted December 31, 2015

> As a small side question: as I understand it, docker appdata/img goes in /mnt/cache/, but every time I do that the docker img seems to go read-only after a while. However, I also have a /mnt/ssd/ share where it doesn't do that.

Can you expand on that read-only problem a little more? The /mnt/ssd is probably because you're mounting an SSD drive through the Unassigned Devices plugin.
Shamalamadindong (Author) Posted December 31, 2015

> Can you expand on that read-only problem a little more? The /mnt/ssd is probably because you're mounting an SSD drive through the Unassigned Devices plugin.

Entirely possible, if not for the fact that there's only one SSD in the server and I don't have the Unassigned Devices plugin. Also, when I create the img in /mnt/ssd/, the number of writes on the cache SSD increases.

Scenario: I turned Docker on, put the img in /mnt/cache/, added the Lime-Tech Plex container, tried to add a second container, and then got an error similar to this one: https://lime-technology.com/forum/index.php?topic=42932.0

Following that topic, I deleted the img, recreated it in /mnt/cache/, added the Lime-Tech Plex container, tried to add a second container, and the error happened again. Then I tried the same thing all over again starting with a different container, with the same result. After several variations of this I noticed the /mnt/ssd/ share, wrote the image there, and haven't had a problem since.

I was about to end my comment on that note when I thought "let's try adding a container". Tried to add the l3iggs Owncloud container:

Warning: file_put_contents(/var/lib/docker/unraid-update-status.json): failed to open stream: Read-only file system in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 291
Warning: file_put_contents(/var/lib/docker/unraid-update-status.json): failed to open stream: Read-only file system in /usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php on line 438

Full log: https://mega.nz/#!ilVjxIbB!5qFiIZrUwGkbrpcMg0XW4TPFQ8LvaoS4kZ5zbQbupIk

I'd add diagnostics, but that results in: 404 File not found.
Shamalamadindong Posted December 31, 2015 Author Share Posted December 31, 2015 Just tried to get the diagnostics through ssh, Starting diagnostics collection... Warning: file_put_contents(): Only 0 of 15 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 36 Warning: file_put_contents(): Only 0 of 12 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 36 Warning: file_put_contents(): Only 0 of 9280 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 36 Warning: file_put_contents(): Only 0 of 6333 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 36 Warning: file_put_contents(): Only 0 of 4785 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 36 Warning: file_put_contents(): Only 0 of 3813 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 36 Warning: file_put_contents(): Only 0 of 1761 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 36 Warning: file_put_contents(): Only 0 of 3708 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 36 Warning: file_put_contents(): Only 0 of 2 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 50 Warning: file_put_contents(): Only 0 of 34 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 52 Warning: file_put_contents(): Only 0 of 36 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 66 done. ZIP file '/boot/logs/tower-diagnostics-20151231-1901.zip' created. 
https://mega.nz/#!7htEDayC!mbHRE_ItIFIL-dRrc6OFNbn_1a1u0wGsmHbVvJaABJo The http://tower/log/syslog, https://mega.nz/#!jhNC2TJS!uyKTv4C-lvaSL6J0GS8QmiBmDwLVPeQWSrAEyymF_Gc syslog through cp /var/log/syslog /boot, https://mega.nz/#!bwcn2SAZ!f_enqDRpYmMZSngo9Gt4tgpag0BUgw-N6r-ctb7rcNQ And here's diagnostics from earlier today, https://mega.nz/#!mtcFEIbY!OOR6Eg9ebFROz_H7dXoiEb59zbS_VDPrkeRsBfrE_AY Quote Link to comment
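Those "Only 0 of N bytes written" warnings mean a filesystem the diagnostics script touches is full or read-only. A quick way to spot which one; the /var/lib/docker and /mnt/cache paths are standard Unraid locations and are only illustrative here:

```shell
# Show usage of the rootfs plus the docker-loop and cache filesystems;
# a volume at 100% use here is the likely culprit.
df -h / /var/lib/docker /mnt/cache 2>/dev/null || true
# Also list anything currently mounted read-only (field 4 of /proc/mounts
# holds the mount options).
awk '$4 ~ /(^|,)ro(,|$)/ { print $2, "is mounted read-only" }' /proc/mounts
```

On Unraid the rootfs lives in RAM, so a runaway path outside the array or cache can fill it and trigger exactly these symptoms.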
trurl Posted December 31, 2015

There is no /mnt/ssd unless you create one, either accidentally or on purpose. Probably you have accidentally created it with your docker volume mappings, and it is breaking things. There are only /mnt/cache, /mnt/disk#, and /mnt/user/sharename.
Shamalamadindong (Author) Posted January 1, 2016

> There is no /mnt/ssd unless you create one, either accidentally or on purpose. Probably you have accidentally created it with your docker volume mappings, and it is breaking things.

Yes, but how do I fix it? It's not like the share is listed in the shares menu.
trurl Posted January 1, 2016

> Yes, but how do I fix it? It's not like the share is listed in the shares menu.

It's not listed on the shares menu because it is not a share. The only things that appear in the shares menu are /mnt/user/sharename, which are just the aggregate of all the /mnt/disk#/foldername, with sharename = foldername. You need to figure out which docker or plugin is misconfigured to write to /mnt/ssd and fix it. Then you can remove /mnt/ssd at the command line or in mc, or just reboot, and it will not be recreated unless you do it again.
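A quick sanity check along these lines, to tell a genuine mount point apart from a stray directory sitting on the RAM-backed rootfs; the function name is mine, for illustration:

```shell
# Prints whether a path is a real mount point or just a stray folder.
# A stray folder under /mnt lives on the rootfs, so anything a container
# writes there consumes RAM and eventually fills up.
check_stray() {
    local dir=$1
    # field 2 of /proc/mounts is the mount point of each mounted filesystem
    if awk -v d="$dir" '$2 == d { found = 1 } END { exit !found }' /proc/mounts; then
        echo "$dir is a real mount point"
    elif [ -d "$dir" ]; then
        echo "$dir is a stray directory on the rootfs"
    else
        echo "$dir does not exist"
    fi
}

check_stray /mnt/ssd      # the suspect path from this thread
check_stray /mnt/cache
```

Once the offending container mapping is corrected, removing the stray path (or rebooting) clears it, as described above.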
Shamalamadindong (Author) Posted January 2, 2016

Everything is working fine now. The slow transfer speeds were fixed by turning on jumbo frames on my network card and increasing the send/receive buffers. The docker issues are also solved, after completely deleting the docker img and restarting the server.
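For reference, the current MTU of each interface can be read from sysfs, and the tweaks described above look roughly like the commented commands below. The interface name eth0 and the 9000/4096 values are assumptions (the post doesn't say whether the buffers were raised on the Windows driver side or the server; the ethtool line is the Linux-side equivalent), and every NIC and switch in the path must support jumbo frames or transfers will fail in confusing ways:

```shell
# Show the current MTU of every interface (1500 = standard, 9000 = jumbo).
for dev in /sys/class/net/*; do
    printf '%s mtu %s\n' "$(basename "$dev")" "$(cat "$dev/mtu")"
done

# The tweaks themselves (run as root; values are illustrative):
# ip link set dev eth0 mtu 9000        # enable jumbo frames
# ethtool -G eth0 rx 4096 tx 4096      # grow the NIC ring buffers, if supported
```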