Speeding up Windows transfers


I am not sure where else to post this, but I want to make sure I am using best-practice methods to get the speediest transfer speeds over gigabit from my Windows PC to the array. Here is what I am seeing:

I start transfers of video files, and after the first 6 or 7 GB or so, everything slows down from 110 MB/s (basically a saturated gigabit link) to 24 MB/s for the rest of the transfer. The shares use the cache drive, a 512 GB Crucial SSD, which unloads to the array of spinning disks each night. My files range from 10 GB to over 100 GB, and I am not certain why a transfer starts off really well and then suddenly slows down instead of holding gigabit speed the whole way. Since I move a lot of files this way, it is getting to be a problem.

I can verify the files are writing to the cache disk only; if I go into the cache drive I can see them being added, and the parity disk and all the other drives in the array are spun down, which is good. I am a network engineer, so I am wondering if there are some settings on the NIC of either the array or the Windows PC that need tuning to get better performance. Both machines are on the same network and both are full gigabit, so I really do not understand where the slowness is coming from. The array's CPU is not pegged when I run htop over a telnet session; nothing in the process list looks busy at all.

I am doing these transfers by just dropping files onto \\tower\usershare\fold\file. Is there a better method than this? I am really not certain if I am missing something and just need a sanity check here. I am running 6.8.3, BTW, and can provide any logs if asked. I don't think the array is the issue here; maybe it's a setting or checkbox somewhere, like I'm overrunning a buffer or write cache?
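For what it's worth, here is the kind of sanity test I am thinking of to separate the network from the disks (a sketch; it assumes iperf3 is available on both ends, and "tower" is just my server's hostname):

    # on the Unraid box (via telnet/SSH): start a listener
    iperf3 -s
    # on the Windows PC: push data at the server for 30 seconds
    iperf3 -c tower -t 30

If that holds around 940 Mbit/s for the full run, the NICs and switch are fine and the slowdown is on the disk or caching side.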


I have had disk shares enabled for several years. I tried doing this today, but the same problem happens. I don't move files between disk shares and user shares, so I follow the caution there. Thanks for the help. Writing a file directly to the cache still drops in speed after the first 5 or 6 GB. Not sure why. It has to be a stupid Windows thing I am missing.

20 hours ago, bardsleyb said:

I start transfers of video files, and after the first 6 or 7 GB or so, everything slows down from 110 MB/s (basically a saturated gigabit link) to 24 MB/s for the rest of the transfer.

This suggests a device problem more than a network one; enable turbo write and see if it's faster transferring to the array, assuming no major controller/HDD bottlenecks.
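If you want to test it from the console rather than the GUI, something like this should work (a sketch; Settings > Disk Settings exposes it as Tunable (md_write_method), so verify the values on your release):

    # switch the array to reconstruct ("turbo") write for this session
    /usr/local/sbin/mdcmd set md_write_method 1
    # revert to the default read/modify/write method
    /usr/local/sbin/mdcmd set md_write_method 0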

21 minutes ago, JorgeB said:

This suggests a device problem more than a network one; enable turbo write and see if it's faster transferring to the array, assuming no major controller/HDD bottlenecks.

Unfortunately, turbo write and jumbo frames did not stop the throttling after the first few gigs either. At this point I am debating throwing in an NVMe drive and testing that instead of the SSD I am using for cache now. Seems like an excuse to spend money to me... :)

43 minutes ago, itimpi said:

Do you have "turbo write" enabled? It almost sounds as if you are not writing to the cache as expected but directly to the array, so that things slow down as soon as the RAM buffers run out.

Yes, it's enabled. I know it's not writing directly to the array, since I am copying the file directly to the disk share for the cache disk, and I can visibly see that all the drives in the array are spun down doing nothing. It's really weird. I can read from any disk in the array and write to my Windows NVMe drive at gigabit; I just tested that with a 36 GB file. But when I write that file back to the cache drive, it starts at 110-115 MB/s, holds that speed for about 6 GB, and then drops to about 24 MB/s for the rest of the transfer to the cache disk.

The cache is a Crucial MX 512 GB SSD. I have an LSI HBA flashed to IT mode, but the cache drive goes straight to a motherboard SATA port. The motherboard has 10 ports, so most of the drives in the array don't use the LSI card; only a few of the spinning disks do. I am wondering if, again, this is just an excuse to get new hardware, starting with that cache drive.

I thought this might be something easy that I had missed, as I have been away from the forums and Unraid updates for a while and thought maybe something had changed. The array works so well that I have not made any real changes or updates in quite a while. For example, my parents' Unraid server that I manage has not been rebooted all year and has an uptime of 390 days and counting. It even runs a Windows VM that I use for remote management of the server itself.
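One test I can run to take SMB and the network out of the picture entirely is to write to the cache disk locally from the console; a sketch, with a throwaway test file:

    # write ~20 GB straight to the cache disk with the page cache bypassed,
    # to see whether the SSD itself slows down after a few GB of sustained writes
    dd if=/dev/zero of=/mnt/cache/ddtest.bin bs=1M count=20000 oflag=direct status=progress
    rm /mnt/cache/ddtest.bin

If dd also falls off after ~6 GB, the SSD itself (its SLC write cache filling, for example) would be the bottleneck rather than anything on the Windows side.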


I had the same problem with slow writes on big files after a while. It's usually something to do with vm.dirty_ratio and vm.dirty_background_ratio. You can adjust these with the Tips and Tweaks community application. The right percentages depend on how much RAM you have and your use case. Here's an explainer of what these do: https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
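If you want to experiment before committing values in the plugin, sysctl changes take effect immediately (the 5/10 split below is just an illustrative starting point, not a recommendation for your exact box):

    # check the current thresholds
    sysctl vm.dirty_ratio vm.dirty_background_ratio
    # start background flushing at 5% of RAM and block writers at 10%,
    # so the kernel doesn't buffer many GB and then stall the transfer
    sysctl -w vm.dirty_background_ratio=5
    sysctl -w vm.dirty_ratio=10

Note these don't persist across a reboot unless the plugin (or your go file) reapplies them.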

