Copying speeds



I am new to Unraid and still in the trial period. I have noticed that the disks did 220 MB/s during preclear and while building parity. I have no cache drive yet.

 

When I copy files to the array over SMB it goes at about 60 MB/s. When I download from my sftp server, which usually maxes out my 8 MB/s internet connection, it goes at about 7.2 MB/s to the SMB-mapped drive.

 

When I use the FileZilla Docker container it only goes at 3 MB/s. I am guessing this is because the container's network driver does not work very well in bridge mode? That speed was a disappointment, as I was hoping to download directly to the array. For now it seems the fastest way is still to download to my PC and then copy the files across at 60 MB/s.

 

I have looked into the Turbo Write feature and am thinking I should give it a try, at least during this initial stage when I want to write about 15 TB to the array right from the start. Is that a good idea? Any comments on the other speeds?

 

Sorry for the questions, which I know have probably been asked a million times before.

38 minutes ago, Simontv said:

I have looked into the Turbo Write feature and am thinking I should give it a try, at least during this initial stage when I want to write about 15 TB to the array right from the start. Is that a good idea?

 

Yes, it's a good idea when you have lots of data to write. Since each drive only needs one access per block you write, you get much better performance than in the normal mode, where the parity drive has to perform a read/modify/write cycle for every block.
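
To make that concrete, here's a rough model of the disk operations per written block in each mode (just the arithmetic behind the two modes, not Unraid's actual code):

```python
# Rough model of disk operations per written block, with one parity drive.
# Not Unraid's implementation -- just the arithmetic behind the two modes.

def rmw_ops():
    """Normal (read/modify/write) mode: only 2 drives touched, 4 operations."""
    # Read old data + read old parity, then write new data + new parity.
    # Reading and then rewriting the same sectors costs the parity drive
    # a full platter revolution per block.
    return {"reads": 2, "writes": 2, "drives_touched": 2}

def turbo_ops(num_data_drives):
    """Turbo (reconstruct) write: every drive does exactly one access."""
    # Read all the *other* data drives, write the new data and the new parity.
    return {"reads": num_data_drives - 1, "writes": 2,
            "drives_touched": num_data_drives + 1}

print(rmw_ops())       # {'reads': 2, 'writes': 2, 'drives_touched': 2}
print(turbo_ops(5))    # {'reads': 4, 'writes': 2, 'drives_touched': 6}
```

That revolution penalty on the parity drive is what caps normal-mode writes at roughly half the raw drive speed; turbo write streams everything sequentially instead, at the cost of spinning up all the drives.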

 

41 minutes ago, Simontv said:

When I download from my sftp server, which usually maxes out my 8 MB/s internet connection, it goes at about 7.2 MB/s to the SMB-mapped drive.

 

If sftp doesn't buffer and run the retrieval asynchronously with the save, then the extra latency in the save might make sftp a bit slower. A transfer program really shouldn't stall the read path while waiting for each block to be written - the only reason to pause reading is a large write backlog. And since unRAID can write far more than 7.2 MB/s, there should never be any backlog.
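
To illustrate what I mean by buffering: a transfer program can run the download and the save on separate threads joined by a bounded queue, so write latency only stalls the download once the backlog actually fills. A minimal sketch, where the src/dst file-like objects stand in for the sftp retrieval and the SMB save:

```python
import threading
import queue

BACKLOG = 64                     # in-flight blocks before the reader pauses
buf = queue.Queue(maxsize=BACKLOG)

def reader(src):
    """Pull blocks from the network as fast as the link allows."""
    while True:
        block = src.read(64 * 1024)
        buf.put(block)           # only blocks once BACKLOG is full
        if not block:            # empty read signals end of file
            break

def writer(dst):
    """Drain the queue to disk; write latency never reaches the reader
    unless the queue is completely full."""
    while True:
        block = buf.get()
        if not block:
            break
        dst.write(block)

def transfer(src, dst):
    t = threading.Thread(target=reader, args=(src,))
    t.start()
    writer(dst)
    t.join()
```

With this structure a 7.2 MB/s download never waits on the array, because the queue absorbs any momentary write stall.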

 

You may well get better results from sftp and FileZilla too, since the write latency will be much smaller in turbo write mode.

6 minutes ago, pwm said:

 

Yes, it's a good idea when you have lots of data to write. Since each drive only needs one access per block you write, you get much better performance than in the normal mode, where the parity drive has to perform a read/modify/write cycle for every block.

 

 

If sftp doesn't buffer and run the retrieval asynchronously with the save, then the extra latency in the save might make sftp a bit slower. A transfer program really shouldn't stall the read path while waiting for each block to be written - the only reason to pause reading is a large write backlog. And since unRAID can write far more than 7.2 MB/s, there should never be any backlog.

 

You may well get better results from sftp and FileZilla too, since the write latency will be much smaller in turbo write mode.

 

Strangely, I have a similar but unrelated issue with transfer speeds over Wi-Fi. I bought an expensive wireless AC 5 GHz kit and my internet won't go above 3 MB/s with it, which I know is a problem because I have tested wireless AC at other houses and it maxes out the internet line speed without issue.

 

I do have a new pfSense box in place as a firewall, but my internet is a flaky VDSL setup and it doesn't take much for it to lose sync. So I am leaning towards not blaming Unraid entirely for the slow speeds within the FileZilla Docker container, although I think we can say it is unusual.

 

When downloading to the SMB share through my other desktop PC the speed is OK and saves me a recopy, but one thing I noticed is that the transfer seems to stop and start for a moment every few seconds; it is not as stable a connection as it is over my normal internet line. I guess that is because of SMB, or maybe Unraid? I would have thought that 7 or 8 MB/s would be no issue for SMB. This is my first time downloading from SFTP directly to an SMB share at home; at work on enterprise equipment I never had this stop-start behaviour or slow speeds. I am thinking I should buy a switch and plug everything into it, rather than rely on pfSense to route between interfaces/subnets.

 

I have enabled the other write mode; do I need to stop and start the array for it to take effect? I am not seeing any significant write speed increase, and I read that some people saw their speed double to the maximum 110-120 MB/s.

1 hour ago, Simontv said:

I have enabled the other write mode; do I need to stop and start the array for it to take effect?

 

No. When the program/protocol isn't the weak link, you should be able to get over 100 MB/s over a 1 Gbit/s link. But this assumes no other program is concurrently accessing one of the drives, because you lose a lot of transfer rate whenever the drives have to perform extra seeks.
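
For reference, the rough ceiling math (the overhead percentage is an approximation):

```python
# Approximate usable SMB throughput on gigabit Ethernet.
link_bits_per_s = 1_000_000_000
raw_bytes_per_s = link_bits_per_s / 8           # 125 MB/s on the wire
overhead = 0.06                                 # Ethernet + TCP/IP + SMB framing, roughly
print(f"{raw_bytes_per_s * (1 - overhead) / 1e6:.0f} MB/s")   # ~118 MB/s
```

which is why 100-120 MB/s is the realistic range people report.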

23 hours ago, pwm said:

 

No. When the program/protocol isn't the weak link, you should be able to get over 100 MB/s over a 1 Gbit/s link. But this assumes no other program is concurrently accessing one of the drives, because you lose a lot of transfer rate whenever the drives have to perform extra seeks.

 

I am aware that speed tests have to be done when the drives are not experiencing any other activity. 

 

I copied some data from the NAS to my desktop PC, so this should go at full speed: parity is not involved, and the destination is an SSD in my desktop PC. It still tops out at 60 MB/s, which to me indicates the bottleneck is the network for some reason. Whether it is Windows 10, SMB, or maybe my physical network itself, I am not sure yet.
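
One way I could rule SMB in or out is a raw TCP throughput test between the two machines, bypassing SMB entirely. A tool like iperf3 does this properly; the sketch below is a bare-bones equivalent in Python (the 192.168.1.10 address is just a placeholder for the NAS):

```python
import socket
import time

PORT = 5001
CHUNK = b"\0" * (64 * 1024)

def serve():
    """Run on the NAS side: accept one connection, send data for ~10 s."""
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        deadline = time.time() + 10
        while time.time() < deadline:
            conn.sendall(CHUNK)
        conn.close()

def measure(host):
    """Run on the desktop: count received bytes and report MB/s."""
    total, start = 0, time.time()
    with socket.create_connection((host, PORT)) as s:
        while (data := s.recv(65536)):
            total += len(data)
    print(f"{total / (time.time() - start) / 1e6:.1f} MB/s raw TCP")

# measure("192.168.1.10")   # placeholder address; start serve() on the NAS first
```

If that also stops around 60 MB/s, the network itself is the bottleneck and SMB is off the hook.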

 

I thought I saw an SMB registry key fix in another thread that speeds up transfers. Maybe I need to try that.
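
If it's the tweak I am thinking of, it is the DisableBandwidthThrottling value under the SMB client's LanmanWorkstation key, though I would want to verify against the original thread before applying it. Something like this with Python's winreg module, run as administrator on the Windows machine:

```python
import winreg

# Commonly cited Windows SMB client tweak (verify before applying):
# turn off the client-side bandwidth throttling heuristic.
KEY = r"SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters"

with winreg.CreateKeyEx(winreg.HKEY_LOCAL_MACHINE, KEY, 0,
                        winreg.KEY_SET_VALUE) as key:
    winreg.SetValueEx(key, "DisableBandwidthThrottling", 0,
                      winreg.REG_DWORD, 1)
# Reboot (or restart the Workstation service) for the change to take effect.
```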


I upgraded the NIC in the pfSense box to an HP NC365T (PCIe v2) from the NC364 (PCIe v1) card. This has increased the copying speed to Unraid; I am now maxing out the gigabit connection at 100-120 MB/s. I still want to buy a switch at some point.
