Slow Drive Write Speed and Network Transfers - 30 - 70 MB/s


pish180


Issue:

I'm constantly getting slow write performance transferring over a 10G link from a Windows system that can saturate a 10G link (9-drive RAID 6).  One issue I'm going to shelve for now is that the 10G link only seems to peak at 3Gb/s.  Outside of that, I saturate the 3Gb/s for roughly the first 30 seconds of the transfer and then it drops to a 30-70 MB/s average write.  It is consistent.  The files being transferred are very large movies, so 2-30GB single files.  When it starts writing a new file to another drive in the share, it will spike up again for 30 seconds (3Gb/s) and then drop back to the 30-70 MB/s speeds.  If I pause the transfer, let the parity reads/writes stop, and resume, it will spike to 3Gb/s again.  On the MAIN page the parity drives show reads and writes happening together, and both drives are generally very similar in numbers.  When the system spikes in write speed (network transfer speed) the reads go down to KB/s (so is reconstruct write really working?).  On average I see both parity drives displaying 52 MB/s read and 52 MB/s write.

 

From a READ perspective, I've tried:

Letting all transfers finish, then copying a file from UNRAID to a remote system.  I can easily max out a 1G link for the entire transfer.  Between the two machines on the 10G link I get approximately what a single drive is capable of reading at (180MB/s), SOLID.

System:

  • 10G links on both ends - Flow Control is disabled. 
  • 2 Parity drives in Unraid - Reconstruct Write is enabled
  • Not using Cache drive or SSDs (writing directly to the HDDs) 
  • Drives are all ST10000VN0004's - 10TB Seagate IronWolf drives (2 parity, 6 data)
  • Share settings: Most Free, split at the top 2 directories (essentially putting the movies on all 6 drives based upon the space available on each drive)
  • All drives are connected at SATA 6Gb/s
  • Not sure if it's relevant, but I did enable SMB Local Master - it didn't make a difference

 

Solutions:

  • I already have 14TB of parity information written; removing the parity drives could speed up my transfers?  But then it would have to re-write ALL the data to the drives again.  This seems like a band-aid for the real issue...
  • I have 64GB of system RAM; why can't some of that be used as a cache?  If it is, why so little?
  • I could enable my SSD cache drives for this, but they are only 1TB drives (RAID 1).  I'm trying to transfer over 20TB of data... not going to work with this design.  Plus, I would still have to run the mover and then encounter the same issue again.

 

 

Thanks in advance for the support.


And yes, I did search the forums and read many threads; many are unresolved, some were resolved but not related to this, etc.

Edited by pish180
More Info

Did you try this:

 

     https://forums.unraid.net/topic/50397-turbo-write/

 

By the way, there is a lot more to writing files to the disk than just transferring the data.  There is also the file system overhead, read/write head seek times and rotational latency that enter into the equation.  If you are writing tens of thousands of small files, these factors can become significant.  The cache will speed up the transfer to the server for a very short period of time; once the cache is filled, you are limited by the disk speed.  Remember that every byte of a file requires writing two bytes: the actual data byte and the parity update byte.
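
To make the "two bytes per byte" point concrete, here is a rough sketch of the arithmetic behind it (an illustration only: it shows a single XOR parity byte being updated, not Unraid's actual code, and Unraid's second parity uses a different calculation):

# Read/modify/write parity update for one byte (illustrative values).
old_data   = 0b10110010   # byte currently stored on the data disk
old_parity = 0b01101100   # corresponding byte on the parity disk
new_data   = 0b11100001   # byte being written

# The old data and old parity have to be read back before the new parity
# can be computed, so every byte written costs a read and a write on both
# the data disk and the parity disk.
new_parity = old_parity ^ old_data ^ new_data
print(f"new parity byte: {new_parity:08b}")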

3 hours ago, Frank1940 said:

Did you try this:

 

     https://forums.unraid.net/topic/50397-turbo-write/

 

By the way, there is a lot more to writing files to the disk than just transferring the data.  There is also the file system overhead, read/write head seek times and rotational latency that enter into the equation.  If you are writing tens of thousands of small files, these factors can become significant.  The cache will speed up the transfer to the server for a very short period of time; once the cache is filled, you are limited by the disk speed.  Remember that every byte of a file requires writing two bytes: the actual data byte and the parity update byte.

Frank,

Thanks.  In my case I have enabled turbo write via Disk Settings, as mentioned in the original post.  Honestly, I didn't notice a difference between it being set to Auto and setting it to reconstruct write.

As for my files, as I mentioned in the post, I am transferring large files (2-30GB each).  About 10TB of data, and it is taking FOREVER!  Honestly, it's WAY too long to be considered a viable solution.  This can't be the experience everyone is having... right?  These speeds are honestly unacceptable; I could be transferring faster over USB 2.0!

12 hours ago, pish180 said:
  • I already have 14TB of parity information written; removing the parity drives could speed up my transfers?  But then it would have to re-write ALL the data to the drives again.  This seems like a band-aid for the real issue...
  • I have 64GB of system RAM; why can't some of that be used as a cache?  If it is, why so little?
  • I could enable my SSD cache drives for this, but they are only 1TB drives (RAID 1).  I'm trying to transfer over 20TB of data... not going to work with this design.  Plus, I would still have to run the mover and then encounter the same issue again.

How are you able to write 14TB of parity information if your parity drives are 10TB?

Are you doing multiple transfers at the same time, or a single file at a time?

14 hours ago, pish180 said:

I have 64GB of system RAM; why can't some of that be used as a cache?

This is exactly what you see when you start copying files to your server.  The first couple of gigs are cached in RAM; after that, the transfers are written directly to the array.  A direct write to the array means that as soon as you write to an array disk, the same data is read back from it at the same time to generate the parity data, which causes another write operation.  Reading and writing on a single disk at the same time drops the drive to roughly half of its possible speed.  As mentioned before, you have to calculate in some overhead on top of that.  If there is a drive in this chain that is slower than the others, it will slow down the other operations.  Also keep in mind that if all drives are connected to the same controller, the controller could be a bottleneck too.  Writing to and reading from an array disk, plus writing to 2 parity disks at the same time, plus maybe a VM/Docker doing some read/write operations on a disk connected to the same controller, can slow things down.

 

The best solution for 10G networking is to use a fast cache drive that has enough free space to catch the transferred data.  There is no other way to saturate your 10G link.
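
As a rough back-of-envelope model of the above (assuming ~180 MB/s sequential per drive, the figure from the read test earlier in the thread, and ignoring seeks, controller limits and filesystem overhead):

# Crude model, not a measurement.
drive_seq = 180.0   # MB/s, assumed sequential speed of one ST10000VN0004

# Read/modify/write: the data disk and the parity disks each read the old
# blocks and write the new ones, so each drive splits its time between
# reading and writing the same region.
rmw_write_speed = drive_seq / 2

# Reconstruct ("turbo") write: the target data disk and the parity disks
# only write, while the other data disks are read, so the write can run at
# roughly the sequential speed of the slowest drive involved.
turbo_write_speed = drive_seq

print(f"read/modify/write ~ {rmw_write_speed:.0f} MB/s")
print(f"reconstruct write ~ {turbo_write_speed:.0f} MB/s")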

16 hours ago, pish180 said:

I'm constantly getting slow write performance transferring over a 10G link from a Windows system that can saturate a 10G link (9-drive RAID 6).

You can't get around the Unraid array's write performance; it does not stripe across disks, and it's worse when writing on the inner tracks of a disk.  A 10G link (even with a 3Gb/s ceiling) can't help with that.

You should check the write speed on the disks instead of focusing on the network transfer speed pattern (input equals output).  I'd be surprised if you can't see a difference with "turbo write" on / off.

You could complain that Unraid doesn't always perform as expected (for the array pool); with that I agree.

Edited by Benson

I don't have much to add in the way of helping you, but I can say you should be able to write at the max drive speed of whichever drive in Unraid is being written to. I only have 1Gb Ethernet, and can saturate that easily. My array writes at ~112MB/s. In my system, the fastest disk is capable of ~150MB/s, so I don't see the point of 10Gb in Unraid unless you have a large cache drive to write to that can actually utilize the speed.

 

Those 10TB IronWolfs aren't shingled, right? As in SMR drives?

 

Final question: you are using MB/s correctly, right? As in 70 megabytes per second? I ask because you mention you could be seeing higher speeds with USB 2.0.

USB 2.0's theoretical max speed is 480Mbps (megabits per second). 70MB/s is 560Mbps. If you mean that you are seeing 70Mbps, then something is definitely not working correctly.
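
For reference, the arithmetic behind that (plain unit conversion, nothing Unraid-specific):

# Megabytes vs megabits: 1 byte = 8 bits.
observed = 70            # MB/s reported by the original poster
usb2_max = 480           # Mbps, USB 2.0 theoretical signalling rate

print(observed * 8)      # 560 Mbps, already above USB 2.0's ceiling
print(usb2_max / 8)      # 60 MB/s, and less in practice after protocol overhead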

9 hours ago, testdasi said:

How are you able to write 14TB of parity information if your parity drives are 10TB?

Are you doing multiple transfers at the same time, or a single file at a time?

Single transfer from one host.

No, no.  It's 2 x 10TB drives as dual parity and 6 x 10TB drives as data drives.  All drives are 10TB.  I have over 14TB of data to transfer to this array.

6 hours ago, bastl said:

This is exactly what you see when you start copying files to your server.  The first couple of gigs are cached in RAM; after that, the transfers are written directly to the array.  A direct write to the array means that as soon as you write to an array disk, the same data is read back from it at the same time to generate the parity data, which causes another write operation.  Reading and writing on a single disk at the same time drops the drive to roughly half of its possible speed.  As mentioned before, you have to calculate in some overhead on top of that.  If there is a drive in this chain that is slower than the others, it will slow down the other operations.  Also keep in mind that if all drives are connected to the same controller, the controller could be a bottleneck too.  Writing to and reading from an array disk, plus writing to 2 parity disks at the same time, plus maybe a VM/Docker doing some read/write operations on a disk connected to the same controller, can slow things down.

The best solution for 10G networking is to use a fast cache drive that has enough free space to catch the transferred data.  There is no other way to saturate your 10G link.

So basically what I'm hearing is... the recommendation for the initial transfer of data to Unraid should be: DO NOT set up PARITY before the initial transfer.  This really should be a HUGE notice in the documentation.  Something along the lines of: if you are setting up Unraid and need to transfer mass amounts of data, you should do "X".  My experience has been less than optimal so far.

Nobody is going to buy 20TB of SSDs to support their initial adoption of Unraid.  It's ridiculous, for a $100 OS investment, to spend $5k worth of SSDs, which btw would require at least 2 more PCIe slots for HBAs.

 

 

Can one of the others here please do me a favor: set up an SMB share with no cache, transfer maybe 40GB of data over a 10G link, and report back what speeds you get?  I'm curious whether this is normal.  That will at least rule out my idea that I may have some issue internally.

1 minute ago, pish180 said:

So basically what I'm hearing is... the recommendation for the initial transfer of data to Unraid should be: DO NOT set up PARITY before the initial transfer.  This really should be a HUGE notice in the documentation.  Something along the lines of: if you are setting up Unraid and need to transfer mass amounts of data, you should do "X".  My experience has been less than optimal so far.

Nobody is going to buy 20TB of SSDs to support their initial adoption of Unraid.  It's ridiculous, for a $100 OS investment, to spend $5k worth of SSDs, which btw would require at least 2 more PCIe slots for HBAs.

Can one of the others here please do me a favor: set up an SMB share with no cache, transfer maybe 40GB of data over a 10G link, and report back what speeds you get?  I'm curious whether this is normal.  That will at least rule out my idea that I may have some issue internally.

This IS mentioned quite frequently!

 

However, whether that is a good approach varies from person to person; if you do not have good backups, then getting the data parity-protected ASAP can be more important than maximising the transfer speed.

 

 

3 hours ago, david11129 said:

I don't have much to add in the way of helping you, but I can say you should be able to write at the max drive speed of whichever drive in Unraid is being written to. I only have 1Gb Ethernet, and can saturate that easily. My array writes at ~112MB/s. In my system, the fastest disk is capable of ~150MB/s, so I don't see the point of 10Gb in Unraid unless you have a large cache drive to write to that can actually utilize the speed.

 

Those 10TB IronWolfs aren't shingled, right? As in SMR drives?

 

Final question: you are using MB/s correctly, right? As in 70 megabytes per second? I ask because you mention you could be seeing higher speeds with USB 2.0.

USB 2.0's theoretical max speed is 480Mbps (megabits per second). 70MB/s is 560Mbps. If you mean that you are seeing 70Mbps, then something is definitely not working correctly.

Totally get your point here.  Looking back at architecting this Unraid server, there are many things I'm massively disappointed with, but at the same time I do like it.  So it's really a toss-up.  With FreeNAS I could saturate a 10G link easily, 1GB/s no problem.  It is honestly MASSIVELY painful for me to transfer over this much data.  Something that should have taken me a weekend has taken me 4+ weekends.

I would have figured the parity would not need to read as much data as it is writing during a write operation; this seems really inefficient.  There has to be a better way to use a cache to alleviate the read load on the drive.  Unraid displays the drives reading and writing simultaneously at around 50-60MB/s.  If the reads were removed, I could easily max out the drive's write speed, getting me at least over 100MB/s.

As for SMR drives... not sure.  They are not cheap drives, so I'm not sure if that is a feature of desktop drives; these are NAS drives.
https://www.seagate.com/internal-hard-drives/hdd/ironwolf/

Yes, my use of MB vs Mb is correct.  Everything I've cited so far is bytes, not bits.  If the bottleneck is the parity drives, it won't matter what I do at any point in the future.  I have 2x 1TB SSDs for cache in RAID 1.  If any transfer is over 1TB (per day) then I'm fked.

13 minutes ago, itimpi said:

This IS mentioned quite frequently!

 

However, whether that is a good approach varies from person to person; if you do not have good backups, then getting the data parity-protected ASAP can be more important than maximising the transfer speed.

 

 

If you have a link handy where this is noted, please feel free to share it.  I'm looking for something NOT in the forums, i.e. in the Unraid documentation wiki.
Places I would have expected this information: 
https://wiki.unraid.net/UnRAID_Manual_-_FAQ
or
https://wiki.unraid.net/UnRAID_6/Getting_Started

To your comment... as for the approach, I agree it depends.  I would argue that if you are transferring data from another source, you have the option to leave it there until parity is written.  Thus you can transfer it without parity first and keep it on the source until parity is done.  Honestly, I'd just like an option, without removing my parity drives from the array, to write the damn data at max speed and do parity later.  Just keep track of what needs parity so you don't have to rewrite the existing 14TB of parity info again.

3 minutes ago, pish180 said:


I would have figured the parity would not need to read as much data as it is writing during a write operation; this seems really inefficient.  There has to be a better way to use a cache to alleviate the read load on the drive.  Unraid displays the drives reading and writing simultaneously at around 50-60MB/s.  If the reads were removed, I could easily max out the drive's write speed, getting me at least over 100MB/s.

When writing directly to the array, the cache drive is not involved.

 

Have you read the description of the Unraid write modes to understand how Unraid handles array writes?  If you want to eliminate the reads from the parity drives, then turbo write mode achieves this.

5 minutes ago, pish180 said:

I would have figured the parity would not need to read as much data as it is writing during a write operation; this seems really inefficient.  There has to be a better way to use a cache to alleviate the read load on the drive.  Unraid displays the drives reading and writing simultaneously at around 50-60MB/s.  If the reads were removed, I could easily max out the drive's write speed, getting me at least over 100MB/s.

The red section DEFINITELY does not look like you have turned on reconstruct write (aka Turbo Write) correctly.

I would recommend you check the settings again.

 

During a write with reconstruct write, there should be zero reads on the parity drives (only writes).
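
If you want to verify that independently of the Main page counters, you can sample /proc/diskstats from the console. A minimal sketch (the device names are placeholders; substitute whatever your parity and data disks actually are):

import time

DISKS = ["sdb", "sdc"]          # hypothetical device names, change to yours
INTERVAL = 5                    # seconds between the two samples
SECTOR = 512                    # /proc/diskstats counts 512-byte sectors

def snapshot():
    stats = {}
    with open("/proc/diskstats") as f:
        for line in f:
            fields = line.split()
            name = fields[2]
            if name in DISKS:
                # split() index 5 = sectors read, index 9 = sectors written
                stats[name] = (int(fields[5]), int(fields[9]))
    return stats

before = snapshot()
time.sleep(INTERVAL)
after = snapshot()

for disk in DISKS:
    rd = (after[disk][0] - before[disk][0]) * SECTOR / INTERVAL / 1e6
    wr = (after[disk][1] - before[disk][1]) * SECTOR / INTERVAL / 1e6
    print(f"{disk}: read {rd:.1f} MB/s, write {wr:.1f} MB/s")

With reconstruct write actually in effect, the parity devices should show essentially nothing in the read column while a transfer is running.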


I'm not sure what the problem is then, especially if you set it to reconstruct write and hit Apply under Disk Settings. I just tested my server's behavior during a large write: it writes the file to one drive and reads the rest to build the parity. The drive being written to does not have much in the way of reads occurring. How are the disks connected?  Are they directly connected to the motherboard SATA ports, or do you have an HBA present?

 

To be honest, with you seeing high reads and writes simultaneously, I would try setting reconstruct write again. Like I said, for me the disk being written to writes at line speed and has minimal read activity. If I fire up two transfers, then I start to see both high reads and high writes on my drives.

 

Also, and I don't believe it has to do with your problem, but I would install the Tips and Tweaks plugin from Apps and set your CPU scaling governor to performance. I was having some stuttering and other issues because the CPU speed was not ramping up and was stuck at 800MHz. When I set it to performance, it ramped up as needed. Conservative works as well.
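
For reference, the governor change itself is just the standard Linux cpufreq sysfs knob; a minimal sketch of doing it by hand (the plugin is the supported way, and this only works as root on systems that expose these files):

import glob

# Write "performance" into every core's scaling_governor so the CPU no longer
# sits at its minimum clock during transfers.
for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_governor"):
    with open(path, "w") as f:
        f.write("performance")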

1 hour ago, testdasi said:

The red section DEFINITELY does not look like you have turned on reconstruct write (aka Turbo Write) correctly.

I would recommend you check the settings again.

 

During a write with reconstruct write, there should be zero reads on the parity drives (only writes).

That's great news... but the setting is enabled... I've even stopped and started the array afterwards...

Maybe I'll set it to RMW and set it back... idk.

Edited by pish180

I'll give it a go for you with a 60GB MKV file:

 

Copying from array to local NVMe SSD via Gigabit - 110MB/s

Copying from NVMe SSD to SATA SSD cache - 110MB/s

Copying from NVMe SSD to array (no turbo write) - 230MB/s while the initial RAM cache fills, then 90-110MB/s

Copying from NVMe SSD to array (turbo write enabled) - 110MB/s

 

Edited by sdamaged

Yeah, something is wrong here.  Reconstruct write is not working...

I have switched it RW -> RMW -> RW  

Stopped the array -> started it -> rebooted.  Nothing stops it from reading while updating parity.

[Screenshot: Main page showing reads and writes on both parity drives]

Edited by pish180

What happens if you change your share setting to high water? I suspect that parity is being reconstructed, but it hasn't had a chance to finish because you keep changing which disk is being written to. As in, it's still reading to generate parity from the last movie you transferred when the next movie is being transferred to a new disk. If you minimize the number of disks you are writing to at a time, you shouldn't see that extra parity activity until the transfer moves to the next drive.

 

I use high water because I am able to leave more of my disks spun down when watching movies etc. You have 2 parity drives, so for you to lose data you would need to have 3 drives fail.  

 

Edit: I would stop the transfer, then change the share setting.
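
To illustrate the difference between the two allocation methods being discussed, here is a simplified sketch of my understanding of them (not Unraid's actual allocator; the disk and file sizes are made up):

files_gb = [25] * 8                     # eight large movie files

def simulate(method):
    free = {"disk1": 9000, "disk2": 8990, "disk3": 8980}   # GB free per disk
    level = 10000 // 2                  # high-water level: half the largest disk
    order = []
    for size in files_gb:
        if method == "most-free":
            # Always pick whichever disk has the most free space right now,
            # so consecutive files tend to land on different disks.
            target = max(free, key=free.get)
        else:
            # High water: keep using the first disk still above the water
            # level; only halve the level when no disk is above it.
            above = [d for d in sorted(free) if free[d] >= level]
            if not above:
                level //= 2
                above = [d for d in sorted(free) if free[d] >= level]
            target = above[0]
        free[target] -= size
        order.append(target)
    return order

print("most-free: ", simulate("most-free"))    # bounces between disks
print("high-water:", simulate("high-water"))   # sticks with one disk

With most-free, each new movie lands on whichever disk is momentarily emptiest, which is exactly the "new file, new disk" pattern described above; with high water the writes keep hitting one disk until it drops below the level.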

 

Edited by david11129

Are you basing the speeds off of what the Main page says? As you can see, when I started transfers to two different disks, my parity speed looks similar to yours. My actual transfer speed did not change from ~1Gb/s. I am not sure if the listed speed is the speed at which it is updating parity, or if the Main page is just unreliable for reported disk speed. I can 100% say that the Windows transfer speed window never dipped below 112MB/s.

 

To show this, I again did the transfer, watched my main page list the speed as around ~55MB/s, then took a screenshot of iotop showing the actual speed. 

[Screenshots: Main page disk speeds vs iotop output]

Edited by david11129
2 minutes ago, david11129 said:

Are you basing the speeds off of what the Main page says? As you can see, when I started transfers to two different disks, my parity speed looks similar to yours. My actual transfer speed did not change from ~1Gb/s. I am not sure if the listed speed is the speed at which it is updating parity, or if the Main page is just unreliable for reported disk speed. I can 100% say that the Windows transfer speed window never dipped below 112MB/s.


Weird... maybe it's a UI glitch.

Umm.... WTH...
So I put it on high water and I get significantly different results! It transferred a 16GB video rock-steady at around 200MB/s, peaking at 400-500MB/s.  Once it changed to another video in the transfer queue, it dropped down to 36MB/s and didn't return to full speed.  WTF is going on here?!
