Why do my write speeds make no sense?



   I've struggled with this issue for a while; every time I think I've finally figured it out, I'm proven wrong.  Currently I'm using unBALANCE to empty the contents of a drive, and it's plodding along at 22MB/s.  If I transfer that same content to an external USB3 drive and then copy it back to the array, I'll get 130+ MB/s. Why?  I do not use the cache drive for any file copies, so that isn't it.  I have reconstruct writes enabled, all my drives are connected at SATA 6Gb/s, and I am not CPU/RAM limited (96GB of RAM, dual Xeon), so what gives?  This is the one issue I have with Unraid that really bothers me.  I get that I am not going to get the same performance as a RAID 5 array, but when I can do a parity sync and average 140+MB/s while a file copy can't manage half that, something seems very wrong.  Anyone have ideas?

 

Thanks!

unbalance.jpg

sata.jpg

main.jpg

Edited by rclifton
Link to comment

Turbo write won't perform as well when reading and writing to the array at the same time, since one of the disks will need to alternate/seek between reads and writes. Still, 22MB/s seems slow. unbalance uses rsync, and rsync is great but not well regarded for speed; try doing a copy with mc or Windows from one disk to another. I would expect speeds around 60MB/s, but like I said, it will never get close to turbo writes from outside the array.
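If you want a quick way to compare (a rough sketch; the disk paths and filename are only examples, adjust to your layout):

# time a plain copy of one large file from disk to disk
time cp /mnt/disk1/Movies/bigfile.mkv /mnt/disk2/Movies/
# time the same file with rsync for comparison
time rsync -a --progress /mnt/disk1/Movies/bigfile.mkv /mnt/disk2/Movies/

If cp comes out noticeably faster, the gap is rsync overhead rather than the array itself.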

Link to comment

A move operation is a copy followed by a delete. Both operations are read and write operations, with lots of cycling back and forth between the data area and the file system metadata area on both source and destination disks. Compound that with the wait introduced by every write updating the corresponding area of the parity disk, and there you go.
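If you are scripting a move yourself, one way to avoid interleaving deletes with the copy is to decouple them (a sketch only; the paths are examples, and nothing should be deleted until the verify pass is clean):

# copy everything first, with no deletes mixed in
rsync -a --progress /mnt/disk1/share/ /mnt/disk2/share/
# dry-run checksum comparison to verify the copy (-n dry run, -c checksum)
rsync -anc /mnt/disk1/share/ /mnt/disk2/share/
# only then remove the source
rm -r /mnt/disk1/share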

Link to comment

Sorry, real life pulled me away for a bit, but I came back to finish moving data off that drive and thought I would give another example of why the randomly crappy writes are sooooo annoying.

 

Here is another set of screenshots, same exact two drives (source is a 2-year-old 7200rpm Seagate 6TB; destination is a brand new 8TB WD White Label Red), and as you can see, a decent transfer speed that actually got slightly faster as it went on.  It's this randomness, where it's totally slow as molasses one time and about what I would expect another, with no explanation I can find, that is really getting to me.  I have the same issue with my monthly parity syncs as well.  One time they will finish in about 15 hours averaging 140+MB/s, and the next time they will run for almost 30 hours with an average speed of 77MB/s.

 

Any ideas?

 

 

a.jpg

b.jpg

Link to comment
43 minutes ago, johnnie.black said:

Did you read the posts above?

 

Also, it can vary with the size of the files being transferred: the smaller the average size, the lower the speed.

 

 

Yes actually, I did read them.  Neither of them really explains the massively random difference in transfer speeds that I see.  I mean, 22MB/s vs 108MB/s is a pretty wide margin, is it not?  Wouldn't you wonder if something was seriously wrong if you frequently saw a swing like that moving files around?  By the way, that first transfer was my kids' movie folder, so all fairly large files.  The second transfer, as you can see, is music: much smaller individual files, and yet it's FASTER!

Link to comment
15 minutes ago, rclifton said:

Yes actually, I did read them. 

And did you try a transfer using mc and realize that transferring disk to disk, even with turbo write, will never have the same performance as transferring from outside the array?

 

As for the unbalance instant speeds, they may not be that meaningful, since they can vary wildly; the average speed at the end is what matters, and if that is around 60-70MB/s it would be normal.

Link to comment
7 hours ago, rclifton said:

I have the same issue with my monthly parity syncs as well.  One time they will finish in about 15 hours averaging 140+MB/s, and the next time they will run for almost 30 hours with an average speed of 77MB/s.

Assuming you mean parity checks, this would be a concern. Perhaps you have some bad cables or bad controllers causing CRC errors. Your diagnostics would reveal any problems like that.
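If you want to check for that yourself before posting diagnostics (replace sdX with each of your data and parity drives):

# the UDMA CRC error counter increments on cable/controller link errors
smartctl -a /dev/sdX | grep -i crc

A non-zero count that keeps growing between checks usually points at a cable.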

Link to comment

Yeah, Johnny Black is literally giving you the exact reason why write speeds are slow when transferring data from disk to disk INSIDE the array.  It's because for each write (either deleting from the one disk or writing to the other), it has to update the parity disk for BOTH transactions.  That's why writing to disks in the array is fast when the source ISN'T on the array.

Link to comment
  • 7 months later...

I came here because I'm experiencing something similar.  The unbalance app seems to be much, much slower than it should be.  I have come to the conclusion there is something wrong with it, or some ballooning error that happens with long transfers.  When it eventually finishes its large file copy (shifting between enterprise disks, currently scheduled to take 14 hours for the remaining 1.6TB), I will do the rest by command line. I've done all sorts of large copy operations on this system, and only since using unbalance has it been slow.

 

Also, for @rclifton: I note that the speed reported by the unbalance plugin does not match the actual transfer speed of the drive as reported by Unraid.  Clearly the speed field in unbalance is taking all sorts of things into account and averaging them out.  But, this being my first time doing a whole-drive cleanup with the unbalance plugin, I note it slowed way down after a couple of hours.  Originally, based on its then-active transfer speed, it was going to take 5 hours for a 3TB copy; however, it has now been running for 20 hours and has only done 1.4TB.

 

Some observations that seem odd to me: at times a disk is reading and writing at the same speed, 75MB/s in both the read and write columns, while the drive it's copying from is only running at 10 or 20MB/s, sometimes less.  Other behaviour that seems odd is that it cycles between reading from the source drive (and not writing to the target), then not reading from the source drive and writing to the target.  So it's like it's copying to a buffer somewhere, which I'm sure is not normal for an ordinary move or copy operation.
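Thinking about it more, that alternating pattern could just be Linux buffering writes in the page cache and flushing them in bursts. The thresholds that govern when flushing kicks in can be inspected (standard Linux sysctls, nothing Unraid-specific):

# percentage-of-RAM thresholds at which background and blocking writeback start
sysctl vm.dirty_background_ratio vm.dirty_ratio

On a box with a lot of RAM, gigabytes can sit in cache before a flush, which would make instantaneous speeds swing even if the average is steady.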

 

That all said, I accept I don't know lots about how Unraid operates, and perhaps I don't understand something.  But it in no way feels normal.  I did have some custom disk tuning set up which gave me larger write speeds.  I've reset that to defaults, but it hasn't helped.

obi-wan-diagnostics-20190705-2317.zip

Link to comment
  • 3 years later...
On 12/5/2018 at 7:57 AM, jonp said:

Yeah, Johnny Black is literally giving you the exact reason why write speeds are slow when transferring data from disk to disk INSIDE the array.  It's because for each write (either deleting from the one disk or writing to the other), it has to update the parity disk for BOTH transactions.  That's why writing to disks in the array is fast when the source ISN'T on the array.

Yep. I came across this thread (now years later, via Google) because the speeds were horrible. I hadn't even considered taking the disk I want to remove out of the array.  I stopped the array, went to Tools -> New Config, and set everything up the same way, except I didn't add the disk I wanted to repurpose outside of UnRAID.

 

When the disk was in the array I was getting ~50-60MB/s using rsync on the command line while transferring large (5+GB) files.  After I removed the disk from the array, restarted the array without the disk, SSHed into the UnRAID system, manually mounted the disk I wanted to remove, and re-ran the same rsync command, I was getting ~240MB/s, which is the maximum my spinning disks can do for R/W ops. I would expect a destination array built on SSDs to also reach its theoretical maximum throughput, depending on your data of course (block size, small files vs. large files, etc.).

 

It meant the difference between a 32 hour transfer and just over a 9 hour transfer for 7TB of data.

 

Steps I used; hopefully someone else who finds this thread via a Google search like I did finds it useful. Full warning: the below is only for people who understand that UnRAID support on these forums will only help you as a 'best effort' and who are comfortable with the command line. There is no GUI way of doing this. You've been warned (though, that said, this is fairly easy and safe; since we are "coloring outside of the lines", BE CAREFUL).
 

After removing the drive from the array via Tools -> New Config and starting the array without the drive, manually update all shares to a configuration where the mover will not run. Then, assuming /dev/sdf1 is the drive you want to remove, install 'screen' via the Nerd Pack plugin, launch a console session (SSH or web console via the GUI; either works), and type:

 

# Launch screen
root@tower# screen
# Create a mount point for the drive you want to remove data from
root@tower# mkdir /source
# Mount the drive you want to remove data from
root@tower# mount /dev/sdf1 /source
# Replicate the data from the drive you intend to remove TO the general UnRAID array
root@tower# rsync -av --progress /source/ /mnt/user/
# --progress is optional; it just shows transfer speed and what is happening
# Press CTRL+A, then the letter D, to DETACH from screen if this is a multi-hour process
# or you need to start it remotely and want to check on it later easily.
# Press CTRL+A, then the letter K (confirm with y), to kill the current screen window.
# Note that this WILL stop the transfer, whereas CTRL+A then D will not.
#
# To reconnect, SSH back into the system and type:
root@tower# screen -r

(wait for rsync to complete)

root@tower# umount /source
root@tower# rmdir /source
# IMPORTANT: If either of the above commands fails, you have ACTIVE processes using the drive you want to remove.
# Unless you know what you're doing, do not proceed until both commands complete without any warnings or errors.

Shut down the server, remove the drive, turn the server back on, and change the shares that were modified at the start of this process back to their original state so the mover will run once again.

 

Why use screen? You can certainly do this without screen; however, if you don't use it and you get disconnected from your server during the transfer (WiFi goes out, you're in a coffee shop, etc.), your transfer will stop. Obviously this is not an issue if you're doing this on a system under your desk, but even then it is probably still a good idea. What if the X session crashes while you're booted into the GUI? Screen does not care; it will keep going, and you can reattach to it later to check on the progress.

 

I did try to use the Unbalance plugin in conjunction with the Unassigned Devices plugin so that the drive I wanted to copy data FROM was not in the array; however, Unbalance doesn't work that way, at least not that I could dig up.

Edited by jaylo123
Link to comment
12 hours ago, jaylo123 said:

After removing the drive from the array via Tools -> New Config and starting the array without the drive

You have to let parity rebuild if you remove a disk. Maybe you did but didn't mention it.

 

12 hours ago, jaylo123 said:

install 'screen' via the Nerd Pack plugin

NerdPack is deprecated. See its thread for a workaround.

 

12 hours ago, jaylo123 said:

I was getting ~240MB/s

Were you writing to a cached share? If writing directly to the array, parity still has to be updated, so single-disk speed isn't possible. Turbo write can help some.

Link to comment
12 hours ago, jaylo123 said:

When the disk was in the array I was getting ~50-60MB/s using rsync on the command line while transferring large (5+GB) files.  After I removed the disk from the array, restarted the array without the disk, SSHed into the UnRAID system, manually mounted the disk I wanted to remove, and re-ran the same rsync command, I was getting ~240MB/s, which is the maximum my spinning disks can do for R/W ops. I would expect a destination array built on SSDs to also reach its theoretical maximum throughput, depending on your data of course (block size, small files vs. large files, etc.).

Write speeds by default are always going to be significantly slower than the raw speeds.

 

This is because the default write mode goes like this for any given sector on the device:

 

  1. Read the sector on the parity disk
  2. Read the sector on the data drive about to be written
  3. Recalculate what parity should now be based upon what the new data for the sector will be
  4. Write the sector on the parity disk
  5. Write the sector on the data disk

I.e., 4 IOPS with the default write method vs. 1 with the drive outside the array (or on a cache pool).

 

You can use "Turbo Write Mode" (ie: Reconstruct write) which will pretty much write at the full speed of the drive (subject to bandwidth considerations), but at the expense that all drives will have to be spinning.

 

 

 

Link to comment

@jaylo123 It's a known fact that Unraid's 'Unraid' array has dawdling speed.  There is no workaround for this.  The only solution I can think of (which I have done) is to not use the Unraid array, which on Unraid pretty much means using a ZFS array.  From experience the speed increase was notable; add to that the remainder of the benefits and (to me at least) it's a no-brainer.

 

However, despite ZFS being very well implemented in Unraid, you would need to be comfortable with the command line to use it and be prepared to do some reading on how it works.  So it isn't for everyone, and I'm not trying to push you one way or the other.  I'm just saying the 'Unraid' array is known to be extremely slow.
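To give a flavour of what that command-line work looks like, here is a minimal sketch of building a pool once the ZFS plugin is installed (the pool name and device names are examples only, and this wipes whatever is on those disks):

# three-disk raidz1 pool, aligned for 4K-sector drives
zpool create -o ashift=12 tank raidz1 /dev/sdb /dev/sdc /dev/sdd
# cheap, nearly free compression
zfs set compression=lz4 tank
# confirm the layout
zpool status tank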

Link to comment
6 hours ago, trurl said:

Were you writing to a cached share? If writing directly to the array, parity still has to be updated, so single-disk speed isn't possible. Turbo write can help some.

Nope. I did have the parity drive operational and the array online, other than the single disk I removed from the array. That's why /mnt/user/ was my <dest> path, and I was getting those speeds. And the amount of data being transferred was 6x the size of my SSD cache disk. My signature has the drives I use, which are enterprise Seagate Exos disks. I guess their on-disk cache is able to handle writes a bit more efficiently than more commodity drives? /shrug. But 240MB/s is the maximum for spinning disks without cache, and I assume the writes were sequential.

 

6 hours ago, Squid said:

Write speeds by default are always going to be significantly slower than the raw speeds.

 

This is because the default write mode goes like this for any given sector on the device

 

You can use "Turbo Write Mode" (ie: Reconstruct write) which will pretty much write at the full speed of the drive (subject to bandwidth considerations), but at the expense that all drives will have to be spinning.

Oh that's interesting (Turbo Write Mode). That ... probably would have been beneficial for me! But I got it done in the end after ~9 hours. Of course, as a creature of habit, I re-ran the rsync one more time before wiping the source disk to ensure everything was indeed transferred. 

 

I didn't measure IOPS, but I'm sure they would have been pretty high. I just finished benchmarking a 1.4PB all-flash array from Vast Storage at work and became pretty enamored with the tool elbencho (a pseudo mix of fio, dd, and iozone, with seq and rand options - and graphs!), and after spending basically four weeks in spreadsheet hell I wasn't too interested in IOPS; I just needed the data off the drive as quickly as possible :).  That said, making 16 simultaneous TCP connections to an SMB share and seeing a fully saturated 100GbE line reading/writing to storage at 10.5GB/s felt pretty awesome!

 

For anyone interested in disk benchmarking tools I highly recommend elbencho as another tool in your toolkit. The maintainer even compiles a native Windows binary with each release. Take a look! 

 

breuner/elbencho: A distributed storage benchmark for file systems, object stores & block devices with support for GPUs (github.com)
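As a taste, a sequential write-then-read pass against a single array disk might look like this (a sketch built from the flags documented in the elbencho README; the path, sizes, and thread count are examples):

# 16 threads, 4MiB blocks, one 20GiB file, direct I/O to bypass the page cache
elbencho -w -r -t 16 -b 4M -s 20g --direct /mnt/disk2/benchfile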

 

41 minutes ago, Marshalleq said:

@jaylo123 It's a known fact that Unraid's 'Unraid' array has dawdling speed.  There is no workaround for this.  The only solution I can think of (which I have done) is to not use the Unraid array, which on Unraid pretty much means using a ZFS array.  From experience the speed increase was notable; add to that the remainder of the benefits and (to me at least) it's a no-brainer.

 

However, despite ZFS being very well implemented in Unraid, you would need to be comfortable with the command line to use it and be prepared to do some reading on how it works.  So it isn't for everyone, and I'm not trying to push you one way or the other.  I'm just saying the 'Unraid' array is known to be extremely slow.

Oh certainly, and yes, I knew there was a performance hit using the unraidfs (for lack of a better term) configuration/setup. And agreed, eschewing the UnRAID array entirely and hacking at the CLI to set up ZFS is a route one could take. But at that point, it would be better to just spin up a Linux host and do it yourself without UnRAID, or switch over to TrueNAS. The biggest draw for me to UnRAID was/is the ability to easily mix and match drive sizes and types into one cohesive environment.

 

I guess I was really just documenting how I was able to achieve what OP was trying to do for future people stumbling across the thread via Google (because that's how I found this post).

Edited by jaylo123
Link to comment
On 9/19/2022 at 9:52 AM, jaylo123 said:

And agreed, eschewing the UnRAID array entirely and hacking at the CLI to set up ZFS is a route one could take. But at that point, it would be better to just spin up a Linux host and do it yourself without UnRAID, or switch over to TrueNAS. The biggest draw for me to UnRAID was/is the ability to easily mix and match drive sizes and types into one cohesive environment.

For those who associate the phrase 'hacking' with something negative, I would just like to point out that putting ZFS on Unraid is not at all a hack.  The two devs have worked extremely hard on it, including with Limetech, to make the plugin work and update seamlessly alongside Unraid updates.  In fact, it's them we have to thank for some of the nice new plugin features we are all now enjoying.  The fact that ZFS currently has no official GUI is just how it is supplied at present (the master code has no GUI by design; however, there are ZFS plugins available to help with this).

 

And yes, many people come to Unraid for the Unraid array, including myself.  But just because you run ZFS does not mean Unraid has no value.  Actually, the way Unraid has implemented docker support and plugins is in a class of its own.  I tried TrueNAS, and even participated in the beta of TrueNAS Scale to try to give it a bit more polish; its implementation of docker is just awful and frustrating to use, and second cousin to its installed Kubernetes.  Kubernetes is considered the king of containerisation on TrueNAS, but it's a weird hack of an implementation that doesn't quite fit home installations or enterprise installations; arguably, Kubernetes is really meant for enterprises.

 

So basically I'm just trying to defend Unraid a bit here by saying its array isn't its only good feature.  And FYI, looking at the announcement for the latest Unraid version, it looks like ZFS baked in by Limetech is coming in the next release.

 

Happy Weekend!

Link to comment
On 9/19/2022 at 5:52 AM, jaylo123 said:

fully saturated 100GbE line reading/writing to storage at 10.5GB/s felt pretty awesome!

Amazing.

 

On 9/19/2022 at 5:52 AM, jaylo123 said:

For anyone interested in disk benchmarking tools I highly recommend elbencho as another tool in your toolkit. The maintainer even compiles a native Windows binary with each release. Take a look! 

👍

Link to comment
