
Slow drive-to-drive copy?


jeffreywhunter


Speed sounds about right for millions of small files.

 

Maybe we need to come up with a copy mechanism that skips the file-by-file copy process and just does a bit-for-bit transfer.  Probably a lot of holes in that idea, but given the overhead of dealing with many (millions of) files is the problem, perhaps one of you geniuses can find a way to skip that process.  Yeah, I know, files are never contiguous on the disk, but it's a nice fantasy... ;)
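For what it's worth, a true bit-for-bit transfer does exist at the block level; a minimal sketch with hypothetical device names (this overwrites the entire target disk, copies free space along with data, and on unRAID would invalidate parity for an array disk):

dd if=/dev/sdX of=/dev/sdY bs=64M status=progress

It really does skip all the per-file overhead, which is exactly why it's so destructive and inflexible.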


This is crazy, it just keeps slowing down more and more.  Is there a way to diagnose this performance issue?  Down to 1.59 MB/s now...

 

[img width=300]http://my.jetscreenshot.com/12412/20161111-pkub-132kb.jpg[/img]

 

I'm just moving files from one disk to another (not sure if they are on the same controller or not), so it shouldn't be too bad; not sure why it's getting so slow...  Thoughts?
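One thing I could try is watching per-disk utilization with iostat from the sysstat package, assuming it's installed on the box:

iostat -x 5

If both the source and destination drives sit near 100 in the %util column with tiny average request sizes, that would point at seek-bound small-file I/O rather than a controller or cabling problem.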


Sorry to be dense - OK, then it is what it is.  I'm using MC to do the move.  Is there a better tool for moving millions of small files?  rsync?  Or is MC just using the same commands (cp?)...  Would it be faster to copy from unRAID to my Windows machine, then back to unRAID, so I don't halve the disk channel (since I'm pretty sure both drives are on the Supermicro controller)?  Or maybe to the cache drive (but it's not big enough)...


A lot of your overhead is seek time for the heads; it's not just about the raw quantity of data.  I don't think you'll see any appreciable benefit from rsync, and doubling the number of copies by going back and forth, plus adding network overhead, certainly won't make it any quicker.  You've just got to ride it out.
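If you want to try rsync anyway for comparison, the local disk-to-disk form would be roughly this (paths hypothetical):

rsync -a --progress /mnt/disk1/data/ /mnt/disk2/data/

The trailing slash on the source copies its contents rather than the directory itself, and -a preserves permissions and timestamps.  But rsync still has to stat, open, and close every single file, which is exactly the overhead that's hurting you.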


Your other option is to zip the 16M files, copy the zip, then unzip it at the destination.  The actual transfer would go super fast, but the overhead of the zip/unzip operations would add up to pretty much the same total time you're seeing now.  It would, however, make a lot of sense to zip first if you were transferring 16M files from a Windows box to the unRAID box.
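A sketch of that route, writing the archive straight onto the destination disk so nothing crosses twice (paths hypothetical; -0 stores files without compressing them, since compression only burns CPU here):

cd /mnt/disk1/files
zip -0 -r -q /mnt/disk2/batch.zip .
unzip -q /mnt/disk2/batch.zip -d /mnt/disk2/files

You still pay the per-file open/close cost twice, once packing and once unpacking, which is why the total comes out about the same.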


Painful either way, thanks for the ideas!


 

You need to link the specific post in question (grab the permalink from the "share" link below the answer). SE answers will move around based on votes. "3rd" is fairly meaningless... ;)

 

I've spent countless hours on superuser, and I never knew that.  Probably because I never feel the need to "share" anything (I assumed it meant faceache or google-).  Thanks, that's actually really handy.  :)


Doing a tar-to-tar pipe might work better than zip; in particular, no time is wasted on compression:

tar -C /mnt/disk1/path1 -cf - . | tar -C /mnt/disk2/path2 -xf -

(Using . instead of * matters here: the shell would expand * in your current directory rather than in /mnt/disk1/path1, and with millions of files it could overflow the argument list anyway.  Dropping -v also saves printing every filename.)


Archived

This topic is now archived and is closed to further replies.
