Very Fast? Write Speed


G Speed


Hey everyone,

 

I think something may be wrong... but I'm not exactly sure..

 

6.0-beta12..

Transferring some movies from my PC to unRAID...

120GB worth...

Getting an average speed of 60-80MB/s... <- the laptop drive seems to be the bottleneck...

Usually I get an average of 35MB/s...

The drive I'm transferring to is a 3TB RED. Parity is the same...

The only change to the system is that I swapped out 4 x 512MB sticks of RAM for 2 x 2GB sticks.

I have no cache drive.

 

[Attached screenshot: speed.jpg, showing the transfer rate]


Without parity, a 65 MB/s transfer speed would be expected, I think. For example, I was transferring 2TB of data from a non-RAID disk to a RAID disk and I was averaging around 55-60 MB/s.

 

As for why the 2GB sticks perform better than the 512MB ones, I am not sure. Are the new RAM sticks faster (timings and clock speed)? Still, for data transfer, I wouldn't have expected that to make twice the performance difference.

 

and almost done...



 

I still have parity, and it's being written to as the files are being transferred...

The RAM is the same specs, just larger...

4 x 512MB = 2GB - previously

2 x 2GB = 4GB - currently

 

 


 


 

 

The speed of the memory would have nothing to do with transfer rates. The slowest memory out there is far faster than Ethernet.

 

The difference here is that he's gone from 2GB to 4GB. unRAID is now using more of the RAM to cache the transfer before writing it to the drive. Net result is a faster transfer.


 


 

How would 2GB vs 4GB make a difference... I know that sounds funny, but... the file is only moving at 80MB/s max...

Even 2GB should be enough to buffer -> then disk 1 -> parity.
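For what it's worth, the likely mechanism here (an assumption about stock Linux behavior, not anything unRAID documents for beta12) is that the kernel sizes its write-back cache as a percentage of total RAM, so going from 2GB to 4GB doubles how much of an incoming transfer can land in memory before writers get throttled down to disk speed. A minimal Python sketch to inspect those thresholds on the server, assuming Python is installed:

```python
# Sketch: read the Linux dirty-page thresholds that size the write-back cache.
# Paths are standard procfs locations; the values are percentages of total RAM
# (zero if the *_bytes variants are set instead).

def read_int(path):
    with open(path) as f:
        return int(f.read().strip())

dirty_ratio = read_int("/proc/sys/vm/dirty_ratio")                   # writers block above this
background_ratio = read_int("/proc/sys/vm/dirty_background_ratio")   # background flushing starts here

# MemTotal is reported in kB in /proc/meminfo.
with open("/proc/meminfo") as f:
    mem_total_kb = next(int(line.split()[1]) for line in f if line.startswith("MemTotal"))

mem_total_mb = mem_total_kb / 1024
print(f"RAM: {mem_total_mb:.0f} MB")
print(f"Writers throttle after ~{mem_total_mb * dirty_ratio / 100:.0f} MB of dirty data")
print(f"Background flush starts at ~{mem_total_mb * background_ratio / 100:.0f} MB")
```

With the common defaults of 20/10, a 2GB box throttles after roughly 400MB of buffered writes while a 4GB box allows roughly 800MB, which is enough to change the average speed Windows reports for a multi-GB transfer.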


Did you actually time it?  I never believe what is in those progress boxes that a program displays.  You can usually believe a log file if your program generates one.  (ImgBurn for Windows is one program that has an excellent log-type report screen displaying all of the pertinent information after the transfer has completed.  It even waits until unRAID has finished emptying the RAM buffer and written all of the data to disk before it says the job is done!)

 

ImgBurn (in my case) starts out transferring at 80+MB/s and then slowly drops to 35-40MB/s.  It usually takes another twenty or thirty seconds to finish the disk writes after the data transfer across the network is done.
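If you want a number you can trust, time the copy and the final flush together. A rough Python sketch (the paths are placeholders, and it assumes you run it somewhere os.sync() actually reaches the target filesystem, i.e. on the server itself):

```python
# Sketch: time a copy *including* the final flush to disk, so writes cached
# in RAM don't inflate the apparent transfer speed. Paths are hypothetical.
import os
import shutil
import time

SRC = "/path/to/big_movie.mkv"          # hypothetical source file
DST = "/mnt/user/Movies/big_movie.mkv"  # hypothetical unRAID share path

start = time.monotonic()
shutil.copyfile(SRC, DST)
os.sync()                               # block until dirty pages are flushed out
elapsed = time.monotonic() - start

size_mb = os.path.getsize(SRC) / (1024 * 1024)
print(f"{size_mb:.0f} MB in {elapsed:.1f} s = {size_mb / elapsed:.1f} MB/s")
```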


This thread used to piss me off because I was the typical unRAID'er that would see ~30MB/s - 40MB/s when writing to a parity protected array without the use of a cache drive.

 

I have since made 3 major changes to my server:

 

1.  Upgraded to unRAID v6

            -  Honestly, I did not see an increase in speed after migrating to v6, prior to the next two items listed below being completed.

2.  Upgraded my parity drive from a 2TB 7200 RPM 6Gb/s Hitachi to a 4TB 7200 RPM 6Gb/s HGST

3.  Migrated all data drives from RFS to XFS (amount of data on each drive pretty much remained the same)

 

Here is what I now see when reading and writing to the array:

 

READING FROM ARRAY

[Attached screenshot: read speed from the array]

 

 

WRITING TO ARRAY

[Attached screenshot: write speed to the array]

 

Which of the factors listed above do you think provided the largest impact?  I cannot explain the speed boost.  In fact, I found this to be so alien to me that I verified that I did not accidentally turn on caching and that the parity drive was being written to.  And yes...I did time the transfers and they really were that fast.

 

John



EACH of these has an impact. The 4TB 7200 RPM HGST is capable of 160MB/s on the outer tracks.

unRAID 6 allows more buffering of writes in the buffer cache, especially with a large amount of RAM.

Perhaps housekeeping on XFS is managed better. There used to be a single kernel lock in ReiserFS for years that hurt performance.

The destination drive type plays a big role in this too.

 

 

I'm on unRAID 5 and I burst at 110MB/s from Windows, with the data on an SSD, to an HP MicroServer.

There were kernel tunings I used, plus I always make sure I purchase a high-speed parity drive.



 

Thx Weebo.

 

The thing that confused me was that even with the new parity drive, my parity check speeds still clocked in around 75MB/s.  Would the outer tracks not be a factor here also?

 

John



 

I am having a hard time believing this as well. First, a speed of 113MB/s is basically saturating a gigabit LAN connection. Second, that matches the speeds I get writing to an SSD cache drive. I thought the parity calculations slowed writes to the array down somewhat (this was my experience pre-cache), which is the basis of why people use cache drives and the mover in the first place.  If you can write to the array at the same speed (or better) as you can transfer data over your LAN, why bother with a cache and open yourself up to data loss?



 

I had the same exact concerns.

 

I guess the only way to really prove that the file I am testing with exists in parity would be to pull the drive that it lives on and replace it with a new drive.  If the drive is rebuilt and the file exists, then we will have our answer.

 

Other than that my only verification is watching the "Writes" indicator for the parity drive on the Main screen update as I write the file to the array (which it does).

 

John

 

EDIT:  I copied that MKV down to my desktop, cleared the stats in unRAID and then copied the file back to the user share (/mnt/user/Movies).  The data was written to DISK1 and Parity as shown below:

 

[Attached screenshot: drive stats showing writes to DISK1 and Parity]


Is it possible that the file is being written to memory first since I have 48GB?  Once the transfer was complete, both DISK1 and Parity showed ~11,000 writes.  I went back a minute later and the number of writes had increased to what you see above (~33,000).

 

This snapshot was taken almost immediately after Windows had said the transfer was completed:

 

[Attached screenshot: drive stats immediately after the Windows transfer completed]

 

This one (also shown above) was taken about a minute later:

 

[Attached screenshot: drive stats about a minute later]

 

I have no other activity on my server ATM other than NZBDrone updating some metadata (DISK6).

 

John
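One way to actually watch that write-behind happening in real time (a sketch, assuming Python is available on the server; Dirty and Writeback are standard /proc/meminfo counters):

```python
# Sketch: poll the kernel's Dirty/Writeback counters during and after a copy.
# Dirty = data sitting in RAM waiting to be flushed; Writeback = in flight.
import time

def meminfo(field):
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith(field + ":"):
                return int(line.split()[1])  # value is reported in kB
    return 0

for _ in range(30):                          # watch for ~30 seconds
    dirty_mb = meminfo("Dirty") / 1024
    wb_mb = meminfo("Writeback") / 1024
    print(f"Dirty: {dirty_mb:8.1f} MB   Writeback: {wb_mb:8.1f} MB")
    time.sleep(1)
```

If Dirty climbs by several GB during the copy and then drains over the next minute or two, that matches the delayed write counts on the Main screen.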



 

Based on what Weebo said, that seems likely.  That's extremely interesting if that is what's going on... because it means that unRAID is using extra RAM like a temporary cache disk, which might reduce the need for a cache disk on systems with infrequent large writes.

 

Now, are there any drawbacks to this? For example, if you're not using ECC RAM, does this increase the risk of bitrot?



 

I never look a gift horse in the mouth so I should stop asking questions.  :)

 

I do use ECC so at least I know I am covered there.

 

But again...I am pretty damn sure I did not see these speeds in v6 until I upgraded the parity disk and migrated to XFS.  Maybe XFS is playing a part with the memory caching.  I don't know.



 

This is what I think I am seeing when I use ImgBurn (on my Win7 computer) to create an image file of a Blu-ray on the server.  The software states that the transfer is done, but ImgBurn does not 'finish' for many seconds after this event is logged.  (Remember, ImgBurn was designed to be a bullet-proof program for burning CDs and DVDs, and thus it will wait until the buffer on the burner is empty before it terminates its connection to the drive.)  As a result of this behavior, I have long suspected that unRAID does cache writes to memory.  If you have a limited amount of memory, the delay is short.  Plus, until v6, unRAID was basically limited to 4GB of memory usage due to the 32-bit OS, and many motherboards did not even support more than 8GB.  With these limitations, most people simply didn't install large amounts of memory.

 

Today, with the 64-bit unRAID OS and more people wanting to use VMs, there will be more and more systems with potentially large amounts of memory that unRAID might have available for caching.  It does present some interesting issues to consider, such as: are powerdown scripts and configuration parameters set up to ensure that they wait until that cache is emptied onto the hard disks before they shut the system down?
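The conservative pattern for a powerdown script would be to sync and then wait for the Dirty counter to actually drain before stopping the array. This is only a sketch of the idea in Python, not the actual unRAID powerdown script:

```python
# Sketch: wait for the page cache to drain before proceeding with shutdown.
import os
import time

def dirty_kb():
    with open("/proc/meminfo") as f:
        for line in f:
            if line.startswith("Dirty:"):
                return int(line.split()[1])
    return 0

os.sync()                          # ask the kernel to flush everything now
deadline = time.monotonic() + 120  # give up after two minutes (assumed budget)
while dirty_kb() > 1024 and time.monotonic() < deadline:
    time.sleep(1)                  # still more than ~1 MB dirty; keep waiting
# ...only now is it reasonably safe to stop the array and power down
```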


I just performed another test...

 

I copied an MKV from a disk share (DISK11) of a drive that is pretty full down to my desktop.  I deleted the file from the disk share.  I copied the file back from my desktop to the disk share.

 

Right at the time the Windows transfer completed, I refreshed the unRAID Main screen and saw ~11,000 writes to both parity and disk.

 

I then deleted the file from my desktop.

 

The parity/disk writes continued to increase over the next 1.5 minutes to 41,000 writes.

 

So, at least I know that the transfer is not still going on behind the scenes after Windows said it was completed.

 

I am left to believe that the RAM is being used.  If this is the case, my concern is what would happen if I shut the server down in the middle of that 1.5 minutes?

 

John



 

That is a real issue that someone will have to address.  I know that Windows will flush its cache buffers before it shuts down.  I assume that if you call for a shutdown from a normal Linux distribution, it will also flush the buffers.  With unRAID, many of us are using a powerdown script that can either ask processes to stop or terminate them if they don't behave in the expected manner.  This script is often tied into a monitoring program for a UPS, which dictates that shutdown must occur after a power failure.  As an example, let's suppose the power fails and the UPS kicks in.  Your five-year-old battery indicates that you have five minutes (300 seconds) of battery time left, and you have set that shutdown must start at 10% of battery life remaining (the default, by the way).  Your server will first run 270 seconds, waiting for power to be restored.  If it isn't, shutdown will start.  If there is a massive buffer to be flushed, will everything be able to finish in the remaining 30 seconds?
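The arithmetic is worth doing explicitly. A quick sketch with assumed figures (your buffer size and disk speeds will differ):

```python
# Sketch: would flushing the write buffer fit inside the UPS shutdown window?
# All figures below are assumptions for illustration.
battery_runtime_s = 300        # five minutes of battery
shutdown_trigger = 0.10        # shutdown starts at 10% battery remaining
flush_window_s = battery_runtime_s * shutdown_trigger   # 30 s left to finish

dirty_buffer_mb = 4000         # e.g. ~4 GB of unflushed writes sitting in RAM
disk_write_mb_s = 100          # sustained speed of data drive + parity together

flush_needed_s = dirty_buffer_mb / disk_write_mb_s
print(f"Flush needs {flush_needed_s:.0f} s, window is {flush_window_s:.0f} s")
print("Data at risk!" if flush_needed_s > flush_window_s else "OK")
```

With those assumed numbers the flush needs 40 seconds against a 30-second window, which is exactly the failure mode described above.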



 

To get a more accurate look at whether the file is being written to memory first, you'd have to transfer 96GB of data, twice your 48GB of RAM, so the whole transfer can't simply be absorbed by the cache.

 


Hi, very interesting post, since I have a similar setup to johnodon's and I was surprised too about the performance in the v6 betas.

What do you think would be the impact of changing the parity drive to one of those 10K RPM 4TB mainstream drives, e.g. Seagate?

Rgds.

The problem is that any activity involving parity is going to be primarily constrained by the slowest of the parity drive and the data disks in use for the particular operation.  Therefore there is not much point in getting a parity drive that is faster than the fastest data drive.
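Put another way, at every position across the platters a parity operation moves at the speed of the slowest drive involved at that position. A toy Python sketch of that reasoning, with made-up outer/inner track speeds:

```python
# Sketch: parity-check speed is gated by the slowest drive at each position.
# The speeds below are illustrative outer-track / inner-track figures only.
drives = {
    "parity (4TB 7200)": (160, 80),   # (outer MB/s, inner MB/s)
    "disk1 (2TB 5900)":  (110, 55),
    "disk2 (3TB 5400)":  (120, 60),
}

outer = min(fast for fast, _ in drives.values())
inner = min(slow for _, slow in drives.values())
print(f"Check starts near {outer} MB/s and ends near {inner} MB/s")
print(f"Average will land roughly around {(outer + inner) / 2:.0f} MB/s")
```

That is also why the new parity drive alone doesn't lift johnodon's ~75MB/s parity check speed: the slower data drives set the floor for the whole pass.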



Writing to multiple protected array drives at the same time may benefit slightly from a faster parity drive, especially since the seek time on a 10K drive should be a good percentage better than on a 5900 RPM drive.
