
Sluggish performance due to HDD or just the way unraid works?


Recommended Posts

I've noticed the performance of my unRAID server during basic tasks has been so sluggish it's crashed Windows Explorer a few times, and the machine keeps running parity checks daily even though I have it set to run monthly. Is this due to the hard drives I have in the machine or just how unRAID works?

 

Only the SSD, flash drive, and 4TB drive are new; the rest are old junkers I had lying around, and disk 1 has bad sectors, though my data's not /that/ important yet so I've left it in. You can see their model numbers in the attached picture.

 

Thanks.

 

Larger pic can be found at: http://puu.sh/jupiz/fed9464e09.png


Link to comment

Sluggish performance is not the norm.

 

Try using it without junk hardware.

 

Thanks. Would it be close enough if I moved the 4TB from parity to being its own data drive and took the other data drives out? I plan to buy new 4TB Reds down the road; it's mainly cost that's holding me back, along with putting money into other things for now.

Link to comment

Disk1 is FAILING NOW.

 

Did you test any of these disks with preclear or something before trying to use them? ALL drives must be trustworthy because they are ALL needed if one of them fails.

 

Since these were just test drives I don't think I bothered at the time, other than on the good stuff (such as the 4TB, SSD, etc.). Could you explain a bit more about the last part? I thought the way unRAID worked was different from a typical RAID that uses parity: if drive #23 fails, it wouldn't affect drive #1, as only the data on drive #23 would be at risk if the parity drive fails next. Am I wrong in how I understand unRAID?

 

In the meantime I'll see what happens when I remove the bad disk.

Link to comment


unRAID with parity can protect you from a single drive failure, either a data disk or the parity disk. If one fails, and all of the others are functional and have not had their contents altered outside of the array, the failed drive can be reconstructed. If, during reconstruction, another drive fails or starts to become flaky, the reconstruction will be compromised.
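
As a rough illustration (a hypothetical byte-level sketch in Python, not unRAID's actual code), single parity is just an XOR across the data disks, so any ONE missing disk can be backed out of it:

# Minimal sketch: single XOR parity lets you rebuild any ONE missing data
# disk by XORing the parity with every surviving data disk.
disk1 = bytes([0x10, 0x20, 0x30])   # hypothetical per-position bytes
disk2 = bytes([0xAA, 0xBB, 0xCC])
disk3 = bytes([0x01, 0x02, 0x03])
parity = bytes(a ^ b ^ c for a, b, c in zip(disk1, disk2, disk3))

# disk2 "fails": reconstruct it from parity plus all of the other disks.
rebuilt = bytes(p ^ a ^ c for p, a, c in zip(parity, disk1, disk3))
assert rebuilt == disk2

# If a second disk were also unreadable, each position would have two
# unknowns and nothing could be recovered, which is why every remaining
# drive has to read back cleanly during a rebuild.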

Link to comment


On rereading - I'm not sure I hit your question exactly.

 

unRAID does not work like traditional RAID, in that each disk maintains an independent file system separate and distinct from the other disks in the array. The failure of one disk, or even more than one, would not affect the ability to access files on the surviving disks. However, if more than one data disk fails, the parity disk, even if intact, becomes useless, and the contents of the failed disks are unrecoverable.

Link to comment


You did. However, I found out I had two bad drives and am in the middle of moving the data off of them and onto the good 1TB drive. I wanted to wait until I was finished and had some time to test speeds before I came back, so I wouldn't spam the thread.

Link to comment

Still wondering if you tested the "good" drive.

 

I did minimal testing with three good drives of varied sizes and it seemed much faster. However, I want to transfer the files that caused problems so I can compare before and after and get a better idea, instead of going straight to "yup, it's fast now, all fixed"; I've been wrong before and rushed to a conclusion too quickly. I'll post as soon as I have a better answer, as I'm still waiting on these moves between disks.

Link to comment

By testing I don't mean performance testing; I mean a complete read/write of the entire disk, as is done by the preclear script. Speed will not help if it can't ALL be reliably read and written.
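
For what it's worth, here is a minimal Python sketch of the read half of that idea; the device path and chunk size are placeholders, and the actual preclear script does more (a zero-write pass and before/after SMART comparisons):

import sys

DEVICE = "/dev/sdX"        # placeholder; point this at the disk under test
CHUNK = 4 * 1024 * 1024    # read 4 MiB at a time

def read_whole_disk(path: str) -> int:
    """Read every byte of the device, stopping at the first I/O error."""
    total = 0
    with open(path, "rb", buffering=0) as dev:
        while True:
            try:
                block = dev.read(CHUNK)
            except OSError as err:
                print(f"read error at byte {total}: {err}", file=sys.stderr)
                raise
            if not block:          # end of device reached
                break
            total += len(block)
    return total

if __name__ == "__main__":
    print(f"read {read_whole_disk(DEVICE)} bytes without error")
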
Link to comment

I see. In that case I'll start preclearing them. The S.M.A.R.T. results all came back fine, though I understand that's not good enough to rely on solely.

Link to comment

It is much more common for a drive to have a cabling/controller issue than for the drive itself to be bad. The SMART attributes, by and large, tell you about the disk, not the cabling. There is one attribute, UDMA_CRC_Error_Count, which increments when you have the most common type of cabling problem, and it is a good way for one of us to spot a cabling problem from a SMART report.

 

But real-world testing of readability and writability is the ultimate test to tell you if you have a problem. It isn't so good at differentiating cabling problems from drive problems, though.
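
In case it's useful, a small Python sketch of pulling that attribute out of a smartctl report; the device path is a placeholder, and the parsing assumes the usual SATA attribute table layout where the raw value is the last column:

import subprocess

def udma_crc_errors(device: str = "/dev/sdX") -> int:
    """Return the raw UDMA_CRC_Error_Count from smartctl, or -1 if absent."""
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True, check=False).stdout
    for line in out.splitlines():
        if "UDMA_CRC_Error_Count" in line:
            return int(line.split()[-1])   # RAW_VALUE is the last column
    return -1   # attribute not reported (e.g. not a SATA drive)

if __name__ == "__main__":
    count = udma_crc_errors()
    print("UDMA CRC errors:", count, "(a rising count usually points at cabling)")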

Link to comment

Finally got done backing up my info, reorganizing the shares, and doing a super long 15.5-hour preclear on the 1TB drive; there are no errors and consistent speeds of 50+ MB/s. Seek times appear to be drastically lower too.

Link to comment

Archived

This topic is now archived and is closed to further replies.
