
HDDs not being used


Hinatanko

Recommended Posts

I have had my unRAID server running smoothly for 4 months so far. I initially added 8 NAS HDDs (2 for parity). Everything went well, so I went ahead and added some older desktop drives: 1x 3TB and 6x 4TB HDDs. They have been in the array for about a month.

 

My drives have been filling up to 50% and then moving on to the next drive, which is fine. But when the server filled up the last 8TB hard drive, it went back to the first 8TB and started filling it up again, ignoring the smaller hard drives. Below is a screenshot of what my array looks like:

 

 

I am sure this is something with a setting but I can't find it. Any help is greatly appreciated! Thanks!

arraylisthdd.png

Link to comment

High-water is the default allocation method (and for a good reason). Take a look at one of your user shares and turn on Help to see if you can understand what it does.

 

I would normally not add a lot more drives than I need. More drives just means more opportunities for problems.

Link to comment

While I generally agree with trurl about not adding a bunch of extra disks, now that you have done it, I'd not suggest removing them. Removing disks is somewhat difficult, even if they are empty.

 

With new disks, I preclear them and then put them on the shelf until they are needed. Another consideration: if you ever had a failure and needed to do a rebuild, you'd want a spare NOT yet in the array. You cannot rebuild onto a disk that is already present in the array.

Link to comment

So the way I understand it, the reason it went back to the 8TB drives is because it looks at the amount of free space rather than the percentage free? If so, that makes sense. I checked before posting to make sure the two main shares I have been using are set to high-water, and they are.

 

I never thought about waiting to add the drives until they were needed. In the future, when would you suggest I add more drives? When all drives are at 50%? 70%?

 

Thanks for the info!

Link to comment

I don't have any fantastic rule of thumb. I would generally want enough space on the array to last me for the next 4-6 months, or even a little longer. If I got to the point that I had less than a month's worth of space, I'd definitely move quickly to add more. 8T is a lot of space and that's what I am adding now. I have 2 blank 8Ts for two different classes of data that I just added to my array. That should probably keep me for at least 6 months, maybe closer to a year.

Link to comment

Thanks for the help!

 

Some of the drives I added had 187 errors after preclear, but after looking into other threads it seemed like they were okay to keep as long as they didn't continue to increase. Is this correct?

 

Going forward I plan to only use brand new NAS drives, and only once I need more space. I just wanted to get these drives out of my main PC.

Link to comment
18 minutes ago, Hinatanko said:

Some of the drives I added had 187 errors after preclear, but after looking into other threads it seemed like they were okay to keep as long as they didn't continue to increase. Is this correct?

There are many different SMART attributes monitored by unRAID and reported by preclear, so I don't know exactly what you mean by 187 errors.

 

Go to Tools - Diagnostics and post the complete zip and we can take a look at the health of all your drives.

Link to comment

187 is Reported_Uncorrect. It means ECC was not able to correct data using the error-correcting info. I have a low threshold for SMART issues on my array disks, and my experience is that once these types of things pop up, it is time to get the drive out of the array.
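
If you want to check the raw value of that attribute yourself from a console, something like this works (just a rough sketch, assuming smartmontools is installed; /dev/sdb is only a placeholder for whichever drive you want to look at):

```python
# Rough sketch: read the Reported_Uncorrect (attribute 187) raw value via smartctl.
# Assumes smartmontools is installed; /dev/sdb is a placeholder device path.
import subprocess

def reported_uncorrect(device="/dev/sdb"):
    out = subprocess.run(["smartctl", "-A", device],
                         capture_output=True, text=True).stdout
    for line in out.splitlines():
        fields = line.split()
        # Attribute table rows start with the attribute ID; 187 is Reported_Uncorrect.
        if fields and fields[0] == "187":
            return int(fields[-1])   # raw value is the last column on most drives
    return None                      # attribute not reported by this drive

print(reported_uncorrect())
```

The unRAID GUI shows the same attribute table on each disk's page, so this is only useful if you prefer the command line.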

 

But let's have a look at the complete diagnostics as trurl suggests, and we'll see what is recommended.

 

Link to comment

I don't like the look of those Reported_Uncorrect values. You could try running them through extended read tests, but I doubt they would pass, and even if they did I would not trust these drives.

 

Sorry - there are several of them with this problem. You'll notice that the errors are associated with the ata_errors which are logged below. There is definitely a problem.
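
If you do want to run the extended tests on several drives without clicking through the GUI for each one, here's a rough sketch (assuming smartmontools is installed; the device paths are just examples):

```python
# Sketch: queue a SMART extended (long) self-test on several drives in one pass.
# Assumes smartmontools is installed; the device paths below are only examples.
import subprocess

drives = ["/dev/sdc", "/dev/sdd", "/dev/sde"]   # example paths, adjust to your system

for dev in drives:
    # -t long starts the extended self-test; the drive runs it in the background
    subprocess.run(["smartctl", "-t", "long", dev], check=True)
    print(f"Extended self-test started on {dev}")

# Check results later with: smartctl -l selftest /dev/sdX
```

The drives run the tests on their own, so you only need to come back later for the results.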

Link to comment

Thanks for looking at my logs! I assume the drives in question are the newer drives I added and not the 8TB drives?

 

Is there a faster or better way of running extended SMART tests than going to each drive and clicking run extended self test?

 

You mentioned it is difficult to remove drives after they have been added. Would it be easier to replace them with new drives?

Link to comment
14 minutes ago, Hinatanko said:

You mentioned it is difficult to remove drives after they had been added. Would it be easier to replace them with new drives?

It isn't that difficult to remove drives, but the usual method requires rebuilding parity and you would be unprotected until parity was rebuilt. Of course, even without protection, you would only lose data if a data drive failed during the parity build.

 

Do you have backups of anything you consider irreplaceable or too much trouble to replace? You should have a good backup plan even with dual parity.

 

If your backups are good enough, I think I would remove the drives and rebuild parity, then later when you need more space you can add new drives.

Link to comment
1 minute ago, trurl said:

Unless one of the newer drives with data on it were to fail during the parity rebuild.

 

Don't think there is much we can do about that. The 8T disks are new and I'm assuming they have been precleared. If one of them fails during the parity build, that would be bad, but there's nothing much I can think of to protect against that at this point.

Link to comment

All drives including the newer 3T and 4T drives have been precleared. Nothing on the server is irreplaceable.

 

Do you recommend I bother with the extended SMART tests on the older drives at this point? 2 of these drives don't show any warnings when I go into the drive and check for SMART errors. Should these be replaced as well?

 

You mentioned just removing the drives and doing a New Config. Is there a video or post with step-by-step instructions on how to do this?

Link to comment
8 minutes ago, Hinatanko said:

You mentioned just removing the drives and doing a New Config. Is there a video or post with step-by-step instructions on how to do this?

Tools - New Config. All it does is allow you to assign disks, and then (by default) rebuild parity. The only chance of data loss is if you accidentally assign a data disk to a parity slot. When you go to the page, it gives several choices, such as retain all, or retain none. Whatever you choose, it will still allow you to make changes before starting the parity build. For example, you could retain all then unassign just those disks you don't want to use before starting.

 

If you remove any drives at all you will have to rebuild parity, so you might as well remove all the drives that don't have any data.

Link to comment

I ran short and extended SMART tests on all of the new drives I added and they all passed with no errors. Would it be safe to assume these drives are okay to keep?

 

Also I posted a screenshot of the stats page for disk usage. When will the newer drives start to get used by unRAID?

Capture.PNG

Link to comment
41 minutes ago, Hinatanko said:

When will the newer drives start to get used by unRAID?

When writing to a user share, whether and when a disk gets written to depends on that user share's setting, as already mentioned. Like many things, simple questions don't have simple answers. This isn't because the answers are too hard, it is rather because the questions are too simple.

 

Study the user share setting Help as already mentioned and give us a harder question about specific settings if necessary.

Link to comment
57 minutes ago, trurl said:

Like many things, simple questions don't have simple answers. This isn't because the answers are too hard, it is rather because the questions are too simple.

 

Belongs in a fortune cookie, grasshopper! :D

Link to comment

High-water in a nutshell with differing size drives:

 

First, the first 8TB fills until it has 4TB free, then the system moves to the next drive until all the 8TB's are at 4TB free.

Then the process starts over again. All the 8TB's will fill to 2TB free, and then the system will move on (but the 3TB will also fill up to 2TB free, and the 4TB's will fill to 2TB free).

Then the 8TB's will fill to 1TB free (and all the remaining drives will also fill to 1TB free).

Net result is that every one of your 8TB's will have 2TB free before the system even touches the 3TB or the 4's. But, as trurl stated, there are complications on this depending upon the shares' split levels, etc.
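
If it helps to see the pattern, here's a little toy model of high-water (not unRAID's actual code, just the behaviour described above, with 1TB writes and made-up disk sizes):

```python
# Toy model of high-water allocation -- NOT unRAID's real code, just the behaviour
# described above. Sizes in TB; writes are 1TB chunks to keep the output short.

disks = {"disk1": 8, "disk2": 8, "disk3": 8, "disk4": 8,
         "disk5": 8, "disk6": 8, "disk7": 4, "disk8": 3}
free = dict(disks)                        # array starts empty, so free == size
mark = max(disks.values()) / 2            # high-water mark starts at half the largest disk

order = []
while mark >= 1:
    # pick the lowest-numbered disk whose free space is still above the current mark
    candidates = [d for d in disks if free[d] > mark]
    if not candidates:
        mark /= 2                         # nobody left above the mark: halve it
        continue
    d = candidates[0]
    free[d] -= 1                          # write a 1TB chunk
    order.append(d)

print(order)   # disk1-disk6 each fill to 4TB free, then to 2TB free (now joined by 7 and 8), etc.
```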

 

Link to comment

I ended up removing 4 of the 6 newly added HDDs, the ones that had SMART errors. The two I kept had no SMART errors and passed extended tests without a problem.

 

I referred to this page for removing them, https://wiki.lime-technology.com/Shrink_array, and rebuilt parity with a New Config. I don't appear to have lost any data and everything looks okay.

 

However, the two drives that remained are disk 8 and disk 10. In my Main tab I am seeing Parity 1 & 2, Drives 1 - 6, Drive 8 and Drive 10. No Drive 7 or 9 is being shown. I want to make sure this is the way it should be.

 

I also changed the global share settings (as mentioned in the guide) to include all the disks I would be retaining (disks 1-6, 8 and 10). Should I keep it this way for the future? If I add a new HDD later on, should I change this?

 

Thanks for the help!

Link to comment
1 hour ago, Hinatanko said:

No Drive 7 or 9 is being shown. I want to make sure this is the way it should be.

It's not wrong, but when you did New Config you could have filled the slots any way you wanted to. So you could have put the disk you have in slot 10 in slot 7 instead, for example. Then you would have had disks 1-8 with no gaps.

 

Have you done a parity check since reconfiguring? I always do after making changes like this just to make sure everything is starting out OK.

Link to comment

Archived

This topic is now archived and is closed to further replies.
