Replacing a drive with a hard disk that's bigger than the parity drive


BartDG


... This has nothing to do with the speed issue, but you should improve your disk cooling; they are way too hot. Try to keep them below 45°C.

 

Yes, the disks are WAY too hot -- 50, 54, and 58!!    In fact, that may indeed have to do with the speed issue ... there may be a lot of CRC error corrections required, thus slowing down the read times (especially on the drive that's at 58).

 

Set a fan by your system blowing on the drives until the parity sync has completed!!

 


I know.  But HGST drives are known to run a little hotter than most other drives.  It's probably also because my case is still opened up at the moment.  In other words, airflow is not optimal right now.  Should this persist, I'll make sure to add another fan.

 

Just curious, what speeds should I be seeing, I mean, would be more normal with a config like mine?

 

Edit: ok, after reading Gary's response, closing up the case now...


I agree.  These are enterprise type harddisks.  While for sure I agree it would be better if they were 10 to 15 degrees cooler, I'm pretty sure they can withstand this heat... (I've just looked it up, they should be fine up to 60°C - but lower is always better of course)

 

Your speed numbers do sound a lot better I must say!  I really hope somebody can come up with a reason for the low speeds.  Maybe tweak a BIOS setting of some sort?


Ok, I've now added two 8cm fans (can't add bigger ones because the enclosure doesn't allow for it, and even this was a bit of DIY). It's an old Thermaltake Lanbox Lite case (without the side window), not ideal for a server, but it does the job. I did remove the 6cm fans though, because they were incredibly noisy.  The temperatures are dropping steadily (now below 50°C and still dropping), but speeds don't increase.

 

Edit: temps are now about 41°C.  :)  I'm guessing they'll stay at this temp, because even the WDC, which is idling at the moment, runs at 38°C.


Clearly that's a MAJOR improvement in the temps.

 

I don't see anything in the logs that "jumps out" re: what might be causing the very slow speeds.    While your drives aren't the high-density 1TB/platter units that many 4TB drives are, they're still 800GB/platter, and with a 7200rpm rotational speed you should easily see speeds well over 100MB/s.

 

When the sync completes, power down and unplug/replug the SATA cables to be sure they're secure; then boot into the BIOS and double-check that everything's set to AHCI mode; and then boot into unRAID, run a parity check, and see what the speed looks like.
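One quick way to check for the kind of problem described above (a sketch, assuming a root shell on the server): the kernel logs each drive's negotiated SATA link speed at boot, so a drive that came up at 1.5 Gbps or fell back to a legacy IDE mode would show there. The `link_gen` helper below is purely hypothetical, just to show how the speeds map to SATA generations:

```shell
# On the server you'd inspect the real kernel messages with:
#   dmesg | grep -i 'SATA link up'
# 6.0 Gbps = SATA III, 3.0 Gbps = SATA II, 1.5 Gbps = SATA I --
# a drive stuck at 1.5 Gbps could explain a ~38 MB/s sync.

# Hypothetical helper that classifies one such log line:
link_gen() {
  case "$1" in
    *'6.0 Gbps'*) echo 'SATA III' ;;
    *'3.0 Gbps'*) echo 'SATA II'  ;;
    *'1.5 Gbps'*) echo 'SATA I / legacy' ;;
    *)            echo 'unknown'  ;;
  esac
}

# Example line in the format the kernel prints:
link_gen 'ata1: SATA link up 6.0 Gbps (SStatus 133 SControl 300)'
```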

 


Yes, I agree Gary, thanks!  They are now at 37°C even :)  Very cool! ;)

 

I'm pretty sure I've got everything down with regard to the AHCI mode, but I'll check when this parity sync is complete.

That said, your input did give me an idea.  See, I bought this motherboard because it's one of the "official" unRAID boards.  It wasn't available in the shops anymore, but I bought it on eBay.  It arrived bare.  This means I had to grab SATA cables from here, there and everywhere.  I have no idea about their quality, and it may be that some of them don't support SATA-600 (or even 300).  I'll order some certified cables online and replace them.  Maybe that'll help.  In any case, it's a minor operation, both in cost and effort, but I think it may make a world of difference.


Ensuring you have high-quality SATA cables with locking tabs is a good idea.  You might also confirm that your board has the latest BIOS version (v902).

 

Clearly SOMETHING is amiss => and often it's just a matter of experimentation to isolate these kinds of issues.

 

 


I'm almost ashamed to say it, but how do I make this diskspeed script work?  I have absolutely no experience with scripts etc...

 

Also: I'm now running a parity check, and that one is running at 187 MB/s!!  Anybody got any idea why there's such a speed difference compared to the parity sync (which only managed about 38 MB/s)?


That’s odd, as usually parity sync and check speeds are very similar. If the parity check doesn’t slow down considerably (it’s normal for it to slow down as it reaches the slower inner tracks; it should end at or close to 90MB/s), there’s not much point in running the script.
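The slow-down toward the inner tracks follows from simple geometry (a rough sketch, assuming a typical 3.5" drive): the platter spins at constant angular velocity, so the data rate scales with the track radius, and the innermost data track sits at roughly half the radius of the outermost.

```shell
# Rough expectation: end-of-check speed is about half the starting speed,
# because the innermost track has roughly half the circumference (and thus
# roughly half the data per revolution) of the outermost track.
outer=187                 # MB/s observed at the start of the check
echo $(( outer / 2 ))     # -> 93, close to the ~90MB/s figure above
```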

 

If the parity check is normal, maybe best not to worry for now about the parity sync speed, as you only need to do that again when upgrading your parity, while you should do regular parity checks.

 

If you still want to run diskspeed copy it to the root of your flash drive, ssh into unraid and type:

/boot/diskspeed.sh

 


Clearly your parity check is running at the speeds we'd expect  :)

 

As Johnnie noted, it will slow down a LOT (but still should be over 100MB/s) as it moves towards the inner cylinders.

 

Like Johnnie, I have no idea why the sync ran so slow.    Something clearly wasn't right ... and the fact is it may not even be an issue the next time you do a sync (which basically won't be until you eventually replace the parity drive).

 

I would write it off as one of "life's little mysteries" ... and simply not worry about it  8)


I have a similar problem on my HP N54L when I mix Seagate ST4000DM000 & ST3000DM001 and HGST 4TB NAS drives on the built-in controller.  When they are all Seagate I get 100+MB/s parity builds, but when they are mixed with HGST it is 35-40MB/s.  I have two more drives (at 60-hour intervals) before I can see if HGST 4TB NAS only will bring the speed back up to 100+MB/s.  I've traced it to one of the Seagates connecting at IDE speeds when the HGST drives are present.  The parity checks are all 100+MB/s.  I'm also going to try replacing the ST3000DM001 with a 3TB WD Red and see if that makes a difference.


My "gut feel" is that one of the drives, despite what the log file showed, was running at a low IDE mode, and simply never reverted back to normal during the sync.    Once the access was stopped (i.e. the sync finished), the next time it was accessed all was well -- and likely will remain so.

 

In fact, even though they weren't part of the array, I believe there WERE other drives connected to the controller (BartDG -- is that correct?), which may have had some impact on this behavior.

 

Strange in any event  :)


That’s odd, as usually parity sync and check speeds are very similar. If the parity check doesn’t slow down considerably (it’s normal for it to slow down as it reaches the slower inner tracks; it should end at or close to 90MB/s), there’s not much point in running the script.

Thanks Johnnie.  Those are indeed exactly the speeds I was seeing.  I went to bed yesterday after starting the parity check, but this morning I could see that it took unRAID 8hrs 5 minutes to complete the check, and the final speed was 92 MB/s.  So that's pretty much on par with what can be expected.

 

If you still want to run diskspeed copy it to the root of your flash drive, ssh into unraid and type:

/boot/diskspeed.sh

Thanks!  I've now run the script, and these were the results:

- Parity (HGST 4TB - 7K4000): 140 MB/s average

- Disk 1 (HGST 4TB - 7K4000): 139 MB/s average

- Disk 2 (HGST 4TB - 7K4000): 138 MB/s average

- Cache disk (WDC - 2TB): 92 MB/s average

- Unassigned WDC 3TB: 118 MB/s average

 

Not too bad I think. :)  The 2TB drive might not have been the best option for a cache drive, but it'll have to do until I replace it with an SSD.

 

Ah, another question: I don't plan on putting stuff permanently on the cache disk for now, so it'll be a pure cache disk.  If in the future I want to change it to an SSD, can I then simply unassign it and swap the disk out? (after having checked it's empty, of course)

 

I would write it off as one of "life's little mysteries" ... and simply not worry about it

I won't. :)  As you say, it's not like I have to do one of those syncs every week.  As long as the speed of the array for daily use is on par, you won't hear me complain. :)

 

I have a similar problem on my HP N54L when I mix Seagate ST4000DM000 & ST3000DM001 and HGST 4TB NAS drives on the built in controller.  When they are all Seagate I get 100+MB/s parity builds but when they are mixed with HGST it is 35-40MB/s.

This could very well be the same case for me.  I don't use Seagate drives, but I do use HGST drives, combined with WDC.  Maybe HGST drives are not that tolerant when it comes to having different brands on the same controller.

 

My "gut feel" is that one of the drives, despite what the log file showed, was running at a low IDE mode, and simply never reverted back to normal during the sync.    Once the access was stopped (i.e. the sync finished), the next time it was accessed all was well -- and likely will remain so.

 

In fact, even though they weren't part of the array, I believe there WERE other drives connected to the controller (BartDG -- is that correct?), which may have had some impact on this behavior.

Yes, that's right.  The WDC drives were still not removed.  The 3TB is currently unassigned so I can remove it, but I haven't so far because I would have had to power down first for that, and I decided not to.  It will be removed now though.

 

Thanks for the help guys!!

 

 


 

- Cache disk (WDC - 2TB): 92 MB/s average

...

Not too bad I think. :)  The 2TB drive might not have been the best option for a cache drive, but it'll have to do until I replace it with an SSD.

 

It's actually fine -- if it's averaging 92MB/s, it's clearly doing well over 120MB/s on the outer cylinders, and it's very unlikely it ever gets to the inner cylinders when you're just using it as a cache, since the mover empties it every time it runs.    Remember that writes to the array are limited by your network speed -- and for a Gb network that's ~120MB/s max speed, so the 2TB drive should be able to sustain that with no problem.    Bottom line:  there'd be effectively NO difference with an SSD  :)      [The story changes if you're using the cache for "local" operations, i.e. VMs, Dockers, etc., where the network speed isn't a bottleneck.]
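The ~120MB/s network ceiling is just arithmetic (the exact overhead varies with the protocol; the ~5-10% figure below is a typical estimate, not a measured value):

```shell
# Gigabit Ethernet = 1000 Mbit/s; 8 bits per byte -> 125 MB/s raw line rate.
# TCP/IP + SMB framing typically costs ~5-10%, leaving roughly 110-120 MB/s
# in practice -- which a single modern HDD can match on its outer cylinders.
echo $(( 1000 / 8 ))   # -> 125 (MB/s, raw)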

 


Bottom line:  There'd be effectively NO difference with an SSD  :)      [The story changes if you're using the cache for "local" operations ... i.e. VM's, Dockers, etc. ... where the network speed isn't a bottleneck.]

Yes, I thought that would be the case.  I do plan on using Dockers though, like Plex and a couple of others, but I suppose those would work almost as well on a winchester disk as on an SSD, albeit a bit slower.

 

Diskspeed creates a file in the root of your flash disk with a nice graphical display of all speeds, diskspeed.html

I've checked but the file isn't there.  Does it remove itself when you terminate the session?

 

Edit: tried again, but nope, still no html file?


I am using Windows, but that doesn't work (copying the string into the Windows Explorer address bar).  When I look at the flash disk via the user shares in Windows, I can see the diskspeed.sh file, but there is no HTML file there.  Strange.  Oh well, it doesn't really matter.  I've got the results, and I like those just as much without the nice graphical output. :)


Another day, another question...  8)

 

I've now started to re-arrange my server.  Basically I've got 4 shares now: movies, photos, music and series.  I've now decided that I want to create a new share "videos" and move the content of both movies and series into it.  I also want to create a new folder under "videos" called "kids", filled with kids' stuff.

 

I've noticed that moving stuff within the same share is instant, very fast.  But if I want to move stuff from "movies" to the new "videos/movies" share, it just copies everything over.  Even more, it copies it to the cache drive first (which is not a bad thing, since it's faster).

 

Isn't there a way to do this faster, since all the data already resides on the server, albeit in different shares?
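One approach that's sometimes suggested (a sketch, assuming the standard unRAID layout where each data disk is mounted at /mnt/diskN and the merged user shares at /mnt/user; the server paths below are examples only): move the files per disk, staying on the same filesystem, so the "move" is a rename instead of a copy. The demo uses a temp directory to show the principle:

```shell
# Demo in a temp dir: a mv within ONE filesystem is a rename -- instant,
# no data rewritten.  Moving via /mnt/user can cross disks, forcing a copy.
d=$(mktemp -d)
mkdir -p "$d/movies" "$d/videos/movies"
touch "$d/movies/film.mkv"
mv "$d/movies/film.mkv" "$d/videos/movies/"
ls "$d/videos/movies"     # -> film.mkv

# On the server, the same idea applied per data disk (example paths):
#   mkdir -p /mnt/disk1/videos/movies
#   mv /mnt/disk1/movies/* /mnt/disk1/videos/movies/
```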

 

Second question: I've already done one of those "moves" that copies everything to the cache first.  Then I went to bed and let the mover do its job overnight.  But this morning the mover was still at it.  Is it safe then to start another move (and thus copy) job?  Or should I let the mover finish its job first?  If I add extra stuff, will the mover also eventually move it, or will it only move stuff that was on the cache drive when it started its job last night, and leave the rest for the next night?

 

Thanks all!

