"Disk in parity slot is not biggest" issue


zzkazu


I'm attempting to upgrade the HDDs in my Unraid (6.3.5) server.

 

Previous config: HP MicroServer with 4 x 2 TB data drives and a 2 TB parity drive.

 

The problem is that after upgrading my parity drive to 4 TB, the system reports "Disk in parity slot is not biggest" when I attempt to replace one of my data drives with a new 4 TB drive (same make/model).

 

As outlined above, I had previously been using same-size drives for parity and data without issue.

 

Is this an issue with 6.3.5 or am I missing something?

 

thanks

Link to comment

Here are the results; sdg is parity and sdb is the new data drive. It seems sdg is a bit smaller (HPA impact?).

 

I'll swap them around after another parity sync.

 

But I will need to disable HPA on my machine (Gigabyte motherboard) first.
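
 

For anyone else comparing drives, the check itself is just something like this (sdg/sdb are the devices in my setup, so adjust to yours):

hdparm -N /dev/sdg              # parity: prints "max sectors = current/native" and whether an HPA is set
hdparm -N /dev/sdb              # new data drive, for comparison
blockdev --getsize64 /dev/sdg   # raw size in bytes, as a cross-check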

 

 

Btw:  have a great XMAS.

 

HPA dump.JPG

Edited by zzkazu
Link to comment

Some motherboards look for a drive to slap an HPA onto, so you might have ongoing issues; I suggest disabling it. But if the CMOS battery ever dies or you reset the settings, the old default comes back and you can have trouble again.

 

I believe Gigabyte changed the default to disabled.
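
 

If a drive has already been clipped, removing the HPA with hdparm goes roughly like this (the sector count here is only an example for a 4 TB drive; use the native value your own drive reports, and have a backup first):

hdparm -N /dev/sdX                 # shows "max sectors = current/native, HPA is enabled/disabled"
hdparm -N p7814037168 /dev/sdX     # set max sectors back to the native value; the "p" makes it permanent

A power cycle of the drive may be needed before the full size shows up again.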

Link to comment
  • 3 years later...

hi,

 

Sadly, I appear to have a similar issue. I am trying to use two 5 TB drives; however, my supposed-to-be parity drive gives me this:

 

hdparm -N /dev/sda

/dev/sda:
SG_IO: bad/missing sense data, sb[]:  f0 00 0b 04 51 40 00 0a 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
SG_IO: bad/missing sense data, sb[]:  f0 00 0b 04 51 40 01 0a 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00
 max sectors   = 9767475632/1(1?), HPA setting seems invalid (buggy kernel device driver?)

 

The "1(1?)" for the max sectors in particular really confuses me. I'd really appreciate it if anyone could help me with this issue.

 

Also: since I don't really care about a few additional sectors, I would also be glad to just reduce my array disk by X sectors, if that is even possible...

 

Side note: I cannot switch the array and parity drives.

Link to comment

Ahm, would that I could, except that... okay, here it goes: my Unraid server is an old notebook of mine, with external 2.5" drives simply connected via USB. I know, maybe not the greatest solution, but it actually has some advantages (e.g. low power consumption and a built-in UPS aka the battery), and I don't care much about speed, since I mainly use it as centralized storage for different devices with usually very little I/O load.

 

Changing USB ports didn't help; no idea if that could help to begin with.

 

Another note: if I have to hook said drive up to an actual SATA controller that goes directly to the motherboard, that could be done quite easily... however, if the drive had to "stay" there, i.e. I couldn't go back to USB after that, that wouldn't work :/

Edited by RT87
Link to comment
7 minutes ago, RT87 said:

changing USB ports didn't help

That's not surprising, also not surprising that the command doesn't work with USB drives.

 

7 minutes ago, RT87 said:

if I have to hook said drive up to an actual SATA controller that goes directly to the motherboard, that could be done quite easily... however, if the drive had to "stay" there, i.e. I couldn't go back to USB after that, that wouldn't work

If the drive has an HPA it should still work over USB after the HPA is removed, assuming it's not the USB enclosure/drive itself making it smaller; some USB drives are known to be smaller, e.g. some WD drives for Mac.

Link to comment

It is indeed a WD Elements drive... The other one is a Seagate (the next purchase will be a Seagate too, I think; I dislike both the HPA issue and the drive vibrations, which are rather strong for a 2.5" 5400 rpm drive imho, but that's another matter).

 

Can I quickly find out beforehand whether removing the HPA will do me any good?

 

How would I best go about removing the HPA? I have found several posts regarding this issue, but I am still somewhat unsure...

Link to comment

At least I have had no disconnections so far... Is there a comprehensive list of why-not-to-use-usb reasons ^^? Not kidding, btw!

 

So... if I were(!) to continue with USB, how would I avoid this in the future? Just buy N+1 TB parity drives, where N TB is the single largest array drive? Or can I assume that same-vendor drives will always work...?

Link to comment
7 hours ago, RT87 said:

why-not-to-use-usb reasons

Disk SMART not reported so disk health can't be monitored. Disks with nonstandard size. Disconnects which require rebuild. Non-unique identifiers so Unraid can't tell which disk is which. Performance impacts due to multiple disks on one connection, since parity checks and rebuilds require simultaneous access to all disks. Probably some others I can't recall offhand.

Link to comment

Okay... but I do get SMART reports for bad sectors etc., and also identifiers, although I don't know how far the uniqueness goes.

 

By "non-standard size" do you mean this HPA issue, or is there another problem?

 

Performance isn't much of an issue for me, even for parity checks.

 

Is there a way to prevent any write operations if a (random) disconnect occurs? "Frequent" parity rebuilds would be painful for my use case.

 

While on the subject of parity, is there a thread where I can read how it works in practice? I know the XOR theory and such, but concretely I am interested in how I could mess it up accidentally. E.g. am I safe as long as I only do mv/cp commands within the /mnt structure? In other words: hands off the /dev path!! And if parity is broken, e.g. due to a disconnect, and let's say during this disconnect a 1 GB file gets written... is parity broken altogether? Or just for a "1 GB range" on the other array disks (however many files that is)?

Edited by RT87
Link to comment
5 hours ago, RT87 said:

Hands off the /dev path

Hands off the /dev/sdX paths (for array disks). If you work with the /dev/md# paths, parity is kept in sync. If you use encryption there is a "mapper" in the path somewhere, but I don't have an example handy.

 

Best to stick with the /mnt paths though.
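
 

In other words, do your file operations against the mount points, for example (paths here are only illustrative):

cp /mnt/disk1/media/movie.mkv /mnt/disk2/media/      # writes go through the array driver, parity stays in sync
mv /mnt/user/downloads/file.iso /mnt/user/archive/   # user shares are fine too
# but never write to the raw device, e.g. dd of=/dev/sdb, as that bypasses parity entirely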

 

When a write to a disk fails, it is disabled. The disk isn't used again until rebuilt. Instead, any access of the disk is emulated from the parity calculation by reading all other disks. That initial failed write, and any subsequent write, is emulated by updating parity, so those can be recovered by rebuilding. The array will continue to function as if the disk were still there, but it is only the emulated disk that is actually used. So, you could conceivably write a lot of data to the emulated disk. None of that would go to the physical disk, but could be recovered by rebuilding.

 

If another disk disconnects, that one can't be emulated unless you have dual parity, so the array is offline. If you have dual parity, then 2 disks can be emulated.

 

It is also possible for a failed read to cause Unraid to get the data from the parity calculation and try to write it back to the disk. If that write-back fails the disk is disabled.

Link to comment
