Accidentally hit the plug on a drive and now it's disabled; how can I clear that without completely rebuilding the drive?



Posted (edited)

I know this may seem stupid, but I brushed up against one of the power connectors on one of my drives and now it says the drive is Disabled and the device is emulated. After browsing the drive and running SMART tests, I can confirm it's fine. I have had issues getting drives to rebuild in the past; frankly, I lost about 6-8 TB doing that once. For some reason it wants me to reformat the drive. I tried following this link, but I just don't get the option when I try to rebuild.

I would like to just accept the risk and edit whatever file is causing it to say that.

 

One of the main reasons is that I am getting UDMA errors on another drive (I think one of the resistors might be going, as it was replaced recently), and I'm worried that something will go wrong during the rebuild and cause me to lose everything. And if I replace the drive with the UDMA errors, I'm sure I will get an error saying I can't add two drives at once. I would like to avoid having to rebuild everything on one drive, run a parity check, and then replace the other one.

 

If I sound like I'm all over the place, I apologize, this has been giving me anxiety.

cd1git-unraid-diagnostics-20240507-1743.zip

Edited by Exes
Posted (edited)

Have you already rebooted the server? Disks go into these states to protect themselves and require a reboot to reinitialize.

Under Main, click the spin-down arrow for the disks.

I vaguely remember having a disabled disk.


If you don't want to reboot, stop all VMs and Docker containers, then stop the array.
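If you'd rather do that from the terminal, a minimal sketch (assuming the containers run under Docker and the VMs under libvirt, as on a stock Unraid box):

docker stop $(docker ps -q)                        # stop every running container
virsh list --name | xargs -r -n1 virsh shutdown    # ask each running VM to shut down cleanly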

Per Reddit user Medical_Shame4079:

Stop the array, unassign the disk, start the array in maintenance mode, stop it again, reassign the drive to the same slot. The idea is to start the array temporarily with the drive “missing” so it changes from “disabled” to “emulated” status, then to stop it and “replace” the drive to get it back to “active” status

Edited by bmartino1
21 hours ago, bmartino1 said:

Have you already rebooted the server? ...


Yeah, I've rebooted it, then unmounted it and mounted it using Unassigned Devices, just to check. I've never been able to get a drive to go from Disabled to Enabled without completely rebuilding it. I'm not sure if it's the power cables I have or just bad luck.

Posted (edited)

Curious. Is this a shucked drive, with the 3.3 V SATA power pin?

https://electronics.stackexchange.com/questions/604512/is-3-3-v-necessary-for-basic-functionality-of-any-sata-drives-how-to-check-if#:~:text=Wiki SATA,of the SATA power connector.
Newer SATA hard disks changed the SATA power spec: the 3.3 V pins were repurposed as a power-disable (PWDIS) reset function, so a PSU that supplies 3.3 V on those pins can keep the drive from powering up. The common workaround is a Molex-to-SATA adapter or tape over the 3.3 V pins.

 

Is this drive new, and did it come with a power adapter?

Edited by bmartino1
  • 2 weeks later...

Kinda. I would wait for the rebuild and then run the commands against the disk.

Only to fix/maintain Unraid parity and data, and only if the drive is in its forever home and won't be bumped again.
 

Running the Test using the command line

XFS and ReiserFS

You can run the file system check from the command line for ReiserFS and XFS, as shown below, if the array is started in Maintenance mode, by using a command of the form:

xfs_repair -v /dev/mdX

or

reiserfsck -v /dev/mdX

where X corresponds to the diskX number shown in the Unraid GUI. Using the /dev/mdX type device will maintain parity. If the file system to be repaired is an encrypted XFS one, the command needs to be modified to use the /dev/mapper/mdX device.
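As a minimal sketch (disk 3 here is just an assumed example), it is worth doing a read-only pass first; xfs_repair's -n flag reports problems without changing anything:

xfs_repair -n /dev/md3    # dry run: list problems, modify nothing
xfs_repair -v /dev/md3    # actual repair, only after reviewing the dry run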

5 minutes ago, bmartino1 said:

... where X corresponds to the diskX number shown in the Unraid GUI. Using the /dev/mdX type device will maintain parity. ...

Note that this is no longer quite accurate, as the device name for array drives can vary according to the Unraid release and whether encryption is being used or not. It is much better to do it from the GUI, as then you do not need to worry about the device name.
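If you do end up on the command line anyway, a quick sketch for locating the right node (the names below are assumptions and vary by release):

ls -l /dev/md* /dev/mapper/ 2>/dev/null    # list candidate array device nodes
blkid | grep -i xfs                        # confirm which nodes actually hold an XFS filesystem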

On 5/23/2024 at 1:01 PM, bmartino1 said:

Kinda. I would wait for the rebuild and then run the commands against the disk. ...

Ah, just saw this. Can you let me know what I should do regarding my last post?


Seeing that your parity build took about a month: yes, there may be other damage to that disk. I would run SpinRite, a third-party tool, to check the bumped disk (similar to memtest, but for HDDs).
It's paid software. Review: https://mbusb.aguslr.com/misc/spinrite.html

You may see a similar parity build time depending on the data. (The longest I've seen was a year; the shortest was 3 hours. Times vary based on the data on the disk; average is about 1 day for a data rebuild, depending on media type.)

Even if it's a "1TB" drive, the sizes are not the same when moving media: a 1 TB HDD can have more sectors and usable space than a 1 TB SSD/NVMe. Do not change media types when doing this unless the replacement is a bigger size.
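A quick way to compare real capacities before swapping media types (the device names are placeholders for the old and new drives):

blockdev --getsize64 /dev/sdb    # exact size in bytes of the old drive
blockdev --getsize64 /dev/sdc    # the replacement must report at least this many bytes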

I would recommend replacing the disk.

Posted (edited)

You mean because it wrote that to parity? It was fine (from what I could tell) until I accidentally hit the power cable. I tried xfs_repair and it didn't get past Phase 1; something about not finding enough superblocks.

Edit: My parity check was before all of that happened too
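For what it's worth, a superblock complaint in Phase 1 often means xfs_repair was pointed at the wrong device node. A minimal check (md1 is a placeholder; use your disk's number):

blkid                     # shows which device nodes actually contain an XFS filesystem
xfs_repair -n /dev/md1    # dry run against the node blkid reports as XFS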

Edited by Exes
