Unassigned Devices Preclear - a utility to preclear disks before adding them to the array


dlandon


Well, I have no idea what was wrong, and nothing was showing in the logs, but after removing UNASSIGNED DEVICES and UD PRECLEAR and reinstalling them, they are now working.

It had been working before and must have stopped working when I updated. I didn't see anything in the logs that indicated a problem; it would just refuse to start. I think the flag was a red herring, but I'm not sure.


Hey all,

 

My preclear failed on post-read verification after a couple of days of preclearing a disk. I removed the log and tried to verify the disk again; I got this error: "INVALID UNRAID`S MBR SIGNATURE".

 

Is the most recent preclear reliable?

I ran a SMART extended self-test; it completed with no errors. Is the disk safe to use?
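For anyone wanting to run the same check, a rough sketch of reading the self-test log follows. The smartctl invocations are standard, but `/dev/sdX` and the log excerpt are made-up samples, not output from this disk:

```shell
# On a real disk you would run something like:
#   smartctl -t long /dev/sdX       # start the extended self-test
#   smartctl -l selftest /dev/sdX   # read the result once it finishes
# Below we just parse a sample log excerpt to show the pass/fail check.
selftest_log='Num  Test_Description    Status                  Remaining  LifeTime(hours)
# 1  Extended offline    Completed without error       00%      12345'

if printf '%s\n' "$selftest_log" | grep -q 'Completed without error'; then
  echo "extended self-test passed"
fi
```

A clean extended self-test is a good sign, though it doesn't by itself prove the disk surface was fully verified the way a preclear post-read does.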

 

I'm running preclear a second time using binhex-preclear; hope this works.

 

My server is on 6.11.5.

 

Thanks,

Wing

 

 

Screen Shot 2022-12-17 at 7.25.59 PM.png

 


Hi,

 

For a few days now, UA Preclear has been producing PHP warnings in the syslog. My Unraid version is 6.9.2 and the UA Preclear plugin is the newest version.

 

Dec 22 19:45:00 Avalon rc.diskinfo[22433]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
Dec 22 19:45:00 Avalon rc.diskinfo[22433]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
Dec 22 19:45:00 Avalon rc.diskinfo[22433]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
Dec 22 19:45:15 Avalon rc.diskinfo[28229]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
Dec 22 19:45:15 Avalon rc.diskinfo[28229]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
Dec 22 19:45:15 Avalon rc.diskinfo[28229]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413

 

What do these PHP warnings mean?

 

Thank you!

Christian

 

34 minutes ago, Shantarius said:

For a few days now, UA Preclear has been producing PHP warnings in the syslog. [...]

Dec 22 19:45:00 Avalon rc.diskinfo[22433]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413

 

I'll have a fix in the next release.  If you're not using UD preclear, uninstall it to stop the log messages.
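Until the fix is out, it's easy to confirm the flood is just this one warning repeating. A minimal sketch counting a sample of the lines quoted above (on a live server you'd run `grep -c 'Empty needle' /var/log/syslog` instead of using the here-string):

```shell
# Three sample lines copied from the syslog excerpt above; grep -c counts
# how many of them contain the strpos() "Empty needle" warning.
syslog='Dec 22 19:45:00 Avalon rc.diskinfo[22433]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
Dec 22 19:45:00 Avalon rc.diskinfo[22433]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413
Dec 22 19:45:15 Avalon rc.diskinfo[28229]: PHP Warning: strpos(): Empty needle in /usr/local/emhttp/plugins/unassigned.devices.preclear/scripts/rc.diskinfo on line 413'

printf '%s\n' "$syslog" | grep -c 'Empty needle'   # prints 3
```

Since the lines differ only in timestamp and PID, a high count is noise from one code path rather than multiple distinct problems.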


I recently started to create a zfs pool on my unraid server. With that setup, I want the participating drives not to be a target of any destructive operations. I have marked the disks as passthrough in UD, and that pretty much achieves it: no more mount, format, etc. are offered by UD. However, the preclear option is still offered next to the disk ID, which obviously makes me a bit nervous.

 

I also tried disabling destructive mode in UD settings, but it seems the preclear plugin doesn't respect that.

 

Is there any way I can mark certain disks as not eligible for the preclear option?

 

I've attached a screenshot to show the issue.

preclear_on_PT_disk.jpg

 

8 hours ago, apandey said:

I recently started to create a zfs pool on my unraid server. [...] However, the preclear option is still offered next to the disk ID, which obviously makes me a bit nervous.

 

8 hours ago, apandey said:

I also tried disabling destructive mode in UD settings, but it seems the preclear plugin doesn't respect that. [...]

 

What shows as the file system (FS) on that disk?  Once a file system shows on the disk, the preclear option goes away.

 

8 hours ago, apandey said:

I also tried disabling the destructive mode in UD settings, but seems preclear plugin doesn't respect that

Preclear does not respect destructive mode.  All preclear looks for is disks that do not have a file system.

10 minutes ago, dlandon said:

What shows as the file system (FS) on that disk?  Once a file system shows on the disk, the preclear goes away.

 

Preclear does not respect destructive mode.  All preclear looks for is disks that do not have a file system.

I understand that, but it does not help here. My point is that passthrough might mean the disk is being used in ways that are not detectable in the usual way. There is no traditional filesystem for preclear to detect here, unless it is looking for zfs pools and their member disks. That preclear button is just a risk, with no current way to disable it or opt out of it for disks that are in use.

 

Would it be a useful feature for preclear to opt out when it sees passthrough (if it can)? Passed-through disks have no legitimate use case for being precleared, yet as unrecognized "unknowns" they are exactly the disks most likely to be precleared by mistake.

21 minutes ago, apandey said:

I understand that, but it does not help here. [...] That preclear button is just a risk, with no current way to disable it or opt out of it for disks that are in use. [...]

I asked you the question to determine the best way to handle this situation because I'm currently working on zfs integration in UD for Unraid 6.12, and I wanted to know what the "FS" column shows for the zfs_member, as UD should now handle zfs file systems.

2 hours ago, dlandon said:

[...] I wanted to know what the "FS" column shows for the zfs_member, as UD should now handle zfs file systems.

on 6.11.5 

 

image.png

2 hours ago, dlandon said:

[...] I wanted to know what the "FS" column shows for the zfs_member, as UD should now handle zfs file systems.

Sorry, I didn't realize that I wasn't seeing partitions because I had passthrough turned on.

Here is what I see in 6.11.5

zfs_pool_partitions.jpg

 

Interestingly, unlike SimonF, I don't see an FS type (same with lsblk). Also, I noticed something which may or may not be useful here: the partition name seems irrelevant. "records" was an xfs partition I once had on the first disk before I created a zpool on it; it seems zpool create retained the existing name.

 

I have a zfs mirror on the first and third disks here. The first partition on each is the 1TB data partition used by zfs; the second one (marked part9) is the 8MB partition, which I believe is the Solaris reserved partition. In theory, this could be a way to detect zfs presence, but I am not sure how bulletproof it is.

4 minutes ago, apandey said:

I have a zfs mirror on the first and third disks here. [...] In theory, this could be a way to detect zfs presence, but I am not sure how bulletproof it is.

 

fdisk -l gives me something crystal clear:

Device          Start        End    Sectors   Size Type
/dev/sdj1        2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
/dev/sdj9  1953507328 1953523711      16384     8M Solaris reserved 1
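Building on that, the type column is scriptable. A rough sketch that flags likely zfs partitions from fdisk-style output (the sample lines are the two above; on a live system you would pipe `fdisk -l /dev/sdX` into the awk instead, and this heuristic is only as bulletproof as the partition-type labels):

```shell
# Print device names whose fdisk type column looks zfs-related: the data
# partition ("Solaris /usr & Apple ZFS") and the 8M Solaris reserved area.
fdisk_out='/dev/sdj1        2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
/dev/sdj9  1953507328 1953523711      16384     8M Solaris reserved 1'

printf '%s\n' "$fdisk_out" | awk '/Apple ZFS|Solaris reserved/ {print $1}'
```

On the sample data this prints `/dev/sdj1` and `/dev/sdj9`, the two partitions zpool created on the whole disk.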

 

11 minutes ago, apandey said:

Interestingly, unlike SimonF, I don't see an FS type (same with lsblk). [...]

I had the zfs plugin installed. I guess you aren't passing them through to a VM?

i.e. the OS doesn't know the fs.

2 minutes ago, SimonF said:

I had the zfs plugin installed. I guess you aren't passing them through to a VM?

I have the following plugins:

  • zfs for unraid 6 (2.1.6)
  • zfs companion (2021.08.24)
  • zfs master (2022.12.04.61)

I have simply created a zpool mirror with two disks, and a dataset under it for now; no zvols yet.

No VMs involved, no actual passthrough. I marked the disks passthrough just to disable all the UD features (mount, format, etc.) being offered for those drives.


I've made some changes.  This is really a UD issue, and not a preclear issue.  UD considers any disk without a file system to be a candidate for preclear.  You have found an edge case where your zfs disk(s) are not recognized by Linux and therefore show a blank 'FS'.

 

This is how it will look in the next release when a disk is passed through and the file system is not recognized:

Screenshot 2022-12-27 114536.png

 

I'm working on UD changes for Unraid 6.12, which includes zfs.  Notice that the "Dev 1" disk is a zfs file system and is recognized.  Also note that "Dev 3" is passed through and has no recognized file system; UD Preclear is installed, but the preclear icon is not shown.

 

With Unraid 6.12, zfs file systems are created after the disk is partitioned, so there won't be a partition 9.  This is how UD will create zfs disks, so the partitioning UD uses will be compatible with array disks, and the disks can be introduced into the array without reformatting.

1 hour ago, apandey said:

 

fdisk -l gives me something more crystal clear

Device          Start        End    Sectors   Size Type
/dev/sdj1        2048 1953507327 1953505280 931.5G Solaris /usr & Apple ZFS
/dev/sdj9  1953507328 1953523711      16384     8M Solaris reserved 1

 

 

This is not how UD will create zfs disks.  They will be created after the disk is partitioned.  This is what lsblk shows:

root@BackupServer:/mnt/user/unraid/unassigned.devices/unassigned.devices.emhttp# lsblk -f | grep sdc
sdc                                                                                      
└─sdc1 zfs_member  5000  Testing_fmt 5758782561912062602

 

This is what fdisk shows:

fdisk -l | grep sdc
Disk /dev/sdc: 111.79 GiB, 120034123776 bytes, 234441648 sectors
/dev/sdc1        2048 234441647 234439600 111.8G 83 Linux
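So the two layouts are distinguishable: a whole-disk zpool carries a data partition plus the small Solaris reserved partition 9, while UD's partition-first layout shows a single partition that lsblk tags as zfs_member. A rough sketch of telling them apart from lsblk-style text (sample data from the output above; the device name and the partition-9 heuristic are illustrative):

```shell
# UD-style sample: one zfs_member partition and no partition 9.
lsblk_out='sdc
└─sdc1 zfs_member  5000  Testing_fmt 5758782561912062602'

if printf '%s\n' "$lsblk_out" | grep -q 'zfs_member' && \
   ! printf '%s\n' "$lsblk_out" | grep -Eq 'sd[a-z]+9 '; then
  echo "partition-first zfs layout (no reserved partition 9)"
fi
```

On a live system you would feed it `lsblk -f /dev/sdX` output instead of the here-string.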

 

2 hours ago, apandey said:

Interestingly, unlike SimonF, I don't see a FS type (same with lsblk). Also, I noticed something interesting which may or may not be useful here. The partition name seems irrelevant - records was once a xfs partition I had on the first disk before I created a zpool on it. Seems zpool create retained the name that existed 

The mount point/name stays with the disk and is only changed by UD.  If you've never set a mount point manually, it will default to the disk label/zpool name if there is a label.  You can restore the default by changing the mount point and clearing out the old value.

8 hours ago, dlandon said:

This is not how UD will create zfs disks.  They will be created after the disk is partitioned.  This is what lsblk shows:

I am just trying out zfs on my unraid server, and it would be nice to align with what is coming. Is there a set of steps I can follow to create my pools so they comply with the upcoming UD approach? I guess I need a specific partitioning step before zpool create.

 

BTW, great to hear about all this progress on zfs support in UD. I am not sure how far off zfs cache pools are in Unraid proper, but UD support will certainly make things easier.

7 hours ago, dlandon said:

The mount point/name stays with the disk and is only changed with UD.  If you've never set a mount point manually, it will default to the disk label/zpool name if there is a label.  You can make it default by changing the mount point and clearing off the old value.

Thanks, tried this out and reset the name to the default.

8 hours ago, dlandon said:

I've made some changes.  This is really a UD issue, and not a preclear issue. [...]

can confirm this works well now after updating UD to latest. My disks are now showing up as zfs and preclear option has disappeared

36 minutes ago, apandey said:

I am just trying out zfs with my unraid server, and it would be nice if I can align myself to what is coming up. Is there a set of steps I can follow to create my pools to be compliant with upcoming UD way? I guess I need a specific partitioning step before zpool create

If you have the zfs plugin installed, I think UD will now let you create single-disk zfs file systems on an earlier Unraid version.  It will sense that zfs is installed and should let you format a zfs disk.  Once you create zfs disks, it is my understanding that you can join them to make a zpool.  UD does not create zpools with multiple disks.

40 minutes ago, apandey said:

BTW, great to hear all this progress being made in UD for zfs support. I am not sure how far down zfs cache pools are in unraid proper, but UD support will certainly make things easier 

Native zfs is coming in Unraid 6.12.  I've been working on UD to get it ready to support zfs.

