Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



4 hours ago, Gico said:

Your disks are having problems:

Dec 13 01:13:14 Juno kernel: sd 8:0:1:0: [sdw] tag#6846 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:13:14 Juno kernel: sd 8:0:1:0: [sdw] tag#6846 CDB: opcode=0x88 88 00 00 00 00 05 74 ff ff 80 00 00 00 08 00 00
Dec 13 01:13:14 Juno kernel: print_req_error: I/O error, dev sdw, sector 23437770624
Dec 13 01:13:14 Juno rc.diskinfo[9245]: SIGHUP received, forcing refresh of disks info.
Dec 13 01:13:14 Juno kernel: sd 7:0:10:0: [sdp] tag#1601 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:13:14 Juno kernel: sd 7:0:10:0: [sdp] tag#1601 CDB: opcode=0x88 88 00 00 00 00 04 8c 3f ff 80 00 00 00 08 00 00
Dec 13 01:13:14 Juno kernel: print_req_error: I/O error, dev sdp, sector 19532873600
Dec 13 01:13:15 Juno kernel: sd 9:0:0:0: [sdx] tag#472 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:13:15 Juno kernel: sd 9:0:0:0: [sdx] tag#472 CDB: opcode=0x88 88 00 00 00 00 02 ba a0 f4 00 00 00 00 08 00 00
Dec 13 01:13:15 Juno kernel: print_req_error: I/O error, dev sdx, sector 11721044992
Dec 13 01:13:25 Juno kernel: sd 7:0:11:0: [sdq] tag#1609 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:13:25 Juno kernel: sd 7:0:11:0: [sdq] tag#1609 CDB: opcode=0x88 88 00 00 00 00 04 8c 3f ff 80 00 00 00 08 00 00
Dec 13 01:13:25 Juno kernel: print_req_error: I/O error, dev sdq, sector 19532873600
Dec 13 01:13:25 Juno rc.diskinfo[9245]: SIGHUP received, forcing refresh of disks info.
Dec 13 01:13:25 Juno kernel: sd 9:0:2:0: [sdz] tag#3693 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:13:25 Juno kernel: sd 9:0:2:0: [sdz] tag#3693 CDB: opcode=0x88 88 00 00 00 00 04 8c 3f ff 80 00 00 00 08 00 00
Dec 13 01:13:25 Juno kernel: print_req_error: I/O error, dev sdz, sector 19532873600
Dec 13 01:13:35 Juno kernel: sd 9:0:6:0: [sdad] tag#5479 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:13:35 Juno kernel: sd 9:0:6:0: [sdad] tag#5479 CDB: opcode=0x88 88 00 00 00 00 04 8c 3f ff 80 00 00 00 08 00 00
Dec 13 01:13:35 Juno kernel: print_req_error: I/O error, dev sdad, sector 19532873600
Dec 13 01:13:35 Juno unassigned.devices: Error: shell_exec(/usr/sbin/hdparm -C /dev/sdaa 2>/dev/null | /bin/grep -c standby) took longer than 10s!
Dec 13 01:13:56 Juno kernel: sd 9:0:1:0: [sdy] tag#1686 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:13:56 Juno kernel: sd 9:0:1:0: [sdy] tag#1686 CDB: opcode=0x88 88 00 00 00 00 05 74 ff ff 80 00 00 00 08 00 00
Dec 13 01:13:56 Juno kernel: print_req_error: I/O error, dev sdy, sector 23437770624
Dec 13 01:13:56 Juno kernel: sd 9:0:7:0: [sdae] tag#1060 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:13:56 Juno kernel: sd 9:0:7:0: [sdae] tag#1060 CDB: opcode=0x88 88 00 00 00 00 04 8c 3f ff 80 00 00 00 08 00 00
Dec 13 01:13:56 Juno kernel: print_req_error: I/O error, dev sdae, sector 19532873600
Dec 13 01:13:56 Juno unassigned.devices: Error: shell_exec(/usr/sbin/hdparm -C /dev/sdae 2>/dev/null | /bin/grep -c standby) took longer than 10s!
Dec 13 01:13:56 Juno unassigned.devices: Error: shell_exec(/usr/sbin/hdparm -C /dev/sdaa 2>/dev/null | /bin/grep -c standby) took longer than 10s!
### [PREVIOUS LINE REPEATED 2 TIMES] ###
Dec 13 01:14:24 Juno kernel: sd 9:0:4:0: [sdab] tag#1694 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:14:24 Juno kernel: sd 9:0:4:0: [sdab] tag#1694 CDB: opcode=0x88 88 00 00 00 00 04 8c 3f ff 80 00 00 00 08 00 00
Dec 13 01:14:24 Juno kernel: print_req_error: I/O error, dev sdab, sector 19532873600
Dec 13 01:14:24 Juno kernel: sd 9:0:5:0: [sdac] tag#4039 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:14:24 Juno kernel: sd 9:0:5:0: [sdac] tag#4039 CDB: opcode=0x88 88 00 00 00 00 04 8c 3f ff 80 00 00 00 08 00 00
Dec 13 01:14:24 Juno kernel: print_req_error: I/O error, dev sdac, sector 19532873600
Dec 13 01:14:24 Juno unassigned.devices: Error: shell_exec(/usr/sbin/hdparm -C /dev/sdab 2>/dev/null | /bin/grep -c standby) took longer than 10s!
Dec 13 01:14:36 Juno kernel: sd 9:0:3:0: [sdaa] tag#1695 UNKNOWN(0x2003) Result: hostbyte=0x00 driverbyte=0x00
Dec 13 01:14:36 Juno kernel: sd 9:0:3:0: [sdaa] tag#1695 CDB: opcode=0x88 88 00 00 00 00 04 8c 3f ff 80 00 00 00 08 00 00
Dec 13 01:14:36 Juno kernel: print_req_error: I/O error, dev sdaa, sector 19532873600
Dec 13 01:14:36 Juno unassigned.devices: Error: shell_exec(/usr/sbin/hdparm -C /dev/sdae 2>/dev/null | /bin/grep -c standby) took longer than 10s!

I would also remove the preclear plugin until you get this sorted out.  The rc.diskinfo process is a preclear background task that is also doing disk I/O.

 

Edit: You also have an array disk with read errors:

Dec 26 04:40:03 Juno root: Fix Common Problems: Error: disk9 (WDC_WD60EFRX-68MYMN1_WD-WX11DB4H8SXT) has read errors

You need to get these issues fixed before you start losing data.


How is it that all these disks have issues at around the same time?

It seems there is another root cause for these errors at that time.

All these disks (sdp, sdq, sdz, sdae, etc.) are backup drives that are not mounted 95% of the time,

and most of them are quite new and have not been used other than being backup drives.

Anyway, please look at the January 11th issues, when all these drives were not mounted.

Is there a reason for any plugin to access an unmounted drive?

 

Yes, disk 9 has read errors, but the SMART report seems OK other than a stable UDMA CRC error count of 13.

I will continue to monitor this drive.

9 hours ago, Gico said:

How is it that all these disks have issues at around the same time?

It seems there is another root cause for these errors at that time.

It could be something like a power supply problem, cabling, or a failing controller.

9 hours ago, Gico said:

All these disks (sdp, sdq, sdz, sdae, etc.) are backup drives that are not mounted 95% of the time,

and most of them are quite new and have not been used other than being backup drives.

I don't think it is a disk issue.  It's something common to several disks.

9 hours ago, Gico said:

Is there a reason for any plugin to access an unmounted drive?

UD checks a spinning disk for disk temperatures.  Preclear checks disks in the background.  You'd need to post on the preclear forum for more information.

9 hours ago, dlandon said:

UD checks a spinning disk for disk temperatures.  Preclear checks disks in the background.  You'd need to post on the preclear forum for more information.

Thanks. I will ask about Preclear in the Preclear forum, but what about UD and unmounted disks? Following is an example with sdp and sdq, which are not mounted.

 

Jan 12 10:56:33 Juno unassigned.devices: Error: shell_exec(/usr/sbin/hdparm -C /dev/sdp 2>/dev/null | /bin/grep -c standby) took longer than 10s!
Jan 12 10:57:20 Juno unassigned.devices: Error: shell_exec(/usr/sbin/hdparm -C /dev/sdq 2>/dev/null | /bin/grep -c standby) took longer than 10s!
Jan 12 10:57:20 Juno unassigned.devices: Error: shell_exec(/usr/sbin/hdparm -C /dev/sdq 2>/dev/null | /bin/grep -c standby) took longer than 10s!
Jan 12 10:57:20 Juno unassigned.devices: Error: shell_exec(/usr/sbin/hdparm -C /dev/sdq 2>/dev/null | /bin/grep -c standby) took longer than 10s!
Jan 12 10:57:20 Juno unassigned.devices: Error: shell_exec(/usr/sbin/hdparm -C /dev/sdq 2>/dev/null | /bin/grep -c standby) took longer than 10s!

1 hour ago, Gico said:

but what about UD and unmounted disks?

As per above.

11 hours ago, dlandon said:

UD checks a spinning disk for disk temperatures.

UD is checking to see if the disk is spinning.

 

The command is timing out because there are issues with your disk system, which is where you need to spend your time.
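
For reference, the check that is timing out is visible in your log and can be run by hand; /dev/sdX below is a placeholder for one of the affected devices. On a healthy, spun-up disk it returns almost instantly:

/usr/sbin/hdparm -C /dev/sdX
#  drive state is:  active/idle    <- typical output when spun up; "standby" when spun down

/usr/sbin/hdparm -C /dev/sdX 2>/dev/null | /bin/grep -c standby
# prints 1 if the drive reports standby, 0 otherwise; this is the exact pipeline from your log

If either of these hangs for more than a few seconds, the problem is below UD: the drive, cabling, controller, or power.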


Can somebody please explain the "pass-through" switch on the settings page for hard disks? I simply don't get it.

 

I need to pass through several hard disks by ID to a VM. These hard disks are shown in the Unassigned Devices area on Unraid's main page.

 

I've set the pass-through slider for these disks, but they do not appear on the VM creation page. My understanding is that a disk set to pass-through is blocked from mounting and made available for pass-through selection on the VM page.

 

Am I wrong? Is the pass-through slider simply misnamed, and should it be called "ignore"?

 


I have a question regarding UD and encrypted disks. I replaced one of the encrypted array data drives with a larger one. After the rebuild had finished successfully, I plugged the previous data drive into an empty slot and wanted to mount it. However, it does not mount. It obviously has the same password as the array, and the array is mounted. UD displays "luks" as the file system for both the drive and partition 1. The mount button for the drive is clickable; for partition 1 it is greyed out. If I click on mount, UD spends some seconds doing something, the disk log adds one line of "/usr/sbin/cryptsetup luksOpen /dev/sdv1 TOSHIBA_HDWN180_xxxxxxxxxxxxx", but the partition does not mount.

Is there a misunderstanding on my side or is something wrong that I need to troubleshoot?

Unraid 6.8.3, UD 2021.01.09
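
For what it's worth, the manual sequence I would have expected to work is roughly the following sketch (the device and mapper names are taken from the disk log line above; the mount point is just an example):

mkdir -p /mnt/test
/usr/sbin/cryptsetup luksOpen /dev/sdv1 TOSHIBA_HDWN180_xxxxxxxxxxxxx   # should prompt for, or apply, the passphrase
mount /dev/mapper/TOSHIBA_HDWN180_xxxxxxxxxxxxx /mnt/test               # mount the opened container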


I think the problem, assuming you got the larger disk into the array using a rebuild, will be that the rebuilt disk has the same GUID as the old one.  This normally shows up in the log for the drive.  You can change the GUID of the old disk using Settings -> Unassigned Devices.
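
If you want to check or change it from the command line instead, something along these lines should work; sdX and NAME are placeholders for the old drive and its opened LUKS mapping, and since your disk is encrypted I am not certain whether Unraid keys on the LUKS header UUID or the filesystem UUID inside it, so I would look at both:

blkid /dev/sdX1                                      # shows the UUID of the LUKS container on the old drive
cryptsetup luksUUID /dev/sdX1                        # same value, as reported by cryptsetup
cryptsetup luksUUID --uuid "$(uuidgen)" /dev/sdX1    # write a new random UUID into the LUKS header
xfs_admin -U generate /dev/mapper/NAME               # regenerate the XFS UUID inside (after luksOpen, while unmounted)

The Settings -> Unassigned Devices page is meant to do the same thing for you through the GUI.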


Reposted from the General forum where it was incorrectly posted. Many thanks to @JorgeB for the redirection.

 

With some additional thoughts (see below).

 

====cut here====

 

I've just precleared a 16TB drive and formatted it as btrfs-luks with a pass phrase. I want to mount it and share it as an unassigned device.

 

It won't mount. Sensible, because I'm not giving it the pass phrase. But I can't give it the pass phrase because I'm not being offered any way of entering it.

 

I'm struggling to find documentation in the manual about this. Anyone care to help?

 

LATER THAT SAME OTHERWISE UNEVENTFUL AFTERNOON.

 

Thanks for that very swift response, @JorgeB. I was a few seconds ahead of that redirection with the solution and was trying to post it here, but for some reason this forum is being very sluggish for me today.

 

I think it's worth expanding this beyond just trotting out the solution because I believe my newbie expectation that there would be a dialogue box for pass phrase entry on each attempt to mount the drive may not be exceptional.

 

There is no such dialogue box. The second logical place to look for the entry of a pass phrase would, I believe, be in the settings for that particular drive in its listing under MAIN. Again, it's not impossible that others new to UnRAID might be tempted to look there. No, there's nothing there either.

 

The pass phrase that was entered when formatting the drive needs to be re-entered into the general Unassigned Devices settings, which you can reach from the SETTINGS tab, under User Utilities/Set Encrypted Disk Password. I find both the location and the permanence of this pass phrase, shall we say, counter-intuitive, but it is what it is.

 

What this means is that the encryption of the drive effectively only kicks in if someone steals the drive but forgets to take the rest of the UnRAID box as well.

 

Shouldn't there at least be an option (in the WebGUI---I've no doubt something can be cooked up at the command line) to insist that each mount occasion requires the pass phrase?

 

-- 

Chris

 

UnRAID 6.9.0-rc2

2 hours ago, itimpi said:

I think the problem, assuming you got the larger disk into the array using a rebuild, will be that the rebuilt disk has the same GUID as the old one.

Makes sense and is correct. Partition 1, however, has a different GUID on the rebuilt drive. Is this Unraid's doing when resizing the partition after the rebuild?

2 hours ago, itimpi said:

This normally shows up in the log for the drive

For some reason it didn't.

 

2 hours ago, itimpi said:

You can change the GUID of the old disk using Settings -> Unassigned Devices.

The list of available drives for changing the GUID in UD is empty. Any idea?

4 hours ago, bidmead said:

Shouldn't there at least be an option (in the WebGUI---I've no doubt something can be cooked up at the command line) to insist that each mount occasion requires the pass phrase?

The best way to handle this is to use the same passphrase as the array.

1 minute ago, dlandon said:

UD will not try to mount a passed through disk.  You still have to do the steps to pass it to the VM.

Thanks. But why is that switch called Pass-Through then? It has nothing to do with pass-through. It simply ignores disks with that switch activated. Wouldn't it be better called "Ignore" or "Don't mount"? IMHO it suggests something that doesn't happen.

 

5 hours ago, tstor said:

Makes sense and is correct. Partition 1, however, has a different GUID on the rebuilt drive. Is this Unraid's doing when resizing the partition after the rebuild?

For some reason it didn't.

 

The list of available drives for changing the GUID in UD is empty. Any idea?

Post diagnostics.

1 minute ago, hawihoney said:

Thanks. But why is that switch called Pass-Through then? It has nothing to do with pass-through. It simply ignores disks with that switch activated. Wouldn't it be better called "Ignore" or "Don't mount"? IMHO it suggests something that doesn't happen.

 

What it means is that UD needs to not mount the disk because it is being passed through.  This is the normal way a UD disk is handled when UD is not supposed to mount it.  I think it makes sense.

 

You seem to imply that if UD marks a disk as passed through, UD manages passing it through to a VM automatically.  UD doesn't do that; it only makes the disk available, because UD won't mount it.
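
For example, when you add the disk to the VM yourself, you would reference it by its by-id path; sdX here is a placeholder for the passed-through device:

ls -l /dev/disk/by-id/ | grep sdX
# pick the ata-... (or wwn-...) entry that points at the disk and use that
# /dev/disk/by-id/... path when you add the disk to the VM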

4 hours ago, dlandon said:

Post diagnostics

Here they are. Please note that in the meantime I have changed the conflicting UUID manually (/dev/sdt). However, UD still does not show any disk under "Change Disk UUID".

 

By the way, and completely unrelated: when searching this thread for information regarding LUKS and UD, I got a bit confused about LUKS and SSDs. In your first post it is first stated that "SSD disks formatted with xfs, btrfs, or ext4 will be mounted with 'discard'.  This includes encrypted disks."
Then further down in the same post it is said that "Discard is disabled on an encrypted SSD because of potential security concerns.  Fstrim will fail."

Finally a post much later contains this: "Add '--allow-discards' to luks open when an encrypted disk is a SSD so discard and trim will work on the disk."

What is the current status regarding SSD / discard / encryption?

 

tower-diagnostics-20210115-0306.zip

9 hours ago, tstor said:

Here they are. Please note that in the meantime I have changed the conflicting UUID manually (/dev/sdt). However, UD still does not show any disk under "Change Disk UUID".

 

By the way, and completely unrelated: when searching this thread for information regarding LUKS and UD, I got a bit confused about LUKS and SSDs. In your first post it is first stated that "SSD disks formatted with xfs, btrfs, or ext4 will be mounted with 'discard'.  This includes encrypted disks."
Then further down in the same post it is said that "Discard is disabled on an encrypted SSD because of potential security concerns.  Fstrim will fail."

Finally a post much later contains this: "Add '--allow-discards' to luks open when an encrypted disk is a SSD so discard and trim will work on the disk."

What is the current status regarding SSD / discard / encryption?

 

tower-diagnostics-20210115-0306.zip

When the "Mount Disks with 'discard' option?" is set to yes an encrypted disk will be mounted with the discard option.  The --allow-discards is used with the luksOpen command.  Some of the confusion is that the use of discard on encrypted disks has changed and the posts reflect the discussion of those changes.  I will review the first post write up and adjust as necessary to clear up any confusion.

 

The UUID issue is because the disk is encrypted and UD is not determining that the disk is formatted XFS when creating a list of disks where the UUID can be changed.  I will have to look into this.

5 hours ago, dlandon said:

I will review the first post write up and adjust as necessary to clear up any confusion.

Thanks

 

5 hours ago, dlandon said:

The UUID issue is because the disk is encrypted and UD is not determining that the disk is formatted XFS when creating a list of disks where the UUID can be changed.  I will have to look into this.

Thanks again, I really appreciate the efforts you put into the UD plugins.

 

Now, even though I first changed the UUID via the CLI and then rebooted the server, UD still does not mount the encrypted disk. Any idea?


Not sure if anyone else has experienced this, but it's only happened over the last 2 weeks. I've been upgrading drives on my media unRAID, going from 10TB to 16TB. I then use the old 10TB drives to upgrade my backup unRAID. I leave the old 10TB drives alone until the parity rebuild completes successfully on the replacement 16TB drives.

 

Once the 16TB drives have been successfully rebuilt, I then want to run a preclear zero pass on the 10TB drives before re-using them in my backup unRAID. I know it's not really needed to successfully re-use the 10TB drives, but my OCD is better satisfied if I do. As my media unRAID is running on a 10+ year-old motherboard/CPU, it only has USB 2.0, so I attach the old 10TB drives to an external USB 3.0 to SATA enclosure so I can run the zero pass on the backup unRAID system.

 

UD has no problem seeing the drive and shows it as XFS formatted. It can be mounted and the contents of the drive are accessible so everything appears to be working correctly. The only thing that I can't seem to do is enable the drive to be shared from its settings page in UD. Just like @vyreks mentioned, clicking Done or Save on the settings page reverts the Share switch to off. I only want to share it so I can do a quick contents compare over the network against the new 16TB drive that replaced it. As I can't get the drive to share, I temporarily put it back in the media unRAID, change the UUID, mount it and then do the quick contents compare with the new 16TB drive.

 

I then re-install it in the USB enclosure and re-attach it to the backup unRAID system via a USB 3 port. I then attempt to remove the partition(s) so I can use preclear. For the last 2 x 10TB drives, for some reason UD fails when attempting to remove the partition. I tried changing the mountpoint label and that was successful, but it still failed when attempting to remove the partition. Rather than re-installing it in the media unRAID again, I tried attaching the 10TB drives to my Ubuntu laptop with USB 3 ports and using the Disks utility to remove the partition. It also fails, which seems to indicate that it's something to do with the formatting created when it was formerly part of the protected unRAID array.

 

As a last-ditch attempt before re-installing the 10TB drives in the media unRAID just to remove the partitions, I tried using my MacBook Pro and Disk Utility, with the 10TB drives still in the USB 3 enclosure. The Mac immediately reports the drive as unrecognized and asks me to 'Initialize' it, which I do. I can then erase the drive - typically I just choose to format it as NTFS because, for some reason, Mac Disk Utility won't let you remove all partitions anymore. I can then re-attach the drive to my backup unRAID and now UD is able to remove the partition. And of course I can then run my preclear zero pass.
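
In case it helps anyone else, a command-line route that should also clear the old signatures without involving the Mac is something like this (just a sketch; double-check the device letter first and make sure the disk is unmounted, because it wipes the partition table):

wipefs -a /dev/sdX                           # remove all filesystem and partition-table signatures
dd if=/dev/zero of=/dev/sdX bs=1M count=10   # or, more bluntly, zero out the start of the disk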

 

While trying to figure out both the share and remove partition issues, I tried some reboots. I made the mistake of forgetting to grab the logs before rebooting so I don't have any logs to share at this time. I have 2 more of the 10TB --> 16TB upgrades to do so I'll be sure to grab the logs if the same situation occurs. In any case, it seems that something changed in the way UD works when attempting to share or remove partitions. When I did this in the past going from 4TB to 10TB drives, it had no problems with either sharing or removing the partitions. I'm receiving the 16TB drives today so I'll see if the issue persists.

 

In any case, this long-winded post is more of an 'FYI', but I can confirm the same share issue that @vyreks posted above. Any thoughts until I can grab some diagnostics/logs if it happens again?

 

