Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array


Recommended Posts

2 hours ago, Akshunhiro said:

 

Just saw the plugin update, wondering whether I need to change anything on my system.

 

I'm on 6.11.4 and have a ZFS pool created in TrueNAS passed-through.

You don't need to change anything.  This feature is intended to help with zvol issues in 6.12.  In 6.12, you can unassign a ZFS pool device that might have zvols and mount it in UD so you can do a file system check, extract files, and generally work with the zvol.  The best use case is when you have a VM on a zvol and are having issues with the VM.

  • Like 1
Link to comment

Updated the Plugin to 2023.05.23 and now my UD Pools won't mount, generating an error in the syslog that the name is reserved and cannot be used.

 

May 24 10:56:57 Morgoth unassigned.devices: Error: Device '/dev/sde1' mount point 'Definitely_Not_Torrents' - name is reserved, used in the array or a pool, or by an unassigned device.
May 24 10:56:57 Morgoth unassigned.devices: Disk with serial 'CT4000MX500SSD1_2223E63A58B2', mountpoint 'Definitely_not_Torrents' cannot be mounted.

May 24 11:02:21 Morgoth unassigned.devices: Error: Device '/dev/nvme0n1p1' mount point 'Plex_Docker_AppData' - name is reserved, used in the array or a pool, or by an unassigned device.
May 24 11:02:21 Morgoth unassigned.devices: Disk with serial 'Samsung_SSD_970_EVO_500GB_S5H7NC0N331833E', mountpoint 'Plex_Docker_AppData' cannot be mounted.


Have two Unassigned Devices pools that have been working for a few years now, but they will no longer mount, stating the mount name is reserved or in use. I have tried stopping the array, removing the /var/state/unassigned.devices/share_names.json file, and starting the array again, but I see the same result. The issue persists through a reboot as well. I suspect there is an issue with the share name dupe check that was added in the most recent update.

Link to comment
21 minutes ago, tronyx said:

Updated the Plugin to 2023.05.23 and now my UD Pools won't mount, generating an error in the syslog that the name is reserved and cannot be used.

 

May 24 10:56:57 Morgoth unassigned.devices: Error: Device '/dev/sde1' mount point 'Definitely_Not_Torrents' - name is reserved, used in the array or a pool, or by an unassigned device.
May 24 10:56:57 Morgoth unassigned.devices: Disk with serial 'CT4000MX500SSD1_2223E63A58B2', mountpoint 'Definitely_not_Torrents' cannot be mounted.

May 24 11:02:21 Morgoth unassigned.devices: Error: Device '/dev/nvme0n1p1' mount point 'Plex_Docker_AppData' - name is reserved, used in the array or a pool, or by an unassigned device.
May 24 11:02:21 Morgoth unassigned.devices: Disk with serial 'Samsung_SSD_970_EVO_500GB_S5H7NC0N331833E', mountpoint 'Plex_Docker_AppData' cannot be mounted.


Have two Unassigned Devices pools that have been working for a few years now, but they will no longer mount, stating the mount name is reserved or in use. I have tried stopping the array, removing the /var/state/unassigned.devices/share_names.json file, and starting the array again, but I see the same result. The issue persists through a reboot as well. I suspect there is an issue with the share name dupe check that was added in the most recent update.

I see the problem and will issue an update to UD as soon as I figure out what is happening.

  • Like 1
Link to comment

I was trying to add a 10-minute delay to auto-mounting a remote drive on my Unraid server every time the array starts. The drive I intend to mount is actually a share from my VM, and I need a few minutes after boot for the VM to start.

 

I had the following script, but it didn't work. Do you see any problem with it?

#!/bin/bash

# Wait for 10 minutes before attempting to mount the shared drives
sleep 600

# Define the shared drive information (shell variable names cannot contain hyphens)
nl3s_vm_dsm="//192.168.4.172/nl3s_appdata backup"
nl3b="//192.168.4.216/nl3s_appdata backup"

# Each share needs its own mount point
mkdir -p /mnt/remotes/nl3s-vm-dsm /mnt/remotes/nl3b

# Mount the shared drives
mount -t cifs -o username=marco,password=19980928MMCmmc "$nl3s_vm_dsm" /mnt/remotes/nl3s-vm-dsm
mount -t cifs -o username=marco,password=19980928MMCmmc "$nl3b" /mnt/remotes/nl3b

# Verify that the drives mounted successfully
if mountpoint -q /mnt/remotes/nl3s-vm-dsm && mountpoint -q /mnt/remotes/nl3b; then
    echo "Shared drives mounted successfully."
else
    echo "Failed to mount one or more shared drives."
fi

(attachments: three screenshots and northernlight3s-diagnostics-20230524-2340.zip)

Link to comment
38 minutes ago, marco_yang said:

I had the following script, but it didn't work. Do you see any problem with it?

That won't work.  Have UD mount them with this command:

 

/usr/local/sbin/rc.unassigned mount 'source', where 'source' is the SMB/NFS source.
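The advice above could be wrapped in a user script along these lines. This is a minimal sketch, assuming both SMB sources are already configured in Unassigned Devices and that rc.unassigned lives at the path shown; the `UD_MOUNT` variable and the `delayed_ud_mount` function name are only illustrative, not part of UD itself:

```shell
#!/bin/bash
# Minimal sketch: wait, then ask UD to mount each configured SMB/NFS source.
# UD_MOUNT is overridable only so the function can be exercised off-server.
UD_MOUNT="${UD_MOUNT:-/usr/local/sbin/rc.unassigned}"

delayed_ud_mount() {
    local delay="$1"; shift
    sleep "$delay"                   # give the VM time to come up
    local src
    for src in "$@"; do
        "$UD_MOUNT" mount "$src"     # let UD do the mount so it stays tracked
    done
}

# Example, using the sources from the script above:
# delayed_ud_mount 600 '//192.168.4.172/nl3s_appdata backup' '//192.168.4.216/nl3s_appdata backup'
```

Because UD performs the mount, the shares appear under /mnt/remotes and show as mounted in the UD page, which a bare `mount -t cifs` call would not do.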

Link to comment
5 hours ago, dlandon said:

I see the problem and will issue an update to UD as soon as I figure out what is happening.


Is there anything I can do in the meantime to get them mounted? Server is pretty much useless without being able to mount these pools.

Link to comment

New version of UD.  The notable change is with root shares.  Changes were made in the configuration to be more aligned with how other SMB and NFS shares are configured.  This change will cause any existing root shares to fail to mount.  The fix is to remove any root shares and add them back.  Save any script files you have on the root shares as they will be deleted when the root share is removed.

 

The other change fixes pools failing to mount with a false 'share name is already being used' message.  I broke this yesterday while making a change to address a case where a duplicate share name was not detected.  It should be all sorted out now.

  • Like 1
Link to comment
4 minutes ago, dlandon said:

New version of UD.  The notable change is with root shares.  Changes were made in the configuration to be more aligned with how other SMB and NFS shares are configured.  This change will cause any existing root shares to fail to mount.  The fix is to remove any root shares and add them back.  Save any script files you have on the root shares as they will be deleted when the root share is removed.

 

The other change fixes pools failing to mount with a false 'share name is already being used' message.  I broke this yesterday while making a change to address a case where a duplicate share name was not detected.  It should be all sorted out now.


Thank you for dealing with this so quickly. One pool mounted fine, but the other is having issues now, and I think it might be because I changed the UUID of one of the disks within UD earlier today to try to resolve the issue.

 

May 24 21:00:11 Morgoth unassigned.devices: *** dev /dev/sde1 mountpoint Definitely_Not_Torrents
May 24 21:00:11 Morgoth unassigned.devices: Mounting partition 'sde1' at mountpoint '/mnt/disks/Definitely_Not_Torrents'...
May 24 21:00:11 Morgoth unassigned.devices: Mount cmd: /sbin/mount -t 'btrfs' -o rw,noatime,nodiratime,space_cache=v2 '/dev/sde1' '/mnt/disks/Definitely_Not_Torrents'
May 24 21:00:11 Morgoth kernel: BTRFS error (device sde1): unrecognized or unsupported super flag: 34359738368
May 24 21:00:11 Morgoth kernel: BTRFS error (device sde1): dev_item UUID does not match metadata fsid: d5cddf19-cc3e-42bc-8607-4e4242a02d61 != 62b54680-89b9-47a3-a1be-a770935f18df
May 24 21:00:11 Morgoth kernel: BTRFS error (device sde1): superblock contains fatal errors
May 24 21:00:11 Morgoth kernel: BTRFS error (device sde1): open_ctree failed
May 24 21:00:13 Morgoth unassigned.devices: Mount of 'sde1' failed: 'mount: /mnt/disks/Definitely_Not_Torrents: wrong fs type, bad option, bad superblock on /dev/sde1, missing codepage or helper program, or other error.        dmesg(1) may have more information after failed mount system call. '
May 24 21:00:13 Morgoth unassigned.devices: Partition 'Definitely_Not_Torrents' cannot be mounted.
May 24 21:00:13 Morgoth unassigned.devices: Disk with ID 'CT4000MX500SSD1_2223E63A58B0 (sdf)' is not set to auto mount.


Would you happen to know if there is some way I can fix this?

Link to comment
6 minutes ago, tronyx said:

Would you happen to know if there is some way I can fix this?

Yes, the pool devices all have to have the same UUID.  There is a way to do it with a command line command, but I would be afraid to offer that up because I'd probably screw it up.  @JorgeB is who could help with this.  Be patient though; he lives in Europe and it's late at night there.
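Before changing anything, it can help to confirm whether the pool members actually disagree. A small sketch, assuming the UUIDs are gathered separately (for example with `blkid -s UUID -o value /dev/sde1`); the `uuids_match` helper is hypothetical, not part of UD:

```shell
#!/bin/bash
# Sketch: check that every pool member reports the same btrfs fsid.
# Feed it one UUID per device, e.g.:
#   uuids_match "$(blkid -s UUID -o value /dev/sde1)" "$(blkid -s UUID -o value /dev/sdf1)"
uuids_match() {
    local first="$1" u
    for u in "$@"; do
        if [ "$u" != "$first" ]; then
            echo "mismatch: $u != $first"
            return 1
        fi
    done
    echo "all match: $first"
}
```

All members of one btrfs pool must report the same fsid, or the pool refuses to mount, as in the log above.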

  • Like 1
Link to comment
16 minutes ago, dlandon said:

UD allows changing the UUID on BTRFS disks also.  Maybe I need to block that on pool disks.

I was going to say that would probably be a good idea, or that it should change the UUID for all pool members, but I tested it and it did change it for all pool devices:

 

May 25 12:40:35 Tower15 unassigned.devices: Changed partition UUID on '/dev/sdg1' with result: Current fsid: cf6ba21f-646f-4793-9145-29d965e34c2b New fsid: f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd Set superblock flag CHANGING_FSID Change fsid in extent tree Change fsid in chunk tree Clear superblock flag CHANGING_FSID Fsid change finished 
May 25 12:40:35 Tower15 kernel: BTRFS: device fsid f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd devid 2 transid 12 /dev/sdg1 scanned by udevd (29766)
May 25 12:40:35 Tower15 kernel: BTRFS: device fsid f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd devid 3 transid 12 /dev/sdb1 scanned by udevd (29765)
May 25 12:40:35 Tower15 kernel: BTRFS: device fsid f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd devid 1 transid 12 /dev/sde1 scanned by udevd (29764)

 

@tronyx were all devices connected when you changed the UUID?

Link to comment
2 hours ago, JorgeB said:

I was going to say that would probably be a good idea, or that it should change the UUID for all pool members, but I tested it and it did change it for all pool devices:

 

May 25 12:40:35 Tower15 unassigned.devices: Changed partition UUID on '/dev/sdg1' with result: Current fsid: cf6ba21f-646f-4793-9145-29d965e34c2b New fsid: f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd Set superblock flag CHANGING_FSID Change fsid in extent tree Change fsid in chunk tree Clear superblock flag CHANGING_FSID Fsid change finished 
May 25 12:40:35 Tower15 kernel: BTRFS: device fsid f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd devid 2 transid 12 /dev/sdg1 scanned by udevd (29766)
May 25 12:40:35 Tower15 kernel: BTRFS: device fsid f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd devid 3 transid 12 /dev/sdb1 scanned by udevd (29765)
May 25 12:40:35 Tower15 kernel: BTRFS: device fsid f49fbb0d-3c6e-46bf-b03d-bc76c86c3cdd devid 1 transid 12 /dev/sde1 scanned by udevd (29764)

 

@tronyx were all devices connected when you changed the UUID?


Yes, all drives were connected.

Link to comment

I checked the syslog, however, and saw the following:

 

May 24 10:40:16 Morgoth  ool www[20370]: /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.settings 'uuid_change'
May 24 10:40:36 Morgoth kernel: BTRFS: device label Torrents devid 1 transid 34810 /dev/sde1 scanned by udevd (29068)
May 24 10:40:43 Morgoth kernel: BTRFS: device label Torrents devid 2 transid 34810 /dev/sdf1 scanned by udevd (29069)
May 24 10:40:57 Morgoth unassigned.devices: Warning: shell_exec(/sbin/btrfstune -uf '/dev/sde1') took longer than 20s!
May 24 10:40:57 Morgoth unassigned.devices: Changed partition UUID on '/dev/sde1' with result: command timed out

 

Link to comment
8 minutes ago, JorgeB said:

That's strange, since I don't see a way to change the UUID of a single pool member when all are connected. Post the output of:

btrfs fi show

 


Here is the output:

 

root@Morgoth:~# btrfs fi show
Label: none  uuid: fc5ecaa9-cdce-4b18-afe2-0d3640b5e669
        Total devices 2 FS bytes used 663.39GiB
        devid    1 size 894.25GiB used 722.03GiB path /dev/sdc1
        devid    2 size 894.25GiB used 722.03GiB path /dev/sdb1

Label: 'Plex_Docker_AppData'  uuid: 8a3a773d-d3f3-46b2-9783-b5c5c517d2b7
        Total devices 2 FS bytes used 127.39GiB
        devid    1 size 465.76GiB used 253.03GiB path /dev/nvme0n1p1
        devid    2 size 465.76GiB used 253.03GiB path /dev/nvme1n1p1


ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sdf1: Input/output error
ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sde1: Input/output error


Therein lies the issue which I'm trying to fix.

Edited by tronyx
Link to comment
8 minutes ago, tronyx said:
ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sdf1: Input/output error
ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sde1: Input/output error

 

This shows the UUID change didn't complete, so it's not just a case of both pool members having different UUIDs, which I could reproduce (by changing the UUID with one of the devices disconnected) and fix. I'm not sure about this one, since I cannot create an identical situation, so let's try the following.

First, try changing the UUID again for only one of the devices; the other one still has the same fsid, so let's see if it can change both automatically:

btrfstune -u /dev/sdf1

If that doesn't work, try the other one:

btrfstune -u /dev/sde1

If neither works, try manually changing both to the same UUID. If it works for the 1st device, you must temporarily disconnect it before changing the 2nd one, or it will complain that the UUID already exists:

btrfstune -U 81fe799a-b046-40f8-b9af-fe886306ba0d /dev/sdf1

If that fails, try the other one:

btrfstune -U 81fe799a-b046-40f8-b9af-fe886306ba0d /dev/sde1

If successful for one of them, disconnect that device (or use UD to detach it), then do it for the other one, then reboot and post new btrfs fi show output.

  • Like 2
Link to comment
6 minutes ago, JorgeB said:

 

This shows the UUID change didn't complete, so it's not just a case of both pool members having different UUIDs, which I could reproduce (by changing the UUID with one of the devices disconnected) and fix. I'm not sure about this one, since I cannot create an identical situation, so let's try the following.

First, try changing the UUID again for only one of the devices; the other one still has the same fsid, so let's see if it can change both automatically:

btrfstune -u /dev/sdf1

If that doesn't work, try the other one:

btrfstune -u /dev/sde1

If neither works, try manually changing both to the same UUID. If it works for the 1st device, you must temporarily disconnect it before changing the 2nd one, or it will complain that the UUID already exists:

btrfstune -U 81fe799a-b046-40f8-b9af-fe886306ba0d /dev/sdf1

If that fails, try the other one:

btrfstune -U 81fe799a-b046-40f8-b9af-fe886306ba0d /dev/sde1

If successful for one of them, disconnect that device (or use UD to detach it), then do it for the other one, then reboot and post new btrfs fi show output.


Changing the UUID for the 1st drive failed, but the 2nd drive worked:

 

root@Morgoth:~# btrfstune -u /dev/sdf1
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sdf1: Input/output error
ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sde1: Input/output error
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
warning, device 1 is missing
WARNING: it's recommended to run 'btrfs check --readonly' before this operation.
        The whole operation must finish before the filesystem can be mounted again.
        If cancelled or interrupted, run 'btrfstune -u' to restart.
We are going to change UUID, are your sure? [y/N]: y
Current fsid: d5cddf19-cc3e-42bc-8607-4e4242a02d61
New fsid: d5cddf19-cc3e-42bc-8607-4e4242a02d61
Set superblock flag CHANGING_FSID
ERROR: failed to write bytenr 956768075776 length 16384: Input/output error
ERROR: btrfstune failed
root@Morgoth:~# btrfstune -u /dev/sde1
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: dev_item UUID does not match fsid: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
ERROR: superblock checksum matches but it has invalid members
ERROR: cannot scan /dev/sde1: Input/output error
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
WARNING: ignored: dev_item fsid mismatch: 62b54680-89b9-47a3-a1be-a770935f18df != 00000000-0000-0000-0000-000000000000
WARNING: it's recommended to run 'btrfs check --readonly' before this operation.
        The whole operation must finish before the filesystem can be mounted again.
        If cancelled or interrupted, run 'btrfstune -u' to restart.
We are going to change UUID, are your sure? [y/N]: y
Current fsid: d5cddf19-cc3e-42bc-8607-4e4242a02d61
New fsid: d5cddf19-cc3e-42bc-8607-4e4242a02d61
Set superblock flag CHANGING_FSID
Change fsid in extents
Change fsid on devices
Clear superblock flag CHANGING_FSID
Fsid change finished
root@Morgoth:~# btrfs fi show
Label: none  uuid: fc5ecaa9-cdce-4b18-afe2-0d3640b5e669
        Total devices 2 FS bytes used 661.49GiB
        devid    1 size 894.25GiB used 722.03GiB path /dev/sdc1
        devid    2 size 894.25GiB used 722.03GiB path /dev/sdb1

Label: 'Plex_Docker_AppData'  uuid: 8a3a773d-d3f3-46b2-9783-b5c5c517d2b7
        Total devices 2 FS bytes used 127.39GiB
        devid    1 size 465.76GiB used 253.03GiB path /dev/nvme0n1p1
        devid    2 size 465.76GiB used 253.03GiB path /dev/nvme1n1p1

Label: 'Torrents'  uuid: d5cddf19-cc3e-42bc-8607-4e4242a02d61
        Total devices 2 FS bytes used 1.22TiB
        devid    1 size 3.64TiB used 1.23TiB path /dev/sde1
        devid    2 size 3.64TiB used 1.23TiB path /dev/sdf1


I was then able to mount the pool and everything seems to be good to go again. Thank you so much!

  • Like 1
Link to comment
