Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



40 minutes ago, dlandon said:

That would be best.  A diagnostic a week later doesn't really help.

Shut down my Synology. All the syno share status dots turned grey.
Told the array to stop. My remote Unraid share went to an UNMOUNTING state. A minute later, the first syno mount did too.
Let it sit there for a couple of minutes, collected the diag file ending in 1724.
Powered the syno back up. As soon as the NAS was online, the NFS share dots turned green and they all changed to show MOUNT. My remote Unraid share went back to showing UNMOUNT.
Collected another diag file ending in 1730.
Remounted the shares.
Started my array.

unraid-diagnostics-20220409-1724.zip unraid-diagnostics-20220409-1730.zip

Link to comment
31 minutes ago, UncleStu said:

Shut down my Synology. All the syno share status dots turned grey.
Told the array to stop. My remote Unraid share went to an UNMOUNTING state. A minute later, the first syno mount did too.
Let it sit there for a couple of minutes, collected the diag file ending in 1724.
Powered the syno back up. As soon as the NAS was online, the NFS share dots turned green and they all changed to show MOUNT. My remote Unraid share went back to showing UNMOUNT.
Collected another diag file ending in 1730.
Remounted the shares.
Started my array.

unraid-diagnostics-20220409-1724.zip unraid-diagnostics-20220409-1730.zip

The unmount is failing with a command timeout:

Apr  9 17:21:52 unRAID unassigned.devices: Unmount cmd: /sbin/umount -fl '10.253.0.2:/mnt/user/ricketts' 2>&1
Apr  9 17:22:22 unRAID unassigned.devices: Error: shell_exec(/sbin/umount -fl '10.253.0.2:/mnt/user/ricketts' 2>&1) took longer than 30s!
Apr  9 17:22:22 unRAID unassigned.devices: Unmount of 'ricketts' failed: 'command timed out'

 

The '-fl' options do a force lazy unmount on an NFS mount.  The idea is to force the unmount even if the remote server is off-line.  It appears that isn't working, though from all the research I've done, it's supposed to.  I don't have an answer.
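
For reference, here's a by-hand equivalent of what the plugin runs, with the flags broken out (the share path is the one from the log above):

# Force (-f) plus lazy (-l) unmount of an NFS share whose server is offline.
# -f: force the unmount even though the server is not responding
# -l: lazy unmount - detach now, clean up references once nothing holds them
/sbin/umount -f -l '10.253.0.2:/mnt/user/ricketts'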

Link to comment

Ever since rc4 I cannot get my second Unraid server to connect to the first.  I can get NFS working, and it sees the SMB shares on my first server, but it won't mount them.  The first one can connect to the second.  I tried uninstalling the Unassigned Devices plugins and reinstalling them, along with reboots.  Like I said, it worked before rc4.  Both servers share the exact same security and rootshare settings.

 

This is the error in the log.

Apr 10 12:05:16 Nas1 kernel: CIFS: Attempting to mount \\9900K\rootshare
Apr 10 12:05:16 Nas1 kernel: CIFS: Status code returned 0xc000006d STATUS_LOGON_FAILURE
Apr 10 12:05:16 Nas1 kernel: CIFS: VFS: \\9900K Send error in SessSetup = -13
Apr 10 12:05:16 Nas1 kernel: CIFS: VFS: cifs_mount failed w/return code = -13
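
One way to confirm whether the credentials themselves are the problem, independent of UD, is to test them from the console.  A sketch - the hostname and share are from the log above; the username and mount point are placeholders:

# List the shares 9900K offers with the same credentials UD is using;
# a STATUS_LOGON_FAILURE here reproduces the problem outside the plugin.
smbclient -L //9900K -U someuser

# Or attempt the mount by hand to see the full error:
mkdir -p /mnt/test
mount -t cifs //9900K/rootshare /mnt/test -o username=someuser,password='secret',vers=3.0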

 

Link to comment
Just now, dlandon said:

That's a waste of time and doesn't solve anything.

 

It's telling you that you have a credentials issue.

I have quadruple-checked the rootshare username, and even changed the password.  If I enter the password wrong, it won't even show the shares, so I know the password is right.

 

What else could it be if not the username and password?

Link to comment
4 minutes ago, sittingmongoose said:

I have quadruple-checked the rootshare username, and even changed the password.  If I enter the password wrong, it won't even show the shares, so I know the password is right.

 

What else could it be if not the username and password?

Watch out for special characters - '$', '!', etc.
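
To see why those characters bite: when the password is passed on a shell command line in double quotes, the shell interprets them before the server ever sees the password.  A quick illustration with made-up passwords:

# $$ inside double quotes expands to the shell's process ID:
echo "pa$$word"      # prints something like pa12345word, not pa$$word

# ! followed by text triggers history expansion in an interactive shell:
echo "my!pass"       # bash: !pass: event not found

# Single quotes pass everything through literally:
echo 'pa$$word' 'my!pass'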

Link to comment
56 minutes ago, dlandon said:

Watch out for special characters - '$', '!', etc.

I do have a + and a ! in my password.  Is that a problem?

 

So I set all of the settings to be exactly the same on both servers, including usernames, passwords, and rootshare settings.  My first server connects to the second perfectly, but it doesn't work the other way around.  I also know for sure my password is right, because I purposely tried typing it in wrong and it won't load the available shares at all.

 

And again, it all worked perfectly before; I upgraded from the last stable release to rc4, and that's when it stopped working.  The one that upgraded is the host.  The client has been on beta releases since early on.

 

Is there somewhere else I should be looking?  

 

Edit: I am able to access the problem rootshare and all the SMB shares from a Windows PC.

Edited by sittingmongoose
Link to comment

Hey dlandon.  Thanks so much for this awesome tool.  I've been using it for years, and it's been a big help for certain tasks.

 

One of the things I occasionally use it for is a removable btrfs JBOD pool of five USB HDDs.  It's so easy to plug it in, mount it, run my rsync backup job, then put it back in offline storage when I'm done.  I love that I don't have to stop/start the array to use it, that I don't get warnings when I unplug it, and that I don't get any Fix Common Problems warnings about duplicated data on a cache disk.
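
For anyone who wants to copy this workflow, the backup step is roughly the following; the share and mount point are examples, not my actual paths:

# After mounting the UD pool, mirror an array share onto it.
# -a: archive mode (permissions, times, symlinks), -v: verbose
# --delete: remove files from the backup that no longer exist on the source
rsync -av --delete /mnt/user/backups/ /mnt/disks/Frankenstore/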

 

I was recently sharing my solution with some fellow users, and I discovered that the tutorial on how to create the btrfs drive pool for UD had been removed.  I reached out to JorgeB and he restored that post, so now I have those instructions again.
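
The core of that tutorial, as I understand it, is a single mkfs.btrfs across all the member partitions, something like this (device names are placeholders, and the command destroys any existing data):

# One btrfs filesystem spanning five partitions.
# -d single: data spread across devices JBOD-style (no redundancy)
# -m raid1:  metadata mirrored across devices
# -L Frankenstore: the filesystem label, which becomes the mount point in UD
mkfs.btrfs -d single -m raid1 -L Frankenstore /dev/sd[b-f]1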

 

While working with these other users on this hot-pluggable backup pool, and comparing it with how stock Unraid pools work, a few things cropped up that I wanted to ask you about.  After all, UD is the best tool for creating hot-pluggable drive pools that are normally stored offline, but there are a couple of things Unraid pools do a bit better.

 

First, when mounting the first pool device, the mount buttons on the other devices remain enabled.  One of my fellow users got confused, clicked mount on all the devices, and then saw the pool was mounted multiple times.  Would it be possible both to make it more obvious that all the drives in the pool are now mounted, and to disable/hide the mount button on the other drives?  Currently the only indication is the partition size on the mounted drive.  Perhaps the other drives mounted in the pool could even be inset to the right, beneath the parent, to better indicate what is going on.

 

Second, would it be possible to add a GUI feature to add a partition to an existing pool?  I believe Unraid pools let you do this, but in UD you have to go out to the command line and run the btrfs dev add... command to add the partition to a mount point (a sketch below).  I know it's a pretty easy command line, but some users are very uncomfortable with the command line and prefer the GUI approach.
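
For reference, the command-line step I mean looks roughly like this; the device and mount point are examples:

# Add a new partition to the already-mounted pool:
btrfs device add /dev/sdg1 /mnt/disks/Frankenstore

# Optionally spread existing data onto the new device afterwards:
btrfs balance start /mnt/disks/Frankenstore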

 

I know most people seem to think that Unraid pools are the only game in town now; even your own documentation says to use them.  But for hot-pluggable, removable drive pools, UD is so much better.  I hope you continue to support and enhance this capability.

 

Thanks!!!

Paul

 

Edited by Pauven
Link to comment
53 minutes ago, Pauven said:

First, when mounting the 1st pool device, the buttons to mount the other devices remain enabled. 

All devices in the pool must have the same mount point.  Click on the mount point when the devices are unmounted and make them all the same.

 

55 minutes ago, Pauven said:

Second, would it be possible to add a feature in the GUI to add a partition to an existing pool?

UD will mount an existing pool, but pool management is beyond UD's scope.

Link to comment

I just tried changing the Mount Point to be the same on all of them, and it won't let me.  It reports "Fail".  I think it's because it's changing the disk label and the mount point at the same time.

 

Is there a trick to doing this?

 

Errors in the log:

 

Apr 10 17:24:51 Tower unassigned.devices: Error: Device '/dev/sdx1' mount point 'Frankenstore' - name is reserved, used in the array or by an unassigned device.

 

Edited by Pauven
Link to comment
3 minutes ago, Pauven said:

I just tried changing the Mount Point to be the same on all of them, and it won't let me.  It reports "Fail".  I think it's because it's changing the disk label and the mount point at the same time.

 

Is there a trick to doing this?

Try blanking the mount point and it should pick up the default pool label.  If that doesn't work, remove all the pool devices and delete each one in Historical Devices.  Then re-install them.

 

Edit: The mount point has to be the disk label on the pool devices.
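
If the pool's label itself isn't what you want the mount point to be, you can also change the label from the console; a sketch with placeholder names:

# Set the filesystem label on an unmounted btrfs device:
btrfs filesystem label /dev/sdx1 Frankenstore

# Or, with the pool mounted, relabel via the mount point:
btrfs filesystem label /mnt/disks/Frankenstore Frankenstore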

Link to comment
3 minutes ago, sittingmongoose said:

I changed my passwords to not have special characters and now everything works, except my first server won't unmount the remote shares.  Do I need to reboot to get it to unmount?

I'm not sure of your sequence of events, but probably.  Look in the log after your unmount attempts and see if it's because the mounts are busy.
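
A quick way to check from the console what's holding a mount busy; the path is an example of where UD mounts remote shares:

# Show processes with open files on the mount (-v verbose, -m treat arg as a mount):
fuser -vm /mnt/remotes/9900K_rootshare

# Or list them with lsof:
lsof /mnt/remotes/9900K_rootshare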

Link to comment
1 hour ago, dlandon said:

Try blanking the mount point and it should pick up the default pool label.  If that doesn't work, remove all the pool devices and delete each one in Historical Devices.  Then re-install them.

 

Edit: The mount point has to be the disk label on the pool devices.

 

So does that mean this isn't possible?  Sorry, I got confused. 

 

Since the mount point has to be the disk label, and it won't let me rename to an existing value, doesn't that make the solution you offered impossible?

Link to comment
10 minutes ago, Pauven said:

Since the mount point has to be the disk label, and it won't let me rename to an existing value, that makes it impossible to do the solution you offered, right?

Did you see my suggestion about removing all the historical information for the devices?

 

Enter the disk label as the mount point.

 

A pool of devices is one where all the disks have the same label and the same UUID.  Because UD isn't pool-aware, you have to trick it into finding the pool.
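
You can verify the devices really form one pool before mounting; a sketch using an example label:

# Every member of a btrfs pool reports the same filesystem UUID and label:
blkid -t LABEL=Frankenstore

# btrfs's own view - one filesystem with all member devices listed:
btrfs filesystem show Frankenstore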

[Two screenshots attached]

Link to comment

I did see that, but then you appended your edit and I thought you were changing your answer, hence my confusion.

 

I currently have 78.6 TB of data backed up in this pool as-is.  If I follow those steps, is there any risk I could lose that data and have to repopulate the backup?  It took over a week of copying; I don't want to have to do that again.

 

If I'm understanding you correctly, I can remove the five disks, delete the history, then insert the disks one at a time and rename each to the same pool name, delete my history again just to make sure, and then the next time I bring in all five drives at the same time they will appear as a single pool.  Does that sound right?

Link to comment
