Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



On 1/12/2024 at 11:56 PM, dlandon said:

You can't change the time, but you can get around it.  Install the User Scripts plugin and set up a script to run on first array start.  Enter this script and adjust the time delay to your liking:

sleep 10   # wait 10 seconds after the array is started

# auto mount UD remote shares
/usr/local/sbin/rc.unassigned mount autoshares

 

It won't cause any issues with UD when it is run at a later time.

 

I have a similar issue.

My backup NAS is down most of the time and is woken once a week for backups.
I have set this up in UD with automount, and it works flawlessly when the backup NAS is online.

When I restart Unraid while the backup NAS is down, I see the following in the logs:

Mar 18 19:16:11 MediaServer unassigned.devices: Mounting Remote Share '//192.168.0.100/Backup'...

Mar 18 19:16:11 MediaServer nginx: 2024/03/18 19:16:11 [alert] 6458#6458: worker process 13854 exited on signal 6

Mar 18 19:16:12 MediaServer unassigned.devices: Remote Server '192.168.0.100' is offline and remote share 'Backup' cannot be mounted.

 

UD shows it as gray -> offline.
When I now start the backup NAS, UD recognizes it, the dot goes green, and the "Mount" button turns orange.

If I press it, the share mounts.

But my use case is a different one.
Some other Docker containers want to access the mount and complain that it doesn't exist -> they are correct, because it was not auto mounted.

Can I use "/usr/local/sbin/rc.unassigned mount autoshares" in this situation with a user script executed every minute?
Or is there another workaround for this scenario (backup NAS offline at boot -> comes online later but is not mounted in Unraid UD)?

Or, in other words, is there a "force" parameter so I can force the mount (even when the remote is offline, so that just the mount folder is created)?

 

PS: As a workaround, I am now running the "... mount autoshares" command with a user script, 2 minutes prior to the backup.


thanks

 

Edited by pOpYRaid
Link to comment
3 hours ago, pOpYRaid said:

PS: As a workaround, I am now running the "... mount autoshares" command with a user script, 2 minutes prior to the backup.

How about you change your strategy and mount and unmount the shares as needed for your backup.  You could also add some code that checks the backup server is online by pinging it before auto mounting, and either waits for it to come online or fails the backup.  In pseudocode:

  • Ping the remote server and see if it is online.
  • If not, wait for it to come online, or fail the backup script.
  • rc.unassigned mount <backup remote share>.
  • Perform the backup...
  • rc.unassigned umount <backup remote share>.

I wouldn't have Docker containers running that depend on the remote mount being available when it is turned off.  Coordinate the use of the mount points with when the backup server is online.
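
For reference, a minimal bash sketch of that pseudocode as a User Scripts script might look like the following. The IP, share name, mount point, rsync paths, and timings are assumptions taken from the posts above and would need to be adjusted; in particular, check how the share argument to rc.unassigned has to be spelled for your setup.

#!/bin/bash
# Sketch of the ping -> mount -> backup -> unmount flow (adjust everything marked as an assumption).
REMOTE_IP="192.168.0.100"
REMOTE_SHARE="//${REMOTE_IP}/Backup"              # share as configured in UD (assumption)
MOUNT_POINT="/mnt/remotes/${REMOTE_IP}_Backup"    # typical UD mount point for a remote share (assumption)

# Wait up to 5 minutes for the backup server to come online, otherwise fail.
for i in $(seq 1 30); do
    ping -c 1 -W 2 "$REMOTE_IP" >/dev/null 2>&1 && break
    [ "$i" -eq 30 ] && { echo "Backup server is offline - aborting backup"; exit 1; }
    sleep 10
done

/usr/local/sbin/rc.unassigned mount "$REMOTE_SHARE"

# Only back up if the mount actually succeeded.
if mountpoint -q "$MOUNT_POINT"; then
    rsync -av /mnt/user/Backup/ "$MOUNT_POINT/"   # example source path - adjust
    /usr/local/sbin/rc.unassigned umount "$REMOTE_SHARE"
else
    echo "Remote share did not mount - backup skipped"
    exit 1
fi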

  • Thanks 1
Link to comment

I am having a weird bug on Unraid 6.12.8 where I cannot save or modify any of my scripts. When I go to save, the screen merely flashes and everything reverts as if I had hit the Reset button. The only thing I can do is delete a script. I am pulling my hair out and resorting to manually moving files.

Link to comment
6 minutes ago, StriggityStrack said:

I am having a weird bug on Unraid 6.12.8 where I cannot save or modify any of my scripts. When I go to save, the screen merely flashes and everything reverts as if I had hit the Reset button. The only thing I can do is delete a script. I am pulling my hair out and resorting to manually moving files.

I'm not having any trouble with scripts.  Is your disk or remote share mounted?  The script buttons act differently if the device is mounted.  If the script is running, you are even more limited in what you can do.  The only button that applies changes is 'Apply'.

 

There's an off chance you are running into a PHP warning or error.  Check Tools->PHP Settings and see if anything is logged.

Link to comment
5 minutes ago, dlandon said:

I'm not having any trouble with scripts.  Is your disk or remote share mounted?  The script buttons act differently if the device is mounted.  If the script is running, you are even more limited in what you can do.  The only button that applies changes is 'Apply'.

 

There's an off chance you are running into a PHP warning or error.  Check Tools->PHP Settings and see if anything is logged.

I checked my PHP settings and there is no log file. I have tried changing the script from the offline historical device section, when the drive is plugged in but not mounted, and also when I mount it. I have been at this for some time now and I feel like I am just making a dumb mistake. The only time I can invoke a log message is when I select "default", change the code, and then save.

I get "Mar 18 20:28:09 NAS unassigned.devices: Warning: Cannot use '/boot/config/plugins/unassigned.devices/' as a device script file name." I also try changing the file name and then get "Mar 18 20:25:51 NAS unassigned.devices: Warning: Cannot use 'asdfasdf' as a device script file name."

 

Link to comment
22 hours ago, StriggityStrack said:

I get "Mar 18 20:28:09 NAS unassigned.devices: Warning: Cannot use '/boot/config/plugins/unassigned.devices/' as a device script file name.

That can't be used as a file name because it is a folder.  The name you want is '/boot/config/plugins/unassigned.devices/some file name'.

Link to comment
8 minutes ago, dlandon said:

That can't be used as a file name because it is a folder.  The name you want is '/boot/config/plugins/unassigned.devices/some file name'.

I also tried that; below is the before and after of when I select the Save button. I am going to sleep on this, restart my server tomorrow, and give it a fresh go when my head is on straight.

Screenshot 2024-03-18 210821.png

Screenshot 2024-03-18 210850.png

Link to comment

Hello, I have a problem with a partition on a Windows 11 VM:

I recently installed a new 4TB NVMe drive and created 3 NTFS partitions on it with a GParted VM, as you can see: Screenshot 2024-03-19 at 09.44.06.png

I passed the first partition (gaming) to the VM via /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_AA230803N404TB01240-part1


The first time I used the VM, the Windows Disk Management tool saw it as unallocated space, so I had to format it and assign a drive letter.

Now every time I use it I get this error:

Screenshot 2024-03-19 at 09.48.31.png

What can I do?

 

 

 

Edited by Vulneraria
Link to comment

They are mounted in Unraid in your screenshot; they must NOT be if the VM is to access them directly.

I'm not sure passing a single partition even works; you might have to pass the whole disk or nothing.
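
As a quick sanity check from the Unraid terminal, something like this (a rough sketch, not UD-specific, using the by-id path from the post above) will tell you whether the partition is still mounted anywhere before the VM is started:

# Resolve the by-id link to the real device node and check whether it is mounted anywhere.
DEV=$(readlink -f /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_AA230803N404TB01240-part1)
if findmnt --source "$DEV" >/dev/null; then
    echo "$DEV is still mounted - unmount it in UD before starting the VM"
else
    echo "$DEV is not mounted"
fi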

Edited by Kilrah
Link to comment

Thanks for the reply! I tried unmounting the disk and formatting it once again in the VM; let's see if this changes anything...

I added the partition as a second vdisk in the configuration, with a manual location and /dev/disk/by-id/nvme-SPCC_M.2_PCIe_SSD_AA230803N404TB01240-part1 as the path; I think I saw it in an old SpaceInvaderOne video.

Link to comment
14 hours ago, dlandon said:

How about you change your strategy and mount and unmount the shares as needed for your backup.  You could also add some code that checks the backup server is online by pinging it before auto mounting, and either waits for it to come online or fails the backup.  In pseudocode:

  • Ping the remote server and see if it is online.
  • If not, wait for it to come online, or fail the backup script.
  • rc.unassigned mount <backup remote share>.
  • Perform the backup...
  • rc.unassigned umount <backup remote share>.

I wouldn't have Docker containers running that depend on the remote mount being available when it is turned off.  Coordinate the use of the mount points with when the backup server is online.

 

Thanks, I will write a user script, use rsync directly, and ditch luckybackup altogether.

  • Like 1
Link to comment
22 hours ago, StriggityStrack said:

I am having a weird bug on Unraid 6.12.8 where I cannot save or modify any of my scripts. When I go to save, the screen merely flashes and everything reverts as if I had hit the Reset button. The only thing I can do is delete a script. I am pulling my hair out and resorting to manually moving files.

OK, it seems a change was made in Unraid 6.13 that addresses an issue with saving script files after a UD change.  It is fixed in 6.13, but 6.11 and 6.12 still have the issue.  I will update UD once I come up with a way to make a proper fix.

  • Upvote 1
Link to comment

I have the problem that File Manager does not seem to work correctly with Exclusive Shares as the source for copy/move operations.

 

To reproduce:

  • Go to the Shares tab and start to browse a share that is an Exclusive Share
  • Select a folder within that share
  • Select the Copy or Move option

You are now only offered a list of disk shares as the destination of the copy.  I think you should instead be offered a list of User Shares, as you would be if the source was not an Exclusive share.

 

The above process seems to work fine if the SOURCE is a non-exclusive share and the TARGET is an Exclusive share.

Link to comment
34 minutes ago, itimpi said:

I have the problem that File Manager does not seem to work correctly with Exclusive Shares as the source for copy/move operations.

 

To reproduce:

  • Go to the Shares tab and start to browse a share that is an Exclusive Share
  • Select a folder within that share
  • Select the Copy or Move option

You are now only offered a list of disk shares as the destination of the copy.  I think you should instead be offered a list of User Shares, as you would be if the source was not an Exclusive share.

 

The above process seems to work fine if the SOURCE is a non-exclusive share and the TARGET is an Exclusive share.

Did you intend to report this as a UD issue?

  • Like 1
Link to comment
1 hour ago, datahorder said:

Hi everyone,

noob here... 

Is there a way to not hide "dot" files?
I tried the normal SMB settings, but this either does not affect Unassigned Devices shares or I am doing something wrong.

Hiding dot files is controlled by the Settings->SMB setting.  It applies to all SMB shares - Unraid and UD shares.
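
If you want to double-check what Samba is actually using, something like this from the Unraid command line should show the effective value (testparm is part of the standard Samba tools; treat this as a sketch, since the exact parameter listing can vary):

# Dump the effective Samba configuration (including defaults) and look for the dot-files setting.
testparm -sv 2>/dev/null | grep -i "hide dot files"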

Link to comment
1 hour ago, dlandon said:

Hiding dot files is controlled by the Settings->SMB setting.  It applies to all SMB shares - Unraid and UD shares.

Oh.
I have the setting set to show dot files, but on my UD share I do not see the files. When I check on my Unraid server itself in the CLI, I can see the files.

 

Link to comment
2 minutes ago, datahorder said:

Oh.
I have the setting set to show dot files, but on my UD share I do not see the files. When I check on my Unraid server itself in the CLI, I can see the files.

 

It works for me.  Post your diagnostics so I can see your setup.

Link to comment

Hi, I really need some input on how to accomplish this using UAD.
I have already enabled a rootshare on each of my Unraid servers.

  1. I really want a rootshare mapped between my 2 Unraid servers in UAD (so far I keep getting errors).
  2. I would really like this to utilize the direct 10Gb LAN connection between them (192.168.11.6 & 192.168.11.14) if possible.
  3. SMB or NFS?

So far I haven't been able to accomplish this using UAD; do I need some Linux terminal commands to accomplish this?
I am planning to use the "luckybackup" Docker container to move large amounts of data between them, but having a rootshare mounted between them would be really helpful.

UPDATE:
Mapping the two rootshares on Windows works!
But trying to mount an SMB rootshare between the Unraid servers does NOT work.
Same SMB path:

\\SERVERNAME\Shares-Pools  or

\\IP\Shares-Pools

 

Unraid errors
image.thumb.png.3e0fbbfce6d513b0c4127804326d389f.png

Edited by casperse
Link to comment
3 hours ago, casperse said:

But trying to mount an SMB rootshare between the Unraid servers does NOT work.

I think I understand what you are trying to accomplish here.  It looks like a credentials issue.  You can't use the "root" credentials.  You should set up a user with privileges to read and write all of your shares on PLXEZONE2.  I have one set up as what I call an "Administrator".

 

You will know this works when you go to set up the share on PLEXZONE and the shares are listed when you click the "Load Shares" button using your "Administrator" credentials.  If the shares are not listed, your credentials are wrong.
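
One quick way to test the credentials outside of UD, assuming smbclient is available on the Unraid command line and "backupadmin" stands in for whatever administrator-style user you created, is to list the shares on the other server directly:

# List the SMB shares the other server offers to this user (enter the password when prompted).
smbclient -L //192.168.11.6 -U backupadmin

If the shares are listed there but not in UD's "Load Shares", the problem is something other than the credentials.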

Link to comment

Missing drives in the Unassigned Devices window under the Main tab after updating to Unraid version 6.12.8 and updating the Unassigned Devices plugin to the version dated 2024.03.19 (updated from 6.6.7 and Unassigned Devices dated 2019.03.31).

 

It looks like the Samba shares are still working on the drives that were previously shared, but MOST of the unassigned drives did not show on the MAIN tab of Unraid.  Only two initially showed up.  Each time I press the REFRESH DISKS AND CONFIGURATION icon, it adds ONE drive to the list...  I am now seeing 7 of the 10 drives not in the array.  MOST of them were precleared and ready to add to the array when needed.  3 were being used as unprotected temporary data drives for misc. uses.

 

Is there a concern with the drives not showing up as expected?

 

Also, previously under FS it listed precleared drives as being precleared.  Has that functionality been deliberately removed?  It was pretty convenient for hot spares.

 

Thanks for the plugin, it has served me well for many years.

Link to comment
36 minutes ago, electron286 said:

It looks like the Samba shares are still working on the drives that were previously shared, but MOST of the unassigned drives did not show on the MAIN tab of Unraid.  Only two initially showed up.  Each time I press the REFRESH DISKS AND CONFIGURATION icon, it adds ONE drive to the list...  I am now seeing 7 of the 10 drives not in the array.  MOST of them were precleared and ready to add to the array when needed.  3 were being used as unprotected temporary data drives for misc. uses.

Post diagnostics for further help with this.

 

37 minutes ago, electron286 said:

Also, previously under FS it listed precleared drives as being precleared.  Has that functionality been deliberately removed?  It was pretty convenient for hot spares.

That status is lost once the server is rebooted because it is in RAM, but the drives are still precleared.

Link to comment
