Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



I have 5 drives that are direct-attached to my server.  All of them are formatted NTFS.  Only two of them will allow me to mount them with UD.  Where should I start looking to see why the other 3 drives won't let me mount them?  They're all identical model drives, and all of them show up fine in Windows.

Link to comment

I have a remote share that is auto-mounted by UD when the system first boots and the array is started.  However, if I stop the array, this remote share is unmounted by UD and I have to manually mount it again myself if I want to continue using it.  Is there any way I can tell UD that a share marked as auto-mounted should NOT be unmounted when the array is stopped (or at least that it should be auto-mounted again after the Stop completes)?

 

In principle, I guess the same question applies to local drives managed by UD.

 

Is this expected behaviour or a bug?  If expected, then something along the lines of the PASS THRU toggle that the user can set might be a good idea.

Link to comment

I have a remote share located on a Proxmox server, served via an LXC container running Samba on Debian. As soon as I mount the remote share on Unraid (6.9.0-rc1) with UD (2020.12.11a), I get constant reads on the remote share and heavy CPU load on the LXC container, caused by smbd. I am not sure when this behavior started, but it was quite recent.

Is there a way to find out what is actually happening? The Open Files plugin does not show any use of the mapped share. Please tell me if I can provide any further data for troubleshooting.

Cheers

Link to comment
9 hours ago, smidley said:

I have 5 drives that are direct-attached to my server.  All of them are formatted NTFS.  Only two of them will allow me to mount them with UD.  Where should I start looking to see why the other 3 drives won't let me mount them?  They're all identical model drives, and all of them show up fine in Windows.

The log is the best place to start.  If you can't figure it out, post your diagnostics.
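For example, a quick way to pull the relevant entries out of the syslog from the Unraid console (the log path assumes a standard install; adjust the search terms to taste):

```shell
# Location of the system log on a typical Unraid install (assumption)
LOG=/var/log/syslog

# Show recent messages from unassigned.devices and the NTFS driver,
# which usually say why a mount attempt failed
if [ -r "$LOG" ]; then
    grep -iE 'unassigned|ntfs' "$LOG" | tail -n 20
else
    echo "No readable syslog at $LOG"
fi
```

A common culprit with NTFS disks is a "dirty" flag left by Windows fast startup or an unclean eject; the log will usually call that out if it is the reason the mount was refused.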

Link to comment
2 hours ago, itimpi said:

I have a remote share that is auto-mounted by UD when the system first boots and the array is started.  However, if I stop the array, this remote share is unmounted by UD and I have to manually mount it again myself if I want to continue using it.  Is there any way I can tell UD that a share marked as auto-mounted should NOT be unmounted when the array is stopped (or at least that it should be auto-mounted again after the Stop completes)?

 

In principle, I guess the same question applies to local drives managed by UD.

 

Is this expected behaviour or a bug?  If expected, then something along the lines of the PASS THRU toggle that the user can set might be a good idea.

UD will unmount all devices and remote shares whenever the array is stopped.  UD does not know whether it is a stop-array event or a shutdown/reboot, and everything has to be unmounted for a shutdown/reboot.  Any devices marked as auto mount will be mounted again when the array is restarted.  Be sure you have the latest version of UD, because there was a bug that prevented remote shares from mounting in some circumstances.

 

If it is still not working for you, post diagnostics.

Link to comment
14 minutes ago, paschtin said:

I have a remote share located on a Proxmox server, served via an LXC container running Samba on Debian. As soon as I mount the remote share on Unraid (6.9.0-rc1) with UD (2020.12.11a), I get constant reads on the remote share and heavy CPU load on the LXC container, caused by smbd. I am not sure when this behavior started, but it was quite recent.

Is there a way to find out what is actually happening? The Open Files plugin does not show any use of the mapped share. Please tell me if I can provide any further data for troubleshooting.

Cheers

What on the Unraid server has access to this remote share?  Could an app be constantly reading the share?

 

Post diagnostics and I'll take a look.
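One way to check from the console what is holding a share open (the mount point name here is just an example; `fuser` and `lsof` are commonly present on Unraid, but availability is an assumption):

```shell
# Example UD remote share mount point (hypothetical name; use your own)
MNT=/mnt/remotes/GD07

# If the share is mounted, list the processes with files open on it
if mountpoint -q "$MNT" 2>/dev/null; then
    fuser -vm "$MNT" 2>&1 || true                 # processes using the mount
    lsof +f -- "$MNT" 2>/dev/null | head -n 20    # open files, if lsof is installed
else
    echo "$MNT is not currently mounted"
fi
```

If neither tool shows anything but the reads continue, the traffic may be coming from a background scanner (indexing, folder caching) rather than an app with files held open.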

Link to comment
On 12/12/2020 at 6:38 AM, ICDeadPpl said:

Just a minor request for convenience:
When adding a remote share, it would be nice to have the text box selected/active in the pop-up, so one wouldn't need to click in every text box to fill in the IP, username and password.

I have to take a closer look at this.  Unraid uses SweetAlert for dialogs and the user interface, and I am by no means an expert at it.

Link to comment
1 minute ago, dlandon said:

UD will unmount all devices and remote shares whenever the array is stopped.  UD does not know whether it is a stop-array event or a shutdown/reboot, and everything has to be unmounted for a shutdown/reboot.  Any devices marked as auto mount will be mounted again when the array is restarted.  Be sure you have the latest version of UD, because there was a bug that prevented remote shares from mounting in some circumstances.

 

If it is still not working for you, post diagnostics.

I guess the best answer would be for Limetech to add a new event to say that a shutdown/reboot sequence has been initiated, so you can tell the two scenarios apart.  If such an event existed, it would also allow me to remove some code that I currently have in my 'stop' file and have it handled by User Scripts instead (assuming it was enhanced to support such an event).

 

Such an event would also help me simplify some coding in the new version of the Parity Check Tuning plugin that I am currently working on/testing, so I may request this be added anyway.

Link to comment
1 minute ago, itimpi said:

I guess the best answer would be for Limetech to add a new event to say that a shutdown/reboot sequence has been initiated, so you can tell the two scenarios apart.  If such an event existed, it would also allow me to remove some code that I currently have in my 'stop' file and have it handled by User Scripts instead (assuming it was enhanced to support such an event).

 

Such an event would also help me simplify some coding in the new version of the Parity Check Tuning plugin that I am currently working on/testing, so I may request this be added anyway.

You should make a feature request to LT.

Link to comment

@dlandon I have added a new post under pre-release.  Not sure if it is a UD issue or an Unraid issue, but I found that plugging in a flash drive caused a SAS disk to spin up.

 

 

I also removed the Libvirt hotplug plugin to rule it out; it shouldn't have been a factor, and the issue still happens with it uninstalled.

Edited by SimonF
Link to comment

New release of UD:

- Remote shares are now mounted at /mnt/remotes/ on all versions of Unraid.  If you use the /mnt/disks/ mount point for remote mounted shares in your VMs or Docker containers, you need to read the recommended post at the top of this forum.

- Changed the device name nomenclature to meet Unraid standards, i.e. 'dev1' is now shown as 'Dev 1'.  It now matches the Dashboard.

- Fixed the disk spin up/down spinner showing on the wrong disk icon.

- Added 'Cancel' buttons to the add remote share and format disk dialogs.

- Updated disk spinning status when a change is made, i.e. a disk spin up/down on 'Array Operation' will refresh the UD page.

- Remote server online status is now updated more often, on a 15-second timer that pings the remote servers.  This is only done while the UD web page is displayed.

- Added a spin-down disk option to the rc.unassigned script so you can spin down a disk and have Unraid track the correct spinning status.  See the UD help for how to use this feature.
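For the /mnt/remotes/ change above, the practical effect on a container is that the host side of the volume mapping moves. A sketch of the before/after mapping (the share name and container path are made up for illustration; the "rw,slave" access mode is what Unraid's "Read/Write - Slave" setting produces):

```shell
# Hypothetical remote share name used for illustration
SHARE=GD07

# Before this release, remote shares lived under /mnt/disks/;
# they are now mounted under /mnt/remotes/
OLD="/mnt/disks/$SHARE"
NEW="/mnt/remotes/$SHARE"

# The slave propagation flag lets the container see later
# mounts/unmounts of the share on the host
echo "old mapping: -v ${OLD}:/data:rw,slave"
echo "new mapping: -v ${NEW}:/data:rw,slave"
```

In the Unraid Docker template this is just editing the host path of the volume mapping; containers left pointing at /mnt/disks/ will see an empty directory instead of the share.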

Link to comment
47 minutes ago, dlandon said:

What on the Unraid server has access to this remote share?  Could an app be constantly reading the share?

 

Post diagnostics and I'll take a look.

I have mounted the UD SMB share as "Read/Write - Slave" in a Duplicacy Docker container. Stopping the container does not change the behavior.

As long as the array is started, I am not able to unmount the remote share, even with the Duplicacy container stopped.

 

Please find attached the diagnostics.

 

Moreover, here is the output of testparm from the Proxmox server - maybe this helps.

rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Registered MSG_REQ_POOL_USAGE
Registered MSG_REQ_DMALLOC_MARK and LOG_CHANGED
Load smb config files from /etc/samba/smb.conf
rlimit_max: increasing rlimit_max (1024) to minimum Windows limit (16384)
Processing section "[GD07]"
Loaded services file OK.
Server role: ROLE_STANDALONE

Press enter to see a dump of your service definitions

# Global parameters
[global]
        log file = /var/log/samba/log.%m
        logging = file
        map to guest = Bad User
        max log size = 1000
        netbios name = BACKUPS
        obey pam restrictions = Yes
        pam password change = Yes
        panic action = /usr/share/samba/panic-action %d
        passwd chat = *Enter\snew\s*\spassword:* %n\n *Retype\snew\s*\spassword:* %n\n *password\supdated\ssuccessfully* .
        passwd program = /usr/bin/passwd %u
        server role = standalone server
        unix password sync = Yes
        usershare allow guests = Yes
        idmap config * : backend = tdb


[GD07]
        path = /srv/backups/GD07
        read only = No
        valid users = ***

 

gd07-diagnostics-20201213-1405.zip

Link to comment
14 minutes ago, paschtin said:

I have mounted the UD SMB share as "Read/Write - Slave" in a Duplicacy Docker container. Stopping the container does not change the behavior.

As long as the array is started, I am not able to unmount the remote share, even with the Duplicacy container stopped.

 

Please find attached the diagnostics.

 

Moreover, here is the output of testparm from the Proxmox server - maybe this helps.



I see no attempt in the logs to unmount the share.

 

You need to change the Cache Dirs settings:

Dec 13 12:22:25 GD07 cache_dirs: Setting Included dirs: 
Dec 13 12:22:25 GD07 cache_dirs: Setting Excluded dirs: Time\,Machine,appdata,backup,backup_misc,pictures_archive,restore,syslog,system
Dec 13 12:22:25 GD07 cache_dirs: min_disk_idle_before_restarting_scan_sec=60
Dec 13 12:22:25 GD07 cache_dirs: scan_timeout_sec_idle=150
Dec 13 12:22:25 GD07 cache_dirs: scan_timeout_sec_busy=30
Dec 13 12:22:25 GD07 cache_dirs: scan_timeout_sec_stable=30
Dec 13 12:22:25 GD07 cache_dirs: frequency_of_full_depth_scan_sec=604800
Dec 13 12:22:25 GD07 cache_dirs: ERROR: excluded directory 'Time\ Machine' does not exist.

Instead of excluding folders, you need to specify which folders to include.  Any disk mounted by UD will be included in the cached folders automatically.

Link to comment
5 minutes ago, akme1245 said:

Just noticed a problem, and it's possible it's related to the new plugin. I haven't totally narrowed it down yet, but Plex can't see my UD NAS drive anymore. I have it set as a RW Slave. I can see the contents of the NAS drive in Unraid and Krusader, but Plex can't see it.

Read the recommended post at the top of this forum for the fix.

Link to comment
7 hours ago, dlandon said:

You should make a feature request to LT.

Checked with Tom and it seems like a new event is unlikely (at least any time in the near future).

 

Just found out, though, that all mount points are automatically unmounted when a shutdown is in progress, so there is no need for UD to do it explicitly just because the array is being stopped (unless there is some specific reason that I have missed)!

 

 

Link to comment
