Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array


Recommended Posts

1 minute ago, dlandon said:

This is confusing.  How does Windows mount the Synology?  Navigating the share how?

A Windows host can mount a Synology share without issue: I can either map a network drive in Windows Explorer, or I can browse the network and navigate to the share there.

With that said, doesn't that mean the Synology is not the issue? The ports must be open, otherwise Windows wouldn't be able to mount the same folder I'm trying to mount in Unraid, correct?

Link to comment
1 minute ago, ailliano said:

With that said, doesn't that mean the Synology is not the issue? The ports must be open, otherwise Windows wouldn't be able to mount the same folder I'm trying to mount in Unraid, correct?

I don't think so.  I think you are confusing Windows browsing of shares with Unraid doing a CIFS mount.  They are different.

 

I have no issues mounting remote shares, and I haven't heard of anyone else having an issue like yours.  I'll keep looking, but I don't see anything.

 

I think the CIFS mount error is coming from the Synology and not from Unraid.  The research I did was of no help with this error code, so I don't have any other suggestions.

Link to comment
46 minutes ago, dlandon said:

I don't think so.  I think you are confusing Windows browsing of shares with Unraid doing a CIFS mount.  They are different.

 

I have no issues mounting remote shares, and I haven't heard of anyone else having an issue like yours.  I'll keep looking, but I don't see anything.

 

I think the CIFS mount error is coming from the Synology and not from Unraid.  The research I did was of no help with this error code, so I don't have any other suggestions.

I was able to successfully mount the Synology share with CIFS on a Linux VM; it's using SMB3. Unraid is the only one that can't mount this Synology, and I'm not sure where else to look.
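
For reference, a manual CIFS mount on a stock Linux box looks roughly like the sketch below (the host, share, user, and mount point are placeholders, not my exact values):

# placeholders throughout - adjust host, share, user, and mount point
sudo mkdir -p /mnt/synology_test
sudo mount -t cifs -o rw,vers=3.0,username=myuser,uid=1000,gid=1000 '//synology-host/share' /mnt/synology_test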
 

Link to comment
4 minutes ago, ailliano said:

it's using SMB3

Which version of SMB was successful in mounting to the Synology?

 

UD attempts to mount the remote share with the following sequence of SMB versions:

  • None (no vers= option), to see if the remote server offers a version it supports.
  • then 3.1.1
  • then 3.0
  • then 2.0
  • finally 1.0

The next version is tried only if the previous one fails, roughly as sketched below.
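
In other words, the logic is roughly equivalent to a shell loop like this (a simplified sketch, not the actual UD code; the share, mount point, and credentials file are placeholders):

# Simplified sketch only - not the actual UD implementation
SRC='//synology-host/share'
DST='/mnt/remotes/share'
for VERS in '' '3.1.1' '3.0' '2.0' '1.0'; do
    OPTS='rw,credentials=/tmp/unassigned.devices/credentials'
    [ -n "$VERS" ] && OPTS="$OPTS,vers=$VERS"    # first pass omits vers= so the server can negotiate
    /sbin/mount -t cifs -o "$OPTS" "$SRC" "$DST" && break    # stop at the first version that mounts
done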

 

It may be that the Synology needs a specific version not included here, and I need to add another version to try.

 

Maybe NFS would be a better solution to mount the Synology remote share in your case.
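
If you want to try that route by hand first, an NFS mount is a one-liner along these lines (the IP and export path are placeholders; the real path depends on how the share is exported on the Synology):

# placeholders - check the actual NFS export path on the Synology
mkdir -p /mnt/remotes/DS918_Unraid
mount -t nfs -o rw 192.168.1.10:/volume1/Unraid /mnt/remotes/DS918_Unraid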

Link to comment
22 minutes ago, dlandon said:

Which version of SMB was successful in mounting to the Synology?

 

UD attempts to mount the remote share with the following sequence of SMB versions:

  • None (no vers= option), to see if the remote server offers a version it supports.
  • then 3.1.1
  • then 3.0
  • then 2.0
  • finally 1.0

The next version is tried only if the previous one fails.

 

It may be that the Synology needs a specific version not included here, and I need to add another version to try.

 

Maybe NFS would be a better solution to mount the Synology remote share in your case.

The Synology shows that the Linux VM is connected with SMB3, but I'm not sure about the specific version. Even then, I have the Synology set to allow SMB2 and above (everything except v1.0), which should be compatible between Unraid and the Synology. I can go back to NFS, but I was trying to take advantage of multi-channel support.
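
(As an aside: on kernels recent enough to support it, SMB multi-channel can be requested on a manual CIFS mount with the multichannel option, something like the line below; the host, share, and credentials path are placeholders, and I'm not sure whether UD exposes that option.)

# hypothetical example - requires a kernel and server with SMB3 multi-channel support
mount -t cifs -o rw,vers=3.1.1,multichannel,max_channels=2,credentials=/root/.smbcreds '//synology-host/share' /mnt/remotes/share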

Link to comment
2 minutes ago, ailliano said:

The Synology shows that the Linux VM is connected with SMB3, but I'm not sure about the specific version. Even then, I have the Synology set to allow SMB2 and above (everything except v1.0), which should be compatible between Unraid and the Synology. I can go back to NFS, but I was trying to take advantage of multi-channel support.

Can you post the mount command from the Linux VM so I can compare?

Link to comment
1 hour ago, dlandon said:

By your screenshot, you're on 6.11.5.  This post is about 6.12.0-rc5.

 

Move your post to the UD forum, post diagnostics again and we'll carry on there.

 

Yeah, the screenshot is from an Unraid server running 6.11.5; that is the one that contains the share that I'm mounting on the one running Unraid 6.12.0-rc5, which is the one that is throwing CIFS lines in the log.

 

Attaching diagnostics from both servers.

"unraid" runs RC5, "nas" runs 6.11.5 (stable)

 

A curious thing here is that "nas" (stable) also has a mounted SMB share from "unraid" but is not throwing those errors into the log, while "unraid" (RC5) has a mounted share and throws a lot of those lines into the log.

 

EDIT: Fixed the quote.

nas-diagnostics-20230503-2019.zip unraid-diagnostics-20230503-2019.zip

Edited by Koenig
Fixed the quote.
Link to comment
29 minutes ago, ailliano said:

 

This is the command that is used on Unraid to mount your remote share:

/sbin/mount -t cifs -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=3.1.1,credentials='/tmp/unassigned.devices/credentials_Unraid' '//DS918/Unraid' '/mnt/remotes/DS918_Unraid'

 

The credentials file contents are:

username=

password=

domain=

 

Try the command on Unraid and modify the options, etc., to see if one of them is incompatible with the Synology.
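
One way to narrow it down is to start from a minimal option set and add the UD options back one at a time, for example:

# Minimal test mount - the credentials file path is a placeholder
mkdir -p /mnt/remotes/DS918_Unraid
/sbin/mount -t cifs -o rw,vers=3.1.1,credentials=/tmp/test_creds '//DS918/Unraid' '/mnt/remotes/DS918_Unraid'
# If that works, remount adding noserverino, then nounix, then iocharset=utf8, and so on,
# until the mount starts failing again - the last option added is the likely culprit.
umount /mnt/remotes/DS918_Unraid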

 

I'm wondering if you didn't set a domain when you set up the share.

 

 

Link to comment

5 minutes ago, dlandon said:

 

This is the command that is used on Unraid to mount your remote share:

/sbin/mount -t cifs -o rw,noserverino,nounix,iocharset=utf8,file_mode=0777,dir_mode=0777,uid=99,gid=100,vers=3.1.1,credentials='/tmp/unassigned.devices/credentials_Unraid' '//DS918/Unraid' '/mnt/remotes/DS918_Unraid'

 

The credentials file contents are:

username=

password=

domain=

 

Try the command on Unraid and modify the options, etc., to see if one of them is incompatible with the Synology.

 

I'm wondering if you didn't set a domain when you set up the share.

 

 

Thanks for checking. What domain would I use? Is that the WORKGROUP?

Link to comment
48 minutes ago, Koenig said:

A curious thing here is that "nas" (stable) also has a mounted SMB share from "unraid" but is not throwing those errors into the log, while "unraid" (RC5) has a mounted share and throws a lot of those lines into the log.

 

I'm seeing this in the log:

Apr 29 07:15:56 NAS  rpc.mountd[6334]: v4.0 client detached: 0x602ff7716405b917 from "192.168.8.30:702"
Apr 29 07:18:05 NAS kernel: CIFS: VFS: \\UNRAID\ISOs BAD_NETWORK_NAME: \\UNRAID\ISOs
### [PREVIOUS LINE REPEATED 9 TIMES] ###
Apr 29 07:18:19 NAS  ool www[4016]: Successful logout user root from 192.168.8.120
Apr 29 07:18:19 NAS kernel: CIFS: VFS: \\UNRAID\ISOs BAD_NETWORK_NAME: \\UNRAID\ISOs
### [PREVIOUS LINE REPEATED 14 TIMES] ###
Apr 29 07:18:43 NAS webGUI: Successful login user root from 192.168.8.120
Apr 29 07:18:44 NAS kernel: CIFS: VFS: \\UNRAID\ISOs BAD_NETWORK_NAME: \\UNRAID\ISOs
### [PREVIOUS LINE REPEATED 6 TIMES] ###

 

I'm not sure what this means, but the workgroup on your servers is not the same.  Unraid is ToK, NAS is TOK.  Windows doesn't care about the case because it is always upper case in Windows.  Linux may deal with it differently though because of the case difference.
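
One quick way to check what each server is actually using is to dump the effective Samba configuration on both machines (testparm ships with Samba, which Unraid includes):

# Show the effective workgroup from the running Samba configuration
testparm -s 2>/dev/null | grep -i workgroup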

Link to comment
6 minutes ago, dlandon said:

 

I'm seeing this in the log:

Apr 29 07:15:56 NAS  rpc.mountd[6334]: v4.0 client detached: 0x602ff7716405b917 from "192.168.8.30:702"
Apr 29 07:18:05 NAS kernel: CIFS: VFS: \\UNRAID\ISOs BAD_NETWORK_NAME: \\UNRAID\ISOs
### [PREVIOUS LINE REPEATED 9 TIMES] ###
Apr 29 07:18:19 NAS  ool www[4016]: Successful logout user root from 192.168.8.120
Apr 29 07:18:19 NAS kernel: CIFS: VFS: \\UNRAID\ISOs BAD_NETWORK_NAME: \\UNRAID\ISOs
### [PREVIOUS LINE REPEATED 14 TIMES] ###
Apr 29 07:18:43 NAS webGUI: Successful login user root from 192.168.8.120
Apr 29 07:18:44 NAS kernel: CIFS: VFS: \\UNRAID\ISOs BAD_NETWORK_NAME: \\UNRAID\ISOs
### [PREVIOUS LINE REPEATED 6 TIMES] ###

 

I'm not sure what this means, but the workgroup on your servers is not the same.  Unraid is ToK, NAS is TOK.  Windows doesn't care about the case because it is always upper case in Windows.  Linux may deal with it differently though because of the case difference.

Nice catch, changed it to see if it matters.

Edited by Koenig
Link to comment

I updated plugins, including UD, earlier today and now I'm realizing I don't have the UD SMB shares anymore. Did anything change that could have caused this? I tried disabling the SMB share on the disk and re-enabling it, but still nothing there. Running unRAID 6.9.2.

 

edit: Huh, I was fiddling around and refreshed and now it has come back again... strange.

Edited by deusxanime
update
Link to comment
7 hours ago, dlandon said:

 

I'm seeing this in the log:

Apr 29 07:15:56 NAS  rpc.mountd[6334]: v4.0 client detached: 0x602ff7716405b917 from "192.168.8.30:702"
Apr 29 07:18:05 NAS kernel: CIFS: VFS: \\UNRAID\ISOs BAD_NETWORK_NAME: \\UNRAID\ISOs
### [PREVIOUS LINE REPEATED 9 TIMES] ###
Apr 29 07:18:19 NAS  ool www[4016]: Successful logout user root from 192.168.8.120
Apr 29 07:18:19 NAS kernel: CIFS: VFS: \\UNRAID\ISOs BAD_NETWORK_NAME: \\UNRAID\ISOs
### [PREVIOUS LINE REPEATED 14 TIMES] ###
Apr 29 07:18:43 NAS webGUI: Successful login user root from 192.168.8.120
Apr 29 07:18:44 NAS kernel: CIFS: VFS: \\UNRAID\ISOs BAD_NETWORK_NAME: \\UNRAID\ISOs
### [PREVIOUS LINE REPEATED 6 TIMES] ###

 

I'm not sure what this means, but the workgroup on your servers is not the same.  Unraid is ToK, NAS is TOK.  Windows doesn't care about the case because it is always upper case in Windows.  Linux may deal with it differently though because of the case difference.

Unfortunately, changing this so the setting says "ToK" on both servers didn't help; lines like the ones below continue to appear frequently:

May  4 02:28:55 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:1182285
May  4 02:54:55 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:1291108
May  4 02:57:00 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:1299810
May  4 02:58:02 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:1304178
May  4 03:17:17 Unraid kernel: CIFS: VFS: \\192.168.8.25\Backups Close unmatched open for MID:1384761

 

Link to comment
30 minutes ago, dlandon said:

Can you update the 6.11.5 server to 6.12?

It will eventually get updated, but probably not until there's a 6.12.1 stable.

 

If that is the only course of action, we'll have to wait. It's not like I will forget the issue anyway, as there's plenty in the log to remind me ;-)

 

Link to comment
On 3/1/2023 at 1:27 PM, csendre said:

Any idea what could be happening

 

Well, I had a similar issue as well. The system works fine and then, like you said, ends up like that. I also use /tmp for Plex, but what the heck does Plex have to do with Unassigned Devices that would cause such an issue? It's a minor thing, but when you need to use the web UI, it's not always practical to reboot.

Link to comment

Earlier today while testing I accidentally did something that meant I ended up with about 15 drives under historical devices that I do not want to remain there.   I could successfully remove them one at a time by clicking the 'x' against each drive, but I was wondering whether it would be practical to provide a mechanism for selecting multiple drives and then a Remove button that removes all selected drives at once, rather than the 'x' against each drive?  Certainly not critical, but I thought it was worth asking.

Link to comment
