Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



8 hours ago, dlandon said:

What I was looking for were the device designations.  I'm pretty sure there are invalid characters in the device designations.

I don't know what you mean by device designations, but the export paths/filenames only contain letters, numbers, dashes and underscores, nothing else.

 

I don't see why these would not be acceptable names, and renaming them would be quite an effort, as they are mounted by a lot of other machines, too.

Link to comment
8 hours ago, dlandon said:

Irrespective of that, PM me the /flash/config/plugins/unassigned.devices/samba_mount.cfg file and I can walk you through how to remove those remote shares manually.  Don't post that file on the forum.

 

samba_mount.cfg? We are still talking about NFS shares, right? :)

Link to comment
8 hours ago, dlandon said:

Irrespective of that, PM me the /flash/config/plugins/unassigned.devices/samba_mount.cfg file and I can walk you through how to remove those remote shares manually.  Don't post that file on the forum.

 

You mean /boot/...? Should I just remove the NFS blocks from the config file?

 

Would this require restarting the machine, or could it be done by restarting some component of UA?

Edited by murkus
Link to comment
26 minutes ago, murkus said:

I don't know what you mean by device designations, but the export paths/filenames only contain letters, numbers, dashes and underscores, nothing else.

This is what I was talking about:

[Attachment: Screenshot 2023-08-25 053707.png]

 

27 minutes ago, murkus said:

I don't see why these would not be acceptable names, and renaming them would be quite an effort, as they are mounted by a lot of other machines, too.

The source is only for UD to identify the remote share and has to be free of characters that cause issues in PHP.  The actual share name characters are not an issue.
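The character restriction can be sketched as a quick shell check. Note the allowed set below (letters, digits, dot, colon, slash, underscore, dash) is an assumed conservative safe set for illustration, not UD's actual validation rule:

```shell
# Check a remote-share source string for characters outside a conservative
# safe set. The allowed set here is an assumption, not UD's real rule.
src="peep.poop.net:/mnt/foop/feep"
if printf '%s' "$src" | grep -q '[^A-Za-z0-9.:/_-]'; then
  echo "source contains characters that may cause issues"
else
  echo "source looks safe"
fi
# -> source looks safe
```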

Link to comment
26 minutes ago, murkus said:

You mean /boot/...? Should I just remove the NFS blocks from the config file?

When you are in Unraid on a command line it is /boot/.  The SMB share is /flash/.

 

26 minutes ago, murkus said:

Would this require restarting the machine, or could it be done by restarting some component of UA?

Depends.  If you would please PM me the file I asked for:

  • I'd be able to give you better guidance on how to fix the issue.
  • I can investigate why this came up in the first place and try to prevent it from happening in the future.
Link to comment
1 minute ago, dlandon said:

When you are in Unraid on a command line it is /boot/.  The SMB share is /flash/.

 

Depends.  If you would please PM me the file I asked for:

  • I'd be able to give you better guidance on how to fix the issue.
  • I can investigate why this came up in the first place and try to prevent it from happening in the future.

sure

Link to comment
1 minute ago, murkus said:

So if a share designator is [peep.poop.net:/mnt/foop/feep], can I just edit it manually in the config file? Which characters should be removed from this designator? How can I restart UA without restarting the machine or array?

Please don't hand-edit the file.  That is one reason this invalid situation happens.

 

One last time, please PM me the config file.

Link to comment
47 minutes ago, murkus said:

I have sent it, the file on the system has never been edited manually.

 

Right, I didn't see any editing in the file.  The issue was a capitalization bug that has been fixed in UD.  The source server name needs to be in all capitals for an NFS share.

Link to comment
1 minute ago, murkus said:

Thanks for fixing the bug, it works again here!

An integrity check has been added to UD to be sure the configuration makes sense.  The invalid configuration notice shows when UD senses that the configuration has a problem and may not mount.  The idea is to indicate an issue so it can be fixed by re-adding the remote share.  This prevents the situation where the 'Mount' button is clicked but the device doesn't mount.

 

The reason this came up is a bug in UD not capitalizing the server name in the Source for an NFS share.  So the integrity check is also catching bugs in UD.
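As a rough illustration of the fix (UD's actual code is PHP; this shell split is only a sketch), the server portion of the NFS source is capitalized while the export path is left alone. The sample source is the one quoted earlier in the thread:

```shell
# Split the NFS source at the first colon, uppercase the server part, and
# leave the export path untouched. Illustrative only - not UD's actual code.
src="peep.poop.net:/mnt/foop/feep"   # sample source from earlier in the thread
host="${src%%:*}"                    # server portion, before the first colon
path="${src#*:}"                     # export path, after the first colon
host_uc=$(printf '%s' "$host" | tr '[:lower:]' '[:upper:]')
printf '%s:%s\n' "$host_uc" "$path"
# -> PEEP.POOP.NET:/mnt/foop/feep
```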

Link to comment

Hey,
I'm having an issue with remote shares. They were working fine up until today, when two different Unraid servers started thinking that the share is offline. However, I can ping the server from both machines, and I can connect to the share from my PC on the same network.
The only thing that has changed is that I updated to the latest version of unassigned devices yesterday. So I'm pretty sure that has something to do with it.
Any help would be greatly appreciated

[Attachment: projectneutron-diagnostics-20230826-1406.zip]

Link to comment
7 minutes ago, Machine_Galaxy said:

The only thing that has changed is that I updated to the latest version of unassigned devices yesterday. So I'm pretty sure that has something to do with it.

There were some changes made in the latest version that affect this.  I will need the following information in order to troubleshoot this.  PM me the following:

  • Files:
    • /etc/hosts
    • /var/state/unassigned.devices/ping_status.json
  • The output from the following commands:
    • ping <remote share server>
    • ping <remote share server.local>
    • ping <remote share server.Local_TLD>

I think it's related to your Local_TLD setting.
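The three ping checks above can be wrapped in a loop like the following. Here "tower" and "lan" are placeholders for the remote share server name and the Local_TLD setting; substitute your own values:

```shell
# Try the server name bare, with .local, and with the Local_TLD suffix to see
# which forms resolve and respond. "tower" and "lan" are placeholders.
server="tower"
tld="lan"
for name in "$server" "$server.local" "$server.$tld"; do
  if ping -c1 -W1 "$name" >/dev/null 2>&1; then
    echo "$name: reachable"
  else
    echo "$name: not reachable"
  fi
done
```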

Link to comment

I have multiple partitions on an HDD I'm trying to mount. Only one of the partitions shows up as mountable. The partition I'm looking to mount is not listed, only part1/part2. The bulk of the data is on one of the partitions that I cannot mount. Any idea?

 

[Attachment: image.thumb.png.6f1f8e94bfe3d73e97651359b7f4454f.png]

Link to comment
28 minutes ago, unyin said:

I have multiple partitions on an HDD I'm trying to mount. Only one of the partitions shows up as mountable. The partition I'm looking to mount is not listed, only part1/part2. The bulk of the data is on one of the partitions that I cannot mount. Any idea?

 

[Attachment: image.thumb.png.6f1f8e94bfe3d73e97651359b7f4454f.png]

There are three partitions.  The third one is an NTFS partition.  The other two are probably Windows-specific partitions with no file system, or with a file system that Linux cannot handle.
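A quick way to confirm this from the Unraid console is to list the file system type of each partition (generic Linux tooling, nothing UD-specific):

```shell
# List disks and partitions with their file system types; append a device
# (e.g. /dev/sdX for the disk in question) to limit the output to one disk.
# Partitions with an empty FSTYPE carry no file system Linux recognizes,
# which is why no mount button is offered for them.
lsblk -o NAME,SIZE,FSTYPE
```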

Link to comment
1 hour ago, dlandon said:

There are three partitions.  The third one is a ntfs partition.  The other two are probably Windows specific partitions with no file system or a file system that Linux cannot handle.

Figured it out. For whatever reason, probably because it's a dynamic disk, my Plex drive shows up twice logically as F: on Windows. There's nothing I can do about that, so I just put it back in my Windows desktop and I'm using a remote share to copy things over with Krusader.

Link to comment
1 hour ago, clowncracker said:

I recently updated to 6.11.5 from 6.9.2, but now my unassigned device doesn't show any space (Used/Free).  This has led Unraid to believe there is no space available and has interrupted my backups.  What can I do to get the space data back?

The problem is more than just the space.  I see some strange stuff.  There is no 'Source' or 'Mountpoint', but the disk orb is green, meaning that UD thinks the remote server is online.

 

Do this:

  • Click on the double arrows on the upper right of the UD page.
  • If that doesn't fix things, pm me the '/flash/config/plugins/unassigned.devices/samba_mount.cfg' file.
Link to comment
37 minutes ago, dlandon said:

The problem is more than just the space.  I see some strange stuff.  There is no 'Source' or 'Mountpoint', but the disk orb is green, meaning that UD thinks the remote server is online.

 

Do this:

  • Click on the double arrows on the upper right of the UD page.
  • If that doesn't fix things, pm me the '/flash/config/plugins/unassigned.devices/samba_mount.cfg' file.

Refreshing Disks and Configuration didn't work; I just PM'd you the file.  It's weird: if I use my Windows PC to access the SMB share (totally unrelated to Unraid), it works.  So I know it's online.

Link to comment
34 minutes ago, clowncracker said:

Refreshing Disks and Configuration didn't work; I just PM'd you the file.  It's weird: if I use my Windows PC to access the SMB share (totally unrelated to Unraid), it works.  So I know it's online.

I believe I have resolved the issue.  The share somehow got unmounted even though the mount button is disabled.  I've manually remounted the share and have set AUTOMOUNT = true on the settings.
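For reference, the samba_mount.cfg entries are keyed by the share source, as seen earlier in the thread. A fragment with automount enabled might look roughly like the following; both the section key and the field name here are guesses for illustration, not UD's verified file format:

```
# Hypothetical samba_mount.cfg fragment - field names are not verified.
# "//TOWER/backup" is a placeholder source.
["//TOWER/backup"]
automount = "yes"
```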

Link to comment
4 minutes ago, clowncracker said:

I believe I have resolved the issue.  The share somehow got unmounted even though the mount button is disabled.  I've manually remounted the share and have set AUTOMOUNT = true on the settings.

There is still a problem. The remote share does not show on the UD page.  I'll take a look at the configuration file.

Link to comment
1 hour ago, clowncracker said:

I believe I have resolved the issue.  The share somehow got unmounted even though the mount button is disabled.  I've manually remounted the share and have set AUTOMOUNT = true on the settings.

Does the remote share show in the UD page?  Your configuration is fine and I don't see an issue.

Link to comment

Booted up my Unraid server today after very carefully moving it to my new place, and my two NVMe drives are not showing up. No other issues with any of the array drives or other unassigned devices that are installed. I see the following in the syslog:

 

root@Morgoth:~# grep -i nvme /var/log/syslog
Aug 29 19:08:08 Morgoth kernel: nvme nvme0: pci function 0000:09:00.0
Aug 29 19:08:08 Morgoth kernel: nvme nvme1: pci function 0000:0a:00.0
Aug 29 19:08:08 Morgoth kernel: nvme nvme0: Device not ready; aborting initialisation, CSTS=0x0
Aug 29 19:08:08 Morgoth kernel: nvme nvme1: Device not ready; aborting initialisation, CSTS=0x0
Aug 29 19:08:08 Morgoth kernel: nvme nvme0: Removing after probe failure status: -19
Aug 29 19:08:08 Morgoth kernel: nvme nvme1: Removing after probe failure status: -19


Already tried adding nvme_core.default_ps_max_latency_us=0 pcie_aspm=off at the end of the append line of the boot options for the default boot option, but it did not help. lshw shows them both as UNCLAIMED as well, but lspci sees them, or at least their controllers:

 

09:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983
0a:00.0 Non-Volatile memory controller: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983


I also tried upgrading from 6.11.5 to 6.12.3 and even reseating the drives, but nothing has helped. Are they both just magically dead?
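If the drives are healthy, one more thing worth trying before declaring them dead is forcing the kernel to re-probe the controllers via sysfs, using the PCI addresses from the log above. This is a generic Linux technique, not an Unraid- or UD-specific fix; the SYSFS variable and existence checks are only there so the sketch degrades gracefully when the devices aren't present:

```shell
#!/bin/sh
# Remove the failed NVMe controllers from the PCI bus, then ask the kernel to
# rescan. Addresses 0000:09:00.0 and 0000:0a:00.0 are taken from the lspci
# output above. Must be run as root on the affected machine.
SYSFS="${SYSFS:-/sys/bus/pci}"
for dev in 0000:09:00.0 0000:0a:00.0; do
  if [ -e "$SYSFS/devices/$dev/remove" ]; then
    echo 1 > "$SYSFS/devices/$dev/remove"
  fi
done
sleep 1
if [ -w "$SYSFS/rescan" ]; then
  echo 1 > "$SYSFS/rescan"
else
  echo "cannot write $SYSFS/rescan (are you root?)"
fi
```

If the drives reappear after the rescan, the failure was a probe/power-state issue rather than dead hardware; if not, a full power cycle (not just a warm reboot) is the next thing to rule out.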

Link to comment
