
Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



2 hours ago, AgentXXL said:

In the meantime, is it as simple as replacing the plugin package on my USB key with the previous version and rebooting? NVM - I just restored the 2024.05.01 version of the plugin and it's working. I'll have to ignore updates until I get the motherboard issue fixed. Not having a usable syslog is very frustrating, to say the least.

OK, that's odd. I saw the drives all show up again after reverting to the older version, but now it's back to the same timeout issue. And when I checked the USB key, the plugin had been re-updated to 2024.05.06. Is there some new process in unRAID 6.13 that updates plugins automatically on start of the array?
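
For anyone wanting to pin UD to an older release the same way, the manual rollback described above roughly amounts to swapping the package on the flash drive before rebooting. A minimal sketch, assuming the usual /boot/config/plugins layout; the file names and backup paths here are placeholders, not the exact files on my system:

# Sketch only: restore an older UD package on the flash drive (file names/paths are examples).
# Unraid installs plugins from /boot/config/plugins at boot, so whatever is on the flash drive wins after a reboot.
cd /boot/config/plugins/unassigned.devices
mv unassigned.devices-2024.05.06.txz /boot/unassigned.devices-2024.05.06.txz.bak   # park the newer package
cp /boot/saved/unassigned.devices-2024.05.01.txz .                                 # copy back the older package you kept
# the matching unassigned.devices.plg under /boot/config/plugins/ may also need to be the older copy
# then reboot so the older package is picked up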

5 hours ago, biologicalrobot said:

 

The Pluto server is the other unraid server that I am trying to connect to via NFS/SMB. The share name is "Mass Dick" as it is my 12TB+ array.

Give me the result of these two commands:

/usr/bin/timeout 1 bash -c '(echo >/dev/tcp/PLUTO/2049) &>/dev/null'; echo $?

/usr/bin/timeout 1 bash -c '(echo >/dev/tcp/PLUTO.LOCAL/2049) &>/dev/null'; echo $?

 

1 minute ago, dlandon said:

Give me the result of these two commands:

/usr/bin/timeout 1 bash -c '(echo >/dev/tcp/PLUTO/2049) &>/dev/null'; echo $?

/usr/bin/timeout 1 bash -c '(echo >/dev/tcp/PLUTO.LOCAL/2049) &>/dev/null'; echo $?

 

The result for /usr/bin/timeout 1 bash -c '(echo >/dev/tcp/PLUTO/2049) &>/dev/null'; echo $?    is
1

 

The result for /usr/bin/timeout 1 bash -c '(echo >/dev/tcp/PLUTO.LOCAL/2049) &>/dev/null'; echo $? is
124

 

Let me know if you need anything else

5 minutes ago, biologicalrobot said:

The result for /usr/bin/timeout 1 bash -c '(echo >/dev/tcp/PLUTO/2049) &>/dev/null'; echo $?    is
1

 

The result for /usr/bin/timeout 1 bash -c '(echo >/dev/tcp/PLUTO.LOCAL/2049) &>/dev/null'; echo $? is
124

 

Let me know if you need anything else

Post Unraid diagnostics for PLUTO.

Posted (edited)

Hi there,

I believe I'm having the exact same issue as @biologicalrobot, where the recent change in UD seems to be causing my remote shares on a second Unraid server to no longer show as online or be mountable via either SMB or NFS.

 

Have tried:

- rebooting both unraid servers

- disabling/re-enabling SMB and NFS

- deleting and re-adding multiple shares with various permissions

- re-adding via host name and by IP

- second server successfully pings (although it looks like the change makes that no longer relevant)

- shares are still accessible on other systems (several windows machines)

 

I ran the commands @dlandon posted and got the same results of 1 and 124, respectively. Sharing my diagnostics as well in case it proves helpful.

backphox-diagnostics-20240507-2016.zip

Edited by darkphox
Posted (edited)
On 5/7/2024 at 9:55 PM, dlandon said:
/usr/bin/timeout 1 bash -c '(echo >/dev/tcp/PLUTO/2049) &>/dev/null'; echo $?

/usr/bin/timeout 1 bash -c '(echo >/dev/tcp/PLUTO.LOCAL/2049) &>/dev/null'; echo $?

 

 

Hi dlandon

 

I think what you wanted to achieve is this:

(echo quit && sleep 1) | telnet PLUTO 2049 2>&1 | grep -q "Connected to"; echo $?

(echo quit && sleep 1) | telnet PLUTO.LOCAL 2049 2>&1 | grep -q "Connected to"; echo $?

 

Greetings

 

*(the command was actually correct; I misread it)

Edited by Amane
5 hours ago, biologicalrobot said:

Here it is!

 

13 minutes ago, darkphox said:

 

I ran the commands @dlandon posted and got the same results of 1 and 124, respectively. Sharing my diagnostics as well in case it proves helpful.

The updated method of checking whether servers are online is to check that the relevant port is open (SMB 445, NFS 2049). The commands you ran checked whether the NFS port was open; the response should be 0.

 

When you add a remote share and do a search for the server, what comes up in the list is what the search found when it scanned IPs for open ports, with the name then looked up.  The Unraid server should show as TOWER.LOCAL.  Your server is referenced on the LAN by its name (i.e. TOWER) with the Local TLD appended.  The Local TLD is set in the Settings->Management Access tab.  Normally it is blank or "local".  Both of those settings result in "LOCAL" being appended to the server name.  Let me know if it shows something else.  Also do a manual port scan with the commands I posted and use the server name exactly as it shows in the server dropdown list.  That will let me know what I might need to do.

 

Also be sure NetBIOS is off.  I haven't tested with it on and it may change the naming of SMB servers.

 

For a quick fix, use the remote server's IP address.
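
If anyone wants to run the same check by hand against both ports, here's a small loop that just wraps the /dev/tcp probe shown earlier; the loop and the output text are my own additions, only the probe itself is the command from above:

#!/bin/bash
# check both ports UD now probes (SMB 445, NFS 2049) for one server name
# usage: bash checkports.sh TOWER.LOCAL   (the script name is just an example)
server="$1"
for port in 445 2049; do
    if /usr/bin/timeout 1 bash -c "(echo >/dev/tcp/$server/$port) &>/dev/null"; then
        echo "$server:$port is open (0)"
    else
        echo "$server:$port failed (exit $? - 1 usually means refused or name not resolved, 124 means timed out)"
    fi
done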

10 hours ago, AgentXXL said:

Alas I was unsuccessful in attempting to flash the BMC firmware - the chip won't even read, which is what the Supermicro UEFI programming utility said. I'll be ordering a replacement chip later today. Once I get the replacement chip I'll program it and then do the surgery to replace the dead one on my motherboard.

 

In the meantime, is it as simple as replacing the plugin package on my USB key with the previous version and rebooting? NVM - I just restored the 2024.05.01 version of the plugin and it's working. I'll have to ignore updates until I get the motherboard issue fixed. Not having a usable syslog is very frustrating, to say the least.

I'm putting out a release this evening.  For the time being I'll increase the timeout to 30 seconds.  Let's see if the UD page will refresh for you.

9 minutes ago, dlandon said:

 

The updated method of checking whether servers are online is to check that the relevant port is open (SMB 445, NFS 2049). The commands you ran checked whether the NFS port was open; the response should be 0.

Interesting. These ports are open on my firewall (for my LAN at least), and both SMB and NFS searches are finding the shares when going through the process to add them. It's once they're added that it can't seem to see the share as 'online.'

10 minutes ago, dlandon said:

 

The Local TLD is set in the Settings->Management Access tab.  Normally it is blank or "local".  Both of those settings result in "LOCAL" being appended to the server name.  Let me know if it shows something else.

 

For a quick fix, use the remote server's IP address.

My local TLD was customized. A few months back I changed it so I could connect with a domain address instead of the server's IP to avoid the "potential security risk ahead" warning.

 

I deleted this field, so it is now blank. Adding a share via the automated setup still shows offline, but connecting a share via manually entering its IP now shows the share online and mountable. I'm out of time tonight, but I'll do the port scans tomorrow.
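
In case it helps with the port scans tomorrow, here's a quick way to see which spellings of the server name actually resolve on the Unraid box before probing the ports. The names below are placeholders for your own server name and custom TLD, and getent only reflects whatever name services this box is configured with, so results may differ from what UD itself sees:

# Sketch: check which spellings of the remote server's name resolve (names are placeholders).
for name in REMOTE REMOTE.LOCAL remote.mycustomtld; do
    if addr=$(getent hosts "$name" | awk '{print $1; exit}') && [ -n "$addr" ]; then
        echo "$name -> $addr"
    else
        echo "$name does not resolve"
    fi
done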

Posted (edited)
9 minutes ago, dlandon said:

I'm putting out a release this evening.  For the time being I'll increase the timeout to 30 seconds.  Let's see if the UD page will refresh for you.

Thanks! Hopefully it will work. Still puzzled as to how the plugin updates itself automatically during a reboot. I may have missed something in the release notes for 6.13 beta 2 (or earlier). I do have Docker containers updating automatically via Appdata Backup but that doesn't do plugins. I tried another reboot and the same thing happened - the 2024.05.01 version I put in /boot/config/plugins/unassigned.devices/ gets replaced by 2024.05.06 (which I had deleted).

 

I mentioned this in another post but I still haven't been able to fix the BMC firmware issue - the flash chip can't even be read by either the Supermicro UEFI flash utility or by my external CH341a programmer. So I've ordered a replacement chip which should be here early next week.

 

In the meantime I can't even grab diagnostics... the constant spamming of the system log by these phantom messages from the iKVM (part of the BMC/IPMI functionality) just holds things up. It's even taking my unRAID server almost double the time to boot (over 10 minutes). These messages start to appear the moment the system powers on and initializes the BMC. It's making troubleshooting any issue a nightmare.

 

Regardless, thanks for increasing the timeout for this. Perhaps make it a value we can set in Settings --> Unassigned Devices? As always, I appreciate the effort you put in. I'll update you once I apply the new release to let you know if it's working.

Edited by AgentXXL

Hello,

 

I am having a very similar issue. My unassigned devices were showing fine this morning, but about 20 minutes after the UD plugin updated, my shares show that the "server is offline". My shares from the NAS are accessible on the network, and the issue seems to be directly related to Unraid or the UD plugin. How can I go about rolling back the plugin version for further troubleshooting?

 

Thank you.

2 minutes ago, darkphox said:

Interesting. These ports are open on my firewall (for my LAN at least), and both SMB and NFS searches are finding the shares when going through the process to add them. It's once they're added that it can't seem to see the share as 'online.'

If you use the IP address, the port-check command will show the port as open.  That's what the initial server scan does.  After the server scan returns the list of IP addresses with open ports, I try to find the name for each server.  Apparently there are issues with name resolution on some machines, as it works fine for me.
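
For what it's worth, the "find the name for that server" step is essentially a reverse lookup from the scanned IP, which you can also try by hand. This is just an illustration of that kind of lookup, not necessarily the exact call UD makes, and the IP is a placeholder:

# Sketch: reverse-resolve a scanned IP back to a name (IP is a placeholder).
getent hosts 192.168.1.50        # prints "IP  name" if a reverse record or hosts entry exists
# on hosts with NetBIOS enabled, nmblookup -A 192.168.1.50 can also report an SMB name,
# but since NetBIOS should be off for UD, treat that purely as a diagnostic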

5 minutes ago, Wsunderlage said:

Hello,

 

I am having a very similar issue. My unassigned devices were showing fine this morning, but about 20 minutes after the UD plugin updated, my shares show that the "server is offline". My shares from the NAS are accessible on the network, and the issue seems to be directly related to Unraid or the UD plugin. How can I go about rolling back the plugin version for further troubleshooting?

 

Thank you.

Use an IP address for your remote server instead of the name.

1 hour ago, dlandon said:

I'm putting out a release this evening.  For the time being I'll increase the timeout to 30 seconds.  Let's see if the UD page will refresh for you.

Release 2024.05.07 has been installed. Initially I thought it was working as I saw the UD drives and shares.

 

I did some tests by changing the tab I was on in unRAID and then going back to the Main tab. Each time it took between 12 and 15 seconds for the UD drives and shares to show up. Great!

 

For a refresh started by clicking the refresh symbol from the UD controls, it timed out after 30 seconds (which it should since that's the new timeout value). Alas that refresh attempt seems to have borked it again as now changing tabs or reloading the Main tab won't display the drives or shares.

 

But interestingly, I left it on the Main tab for a few minutes and then came back and the drives and shares were now visible. I suspect some of this behavior (maybe all of it) is also being influenced by the constant spamming of the syslog. I have been noticing other things taking more time to complete since the BMC firmware died.

 

Regardless, it's working enough for my needs right now. Thanks again!

11 hours ago, Amane said:

 

Hi dlandon

 

I think what you wanted to achieve is this:

(echo quit && sleep 1) | telnet PLUTO 2049 2>&1 | grep -q "Connected to"; echo $?

(echo quit && sleep 1) | telnet PLUTO.LOCAL 2049 2>&1 | grep -q "Connected to"; echo $?

 

Greetings

 

*(the first command checks the exit code from the timeout)

That does pretty much the same thing as the port check commands I posted.  I do find them to be slightly faster.


Hi everyone,


I'm seeking some guidance with managing an HFS+ formatted drive in unRAID. Here's a breakdown of what happened:


Initial Setup:
* I had an HFS+ formatted HDD connected to unRAID, appearing as dev 2 in the Unassigned Devices section.
* I added a second HDD (formerly a parity drive) as dev 3 with the intention of using it as a backup destination for the HFS+ drive.


Formatting and Rebooting:


* Seeing no HFS+ formatting options in unRAID, I shut down the drive (dev 3) and formatted it with HFS+ on an iMac.
* Upon reconnecting the formatted drive (dev 3) to unRAID, I saw a "reboot" icon next to it.


Troubleshooting Attempts:


* I performed a full system reboot of unRAID with both drives (in USB 3.0 docks) powered off during the process.
* After the unRAID reboot and array mount, I powered on the USB docks one by one.
* In the Unassigned Devices GUI, I mounted dev 2 (the HFS+ drive). This resulted in the "reboot" icon reappearing next to dev 3 (the backup drive).


Current Situation:


* I'm unsure how to proceed with using dev 3 as a backup destination for dev 2 within unRAID.
* Does the "reboot" icon next to dev 3 require a system reboot even though it previously had no data on it?


Additional Information:


* I have both the Unassigned Devices plugin and the Unassigned Devices Plus (UD+) plugin installed.


I appreciate any insights and advice you can offer to help me achieve my backup goals.


Attached: unRAID Diagnostics


Thanks!

server2018-diagnostics-20240508-1230.zip
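
Once both drives mount cleanly, the backup step itself could be as simple as an rsync between the two UD mount points. A minimal sketch, assuming UD's usual /mnt/disks/<label> paths with made-up labels, and assuming the HFS+ destination actually mounts read/write (journaled HFS+ often mounts read-only on Linux):

# Sketch: copy the contents of the source HFS+ disk to the backup disk (labels are examples).
rsync -avh --progress /mnt/disks/HFS_Source/ /mnt/disks/HFS_Backup/
# the trailing slashes copy the contents of HFS_Source into HFS_Backup rather than nesting the folder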

1 hour ago, oh-tomo said:

Does the "reboot" icon next to dev 3 require a system reboot even though it previously had no data on it?

The mount button indicators are documented in the first post.  For me to investigate further, please supply the ud diagnostics.  Go to a command line and type 'ud_diagnostics'.  Then post the /flash/logs/ud_diagnostics.zip file here.

