
Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



12 minutes ago, dlandon said:

Neither, and it's not a bug.

 

Unraid assigns unassigned disks the 'devX' designation.  UD changes the 'devX' to 'Dev X' to make it a little clearer.  Given a little time, UD should update the 'dev1' on the GUI.

 

Alright, perfect :)

Link to comment

Hi, my friends.

I use a script to automatically back up my data when I connect my external USB disk. It works really well! But now, I'm trying to enhance the script to do the same thing while also encrypting the information.

Honestly, I don't have any idea how to do this. Is something like the following possible?

tar -cz $BACKUPDIR | gpg --encrypt -r $RECIPIENT_EMAIL | dd of="$BACKUP_DIR/backup-$(date +%Y%m%d).tar.gz.gpg"
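
For reference, a slightly fleshed-out sketch of what I have in mind ($BACKUP_SRC, $BACKUP_DIR and $RECIPIENT_EMAIL are just placeholders for values set earlier in my script, and gpg would already need the recipient's public key):

#!/bin/bash
BACKUP_SRC="/mnt/user/data"                 # what to back up (placeholder)
BACKUP_DIR="/mnt/disks/usb_backup"          # mount point of the external USB disk (placeholder)
RECIPIENT_EMAIL="me@example.com"            # gpg public key to encrypt to (placeholder)
OUTFILE="$BACKUP_DIR/backup-$(date +%Y%m%d).tar.gz.gpg"

# Stream the archive straight into gpg so no unencrypted copy ever lands on the disk.
tar -czf - "$BACKUP_SRC" | gpg --encrypt --recipient "$RECIPIENT_EMAIL" --output "$OUTFILE"

To restore, the pipeline would simply be reversed: gpg --decrypt "$OUTFILE" | tar -xzf -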

 

Link to comment
12 minutes ago, trosma said:

Hi, my friends.

I use a script to automatically back up my data when I connect my external USB disk. It works really well! But now, I'm trying to enhance the script to do the same thing while also encrypting the information.

Honestly, I don't have any idea how to do this. Is something like the following possible?

tar -cz $BACKUPDIR | gpg --encrypt -r $RECIPIENT_EMAIL | dd of="$BACKUP_DIR/backup-$(date +%Y%m%d).tar.gz.gpg"

 

Format the disk as an encrypted disk.
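
For context, an encrypted UD disk is essentially a LUKS container, so the command-line equivalent looks roughly like this sketch (the device /dev/sdX1 and the names are placeholders, and luksFormat destroys whatever is on the partition; the UD GUI does the equivalent when you choose an encrypted file system):

cryptsetup luksFormat /dev/sdX1                       # one time: create the LUKS container
cryptsetup luksOpen /dev/sdX1 usb_backup              # unlock it
mkfs.xfs /dev/mapper/usb_backup                       # create a file system inside
mount /dev/mapper/usb_backup /mnt/disks/usb_backup    # mount it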

Link to comment
On 2/28/2024 at 11:32 AM, dlandon said:

That timer is the time to wait before starting to mount the remote shares.  This ensures the network is ready; it has nothing to do with the time it takes to mount shares.

 

No, it's hard coded.

 

Post your diagnostics so I can be sure to come up with the correct solution.

 

Then perhaps it shouldn't be hardcoded, or at least should be more than 10 seconds. Ten seconds isn't much on an encrypted WAN connection, which (not in my case) might first have to be established by the request itself.

 

My workaround is to manually mount the share in the terminal. The second and third shares to the same target take only 1 or 2 seconds; it's only the first one that is too slow for the 10-second timeout.
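
For reference, the manual mount is nothing special; something like this works from the Unraid terminal (the server, share, credentials and mount point below are placeholders for my real ones):

mkdir -p /mnt/remotes/SERVER_Backup
mount -t cifs //server.example.com/Backup /mnt/remotes/SERVER_Backup \
    -o username=backupuser,password=secret,rw

Once that first mount has established the connection, the remaining shares mount within a second or two.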

Link to comment
1 hour ago, tbahn said:

Then perhaps it shouldn't be hardcoded, or at least should be more than 10 seconds.

Allowing all of the timeouts in UD to be programmable would not be practical.  

 

There is no way, short of testing this timeout on every possible remote server, to know the right value.  The latest version of UD increases the timeout to 15 seconds.

Link to comment

I noticed an odd issue with SMB remote shares after upgrading from 6.12.4 to 6.12.8.  If I mounted an SMB share using the computer name (for example //Tower/Backup), I saw 30 to 40 watts of additional power usage on my servers.  If I mounted by IP address (//192.168.1.1/Backup), I did not see the increase.  It does not matter whether the share is mounted manually or by a script; the same thing happens.  After about 10 minutes, a process named kworker/##-#-cifsiod starts using CPU.  While the CPU load is not very high, it does drop my battery backup runtime by a good 5 to 7 minutes across both servers.

I don't believe this is specifically a UD issue; it is probably more of a 6.12.4 to 6.12.6/8 issue.  I was running UD 2024.02.17 on both 6.12.4 and 6.12.8.  The issue did not appear on 6.12.4 but does show up on 6.12.8, and upgrading to UD 2024.03.03 gives the same results.

The workaround is easy: update my scripts to use the IP address instead of the SMB share name.  I usually left the SMB share mounted all the time between the servers, but I will probably change my scripts to mount the share only when needed; I use the shares to run some backups between the two servers, so the connection is not needed all the time.  I read through the last 12 pages of the forum and did not see anyone else post about this; my apologies if it is a duplicate.  I wanted to post in case anyone else is seeing the same thing.
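
For anyone hitting the same thing, the change in my backup scripts is roughly this (the IP, share name, credentials file and paths are just examples):

# Before: mounting by host name left a kworker/..-cifsiod thread busy
#mount -t cifs //Tower/Backup /mnt/remotes/Tower_Backup -o credentials=/root/.smbcreds

# After: mount by IP, run the backup, then unmount so the connection isn't held open
mount -t cifs //192.168.1.1/Backup /mnt/remotes/Tower_Backup -o credentials=/root/.smbcreds
rsync -a /mnt/user/backups/ /mnt/remotes/Tower_Backup/
umount /mnt/remotes/Tower_Backup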

kworker -cifsiod usage.jpg

Link to comment

I see several disk devices with names similar to "Generic-_SD_MMC_20060413092100000-0:2" (see image below) in Unassigned Devices.  The "MOUNT" button is greyed out, so I am unable to do anything with them.  I believe they may be related to attaching and removing USB devices.  I have rebooted unRAID several times, but that does not seem to help.  I even tried uninstalling and reinstalling UD, but the devices always reappear.  Can anyone explain to me what they are and how to remove them?  I am running unRAID 6.12.8.

 

Thanks,

 

-Mark

Screenshot 2024-03-03 181914.png

Link to comment
5 hours ago, mftovey said:

I see several disk devices with names similar to "Generic-_SD_MMC_20060413092100000-0:2" (see image below) in Unassigned Devices. 

You have a card reader attached, and it's one that still shows empty drives when no card is inserted. Just ignore them.

Link to comment

Actually, all USB ports are empty except for the one that holds the unRAID jump drive.  There is no card reader attached.  This started out displaying only one of these Generic drives, then a second one appeared, later a third appeared, and now there are four.  I suspect these are artifacts left over from attaching and removing a jump drive.  There must be a way of removing them from memory, but I can't find one yet.

 

 

Link to comment
8 hours ago, mftovey said:

Actually, all USB ports are empty except for the one that holds the unRAID jump drive.  There is no card reader attached.  This started out displaying only one of these Generic drives, then a second one appeared, later a third appeared, and now there are four.  I suspect these are artifacts left over from attaching and removing a jump drive.  There must be a way of removing them from memory, but I can't find one yet.

Do the following:

  • Post diagnostics.
  • Click on the double arrows icon in the upper right of the UD page.
Link to comment

Hi,

 

I'm having some trouble too.

I was using the Docker container for Tailscale, and it was working flawlessly, but a few days ago it stopped working. I switched to the plugin version; same problem.

I connect 2 Unraid servers together, then I want to mount a share with Unassigned Devices.

Pinging Unraid <-Tailscale-> Unraid works fine; the connection is OK.

But Unassigned Devices says the mount point is offline...

In the network settings, I set the extra network "tailscale0"...

But is there a setting I missed? For the past year it was working fine.

Any ideas? 🙂

Link to comment
2 hours ago, Johnny4233 said:

Hi,

 

I'm having some trouble too.

I was using the Docker container for Tailscale, and it was working flawlessly, but a few days ago it stopped working. I switched to the plugin version; same problem.

I connect 2 Unraid servers together, then I want to mount a share with Unassigned Devices.

Pinging Unraid <-Tailscale-> Unraid works fine; the connection is OK.

But Unassigned Devices says the mount point is offline...

In the network settings, I set the extra network "tailscale0"...

But is there a setting I missed? For the past year it was working fine.

Any ideas? 🙂

The method for determining whether a remote server share is available has changed from a ping to checking whether the SMB port is open on the remote server.  Check that SMB is enabled on your remote server.  Also, be sure port 445 is open on the remote server and is not being blocked by Tailscale or a firewall.
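
A quick way to check this from the Unraid console is to test whether the port answers (replace the address with your remote server's; this uses bash's built-in /dev/tcp redirection, so no extra tools are needed):

timeout 5 bash -c '</dev/tcp/192.168.1.1/445' && echo "SMB port 445 open" || echo "SMB port 445 closed or blocked"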

Link to comment
12 hours ago, Kilrah said:

These are clearly the 4 slots from a card reader... None built into the PC?

 

As a matter of fact, there are.  I am using an old Lenovo S20 tower and it does have card reader slots built into it.  I never used them and so I never paid any attention to them.  The physical labels on the slots match what appears in the unRAID screen, so that verifies the source of the displayed information.  But these only started to appear just recently, and they appeared one at a time with a long delay between each one showing up (days, weeks?).  Perhaps the last update to UD caused these to start appearing?  But why the delay between each line appearing?

    If that is all it is, I can live with them being displayed.  I was just uncomfortable with not knowing what this was about.

 

Thanks for solving the mystery.

 

Mark

 

Link to comment

Hi all, thanks for this great plugin @dlandon. I'm trying to understand the following…

 

[Attached screenshot: 2024-03-07 Unassigned APFS drive]

 

The drive in the picture is an APFS (Apple File System) drive mounted via UD+. The partition with all the data (I notice this with all macOS-formatted APFS drives) is read-only. I presumed (maybe incorrectly) that UD+ could mount APFS in R/W mode.

- Can Unassigned Devices Plus mount APFS in R/W mode? If not, all clear.

- If it should be able to mount the drive in R/W, I'm wondering what could cause the read-only lockup and how to solve it. Note that I have not set RO in the drive's preferences.

- Does anyone know what the EFI partition is and why it's there? I've never seen this on my macOS system.

 

Thanks a lot, Jan

Link to comment

Hi  

I noticed that after a power failure and (auto)restart of my server (a NUC), the NFS shares (on local Synologys) do not get mounted automatically. I use the Unassigned Devices (UD) plugin, and all NFS shares are set to automount.

What can I do so the NFS shares get mounted "by themselves"?

PS: I also noticed that the NFS shares don't get mounted on reboot... Do I need to set /etc/fstab entries, like in Linux? Then again, I would think the plugin is there for that.

Any help is welcome.

Edited by shingaling
Link to comment
16 minutes ago, shingaling said:

Hi  

I noticed that after a power failure and (auto)restart of my server (a NUC), the NFS shares (on local Synologys) do not get mounted automatically. I use the Unassigned Devices (UD) plugin, and all NFS shares are set to automount.

What can I do so the NFS shares get mounted "by themselves"?

PS: I also noticed that the NFS shares don't get mounted on reboot... Do I need to set /etc/fstab entries, like in Linux? Then again, I would think the plugin is there for that.

Any help is welcome.

Post diagnostics.

Link to comment
7 minutes ago, dlandon said:

It looks like two shares from one remote server won't mount:

Actually, after a power failure or on reboot, none of them mount.

But when I go into Unraid and just click on "Add NFS Share", refresh the available NFS servers, select one, and then cancel the procedure... suddenly all mounts from this server are found again, and the "MOUNT" button turns from grey to orange.

But it doesn't find them by itself after a reboot or a power-failure restart.

 

Is there anything that can be done so UD finds the NFS shares again by itself and automounts them?

 

I come from Ubuntu. There I could just add the NFS shares to /etc/fstab. If power failed and came back, the server would restart and mount the paths from fstab again. I never had to open any port on the NFS servers.
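
For comparison, the kind of /etc/fstab line I mean (the server address, export and mount point are just examples):

# /etc/fstab on Ubuntu -- mounted at boot once the network is up
192.168.1.50:/volume1/backups  /mnt/syno_backups  nfs  defaults,_netdev,soft,timeo=50  0  0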

Edited by shingaling
typo
Link to comment
13 minutes ago, dlandon said:

UD no longer pings to check whether a server is online.

Those servers are always on... Is there a way to set Unraid (and UD) to wait on restart/reboot until network connectivity is reached?

Edited by shingaling
Link to comment
1 hour ago, shingaling said:

Actually, after a power failure or on reboot, none of them mount.

But when I go into Unraid and just click on "Add NFS Share", refresh the available NFS servers, select one, and then cancel the procedure... suddenly all mounts from this server are found again, and the "MOUNT" button turns from grey to orange.

But it doesn't find them by itself after a reboot or a power-failure restart.

It looks to me like the remote shares are auto-mounting: 30 seconds after UD logs that it is waiting 30 seconds, the remote shares start mounting.

But two did not mount.

 

Then I see where you logged in and mounted two manually:

1 hour ago, shingaling said:

Is there anything that can be done so UD finds the NFS shares again by itself and automounts them?

Yes.  There may not be enough time allowed for the remote server to become available.  There is a UD setting that controls how long UD will wait before mounting remote shares.  Currently you have it set to 30 seconds:

Mar  8 07:32:10 Monolith-6 unassigned.devices: Waiting 30 secs before mounting Remote Shares...

The setting is 'Remote share mount wait time' in Settings->Unassigned Devices.

 

UD will also attempt to update the online status before trying to mount remote shares, so it should automatically get the current status before mounting.
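
If the configurable wait still isn't long enough, a user script along these lines can poll the remote server before mounting; this is just a sketch (the address, export and mount point are placeholders, and 2049 is the standard NFS port):

#!/bin/bash
SERVER=192.168.1.50                               # placeholder for the NFS server's address
for i in $(seq 1 60); do                          # wait up to 5 minutes
    timeout 5 bash -c "</dev/tcp/$SERVER/2049" 2>/dev/null && break
    sleep 5
done
mount -t nfs "$SERVER:/volume1/backups" /mnt/remotes/syno_backups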

Link to comment
