Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



9 hours ago, bonienl said:

Duplicate serial numbers are clearly a bug on the manufacturer's side (WD).

A serial number uniquely identifies a product and is used in the vendor's RMA process.

A two-bay dock presents both drives under the dock serial number rather than the drives' serial numbers.
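A quick way to check which serial the system is actually being handed (sdX is whatever device letter the docked drive received):

    smartctl -i /dev/sdX | grep -i serial
    udevadm info --query=property --name=/dev/sdX | grep ID_SERIAL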

Link to comment

Hi all,

I found a... bug? Or a potential enhancement for UD. If you are sitting inside a mounted drive on the CLI and then unmount that drive through the UD UI, it lets you do it, but it creates a mess. The drive won't properly unmount because it's in use, yet there's no error of any kind. The disk shows as unmounted, but the directories are still there under /mnt/disks/, and accessing them causes an error. A simple mount and unmount fixes it all, but detecting that the drive is in use (similar to how the main array won't unmount if a disk is in use) would be a good improvement.

Link to comment
7 hours ago, cinereus said:

When you click on the name, it says something like "type a new name", and that's the name of the folder under /mnt/disks/.

That doesn't help at all with automount or scripting since it still has no way to decide which of the 2 identical serial numbers that name should apply to.

Link to comment
58 minutes ago, trurl said:

That doesn't help at all with automount or scripting since it still has no way to decide which of the 2 identical serial numbers that name should apply to.

Sorry, not following? I wasn't claiming it helps; that was my point. Is it not possible to avoid renaming them both if we are set on the serial ID being the only identifier?

Link to comment
2 hours ago, whiteatom said:

Hi all,

I found a... bug? Or a potential enhancement for UD. If you are sitting inside a mounted drive on the CLI and then unmount that drive through the UD UI, it lets you do it, but it creates a mess. The drive won't properly unmount because it's in use, yet there's no error of any kind. The disk shows as unmounted, but the directories are still there under /mnt/disks/, and accessing them causes an error. A simple mount and unmount fixes it all, but detecting that the drive is in use (similar to how the main array won't unmount if a disk is in use) would be a good improvement.

UD checks for a disk-busy error when a disk is unmounted and, if the disk is busy, checks for open files.  If no files are open, the unmount is forced.  If you look at the log, you'll see messages showing this action.  If UD operated as you suggest and this situation came up during a shutdown, you would get an unclean shutdown, and I'd be hearing about a 'bug' in UD that would not allow the array to shut down cleanly.

 

The problem is that the mount point could not be removed because you were on the CLI accessing files at the mount point.

 

While this situation might be an annoyance to you, it is really the best way for UD to operate.  The fix: don't unmount a disk while you are on the CLI inside that disk.
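If you want to see for yourself what is holding a mount point busy before unmounting, a rough sketch from the console (/mnt/disks/mydisk is a made-up mount point; lsof and fuser are the standard tools for this):

    lsof /mnt/disks/mydisk        # given a mount point, lists open files on that filesystem
    fuser -vm /mnt/disks/mydisk   # lists the processes using the filesystem
    # once nothing is reported (and your shell's working directory is elsewhere):
    umount /mnt/disks/mydisk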

Link to comment
11 minutes ago, cinereus said:

Sorry not following? I wasn't claiming it helps. That was my point. Is it not possible to not rename them both if we are set on serial ID being the only identifier?

UD works on the premise that each disk has a unique identifier, its serial number, and was not built to handle duplicates, because they should never occur.

 

I think you are confused about the 'rename'.  It appears you think that by setting the mount point name, the disk will then be unique.  The name you are setting is the mount point for the disk and has nothing to do with making each disk 'unique' to UD.  The serial number is the only thing used to identify the disk.
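To make this concrete, here is roughly what the system-generated identifiers look like; UD keys off the serial embedded in these ids, and the WD id below is made up:

    ls -l /dev/disk/by-id/
    # ata-WDC_WD80EFAX-68KNBN0_VAGxxxxx -> ../../sdb
    # A second disk reporting the same serial would produce the same link name,
    # so the two cannot be told apart. The mount point name you set lives under
    # /mnt/disks/ and never enters into this.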

Link to comment
24 minutes ago, dlandon said:

I think you are confused about the 'rename'.  It appears you think that by setting the mount point name, the disk will then be unique.  The name you are setting is the mount point for the disk and has nothing to do with making each disk 'unique' to UD.  The serial number is the only thing used to identify the disk.

No, I absolutely understand all that. I just mean what I say: because they have the same serial ID, it causes undesirable behaviour when using the rename function.

Link to comment

I just released an update to UD.  The changes are all related to the UI.  The most noticeable change is that the '+' icon has been removed.  You click on the serial number to access the partitions and mount points.  The other changes are related to page layout to better match the Unraid standard and fit things better on the page.

Link to comment
On 4/25/2020 at 12:02 PM, dlandon said:

That's the way UD works.  You can make the shares hidden in UD settings.

Thanks for the advice, I've done that now.

 

But for security reasons it would be nice to be able to turn this feature off completely. I'm using the share for backup purposes, and therefore I only want it shared one way, to prevent unauthorised access by malware and ransomware.

Link to comment

Hi everyone,

 

I'm facing a little annoyance, probably because I am doing something wrong.

I have 2 SSDs in UD, both with a vdisk for my only Windows 10 VM.

If I install programs and games, the space in the UD disk status goes up (as it should, of course), but when I uninstall, say, a 100 GB game, the disk space used in UD doesn't update. I have already had some drives fill up, and that results in Unraid pausing my VM.

 

Please help with my noobness.

Link to comment
5 minutes ago, Timmex said:

Hi everyone,

 

I'm facing a little annoyance, probably because I am doing something wrong.

I have 2 SSDs in UD, both with a vdisk for my only Windows 10 VM.

If I install programs and games, the space in the UD disk status goes up (as it should, of course), but when I uninstall, say, a 100 GB game, the disk space used in UD doesn't update. I have already had some drives fill up, and that results in Unraid pausing my VM.

 

Please help with my noobness.

VM vdisks are initially allocated as ‘sparse’ files, which means only sectors written inside the VM actually use space at the host (Unraid) level.  However, the host typically does not know when the VM deletes a file internally, so the space that file occupied remains allocated within the vdisk.  You should always assume that the vdisk can grow towards the logical size you allocated to the VM, and avoid over-committing the physical space.
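To see the sparse behaviour for yourself, a small sketch (made-up path; qemu-img creates raw images sparse by default):

    qemu-img create -f raw /mnt/disks/ssd1/vdisk1.img 100G
    ls -lh /mnt/disks/ssd1/vdisk1.img   # apparent (logical) size: 100G
    du -h  /mnt/disks/ssd1/vdisk1.img   # space actually allocated: near zero until the VM writes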

Link to comment
26 minutes ago, itimpi said:

VM vdisks are initially allocated as ‘sparse’ files, which means only sectors written inside the VM actually use space at the host (Unraid) level.  However, the host typically does not know when the VM deletes a file internally, so the space that file occupied remains allocated within the vdisk.  You should always assume that the vdisk can grow towards the logical size you allocated to the VM, and avoid over-committing the physical space.

Does that also mean that when I delete a file and put it back again within that reserved space, it won't use any more space than it already did?

Does it also work this way when I put the VM on the cache disk?

Link to comment
Just now, Timmex said:

Does that also mean that when I delete a file and put it back again within that reserved space, it won't use any more space than it already did?

Does it also work this way when I put the VM on the cache disk?

When you delete a file and put it back, it depends on whether the VM re-uses the same internal sectors.   If it does, the vdisk will not use any additional space.    However, if the VM decides to use different internal sectors, additional space will be used by the vdisk.   The point is that the host is only aware of ‘sectors’ within the vdisk file, not how the VM is using them.   The moment the VM writes a sector with non-zero values, the vdisk has to have the space to store that sector.

 

This behaviour is independent of where the vdisk is located and is inherent in how vdisks operate.

 

It is possible to fully allocate the vdisk initially so that the physical and logical sizes are the same.   In that case the vdisk will no longer grow, as it already occupies the maximum space the VM will internally expect to be available.
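For example, either of these will do it on a raw vdisk (made-up paths and size):

    # fully allocate an existing raw vdisk in place (existing data is preserved):
    fallocate -l 100G /mnt/disks/ssd1/vdisk1.img
    # or create a new vdisk fully allocated from the start:
    qemu-img create -f raw -o preallocation=full /mnt/disks/ssd1/vdisk2.img 100G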

 

Link to comment
21 minutes ago, itimpi said:

When you delete a file and put it back, it depends on whether the VM re-uses the same internal sectors.   If it does, the vdisk will not use any additional space.    However, if the VM decides to use different internal sectors, additional space will be used by the vdisk.   The point is that the host is only aware of ‘sectors’ within the vdisk file, not how the VM is using them.   The moment the VM writes a sector with non-zero values, the vdisk has to have the space to store that sector.

 

This behaviour is independent of where the vdisk is located and is inherent in how vdisks operate.

 

It is possible to fully allocate the vdisk initially so that the physical and logical sizes are the same.   In that case the vdisk will no longer grow, as it already occupies the maximum space the VM will internally expect to be available.

 

First, I would like to thank you for this fully noob-proof answer :P

How would I be able to allocate the full space so it won't expand anymore?

Link to comment
1 hour ago, Timmex said:

If I install programs and games, the space in the UD disk status goes up (as it should, of course), but when I uninstall, say, a 100 GB game, the disk space used in UD doesn't update. I have already had some drives fill up, and that results in Unraid pausing my VM.

You can recover the space with TRIM; see here.
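Roughly, the idea is to have the guest issue TRIM so the holes are punched back into the sparse vdisk, then trim the SSD itself. This assumes the vdisk is attached with discard support (e.g. discard='unmap' on the disk's driver element in the VM XML); /mnt/disks/ssd1 is a made-up mount point:

    # inside the Windows guest: run Optimize Drives, or from an admin prompt:
    #   defrag C: /L
    # then on the Unraid host, release the trimmed blocks on the UD-mounted SSD:
    fstrim -v /mnt/disks/ssd1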

Link to comment
10 hours ago, dlandon said:

The other changes are related to page layout to better match the Unraid standard and fit things better on the page.

One request, if you don't mind: I would like to see the devices grouped more closely together, similar to the array devices, so they don't take up so much vertical real estate and more can fit on the page, e.g. 8 array devices vs 8 UDs:

 

image.thumb.png.8cfdd64d568e82ce827d899b86d7c28d.png

 

image.thumb.png.7fecf0e483562f20b2027687dc9e6c1d.png

 

Also, I'm not a very big fan of the new color, maybe because they are links now? But not a big deal.

 

 

Link to comment
15 hours ago, johnnie.black said:

One request, if you don't mind: I would like to see the devices grouped more closely together, similar to the array devices, so they don't take up so much vertical real estate and more can fit on the page, e.g. 8 array devices vs 8 UDs:

 

image.thumb.png.8cfdd64d568e82ce827d899b86d7c28d.png

 

image.thumb.png.7fecf0e483562f20b2027687dc9e6c1d.png

 

Also, I'm not a very big fan of the new color, maybe because they are links now? But not a big deal.

 

 

The color was to indicate highlighted text.  I agree; I have put the '+' back and removed the highlight.  Not all my ideas work out.

Edited by dlandon
Link to comment
On 4/17/2020 at 6:32 PM, alexdodd said:

I've got the same issue; it's the plugin missing them at some level.

I can add the unassigned disks to the array, I just can't see them with the plugin.

I've learned to just live with it!

I have too. At the end of the day it isn't the worst thing not to have my SAS drives show up in UD, but it would be handy in a pinch, I must admit.

Link to comment
On 4/27/2020 at 9:01 PM, dlandon said:

If someone will post diagnostics, I can take a look.

I'm having the same issue.  I have 2 SAS SSDs installed that were showing up.  I tried to add another drive to that controller and it wouldn't show up, so I restarted the server just to see if the controller could even see the drive (it couldn't; the drive is definitely bad, but it does still see the 2 SAS SSDs).  Since that restart, Unassigned Devices can't see those 2 SAS drives, but you can find them under /dev/disk/by-id to identify them, and even manually mount them if you want (both contain XFS partitions; one I'm using for my Plex transcoding cache, and the other I'm going to use for my Docker appdata).  I can always edit the go file to mount those 2 drives where they need to be by their 'by-id' info (in case they move from their 'sd?' designation; a sketch of what I mean is below), but they WERE working in UD and it will NOT see them now (through about 4 boots so far, and this damned HP DL380p-G8 takes forever to reboot, lol).  I'm attaching my current diagnostics to this post.

Also, is there a reason you can't mount a drive in UD wherever you want (I believe this was the old behavior), instead of being forced to mount them under /mnt/disks/?  Or would it be possible to make that a toggleable feature?
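For reference, the go file workaround I mean looks something like this (device id and mount point are made up):

    mkdir -p /mnt/disks/sas-ssd1
    mount /dev/disk/by-id/scsi-35000cca012345678-part1 /mnt/disks/sas-ssd1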

media01-diagnostics-20200505-0102.zip

Link to comment

I've been running a Windows 10 VM with an unassigned NVMe drive passed through for months without issue. It was set to auto-mount at boot, and I frequently started it up and shut it down. Today I tried to fire it up and was presented with the attached error message. Looking at the drives, I see that the NVMe failed to auto-mount and will not mount manually, as well as a "historical drive" that I haven't seen before (screenshot attached).

 

I've been searching for an answer for quite some time, and my best guess is that the XFS partition is corrupted. Is anyone able to shed some light on this situation?

 

Screen Shot 2020-05-05 at 1.23.36 AM.png

Screen Shot 2020-05-05 at 1.23.10 AM.png

Link to comment
5 hours ago, whiskeykilo said:

I've been running a Windows 10 VM with an unassigned NVMe drive passed through for months without issue. It was set to auto-mount at boot, and I frequently started it up and shut it down. Today I tried to fire it up and was presented with the attached error message. Looking at the drives, I see that the NVMe failed to auto-mount and will not mount manually, as well as a "historical drive" that I haven't seen before (screenshot attached).

 

I've been searching for an answer for quite some time, and my best guess is that the XFS partition is corrupted. Is anyone able to shed some light on this situation?

 

Screen Shot 2020-05-05 at 1.23.36 AM.png

Screen Shot 2020-05-05 at 1.23.10 AM.png

You have the "Pass Thru" switch turned on, which prevents both auto and manual mounting.  Turn that switch off.

Link to comment
