Unassigned Devices - Managing Disk Drives and Remote Shares Outside of The Unraid Array



1 hour ago, dlandon said:

This is not a good idea because of the possibility of inexperienced users doing things wrong and risking data loss.

I think this is only a problem for specific users who don't have enough knowledge :) But custom options would be very useful. Right now I have to edit lib.php to mount my shares with the parameters I need.

Link to comment
On 4/26/2021 at 5:38 PM, XisoP said:

The whole server was unresponsive. Had to do a PTP (FTS, Flip The Switch) in my case.

Server booted without issues, and the SSD for Plex is visible.

 

Attached is the diagnostics report. The log only covers today, so it's not that helpful I'm afraid.

 

penny-diagnostics-20210426-1732.zip 93.67 kB · 0 downloads

 

I'm starting a new parity check as I type this. Hopefully it will survive. Unraid is booted in GUI safe mode at the moment.

 

edit:

The parity check is still running, getting close to 14 hours. Still booted in safe mode.

 

Final verdict:

 

The parity check finished successfully in safe mode, and the server has been restarted in normal mode. Everything works fine again; I don't know what the issue was. Maybe the dead drive? It is strange, though, that the unassigned disk for Plex was dropped during the parity check.

 

My server froze again. All went well for a couple of days with no real load, just preclearing a 14TB drive and serving some Plex. I didn't leave the Main tab open, but I did have a preclear progress screen open on one of my machines. Preclear indicates that the server froze 61:33:11 into the job. Unfortunately I can't get any logs (other than those posted earlier); the complete server is unresponsive: no SSH, no GUI terminal. Total uptime according to the Docker GUI screen has been 2 days and a couple of hours (about the same as the preclear job).

Edited by XisoP
typo's
Link to comment
2 hours ago, XisoP said:

 

My server froze again. All went well for a couple of days with no real load, just preclearing a 14TB drive and serving some Plex. I didn't leave the Main tab open, but I did have a preclear progress screen open on one of my machines. Preclear indicates that the server froze 61:33:11 into the job. Unfortunately I can't get any logs (other than those posted earlier); the complete server is unresponsive: no SSH, no GUI terminal. Total uptime according to the Docker GUI screen has been 2 days and a couple of hours (about the same as the preclear job).

If you think there is a preclear issue, post on that forum.  I don't think this is a UD issue.

Link to comment
16 minutes ago, dlandon said:

If you think there is a preclear issue, post on that forum.  I don't think this is a UD issue.

 

I'm leaving it undecided for now. Preclear was doing its post-read; the pre-read was error-free. A data rebuild is running at the moment, and I'm leaving a Main tab open on one of my machines to track what happens. I'll keep you posted 👍

Link to comment

So it's probably been explained, but searching through 232 pages isn't exactly fun, so I'll ask:

 

When I go into "settings" for a partition, I see the mount point (WD-2TB), and share toggle, then below I see:
 


Script File: /boot/config/plugins/unassigned.devices/WD-2TB.sh

RUN IN BACKGROUND (toggle off/on)

User Script: (undefined)


Script Content:

 

 

Now I see that the WD-2TB.sh script is 0 bytes and empty.

 

What is the point of it, and how does it differ from "User Script"?

 

I want to have an rclone sync command do a backup to the USB disk daily; I already know the command works when I issue it manually from a command line.
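(For reference, the command is roughly of this shape - the share name and paths below are placeholders, not my actual ones:)

    # Mirror an array share to the UD-mounted USB disk (placeholder paths).
    rclone sync /mnt/user/backups /mnt/disks/WD-2TB/backups \
        --transfers 4 --checkers 8 --log-file /tmp/rclone-usb.log --log-level INFO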

 

Also, how does the "User Scripts" plugin factor into this equation? Would it be better than using any of the above, or is it just a similar alternative where either works?

 

 

Link to comment
15 minutes ago, mooky said:

 

When I go into "settings" for a partition, I see the mount point (WD-2TB), and share toggle, then below I see:

That script is executed when there is a disk event, such as a disk being plugged in, and is generally used to perform an operation on the inserted disk, such as a backup.

 

The User Scripts plugin is best used to initiate a script on a timed basis, such as a daily event.  In this case it's best to mount the UD disk and leave it mounted so the User Scripts plugin script can perform its operations at the appropriate time.
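For example, a daily User Scripts script along these lines (the mount point and rclone command are just placeholders for whatever you actually run):

    #!/bin/bash
    # Daily backup to a UD disk that is left mounted (sketch).
    MOUNTPOINT="/mnt/disks/WD-2TB"   # placeholder UD mount point

    # Only run if the UD disk is actually mounted.
    if mountpoint -q "$MOUNTPOINT"; then
        rclone sync /mnt/user/backups "$MOUNTPOINT/backups" --log-level INFO
    else
        echo "$MOUNTPOINT is not mounted; skipping backup."
    fi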

Link to comment

@dlandon,

 

So the "scripts file" automatically fires always when that particular disk is plugged in, or is there a list of disk events I can use to trigger stuff in this file?

 

Ideally I'd like to have an rclone sync start when the disk is plugged in, and then trigger once a day until I unplug it and take it offsite.

 

I'm going to rotate two different disks so I can finally accomplish a proper 3-2-1 backup strategy.

Link to comment
28 minutes ago, mooky said:

@dlandon,

 

So the "scripts file" automatically fires always when that particular disk is plugged in, or is there a list of disk events I can use to trigger stuff in this file?

 

Ideally I'd like to have an rclone sync start when the disk is plugged in, and then trigger once a day until I unplug it and take it offsite.

 

I'm going to rotate two different disks so I can finally accomplish a proper 3-2-1 backup strategy.

Click the 'Default Script' button and the basic script will load and show the events that occur.  Add your code where appropriate.
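For illustration, a filled-in device script follows the general shape below; the default script shows the exact event names and variables to use, so treat the ones here ($ACTION, $MOUNTPOINT, 'ADD', 'UNMOUNT', 'REMOVE') as indicative only, and the rclone command as a placeholder:

    #!/bin/bash
    # Sketch of a UD device script; UD runs it with $ACTION set for each disk event.
    case $ACTION in
      'ADD' )
        # Disk was plugged in and mounted: run the backup here.
        rclone sync /mnt/user/backups "$MOUNTPOINT/backups" --log-level INFO
        ;;
      'UNMOUNT' )
        # Disk is being unmounted.
        ;;
      'REMOVE' )
        # Disk was unplugged.
        ;;
    esac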

Link to comment
On 4/30/2021 at 9:08 AM, SuberSeb said:

I think this is only a problem for specific users who don't have enough knowledge :) But custom options would be very useful. Right now I have to edit lib.php to mount my shares with the parameters I need.

I've added a setting in the latest release of UD that will remove the no-caching options from SMB and NFS mounts.

 

Go to the UD settings and set 'Favor reliability on remote share mounts?' to 'No'.  This removes the 'noac' option for NFS mounts and 'cache=none' for SMB mounts.

 

For the moment I'm trying different things to resolve remote shares becoming unresponsive.  This is the zero-size issue on both SMB and NFS mounts.  I suspect it shows up for users mounting shares from a remote Unraid server where files are being moved by the mover, causing stale file handles.  I added an option to the CIFS mount command to use local inodes rather than the remote server's inodes.  I'm curious to see if this helps.
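To illustrate what these options look like on a manual mount (UD builds its own mount commands - the server names, credentials, and paths here are placeholders):

    # NFS with attribute caching disabled (the 'reliability' behavior):
    mount -t nfs -o noac tower:/mnt/user/share /mnt/remotes/tower_share

    # SMB with client-side caching disabled:
    mount -t cifs -o cache=none,username=me,password=secret //tower/share /mnt/remotes/tower_share

    # SMB with client-generated (local) inode numbers; the standard CIFS option for this is 'noserverino':
    mount -t cifs -o noserverino,username=me,password=secret //tower/share /mnt/remotes/tower_share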

Link to comment

I have been able to reproduce the zero size, used, and free on an NFS-mounted share.  It occurs with an Unraid share on the remote server when the mover moves files.  I will be looking for a solution.

 

In the future, would anyone posting issues related to UD please post them here.  Looking at the forum this weekend, I've seen a lot of posts related to the CIFS (SMB) and NFS mount problems and the unresponsive shares (zero size, used, and free).  It is very difficult for me to find and respond to all the posts related to this issue if they are scattered about the forum.  We can come up with solutions faster if we keep all posts regarding UD here.

Link to comment

After some research I have found that the issue with NFS showing zero for size, used, and free is from a stale file handle (unexpected fileid) after the mover runs.  It clears up with an unmount and remount.  There are several ways this can be handled for now.

  • The best way to handle this is to not set the remote share to use the cache disk.  Leave all the files on the array or a pool device.  I've found that the recycle bin creates the same issue because the file is moved.  Disable the recycle bin for the remote share.
  • Use SMB remote shares instead.  I believe the stale file handle has been fixed on CIFS shares.
  • UD can detect this situation and unmount and remount the share, but I'm not a fan of this approach.  If the file handle changes on every file move, and the share is extremely busy, UD would thrash the share with unmounts and mounts.

I'm still researching this, but I'm not very hopeful of a good solution for NFS.
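For anyone who wants to script around it in the meantime, a crude detect-and-remount run from cron would look something like this (placeholder share and mount point; this is not what UD itself does):

    #!/bin/bash
    # Remount an NFS share if it has gone stale (reports zero size).
    SHARE="tower:/mnt/user/share"          # placeholder remote share
    MOUNTPOINT="/mnt/remotes/tower_share"  # placeholder mount point

    # A stale mount typically reports 0 total blocks in df.
    SIZE=$(df --output=size "$MOUNTPOINT" 2>/dev/null | tail -1 | tr -d ' ')
    if [ -z "$SIZE" ] || [ "$SIZE" -eq 0 ]; then
        umount -l "$MOUNTPOINT"
        mount -t nfs "$SHARE" "$MOUNTPOINT"
    fi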

Link to comment
5 hours ago, dlandon said:

Use SMB remote shares instead.  I believe the stale file handle has been fixed on CIFS shares.

Because of stale file handles I switched to NFS. SMB has this issue too. The only way is to disable cache on those shares or disable hard links.

I described it here:

 

Edited by SuberSeb
Link to comment
5 hours ago, SuberSeb said:

Because of stale file handles I switched to NFS. SMB has this issue too.

Update to the latest version of UD.  I've applied a fix for the SMB shares.

 

Your solutions for NFS shares are the only things that work for now.

Link to comment

Hello everyone,

 

Is it possible to add a remote Synology NAS with Unassigned Devices over the internet?

The NAS is reachable through the Synology DynDNS service at the address "mom.myds.me" (not the real name). Uploads with FileZilla to this address also work.

My main questions: 1) Is it possible? And 2) if so, what is the right "mounting term" to connect it with Unassigned Devices?

 

I am running Unraid version 6.9.2 and Unassigned Devices 2021.04.19.

 

Thanks!

Link to comment
On 5/3/2021 at 7:16 AM, SuberSeb said:

Because of stale file handles I switched to NFS. SMB has this issue too. The only way is to disable cache on those shares or disable hard links.

I described it here:

 

 

For SMB at least, you don't need to disable hard links or cache to fix stale file handles. I made a post about it:

 

Link to comment

UD does not leave the "Format" state for a passed-through disk after the rebuild of this disk finishes in the VM.

 

Let me explain:

 

Several disks are passed through to an Unraid VM. These disks are marked as passed through within UD. One of the passed-through disks had to be replaced within the VM. During the rebuild within the VM, UD showed the "Format" button for this passed-through disk. Everything's fine up to here.

 

Now the disk is successfully rebuilt within the VM and the VM is working as usual.

 

UD still shows the "Format" button and no plus sign (for the partition).

 

Please have a look at the attached screenshot. The disk is formatted (rebuilt), working, and being written to. But UD still shows no + sign (for the partition) and still shows the Format text.

 

Thanks for listening.

 

 

 

Clipboard01.jpg

Edited by hawihoney
Link to comment
2 hours ago, hawihoney said:

 

Please have a look at the attached screenshot. The disk is formatted (rebuilt), working, and being written to. But UD still shows no + sign (for the partition) and still shows the Format text.

Linux does not recognize the file format and/or partition layout.  If Linux does not recognize the file format, UD can't show it.
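You can check what Linux itself sees for the device from the host console (replace sdX with the actual device):

    lsblk -f /dev/sdX     # partitions and any detected filesystems
    blkid /dev/sdX*       # filesystem signatures Linux can identify
    fdisk -l /dev/sdX     # partition table as the host sees it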

Link to comment
4 hours ago, dlandon said:

Linux does not recognize the file format and/or partition layout.  If Linux does not recognize the file format, UD can't show it.

 

Thanks for your answer.

 

To rebuild, I shut down the VM, replaced the disk on the host backplane, and started the VM again with the changed disk/by-id. Within the VM the rebuild was started, and UD on the host showed "Format" immediately. Now this "Format" does not go away. I understand that the host still does not show the partition (-part1) in /dev/disk/by-id, and that's the reason UD can't show the existing partition.

 

May I vote for additional options, or slightly changed handling, in UD where a) partitions for disks that are marked as passed through are never shown at all, and b) the text "Format" is never shown for disks that are marked as passed through.

 

Reason: Consider a host with lots of disks and VMs, equipped with a professional backplane. It's not necessary to shut down all VMs and the host to swap a disk that is passed through to a single VM.

 

In such an environment, partitions on passed-through disks don't need to be shown on the host, and the Format text doesn't need to be shown for a passed-through disk.

 

It does no harm the way it is, but it would look even more professional for large-scale systems.

 

Please have a look at the screenshot. All these disks are passed through to VMs and are marked as passed through (that's why Mount is not clickable - good). Format is still shown (not good). And all partitions are expandable if there's a filesystem (really not needed). Why would a host need to show partitions for disks that are completely passed through to a VM?

 

Thanks for listening.

 

 

Clipboard01.jpg

Edited by hawihoney
Link to comment
9 hours ago, dlandon said:

This is fixed in the latest release of UD.

Thank you for the hard work on this.  I just wanted to confirm what you have been saying: in the latest release of UD, we should be able to leave cache enabled or hard links enabled on the remote share as long as the mount is SMB.  However, NFS shares cannot have cache enabled.

 

Can cache be left enabled under NFS if hard links are disabled? I believe that I tried this scenario and still had problems.

Link to comment
8 hours ago, hawihoney said:

May I vote for additional options, or slightly changed handling, in UD where a) partitions for disks that are marked as passed through are never shown at all, and b) the text "Format" is never shown for disks that are marked as passed through.

I'll have a look and see what makes sense.

Link to comment
3 hours ago, mikesp18 said:

Thank you for the hard work on this.  I just wanted to confirm what you have been saying: in the latest release of UD, we should be able to leave cache enabled or hard links enabled on the remote share as long as the mount is SMB.  However, NFS shares cannot have cache enabled.

Yes.

 

3 hours ago, mikesp18 said:

Can cache be left enabled under NFS if hard links are disabled? I believe that I tried this scenario and still had problems.

I think disabling hard links is hit and miss and depends on your particular use case.  Disabling cache on the share is apparently a better solution.

 

I know some of you are frustrated with the NFS situation, but understand that it is beyond Limetech's control.  NFS is a pretty poor implementation of remote share mounting.  It cannot handle a file being moved on the server, because the file handle changes and the NFS client is not aware of the new file handle.  That's where the stale file handle comes from.  You'll see log entries saying the file handle is not what was expected.  Moving files is exactly what the mover does, hence the problem on Unraid servers.

 

There is a remote possibility that NFSv4 may offer some relief - there is a feature called volatile file handles, but I do not understand how it works because the documentation is weak.
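If anyone wants to experiment, the NFS version can be forced on a manual mount - assuming the server exports NFSv4 at all; the server and paths here are placeholders, and UD does not necessarily use these options:

    # Force an NFSv4 mount instead of the usual v3:
    mount -t nfs -o vers=4.2 tower:/mnt/user/share /mnt/remotes/tower_share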

Link to comment
