
NFS share on Unassigned device broken after share modification


Solved by dlandon


So today I changed shares:

appdata

domains

syslog

system

from "Cache ← Array" to "Cache only", to speed things up.

Prior to making that change, I brought down Docker, ran the mover, made the change, then brought Docker back up.
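For anyone wanting to script this, the rough command-line equivalent of those steps looks like the sketch below. The rc.docker and mover paths are what I understand ship with Unraid 6.12, but verify them on your own system before running anything:

```shell
# Sketch of the procedure above from the Unraid console (verify paths first).
/etc/rc.d/rc.docker stop       # stop the Docker service before changing shares
/usr/local/sbin/mover start    # flush files according to each share's cache setting
# ...now change the shares from "Cache <- Array" to "Cache only" in the GUI...
/etc/rc.d/rc.docker start      # bring Docker back up
```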

Everything came up fine. 

Except for an unassigned device: a CCTV drive, not in the array, that I use for my Frigate server (which does not run on Unraid).

The unassigned device lost its name; it is now dev1, where it used to be frigate. When I go into its settings, Share and Automount are turned off. I turn them on, add the name BACK, and hit Apply. Task completed; I click Done.

 

Now, I go to the Unassigned Devices settings under Settings, and under NFS Settings, "Enable NFS export" is set to No.

I set it to Yes and click Apply. The page refreshes, and it's back to No. Looking at the syslog, I get this when I set the option to Yes:

 

Apr 27 11:24:41 server1 unassigned.devices: Set Disk Name on 'ST6000VX009-2ZR186_ZVY00GYH (sdf)' to 'frigate'
Apr 27 11:24:51 server1 ool www[4265]: /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.settings 'nfs_settings'
Apr 27 11:24:52 server1 unassigned.devices: Updating share settings...
Apr 27 11:24:52 server1 unassigned.devices: Share settings updated.
Apr 27 11:25:35 server1 unassigned.devices: Set Disk Name on 'ST6000VX009-2ZR186_ZVY00GYH (sdf)' to 'frigate'
Apr 27 11:25:39 server1 unassigned.devices: Mounting partition 'sdf1' at mountpoint '/mnt/disks/frigate'...
Apr 27 11:25:39 server1 unassigned.devices: Mount cmd: /sbin/mount -t 'xfs' -o rw,relatime '/dev/sdf1' '/mnt/disks/frigate'
Apr 27 11:25:40 server1 kernel: XFS (sdf1): Mounting V5 Filesystem
Apr 27 11:25:40 server1 kernel: XFS (sdf1): Ending clean mount
Apr 27 11:25:40 server1 unassigned.devices: Successfully mounted '/dev/sdf1' on '/mnt/disks/frigate'.
Apr 27 11:25:40 server1 unassigned.devices: Adding SMB share 'frigate'.
Apr 27 11:25:40 server1 unassigned.devices: Warning: Unassigned Devices are not set to be shared with NFS.
Apr 27 11:25:54 server1 ool www[10136]: /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.settings 'nfs_settings'
Apr 27 11:25:54 server1 unassigned.devices: Updating share settings...
Apr 27 11:25:54 server1 unassigned.devices: Share settings updated.

 

When I go back to Main, under Unassigned Devices, the drive that WAS labeled frigate is again dev1, and the Share and Automount options are UNCHECKED.

 

I've tried setting the shares back to "Cache ← Array" and running the mover, and nothing fixes it. It's completely messed up.

 

How do I get the system shares running on cache only, and get my NFS share back? It's incredibly important.


In Fix Common Problems I also have:

 

Share system set to cache-only, but files / folders exist on the array. You should change the share's settings appropriately, or use the Dolphin / Krusader docker applications to move the offending files accordingly. Note that there are some valid use cases for a setup like this. In particular: THIS. More Information

 

I've followed this link:


But what this post says to do doesn't exist anymore: "Prefer" is not an option. I am running Unraid 6.12.10, and that option apparently no longer exists.

4 hours ago, itimpi said:

You should refer to the current documentation that covers that here.

 

That is exactly what I had done originally before changing to cache only. 

Brought down Docker (no VMs were actively running).

The shares were already "Cache ← Array", so I ran the mover. I checked the syslog, and nothing was copied over.

Changed to cache only. Restarted docker. 

I ended up moving libvirt.img from /mnt/user/system to /mnt/cache/system, and the Fix Common Problems error about system still having files on the array went away.
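For anyone following along, a safer sketch of that move uses /mnt/user0 (the array-only view of user shares) so you never mix /mnt/user and a disk path for the same file. The exact libvirt subfolder is an assumption here, so check your own layout, and stop the VM Manager (Settings → VM Manager) first:

```shell
# Sketch: move libvirt.img off the array onto the cache pool.
# Paths are assumptions based on the default system-share layout.
ls -lh /mnt/user0/system/libvirt/libvirt.img      # confirm the copy still on the array
mkdir -p /mnt/cache/system/libvirt
mv /mnt/user0/system/libvirt/libvirt.img /mnt/cache/system/libvirt/
```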

 

But the NFS share for unassigned drives is still chooched.

I was able to force it up temporarily by adding the entry to /etc/exports, but I'd prefer to get it working natively, as exports tends to wipe itself clear of any modifications.
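For reference, the temporary entry I mean looks something like this. The subnet and options here are assumptions, so adjust them to your network, and remember Unraid regenerates /etc/exports, so a manual entry won't survive:

```shell
# /etc/exports -- temporary manual entry (example subnet/options; adjust to yours)
"/mnt/disks/frigate" 192.168.1.0/24(rw,sync,no_subtree_check)

# then re-read the exports table without restarting NFS:
exportfs -ra
```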

1 hour ago, Original_Vecna said:

The shares were already "Cache ← Array", so I ran the mover. I checked the syslog, and nothing was copied over.

If you wanted the files on the cache then the mover should have been set to array->cache.

2 hours ago, Original_Vecna said:

That is exactly what I had done originally before changing to cache only. 

Brought down Docker (no VMs were actively running).

The shares were already "Cache ← Array", so I ran the mover. I checked the syslog, and nothing was copied over.

Changed to cache only. Restarted docker. 

I ended up moving libvirt.img from /mnt/user/system to /mnt/cache/system, and the Fix Common Problems error about system still having files on the array went away.

 

But the NFS share for unassigned drives is still chooched.

I was able to force it up temporarily by adding the entry to /etc/exports, but I'd prefer to get it working natively, as exports tends to wipe itself clear of any modifications.

Set "Enable NFS Export" to "Yes":

Screenshot 2024-04-27 182243.png


Thanks, that's exactly what I'm trying to do. When I do, I click Apply, the page refreshes, and the option is back to No.

 

Again, when I do that, the name and options of my disk in Unassigned Devices reset completely: Automount and Share are cleared, and the name "frigate" reverts to dev1. Syslog gives me:

 

Apr 27 11:25:40 server1 unassigned.devices: Warning: Unassigned Devices are not set to be shared with NFS.
Apr 27 11:25:54 server1 ool www[10136]: /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.settings 'nfs_settings'
Apr 27 11:25:54 server1 unassigned.devices: Updating share settings...
Apr 27 11:25:54 server1 unassigned.devices: Share settings updated.

 

  • 2 weeks later...
39 minutes ago, Original_Vecna said:

Got the SUCCESS message, but when I went back into UD and changed the NFS export to Yes, then hit Apply, the page refreshed and it was set to No. Same thing. I can't even change the name from dev1. Nothing will stick.

The changes you are making are made in the tmpfs file system in RAM and then copied to the flash for persistent storage. You have some kind of tmpfs file system problem. Post the Unraid diagnostics and the UD diagnostics. Get the UD diagnostics by going to a command line and typing 'ud_diagnostics'. Post the /flash/logs/ud_diagnostics.zip file along with the Unraid diagnostics.
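From a console session, collecting both bundles amounts to the two commands below (the plain `diagnostics` command is the standard Unraid one and, as I understand it, saves its zip to the flash drive's logs folder):

```shell
ud_diagnostics   # writes /flash/logs/ud_diagnostics.zip, per the instructions above
diagnostics      # standard Unraid diagnostics, saved under /boot/logs/
```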


@dlandon

Hate to be a pain, but ever since I got the new UD and was able to enable the NFS share for my UD, I'm having REALLY poor connection issues with it. My Frigate server just isn't writing to it. The drive goes to sleep, Home Assistant feeds are hit or miss; just really bad. I could previously sit on the main Unraid page and watch the writes climb every second with writes from my 10 CCTV cameras. Now? It barely increments.

 

This wasn't happening when I was juggling mounts previously. The service was excellent; it was the reliability that I occasionally had issues with (dropping connections).

Edited by Original_Vecna
2 minutes ago, Original_Vecna said:

@dlandon

Hate to be a pain, but ever since I got the new UD and was able to enable the NFS share for my UD, I'm having REALLY poor connection issues with it. My Frigate server just isn't writing to it. The drive goes to sleep, Home Assistant feeds are hit or miss; just really bad.

 

This wasn't happening when I was juggling mounts previously. The service was excellent; it was the reliability that I occasionally had issues with (dropping connections).

You'll have to post UD diagnostics so I can take a look at it.


At this point I don't think it has anything to do with you...I just don't know why all of a sudden this is happening. 

If I run dd if=/dev/zero of=/mnt/frigate_nfs/testfile bs=1M count=100 on the Frigate server, it writes just fine, and if I spam it, the Unraid UD writes increment like they should.

So the UD NFS share IS working... it's like Frigate forgot what to do. I've rebooted the server, verified the share is mounted properly, and I can touch and otherwise write to the share...
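If it helps anyone reproduce the check, here's a parameterized version of that dd test. TARGET defaults to /tmp only so the sketch runs anywhere; point it at the NFS mount on the Frigate host:

```shell
# Write sanity check for the NFS mount (TARGET is an assumption; set it to your mount).
TARGET="${TARGET:-/tmp}"                    # e.g. TARGET=/mnt/frigate_nfs
dd if=/dev/zero of="$TARGET/ud_testfile" bs=1M count=10 conv=fsync
ls -l "$TARGET/ud_testfile"                 # should show 10485760 bytes
rm -f "$TARGET/ud_testfile"
```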

 

Real noodle scratcher.

