Original_Vecna Posted April 27

Today I changed the shares appdata, domains, syslog, and system from "Cache <-- Array" to cache only to speed things up. Before making the change I brought down Docker, ran the mover, made the change, and brought Docker back up. Everything came up fine, except the unassigned device (a CCTV drive, not in the array) that I use for my Frigate server (which does not run on Unraid).

The unassigned device lost its name: it is now "dev1" where it used to be "frigate". In its settings, Share and Automount are turned off. I turn them on, add the name back, and hit Apply. Task completed, I click Done.

Now, I go to the Unassigned Devices settings under Settings, and under NFS Settings, "Enable NFS export" is set to No. I set it to Yes and click Apply. The page refreshes, and it is set back to No. When I set the option to Yes, the syslog shows:

Apr 27 11:24:41 server1 unassigned.devices: Set Disk Name on 'ST6000VX009-2ZR186_ZVY00GYH (sdf)' to 'frigate'
Apr 27 11:24:51 server1 ool www[4265]: /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.settings 'nfs_settings'
Apr 27 11:24:52 server1 unassigned.devices: Updating share settings...
Apr 27 11:24:52 server1 unassigned.devices: Share settings updated.
Apr 27 11:25:35 server1 unassigned.devices: Set Disk Name on 'ST6000VX009-2ZR186_ZVY00GYH (sdf)' to 'frigate'
Apr 27 11:25:39 server1 unassigned.devices: Mounting partition 'sdf1' at mountpoint '/mnt/disks/frigate'...
Apr 27 11:25:39 server1 unassigned.devices: Mount cmd: /sbin/mount -t 'xfs' -o rw,relatime '/dev/sdf1' '/mnt/disks/frigate'
Apr 27 11:25:40 server1 kernel: XFS (sdf1): Mounting V5 Filesystem
Apr 27 11:25:40 server1 kernel: XFS (sdf1): Ending clean mount
Apr 27 11:25:40 server1 unassigned.devices: Successfully mounted '/dev/sdf1' on '/mnt/disks/frigate'.
Apr 27 11:25:40 server1 unassigned.devices: Adding SMB share 'frigate'.
Apr 27 11:25:40 server1 unassigned.devices: Warning: Unassigned Devices are not set to be shared with NFS.
Apr 27 11:25:54 server1 ool www[10136]: /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.settings 'nfs_settings'
Apr 27 11:25:54 server1 unassigned.devices: Updating share settings...
Apr 27 11:25:54 server1 unassigned.devices: Share settings updated.

When I go back to Main, under Unassigned Devices, the drive that was labeled "frigate" is again "dev1", and the Share and Automount options are unchecked. I've tried setting the shares back to "Cache <-- Array" and running the mover, and nothing is fixing it. It's completely messed up. How do I get the system shares running on cache only and get my NFS share back? It's incredibly important.
Original_Vecna Posted April 27

I have also tried rebooting the server.
Original_Vecna Posted April 27

Fix Common Problems also reports: "Share system set to cache-only, but files / folders exist on the array. You should change the share's settings appropriately or use the Dolphin / Krusader docker applications to move the offending files accordingly. Note that there are some valid use cases for a set up like this."

I've followed the linked post, but what it says to do doesn't exist anymore: "Prefer" is not an option. I am running Unraid 6.12.10, and that option apparently no longer exists.
dlandon Posted April 27

Be sure you have the latest version of UD, then go to a command line and type 'ud_diagnostics'. Post the /flash/logs/ud_diagnostics.zip file.
itimpi Posted April 27

46 minutes ago, Original_Vecna said: "But what this post says to do doesn't exist anymore. 'Prefer' is not an option. I am running Unraid 6.12.10. This option no longer exists apparently."

You should refer to the current documentation that covers that here.
Original_Vecna Posted April 27

4 hours ago, dlandon said: "Be sure you have the latest version of UD, then go to a command line and type 'ud_diagnostics'. Post the /flash/logs/ud_diagnostics.zip file."

I am currently running 2024.04.22. Attached is the ud_diagnostics. Thank you!

ud_diagnostics-20240427-163444.zip
Original_Vecna Posted April 27

4 hours ago, itimpi said: "You should refer to the current documentation that covers that here."

That is exactly what I had done originally before changing to cache only. I brought down Docker (no VMs were actively running); the shares were already "Cache <-- Array", so I ran the mover. I checked the syslog, and nothing was copied over. I changed to cache only and restarted Docker.

I ended up moving libvirt.img from /mnt/user/system to /mnt/cache/system, and the Fix Common Problems error about system still having files on the array went away. But the NFS share for unassigned drives is still chooched. I was able to force it up temporarily by adding the entry to /etc/exports, but I'd prefer to get it working natively, as exports tends to wipe itself clear of any modifications.
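For anyone finding this thread later, the /etc/exports stopgap described above looks roughly like this. The mount point and client IP come from elsewhere in this thread; treat them as placeholders for your own setup, and note that this is a sketch, not a real fix:

```shell
# Stopgap sketch: UD manages /etc/exports itself, so a hand-added line
# is wiped whenever the plugin rewrites the file (as noted above).
echo '/mnt/disks/frigate 192.168.1.79(rw,sec=sys,insecure_locks,sync,no_root_squash)' >> /etc/exports
exportfs -ra   # re-read /etc/exports without restarting the NFS server
exportfs -v    # confirm the share is now listed with its options
```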
itimpi Posted April 27

1 hour ago, Original_Vecna said: "the shares were already 'Cache <-- Array', so I ran the mover. I checked the syslog, and nothing was copied over."

If you wanted the files on the cache, then the mover should have been set to Array --> Cache.
Original_Vecna Posted April 27

Correct, but on the Shares screen it's listed as "Cache <-- Array"; that's what I was referring to, and what the shares were set to.
dlandon Posted April 27

2 hours ago, Original_Vecna said: "...the NFS share for unassigned drives is still chooched. I was able to force it up temporarily by adding the entry to /etc/exports, but I'd prefer to get it working natively..."

Set the "Enable NFS Export" setting to "Yes":
Original_Vecna Posted April 27

Thanks, that's exactly what I'm trying to do. When I do it, I click Apply, the page refreshes, and the option is back to No. And when I do that, the name and options of my disk in Unassigned Devices reset completely: Automount and Share are unchecked, the name "frigate" reverts to "dev1", and the options are lost. The syslog shows:

Apr 27 11:25:40 server1 unassigned.devices: Warning: Unassigned Devices are not set to be shared with NFS.
Apr 27 11:25:54 server1 ool www[10136]: /usr/local/emhttp/plugins/unassigned.devices/scripts/rc.settings 'nfs_settings'
Apr 27 11:25:54 server1 unassigned.devices: Updating share settings...
Apr 27 11:25:54 server1 unassigned.devices: Share settings updated.
Original_Vecna Posted Tuesday at 01:32 AM

Can anyone help, please? I thought I had a workaround, but the server keeps having issues and keeps dropping NFS shares. I can't get the "Enable NFS export" option in Unassigned Devices to stay on Yes. I set it to Yes, the page refreshes, and it's back to No.
dlandon Posted Tuesday at 01:48 AM

Try clicking on the double arrows on the UD webpage and see if that sorts it out.
Original_Vecna Posted Tuesday at 11:48 AM

9 hours ago, dlandon said: "Try clicking on the double arrows on the UD webpage and see if that sorts it out."

I got the SUCCESS message, but when I went back into UD and changed the NFS export to Yes, then Apply, the page refreshed and it was set to No. Same thing. I can't even change the name from "dev1". Nothing will stick.
dlandon Posted Tuesday at 12:33 PM

39 minutes ago, Original_Vecna said: "Got the SUCCESS message, but when I went back into UD, and changed the NFS export to Yes, then Apply, page refreshed and it was set to No. Same thing. I can't even change the name from Dev1. Nothing will stick."

The changes you are making are made in the tmpfs file system in RAM and then copied to the flash for persistent storage. You have some kind of tmpfs file system problem. Post the Unraid diagnostics and the ud_diagnostics: go to a command line and type 'ud_diagnostics', then post the /flash/logs/ud_diagnostics.zip file along with the Unraid diagnostics.
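A rough way to check whether a change actually survived the RAM-to-flash copy described above is to look at the plugin's config file on the flash device. The path below is my assumption about where UD keeps its persistent settings (Unraid plugins normally persist config under /boot/config/plugins/), so adjust it to whatever your install actually uses:

```shell
# Path is an assumption, not confirmed in this thread.
cfg=/boot/config/plugins/unassigned.devices/unassigned.devices.cfg
grep -i nfs "$cfg"   # does the flash copy reflect the NFS setting you just applied?
ls -l "$cfg"         # modification time shows whether the file was actually rewritten
```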
Original_Vecna Posted Tuesday at 04:06 PM

Here ya go! Thank you so much!

ud_diagnostics-20240507-115037.zip
tomcat-diagnostics-20240507-1155.zip
dlandon Posted Tuesday at 07:19 PM (Solution)

I see the issue and will have a fix later today.
Original_Vecna Posted Tuesday at 11:08 PM

YOU ARE MY HERO! THANK YOU!
Original_Vecna Posted Wednesday at 01:03 PM

17 hours ago, dlandon said: "I see the issue and will have a fix later today."

@dlandon That did it! If I may ask, what was wrong with my server?
dlandon Posted Wednesday at 01:06 PM

It wasn't you. Some time ago I made a change that broke it, and it hasn't come up until now. It was pretty messy, as it actually deleted all the configuration.
Original_Vecna Posted Wednesday at 01:13 PM

Great to hear it wasn't me. LOL. Thank you so much! This has been a thorn in my side for some time now.
Original_Vecna Posted Wednesday at 10:09 PM

@dlandon Hate to be a pain, but ever since I got the new UD and was able to enable the NFS share for my unassigned device, I'm having REALLY poor connection issues to it. My Frigate server just isn't writing to it: the drive goes to sleep, Home Assistant feeds are hit or miss, it's just really bad. I could previously sit on the Main Unraid page and watch the writes climb every second from my 10 CCTV cameras. Now it barely increments. This wasn't happening when I was juggling mounting previously: the service was excellent; it was the reliability (dropping connections) that I occasionally had issues with.
dlandon Posted Wednesday at 10:13 PM

You'll have to post UD diagnostics so I can take a look at it.
Original_Vecna Posted Wednesday at 10:18 PM

At this point I don't think it has anything to do with you; I just don't know why all of a sudden this is happening. If I run:

dd if=/dev/zero of=/mnt/frigate_nfs/testfile bs=1M count=100

on the Frigate server, it writes just fine, and if I spam it, the Unraid UD writes increment like they should. So the UD NFS share IS working; it's like Frigate forgot what to do. I've rebooted the server, verified the share is mounted properly, and can touch and otherwise write to the share. A real noodle-scratcher.
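The dd one-liner above can be wrapped into a small reusable write check. TESTDIR and the mount path are assumptions; conv=fsync is added so the reported rate reflects data actually reaching the target rather than sitting in the client's page cache:

```shell
# Same kind of quick write test as above; point TESTDIR at the NFS mount
# (e.g. /mnt/frigate_nfs) or at a local path to compare throughput.
TESTDIR=${TESTDIR:-/tmp}
result=$(dd if=/dev/zero of="$TESTDIR/testfile" bs=1M count=100 conv=fsync 2>&1 | tail -n1)
echo "$result"                 # final dd line: bytes copied, elapsed time, rate
rm -f "$TESTDIR/testfile"      # clean up the test file
```

Running it once against the local disk and once against the NFS mount gives a rough baseline for whether the slowdown is on the network side.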
Original_Vecna Posted Wednesday at 11:02 PM

Needed to add:

192.168.1.79(rw,sec=sys,insecure_locks,sync,no_root_squash)

I also added the NFS share directly in my Docker container, removing the extra mounting of the NFS share in the Linux system and handling it directly in docker-compose. Works PERFECT now! Thanks again!
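For reference, letting Docker manage the NFS mount itself (instead of a host-level mount passed through as a bind) can be sketched like this. The server address, export path, container path, volume name, and image tag are all assumptions for illustration, not taken from the actual compose file:

```shell
# Sketch: Docker's local volume driver can mount NFS directly.
# 'server1' (the Unraid host), the export path, and the image are assumptions.
docker volume create \
  --driver local \
  --opt type=nfs \
  --opt o=addr=server1,rw,sync \
  --opt device=:/mnt/disks/frigate \
  frigate_nfs

# Attach the NFS-backed volume where the container expects its media.
docker run -d --name frigate \
  -v frigate_nfs:/media/frigate \
  ghcr.io/blakeblackshear/frigate:stable
```

The equivalent `driver_opts` go under a top-level `volumes:` entry in docker-compose; the upside either way is that Docker performs the mount when the container starts, so a share that comes up late doesn't leave the container writing into an empty directory.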