Report Comments posted by wildfire305
-
2 hours ago, JorgeB said:
Please post the output of:
zfs mount
root@CVG02:~# zfs mount
snapshot/rsnapshot    /mnt/snapshot/rsnapshot
snapshot              /mnt/snapshot
snapshot/NVR          /mnt/snapshot/NVR
snapshot/cachemirror  /mnt/snapshot/cachemirror
root@CVG02:~#
This is where I originally had them mounted before the upgrade to 6.12.
They have now also been mounted in /mnt/user/* by 6.12, although that doesn't show up in zfs mount or regular mount.
Those are the shares created by Unraid 6.12. For example, when I go into the folders in /mnt/snapshot/cachemirror or /mnt/user/cachemirror, the same files appear in both.
It's the rsnapshot one that is blank.
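For anyone wanting to verify the same thing, a quick sketch that compares the pool path against the user-share path (paths taken from this thread; it just reports if they aren't present on the machine; requires bash for process substitution):

```shell
# Compare the two views of the same dataset (names from this thread).
if [ -d /mnt/snapshot/cachemirror ] && [ -d /mnt/user/cachemirror ]; then
  if diff <(ls /mnt/snapshot/cachemirror) <(ls /mnt/user/cachemirror) >/dev/null; then
    result="same contents"
  else
    result="contents differ"
  fi
else
  result="paths not present"   # e.g. when run on a non-Unraid box
fi
echo "$result"
```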
-
And for comparison, this one mounted just fine.
-
Wizardry:
zfs get all snapshot/rsnapshot > rsnapshot_info.txt
That has all the details about the dataset. It mounts as an empty folder now, but zfs list shows it still has the 9TB of data.
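If `zfs get all` is too noisy, a narrower query (dataset name from this thread; guarded so it degrades gracefully where zfs isn't installed) pulls only the properties that govern mounting and sharing, and `zfs list` confirms the data is still allocated even though the folder shows empty:

```shell
# Sketch: inspect just the mount/share-related properties of the dataset.
if command -v zfs >/dev/null 2>&1; then
  # The properties most relevant to a dataset that mounts empty:
  zfs get -o property,value mountpoint,canmount,mounted,sharenfs snapshot/rsnapshot
  # Space accounting still shows the data even if the folder looks empty:
  zfs list -o name,used,avail,mountpoint snapshot/rsnapshot
  checked="yes"
else
  checked="no-zfs"   # non-ZFS machine
fi
echo "$checked"
```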
-
2 minutes ago, trurl said:
Minimum Free doesn't apply to cache-only shares since Unraid isn't allowed to choose another disk if it gets too full.
Understood, that makes sense for shares. Is the minimum disk space only changeable if the array is stopped?
-
I have no concerns about data loss on this dvr disk, but it is concerning that I cannot set the minimum free space. Have I found another bug?
-
Minimum free space is also unchangeable on my main cache pool. Both pools are btrfs. There is one share set to use only the DVR cache pool, which has one disk in it, and I cannot change the minimum free space there either. All of my other shares, whether they use the main cache pool or go directly to disk, can be changed, and I have set them all previously.
-
Both minimum free space for the share and the disk are greyed out and not changeable.
-
2 hours ago, Squid said:
Does stopping / starting the array fix it? (Or try using 100)
Would rebooting count towards this? If so, I've rebooted about four or five times since I made the setting. I looked at my notification logs (Slack app), and it also notified me at 71% and at every percent past 90. So the zero setting doesn't work as the note describes. This is also a pool device, not part of the main storage array; I used a pool instead of Unassigned Devices so I could have more control over sharing.
-
Well, setting it to 100 worked to effectively disable the notifications; it updated to say that everything was fine. Then setting it back to 0 caused it to send out disk-full warning notifications again. I'm going to leave it at 100, but perhaps the mouseover instructions could be updated to reflect that, if this isn't really a problem to be fixed.
-
I will try that and report back. I'm waiting on a 7 TB file copy at the moment, the first backup onto new media.
One of three ZFS datasets not showing up properly after import. (solved - sharenfs feature)
in Stable Releases
Posted · Edited by wildfire305
I got it!!!
That share was incompatible because I had the sharenfs feature turned on. After turning off that feature AND rebooting, I now have access to the files.
root@CVG02:/mnt/snapshot/rsnapshot# ls
alpha.0/  alpha.2/  alpha.4/  beta.0/  beta.2/  beta.4/  beta.6/
alpha.1/  alpha.3/  alpha.5/  beta.1/  beta.3/  beta.5/
root@CVG02:/mnt/snapshot/rsnapshot#
So officially, the "BUG" is that Unraid 6.12 is not compatible with the ZFS sharenfs feature.
Which probably isn't something that needs to be fixed; just use NFS sharing on the share created by Unraid instead.
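For anyone hitting the same empty-mount symptom, the fix above boils down to these commands (dataset name from this thread; needs root, and a reboot instead of the remount also works):

```shell
# Sketch: turn off the ZFS-level NFS export so Unraid can manage the share.
if command -v zfs >/dev/null 2>&1; then
  zfs get sharenfs snapshot/rsnapshot      # inspect the current value
  zfs set sharenfs=off snapshot/rsnapshot  # disable the property
  zfs unmount snapshot/rsnapshot && zfs mount snapshot/rsnapshot  # remount
  applied="done"
else
  applied="no-zfs"                         # non-ZFS machine
fi
echo "$applied"
```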
Thanks again for integrated ZFS support and continuing to make what I consider to be the most flexible of all the home server operating systems that caters to all tech levels.