Report Comments posted by limetech
-
16 minutes ago, BRiT said:
What about other plugins, would they need to hack around these limitations too?
If they are involved in shutting down the server, yes.
-
There is a problem: during shutdown we definitely want to cleanly unmount the USB flash, which is mounted at /boot. However, to do that we must first unmount the two squashfs file systems: /usr and /lib.
Does that 'rc.nut shutdown' operation kill power immediately? If so, any executable it uses must get moved off of /usr, maybe put into /bin.
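A sketch of the workaround being discussed: copy any shutdown-critical binary off /usr before the squashfs file systems are unmounted. The function name and example paths are illustrative, not Unraid's actual shutdown code.

```shell
# Relocate a binary so it stays available after /usr is unmounted.
# (Illustrative sketch only; not Unraid's shutdown script.)
relocate_binary() {
    # $1 = source binary, $2 = destination directory on the root fs
    if [ -x "$1" ]; then
        cp -p "$1" "$2/" || return 1
    fi
}

# e.g. relocate_binary /usr/sbin/upsdrvctl /bin
```

A plugin hooking shutdown would call this before the squashfs unmount step, then invoke the relocated copy.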
-
Thank you for the report, yes indeed a bug here.
Quote: our NUT plugin (for UPS) being unable to shut down the UPS inverter because the binary call to /usr/sbin/upsdrvctl is made impossible by the premature unmounting of /usr
From where is the call to this executable made, is it via /sbin/genpowerd ?
-
34 minutes ago, Taddeusz said:
I changed the AutoMounter host from unraid.bean.local to 192.168.22.90 and that worked. Why would it not work using the fq host name?
'avahi' is the name of the Linux package that implements the mDNS protocol, and 'bonjour' is the name of the macOS package that does the same. These packages are responsible for resolving network hostnames that end in ".local". If 'avahi' is not configured properly then name resolution won't work, but you can always refer to a host by its IP address. If disabling IPv6 does not work for you, please post the contents of this file from your server:
/etc/avahi/avahi-daemon.conf
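For reference, the IPv6 toggle lives in the [server] section of that file; a minimal fragment (exact defaults vary by distribution):

```ini
[server]
use-ipv4=yes
use-ipv6=no
```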
-
On 5/17/2023 at 8:40 PM, Misty said:
Please add CONFIG_FANOTIFY support in the Linux kernel. I'm noticing some strange disk spin-ups in recent builds, but without fanotify it's painful to troubleshoot (can only use inotify recursively, which prevents disks from ever spinning down).
Actually, most new kernels have this config enabled (Ubuntu, Debian, recent versions of RedHat). As Unraid is mainly oriented towards storage, there's no reason to keep this option off. This is added in the next release (rc7); please let me know how you're using this to track down disk spin-up issues 👍
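In practice fanotify is usually consumed through a tool like fatrace, whose output can then be tallied to see which process is waking a disk. A hedged Python sketch: it assumes fatrace's usual "name(pid): OPS /path" line format, and the sample lines below are made up.

```python
import re
from collections import Counter

# Parse fatrace-style output lines ("procname(pid): OPS /path") and
# count file accesses per process. The line format is an assumption
# based on fatrace's documented output; the sample data is invented.
LINE_RE = re.compile(r"^(?P<proc>\S+)\((?P<pid>\d+)\):\s+(?P<ops>\S+)\s+(?P<path>/.*)$")

def accesses_per_process(lines):
    counts = Counter()
    for line in lines:
        m = LINE_RE.match(line)
        if m:
            counts[m.group("proc")] += 1
    return counts

sample = [
    "smbd(1234): R /mnt/disk1/media/movie.mkv",
    "smbd(1234): R /mnt/disk1/media/movie.mkv",
    "updatedb(567): O /mnt/disk2/backups",
]
print(accesses_per_process(sample))  # smbd: 2, updatedb: 1
```

Feeding it real `fatrace` output captured while a disk unexpectedly spins up points straight at the offending process.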
-
5 hours ago, primeval_god said:
That is not ideal. The reason is that I have no need for IO performance beyond what shfs already provides. I would much rather have the functionality that is mentioned under the exclusive share restrictions (in particular the ability to make changes and add folders to other disks without having to stop and start the array).
Actually I neglected to update the 'Restrictions' under 'Exclusive shares'. This is the only restriction which still applies:
- Both the share Min Free Space and pool Min Free Space settings are ignored when creating new files on an exclusive share.
The symlink is 'dynamic', meaning that when a share path is traversed, it immediately checks whether the share exists on only one volume; if so, it returns a symlink, else it returns a normal directory.
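You can observe which form a share path currently takes from userspace. A minimal sketch (the path-handling is generic; any /mnt/user paths you pass in are illustrative):

```python
import os

def share_kind(path):
    """Report how a share path is currently presented.

    islink() uses lstat(), which does not follow symlinks, so it
    distinguishes the 'dynamic symlink' case from a normal directory.
    """
    if os.path.islink(path):
        return "symlink -> " + os.readlink(path)
    if os.path.isdir(path):
        return "directory"
    return "missing"
```

For example, `share_kind("/mnt/user/media")` would show whether that share is being presented directly or as a plain directory.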
-
18 minutes ago, Nogami said:
Just wondering why it's recommended to erase and re-create zpools? My RC5 ZFS pools seemed to come in OK. Am I overlooking something basic (bit of a ZFS newb).
This is only for zpools created with 6.12.0-beta5 (not -rc5) which was the first "beta" which had zfs pool support.
-
3 hours ago, apandey said:
Great, appreciate the speed of new RCs turning up
For those of us who have zfs via plugin on 6.11.5, is there a way to pre-check if those will be importable into 6.12? I understand that datasets get created when defining shares, but if i already have vdevs / datasets, how will they be interpreted by 6.12 initially and will there be cases where they could be rejected and require recreating pools?
Importing an existing pool does not change anything in the pool, except possibly compression on/off and autotrim on/off. But note that all top-level directories will be interpreted as shares, whether they are actual directories or datasets. When/if you later create a share in that pool, it will be created as a dataset.
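Since every top-level entry becomes a share, it can be worth listing them before importing. A hedged sketch (the mountpoint argument is illustrative):

```python
import os

def prospective_shares(pool_mountpoint):
    """List top-level directories of a pool mountpoint.

    On import, each of these would be interpreted as a share,
    whether it is a plain directory or a ZFS dataset.
    """
    return sorted(
        name for name in os.listdir(pool_mountpoint)
        if os.path.isdir(os.path.join(pool_mountpoint, name))
    )
```

For example, `prospective_shares("/mnt/tank")` on an imported pool shows exactly the share names Unraid would pick up.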
-
2 hours ago, apandey said:
Right now, for a cache=only share, we have /mnt/user/share shfs path and /mnt/poolname direct mount.
/mnt/poolname/share
2 hours ago, apandey said:
Am I correct to understand that the new exclusive share feature will turn /mnt/user/share to be a direct mount?
Yes. It's a bind mount.
2 hours ago, apandey said:
Will the other path still be available?
Yes.
2 hours ago, apandey said:
Does the user need to make any path adjustments?
No.
2 hours ago, apandey said:
will the upgrade do something about it?
No need.
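One way to confirm the change is to ask findmnt whether the share path is itself a mount point (a bind mount shows up as one, an shfs subpath does not). A sketch; the /mnt/user path is illustrative:

```shell
# True if the given path is itself a mount point, e.g. the bind
# mount backing an exclusive share. (Path below is illustrative.)
is_mountpoint() {
    findmnt -n "$1" >/dev/null 2>&1
}

# e.g. is_mountpoint /mnt/user/share && echo "bind-mounted"
```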
-
23 hours ago, aim60 said:
Sometime in the future, might shares on array disks, that have been confined to one disk (via Included Disks) be considered as Exclusive, and be bind-mounted.
Considered that, but writes to Unraid array disks are greatly throttled by read/modify/write parity updates. It might benefit reads, but you can enable 'disk shares' and get the same benefit.
-
4 hours ago, JonathanM said:
Is the quoted text also in the inline help system next to the exclusive setting?
For "Exclusive access" the help text reads:
When set to "yes" indicates a bind-mount directly to a pool has been set up for the share in the /mnt/user tree, provided the following conditions are met:
- Primary storage is a pool.
- Secondary storage is set to none.
- The share does not exist on any other volumes.
-
25 minutes ago, JonathanM said:
That's fine, I just wanted to confirm the behaviour so we can help people who will inevitably have a container writing to the wrong location.
Note this is documented in the Release Notes:
"If the share directory is manually created on another volume, files are not visible in the share until after array restart, upon which the share is no longer exclusive."
Should we add anything to that description?
-
1 hour ago, JonathanM said:
does the "additional check" force exclusive access to NO, and continue to work as before showing a fuse mount of array and pool content?
Correct!
-
Published 6.12.0-rc4.1, which fixes a dumb coding error when checking for bind-mounts if a share name contains a space.
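Bugs like that typically come from unquoted shell expansion. A minimal illustration (not the actual Unraid code): a share name with a space splits into two words unless the expansion is quoted.

```shell
# Unquoted expansion splits a share name containing a space into two
# words; quoting keeps it as one path. (Illustration only.)
share="my share"
unquoted_words=$(printf '%s\n' /mnt/user/$share | wc -l)   # word-split: 2 lines
quoted_words=$(printf '%s\n' "/mnt/user/$share" | wc -l)   # preserved: 1 line
echo "$unquoted_words $quoted_words"
```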
-
1 hour ago, mdrjr said:
I've been an Unraid customer since v5, love it, and every time it gets better!
Thank you!
1 hour ago, mdrjr said:
1. Able to have Primary Storage as Cache drive and Secondary storage as a Pool (zfs)
That's coming, and will be implemented at the same time we implement multiple Unraid pools - then everything is a pool.
1 hour ago, mdrjr said:
2. Allow me to change the ZFS compression from lz4 to zstd, as it resets every time the array stops
Everything I've read says 'lz4' is "better"... but sure, we will implement a way to specify the algorithm. Curious: what do you mean by "it resets every time the array stops"?
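For anyone wanting zstd in the meantime, the standard ZFS property commands look like this (the pool name is illustrative; and, per the report above, a hand-applied setting may be re-applied by Unraid at array start):

```shell
zfs set compression=zstd tank
zfs get compression tank
```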
-
Reserved
-
@xaositek please reboot in 'safe' mode and see if the issue persists. Due to the php v7 to v8 upgrade there may be an issue with a plugin.
-
20 minutes ago, dlandon said:
Very strange. I set up 100000 max open files and it worked. Testparm showed the new value and didn't issue a warning. smbd picked up the large value.
Sure, you can set a larger value; that doesn't mean it uses it. You can turn on Samba debugging and see if that "file_init_global: Information only: requested ..." message appears.
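The relevant smb.conf lines, as a fragment (values illustrative; "max open files" and "log level" are standard smb.conf parameters):

```ini
[global]
    max open files = 16384
    log level = 3
```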
-
14 hours ago, limetech said:
Samba has an absolute max of 65535.
Got this from looking at the source:
https://github.com/samba-team/samba/blob/master/source3/smbd/files.c#L1401
-
1 hour ago, 1812 said:
A Mac photo library that is of any substantial size (like 175GB for example) blows past the 40964 open file limit
Samba has an absolute max of 65535.
-
5 hours ago, kubed_zero said:
I've got zero extra Samba custom configurations/files, so this is running with vanilla Unraid as far as I'm aware.
Thank you for your report. I have TM backups running OK with Monterey (12.6). At first I thought it was because the share is marked 'public'. Changed it to 'private' and there were some connectivity issues, but eventually I poked around the TM preferences and got it to work again... so the investigation continues...
-
Changed Status to Solved
-
Not being ignored, but not everyone running a Win11 VM has this issue, so where to look?
How are you shutting down, from within windows or via VM 'Stop' dropdown option?
[6.12.3] /usr, /lib unmounting results in unavailable binaries for clean shutdown
in Stable Releases
Posted
Maybe another solution ... testing