Report Comments posted by hawihoney
-
9 hours ago, dlandon said:
I am able to reproduce the issue. It appears when you remote mount a share from a server that is not another Unraid.
In my case these were Unraid Servers running 6.12.9 as well. I did upgrade them all yesterday. All have SMB mounts to each others disks (disk shares) via Unassigned Devices.
They all showed this error. The mounts were there, but showed only the first directory per mount. syslog was full of CIFS VFS errors.
After downgrading all machines to 6.12.8, it's business as usual again.
-
39 minutes ago, JorgeB said:
That should be reported in the UD plugin support thread.
I'm quite sure that this is related to the bug linked below. syslog shows system errors; the result is Unassigned Devices not working properly, IMHO.
-
7 minutes ago, dlandon said:
Someone post diagnostics when this occurs so I can investigate further.
I did post diagnostics in my bug report:
-
Confirmed. Back to 6.12.8 as well.
-
49 minutes ago, CiscoCoreX said:
Two very different hardware here.
Just try it with these two settings reversed and MACVLAN. The worst that can happen is a crash within 24 hours; that would mean you are affected by the MACVLAN crashes. AFAIK this is a Docker problem. I'm done with this and have to live with a 10-15 second delay for now.
-
On 9/11/2023 at 8:45 PM, CiscoCoreX said:
Settings > Network Settings > eth0 > Enable Bridging = No
Settings > Docker > Host access to custom networks = Enabled
These two are the problem. This workaround was recommended in 6.12.x for users experiencing MACVLAN crashes. You don't need them if a.) you are on IPVLAN, or b.) you didn't experience MACVLAN crashes, or c.) you don't use a router like the Fritzbox common in Europe.
-
12 hours ago, CiscoCoreX said:
Settings > Network Settings > eth0 > Enable Bonding = Yes
Settings > Network Settings > eth0 > Enable Bridging = No
Settings > Docker > Host access to custom networks = Enabled
Ah, I made these changes on 6.12.4 as well and have to wait approx. 10 sec most of the time - but not always. Sometimes even the login window needs 10 sec before it appears.
-
26 minutes ago, ljm42 said:
update_cron is called by Unraid to enable `mover`, but User Shares are disabled on this server so `mover` is not needed.
No User Shares, no Mover. It worked on 6.11.5 and stopped working with 6.12.4. This is a breaking change, e.g. for plugins that rely on update_cron to add their own schedules to cron.
I will call update_cron from a User Script within the User Scripts plugin to work around this change. No big deal for me, but be prepared for plugins or users that stumble over this change.
BTW, why not always call update_cron during system start, just in case somebody needs it?
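The workaround mentioned above could look something like this as a User Scripts entry. This is only a sketch under my assumptions about update_cron's role (it reassembles the cron entries that plugins leave waiting to be activated); the guard keeps the script harmless on systems where the helper is missing:

```shell
#!/bin/bash
# Hypothetical User Scripts entry, scheduled "At First Array Start Only".
# update_cron is Unraid's helper that activates pending plugin cron entries;
# calling it once after array start replaces what 6.11.5 did automatically.
if command -v update_cron >/dev/null 2>&1; then
    update_cron && echo "plugin cron entries activated"
else
    echo "update_cron not available (not an Unraid system?)"
fi
```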
-
25 minutes ago, itimpi said:
Perhaps many people will not need the workaround anyway if they have a plugin (such as my Parity Check Tuning plugin) that issues this as part of its install process as the update_cron command is not plugin specific - it should pick up all cron jobs that any plugin has waiting to be activated.
I wrote to @Squid in his User Scripts thread about our conversation here. Perhaps you two can talk about that (calling update_cron during plugin installation).
I only have 4 plugins installed on these machines - only what's really required. None of my plugins, except User Scripts, has to set schedules. And I don't want to install a plugin I don't need just to call update_cron. So I think it's up to this plugin to fix that. And if 6.13 will be picky during startup, I think it's even more important to address this.
-
47 minutes ago, itimpi said:
There is also the possibility of a plugin that wants cron entries to be added to run this as part of installing the plugin (I know I do this for the Parity Check Tuning plugin).
I can add update_cron as a user script (that's fired during array start) to the User Scripts plugin. This would be a workaround.
-
1 hour ago, bonienl said:
Apply the changes AFTER updating
@ich777 asked me to add the following to my question above:
I'm running three Unraid servers: one on bare metal and two as VMs on that bare-metal server. These two VMs act as DAS (Direct Attached Storage, accessed through SMB) only - just the Array. No Docker containers, no VMs.
His idea is that bridging needs to be enabled on the Unraid VMs - it already is.
Currently:
Unraid Bare metal: Bonding=no, Bridging=yes, br0 member=eth0, VLANs=no
Unraid VMs: Bonding=yes (active_backup(1)), Bridging=yes, VLANs=no
Docker Bare metal: MACVLAN, Host access=no
Docker on VMs: Disabled
Is this ok? It has been running happily for years, currently on Unraid 6.11.5 with a Fritzbox on IPv4-only DSL.
-
Quote
Settings -> Network Settings -> eth0 -> Enable Bridging = No
Settings -> Docker -> Host access to custom networks = Enabled
Is it recommended to make these changes in 6.11.5 before the update from 6.11.5 to 6.12.4-rc18? Or should I update to 6.12.4-rc18 first and apply these changes afterwards?
-
On 8/1/2023 at 7:58 AM, Niklas said:
The notification says 7h 35m. It should be 7 hr 42 min.
Did you pause and resume parity check? Just an idea...
-
11 hours ago, emrepolat7 said:
~/.config/remmina
~/.config/sakura
remmina --> https://remmina.org/ (Remote Desktop Client)
sakura --> http://www.troubleshooters.com/linux/sakura.htm (Terminal Emulator)
If the software is not included in stock Unraid, perhaps the config files were included by accident?
-
Just a question before staying with 6.11.5 forever:
Does that "Exclusive Shares" thing affect all kinds of shares, or is it just shares below /mnt/user/ that are affected?
I use three docker containers that have access to everything. They have access to:
- Local disks (e.g. /mnt/disk1/Share/ ...)
- Local pools (e.g. /mnt/pool_nvme/Share/ ...)
- SMB remote shares via Unassigned Devices (e.g. /mnt/remotes/192.168.178.101_disk1/Share/ ...)
- Own mount points (e.g. /mnt/addons/gdrive/Share/ ...)
- Local external disks via Unassigned Devices (e.g. /mnt/disks/pool_ssd/ ...)
- But no /mnt/user/
So I gave these three containers access to everything because there are 75 access points involved:
- /mnt/ --> /mnt/ (R/W/slave) for two containers
- /mnt/ --> /mnt/ (R/slave) to the third container
It would be a huge intervention to change that, because it would require 75 additional path mappings for each of these three containers. My box has been running happily that way for years, because it was free of User Shares and the performance impact they had under some circumstances in the past.
Can somebody please shed some light on this special new share treatment for me:
- Is it /mnt/user/ only that is affected or does it affect all kinds of shares (disk, disks, remotes, addons) ?
- Can I switch that off globally (Exclusive share - symlink) ?
- Will I see problems with these symlinks within containers when using /mnt/ -> /mnt/ without /mnt/user/ involved ?
And yes, all path mappings have a trailing '/' applied.
Many thanks in advance.
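For anyone checking their own setup against the first question above: my understanding (an assumption, not confirmed by the release notes quoted here) is that an exclusive share shows up as a symlink under /mnt/user/, while disk shares, UD mounts and custom mount points stay regular directories. A small sketch to list which is which; the function and its name are mine, and on Unraid you would point it at /mnt/user:

```shell
#!/bin/bash
# Hypothetical helper: classify entries of a share root as symlink
# (exclusive share) vs. regular directory (normal FUSE share).
list_exclusive() {
    # $1: base directory to scan (on Unraid this would be /mnt/user)
    for share in "$1"/*; do
        [ -e "$share" ] || [ -L "$share" ] || continue
        if [ -L "$share" ]; then
            printf '%s -> %s (exclusive, symlink)\n' "$share" "$(readlink "$share")"
        else
            printf '%s (regular FUSE share)\n' "$share"
        fi
    done
}
list_exclusive /mnt/user
```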
-
2 hours ago, ljm42 said:
Let's not get ahead of ourselves : )
I just asked because I was already confused. And as somebody who has been helping here for over a decade, I should use the correct naming. So the array is still the array, and pools are still pools. And the cache is gone with that RC4 release. Got it now.
-
29 minutes ago, limetech said:
will be implemented at same time we implement multiple unRAID pools
"unRAID pool" is the new name of the old unRAID array? And other pools are just "pools"?
-
Quote
The files bzfirmware and bzmodules are squashfs images mounted using overlayfs at /usr and /lib respectively.
Excuse my stupid question; I'm not sure about its implications. Will it still be possible to copy files into /usr/bin/ during first array start without problems?
Example: cp /usr/bin/fusermount3 /usr/bin/fusermount
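As I understand overlayfs (this is my reading, not a statement from the release notes): the read-only squashfs image becomes the lower layer and a writable layer sits on top, so a write like the cp above should land in the upper layer and keep working. A minimal illustration of the layering with hypothetical /tmp paths (requires root; not meant to be run on a live system):

```shell
# Illustrative overlayfs layout only (hypothetical paths, needs root).
# lowerdir: the read-only squashfs content (e.g. what bzfirmware provides)
# upperdir: writable layer where changes such as
#   cp /usr/bin/fusermount3 /usr/bin/fusermount
# are stored; merged: the combined view the system actually sees.
mkdir -p /tmp/ov/{lower,upper,work,merged}
mount -t overlay overlay \
    -o lowerdir=/tmp/ov/lower,upperdir=/tmp/ov/upper,workdir=/tmp/ov/work \
    /tmp/ov/merged
```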
-
On 3/25/2023 at 4:18 PM, bonienl said:
Subject content is "Unraid SMTP Test"
I think a blank line after "Subject:" ends the message header and puts the rest of the header into the message body. Perhaps two CRLF sequences, or a character interpreted as such. Perhaps the subject comes with its own CR/LF.
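The suspected failure mode is easy to demonstrate with Python's standard email parser. This sketch injects a stray blank line after the Subject header (my simulation of the suspected extra CR/LF; the addresses are made up) and shows that everything after it is demoted to body text, per RFC 5322's rule that the first empty line ends the header section:

```python
from email import message_from_string

# Simulated message with a stray blank line right after Subject.
raw = (
    "From: unraid@example.com\r\n"
    "Subject: Unraid SMTP Test\r\n"
    "\r\n"                       # stray blank line ends the header section
    "To: admin@example.com\r\n"  # now parsed as body, not as a header
    "Hello\r\n"
)
msg = message_from_string(raw)
print(msg["To"])                 # None: "To" never became a header
print("To:" in msg.get_payload())
```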
-
1 hour ago, itimpi said:
Perhaps it would be clearer if the text was redone to be something like "Use Cache/Pool" to make it clearer or maybe simply "Use Pool"?
This ^^ is very important.
"Set all those (ZFS) shares to cache=only" is very confusing.
-
On 3/20/2023 at 12:22 PM, bonienl said:
Converting XFS to ZFS
Excuse my stupid question, but what does that mean exactly? Is data safe during that process, or does it simply mean that an XFS-formatted disk can be reformatted to ZFS without preserving data?
-
I add my link here:
-
Sure, I did not post diagnostics because I thought the difference must be obvious. Diagnostics attached now. Sorry for the delay. What I did:
My plan was to upgrade Unraid from 6.10.3 to 6.11.1 on my bare-metal server this morning. I saw that there were two container updates (swag and homeoffice). So I set them to "don't autostart" and did my upgrade.
After rebooting the bare-metal server with "Don't autostart array", I checked everything and then started the array. After array start I upgraded both containers. Both containers (swag/homeoffice) ended in "Please wait", but with the "Done" button shown at the bottom of the dialog.
After the upgrade I set both containers to "Autostart" again.
That's all. If I had to offer an idea: upgrading containers set to "Do not autostart" looks like a good candidate.
[6.12.9] CIFS: VFS: directory entry name would overflow frame end of buf
in Stable Releases
Just downgrade - no need to change config etc: