Report Comments posted by hawihoney

  1. 9 hours ago, dlandon said:

    I am able to reproduce the issue.  It appears when you remote mount a share from a server that is not another Unraid.

     

    In my case these were Unraid servers running 6.12.9 as well. I upgraded them all yesterday. All have SMB mounts to each other's disks (disk shares) via Unassigned Devices.

     

    They all showed this error. The mounts were there, but showed only the first directory per mount. syslog was full of CIFS VFS errors.

     

    After downgrading all machines to 6.12.8, business is back to normal.
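
    For reference, this is roughly how I checked it on each box - plain shell, nothing assumed beyond the usual Unraid syslog location:

    # list the CIFS mounts created by Unassigned Devices
    mount -t cifs

    # show the most recent CIFS VFS errors in the syslog
    grep -i "CIFS VFS" /var/log/syslog | tail -n 20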

     

  2. On 9/11/2023 at 8:45 PM, CiscoCoreX said:

    Settings > Network Settings > eth0 > Enable Bridging = No

    Settings > Docker > Host access to custom networks = Enabled

     

    These two settings are the problem. This workaround was recommended in 6.12.x for users experiencing MACVLAN crashes. You don't need them if a) you are on IPVLAN, b) you never experienced MACVLAN crashes, or c) you don't use a router like the Fritzbox that is common in Europe.
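
    If you are not sure which case applies to you, one rough way to check the driver of your custom Docker networks from the console (standard docker CLI; the network name br0 is just an example):

    # list all Docker networks with their drivers - look for macvlan vs. ipvlan
    docker network ls --format '{{.Name}}: {{.Driver}}'

    # inspect one specific custom network, e.g. br0
    docker network inspect br0 --format '{{.Driver}}'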

     

  3. 12 hours ago, CiscoCoreX said:

    Settings > Network Settings > eth0 > Enable Bonding = Yes

    Settings > Network Settings > eth0 > Enable Bridging = No

    Settings > Docker > Host access to custom networks = Enabled

     

    Ah, I made these changes as well on 6.12.4 and have to wait approx. 10 seconds most of the time - but not always. Sometimes even the login window needs 10 seconds before it appears.

     

  4. 26 minutes ago, ljm42 said:

    update_cron is called by Unraid to enable `mover`, but User Shares are disabled on this server so `mover` is not needed.

     

    No User Shares, no Mover - it worked on 6.11.5, it stopped working with 6.12.4. This is a breaking change, e.g. for plugins that rely on update_cron to add their own schedules to cron.

     

    I will call update_cron as a User Script within the User Scripts plugin to work around this change. No big deal for me, but be prepared for plugins or users that stumble over this change.
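
    Roughly what I have in mind - a minimal User Scripts entry set to run at array start (the check for the command is just defensive, in case a future release moves or removes it):

    #!/bin/bash
    # Re-generate the system crontab from all pending plugin cron fragments.
    # update_cron is the stock Unraid helper discussed above.
    if command -v update_cron >/dev/null 2>&1; then
        update_cron
    else
        echo "update_cron not found - nothing to do" >&2
    fi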

     

    BTW, why not always call update_cron during system start, just in case somebody needs it?

     

  5. 25 minutes ago, itimpi said:

    Perhaps many people will not need the workaround anyway if they have a plugin (such as my Parity Check Tuning plugin) that issues this as part of its install process as the update_cron command is not plugin specific - it should pick up all cron jobs that any plugin has waiting to be activated.

     

    I wrote @Squid in his User Scripts thread about our conversation here. Perhaps you both can talk about that (calling update_cron during plugin installation).

     

    I only have 4 plugins installed on these machines - only what's really required on them. None of my plugins - except User Scripts - has to set schedules. And I don't want to install a plugin I don't need just to call update_cron ;-) So I think it's up to this plugin to fix that. And if 6.13 will be picky during startup, I think it's even more important to address that.

     

  6. 1 hour ago, bonienl said:

    Apply the changes AFTER updating

     

    @ich777 asked me to add the following to my question above:

     

    I'm running three Unraid servers: one on bare metal and two as VMs on that bare metal server. These two VMs act as DAS (Direct Attached Storage, accessed through SMB) only - just the Array. No Docker Containers, no VMs.

     

    His idea is that bridging needs to be enabled on the Unraid VMs - it currently already is.

     

    Currently:

     

    Unraid Bare metal: Bonding=no, Bridging=yes, br0 member=eth0, VLANs=no

    Unraid VMs: Bonding=yes (active_backup(1)), Bridging=yes, VLANs=no

     

    Docker Bare metal: MACVLAN, Host access=no

    Docker on VMs: Disabled

     

    Is this ok? It has been running happily like that for years, currently on Unraid 6.11.5 with a Fritzbox on DSL, IPv4-only.
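
    For completeness, this is how I verify the current state on the console (standard Linux tools; br0 and bond0 are the interface names from the setup above):

    # bare metal: eth0 should show up as a member of br0
    ip -br link show master br0

    # VMs: confirm the bonding mode (expects active-backup, mode 1)
    grep -i "bonding mode" /proc/net/bonding/bond0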

     

  7. Quote

    Settings -> Network Settings -> eth0 -> Enable Bridging = No

    Settings -> Docker -> Host access to custom networks = Enabled

     

    Is it recommended to make these changes in 6.11.5 before the update from 6.11.5 to 6.12.4-rc18? Or should I update to 6.12.4-rc18 first and apply these changes afterwards?

     

  8. Just a question before staying with 6.11.5 forever:

    Does that "Exclusive Shares" thing affect all kinds of shares, or is it just shares below /mnt/user/ that are affected?

     

    I use three docker containers that have access to everything. They have access to:

    - Local disks (e.g. /mnt/disk1/Share/ ...)

    - Local pools (e.g. /mnt/pool_nvme/Share/ ...)

    - SMB remote shares via Unassigned Devices (e.g. /mnt/remotes/192.168.178.101_disk1/Share/ ...)

    - Own mount points (e.g. /mnt/addons/gdrive/Share/ ...)

    - Local external disks via Unassigned Devices (e.g. /mnt/disks/pool_ssd/ ...)

    - But no /mnt/user/ 

     

    So I gave these three containers access to everything, because there are 75 access points involved (see the sketch after this list):

    - /mnt/ --> /mnt/ (R/W/slave) for two containers

    - /mnt/ --> /mnt/ (R/slave) for the third container
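
    In plain Docker terms the two mappings boil down to something like this (container and image names are made up, just to illustrate the access modes):

    # read/write with slave mount propagation - the "R/W/slave" mapping above
    docker run -d --name writer-container -v /mnt/:/mnt/:rw,slave some/image

    # read-only with slave propagation - the "R/slave" mapping for the third container
    docker run -d --name reader-container -v /mnt/:/mnt/:ro,slave some/image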

     

    It would be a huge intervention to change that, because it would require 75 additional path mappings for each of these three containers. My box has been running happily that way for years, because it was free from User Shares and the performance impact they had under some circumstances in the past.

     

    Could somebody please shed some light on this new special share treatment for me:

     

    - Is it /mnt/user/ only that is affected or does it affect all kinds of shares (disk, disks, remotes, addons)?

    - Can I switch that off globally (Exclusive share - symlink)?

    - Will I see problems with these symlinks within containers when using /mnt/ -> /mnt/ without /mnt/user/ involved? (See the quick check below.)
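
    Just so we are talking about the same thing: the way I read the release notes, an exclusive share is simply a symlink below /mnt/user/ pointing directly at the pool, which one could verify like this (untested on my side, since I have no User Shares here):

    # directories = normal FUSE-backed shares, symlinks = exclusive shares
    ls -l /mnt/user/

    # list only the symlinked (exclusive) shares
    find /mnt/user/ -maxdepth 1 -type l -ls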

     

    And yes, all Path Mappings have trailing '/' applied ;-)

     

    Many thanks in advance.

     

  9. 2 hours ago, ljm42 said:

    Let's not get ahead of ourselves : )

     

    I just asked because I am already confused. And as somebody who has been helping here for over a decade, I should use correct naming. So the array is still the array and pools are still pools. And the cache is gone with that RC4 release. Got it now.

     

  10. Quote

    The files bzfirmware and bzmodules are squashfs images mounted using overlayfs at /usr and /lib respectively.

     

    Excuse my stupid question - I'm not sure about the implications. Will it still be possible to copy files into /usr/bin/ during first array start without problems?

     

    Example: cp /usr/bin/fusermount3 /usr/bin/fusermount
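
    If I read the quote correctly and the overlay keeps a writable upper layer (that part is my assumption), a copy like that should still work. This is how I would verify it on a test box before relying on it:

    # show how /usr is assembled from the squashfs image plus the overlay
    mount | grep -E 'overlay|squashfs'

    # harmless write test inside /usr/bin - succeeds only if the overlay is writable
    touch /usr/bin/.overlay-write-test && rm /usr/bin/.overlay-write-test && echo writable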

     

    Sure, I did not post diagnostics because I thought the difference must be obvious. Diagnostics are attached now. Sorry for the delay. What I did:

     

    My plan was to upgrade Unraid from 6.10.3 to 6.11.1 on my bare metal server this morning. I saw that there were two Container updates (swag and homeoffice). So I set them to "don't autostart" and did my upgrade.

     

    After rebooting the bare metal server with "Don't autostart array", I checked everything and then started the array. After array start I upgraded both Containers. Both Containers (swag/homeoffice) ended in "Please wait", but with the "Done" button shown at the bottom of the dialog.

     

    After the upgrade I set both Containers to "Autostart" again.

     

    That's all. If I had to present an idea: upgrading Containers that are set to "Do not autostart" looks like a good candidate.

     

    tower-diagnostics-20221010-2008.zip