Report Comments posted by hawihoney

  1. Just a question before staying with 6.11.5 forever:

    Does that "Exclusive Shares" thing affect all kind of shares or is it just shares below /mnt/user/ that are affected?

     

    I use three Docker containers that need access to everything. They have access to:

    - Local disks (e.g. /mnt/disk1/Share/ ...)

    - Local pools (e.g. /mnt/pool_nvme/Share/ ...)

    - SMB remote shares via Unassigned Devices (e.g. /mnt/remotes/192.168.178.101_disk1/Share/ ...)

    - Own mount points (e.g. /mnt/addons/gdrive/Share/ ...)

    - Local external disks via Unassigned Devices (e.g. /mnt/disks/pool_ssd/ ...)

    - But no /mnt/user/ 

     

    So I gave these three containers access to everything because there are 75 access points involved:

    - /mnt/ --> /mnt/ (R/W/slave) for two containers

    - /mnt/ --> /mnt/ (R/slave) for the third container (see the docker sketch below)
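
    In plain docker terms those two mappings boil down to something like this - just a sketch, container and image names are placeholders, only the -v option matters:

      docker run -d --name container_rw -v /mnt/:/mnt/:rw,slave some/image    # the two R/W containers
      docker run -d --name container_ro -v /mnt/:/mnt/:ro,slave some/image    # the read-only third one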

     

    It would be a huge intervention to change that, because it would require 75 additional path mappings for each of these three containers. My box has been running happily that way for years, because it stays free of User Shares and the performance impact they had under some circumstances in the past.

     

    Could somebody please shed some light on this special new share treatment for me:

     

    - Is only /mnt/user/ affected, or does it affect all kinds of shares (disk, disks, remotes, addons)?

    - Can I switch that off globally (Exclusive share - symlink)?

    - Will I see problems with these symlinks within containers when using /mnt/ -> /mnt/ without /mnt/user/ involved?

     

    And yes, all Path Mappings have trailing '/' applied ;-)
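
    For reference, this is how I would check from the console whether exclusive-share symlinks are involved at all - a rough sketch that assumes the new feature really implements exclusive shares as symlinks directly below /mnt/user/:

      # List entries below /mnt/user/ that are symlinks - exclusive shares
      # should show up here pointing at a single pool or disk.
      for p in /mnt/user/*; do
          [ -L "$p" ] && printf '%s -> %s\n' "$p" "$(readlink "$p")"
      done
      # And, to be safe, check that none of the other top-level mount points
      # my containers use have silently become symlinks:
      find /mnt/ -maxdepth 2 -type l 2>/dev/null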

     

    Many thanks in advance.

     

  2. 2 hours ago, ljm42 said:

    Let's not get ahead of ourselves : )

     

    I just asked because I'm already confused, and as somebody who has been helping here for over a decade I should use the correct naming. So the array is still the array, pools are still pools, and the cache is gone with that RC4 release. Got it now.

     

  3. Quote

    The files bzfirmware and bzmodules are squashfs images mounted using overlayfs at /usr and /lib respectively.

     

    Excuse my stupid question, but I'm not sure about the implications: will it still be possible to copy files into /usr/bin/ during the first array start without problems?

     

    Example: cp /usr/bin/fusermount3 /usr/bin/fusermount
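
    In case it helps: this is roughly what my array-start user script would look like - a sketch that simply assumes /usr/bin is still writable through the overlay, which is exactly what I am asking about:

      # Create the fusermount alias at array start, but only if /usr/bin
      # is actually writable on this release.
      if [ -w /usr/bin ]; then
          cp /usr/bin/fusermount3 /usr/bin/fusermount
      else
          echo "/usr/bin is read-only - fusermount copy skipped" >&2
      fi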

     

  4. 1 hour ago, itimpi said:

    Perhaps it would be clearer if the text was redone to be something like "Use Cache/Pool" to make it clearer or maybe simply "Use Pool"?

     

    This ^^ is very important.

     

    "Set all those (ZFS) shares to cache=only" is very confusing.

     

  5. Sure, I didn't post diagnostics because I thought the difference would be obvious. Diagnostics are attached now; sorry for the delay. What I did:

     

    My plan was to upgrade Unraid from 6.10.3 to 6.11.1 on my bare-metal server this morning. I saw that there were two container updates (swag and homeoffice), so I set them to "don't autostart" and did my upgrade.

     

    After rebooting the bare-metal server with "Don't autostart array" set, I checked everything and then started the array. After the array start I updated both containers. Both containers (swag/homeoffice) ended up stuck on "Please wait", but with the "Done" button shown at the bottom of the dialog.

     

    After the update I set both containers to "Autostart" again.

     

    That's all. If I had to offer an idea: updating containers that are set to "Do not autostart" looks like a good candidate for the cause.

     

    tower-diagnostics-20221010-2008.zip

  6. On 4/8/2020 at 4:48 PM, ptr727 said:

    Seems highly unlikely that this is a LSI controller issue.

    My guess is the user share fuse code locks all IO while waiting for a disk mount to spin up.

     

    My Unraid servers use LSI/Broadcom 9300-8i and 9300-8e HBAs (latest firmware). User Shares (FUSE) are not enabled, only Disk Shares here. The effect has happened in every Unraid release I can remember: whenever a disk spins up, all activity on the other disks attached to that same HBA stops for a short time.
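
    If somebody wants to reproduce this, something along these lines should make the stall visible (a sketch - device names are examples, and it assumes iostat from the sysstat package is installed):

      # sdb is a spun-down array disk, sdc a spinning disk on the same HBA.
      iostat -x sdb sdc 1 > /tmp/spinup_stall.log &
      IOSTAT_PID=$!
      dd if=/dev/sdb of=/dev/null bs=1M count=64    # the read forces sdb to spin up
      sleep 10
      kill "$IOSTAT_PID"
      # The log should show sdc's throughput dropping to zero while sdb spins up.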

     

  7. Got it. So my VMs only autostart after an Unraid upgrade (i.e. after a reboot). That's what I know now.

     

    My servers only reboot for Unraid updates - otherwise they run 24/7, all year. All disk additions or replacements can be done with a stopped array, without a shutdown or reboot. With "Autostart Array=no" (which IMHO is the more important setting), the manual array start after a disk replacement will trigger VM autostart only once, and then never again until the next boot.

     

    That's what I learned and that's what I have to live with ...

     

  8. Quote

    I tested with Autostart of the array set to No, and the VM still autostarted when I manually started the array.   It appears that it only works once until you next reboot.

     

    That's what I described and said:

     

    Autostart Array=off, boot, start Array manually --> Autostart VM works.

     

    Now stop Array, start Array, Autostart VM does not work --> that was my problem and the reason for this bug report.

     

    Simply confusing, not documented on the VM page, and I still don't see a reason for it.

     

    Autostart means Autostart. Don't change that. Just my two cents.

     

  9. And you think average users like me will understand this logic? There's an option "Autostart VM" that only works if another option, "Autostart Array", is set - and only on the first start of the array after boot, not on a second or later start of the array.

     

    Please take a step back and look at this from a distance. I already gave up and closed the report because I don't see anybody acknowledging that this "Autostart VM" option will be a source of confusion in the future. It will NOT always autostart VMs, and it works differently than the other autostart options (e.g. Autostart for Docker containers).

     

    I'm not here to change the logic; I was just trying to make it more transparent.

     

    Let's stop here. I already closed the report.

     

  10. 1 hour ago, itimpi said:

    No

     

    So Unraid works differently in your environment than in mine.

     

    Trust me, I've been working with Unraid long enough to know what I'm doing, and these settings have been in place for years (I guess).

     

    The machine had been running since Monday (upgrade from 6.10.2 to 6.10.3). "Autostart Array" was off, as always. Today I had to replace a disk in the array. I manually stopped all running VMs, manually stopped all running Docker containers, stopped the array, replaced the disk, and started the array to rebuild the replacement disk - and the VMs marked with "Autostart" did not start. That's all.

     

    In the meantime I've set "Autostart Array" to on, so that my VMs honor the "Autostart VM" setting in the future.

     

    I'm not satisfied, but I will close the bug. It seems to be intended behaviour.

     

  11. 1 hour ago, itimpi said:

    In my experience this works if the array is auto started on boot.

     

    58 minutes ago, JorgeB said:

    It will autostart the VMs at first array start if array autostart is enabled.

     

    So "Autostart VM" will only work if "Autostart Array" is enabled during a boot process.

     

    But "Autostart VM" will not work if "Autostart Array" is disabled, the machine is already booted and the Array is started afterwards.

     

    So the "Autostart" options on the "VM tab" are bound to the "Autostart Array" AND the VM - with the "Autostart Array" being the main factor. This differs from the "Autostart" option on the "Docker tab" that does not care about "Autostart Array" at all.

     

    Got it, very confusing, but got it.

     

    IMHO this should be mentioned on the VM page. Several hours went by without running VMs before I finally noticed that my VMs with Autostart had not been started. After that I scanned through all the logs this morning because I thought the VMs had errors during start.

     

    If an option suggests an autostart, it should not be bound to a different option behind the scenes.

     

    Consider what I did: I stopped the array (no shutdown, no reboot, simply stop) to replace a disk. After that I started the array again, and my VMs did not start. I guess nobody would expect that in this situation.
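
    Until that changes, my workaround idea is a small script to run by hand after a manual array start - just a sketch, and it assumes the VMs in question are also flagged as autostart in libvirt itself (which may or may not match the webGUI toggle):

      # Start every inactive domain that libvirt lists as "autostart".
      virsh list --inactive --autostart --name | while read -r vm; do
          [ -n "$vm" ] && virsh start "$vm"
      done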

     

  12. 1 hour ago, JorgeB said:

    This is done on purpose

     

    What does Autostart on the VM page do, then? Perhaps I don't understand your answer:

     

    1.) Autostart on the VM page does not autostart VMs. Why is this option available?

     

    2.) Autostart on the VM page does not autostart VMs on array start, but does work ... (when?)

     

    Whatever it is: "Autostart" then has two different meanings on two different pages (Docker, VM). IMHO this is a bad GUI decision.

     

    *** EDIT *** I'm talking about these two settings:

    [Screenshot: Clipboard01.jpg]

     

    [Screenshot: Clipboard02.jpg]

     

    How to reproduce:

    - On Settings > Disk Settings > Enable auto start --> Off

    - On Docker > [Container of your choice] > Autostart --> On

    - On VM > [VM of your choice] > Autostart --> On

    - Stop a running Array (no reboot, no shutdown, simply stop array)

    - Start array

    --> Docker Containers with Autostart=On will start

    --> VMs with Autostart=On will NOT start

     

  13. @itimpi I know that. The disks did spin down after the set spin down delay.

     

    To answer the "doesn't make sense": I don't know where you live, but I live in Germany, and my power company increased their price by 100% (EUR 0.49/kWh) because of the current situation in Europe. I will run parity checks only when my solar roof produces enough power during the day. It doesn't make sense to me to run parity checks when I have to buy power.

     

    So please expect my feature request in a day about how to start/stop/pause/resume/query parity checks from within user scripts via URL 😉
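
    To give an idea of what I have in mind - not via URL yet, but from a user script with the mdcmd binary; a rough sketch based on what I have picked up in the forum, so the exact options may differ between releases:

      /usr/local/sbin/mdcmd check      # start a (correcting) parity check when solar output is high
      /usr/local/sbin/mdcmd nocheck    # cancel it again when I would have to buy power
      grep -E "mdResync|mdState" /proc/mdstat    # query the current state/position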

  14. I think it's my fault that this was introduced ;-)

     

    Only those pages/lists that have activity buttons at the end (Main, Shares, File Manager) and/or a checkbox at the front should toggle the background color, IMHO. I know the list design already has alternating backgrounds, but these lists have grown in content and detail, so it is hard to find the correct button at the end.

     

    I was fine with the old design up until File Manager and its button were introduced. I never used the old button and used MC on the command line instead. Now, with File Manager, I've changed my mind and I'm using File Manager a lot. That's when I found that I have a hard time hitting the correct button on the far right.

     

  15. 6 minutes ago, Squid said:

    My answer was to how to handle "private" repositories.

     

    These six XML files that are currently (6.9.2) stored automatically on /boot are my private repository ;-)

     

    With 6.10 and above, it's time to take screenshots instead of relying on saved container configurations.
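
    Or, instead of screenshots, one could simply back up the template XMLs from the flash drive - a minimal sketch, assuming the templates still live in the usual dockerMan folder (the backup target is just an example):

      BACKUP=/mnt/disk1/backup/docker-templates    # example target, adjust to taste
      mkdir -p "$BACKUP"
      cp /boot/config/plugins/dockerMan/templates-user/*.xml "$BACKUP"/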