BRiT

Comments posted by BRiT

  1. 13 minutes ago, limetech said:

    There is a problem: during shutdown we definitely want to cleanly unmount the usb flash which is mounted at /boot.  However, in order to do that we must first unmount the two squashfs file systems: /usr and /lib.

     

    Does that 'rc.nut shutdown' operation kill power immediately?  If so, any executable it uses must get moved off of /usr, maybe put into /bin.

     

    What about other plugins, would they need to hack around these limitations too? If so, then instead of having every single plugin attempt its own workaround, wouldn't changing the system itself be the more efficient approach? 🤷‍♂️
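
    Just to make concrete what that kind of per-plugin workaround would look like, here's a rough sketch; the binary name and paths are only examples, not the actual NUT plugin code:

    # Copy anything the shutdown path needs off of /usr, since the /usr and /lib
    # squashfs images must be unmounted before /boot (the flash) can be unmounted.
    # Example binary name only; a dynamically linked binary would also need its
    # libraries available outside of /lib.
    if [ -x /usr/sbin/upsdrvctl ] && [ ! -e /bin/upsdrvctl ]; then
      cp /usr/sbin/upsdrvctl /bin/upsdrvctl
    fi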

  2. 4 hours ago, dlandon said:

    ...

    I know all that. I'm merely pointing out how this automatic change can and will break existing setups, so be prepared for the multitude of inbound support posts that it will generate. Maybe it's wiser to have this new behavior be an explicit opt-in feature.

  3. 25 minutes ago, dlandon said:

    The only difference will be that the /mnt/user/share for an exclusive share will be a symlink.  If you don't include the trailing "/", the mapping of /mnt/user/share will appear to be a file and not a directory.  If you use /mnt/user/share/ it will be treated as a directory.

    No. You're not understanding.

     

    They are using one single "/mnt/user/" mapping to pass everything through. The symlinks underneath it are therefore treated as plain files, and so the dockers fail.

  4. 47 minutes ago, dlandon said:

    You need to put a trailing "/" on your paths.

    In that situation, a trailing "/" on /mnt/user/ still wouldn't fix their issue. They would need to individually map every single share, being sure to include "/" at the end of every host path.

     

    This seems like it will break a lot of use cases that simply map /mnt/user/.
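
    For reference, here's what that looks like in a container's path mappings; the share and image names are made up, this is just a sketch of the workaround being described:

    # The single catch-all mapping many setups use today:
    #   -v /mnt/user/:/shares
    # Once exclusive shares become symlinks under /mnt/user/, they appear inside the
    # container as files rather than directories and the container breaks. The
    # suggested workaround is to map each share individually with a trailing "/":
    docker run -d \
      -v /mnt/user/appdata/:/config \
      -v /mnt/user/media/:/media \
      some/image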

  5. 4 hours ago, Kosti said:

    Hey All

    Quick questions, apology in advance if this is not the correct place to ask.

     

    I am on 6.9.2 and never needed to make changes until I recently added larger storage capacity and discovered I'm miles behind.

     

    Wow ZFS look at you unraid! Well done LT..

     

    So right now I use the unRAID array with an optional "Cache"; the array drives are all XFS and the cache drives are btrfs. Apart from backing up the USB flash drive, are there any other gotchas before I dive into 6.12 stable when released?

     

    My play is to go straight to 6.12?

     

    PEACE

    Kosti

     

     

    Yes. There are plenty of "gotchas".

     

    You will need to take the time to find any incompatible plugins, remove them, and replace them with compatible ones. This is required for going to 6.11.x or any newer version of Unraid.

     

    The most prominent are replacing DevPack with DevTools and switching from one version of NUT to another, if you use that alternative UPS software. There are more plugins that are no longer supported by the original author; others have made compatible versions, so you need to switch over to those.
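
    A quick way to see what you actually have installed before upgrading (assuming the standard flash layout, where each installed plugin has a .plg file under /boot/config/plugins):

    # List installed plugins so each one can be checked against the 6.11/6.12
    # compatibility threads before upgrading.
    ls /boot/config/plugins/*.plg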

  6. 6 hours ago, domrockt said:

    i think the question here is,

     

    to create a ZFS Unraid Array + 1 parity drive instead of the regular XFS Unraid Array + 1 parity, and what happens if you lose 2 drives at once?

     

    That wasn't my question. I already know how Unraid Arrays function, as they are filesystem agnostic.

     

    My question was about going with pure ZFS pools instead of the Unraid Array(s).

  7. 5 hours ago, Jclendineng said:

    but if you have all the same sizes and can do zfs I would think it would be a definite upgrade?

     

    What happens in a single-parity ZFS pool setup if you lose 2 devices? Or in a dual-parity ZFS pool if you lose 3 devices? Is all of the data wiped out at that point?
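
    To make the distinction concrete, here's a rough sketch with placeholder device names. A raidz1 vdev survives exactly one failed device and a raidz2 vdev exactly two; lose one more than that and the whole pool becomes unavailable, unlike the Unraid array where every surviving disk still holds its own complete filesystem:

    # Single-parity pool: tolerates 1 failed device, a 2nd failure takes out the pool.
    zpool create tank raidz1 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Dual-parity pool: tolerates 2 failed devices, a 3rd failure takes out the pool.
    zpool create tank2 raidz2 /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk

    # After failures, the pool shows DEGRADED while within parity, UNAVAIL beyond it.
    zpool status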

  8. Realtek drivers are really hit and miss. You'll have an easier time ditching any network device that uses that company's chipsets. Every time Limetech tried different Realtek drivers in past releases, it proved to be nothing but headaches all around, the sort of "solve one problem and create dozens more" scenario.

     

    I know it's not what you want to hear and doesn't help you, but that's the reality.

    • Thanks 1
  9. I think there is a logical issue with your script. There are duplicate case statements [ button/power ], and only the first one will ever be executed. You need to adjust the script so your customization is combined into a single case statement. Maybe something more like this:

     

    #!/bin/sh
    # Default acpi script that takes an entry for all actions

    # The handler adds "/" to IFS, so after "set $@" the event text
    # "button/power PBTN ..." is split into $1=button, $2=power, $3=PBTN.
    IFS=${IFS}/
    set $@

    case "$1/$2" in
      button/power)
        case "$3" in
          # Run the VM start/stop user script for either power-button device name
          PBTN|LNXPWRBN:00) . /boot/config/plugins/user.scripts/scripts/VMStartStop/script
             ;;
          *) logger "ACPI action $3 is not defined"
             ;;
        esac
        ;;
      *)
    #    logger "ACPI group $1 / action $2 is not defined"
        ;;
    esac
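
    If acpi_listen (from the acpid package) is available, it's worth confirming what your hardware actually sends for the power button, since the PBTN / LNXPWRBN:00 strings above are board-specific:

    # Press the power button while this runs and note the exact event text,
    # e.g. something like "button/power PBTN 00000080 00000001" (varies by board).
    acpi_listen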

  10. Completely unrelated to unRaid, I had this issue on Monday with a work server. It had an IIS site set to respond only over HTTPS. The only way to get to it was by using https://machinename.domain.com/ , its FQDN. Attempting to browse to it using only https://machinename or even https://ipaddress resulted in 404s.

     

    TLDR: When using HTTPS, one must always use the fully qualified domain name that is used on the certificate.
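
    A quick way to check which names the certificate actually covers (the hostname here is just the example from above); only names in the subject / SAN list will pass HTTPS validation:

    # Dump the served certificate's subject and Subject Alternative Names.
    openssl s_client -connect machinename.domain.com:443 -servername machinename.domain.com </dev/null 2>/dev/null \
      | openssl x509 -noout -text | grep -E "Subject:|DNS:"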

  11. 1 hour ago, SimonF said:

    It would need to be reconnected as the ID has changed.

     

     

    Shouldn't the mapping be done using the Device ID and not Bus and Device Numbers?

     

    That's why I'm curious.

     

     

    Seems like the real issue is the USB disconnects, as if the controller is going to sleep or doesn't have enough power for all devices, which causes them to cycle.
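
    For anyone following along, the difference is easy to see from the command line; the device line below is made up, just to illustrate the fields:

    # Bus and Device numbers are assigned at enumeration and change every time the
    # device drops and reconnects; the vendor:product ID stays the same.
    lsusb
    #   Bus 002 Device 007: ID 1234:5678 Example Corp Example UPS
    # If the device reports a serial number, that also tells identical devices apart:
    lsusb -v -d 1234:5678 2>/dev/null | grep iSerial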

  12. If the images get downloaded then they are not unused, right? So if you're using images hosted from a free account, it's in your best interest to re-download them periodically.
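
    If you do want to do that, a minimal sketch (scheduling it via cron or the User Scripts plugin is up to you):

    # Re-pull every image currently on the host so the registry sees activity.
    for img in $(docker image ls --format '{{.Repository}}:{{.Tag}}' | grep -v '<none>'); do
      docker pull "$img"
    done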

     

    Edit: They do not enforce image retention and have moved over to resource pull limits instead:

     

    https://www.docker.com/blog/docker-hub-image-retention-policy-delayed-and-subscription-updates/

    • Like 1
  13. There is a patch for a newer version of WSDD that removes an infinite loop under certain conditions. It was mentioned in one of these threads.

     

    Limetech could release patches/updates for their stable 6.8 build, but they seem to have forgone that for a long time now in favor of putting out 6.9 betas and RCs.