
Mainfrezzer

Members
  • Posts

    370
  • Joined

  • Last visited

Posts posted by Mainfrezzer

  1. 31 minutes ago, sasbro97 said:

     I know I have "Preserve user defined networks" activated but thank you.

     

    Impossible. Your diagnostics clearly show "DOCKER_USER_NETWORKS="remove"".

    If you had it enabled, it would say "DOCKER_USER_NETWORKS="preserve"".

    Edit:

    Your VMs complain about a missing interface as well.
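
    For reference, that flag lives in the flash config. A minimal sketch of checking it, assuming the usual Unraid path /boot/config/docker.cfg (simulated below with a temp file so the sketch is self-contained):

```shell
# Simulated check of the Unraid docker settings flag; on a live server the
# file would be /boot/config/docker.cfg (path is an assumption), faked here
# with a temp file so the sketch runs anywhere.
cfg=$(mktemp)
echo 'DOCKER_USER_NETWORKS="remove"' > "$cfg"
if grep -q '^DOCKER_USER_NETWORKS="preserve"' "$cfg"; then
  state="preserve"
else
  state="remove"
fi
echo "DOCKER_USER_NETWORKS is set to: $state"
rm -f "$cfg"
```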

  2. 8 minutes ago, sasbro97 said:

    How should I identify the one? Because the container are all running healthy. 

     

    Unraid has, on the right side, an uptime display; it's pretty easy to find that particular container with that info. [screenshot of the uptime column]
    Btw, if all your Docker containers run on that custom defined network, it would be wise to enable "Preserve user defined networks", because by default Docker wipes those^^

    Can't help you with Nextcloud, don't use it.
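
    On the command line the same information comes from the container status column. A sketch over made-up lines in the shape of `docker ps --format '{{.Names}}\t{{.Status}}'` output (no live Docker daemon assumed; the names and uptimes are invented):

```shell
# Sample data standing in for `docker ps` output; the container whose status
# reads in minutes is the one that restarted recently.
printf 'plex\tUp 3 days\nnginx\tUp 2 minutes\ndb\tUp 5 hours\n' > /tmp/ps_sample.$$
recent=""
while IFS="$(printf '\t')" read -r name status; do
  case "$status" in
    *minute*) recent="$name ($status)";;
  esac
done < /tmp/ps_sample.$$
rm -f /tmp/ps_sample.$$
echo "recently restarted: $recent"
```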

  3. 47 minutes ago, Dal said:

    I recently purchased a 16TB USB stick and attached it to my Unraid server.

    Things LOOK OK, but I'm not able to mount / write to it.

     

    [screenshot of the drive listing]

     

    I hope with the ~2000 dollar price tag came a good warranty 😂.

     

    Nah, that thing is fake.


    Edit:

    As a sort of PSA: this is, as of writing, the largest commercially available flash drive:
    [photo of the flash drive]

    I based my 2k guess on the flash chips; quite shocked that the 2TB version is already almost that much.

  4. Just now, Gothan said:

     

    I also want to understand why one of the disks (or several) has these accesses which I cannot explain.

    Given that you had successful logins from external IPs, it's a gamble; could be anything. The File Activity plugin can help you find out which files are being accessed, but that should be the least of your worries right now.

  5. 43 minutes ago, snoopy86 said:

    Why is this a security problem? When I set one container to have a static IP, I still want other containers to be able to reach this container, and the other way around.

    Docker containers on a (macvlan/ipvlan) bridge can reach each other.

    The security aspect is network isolation between any of the virtualized environments and the host system.

    Besides that, there's a checkbox to remove it.

    • Like 1
  6. 24 minutes ago, BenTheBuilder said:

    I've been getting the same error now for about a week after I performed a reboot. I've tried all the troubleshooting steps listed with no luck.  Is there no method we can use to verify if this is a bug or if the appfeed is actually down? 

    Time to test your designated DNS server, because I can assure you that the appfeed is available.

    https://raw.githubusercontent.com/Squidly271/AppFeed/master/applicationFeed.json

     

    https://dnld.lime-technology.com/appfeed/master/applicationFeed.json
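
    A quick sketch of that DNS test: check whether your resolver can look up the feed hosts at all before blaming the appfeed (uses `getent` so no extra tools are assumed; the result depends on your own DNS setup):

```shell
# Resolve each appfeed host through whatever resolver the system is using;
# a failure here points at your DNS server, not at the feed itself.
checked=0
for host in raw.githubusercontent.com dnld.lime-technology.com; do
  checked=$((checked + 1))
  if getent hosts "$host" >/dev/null 2>&1; then
    echo "$host: resolves fine"
  else
    echo "$host: does NOT resolve - check your DNS settings"
  fi
done
```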

  7. 2 minutes ago, Beryllium said:

     

    Will need to look into this. If it's not too expensive, this may sound like a good option.

    Alternatively, given that you know how to do it, you can just rent any cheap VPS, put a WireGuard server on it, and tunnel your game servers through that. The only real benefit would be the static IP.

    If you really want to be paranoid-level secure, Tailscale would be an option to connect you and your friends to your game server.
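
    For illustration, a hypothetical minimal server-side WireGuard config for such a VPS tunnel; every key, address, and port below is a placeholder example, not a working value:

```ini
[Interface]
# WireGuard interface on the rented VPS
Address = 10.8.0.1/24
ListenPort = 51820
PrivateKey = <vps-private-key>

[Peer]
# the home server running the game servers
PublicKey = <home-server-public-key>
AllowedIPs = 10.8.0.2/32
```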

    • Like 1
  8. 27 minutes ago, MAM59 said:

    my guess (can't prove it) is that they have artificially slowed it down in the binary.

     

     

    Has to be, same behavior here.

    Edit: Mhmmm, might not be the case. Seems to be an Unraid/Cloudflare thing tbh. All my domains that are handled by Cloudflare behave that way.

    The Docker containers ping just fine; it's just wonky within Unraid's ping itself.
  9. You can place this file nvidia-driver.plg on your USB drive under

    /config/plugins

    and it should download and install the required Nvidia drivers; worst case is a reboot if it doesn't pop up at the first boot. You can wait a bit for it to install and, after a couple of minutes, hit the power button for a normal shutdown procedure.

    Alternatively, if available to you, don't use UEFI to boot; use legacy. That should work 99% of the time for graphical output.

    Edit: I just noticed your diagnostics are dated 2019. Your time server/DNS servers are probably screwed up, which might cause problems with downloading the required Nvidia package.

    I've tested the two DNS servers designated in your diagnostics and they don't respond to any requests.

  10. 6.12.7-rc2
    As before, will update as we go along.

    6.12.10
     

    # -------------------------------------------------
    # RAM-Disk for Docker json/log files v1.6 for 6.12.10
    # -------------------------------------------------
    
    # check compatibility
    echo -e "8d6094c1d113eb67411e18abc8aaf15d /etc/rc.d/rc.docker\n9f0269a6ca4cf551ef7125b85d7fd4e0 /usr/local/emhttp/plugins/dynamix/scripts/monitor" | md5sum --check --status && compatible=1
    if [[ $compatible ]]; then
    
      # create RAM-Disk on starting the docker service
      sed -i '/nohup/i \
      # move json/logs to ram disk\
      rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
      mountpoint -q /var/lib/docker/containers || mount -t tmpfs tmpfs /var/lib/docker/containers || logger -t docker Error: RAM-Disk could not be mounted!\
      rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
      logger -t docker RAM-Disk created' /etc/rc.d/rc.docker
    
      # remove RAM-Disk on stopping the docker service
      sed -i '/tear down the bridge/i \
      # backup json/logs and remove RAM-Disk\
      rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
      umount /var/lib/docker/containers || logger -t docker Error: RAM-Disk could not be unmounted!\
      rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
      logger -t docker RAM-Disk removed' /etc/rc.d/rc.docker
    
      # Automatically backup Docker RAM-Disk
      sed -i '/^<?PHP$/a \
      $sync_interval_minutes=30;\
    if ( ! ((date("H") * 60 + date("i")) % $sync_interval_minutes) && file_exists("/var/lib/docker/containers")) {\
        exec("\
          [[ ! -d /var/lib/docker_bind ]] && mkdir /var/lib/docker_bind\
          if ! mountpoint -q /var/lib/docker_bind; then\
            if ! mount --bind /var/lib/docker /var/lib/docker_bind; then\
              logger -t docker Error: RAM-Disk bind mount failed!\
            fi\
          fi\
          if mountpoint -q /var/lib/docker_bind; then\
            rsync -aH --delete /var/lib/docker/containers/ /var/lib/docker_bind/containers && logger -t docker Success: Backup of RAM-Disk created.\
            umount -l /var/lib/docker_bind\
          else\
            logger -t docker Error: RAM-Disk bind mount failed!\
          fi\
        ");\
      }' /usr/local/emhttp/plugins/dynamix/scripts/monitor
    
    else
      logger -t docker "Error: RAM-Disk Mod found incompatible files: $(md5sum /etc/rc.d/rc.docker /usr/local/emhttp/plugins/dynamix/scripts/monitor | xargs)"
    fi
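
    The `md5sum --check --status` gate at the top is what keeps the mod from patching files it doesn't recognize. A self-contained sketch of that mechanism using a temp file:

```shell
# md5sum --check --status exits 0 only when every listed file matches its
# recorded hash, so $compatible gets set only for a byte-identical file.
f=$(mktemp)
printf 'known file content\n' > "$f"
hash=$(md5sum "$f" | cut -d' ' -f1)
compatible=""
echo "$hash  $f" | md5sum --check --status && compatible=1
echo "something else" >> "$f"   # any change to the file breaks the check
still_ok=""
echo "$hash  $f" | md5sum --check --status && still_ok=1 || true
rm -f "$f"
echo "before edit: ${compatible:-0}, after edit: ${still_ok:-0}"
```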

     

    • Like 7
    • Thanks 3
  11. 14 minutes ago, jayw1 said:

    Can I backup appdata while it is in use (containers not stopped)

    Very bad idea. There is a reason why the container(s) is/are being stopped^^

    Otherwise you can use any, and I mean any, file-sync container/program to sync your backups to somewhere else. I use Syncthing for multiple things, but MEGAsync works too. It's your choice what you use to send files from A to B.
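
    A minimal sketch of the sync step itself, with temp directories standing in for e.g. /mnt/user/appdata and a backup share (rsync preferred; plain cp as a fallback so the sketch runs anywhere):

```shell
src=$(mktemp -d)   # stands in for the (stopped) appdata source
dst=$(mktemp -d)   # stands in for the backup target
echo 'some config' > "$src/app.conf"
if command -v rsync >/dev/null 2>&1; then
  rsync -aH --delete "$src/" "$dst/"   # archive mode, keep hardlinks, mirror deletions
else
  cp -a "$src/." "$dst/"               # fallback if rsync is unavailable
fi
copied=$(ls "$dst")
echo "backed up: $copied"
rm -rf "$src" "$dst"
```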
