Posts posted by Mainfrezzer
Just now, Gothan said:
I also want to understand why one of the disks (or several) has these accesses, which I cannot explain.
Given that you had successful logins from external IPs, it's a gamble; it could be anything. The File Activity plugin can help you find out which files are being accessed, but that should be the least of your worries right now.
-
If
"Secure Shell Server (SSH)"
"Web Server (HTTP, TCP 80)"
and to some extent
"Web Server (HTTP, TCP 1880)"
"Secure Web Server (HTTPS, TCP 1444)"
"Secure Web Server (HTTPS, TCP 18443)"
link to the computer that's running Unraid, you exposed it to the internet with a fancy invitation letter.
43 minutes ago, snoopy86 said:
Why is this a security problem? When I set one container to have a static IP, I still want the other containers to be able to reach this container, and the other way around.
Docker containers on a (macvlan/ipvlan) bridge can reach each other.
The security aspect is network isolation between any of the virtualized environments and the host system.
Besides that, there's a checkbox to remove it.
-
24 minutes ago, BenTheBuilder said:
I've been getting the same error now for about a week after I performed a reboot. I've tried all the troubleshooting steps listed with no luck. Is there no method we can use to verify if this is a bug or if the appfeed is actually down?
Time to test your designated DNS server, because I can assure you that the appfeed is available.
https://raw.githubusercontent.com/Squidly271/AppFeed/master/applicationFeed.json
https://dnld.lime-technology.com/appfeed/master/applicationFeed.json
-
From the diagnostics you provided, it's pretty clear that you're using ipvlan.
Otherwise you would have to ensure that, under the advanced view for eth0 (bond0 in your case), the option is ticked and actually saved.
-
2 minutes ago, Beryllium said:
Will need to look into this. If it's not too expensive, this sounds like a good option.
Alternatively, given that you know how to do it, you can just rent any cheap VPS, put a WireGuard server on it, and tunnel your game servers through that. The only real benefit would be the static IP.
If you really want to be paranoid-level secure, Tailscale would be an option to connect you and your friends with your game server.
-
6 minutes ago, Beryllium said:
Instead of giving the IP out, I will add a CNAME in Cloudflare with DNS only, and then they can manually add the port at the end of the URL.
You're still handing an IP out, lol.
-
27 minutes ago, MAM59 said:
my guess (can't prove it) is that they have artificially slowed it down in the binary.
Has to be; same behavior here.
Edit: Hmm, might not be the case. It seems to be an Unraid/Cloudflare thing, tbh. All my domains that are handled by Cloudflare behave that way.
The Docker containers ping just fine; it's just wonky within Unraid's own ping.
-
The icons at that URL don't exist anymore; it's a 404 page. Your link going up and down is a restarting Docker container that's probably using a WireGuard interface, but that's my crystal ball talking without diagnostics.
-
You can place this file, nvidia-driver.plg, on your USB drive under
/config/plugins
and it should download and install the required Nvidia drivers; worst case is a reboot if it doesn't pop up at the first boot. You can wait a bit for it to install and, after a couple of minutes, hit the power button for a normal shutdown procedure.
Alternatively, if available to you, don't use UEFI to boot; use legacy. That should give you graphical output 99% of the time.
Edit: I just noticed your diagnostics are dated 2019. Your time server and DNS servers are probably screwed up, which might cause problems with downloading the required Nvidia package.
I've tested the two DNS servers designated in your diagnostics and they don't respond to any requests.
6.12.7-rc2
As before, will update as we go along.
6.12.10 up to 6.12.13
# -------------------------------------------------
# RAM-Disk for Docker json/log files v1.6 for 6.12.10
# -------------------------------------------------

# check compatibility
echo -e "8d6094c1d113eb67411e18abc8aaf15d  /etc/rc.d/rc.docker\n9f0269a6ca4cf551ef7125b85d7fd4e0  /usr/local/emhttp/plugins/dynamix/scripts/monitor" | md5sum --check --status && compatible=1

if [[ $compatible ]]; then
  # create RAM-Disk on starting the docker service
  sed -i '/nohup/i \
  # move json/logs to ram disk\
  rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
  mountpoint -q /var/lib/docker/containers || mount -t tmpfs tmpfs /var/lib/docker/containers || logger -t docker Error: RAM-Disk could not be mounted!\
  rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
  logger -t docker RAM-Disk created' /etc/rc.d/rc.docker

  # remove RAM-Disk on stopping the docker service
  sed -i '/tear down the bridge/i \
  # backup json/logs and remove RAM-Disk\
  rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
  umount /var/lib/docker/containers || logger -t docker Error: RAM-Disk could not be unmounted!\
  rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
  logger -t docker RAM-Disk removed' /etc/rc.d/rc.docker

  # Automatically backup Docker RAM-Disk
  sed -i '/^<?PHP$/a \
  $sync_interval_minutes=30;\
  if ( ! ((date("i") * date("H") * 60 + date("i")) % $sync_interval_minutes) && file_exists("/var/lib/docker/containers")) {\
    exec("\
      [[ ! -d /var/lib/docker_bind ]] && mkdir /var/lib/docker_bind\
      if ! mountpoint -q /var/lib/docker_bind; then\
        if ! mount --bind /var/lib/docker /var/lib/docker_bind; then\
          logger -t docker Error: RAM-Disk bind mount failed!\
        fi\
      fi\
      if mountpoint -q /var/lib/docker_bind; then\
        rsync -aH --delete /var/lib/docker/containers/ /var/lib/docker_bind/containers && logger -t docker Success: Backup of RAM-Disk created.\
        umount -l /var/lib/docker_bind\
      else\
        logger -t docker Error: RAM-Disk bind mount failed!\
      fi\
    ");\
  }' /usr/local/emhttp/plugins/dynamix/scripts/monitor
else
  logger -t docker "Error: RAM-Disk Mod found incompatible files: $(md5sum /etc/rc.d/rc.docker /usr/local/emhttp/plugins/dynamix/scripts/monitor | xargs)"
fi
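The script refuses to patch anything unless the target files carry exact known checksums, so it never edits an rc.docker it wasn't written for. A toy sketch of how that `md5sum --check --status` guard behaves (the temp file and its content here are purely illustrative, not part of the mod):

```shell
# Toy demonstration of the md5sum compatibility guard.
# The temp file and its content are made up for illustration.
tmp=$(mktemp)
echo "known content" > "$tmp"
hash=$(md5sum "$tmp" | awk '{print $1}')

# --check reads "hash  path" pairs from stdin; --status suppresses all
# output and signals match/mismatch only via the exit code.
if echo "$hash  $tmp" | md5sum --check --status; then
  result="compatible"
else
  result="incompatible"
fi
echo "$result"
rm -f "$tmp"
```

If either patched file ships changed in a new Unraid release, the hash no longer matches, the guard fails, and the sed edits are skipped instead of corrupting the file.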
-
I'm just gonna let you know that exposing your Unraid webGUI to the internet is certainly not the smartest idea.
Otherwise it seems like a problem with your connection.
Impossible with the bridge network.
Either run TeamSpeak in host mode (or with a separate IP, of course) or accept the bogus Docker IPs.
NPM might still work if the headers are set.
14 minutes ago, jayw1 said:
Can I backup appdata while it is in use (containers not stopped)?
Very bad idea. There is a reason why the container(s) are being stopped ^^
Otherwise, you can use any, and I mean any, file-sync container/program to sync your backups somewhere else. I use Syncthing for multiple things, but MEGAsync works too. It's your cup of tea which tool you use to send files from A to B.
networks:
  default:
    name: eth0
    external: true

That would be the minimum that should appear somewhere in the yaml in order, in this case, to reach the home network.
"eth0" simply gets renamed to whichever network you actually want.
https://docs.docker.com/compose/networking/#use-a-pre-existing-network
11 hours ago, Torlew said:
Can I get that zip somewhere?
https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.9.2-x86_64.zip
-
You need to route the traffic on the specified ports from the VPS to the VPN IP of the connected client.
Here's an example:

PreUp = iptables -t nat -A PREROUTING -i enp0s6 -p udp --dport 7777 -j DNAT --to-destination 10.123.0.2:7777
PostDown = iptables -t nat -D PREROUTING -i enp0s6 -p udp --dport 7777 -j DNAT --to-destination 10.123.0.2:7777
PreUp = iptables -t nat -A PREROUTING -i enp0s6 -p udp --dport 7778 -j DNAT --to-destination 10.123.0.2:7778
PostDown = iptables -t nat -D PREROUTING -i enp0s6 -p udp --dport 7778 -j DNAT --to-destination 10.123.0.2:7778
PreUp = iptables -t nat -A PREROUTING -i enp0s6 -p udp --dport 27015 -j DNAT --to-destination 10.123.0.2:27015
PostDown = iptables -t nat -D PREROUTING -i enp0s6 -p udp --dport 27015 -j DNAT --to-destination 10.123.0.2:27015

While you can NAT all outgoing traffic, hitting the server with a request on UDP 27015 will give you no response, because the VPS has nothing running that would answer on that port. That's why you need to route all incoming traffic to the VPN client.
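Since the PreUp/PostDown pairs differ only in the port number, they can be generated rather than hand-typed. A sketch reusing the interface name and client IP from the example (both are still assumptions about your particular VPS; this only prints config lines and changes no firewall state):

```shell
# Generate matching PreUp/PostDown DNAT rule pairs for a list of UDP ports.
# enp0s6 and 10.123.0.2 mirror the example above -- adjust to your setup.
WAN_IF="enp0s6"
CLIENT_IP="10.123.0.2"
RULES=""
for port in 7777 7778 27015; do
  # -A on PreUp adds the rule when the tunnel comes up,
  # -D on PostDown removes the identical rule when it goes down
  RULES+="PreUp = iptables -t nat -A PREROUTING -i $WAN_IF -p udp --dport $port -j DNAT --to-destination $CLIENT_IP:$port
PostDown = iptables -t nat -D PREROUTING -i $WAN_IF -p udp --dport $port -j DNAT --to-destination $CLIENT_IP:$port
"
done
printf '%s' "$RULES"
```

Paste the output into the [Interface] section of the server's WireGuard config; keeping PreUp/PostDown symmetric avoids duplicate rules piling up across tunnel restarts.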
Just as an FYI on Oracle Cloud: for some reason, and I honestly have no clue why, if you try to use this method to connect two machines to one server under Ubuntu, for example, it just breaks and will only forward traffic to one and only one of the clients, no matter what IP you give as the destination for a given port. A weird and odd bug. You can get that use case working under RedHat/Oracle OS.
-
13 hours ago, pbear said:
As I already had with 6.12.4, I cannot connect to my Unraid server through its IPv6 address. I had downgraded to 6.12.3 until now for this reason (among others).

⚡ root@unraid # ss -tlpn | grep :22
tcp   LISTEN   0   128   0.0.0.0:22   0.0.0.0:*   users:(("sshd",pid=24054,fd=3))

I don't know why it doesn't listen on [::]:22
Any ideas?
(But with this version, unlike v6.12.4, IPv6 works with my Docker containers.)
I encountered maybe the same, or at least a similar, issue after updating from 6.12.5-rc1 to 6.12.5.
Nothing (webGUI, SSH, anything really) runs on IPv6 after you stop the array and then start it again.
Oddly enough, the first bootup is fine. It still lists that it actively listens on the addresses and ports, but it's absolutely dead. Gonna check on it later and post the diagnostics.
@ich777 I would like to propose some changes to the ASA beta container/template.
The server admin password should trail the "?cmd-additions" (GAME_PARAMS), because everything that follows "?ServerAdminPassword=BlahBlah" is used as the server admin password. It's beyond me how they haven't fixed that yet.

wine64 ArkAscendedServer.exe ${MAP}?listen?SessionName="${SERVER_NAME}"?ServerPassword="${SRV_PWD}"${GAME_PARAMS}?ServerAdminPassword="${SRV_ADMIN_PWD}" ${GAME_PARAMS_EXTRA} &

About the template: the default "?MaxPlayer=20" could be changed to "-WinLiveMaxPlayers=20" (GAME_PARAMS_EXTRA), since that's the current argument for it.
I reckon a mention of the changes somewhere would be good too. It's probably confusing for people why things don't work "out of the box" as expected.
The whole thing could have been solved by writing the vhost "adapter" into the WireGuard config file, e.g. here.
I haven't checked yet whether that was taken into account in 12.5, but it doesn't really matter anyway.
-
For 6.12.5-rc1; will update/edit this post as we go along.
For 6.12.5 (works with 6.12.6 as well):
# -------------------------------------------------
# RAM-Disk for Docker json/log files v1.6 for 6.12.5
# -------------------------------------------------

# check compatibility
echo -e "a26fd1e4fae583e52a2a80b90b3d5500  /etc/rc.d/rc.docker\n9f0269a6ca4cf551ef7125b85d7fd4e0  /usr/local/emhttp/plugins/dynamix/scripts/monitor" | md5sum --check --status && compatible=1

if [[ $compatible ]]; then
  # create RAM-Disk on starting the docker service
  sed -i '/nohup/i \
  # move json/logs to ram disk\
  rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
  mountpoint -q /var/lib/docker/containers || mount -t tmpfs tmpfs /var/lib/docker/containers || logger -t docker Error: RAM-Disk could not be mounted!\
  rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
  logger -t docker RAM-Disk created' /etc/rc.d/rc.docker

  # remove RAM-Disk on stopping the docker service
  sed -i '/tear down the bridge/i \
  # backup json/logs and remove RAM-Disk\
  rsync -aH --delete /var/lib/docker/containers/ ${DOCKER_APP_CONFIG_PATH%/}/containers_backup\
  umount /var/lib/docker/containers || logger -t docker Error: RAM-Disk could not be unmounted!\
  rsync -aH --delete ${DOCKER_APP_CONFIG_PATH%/}/containers_backup/ /var/lib/docker/containers\
  logger -t docker RAM-Disk removed' /etc/rc.d/rc.docker

  # Automatically backup Docker RAM-Disk
  sed -i '/^<?PHP$/a \
  $sync_interval_minutes=30;\
  if ( ! ((date("i") * date("H") * 60 + date("i")) % $sync_interval_minutes) && file_exists("/var/lib/docker/containers")) {\
    exec("\
      [[ ! -d /var/lib/docker_bind ]] && mkdir /var/lib/docker_bind\
      if ! mountpoint -q /var/lib/docker_bind; then\
        if ! mount --bind /var/lib/docker /var/lib/docker_bind; then\
          logger -t docker Error: RAM-Disk bind mount failed!\
        fi\
      fi\
      if mountpoint -q /var/lib/docker_bind; then\
        rsync -aH --delete /var/lib/docker/containers/ /var/lib/docker_bind/containers && logger -t docker Success: Backup of RAM-Disk created.\
        umount -l /var/lib/docker_bind\
      else\
        logger -t docker Error: RAM-Disk bind mount failed!\
      fi\
    ");\
  }' /usr/local/emhttp/plugins/dynamix/scripts/monitor
else
  logger -t docker "Error: RAM-Disk Mod found incompatible files: $(md5sum /etc/rc.d/rc.docker /usr/local/emhttp/plugins/dynamix/scripts/monitor | xargs)"
fi
Plugged in a 16TB USB - Is it too large?
in General Support
Posted · Edited by Mainfrezzer
I hope a good warranty came with the ~2000 dollar price tag 😂.
Nah, that thing is fake.
Edit:
As a sort of PSA: this is, as of writing, the largest commercially available flash drive.
I based my 2k guess on flash-chip prices; quite shocked that the 2TB version is already almost that much.