kl0wn
Posts posted by kl0wn
-
I've successfully (I think?) deployed the ARK container and added -crossplay, but we're not seeing the server populated in the Unofficial PC list. All ports are forwarded. What am I missing?
-
This resolved the issue for me as well, but dedicating a NIC to one network is kind of a bummer.
-
-
Add me to the list
-
I can confidently mark this thread as resolved. If anyone else is having an issue with kernel panics on version 6.9+, disabling "Host Access to Custom Networks" has resulted in no panics for over 24 hours. Hopefully this helps someone else out.
@JorgeB thank you for your guidance and assistance as well!
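For anyone wanting to confirm the change actually took effect: when host access is enabled, Unraid creates a macvlan "shim" interface for the bridge; after disabling the setting and restarting Docker, it should be gone. The `shim-br0` name is my assumption based on a typical br0 custom network — adjust to match yours.

```shell
# Look for the shim interface Unraid adds when "Host Access to Custom
# Networks" is enabled; no output means host access is truly off.
ip -br link | grep shim || echo "no shim interface - host access is off"
```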
-
-
Copy - the problem is IPVLAN uses the concept of one MAC address mapped to many IPs, which is a bit of a nightmare for me personally. I've read a lot of posts and there seems to be a common theme: switch to IPVLAN while also disabling "Host Access to Custom Networks". I went through a variety of macvlan/ipvlan iterations, with/without: VLANs, dedicated NIC w/ PVID, DHCP on/off...
Right now I'm using macvlan w/ "Host Access to Custom Networks" disabled; so far so good for ~6 hours. I'll report back in 24 or so.
NOTE: Bridge and Host containers will lose the ability to communicate with containers that both have a static IP and live on the host.
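For anyone comparing the two drivers, here's a rough sketch of creating the same custom network both ways; the subnet, parent interface, and network names are made up for illustration, so substitute your own.

```shell
# macvlan: every container gets its own MAC address on the parent interface
docker network create -d macvlan \
  --subnet 192.168.50.0/24 --gateway 192.168.50.1 \
  -o parent=eth0 demo-macvlan

# ipvlan (L2 mode): containers share the parent NIC's MAC, each with its own IP
docker network create -d ipvlan \
  --subnet 192.168.50.0/24 --gateway 192.168.50.1 \
  -o parent=eth0 -o ipvlan_mode=l2 demo-ipvlan
```

The practical difference is the one described above: with ipvlan your switch only ever sees the NIC's real MAC, which avoids the macvlan-related panics but makes per-container identification on the network harder.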
-
I tried that, but I'm having trouble understanding how exactly to implement IPVLAN in practice. If all of the containers are using the same MAC w/ different IP addresses, how am I then able to effectively route traffic across the network? For reference, my docker containers are on a VLAN as well w/ no DHCP.
Thanks!
-
Hey All,
I'm having issues with kernel panics relating to macvlan, and in some cases the nvidia driver, but there doesn't seem to be any rhyme or reason. Example below and diagnostics attached:
Aug 26 08:27:25 NAS kernel: <TASK>
Aug 26 08:27:25 NAS kernel: netif_rx_ni+0x53/0x85
Aug 26 08:27:25 NAS kernel: macvlan_broadcast+0x116/0x144 [macvlan]
Aug 26 08:27:25 NAS kernel: macvlan_process_broadcast+0xc7/0x110 [macvlan]
Aug 26 08:27:25 NAS kernel: process_one_work+0x198/0x27a
Aug 26 08:27:25 NAS kernel: worker_thread+0x19c/0x240
Aug 26 08:27:25 NAS kernel: ? rescuer_thread+0x28b/0x28b
Aug 26 08:27:25 NAS kernel: kthread+0xde/0xe3
Aug 26 08:27:25 NAS kernel: ? set_kthread_struct+0x32/0x32
Aug 26 08:27:25 NAS kernel: ret_from_fork+0x22/0x30
Aug 26 08:27:25 NAS kernel: </TASK>
-
1 minute ago, Josh.5 said:
Anyone here getting errors trying to start the container, please make sure that you are not running any other containers like this that are using the GPU for a "screen". This is not currently possible.
Got it. I have Plex and Emby leveraging the card. I'll hold off for now, amigo.
-
4 hours ago, Irithor said:
Hmm, throwing some errors for me. Any suggestions? Cheers for the work/in advance.
steam-headless-2022-01-10T15-46-13.log 35.9 kB · 7 downloads
Same here.
EDIT: I tried specifying the UUID of the card (I only have one) to see if that would help. Same deal.
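For anyone else wanting to try the same workaround, the card's UUID can be pulled from the host with nvidia-smi; passing it to the container via `NVIDIA_VISIBLE_DEVICES` is how the NVIDIA container runtime pins a specific GPU (whether this template honors it is my assumption).

```shell
# List installed GPUs along with their UUIDs, e.g.
# GPU 0: NVIDIA GeForce GTX 1050 Ti (UUID: GPU-xxxxxxxx-...)
nvidia-smi -L
```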
-
Hello,
Stoked to try this out, but I'm running into an issue. See below for docker log output:
[ /scripts/10-setup_user.sh: executing... ]
**** Configure default user ****
Setting run user uid=100(default) gid=99(default)
Setting umask to 000
Adding default home directory template
Setting root password
Setting user password
DONE
[ /scripts/20-configre_sshd.sh: executing... ]
DONE
[ /scripts/30-configure_system_paths.sh: executing... ]
**** Configure system paths ****
Configure dbus
Configure X Windows context
Configure X Windows session
Remove old lockfiles
DONE
[ /scripts/40-setup_locale.sh: executing... ]
**** Locales already set correctly to en_US.UTF-8 UTF-8 ****
DONE
[ /scripts/50-configure_audio.sh: executing... ]
**** Configure pulseaudio socket ****
**** Patching noVNC with audio websocket ****
DONE
[ /scripts/80-configure_nvidia_driver.sh: executing... ]
**** Found NVIDIA device 'NVIDIA GeForce GTX 1050 Ti' ****
Downloading driver
Installing driver
DONE
[ /scripts/90-configure_xorg.sh: executing... ]
**** Generate NVIDIA xorg.conf ****
Configure Xwrapper.config
Configuring X11 with GPU ID: 'GPU-bb0a6c23-f6a4-d567-4fe4-edc694fe7fe9'
Configuring X11 with PCI bus ID: 'PCI:4:0:0'
Writing X11 config with Modeline "1600x900R" 97.50 1600 1648 1680 1760 900 903 908 926 +hsync -vsync
WARNING: Unable to locate/open X configuration file.
Package xorg-server was not found in the pkg-config search path.
Perhaps you should add the directory containing `xorg-server.pc' to the PKG_CONFIG_PATH environment variable
No package 'xorg-server' found
Option "ProbeAllGpus" "False" added to Screen "Screen0".
Option "AllowEmptyInitialConfiguration" "True" added to Screen "Screen0".
New X configuration file written to '/etc/X11/xorg.conf'
DONE
**** Starting supervisord ****
2022-01-10 11:10:06,372 INFO Included extra file "/etc/supervisor/conf.d/services.conf" during parsing
2022-01-10 11:10:06,373 INFO Set uid to user 0 succeeded
2022-01-10 11:10:06,383 INFO RPC interface 'supervisor' initialized
2022-01-10 11:10:06,384 CRIT Server 'unix_http_server' running without any HTTP authentication checking
2022-01-10 11:10:06,385 INFO supervisord started with pid 1
2022-01-10 11:10:07,390 INFO spawned: 'audiostream' with pid 172
2022-01-10 11:10:07,394 INFO spawned: 'audiowebsock' with pid 173
2022-01-10 11:10:07,396 INFO spawned: 'dbus' with pid 174
2022-01-10 11:10:07,401 INFO spawned: 'pulseaudio' with pid 175
2022-01-10 11:10:07,403 INFO spawned: 'ssh' with pid 176
2022-01-10 11:10:07,406 INFO spawned: 'xorg' with pid 177
2022-01-10 11:10:07,411 INFO spawned: 'x11vnc' with pid 178
2022-01-10 11:10:07,419 INFO spawned: 'de' with pid 179
2022-01-10 11:10:07,422 INFO spawned: 'novnc' with pid 180
2022-01-10 11:10:07,435 INFO exited: audiostream (exit status 111; not expected)
2022-01-10 11:10:07,519 INFO exited: xorg (exit status 1; not expected)
2022-01-10 11:10:07,581 INFO reaped unknown pid 221 (exit status 0)
2022-01-10 11:10:07,602 INFO exited: de (exit status 1; not expected)
2022-01-10 11:10:08,454 INFO spawned: 'audiostream' with pid 235
2022-01-10 11:10:08,455 INFO success: audiowebsock entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:08,455 INFO success: dbus entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:08,455 INFO success: pulseaudio entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:08,455 INFO success: ssh entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:08,456 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:08,456 INFO success: novnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:08,456 INFO exited: x11vnc (exit status 1; not expected)
2022-01-10 11:10:08,462 INFO spawned: 'x11vnc' with pid 236
2022-01-10 11:10:08,471 INFO exited: audiostream (exit status 111; not expected)
2022-01-10 11:10:09,236 INFO spawned: 'xorg' with pid 240
2022-01-10 11:10:09,239 INFO spawned: 'de' with pid 241
2022-01-10 11:10:09,313 INFO exited: xorg (exit status 1; not expected)
2022-01-10 11:10:09,369 INFO reaped unknown pid 250 (exit status 0)
2022-01-10 11:10:09,371 INFO exited: de (exit status 1; not expected)
2022-01-10 11:10:09,483 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:09,485 INFO exited: x11vnc (exit status 1; not expected)
2022-01-10 11:10:10,491 INFO spawned: 'audiostream' with pid 251
2022-01-10 11:10:10,496 INFO spawned: 'x11vnc' with pid 252
2022-01-10 11:10:10,501 INFO exited: audiostream (exit status 111; not expected)
2022-01-10 11:10:11,262 INFO reaped unknown pid 220 (exit status 1)
2022-01-10 11:10:11,516 INFO spawned: 'xorg' with pid 253
2022-01-10 11:10:11,516 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:11,520 INFO spawned: 'de' with pid 254
2022-01-10 11:10:11,521 INFO exited: x11vnc (exit status 1; not expected)
2022-01-10 11:10:11,526 INFO spawned: 'x11vnc' with pid 255
2022-01-10 11:10:11,601 INFO exited: xorg (exit status 1; not expected)
2022-01-10 11:10:11,652 INFO reaped unknown pid 264 (exit status 0)
2022-01-10 11:10:11,653 INFO exited: de (exit status 1; not expected)
2022-01-10 11:10:12,547 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:12,547 INFO reaped unknown pid 249 (exit status 1)
2022-01-10 11:10:12,550 INFO exited: x11vnc (exit status 1; not expected)
2022-01-10 11:10:12,634 INFO spawned: 'x11vnc' with pid 265
2022-01-10 11:10:13,658 INFO spawned: 'audiostream' with pid 266
2022-01-10 11:10:13,658 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:13,659 INFO exited: x11vnc (exit status 1; not expected)
2022-01-10 11:10:13,663 INFO spawned: 'x11vnc' with pid 267
2022-01-10 11:10:13,667 INFO exited: audiostream (exit status 111; not expected)
2022-01-10 11:10:13,668 INFO gave up: audiostream entered FATAL state, too many start retries too quickly
2022-01-10 11:10:14,686 INFO spawned: 'xorg' with pid 268
2022-01-10 11:10:14,686 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:14,690 INFO spawned: 'de' with pid 269
2022-01-10 11:10:14,690 INFO reaped unknown pid 263 (exit status 1)
2022-01-10 11:10:14,691 INFO exited: x11vnc (exit status 1; not expected)
2022-01-10 11:10:14,695 INFO spawned: 'x11vnc' with pid 270
2022-01-10 11:10:14,757 INFO exited: xorg (exit status 1; not expected)
2022-01-10 11:10:14,813 INFO gave up: xorg entered FATAL state, too many start retries too quickly
2022-01-10 11:10:14,813 INFO reaped unknown pid 279 (exit status 0)
2022-01-10 11:10:14,815 INFO exited: de (exit status 1; not expected)
2022-01-10 11:10:15,463 INFO gave up: de entered FATAL state, too many start retries too quickly
2022-01-10 11:10:15,714 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:15,716 INFO exited: x11vnc (exit status 1; not expected)
2022-01-10 11:10:16,722 INFO spawned: 'x11vnc' with pid 280
2022-01-10 11:10:17,751 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:17,753 INFO exited: x11vnc (exit status 1; not expected)
2022-01-10 11:10:18,761 INFO spawned: 'x11vnc' with pid 281
2022-01-10 11:10:18,762 INFO reaped unknown pid 278 (exit status 1)
2022-01-10 11:10:19,783 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:19,786 INFO exited: x11vnc (exit status 1; not expected)
2022-01-10 11:10:20,793 INFO spawned: 'x11vnc' with pid 282
2022-01-10 11:10:21,563 WARN received SIGTERM indicating exit request
2022-01-10 11:10:21,564 INFO waiting for audiowebsock, dbus, pulseaudio, ssh, x11vnc, novnc to die
2022-01-10 11:10:21,595 INFO stopped: novnc (exit status 143)
2022-01-10 11:10:21,595 INFO reaped unknown pid 204 (exit status 0)
2022-01-10 11:10:21,815 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
2022-01-10 11:10:21,817 INFO stopped: x11vnc (terminated by SIGQUIT)
2022-01-10 11:10:21,819 INFO stopped: ssh (exit status 255)
2022-01-10 11:10:21,821 INFO stopped: pulseaudio (terminated by SIGQUIT)
2022-01-10 11:10:21,824 INFO stopped: dbus (terminated by SIGQUIT)
2022-01-10 11:10:21,828 INFO stopped: audiowebsock (terminated by SIGQUIT)
-
12 hours ago, kp74508 said:
Since this was the first result in a Google search when I was looking for a solution, I will post my solution here. I created an mDNS reflector with a Docker container to reflect between my secure LAN, IoT (VLAN 2), and Guest (VLAN 3) networks. Change the specifics to fit your situation.
1- Create docker networks for the VLANs (one time setup in Unraid terminal):
docker network create --driver macvlan --subnet 192.168.2.0/24 --gateway 192.168.2.1 --opt parent=eth0.2 br0.2
docker network create --driver macvlan --subnet 192.168.3.0/24 --gateway 192.168.3.1 --opt parent=eth0.3 br0.3
2- Create docker container:
Name: Avahi (name is used in Post Arguments later)
Repository: flungo/avahi
Network Type: Custom: br0
Fixed IP address: 192.168.1.20
Docker Variables:
Key: REFLECTOR_ENABLE_REFLECTOR
Value: yes
Docker Post Arguments (ADVANCED VIEW):
; docker network connect br0.2 Avahi --ip 192.168.2.20; docker network connect br0.3 Avahi --ip 192.168.3.20
Thanks! I ended up snagging a Firewalla Gold a few months back and it has mDNS reflection built in.
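For anyone adapting the quoted setup without that image: my assumption is that the REFLECTOR_ENABLE_REFLECTOR variable just toggles the reflector section of avahi-daemon.conf, so the equivalent hand-written config would look roughly like this.

```ini
; Sketch of an avahi-daemon.conf enabling mDNS reflection across the
; interfaces the daemon can see (interface names are illustrative).
[server]
allow-interfaces=eth0,eth0.2,eth0.3

[reflector]
enable-reflector=yes
```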
-
On 4/12/2021 at 10:49 AM, Squid said:
Try it again? How much memory is in your server? Were there any errors during the installation? Worst comes to worst, you can always manually update by downloading the zip file and overwriting all of the bz* files in the root with those in the zip.
I tried a few times - no dice. 8GB of RAM. No errors. I'll give the manual method a shot this weekend. Thanks, bud.
-
On 4/8/2021 at 9:54 AM, kl0wn said:
I'm stuck on "Reboot Now" - it spins for a bit, then throws a 504 Gateway error. I also tried to reboot via the typical "Reboot" button; no dice. I SSH'd in and was able to reboot, but it didn't update to 6.9.2 and still shows the "Reboot Now" banner. Any thoughts?
Anyone?
-
I'm stuck on "Reboot Now" - it spins for a bit, then throws a 504 Gateway error. I also tried to reboot via the typical "Reboot" button; no dice. I SSH'd in and was able to reboot, but it didn't update to 6.9.2 and still shows the "Reboot Now" banner. Any thoughts?
-
On 9/24/2018 at 10:18 PM, jkBuckethead said:
I've been down some crazy rabbit holes with windows before, but this one really takes the cake. A little googling, and you quickly see that tons and tons of people have experienced this particular error. There are dozens upon dozens of potential solutions, ranging from simple to extremely complicated and everything in between. Reading posts of people's results couldn't be more random. For every person that is helped by a particular solution, there are twenty people for whom it didn't work. I myself had tried about a dozen of the best sure-fire fixes without any success.
I really didn't have much hope, but I took a look at the post linked above. The thread started in August of 2015. One common thread in error 0x80070035 posts is the 1803 Windows 10 update, so I decided to jump ahead to the end of the thread. Lo and behold, on page 5, the first post I read struck a chord for some reason. Even though I was quite tired of trying random things without success, I decided to give this registry edit a try. As soon as I added the key below I was able to access the unraid server. I didn't even have to reboot. HALLELUJAH!!!!
Try: (Solution)
https://www.schkerke.com/wps/2015/06/windows-10-unable-to-connect-to-samba-shares/
Basically the solution follows, but you'll need to use regedit:
add the new key HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters\AllowInsecureGuestAuth, with its value set to 1
It's interesting that one of my other computers that works doesn't have this key, but it has "AuditSmb1Access" set to 0, which this computer doesn't have.
I checked one of my Windows 10 Home machines, and like the post above it does not have the AllowInsecureGuestAuth key, but does have the AuditSmb1Access key set to 0. My Windows 10 Pro machine, the one that could not access my unraid server, had AllowInsecureGuestAuth set to 0. Setting this to 1 appears to have fixed my problem.
I'm not certain, but I suspect the different keys could be linked to one being Home and the other Pro. Again I'm just guessing, but the name suggests that access was blocked because the share lacked a password. I guess it's a security thing, but it's kind of an unexpected default setting. I wonder what GUI setting this is associated with. I don't recall ever seeing a windows setting to block access to open servers. I don't even want to test and see how much frustration I could have saved myself if I had simply secured the share and set a password from the start.
Changing the regkey to 1 worked for me. Thanks!
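For anyone who prefers the command line over regedit, the same key can be set from an elevated Command Prompt (this is the standard reg.exe syntax; apply the usual caution about the security trade-off discussed above):

```shell
reg add "HKLM\SYSTEM\CurrentControlSet\Services\LanmanWorkstation\Parameters" /v AllowInsecureGuestAuth /t REG_DWORD /d 1 /f
```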
-
Howdy Gents,
I am trying to set up an AVAHI daemon for mDNS, Bonjour, etc. across multiple network segments. I figured the easiest way to do this would be via a docker container, but I haven't had any success finding an UNRAID template. Has anyone else done this via a docker container? Maybe I could just leverage the AVAHI daemon that's in UNRAID? Open to suggestions, but note that I have no additional hardware for AVAHI - this would need to run on the host or in a docker container. Thanks!
-
Hey Gents,
I am trying to set up an AVAHI daemon for mDNS, Bonjour, etc. across multiple network segments. I figured the easiest way to do this would be via a docker container, but I haven't had any success finding an UNRAID template. Has anyone else done this via a docker container? Maybe I could just leverage the AVAHI daemon that's in UNRAID? Open to suggestions, but note that I have no additional hardware for AVAHI - this would need to run on the host or in a docker container. Thanks!
-
The issue popped up again, so I submitted a bug report and rolled back to 6.5.3; everything is now stable, so it's definitely something going on with that version. I'll hang out in 6.5.3 land.
-
No offense taken, my friend, lol. I know that I DEFINITELY need a better/beefier box, but it's just not in the cards right now. I could up the RAM, but I don't want to dump funds into an old box that will eventually be upgraded to a platform that won't even support the RAM from this one. After reboot, my memory is at 37%, so something was definitely hung. I do, however, plan to up the size of my cache drive; that way I can just kick off Mover every morning at, say, 2AM rather than having it run every hour. Thanks for the input, bud.
-
I'm starting to think this has to do with Mover causing the IOWAIT. I changed it to run every 4 hours rather than every hour and enabled logging. I'll report back with what I find. If anyone has other ideas, please let me know.
EDIT: I found that my pihole docker, which writes to a share that was set to ONLY use the cache drive, somehow had files living on every disk in my environment...not sure how that's possible, but it happened. I set the share to Cache: Prefer --> ran Mover --> all files were moved back to cache. I then switched the share back to Cache: Only --> invoked Mover --> no crazy spike in CPU. My theory is that when Mover was invoked it was touching all of the drives, thus causing the IOWAIT. I may be totally wrong, but it's the best I've got for now.
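If anyone wants to check whether a supposedly cache-only share has strays on the array, listing the per-disk mounts works; `appdata` is a stand-in for your share name, and the path layout assumes a stock Unraid setup.

```shell
# A cache-only share should only show up under /mnt/cache.
# Any hits under /mnt/disk* mean files have leaked onto array disks.
ls -d /mnt/disk*/appdata /mnt/cache/appdata 2>/dev/null
```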
-
There it is... top, with a screenshot of Unraid showing 100%.
-
-
The Transcoder is going to fluctuate all day long, but you're right, 344% is a bit much, haha. I've played around with Docker pinning, but Plex seems to leak into other cores/HT regardless of what is set. I'll see what the Transcoder shows the next time this happens, but I have 6-7 streams (some of those being transcodes) running every night with no issues. This is only happening in the morning, so it would be nice to see a more verbose log output to identify what is kicking off or possibly causing this to happen.
Something changing MAC address
in General Support
Posted
I'm also getting random MAC addresses leaking to the broader network. I have 2 NICs, 1 for management and the other for Docker (macvlan errors...). This only seems to pop up when I make a change within Docker. I'll try disabling "Host access to custom networks" because I don't really need that enabled anyway; the router is taking care of segmentation via VLANs.