kl0wn

Members · 69 posts

Everything posted by kl0wn

  1. I'm also seeing random MAC addresses leak to the broader network. I have 2 NICs, one for management and the other for Docker (macvlan errors...). This only seems to pop up when I make a change within Docker. I'll try disabling Host access to custom networks, since I don't really need it enabled anyway; the router is handling segmentation via VLANs.
  2. I've successfully (I think?) deployed the ARK container and added -crossplay, but we're not seeing the server populated in the Unofficial PC list. All ports are forwarded. What am I missing?
  3. This resolved the issue for me as well but dedicating a NIC to one network is kind of a bummer.
  4. Same. I attempted adding --runtime=nvidia and exposing my NVIDIA card to the container but no dice.
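For context on the approach the post above is attempting, this is roughly what handing an NVIDIA card to a container looks like on the command line. This is a sketch, not the thread's actual template: the container name and image are placeholders, and a specific card can be targeted by substituting its UUID (from `nvidia-smi -L`) for `all`.

```shell
# Hedged sketch: expose an NVIDIA GPU to a container via the nvidia
# runtime. Image name is a placeholder; requires the NVIDIA container
# runtime to be installed on the host.
docker run -d --name=plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  plexinc/pms-docker
```

This cannot run without an NVIDIA GPU and the nvidia runtime present, so treat it as a shape to compare against rather than something to paste verbatim.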
  5. I can confidently mark this thread as resolved. If anyone else is having an issue with kernel panics after version 6.9+, disabling "Host Access to Custom Networks" has resulted in no panics for over 24 hours. Hopefully this helps someone else out. @JorgeB thank you for your guidance and assistance as well!
  6. Copy - the problem is ipvlan uses the concept of one MAC address mapped to many IPs, which is a bit of a nightmare for me personally. I've read a lot of posts and there seems to be a common theme: switch to ipvlan while also disabling "Host Access to Custom Networks". I went through a variety of macvlan/ipvlan iterations, with and without: VLANs, a dedicated NIC with PVID, DHCP on/off... Right now I'm using macvlan with "Host Access to Custom Networks" disabled; so far so good for ~6 hours. I'll report back in 24 or so. NOTE: Bridge and Host containers will lose the ability to communicate with containers that both have a static IP and live on the host.
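For readers wondering what the one-MAC-many-IPs model looks like in practice, here is a rough sketch of creating an ipvlan network by hand with the stock Docker CLI. This is illustrative only, not Unraid's exact plumbing: the parent interface, subnet, gateway, network name, and container IP are all assumptions.

```shell
# Hedged sketch: an L2 ipvlan network. Every container attached to it
# shares the parent NIC's MAC address but gets its own IP, so the
# upstream switch only ever sees one MAC. Values below are made up.
docker network create -d ipvlan \
  --subnet=192.168.20.0/24 \
  --gateway=192.168.20.1 \
  -o parent=eth0.20 \
  -o ipvlan_mode=l2 \
  vlan20

# Attach a container with a static IP on that network:
docker run -d --network=vlan20 --ip=192.168.20.50 --name=pihole pihole/pihole
```

Routing still works normally because the router forwards by IP; the shared MAC only matters at layer 2 on that one segment.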
  7. I tried that, but I'm having trouble understanding how exactly to implement ipvlan in practice. If all of the containers are using the same MAC with different IP addresses... how am I then able to effectively route traffic across the network? For reference, my Docker containers are on a VLAN as well, with no DHCP. Thanks!
  8. Hey All, I'm having kernel panics related to macvlan and, in some cases, the NVIDIA driver, but there doesn't seem to be any rhyme or reason to them. Example below; diagnostics attached.

     Aug 26 08:27:25 NAS kernel: <TASK>
     Aug 26 08:27:25 NAS kernel: netif_rx_ni+0x53/0x85
     Aug 26 08:27:25 NAS kernel: macvlan_broadcast+0x116/0x144 [macvlan]
     Aug 26 08:27:25 NAS kernel: macvlan_process_broadcast+0xc7/0x110 [macvlan]
     Aug 26 08:27:25 NAS kernel: process_one_work+0x198/0x27a
     Aug 26 08:27:25 NAS kernel: worker_thread+0x19c/0x240
     Aug 26 08:27:25 NAS kernel: ? rescuer_thread+0x28b/0x28b
     Aug 26 08:27:25 NAS kernel: kthread+0xde/0xe3
     Aug 26 08:27:25 NAS kernel: ? set_kthread_struct+0x32/0x32
     Aug 26 08:27:25 NAS kernel: ret_from_fork+0x22/0x30
     Aug 26 08:27:25 NAS kernel: </TASK>

     nas-diagnostics-20220826-1931.zip
  9. Got it. I have Plex and Emby leveraging the card. I'll hold off for now, amigo.
  10. Same here. EDIT: tried specifying the UUID of the card (I only have 1) to see if that would help. Same deal.
  11. Hello, stoked to try this out but I'm running into an issue. See below for the docker log output:

      [ /scripts/10-setup_user.sh: executing... ]
      **** Configure default user ****
      Setting run user uid=100(default) gid=99(default)
      Setting umask to 000
      Adding default home directory template
      Setting root password
      Setting user password
      DONE
      [ /scripts/20-configre_sshd.sh: executing... ]
      DONE
      [ /scripts/30-configure_system_paths.sh: executing... ]
      **** Configure system paths ****
      Configure dbus
      Configure X Windows context
      Configure X Windows session
      Remove old lockfiles
      DONE
      [ /scripts/40-setup_locale.sh: executing... ]
      **** Locales already set correctly to en_US.UTF-8 UTF-8 ****
      DONE
      [ /scripts/50-configure_audio.sh: executing... ]
      **** Configure pulseaudio socket ****
      **** Patching noVNC with audio websocket ****
      DONE
      [ /scripts/80-configure_nvidia_driver.sh: executing... ]
      **** Found NVIDIA device 'NVIDIA GeForce GTX 1050 Ti' ****
      Downloading driver
      Installing driver
      DONE
      [ /scripts/90-configure_xorg.sh: executing... ]
      **** Generate NVIDIA xorg.conf ****
      Configure Xwrapper.config
      Configuring X11 with GPU ID: 'GPU-bb0a6c23-f6a4-d567-4fe4-edc694fe7fe9'
      Configuring X11 with PCI bus ID: 'PCI:4:0:0'
      Writing X11 config with Modeline "1600x900R" 97.50 1600 1648 1680 1760 900 903 908 926 +hsync -vsync
      WARNING: Unable to locate/open X configuration file.
      Package xorg-server was not found in the pkg-config search path.
      Perhaps you should add the directory containing `xorg-server.pc' to the PKG_CONFIG_PATH environment variable
      No package 'xorg-server' found
      Option "ProbeAllGpus" "False" added to Screen "Screen0".
      Option "AllowEmptyInitialConfiguration" "True" added to Screen "Screen0".
      New X configuration file written to '/etc/X11/xorg.conf'
      DONE
      **** Starting supervisord ****
      2022-01-10 11:10:06,372 INFO Included extra file "/etc/supervisor/conf.d/services.conf" during parsing
      2022-01-10 11:10:06,373 INFO Set uid to user 0 succeeded
      2022-01-10 11:10:06,383 INFO RPC interface 'supervisor' initialized
      2022-01-10 11:10:06,384 CRIT Server 'unix_http_server' running without any HTTP authentication checking
      2022-01-10 11:10:06,385 INFO supervisord started with pid 1
      2022-01-10 11:10:07,390 INFO spawned: 'audiostream' with pid 172
      2022-01-10 11:10:07,394 INFO spawned: 'audiowebsock' with pid 173
      2022-01-10 11:10:07,396 INFO spawned: 'dbus' with pid 174
      2022-01-10 11:10:07,401 INFO spawned: 'pulseaudio' with pid 175
      2022-01-10 11:10:07,403 INFO spawned: 'ssh' with pid 176
      2022-01-10 11:10:07,406 INFO spawned: 'xorg' with pid 177
      2022-01-10 11:10:07,411 INFO spawned: 'x11vnc' with pid 178
      2022-01-10 11:10:07,419 INFO spawned: 'de' with pid 179
      2022-01-10 11:10:07,422 INFO spawned: 'novnc' with pid 180
      2022-01-10 11:10:07,435 INFO exited: audiostream (exit status 111; not expected)
      2022-01-10 11:10:07,519 INFO exited: xorg (exit status 1; not expected)
      2022-01-10 11:10:07,581 INFO reaped unknown pid 221 (exit status 0)
      2022-01-10 11:10:07,602 INFO exited: de (exit status 1; not expected)
      2022-01-10 11:10:08,454 INFO spawned: 'audiostream' with pid 235
      2022-01-10 11:10:08,455 INFO success: audiowebsock entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:08,455 INFO success: dbus entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:08,455 INFO success: pulseaudio entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:08,455 INFO success: ssh entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:08,456 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:08,456 INFO success: novnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:08,456 INFO exited: x11vnc (exit status 1; not expected)
      2022-01-10 11:10:08,462 INFO spawned: 'x11vnc' with pid 236
      2022-01-10 11:10:08,471 INFO exited: audiostream (exit status 111; not expected)
      2022-01-10 11:10:09,236 INFO spawned: 'xorg' with pid 240
      2022-01-10 11:10:09,239 INFO spawned: 'de' with pid 241
      2022-01-10 11:10:09,313 INFO exited: xorg (exit status 1; not expected)
      2022-01-10 11:10:09,369 INFO reaped unknown pid 250 (exit status 0)
      2022-01-10 11:10:09,371 INFO exited: de (exit status 1; not expected)
      2022-01-10 11:10:09,483 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:09,485 INFO exited: x11vnc (exit status 1; not expected)
      2022-01-10 11:10:10,491 INFO spawned: 'audiostream' with pid 251
      2022-01-10 11:10:10,496 INFO spawned: 'x11vnc' with pid 252
      2022-01-10 11:10:10,501 INFO exited: audiostream (exit status 111; not expected)
      2022-01-10 11:10:11,262 INFO reaped unknown pid 220 (exit status 1)
      2022-01-10 11:10:11,516 INFO spawned: 'xorg' with pid 253
      2022-01-10 11:10:11,516 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:11,520 INFO spawned: 'de' with pid 254
      2022-01-10 11:10:11,521 INFO exited: x11vnc (exit status 1; not expected)
      2022-01-10 11:10:11,526 INFO spawned: 'x11vnc' with pid 255
      2022-01-10 11:10:11,601 INFO exited: xorg (exit status 1; not expected)
      2022-01-10 11:10:11,652 INFO reaped unknown pid 264 (exit status 0)
      2022-01-10 11:10:11,653 INFO exited: de (exit status 1; not expected)
      2022-01-10 11:10:12,547 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:12,547 INFO reaped unknown pid 249 (exit status 1)
      2022-01-10 11:10:12,550 INFO exited: x11vnc (exit status 1; not expected)
      2022-01-10 11:10:12,634 INFO spawned: 'x11vnc' with pid 265
      2022-01-10 11:10:13,658 INFO spawned: 'audiostream' with pid 266
      2022-01-10 11:10:13,658 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:13,659 INFO exited: x11vnc (exit status 1; not expected)
      2022-01-10 11:10:13,663 INFO spawned: 'x11vnc' with pid 267
      2022-01-10 11:10:13,667 INFO exited: audiostream (exit status 111; not expected)
      2022-01-10 11:10:13,668 INFO gave up: audiostream entered FATAL state, too many start retries too quickly
      2022-01-10 11:10:14,686 INFO spawned: 'xorg' with pid 268
      2022-01-10 11:10:14,686 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:14,690 INFO spawned: 'de' with pid 269
      2022-01-10 11:10:14,690 INFO reaped unknown pid 263 (exit status 1)
      2022-01-10 11:10:14,691 INFO exited: x11vnc (exit status 1; not expected)
      2022-01-10 11:10:14,695 INFO spawned: 'x11vnc' with pid 270
      2022-01-10 11:10:14,757 INFO exited: xorg (exit status 1; not expected)
      2022-01-10 11:10:14,813 INFO gave up: xorg entered FATAL state, too many start retries too quickly
      2022-01-10 11:10:14,813 INFO reaped unknown pid 279 (exit status 0)
      2022-01-10 11:10:14,815 INFO exited: de (exit status 1; not expected)
      2022-01-10 11:10:15,463 INFO gave up: de entered FATAL state, too many start retries too quickly
      2022-01-10 11:10:15,714 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:15,716 INFO exited: x11vnc (exit status 1; not expected)
      2022-01-10 11:10:16,722 INFO spawned: 'x11vnc' with pid 280
      2022-01-10 11:10:17,751 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:17,753 INFO exited: x11vnc (exit status 1; not expected)
      2022-01-10 11:10:18,761 INFO spawned: 'x11vnc' with pid 281
      2022-01-10 11:10:18,762 INFO reaped unknown pid 278 (exit status 1)
      2022-01-10 11:10:19,783 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:19,786 INFO exited: x11vnc (exit status 1; not expected)
      2022-01-10 11:10:20,793 INFO spawned: 'x11vnc' with pid 282
      2022-01-10 11:10:21,563 WARN received SIGTERM indicating exit request
      2022-01-10 11:10:21,564 INFO waiting for audiowebsock, dbus, pulseaudio, ssh, x11vnc, novnc to die
      2022-01-10 11:10:21,595 INFO stopped: novnc (exit status 143)
      2022-01-10 11:10:21,595 INFO reaped unknown pid 204 (exit status 0)
      2022-01-10 11:10:21,815 INFO success: x11vnc entered RUNNING state, process has stayed up for > than 1 seconds (startsecs)
      2022-01-10 11:10:21,817 INFO stopped: x11vnc (terminated by SIGQUIT)
      2022-01-10 11:10:21,819 INFO stopped: ssh (exit status 255)
      2022-01-10 11:10:21,821 INFO stopped: pulseaudio (terminated by SIGQUIT)
      2022-01-10 11:10:21,824 INFO stopped: dbus (terminated by SIGQUIT)
      2022-01-10 11:10:21,828 INFO stopped: audiowebsock (terminated by SIGQUIT)
  12. Thanks! I ended up snagging a Firewalla Gold a few months back and it has mDNS reflection built in.
  13. I tried a few times - no dice. 8GB of RAM. No errors. I'll give the manual method a shot this weekend. Thanks, bud.
  14. I'm stuck on "Reboot Now": it spins for a bit, then throws a 504 Gateway error. I also tried to reboot via the typical "Reboot" button, no dice. I SSH'd in and was able to reboot, but it didn't update to 6.9.2 and still shows the "Reboot Now" banner. Any thoughts?
  15. Changing the regkey to 1 worked for me. Thanks!
  16. Howdy Gents, I am trying to set up an Avahi daemon for mDNS, Bonjour, etc. across multiple network segments. I figured the easiest way to do this would be via a docker container, but I haven't had any success finding an UNRAID template. Has anyone else done this via a docker container? Maybe I could just leverage the Avahi daemon that's in UNRAID? Open to suggestions, but note that I have no additional hardware for Avahi - this would need to run on the host or in a docker container. Thanks!
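For anyone landing on the Avahi question later: the piece that makes cross-segment mDNS work is Avahi's reflector, which re-broadcasts mDNS packets between interfaces. A minimal sketch of the relevant avahi-daemon.conf sections, with the interface names being assumptions for illustration:

```ini
; Illustrative avahi-daemon.conf fragment (interface names are
; assumptions). The reflector repeats mDNS queries and answers
; between the listed interfaces, which sit on separate segments.
[server]
allow-interfaces=br0,br0.20

[reflector]
enable-reflector=yes
```

Whether Avahi runs on the host or in a container, it needs an interface on each segment it should reflect between.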
  18. The issue popped up again, so I submitted a bug report and rolled back to 6.5.3. Everything is now stable... so it's definitely something going on with that version. I'll hang out in 6.5.3 land.
  19. No offense taken my friend lol. I know that I DEFINITELY need a better/beefier box but it's just not in the cards right now. I could up the RAM but I don't want to dump funds into an old box that will eventually be upgraded to a platform that won't even support the RAM from this one. After reboot, my memory is at 37% so something was definitely hung. I do however plan to up the size of my Cache drive, that way I can just kick off Mover every morning at say 2AM rather than having it run every hour. Thanks for the input bud.
  20. I'm starting to think this has to do with Mover causing the IOWAIT. I changed it to run every 4 hours, rather than every hour, and enabled logging. I'll report back with what I find. If anyone has other ideas, please let me know. EDIT: I found that my Pi-hole docker share, which was set to ONLY use the cache drive, somehow had files living on every disk in my environment... not sure how that's possible, but it happened. I set the share to Cache: Prefer --> ran Mover --> all files were moved back to cache. I then switched the share back to Cache: Only --> invoked Mover --> no crazy spike in CPU. My theory is that when Mover was invoked it was touching all of the drives, thus causing the IOWAIT. I may be totally wrong, but it's the best I've got for now.
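A quick way to confirm where a share's files actually live: on Unraid each data disk is mounted at /mnt/diskN, and a share is just a same-named top-level folder on whichever disks hold its files. A small sketch (the mount root and the share name `appdata` are assumptions for illustration):

```shell
# List which disks contain a top-level folder for a given share.
# A "cache-only" share that shows up on array disks has leaked.
find_share_disks() {
  local root=$1 share=$2 d
  for d in "$root"/disk*; do
    # Each Unraid data disk mounts at $root/diskN; the share is a
    # directory of the same name on any disk that holds its files.
    [ -d "$d/$share" ] && echo "$share has files on $(basename "$d")"
  done
  return 0
}

# Example (paths are assumptions): find_share_disks /mnt appdata
```

Running this before and after invoking Mover makes it easy to verify the files really consolidated back onto the cache.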
  21. There it is...top with a screen of Unraid showing 100%
  22. The transcoder is going to fluctuate all day long, but you're right, 344% is a bit much haha. I've played around with Docker pinning, but Plex seems to leak into other cores/HT regardless of what is set. I'll see what the transcoder shows the next time this happens, but I have 6-7 streams (some of them transcodes) running every night with no issues. This only happens in the morning, so it would be nice to see more verbose log output to identify what is kicking off or possibly causing it.
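On the pinning point: Docker's `--cpuset-cpus` constrains everything inside the container, including child processes like the transcoder, to the listed logical CPUs, but on a hyperthreaded box each physical core appears as two logical CPUs, so both siblings need to be listed or the dashboard can still show activity on "other" cores. A sketch, with the core numbers and image being assumptions (check `lscpu -e` for the real core/thread pairs):

```shell
# Illustrative sketch: pin a Plex container to physical cores 2 and 3
# plus their hyperthread siblings (10 and 11 here, which is an
# assumption; the sibling numbering varies by CPU).
docker run -d --name=plex \
  --cpuset-cpus="2,3,10,11" \
  plexinc/pms-docker
```

High morning IOWAIT can also render as busy cores in some dashboards, which is worth ruling out before blaming the pinning.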