Everything posted by JesterEE

  1. Quick note about this kernel build regarding the fix for onboard audio passthrough on x570. I have been tracking the Linux kernel git source to see if this issue has been addressed on the trunk, but unfortunately, it has not. There is also the possibility that AMD would address this issue in firmware with an updated AGESA ... but the sparsely documented notes for the latest to-be-released AGESA 1.0.0.5 firmware say nothing about the onboard audio. So if we have any hope of using the onboard audio on our boards in Unraid, an Unraid kernel carrying this patch will be required. Luckily, we have a Skitals! As a user of the NVIDIA Unraid build, I'm going to be forced to choose between my GPU in docker and my audio in my VM, which is unfortunate. It looks like I'm in the market for a USB audio DAC. 😒 EDIT: Looks like some people asked AMD to comment on USB and audio with AGESA 1.0.0.5, and others actually tried it ... no dice!
  2. Enhancement Request: Can a monitor be added for the PCI Rx and Tx bus speeds reported by the nvidia-smi -q -x command? They are reported in the logs here as:

        <nvidia_smi_log>
          <gpu id="00000000:0A:00.0">
            <pci>
              <tx_util>320000 KB/s</tx_util>
              <rx_util>3686000 KB/s</rx_util>
            </pci>
          </gpu>
        </nvidia_smi_log>

     (A sketch of pulling these values out is below.) Thanks! -JesterEE
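     For reference, a minimal sketch (in Python) of reading those two fields out of the nvidia-smi XML. The element names come from the snippet above; the unit handling and printing are illustrative assumptions, not anything the plugin does:

        # Sketch: read PCIe tx/rx utilization from `nvidia-smi -q -x`.
        # Element names (tx_util/rx_util) match the snippet above; unit
        # parsing assumes the usual "NNN KB/s" format.
        import subprocess
        import xml.etree.ElementTree as ET

        def pci_throughput():
            xml_out = subprocess.check_output(["nvidia-smi", "-q", "-x"], text=True)
            root = ET.fromstring(xml_out)
            stats = []
            for gpu in root.iter("gpu"):
                pci = gpu.find("pci")
                tx = int(pci.findtext("tx_util").split()[0])  # "320000 KB/s" -> 320000
                rx = int(pci.findtext("rx_util").split()[0])
                stats.append((gpu.get("id"), tx, rx))
            return stats

        for gpu_id, tx, rx in pci_throughput():
            print(f"{gpu_id}: tx={tx} KB/s rx={rx} KB/s")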
  3. I have been running the LinuxServer.io Folding@home docker recently, which has been keeping my GPU very busy fighting COVID-19 💪! With the GPU under load, I haven't noticed this issue come up at all, with or without the GPU Statistics plugin installed. When I stop utilizing the GPU and query it with nvidia-smi or the GPU Statistics plugin, I immediately see this error in the log. So, my latest hypothesis: there is something in the way the NVIDIA plugin interfaces with the GPU when it is queried in the low power (P0/throttled) state (a quick way to watch the reported power state is sketched below). @linuxserver.io, any ideas? -JesterEE
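     A minimal sketch for watching the reported performance state, to see when the error lines up with the throttled state. It assumes nvidia-smi's standard --query-gpu fields; the 5-second interval is arbitrary:

        # Sketch: log the GPU performance state and utilization over time
        # to correlate the log error with the power/throttle state.
        import subprocess
        import time

        while True:
            out = subprocess.check_output(
                ["nvidia-smi", "--query-gpu=pstate,utilization.gpu",
                 "--format=csv,noheader"],
                text=True,
            ).strip()
            print(time.strftime("%H:%M:%S"), out)
            time.sleep(5)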
  4. Ya, it changes the WebUI of the docker page by grouping dockers into collapsible "folder" groups. Really kinda handy! I'm actually hoping it gets rolled in by the Unraid devs after the bugs are ironed out 🤣. I'm not expecting a fix for ControlR, since it's a plugin and not a mainstream UI feature; this is really just an FYI about a known conflict.
  5. The new Unraid plugin Docker Folder breaks the parsing the ControlR app does. Dockers that are on show as off.
  6. @GuildDarts It might also make sense to incorporate a "pass-through" option for the WebUI and WebUI New tab: one that uses a Docker's configured WebUI setting rather than re-specifying it for the plugin. Kinda like the way you already have the Docker Sub Menus for the selected images, but instead of the submenu, a quick link to the WebUI subcommand. This won't actually work for my use case, given the way Unraid currently handles WebUIs with mapped container networks ... so I'd also like to keep my original request intact 😉. As of Unraid 6.8.3, if you specify a container as the Network Type, the WebUI subcommand will not be present even if it is specified in the configuration. So, this might break a look-up of the WebUI subcommand if you attempt to find it in the submenu instead of parsing the XML (a sketch of the XML route is below). I haven't looked at your plugin code, so I don't know how you're doing it. -JesterEE
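     For what it's worth, a minimal sketch of the parse-the-XML route. The template directory and the <WebUI> element name reflect my understanding of dockerMan user templates and may not match the plugin's internals, so treat it as illustrative:

        # Sketch: read the configured WebUI string from an Unraid dockerMan
        # user template. The template path and "my-<name>.xml" naming are
        # assumptions about the layout.
        import xml.etree.ElementTree as ET
        from pathlib import Path

        TEMPLATE_DIR = Path("/boot/config/plugins/dockerMan/templates-user")

        def webui_for(container_name):
            tmpl = TEMPLATE_DIR / f"my-{container_name}.xml"
            if not tmpl.exists():
                return None
            return ET.parse(tmpl).getroot().findtext("WebUI")

        print(webui_for("binhex-krusader"))  # e.g. "http://[IP]:[PORT:8080]/"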
  7. I have another enhancement request. Sorry ... I've been playing 😒. When selecting a Docker Restart option, only restart the containers that are already started. Currently, it will restart containers that are on and start containers that are off. -JesterEE
  8. Good question ... honestly, IDK. Maybe others who have been around longer than I have can share some historically significant info about that. The help information for the Docker WebUI element says to do it this way, and the request for the [PORT:####] syntax is just to align with the base functionality. -JesterEE
  9. Thanks for the great plugin! Can I make 2 enhancement requests?

     1. Allow renaming of Folders after they have been created. Currently, the name is grayed out for editing after initial creation.
     2. Allow the Unraid Docker WebUI construct of [IP] and [PORT:####] for buttons in the Folder, e.g. WebUI button = https://[IP]:[PORT:1234] (the substitution I have in mind is sketched below).

     -JesterEE
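     A minimal sketch of that substitution, assuming the plugin can look up the container's IP and its container-to-host port mappings; the helper and the mapping dict are hypothetical:

        # Sketch: expand Unraid-style [IP] and [PORT:####] tokens in a
        # WebUI string. `container_ip` and `port_map` stand in for values
        # looked up from the running container.
        import re

        def expand_webui(template, container_ip, port_map):
            url = template.replace("[IP]", container_ip)
            # [PORT:1234] -> the host port that container port 1234 maps to
            return re.sub(r"\[PORT:(\d+)\]",
                          lambda m: port_map.get(m.group(1), m.group(1)),
                          url)

        print(expand_webui("https://[IP]:[PORT:1234]",
                           "192.168.1.10", {"1234": "8443"}))
        # -> https://192.168.1.10:8443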
  10. @linuxserver.io Thread and app need to be updated to the new locations. https://github.com/linuxserver/docker-nzbhydra2 https://hub.docker.com/r/linuxserver/nzbhydra2
  11. No definitive answer, unfortunately! I wish I had one! I reloaded my server with the Unraid 6.8.3 LinuxServer.io Nvidia build and almost everything that I would "normally" run, and my VM is pretty great. I think any bottlenecks/stuttering I'm still getting is just a limitation of the hardware (GPU) I'm running and not the emulation. Though ... it's night and day from what I had originally. It was awful! When I was rebuilding I went pretty slowly, loading only one plugin or docker at a time while I tested out the VM performance. There are still a couple of plugins I want to run that I haven't loaded because I couldn't dedicate the time yet. If I had to guess ... I probably changed some obscure setting that didn't need to be changed while tinkering with my first-ever Unraid setup, and that hosed everything. But I won't be sure till I load all the stuff I want to run. As I get time to do my testing, I'll post here, but I doubt it will be soon.

     Here are the plugins I'm running that I have whitelisted for my build:
     CA Appdata Backup/Restore v2
     CA Auto Update
     CA Dynamix Unlimited Width
     CA Fix Common Problems
     CA User Scripts
     Community Applications
     ControlR
     Disable Security Mitigations
     Dynamix Active Streams
     Dynamix Local Master
     Dynamix SSD TRIM
     Dynamix System Autofan (not currently using)
     Dynamix System Info
     Dynamix System Temp (not currently using - need the new Linux kernel for my x570 build)
     Dynamix WireGuard
     Enhanced Log Viewer
     File Activity
     Mover Tuning
     NerdPack GUI
     Preclear Disk
     Recycle Bin
     Statistics
     Tips and Tweaks
     Unassigned Devices
     UnBALANCE
     Unraid Nvidia
     VFIO-PCI CFG
     VM Backup

     Here are a couple more I want to run that I've currently greylisted:
     Dynamix Cache Dirs
     Dynamix File Integrity
  12. From what I've read (a while ago), the onboard audio needs to be addressed in the kernel. Please see this 6.8.0-RC5 kernel mod by Skitals. Unfortunately, this has not been addressed upstream in the mainline Linux 5.6 kernel (which is actually a version newer than what the 6.9.0-beta series is built on), so we are SOL unless it's been addressed in another kernel module. My recommendation: try Skitals' 6.8.0-RC5 kernel mod and the 6.9.0-beta1 build to see if either fixes it for you. Not all x570 boards use the same audio chip/codec, so depending on what board you're running, your mileage may vary. I tried the 6.8.0-RC5 kernel mod with my ASUS ROG Strix x570-E Ryzen build (see my sig) but had some issues, so I gave up on it. If Skitals does a new version based on a 6.9.0-RC build, I might try it again.
  13. I suggest reading the linked comment on this thread, saying "Thank you!" for all the volunteered effort by the dev team, and waiting patiently. If you have some skills, maybe you can help, though. I'm sure CHBMB and the rest of the team would appreciate another set of hands!
  14. Cannot confirm your issue with the latest version. Works as expected on my box.
  15. @Squid Thanks for this great plugin. Can I request an update for those of us running Unraid 6.8.3 who use docker container-to-container networking? @bonienl updated the docker logic to trigger an update of docker images that depend on the network of another docker image that has been updated. I believe this logic is triggered in the PHP code, so the docker page needs to be loaded for this to happen. Is it possible to have the plugin "load the docker page" after the docker auto update is executed (a rough sketch of the idea is below)? Thanks! -JesterEE
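     A minimal sketch of "load the docker page", assuming a plain GET of the server's Docker page is enough to run the PHP logic; the URL and the lack of session/auth handling are both assumptions:

        # Sketch: fetch the Docker page after auto-update so the PHP-side
        # network-dependency logic runs. The URL is a placeholder; a real
        # plugin would reuse Unraid's existing web session/auth.
        import urllib.request

        DOCKER_PAGE = "http://tower.local/Docker"  # placeholder server URL

        with urllib.request.urlopen(DOCKER_PAGE, timeout=30) as resp:
            resp.read()  # rendering the page server-side is the point
            print("Docker page loaded:", resp.status)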
  16. I started using this container this week. I have not run a CPU-taxing container like this before and found an interesting thing that might be applicable to others. I have half my CPU cores available to Unraid and half isolated for VMs. Making the isolcpus kernel-parameter change does not modify how the CPU is reported to the docker process and the containers it runs. For example, my 8C/16T AMD 3800X still reports as such even though only 4C/8T are available for allocation by the host (and, by extension, docker). So, when running the BOINC container and letting it use 100% of my CPU, it spins up 16 WU processes because it thinks it can run that many on the processor concurrently without multitasking on the same core. The result: each core has 2 BOINC threads competing for resources. Probably not the biggest deal, but not ideal, as there is still likely overhead switching between them. So, in instances where you isolate CPUs away from the host, my workaround is to tell BOINC the percentage of the CPU still available to it (i.e. the fraction of threads still available for docker allocation, e.g. [16/2]/16 = 8/16 = 0.5, and 0.5*100 = 50%; the arithmetic is sketched below). This setting is in: Options->Computing Preferences->Computing->Usage limits->Use at most '50%' of the CPUs. I tried CPU pinning cores to the BOINC docker and keeping the BOINC config at 100%, but BOINC still interprets the number of cores from the CPU definition. Anyone have a better solution that is more portable and less hardcoded to the current system configuration? I don't always run CPU isolation and would like to keep as much as possible immune to my whim to change it. -JesterEE
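     The workaround's arithmetic as a minimal sketch; the function name and inputs are hypothetical:

        # Sketch: compute the BOINC "Use at most X% of the CPUs" value
        # from total host threads and the threads isolated for VMs.
        def boinc_cpu_percent(total_threads, isolated_threads):
            available = total_threads - isolated_threads
            return 100 * available / total_threads

        # 8C/16T CPU with half the threads isolated for VMs:
        print(boinc_cpu_percent(16, 8))  # -> 50.0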
  17. Squid is 50!

     Happy Birthday Squid! Thanks for all you've done and continue to do for the community! 🙏
  18. @linuxserver.io Thank you for making this application available to the Unraid community! Great work as always! I think we should all be running F@H on our always-on machines now more than ever as we fight COVID-19, if you can spare the clock cycles and can individually justify the extra cost of the higher power draw and component degradation. For those that may not want to run this container because it might "steal" compute resources from the server's tasks, be advised that the docker app runs the CPU folding processes at a nice level of 19 (i.e. a LOW scheduler priority; see the sketch below). So this is not an issue: everything else on the system will have a CPU priority higher than the folding tasks, even when operating in "Full Power" mode. The GPU doesn't work on a scheduler like the CPU, so yes, the GPU will work hard. BUT, I was able to fold and run 10+ Plex hardware-transcoded streams on my GTX 1060 6GB card at the same time. So really, it's a non-issue as well. There's more than enough juice to go around! Hope this alleviates some concern! Join the Unraid folding team and do your part! Service guarantees citizenship! Unraid Folding Team: 227802 -JesterEE
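     To illustrate the nice-19 point, a minimal sketch of launching a CPU-bound worker at the lowest scheduling priority; the workload command is a placeholder, not what the container actually runs:

        # Sketch: run a worker at nice level 19 so everything else on the
        # box is scheduled ahead of it. The workload is a placeholder.
        import subprocess

        worker = subprocess.Popen(["nice", "-n", "19",
                                   "stress", "--cpu", "4"])  # placeholder
        worker.wait()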
  19. No problem. I just want to be clear: this seems to be an underlying problem in the build ... the GPU Stats plugin just makes it more apparent. -JesterEE
  20. After installing the plugin, I receive the following error in the system terminal upon starting Unraid:

        error: failed to connect to the hypervisor
        error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

     This does not get logged in the syslog ... it is only output to an attached terminal. I have had this error since Unraid 6.8.0, but I just traced it here on a new 6.8.3 install. This has been experienced by other users too, but up till now no one knew what was causing it (a quick check for the socket is sketched below). -JesterEE
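     For anyone chasing this, a minimal sketch that just checks whether the libvirt socket from the error exists yet; the path is the one in the message above:

        # Sketch: check for the libvirt socket named in the error. If the
        # VM service isn't up, the socket won't exist and any client call
        # fails exactly like the error above.
        import os

        SOCK = "/var/run/libvirt/libvirt-sock"
        if os.path.exists(SOCK):
            print(f"{SOCK} present; libvirtd looks up")
        else:
            print(f"{SOCK} missing; libvirtd not running (or VMs disabled)")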
  21. I traced this issue to the installation of the VM Backup Plugin. I will be cross posting in that thread about this issue. -JesterEE
  22. No problem. FYI, I temporarily uninstalled the GPU Statistics plugin till this gets sorted out. This output is without the plugin installed:

        <?xml version="1.0" ?>
        <!DOCTYPE nvidia_smi_log SYSTEM "nvsmi_device_v10.dtd">
        <nvidia_smi_log>
          <timestamp>Thu Mar 19 21:10:15 2020</timestamp>
          <driver_version>440.59</driver_version>
          <cuda_version>10.2</cuda_version>
          <attached_gpus>1</attached_gpus>
          <gpu id="00000000:09:00.0">
            <product_name>GeForce GTX 1060 6GB</product_name>
            <product_brand>GeForce</product_brand>
            <display_mode>Enabled</display_mode>
            <display_active>Disabled</display_active>
            <persistence_mode>Disabled</persistence_mode>
            <accounting_mode>Disabled</accounting_mode>
            <accounting_mode_buffer_size>4000</accounting_mode_buffer_size>
            <driver_model>
              <current_dm>N/A</current_dm>
              <pending_dm>N/A</pending_dm>
            </driver_model>
            <serial>N/A</serial>
            <uuid>GPU-36d93047-20d4-1f53-260a-613eac368c41</uuid>
            <minor_number>0</minor_number>
            <vbios_version>86.06.0E.00.99</vbios_version>
            <multigpu_board>No</multigpu_board>
            <board_id>0x900</board_id>
            <gpu_part_number>N/A</gpu_part_number>
            <inforom_version>
              <img_version>G001.0000.01.03</img_version>
              <oem_object>1.1</oem_object>
              <ecc_object>N/A</ecc_object>
              <pwr_object>N/A</pwr_object>
            </inforom_version>
            <gpu_operation_mode>
              <current_gom>N/A</current_gom>
              <pending_gom>N/A</pending_gom>
            </gpu_operation_mode>
            <gpu_virtualization_mode>
              <virtualization_mode>None</virtualization_mode>
              <host_vgpu_mode>N/A</host_vgpu_mode>
            </gpu_virtualization_mode>
            <ibmnpu>
              <relaxed_ordering_mode>N/A</relaxed_ordering_mode>
            </ibmnpu>
            <pci>
              <pci_bus>09</pci_bus>
              <pci_device>00</pci_device>
              <pci_domain>0000</pci_domain>
              <pci_device_id>1C0310DE</pci_device_id>
              <pci_bus_id>00000000:09:00.0</pci_bus_id>
              <pci_sub_system_id>371A1458</pci_sub_system_id>
              <pci_gpu_link_info>
                <pcie_gen>
                  <max_link_gen>3</max_link_gen>
                  <current_link_gen>3</current_link_gen>
                </pcie_gen>
                <link_widths>
                  <max_link_width>16x</max_link_width>
                  <current_link_width>16x</current_link_width>
                </link_widths>
              </pci_gpu_link_info>
              <pci_bridge_chip>
                <bridge_chip_type>N/A</bridge_chip_type>
                <bridge_chip_fw>N/A</bridge_chip_fw>
              </pci_bridge_chip>
              <replay_counter>0</replay_counter>
              <replay_rollover_counter>0</replay_rollover_counter>
              <tx_util>0 KB/s</tx_util>
              <rx_util>0 KB/s</rx_util>
            </pci>
            <fan_speed>0 %</fan_speed>
            <performance_state>P0</performance_state>
            <clocks_throttle_reasons>
              <clocks_throttle_reason_gpu_idle>Not Active</clocks_throttle_reason_gpu_idle>
              <clocks_throttle_reason_applications_clocks_setting>Not Active</clocks_throttle_reason_applications_clocks_setting>
              <clocks_throttle_reason_sw_power_cap>Active</clocks_throttle_reason_sw_power_cap>
              <clocks_throttle_reason_hw_slowdown>Not Active</clocks_throttle_reason_hw_slowdown>
              <clocks_throttle_reason_hw_thermal_slowdown>Not Active</clocks_throttle_reason_hw_thermal_slowdown>
              <clocks_throttle_reason_hw_power_brake_slowdown>Not Active</clocks_throttle_reason_hw_power_brake_slowdown>
              <clocks_throttle_reason_sync_boost>Not Active</clocks_throttle_reason_sync_boost>
              <clocks_throttle_reason_sw_thermal_slowdown>Not Active</clocks_throttle_reason_sw_thermal_slowdown>
              <clocks_throttle_reason_display_clocks_setting>Not Active</clocks_throttle_reason_display_clocks_setting>
            </clocks_throttle_reasons>
            <fb_memory_usage>
              <total>6077 MiB</total>
              <used>0 MiB</used>
              <free>6077 MiB</free>
            </fb_memory_usage>
            <bar1_memory_usage>
              <total>256 MiB</total>
              <used>2 MiB</used>
              <free>254 MiB</free>
            </bar1_memory_usage>
            <compute_mode>Default</compute_mode>
            <utilization>
              <gpu_util>2 %</gpu_util>
              <memory_util>0 %</memory_util>
              <encoder_util>0 %</encoder_util>
              <decoder_util>0 %</decoder_util>
            </utilization>
            <encoder_stats>
              <session_count>0</session_count>
              <average_fps>0</average_fps>
              <average_latency>0</average_latency>
            </encoder_stats>
            <fbc_stats>
              <session_count>0</session_count>
              <average_fps>0</average_fps>
              <average_latency>0</average_latency>
            </fbc_stats>
            <ecc_mode>
              <current_ecc>N/A</current_ecc>
              <pending_ecc>N/A</pending_ecc>
            </ecc_mode>
            <ecc_errors>
              <volatile>
                <single_bit>
                  <device_memory>N/A</device_memory>
                  <register_file>N/A</register_file>
                  <l1_cache>N/A</l1_cache>
                  <l2_cache>N/A</l2_cache>
                  <texture_memory>N/A</texture_memory>
                  <texture_shm>N/A</texture_shm>
                  <cbu>N/A</cbu>
                  <total>N/A</total>
                </single_bit>
                <double_bit>
                  <device_memory>N/A</device_memory>
                  <register_file>N/A</register_file>
                  <l1_cache>N/A</l1_cache>
                  <l2_cache>N/A</l2_cache>
                  <texture_memory>N/A</texture_memory>
                  <texture_shm>N/A</texture_shm>
                  <cbu>N/A</cbu>
                  <total>N/A</total>
                </double_bit>
              </volatile>
              <aggregate>
                <single_bit>
                  <device_memory>N/A</device_memory>
                  <register_file>N/A</register_file>
                  <l1_cache>N/A</l1_cache>
                  <l2_cache>N/A</l2_cache>
                  <texture_memory>N/A</texture_memory>
                  <texture_shm>N/A</texture_shm>
                  <cbu>N/A</cbu>
                  <total>N/A</total>
                </single_bit>
                <double_bit>
                  <device_memory>N/A</device_memory>
                  <register_file>N/A</register_file>
                  <l1_cache>N/A</l1_cache>
                  <l2_cache>N/A</l2_cache>
                  <texture_memory>N/A</texture_memory>
                  <texture_shm>N/A</texture_shm>
                  <cbu>N/A</cbu>
                  <total>N/A</total>
                </double_bit>
              </aggregate>
            </ecc_errors>
            <retired_pages>
              <multiple_single_bit_retirement>
                <retired_count>N/A</retired_count>
                <retired_pagelist>N/A</retired_pagelist>
              </multiple_single_bit_retirement>
              <double_bit_retirement>
                <retired_count>N/A</retired_count>
                <retired_pagelist>N/A</retired_pagelist>
              </double_bit_retirement>
              <pending_blacklist>N/A</pending_blacklist>
              <pending_retirement>N/A</pending_retirement>
            </retired_pages>
            <temperature>
              <gpu_temp>51 C</gpu_temp>
              <gpu_temp_max_threshold>102 C</gpu_temp_max_threshold>
              <gpu_temp_slow_threshold>99 C</gpu_temp_slow_threshold>
              <gpu_temp_max_gpu_threshold>N/A</gpu_temp_max_gpu_threshold>
              <memory_temp>N/A</memory_temp>
              <gpu_temp_max_mem_threshold>N/A</gpu_temp_max_mem_threshold>
            </temperature>
            <power_readings>
              <power_state>P0</power_state>
              <power_management>Supported</power_management>
              <power_draw>28.83 W</power_draw>
              <power_limit>120.00 W</power_limit>
              <default_power_limit>120.00 W</default_power_limit>
              <enforced_power_limit>120.00 W</enforced_power_limit>
              <min_power_limit>60.00 W</min_power_limit>
              <max_power_limit>140.00 W</max_power_limit>
            </power_readings>
            <clocks>
              <graphics_clock>455 MHz</graphics_clock>
              <sm_clock>455 MHz</sm_clock>
              <mem_clock>4006 MHz</mem_clock>
              <video_clock>696 MHz</video_clock>
            </clocks>
            <applications_clocks>
              <graphics_clock>N/A</graphics_clock>
              <mem_clock>N/A</mem_clock>
            </applications_clocks>
            <default_applications_clocks>
              <graphics_clock>N/A</graphics_clock>
              <mem_clock>N/A</mem_clock>
            </default_applications_clocks>
            <max_clocks>
              <graphics_clock>1961 MHz</graphics_clock>
              <sm_clock>1961 MHz</sm_clock>
              <mem_clock>4004 MHz</mem_clock>
              <video_clock>1708 MHz</video_clock>
            </max_clocks>
            <max_customer_boost_clocks>
              <graphics_clock>N/A</graphics_clock>
            </max_customer_boost_clocks>
            <clock_policy>
              <auto_boost>N/A</auto_boost>
              <auto_boost_default>N/A</auto_boost_default>
            </clock_policy>
            <supported_clocks>N/A</supported_clocks>
            <processes>
            </processes>
            <accounted_processes>
            </accounted_processes>
          </gpu>
        </nvidia_smi_log>
  23. @b3rs3rk Issue using the GPU Statistics plugin with the Unraid 6.8.3 LinuxServer.io NVIDIA build. I don't think it's specifically a GPU Stats plugin error ... but it appears frequently when the plugin is in use. Just to keep you in the loop.
  24. I see this too ... it's infrequent unless I also install the GPU Statistics Plugin, then it's constant. GTX 1060 here.

     With the GPU Statistics Plugin:

        Mar 19 20:32:46 Tower kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
        Mar 19 20:32:49 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]

     A few minutes later, after uninstalling the GPU Statistics Plugin (no caller line in the log):

        Mar 19 20:37:08 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]