About JesterEE
  1. @Squid Thanks for this great plugin. Can I request an update for those of us running Unraid 6.8.3 with docker container-to-container networking? @bonienl updated the docker logic to trigger an update of containers that depend on the network of another container that has been updated. I believe this logic runs in the PHP code, so the Docker page needs to be loaded for it to fire. Is it possible to have the plugin "load the Docker page" after the docker auto update is executed, so this happens automatically? Thanks! -JesterEE
  2. I started using this container this week. I have not run a CPU-taxing container like this before and found something that might be applicable to others. I have half my CPU cores available to Unraid and half isolated for VMs. Setting the isolcpus kernel parameter does not change how the CPU is reported to the docker process and the containers it runs. For example, my 8C/16T AMD 3800X still reports as such even though only 4C/8T are available for allocation by the host (and by extension, docker). So, when running the BOINC container and letting it use 100% of my CPU, it spins up 16 WU processes because it thinks it can run that many concurrently without multitasking on the same core. The result: each core has 2 BOINC threads competing for resources. Probably not the biggest deal, but not ideal, as there is still likely overhead switching between them. So, in cases where you isolate CPUs away from the host, my workaround is to tell BOINC the percentage of the CPU still available to it (i.e. how many threads are still available for docker allocation; e.g. [16/2]/16 = 8/16 = 0.5, and 0.5*100 = 50%). This setting is in Options->Computing Preferences->Computing->Usage limits->Use at most '50%' of the CPUs. I tried pinning CPU cores to the BOINC docker and keeping the BOINC config at 100%, but BOINC still takes the number of cores from the CPU definition. Anyone have a better solution that is more portable and less hardcoded to the current system configuration? I don't always run CPU isolation and would like to keep as much as possible immune to my whim to change it. -JesterEE
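     The percentage above can be computed mechanically. A minimal sketch, assuming the 8C/16T example from that post; the two thread counts are placeholders you would swap for your own system:

     ```shell
     #!/bin/sh
     # Derive the BOINC "Use at most X% of the CPUs" value when some threads
     # are isolated away from the host. Both values below are assumptions
     # matching the 3800X example (16 threads total, 8 isolated for VMs).
     TOTAL_THREADS=16
     ISOLATED_THREADS=8
     AVAILABLE=$((TOTAL_THREADS - ISOLATED_THREADS))
     PERCENT=$((AVAILABLE * 100 / TOTAL_THREADS))
     echo "Set BOINC usage limit to ${PERCENT}%"
     # prints: Set BOINC usage limit to 50%
     ```

     It is still hardcoded to the current configuration, of course, which is exactly the limitation the post asks about.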
  3. Squid is 50!

     Happy Birthday Squid! Thanks for all you've done and continue to do for the community! 🙏
  4. @linuxserver.io Thank you for making this application available to the Unraid community! Great work as always! I think we should all be running F@H on our always-on machines now more than ever as we fight COVID-19, if you can spare the clock cycles and can individually justify the extra cost of the higher power draw and component degradation. For those who may not want to run this container because it might "steal" compute resources from the server tasks, be advised that the docker app runs the CPU folding processes with a nice level of 19 (i.e. the lowest scheduler priority). So this is not an issue: everything else on the system will have a higher CPU priority than the folding tasks, even when operating in "Full Power" mode. The GPU doesn't work on a scheduler like the CPU, so yes, the GPU will work hard. BUT, I was able to fold and run 10+ Plex hardware-transcoded streams on my GTX 1060 6GB card at the same time. So really, that's a non-issue as well. There's more than enough juice to go around! Hope this alleviates some concern! Join the Unraid folding team and do your part! Service guarantees citizenship! Unraid Folding Team: 227802 -JesterEE
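     To see what nice level 19 means in practice, this one-liner (plain shell, nothing container-specific) starts a child process at the lowest priority and has it print its own niceness:

     ```shell
     # A child launched with `nice -n 19` reports niceness 19, the lowest
     # CPU scheduling priority; any normal-priority (nice 0) process will
     # preempt it for CPU time, which is why folding doesn't starve the server.
     nice -n 19 nice
     # prints: 19
     ```

     You can confirm the same thing for any running process with `ps -o pid,ni,comm`, where the NI column shows the nice value.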
  5. No problem. I just want to be clear: this seems to be an underlying problem in the build ... the GPU Stats plugin just makes it more apparent. -JesterEE
  6. After installing the plugin, I receive the following error in the system terminal upon starting Unraid:

     error: failed to connect to the hypervisor
     error: Failed to connect socket to '/var/run/libvirt/libvirt-sock': No such file or directory

     This does not get logged in the syslog ... it is only output to an attached terminal. I have had this error since Unraid 6.8.0, but I just traced it here on a new 6.8.3 install. Other users have experienced this too, but up till now, no one knew what was causing it. -JesterEE
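     That message is virsh failing to reach the libvirt daemon's Unix socket. A quick sketch to check whether the socket actually exists (path taken from the error above):

     ```shell
     #!/bin/sh
     # Report whether the libvirt control socket is present. virsh connection
     # attempts fail with "No such file or directory" when it is missing,
     # e.g. if libvirtd (the VM service) has not started yet.
     if [ -S /var/run/libvirt/libvirt-sock ]; then
         echo "libvirt socket present"
     else
         echo "libvirt socket missing: virsh will fail to connect"
     fi
     ```

     If the socket appears later in boot, that points at something running virsh too early, rather than at libvirt itself being broken.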
  7. I traced this issue to the installation of the VM Backup plugin. I will be cross-posting about it in that thread. -JesterEE
  8. No problem. FYI, I temporarily uninstalled the GPU Statistics plugin until this gets sorted out. The output below is without the plugin installed.

     <?xml version="1.0" ?>
     <!DOCTYPE nvidia_smi_log SYSTEM "nvsmi_device_v10.dtd">
     <nvidia_smi_log>
       <timestamp>Thu Mar 19 21:10:15 2020</timestamp>
       <driver_version>440.59</driver_version>
       <cuda_version>10.2</cuda_version>
       <attached_gpus>1</attached_gpus>
       <gpu id="00000000:09:00.0">
         <product_name>GeForce GTX 1060 6GB</product_name>
         <product_brand>GeForce</product_brand>
         <display_mode>Enabled</display_mode>
         <display_active>Disabled</display_active>
         <persistence_mode>Disabled</persistence_mode>
         <accounting_mode>Disabled</accounting_mode>
         <accounting_mode_buffer_size>4000</accounting_mode_buffer_size>
         <driver_model>
           <current_dm>N/A</current_dm>
           <pending_dm>N/A</pending_dm>
         </driver_model>
         <serial>N/A</serial>
         <uuid>GPU-36d93047-20d4-1f53-260a-613eac368c41</uuid>
         <minor_number>0</minor_number>
         <vbios_version>86.06.0E.00.99</vbios_version>
         <multigpu_board>No</multigpu_board>
         <board_id>0x900</board_id>
         <gpu_part_number>N/A</gpu_part_number>
         <inforom_version>
           <img_version>G001.0000.01.03</img_version>
           <oem_object>1.1</oem_object>
           <ecc_object>N/A</ecc_object>
           <pwr_object>N/A</pwr_object>
         </inforom_version>
         <gpu_operation_mode>
           <current_gom>N/A</current_gom>
           <pending_gom>N/A</pending_gom>
         </gpu_operation_mode>
         <gpu_virtualization_mode>
           <virtualization_mode>None</virtualization_mode>
           <host_vgpu_mode>N/A</host_vgpu_mode>
         </gpu_virtualization_mode>
         <ibmnpu>
           <relaxed_ordering_mode>N/A</relaxed_ordering_mode>
         </ibmnpu>
         <pci>
           <pci_bus>09</pci_bus>
           <pci_device>00</pci_device>
           <pci_domain>0000</pci_domain>
           <pci_device_id>1C0310DE</pci_device_id>
           <pci_bus_id>00000000:09:00.0</pci_bus_id>
           <pci_sub_system_id>371A1458</pci_sub_system_id>
           <pci_gpu_link_info>
             <pcie_gen>
               <max_link_gen>3</max_link_gen>
               <current_link_gen>3</current_link_gen>
             </pcie_gen>
             <link_widths>
               <max_link_width>16x</max_link_width>
               <current_link_width>16x</current_link_width>
             </link_widths>
           </pci_gpu_link_info>
           <pci_bridge_chip>
             <bridge_chip_type>N/A</bridge_chip_type>
             <bridge_chip_fw>N/A</bridge_chip_fw>
           </pci_bridge_chip>
           <replay_counter>0</replay_counter>
           <replay_rollover_counter>0</replay_rollover_counter>
           <tx_util>0 KB/s</tx_util>
           <rx_util>0 KB/s</rx_util>
         </pci>
         <fan_speed>0 %</fan_speed>
         <performance_state>P0</performance_state>
         <clocks_throttle_reasons>
           <clocks_throttle_reason_gpu_idle>Not Active</clocks_throttle_reason_gpu_idle>
           <clocks_throttle_reason_applications_clocks_setting>Not Active</clocks_throttle_reason_applications_clocks_setting>
           <clocks_throttle_reason_sw_power_cap>Active</clocks_throttle_reason_sw_power_cap>
           <clocks_throttle_reason_hw_slowdown>Not Active</clocks_throttle_reason_hw_slowdown>
           <clocks_throttle_reason_hw_thermal_slowdown>Not Active</clocks_throttle_reason_hw_thermal_slowdown>
           <clocks_throttle_reason_hw_power_brake_slowdown>Not Active</clocks_throttle_reason_hw_power_brake_slowdown>
           <clocks_throttle_reason_sync_boost>Not Active</clocks_throttle_reason_sync_boost>
           <clocks_throttle_reason_sw_thermal_slowdown>Not Active</clocks_throttle_reason_sw_thermal_slowdown>
           <clocks_throttle_reason_display_clocks_setting>Not Active</clocks_throttle_reason_display_clocks_setting>
         </clocks_throttle_reasons>
         <fb_memory_usage>
           <total>6077 MiB</total>
           <used>0 MiB</used>
           <free>6077 MiB</free>
         </fb_memory_usage>
         <bar1_memory_usage>
           <total>256 MiB</total>
           <used>2 MiB</used>
           <free>254 MiB</free>
         </bar1_memory_usage>
         <compute_mode>Default</compute_mode>
         <utilization>
           <gpu_util>2 %</gpu_util>
           <memory_util>0 %</memory_util>
           <encoder_util>0 %</encoder_util>
           <decoder_util>0 %</decoder_util>
         </utilization>
         <encoder_stats>
           <session_count>0</session_count>
           <average_fps>0</average_fps>
           <average_latency>0</average_latency>
         </encoder_stats>
         <fbc_stats>
           <session_count>0</session_count>
           <average_fps>0</average_fps>
           <average_latency>0</average_latency>
         </fbc_stats>
         <ecc_mode>
           <current_ecc>N/A</current_ecc>
           <pending_ecc>N/A</pending_ecc>
         </ecc_mode>
         <ecc_errors>
           <volatile>
             <single_bit>
               <device_memory>N/A</device_memory>
               <register_file>N/A</register_file>
               <l1_cache>N/A</l1_cache>
               <l2_cache>N/A</l2_cache>
               <texture_memory>N/A</texture_memory>
               <texture_shm>N/A</texture_shm>
               <cbu>N/A</cbu>
               <total>N/A</total>
             </single_bit>
             <double_bit>
               <device_memory>N/A</device_memory>
               <register_file>N/A</register_file>
               <l1_cache>N/A</l1_cache>
               <l2_cache>N/A</l2_cache>
               <texture_memory>N/A</texture_memory>
               <texture_shm>N/A</texture_shm>
               <cbu>N/A</cbu>
               <total>N/A</total>
             </double_bit>
           </volatile>
           <aggregate>
             <single_bit>
               <device_memory>N/A</device_memory>
               <register_file>N/A</register_file>
               <l1_cache>N/A</l1_cache>
               <l2_cache>N/A</l2_cache>
               <texture_memory>N/A</texture_memory>
               <texture_shm>N/A</texture_shm>
               <cbu>N/A</cbu>
               <total>N/A</total>
             </single_bit>
             <double_bit>
               <device_memory>N/A</device_memory>
               <register_file>N/A</register_file>
               <l1_cache>N/A</l1_cache>
               <l2_cache>N/A</l2_cache>
               <texture_memory>N/A</texture_memory>
               <texture_shm>N/A</texture_shm>
               <cbu>N/A</cbu>
               <total>N/A</total>
             </double_bit>
           </aggregate>
         </ecc_errors>
         <retired_pages>
           <multiple_single_bit_retirement>
             <retired_count>N/A</retired_count>
             <retired_pagelist>N/A</retired_pagelist>
           </multiple_single_bit_retirement>
           <double_bit_retirement>
             <retired_count>N/A</retired_count>
             <retired_pagelist>N/A</retired_pagelist>
           </double_bit_retirement>
           <pending_blacklist>N/A</pending_blacklist>
           <pending_retirement>N/A</pending_retirement>
         </retired_pages>
         <temperature>
           <gpu_temp>51 C</gpu_temp>
           <gpu_temp_max_threshold>102 C</gpu_temp_max_threshold>
           <gpu_temp_slow_threshold>99 C</gpu_temp_slow_threshold>
           <gpu_temp_max_gpu_threshold>N/A</gpu_temp_max_gpu_threshold>
           <memory_temp>N/A</memory_temp>
           <gpu_temp_max_mem_threshold>N/A</gpu_temp_max_mem_threshold>
         </temperature>
         <power_readings>
           <power_state>P0</power_state>
           <power_management>Supported</power_management>
           <power_draw>28.83 W</power_draw>
           <power_limit>120.00 W</power_limit>
           <default_power_limit>120.00 W</default_power_limit>
           <enforced_power_limit>120.00 W</enforced_power_limit>
           <min_power_limit>60.00 W</min_power_limit>
           <max_power_limit>140.00 W</max_power_limit>
         </power_readings>
         <clocks>
           <graphics_clock>455 MHz</graphics_clock>
           <sm_clock>455 MHz</sm_clock>
           <mem_clock>4006 MHz</mem_clock>
           <video_clock>696 MHz</video_clock>
         </clocks>
         <applications_clocks>
           <graphics_clock>N/A</graphics_clock>
           <mem_clock>N/A</mem_clock>
         </applications_clocks>
         <default_applications_clocks>
           <graphics_clock>N/A</graphics_clock>
           <mem_clock>N/A</mem_clock>
         </default_applications_clocks>
         <max_clocks>
           <graphics_clock>1961 MHz</graphics_clock>
           <sm_clock>1961 MHz</sm_clock>
           <mem_clock>4004 MHz</mem_clock>
           <video_clock>1708 MHz</video_clock>
         </max_clocks>
         <max_customer_boost_clocks>
           <graphics_clock>N/A</graphics_clock>
         </max_customer_boost_clocks>
         <clock_policy>
           <auto_boost>N/A</auto_boost>
           <auto_boost_default>N/A</auto_boost_default>
         </clock_policy>
         <supported_clocks>N/A</supported_clocks>
         <processes>
         </processes>
         <accounted_processes>
         </accounted_processes>
       </gpu>
     </nvidia_smi_log>
  9. @b3rs3rk Issue using the GPU Statistics plugin with the Unraid 6.8.3 LinuxServer.io NVIDIA build. I don't think it's specifically a GPU Stats plugin error ... but it appears frequently when the plugin is in use. Just to keep you in the loop.
  10. I see this too ... it's infrequent unless I also install the GPU Statistics plugin, then it's constant. GTX 1060 here. With the GPU Statistics plugin:

      Mar 19 20:32:46 Tower kernel: caller _nv000908rm+0x1bf/0x1f0 [nvidia] mapping multiple BARs
      Mar 19 20:32:49 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]

      A few minutes after uninstalling the GPU Statistics plugin (no caller line in the log):

      Mar 19 20:37:08 Tower kernel: resource sanity check: requesting [mem 0x000c0000-0x000fffff], which spans more than PCI Bus 0000:00 [mem 0x000c0000-0x000dffff window]
  11. https://developer.nvidia.com/video-encode-decode-gpu-support-matrix TL;DR: I wouldn't bother unless you have lots of h264 content and need more streams than your CPU can handle.
  12. So I did what I said in the previous post. Nuked my Unraid USB (after backing it up, of course!), loaded 6.8.2, created a dummy array with another single USB drive, set an SSD as an Unassigned Device for a VM (actually, just used the same SSD/VM I have been using), isolated half the cores for the VM, stubbed my GPU, and played with a few benchmarks and games to test it out. Butter ... so, so smooth. Bare-metal performance. So now I know it's possible ... now to figure out what's causing it to stop being that way. The struggle continues ... -JesterEE
  13. Config update: Tried @zeus83's recommendation and was still having similar issues. Loaded @Skitals' 6.8.0-RC5 kernel but had issues passing through my Nvidia card, so I scrapped that effort. Upgraded to 6.8.2, same expected issues, but now my Nvidia card passes through fine with QEMU 4.2. My last straw is thermonuclear ... maybe I set an Unraid configuration option or loaded a plugin somewhere along the line that is interfering with VM performance. I'm going to temporarily disable my array, load a fresh install on my USB drive, and just run the virtual machine. No docker, no plugins, no tweaks, no nothing ... vanilla. I'll try 6.8.2 and 6.8.0-RC7. If one works really well, I'll know it's something I did, and I'll re-setup my array and reconfigure Unraid as I like it until I figure out what is causing the issue, but I have low expectations. -JesterEE