
  1. Another affected user here. My SSD has 499.90 TB of writes after just over a year of use! Echoing what others have said, I have 2 x SSDs in an unencrypted btrfs RAID-0 cache pool. I see anywhere from 25 MB/s to 80+ MB/s of writes on the cache pool. I have another Unraid server set up the same way (6.8.3, 2 x SSD btrfs) that does not seem to be affected. The main difference between the two systems is the dockers that are running. In my case, the official Plex, MariaDB, and Zoneminder dockers seem to be the main offenders; if I disable those, I can drop the writes to ~200 KB/s. Plex I switched to linuxserver, but unfortunately I can't simply disable my CCTV software...
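For context, a quick back-of-the-envelope check of what average write rate the figure above implies (assuming the 499.90 TB total and roughly one year of uptime from the post; decimal units, as SSD vendors use for TBW):

```python
# Rough sanity check: what sustained write rate does ~500 TB/year imply?
# Figures taken from the post above; 1 TB = 10**12 bytes.
total_written_tb = 499.90
seconds_per_year = 365 * 24 * 3600          # 31,536,000 s

avg_bytes_per_sec = total_written_tb * 10**12 / seconds_per_year
avg_mb_per_sec = avg_bytes_per_sec / 10**6

print(f"average write rate: {avg_mb_per_sec:.1f} MB/s")  # ~15.9 MB/s
```

That sustained ~16 MB/s average sits well inside the 25-80 MB/s bursts reported, so the annual total is consistent with the observed rates.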
  2. This worked for me as well!
  3. Anyone else experiencing empty stats? Unraid 6.8.3. It was working fine until I updated the plugin to 2020-03-14. Here's the output from nvidia-smi -x -q:

<?xml version="1.0" ?>
<!DOCTYPE nvidia_smi_log SYSTEM "nvsmi_device_v10.dtd">
<nvidia_smi_log>
  <timestamp>Sat Mar 14 14:55:23 2020</timestamp>
  <driver_version>440.59</driver_version>
  <cuda_version>10.2</cuda_version>
  <attached_gpus>1</attached_gpus>
  <gpu id="00000000:01:00.0">
    <product_name>GeForce GTX 1660 Ti</product_name>
    <product_brand>GeForce</product_brand>
    <display_mode>Disabled</display_mode>
    <display_active>Disabled</display_active>
    <persistence_mode>Enabled</persistence_mode>
    <accounting_mode>Disabled</accounting_mode>
    <accounting_mode_buffer_size>4000</accounting_mode_buffer_size>
    <driver_model> <current_dm>N/A</current_dm> <pending_dm>N/A</pending_dm> </driver_model>
    <serial>N/A</serial>
    <uuid>GPU-4686c842-7851-9b12-c973-0ea9df59f775</uuid>
    <minor_number>0</minor_number>
    <vbios_version></vbios_version>
    <multigpu_board>No</multigpu_board>
    <board_id>0x100</board_id>
    <gpu_part_number>N/A</gpu_part_number>
    <inforom_version> <img_version>G001.0000.02.04</img_version> <oem_object>1.1</oem_object> <ecc_object>N/A</ecc_object> <pwr_object>N/A</pwr_object> </inforom_version>
    <gpu_operation_mode> <current_gom>N/A</current_gom> <pending_gom>N/A</pending_gom> </gpu_operation_mode>
    <gpu_virtualization_mode> <virtualization_mode>None</virtualization_mode> <host_vgpu_mode>N/A</host_vgpu_mode> </gpu_virtualization_mode>
    <ibmnpu> <relaxed_ordering_mode>N/A</relaxed_ordering_mode> </ibmnpu>
    <pci>
      <pci_bus>01</pci_bus> <pci_device>00</pci_device> <pci_domain>0000</pci_domain> <pci_device_id>218210DE</pci_device_id> <pci_bus_id>00000000:01:00.0</pci_bus_id> <pci_sub_system_id>3FBE1458</pci_sub_system_id>
      <pci_gpu_link_info> <pcie_gen> <max_link_gen>3</max_link_gen> <current_link_gen>1</current_link_gen> </pcie_gen> <link_widths> <max_link_width>16x</max_link_width> <current_link_width>8x</current_link_width> </link_widths> </pci_gpu_link_info>
      <pci_bridge_chip> <bridge_chip_type>N/A</bridge_chip_type> <bridge_chip_fw>N/A</bridge_chip_fw> </pci_bridge_chip>
      <replay_counter>0</replay_counter> <replay_rollover_counter>0</replay_rollover_counter> <tx_util>0 KB/s</tx_util> <rx_util>0 KB/s</rx_util>
    </pci>
    <fan_speed>0 %</fan_speed>
    <performance_state>P8</performance_state>
    <clocks_throttle_reasons>
      <clocks_throttle_reason_gpu_idle>Active</clocks_throttle_reason_gpu_idle>
      <clocks_throttle_reason_applications_clocks_setting>Not Active</clocks_throttle_reason_applications_clocks_setting>
      <clocks_throttle_reason_sw_power_cap>Not Active</clocks_throttle_reason_sw_power_cap>
      <clocks_throttle_reason_hw_slowdown>Not Active</clocks_throttle_reason_hw_slowdown>
      <clocks_throttle_reason_hw_thermal_slowdown>Not Active</clocks_throttle_reason_hw_thermal_slowdown>
      <clocks_throttle_reason_hw_power_brake_slowdown>Not Active</clocks_throttle_reason_hw_power_brake_slowdown>
      <clocks_throttle_reason_sync_boost>Not Active</clocks_throttle_reason_sync_boost>
      <clocks_throttle_reason_sw_thermal_slowdown>Not Active</clocks_throttle_reason_sw_thermal_slowdown>
      <clocks_throttle_reason_display_clocks_setting>Not Active</clocks_throttle_reason_display_clocks_setting>
    </clocks_throttle_reasons>
    <fb_memory_usage> <total>5944 MiB</total> <used>0 MiB</used> <free>5944 MiB</free> </fb_memory_usage>
    <bar1_memory_usage> <total>256 MiB</total> <used>2 MiB</used> <free>254 MiB</free> </bar1_memory_usage>
    <compute_mode>Default</compute_mode>
    <utilization> <gpu_util>0 %</gpu_util> <memory_util>0 %</memory_util> <encoder_util>0 %</encoder_util> <decoder_util>0 %</decoder_util> </utilization>
    <encoder_stats> <session_count>0</session_count> <average_fps>0</average_fps> <average_latency>0</average_latency> </encoder_stats>
    <fbc_stats> <session_count>0</session_count> <average_fps>0</average_fps> <average_latency>0</average_latency> </fbc_stats>
    <ecc_mode> <current_ecc>N/A</current_ecc> <pending_ecc>N/A</pending_ecc> </ecc_mode>
    <ecc_errors>
      <volatile> <sram_correctable>N/A</sram_correctable> <sram_uncorrectable>N/A</sram_uncorrectable> <dram_correctable>N/A</dram_correctable> <dram_uncorrectable>N/A</dram_uncorrectable> </volatile>
      <aggregate> <sram_correctable>N/A</sram_correctable> <sram_uncorrectable>N/A</sram_uncorrectable> <dram_correctable>N/A</dram_correctable> <dram_uncorrectable>N/A</dram_uncorrectable> </aggregate>
    </ecc_errors>
    <retired_pages>
      <multiple_single_bit_retirement> <retired_count>N/A</retired_count> <retired_pagelist>N/A</retired_pagelist> </multiple_single_bit_retirement>
      <double_bit_retirement> <retired_count>N/A</retired_count> <retired_pagelist>N/A</retired_pagelist> </double_bit_retirement>
      <pending_blacklist>N/A</pending_blacklist>
      <pending_retirement>N/A</pending_retirement>
    </retired_pages>
    <temperature>
      <gpu_temp>39 C</gpu_temp>
      <gpu_temp_max_threshold>95 C</gpu_temp_max_threshold>
      <gpu_temp_slow_threshold>92 C</gpu_temp_slow_threshold>
      <gpu_temp_max_gpu_threshold>90 C</gpu_temp_max_gpu_threshold>
      <memory_temp>N/A</memory_temp>
      <gpu_temp_max_mem_threshold>N/A</gpu_temp_max_mem_threshold>
    </temperature>
    <power_readings>
      <power_state>P8</power_state>
      <power_management>Supported</power_management>
      <power_draw>7.40 W</power_draw>
      <power_limit>120.00 W</power_limit>
      <default_power_limit>120.00 W</default_power_limit>
      <enforced_power_limit>120.00 W</enforced_power_limit>
      <min_power_limit>70.00 W</min_power_limit>
      <max_power_limit>150.00 W</max_power_limit>
    </power_readings>
    <clocks> <graphics_clock>300 MHz</graphics_clock> <sm_clock>300 MHz</sm_clock> <mem_clock>405 MHz</mem_clock> <video_clock>540 MHz</video_clock> </clocks>
    <applications_clocks> <graphics_clock>N/A</graphics_clock> <mem_clock>N/A</mem_clock> </applications_clocks>
    <default_applications_clocks> <graphics_clock>N/A</graphics_clock> <mem_clock>N/A</mem_clock> </default_applications_clocks>
    <max_clocks> <graphics_clock>2130 MHz</graphics_clock> <sm_clock>2130 MHz</sm_clock> <mem_clock>6001 MHz</mem_clock> <video_clock>1950 MHz</video_clock> </max_clocks>
    <max_customer_boost_clocks> <graphics_clock>N/A</graphics_clock> </max_customer_boost_clocks>
    <clock_policy> <auto_boost>N/A</auto_boost> <auto_boost_default>N/A</auto_boost_default> </clock_policy>
    <supported_clocks>N/A</supported_clocks>
    <processes> </processes>
    <accounted_processes> </accounted_processes>
  </gpu>
</nvidia_smi_log>

...but the output from gpustatus.php appears to be empty: []
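Since the plugin's stats page is built from this same nvidia-smi -x -q XML, here's a minimal sketch of how the relevant fields can be pulled out with Python's standard library. This is not the plugin's actual gpustatus.php code; the element names are taken from the dump above, and the sample is trimmed to a few fields.

```python
import xml.etree.ElementTree as ET

# Trimmed sample of the `nvidia-smi -x -q` dump from the post above.
sample = """<nvidia_smi_log>
  <driver_version>440.59</driver_version>
  <attached_gpus>1</attached_gpus>
  <gpu id="00000000:01:00.0">
    <product_name>GeForce GTX 1660 Ti</product_name>
    <utilization>
      <gpu_util>0 %</gpu_util>
      <memory_util>0 %</memory_util>
    </utilization>
    <temperature>
      <gpu_temp>39 C</gpu_temp>
    </temperature>
  </gpu>
</nvidia_smi_log>"""

root = ET.fromstring(sample)
gpu = root.find("gpu")

# Collect the handful of fields a stats page would display.
stats = {
    "driver": root.findtext("driver_version"),
    "name": gpu.findtext("product_name"),
    "util": gpu.findtext("utilization/gpu_util"),
    "temp": gpu.findtext("temperature/gpu_temp"),
}
print(stats)
```

If the XML parses cleanly like this but the plugin still returns [], the breakage is likely in the plugin's own parsing after the 2020-03-14 update rather than in the driver output.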
  4. I run this ZM docker using an external MariaDB database and encountered a few issues when upgrading to 1.34.

The first issue was:

Upgrading DB to 1.33.7 from 1.32.3
ERROR 1419 (HY000) at line 3: You do not have the SUPER privilege and binary logging is enabled (you *might* want to use the less safe log_bin_trust_function_creators variable)

This was fixed by changing a setting on the MariaDB instance:

mysql -u root -p
set global log_bin_trust_function_creators=1;

The second issue was:

Upgrading DB to 1.33.7 from 1.32.3
ERROR 1728 (HY000) at line 117: Cannot load from mysql.proc. The table is probably corrupted

This was fixed by running mysql_upgrade on the MariaDB instance:

mysql_upgrade -u root -p

After that, the upgrade went smoothly. Hoping this helps anyone who might be in the same boat.
  5. Would this plugin in any way affect the Unraid Nvidia plugin? I run a backup every Tuesday at 3 AM, and for whatever reason, every Tuesday morning the two dockers that utilize my Nvidia GPU via the Nvidia plugin have failed to restart. Attempting to start the dockers in the Unraid GUI yields an "execution error", and I typically need to reboot for everything to work again. I'd like to think there's another cause, but it happens every backup, without fail. I can't figure out why and it's driving me crazy!
  6. Ok, I'll assume it's hardware related and not Unraid or this plugin until I rule that out. Thanks for the suggestions.
  7. Occasionally my GPU seemingly 'disappears' from Unraid/Docker. Running nvidia-smi produces: but if I check Settings/Unraid Nvidia, the device is still present: The server log is filled with nvidia driver errors: A server reboot usually fixes the issue, but it's rather annoying. How would I go about debugging this? Running 6.8.0. EDIT: Just after I posted this, nvidia-smi showed the card correctly and the docker containers that use the GPU were working again; I did not reboot. It basically just showed back up.
  8. I'm guessing this is just a version mismatch. The zmeventnotification hook folder was updated 3 days ago with new dependencies added, namely a new OpenCV version: opencv_contrib_python>= while dlandon's docker installs a different OpenCV package: opencv-python. I'm guessing the detect.py script you got from the zmeventnotification repo uses something from that new opencv package that isn't available in the docker. That said, if you re-ran the requirements.txt install inside the ZM docker, it should have fixed that.
  9. Looking good so far. I added the INSTALL_FACE variable to the docker and everything seemed to pull down properly. I still haven't gotten face recognition to work properly but I don't think that's a problem with the docker. I appreciate you adding these even though you don't use them yourself, it saves me a lot of scripting. Thanks again!
  10. Figured it out: setup.py installs the zmes_hook_helpers directory, but currently the Dockerfile only copies setup.py itself:

COPY zmeventnotification/setup.py /root/

When the setup.py script runs later, it errors out because the zmes_hook_helpers directory is missing. Here's the relevant log:

File "/root/setup.py", line 25, in find_version
IOError: [Errno 2] No such file or directory: '/root/zmes_hook_helpers/__init__.py'

Looks like you'll need to add the zmes_hook_helpers dir and copy it so that setup.py can complete successfully.
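For illustration, the kind of Dockerfile change being suggested might look like this. This is a hedged sketch, not the maintainer's actual file; the source path mirrors the COPY line quoted above and is an assumption:

```dockerfile
# Existing line (quoted in the post above):
COPY zmeventnotification/setup.py /root/
# Hypothetical addition: also copy the helper package so setup.py can find
# /root/zmes_hook_helpers/__init__.py at install time.
COPY zmeventnotification/zmes_hook_helpers /root/zmes_hook_helpers
```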
  11. I may have spoken too soon. Looks like this is handled by the setup.py script (no copying needed). I'm starting fresh to be sure and will report back.
  12. The zmes_hook_helpers directory is provided by zmeventnotification, so instead of creating it, maybe just check whether it exists and, if so, copy the entire dir so we're left with /usr/bin/zmes_hook_helpers/
  13. Looks good so far. I have run into an issue where the new detect.py script requires some helper files located in hook/zmes_hook_helpers that are omitted from your copy command to /usr/bin. If I manually copy that directory, it seems to work.
  14. Currently I install the hook dependencies using the ADVANCED_SCRIPT/userscript.sh option like so:

apt-get -y install python-pip
pip install opencv-python
apt-get -y install libsm6 libxext6 libxrender1
sudo pip install -r /var/detect/requirements.txt
sudo pip install cmake
sudo -H pip install face_recognition

Should I switch to the INSTALL_HOOK option instead?
  15. Wow, I didn't realize it wasn't on the cache drive. I can already notice a huge difference. Thanks for the help!