therapist

Everything posted by therapist

  1. 2020/03/08 10:13:45 [error] 17316#17316: *5876683 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.4, server: , request: "GET /usr/local/emhttp/plugins/gpustat/gpustatus.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "8357a7086dfbe113cb69d62fbf03b2cb7f5d8e39.unraid.net", referrer: "https://8357a7086dfbe113cb69d62fbf03b2cb7f5d8e39.unraid.net/Dashboard"
     2020/03/08 10:14:58 [error] 17316#17316: *5876683 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 192.168.1.4, server: , request: "GET /usr/local/emhttp/plugins/gpustat/gpustatus.php HTTP/2.0", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "8357a7086dfbe113cb69d62fbf03b2cb7f5d8e39.unraid.net", referrer: "https://8357a7086dfbe113cb69d62fbf03b2cb7f5d8e39.unraid.net/gpustatus"
     page source.txt
  2. My NUT & IPMI summary dash panels show up, but no GPU. Immediately under those are the default widgets.
  3. That's the funny thing: it doesn't show up on my dash. It's in the page map, but when I click there it's literally a blank page.
  4. root@OCHO:~# nvidia-smi -q -x
<?xml version="1.0" ?> <!DOCTYPE nvidia_smi_log SYSTEM "nvsmi_device_v10.dtd"> <nvidia_smi_log> <timestamp>Sun Mar 8 10:00:09 2020</timestamp> <driver_version>418.56</driver_version> <cuda_version>10.1</cuda_version> <attached_gpus>1</attached_gpus> <gpu id="00000000:03:00.0"> <product_name>GeForce GTX 1050 Ti</product_name> <product_brand>GeForce</product_brand> <display_mode>Disabled</display_mode> <display_active>Disabled</display_active> <persistence_mode>Disabled</persistence_mode> <accounting_mode>Disabled</accounting_mode> <accounting_mode_buffer_size>4000</accounting_mode_buffer_size> <driver_model> <current_dm>N/A</current_dm> <pending_dm>N/A</pending_dm> </driver_model> <serial>N/A</serial> <uuid>GPU-44ed723b-7f9c-07da-236b-347ab81deddf</uuid> <minor_number>0</minor_number> <vbios_version>86.07.39.00.30</vbios_version> <multigpu_board>No</multigpu_board> <board_id>0x300</board_id> <gpu_part_number>N/A</gpu_part_number> <inforom_version> <img_version>G001.0000.01.04</img_version> <oem_object>1.1</oem_object> <ecc_object>N/A</ecc_object> <pwr_object>N/A</pwr_object> </inforom_version> <gpu_operation_mode> <current_gom>N/A</current_gom> <pending_gom>N/A</pending_gom> </gpu_operation_mode> <gpu_virtualization_mode> <virtualization_mode>None</virtualization_mode> </gpu_virtualization_mode> <ibmnpu> <relaxed_ordering_mode>N/A</relaxed_ordering_mode> </ibmnpu> <pci> <pci_bus>03</pci_bus> <pci_device>00</pci_device> <pci_domain>0000</pci_domain> <pci_device_id>1C8210DE</pci_device_id> <pci_bus_id>00000000:03:00.0</pci_bus_id> <pci_sub_system_id>A45419DA</pci_sub_system_id> <pci_gpu_link_info> <pcie_gen> <max_link_gen>3</max_link_gen> <current_link_gen>3</current_link_gen> </pcie_gen> <link_widths> <max_link_width>16x</max_link_width> <current_link_width>16x</current_link_width> </link_widths> </pci_gpu_link_info> <pci_bridge_chip> <bridge_chip_type>N/A</bridge_chip_type> <bridge_chip_fw>N/A</bridge_chip_fw> </pci_bridge_chip> 
<replay_counter>0</replay_counter> <replay_rollover_counter>0</replay_rollover_counter> <tx_util>0 KB/s</tx_util> <rx_util>12000 KB/s</rx_util> </pci> <fan_speed>51 %</fan_speed> <performance_state>P0</performance_state> <clocks_throttle_reasons> <clocks_throttle_reason_gpu_idle>Not Active</clocks_throttle_reason_gpu_idle> <clocks_throttle_reason_applications_clocks_setting>Not Active</clocks_throttle_reason_applications_clocks_setting> <clocks_throttle_reason_sw_power_cap>Not Active</clocks_throttle_reason_sw_power_cap> <clocks_throttle_reason_hw_slowdown>Not Active</clocks_throttle_reason_hw_slowdown> <clocks_throttle_reason_hw_thermal_slowdown>Not Active</clocks_throttle_reason_hw_thermal_slowdown> <clocks_throttle_reason_hw_power_brake_slowdown>Not Active</clocks_throttle_reason_hw_power_brake_slowdown> <clocks_throttle_reason_sync_boost>Not Active</clocks_throttle_reason_sync_boost> <clocks_throttle_reason_sw_thermal_slowdown>Not Active</clocks_throttle_reason_sw_thermal_slowdown> <clocks_throttle_reason_display_clocks_setting>Not Active</clocks_throttle_reason_display_clocks_setting> </clocks_throttle_reasons> <fb_memory_usage> <total>4040 MiB</total> <used>286 MiB</used> <free>3754 MiB</free> </fb_memory_usage> <bar1_memory_usage> <total>256 MiB</total> <used>2 MiB</used> <free>254 MiB</free> </bar1_memory_usage> <compute_mode>Default</compute_mode> <utilization> <gpu_util>0 %</gpu_util> <memory_util>0 %</memory_util> <encoder_util>0 %</encoder_util> <decoder_util>2 %</decoder_util> </utilization> <encoder_stats> <session_count>0</session_count> <average_fps>0</average_fps> <average_latency>0</average_latency> </encoder_stats> <fbc_stats> <session_count>0</session_count> <average_fps>0</average_fps> <average_latency>0</average_latency> </fbc_stats> <ecc_mode> <current_ecc>N/A</current_ecc> <pending_ecc>N/A</pending_ecc> </ecc_mode> <ecc_errors> <volatile> <single_bit> <device_memory>N/A</device_memory> <register_file>N/A</register_file> 
<l1_cache>N/A</l1_cache> <l2_cache>N/A</l2_cache> <texture_memory>N/A</texture_memory> <texture_shm>N/A</texture_shm> <cbu>N/A</cbu> <total>N/A</total> </single_bit> <double_bit> <device_memory>N/A</device_memory> <register_file>N/A</register_file> <l1_cache>N/A</l1_cache> <l2_cache>N/A</l2_cache> <texture_memory>N/A</texture_memory> <texture_shm>N/A</texture_shm> <cbu>N/A</cbu> <total>N/A</total> </double_bit> </volatile> <aggregate> <single_bit> <device_memory>N/A</device_memory> <register_file>N/A</register_file> <l1_cache>N/A</l1_cache> <l2_cache>N/A</l2_cache> <texture_memory>N/A</texture_memory> <texture_shm>N/A</texture_shm> <cbu>N/A</cbu> <total>N/A</total> </single_bit> <double_bit> <device_memory>N/A</device_memory> <register_file>N/A</register_file> <l1_cache>N/A</l1_cache> <l2_cache>N/A</l2_cache> <texture_memory>N/A</texture_memory> <texture_shm>N/A</texture_shm> <cbu>N/A</cbu> <total>N/A</total> </double_bit> </aggregate> </ecc_errors> <retired_pages> <multiple_single_bit_retirement> <retired_count>N/A</retired_count> <retired_pagelist>N/A</retired_pagelist> </multiple_single_bit_retirement> <double_bit_retirement> <retired_count>N/A</retired_count> <retired_pagelist>N/A</retired_pagelist> </double_bit_retirement> <pending_retirement>N/A</pending_retirement> </retired_pages> <temperature> <gpu_temp>45 C</gpu_temp> <gpu_temp_max_threshold>102 C</gpu_temp_max_threshold> <gpu_temp_slow_threshold>99 C</gpu_temp_slow_threshold> <gpu_temp_max_gpu_threshold>N/A</gpu_temp_max_gpu_threshold> <memory_temp>N/A</memory_temp> <gpu_temp_max_mem_threshold>N/A</gpu_temp_max_mem_threshold> </temperature> <power_readings> <power_state>P0</power_state> <power_management>Supported</power_management> <power_draw>N/A</power_draw> <power_limit>75.00 W</power_limit> <default_power_limit>75.00 W</default_power_limit> <enforced_power_limit>75.00 W</enforced_power_limit> <min_power_limit>52.50 W</min_power_limit> <max_power_limit>75.00 W</max_power_limit> </power_readings> 
<clocks> <graphics_clock>1746 MHz</graphics_clock> <sm_clock>1746 MHz</sm_clock> <mem_clock>3504 MHz</mem_clock> <video_clock>1569 MHz</video_clock> </clocks> <applications_clocks> <graphics_clock>N/A</graphics_clock> <mem_clock>N/A</mem_clock> </applications_clocks> <default_applications_clocks> <graphics_clock>N/A</graphics_clock> <mem_clock>N/A</mem_clock> </default_applications_clocks> <max_clocks> <graphics_clock>1923 MHz</graphics_clock> <sm_clock>1923 MHz</sm_clock> <mem_clock>3504 MHz</mem_clock> <video_clock>1708 MHz</video_clock> </max_clocks> <max_customer_boost_clocks> <graphics_clock>N/A</graphics_clock> </max_customer_boost_clocks> <clock_policy> <auto_boost>N/A</auto_boost> <auto_boost_default>N/A</auto_boost_default> </clock_policy> <supported_clocks>N/A</supported_clocks> <processes> <process_info> <pid>10067</pid> <type>C</type> <process_name>/usr/lib/plexmediaserver/Plex Transcoder</process_name> <used_memory>276 MiB</used_memory> </process_info> </processes> <accounted_processes> </accounted_processes> </gpu> </nvidia_smi_log>
root@OCHO:~#
root@OCHO:~# cd /usr/local/emhttp/plugins/gpustat
root@OCHO:/usr/local/emhttp/plugins/gpustat# php ./gpustat.php
Could not open input file: ./gpustat.php
root@OCHO:/usr/local/emhttp/plugins/gpustat# ls
GPUStatSettings.page gpustatus.page icons/ license/ README.md gpustatus.php images/
root@OCHO:/usr/local/emhttp/plugins/gpustat# php ./gpustatus.php
{"name":"GeForce GTX 1050 Ti","util":"0%","memutil":"0%","temp":"43C","fan":"51%","power":"N\/A","encoders":1,"vendor":"nVidia"}
I think you may have a typo somewhere which is causing my issue?
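As a sanity check on the plugin's data source, the `nvidia-smi -q -x` XML above can be parsed with Python's standard library. This is a minimal sketch (not the plugin's actual code) using a trimmed copy of the output shown, with field names taken verbatim from the dump:

```python
import xml.etree.ElementTree as ET

# Trimmed sample of the `nvidia-smi -q -x` output shown above
xml = """<nvidia_smi_log>
  <attached_gpus>1</attached_gpus>
  <gpu id="00000000:03:00.0">
    <product_name>GeForce GTX 1050 Ti</product_name>
    <fan_speed>51 %</fan_speed>
    <utilization><gpu_util>0 %</gpu_util></utilization>
    <temperature><gpu_temp>45 C</gpu_temp></temperature>
    <power_readings><power_draw>N/A</power_draw></power_readings>
  </gpu>
</nvidia_smi_log>"""

root = ET.fromstring(xml)
gpu = root.find("gpu")
stats = {
    "name": gpu.findtext("product_name"),
    "fan": gpu.findtext("fan_speed"),
    "temp": gpu.findtext("temperature/gpu_temp"),
    "power": gpu.findtext("power_readings/power_draw"),
}
print(stats)
```

Since `php ./gpustatus.php` returns valid JSON at the CLI, the script itself works; the blank dashboard page points at the web-server side rather than the plugin's parsing.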
  5. Is there a minimum driver or Unraid version for this plugin? I am on 6.7.0 and have been using a 1050 to HW transcode for quite some time, but after installing this plugin it doesn't seem to do anything; nothing appears on the dashboard. PS: I can confirm that my 1050 reports N/A / 75 W. I think the lower-end cards don't report power usage, as my VM-utilized 1070 reports fine.
  6. One can set up VLANs and fix the issue; you will need a managed switch. If it's a built-in limitation of Docker, I don't think the LT devs are going to create some workaround when a viable solution exists.
  7. I've switched to a 9300-8i and have had no TRIM issues since. I tried various other LSI cards previously (9207 / 9201 / 9211) and they all have issues with the latest mpt2sas/mpt3sas driver software. I'd recommend upgrading your HBA card or running off motherboard ports if you really want TRIM. Prior to my upgrade I had run without TRIM for almost a year of regular operation (0.5-1 TB written/day) with no performance issues on my 860s.
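For context, that write load adds up over a year without TRIM. A back-of-the-envelope check (the 0.75 TB/day midpoint is my assumption, picked for illustration from the 0.5-1 TB/day range above):

```python
# Back-of-the-envelope endurance arithmetic for roughly a year without TRIM
# (0.75 TB/day is an assumed midpoint of the 0.5-1 TB/day range quoted)
daily_tb = 0.75
days = 365
total_tb_written = daily_tb * days
print(f"{total_tb_written} TB written")  # 273.75 TB written
```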
  8. Any update on getting the latest git pull @atribe? This package still won't start:
Connecting to InfluxDB host:192.168.1.248, DB:nut
Connected successfully to InfluxDB
Connecting to NUT host 192.168.1.248:3493
Connected successfully to NUT
Traceback (most recent call last):
  File "/src/nut-influxdb-exporter.py", line 107, in <module>
    json_body = construct_object(ups_data, remove_keys, tag_keys)
  File "/src/nut-influxdb-exporter.py", line 85, in construct_object
    fields['watts'] = watts * 0.01 * fields['ups.load']
TypeError: can't multiply sequence by non-int of type 'float'
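The traceback points at line 85 multiplying a NUT value that arrives as a string: NUT reports variables as text, so the arithmetic needs a numeric cast first. A minimal sketch of the failure and the likely fix (the field values here are made up for illustration):

```python
# NUT reports variables as strings, so 'ups.load' arrives as e.g. "23"
fields = {"ups.load": "23"}   # hypothetical value
watts = 600                   # hypothetical nominal wattage

# This reproduces the crash in construct_object():
try:
    fields["watts"] = watts * 0.01 * fields["ups.load"]
except TypeError as e:
    print(e)  # can't multiply sequence by non-int of type 'float'

# Casting the string to float before multiplying fixes it:
fields["watts"] = watts * 0.01 * float(fields["ups.load"])
print(fields["watts"])  # 138.0
```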
  9. I am having this issue as well. Looks like there was a GitHub update to address it ~4 days ago; the container needs to be updated to latest.
  10. I do exactly this with SSDs outside the array in a BTRFS RAID5 config. Very easy to set up: https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=462135
  11. unRAID (as it was when I started with it) helped me get away from off-the-shelf NAS products into a much more robust environment. Originally starting with an 8-disk rack server, I quickly added additional DAS devices for more storage...swapped hardware for more VM performance...and now host a media streaming server for 20-ish daily users. The server also runs my home LAN dockers and my daily-driver gaming VM...could have done it a million other ways, but unRAID made everything cohesive & simple!
  12. Use the SuperMicro IPMICFG tool to create a new user, then log in and change the password. Tool available here: https://www.supermicro.com/solutions/SMS_IPMI.cfm Copy the Linux package to an accessible dir and use it via PuTTY. Follow the guide here: https://www.servethehome.com/reset-supermicro-ipmi-password-default-lost-login/
  13. Of course, the new system already has a license. Can the data array just be moved to a new system and spun up? Will shares auto-populate?
  14. My usage of unRAID has changed over time, and I have decided to build a dedicated storage box (low power) and a high-power compute box for VMs/Plex/dockers. I have a single existing system which does everything right now, and I'd like to peel out JUST the data array to move to the new storage box. Is it possible to move the drives, assign them, and just recreate parity on the new system? I know I could move the flash drive over (I've upgraded hw over time), but I want to keep my existing dockers/VMs on the compute box. Can I copy the flash drive contents to the new one? Any advice?
  15. I primarily use the NUT plugin for UPS control and leave the baked-in UPS monitor alone; however, I have now had two instances where I wake up to emails/errors about lost UPS communication. The system was last started up/reset on 4/6/19, and I first saw the issue on 4/14:
Apr 14 04:40:04 OCHO apcupsd[547]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
Apr 14 04:40:04 OCHO apcupsd[547]: NIS server startup succeeded
Apr 14 04:40:04 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:09 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:14 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:19 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:24 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:29 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:34 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:39 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:44 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:49 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:54 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:40:59 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 14 04:41:04 OCHO apcupsd[547]: Communications with UPS lost.
The usbfs error continued, along with the communications-lost notification, until I addressed the issue. I found my dashboard had the UPS plugin running; I went into settings and found it had started with the GUI setting set to off. A quick toggle of the settings and all was well again, and the dashboard panel went away. Now again last night (4/21/19):
Apr 21 04:40:04 OCHO apcupsd[2116]: apcupsd 3.14.14 (31 May 2016) slackware startup succeeded
Apr 21 04:40:04 OCHO apcupsd[2116]: NIS server startup succeeded
Apr 21 04:40:04 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:09 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:14 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:19 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:24 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:29 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:34 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:39 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:44 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:49 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:54 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:40:59 OCHO kernel: usb 2-1.1: usbfs: interface 0 claimed by usbfs while 'apcupsd' sets config #1
Apr 21 04:41:04 OCHO apcupsd[2116]: Communications with UPS lost.
Exactly 7 days to the second, we get another mystery startup. I don't have any user scripts running that would call apcupsd, but I do see my "fixed schedules" set to run daily scripts at 4:40a. But apcupsd doesn't start up daily, only weekly, and the weekly schedule is set to run at 4:30a on Sundays.
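The two startup timestamps in the logs above really are exactly a week apart, which a quick calculation confirms:

```python
from datetime import datetime

# Timestamps of the two apcupsd startups from the logs above (year 2019)
first = datetime(2019, 4, 14, 4, 40, 4)
second = datetime(2019, 4, 21, 4, 40, 4)

delta = second - first
print(delta)  # 7 days, 0:00:00
```

That regularity strongly suggests a weekly scheduled job rather than a genuine UPS fault.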
  16. Version: 6.7.0-rc7. Seeing random segfaults related to musl recently:
Apr 8 23:59:32 OCHO kernel: python[34253]: segfault at 6e6f68747988 ip 000014ed29501411 sp 000014ed226c2d38 error 6 in ld-musl-x86_64.so.1[14ed294f4000+45000]
Apr 8 23:59:32 OCHO kernel: Code: 48 8b 47 10 48 39 47 18 75 14 89 f1 48 c7 c0 fe ff ff ff 48 d3 c0 f0 48 21 05 1b b6 06 00 48 8b 57 18 48 8b 47 10 48 89 42 10 <48> 89 50 18 48 8b 47 08 48 89 c2 48 83 e0 fe 48 83 ca 01 48 89 57
Apr 9 00:12:33 OCHO kernel: python[2565]: segfault at 561700000090 ip 000014ed2950140d sp 000014ed226c2f08 error 6 in ld-musl-x86_64.so.1[14ed294f4000+45000]
Apr 9 00:12:33 OCHO kernel: Code: 83 c0 10 c3 48 8b 47 10 48 39 47 18 75 14 89 f1 48 c7 c0 fe ff ff ff 48 d3 c0 f0 48 21 05 1b b6 06 00 48 8b 57 18 48 8b 47 10 <48> 89 42 10 48 89 50 18 48 8b 47 08 48 89 c2 48 83 e0 fe 48 83 ca
I have unifi-controller / ombi / nextcloud / resilio-sync / tautulli / duplicati installed from the linuxserver repo. How do I go about identifying the culprit container? It's not like it's happening with any kind of frequency.
ocho-diagnostics-20190409-1212.zip
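One way to start narrowing this down: the segfault lines name both the faulting process and the library, and `ld-musl` implies an Alpine-based image. A small sketch pulling those fields out of the syslog line (the regex is mine for illustration, not from any tool):

```python
import re

# One of the kernel segfault lines from the syslog above
line = ("Apr 8 23:59:32 OCHO kernel: python[34253]: segfault at 6e6f68747988 "
        "ip 000014ed29501411 sp 000014ed226c2d38 error 6 in "
        "ld-musl-x86_64.so.1[14ed294f4000+45000]")

# Capture the process name, its PID, and the library it faulted in
m = re.search(r"(\S+)\[(\d+)\]: segfault .* in (\S+)\[", line)
proc, pid, lib = m.groups()
print(proc, pid, lib)  # python 34253 ld-musl-x86_64.so.1
```

The faulting process being Python inside a musl libc suggests checking which of the listed containers run Python on an Alpine base, e.g. with `docker exec <name> cat /etc/os-release`.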
  17. Running 6.7.0-rc6 on a Supermicro X9DRi-LN4F with 2x E5-2670 v2 and 128 GB ECC RAM:
1x 9211-8i -- 6x drives on array (slot 4)
1x 9207-8i -- 5x non-array SSDs + 1x cache SSD (slot 6)
1x 9207-8e -- 4x array drives + 2x non-array drives (slot 1)
Gigabyte GTX 1070 (slot 3)
11 array drives -- 6 TB parity
Upgraded from an X8DTi-F with 2x L6540 in late January / early February -- had no issues with that system other than lacking a little grunt for Plex transcoding. I've been having recent issues with call traces which seem to occur during parity checks; I noticed them during the previous month's scheduled parity check (2/28). I had my tunables set to:
Tunable (nr_requests): 128
Tunable (md_num_stripes): 4096
Tunable (md_sync_window): 2048
Tunable (md_sync_thresh): 2000
I've lowered sync_thresh to 192, which alleviated a number of the call traces I was seeing, but I don't understand why this became an issue after the upgrade when I re-used all hardware other than the mobo/CPU. Also, I'm still getting a KVM-related call trace at the onset of the parity check, and NIC dropouts while the parity check is running. The easy answer is to do parity checks less often (I'm on a UPS & SONNEN battery), but I'd like to fix the problem... Any advice on where to go?
ocho-diagnostics-20190401-1838.zip
  18. There was, in fact.... I have:
Tunable (nr_requests): 128 (default)
Tunable (md_num_stripes): 4096 (user-set)
Tunable (md_sync_window): 2048 (user-set)
Tunable (md_sync_thresh): 2000 (user-set)
These have been set this way for quite some time. Why would a parity check cause KVM to lock up?
  19. Started getting call traces, which seem to occur when previously functional VMs are running:
Feb 28 13:58:53 OCHO kernel: CPU: 33 PID: 17724 Comm: unraidd Not tainted 4.18.20-unRAID #1
Feb 28 13:58:53 OCHO kernel: Hardware name: Supermicro X9DRi-LN4+/X9DR3-LN4+/X9DRi-LN4+/X9DR3-LN4+, BIOS 3.2 03/04/2015
Feb 28 13:58:53 OCHO kernel: Call Trace:
Feb 28 13:58:53 OCHO kernel: <IRQ>
Feb 28 13:58:53 OCHO kernel: dump_stack+0x5d/0x79
Feb 28 13:58:53 OCHO kernel: nmi_cpu_backtrace+0x71/0x83
Feb 28 13:58:53 OCHO kernel: ? lapic_can_unplug_cpu+0x8e/0x8e
Feb 28 13:58:53 OCHO kernel: nmi_trigger_cpumask_backtrace+0x57/0xd7
Feb 28 13:58:53 OCHO kernel: rcu_dump_cpu_stacks+0x91/0xbb
Feb 28 13:58:53 OCHO kernel: rcu_check_callbacks+0x23f/0x5ca
Feb 28 13:58:53 OCHO kernel: ? tick_sched_handle.isra.5+0x2f/0x2f
Feb 28 13:58:53 OCHO kernel: update_process_times+0x23/0x45
Feb 28 13:58:53 OCHO kernel: tick_sched_timer+0x36/0x64
Feb 28 13:58:53 OCHO kernel: __hrtimer_run_queues+0xb1/0x105
Feb 28 13:58:53 OCHO kernel: hrtimer_interrupt+0xf4/0x20d
Feb 28 13:58:53 OCHO kernel: smp_apic_timer_interrupt+0x79/0x91
Feb 28 13:58:53 OCHO kernel: apic_timer_interrupt+0xf/0x20
Feb 28 13:58:53 OCHO kernel: </IRQ>
Feb 28 13:58:53 OCHO kernel: RIP: 0010:xor_avx_5+0x1c5/0x352
Feb 28 13:58:53 OCHO kernel: Code: c5 fd 7f 98 e0 00 00 00 c4 c1 7d 6f 82 00 01 00 00 c4 c1 7c 57 83 00 01 00 00 c5 fc 57 83 00 01 00 00 c5 fc 57 85 00 01 00 00 <c5> fc 57 80 00 01 00 00 c5 fd 7f 80 00 01 00 00 c4 c1 7d 6f 8a 20
Feb 28 13:58:53 OCHO kernel: RSP: 0018:ffffc9000b6cfc68 EFLAGS: 00000287 ORIG_RAX: ffffffffffffff13
Feb 28 13:58:53 OCHO kernel: RAX: ffff880e118ada00 RBX: ffff880e118a5a00 RCX: ffff880e118a5000
Feb 28 13:58:53 OCHO kernel: RDX: 0000000000000000 RSI: ffff880e118ad000 RDI: 0000000000001000
Feb 28 13:58:53 OCHO kernel: RBP: ffff880e118a4a00 R08: ffff880e118a6000 R09: ffff880e118a7000
Feb 28 13:58:53 OCHO kernel: R10: ffff880e118a7a00 R11: ffff880e118a6a00 R12: 0000000000000a00
Feb 28 13:58:53 OCHO kernel: R13: ffff880e118ad000 R14: ffff880e118a4000 R15: ffff880e118a5000
Feb 28 13:58:53 OCHO kernel: ? xor_avx_5+0x2d/0x352
Feb 28 13:58:53 OCHO kernel: check_parity+0x118/0x349 [md_mod]
Feb 28 13:58:53 OCHO kernel: handle_stripe+0xe8a/0x1226 [md_mod]
Feb 28 13:58:53 OCHO kernel: unraidd+0xbc/0x123 [md_mod]
Feb 28 13:58:53 OCHO kernel: ? md_open+0x2c/0x2c [md_mod]
Feb 28 13:58:53 OCHO kernel: md_thread+0xcc/0xf1 [md_mod]
Feb 28 13:58:53 OCHO kernel: ? wait_woken+0x68/0x68
Feb 28 13:58:53 OCHO kernel: kthread+0x10b/0x113
Feb 28 13:58:53 OCHO kernel: ? kthread_flush_work_fn+0x9/0x9
Feb 28 13:58:53 OCHO kernel: ret_from_fork+0x35/0x40
Feb 28 14:01:53 OCHO kernel: INFO: rcu_sched self-detected stall on CPU
Feb 28 14:01:53 OCHO kernel: 33-....: (1140023 ticks this GP) idle=92a/1/4611686018427387906 softirq=714962/714976 fqs=282448
Feb 28 14:01:53 OCHO kernel: (t=1140024 jiffies g=275169 c=275168 q=9034795)
Feb 28 14:01:53 OCHO kernel: NMI backtrace for cpu 33
Feb 28 14:01:53 OCHO kernel: CPU: 33 PID: 17724 Comm: unraidd Not tainted 4.18.20-unRAID #1
Feb 28 14:01:53 OCHO kernel: Hardware name: Supermicro X9DRi-LN4+/X9DR3-LN4+/X9DRi-LN4+/X9DR3-LN4+, BIOS 3.2 03/04/2015
Feb 28 14:01:53 OCHO kernel: Call Trace:
Feb 28 14:01:53 OCHO kernel: <IRQ>
Feb 28 14:01:53 OCHO kernel: dump_stack+0x5d/0x79
Feb 28 14:01:53 OCHO kernel: nmi_cpu_backtrace+0x71/0x83
Feb 28 14:01:53 OCHO kernel: ? lapic_can_unplug_cpu+0x8e/0x8e
Feb 28 14:01:53 OCHO kernel: nmi_trigger_cpumask_backtrace+0x57/0xd7
Feb 28 14:01:53 OCHO kernel: rcu_dump_cpu_stacks+0x91/0xbb
Feb 28 14:01:53 OCHO kernel: rcu_check_callbacks+0x23f/0x5ca
Feb 28 14:01:53 OCHO kernel: ? tick_sched_handle.isra.5+0x2f/0x2f
Feb 28 14:01:53 OCHO kernel: update_process_times+0x23/0x45
Feb 28 14:01:53 OCHO kernel: tick_sched_timer+0x36/0x64
Feb 28 14:01:53 OCHO kernel: __hrtimer_run_queues+0xb1/0x105
Feb 28 14:01:53 OCHO kernel: hrtimer_interrupt+0xf4/0x20d
Feb 28 14:01:53 OCHO kernel: smp_apic_timer_interrupt+0x79/0x91
Feb 28 14:01:53 OCHO kernel: apic_timer_interrupt+0xf/0x20
Feb 28 14:01:53 OCHO kernel: </IRQ>
Feb 28 14:01:53 OCHO kernel: RIP: 0010:unraidd+0xb1/0x123 [md_mod]
Feb 28 14:01:53 OCHO kernel: Code: 48 89 12 48 89 52 08 f0 80 62 20 fe f0 ff 42 28 8b 42 28 ff c8 74 02 0f 0b 48 89 df c6 07 00 0f 1f 40 00 fb 66 0f 1f 44 00 00 <4c> 89 ff 41 ff c5 e8 1e ed ff ff 4c 89 ff e8 69 e1 ff ff 48 89 df
Feb 28 14:01:53 OCHO kernel: RSP: 0018:ffffc9000b6cfe50 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff13
Feb 28 14:01:53 OCHO kernel: RAX: 0000000000000000 RBX: ffff880e27bf6268 RCX: ffff880e1643d818
Feb 28 14:01:53 OCHO kernel: RDX: ffff880e129489d8 RSI: 0000000000000046 RDI: ffff880e27bf6268
Feb 28 14:01:53 OCHO kernel: RBP: ffffc9000b6cfeb8 R08: 0000000000000000 R09: ffffc9000b6cfdb8
Feb 28 14:01:53 OCHO kernel: R10: 0000000000000001 R11: ffff880e19d36000 R12: ffff880e27bf6000
Feb 28 14:01:53 OCHO kernel: R13: 0000000001ebd7e8 R14: ffff880e27bf6220 R15: ffff880e129489c8
Feb 28 14:01:53 OCHO kernel: ? md_open+0x2c/0x2c [md_mod]
Feb 28 14:01:53 OCHO kernel: md_thread+0xcc/0xf1 [md_mod]
Feb 28 14:01:53 OCHO kernel: ? wait_woken+0x68/0x68
Feb 28 14:01:53 OCHO kernel: kthread+0x10b/0x113
Feb 28 14:01:53 OCHO kernel: ? kthread_flush_work_fn+0x9/0x9
Feb 28 14:01:53 OCHO kernel: ret_from_fork+0x35/0x40
Can I get some guidance on what I may be looking at here?
EDIT: running 6.6.6; diagnostics attached
ocho-diagnostics-20190228-1409.zip
  20. @johnnie.black I have a UD RAID0 array currently and have been considering changing to RAID5. Can btrfs balance -dconvert be done on the fly with existing data, or must the disks be cleared beforehand?
  21. 6.6.6 is the current stable; the releases are Release Candidates, which are technically beyond beta but before full release. I'd imagine that since @limetech can only test so many configurations themselves, they rely on the community to see if there are glitches with the thousands of different hardware options people use. Personally, I stick to stable releases, only installing them after they have been in the wild for 2+ weeks. There's nobody twisting arms to run RC software, so no need to be up in arms about the testing process.
  22. I have an X9DRi-LN4F+ which has a dedicated onboard Intel C602 controller for the "SCU" port, which I am trying to pass through to a Win10 VM. I set up a RAID0 array at boot in the controller interface, and after passing through the controller with vfio-pci the Windows 10 installer doesn't see any drives. I tried using the virtio drivers and even brought in a copy of the Intel C602 drivers, but no love. Any pointers on where to go next? The goal is to pass through a 4-drive RAID0 for a gaming VM.
  23. I tried again with an Intel Pro 2500 SSD, with the same results, on a 9207-8e with an expander and a 9211-8i with a passthrough backplane.