Dase

Everything posted by Dase

  1. By the way, here's the call trace on my system on the active core during a spike. You can see it trying to do some display-related work:

     Call Trace:
     [ 967.656630] <NMI>
     [ 967.656631] ? nmi_cpu_backtrace+0xd3/0x104
     [ 967.656633] ? nmi_cpu_backtrace_handler+0xd/0x15
     [ 967.656635] ? nmi_handle+0x54/0x131
     [ 967.656637] ? do_raw_spin_lock+0xb/0x1a
     [ 967.656638] ? default_do_nmi+0x66/0x15b
     [ 967.656640] ? exc_nmi+0xbf/0x130
     [ 967.656642] ? end_repeat_nmi+0x16/0x67
     [ 967.656644] ? do_raw_spin_lock+0xb/0x1a
     [ 967.656645] ? do_raw_spin_lock+0xb/0x1a
     [ 967.656647] ? do_raw_spin_lock+0xb/0x1a
     [ 967.656647] </NMI>
     [ 967.656648] <TASK>
     [ 967.656648] _raw_spin_lock_irqsave+0x2c/0x37
     [ 967.656650] fwtable_read32+0x2c/0xb8 [i915]
     [ 967.656723] get_data+0x54/0x63 [i915]
     [ 967.656796] bit_xfer+0x252/0x3e1 [i2c_algo_bit]
     [ 967.656800] gmbus_xfer+0x44/0x92 [i915]
     [ 967.656909] __i2c_transfer+0x2af/0x39b [i2c_core]
     [ 967.656917] i2c_transfer+0xa2/0xc6 [i2c_core]
     [ 967.656923] drm_do_probe_ddc_edid+0xc6/0x130 [drm]
     [ 967.656950] ? drm_get_override_edid+0x53/0x53 [drm]
     [ 967.656973] edid_block_read+0x3a/0xc1 [drm]
     [ 967.656996] _drm_do_get_edid+0x83/0x2ec [drm]
     [ 967.657018] ? drm_get_override_edid+0x53/0x53 [drm]
     [ 967.657041] drm_get_edid+0x34/0x5c [drm]
     [ 967.657063] intel_hdmi_set_edid+0x9d/0x271 [i915]
     [ 967.657136] intel_hdmi_detect+0xc7/0x101 [i915]
     [ 967.657207] drm_helper_probe_detect_ctx+0x81/0xf4 [drm_kms_helper]
     [ 967.657219] output_poll_execute+0x10e/0x1fb [drm_kms_helper]
     [ 967.657231] process_one_work+0x1a8/0x295
     [ 967.657233] worker_thread+0x18b/0x244
     [ 967.657235] ? rescuer_thread+0x281/0x281
     [ 967.657237] kthread+0xe4/0xef
     [ 967.657239] ? kthread_complete_and_exit+0x1b/0x1b
     [ 967.657241] ret_from_fork+0x1f/0x30
     [ 967.657243] </TASK>
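     For anyone who wants to capture a similar trace while a spike is happening, this is roughly what I used. It assumes the sysrq interface is enabled; it dumps a backtrace of every active CPU into the kernel log:

     # enable all sysrq functions (check/restore your original value afterwards if you care)
     echo 1 > /proc/sys/kernel/sysrq
     # dump a backtrace of what each active CPU is doing into the kernel log
     echo l > /proc/sysrq-trigger
     # read the result
     dmesg | tail -n 100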
  2. I have this same problem on my board, an ASRockRack E3C246D4U. Took me forever to track down. It appeared in the 6.11 series. Blacklisting the i915 GPU module fixed it. Now I can't use hardware transcoding, but the CPU not spiking every 30 seconds was worth the tradeoff for me.
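     For reference, this is roughly how the blacklist can be done on recent Unraid releases: a conf file on the flash drive under config/modprobe.d gets applied at boot (double-check the exact path against the release notes for your version):

     # from the console, then reboot
     mkdir -p /boot/config/modprobe.d
     echo "blacklist i915" > /boot/config/modprobe.d/i915.conf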
  3. A setting that noticeably improved my Windows 11 VM performance was disabling the Windows "Memory integrity" option. From the Windows Start menu, search for "Core isolation" and open that page. I'm curious if this is enabled and if disabling it helps you at all.
  4. "Memory integrity" is the option you should try disabling to see if it makes a difference. Leave the driver blocklist enabled.
  5. Just open the Windows Start menu and type "core isolation". It'll take you to that settings page. Make sure "Memory integrity" is disabled. That definitely slowed down my Windows 11 VM.
  6. 6.10.3 and earlier work fine. 6.11+ (including 6.12) maxes a CPU core for about 5 seconds every 30 seconds. This process, shown in htop, accumulates about 3 seconds of CPU time after each spike:

     /usr/bin/php -q /usr/local/emhttp/webGui/nchan/device_list

     Any idea what might have changed in 6.11+? The recurring spike happens even in safe mode with the array stopped. I'm not positive this process is causing my problem, but its behavior seems suspect. It's making newer releases of Unraid unusable for me, because the spike eventually lands on a CPU core assigned to a VM, making the VM unresponsive while the spike is happening. Safe mode 6.12.0 diagnostics attached.

     System:
     ASRockRack E3C246D4U motherboard
     Intel® Xeon® E-2288G CPU @ 3.70GHz
     Dell PERC H310 8-port 6Gb/s SAS adapter RAID controller
     11 drives and 1 parity drive

     tower-diagnostics-20230615-1145.zip
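     In case it helps anyone confirm the same behavior, this is roughly how I watched for the spike and tied it to a PID. pidstat comes from the sysstat package and may not be present on a stock install; top in batch mode works too:

     # sample per-process CPU once per second for a minute; watch what accumulates during a spike
     pidstat 1 60
     # alternative without sysstat: top in batch mode, sorted by CPU usage
     top -b -d 1 -n 60 -o +%CPU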
  7. This issue has been preventing me from upgrading from 6.10. Every 6.11 release has this issue for me. I even rebooted Unraid into safe mode and didn't start the array, and the periodic spike was still there. It's in a kworker thread. I've attached safe-mode diagnostics. I'd appreciate any insights. tower-diagnostics-20221118-1318.zip
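     For what it's worth, this is roughly how I peeked at what the busy kworker was doing while it was spinning (needs root; the PID below is just a placeholder for whatever htop shows spiking):

     # show the kernel stack of the spiking kworker; replace 1234 with the actual PID
     cat /proc/1234/stack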
  8. Seeing this same issue in my 6.11.0 final install. I too use an integrated GPU for Plex transcode.
  9. I had the same problem with slowness writing big files after a while. It's usually something to do with vm.dirty_ratio & vm.dirty_background_ratio. You can adjust these with the Tips And Tweaks community application. The percentage ratio depends on how much RAM you have and your use cases. Here's an explainer for what these do: https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
  10. Mine gives the same 1006 error. Thanks for letting us know the UnraidServerName version of the URL works. I was able to get in that way.
  11. Upgraded from beta30. Saw missing temps on some drives once the array came up, but doing a manual spin up restored all temps and they've worked correctly since. No issues with spinups/spindowns across three controllers and 15 drives, and no issues with my dockers or my VMs. Everything is working great. Thank you Unraid team and the community for all your work!
  12. I idle at 65 watts, but that's with two drives always spun up (a parity drive and a torrent/download drive) and 13 more on standby. I also have two drive controllers, a Dell PERC H310 and a Supermicro AOC-SASLP-MV8; I've noticed each of these draws around 10 watts on its own. System specs:

     M/B: ASRockRack E3C246D4U
     CPU: Intel® Xeon® E-2288G CPU @ 3.70GHz
     Memory: 32 GiB DDR4 single-bit ECC
  13. Those two parameters have some complicated interactions depending on the amount of memory you have. It may take some experimenting to figure out what works best for your environment. This is an older post I found, but it has good background information that may help you fine-tune things: https://lonesysadmin.net/2013/12/22/better-linux-disk-caching-performance-vm-dirty_ratio/
  14. I had similar problems. I downloaded the Tips and Tweaks community application and adjusted these settings: Disk Cache 'vm.dirty_background_ratio' (%): 1 (Default 10) Disk Cache 'vm.dirty_ratio' (%): 3 (Default 20) I have 32 GB RAM, and I found the defaults would excessively cache and then try to dump way too much data at once on large copies. With these settings the reads and writes are much more steady, though you will probably see more frequent disk activity. Read the help items for these parameters in Tips and Tweaks for more info.
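     If you want to try the same values from the command line before committing to them in Tips and Tweaks, the equivalent sysctl calls look roughly like this (they take effect immediately and revert on reboot):

     # start background writeback at 1% of RAM, block writers at 3%
     sysctl -w vm.dirty_background_ratio=1
     sysctl -w vm.dirty_ratio=3
     # confirm what's currently in effect
     sysctl vm.dirty_background_ratio vm.dirty_ratio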
  15. If it won't boot with one stick, try pulling the battery for 5 minutes so the CMOS clears, then try it with one stick again. After I got it to boot, I went into the BIOS setup and flashed the BIOS version that allows hardware transcoding. After that I didn't have problems, other than having to flash the BMC separately so the sensors would work.
  16. As I recall I had this problem too. I got past it by using only one stick of RAM, and then once it posted successfully I spent some time figuring out which pair of slots worked correctly. Board has been rock-solid ever since.
  17. Anybody else seeing this in their log output every 30 minutes? This started happening a few days ago.

     2020-05-17 09:37:28,402 DEBG 'watchdog-script' stdout output:
     [warn] Incoming port site 'https://portchecker.co/check' failed to web scrape, marking as failed
  18. I stopped Docker and copied my docker.img file to a non-SSD array drive that was always spun up anyway. In Docker settings I pointed it to the new location and started Docker. All my containers started up normally and everything is working fine. Prior to doing this I was averaging over 1 GB/hr on loop2. I'm down to 140 MB/hr now. We'll see if that lower number stays down. Update: Almost 24 hours later I'm still running 140ish MB/hr.
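     For anyone who wants to measure their own loop2 write rate, this is roughly how I sampled it. Field 7 of the block device stat file is sectors written, in 512-byte units; adjust the device name if docker.img maps to a different loop device on your system:

     # snapshot sectors written to loop2, wait an hour, then diff
     W1=$(awk '{print $7}' /sys/block/loop2/stat)
     sleep 3600
     W2=$(awk '{print $7}' /sys/block/loop2/stat)
     echo "$(( (W2 - W1) * 512 / 1024 / 1024 )) MiB written in the last hour"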
  19. Kapton tape is ideal for this type of thing. This 1 mm tape from Amazon covers the drive's connector pins perfectly. https://www.amazon.com/gp/product/B006ROKY68
  20. I have a Norco 4220 case with a Noctua NH-D9L on my E-2288G, 110 mm height (fits in a 3U chassis). Even running all-core BOINC I haven't seen it go over 82C (ambient temp of around 65F). My system with all-core BOINC, all cores 100%, and two drives spun up is drawing 204 watts, according to my UPS. I'm really pleased with it and it was the easiest heatsink install I've done to date. https://www.amazon.com/gp/product/B00QCEWTAW
  21. intel-gpu-tools: The default template I pulled from CA had /dev/dri listed twice. I changed it to a single /dev/dri entry and it still didn't launch. Showing advanced settings displayed the USER_ID variable, which had no value. I added 0 and everything started working. Thanks for the container! It was interesting spying on my Plex hw transcodes.
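     If anyone is wondering how to actually watch the transcodes, I just opened the container's console and ran the tool directly. On the versions I've used the binary is intel_gpu_top, and it needs /dev/dri passed through as above:

     # live per-engine usage (Render/3D, Video, VideoEnhance) for the iGPU
     intel_gpu_top
     # non-interactive mode: print plain-text samples to stdout instead
     intel_gpu_top -l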
  22. I too am part of the eBay ASRockRack E3C246D4U club, thanks to Hoopster! I remember someone posting that they had to reflash the motherboard BMC because when they installed it the BMC button on the back got stuck behind the backplate and that cleared the BMC flash. Well, the same thing happened to me, and now the BMC flash is failing its self-test in the BIOS and I can't control fan speeds. I can't find the original post about this, but does anyone have the E3C246D4U BMC flash files/utility? I submitted a support request on the ASRockRack web site, but no response yet....
  23. Have you checked out binhex-minidlna? It's simple to set up and works great for me. It's in Community Applications.
  24. This is almost working for me now. It turns out the Pictures folder is mounted read-only, so that's why I couldn't upload new photos or make new albums. I copied some pictures there manually, but photoshow won't show any thumbnails. All I see are broken link boxes. It will show the full image if I click the eye in the expanded image view though. When I go into the settings and tell photoshow to regenerate thumbs and images, my browser loads a blank page and nothing happens. Any thoughts? This seems similar to the problems Phlii was having.
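     On the read-only part at least: if the template maps the Pictures folder with a read-only flag, switching it to read/write should make uploads possible. The container-side path here is just a guess, but in docker run terms the mapping would look roughly like this:

     # hypothetical paths; the trailing ":ro" is what makes a mount read-only, use ":rw" (or omit it) instead
     -v /mnt/user/Pictures:/pictures:rw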
  25. I recompiled p910nd under unRAID 6.1 and added it as an attachment to my original post in this thread: http://lime-technology.com/forum/index.php?topic=2888.msg64225#msg64225 I tested it and it works fine for me with unRAID 6.1.