olympia

Members
  • Posts: 436
Everything posted by olympia

  1. Wow, you did some really deep testing there! Thank you very much for your efforts, I really appreciate it! It's getting late here, so I will do the same with the same screenshots tomorrow, just for the record. Blue Iris is probably also behaving badly in this setup: it runs as a Windows service and constantly uses the iGPU, so it is not like video in a web browser, which should stop when you disconnect. ...and to your question: "open" means I have an active RDP window open; no problem at all in that case. When I just close the RDP window (disconnect from the session but leave the user logged in), that's when the problem starts. If I reconnect to the session I left, there is no issue again as long as the window is open. If I leave the session by logging out, the issue does not seem to pop up, but I still need more testing to confirm that 100%. And also for the record: there is no issue with RDP without iGPU passthrough. Let me ask a few quick questions:
     - Is your VM configured as Q35-5.1 with OVMF BIOS?
     - Did you do any tweaks to the video drivers in Win10 (e.g. removing the QXL video device manually from the XML, as suggested somewhere on the forums) other than what is in the second post?
     - What unRAID version are you running? I don't recognize the chart for the CPU load. Is this in 6.10rc1, or is that provided by some plugin?
     - Do you have GuC/HuC loading disabled? (A sketch of how I check that is below.)
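     On the GuC/HuC question, this is a minimal sketch of how I check and disable it on my side. I am assuming the /boot/config/modprobe.d path is picked up at boot (I believe that is the case on 6.9+; on older versions the option may need to go on the syslinux append line instead):

        # check the currently active value on the unRAID host (0 = GuC/HuC loading disabled)
        cat /sys/module/i915/parameters/enable_guc

        # disable GuC/HuC loading for the i915 driver (takes effect after a reboot)
        echo "options i915 enable_guc=0" >> /boot/config/modprobe.d/i915.conf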
  2. ...but contrary to you, the issue for me is not when I have the RDP session open (which I also had at 1080p, so not beyond the limit), but when I disconnect RDP without logging out of Windows. If I log out, then there seems to be no issue.
  3. This is 100% RDP related for me. Blue Iris in a Win10 VM with a constant 20% load ran nicely in parallel with the parity check for 8:30+ hours; then, after I logged in and disconnected (not logged out) via RDP, I got the memory page errors within 5 minutes and libvirt froze. I also noted the following:
     - I have 4 cores allocated to the Win10 VM.
     - The average CPU load on all 4 cores was around 15% while it was still running fine.
     - If I logged in with RDP and then just disconnected from the RDP session (leaving my user logged in), the CPU load more than doubled on all cores.
     - If I logged (signed) out my user in Windows instead of just disconnecting from the RDP session, all 4 cores went back to the normal 15% and the doubled load did not occur.
     Now the question is why this Windows 10 build 1903 bug from 2019 is hitting me in 2021. Let me see if the 2019 workaround of disabling "Use WDDM graphics display driver for Remote Desktop Connections" helps me now (a sketch of how I plan to apply it is below)...
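     For the record, that policy lives under Computer Configuration > Administrative Templates > Windows Components > Remote Desktop Services > Remote Desktop Session Host > Remote Session Environment in the Group Policy editor. I believe the registry equivalent is the value below, but treat the value name as my assumption rather than gospel:

        :: run in an elevated command prompt inside the Win10 VM, then reboot
        :: fEnableWddmDriver=0 should correspond to "Use WDDM graphics display driver
        :: for Remote Desktop Connections" = Disabled (my assumption of the value name)
        reg add "HKLM\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services" /v fEnableWddmDriver /t REG_DWORD /d 0 /f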
  4. What do you mean by force this? Do you mean force and easily reproduce the memory page errors followed by the crash of libvirt? I am still in the first few hours of testing, but the VM has now been running Blue Iris for 5:30 hours in parallel with the parity check, and the only thing I did differently is that I have not connected to the server remotely at all since it booted up. unRAID booted, the VM and the parity check started automatically, and everything seems to run smoothly so far. The VM never survived that long before with Blue Iris and a parity check running in parallel. So it pretty much looks like I am somehow being hit by this old issue that MS in theory fixed long ago with KB4522355 back in 2019: "Addresses an issue with high CPU usage in Desktop Window Manager (dwm.exe) when you disconnect from a Remote Desktop Protocol (RDP) session." There were a lot of reports about this back in 2019, but not so much since then, so I am not sure it is the same issue. Do you have any clue based on this? Presumably I should also try another remote desktop solution? (A quick check I plan to use to confirm dwm.exe is the culprit is below.)
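     To confirm it really is dwm.exe spinning after I disconnect, I plan to run something like this inside the VM (via the VNC/console session, since reconnecting over RDP would change the state again); if the bug is active, the CPU Time column should keep climbing:

        :: show verbose details for dwm.exe, including accumulated CPU time
        tasklist /v /fi "imagename eq dwm.exe"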
  5. Ahh, OK, so you (only) get the memory page errors, with a VM crash as a consequence, if you run more than 2-3 VMs at the same time. Does libvirt also crash for you when this happens, and does only a complete restart of unRAID help? Could this somehow be load related? With Blue Iris there is a constant load of 20-25% on the iGPU. I don't know if that is a lot, but it is constant. In normal operation I get to the point of memory page errors within 24 hours, but if for example a parity check is running (because of an unclean restart following a prior crash), then the memory page errors come very quickly (a few minutes, maybe an hour). How much is your load on the iGPU? (The way I measure mine is sketched below.) Also, are you using RDP to connect remotely? @STGMavrick would you mind sharing your current status with this?
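     For reference, this is how I read that 20-25% figure. I am assuming intel_gpu_top (from intel-gpu-tools) is available on the host; as far as I know it is not part of stock unRAID, so that part is my own setup rather than a given:

        # run on the unRAID host; shows per-engine busy percentages for the iGPU
        intel_gpu_top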
  6. I am getting exactly the same as @STGMavrick with Blue Iris and GVT-g... I have tried everything that has been suggested before, yet I cannot get rid of the freezing. @alturismo are you also seeing the same symptoms? If yes, what do you mean by "i can use 2 /_4) or 3 (_8) max, then i run into memory page errors ..."? Does this mean you have a solution for it?
  7. unRAID now ships with libevent-2.1.11-x86_64-1 installed by default. The Preclear plugin is downgrading this to libevent-2.1.8-x86_64-3, please see below. I am not sure from which unRAID version libevent is included, but could you please remove the download and installation of libevent, at least for unRAID v6.8.0? Thank you!
     Dec 6 09:59:05 Tower root: +==============================================================================
     Dec 6 09:59:05 Tower root: | Upgrading libevent-2.1.11-x86_64-1 package using /boot/config/plugins/preclear.disk/libevent-2.1.8-x86_64-3.txz
     Dec 6 09:59:05 Tower root: +==============================================================================
     Dec 6 09:59:05 Tower root: Pre-installing package libevent-2.1.8-x86_64-3...
     Dec 6 09:59:05 Tower root: Removing package: libevent-2.1.11-x86_64-1-upgraded-2019-12-06,09:59:05
     Dec 6 09:59:05 Tower root: Verifying package libevent-2.1.8-x86_64-3.txz.
     Dec 6 09:59:05 Tower root: Installing package libevent-2.1.8-x86_64-3.txz:
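     For anyone who wants to verify which libevent build is currently active on their box: unRAID being Slackware-based, the installed package list should live under /var/log/packages as far as I know (the path is my assumption), so something like this shows it:

        # list the installed libevent package(s) on the unRAID host
        ls /var/log/packages/ | grep -i libevent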
  8. Should this work regardless of which network protocol is being used? I had a parity check running and streaming a movie was completely impossible via NFS (I believe this was possible even without this feature before...). I haven't tried via SMB yet; I am just wondering whether this feature is protocol dependent or not. (A rough way I plan to compare the two is below.)
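     To get a rough number rather than a feeling, I plan to time a large sequential read over each protocol from a Linux client while the parity check is running. The mount points and the file name below are just placeholders for wherever the share is mounted; use different (large) files per run so the client-side cache doesn't skew the second result:

        # sequential read over the NFS mount (client side)
        dd if=/mnt/nfs-test/movie1.mkv of=/dev/null bs=1M count=2048

        # comparable read over the SMB mount
        dd if=/mnt/smb-test/movie2.mkv of=/dev/null bs=1M count=2048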
  9. Apologies for the late response. I wasn't expecting any action and only just saw your reply. I have now tested the latest version and it indeed fixes the issue. Thank you very much for your attention and the quick fix! Great support, I really do appreciate it!
  10. It's an onboard NIC; the Z270M Pro4's NIC is an Intel® I219V. Fair enough, I am not forcing anything. On a side note: I never had this issue pre-v3.8.0, so it pretty much seems to be v3.8.0 specific. Consequently, more feedback like this will probably come in when the stable is released and more users upgrade.
  11. @dlandon, I believe you misunderstood the case. It is not the disabling itself that causes the issue, but the moment the Tips and Tweaks plugin applies those settings. My setup has been working with these two settings disabled for ages. However, with unRAID v6.8.0 there is a race condition between the moment the plugin applies the settings and the moment docker detects custom networks. When the plugin applies the NIC settings, the NIC gets disabled for a second or two (I guess), and that's the same moment docker is trying to detect custom networks. Because the NIC is down, it doesn't detect anything. If I restart the docker service after booting, the custom networks get detected properly with the settings still applied. It's not a biggie for me, as I don't have a hard reason to disable those settings (although that's what the plugin recommends), but I am reporting it because I guess more users will face this issue when v6.8.0 stable gets released. (A sketch of the sequence as I understand it is below.)
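     For clarity, this is roughly what I assume happens under the hood when the plugin applies the two options; the exact commands are my guess at what Tips and Tweaks runs, not taken from the plugin source:

        # "Disable NIC Flow Control" (my assumption of the underlying call)
        ethtool -A eth0 autoneg off rx off tx off

        # "Disable NIC Offload" (again an assumption; offload flags vary by driver)
        ethtool -K eth0 tso off gso off gro off

        # either of these can bounce the link for a moment, which is exactly
        # when docker scans for custom (br0) networks during boot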
  12. OK, I'll close it here and continue with dlandon in the plugin thread. Thanks for the attention.
  13. I have no hard reason, I was just following the recommendation of a trustworthy plugin author somewhat blindly. I presume this should be handled somehow (maybe a warning in the plugin would be enough), because I don't think I will be the only one with this issue, especially when 6.8.0 stable gets released.
  14. Hi @dlandon, could you please have a look at this thread? The bottom line is that there is a race condition between custom network detection for dockers and the Tips & Tweaks plugin applying the NIC settings. What is your position on this? Thank you!
  15. Thanks @bonienl! So I guess this is something for @dlandon to address in the Tips and Tweaks plugin?
  16. @bonienl @dlandon OK, I think I've got this figured out, and it looks like it is caused by a race condition between detecting custom networks for dockers and the Tips & Tweaks plugin applying "Disable NIC Flow Control" and "Disable NIC Offload". If I leave Tips & Tweaks on its default settings, the custom networks show up for dockers; if I set the above two options to yes, the custom networks are not detected. Does this make any sense to you?
  17. I attached two new diag packages. The zip with the earlier timestamp is the one taken after a fresh reboot, with "custom : br0" missing from the docker network type. The later zip was grabbed after re-applying the network settings, with "custom : br0" showing up again and the dockers (which had failed to start after boot) autostarted. tower-diagnostics-20191015-0932.zip tower-diagnostics-20191015-0953.zip
  18. Spoke too soon. Re-applying the network settings does help, but after a reboot "custom : br0" disappears again until I re-apply the network settings once more. So for some reason I need to re-apply the network settings after each reboot to make the custom network available for dockers. Do you have an idea why this is? Shall I provide another set of diags? (As a stop-gap I am considering the workaround sketched below.)
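     As a temporary workaround, until the root cause is understood, I am considering simply restarting the docker service once after each boot, e.g. from the User Scripts plugin set to run at array start. The rc.docker path is what I believe unRAID uses, so please correct me if that is wrong:

        #!/bin/bash
        # give the NIC settings a moment to settle, then restart docker
        # so it re-detects the custom (br0) networks
        sleep 30
        /etc/rc.d/rc.docker restart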
  19. Thank you both! @bonienl that did the trick! Many thanks for the hint! (Maybe something to include in the release notes.)
  20. Could someone please confirm whether br0 shows up by default under Docker settings / network type? I only have "Bridge", "Host" and "None" as choices in the dropdown menu now, while "custom : br0" is not available as it was in v6.7.2. (The quick check I ran is below.)
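     For what it's worth, this is how I double-checked on the command line that the custom network really is gone rather than just hidden in the dropdown; on v6.7.2 the br0 network used to appear in this list on my box:

        # list the networks docker currently knows about on the unRAID host
        docker network ls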
  21. Thanks for your attention. Here it is. tower-diagnostics-20191014-1154.zip
  22. Following the upgrade I noticed that my dockers with a custom/fixed IP address are not starting up (execution error). It turned out that their network type had been auto-assigned to "None", and when I checked, br0 is no longer available in the network type. Was there a change in this regard?