olympia

Everything posted by olympia

  1. What do you mean by force this? Do you mean force and easily reproduce the memory page errors followed by the crash of libvirt? I am still in the first few hours of testing, but the VM has now been running Blue Iris for 5:30 hours in parallel with a parity check, and the only thing I did differently is that I did not connect to the server remotely at all since it booted up. unRAID booted, the VM and the parity check started automatically, and everything seems to run smoothly now. The VM never survived that long before with Blue Iris and a parity check running in parallel. So it pretty much looks like I am somehow hit by this old issue that MS in theory fixed long ago with KB4522355 back in 2019: "Addresses an issue with high CPU usage in Desktop Window Manager (dwm.exe) when you disconnect from a Remote Desktop Protocol (RDP) session." There were a lot of reports about this back in 2019, but not so many since then, so I am not sure this is the same issue. Do you have any clue based on this? Presumably I should also try another remote desktop solution?
  2. Ahh, ok, so you (only) get the memory page errors, with a VM crash as the consequence, if you run more than 2-3 VMs at the same time. Is libvirt also crashing for you when this happens, and does only a complete restart of unRAID help? Could this somehow be load related? With Blue Iris there is a constant load of 20-25% on the iGPU. I don't know if that is much, but it is constant (a rough way to measure this is sketched after this list). In normal operation I get to the point of memory page errors within 24 hours, but if, for example, a parity check is running (because of an unclean restart following a prior crash), then the memory page errors come very quickly (a few minutes, maybe an hour). How high is your load on the iGPU? Also, are you using RDP to connect remotely? @STGMavrick would you mind sharing your current status with this?
  3. I am getting exactly the same as @STGMavrick with Blue Iris and GVT-g... I have tried everything that has been suggested before, yet I cannot get rid of the freezing. @alturismo are you also having the same symptoms? If yes, what do you mean by "i can use 2 /_4) or 3 (_8) max, then i run into memory page errors ..." - does this mean you have a solution for it?
  4. unRAID now has libevent-2.1.11-x86_64-1 installed by default. The Preclear plugin downgrades this to libevent-2.1.8-x86_64-3, please see below. I am not sure from which unRAID version onward libevent is included, but could you please remove the download and installation of libevent at least for unRAID v6.8.0? (A possible version guard is sketched after this list.) Thank you!
     Dec 6 09:59:05 Tower root: +==============================================================================
     Dec 6 09:59:05 Tower root: | Upgrading libevent-2.1.11-x86_64-1 package using /boot/config/plugins/preclear.disk/libevent-2.1.8-x86_64-3.txz
     Dec 6 09:59:05 Tower root: +==============================================================================
     Dec 6 09:59:05 Tower root: Pre-installing package libevent-2.1.8-x86_64-3...
     Dec 6 09:59:05 Tower root: Removing package: libevent-2.1.11-x86_64-1-upgraded-2019-12-06,09:59:05
     Dec 6 09:59:05 Tower root: Verifying package libevent-2.1.8-x86_64-3.txz.
     Dec 6 09:59:05 Tower root: Installing package libevent-2.1.8-x86_64-3.txz:
  5. Should this work regardless of which network protocol is being used? I had a parity check running and streaming a movie via NFS was completely impossible (I believe this was possible even without this feature before...). I haven't tried via SMB yet, I'm just wondering whether this feature is protocol dependent or not.
  6. Apologies for the late response. I didn't expect any action to be taken and I only just saw your reply. So I just tested the latest version and it indeed fixes the issue. Thank you very much for your attention and the quick fix! Great support, I really do appreciate it!
  7. It's an onboard NIC; the Z270M Pro4's NIC is an Intel® I219V. Fair enough, I am not forcing anything. On a side note: I never had this issue pre v3.8.0, so it pretty much seems to be v3.8.0 specific. Consequently, more feedback like this will potentially come in when stable is released and more users upgrade.
  8. @dlandon, I believe you misunderstood the case. It is not the disabling itself that is causing the issue, but the moment Tips and Tweaks applies those settings. My setup has been working with these two settings disabled for ages. However, with unRAID v6.8.0 there is a race condition between the plugin applying the settings and docker detecting custom networks. When the plugin applies the NIC settings, the NIC gets disabled for a second or two (I guess), and that is the same moment docker is trying to detect custom networks. Because of the disabled NIC state, it doesn't detect anything. If I restart the docker service after booting up, the custom networks get detected properly with the settings applied (a workaround sketch is after this list). It's not a biggie for me as I have no hard reason to disable those settings (although that's what the plugin recommends), but I am reporting this because I guess more users will run into this issue when v6.8.0 stable gets released.
  9. OK, I'll close it here and continue with dlandon in the plugin thread. Thanks for the attention.
  10. I have no hard reason, I was just following the recommendation of a trustworthy plugin author somewhat blindly. I presume this should be handled somehow (maybe a warning in the plugin would be enough), because I don't think I will be the only one with this issue, especially when 6.8.0 stable gets released.
  11. Hi @dlandon, could you please have a look at this thread? The bottom line is that there is a race condition between custom network detection for dockers and the Tips & Tweaks plugin applying network settings. What is your position on this? Thank you!
  12. Thanks @bonienl! So I guess this is something for @dlandon to address in the tips and tweaks plugin?
  13. @bonienl @dlandon OK, I think I've got this figured out and it looks like this is caused by a race condition between detecting custom networks for dockers and the Tips & Tweaks plugin applying "Disable NIC Flow Control" and "Disable NIC Offload". If I have Tips & Tweaks on default settings then custom networks show up for dockers; if I set the above two options to yes, then custom networks are not detected. Does this make any sense to you?
  14. I attached two new diag packages. The zip with the earlier timestamp is the one after a fresh reboot, with "custom : br0" having disappeared from the docker network type. The later zip was grabbed after re-applying the network settings, with "custom : br0" showing up and the dockers (which had failed to start after boot) autostarted. tower-diagnostics-20191015-0932.zip tower-diagnostics-20191015-0953.zip
  15. Spoke too soon. Re-applying the network settings helps, but after a reboot "custom : br0" disappears again until I re-apply the network settings once more. So for some reason I need to re-apply the network settings after each reboot to make the custom network available for dockers. Do you have an idea why this is? Shall I provide more diags?
  16. Thank you for both! @bonienl that did the trick! Many thanks for the hint! (Maybe something to include in the release notes.)
  17. Could someone please confirm whether br0 is showing up by default under Docker settings / network type? I only have "Bridge", "Host" and "None" as choices in the dropdown menu now, while "custom : br0" is not available as it was in v6.7.2.
  18. Thanks for your attention. Here it is. tower-diagnostics-20191014-1154.zip
  19. Following the upgrade I noticed that my dockers with a custom/fixed IP address are not starting up; they fail with an execution error. It turned out that the network type got auto-assigned to None, and when I checked, br0 is no longer available as a network type. Was there a change regarding this?
  20. Is it just me who has lost the custom network type in docker containers, so that I am unable to set fixed IPs any longer? Edit: oh, I found the new setting for enabling user defined networks on the docker settings page. I don't think this change was mentioned anywhere in the change logs? Anyhow, I am happy now...
  21. Many thanks Alex for your efforts! Should the Folder Caching status indicator on the settings page be accurate? It shows Stopped for me, while in the log I have the following (a quick way to double-check is sketched after this list):
      Oct 30 21:50:03 Tower cache_dirs: cache_dirs service rc.cachedirs: Started: '/usr/local/emhttp/plugins/dynamix.cache.dirs/scripts/cache_dirs -i "Movies" -c 2 -p 8 -l on 2>/dev/null'
  22. Is there any way to trigger an update manually? The DB of my MB server is from 16 Dec 2017, because I haven't had it running for a long time. Now it is getting updates every hour, but only incremental updates. Meaning, if there were 5 MB updates on 17 Dec, then it takes 5 hours for my server to get through that day, and then it still needs to catch up with all the days since 16 Dec. So can I speed this process up somehow and get up to date in one go? (A rough catch-up loop is sketched after this list.)
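
Regarding the iGPU load question in item 2: a minimal sketch of how the host-side load of the shared iGPU could be checked while the GVT-g guest runs Blue Iris. It assumes intel_gpu_top is available on the unRAID host (it is not part of stock unRAID; it ships with the intel-gpu-tools / igt-gpu-tools package, which can be added e.g. via the Nerd Pack plugin), and the -l/-s options only exist in newer builds. The output path is arbitrary.

    # Interactive view: per-engine busy percentages on the host iGPU,
    # including the render engine the GVT-g vGPU is scheduled on.
    intel_gpu_top

    # Non-interactive capture (newer igt-gpu-tools builds only): log a plain-text
    # sample every 5 seconds so GPU load can be correlated with the syslog
    # timestamps around a memory page error / libvirt crash.
    intel_gpu_top -l -s 5000 > /tmp/igpu-load.txt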
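
On the libevent point in item 4: a minimal sketch, assuming Slackware-style package metadata under /var/log/packages as on unRAID, of the kind of guard the plugin's install script could use so the bundled 2.1.8 package is only installed when the OS does not already ship libevent. The paths and the exact guard are assumptions for illustration, not the plugin's actual code.

    # Only install the bundled libevent when the OS does not already provide one
    # (unRAID 6.8.0 ships libevent-2.1.11 out of the box).
    BUNDLED_PKG=/boot/config/plugins/preclear.disk/libevent-2.1.8-x86_64-3.txz

    if ls /var/log/packages/libevent-* >/dev/null 2>&1; then
        echo "libevent already provided by the OS, skipping bundled package"
    else
        upgradepkg --install-new "$BUNDLED_PKG"
    fi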
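
For the race condition described in item 8, a workaround sketch rather than a fix: a script run once after boot (for example "At Startup of Array" via the User Scripts plugin) that waits out the brief NIC-down window caused by Tips and Tweaks and then restarts the docker service so it re-detects the custom br0 network. The delay value and the rc.docker path are assumptions based on a stock unRAID install.

    #!/bin/bash
    # Give Tips and Tweaks time to toggle the NIC (flow control / offload)...
    sleep 60
    # ...then restart docker so it re-scans the interfaces and picks up custom : br0.
    /etc/rc.d/rc.docker restart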
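
For the Folder Caching status question in item 21, a quick console check of whether cache_dirs is actually running, independent of what the status indicator shows:

    # List any running cache_dirs process (full command line shown).
    pgrep -af cache_dirs && echo "cache_dirs is running" || echo "cache_dirs is NOT running"

    # Show the most recent lines the cache_dirs service wrote to the syslog.
    grep "cache_dirs service" /var/log/syslog | tail -n 5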
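
And for the replication catch-up in item 22, a rough, hypothetical sketch of forcing consecutive replication runs instead of waiting for the hourly schedule. The container name and the path of the replication script inside the container are guesses (check the image's documentation for the real command), and whether the script exits non-zero once it is fully caught up is also an assumption.

    # Hypothetical catch-up loop: keep applying replication packets back to back.
    CONTAINER=musicbrainz   # container name is an assumption

    for i in $(seq 1 200); do
        # Path inside the container is a guess; adjust to the image's layout.
        docker exec "$CONTAINER" /musicbrainz-server/admin/replication/LoadReplicationChanges || break
    done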