Posts posted by olympia

  1. 1 hour ago, alturismo said:

    you can force this pretty easily with RDP when you have a higher resolution than 1920 x 1080 ...

     

    so yes, i would prefer RDP due to performance, but depending on the client the resolution switch can hurt ... so currently i either use VNC or parsec (fluid experience)

     

    What do you mean by "force this"? Do you mean force and easily reproduce the memory page errors, followed by the crash of libvirt?

     

    I am still in the first few hours of testing, but the VM has now been running Blue Iris for 5:30 hours in parallel with a parity check, and the only thing I did differently is that I have not connected to the server remotely at all since it booted. unRAID booted up, the VM and the parity check started automatically, and everything seems to run smoothly now. The VM has never survived that long before with Blue Iris and a parity check running in parallel.

     

    So it pretty much looks like I am somehow hit by this old issue that MS in theory fixed long ago with KB4522355 back in 2019:

    "Addresses an issue with high CPU usage in Desktop Window Manager (dwm.exe) when you disconnect from a Remote Desktop Protocol (RDP) session."

     

    There were a lot of reports about this back in 2019, but not so many since then, so I am not sure it is the same issue.

    Do you have any clue based on this? Presumably I should also try another remote desktop solution? (A rough way to watch for the errors on the host side is sketched below.)
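    For anyone who wants to watch for this on the host side while testing, a rough sketch from the unRAID console is below. The grep pattern is only a guess at the wording of the errors, so adjust it to whatever your own syslog actually prints.

    # Sketch: check whether libvirt still responds and look for recent page/GVT errors.
    # The grep pattern is a guess - adjust it to the messages in your own syslog.
    virsh list --all || echo "libvirt is not responding"
    grep -iE "gvt|page (fault|error)" /var/log/syslog | tail -n 20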

     

  2. 4 hours ago, alturismo said:

    but i can't use that many simultaneously; if i add more and more at the same time i run into memory page errors and the VMs crash

     

    Ah, OK, so you (only) get the memory page errors, with the VMs crashing as a consequence, if you run more than 2-3 VMs at the same time.

    Is libvirt also crashing for you when this happens, and does only a complete restart of unRAID help?

     

    Could this somehow be load related? With Blue Iris there is a constant load of 20-25% on the iGPU. I don't know whether that is a lot, but it is constant. In normal operation I reach the point of memory page errors within 24 hours, but if, for example, a parity check is running (because of an unclean restart following a prior crash), then the memory page errors come very quickly (within a few minutes, maybe up to an hour).

     

    How much load do you have on the iGPU? (One way to check it on the host is sketched at the end of this post.)

    Also, are you using RDP to connect remotely?

     

    @STGMavrick would you mind sharing your current status on this?
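    For reference, one way to check the iGPU load on the unRAID host is intel_gpu_top. It is not part of stock unRAID, so I am assuming it comes from something like the Intel GPU TOP plugin or Nerd Tools on your setup.

    # Sketch: show per-engine iGPU utilisation, refreshing once per second (Ctrl-C to quit).
    # intel_gpu_top is assumed to be installed separately (e.g. via the Intel GPU TOP plugin).
    intel_gpu_top -s 1000
    # or capture ~10 seconds of JSON samples to a file for comparison:
    timeout 10 intel_gpu_top -J -s 1000 > /tmp/igpu-load.json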

  3. On 8/28/2021 at 6:26 AM, alturismo said:

    it's about the available mem for the vGPU(s), so basically with the desktop CPUs we can't get more out of it.

     

    even the *_4 and *_8 (4 or 8 vGPUs) won't really work fully ... i can use 2 (_4) or 3 (_8) max, then i run into memory page errors ...

     

    I am getting exactly the same as @STGMavrick with Blue Iris and GVT-g... 

     

    I tried everything that has been suggested before, yet I cannot get rid of the freezing.

     

    @alturismo are you also having the same symptoms? 

    If yes, what do you mean by "i can use 2 (_4) or 3 (_8) max, then i run into memory page errors ..."? Does this mean you have a solution for it? (A quick way to check how many vGPU instances are still available per type is sketched below.)
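    For anyone comparing notes on the _4 / _8 types: the remaining capacity per GVT-g type can be read straight from sysfs. This sketch assumes the iGPU sits at the usual PCI address 0000:00:02.0 - adjust it if yours differs.

    # Sketch: list each GVT-g vGPU type with its description and free instance count.
    # Assumes the Intel iGPU is at PCI address 0000:00:02.0.
    for t in /sys/bus/pci/devices/0000:00:02.0/mdev_supported_types/*; do
        echo "== $(basename "$t")"
        cat "$t/description"
        echo "available_instances: $(cat "$t/available_instances")"
    done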

     

  4. unRAID now has libevent-2.1.11-x86_64-1 installed by default.

     

    The Preclear plugin is downgrading this to libevent-2.1.8-x86_64-3; please see below.

    I am not sure from which unRAID version on libevent is included, but could you please remove the download and installation of libevent, at least for unRAID v6.8.0? (A possible guard is sketched after the log below.)

     

    Thank you!

    Dec  6 09:59:05 Tower root: +==============================================================================
    Dec  6 09:59:05 Tower root: | Upgrading libevent-2.1.11-x86_64-1 package using /boot/config/plugins/preclear.disk/libevent-2.1.8-x86_64-3.txz
    Dec  6 09:59:05 Tower root: +==============================================================================
    Dec  6 09:59:05 Tower root: Pre-installing package libevent-2.1.8-x86_64-3...
    Dec  6 09:59:05 Tower root: Removing package: libevent-2.1.11-x86_64-1-upgraded-2019-12-06,09:59:05
    Dec  6 09:59:05 Tower root: Verifying package libevent-2.1.8-x86_64-3.txz.
    Dec  6 09:59:05 Tower root: Installing package libevent-2.1.8-x86_64-3.txz:
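    To illustrate what I mean, something along these lines in the plugin's install logic would avoid replacing a newer libevent. This is only a hypothetical sketch, not the actual preclear plugin code.

    # Hypothetical guard, not the actual plugin code: install the bundled libevent
    # only if no libevent package is present on the system at all.
    if ls /var/log/packages/libevent-* >/dev/null 2>&1; then
        echo "libevent already provided by unRAID - skipping bundled package"
    else
        upgradepkg --install-new /boot/config/plugins/preclear.disk/libevent-2.1.8-x86_64-3.txz
    fi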

     

  5. @dlandon, I believe you misunderstood the case.

     

    It is not the disabling itself that is causing the issue, but the moment Tips and Tweaks applies those settings.

    My setup has been working with these two settings disabled for ages.

     

    However, with unRAID v6.8.0 there is a race condition between the moment the plugin applies the settings and the moment docker detects custom networks.

    When the plugin applies the NIC settings, the NIC gets disabled for a second or two (I guess), and that's the same moment docker is trying to detect the custom networks. Because of the disabled NIC state, it doesn't detect anything. If I restart the docker service after booting up, the custom networks get detected properly with the settings applied.

     

    It's not a biggie for me, as I don't have hard reasons for disabling those settings (although that's what the plugin recommends), but I am reporting this because I guess more users will run into it when v6.8.0 stable gets released. (A possible workaround until then is sketched below.)
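    Until it is fixed properly, a user script run after array start could paper over it. This is only a sketch: it assumes the stock /etc/rc.d/rc.docker script and that the custom network is called br0 - adjust the name to your own setup.

    # Sketch: if docker came up without the custom network, restart the service once.
    # "br0" is only an example name - use whatever your custom network is called.
    if ! docker network ls --format '{{.Name}}' | grep -qx 'br0'; then
        echo "custom network missing - restarting docker"
        /etc/rc.d/rc.docker restart
    fi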

  6. Is it just me who has lost the custom network type in docker containers, so that I am unable to set fixed IPs any longer?

     

    Edit: oh, I found the new setting for enabling user-defined networks on the docker settings page. I don't think this change was mentioned anywhere in the change logs? :) Anyhow, I am happy now...

  7. Is there any way to trigger an update manually?

    The DB of my MB server is from 16 Dec 2017, because I haven't had it running for a long time. Now it is getting updates every hour, but only incremental ones. Meaning, if there were 5 MB updates on 17 Dec, then it takes 5 hours for my server to get through that day, and then it still needs to catch up with all the days since 16 Dec. So can I speed this process up somehow and get up to date at once?

  8. 4 hours ago, natecook said:

     

    Do you have an SSD for cache? Is your Downloads share set to use cache?

     

    It might just be your array write speed being slowed by calculating parity.

     

    No, but the HDD I use for cache is fast enough to handle gigabit speed, and yes, I use a cache-only folder. I know what I am doing. In the meantime I found some other reports on the Transmission forums that it cannot handle speeds higher than 40MB/sec:

    https://forum.transmissionbt.com/viewtopic.php?t=14697

    https://forum.transmissionbt.com/viewtopic.php?t=16725

     

    Deluge is also not much better.

    I just tried qBittorrent and it CAN saturate the network, so it seems like this is the ONLY torrent client that can handle that speed.

    I am (was) a fan of Transmission though... It's a pity...

  9. Could someone using a gigabit link with Transmission Docker confirm that the full bandwidth can be utilized?

    It looks like I am somehow capped at about 40MB/sec, while from another computer on the same network I can get 100MB+/sec using the same connection with the same torrent.

     

    It doesn't seem to be a limitation of the Docker environment (I tried the Transmission plugin as well, with the same results) - can unRAID itself somehow have a limitation that keeps Transmission from running at full speed? (A quick way to isolate this is sketched below.)
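    To rule unRAID itself out, it helps to test the network and the cache disk separately from any torrent client. A rough sketch is below; iperf3 is not part of stock unRAID (I am assuming it is installed e.g. via Nerd Tools), and the IP and path are just examples.

    # Sketch: raw network throughput to another machine running "iperf3 -s" (example IP).
    iperf3 -c 192.168.1.10
    # Sketch: raw write speed to the cache-only share, bypassing the page cache (example path).
    dd if=/dev/zero of=/mnt/cache/Downloads/testfile bs=1M count=4096 oflag=direct
    rm /mnt/cache/Downloads/testfile

    If both of these get close to gigabit/disk speed while Transmission still sits at 40MB/sec, the cap is in the client rather than in unRAID or Docker.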

  10. Many thanks jbrodriguez! It works perfectly now!

     

    A minor thing: while I was adding servers I used autocomplete for the IP addresses, and my Android OS put a space after the IP address during the autocomplete. ControlR won't add the server in this case, as it seems it does not recognize the value as an IP/host. It took me quite a few minutes to figure out what was wrong. Maybe you could look into this (e.g. trimming whitespace from the input) to avoid confusing users in a similar case?

     

    I am not sure how difficult it would be to code, but it would be nice if the order of the servers could be changed (not something you cannot live without, but it is always nice to have your own server listed first :) )

     

    Thank you again!

  11. The Dynamix Local Master plugin doesn't display Yoda in the header any longer, while in SMB settings the plugin correctly reports that unRAID is the elected master browser. Is this only me? (A quick way to cross-check from the console is sketched below.)
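    For comparison, the elected master browser can also be checked from the unRAID console with Samba's own tools, independently of the plugin - a quick sketch:

    # Sketch: ask the network who the local master browser is (queries __MSBROWSE__).
    nmblookup -M -- -
    # then resolve the returned address back to a NetBIOS name (example IP):
    nmblookup -A 192.168.1.2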

  12. Is it possible to find/view/copy the results of the extended tests from the file system? I have a huge log of dupes and it is difficult to process it through the "view results" window in the GUI. I presume these results should be saved somewhere? However, I cannot locate them. (A generic way to search for them is sketched below.)
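    In case someone else is hunting for the same thing: I don't know where exactly the plugin writes its results, but a generic search from the console like the sketch below should find them if they are on disk at all. The search locations and the filename are only guesses/placeholders.

    # Sketch: look for recently written files that mention one of the reported dupes.
    # /tmp and /boot/config/plugins are only guesses for where the results might live,
    # and "name-of-a-known-dupe-file" is a placeholder for a filename from the GUI window.
    find /tmp /boot/config/plugins -type f -mmin -120 -print0 2>/dev/null \
        | xargs -0 -r grep -l "name-of-a-known-dupe-file" 2>/dev/null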

  13. I am having exactly the same issue. Did you find a solution to the problem?

     

    Can you post your docker mappings... And we'll take a look at this...

     

    It's

    /mnt/cache/appdat/config and

    /mnt/cache/appdat/data

     

    Could it somehow be related to the unRAID version? I am running v6.1.9, not the 6.2 series...

     

    Edit: I was just removing the previous version, which had been installed flawlessly and was working perfectly (other than no longer updating itself due to the schema changes) - so I have a precedent of seeing this docker working :)
