bigjme
Everything posted by bigjme

  1. If I watch the docker usage it never really changes. Using Community Apps I can see the majority of my dockers are using only 60MB or 116MB of memory; my highest docker is Emby, which almost always sits at 305MB. All in, they are reporting 1.83% memory usage. I have the swap file running now so I can see what happens tomorrow - I have put Fix Common Problems back into troubleshoot mode so I can post what happens then. Edit: One thing that makes me think it isn't dockers is that if I have my system on for a week and shut off all VMs, it sits at 2GB memory usage.
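     For anyone who wants to double-check the same numbers themselves, this is roughly what I look at from a console/SSH session on the server - a one-off snapshot rather than the live view:
        # one-off snapshot of per-container CPU and memory usage
        docker stats --no-stream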
  2. Day 2 of system lockups; again the server didn't last 24 hours. I've had to put the swap file plugin back on and will see if the server stays up or not. I have attached the diagnostics for the last failure. archangel-diagnostics-20160623-1756.zip
  3. Having removed everything from my server, even the powerdown plugin, I received my first full system crash today - I had been 1 month free of any system crashes - so I was forced to do a hard power reset. Looking through the logs folder, this is the last log I found before the crash; the server didn't even make 24 hours, so the log isn't huge and I was able to include it. The last thing I can see is unRAID killing things for memory and the system hard locking not long after. If I can't last 24 hours without a system lock, I'm going to have to put the swap file plugin back on the system. Jamie archangel-diagnostics-20160622-1743.zip
  4. I didn't realize it stored so much, to be honest. I have the plugin installed again and running in troubleshoot mode as well. I took note of memory usage during the system reboot; here are my numbers. I'm running beta 23.
     870M usage after reboot with no array running
     1.55G usage after starting the array
     25.5G usage 30 seconds after starting all 3 VMs
     28.3G after a further 5 minutes
     29G after a further 10 minutes
     VM1: 14.7G RES
     VM2: 9.7G RES
     VM3: 2771M RES
     The full system is reporting 29G of 31.4G in htop: 29348852 used, 342288 free, 2967696 buff/cache.
     I hope some of this helps.
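     If anyone wants to take the same readings at each step, they can be grabbed straight from the console; something like this (figures in MiB):
        # overall used / free / buff-cache at this point in time
        free -m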
  5. Jonp, is it worth me putting this back on? I've got everything removed other than Community Apps now. Just to put some things into perspective, here is what happened to a fresh reboot of my VM1:
     Allocated - 12GB
     Initial start - 12.5G VIRT, 12.3G RES
     After 4 minutes - 14.6G VIRT, 14.3G RES
     In this time I booted up Windows 10 and it has sat on the desktop doing nothing.
     Jamie
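     For reference, the VIRT/RES figures above come straight from htop; a rough copy-paste equivalent from the console (VSZ and RSS are in KiB) would be something like:
        # list each running qemu process with its virtual (VSZ) and resident (RSS) memory
        ps -eo pid,vsz,rss,etime,args | grep [q]emu-system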
  6. Hi Jonp, I have been able to remove all plugins but the following: SwapFile, Community Applications. If I remove the swap file plugin (which was installed due to this issue), my server will instantly start shutting off VMs to conserve memory, as it has massively over-allocated now. I do need the Libvirt Hotplug USB plugin back at some point, but have been able to remove it for now. The server memory is still increasing, but I am keeping an eye on it - I can restart my one VM with the swap file plugin removed, but it may not stay on for long. Jamie
  7. Hi everyone, I'm looking at a rather over the top case upgrade for my server, more specifically one of these server cases. I am looking to fit my motherboard into this server case. The server case doesn't officially support E-ATX, but I know they are practically identical in size to EEB, just with some different mounts. I've seen issues with people mounting EEB motherboards into an E-ATX case, but not much on fitting E-ATX into an EEB case. Does anyone have any experience with anything like this? I'm also slightly concerned as my CPU has a watercooling block on it, which obviously means it needs more space than normal - most normal cases have a cutout for stuff like this, but I don't know if a server case would support it or not. I've never done a server case build before, so this will be an interesting one for me, especially with the watercooled system I plan to be running. Jamie
  8. I agree with bungee, I have 1 machine that will boot in SeaBIOS but not in OVMF. I also have another VM that will run under both but performs better using OVMF. It's a case of trying each and seeing which is better. Luckily, as you're on the latest beta, you can load up the system logs within the unRAID screen and use those while the SeaBIOS VM runs. I recommend, even if the output is not connected, that you open the unRAID log full screen in the Linux environment; it has saved me a few times with system locks etc.
  9. Not taken as against me directly, Bungee - I just wanted to point everything out so things were a little clearer than in my previous message.
  10. OK, so let me repeat the fix I did for all of these issues:
     1. I switched to the beta and changed my one VM to OVMF - this resolved my shutdown lockup issues.
     2. I was having other issues where my system would lock up sometimes when playing videos on Amazon Prime - this was caused by an Nvidia driver, and downgrading to a slightly older version fixed the issue.
     This has resolved all my issues on the server entirely. I have run the server for well over a month with no issues, no lockups, and no problems at all - I am now running more VMs than I was before with no issues. In my instance there was no hardware to fix/replace; it was an issue with drivers and unRAID itself. I stubbed my USB controllers, which allows you to select them easily in the new unRAID, but I have since taken the USB controllers off my VMs and am using the Libvirt Hotplug USB plugin to add/remove USB devices from my VMs. I don't condone removing Windows updates entirely, but I will say how to if asked, and if you really need to, there is an option to defer updates which will allow your security updates to come through. I hope this helps. Jamie
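     For anyone wondering how the controllers were stubbed, it was the usual vfio-pci method on the flash drive; the ID below is only a placeholder, so use whatever lspci shows for your own controller:
        # find the vendor:device ID of the USB controller you want to stub
        lspci -nn | grep -i usb
        # then add it to the append line in /boot/syslinux/syslinux.cfg, e.g.
        append vfio-pci.ids=8086:8c31 initrd=/bzroot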
  11. For Windows 10, if you go into the Services section and disable the Windows Update service, it tends to prevent it. So you will need to stop the process, edit the startup type dropdown, and select Disabled.
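     The same thing can be done from an elevated command prompt inside the guest, if you prefer that to the Services GUI:
        rem disable the Windows Update service (wuauserv) and stop it if it is running
        sc config wuauserv start= disabled
        net stop wuauserv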
  12. So I just wanted to post an update, as things have gotten worse since my last post and now dockers are starting to not work because they can't use any memory.
     VM1: 18G VIRT, 17.7G RES
     VM2: 10.9G VIRT, 10.5G RES
     VM3: 4035M VIRT, 1462M RES
     My system has 400MB showing for cache and free combined in the stats plugin, and it is still using 2.42G of my swap file as well. Running htop I can see a process for /usr/sbin/libvirtd which has the following memory usage: 789M VIRT, 2972 RES. Right now, if I don't start turning things off, my system is going to become unusable to dockers, so this is becoming a larger and larger issue. Is there anything at all I can post to help with this, jonp? As I genuinely don't want to have to buy another set of memory to fix this problem when it seems to just want more and more. Regards, Jamie
     Edit: Here are a few grabs from the system that may be useful.
     virsh dommemstat "Jamie - 10586"
     actual 12582912
     rss 18602588
     virsh dommemstat "CCTV"
     actual 2097152
     rss 1510876
     virsh dommemstat "Cat - SeaBios"
     actual 8388608
     rss 10982764
     virsh dominfo "Jamie - 10586"
     Id: 4
     Name: Jamie - 10586
     UUID: 8db132a6-da10-1373-64e5-3f2515cb4b17
     OS Type: hvm
     State: running
     CPU(s): 8
     CPU time: 573928.3s
     Max memory: 12582912 KiB
     Used memory: 12582912 KiB
     Persistent: yes
     Autostart: disable
     Managed save: no
     Security model: none
     Security DOI: 0
     virsh dominfo "CCTV"
     Id: 7
     Name: CCTV
     UUID: 628a7030-7ffe-040f-e7dc-b828b045586c
     OS Type: hvm
     State: running
     CPU(s): 2
     CPU time: 326103.0s
     Max memory: 2097152 KiB
     Used memory: 2097152 KiB
     Persistent: yes
     Autostart: disable
     Managed save: no
     Security model: none
     Security DOI: 0
     virsh dominfo "Cat - SeaBios"
     Id: 2
     Name: Cat - SeaBios
     UUID: 311cce28-0650-0bf3-70e8-0c25f504dcb6
     OS Type: hvm
     State: running
     CPU(s): 4
     CPU time: 225768.1s
     Max memory: 8388608 KiB
     Used memory: 8388608 KiB
     Persistent: yes
     Autostart: disable
     Managed save: no
     Security model: none
     Security DOI: 0
     virsh nodememstats
     total  : 32933444 KiB
     free   : 322492 KiB
     buffers: 396 KiB
     cached : 278116 KiB
     top - 19:27:17 up 8 days, 6:03, 1 user, load average: 4.20, 4.37, 4.15
     Tasks: 438 total, 1 running, 437 sleeping, 0 stopped, 0 zombie
     %Cpu(s): 31.9 us, 5.1 sy, 0.0 ni, 63.0 id, 0.0 wa, 0.0 hi, 0.1 si, 0.0 st
     KiB Mem : 32933444 total, 338480 free, 31958276 used, 636688 buff/cache
     KiB Swap: 8388604 total, 5959020 free, 2429584 used. 396152 avail Mem
     PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
     31978 root 20 0 17.999g 0.017t 22492 S 312.0 56.5 9583:06 qemu-system-x86
     8102 root 20 0 10.859g 0.010t 22380 S 190.7 33.4 3766:21 qemu-system-x86
     20395 root 20 0 4132712 1.441g 14704 S 75.7 4.6 5436:46 qemu-system-x86
     26511 server 20 0 2374728 256828 1072 S 1.3 0.8 0:12.69 mono-sgen
     4845 root 20 0 762344 66540 18252 S 0.7 0.2 117:28.30 firefox
     6708 root 20 0 819840 8088 1960 S 0.7 0.0 4:18.22 exe
     20407 root 20 0 0 0 0 S 0.7 0.0 77:00.04 vhost-20395
     21630 root 20 0 0 0 0 S 0.7 0.0 0:30.29 kworker/0:0
     31990 root 20 0 0 0 0 S 0.7 0.0 75:50.58 vhost-31978
     2547 root 20 0 9684 2504 2104 S 0.3 0.0 34:47.80 cpuload
     4524 root 20 0 94476 3580 3000 S 0.3 0.0 15:39.78 emhttp
     5094 root 20 0 0 0 0 S 0.3 0.0 2:12.81 xfsaild/nvme0n1
     5124 root 20 0 1369872 13620 592 S 0.3 0.0 108:20.09 shfs
     20411 root 20 0 0 0 0 S 0.3 0.0 5:05.00 kvm-pit/20395
     20958 jamie 20 0 415400 6256 5240 S 0.3 0.0 12:10.89 smbd
     1 root 20 0 4372 1476 1476 S 0.0 0.0 0:15.93 init
     2 root 20 0 0 0 0 S 0.0 0.0 0:00.15 kthreadd
     3 root 20 0 0 0 0 S 0.0 0.0 0:59.45 ksoftirqd/0
     5 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/0:0H
     7 root 20 0 0 0 0 S 0.0 0.0 3:54.68 rcu_preempt
     8 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_sched
     9 root 20 0 0 0 0 S 0.0 0.0 0:00.00 rcu_bh
     10 root rt 0 0 0 0 S 0.0 0.0 0:01.28 migration/0
     11 root rt 0 0 0 0 S 0.0 0.0 0:01.33 migration/1
     12 root 20 0 0 0 0 S 0.0 0.0 0:01.13 ksoftirqd/1
     14 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/1:0H
     15 root rt 0 0 0 0 S 0.0 0.0 0:03.13 migration/2
     16 root 20 0 0 0 0 S 0.0 0.0 0:00.74 ksoftirqd/2
     18 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/2:0H
     19 root rt 0 0 0 0 S 0.0 0.0 0:02.90 migration/3
     20 root 20 0 0 0 0 S 0.0 0.0 0:00.78 ksoftirqd/3
     23 root rt 0 0 0 0 S 0.0 0.0 0:01.83 migration/4
     24 root 20 0 0 0 0 S 0.0 0.0 0:00.63 ksoftirqd/4
     26 root 0 -20 0 0 0 S 0.0 0.0 0:00.00 kworker/4:0H
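     To boil the top output above down to one number, something like this adds up the resident memory of every qemu process and compares it with what the host says is used (the RSS figures from ps are in KiB):
        # total RSS of all qemu processes, converted to GiB
        ps -eo rss,comm | awk '/qemu/ {sum+=$1} END {printf "qemu RSS total: %.1f GiB\n", sum/1048576}'
        # overall host memory for comparison
        free -m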
  13. There are a few issues with VMs seeing USB devices, but nothing this plugin can resolve, I believe. I have a 64GB Kingston USB pendrive and an external HDD cradle. Passing them to my main VM running OVMF, the devices don't appear at all. Passing them to my second VM using SeaBIOS, they work instantly. This is on the latest beta 23, however, so there may be something that needs updating to make it work with OVMF, but I'm not sure. Just pointing it out if needed; honestly, glad I found this plugin! Regards, Jamie
  14. Hi Jonp, thanks for the reply to this. To go over my VMs more now I know the cases, here is a bit more overhead information.
     VM1 (12GB allocated)
     Has a passed-through GPU (780)
     Is using a bridged network
     Has 5 passed-through USB devices (from the GUI)
     VM operating system is reporting 4.5GB used of 12GB (just in case this helps)
     This VM has CCTV recording to it and otherwise has been sat idle at 6% CPU usage since yesterday
     Is now using 17.7G (VIRT) and 17.4G (RES) and has not done anything all day until now - memory usage is now higher in RES but less in VIRT
     This one is my main concern, as it is using an extra 5.4GB going by the RES level - it has no need for video memory, and the VM is set to have 12,288 in the GUI - following these numbers, this would mean QEMU has an overhead of 5.4G
     VM2 (8GB allocated)
     Has a passed-through GPU (750Ti)
     Is using a bridged network
     Has 0 passed-through USB devices (from the GUI)
     VM operating system is reporting 2.1GB of 8GB used
     Is now using 10.7G (VIRT) and 10.3G (RES) - no change in memory usage
     This VM has been sitting doing nothing since yesterday and is just sitting on the Windows desktop with only AVG running
     VM3 (2GB allocated) - was restarted since the last values, but usage seems to be climbing
     Is using VNC so will be using memory for video
     Is using a bridged network
     Has 0 passed-through USB devices
     VM operating system is reporting 1.1GB of 2GB used
     Is now using 2991M (VIRT) and 1913M (RES)
     In 5 minutes while checking this, I saw this VM's RES usage go from 1913M to 1921M - VIRT remains constant - it has a CCTV program running, but this saves nothing to the host and writes to network shares; it is pretty much idle at all times
     Doing a fresh server boot I see almost all memory free; I boot these VMs up and the system jumps to ~23GB used - watching the GUI stats I can then see it slowly climb. Using the system stats plugin I can see the following memory usage details:
     Some time after boot - 28.3GB used
     Same time the following day - 31.6GB used
     Same time the day after - 32.28GB used
     Same time the day after - 31.6GB used, but using 2GB swap as well
     Today - 32.2GB used and 2.6GB swap
     Hope some of that helps. Regards, Jamie
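     Since the climb only really shows over days, one way to capture it rather than checking by hand is a small logging loop along these lines (the log path is just an example, put it wherever suits):
        # record allocated ('actual') vs resident ('rss') memory for every running VM once an hour
        while true; do
          date >> /mnt/cache/vm-mem.log
          virsh list --name | while read -r vm; do
            [ -n "$vm" ] && { echo "== $vm =="; virsh dommemstat "$vm"; } >> /mnt/cache/vm-mem.log
          done
          sleep 3600
        done &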
  15. Hi everyone, I've mentioned this a few times in the beta topics but haven't found anything out at all. I have an issue whereby my VMs are using a lot more memory than they are allocated - I have rather a high end system so would love to virtualize more, but I refuse to buy more memory if the VMs are wasting it. I have attached my system log for anyone interested. For those that don't read the system log, here is an overview of my memory usage. My system has 32GB of memory and an 8GB swap file, and it is allocated as follows to my VMs:
     VM1: 12GB
     VM2: 8GB
     VM3: 2GB
     Totaling 22GB, so the system has 10GB free (not catering for VM overhead). Here is the interesting part; this is the memory usage reported by htop for the VMs:
     VM1: 18.4G VIRT | 17.1G RES
     VM2: 10.7G VIRT | 10.3G RES
     VM3: 3889M VIRT | 2116M RES
     All in, that totals to around 29.46GB (RES) allocated to my VMs. My entire server memory usage is 33.4GB. I do have 5 dockers running as well, but they are using a huge 450MB between all of them. Oh, and system stats reports 220MB cache and 320MB free. Now I know I should expect some overhead, but 8GB of memory is not a small amount to lose, considering I have a VM that can only have 2GB due to this. I never really noticed if this was an issue on 6.1.9, but I would love some help to figure out why the overhead is so bad and what I can do to get some memory back on my machine - the longer the system runs, the more the VMs use, and I think going to 64GB of memory is just going to allow them to bloat even more. Any ideas, comments, or thoughts would be appreciated. Regards, Jamie
     P.S. I'm aware this issue is probably occurring for others, so any resolutions should help more people than just me. archangel-diagnostics-20160616-1923.zip
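     If anyone with the same problem wants to see their own per-VM overhead, a quick way (virsh is already on the box, as it is what the web GUI uses under the hood) is to compare libvirt's 'actual' allocation against the host-resident 'rss' for each running guest:
        # per-VM overhead = host RSS minus the allocation libvirt reports (both in KiB, printed in GiB)
        virsh list --name | while read -r vm; do
          [ -z "$vm" ] && continue
          virsh dommemstat "$vm" | awk -v vm="$vm" \
            '/^actual/ {a=$2} /^rss/ {r=$2} END {printf "%s: allocated %.1fG, resident %.1fG, overhead %.1fG\n", vm, a/1048576, r/1048576, (r-a)/1048576}'
        done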
  16. Just upgraded from beta 22 to beta 23 with no issues; all dockers are fine, VMs booted and are working as expected. On a side note, I've noticed my memory usage has settled a little with this build - on beta 21 and 22 I was noticing my VMs using a lot of extra memory (a 12GB VM was using 18GB or more), but in this build things seem less drastic, with my 12GB VM using 15GB. Is this sort of VM overhead still to be expected, or is there perhaps something going wrong? All in, my VMs have 22GB allocated to them and are using 28GB (over 3 VMs; I have included my diagnostics if it's useful). Jamie archangel-diagnostics-20160612-1339.zip
  17. I managed to run myself into 3 issues fairly quickly; luckily they were all fixed quickly, but they didn't happen on beta 21 so they're worth pointing out.
     - Booting unRAID with the only active network port being eth1 resulted in no network, so no GUI. Switching networks worked, but it would be nice if I didn't have to switch cables to boot - this also had the effect that the Dynamix stats plugin would not show stats without resetting default settings (eth1 does not show, so it cannot be changed to enable the apply button).
     - Booted my one VM up and passed through a Steam controller. Turning on the controller locked up the VM, and it auto restarted - it worked after the restart though.
     - Another of my VMs was having trouble with the GPU audio passthrough; again a reboot resolved the issue, but it didn't happen in the last beta at all.
     If these are my only faults I will be very happy indeed! (This is just me picking at things more than anything.) Now to see if the KVM memory allocated for VMs still leaks (I have an instant 3.4GB memory overhead on a 12GB VM) - oh, and by the way, opening the GUI on my VM takes me to the register page by default again. Great work so far though! Jamie
  18. My issue was resolved by going to the latest beta; the beta still has issues, so I hadn't marked this as resolved yet. My system has been running fine since this update came out.
  19. In reply to: "What version of unRAID? 6.1.9 here, and I have VMs in use daily and no sign of a leak. There is QEMU memory overhead associated with managing each VM, but that does not grow uncontrollably over time, which is the definition of a leak. My VMs are stable over multiple weeks."
     Beta 21, which is what the OP upgraded to. Limetech still haven't resolved the issue, so it will continue to happen, but it hasn't caused any problems on my system at all yet. The VM will boot using 12GB, and watching the system stats you can see the memory usage slowly climb from day to day; htop backs up the increase per VM.
  20. As a note, VMs are leaking memory badly right now. My one VM has 12GB allocated and I've seen it using up to 18GB (run htop on the server to see). On my server I have an 8GB and a 12GB VM, and they use around 28GB between them after a few days.
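     If you only want to watch the VM processes rather than everything, htop can be pointed at just the qemu PIDs, something along these lines:
        # show only the qemu processes in htop (one per running VM)
        htop -p $(pgrep -d, -f qemu-system)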
  21. Have you checked the event logs for the system and the VM?
  22. If there is nothing important on your system, then upgrade to the latest beta and, if you haven't already, create the VM again using OVMF. The new beta has a lot of VM enhancements and it has made a huge difference.
  23. Just as a confirmation for anyone needing it: I'm running the latest beta 21 and have just set up a 32GB swap file on my NVMe drive with no issues. htop reports the 32GB correctly, so all seems to be working as expected.
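     The plugin handles this for you, but for reference this is roughly what a 32GB swap file boils down to on the command line - the path below is just an example, point it at wherever your NVMe drive is mounted:
        # create a 32GB swap file, lock down its permissions, format it and enable it
        dd if=/dev/zero of=/mnt/nvme/swapfile bs=1M count=32768
        chmod 600 /mnt/nvme/swapfile
        mkswap /mnt/nvme/swapfile
        swapon /mnt/nvme/swapfile
        # confirm the new swap is active
        swapon -s
        free -h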
  24. If you have the i5 already then it won't hurt anything to try it and prove a point
  25. I'm running the latest beta unRAID with a GTX 780 passed to Windows 10, gaming at 2K. I get a minimum of 40fps with no issues at all with everything on medium. My VM is allocated 8 cores and 12GB of memory in the VM Manager, with Hyper-V enabled.