Fizzyade's Achievements


  1. The drive has only been used over the past couple of days; these are issues I've been suffering with since 6.9 came out, but I'll check that out. The VMs causing excessive load are what's killing me at the moment: they're idle, with no load inside the VM, but I'm seeing ridiculously high loads in Unraid.
  2. I can't reboot it; I'm going to have to hard power down the machine. reboot failed, as did reboot -f.
  3. Once again, the GUI has locked up. A Linux VM that was completely idle sent Unraid to a load average of 100. Yesterday, when I had to reboot it, a different idle Linux VM sent it to 700.
  4. Here's a set of diagnostics I took a few minutes ago.
  5. Been there, got the t-shirt, but no solutions.
  6. <rant> Tbh, I'm getting sick of the problems since 6.9 and I'm starting to think about looking at alternative solutions to Unraid. I appreciate that this may not be the experience of others, but prior to 6.9 I had no problems, zero, none, nada; I didn't expect 6.9 to cause so many problems from day 1.

     The VM performance is killing my machine: I'm seeing loads of 70-700 while the VMs are idle, with qemu consuming all the CPU. When I first updated, I had an issue where every VM was locking up its graphics; the VMs themselves were still running, but I was unable to VNC into them. I eventually edited one of the VMs' config files in the editor and suddenly all the VMs stopped locking up. I've seen that others are having this problem too, and I've tried the "fixes" that have been mentioned, but nothing so far has solved it.

     Between that and the continual "blank pages" returned from the web server (i.e. I get the Unraid header but no actual content, with sections empty or missing), it's starting to become a really miserable experience. Long-standing issues, like the VNC port resetting when making changes in the GUI, which I've mentioned several times over the course of a few years, still haven't been fixed; this is *basic* stuff. The GUI feels clunky. The mover has always been a resource hog, killing the machine when it runs, and this is a machine with 64GB of RAM and SSDs for the cache and VMs. I'm at a loss as to what is actually going on, but it has got to the point now that I'm losing faith in my Unraid solution. </rant>
  7. I'm seeing high CPU usage on my virtual machines since switching to 6.9. I previously had an issue with 6.9 where the KVM console would die on any VM after a very short time; I switched the CPU type on one of the VMs and suddenly everything started working again without locking up. Support were just as baffled about this as me. Now, though, I'm seeing high CPU load constantly on my VMs. I have a Windows 10 VM running and the CPU usage inside the VM is low, but looking at Unraid I'm seeing 300-400% CPU usage constantly and a screen full of red bars on the main Unraid page. All my issues started when I moved to 6.9; on 6.8 everything worked like a dream. I've seen a few other reports about CPU usage where people mentioned changing the scheduler; I checked, and mine is already set to the recommended value mentioned in those threads. I can often get Unraid to "hang" by starting a VM: the CPU usage goes crazy and the web UI stops responding, and the only way to fix it is to ssh in (which takes quite a while due to the CPU usage) and kill the VM process, at which point everything drops back to normal. Anybody else experiencing this? Any thoughts?
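For anyone hitting the same hang, the recovery steps above (ssh in, find the runaway process, kill it) can be sketched roughly like this. This is just a sketch of one possible workflow, and "Windows10" is a hypothetical domain name; substitute whatever `virsh list` shows on your box:

```shell
# Show the highest-CPU processes to confirm it's qemu that's runaway
ps -eo pid,pcpu,args --sort=-pcpu | head -n 5

# Ask libvirt to hard-stop the VM first (cleaner than a raw kill);
# "Windows10" is a placeholder for the domain name from `virsh list`
virsh destroy Windows10 2>/dev/null

# If virsh itself is wedged, fall back to killing the qemu process directly;
# pkill returns non-zero when nothing matched, which is fine here
pkill -9 -f 'qemu.*Windows10' || true
```

`virsh destroy` is a forced power-off, not a graceful shutdown, so only reach for it (or `pkill -9`) once the guest is already unresponsive.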
  8. driver: ixgbe
     version: 5.10.28-Unraid
     firmware-version: 0x00012b2c
     expansion-rom-version:
     bus-info: 0000:03:00.0
     supports-statistics: yes
     supports-test: yes
     supports-eeprom-access: yes
     supports-register-dump: yes
     supports-priv-flags: yes
  9. No, no log spam regarding the card. (Sorry for not replying earlier, I'm unwell.)
  10. I have an X520-DA2 (uses SFP+ modules rather than RJ45 connectors) and it is working fine for me; it also reports as an 82599. Wonder what's different?
      03:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection (rev 01)
  11. 64GB. I really wish Unraid supported LXC containers, though. I have a NUC running Proxmox with a load of containers set up which I use for CI (TeamCity), but I need more containers to handle other builds and the NUC has no more RAM (it also has 64GB); Unraid would be ideal if it supported them.
  12. @SpencerJ @jonp Ok, I created a new VM, pointed it at the existing disk image, and set the MAC address to match the previous VM, and so far Windows 10 seems to have held up and not fallen over. I'm guessing I could post the XML of the VM that doesn't work? I'm going to see how this holds up before doing the same procedure on all of my other VMs. The only difference I can really see is that the graphics-freezing VM is 'pc-i440fx-4.2' while the newly generated VM is 'pc-i440fx-5.1'.
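For anyone comparing their own VMs: the machine type and MAC address mentioned above both live in the libvirt domain XML, so the difference should show up in fragments roughly like the following. The machine-type values are the ones from the post; the bridge name and the MAC octets are illustrative placeholders, not values from my config:

```xml
<!-- Old, graphics-freezing VM -->
<os>
  <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
</os>

<!-- Newly generated VM -->
<os>
  <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
</os>

<!-- Reusing the old VM's MAC so existing DHCP reservations still match -->
<interface type='bridge'>
  <mac address='52:54:00:xx:xx:xx'/>
  <source bridge='br0'/>
</interface>
```

Keeping the disk image and MAC while regenerating the rest of the XML is what makes this a fairly safe A/B test of the machine type.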
  13. @jonp I'm at the end of my tether here; none of my virtual machines work properly anymore. I just rebooted the Windows one and managed to get to the desktop before KVM stopped working; I went back in via RDP and could use it, then back into VNC and it works again, but if you drag a window or do anything intensive, it stops working again. I'm happy to send you any logs necessary, but I need some input on what I can give you to help figure out what is going on. It doesn't matter whether it's Windows or Linux; every VM exhibits this behaviour.
  14. Yes, the wiping out of XML values is infuriating. When changing something in the GUI, only the items affected by the change should be updated in the XML; everything else should remain as it is. Ideally, the VNC port should be exposed in the GUI, because it's pretty much a requirement; you shouldn't have to go around changing the VNC port on your client every time just because the order has changed.
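For reference, the setting that keeps getting clobbered is the graphics element in the domain XML. A pinned VNC port looks roughly like this (port 5901 and the listen address are just example values), and it's this hand-edited value that gets reset when the GUI regenerates the XML:

```xml
<graphics type='vnc' port='5901' autoport='no' listen='0.0.0.0'>
  <listen type='address' address='0.0.0.0'/>
</graphics>
```

With autoport='yes' (the default the GUI writes back), libvirt assigns ports in start order, which is exactly why the port moves around whenever VMs are added or started in a different sequence.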
  15. Yeah, I've seen other funnies in the GUI with Safari. Funnily enough, I just switched to Firefox a week or so ago because Safari on Big Sur is crashtastic; it also seems very slow.