
terrastrife

Members
  • Posts

    303
  • Joined

  • Last visited

Everything posted by terrastrife

  1. I'm not sure when this started, but recently I've noticed that my file moves behave strangely. I'm moving files between disk shares from a Windows 10 VM with the VirtIO drivers installed. The network activity comes in regular bursts, and the disk writes burst too, but not in time with the network. Has UNRAID changed how it writes to the disk recently (in the last few months)? It used to be a constant flatline of both network and disk usage. Thanks.
  2. Never mind; not sure what happened, but I repeated the same steps a few more times, this time leaving it on 1 core and 2GB RAM, and it seems to have worked. I then edited it back to 3 cores and 4GB and it's still working.
  3. Rolled back UNRAID, deleted the VM (it wasn't possible to delete it in 6.5 as the pages wouldn't load), updated to 6.5 again, created a new VM, and I'm still having the same issue: one core is pegged at 100% even though there is no actual load in the VM. No additional plugins, just UNRAID 6.5 itself.
  4. Hi, I'm having an issue with the latest version of UNRAID, 6.5. When I start my Windows 10 VM, one of the cores is pegged at 100%, but it isn't Windows itself, which shows the usual 1-3% CPU as normal yet runs super, super slow. If I shut down and stop the VM the usage goes away, but I'm unable to enter the VM Edit page; it just comes up blank. If I try to start it again I get various errors, e.g.: Timed out during operation: cannot acquire state change lock (held by remoteDispatchDomainGetBlockInfo) There were others too, but I didn't copy them and now they aren't coming up >< If I roll back to UNRAID 6.3.5 the issue goes away, but then I'm stuck because I use Unassigned Devices and that no longer works with the old version. Any help? Not sure what I need to attach for extra info.
  5. Hi there, just thought I'd reply to this with my resolution. I deleted and recreated the VM with identical settings using my existing VM disk and it seems to have worked itself out. Not sure what happened, but there you have it.
  6. Not sure what happened; I came home tonight to find the cores my VM is assigned to pinned at 100% (viewed on the Dashboard) after not being able to log into the VM. I didn't think much of it at the time; my unRAID had been up for over 3 months, so it was time to update the OS and plugins anyway. I couldn't stop the VM, though, so I had to force stop it before updating. Since the reboot I've been unable to start the VM. The log after starting it is as follows:
     2017-07-15 18:54:39.116+0000: starting up libvirt version: 2.4.0, qemu version: 2.7.1, hostname: KUROHOST
     LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name 'guest=Windows 10,debug-threads=on' -S -object 'secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-7-Windows 10/master-key.aes' -machine pc-i440fx-2.7,accel=kvm,usb=off,dump-guest-core=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/7946e10e-f6ab-0971-ad52-89d544a90048_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 2048 -realtime mlock=off -smp 1,sockets=1,cores=1,threads=1 -uuid 7946e10e-f6ab-0971-ad52-89d544a90048 -no-user-config -nodefaults -chardev 'socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-7-Windows 10/monitor.sock,server,nowait' -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 [...] -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:0,websocket=5700 -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vram64_size_mb=0,vgamem_mb=16,max_outputs=1,bus=pci.0,addr=0x2 -device vfio-pci,host=00:1b.0,id=hostdev0,bus=pci.0,addr=0x6 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on
     Domain id=7 is tainted: high-privileges
     Domain id=7 is tainted: host-cpu
     char device redirected to /dev/pts/0 (label charserial0)
     On QEMU it just displays "Guest has not initialized the display (yet)". The cores selected for the VM are pegged at 100%. That's as far as it gets. Any ideas?
  7. If you delete your mysql folder directly off the disk shares, there will be no mysql folder left for unRAID to share, so it won't. Easiest way to unshare stuff (a rough scripted sketch of this appears after this list of posts).
  8. This month marks the sad event when I retire my unRAID for tRAID.
  9. So I have a bunch of empty space, and I'd like to remove more than one empty data disk at once and rebuild parity after a New Config; this should be fine, yeah? The plan: remove two data disks and the parity disk, replace parity with one of the remaining empty data disks, hit New Config, and recreate parity. I can't work out a reason why it wouldn't work, so all green?
  10. That clears a lot up, except I have SATA power connectors I don't think fit. I should probably mention my PSU uses 16 AWG wiring too, not the usual 18/20 AWG.
  11. Uhm, no. Then please define how the amperage load is divided between the 4 connectors. I have no issues running 12-16 HDDs off a single line from my PSU with SATA clip-on connectors. I assume the pins in the PSU's modular connectors are very much identical to the pins used in a 4-pin Molex connector.
  12. So that's 11 amps per pin? There are 4 pins. (Rough numbers for this are worked through in a sketch after this list of posts.)
  13. Depends on the gauge of the wiring; some PSUs come with really fat wires.
  14. They're HighPoint mini-SAS to 4x SATA forward cables. Yes, it's a V2000.
  15. No one ever said 'pimp your unRAID box'; this is my main desktop.
  16. Yes, really. Reread the following statement, which seems aimed at all GPs. Remember, GPs are NOT Caviar Green; WD used the GreenPower tag before they introduced the Black/Blue/Green lines. Also, I did start with 'for me'. Not sure how you managed to quote from that same reply but not include the part where I started with FOR ME.
  17. Really? I started with 'for me', not for you, not for everyone. I have EACS GreenPowers, from before they were labelled as 'Green'.
  18. Reading a 7K goes from 130MB/s down to 80MB/s; reading a GP goes from around 80MB/s down to 50MB/s. A 7K is essentially twice as fast as a GP, but as it's only slowed down for half of its data, the increase would be half of twice, hence 50% or so. Final average parity check speed with just 7Ks was 95MB/sec (it started at 120+); with the additional GP drive it's down to 60MB/s-ish (started at 80), though I didn't quite get a look at it when it finished. It gets even worse if you have WD Blacks and Greens in one unRAID, as the Blacks are even quicker than 7Ks.
  19. For me, the process would be 50% faster, as the WD Greens are half the speed of the Hitachi 7Ks (i.e., the 7K is pulled back to the speed of the GP for half of its data size). What would be better is to 'right align' the data bytes, not insert a wait timer, so that all disks end at the same time, i.e., a 1TB disk starts being read as soon as there's 1TB of parity left to check. This would of course mean a totally different way of writing parity. (A rough worked example of the speed math is sketched after this list of posts.)
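
Sketch referenced from post 7 (unsharing by deleting the folder from each disk). An unRAID user share exists as long as a top-level folder of that name exists on a data disk, so removing the folder from every /mnt/diskN mount makes the share go away. The Python below is a hypothetical sketch rather than an unRAID tool: the /mnt/disk* paths follow the usual unRAID layout, "mysql" is just the example share name, and it defaults to a dry run.

    # Remove a top-level folder (e.g. "mysql") from every individual disk
    # mount so the matching unRAID user share no longer exists.
    # Hypothetical sketch: paths assume the usual /mnt/disk1, /mnt/disk2, ...
    # layout, and nothing is deleted unless DRY_RUN is set to False.
    import glob
    import os
    import shutil

    SHARE_NAME = "mysql"   # example share/folder name
    DRY_RUN = True         # flip to False only after checking the output

    for disk in sorted(glob.glob("/mnt/disk[0-9]*")):
        target = os.path.join(disk, SHARE_NAME)
        if os.path.isdir(target):
            if DRY_RUN:
                print(f"would remove {target}")
            else:
                shutil.rmtree(target)
                print(f"removed {target}")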
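Sketch referenced from post 12 (amps per pin). One caveat on the "4 pins" point: a standard 4-pin peripheral (Molex) connector carries 12 V, two grounds, and 5 V, so for the 12 V rail only one pin and its ground return do the work; a per-pin rating does not multiply by four for a single voltage. The figures below (an assumed 11 A pin rating and guessed per-drive 12 V draw) are illustrative only, not datasheet values.

    # Back-of-the-envelope check of how much 12 V current a chain of drives
    # pulls through one PSU cable versus an assumed per-pin rating.
    # All numbers are illustrative assumptions, not manufacturer specs.

    DRIVES_ON_CABLE = 12      # drives daisy-chained off one modular lead
    AMPS_12V_SPINUP = 2.0     # assumed worst-case 12 V draw per drive at spin-up
    AMPS_12V_IDLE   = 0.45    # assumed steady-state 12 V draw per drive
    PIN_RATING_A    = 11.0    # assumed rating of a single connector pin

    def report(label: str, amps_per_drive: float) -> None:
        total = DRIVES_ON_CABLE * amps_per_drive
        print(f"{label}: {total:.1f} A on the 12 V line "
              f"({total / PIN_RATING_A:.0%} of one {PIN_RATING_A:.0f} A pin)")

    report("Simultaneous spin-up", AMPS_12V_SPINUP)
    report("Idle / reading", AMPS_12V_IDLE)

With these guesses, simultaneous spin-up of 12 drives on one lead already exceeds a single 11 A pin, while steady-state reading sits comfortably inside it, which is roughly the distinction the thread was circling around.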
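Sketch referenced from post 19 (mixed-drive parity check speed). The model is simply that the check crawls at the slowest disk's rate while that disk is still being read, then speeds up for the remainder of parity once the smaller, slower disk drops out. The sizes and MB/s values are rough stand-ins for the numbers quoted in posts 18 and 19, so treat the output as an illustration of the effect, not a prediction.

    # Rough model of a parity check with mixed-speed disks: the check runs
    # at the speed of the slowest disk still participating, then speeds up
    # once the smaller/slower disk has been fully read.
    # Sizes and speeds below are illustrative approximations only.

    PARITY_GB = 2000          # assumed parity / Hitachi 7K size (2 TB)
    GP_GB     = 1000          # assumed WD GreenPower size (1 TB)
    SPEED_7K  = 95.0          # MB/s, rough average for the 7K
    SPEED_GP  = 60.0          # MB/s, rough average for the GreenPower

    def check_hours(with_gp: bool) -> float:
        mb = 1000.0  # MB per GB (decimal, as drive vendors count)
        if with_gp:
            slow = GP_GB * mb / SPEED_GP                 # GP limits the first stretch
            fast = (PARITY_GB - GP_GB) * mb / SPEED_7K   # 7Ks finish the rest alone
            return (slow + fast) / 3600.0
        return PARITY_GB * mb / SPEED_7K / 3600.0

    only_7k = check_hours(with_gp=False)
    mixed   = check_hours(with_gp=True)
    print(f"7K-only check : {only_7k:.1f} h")
    print(f"With GP disk  : {mixed:.1f} h ({mixed / only_7k - 1:.0%} longer)")

The 'right align' idea in post 19 would change the first term: instead of the GP holding the whole check back from the start, it would only join for the final 1 TB of parity, where the larger 7Ks are on their slower inner tracks anyway.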