Everything posted by bungee91

  1. "lionelhutz has tried changing machine type and reports activation is okay when changed" - I think this is still questionable, as I recently did some testing on this very thing (it was not specifically for this, but related). This is what I know from my main W10 VM, activated using i440FX. Activated status, disabled the network (thought it'd be a good idea), and shut down. Changed to Q35 (I was testing a USB-related issue), checked activation: "Windows is not activated", can't connect to Microsoft, and so on, meaning the switch to Q35 flagged it and broke the initial activation. Shut down, changed back to i440FX, re-enabled the network: still "Windows is not activated" with the can't-connect message showing. Tried the manual check option, not working. Rebooted, same not-activated message. Stopped thinking about it, used the computer as normal, checked back a day or so later: "Windows is activated". So the change to Q35 certainly made Windows take notice and kill my initial activation status (this machine had been activated for the last 6 months). However, would keeping it at Q35 instead of i440FX eventually lead to an activated status? I don't know, but I have my doubts, as to Microsoft this is basically a new motherboard and that almost always flags re-activation.
  2. I assume RC2? If so, LT is aware and there is no fix currently. From that thread: "Yes, this is a bug. Fixed in next release." https://lime-technology.com/forum/index.php?topic=50344.msg484087#msg484087
  3. The G210 I used to use for testing did the exact same thing. That card worked fine in Linux/OE (it needed legacy drivers), not so much in Windows. I think it was better with the rom passed, so I'd recommend trying that. Fortunately when this happens your entire server does not lock up, as that was the norm for me. I have a feeling that even without installing the drivers (using the basic display drivers), multiple reboots or power-offs of the VM will result in strange behavior, however I'd like to be wrong in that assumption.
  4. With an AMD card, no; you can "steal" it from the UnRAID console, however you will then have no console output (which many, myself included, do without; see the sketch below for roughly what that involves). You can also do it with Nvidia cards, but there is some trickery to get it to work. While AMD will work no doubt, I and many others will recommend Intel, as it is the primary platform UnRAID is tested on, and even though you get fewer cores you get better per-core ratings. I think if you run a lot of VMs the extra cores can help with isolation, however the comparison of 4 AMD cores to 2 Intel cores is likely not much of a difference (I say this without looking at specific ratings, but I know a 4-core i5 goes up against an 8-core FX).
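     For anyone wondering what that console "steal" actually looks like: below is a rough sketch of the usual console-release commands from the single-GPU passthrough threads, not an official Limetech procedure. Treat the exact vtcon numbers, and whether the efi-framebuffer step applies, as assumptions that depend on your hardware and boot mode; run it from the UnRAID shell before starting the VM that takes over the card.
     echo 0 > /sys/class/vtconsole/vtcon0/bind                                   # detach the virtual terminal consoles from the GPU
     echo 0 > /sys/class/vtconsole/vtcon1/bind                                   # vtcon1 only exists on some systems
     echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind   # only needed if the host booted UEFI
     After this the local console goes dark (expected) until the next host reboot.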
  5. Glad you figured it out, hopefully this thread will help someone else with a similar issue.
  6. "I have my vms folder in Unassigned Devices and I should connect to it but I can't! I really need help, I can't connect to the vms folder, because when I try to go through the steps I get [ Error writing .config/modprobe.d/sound.conf: No such file or directory ]" Hi Sniper, really hoping someone other than me reads this and gives you direction (sorry, the command line is a "when I need to", not a "because I like to", kind of thing for me). It sounds as if your OE VM is located at "/mnt/disks/kvm/domains/OpenELEC". At the bash prompt, type cd /mnt/disks/kvm/domains/OpenELEC; this should get you into the directory, then you can make the folders and create the needed file using nano, roughly as shown below.
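     The whole sequence, assuming that /mnt/disks/kvm/domains/OpenELEC path is right (adjust it if not):
     cd /mnt/disks/kvm/domains/OpenELEC
     mkdir -p .config/modprobe.d          # creates both missing folders in one shot; their absence is what causes the "No such file or directory" error
     nano .config/modprobe.d/sound.conf   # put the line: options snd-hda-intel enable_msi=1, then CTRL+X and save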
  7. These instructions should help, more details in the thread. https://lime-technology.com/forum/index.php?topic=43644.msg464975#msg464975
  8. Did you attempt to pass the rom for the card? Instructions are in the wiki somewhere (sorry, in a hurry), but the gist of it is sketched below.
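     From memory, so treat this as a rough sketch and check the wiki for the proper write-up: dumping the card's rom from the host looks something like the following. 0000:02:00.0 is only an example address (use your card's PCI address from the devices list), and the output path is whatever you like.
     cd /sys/bus/pci/devices/0000:02:00.0
     echo 1 > rom                        # make the card's rom readable
     cat rom > /mnt/user/vbios/gpu.rom   # example output location
     echo 0 > rom                        # set it back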
  9. I think that's probably well enough, but can't say for sure. I'm by no means great at using a console-based text editor, but it is basically this:
     At the command line, in the directory for your OE/LE VM (such as /mnt/cache/VMs/OE/), type: nano .config/modprobe.d/sound.conf
     I can't recall if the folders are there by default or not; if not, you may have to make them first. From that same directory (/mnt/cache/VMs/OE/ in this example) type: mkdir .config and then, from inside the .config folder you just created (/mnt/cache/VMs/OE/.config/), type: mkdir modprobe.d
     Now that we have the needed folders: nano .config/modprobe.d/sound.conf
     Type options snd-hda-intel enable_msi=1 into the nano editor, exactly as written. Exit nano with CTRL + X and, when asked to save the modified file, say yes. Boot or restart your OE/LE VM.
     There's likely a much easier way to do this (something like the one-liner below); I had to look that up to give directions. Also, Nvidia is the recommended GPU from Limetech and others on the VFIO forums; AMD is a bit more hit-or-miss with some cards.
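     The easier way is probably just redirecting echo into the file instead of opening nano; a quick sketch, assuming the /mnt/cache/VMs/OE/ example path from above:
     cd /mnt/cache/VMs/OE/
     mkdir -p .config/modprobe.d                                                 # makes both folders at once
     echo "options snd-hda-intel enable_msi=1" > .config/modprobe.d/sound.conf   # writes the one needed line
     Then boot or restart the OE/LE VM as above.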
  10. Others figured this out; I'm just glad I could help. Now enjoy!
  11. No reboot needed, and the console going black is to be expected. When using SeaBIOS and starting a VM, the console output will always be lost, as there is a VGA arbitration "thing" going on (I'm not looking it up right now), and this is expected. The console will only come back after a host reboot. The boot option with the GUI I don't believe has this issue (however I've never tried it). Unless your card is in a hard-locked state, a reboot or power-down will likely not help; thinking about it, that's likely not your issue anyway. If your card were hard locked, a complete power off with removal of power (turn off the PSU, unplug, etc.) would get the card usable again, however that is not common at all and I doubt it's the issue.
  12. One more thing: many Nvidia cards in Windows need the MSI interrupt mode set. There is a utility to do this for you; if you don't, you'll eventually get "demonic audio". Searching... the thread is here: https://lime-technology.com/forum/index.php?topic=47089.msg453669#msg453669 I have noticed that upgrading drivers sets this back to off, so you may have to do it again if you update.
  13. Just want to make sure you created the config file exactly as posted, using nano or vi over SSH to edit/create it? This has always solved the OE (and I suppose now LE) issues for Nvidia cards. I believe I also had to set passthrough to on for all supported formats, but it was perfect after that. As described, I would get the "indefinite clicking sounds (much like if you were to hold down an arrow key and continually scroll through the main menu)", but this solved it. The last time I went to set one of these up (an OE VM) I had issues with the Nvidia driver not being installed and had to follow the steps outlined in this thread https://lime-technology.com/forum/index.php?topic=47550.msg458389#msg458389 to get the Nvidia drivers installed, however that is not either of your issues.
  14. For OE you have to do the following (the sound.conf / enable_msi=1 change described above): that should fix the audio stutter right away (reboot after the change). I thought this was supposed to be done automatically WAY back in the 6.1 days; I guess not.
  15. Lousy. Well, I'm uncertain this is tied to a driver anyhow; it's just some troubleshooting. Now that I think of it, the 710 wasn't released at that point yet, so it certainly makes sense. I had only heard of the most recent Nvidia driver giving issues, however it is also possible that driver is no longer the most recent. The OE fix should solve that issue, however I'm uncertain about Windows. What make is the 710 you have? I know some Asus cards can be a pain in the ass (for whatever reason, likely a card rom/bios issue). Have you attempted to extract the rom and add it to the XML of the VM? I wouldn't think it's needed for this card, but at this point it is worth trying; the XML side of it is sketched below.
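     Roughly what the XML side looks like, as a sketch only; "Windows 10" and the rom path are placeholders for your own VM name and wherever you saved the rom dump:
     virsh edit "Windows 10"
     Then, inside the GPU's <hostdev> section, add a line like <rom file='/mnt/user/vbios/gt710.rom'/>, save, and start the VM.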
  16. This is a false statement; UnRAID doesn't require a GPU at all, however it will certainly use one if it is available. The reason we need two cards is an Nvidia issue and is not apparent with an AMD GPU (however the workaround gridrunner linked to has been successful for many individuals). I have four video cards and run 4 VMs with GPUs assigned; UnRAID grabs the 1st card (AMD) initially, and then I "steal" it away for my primary VM. "Yeah, like I said: I did try to add another graphic card (9500 GT), placed it in slot 1 and moved the 650 card to slot 2. No changes. And it seems like users with one GPU have got it working: http://lime-technology.com/forum/index.php?topic=43644.0" When you had two Nvidia cards, did you get the exact same condition/issue? Have you tried SeaBIOS as opposed to OVMF? OVMF/UEFI doesn't work with all cards.
  17. For OE you have to do the following (again, the sound.conf / enable_msi=1 change): that should fix the audio stutter right away (reboot after the change). I'm surprised you're having such issues with the 710, as it is very similar to the 720, which I have nothing but good things to say about (which means it is boring in that it just works, x2 or x3 for me). Anyhow, the only thing I can think of offhand is removing any remnants of old drivers prior to installing new ones. The best way is to use the DDU utility http://www.guru3d.com/files-details/display-driver-uninstaller-download.html to wipe out everything and start fresh. I have not updated the drivers in a while, however I had heard newer Nvidia drivers were causing issues. I'm currently on 361.43 and have tested many prior versions with no issues. It looks like 361.43 is pretty old by now (1/19/2016), but you may want to try it. I experienced no longer-than-expected display blackout when installing that driver on 3 separate VMs, all Windows 10 64-bit, i440FX (newest, 2.5?), and SeaBIOS.
  18. If your motherboard BIOS allows selecting the boot GPU by slot (as mine does, and I'm sure others do, as both my recent builds allowed for this), you can add a secondary GPU for UnRAID and set that as the primary GPU for boot/UnRAID, leaving the 16x slot available for a VM. If you have an old-style PCI slot (most boards don't anymore), you can even buy a cheesy ~$10 PCI video card and use that for boot/UnRAID, leaving the other card available. The thread mentioned for passing a single Nvidia GPU to a VM is located here: https://lime-technology.com/forum/index.php?topic=43644.msg464975#msg464975 I'm in the AMD-in-the-1st-slot camp (I've had this card since before switching to the "one machine to rule them all" thinking), Nvidia in all of the rest. Boot/UnRAID does its thing on that primary card, then I steal it away for my primary VM, which works pretty well (I'd also call this "magic").
  19. Many people have had great luck with the AMD 6450, so if you have them you may want to try those first; the same goes for the 5450 (however I didn't use one, so I won't recommend it). Since your build looks to have an onboard GPU, I'd recommend you stick with Nvidia, as they're the currently recommended cards overall (AMD can be a bit more picky across the range of selection). For low power, and possibly fanless, I highly recommend the GeForce GT 710/720/730; I have extensively tested the 720 in both SeaBIOS (my primary usage) and OVMF and do not have any complaints.
  20. No, you will not be able to do any of this without VT-d; there is no way around it (your subject line clearly states "Non VT-D Installation"). UnRAID doesn't need a GPU, however it will grab one if it is there and available; any VM will still be managed by the host/hypervisor, which then allocates/assigns resources to the VM. To do this with a GPU (or any physical hardware device) you need VT-d, as the VM is a child to UnRAID as the parent. The best you can do is boot UnRAID in GUI mode, however that environment is not meant for anything more than UnRAID management. A quick way to check whether VT-d is actually enabled is sketched below.
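     If you want to double-check whether VT-d/the IOMMU is actually on, a couple of quick looks from the console (a rough sketch; the exact dmesg wording varies by kernel version):
     dmesg | grep -i -e DMAR -e IOMMU | head   # look for lines indicating the IOMMU is enabled
     ls /sys/kernel/iommu_groups/              # if groups are listed here, passthrough is at least possible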
  21. Thanks RobJ for looking into this, and giving me some pointers. I'll do some research, and also update to the newest bios. I think I'll also follow up with LT, and hopefully they can help me to get this resolved.
  22. I'm not positive I understand your statement, as it almost sounds like you've had this before? It's the first one like this that I've seen. These messages have been common for me on the last couple of betas, and I have reported as much in at least one of the other threads. I honestly don't know what causes it; I just know that over time I get these messages again, and again, and again. They do not result in instability (which to me is quite surprising), and my assumption is that whatever is managing memory reallocates things and everything goes back to working as it should (meaning no more of these in the syslog). On previous betas I received these messages much earlier in terms of uptime.
     These parts of that snippet don't seem correct, as I'd assume these should be a value > "0":
     Jul 4 12:47:20 Server kernel: 0 pages in swap cache
     Jul 4 12:47:20 Server kernel: Swap cache stats: add 0, delete 0, find 0/0
     Jul 4 12:47:20 Server kernel: Free swap = 0kB
     Jul 4 12:47:20 Server kernel: Total swap = 0kB
     Jul 4 12:47:20 Server kernel: 8338252 pages RAM
     Jul 4 12:47:20 Server kernel: 0 pages HighMem/MovableOnly
     When I first started getting these I assumed it was my setup, as I recently had my motherboard die, and this board (the exact same model as before) is pickier about RAM timings (I have them set to relaxed timings, no XMP, a bit more voltage just in case). Anyhow, I've run Memtest for plenty of passes with 0 errors (I can only run the included Memtest in default/non-multithreaded mode or it will fail), however I've considered grabbing a newer version that is aware of DDR4 and the X99 chipset and running that for stability tests. I do have a BIOS update available and will get around to installing it soon. There is a good reason I have delayed this: I have tested newer versions (there were plenty of beta BIOS releases for my board; I am actually currently on one), however they had worse issues, such as not being able to get into setup, a warning about PCIe resources being low, etc. Most of the BIOS updates are for the newer CPUs available for the X99 socket, now that the Skylake versions have been released.
     I mentioned this same issue in the B21 thread here: http://lime-technology.com/forum/index.php?topic=48193.msg471875#msg471875
     A similar issue, supposedly fixed and should be included in the current kernel: https://bugzilla.kernel.org/show_bug.cgi?id=93251
     Thanks for looking, I appreciate any help with this!
     Edit: I will look into this further, as I have no experience with something of that nature.
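     One thing I may start watching for the fragmentation side of this is /proc/buddyinfo (just an observation aid, not a fix): it lists how many free blocks of each size each memory zone has, with column N being blocks of 2^N pages, so an order:4 failure like the one in the next post means the kernel couldn't find a free 16-page (64KB) contiguous chunk at that moment.
     cat /proc/buddyinfo   # columns left to right are counts of free order-0, order-1, order-2, ... blocks per zone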
  23. I thought this may have been solved, but just noticed it cropped up again in my syslog (certainly took longer to show up than the previous beta; up 14 days now).
     Jul 4 12:47:20 Server kernel: CPU: 3 PID: 13786 Comm: qemu-system-x86 Tainted: G W 4.4.13-unRAID #1
     Jul 4 12:47:20 Server kernel: Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./X99-SLI-CF, BIOS F21a 01/12/2016
     Jul 4 12:47:20 Server kernel: 0000000000000000 ffff8807b7e937a8 ffffffff81369dde 0000000000000001
     Jul 4 12:47:20 Server kernel: 0000000000000004 ffff8807b7e93840 ffffffff810bcc2f 0260c0c000000010
     Jul 4 12:47:20 Server kernel: ffff880700000040 0000000400000040 0000000000000004 0000000000000004
     Jul 4 12:47:20 Server kernel: Call Trace:
     Jul 4 12:47:20 Server kernel: [<ffffffff81369dde>] dump_stack+0x61/0x7e
     Jul 4 12:47:20 Server kernel: [<ffffffff810bcc2f>] warn_alloc_failed+0x10f/0x127
     Jul 4 12:47:20 Server kernel: [<ffffffff810bfc46>] __alloc_pages_nodemask+0x870/0x8ca
     Jul 4 12:47:20 Server kernel: [<ffffffff810bfe4a>] alloc_kmem_pages_node+0x4b/0xb3
     Jul 4 12:47:20 Server kernel: [<ffffffff810f4434>] kmalloc_large_node+0x24/0x52
     Jul 4 12:47:20 Server kernel: [<ffffffff810f6bd3>] __kmalloc_node+0x22/0x153
     Jul 4 12:47:20 Server kernel: [<ffffffff810209af>] reserve_ds_buffers+0x18c/0x33d
     Jul 4 12:47:20 Server kernel: [<ffffffff8101b3f0>] x86_reserve_hardware+0x135/0x147
     Jul 4 12:47:20 Server kernel: [<ffffffff8101b452>] x86_pmu_event_init+0x50/0x1c9
     Jul 4 12:47:20 Server kernel: [<ffffffff810ae064>] perf_try_init_event+0x41/0x72
     Jul 4 12:47:20 Server kernel: [<ffffffff810ae4b5>] perf_event_alloc+0x420/0x66e
     Jul 4 12:47:20 Server kernel: [<ffffffffa06435ab>] ? kvm_perf_overflow+0x35/0x35 [kvm]
     Jul 4 12:47:20 Server kernel: [<ffffffff810b042b>] perf_event_create_kernel_counter+0x22/0x112
     Jul 4 12:47:20 Server kernel: [<ffffffffa06436c1>] pmc_reprogram_counter+0xbf/0x104 [kvm]
     Jul 4 12:47:20 Server kernel: [<ffffffffa064382f>] reprogram_gp_counter+0x129/0x146 [kvm]
     Jul 4 12:47:20 Server kernel: [<ffffffffa0783b1e>] intel_pmu_set_msr+0x2bd/0x2ca [kvm_intel]
     Jul 4 12:47:20 Server kernel: [<ffffffffa0643b14>] kvm_pmu_set_msr+0x15/0x17 [kvm]
     Jul 4 12:47:20 Server kernel: [<ffffffffa0625a55>] kvm_set_msr_common+0x921/0x983 [kvm]
     Jul 4 12:47:20 Server kernel: [<ffffffffa06269fe>] ? kvm_arch_vcpu_load+0x133/0x173 [kvm]
     Jul 4 12:47:20 Server kernel: [<ffffffffa07833ba>] vmx_set_msr+0x2ec/0x2fe [kvm_intel]
     Jul 4 12:47:20 Server kernel: [<ffffffffa0622422>] kvm_set_msr+0x61/0x63 [kvm]
     Jul 4 12:47:20 Server kernel: [<ffffffffa077c9ba>] handle_wrmsr+0x3b/0x62 [kvm_intel]
     Jul 4 12:47:20 Server kernel: [<ffffffffa07815f9>] vmx_handle_exit+0xfbb/0x1053 [kvm_intel]
     Jul 4 12:47:20 Server kernel: [<ffffffffa07830bf>] ? vmx_vcpu_run+0x30e/0x31d [kvm_intel]
     Jul 4 12:47:20 Server kernel: [<ffffffffa062bf76>] kvm_arch_vcpu_ioctl_run+0x38a/0x1080 [kvm]
     Jul 4 12:47:20 Server kernel: [<ffffffffa06269fe>] ? kvm_arch_vcpu_load+0x133/0x173 [kvm]
     Jul 4 12:47:20 Server kernel: [<ffffffffa061e0ec>] kvm_vcpu_ioctl+0x178/0x499 [kvm]
     Jul 4 12:47:20 Server kernel: [<ffffffff8149fb20>] ? vfio_pci_write+0x14/0x19
     Jul 4 12:47:20 Server kernel: [<ffffffff8149b731>] ? vfio_device_fops_write+0x1f/0x29
     Jul 4 12:47:20 Server kernel: [<ffffffff81109970>] ? __vfs_write+0x21/0xb9
     Jul 4 12:47:20 Server kernel: [<ffffffff81117b95>] do_vfs_ioctl+0x3a3/0x416
     Jul 4 12:47:20 Server kernel: [<ffffffff8111fb96>] ? __fget+0x72/0x7e
     Jul 4 12:47:20 Server kernel: [<ffffffff81117c46>] SyS_ioctl+0x3e/0x5c
     Jul 4 12:47:20 Server kernel: [<ffffffff81622c6e>] entry_SYSCALL_64_fastpath+0x12/0x6d
     Jul 4 12:47:20 Server kernel: Mem-Info:
     Jul 4 12:47:20 Server kernel: active_anon:1901826 inactive_anon:8722 isolated_anon:0
     Jul 4 12:47:20 Server kernel: active_file:554832 inactive_file:653060 isolated_file:0
     Jul 4 12:47:20 Server kernel: unevictable:4604002 dirty:1076 writeback:0 unstable:0
     Jul 4 12:47:20 Server kernel: slab_reclaimable:273973 slab_unreclaimable:32003
     Jul 4 12:47:20 Server kernel: mapped:31631 shmem:137158 pagetables:16790 bounce:0
     Jul 4 12:47:20 Server kernel: free:89632 free_pcp:59 free_cma:0
     Jul 4 12:47:20 Server kernel: Node 0 DMA free:15896kB min:64kB low:80kB high:96kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15980kB managed:15896kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
     Jul 4 12:47:20 Server kernel: lowmem_reserve[]: 0 667 31899 31899
     Jul 4 12:47:20 Server kernel: Node 0 DMA32 free:127336kB min:2824kB low:3528kB high:4236kB active_anon:298168kB inactive_anon:1092kB active_file:968kB inactive_file:736kB unevictable:367324kB isolated(anon):0kB isolated(file):0kB present:831172kB managed:821456kB mlocked:367324kB dirty:0kB writeback:0kB mapped:1508kB shmem:8440kB slab_reclaimable:16520kB slab_unreclaimable:1760kB kernel_stack:448kB pagetables:2484kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
     Jul 4 12:47:20 Server kernel: lowmem_reserve[]: 0 0 31231 31231
     Jul 4 12:47:20 Server kernel: Node 0 Normal free:215296kB min:132272kB low:165340kB high:198408kB active_anon:7309136kB inactive_anon:33796kB active_file:2218360kB inactive_file:2611504kB unevictable:18048684kB isolated(anon):0kB isolated(file):0kB present:32505856kB managed:31981696kB mlocked:18048684kB dirty:4304kB writeback:0kB mapped:125016kB shmem:540192kB slab_reclaimable:1079372kB slab_unreclaimable:126252kB kernel_stack:12848kB pagetables:64676kB unstable:0kB bounce:0kB free_pcp:236kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
     Jul 4 12:47:20 Server kernel: lowmem_reserve[]: 0 0 0 0
     Jul 4 12:47:20 Server kernel: Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB (U) 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 3*4096kB (M) = 15896kB
     Jul 4 12:47:20 Server kernel: Node 0 DMA32: 264*4kB (UME) 287*8kB (UME) 329*16kB (UME) 264*32kB (UME) 247*64kB (UME) 152*128kB (UME) 99*256kB (UME) 49*512kB (UME) 22*1024kB (ME) 1*2048kB (E) 0*4096kB = 127336kB
     Jul 4 12:47:20 Server kernel: Node 0 Normal: 35820*4kB (UME) 8899*8kB (UME) 128*16kB (UME) 14*32kB (UM) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 216968kB
     Jul 4 12:47:20 Server kernel: 1344904 total pagecache pages
     Jul 4 12:47:20 Server kernel: 0 pages in swap cache
     Jul 4 12:47:20 Server kernel: Swap cache stats: add 0, delete 0, find 0/0
     Jul 4 12:47:20 Server kernel: Free swap = 0kB
     Jul 4 12:47:20 Server kernel: Total swap = 0kB
     Jul 4 12:47:20 Server kernel: 8338252 pages RAM
     Jul 4 12:47:20 Server kernel: 0 pages HighMem/MovableOnly
     Jul 4 12:47:20 Server kernel: 133490 pages reserved
     Jul 4 12:47:20 Server kernel: qemu-system-x86: page allocation failure: order:4, mode:0x260c0c0
     This repeats multiple times, with a couple of CPU assignments listed with the same error. I assume this is an OOM condition, possibly from fragmentation or QEMU overhead. I don't believe I'm assigning more than is reasonable to the VMs and leaving the host without enough for operations/Docker.
     I also had this show up once, which was never there previously. The device is a PCIe root port, so not specifically an end device, and it has not occurred again:
     Jun 23 14:58:45 Server kernel: pcieport 0000:00:01.0: AER: Corrected error received: id=0008
     Jun 23 14:58:45 Server kernel: pcieport 0000:00:01.0: PCIe Bus Error: severity=Corrected, type=Data Link Layer, id=0008(Transmitter ID)
     Jun 23 14:58:45 Server kernel: pcieport 0000:00:01.0: device [8086:2f02] error status/mask=00001000/00002000
     Jun 23 14:58:45 Server kernel: pcieport 0000:00:01.0: [12] Replay Timer Timeout
     Diagnostics attached. server-diagnostics-20160705-0734.zip
  24. I've had this same behavior on the last couple of betas; I think I mentioned it at the time. Looking at the console at bootup, I believe I also saw the entry for the dirty bit being detected (or whatever it flags) on the USB drive, however there was no parity check when booted, just "parity is valid". I have not had to perform a forced reset on this release, however.
  25. Thanks for the continued improvements! I've added some requests; feel free to comment on or ignore them entirely.