SpaceInvaderOne
Community Developer
  • Posts: 1741
  • Joined
  • Days Won: 29

Everything posted by SpaceInvaderOne

  1. Hi. I was thinking I would like to have a fallback WiFi network connection for my Unraid server. Why? Well, I have a UPS on my server to power it during a power failure, and another UPS on my router and phone etc. in another room. At the moment I connect my server to the router using a powerline connection, so when the power fails my server and internet router stay powered but I have no network connection from my laptop to the server, because the powerline part of the setup is down. Unfortunately I can't run Cat 5e between the router and the server. So I was wondering if it is possible for Unraid to use a WiFi connection set as a fallback if the wired LAN fails? I am thinking it probably isn't possible, but hey, you never know! Failing that, I guess I would just have to use my second LAN port, connect it to a LAN-to-WiFi access point, and power that off my server UPS, but that just seems very over the top. Any ideas, guys? Thanks.
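     For what it's worth, on a generic Linux box a WiFi connection would look roughly like the sketch below. I'm not even sure stock Unraid ships the wireless drivers or wpa_supplicant, and the interface name, SSID and passphrase are pure placeholders, so treat this as an illustration only, not something I've tested on Unraid:

        # hypothetical minimal /etc/wpa_supplicant.conf (SSID and passphrase are placeholders)
        #   network={
        #       ssid="MyHomeWifi"
        #       psk="my-wifi-passphrase"
        #   }

        # bring the wireless interface up and get a DHCP lease
        wpa_supplicant -B -i wlan0 -c /etc/wpa_supplicant.conf
        dhclient wlan0

     Making it a true automatic fallback when the wired LAN drops would need something on top of this (route metrics or a bond), which is beyond a quick sketch.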
  2. Please post your IOMMU groups and devices after ACS is enabled.
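     If you're not sure how to get them, running something like this from the Unraid console should print every group and the devices in it (lspci is only used for the human-readable names):

        # list every IOMMU group and the PCI devices inside it
        for d in /sys/kernel/iommu_groups/*/devices/*; do
            group=$(basename "$(dirname "$(dirname "$d")")")
            echo "IOMMU group $group: $(lspci -nns "$(basename "$d")")"
        done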
  3. Ah, sorry, I didn't realise the link didn't show up. It works fine in Chrome.
  4. I'm using a GTX 970 at the moment but will be getting the 1080 as well. I can't see any reason why it wouldn't work fine.
  5. Yes, I think that could be an issue.

        <hostdev mode='subsystem' type='pci' managed='yes' xvga='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x03' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
        </hostdev>

     From this part of your XML I would guess that GPU 1 is 03:00.0 with its sound at 03:00.1, and GPU 2 is 04:00.0 with its sound at 04:00.1? The fact that two separate sound outputs from the card are being listed suggests the VM doesn't really understand it's a single card. You should email Limetech and ask if there is anything you should do.
  6. Sure, here's my Dropbox link: https://www.dropbox.com/s/gymaipg6vprd508/MSI_util.zip?dl=0
  7. Yes, firstly I would try enabling the ACS override (you will need to reboot the server after this), because your GPU is in IOMMU group 1:

        /sys/kernel/iommu_groups/1/devices/0000:00:01.0
        /sys/kernel/iommu_groups/1/devices/0000:01:00.0
        /sys/kernel/iommu_groups/1/devices/0000:01:00.1

     Your graphics card is 01:00.0 (GPU graphics) and 01:00.1 (GPU sound), but there is also a PCI bridge (00:01.0). These need to be broken up so that your graphics card (01:00.0 and 01:00.1) sits in its own group with no other devices present. Also make sure in your BIOS that the primary GPU is your onboard graphics, not the Nvidia. This is important because Nvidia GPUs can't be the primary GPU in the system: Unraid needs the primary GPU for its console, and if you try to pass a primary Nvidia card through you get a black screen, exactly as you are seeing. (This isn't the case with AMD cards; they can be passed through as the primary GPU, you just lose the Unraid console when they are passed through.) Please try this and report back here.
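     If you're curious what that toggle actually does: it just adds a kernel parameter to the boot line on the flash drive. After the reboot you can check it took effect with something like this (the exact append line will differ on your system, the parameter name is the important bit):

        grep pcie_acs_override /boot/syslinux/syslinux.cfg
        # the append line should now contain something like:
        #   append pcie_acs_override=downstream initrd=/bzroot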
  8. Try setting Hyper-V to no, although in the beta this should not be necessary. Have you tried using SeaBIOS as well as OVMF? Are you trying to SLI the cards? I have heard that can be problematic. If you are, do you get the same error with just one card?
  9. When you share the disk via the Unassigned Devices plugin it will become a share named after the disk. For example, when I share a disk the share name is ST3500312CS_9VVERKB1 (this is the name of my disk; yours will be different). The drive must also be formatted. You should then be able to see it under \\tower\ST3500312CS_9VVERKB1. From Windows you will need to map a network drive to it, e.g. an X: drive. Once you have mapped a drive letter to the share, set the location in Blue Iris to the drive letter of the mapped share, i.e. X:. If you are not sure how to map a drive, check here: http://windows.microsoft.com/en-gb/windows/create-shortcut-map-network-drive#1TC=windows-7
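     Mapping it is also a one-liner from a Windows command prompt if you prefer that over the GUI (use your own disk's share name in place of mine):

        net use X: \\tower\ST3500312CS_9VVERKB1 /persistent:yes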
  10. Yes, isolating cores made a big difference in performance for me.
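     For anyone wondering how the isolation itself is done: it's a kernel parameter added to the append line in syslinux.cfg on the flash drive. The core numbers below are only an example, use the ones you give to your VMs:

        # /boot/syslinux/syslinux.cfg - example append line with cores 4-7 isolated
        #   append isolcpus=4-7 initrd=/bzroot
        grep isolcpus /boot/syslinux/syslinux.cfg   # quick check after editing and rebooting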
  11. Thanks dAigo, that's very interesting. I will try to benchmark the disk later using what I have learned from your post. Many thanks.
  12. I don't think we can change any VM setting to correct this, but maybe I am wrong. I have emailed Limetech today to look at this thread and advise on testing disk speeds in VMs. I will report back here with what they say. In the meantime, check out this thread: http://serverfault.com/questions/437703/testing-disk-performance-of-virtual-machines
  13. Hi guys. I have recently changed my CPU from a 4-core i7 to a 14-core Xeon. What I have noticed is that the startup speed of my Win 10 VM is much slower. If I increase the core count of the VM it is slower, and if I decrease the core count it is faster (in VM startup speed), measured from when I click start to when I see the machine begin booting on screen. I don't notice this on my OSX VMs. The differences between my OSX machines and Windows are: the passed-through GPU for OSX is an HD 6450 and for Windows a GTX 970, and OSX uses SeaBIOS while Windows uses OVMF. However, I have tried SeaBIOS on Windows and still get the same slower results. Is there any explanation for this? Edit: it must be something to do with my GPU, because if I change from the GTX 970 to VNC the VM starts instantly. I don't understand why the core count would affect speed when I have a higher core count and pass through the GTX. If I only pass through one core plus the GTX, startup is very fast, but with 8 cores it takes 30 seconds before the display is initialised.
  14. I have tried passing through SSD disks to VMs and get the same cache-affected results. We need to test with software that avoids the cache.
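     fio is one option that can bypass the cache with direct I/O. Something along these lines (flags from memory, so double-check them; the test file path is just a placeholder, and inside a Windows guest you would swap libaio for the windowsaio engine):

        # 4k random reads with the page cache bypassed (direct=1)
        fio --name=randread --filename=/mnt/disks/testdisk/fio-test.bin --size=1G \
            --direct=1 --rw=randread --bs=4k --ioengine=libaio --iodepth=16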
  15. I guess another option, if you didn't want to use the cache, would be to pass through a whole disk outside the array (I wouldn't use a vdisk) to the VM, then share that from within the Windows VM.
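     If you do pass through a whole disk, grab its stable by-id path first so it can't change between boots; a quick sketch (the ata-... names you'll see are whatever your disks report):

        # list stable device names, ignoring the individual partitions
        ls -l /dev/disk/by-id/ | grep -v part
        # the chosen /dev/disk/by-id/... entry is then what you point the VM at
        # instead of a vdisk image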
  16. Can't you just write the data from Blue Iris to a network share on the array that is mapped as a drive in Windows?
  17. Quoting my earlier post:

        Yeah, wasn't DDR4 support only added in Memtest86 6.0? @limetech, why don't you upgrade the Memtest in Unraid?

     And the reply:

        Memtest86 versions have almost always been confusing, and the current state isn't any better. I won't go into the history (you can look it up), but currently there are two sources, both based on the original source code. One is open source and fully distributable, and is the one included with unRAID, but it has unfortunately fallen behind, has only a few devs (perhaps only one), and the last version is 5.01, released in 2013, which is the exact version we have. The other was taken commercial by PassMark and has been greatly updated; it is currently on 6.3.0, with comprehensive support for recent technologies. They do provide a free version with no restrictions on usage, but only as part of a bootable image that doesn't look like it could be included with other software. Perhaps there is a way, but I'd be wary of PassMark lawyers breathing down your neck. Given the current state, the included version is a good first step, but if you have more recent tech, such as DDR4 and modern motherboards and CPUs, you should probably download and create a bootable flash with the latest PassMark Memtest86 and run that instead.

     Ah, OK, that makes perfect sense. Thanks.
  18. Yeah, wasn't DDR4 support only added in Memtest86 6.0? @limetech, why don't you upgrade the Memtest in Unraid?
  19. Your primary graphics should be the integrated graphics on the CPU, as it is very difficult to pass through an Nvidia card when it is the only card in the system. The problem to me looks like it could be your IOMMU groups. Your Nvidia GPU is in group 1 together with a PCI bridge:

        /sys/kernel/iommu_groups/1/devices/0000:00:01.0   <-- PCI bridge
        /sys/kernel/iommu_groups/1/devices/0000:01:00.0
        /sys/kernel/iommu_groups/1/devices/0000:01:00.1

     You need to isolate the GPU in its own IOMMU group for successful passthrough. Try enabling the ACS override in the VM settings (you will need to reboot the server after doing this), then see if that puts the GPU in its own isolated group.
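     A quick way to check after the reboot (the 01:00.x addresses are the ones from your listing above, adjust if yours differ):

        find /sys/kernel/iommu_groups/ -name '0000:01:00.*'
        # both the GPU (.0) and its audio function (.1) should now sit in a group
        # with no other devices alongside them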
  20. Maybe a stupid question, but the IP of the 2008 VM hasn't changed, has it? Or, if you are connecting by name, try using the IP address. In the VM settings, check that the bridge is set correctly.
  21. Yes, this should be stickied! And yes, Dockers can be pinned as well: click the advanced view, then add --cpuset-cpus= to the extra parameters. The CPUs can be pinned whether or not you have used isolcpus. I don't pin to the isolcpus cores myself, just to the ones used by Unraid, and I avoid pinning Dockers to the first pair of threads as I have heard the Linux OS prefers those for itself. Edit: oops, sorry Squid, I see you just replied before me!!
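     For example (the core numbers are just an illustration, use whatever suits your layout):

        # in the Docker template's Extra Parameters field:
        --cpuset-cpus=2,3,10,11
        # which is the same flag you'd give docker run on the command line
        # (the image name here is only a placeholder):
        docker run -d --cpuset-cpus=2,3,10,11 some/container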
  22. I have rebuilt my Unraid rig about 4 times in the last year!! I just moved to an X99 board (ASRock X99M Killer), which I got off eBay for £140 (UK). It's great as it is micro ATX and has 10 onboard SATA ports and 2 gigabit LAN ports. Very happy with it.