
Fabiolander

Members
  • Content Count

    14
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Fabiolander

  • Rank
    Member
  • Birthday 09/22/1968

Converted

  • Gender
    Male
  • URL
    www.azur.photo
  • Location
    France
  • Personal Text
    MB: MSI Z97 GAMING 5 | CPU: Intel® Core™ i7-4790K CPU @ 4. | RAM: 32GB DDR3 | Cooler: Noctua NH-U14S TR4-SP3 | Case: Fractal Design D | Cache: SSD SAMSUNG 250 | Parity: TOSHIBA 8TB | Array: 3x Seagate 4TB, 3x WD 3TB | Unassigned: SAMSUNG SSD EVO 250


  1. Arffff... I cannot use unBALANCE anymore. I stopped Docker and the VMs and cleared the cache drive... but the application remains locked. Did I miss something?
  2. Hi, I encrypted my array using a password generated by my favorite password manager, so every time Unraid reboots, the array waits for the password before it starts. I'm very happy with this feature, because if my server is stolen a robber can't access my data. The dark side is that the array can't start automatically, so I have to be on site to enter the password. I wanted to add a bit of automation by fetching a keyfile from an FTP server, so I followed the great tutorial by SpaceInvader One. My problem is that the password generated by my password manager includes special characters: " / ' # @ ... My Linux knowledge is limited and I don't know how to pass those special characters in the password string. Here are my two attempts:
     wget ftpsimplicite user=myuser@gmqil.com --password="mypassord@"/!'" ftp://myftp/keyfile -O /root/keyfile
     wget ftpsimplicite user=myuser@gmqil.com --password="mypassord'@"/!" ftp://myftp/keyfile -O /root/keyfile
     Could you please help me? Thanks
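A minimal quoting sketch (the password value here is made up for illustration; the real one stays in your password manager): in sh/bash, wrapping the value in single quotes passes every character through literally, which sidesteps the ", /, #, @ and ! problems:

```shell
# Single quotes pass every character through literally, so a password
# full of " / # @ ! needs no escaping inside them.
pass='mypassword@"/!#'          # hypothetical value for illustration
printf '%s\n' "$pass"

# The one character single quotes cannot contain is a single quote
# itself; splice it in as '\'' (close quote, escaped quote, reopen).
pass2='my'\''password'
printf '%s\n' "$pass2"          # my'password
```

With the value in a variable, the download could then be written (keeping the host and paths from the post as-is) as `wget --user='myuser@gmqil.com' --password="$pass" ftp://myftp/keyfile -O /root/keyfile`; the double quotes around `$pass` keep the shell from re-splitting it.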
  3. Hi, thanks a lot for this plugin. I think I have the same blank page problem as some others who try to add an HTTP tab when the Unraid server is accessed over HTTPS. So I started by creating a new tab for SABnzbd: I added a new port to the SABnzbd container, then mapped that new port to HTTPS access in the SABnzbd configuration. Now it works. 🤘 My question is: should I do the same for all the other container applications? I have no clue how to do it in Plex or Krusader... Do you maybe have an easier workaround? Thanks a lot and take care, Fab
  4. Maybe I will not use this VM for gaming after all. It looks super tricky to make it work properly with video passthrough. In my dream I was thinking about moving my current video editing hackintosh (i9 with 64GB RAM and a Radeon 64) to the Unraid server, where I could host one Windows 10 gaming VM and one macOS video editing VM. I'm afraid that project is a bit too optimistic 🤣🤣 For now it is probably a better compromise to keep this Unraid server with the i7 configuration and use it only as a NAS, media server and Home Assistant host. I will continue to dual boot between Windows 10 and Catalina for other purposes. I just installed two cheap 10Gb cards between the two machines and the transfer rate is close to Nirvana 😎😎 Thanks again for your help
  5. Thanks a lot @bastl for your help. The VNC graphics device is disabled (I connect to the VM with AnyDesk), and an old HDMI monitor is connected to the video card. The screen resolution remains locked at 640x480, and the OS is now frozen. Here is the VM's XML:
     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='4'>
       <name>WIN10 Gaming 1</name>
       <uuid>4da06f4b-18e3-b9f1-d6ca-80516e89d87b</uuid>
       <description>WIN10 Gaming 1</description>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>17301504</memory>
       <currentMemory unit='KiB'>17301504</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>4</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='0'/>
         <vcpupin vcpu='1' cpuset='4'/>
         <vcpupin vcpu='2' cpuset='2'/>
         <vcpupin vcpu='3' cpuset='6'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-4.2'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/4da06f4b-18e3-b9f1-d6ca-80516e89d87b_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none'>
         <topology sockets='1' cores='2' threads='2'/>
         <cache mode='passthrough'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/disks/SSD-UNASSIGNED/WIN10 Gaming 1/vdisk1.img' index='1'/>
           <backingStore/>
           <target dev='hdc' bus='virtio'/>
           <boot order='1'/>
           <alias name='virtio-disk2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </disk>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='ide' index='0'>
           <alias name='ide'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <controller type='usb' index='0' model='ich9-ehci1'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci1'>
           <alias name='usb'/>
           <master startport='0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci2'>
           <alias name='usb'/>
           <master startport='2'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
         </controller>
         <controller type='usb' index='0' model='ich9-uhci3'>
           <alias name='usb'/>
           <master startport='4'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:fc:0f:43'/>
           <source bridge='br0'/>
           <target dev='vnet0'/>
           <model type='virtio'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/0'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/0'>
           <source path='/dev/pts/0'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-4-WIN10 Gaming 1/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <rom file='/mnt/user/tmp/MSI.GTX960.2048.150528.rom'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
         </hostdev>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
     Other things I discovered: since I enforced 'PCIe ACS override', the 'Unraid Nvidia' plugin no longer detects any video card, and 'GPU diagnostic' returns this error: Error 300: Vendor utility not found. I guess this is because the video card is now rerouted to and locked by the VM, but I'm not sure, so I'm sharing this info with you. Thanks again for your time 🤘
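A quick host-side check related to the plugin no longer seeing the card: this sketch (assuming the GPU is still at PCI address 0000:02:00.0, as in the XML above) reports which kernel driver currently claims it. While the VM owns the card it is normally vfio-pci, which would explain why host-side tools stop detecting it.

```shell
# Report the kernel driver bound to the GPU (address assumed from the
# XML above; adjust if your card moved).
dev=/sys/bus/pci/devices/0000:02:00.0
if [ -e "$dev/driver" ]; then
    # The driver entry is a symlink into /sys/bus/pci/drivers/<name>
    msg="driver: $(basename "$(readlink "$dev/driver")")"
else
    msg="no driver bound (or no device at that address)"
fi
echo "$msg"
```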
  6. Oops, sorry, I just read the container overview and it was in there. Sorry for the noise 🤐
  7. Wow, thanks a lot @bastl! I went to Settings > VM Manager and changed the PCIe ACS override setting from 'off' to 'both'. Bingo! Now the NVIDIA graphics and audio devices are in the same group:
     IOMMU group 0: [8086:0c00] 00:00.0 Host bridge: Intel Corporation 4th Gen Core Processor DRAM Controller (rev 06)
     IOMMU group 1: [8086:0c01] 00:01.0 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
     IOMMU group 2: [8086:0c05] 00:01.1 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x8 Controller (rev 06)
     IOMMU group 3: [8086:0c09] 00:01.2 PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x4 Controller (rev 06)
     IOMMU group 4: [8086:8cb1] 00:14.0 USB controller: Intel Corporation 9 Series Chipset Family USB xHCI Controller
     IOMMU group 5: [8086:8cba] 00:16.0 Communication controller: Intel Corporation 9 Series Chipset Family ME Interface #1
     IOMMU group 6: [8086:8cad] 00:1a.0 USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #2
     IOMMU group 7: [8086:8ca0] 00:1b.0 Audio device: Intel Corporation 9 Series Chipset Family HD Audio Controller
     IOMMU group 8: [8086:8c90] 00:1c.0 PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 1 (rev d0)
     IOMMU group 9: [8086:8c96] 00:1c.3 PCI bridge: Intel Corporation 9 Series Chipset Family PCI Express Root Port 4 (rev d0)
     IOMMU group 10: [8086:8ca6] 00:1d.0 USB controller: Intel Corporation 9 Series Chipset Family USB EHCI Controller #1
     IOMMU group 11: [8086:8cc4] 00:1f.0 ISA bridge: Intel Corporation Z97 Chipset LPC Controller
                     [8086:8c82] 00:1f.2 SATA controller: Intel Corporation 9 Series Chipset Family SATA Controller [AHCI Mode]
                     [8086:8ca2] 00:1f.3 SMBus: Intel Corporation 9 Series Chipset Family SMBus Controller
     IOMMU group 12: [10de:1401] 02:00.0 VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)
                     [10de:0fba] 02:00.1 Audio device: NVIDIA Corporation GM206 High Definition Audio Controller (rev a1)
     IOMMU group 13: [1000:0072] 03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
     IOMMU group 14: [1969:e091] 05:00.0 Ethernet controller: Qualcomm Atheros Killer E220x Gigabit Ethernet Controller (rev 13)
     After a reboot the VM starts! Now a new issue: connecting to the VM with AnyDesk, the maximum resolution it can display is a very laggy 640x480. I managed to install the latest NVIDIA graphics driver, but nothing changed; I'm still stuck at 640x480. Do you have an idea what I missed?
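For reference, a grouping listing like the one above can be reproduced from the command line; a small sketch that walks /sys/kernel/iommu_groups, the same data the Unraid Tools > System Devices page renders (run on the host; prints nothing but a zero count if the IOMMU is off):

```shell
# Print every IOMMU group and the PCI devices it contains.
count=0
for dev in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$dev" ] || continue        # glob matched nothing: no groups
    grp=${dev%/devices/*}            # strip the /devices/<addr> suffix
    echo "IOMMU group ${grp##*/}: ${dev##*/}"
    count=$((count + 1))
done
echo "$count devices listed"
```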
  8. Hi, this graphics card passthrough issue seems to be very popular with KVM 😵 So like many of us I got the VFIO error message, and I tried several workarounds without success. First I moved the video card to a different slot on the motherboard, but the error came back with a different slot number. Then I added multifunction='on' and changed the virtual slot of the video and audio functions in the VM's XML, but the error remains:
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
     </hostdev>
     <hostdev mode='subsystem' type='pci' managed='yes'>
       <driver name='vfio'/>
       <source>
         <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
       </source>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
     </hostdev>
     Finally I tried the VFIO-PCI CFG plugin, and after a reboot the Unraid array could not start because the parity disk had disappeared... I'm stuck, and I would very much appreciate it if someone could help me sort out this error. Thank you. nas-diagnostics-20200325-2148.zip
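Not something tried in the post, but a commonly suggested alternative to the plugin (an assumption worth verifying against your Unraid release): stub both GPU functions to vfio-pci at boot via the kernel command line, so the whole pair is claimed by VFIO before any host driver grabs it. On Unraid that means editing Main > Flash > Syslinux Configuration and adding the card's vendor:device IDs (10de:1401 and 10de:0fba for this GTX 960, per the device listings in these posts) to the append line:

```
append vfio-pci.ids=10de:1401,10de:0fba initrd=/bzroot
```

Note this conflicts with the 'Unraid Nvidia' plugin's use of the card on the host, since vfio-pci and the NVIDIA driver cannot both own it.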
  9. Sorry @testdasi, and thank you for your answer. I missed that rule in the forum policy; usually other forums refuse duplicates. I will create my own thread then.
  10. No reply so far, but I am continuing my investigation. As advised by SpaceInvader One, I first moved the video card to a different slot on the motherboard, but the error came back with a different slot number:
      Execution error
      internal error: qemu unexpectedly closed the monitor: 2020-03-25T10:05:32.401100Z qemu-system-x86_64: -device vfio-pci,host=0000:02:00.0,id=hostdev0,bus=pci.0,multifunction=on,addr=0x5: vfio 0000:02:00.0: group 1 is not viable
      Please ensure all devices within the iommu_group are bound to their vfio bus driver.
      I added multifunction='on' and changed the virtual slot of the video and audio functions in the VM's XML, but the error remains:
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x02' slot='0x00' function='0x0'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
      </hostdev>
      <hostdev mode='subsystem' type='pci' managed='yes'>
        <driver name='vfio'/>
        <source>
          <address domain='0x0000' bus='0x02' slot='0x00' function='0x1'/>
        </source>
        <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
      </hostdev>
      I also tried the VFIO-PCI CFG plugin, and after a reboot the Unraid array could not start because the parity disk had disappeared. Do you have an idea where I can dig to sort out this video card passthrough issue? Thank you in advance for your time
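For context, "group 1 is not viable" means some device sharing the GPU's IOMMU group is still bound to a non-VFIO driver. A small diagnostic sketch (the 0000:02:00.0 address is taken from the qemu error above; adjust to your card) that lists every device in that group together with its currently bound driver:

```shell
# List every device in the GPU's IOMMU group with its bound driver.
gpu=0000:02:00.0                       # address from the qemu error above
sys=/sys/bus/pci/devices/$gpu/iommu_group
if [ -d "$sys" ]; then
    report="group $(basename "$(readlink "$sys")"):"
    for d in "$sys"/devices/*; do
        drv=none                       # device with no driver bound
        [ -e "$d/driver" ] && drv=$(basename "$(readlink "$d/driver")")
        report="$report ${d##*/}=$drv"
    done
else
    report="no IOMMU group for $gpu (IOMMU off or wrong address)"
fi
echo "$report"
```

Any entry other than vfio-pci (apart from PCI bridges) is a candidate for why the group is rejected.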
  11. Hi, I'm very happy with my Unraid NAS and I would like to go further by creating a gaming VM that I can play from different places. I followed several SpaceInvader One videos and some advice like the post above, but I have the same issue as Jimmy980_: my VM works great, but every time I try to pass through the NVIDIA GTX 960 2GB I get this VM creation error:
      Execution error
      internal error: qemu unexpectedly closed the monitor: 2020-03-22T10:06:53.908737Z qemu-system-x86_64: -device vfio-pci,host=0000:01:00.0,id=hostdev0,bus=pci.0,multifunction=on,addr=0x6,romfile=/mnt/disk6/tmp/MSI.GTX960.2048.150528.rom: vfio 0000:01:00.0: group 1 is not viable
      Please ensure all devices within the iommu_group are bound to their vfio bus driver.
      The NVIDIA video and audio devices do appear to be in the same group 1:
      Group 1
        00:01.0 [8086:0c01] PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x16 Controller (rev 06)
        00:01.2 [8086:0c09] PCI bridge: Intel Corporation Xeon E3-1200 v3/4th Gen Core Processor PCI Express x4 Controller (rev 06)
        01:00.0 [10de:1401] VGA compatible controller: NVIDIA Corporation GM206 [GeForce GTX 960] (rev a1)
        01:00.1 [10de:0fba] Audio device: NVIDIA Corporation GM206 High Definition Audio Controller (rev a1)
        02:00.0 [1000:0072] Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
      My understanding of passthrough techniques stops here. Could you please help me? nas-diagnostics-20200322-1127.zip
  12. Thank you for your very fast answer. OK, so I'll keep it like this; I think I will survive with Parity 2 only. Thanks a lot for this amazing masterpiece of software.
  13. Hi, thanks a lot for this product. It allowed me to recycle my previous video editing computer (i7-4790K, 32GB RAM, GTX 960) and a few hard drives into an Unraid server with 17TB of shared storage:
      Parity 1: 4TB | Parity 2: No device
      Disk 1: 4TB | Disk 2: 4TB | Disk 3: 3TB | Disk 4: 3TB | Disk 5: 3TB | Disk 6: No device
      Cache: SSD 250GB | Unassigned: SSD 250GB
      This configuration works like a charm with an encrypted file system. Last week I got an 8TB hard drive and decided to add it to my Unraid server. I understood that the biggest disk must do the parity job, so I tried to swap my 4TB parity disk for my brand new 8TB, but Unraid would not let me start the array again (the start button was greyed out: "configuration incorrect"). So, step by step: I added the new 8TB as a second parity disk. 28 hours later, when the clearing and parity-sync processes were complete, I removed the original 4TB parity disk and moved it into the array. The new configuration now has 21TB of shared storage and 8TB of parity:
      Parity 1: No device | Parity 2: 8TB
      Disk 1: 4TB | Disk 2: 4TB | Disk 3: 3TB | Disk 4: 3TB | Disk 5: 3TB | Disk 6: 4TB
      Cache: SSD 250GB | Unassigned: SSD 250GB
      Everything works perfectly, but now I have a few questions. Is it a problem to have only one disk in Parity 2 and no device in Parity 1? Does it make sense to have an 8TB parity drive when the biggest data drive is 4TB? Do you see any optimisation? Thanks a lot for your support. Fab