JKunraid

  1. I have a share set up in Unraid. If I start my Ubuntu VM (in Unraid) and go to "Other Locations" in the file manager, I can see my Unraid server. When I click it, it prompts for a user ID/password (which I enter) and then displays all my Unraid shares. I click on my share in the file manager and it appears on the left as a mounted drive; I can even add folders and files to it through the file manager. The problems arise when I then switch to a terminal window in the same VM. I can see the mounted share under /run/user/1000/gvfs, but when I try to create a folder or file there from the command line it fails (the folder shows permissions of drwx------). A rough sketch of what I'm doing from the shell is below.
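     (The server name "tower" and share name "myshare" below are placeholders; the real GVFS path uses whatever server and share names you mounted.)

        # The share mounted by the file manager shows up under the GVFS directory
        ls -ld /run/user/1000/gvfs/
        ls -ld "/run/user/1000/gvfs/smb-share:server=tower,share=myshare"

        # This is the step that fails for me from the command line
        mkdir "/run/user/1000/gvfs/smb-share:server=tower,share=myshare/testdir"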
  2. Found the solution. Speaking as an experienced dev, this was way, way too complicated to set up; I can only imagine how many non-techies give up trying to figure it out. Hopefully future versions of Unraid make this far more intuitive. For anyone else experiencing a similar problem, here is a brief walkthrough of the major steps I took to get an EVGA GTX 1070 SC running on an AMD Threadripper board on Unraid 6.9.2 (a short verification sketch follows at the end).
     1. Enable IOMMU, CSM and Legacy mode in the BIOS (note: for some GPUs UEFI rather than Legacy might work better).
     2.1 In Unraid go to Tools > System Devices.
     2.2 Scroll down until you find the IOMMU group with your GPU.
     2.3 Check the checkbox for every device in the GPU's IOMMU group and save.
     3. Go to the TechPowerUp website and download the VBIOS version for your GPU: https://www.techpowerup.com/vgabios/
     4.1 (NOTE: this step might only apply to systems with a single NVIDIA GPU.) Download/install a hex editor.
     4.2 Use the hex editor to modify the VBIOS file, deleting the leading section (google to find the exact section that needs to be deleted for your card).
     4.3 Save, then upload the modified VBIOS file to some location on your Unraid server.
     5.1 Go to the VM Manager screen and create a new Windows 10 VM.
     5.2 For the initial installation, select VNC in the "Graphics Card" field.
     5.3 Install Windows 10 and enable Remote Desktop.
     5.4 Confirm you can connect with RDP.
     6.1 Go back to the VM template you created to edit it.
     6.2 For the "Graphics Card" field, select your GPU from the dropdown (in my case an EVGA GTX 1070 SC).
     6.3 For "Graphics ROM BIOS", select your modified VBIOS.
     6.4 For "Sound Card" you must select the sound device that is in the same IOMMU group as the GPU (you can add a second card if needed).
     6.5 Select "Update" to save the template.
     7.1 Reopen the VM template from above. (This step is needed because of a bug that resets the edits below if you use "Form View" for the following steps.)
     7.2 Click "XML View".
     7.3 Scroll down to the entries that list the slots for your GPU and its sound device. In my case it looks like the excerpt below; yours may differ and need additional edits if more devices are in the same IOMMU group.
         <rom file='/mnt/transfer_pool/Transfer/Unraid/VBIOS/EVGA.GTX1070.8192.161103_1.rom'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x50' slot='0x00' function='0x1'/>
           </source>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
     7.4 My GPU also has a sound device built in. When you plug the card into a physical motherboard, both devices sit in the same PCIe slot. Unfortunately, when Unraid maps them onto the virtual motherboard it places the devices in different slots, which confuses the NVIDIA driver. To fix this, you have to put all the devices from the same IOMMU group into the same virtual slot. For my particular GPU these were the two lines I modified from above:
         <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         (changed to)
         <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0' multifunction='on'/>
         <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         (changed to)
         <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x1'/>
     7.5 Click "Update" to save the XML. Note: it's essential not to go back to Form View to save the VM template, as it will discard these manual edits. If you ever change the template again in Form View, you will need to save it and then redo the manual edits in XML View.
     8.1 Start the VM and try to connect with RDP after a minute (it does some sort of update, so it takes a bit longer than normal to connect).
     8.2 If you can't connect with RDP, force-shutdown the VM and start it again (for some reason I had to do this twice for it to work).
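     In case it's useful, something like the following should confirm the manual edit survived the save. It assumes the VM is named "Windows 10 Pro" as in my template and is run from the Unraid terminal (Unraid's VM manager is libvirt-based, so virsh is available there):

        # Dump the saved libvirt XML and inspect the guest-side PCI addresses of the
        # passed-through devices; after the edit both GPU functions should sit on
        # slot 0x05 (function 0x0 with multifunction='on', and function 0x1)
        virsh dumpxml "Windows 10 Pro" | grep -A 6 "<hostdev"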
  3. UPDATE 2: Now I'm getting a message from the Fix Common Problems plugin that the log is 100% full. I checked my syslog in /var/log/ and it's over 125 MB. When I tail the last 200 lines, they are filled with this message (a rough check is sketched below):
     Aug 9 09:11:16 Threadripper kernel: vfio-pci 0000:50:00.0: BAR 1: can't reserve [mem 0xc0000000-0xcfffffff 64bit pref]
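     For anyone following along, commands along these lines show the same thing from the Unraid terminal (standard Linux log paths; the grep pattern is just the repeated message above):

        # How full is the log filesystem, and how large is the syslog file?
        df -h /var/log
        du -h /var/log/syslog

        # How many times is the vfio-pci BAR message repeating?
        grep -c "BAR 1: can't reserve" /var/log/syslog
        tail -n 200 /var/log/syslog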
  4. UPDATE: I just tried checking the boxes for the following items (in Tools > System Devices) and then rebooting:
     IOMMU group 74:
       [10de:1b81] 50:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
       [10de:10f0] 50:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
     It still can't connect via RDP, but I think it has bound the devices to VFIO as mentioned (a quick check is sketched below). Below is a copy of the View VFIO-PCI log (found in Tools > System Devices):
     Loading config from /boot/config/vfio-pci.cfg
     BIND=0000:50:00.0|10de:1b81 0000:50:00.1|10de:10f0
     ---
     Processing 0000:50:00.0 10de:1b81
     Vendor:Device 10de:1b81 found at 0000:50:00.0
     IOMMU group members (sans bridges):
     /sys/bus/pci/devices/0000:50:00.0/iommu_group/devices/0000:50:00.0
     /sys/bus/pci/devices/0000:50:00.0/iommu_group/devices/0000:50:00.1
     Binding...
     Successfully bound the device 10de:1b81 at 0000:50:00.0 to vfio-pci
     ---
     Processing 0000:50:00.1 10de:10f0
     Vendor:Device 10de:10f0 found at 0000:50:00.1
     IOMMU group members (sans bridges):
     /sys/bus/pci/devices/0000:50:00.1/iommu_group/devices/0000:50:00.0
     /sys/bus/pci/devices/0000:50:00.1/iommu_group/devices/0000:50:00.1
     Binding...
     0000:50:00.0 already bound to vfio-pci
     0000:50:00.1 already bound to vfio-pci
     Successfully bound the device 10de:10f0 at 0000:50:00.1 to vfio-pci
     ---
     vfio-pci binding complete

     Devices listed in /sys/bus/pci/drivers/vfio-pci:
     lrwxrwxrwx 1 root root 0 Aug 9 07:54 0000:50:00.0 -> ../../../../devices/pci0000:40/0000:40:03.1/0000:50:00.0
     lrwxrwxrwx 1 root root 0 Aug 9 07:54 0000:50:00.1 -> ../../../../devices/pci0000:40/0000:40:03.1/0000:50:00.1

     ls -l /dev/vfio/
     total 0
     crw------- 1 root root 249, 0 Aug 9 07:54 74
     crw-rw-rw- 1 root root 10, 196 Aug 9 07:54 vfio
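     For anyone wanting to sanity-check the binding from the Unraid terminal, something like this should work (50:00 is my card's PCI address; substitute your own):

        # Show which kernel driver currently owns the GPU and its audio function;
        # after a successful bind both should report "Kernel driver in use: vfio-pci"
        lspci -nnk -s 50:00.0
        lspci -nnk -s 50:00.1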
  5. I'm not certain, but I believe IOMMU is enabled in the BIOS. I can see my GPU listed under an IOMMU group in Unraid (Tools > System Devices), with its checkboxes unchecked:
     IOMMU group 74:
       [10de:1b81] 50:00.0 VGA compatible controller: NVIDIA Corporation GP104 [GeForce GTX 1070] (rev a1)
       [10de:10f0] 50:00.1 Audio device: NVIDIA Corporation GP104 High Definition Audio Controller (rev a1)
     I don't know if the GPU's IOMMU group is bound to VFIO at boot (and don't know how to check; there's a note on checking after the XML below). For the record, I looked at the VM's log and found this VFIO message. Perhaps my problem?
     "2021-08-08T22:55:18.709941Z qemu-system-x86_64: vfio_region_write(0000:50:00.0:region1+0x42478, 0x0,8) failed: Device or resource busy"
     When you say "passthrough parts", I'm assuming you mean in the VM template, right? If so, I've selected my GPU, the NVIDIA audio device, and tried all three available USB controller options in the VM template's dropdown. Below is the XML view of the VM's template.
     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' id='31'>
       <name>Windows 10 Pro</name>
       <uuid>b178c5a5-184a-f35a-04d3-5ab9b25d87f2</uuid>
       <metadata>
         <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
       </metadata>
       <memory unit='KiB'>16777216</memory>
       <currentMemory unit='KiB'>16777216</currentMemory>
       <memoryBacking>
         <nosharepages/>
       </memoryBacking>
       <vcpu placement='static'>34</vcpu>
       <cputune>
         <vcpupin vcpu='0' cpuset='1'/>
         <vcpupin vcpu='1' cpuset='33'/>
         <vcpupin vcpu='2' cpuset='2'/>
         <vcpupin vcpu='3' cpuset='34'/>
         <vcpupin vcpu='4' cpuset='3'/>
         <vcpupin vcpu='5' cpuset='35'/>
         <vcpupin vcpu='6' cpuset='4'/>
         <vcpupin vcpu='7' cpuset='36'/>
         <vcpupin vcpu='8' cpuset='5'/>
         <vcpupin vcpu='9' cpuset='37'/>
         <vcpupin vcpu='10' cpuset='6'/>
         <vcpupin vcpu='11' cpuset='38'/>
         <vcpupin vcpu='12' cpuset='7'/>
         <vcpupin vcpu='13' cpuset='39'/>
         <vcpupin vcpu='14' cpuset='8'/>
         <vcpupin vcpu='15' cpuset='40'/>
         <vcpupin vcpu='16' cpuset='9'/>
         <vcpupin vcpu='17' cpuset='41'/>
         <vcpupin vcpu='18' cpuset='10'/>
         <vcpupin vcpu='19' cpuset='42'/>
         <vcpupin vcpu='20' cpuset='11'/>
         <vcpupin vcpu='21' cpuset='43'/>
         <vcpupin vcpu='22' cpuset='12'/>
         <vcpupin vcpu='23' cpuset='44'/>
         <vcpupin vcpu='24' cpuset='13'/>
         <vcpupin vcpu='25' cpuset='45'/>
         <vcpupin vcpu='26' cpuset='14'/>
         <vcpupin vcpu='27' cpuset='46'/>
         <vcpupin vcpu='28' cpuset='15'/>
         <vcpupin vcpu='29' cpuset='47'/>
         <vcpupin vcpu='30' cpuset='16'/>
         <vcpupin vcpu='31' cpuset='48'/>
         <vcpupin vcpu='32' cpuset='17'/>
         <vcpupin vcpu='33' cpuset='49'/>
       </cputune>
       <resource>
         <partition>/machine</partition>
       </resource>
       <os>
         <type arch='x86_64' machine='pc-i440fx-5.1'>hvm</type>
         <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
         <nvram>/etc/libvirt/qemu/nvram/b178c5a5-184a-f35a-04d3-5ab9b25d87f2_VARS-pure-efi.fd</nvram>
       </os>
       <features>
         <acpi/>
         <apic/>
         <hyperv>
           <relaxed state='on'/>
           <vapic state='on'/>
           <spinlocks state='on' retries='8191'/>
           <vendor_id state='on' value='none'/>
         </hyperv>
       </features>
       <cpu mode='host-passthrough' check='none' migratable='on'>
         <topology sockets='1' dies='1' cores='17' threads='2'/>
         <cache mode='passthrough'/>
         <feature policy='require' name='topoext'/>
       </cpu>
       <clock offset='localtime'>
         <timer name='hypervclock' present='yes'/>
         <timer name='hpet' present='no'/>
       </clock>
       <on_poweroff>destroy</on_poweroff>
       <on_reboot>restart</on_reboot>
       <on_crash>restart</on_crash>
       <devices>
         <emulator>/usr/local/sbin/qemu</emulator>
         <disk type='file' device='disk'>
           <driver name='qemu' type='raw' cache='writeback'/>
           <source file='/mnt/user/domains/Windows 10 Pro Template/vdisk1.img' index='1'/>
           <backingStore/>
           <target dev='hdc' bus='sata'/>
           <boot order='1'/>
           <alias name='sata0-0-2'/>
           <address type='drive' controller='0' bus='0' target='0' unit='2'/>
         </disk>
         <controller type='sata' index='0'>
           <alias name='sata0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
         </controller>
         <controller type='virtio-serial' index='0'>
           <alias name='virtio-serial0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
         </controller>
         <controller type='pci' index='0' model='pci-root'>
           <alias name='pci.0'/>
         </controller>
         <controller type='usb' index='0' model='qemu-xhci' ports='15'>
           <alias name='usb'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
         </controller>
         <interface type='bridge'>
           <mac address='52:54:00:ac:e6:5f'/>
           <source bridge='br0'/>
           <target dev='vnet0'/>
           <model type='virtio-net'/>
           <alias name='net0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
         </interface>
         <serial type='pty'>
           <source path='/dev/pts/0'/>
           <target type='isa-serial' port='0'>
             <model name='isa-serial'/>
           </target>
           <alias name='serial0'/>
         </serial>
         <console type='pty' tty='/dev/pts/0'>
           <source path='/dev/pts/0'/>
           <target type='serial' port='0'/>
           <alias name='serial0'/>
         </console>
         <channel type='unix'>
           <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-31-Windows 10 Pro/org.qemu.guest_agent.0'/>
           <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
           <alias name='channel0'/>
           <address type='virtio-serial' controller='0' bus='0' port='1'/>
         </channel>
         <input type='tablet' bus='usb'>
           <alias name='input0'/>
           <address type='usb' bus='0' port='1'/>
         </input>
         <input type='mouse' bus='ps2'>
           <alias name='input1'/>
         </input>
         <input type='keyboard' bus='ps2'>
           <alias name='input2'/>
         </input>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x50' slot='0x00' function='0x0'/>
           </source>
           <alias name='hostdev0'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
         </hostdev>
         <hostdev mode='subsystem' type='pci' managed='yes'>
           <driver name='vfio'/>
           <source>
             <address domain='0x0000' bus='0x50' slot='0x00' function='0x1'/>
           </source>
           <alias name='hostdev1'/>
           <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
         </hostdev>
         <hub type='usb'>
           <alias name='hub0'/>
           <address type='usb' bus='0' port='2'/>
         </hub>
         <memballoon model='none'/>
       </devices>
       <seclabel type='dynamic' model='dac' relabel='yes'>
         <label>+0:+100</label>
         <imagelabel>+0:+100</imagelabel>
       </seclabel>
     </domain>
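     For anyone else wondering how to check whether the IOMMU actually came up at boot, something like the following should show it from the Unraid terminal (the exact dmesg wording varies by platform; mine is an AMD board, so AMD-Vi is what to look for):

        # Look for AMD-Vi / IOMMU initialisation messages from the kernel
        dmesg | grep -i -e iommu -e amd-vi | head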
  6. I've been working through setting up my first Unraid server. I've configured some plugins, precleared and SMART-tested my HDDs, started the array, set up several cache pools, and added some shares. I'm currently in the process of setting up a Windows 10 Pro VM. I managed to install Windows, and it works fine, whether I use Unraid's VNC viewer or RDP into it, as long as "VNC" is selected as the graphics option in Unraid's VM template. Unfortunately, when I select my actual card (EVGA GTX 1070) I can't connect via RDP. Any suggestions as to what the problem might be?
  7. Sending it in for RMA is a bit of a pain. Can you elaborate on the errors in the SMART log? Is it something I need to fix right away, or is it something where a utility could mark the bad sector and I could play it by ear to see if more errors show up later (i.e. RMA it later if needed)?
  8. I'm in the process of building my first Unraid server. I ran preclear once and one of the hard drives failed (a Seagate Exos 18TB), so I ran Unraid's short SMART test and it failed too. The drive that failed is new. Other than the preclear and SMART test errors, the drive seems to work (e.g. I was able to format it). Is it possible to fix the error, or should I be looking to RMA the drive? (I've attached the SMART report; a note on pulling the same data from the CLI is below.) smart-20210729-1519.zip
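     For reference, something like the following should pull the same SMART data from the Unraid terminal (/dev/sdX is a placeholder for the Exos drive's device name on your system):

        # Full SMART attributes and self-test log for the drive in question
        smartctl -a /dev/sdX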
  9. You mention "raid level". Do Unraid pools offer an exact software parallel to classical RAID? (i.e. RAID 0 for read/write performance but no drive redundancy, RAID 1 for mirroring but no performance benefit, RAID 5 for increased read speeds but slower writes, RAID 10 for better read/write with both striping and mirroring because there's no parity?)
  10. I have an ASUS Hyper M.2 card (PCIe 4.0) and some PCIe 4.0 NVMe SSDs. One of the (hypothetical) use cases for my Unraid server is a Windows 10 VM that acts as a compile machine (Visual Studio) that I can connect to locally from my laptop using RDP. I'd like to allocate as many resources as possible to that VM to speed up compile times. Ideally, I'd like to combine the 4 M.2 SSDs on the Hyper card plus 1 M.2 SSD on the board into a single combined persistent cache pool that would tolerate an SSD failure while also providing better read/write performance than a single or mirrored SSD. Is this possible with Unraid? Alternatively, could I achieve the same effect by allocating multiple unassigned drives to the VM and software-RAIDing them together in Windows to get better performance than a standalone drive? Some other option?
  11. Btw, someone told me Unraid supports up to 30 drives in the array but only 1 array. Is that 30 drives total in Unraid (i.e. 28 usable + 2 parity), or could I have additional unassigned drives past 30? For instance, although a less-than-optimal solution, it could still be useful to have a local backup of critical data broken up across various unassigned drives. Or, say, some extra hot-swap bays where I could manually import data from a drive from another machine or do a drive-to-drive copy. (Does Unraid allow me to do that?)
  12. Thanks for the link. Fun read. Ideally, if starting from scratch, I would buy a case with 30+ accessible bays, mobo, CPU, etc. At the moment I'm trying to save money by repurposing my existing equipment as much as possible.
  13. I was going to provide a link to a dedicated external HBA card but just realized I have an issue with it. I currently have an 8-port LSI HBA (internal only) and was planning to use the spare SATA ports on the mobo for the other four bays the case supports. As I only have four PCIe slots to work with (the other three have been allocated elsewhere), could I replace my LSI HBA with another 8-port HBA that also has two external SFF-8088 ports? (Which would then connect to the other case's SFF-8088-to-8087 adapter, which in turn would connect to the Intel expander you mention, right?) If so, as this is a budget build, could you recommend an HBA from eBay? (I can usually find less expensive imports from Asia or used ones.)
  14. 5 "The array needs to go offline to replace disks." Argh. For clarification, which option below are you describing? (assuming option A but just to confirm) A. array would remain up (in degraded mode) even with bad disk (assuming parity disk(s) exists). if I wanted to fix the issue though, I would need to manually shut down array to physically add new drive (shutting down any programs that depend on it), then manually add new drive to Unraid array, --- and then array would be active while rebuilding? (i.e just a few minutes of downtime) B. array would immediately go down while in degraded mode. (i.e. could take awhile before I notice) C array would remain up in degraded mode and would stay down during rebuild? (i.e. rebuild would take the system down for days if using 18GB drives) D. array would be down while degraded and would stay down during rebuild? (i.e. rebuild would take the system down for days if using 18GB drive) E. Other? Secondary related question... The bulk of the data is non-critical storage (e..g Chia plots, multi-media, backups). is there a way to create a couple of additional smaller arrays (e.g using M.2 drives) specifically for the purpose of mirroring critical storage so if one array goes down the other array can keep critical apps up while I work on restoring the array with the problem? (e.g. docker images running development webserver, bitwarden, bookmark sync software, etc)
  15. Thanks for the link. I have a couple of 4U cases lying around, so something like this might work. It's opened up a proverbial rabbit hole of more questions though.
     1. The poster says he used an HBA card with external ports in the primary case (the actual server), which uses some cable to connect to an "SFF-8088 to 8087 adapter" in the secondary case, which in turn connects to a 24-port RES2SV240 expander (powered by Molex), which in turn connects via breakout cables to the drives in the secondary case. Is that a correct synopsis? Assuming the above is true:
     2. Would this HBA card work?
     3. What is the name of the cable he used? (Assuming SFF-8088, but just to confirm.) https://www.amazon.ca/Deconn-External-SFF-8088-Cable-Attached/dp/B00S7KTXW6/ref=sr_1_1_sspa?dchild=1&keywords=sff-8088&qid=1623055881&sr=8-1-spons&psc=1&smid=A3CMOOTCHB9X54&spLa=ZW5jcnlwdGVkUXVhbGlmaWVyPUExVjBDRVY5STM4RkRYJmVuY3J5cHRlZElkPUEwOTI5ODk2M1FYSTc2SEJDREM1NyZlbmNyeXB0ZWRBZElkPUEwNTM5OTgyMTRKQ1BKWldDV0g2TyZ3aWRnZXROYW1lPXNwX2F0ZiZhY3Rpb249Y2xpY2tSZWRpcmVjdCZkb05vdExvZ0NsaWNrPXRydWU=
     4. What SFF-8088 to 8087 adapter would you recommend that is compatible with Unraid?
     5. What would power the above SFF-8088 to 8087 adapter? Is there some cable I could use to connect it to the main case?
     6. In a similar vein, what is powering the HDDs in the secondary case? Are they powered by SATA power from the first case?