Posts posted by JesterEE

  1. Hey community!  I was really trying hard NOT to write this post.  My Google-fu is pretty strong, and I thought for sure I could find a solution to my issue on the vast interweb ... but I have just found lots of things that didn't seem to work 😥.

     

    A lot of what I did for VM setup and debugging has come from the Unraid wiki, the Unraid forum, SpaceInvader One's YouTube channel, /r/VFIO, the Level1Techs VFIO forum, and MathiasHueber.com's Game VM Tweak Guide.

     

    I'm looking for help/insight into the issues I'm seeing, helpful new troubleshooting steps, or just corroboration that someone with a similar system (X570 MB, CPU architecture, Unraid/QEMU version) is seeing the same types of things ... and that I'm not crazy.  The system works, the pass-through works ... it's just that the end experience leaves a fair amount to be desired.  Far from bare metal!  My last hardware build on Unraid 6.7.2-stable was under-powered, so I assumed the stuttering/lag/etc. I was seeing was because my hardware wasn't up to the task.  But I recently upgraded to 2019 tech and am still seeing the same types of issues.  I am perplexed!

     

    System

    Unraid 6.8.0-stable (LinuxServer.io NVIDIA build) | ASUS ROG Strix X570-E, BIOS 1405 (11/26/2019, AGESA 1.0.0.4 Patch B) | AMD Ryzen 3800X (8C/16T) | 32GB DDR4-3600 | Gigabyte NVIDIA GTX 1060 6GB Windforce

    The Windows 10 VM is a vdisk hosted on an Unassigned Devices 2.5" 1TB SATA SSD.

     

    Problem

    In a Windows 10 v1909 i440fx-4.1.1 VM with NVIDIA GPU and on-board USB pass-through, provisioned with an isolated 4C/8T from the 2nd CCX and 16GB RAM, I get a non-negligible amount of video/HDMI audio stuttering when watching YouTube, gaming, forcing audio output (i.e. clicking the Windows volume slider), etc.  This happens regardless of what else is running on my Unraid system (e.g. dockers), but more frequently with other applications running.

     

    Other Notes

    • The ACS patch and unsafe interrupts are not required for my system to do hardware pass-through appropriately.
    • I notice that the lag is accompanied by either:
      • high CPU/single-thread spikes on either the isolated VM cores or the host cores where the emulator and iothreads are processed (observed in the Unraid Dashboard), OR
      • guest VM I/O spikes (virtio ethernet/primary disk, observed in the guest Task Manager).
    • I tried isolating the cores on the 1st CCX along with the cores on the 2nd CCX.  This caused input lag in the VM regardless of whether the cores were assigned to the VM or not.
    • I tried to set up my 2nd on-board gigabit NIC to be passed to the VM (maybe to alleviate the virtio ethernet lag spikes).  After adding the NIC (which is in its own IOMMU group) to the vfio-pci.cfg file and restarting, I tested the VM before assigning the NIC to it (i.e. no changes to the VM).  The host change alone caused input lag in the VM just like the previous bullet.  This seemed like a non-starter, so I reverted the binding without trying the NIC in the VM (the binding approach is sketched just below).
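
    Aside on the binding itself: besides the plugin's vfio-pci.cfg file, devices can also be reserved with the stock vfio-pci.ids kernel parameter in syslinux.cfg.  A minimal sketch, using the I211 NIC's vendor:device ID from my IOMMU list below (your boot stanza and other append arguments may differ):

      label Unraid OS
        kernel /bzimage
        append vfio-pci.ids=8086:1539 initrd=/bzroot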

     

    Attempted Troubleshooting

    1. Enabling Message-Signaled Interrupts (MSI) for the GPU/HDMI audio devices with MSI Utility v2 (see the registry sketch after this list)
      • Yes, this did not fix it!
      • Yes, I re-enabled MSI of the devices when I installed new GPU drivers
      • Yes, I restarted the guest between applying and testing
    2. Adding/Removing MSI for other virtio devices
    3. Disabling the Unraid Docker service during testing (i.e. freeing up host resources)
    4. Updated Windows 10 guest virtio drivers to v0.1.173-2 (Link)
    5. Limited cores (1, 2, 4 HT core(s))
    6. Isolated/Non-isolated VM cores
    7. emulatorpin/iothreadpin VM template directives
    8. Hyper-V Enlightenment template directives
    9. Changing the HDMI audio pass-through guest address to be the same as the GPU, but with a different function (similar to the way it is on the host)
      •     <hostdev mode='subsystem' type='pci' managed='yes'>
              <driver name='vfio'/>
              <source>
                <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
              </source>
              <rom file='/mnt/user/Server/config/Gigabyte.GTX1060.rom'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
            </hostdev>
            <hostdev mode='subsystem' type='pci' managed='yes'>
              <driver name='vfio'/>
              <source>
                <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
              </source>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
            </hostdev>

         

      • to

      •     <hostdev mode='subsystem' type='pci' managed='yes'>
              <driver name='vfio'/>
              <source>
                <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
              </source>
              <rom file='/mnt/user/Server/config/Gigabyte.GTX1060.rom'/>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0' multifunction='on'/>
            </hostdev>
            <hostdev mode='subsystem' type='pci' managed='yes'>
              <driver name='vfio'/>
              <source>
                <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
              </source>
              <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x1'/>
            </hostdev>

         

    10. Windows guest default audio format change (CD/DVD/Studio 16-bit/24-bit)
    11. Q35-4.1.1 VM with the same ... everything
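
    Regarding item 1 above: MSI Utility just flips a registry value, so the equivalent manual change in the guest (run from an elevated command prompt) looks roughly like the following.  The device instance path here is a placeholder; substitute your GPU's actual path from Device Manager:

      rem <device-instance-path> is hypothetical; find your GPU's real path in Device Manager
      reg add "HKLM\SYSTEM\CurrentControlSet\Enum\PCI\<device-instance-path>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties" /v MSISupported /t REG_DWORD /d 1 /f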

     

    VM Setup

     

    VM Template

    <?xml version='1.0' encoding='UTF-8'?>
    <domain type='kvm'>
      <name>Vanellope</name>
      <uuid>2eb4ab82-9e1e-dc2b-b155-d61c76458527</uuid>
      <description>Windows 10 Gaming VM</description>
      <metadata>
        <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
      </metadata>
      <memory unit='KiB'>16777216</memory>
      <currentMemory unit='KiB'>16777216</currentMemory>
      <memoryBacking>
        <nosharepages/>
      </memoryBacking>
      <vcpu placement='static'>8</vcpu>
      <iothreads>2</iothreads>
      <cputune>
        <vcpupin vcpu='0' cpuset='4'/>
        <vcpupin vcpu='1' cpuset='12'/>
        <vcpupin vcpu='2' cpuset='5'/>
        <vcpupin vcpu='3' cpuset='13'/>
        <vcpupin vcpu='4' cpuset='6'/>
        <vcpupin vcpu='5' cpuset='14'/>
        <vcpupin vcpu='6' cpuset='7'/>
        <vcpupin vcpu='7' cpuset='15'/>
        <emulatorpin cpuset='3,11'/>
        <iothreadpin iothread='1' cpuset='1,9'/>
        <iothreadpin iothread='2' cpuset='2,10'/>
      </cputune>
      <os>
        <type arch='x86_64' machine='pc-i440fx-4.1'>hvm</type>
        <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
        <nvram>/etc/libvirt/qemu/nvram/2eb4ab82-9e1e-dc2b-b155-d61c76458527_VARS-pure-efi.fd</nvram>
      </os>
      <features>
        <acpi/>
        <apic/>
        <hyperv>
          <relaxed state='on'/>
          <vapic state='on'/>
          <vpindex state='on'/>  <!-- https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/ -->
          <synic state='on'/>  <!-- https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/ -->
          <stimer state='on'/>  <!-- https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/ -->
          <reset state='on'/>  <!-- https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/ -->
          <spinlocks state='on' retries='8191'/>
          <vendor_id state='on' value='none'/>
          <frequencies state='on'/>  <!-- https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/ -->
        </hyperv>
        <vmport state='off'/>  <!-- https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/ -->
        <ioapic driver='kvm'/>    <!-- required for QEMU 4.0 or later -->
      </features>
      <cpu mode='host-passthrough' check='none'>
        <topology sockets='1' cores='8' threads='1'/>
        <cache mode='passthrough'/>
      </cpu>
      <clock offset='localtime'>
        <timer name='hypervclock' present='yes'/>
        <timer name='hpet' present='no'/>
        <timer name='rtc' present='no' tickpolicy='catchup'/>  <!-- https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/ -->
        <timer name='pit' present='no' tickpolicy='delay'/>  <!-- https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/ -->
        <timer name='tsc' present='yes' mode='native'/>  <!-- https://mathiashueber.com/performance-tweaks-gaming-on-virtual-machines/ -->
      </clock>
      <on_poweroff>destroy</on_poweroff>
      <on_reboot>restart</on_reboot>
      <on_crash>restart</on_crash>
      <devices>
        <emulator>/usr/local/sbin/qemu</emulator>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='none'/>
          <source file='/mnt/disks/CT1000MX500SSD1_1820E13CF91D/domains/Vanellope/Vanellope_i440fx_4p1p1.img'/>
          <target dev='hdc' bus='virtio'/>
          <boot order='1'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
        </disk>
        <disk type='file' device='disk'>
          <driver name='qemu' type='raw' cache='none'/>
          <source file='/mnt/disks/CT1000MX500SSD1_1820E13CF91D/games/blizzard/blizzard.img'/>
          <target dev='hdd' bus='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/win_10_1909_x64.iso'/>
          <target dev='hda' bus='ide'/>
          <readonly/>
          <boot order='2'/>
          <address type='drive' controller='0' bus='0' target='0' unit='0'/>
        </disk>
        <disk type='file' device='cdrom'>
          <driver name='qemu' type='raw'/>
          <source file='/mnt/user/isos/virtio-win-0.1.173.iso'/>
          <target dev='hdb' bus='ide'/>
          <readonly/>
          <address type='drive' controller='0' bus='0' target='0' unit='1'/>
        </disk>
        <controller type='pci' index='0' model='pci-root'/>
        <controller type='ide' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x1'/>
        </controller>
        <controller type='virtio-serial' index='0'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
        </controller>
        <controller type='usb' index='0' model='ich9-ehci1'>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x7'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci1'>
          <master startport='0'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0' multifunction='on'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci2'>
          <master startport='2'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x1'/>
        </controller>
        <controller type='usb' index='0' model='ich9-uhci3'>
          <master startport='4'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x2'/>
        </controller>
        <interface type='bridge'>
          <mac address='52:54:00:29:0d:5c'/>
          <source bridge='br0'/>
          <model type='virtio'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
        </interface>
        <serial type='pty'>
          <target type='isa-serial' port='0'>
            <model name='isa-serial'/>
          </target>
        </serial>
        <console type='pty'>
          <target type='serial' port='0'/>
        </console>
        <channel type='unix'>
          <target type='virtio' name='org.qemu.guest_agent.0'/>
          <address type='virtio-serial' controller='0' bus='0' port='1'/>
        </channel>
        <input type='mouse' bus='ps2'/>
        <input type='keyboard' bus='ps2'/>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x09' slot='0x00' function='0x0'/>
          </source>
          <rom file='/mnt/user/Server/config/Gigabyte.GTX1060.rom'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x09' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x06' slot='0x00' function='0x1'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
        </hostdev>
        <hostdev mode='subsystem' type='pci' managed='yes'>
          <driver name='vfio'/>
          <source>
            <address domain='0x0000' bus='0x06' slot='0x00' function='0x3'/>
          </source>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x0a' function='0x0'/>
        </hostdev>
        <memballoon model='none'/>
      </devices>
    </domain>
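
    To spot-check that the pinning above actually takes effect, a couple of host-side commands are handy (a minimal sketch; the domain name is from this template):

      lscpu -e                     # map logical CPUs to cores/CCXs
      virsh vcpupin Vanellope      # show live vCPU -> host CPU pinning
      virsh emulatorpin Vanellope  # show live emulator thread pinning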


    IOMMU

    IOMMU group 0:	[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 1:	[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    IOMMU group 2:	[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 3:	[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 4:	[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge
    IOMMU group 5:	[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 6:	[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 7:	[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 8:	[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 9:	[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge
    IOMMU group 10:	[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 11:	[1022:1484] 00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 12:	[1022:1484] 00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]
    IOMMU group 13:	[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)
    [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)
    IOMMU group 14:	[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0
    [1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1
    [1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2
    [1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3
    [1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4
    [1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5
    [1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6
    [1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7
    IOMMU group 15:	[1022:57ad] 01:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57ad
    IOMMU group 16:	[1022:57a3] 02:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3
    IOMMU group 17:	[1022:57a3] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3
    IOMMU group 18:	[1022:57a3] 02:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3
    IOMMU group 19:	[1022:57a4] 02:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
    [1022:1485] 06:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
    [1022:149c] 06:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    [1022:149c] 06:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    IOMMU group 20:	[1022:57a4] 02:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
    [1022:7901] 07:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
    IOMMU group 21:	[1022:57a4] 02:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4
    [1022:7901] 08:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
    IOMMU group 22:	[1000:0072] 03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)
    IOMMU group 23:	[10ec:8125] 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller
    IOMMU group 24:	[8086:1539] 05:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
    IOMMU group 25:	[10de:1c03] 09:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] (rev a1)
    [10de:10f1] 09:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)
    IOMMU group 26:	[1022:148a] 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function
    IOMMU group 27:	[1022:1485] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP
    IOMMU group 28:	[1022:1486] 0b:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP
    IOMMU group 29:	[1022:149c] 0b:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller
    IOMMU group 30:	[1022:1487] 0b:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller
    IOMMU group 31:	[1022:7901] 0c:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)
    IOMMU group 32:	[1022:7901] 0d:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)

    Diagnostics: Attached

     

    lstopo: Attached

     

    Thanks for looking through this (long) post!

    -JesterEE

     

     

    topo_cogsworth_v3.png

    cogsworth-diagnostics-20191224-1958.zip

  2. @snailbrain did you ever get completely to root cause? I have a 95% decent experience on my QEMU 4.1 i440fx Windows 10 VM, but I am getting audio glitching somewhat frequently through my NVIDIA GTX 1060 HDMI audio pass-through, similar to what you have in your video. It's really jarring and kind of killing the enjoyable experience, so I want it gone! I've done most of the same configuration you have, except I'm still using a vdisk for the OS loaded on an unassigned device and still using the virtual network interface. Your experience is the closest I've seen to what I have been experiencing, and you've done a bunch more testing than I have ... though I'm getting close 😂.

     

    Anything you can share would be great!

     

    Thanks

    -JesterEE

  3. After following this thread, I thought I would spin up 2 new, identically configured Windows 10 VMs in Unraid 6.8.0 to see if the i440fx vs Q35 debate shows anything for my system under QEMU 4.1.

     

    Originally I had some issues on my QEMU 3.0 i440fx Windows 10 VM with video glitching, so an update was necessary anyway.  Before getting entrenched in a VM architecture, I wanted to see which was going to pan out best at the outset of my new build.

     

    For these benchmarks I ran a "gaming oriented" (Hyper-V enlightenments, isolated CPUs, emulator pinning, iothread pinning) Windows 10 VM with 4 HT cores of an AMD 3800X in performance mode and pass-through of a Gigabyte NVIDIA GeForce GTX 1060 6GB Windforce.

     

    Here are the AIDA64 results, which show a negligible difference in performance between the 2 VMs.  Likely, if I ran a statistically relevant number of tests, they would be the same.  Notably, in the Q35 VM, NVIDIA System Info reports my PCI-E lanes correctly (x16 PCI-E 3.0 for this card).

     

    In the end, I went with the i440fx machine.  I'm not saying I agree with getting rid of Q35 as an option.  Options are good for everyone (as long as they aren't broken)!  Just a data point.

     

    -JesterEE

    cogsworth_vm_memory_benchmark.png

    cogsworth_vm_gpgpu_benchmark.png

  4. Thanks @Skitals for this post!  I was having a tough time configuring my ASUS ROG Strix X570-E for VFIO, and the information you distilled here really helped!

     

    For other Unraid users looking to run this board (which I recommend!), the information that @Skitals put forth basically holds for the ASUS ROG Strix X570-E as well.  I will note the differences I saw in my experience.  Note this data was pulled with MB BIOS 1405 (11/26/2019) [AGESA 1.0.0.4 Patch B].

     

    UEFI / BIOS Settings:

    Advanced -> AMD CBS -> IOMMU -> Enabled (Enable IOMMU)

    Advanced -> CPU Configuration -> SVM Mode -> Enabled (Enable CPU Virtualization)

    Advanced -> USB Configuration -> XHCI Hand-off -> Enabled (XHCI handled by BIOS [might not be necessary])

     

    Settings -> AMD CBS -> ACS Enable -> Enable (Not an option on the ASUS ROG Strix X570-E)

    Settings -> AMD CBS -> Enable AER Cap -> Enable (Not an option on the ASUS ROG Strix X570-E)

     

    USB Pass-through:

    This part was all really the same (HUGE THANKS on this part!).  As @Skitals did, of the 3 USB controllers on the board, I first tried to pass through the obvious one that was by itself.  I played with a TON of settings and recommendations found all over the forums, especially here (including xen-pciback.hide boot configurations), but none worked with that controller.  When trying to boot a VM with this configuration, it would hard lock the server and force me to do a dirty restart 😡.  Finally, after finding and trying the recommendation here ... it was quite simple!

    For this board, the 2 USB controllers that you can pass through serve the 3 Type A and 1 Type C ports below the Ethernet jacks (pictured below) and the internal headers on the motherboard (for routing to the USB connections on the case and/or extenders).  The 4 Type A ports between the HDMI/DisplayPort and the Realtek 2.5G Ethernet/2x USB Type A are the ones that will remain in use for Unraid.  I attached my boot drive in the "BIOS" USB position (the one with the flash button) and it works just fine.

    I did not try passing through all the 0b:00.# devices as we need to do with the other controllers (i.e. 06:00.0 06:00.1 06:00.3).  I just tried reserving 0b:00.3, but if I had done all the devices, I bet this would have worked too.  Though, at this point, I'm not going to bother trying.

     

    Update 12/24/2019: I wanted to try to get my sound card passed through, so I did bother trying to pass through all the 0b:00.# devices (i.e. 0b:00.0 0b:00.1 0b:00.3 0b:00.4).  It did not work.  Here's a reddit VFIO thread with other people discussing the same problem.  Looks like we'll all need an AGESA patch to fix this one.

     

    Here are the USB controllers on my IOMMU (from the full list, below):

     

    Passed-Though USB Controller

    [1022:1485] 06:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP

    [1022:149c] 06:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

    [1022:149c] 06:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

     

    Unraid USB Controller

    IOMMU group 29:[1022:149c] 0b:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

    12858027_4.png

     

    Single (NVIDIA) GPU Pass-through:

    Also the same.  I needed to add the video=efifb:off declaration to get my system to boot and to release the boot framebuffer's association with my single NVIDIA GTX 1060.

     

    Note, this was not required earlier on when working with this board.  While I'm not 100% sure what I changed to make this necessary, I think I have an idea.  I migrated from another setup with a legacy, non-UEFI BIOS.  My Unraid USB boot drive was not set up for UEFI (i.e. the tick box in the Flash drive options), and when I first started Unraid after the hardware upgrade, I needed to enable the BIOS option for the Compatibility Support Module (CSM) to get the flash drive to boot.  This let me boot Unraid, have the main display on the GTX 1060 show the boot process and drop to a CLI, while also allowing dockers to utilize the NVIDIA card (via the LinuxServer.io NVIDIA build of Unraid) and the VM Manager to pass through the card when booting a VM.  When powering off the VM, I was returned to a CLI and everything worked!  This was nice ... but I didn't want to use the CSM if I didn't need to.  So, after telling the flash drive that I'm using UEFI and switching off CSM in the BIOS, I needed to use the video=efifb:off declaration to have the boot process drop utilization of the card.  So, no more CLI interface on my main display 😥.
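
    For reference, the declaration just gets appended to the kernel line in syslinux.cfg.  A minimal sketch of the relevant stanza (your other append arguments may differ):

      label Unraid OS
        kernel /bzimage
        append video=efifb:off initrd=/bzroot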

     

    Fan Speed Sensors and PWM Controllers:

    I didn't play with these at all once they didn't work right off in Unraid 6.8.0-stable.  There are some kernel patches that need to be applied for this to work with X570 boards and Ryzen 3000 processors, which are currently either deployed or optional in the Linux 5.4 kernel.  I opened a request to include them in future builds of Unraid.  If you're like me and want them for your setup, drop a like and a comment to let Limetech know there is interest!

     

     

    IOMMU:

    Here is how my system looks.  Note, there is a GeForce GTX 1060 and an LSI SAS2008 HBA installed; everything else is the MB itself.  The ACS patch is OFF.

    MB Bios 1405 (11/26/2019) [AGESA 1.0.0.4 Patch B].

     

    IOMMU group 0:[1022:1482] 00:01.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

    IOMMU group 1:[1022:1483] 00:01.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

    IOMMU group 2:[1022:1482] 00:02.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

    IOMMU group 3:[1022:1482] 00:03.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

    IOMMU group 4:[1022:1483] 00:03.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge

    IOMMU group 5:[1022:1482] 00:04.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

    IOMMU group 6:[1022:1482] 00:05.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

    IOMMU group 7:[1022:1482] 00:07.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

    IOMMU group 8:[1022:1484] 00:07.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]

    IOMMU group 9:[1022:1482] 00:08.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge

    IOMMU group 10:[1022:1484] 00:08.1 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]

    IOMMU group 11:[1022:1484] 00:08.2 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]

    IOMMU group 12:[1022:1484] 00:08.3 PCI bridge: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B]

    IOMMU group 13:[1022:790b] 00:14.0 SMBus: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller (rev 61)

    [1022:790e] 00:14.3 ISA bridge: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge (rev 51)

    IOMMU group 14:[1022:1440] 00:18.0 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0

    [1022:1441] 00:18.1 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1

    [1022:1442] 00:18.2 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2

    [1022:1443] 00:18.3 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3

    [1022:1444] 00:18.4 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4

    [1022:1445] 00:18.5 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5

    [1022:1446] 00:18.6 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6

    [1022:1447] 00:18.7 Host bridge: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7

    IOMMU group 15:[1022:57ad] 01:00.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57ad

    IOMMU group 16:[1022:57a3] 02:02.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3

    IOMMU group 17:[1022:57a3] 02:04.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3

    IOMMU group 18:[1022:57a3] 02:05.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a3

    IOMMU group 19:[1022:57a4] 02:08.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4

    [1022:1485] 06:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP

    [1022:149c] 06:00.1 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

    [1022:149c] 06:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

    IOMMU group 20:[1022:57a4] 02:09.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4

    [1022:7901] 07:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)

    IOMMU group 21:[1022:57a4] 02:0a.0 PCI bridge: Advanced Micro Devices, Inc. [AMD] Device 57a4

    [1022:7901] 08:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)

    IOMMU group 22:[1000:0072] 03:00.0 Serial Attached SCSI controller: Broadcom / LSI SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03)

    IOMMU group 23:[10ec:8125] 04:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller

    IOMMU group 24:[8086:1539] 05:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)

    IOMMU group 25:[10de:1c03] 09:00.0 VGA compatible controller: NVIDIA Corporation GP106 [GeForce GTX 1060 6GB] (rev a1)

    [10de:10f1] 09:00.1 Audio device: NVIDIA Corporation GP106 High Definition Audio Controller (rev a1)

    IOMMU group 26:[1022:148a] 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function

    IOMMU group 27:[1022:1485] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP

    IOMMU group 28:[1022:1486] 0b:00.1 Encryption controller: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP

    IOMMU group 29:[1022:149c] 0b:00.3 USB controller: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller

    IOMMU group 30:[1022:1487] 0b:00.4 Audio device: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller

    IOMMU group 31:[1022:7901] 0c:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)

    IOMMU group 32:[1022:7901] 0d:00.0 SATA controller: Advanced Micro Devices, Inc. [AMD] FCH SATA Controller [AHCI mode] (rev 51)

     

    USB Associations (Before reserving the 06:00.# devices):

    Bus 1 --> 0000:06:00.1 (IOMMU group 19)

    Bus 001 Device 005: ID 8087:0029 Intel Corp. 

    Bus 001 Device 004: ID 05e3:0610 Genesys Logic, Inc. 4-port hub

    Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

     

    Bus 2 --> 0000:06:00.1 (IOMMU group 19)

    Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

     

    Bus 3 --> 0000:06:00.3 (IOMMU group 19)

    Bus 003 Device 002: ID 0b05:18f3 ASUSTek Computer, Inc. AURA LED Controller

    Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

     

    Bus 4 --> 0000:06:00.3 (IOMMU group 19)

    Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub

     

    Bus 5 --> 0000:0b:00.3 (IOMMU group 29)
    Bus 005 Device 003: ID 154b:005b PNY Flash Drive (<-- Boot Drive)
    Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

     

    Bus 6 --> 0000:0b:00.3 (IOMMU group 29)

    Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
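
    For anyone wanting to regenerate this bus-to-controller mapping, a rough sketch of the commands (standard sysfs layout; the output formatting will differ slightly):

      lsusb  # produces the "Bus NNN Device NNN" lines above
      for b in /sys/bus/usb/devices/usb*; do
        echo "$(basename "$b") -> $(basename "$(dirname "$(readlink -f "$b")")")"
      done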

     

     

    Happy Computing!!!

    -JesterEE

     

  5. 2 hours ago, juan11perez said:

    @JesterEE

    I'm using a 3900X with an ASUS X470-F Gaming.  With Unraid 6.8.0-rc7 I get CPU temp but no motherboard readings.  With 6.8 stable, nothing.

     

    That's interesting!  That was the last release on the 5.4 kernel before the 6.8-RC line was reverted to the 4.19 kernel.  So, maybe, the CPU monitoring patches had already been applied, but not all the appropriate patches for temperature monitoring of the X470/X570 boards were incorporated.  Even more reason for this feature request!  @juan11perez thanks for sharing!

     

    -JesterEE

  6. 15 minutes ago, tjb_altf4 said:

    Can't speak for the other driver, but nct6775 is working on 6.7.2, are you sure it's gone in 6.8 ?

    EDIT: I believe this is specific to x570/Zen 3000 units.  If your sig is current, this will not apply to your system.

     

    No joy.  Loading both drivers in the System Temperature plugin results in no entries being added to the sensors pull-downs.  Running

    sensors -s

     from the CLI results in a "No sensors found" message.
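
    For anyone retracing this, the manual check from the CLI amounts to something like the following (a rough sketch; on this hardware the modules expose nothing until the kernel patches land):

      modprobe k10temp   # Ryzen CPU temperature driver
      modprobe nct6775   # Nuvoton Super I/O fan/temp driver
      sensors            # list detected chips and readings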

     

    I migrated from an i7 on Unraid 6.7.2/6.8.0 and this was not an issue with that architecture (obviously, different sensors apply to this build, so it's not apples to apples).

     

    Additional references:

    https://forum.level1techs.com/t/temperature-system-monitoring-for-ryzen-3000-and-x570-motherboards-in-linux/145548

    https://github.com/ocerman/zenpower

     

    -JesterEE

  7. I'm running a new Ryzen 3000 series system on Unraid 6.8.0 and cannot query the system's temperature sensors using the Dynamix System Temp plugin.  After a little digging, I found that users Skitals and Leoyzen made kernels forking 6.8.0-RC5 to support this feature (as well as others).  I'd like to see the Linux kernel patches for k10temp and nct6775 make it into the next Unraid RC build to support users of this platform and CPU architecture.

     

    Note, my Realtek R8125 on my ASUS ROG Strix X570-E seems to be working with the out-of-tree drivers, and I don't seem to have the VFIO bug that is referenced in the 2nd thread on my 6.8.0-stable system.

     

    References:

     

     

     

    Thank You!

     

    -JesterEE

  8. On 12/11/2019 at 1:33 PM, JesterEE said:

    Yes, this has been working without modification since my post. Make sure the docker is closed before making the edit or it may be overwritten by the active process.

    Also, IMHO, Deluge is less than graceful about its shutdown process, which has bitten me more than once in the past with various levels of state corruption.  Best case, a setting or 2 is lost ... worst case, the torrent states are all lost and I'm left with an empty seed list and dangling files unattached to any torrent 😱😡.  The latter (which is REALLY annoying) has yet to happen on my Unraid docker based system, so 🤞.

     

    Deluge does create backups (one backup) of config and state files, but it doesn't check for file corruption before doing so.  So, if the file(s) get corrupted, upon the next internal backup task the backup file will also be corrupted 😣.

     

    I tend to keep clean versions of my config files in the appdata directory that I can revert to quickly with a simple SSH and cp command.  Specifically, the following files:

    • core.conf
    • web.conf

    I'd recommend others do the same.  The state files are harder since they change often.  This is ripe for a Deluge plug-in, but I have not found one that does it, and I have more important things to do 😂.
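
    The revert itself is a one-liner once the clean copies exist.  A minimal sketch, assuming the container is named deluge and appdata lives in the usual spot (adjust both to your setup):

      # one-time: stash known-good copies
      cp /mnt/user/appdata/deluge/core.conf /mnt/user/appdata/deluge/core.conf.clean
      cp /mnt/user/appdata/deluge/web.conf /mnt/user/appdata/deluge/web.conf.clean

      # after corruption: stop the container first, then restore
      docker stop deluge
      cp /mnt/user/appdata/deluge/core.conf.clean /mnt/user/appdata/deluge/core.conf
      cp /mnt/user/appdata/deluge/web.conf.clean /mnt/user/appdata/deluge/web.conf
      docker start deluge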

     

    -JesterEE

  9. On 12/10/2019 at 9:16 AM, HynesJeff said:

    Does this still work for you?  When I restart the container, the default_daemon line is reset back to an empty string.

    Yes, this has been working without modification since my post. Make sure the docker is closed before making the edit or it may be overwritten by the active process.

  10. On 10/15/2019 at 12:36 AM, JesterEE said:

    Has anyone had any luck setting the default_daemon configuration parameter in the web.conf file in Deluge v2? It's the first time I'm using this docker (and Deluge 2) and the WebUI isn't automatically connecting to the daemon when I set it with:

    "default_daemon": "127.0.0.1:58846".

     

    Thanks

    -JesterEE

    So, this was the wrong way to do it.  The right way (in Deluge 2) is to look up the localclient host identifier in the hostlist.conf file (in the docker's appdata).  The identifier is the first entry in the localclient data structure.  It will look something like this:

    {
        "file": 3,
        "format": 1
    }{
        "hosts": [
            [
                "b9c637d760f46310352c4324a243ad12",
                "127.0.0.1",
                58846,
                "localclient",
                "13f527495ef04a34c6db4146324aaef42432bd2c"
            ]
        ]
    }

    The entry needed for this example is b9c637d760f46310352c4324a243ad12.

     

    Next, take this entry and place it as the default_daemon in the web.conf file.  For example:

    "default_daemon": "b9c637d760f46310352c4324a243ad12",

    -JesterEE

  11. 20 minutes ago, JohnS said:

     

    All the methods above are using servers on the lan or remotely, which I can see the use case for, but could Unraid also use a similar method as Bitlockered Microsoft Windows, using an inserted USB flash drive which has the keyfile on it.

    You can, but I don't really see the point. Physical access to the drives gives physical access to the USB key, and your encryption is no longer really helping you keep your data safe.

  12. 13 minutes ago, _Shorty said:

    Read the last couple of pages.

    Ok, ya, just did that, and I see someone reporting a possible unicode display issue.  Though, between my last post and now, I did a hash file export and got additional errors for the same files, this time with the correct path:

    Oct 22 21:28:22 Cogsworth bunker: error: no export of file: /mnt/disk4/Users/bsmith/Eng_archive/XMod/mach/v5/sys/depot/Filters/data_processing/coverlap_g_sdb.fil

    So, script bug?

     

    -JesterEE

  13. I ran a full check today and the plugin found some corruption errors.  I checked the system log and found that the paths of the files were not complete, making it hard to find the offending files for recovery.

     

    Here is an example of the output with a mangled path:

    Oct 22 17:58:39 Cogsworth bunker: error: SHA256 hash key mismatch, t/Filters/data_processing/coverlap_g_sdb.fil is corrupted

    Has anyone else seen this before? Is it a plugin bug or something I did wrong?

     

    -JesterEE

  14. Actually ... after understanding the Resilio Sync features a bit more, it looks like not every node of the swarm needs to have a Pro license to exchange data.  Nodes that should have all the shared data (i.e. server clients) can use the free version, and nodes that may only need parts of the data and want to use the selective sync capability (i.e. phone, laptop, etc. clients) need the Pro version.  So, in an Unraid setup, I can have a docker app utilizing the free version of the software for each user (with PUID, PGID, and UMASK set appropriately for each) and not have to worry about ACLs.  This would be more of an issue if my user count were in the 10s or 100s, but with single-digit users, this is not too big of a problem.  This might actually be better, in fact, because the Resilio Sync database will be unique for each user, and the share files, as well as the database files, can be owned by that user without any additional setup.  The only additional complication is more NetworkingFu to access the administration WebUI for each user, but that's manageable.
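
    A minimal sketch of what one of those per-user containers might look like with the LinuxServer.io image (the name, IDs, host port, and paths are made up for illustration):

      docker run -d --name=resilio-sync-alice \
        -e PUID=1001 -e PGID=100 -e UMASK=002 \
        -p 8889:8888 \
        -v /mnt/user/appdata/resilio-alice:/config \
        -v /mnt/user/Sync/alice:/sync \
        linuxserver/resilio-sync

    Each user gets their own appdata (database) and WebUI port, hence the NetworkingFu mentioned above.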

     

    Nevertheless, I agree with the OP in that not having a solid, easy-to-use, and functional way to control users and groups at the file system level is at best inconvenient and at worst a security risk.  I hope to see this in a future release!

     

    -JesterEE

  15. On 8/25/2018 at 6:46 AM, pwm said:


    Yes, I'm a bit sad that the groups file isn't represented in /boot/config like the other files.

     

    So the machine needs to recreate custom groups and assign users to them on boot (the 'go' file), like this:
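
    (Hypothetical sketch; the names, IDs, and exact commands below are illustrative, not pwm's original example.)

      # /boot/config/go -- recreate custom groups/users on every boot
      groupadd -g 1010 syncusers
      useradd -u 1100 -g syncusers -s /bin/false alice
      usermod -aG syncusers nobody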

    @pwm So I came across this post.  Can you please verify that you aren't using the setfacl command in your solution?  I was trying to set ACLs, and although the command is "supported" in Unraid (i.e. it is present), the array isn't mounted with ACL support, so the command won't work for me on 6.7.2 when trying to modify a directory on the array.  Thanks.

     

    My use case is a bit different.  I am using the LinuxServer Resilio Sync docker app and want to keep the file permissions intact while the docker service synchronizes the files between connected nodes in the swarm.  The problem is, the docker runs as a specific user:group (nobody:users by default) and not as the user that actually owns the files (which I set on the command line).  I could run N docker containers of the app for N users (each with $user:users PUID:PGID), but that would require N licenses of the software ... and I'm not about to do that when the 1 license I have is more than fine.  I was hoping to run the docker as a sync "super user" that is in the same group as the users that have files syncing ... and have the ACLs keep the PUID while the group inheritance is handled by the groupadd and useradd commands I define in the /boot/config/go file.

     

    If I needed to use chown commands, I'd have to use cron and/or inotifywait to constantly update the file attributes, which would be very far from a viable solution.

     

    -JesterEE

  16. On 9/30/2019 at 9:00 AM, tuxflux said:

    However, this gets reset every time the container is restarted or updated.

    I can't remember where I read it (I think it was on the GitHub issues log for Deluge), but I believe this is a Deluge issue.  The workaround was to stop the app and manually edit the core.conf file with the plugins you want enabled.  It looks like this file isn't being properly updated when some plugins are enabled/disabled, so it won't remember the setting the next time the app is launched.

  17. Has anyone had any luck setting the default_daemon configuration parameter in the web.conf file in Deluge v2? It's the first time I'm using this docker (and Deluge 2) and the WebUI isn't automatically connecting to the daemon when I set it with:

    "default_daemon": "127.0.0.1:58846".

     

    Thanks

    -JesterEE

  18. I hit the weirdest issue yesterday loading this docker for the first time.  On first load, JDownloader seems to need to update from the update server to function.  It just so happens that the update server (update.appwork.org) was down at the time!  Upon launching the docker and looking at the WebUI, I saw that it was asking me to change the internet connection method (i.e. direct connection or proxy), since it was unable to reach the update server and assumed there was a user configuration error rather than the update server being offline.  Upon canceling the connection, a fatal error was received and the program terminated.

     

    For posterity, this is the error I received (copied from the log output file; it will look a little different in the WebUI).

    --ID:1TS:1562019721687-7/1/19 6:22:01 PM -  [] -> Exception thrown at org.jdownloader.update.launcher.SecondLevelLauncher.init(SecondLevelLauncher.java:531): org.jdownloader.update.launcher.JDLauncherFailedException: java.lang.NoClassDefFoundError: org/jdownloader/startup/Main
            at org.jdownloader.update.launcher.SecondLevelLauncher.launchJDownloader(SecondLevelLauncher.java:760)
            at org.jdownloader.update.launcher.SecondLevelLauncher.init(SecondLevelLauncher.java:513)
            at org.jdownloader.update.launcher.SecondLevelLauncher.runMain(SecondLevelLauncher.java:247)
            at org.jdownloader.update.launcher.JDLauncherViaClassLoader.main(JDLauncherViaClassLoader.java:10)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:498)
            at org.jdownloader.update.launcher.JDLauncher.main(JDLauncher.java:79)
    Caused by: java.lang.NoClassDefFoundError: org/jdownloader/startup/Main
            at org.jdownloader.update.launcher.SecondLevelLauncher.launchJDownloader(SecondLevelLauncher.java:700)
            ... 8 more
    Caused by: java.lang.ClassNotFoundException: org.jdownloader.startup.Main
            at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
            at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
            ... 9 more

     

    If someone else gets this error on first launch, my suggestion is to ping the update server to see if you get a response.  If not, try again later.
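
    A quick way to check from the Unraid CLI (hostname taken from the error context above):

      ping -c 4 update.appwork.org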

  19. On 5/21/2016 at 6:56 AM, trurl said:

    Another thing I have found when rsyncing between two servers is that rsync will often create the folders before it writes the files. Depending on your split level, this can result in the destination drive being predetermined before the files are written. Once it has created the folder for a file, the split level can cause it to try to write files to that folder even after the drive is full. I don't know if there is any rsync option to prevent this behavior.

    Thanks for posting this on the forums so I can find it years later! 

     

    Ran into this today while migrating data to a new array ... VERY FRUSTRATING!  After investigating, I found that lots of empty folders in a user share hierarchy were created on one disk while the other was basically empty, so unRAID kept trying to populate the nearly full disk since the folders were already there.

     

    I hope unRAID in active use is better than in migration. It's not exactly been smooth sailing thus far.

  20. 4 hours ago, jonathanm said:

    Redundancy is about uptime availability, not data backup. You need to reconsider your storage strategy if it's currently possible to lose important data during this operation.

    Haha, you're not wrong 😁. I have backups of the really important, cry-for-a-week-if-you-lost-it stuff.  That's not an issue.  It's all those darn Linux isos ... They so big 🤣!

     

    In all seriousness though, cloud backup is on the list of things to do when I migrate.

     

    -JesterEE

  21. 7 hours ago, kizer said:

    The great thing about unRAID is you can spend as much or as little as you want to get started.

    Since I posted this topic I've done a lot of reading on the forum and searching for good solutions, and I think this is the truest statement I've seen yet!  Even though I haven't gotten all my concerns addressed, I think I have enough to go on to know that, even with some flaws, Unraid seems to still be the best solution currently available.  Worst case, I start with the hardware I have (which is pretty decent IMO), build up as I go and have need, and possibly add another machine for the heavier tasks.

     

    I was concerned about having the NAS and the media server(s) on 2 different PCs on the same LAN, but the internet says it works fine over a wired gigabit connection, so I'm less concerned about that as a fallback if everything doesn't run well on my current hardware alone.

     

    Hopefully in the next few weeks I can find the time to get my data in a place to migrate.  I'm a little paranoid about the move because, to get 1 parity drive up and start the array, I need to remove almost all the redundancy I currently have on my pool (14TB of 30TB).  I'm not going to feel great about it till every drive is added back in.  Also, I need to figure out a write cache RAID 1 of SSDs.  I have a 1TB SATA SSD, but I think getting another would be too much for this task.  I'm thinking 2x500GB drives would be more appropriate, and then I can use the 1TB for a VM drive or something.

     

    -JesterEE

  22. @fluisterben Thanks!  Much appreciated!  Most of these I have never seen ... I'll have to take a closer look!  So, you're looking to migrate too?  Thoughts on your own migration decision?

     


    So I looked a bit at dual-CPU server grade hardware and/or using server grade components in consumer housing solutions.  Man, really not liking the options here!  It seems you either get a cheapish 2nd hand (eBay) large capacity LOUD rack mount chassis that you have to modify to make work in a home setting (and still have to mount ... somewhere), deal with the loud server and buy an expensive acoustic mounting enclosure, buy some crazy expensive amalgamation case with lots of expansion slots and modification options, or get a less expensive case with extras to support additional expansion slots and be left with minimal room for routing/air flow (also still rather expensive).  I see a common thread here of expensive, and I cannot justify any of it :|.

     

    Going the cheapest possible option would leave only minimal future-proofing with insufficient cooling (IMO) and would still cost ~$150 new, with an additional ~$120 to upgrade to capacity ... and getting one used is not too easy, from some quick searches.  And that doesn't even take into consideration the extra fans and CPU coolers likely needed to prevent it from suffocating.  At that point, buying the 2nd hand server chassis is the cheapest option, but it's, so, LOUD!

     

    I see no good economical home solutions here.

     

    -JesterEE
