Posts posted by Jorgen

1. My oldest SSD seems to have finally given up the ghost. It passes a short SMART test, but the extended test always fails in the first 10% with read errors, if I have interpreted the SMART report correctly (attached).

It was mounted via unassigned devices and had no important data on it, so I'm not worried about data loss. I just want to know if it's definitely bound for the scrap heap, or if anything can be done to give it a new lease of life?
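
If you want to re-run the tests from the unRAID terminal instead of the GUI, the generic smartctl commands look like this (device name is a placeholder, check yours first):

    smartctl -t long /dev/sdX   # kick off an extended self-test in the background
    smartctl -a /dev/sdX        # report attributes, test results and the error log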

I think it's over 10 years old, and it has served me well, so I'm not really holding my breath here... :)

     

    tower-smart-20241205-1118.zip

  2. 5 hours ago, jdu said:

    Also, I'm confused what the VPN_INPUT/OUTPUT_PORTS should be if I want to simply enable port forwarding for Qbittorrent.

    Those ports are not used for the torrent port forwarding.

The input/output ports are used to allow traffic in or out of the container outside the VPN tunnel. Required for scenarios where other containers are sharing the same docker network, but I don't think that applies to you.
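
For completeness, they take comma-separated port lists as container variables. A hypothetical example (ports made up) for when another container on the same docker network needs to reach this one, and vice versa:

    VPN_INPUT_PORTS=9117    # traffic allowed INTO the container, bypassing the tunnel
    VPN_OUTPUT_PORTS=8080   # traffic allowed OUT of the container, bypassing the tunnel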

     

I think you would be much better off using one of the VPN-enabled torrent dockers from Binhex, rather than trying to use the proxy function of privoxy.

     

  3. I had a power outage and once the power was back on, I started my server, and then started the array from my mobile phone.

It appears the parity disk had some issues during boot, but I didn't notice on the small phone screen, hence starting the array without investigating.

    Diagnostics attached in current state. Array is running, parity is disabled.

Do I shut down the server, check the cabling to the parity disk, and boot it up again?

If the logs are clear after the reboot, how do I get the parity disk and array back into a working/protected state?

     

    tower-diagnostics-20240816-1354.zip

  4. On 8/10/2024 at 9:18 PM, Jorgen said:

Since the flickering/log errors are intermittent, I'm not sure if this has resolved the issue. I've been running it like this for a few hours, and so far so good, even after stress testing with 5 simultaneous 4K YouTube streams.

     

I'm going to mark this as resolved: no more flickering or log errors after 2 days of daily use. The problem showed up many times a day before adding the qemu override lines. I'll update the first post with the required steps.

That issue was about name resolution not working, preventing the deluge app from starting and hence making the web UI unavailable altogether. I'm not sure it resolved the issue of showing IPs on the docker page. I don't use custom networks and assigned IPs for dockers, so I can't help with that part.

    But if you can get to the deluge UI via the IP in the browser, you should be able to get to it from the Web UI button on the docker page by hardcoding the IP and port. Did you try that?
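
For example, something like this in the Web UI field, with a made-up address:

    http://192.168.1.50:8112/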

  6. 3 hours ago, Mlatx said:

    "[IP}:[PORT:8112]"

    If that's the exact error message you get from the browser, you have a syntax error in the config.

Use all square brackets; you have a curly bracket thrown in.
    This is my exact config if that helps:
     

    http://[IP]:[PORT:8112]/

     

Actually, on closer inspection [IP] seems to resolve to the server IP? If you use custom IPs for the dockers it probably won't work. I guess you could hardcode the actual docker IP in the config, but there might be other, more dynamic solutions too. That's above my pay grade though, sorry.

  7. On 7/30/2024 at 10:17 PM, Jorgen said:

    I'm struggling to convert the ProxMox formatted arguments below to something unraid will accept in the XML. All three of these are needed according to the GitHub page, and I have none of them added.

    args: -set device.hostpci0.addr=02.0 -set device.hostpci0.x-igd-gms=0x2 -set device.hostpci0.x-igd-opregion=on 

     

     

    Ok, finally had some time to look into this. The need for both x-igd-gms=0x2 and x-igd-opregion=on is explained here https://github.com/qemu/qemu/blob/master/docs/igd-assign.txt. BUT that doc is 8 years old and written for older generation iGPUs. SR-IOV and UEFI seem to have changed things, but I haven't found any definitive sources to explain if and why these two lines are required for new generations.

Nevertheless, the ROM GitHub page calls them out as requirements, so who am I to argue.

    This is how to add them to an unraid VM config.

     

    1. First off, you need to edit the VM in XML mode.
       
    2. Replace
      <domain type='kvm'>

      with

      <domain type='kvm' id='14' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

This allows us to add QEMU override commands to the XML.
       

    3. Add this block of code at the end of the XML, just before the closing </domain> tag

        <qemu:override>
          <qemu:device alias='hostdev0'>
            <qemu:frontend>
              <qemu:property name='x-igd-opregion' type='bool' value='true'/>
              <qemu:property name='x-igd-gms' type='unsigned' value='2'/>
            </qemu:frontend>
          </qemu:device>
        </qemu:override>

       

4. Unraid seems to automatically add hostdev alias names to the hostdev devices when the VM is running. For me, it correctly labelled the iGPU as hostdev0, but your mileage may vary... This is what my iGPU and audio hostdevs look like in the XML while the VM is running:
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </source>
            <alias name='hostdev0'/>
            <rom file='/mnt/cache/domains/iGPU_ROM_files/gen12_igd.rom'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x00' slot='0x1f' function='0x3'/>
            </source>
            <alias name='hostdev1'/>
            <rom file='/mnt/cache/domains/iGPU_ROM_files/gen12_gop.rom'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1' multifunction='on'/>
          </hostdev>
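
If you want to double-check the alias names on your own system, you can dump the live XML from the unRAID terminal while the VM is running (VM name hypothetical):

    virsh dumpxml "Windows 10" | grep -B 2 "alias name"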

Since the flickering/log errors are intermittent, I'm not sure if this has resolved the issue. I've been running it like this for a few hours, and so far so good, even after stress testing with 5 simultaneous 4K YouTube streams.

     

    I also made two more changes to the system, which might have helped as well:

• Removed the discrete GPU (GT710); the PCIe slot is now empty.
• Enabled HDMI sound for the iGPU in BIOS. Not sure why I had it disabled in the first place, or if it even mattered.

Will report back after a few more days of daily use.

     

Oh, I also spent a lot of time trying to work out how to support legacy mode (see instructions below), but it turns out unraid adds "-nodefaults" automatically, so that was all taken care of already.

    Quote

     To make use of legacy mode, simply remove all other graphics options and use "-nographic" and either "-vga none" or "-nodefaults", along with adding the device using vfio-pci

     

  8. On 7/29/2024 at 9:42 PM, Jorgen said:

    I'll try increasing it in BIOS and work out what the ROM repo maintainer means with:
     

    Quote

    Pay attention to the BIOS settings: DVMT pre allocated, do not exceed 64M, 64M corresponds to x-igd-gms=0x2, if it exceeds 64M, x-igd-gms must be increased!

     

     

The flickering is definitely related to the log errors; they happen at the same time. No flickering, no error.

     

    Just increasing the shared memory (DVMT pre-allocated) in BIOS with no other changes did not improve things.


    I'm struggling to convert the ProxMox formatted arguments below to something unraid will accept in the XML. All three of these are needed according to the GitHub page, and I have none of them added.

    args: -set device.hostpci0.addr=02.0 -set device.hostpci0.x-igd-gms=0x2 -set device.hostpci0.x-igd-opregion=on 

     

    Maybe something along the lines of https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-converting_qemu_arguments_to_domain_xml#sub-sect-Domain_Commands-Converting_QEMU_arguments_to_domain_XML
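
For reference, libvirt's generic escape hatch for raw QEMU arguments would look something like the block below. This is a sketch only, using the hostdev0 alias as an assumption; I ended up using the qemu:override approach shown in the post above instead. It needs the same xmlns:qemu namespace declaration on the domain element:

        <qemu:commandline>
          <qemu:arg value='-set'/>
          <qemu:arg value='device.hostdev0.x-igd-gms=0x2'/>
          <qemu:arg value='-set'/>
          <qemu:arg value='device.hostdev0.x-igd-opregion=on'/>
        </qemu:commandline>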

     

9. I was experiencing severe flickering in the MS Teams app today, when watching a shared screen. It could be related to hardware transcoding, but I have no conclusive evidence.

However, my syslog got spammed with large sections of this today, which I think correspond with the Teams flickering episodes. I can't be sure though; I'll watch the log next time it happens.

     

    Jul 29 09:02:29 Tower kernel: dmar_fault: 3204 callbacks suppressed
    Jul 29 09:02:29 Tower kernel: DMAR: DRHD: handling fault status reg 3
    Jul 29 09:02:29 Tower kernel: DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x70680000 [fault reason 0x06] PTE Read access is not set
    Jul 29 09:02:29 Tower kernel: DMAR: DRHD: handling fault status reg 3
    Jul 29 09:02:29 Tower kernel: DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x70680000 [fault reason 0x06] PTE Read access is not set
    Jul 29 09:02:29 Tower kernel: DMAR: DRHD: handling fault status reg 3
    Jul 29 09:02:29 Tower kernel: DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x70680000 [fault reason 0x06] PTE Read access is not set
    Jul 29 09:02:29 Tower kernel: DMAR: DRHD: handling fault status reg 3
    Jul 29 09:03:43 Tower kernel: dmar_fault: 2705 callbacks suppressed
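
If you want to catch these in real time, you can follow the syslog from a terminal (standard unRAID syslog path):

    tail -f /var/log/syslog | grep -i dmar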

     

     

    EDIT: seems related to the amount of memory allocated to the iGPU: https://forum.proxmox.com/threads/dmar-dma-read-no_pasid-request-device-00-02-0-fault-addr-0xc8e8c000-fault-reason-0x06-pte-read-access-is-not-set-intel-610-integrated-graphic.128012/

     

    I'll try increasing it in BIOS and work out what the ROM repo maintainer means with:
     

    Quote

    Pay attention to the BIOS settings: DVMT pre allocated, do not exceed 64M, 64M corresponds to x-igd-gms=0x2, if it exceeds 64M, x-igd-gms must be increased!

     

  10. I've updated the BIOS settings after further reading, and to get rid of error messages related to resizable memory allocation in the unraid boot logs.

New settings that work well so far:

    • Above 4G Decoding: Enabled
• C.A.M. (Clever Access Memory): Enabled
    • Share Memory: 64M (DVMT pre-allocated)

    I'm now running dual monitor output in my Win10 VM from the iGPU, via HDMI and Displayport.

     

    No real-life problems so far, but I'm getting the below in the unraid logs when I start the VM:

    Tower kernel: vfio-pci 0000:00:02.0: vfio_ecap_init: hiding ecap 0x1b@0x100
    Tower kernel: vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x7b09
    Tower kernel: vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x7b09
    Tower kernel: vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x7b09
    Tower kernel: vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x7b09


    The first line is just reporting that some native extended capability of the iGPU is not being passed through to the VM. Not sure which one yet, but it doesn't seem to cause any issues.

I will investigate the ROM header signature warning and see if I should raise it on the ROM GitHub page, or if there's something else I'm missing. Again, it doesn't seem to affect anything in practice, so priority is low.
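
For what it's worth, extended capability 0x1b should be the PASID capability if I'm reading the PCIe capability ID tables right. One way to check it against the device from the unRAID terminal (address taken from the log above):

    lspci -vvv -s 00:02.0 | grep -i pasid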

     

  11. Just wanted to say thanks for this plugin! It works perfectly for my simple use case of starting a VM with a WOL packet from my phone.

     

I was using the dmacias wake-on-lan plugin, which required an additional python2 plugin to run on newer unraid versions. But after the upgrade to unraid 6.12.11 the python plugin would no longer install, and the WOL plugin stopped working.
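
In case it helps anyone, the magic packet itself can be sent from any machine with a wakeonlan-style tool; the MAC and broadcast address below are made up:

    wakeonlan -i 192.168.1.255 aa:bb:cc:dd:ee:ff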

  12. On 7/5/2024 at 9:00 PM, Jorgen said:

    I would actually prefer to remove the GPU and only use the iGPU and onboard audio, but when I tried that the iGPU was not recognized by the VM. After some reading I think I need to rebuild the VM with Seabios instead of the current OVMF, but that’s a project for another day.


After many, many hours of going down rabbit holes and trying various methods, I finally managed to pass the iGPU through to the Win10 VM and have display output via HDMI to a monitor:

     

13. Since I got my i5-12400 CPU in a recent hardware refresh, I've struggled to pass through the integrated Intel GPU (and audio) to a Windows 10 VM. I wanted the VM to use the iGPU with an attached monitor for HDMI output, but most guides I came across were either for older-generation CPUs or were about virtual GPU usage via SR-IOV (which is also cool, but not what I wanted to achieve).

I was able to isolate the iGPU, pass it through to a Win10 VM and install the Intel drivers, but it always ended with a Code 43 error in Device Manager, a black screen, and this error in the VM logs:
     

    qemu-system-x86_64: vfio-pci: Cannot read device rom at 0000:00:02.0
    Device option ROM contents are probably invalid (check dmesg).
    Skip option ROM probe with rombar=0, or load from file with romfile=

     

    Setting rombar=0 did remove the error from the log, but did not resolve code 43 or the black screen.
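
For anyone trying the same: in the VM's XML, rombar=0 corresponds to the bar attribute on the hostdev's rom element. This is the generic libvirt syntax, shown as a sketch rather than the final solution:

        <rom bar='off'/>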

     

    However, it did lead me to this guide: https://www.cnx-software.com/2023/12/17/how-to-use-a-hdmi-monitor-usb-mouse-keyboard-in-promox-ve-on-an-intel-alder-lake-n-mini-pc/

And this GitHub repo containing ROM files for gen 12+ Intel iGPUs! https://github.com/gangqizai/igd

The instructions in the repo are all in Chinese, but a bit of Google Translate made it pretty clear how to use the files.

     

    After some trial and error, I've managed to get the iGPU passed through to a Win10 VM on unRAID and outputting to the monitor via HDMI. Happy days.

     

     Please read these notes from the repo maintainer before trying this!

    Quote

    Usage restrictions

    This ROM does not support commercial use and is only for technical research by DIY enthusiasts.

    This ROM only supports Intel core graphics and does not support AMD

    Only supports UEFI, normal boot. Secure boot is not supported yet.

    Only supports OVMF mode, seabios does not support it

    The memory must be at least 4G. If it is less than 4G, there may be problems.

    Pay attention to the BIOS settings: DVMT pre allocated, do not exceed 64M, 64M corresponds to x-igd-gms=0x2, if it exceeds 64M, x-igd-gms must be increased!

     

     

    Here are the steps/settings that worked for me on unRAID with my specific hardware.

     

    UnRAID settings:

     

    Host BIOS settings that MIGHT be relevant (ASRock Z690M-ITX/ax, i5-12400):

    • VT-d: Enabled
    • Primary graphics adapter: Onboard
    • Above 4G Decoding: Enabled
• C.A.M. (Clever Access Memory): Enabled
    • SR-IOV support: Enabled
    • Share Memory: 64M (DVMT pre-allocated)
    • IGPU Multi-monitor: Enabled

     

    Steps

1. Download the two ROM files from the GitHub repo and save them to the domains share. I used /mnt/cache/domains/iGPU_ROM_files/ (see the download sketch after these steps)

      1. gen12_igd.rom (for the iGPU)

      2. gen12_gop.rom (for the sound card)

2. Bind the iGPU and soundcard (and, in my case, the rest of the devices in the same IOMMU group) to VFIO in System Devices
      image.thumb.png.e7a24c3bd489997b5979d4c882548972.png
       

    3. Plug monitor into iGPU HDMI and/or DP ports

    4. Reboot unRAID

    5. Create a new Windows 10 VM with NoVNC graphics card, CPU host passthrough, i440fx-7.2 and OVMF

      1. Install Windows with NoVNC, install standard virtio drivers

      2. Shut down VM

    6. Add iGPU as a second graphics card for the VM in UI editor

    7. Add intel soundcard in UI editor

    8. Start VM

    9. Confirm iGPU is visible in device manager

    10. Download and install intel iGPU driver (I used https://www.intel.com/content/www/us/en/support/detect.html)

    11. Shut down VM

    12. Remove NoVNC from VM in UI editor and set iGPU as primary graphics card

    13. Browse for and set graphics card ROM file in UI editor to gen12_igd.rom

    14. Save changes
       

15. Edit the VM XML and add gen12_gop.rom to the soundcard, following the syntax of the iGPU from the step above

1. I also added multifunction=on for the iGPU and soundcard, and changed the slot of the soundcard to be the same as the iGPU, but I don't know if either of these is actually required
         

    16. The relevant part of the VM XML looks like this now:

          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
            </source>
            <rom file='/mnt/cache/domains/iGPU_ROM_files/gen12_igd.rom'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
          </hostdev>
          <hostdev mode='subsystem' type='pci' managed='yes'>
            <driver name='vfio'/>
            <source>
              <address domain='0x0000' bus='0x00' slot='0x1f' function='0x3'/>
            </source>
            <rom file='/mnt/cache/domains/iGPU_ROM_files/gen12_gop.rom'/>
            <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1' multifunction='on'/>
          </hostdev>

       

    17. At the top of the XML, replace this line
      <domain type='kvm'>

      with

      <domain type='kvm' id='14' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

This allows us to add QEMU override commands to the XML.
       

    18. Add this block of code at the end of the XML, just before the closing </domain> tag

        <qemu:override>
          <qemu:device alias='hostdev0'>
            <qemu:frontend>
              <qemu:property name='x-igd-opregion' type='bool' value='true'/>
              <qemu:property name='x-igd-gms' type='unsigned' value='2'/>
            </qemu:frontend>
          </qemu:device>
        </qemu:override>

       

    19. Start the VM and enjoy monitor output from the iGPU!
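
As mentioned in step 1, this is roughly how the two ROM files can be fetched straight to the share from the unRAID terminal. The raw URLs assume the repo's default branch is called main, so verify against the repo page:

    wget -P /mnt/cache/domains/iGPU_ROM_files/ \
      https://github.com/gangqizai/igd/raw/main/gen12_igd.rom \
      https://github.com/gangqizai/igd/raw/main/gen12_gop.rom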
  14. On 7/5/2024 at 9:00 PM, Jorgen said:

I've ordered an M.2 to USB controller adapter, to be able to pass the whole controller through to the VM which should let me use the USB sound card AND enjoy hotswap of other USB devices used by the VM. Will report back once that has arrived and been tested.

     

    The M.2 USB controller is installed and working really well. Got this one: https://www.aliexpress.com/item/1005005007315119.html

Unraid System Devices reports it as:

    USB controller: Renesas Technology Corp. uPD720201 USB 3.0 Host Controller (rev 03)
    ScreenShot2024-07-20at4_16_04pm.png.53b61ce4df7a4eed94a33487ff8050c4.png

     

It required external power via an included SATA power connector, which in turn required a molex-to-SATA adapter as the PSU was already out of SATA connectors.
It was also EXTREMELY fiddly to attach the power connector (the white part at the top left of the picture), as it had to be done after installing the card and the clearance between the back of the card and a motherboard heatsink was very tight. Not the best design to have the power connector covering the mounting screw when installed. Anyway, it's in there now and won't come out unless absolutely necessary.

    IMG_2464.thumb.jpeg.8859ff4f6a0f9c11f0cdc773c1a1f633.jpeg

     

    All the extra cables added to the spaghetti of course, but airflow is still good, just not very good looking...

    IMG_2468.thumb.jpeg.85aa4ee6aa2d807eaceec8a2136c3f8f.jpeg

     

15. My unRAID build has been going strong for almost 10 years, with minimal hardware upgrades. But it was getting a bit long in the tooth and I was running out of options to expand. Memory was maxed out at 16GB, and I had occasional OOM problems when my daily-driver Win10 VM, dockers and other services all needed RAM at the same time. Time for a hardware refresh!

I wanted to keep the Node 304 case as I really like the compact size, and I don't think I will need more HDD slots in the future; there is still plenty of scope to just replace existing disks with higher-capacity ones if need be. So that meant sticking to the mini-ITX mobo size.

Turns out mini-ITX mobos are hard to find for a reasonable price in Australia, and none of them have more than 4 SATA ports. My old Asus board had 6 SATA ports onboard, which is why I picked it in the first place. With the single PCIe slot already spoken for by the GPU, this meant I was at least 2 ports short of accommodating all my drives (I had an unassigned SSD connected externally via USB, but wanted to move it into the case, so really I was 3 ports short). I eventually settled on the Asrock board, mainly due to price and availability.

Another problem with my existing build was how cramped everything in the case was. The large tower CPU cooler was overkill and got in the way of the power and data connections for the disks. The PSU was also taking up a lot of space, making cable management a nightmare; not just visually, but actually fitting some cables around components to get them where they needed to be. So a smaller CPU cooler and an SFX PSU were in order.

     

    Old setup

• CPU: Intel Core i7-4770 3.4 GHz Quad-Core Processor (Replace)
• CPU Cooler: be quiet! Pure Rock 51.7 CFM Sleeve Bearing CPU Cooler (Replace)
• Motherboard: Asus H87I-PLUS Mini ITX LGA1150 (Replace)
• Memory: G.Skill Ripjaws X 16 GB (2 x 8 GB) DDR3-2400 CL11 Memory (Replace)
• Video Card: Asus GT710-SL-2GD5 GeForce GT 710 2 GB Video Card (Keep)
• Case: Fractal Design Node 304 Mini ITX Tower Case (Keep)
• Power Supply: SeaSonic G 450 W 80+ Gold Certified Semi-modular ATX Power Supply (Replace)
• Parity: Western Digital Red 12 TB 3.5" 7200 RPM Internal Hard Drive (Keep)
• Storage: Western Digital Red 4 TB 3.5" 5400 RPM Internal Hard Drive (Keep)
• Storage: Western Digital Red 4 TB 3.5" 5400 RPM Internal Hard Drive (Keep)
• Storage: Seagate Archive 8 TB 3.5" 5900 RPM Internal Hard Drive (Keep)
• Cache 1: Samsung 870 Evo 500 GB 2.5" Solid State Drive (Keep)
• Cache 2: Samsung 870 Evo 500 GB 2.5" Solid State Drive (Keep)
• Unassigned drive: Samsung 850 Evo 250 GB 2.5" Solid State Drive (Keep)

     

    Old hardware is really crammed in there with very little breathing room:

    IMG_2416.thumb.JPEG.5e90bd5dbaa382c6f5c829ffe4370741.JPEG

     

    IMG_2420.thumb.JPEG.3951b16b48436371347521fafb228379.JPEG

     

     

    Clearing the case

    Out with the old! Dust, dust, everywhere!

    IMG_2426.thumb.JPEG.92c29176155aaeb02e81fa33d5e50e29.JPEG

     

     

    New components

Found a neat little M.2 adapter with a built-in SATA controller and 5 ports. Suddenly I have more SATA ports than I know what to do with. When the SSDs die in the future I will move over to NVMe drives, but that's a problem for future me. I also bought a cheap M.2 heatsink, but I'm only using its back plate, to prevent the little SATA adapter from bending when plugging in cables. Someone mentioned it as a potential problem, so better safe than sorry.

     

     

All the new stuff (minus the CPU cooler, which was fashionably late to the party).

    IMG_2365.thumb.JPEG.f42ac3f6107b3f5d3c5f000d492f0ae4.JPEG

     

    She’s alive! Preflight of components before mounting in the case

    IMG_2374.thumb.JPEG.62c7331c2232d17bb96e7ecccaa20cce.JPEG

     

    BIOS update

    IMG_2376.thumb.JPEG.c63a37cda9545c2f2f35741870a1e6a7.JPEG

     

    Memtest ran successfully overnight

    IMG_2375.thumb.JPEG.cc2ee85b6bf4cb559064e3c681bffcd8.JPEG

     

    Installing new parts

    Sooooo much room!

    IMG_2428.thumb.JPEG.db2074a369728d560a44f89c411e55c3.JPEG

     

Removed the M.2 heatsink and installed the little SATA controller

    IMG_2378.thumb.JPEG.b02b58263a2ec4b9d35cb23653ebde34.JPEG

     

    The stock Fractal PSU bracket with an SFX adapter still took up too much room as it was oddly placed in the middle of the case with the PSU hovering mid-air. So I paid my son’s schoolmate to 3D-print a custom bracket I found online: https://thangs.com/designer/Alanflame/3d-model/Fractal Design NODE 304 bracket for Lian Li 850W 80%2B Gold SFX Power Supply-996322

    Had to cut out a little notch for the power switch, but other than that it worked like a charm.

    Cables are still a pain, but so much easier than the previous situation.

    IMG_2429.thumb.JPEG.8948c3e678ea73f8a6462d8e900a5e80.JPEG

     

Disks reinstalled, SATA cable spaghetti restored, but at least it's not touching the CPU cooler anymore!

    IMG_2433.thumb.JPEG.635230fe384cb68c7c4666fa80185b54.JPEG

     

All done and ready to be tucked away. Everything can breathe more easily. The PSU cables are a bit stiff but were kind of bent into shape in the end.

    IMG_2435.thumb.JPEG.6fe4bf01ca9241255fb9b4682bccdc31.JPEG

     

    IMG_2434.thumb.JPEG.0dec486ec6a3064ca5bba613103a85ad.JPEG

     

    Problems and other notes

Passing through my headset audio and mic turned out to be a bit of a struggle. The GPU and monitor combo only handles audio out, so I had to pass through the onboard audio (without the iGPU, for now). This caused more problems, as the Intel 1GbE NIC was in the same IOMMU group as the onboard audio, and none of the overrides made any difference to the groups. So I disabled the Intel NIC in the BIOS and use the Realtek 2.5GbE NIC instead. I was worried about using Realtek as they have a bad reputation with unRAID, but so far so good!

    I attempted to use a cheap USB Audio card instead, but the Win10 VM wouldn’t have a bar of it.

I've ordered an M.2 to USB controller adapter, to be able to pass the whole controller through to the VM which should let me use the USB sound card AND enjoy hotswap of other USB devices used by the VM. Will report back once that has arrived and been tested. Unorthodox use of an M.2 slot perhaps, but it's actually quite a handy little trick when dealing with mini-ITX boards with only one PCIe slot.

    I would actually prefer to remove the GPU and only use the iGPU and onboard audio, but when I tried that the iGPU was not recognized by the VM. After some reading I think I need to rebuild the VM with Seabios instead of the current OVMF, but that’s a project for another day.

     

Here are the native IOMMU groups of this board with the Intel NIC and Bluetooth/WiFi module disabled, in case anyone else is considering it.

    SystemDevicesAsrockZ690M-itx.thumb.png.687a376cc9ff6d31c0d0a41c398517a4.png

  16. On 2/6/2024 at 10:02 AM, WobbleBobble2 said:

    Does your icloud method require a paid icloud account with enough storage to store your entire library? Any way to do this with the free 5gb account? 

    Yes, unfortunately. Unless your library and everything else Apple likes to store in iCloud comes in under 5GB…

  17. 2 hours ago, sir_storealot said:

    I cant seem to solve this :(

    Any idea how to find out what deluge is doing?

One thing that could lead to some of those symptoms is running out of space in the /downloads directory.
For example: I have my downloads go to a disk outside the array, mounted via unassigned devices, and at some point the disk got unmounted but the mount point remained. This caused deluge to write all downloads into a temporary RAM-backed area in unraid, which filled up quickly and caused issues. I never found any logs showing this problem, just stumbled upon it by chance while troubleshooting.
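
A quick way to check for that condition is to compare free space as the container sees it (container name hypothetical):

    docker exec deluge df -h /downloads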

  18. On 3/25/2023 at 12:57 PM, mitch98 said:

    Running a Windows VM? No chance in my (albeit limited) experience on Unraid. 

     

    Since you’re new to unraid, have you looked at Spaceinvaderone’s video guides?

    There are tweaks you can do on the Windows side to get it to work better as an unraid VM. I had similar CPU spiking issues until I tweaked the MSI interrupt settings inside Windows.
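
From memory, that tweak boils down to a single registry value per passed-through device, usually set with the MSI utility tool rather than by hand. The path below is abbreviated and the device instance ID is your own, so treat this as a sketch:

    [HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Enum\PCI\<device-instance-id>\Device Parameters\Interrupt Management\MessageSignaledInterruptProperties]
    "MSISupported"=dword:00000001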

    The hyper-v changes in this thread also helped of course.

I'm not actually sure if the MSI interrupts were covered in this video series; it could also have been in:

     

     

  19. On 3/14/2023 at 3:20 AM, C-Dub said:

    I don't mind starting Prowlarr again from scratch but I don't know enough about Unraid/Docker/databases to fix this without help.


    1. Stop container

    2. Backup Prowlarr appdata folder

    3. Delete everything in prowlarr appdata folder 

    4. Start container

     

You can also uninstall the container after step 1 and reinstall it after step 3.
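
In terminal form, those steps look something like this (container name and appdata path are assumptions, adjust to your setup):

    docker stop prowlarr
    cp -a /mnt/user/appdata/prowlarr /mnt/user/appdata/prowlarr-backup
    rm -rf /mnt/user/appdata/prowlarr/*
    docker start prowlarr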
