My oldest SSD seems to have finally given up the ghost. It passes a short SMART test, but the Extended test always fails in the first 10% with read errors, if I have interpreted the SMART report correctly (attached). It was mounted via Unassigned Devices and had no important data on it, so I'm not worried about data loss. I just want to know if it's definitely bound for the scrap heap, or if anything can be done to give it a second lease of life? I think it's over 10 years old and it has served me well, so I'm not really holding my breath here... tower-smart-20241205-1118.zip
-
Those ports are not used for the torrent port forwarding. The input/output ports are used to allow traffic in or out of the container outside the VPN tunnel. They're required for scenarios where other containers are sharing the same docker network, but I don't think that applies to you. I think you would be much better off using one of the VPN-enabled torrent dockers from Binhex instead, rather than trying to use the proxy function of privoxy.
-
Disk errors gone after cable reseating and reboot, 26 hours parity rebuild completed without errors. Phew.
-
Thanks! Having more power outages, will let that settle down before proceeding. Server is currently off after a power cut. Need to get that UPS....
-
I had a power outage, and once the power was back on, I started my server and then started the array from my mobile phone. It appears the parity disk had some issues during boot, but I didn't notice on the small phone screen, hence starting the array without investigating. Diagnostics attached in current state. The array is running; parity is disabled. Do I shut down the server, check the cabling to the parity disk and boot it up again? If the logs are clear after the reboot, how do I get the parity disk and array back into a working/protected state? tower-diagnostics-20240816-1354.zip
-
I'm going to mark this as resolved, no more flickering or log errors after 2 days of daily use. It showed up many times a day before adding the qemu override lines. I'll update the first post with the required steps.
-
That issue was about name resolution not working, preventing the deluge app from starting and hence making the web UI unavailable at all. I'm not sure it resolved showing IPs on the docker page. I don't use custom networks and assigned IPs for dockers, so I can't help on that part. But if you can get to the deluge UI via the IP in the browser, you should be able to get to it from the Web UI button on the docker page by hardcoding the IP and port. Did you try that?
-
If that's the exact error message you get from the browser, you have a syntax error in the config. Use all square brackets; you have a curly bracket thrown in. This is my exact config if that helps: http://[IP]:[PORT:8112]/ Actually, [IP] seems to resolve to the server IP on closer inspection? If you use custom IPs for the dockers it probably won't work. I guess you could hardcode the actual docker IP in the config, but there might be other more dynamic solutions too. That's above my pay grade though, sorry.
-
Ok, finally had some time to look into this. The need for both x-igd-gms=0x2 and x-igd-opregion=on is explained here: https://github.com/qemu/qemu/blob/master/docs/igd-assign.txt. BUT that doc is 8 years old and written for older-generation iGPUs. SR-IOV and UEFI seem to have changed things, but I haven't found any definitive sources explaining if and why these two lines are required for newer generations. Nevertheless, the ROM GitHub page calls them out as requirements, so who am I to argue.

This is how to add them to an unraid VM config. First off, you need to edit the VM in XML mode. Replace

```xml
<domain type='kvm'>
```

with

```xml
<domain type='kvm' id='14' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
```

to allow us to add qemu override commands to the XML. Then add this block of code at the end of the XML, just before the closing </domain> tag:

```xml
<qemu:override>
  <qemu:device alias='hostdev0'>
    <qemu:frontend>
      <qemu:property name='x-igd-opregion' type='bool' value='true'/>
      <qemu:property name='x-igd-gms' type='unsigned' value='2'/>
    </qemu:frontend>
  </qemu:device>
</qemu:override>
```

Unraid seems to automatically add hostdev alias names to the hostdev devices when the VM is running. For me, it correctly labelled the iGPU as hostdev0, but your mileage may vary...
This is what my iGPU and audio hostdevs look like in the XML while the VM is running:

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
  </source>
  <alias name='hostdev0'/>
  <rom file='/mnt/cache/domains/iGPU_ROM_files/gen12_igd.rom'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0' multifunction='on'/>
</hostdev>
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x00' slot='0x1f' function='0x3'/>
  </source>
  <alias name='hostdev1'/>
  <rom file='/mnt/cache/domains/iGPU_ROM_files/gen12_gop.rom'/>
  <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x1' multifunction='on'/>
</hostdev>
```

Since the flickering/log errors are intermittent, I'm not sure if this has resolved the issue. I've been running it like this for a few hours, and so far so good, even after stress testing with 5 simultaneous 4K YouTube streams. I also made two more changes to the system, which might have helped as well:

- Removed the discrete GPU (GT710), the PCIe slot is now empty.
- Enabled HDMI sound for the iGPU in BIOS. Not sure why I had it disabled in the first place, or if it even mattered.

Will report back after a few more days of daily use. Oh, I also spent a lot of time trying to work out how to support legacy mode (see instructions below), but it turns out unraid adds "-nodefaults" automatically, so that was all taken care of already.
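Since the alias in the qemu:override block has to line up with an alias unraid assigned to a hostdev, here's a quick way to sanity-check a dumped domain XML before booting. This is just my own sketch using Python's standard library, not anything from unraid or libvirt; the function name is made up:

```python
import xml.etree.ElementTree as ET

# libvirt's QEMU extension namespace, as declared on the <domain> element
QEMU_NS = "http://libvirt.org/schemas/domain/qemu/1.0"

def unmatched_override_aliases(domain_xml):
    """Return qemu:override device aliases that have no matching
    hostdev alias in the domain XML (empty set means all match)."""
    root = ET.fromstring(domain_xml)
    hostdev_aliases = {
        a.get("name") for a in root.findall("./devices/hostdev/alias")
    }
    override_aliases = {
        d.get("alias")
        for d in root.findall(
            "./{%s}override/{%s}device" % (QEMU_NS, QEMU_NS)
        )
    }
    return override_aliases - hostdev_aliases
```

Feed it the output of `virsh dumpxml <vm>` while the VM is running; an empty set means every override points at a real hostdev.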
-
The flickering is definitely related to the log errors, they happen at the same time. No flickering, no error. Just increasing the shared memory (DVMT pre-allocated) in BIOS with no other changes did not improve things.

I'm struggling to convert the Proxmox-formatted arguments below to something unraid will accept in the XML. All three of these are needed according to the GitHub page, and I have none of them added:

```
args: -set device.hostpci0.addr=02.0 -set device.hostpci0.x-igd-gms=0x2 -set device.hostpci0.x-igd-opregion=on
```

Maybe something along the lines of https://docs.redhat.com/en/documentation/red_hat_enterprise_linux/6/html/virtualization_administration_guide/sub-sect-domain_commands-converting_qemu_arguments_to_domain_xml#sub-sect-Domain_Commands-Converting_QEMU_arguments_to_domain_XML
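For what it's worth, once the qemu XML namespace is declared on the `<domain>` element, libvirt also accepts raw QEMU arguments via a `<qemu:commandline>` block. A rough, untested sketch of what the Proxmox args might translate to; note that `device.hostdev0` is my assumption, since libvirt names vfio devices after the hostdev alias rather than Proxmox's `hostpci0`:

```xml
<qemu:commandline>
  <!-- assumed device name: libvirt's hostdev alias, not Proxmox's hostpci0 -->
  <qemu:arg value='-set'/>
  <qemu:arg value='device.hostdev0.x-igd-gms=0x2'/>
  <qemu:arg value='-set'/>
  <qemu:arg value='device.hostdev0.x-igd-opregion=on'/>
</qemu:commandline>
```

No idea yet if this is the right translation, so treat it as a starting point rather than a working config.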
-
Was experiencing severe flickering in the MS Teams app today when watching a shared screen. It could be related to hardware transcoding, but no conclusive evidence. However, my syslog got spammed with large sections of this today, which I think corresponds with the Teams flickering episodes. Can't be sure though, will watch the log next time it happens.

```
Jul 29 09:02:29 Tower kernel: dmar_fault: 3204 callbacks suppressed
Jul 29 09:02:29 Tower kernel: DMAR: DRHD: handling fault status reg 3
Jul 29 09:02:29 Tower kernel: DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x70680000 [fault reason 0x06] PTE Read access is not set
Jul 29 09:02:29 Tower kernel: DMAR: DRHD: handling fault status reg 3
Jul 29 09:02:29 Tower kernel: DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x70680000 [fault reason 0x06] PTE Read access is not set
Jul 29 09:02:29 Tower kernel: DMAR: DRHD: handling fault status reg 3
Jul 29 09:02:29 Tower kernel: DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x70680000 [fault reason 0x06] PTE Read access is not set
Jul 29 09:02:29 Tower kernel: DMAR: DRHD: handling fault status reg 3
Jul 29 09:03:43 Tower kernel: dmar_fault: 2705 callbacks suppressed
```

EDIT: seems related to the amount of memory allocated to the iGPU: https://forum.proxmox.com/threads/dmar-dma-read-no_pasid-request-device-00-02-0-fault-addr-0xc8e8c000-fault-reason-0x06-pte-read-access-is-not-set-intel-610-integrated-graphic.128012/ I'll try increasing it in BIOS and work out what the ROM repo maintainer means with:
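To correlate the flickering episodes with the log spam, it helps to boil the DMAR lines down to which device faulted at which address, and how often. A small sketch of my own (not part of any tool mentioned here) that does that with a regex over syslog lines:

```python
import re

# Matches the kernel's DMAR fault lines, e.g.
# "DMAR: [DMA Read NO_PASID] Request device [00:02.0] fault addr 0x70680000 [fault reason 0x06] ..."
FAULT_RE = re.compile(
    r"DMAR: \[DMA \w+ NO_PASID\] Request device "
    r"\[(?P<dev>[0-9a-f:.]+)\] fault addr (?P<addr>0x[0-9a-f]+)"
)

def summarize_dmar(lines):
    """Count DMAR faults per (device, fault address) pair."""
    counts = {}
    for line in lines:
        m = FAULT_RE.search(line)
        if m:
            key = (m.group("dev"), m.group("addr"))
            counts[key] = counts.get(key, 0) + 1
    return counts
```

Running it over the snippet above would show all faults coming from device 00:02.0 (the iGPU) at the same address, which is what pointed me at the iGPU memory allocation.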
-
I've updated the BIOS settings after further reading, and to get rid of error messages related to resizable memory allocation in the unraid boot logs. New settings that work well so far:

- Above 4G Decoding: Enabled
- C.A.M. Clever Access Memory: Enabled
- Share Memory (DVMT pre-allocated): 64M

I'm now running dual monitor output in my Win10 VM from the iGPU, via HDMI and DisplayPort. No real-life problems so far, but I'm getting the below in the unraid logs when I start the VM:

```
Tower kernel: vfio-pci 0000:00:02.0: vfio_ecap_init: hiding ecap 0x1b@0x100
Tower kernel: vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x7b09
Tower kernel: vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x7b09
Tower kernel: vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x7b09
Tower kernel: vfio-pci 0000:00:02.0: Invalid PCI ROM header signature: expecting 0xaa55, got 0x7b09
```

The first line just reports that some native extended capability of the iGPU is not being passed through to the VM. Not sure which one yet, but it doesn't seem to cause any issues. I will investigate the ROM header signature warning and see if I should raise it on the ROM GitHub page, or if there's something else I'm missing. Again, it doesn't seem to affect anything in practice, so priority is low.
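For context on that warning: the "expecting 0xaa55" refers to the PCI option ROM signature, where a valid ROM image starts with the two bytes 0x55 0xAA (read as the little-endian word 0xAA55); the kernel's "got 0x7b09" means the first two bytes of whatever it read were 0x09 0x7B instead. A quick sketch of my own (helper name made up) to check a ROM file before pointing a VM at it:

```python
import struct

def rom_header_ok(path):
    """Check whether a file starts with the PCI option ROM
    signature bytes 0x55 0xAA (little-endian word 0xAA55)."""
    with open(path, "rb") as f:
        sig = f.read(2)
    return len(sig) == 2 and struct.unpack("<H", sig)[0] == 0xAA55
```

Note this only tells you whether the file looks like a ROM image; in my case the warning may come from the kernel probing the device's own ROM BAR rather than the file, so a passing check doesn't guarantee the message goes away.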
-
After upgrading to unraid 6.12.11, the required python2 plugin no longer installed for me, meaning the WOL plugin wasn't working either. Swapped to this (beta) plugin and WOL start of VMs is now working nicely again for me.
-
Just wanted to say thanks for this plugin! It works perfectly for my simple use case of starting a VM with a WOL packet from my phone. I was using the dmacias wake-on-lan plugin, which required an additional python2 plugin to run on newer unraid versions. But after the upgrade to unraid 6.12.11, the python plugin no longer installed and the WOL plugin stopped working.
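For anyone curious what a WOL trigger actually involves under the hood: a wake-on-LAN magic packet is just 6 bytes of 0xFF followed by the target MAC address repeated 16 times, sent as a UDP broadcast (commonly to port 9). A minimal sketch, function names my own and not related to this plugin:

```python
import socket

def magic_packet(mac):
    """Build a WOL magic packet: 6 x 0xFF then the MAC repeated 16 times."""
    mac_bytes = bytes.fromhex(mac.replace(":", "").replace("-", ""))
    if len(mac_bytes) != 6:
        raise ValueError("MAC address must be 6 bytes")
    return b"\xff" * 6 + mac_bytes * 16

def send_wol(mac, broadcast="255.255.255.255", port=9):
    """Broadcast the magic packet on the local network."""
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(magic_packet(mac), (broadcast, port))
```

The plugin presumably does something equivalent on the receiving side: listen for the packet and match the MAC against a VM's configured interface before starting it.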