Posts posted by Kung-Fubick
-
Hi, I can't figure this issue out no matter how much I google and try, so I will try posting here.
Currently I am running multiple Docker containers behind a VPN tunnel (wg0) set up with Unraid's VPN manager (using a Mullvad config), and all of that works great when I am on my physical network: I can reach each webUI by accessing my NAS LAN address with a port number (10.0.1.25:7878, for example).
However, when I try to access these Docker containers through my Unraid Tailscale IP (100.50.50.50:7878, for example), I can't connect if the container is using wg0, but I can reach it if it uses br0 (the default bridge).
I have also tried adding a subnet route in Tailscale covering my VPN IP pool (10.10.10.0/24), without any success.
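For reference, the subnet-route attempt can be sketched as the following commands run on the Unraid host (a sketch only, based on this setup's 10.10.10.0/24 pool; the route must also be approved in the Tailscale admin console before clients can use it):

```shell
# Enable IP forwarding so the host can route tailnet traffic onward
sysctl -w net.ipv4.ip_forward=1

# Advertise the VPN address pool (10.10.10.0/24) to the tailnet;
# approve the route afterwards in the Tailscale admin console
tailscale up --advertise-routes=10.10.10.0/24
```

Note that even with a working subnet route, replies from a container bound to wg0 may still be policy-routed out through the VPN tunnel rather than back to the tailnet, which could explain the behavior described.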
-
So I got this message under common problems for the first time after using Unraid for a bit over 2 years:
"Your server has detected hardware errors. The output of mcelog has been logged. Post your diagnostics and ask for assistance on the Unraid forums". I am not sure where to start, but I am posting my diagnostics here in the hope that someone will be able to help.
-
Looks like that did the trick. Thank you for your help @JorgeB!
Sometimes the simple solutions are the best ones.
-
Okay, I guess I could try a second GPU, because I have tried it with the older AMD GPU and got the same result. Will return once I have tried with a second GPU.
-
Well, neither does the 3600, but it should still give an output for the BIOS, right?
-
Hi, this is not strictly an Unraid issue, but the community has been very helpful previously. I have an Unraid build with a B450-F motherboard and
an AMD Ryzen 9 5950X. I am, however, unable to get any type of output signal from it. It boots into Unraid just fine, and I was able to get an output on my older AMD Ryzen 5 3600, which I used in this build before switching CPUs. Now I need to change BIOS settings, because they were reset after I switched processors, but this is impossible since I can't get an HDMI/DP signal from the server. Any help is greatly appreciated.
I used to have an old AMD GPU in the build, which I have now taken out to see if it would make a difference, but nothing has changed.
-
Hi,
I am unable to access my Unraid webGUI from the browser. I tried booting Unraid into GUI mode and logging in there, but I was unable to reach it from there as well. The array is currently not started, since I use encryption and start it by manually entering the key from the webGUI, so no VMs or Docker containers are currently running. This came after a reboot. What I did before rebooting was set the ACS override setting in the VM manager from "both" to "disabled". The Unraid server usually runs a pfSense VM as my primary router and has a static IP of 10.0.1.25. Since the reboot I have an old router with DHCP enabled running on the same subnet.
I don't think it's an issue with the router, since I can't access the webGUI while booted into GUI mode. This is the only time I have done this, and it opened a web browser with the address "localhost", which could not be reached.
I have no idea how to fix this issue.
EDIT 1:
I switched the USB port of the Unraid bootable drive and now I can access the webGUI after booting into GUI mode. The odd thing is that it says all of my drives are missing, and the server name has changed from "NAS" to "TOWER". Have I managed to accidentally pass through my SATA drives?
Or does anyone have any idea what I can do to fix this? I still can't access the webGUI from my personal computers, but that might be a routing error for all I know.
EDIT 2:
Seems like everything is working correctly after I switched back to "Both" on the ACS override in the VM manager; "Multifunction" seems to work as well. The original problem seems to have been caused by switching this setting while also moving the USB boot drive when trying to figure out what was wrong in the first place.
-
I will keep trying a bit and see if it gets me anywhere.
Yeah, those prices are truly insane; I got the card to be able to run a Mac VM alongside Linux and Windows.
I do have an Nvidia GPU that currently occupies my workstation, so I might get a new card, put the old Nvidia card in for Ubuntu and Windows, and then try the R9 for the Mac.
Big thanks for all the effort you put into helping me anyway!
-
2 minutes ago, ghost82 said:
That's because of the amd reset bug, unfortunately your gpu suffers from this issue: gpu is not able to properly reset after a shutdown or restart of a vm, and you need to reboot the whole host.
And unfortunately gnif vendor reset patch seems to not support your gpu.
Not the most friendly gpu to play with passthrough...
Oh well, that's a bummer. I tried with the Ubuntu VM as well: the screen went black and the GPU fans maxed out, so I had to reboot anyway.
So if I were to use this card, would I have to reboot the host every time I shut down the VM?
Also, is there an easy way to find out which cards are supported?
-
36 minutes ago, ghost82 said:
Everything looks good but...
It could be you need to dump the vbios of the gpu and set it in the vm xml.
Do not download a vbios from internet, are you able to dump the vbios of the card?
See here (EDIT E):
https://www.reddit.com/r/VFIO/comments/6iyd9m/screen_goes_black_after_starting_vm/
Getting some display output now; I got into the Windows repair screen, which I was able to click around in once I forwarded the USB controller.
Tried reinstalling Windows and it all went fine until some kind of crash where the screen went fully black. This was after I had finished installing and was browsing in the OS, and all of it happened after I added the vbios. Right now, however, when I start either Windows VM (I created a new one to reinstall Windows), it freezes while booting: I get the Windows loading "ring" and the TianoCore picture.
Going to try again and see if I did something wrong after installing Windows. I did install a display driver that might have been intended for VNC (I followed an old Spaceinvader One video).
Attaching another diagnostics file in case it is of any interest.
-
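Aside: the sysfs dump method referenced in the quoted reddit thread can be sketched roughly as follows (assumptions: the card sits at 0000:08:00.0 as in the IOMMU listing in this thread, it is not the GPU the host booted from, no driver currently holds it, and you run as root; the output path is just an example location on the isos share):

```shell
cd /sys/bus/pci/devices/0000:08:00.0
echo 1 > rom                         # make the option ROM readable
cat rom > /mnt/user/isos/vbios.rom   # dump the vbios
echo 0 > rom                         # lock it again
```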
I have the Spaceinvader One script for it, so I will use that.
Will update after.
-
Removed the USB controller and replaced the earlier GPU section with the one you wrote; I recognize that part from Spaceinvader One's video.
I didn't change anything else in the VM. It still starts after the changes, but I still get a black screen.
Attaching diagnostics taken after trying to start the VM with the new configuration.
Here is the output of the lines you provided me with:
for iommu_group in $(find /sys/kernel/iommu_groups/ -maxdepth 1 -mindepth 1 -type d); do
    echo "IOMMU group $(basename "$iommu_group")"
    for device in $(\ls -1 "$iommu_group"/devices/); do
        if [[ -e "$iommu_group"/devices/"$device"/reset ]]; then
            echo -n "[RESET]"
        fi
        echo -n $'\t'
        lspci -nns "$device"
    done
done
IOMMU group 17
[RESET] 02:00.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU group 7
00:05.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 25
[RESET] 09:00.0 Non-Volatile memory controller [0108]: Samsung Electronics Co Ltd NVMe SSD Controller SM981/PM981/PM983 [144d:a808]
IOMMU group 15
01:00.1 SATA controller [0106]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset SATA Controller [1022:43c8] (rev 01)
IOMMU group 5
[RESET] 00:03.4 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 23
[RESET] 08:00.0 VGA compatible controller [0300]: Advanced Micro Devices, Inc. [AMD/ATI] Tonga PRO [Radeon R9 285/380] [1002:6939] (rev f1)
IOMMU group 13
00:18.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 0 [1022:1440]
00:18.1 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 1 [1022:1441]
00:18.2 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 2 [1022:1442]
00:18.3 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 3 [1022:1443]
00:18.4 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 4 [1022:1444]
00:18.5 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 5 [1022:1445]
00:18.6 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 6 [1022:1446]
00:18.7 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Matisse Device 24: Function 7 [1022:1447]
IOMMU group 3
00:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 21
02:07.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU group 11
[RESET] 00:08.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU group 1
[RESET] 00:01.3 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 28
[RESET] 0b:00.1 Encryption controller [1080]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Cryptographic Coprocessor PSPCPP [1022:1486]
IOMMU group 18
02:01.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU group 8
00:07.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 26
[RESET] 0a:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Function [1022:148a]
IOMMU group 16
01:00.2 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Bridge [1022:43c6] (rev 01)
IOMMU group 6
00:04.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 24
08:00.1 Audio device [0403]: Advanced Micro Devices, Inc. [AMD/ATI] Tonga HDMI Audio [Radeon R9 285/380] [1002:aad8]
IOMMU group 14
[RESET] 01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset USB 3.1 XHCI Controller [1022:43d5] (rev 01)
IOMMU group 4
[RESET] 00:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse GPP Bridge [1022:1483]
IOMMU group 22
[RESET] 03:00.0 Ethernet controller [0200]: Intel Corporation I211 Gigabit Network Connection [8086:1539] (rev 03)
IOMMU group 12
00:14.0 SMBus [0c05]: Advanced Micro Devices, Inc. [AMD] FCH SMBus Controller [1022:790b] (rev 61)
00:14.3 ISA bridge [0601]: Advanced Micro Devices, Inc. [AMD] FCH LPC Bridge [1022:790e] (rev 51)
IOMMU group 30
0b:00.4 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse HD Audio Controller [1022:1487]
IOMMU group 2
00:02.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 20
02:06.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU group 10
00:08.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 29
[RESET] 0b:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Matisse USB 3.0 Host Controller [1022:149c]
IOMMU group 0
00:01.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse PCIe Dummy Host Bridge [1022:1482]
IOMMU group 19
02:04.0 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] 400 Series Chipset PCIe Port [1022:43c7] (rev 01)
IOMMU group 9
[RESET] 00:07.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Internal PCIe GPP Bridge 0 to bus[E:B] [1022:1484]
IOMMU group 27
[RESET] 0b:00.0 Non-Essential Instrumentation [1300]: Advanced Micro Devices, Inc. [AMD] Starship/Matisse Reserved SPP [1022:1485] -
2 hours ago, ghost82 said:
There are several labels, I'm not in front of unraid now so I cannot be specific, but you should have label unraid, label unraid with gui, and you mention also label unraid safe mode.
If you boot unraid (label unraid) add video=efifb:off to that label.
If you boot unraid with gui (label unraid with gui) add video=efifb:off to that label.
If you boot unraid in safe mode (label unraid safe mode) add video=efifb:off to that label.
I don't think you booted unraid in safe mode, so your changes don't apply.
Fix this by applying video=efifb:off to the correct label.
Yeah, I changed it to the regular label and it seems like that did something good: I searched /proc/iomem for efifb and got no result.
However, when I now try to start the VM the result is still the same.
Here is the output of the VM log; it has changed a little. I think the lines starting with "-blockdev" are new.
-display none \
-no-user-config \
-nodefaults \
-chardev socket,id=charmonitor,fd=31,server,nowait \
-mon chardev=charmonitor,id=monitor,mode=control \
-rtc base=localtime \
-no-hpet \
-no-shutdown \
-boot strict=on \
-device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 \
-device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 \
-device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 \
-device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 \
-device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 \
-blockdev '{"driver":"file","filename":"/mnt/user/cacheShare/VM/Windows 10/vdisk1.img","node-name":"libvirt-3-storage","cache":{"direct":false,"no-flush":false},"auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-3-format","read-only":false,"cache":{"direct":false,"no-flush":false},"driver":"qcow2","file":"libvirt-3-storage","backing":null}' \
-device virtio-blk-pci,bus=pci.0,addr=0x3,drive=libvirt-3-format,id=virtio-disk2,bootindex=1,write-cache=on \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/Windows.iso","node-name":"libvirt-2-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-2-format","read-only":true,"driver":"raw","file":"libvirt-2-storage"}' \
-device ide-cd,bus=ide.0,unit=0,drive=libvirt-2-format,id=ide0-0-0,bootindex=2 \
-blockdev '{"driver":"file","filename":"/mnt/user/isos/virtio-win-0.1.173-2.iso","node-name":"libvirt-1-storage","auto-read-only":true,"discard":"unmap"}' \
-blockdev '{"node-name":"libvirt-1-format","read-only":true,"driver":"raw","file":"libvirt-1-storage"}' \
-device ide-cd,bus=ide.0,unit=1,drive=libvirt-1-format,id=ide0-0-1 \
-netdev tap,fd=33,id=hostnet0 \
-device virtio-net,netdev=hostnet0,id=net0,mac=52:54:00:0b:57:c3,bus=pci.0,addr=0x2 \
-chardev pty,id=charserial0 \
-device isa-serial,chardev=charserial0,id=serial0 \
-chardev socket,id=charchannel0,fd=34,server,nowait \
-device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 \
-device usb-tablet,id=input0,bus=usb.0,port=1 \
-device vfio-pci,host=0000:08:00.0,id=hostdev0,bus=pci.0,addr=0x5 \
-device vfio-pci,host=0000:08:00.1,id=hostdev1,bus=pci.0,addr=0x6 \
-device vfio-pci,host=0000:01:00.0,id=hostdev2,bus=pci.0,addr=0x8 \
-sandbox on,obsolete=deny,elevateprivileges=deny,spawn=deny,resourcecontrol=deny \
-msg timestamp=on
2021-10-22 10:43:37.681+0000: Domain id=1 is tainted: high-privileges
2021-10-22 10:43:37.681+0000: Domain id=1 is tainted: host-cpu
char device redirected to /dev/pts/0 (label charserial0)
2021-10-22T10:43:41.008138Z qemu-system-x86_64: vfio: Cannot reset device 0000:01:00.0, depends on group 15 which is not owned.
2021-10-22T10:43:42.032887Z qemu-system-x86_64: vfio: Cannot reset device 0000:01:00.0, depends on group 15 which is not owned.
Also pasting /proc/iomem in case it is relevant:
less /proc/iomem
00000000-00000fff : Reserved
00001000-0009ffff : System RAM
000a0000-000fffff : Reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000dffff : PCI Bus 0000:00
000c0000-000cffff : Video ROM
000f0000-000fffff : System ROM
00100000-09c7efff : System RAM
04000000-04a00816 : Kernel code
04c00000-04e4afff : Kernel rodata
05000000-05127f7f : Kernel data
05471000-055fffff : Kernel bss
09c7f000-09ffffff : Reserved
0a000000-0a1fffff : System RAM
0a200000-0a20ffff : ACPI Non-volatile Storage
0a210000-0affffff : System RAM
0b000000-0b01ffff : Reserved
0b020000-a7b4f017 : System RAM
a7b4f018-a7b6d457 : System RAM
a7b6d458-a7b6e017 : System RAM
a7b6e018-a7b7f057 : System RAM
a7b7f058-b920dfff : System RAM
b920e000-b920efff : Reserved
b920f000-baf98fff : System RAM
baf99000-bc5a7fff : Reserved
bc5a8000-bc6dafff : ACPI Tables
bc6db000-bcd6afff : ACPI Non-volatile Storage
bcd6b000-bd9fefff : Reserved
bd9ff000-beffffff : System RAM
bf000000-bfffffff : Reserved
c0000000-fec2ffff : PCI Bus 0000:00
d0000000-e01fffff : PCI Bus 0000:08
d0000000-dfffffff : 0000:08:00.0
d0000000-dfffffff : vfio-pci
e0000000-e01fffff : 0000:08:00.0
e0000000-e01fffff : vfio-pci
f0000000-f7ffffff : PCI MMCONFIG 0000 [bus 00-7f]
f0000000-f7ffffff : Reserved -
18 minutes ago, ghost82 said:
That region is in use by efifb, unraid is using the gpu, so if you want to pass it through you need to prevent efifb to attach, so just disable in syslinux config, append this
video=efifb:off
so it becomes something like (no gui):
append video=efifb:off initrd=/bzroot
Save and reboot.
I tried this (added it under "Unraid OS Safe Mode (no plugins, no GUI)").
So the full entry under this label is:
kernel /bzimage
append video=efifb:off initrd=/bzroot unraidsafemode
I assumed that's what you meant, saved and rebooted. Is this the wrong place to edit it?
Checked /proc/iomem; the same group is still in use by efifb.
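For anyone following along, a quick way to confirm after a reboot whether efifb has actually released the framebuffer region (assuming a standard shell on the Unraid host):

```shell
# Prints the claimed region while efifb is still attached;
# prints "efifb not present" once video=efifb:off has taken effect
grep efifb /proc/iomem || echo "efifb not present"
```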
Thank you for helping me.
-
1 hour ago, ghost82 said:
Output of "cat /proc/iomem" from unraid terminal please.
This is the full output
-
cat /proc/iomem
00000000-00000fff : Reserved
00001000-0009ffff : System RAM
000a0000-000fffff : Reserved
000a0000-000bffff : PCI Bus 0000:00
000c0000-000dffff : PCI Bus 0000:00
000c0000-000cffff : Video ROM
000f0000-000fffff : System ROM
00100000-09c7efff : System RAM
04000000-04a00816 : Kernel code
04c00000-04e4afff : Kernel rodata
05000000-05127f7f : Kernel data
05471000-055fffff : Kernel bss
09c7f000-09ffffff : Reserved
0a000000-0a1fffff : System RAM
0a200000-0a20ffff : ACPI Non-volatile Storage
0a210000-0affffff : System RAM
0b000000-0b01ffff : Reserved
0b020000-a7b4f017 : System RAM
a7b4f018-a7b6d457 : System RAM
a7b6d458-a7b6e017 : System RAM
a7b6e018-a7b7f057 : System RAM
a7b7f058-b920dfff : System RAM
b920e000-b920efff : Reserved
b920f000-baf98fff : System RAM
baf99000-bc5a7fff : Reserved
bc5a8000-bc6dafff : ACPI Tables
bc6db000-bcd6afff : ACPI Non-volatile Storage
bcd6b000-bd9fefff : Reserved
bd9ff000-beffffff : System RAM
bf000000-bfffffff : Reserved
c0000000-fec2ffff : PCI Bus 0000:00
d0000000-e01fffff : PCI Bus 0000:08
d0000000-dfffffff : 0000:08:00.0
d0000000-d08c9fff : efifb
e0000000-e01fffff : 0000:08:00.0
f0000000-f7ffffff : PCI MMCONFIG 0000 [bus 00-7f]
f0000000-f7ffffff : Reserved
f0000000-f7ffffff : pnp 00:00
fc900000-fcbfffff : PCI Bus 0000:0b
fc900000-fc9fffff : 0000:0b:00.3
fc900000-fc9fffff : xhci-hcd
fca00000-fcafffff : 0000:0b:00.1
fca00000-fcafffff : ccp
fcb00000-fcb07fff : 0000:0b:00.4
fcb08000-fcb09fff : 0000:0b:00.1
fcb08000-fcb09fff : ccp
fcc00000-fcdfffff : PCI Bus 0000:01
fcc00000-fccfffff : PCI Bus 0000:02
fcc00000-fccfffff : PCI Bus 0000:03
fcc00000-fcc1ffff : 0000:03:00.0
fcc00000-fcc1ffff : igb
fcc20000-fcc23fff : 0000:03:00.0
fcc20000-fcc23fff : igb
fcd00000-fcd7ffff : 0000:01:00.1
fcd80000-fcd9ffff : 0000:01:00.1
fcd80000-fcd9ffff : ahci
fcda0000-fcda7fff : 0000:01:00.0
fce00000-fcefffff : PCI Bus 0000:09
fce00000-fce03fff : 0000:09:00.0
fce00000-fce03fff : nvme
fcf00000-fcffffff : PCI Bus 0000:08
fcf00000-fcf3ffff : 0000:08:00.0
fcf60000-fcf63fff : 0000:08:00.1
fd200000-fd2fffff : Reserved
fd200000-fd2fffff : pnp 00:01
fd500000-fd57ffff : amd_iommu
fd600000-fd7fffff : Reserved
fea00000-fea0ffff : Reserved
feb80000-fec01fff : Reserved
fec00000-fec003ff : IOAPIC 0
fec01000-fec013ff : IOAPIC 1
fec10000-fec10fff : Reserved
fec10000-fec10fff : pnp 00:05
fec30000-fec30fff : Reserved
fec30000-fec30fff : AMDIF030:00
fed00000-fed00fff : Reserved
fed00000-fed003ff : HPET 0
fed00000-fed003ff : PNP0103:00
fed40000-fed44fff : Reserved
fed80000-fed8ffff : Reserved
fed81500-fed818ff : AMDI0030:00
fedc0000-fedc0fff : pnp 00:05
fedc2000-fedcffff : Reserved
fedd4000-fedd5fff : Reserved
fee00000-ffffffff : PCI Bus 0000:00
fee00000-fee00fff : Local APIC
fee00000-fee00fff : pnp 00:05
ff000000-ffffffff : Reserved
ff000000-ffffffff : pnp 00:05
100000000-83f37ffff : System RAM
83f380000-83fffffff : Reserved -
Hi
I have been trying to pass this card through for some time, but I am only met with a black screen.
What I have done:
Watched the Spaceinvader One videos on GPU passthrough.
Passed through 2 IOMMU groups (the GPU and the GPU's audio device); these are groups 23 and 24.
Tried with and without a vbios (pretty sure this only applies to Nvidia cards).
My attempts to solve it:
Tried different Linux distros and safe-graphics installs.
Tried installing the distro with VNC, then installing the drivers, then removing VNC, but still the same black screen (no output at all).
Tried the same on Windows; it recognized my card after the drivers were installed, but still a black screen after removing VNC.
Attaching the logs and files I was able to find. I can upload more logs or info if needed; just post what you would like to see.
I think I have made an error while passing the card through.
Any tips and help are much appreciated.
iummo.txt log.txt nas-diagnostics-20211022-0034.zip nas-syslog-20211021-2234.zip vfio-log.txt Win10_log.txt
-
Realized you can list IP addresses that are exceptions from the proxy.
Thanks for the guide anyway!
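In case it helps anyone else: the "No proxy for" field in Firefox's connection settings takes a comma-separated list, so adding the Unraid address there keeps the webGUI reachable while everything else goes through the proxy. With the LAN address from this setup it would look something like:

```
localhost, 127.0.0.1, 10.0.1.25
```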
-
Hey,
Just used this guide to set up Transmission, and I really appreciate it; it worked great.
Haven't gotten around to setting up the remote GUI for Transmission yet, so I'm going to try that.
However, I use Firefox as my default browser, and using the proxy as you described means I can't access my Unraid GUI (I am assuming this is intended behavior). Is the remote GUI the best way to go, or do you have any idea how to solve this differently?
-
1 minute ago, jonathanm said:
Possibly, but not a sure bet. At the very least you need to run a long SMART test on any drive before you use it with Unraid.
Just because a specific model has a high failure rate doesn't mean all drives of that model are prone to early failure, just something to keep an eye on. I would trust one of those 3TB models that passed a long SMART and a preclear cycle more than I'd trust that 4TB if it failed SMART.
Drives must be fully tested for Unraid, no point using drives that are bad or untrustworthy.
Thank you, I will keep this in mind. I will run SMART tests on my current disks overnight.
The thing is, one of the disks (the one I'm trying to use as a parity disk) keeps disappearing from time to time. I did find that it failed a mandatory SMART command, so I guess that disk is quite dead.
Will probably end up getting 2 new HDDs within a month.
-
Just now, trurl said:
You can't allow untrustworthy drives in your parity array. In order to reliably rebuild all bits of a disk, it must be able to reliably read all bits of parity PLUS all bits of all other disks.
Yes, I realize that, but checking whether that disk works in place of my real parity disk would tell me something at least.
-
23 minutes ago, jonathanm said:
Also worth noting that the 3TB drives are a model so notorious for failing they have their own wiki about it.
So the Seagate IronWolf ST4000VN008 64MB 4TB would be better?
-
Yeah, thanks for pointing that out; it was just an experimental drive that is around 10 years old, so that wouldn't be surprising. I recently learned that the other 2 drives are around 5 years old.
-
Here is another diagnostics file, taken after a failed parity build.
Just want to add that the parity disk that keeps failing has OK SMART test results.
Trying to reach docker webUI behind VPN while using tailscale plugin.
in General Support
Posted
Looks like I won't be able to; in my attempt at resolving this I managed to crash my Unraid server by adding what I can only assume was an incorrect VPN configuration.
I think from now on I will have to do it the VM way, or only run the arr applications without a VPN and just keep the torrent client behind one.