bfeist

Members
  • Posts: 175
Everything posted by bfeist

  1. I'm trying to set up OwnCloud OCIS, which doesn't have a Community Applications template. I've created everything in the unraid docker config and have successfully started the service. However, every time I stop the container it becomes orphaned and has to be deleted and then recreated via "Add Container" (which picks up my saved config to create a new container). How do I get unraid to stop orphaning the image? Thanks!
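     In case it helps anyone answering: the saved config that "Add Container" picks up is a template XML on the flash drive. I believe (my assumption, from poking around my own install) it sits in the dockerMan user-templates folder, so it can be checked with something like:

       ls /boot/config/plugins/dockerMan/templates-user/
       # my-ocis.xml   <- hypothetical filename; whatever name the template was saved under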
  2. Hi all, I've been getting a new error starting with 6.12.1, I believe. I'm on 6.12.2 now and it continues. The log fills with the following errors:

       Jul 3 00:44:51 Tower kernel: radeon 0000:00:01.0: GPU lockup (current fence id 0x000000000001e95b last fence id 0x000000000001e95c on ring 6)
       Jul 3 00:44:51 Tower kernel: radeon 0000:00:01.0: ring 6 stalled for more than 365322385msec
       Jul 3 00:44:51 Tower kernel: radeon 0000:00:01.0: GPU lockup (current fence id 0x000000000001e95b last fence id 0x000000000001e95c on ring 6)
       Jul 3 00:44:52 Tower kernel: radeon 0000:00:01.0: ring 6 stalled for more than 365322889msec
       Jul 3 00:44:52 Tower kernel: radeon 0000:00:01.0: GPU lockup (current fence id 0x000000000001e95b last fence id 0x000000000001e95c on ring 6)
       Jul 3 00:44:52 Tower kernel: radeon 0000:00:01.0: ring 6 stalled for more than 365323393msec

     I'm running an old `AMD A10-7850K Radeon R7, 12 Compute Cores 4C+8G @ 3700 MHz` CPU. It has a GPU onboard that I'm not using for anything, and I don't have a GPU card in the server. Any help you could provide would be greatly appreciated. Diagnostics attached. tower-diagnostics-20230704-2324.zip
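     Since I don't use the onboard GPU for anything, one thing I'm considering (just a sketch, not verified - the path is my assumption based on the 6.9+ release notes about loading modprobe config from the flash drive) is stopping the radeon module from loading at all:

       # /boot/config/modprobe.d/radeon.conf  -- created on the flash drive, takes effect after a reboot
       blacklist radeon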
  3. I see two (see below). I performed the following as you instructed: made a dummy change to eth0 (changed the default gateway, applied changes, then changed the value back and applied changes), then rebooted. The default route is missing again and there is no route to the internet. Screenshot below. Diagnostics after reboot attached. Last year I did make an attempt to add a USB-based NIC, but it's no longer connected; this might be the root of the problem. Thanks again. tower-diagnostics-20230628-1211.zip
  4. Default gateway was already set on `eth0`. Diagnostics attached. I appreciate the help. tower-diagnostics-20230628-1002.zip
  5. For some reason, each time I upgrade to a new version of unraid, I can't route outside the network. I discovered it's because the default route is missing. In the attached file you can see the line I have to add manually in order to restore outbound network access. Any thoughts as to how I can fix this? Thanks!
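     For anyone skimming: the manual fix is a one-liner of this general shape (the gateway address here is just a placeholder; the real one is in the attached file):

       # re-add the missing default route (192.168.1.1 is an example gateway)
       ip route add default via 192.168.1.1 dev eth0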
  6. @ich777 Thanks for the comments. After leaving the device connected for a while (maybe an hour or so), the log started to fill with

       Feb 8 10:09:23 Tower kernel: xhci_hcd 0000:00:10.1: WARN waiting for error on ep to be cleared
       Feb 8 10:09:23 Tower kernel: r8152 9-2:1.0 eth0: failed tx_urb -22
       Feb 8 10:09:24 Tower kernel: xhci_hcd 0000:00:10.1: WARN waiting for error on ep to be cleared
       Feb 8 10:09:24 Tower kernel: r8152 9-2:1.0 eth0: failed tx_urb -22

     again. I removed the USB NIC, the driver plugin, and the go configuration changes, and then spent 5 hours yesterday trying to figure out why my unraid box was unable to route to the internet. Thought it might be a MAC address binding issue after switching eth0 to the USB NIC. Reset my router. Reset docker. Eventually tried using a dynamic DHCP IP and it worked. Looking carefully at the network settings after switching to dynamic, I discovered that unraid adds a default route when using DHCP. Somehow, unraid had deleted the default route from eth0 to the internet when using a static IP. Fixed now, but it was a stark reminder of just how fragile UNRAID is and how shallow my understanding of Linux networking is. I don't do any of this config management stuff often enough to actually learn any of it. I'm going to take a breather from trying to get this to work. I think my whole idea of having a USB NIC as my primary NIC for unraid isn't a great one to begin with, even without the issues in this thread. I was hoping to free up a PCI-E slot for a GPU, but things will just have to stay as they are. Thanks to the great unraid community for your help as always.
  7. Thanks again for your help. I added the line (with eth2) to my go file, but the speed is still 1Gbps.

       root@Tower:~# mii-tool -v eth2
       eth2: negotiated 1000baseT-FD flow-control, link ok
         product info: vendor 00:07:32, model 4 rev 0
         basic mode:   autonegotiation enabled
         basic status: autonegotiation complete, link ok
         capabilities: 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
         advertising:  1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
         link partner: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control

     System Devices still shows:

       Bus 009 Device 002 Port 9-2 ID 0bda:8156 Realtek Semiconductor Corp. USB 10/100/1G/2.5G LAN

     I tried making the device eth0. I tried resetting the device with custom rules (as was suggested a long time ago in this thread) by adding the following to my go file:

       cp /boot/config/rules.d/50-usb-realtek-net.rules /etc/udev/rules.d/50-usb-realtek-net.rules
       chmod 644 /etc/udev/rules.d/50-usb-realtek-net.rules
       udevadm control --reload-rules
       usbreset 0bda:8156  # (get these numbers via running usbreset without any arguments - "Number 003/002 ID 0bda:8156 USB 10/100/1G/2.5G LAN" for me)
       ethtool -s eth2 autoneg on advertise 0x80000000002f

     No change. Along the way, at one point syslog was being filled with:

       Feb 8 10:09:23 Tower kernel: xhci_hcd 0000:00:10.1: WARN waiting for error on ep to be cleared
       Feb 8 10:09:23 Tower kernel: r8152 9-2:1.0 eth0: failed tx_urb -22
       Feb 8 10:09:24 Tower kernel: xhci_hcd 0000:00:10.1: WARN waiting for error on ep to be cleared
       Feb 8 10:09:24 Tower kernel: r8152 9-2:1.0 eth0: failed tx_urb -22
       Feb 8 10:09:25 Tower kernel: xhci_hcd 0000:00:10.1: WARN waiting for error on ep to be cleared
       Feb 8 10:09:25 Tower kernel: r8152 9-2:1.0 eth0: failed tx_urb -22
       Feb 8 10:09:25 Tower kernel: xhci_hcd 0000:00:10.1: WARN waiting for error on ep to be cleared
       Feb 8 10:09:25 Tower kernel: r8152 9-2:1.0 eth0: failed tx_urb -22

     Could this be a driver issue, or something to do with my troubleshooting? Regardless, I have put it back on eth2 and it's still only showing up as 1Gbps capable, but I'll let it run like this for a while to see if I get the console errors again.
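     As a sanity check alongside mii-tool, I'm also going to look at what plain ethtool reports for the negotiated and advertised modes (standard ethtool query, nothing driver-specific):

       ethtool eth2   # look at "Speed:" and the "Advertised link modes" list; 2500baseT/Full should appear there if the driver exposes it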
  8. I very much appreciate your help with this. I have installed the plugin and have the interface up. Currently it's negotiating at 1000baseT-FD:

       eth2: negotiated 1000baseT-FD flow-control, link ok
         product info: vendor 00:07:32, model 4 rev 0
         basic mode:   autonegotiation enabled
         basic status: autonegotiation complete, link ok
         capabilities: 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD
         advertising:  1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control
         link partner: 1000baseT-HD 1000baseT-FD 100baseTx-FD 100baseTx-HD 10baseT-FD 10baseT-HD flow-control

     Any thoughts? Previously, with the failing driver, it was linking at 2.5Gbps before it failed a few minutes later.

     EDIT: Just noticed that it's detecting the USB RTL8156 as an "r8152":

       Feb 7 16:02:16 Tower kernel: usb 9-1: new SuperSpeed USB device number 3 using xhci_hcd
       Feb 7 16:02:16 Tower kernel: usb 9-1: reset SuperSpeed USB device number 3 using xhci_hcd
       Feb 7 16:02:16 Tower kernel: r8152 9-1:1.0 eth2: v2.16.3 (2022/07/06)
       Feb 7 16:02:16 Tower kernel: r8152 9-1:1.0 eth2: This product is covered by one or more of the following patents:
       Feb 7 16:02:16 Tower kernel: US6,570,884, US6,115,776, and US6,327,625.
       Feb 7 16:02:16 Tower kernel:
       Feb 7 16:03:16 Tower kernel: IPv6: ADDRCONF(NETDEV_CHANGE): eth2: link becomes ready
       Feb 7 16:03:16 Tower kernel: r8152 9-1:1.0 eth2: carrier on
       Feb 7 16:03:18 Tower avahi-daemon[3495]: Joining mDNS multicast group on interface eth2.IPv6 with address fe80::a2ce:c8ff:fe67:79b5.
       Feb 7 16:03:18 Tower avahi-daemon[3495]: New relevant interface eth2.IPv6 for mDNS.
       Feb 7 16:03:18 Tower avahi-daemon[3495]: Registering new address record for fe80::a2ce:c8ff:fe67:79b5 on eth2.*.
       Feb 7 16:05:24 Tower avahi-daemon[3495]: Interface eth2.IPv6 no longer relevant for mDNS.
       Feb 7 16:05:24 Tower avahi-daemon[3495]: Leaving mDNS multicast group on interface eth2.IPv6 with address fe80::a2ce:c8ff:fe67:79b5.

     The System Devices page lists it as:

       Bus 009 Device 004 Port 9-1 ID 0bda:8156 Realtek Semiconductor Corp. USB 10/100/1G/2.5G LAN
  9. I totally agree. We need an official unraid solution. A few posts above, someone mentioned that at some point unraid will change how drivers are loaded, and that could be a solution, but I've been using unraid long enough to know that that time could be measured in years.
  10. Update on this for the general information of anyone reading this thread: It turned out that one of my drives, which wasn't showing as failing at all, was getting itself into a very, very slow (bytes per second) state when writing files to it. This would appear whenever my mover script decided to write to that drive. No SMART errors appeared, and unraid didn't handle the situation at all; everything just slowed to a crawl. I could eventually stop the array and reboot. This would put everything back to normal for possibly weeks--until the mover decided to write to that one drive again. I finally decided to just replace it to see what happens. Since replacing it I have done several data operations across the whole array (upgraded my dual parity drives to 18TB drives) with no issues. No clue what's wrong with that one drive. I ran a preclear on it just for fun and it completed with no problems, threw no errors, and reported no reallocated sectors. No idea why any of this happened, but hey.
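      If it ever comes back, the check I'd reach for is per-disk I/O stats so the crawling drive is obvious. iostat isn't on stock unraid as far as I know, so this assumes the sysstat package is installed (e.g. via Nerd Tools):

        iostat -dxm 5   # the crawling disk shows near-zero wMB/s but very high w_await and %util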
  11. Thanks very much! I’ll try flipping one the other way around. I actually don’t know which way is “up” anyway. It’s also useful to know that it might be a connection issue between the drive and the cage.
  12. @magictoaster76 Did your other cages have the same problem? I'd like to try to get to the bottom of this. I'm interested to hear any thoughts you might have.
  13. Amazon.ca here as well. All 6 of mine have this problem but they are all in my server at this point, so there's no returning them. If you do manage to fix one, please let me know how you did it. I'll pull my server apart and will do the same. This is a pretty crazy problem to be having consistently across many devices. When this first happened, I contacted Icy Dock and got a formulaic answer about sending the faulty items to them for inspection. Not interested in doing that.
  14. Sadly, no. Interesting to hear that you have the same problem. Did you buy yours off Amazon? Maybe that seller is selling a whole pile of defective units.
  15. That's what I was worried about. This is very unusual. I just did a parity verification a week or so ago and it ran at full speed with no errors.
  16. How did you determine which drive was the problem, or are you talking about the drive that was being rebuilt?
  17. I'm experiencing this right now, attempting to rebuild to an older drive that I successfully precleared. @jedimstr, are you saying that the drive being rebuilt to might be bad and is causing this?
  18. Also, I found a bug in "Ultimate UNRAID Dashboard - Version 1.6 - 2021-03-20 (falconexe).json". Line 6016 is currently:

        "datasource": "Varken",

      but it should be:

        "datasource": "$Datasource_Varken",
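      If anyone wants to apply the fix from the shell instead of editing the JSON by hand, a one-liner like this should do it (assumes GNU sed, and that the bare "Varken" datasource string only appears where it needs to change):

        sed -i 's/"datasource": "Varken"/"datasource": "$Datasource_Varken"/' "Ultimate UNRAID Dashboard - Version 1.6 - 2021-03-20 (falconexe).json"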