Strayer

Members
  • Content Count: 23
  • Joined
  • Last visited
Community Reputation

1 Neutral

About Strayer

  • Rank: Newbie
  1. Yeah, I'm really curious how I managed to break it this far. I also assume Unraid showed me a warning and I just didn't notice it? Really weird. Anyhow, I was lucky and could restore the drive. That's what you get for carelessly clicking things without thinking. 🤦‍♂️
  2. Phew. I managed to fix it. I noticed wipefs output in the disk logs in the UI:
        Apr 21 08:45:58 noatun root: /dev/nvme0n1p1: 6 bytes were erased at offset 0x00000000 (crypto_LUKS): 4c 55 4b 53 ba be
        Apr 21 08:45:58 noatun root: /dev/nvme0n1p1: 6 bytes were erased at offset 0x00004000 (crypto_LUKS): 53 4b 55 4c ba be
     Those are the LUKS header signatures. After writing those bytes back in place, I was able to open the cache device again. For reference, if anyone else runs into this:
        echo -en '\x4c\x55\x4b\x53\xba\xbe' | dd of=/dev/nvme0n1p1 bs=1 conv=notrunc seek=$((0x
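     The excerpt above is cut off; a minimal sketch of the full restore, assuming the same device and exactly the offsets and bytes from the wipefs log lines above (double-check them against your own log before writing anything):
        # primary LUKS magic ("LUKS\xba\xbe") at offset 0x00000000
        echo -en '\x4c\x55\x4b\x53\xba\xbe' | dd of=/dev/nvme0n1p1 bs=1 conv=notrunc seek=$((0x00000000))
        # secondary header magic ("SKUL\xba\xbe") at offset 0x00004000
        echo -en '\x53\x4b\x55\x4c\xba\xbe' | dd of=/dev/nvme0n1p1 bs=1 conv=notrunc seek=$((0x00004000))
     After that, opening the device with cryptsetup should work again.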
  3. Docker recently gained the option to also set IPv6 NAT rules to forward IPv6 traffic to containers: https://github.com/moby/moby/pull/41622 It is relatively easy to enable, but requires enabling experimental features. As a quick test, I added this daemon.json to a Debian VM that has Docker 20.10 installed:
        {
          "ipv6": true,
          "ip6tables": true,
          "experimental": true,
          "fixed-cidr-v6": "fd00:dead:beef::/48"
        }
     Now Docker will automatically create IPv6 NAT rules to forward ports to containers:
        root@debian:~# docker run --rm -p 80:80 nginx
        [...]
        fd15:4
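     The output above is cut off; to see the rules Docker adds, something like this should work on the host (a quick sketch, assuming ip6tables is installed and the daemon was restarted after editing daemon.json):
        # restart the daemon so the new daemon.json settings take effect
        systemctl restart docker
        # with ip6tables enabled there should now be a DNAT rule for the published port 80
        ip6tables -t nat -L -n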
  4. I played around with this a bit more, and changing <timer name='hpet' present='no'/> to present='yes' reduced the idle CPU usage down to ~5% on the Unraid host, which is much more acceptable. I'll have to monitor the VM since I'm not really sure if this has any downsides.
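     For anyone finding this later: the change goes into the VM's XML (Edit XML in the VM settings, or virsh edit). A minimal sketch of the relevant <clock> block, assuming the other timer lines are left as Unraid generated them:
        <clock offset='utc'>
          <timer name='rtc' tickpolicy='catchup'/>
          <timer name='pit' tickpolicy='delay'/>
          <timer name='hpet' present='yes'/>
        </clock>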
  5. I'm running OpenWRT inside a VM that is only used for WireGuard VPN, as my router doesn't support it itself. The VM is 99-100% idle according to top inside the VM. The qemu-system-x86 process on the Unraid host, on the other hand, runs at a constant ~30% CPU usage according to top. Note: I'm on Unraid 6.9.0-rc2; the system itself is a repurposed QNAP TS-451A with a slow Celeron N3060, so having 30% of the CPU blocked full-time is quite annoying. top in the VM:
        Mem: 40100K used, 203720K free, 392K shrd, 9608K buff, 8828K cached
        CPU: 0% usr 0% sys 0% nic 9
  6. That does make sense, as the TS-451A does include an eMMC module that is used for installing the official QNAP firmware. That would indeed be detected as a drive once MMC support is available in the kernel. Maybe it could be built but blacklisted in a way that lets the user remove the blacklist entry if they want to use it (something like the sketch below)? I don't mind paying extra for the included eMMC that I wouldn't use. An empty SD slot shouldn't count towards the drive count; as far as I know, Linux only shows a block device for the slot if something is in there.
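     Rough sketch of what I mean, assuming MMC support is built as modules and that the module names are mmc_core/mmc_block/sdhci (illustrative only; the actual names depend on how the kernel is configured):
        # shipped by default, e.g. as /etc/modprobe.d/unraid-no-mmc.conf;
        # a user who wants the eMMC/SD slot just deletes this file
        blacklist mmc_core
        blacklist mmc_block
        blacklist sdhci
        blacklist sdhci_pci
        blacklist sdhci_acpi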
  7. I'm running Unraid on a QNAP TS-451A which includes an SD card reader. The kernel in Unraid 6.8.3 and 6.9.0-rc2 doesn't seem to include the relevant kernel configuration. See for example: https://www.thinkwiki.org/wiki/How_to_get_the_internal_SD_card_working The Unraid kernel config doesn't have this enabled:
        # CONFIG_MMC is not set
     I'd like to use the SD card slot as (mostly) read-only storage for Docker container configuration files and such, with a high-endurance SD card. Currently, SD cards aren't detected at all.
        root@Tower:~# lspci
        00:00.0
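     For reference, the options that would presumably need to be enabled (a guess based on the linked wiki page; the exact SDHCI driver for this board's controller might differ):
        CONFIG_MMC=m
        CONFIG_MMC_BLOCK=m
        CONFIG_MMC_SDHCI=m
        CONFIG_MMC_SDHCI_PCI=m
        CONFIG_MMC_SDHCI_ACPI=m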
  8. Sorry, I didn't want to sound pretentious! I just wanted to put in context that I personally find this functionality very important for my peace of mind. The self-test that was started yesterday evening is still running, so it seems to be fine for now with spin-down disabled. I will do some more tests when the self-test is done. Yes, I wondered about this too. I'm pretty sure this is a related problem though, since spin-down was still enabled then… I definitely started the extended tests, because I monitored the progress for a few hours. The next day every mention of the test
  9. I seem to have the same issue. I'm setting up a new Unraid server with the same version right now and the drives are unable to finish a SMART extended test. In fact, I don't even see the error message mentioned by the original poster; it is just as if the test never even started - both the self-test log and the last SMART test result claim that no test ever ran. I just added a fourth drive, the system is now in the clearing state and seems to have started a SMART test on all drives. I disabled the spin-down just to be sure and will see what happens tomorrow. For what it's worth, re
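     For anyone wanting to check this from the console, a minimal sketch (assuming /dev/sdb is one of the array drives; the device letter will differ):
        # start an extended (long) self-test
        smartctl -t long /dev/sdb
        # later: the self-test log should list the running or finished test
        smartctl -l selftest /dev/sdb
        # full SMART output including the last test result
        smartctl -a /dev/sdb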
  10. Still on 6.8.3, yes. I'm looking forward to a few changes in 6.9, but I still prefer to wait for a stable release.
  11. It's also still broken for me and it makes me crazy. Every time I log in, I see that broken icon 😡 😉 I fixed this manually for now by replacing the PNG in /var/lib/docker/unraid/images/ with the correct one and removing the cached PNG from /var/local/emhttp/plugins/dynamix.docker.manager/images.
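      Roughly what I did, as a sketch (the container name "mycontainer" and the downloaded icon path are placeholders; check the actual file names in both directories first):
        # overwrite the stored icon with the correct PNG
        cp /tmp/correct-icon.png /var/lib/docker/unraid/images/mycontainer-icon.png
        # remove the cached copy so the web UI picks up the new file
        rm /var/local/emhttp/plugins/dynamix.docker.manager/images/mycontainer-icon.png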
  12. I fixed this for now by creating the Docker network manually and not checking the custom networks in the Unraid Docker settings.
        docker network create -d macvlan \
          --subnet=10.100.80.0/24 --gateway=10.100.80.1 \
          --subnet=fd3b:2815:be50::8/64 --gateway=fd3b:2815:be50::8 \
          --ipv6 \
          --aux-address='dhcp2=10.100.80.2' \
          --aux-address='dhcp3=10.100.80.3' \
          --aux-address='dhcp4=10.100.80.4' \
          --aux-address='dhcp5=10.100.80.5' \
          -o parent=br0.8 \
          vlan8
      A nice side effect is that I can now use the full address range and tell Docker not to do anything with the very small DHCP ran
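      Containers can then be attached to that network with fixed addresses, something like this (the .53 addresses and the image name are just placeholders for whatever the DNS container should actually use):
        docker run -d --name dns \
          --network vlan8 \
          --ip 10.100.80.53 \
          --ip6 fd3b:2815:be50::53 \
          some-dns-image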
  13. I'm trying to run a container with a static IPv4 and IPv6 address (specifically a DNS server that will be propagated by the router, hence the requirement for static IP addresses). For this I'm trying to create a Docker macvlan network so I can run containers that are reachable from other VLANs via the router. The network itself is set up correctly and works like every other VLAN I have:
        root@noatun:~# ip addr show dev br0.8
        17: br0.8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
            link/ether b4:2e:99:ad:6c:51 brd ff:ff:ff:ff:ff:ff
            inet 10.100.8
  14. No, this was just a testing container. The actual container I have the issue with is the only mongo:4.0 container (even the only mongo container at all) on the system. Since it is not really an important issue, I will probably wait for 6.9 to check this again. Thanks for the help so far!
  15. Sorry, I should have mentioned that. I'm on 6.8.3. I tried clearing the cache and also tried it with dev tools open (and the disable-cache option); it doesn't seem to make a difference. I just tried it with a new container and it happens again. Initial container configuration:
        Initial icon URL: https://i.imgur.com/KDZALBF.png
        Updated icon URL: https://i.imgur.com/ajVjfzk.png
      When trying to change the icon URL to the updated URL, the icon still stays the same.