Strayer

Everything posted by Strayer

  1. Yeah, I'm really curious how I managed to break it this far. I also assume Unraid showed me a warning and I just didn't notice it? Really weird. Anyhow, I was lucky and could restore the drive. That's what you get for carelessly clicking things without thinking. 🤦‍♂️
  2. Phew. I managed to fix it. I noticed wipefs in the disk logs in the UI:

       Apr 21 08:45:58 noatun root: /dev/nvme0n1p1: 6 bytes were erased at offset 0x00000000 (crypto_LUKS): 4c 55 4b 53 ba be
       Apr 21 08:45:58 noatun root: /dev/nvme0n1p1: 6 bytes were erased at offset 0x00004000 (crypto_LUKS): 53 4b 55 4c ba be

     Those are the LUKS header magic bytes. After writing those bytes back in place, I was able to open the cache device again. For reference, if anyone else runs into this:

       echo -en '\x4c\x55\x4b\x53\xba\xbe' | dd of=/dev/nvme0n1p1 bs=1 conv=notrunc seek=$((0x
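     For completeness, a sketch of the full restore as reconstructed from the two wipefs log lines above (offsets and byte values come straight from the log; double-check the device path before writing anything):

       # Restore the primary LUKS magic at offset 0x00000000
       echo -en '\x4c\x55\x4b\x53\xba\xbe' | dd of=/dev/nvme0n1p1 bs=1 conv=notrunc seek=$((0x00000000))
       # Restore the secondary header magic ("SKUL") at offset 0x00004000
       echo -en '\x53\x4b\x55\x4c\xba\xbe' | dd of=/dev/nvme0n1p1 bs=1 conv=notrunc seek=$((0x00004000))
       # Verify the header is recognized again
       cryptsetup luksDump /dev/nvme0n1p1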
  3. I accidentally changed the pool size of my xfs-encrypted cache to 2, didn't realize the mistake, and started the array. After realizing what I'd done, I stopped the array and tried to undo the change, but the pool size couldn't be changed anymore. I tried to create a new cache pool and assigned the device there, but now it says "Unmountable: Volume not encrypted". It seems like the LUKS headers got messed up somehow when I started the array with the increased pool size. Can I recover from this in any way? I'm already dumping the current cache disk contents to a backup file so I can
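     For anyone in a similar spot: before attempting any repair, it is worth imaging the damaged device first so every attempt can be undone. A minimal sketch, assuming the cache partition is /dev/nvme0n1p1 and there is room on the array:

       # Image the damaged cache partition to a file on the array before touching it
       dd if=/dev/nvme0n1p1 of=/mnt/disk1/cache-backup.img bs=4M status=progress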
  4. Docker recently gained the option to also set IPv6 NAT rules to forward IPv6 traffic to containers: https://github.com/moby/moby/pull/41622

     It is relatively easy to enable, but requires turning on experimental features. As a quick test, I added this daemon.json to a Debian VM that has Docker 20.10 installed:

       {
         "ipv6": true,
         "ip6tables": true,
         "experimental": true,
         "fixed-cidr-v6": "fd00:dead:beef::/48"
       }

     Now Docker will automatically create IPv6 NAT rules to forward ports to containers:

       root@debian:~# docker run --rm -p 80:80 nginx
       [...]
       fd15:4
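     To inspect what Docker actually added, you can list the DOCKER chain in the IPv6 NAT table (standard ip6tables invocation; the chain is created by Docker once ip6tables support is enabled):

       # List the IPv6 NAT rules Docker created for published ports
       ip6tables -t nat -S DOCKER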
  5. I played around with this a bit more and changing <timer name='hpet' present='no'/> to yes reduced the idle CPU usage on the Unraid host to ~5%, which is much more acceptable. I'll have to monitor the VM since I'm not really sure if this has any downsides.
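     For reference, the relevant fragment of the VM's libvirt XML (editable via virsh edit or the XML view of the Unraid VM form; the surrounding <clock> element here is just a typical example, other timer entries may differ per VM):

       <clock offset='utc'>
         <!-- enabling the HPET timer cut qemu's idle CPU usage on this host -->
         <timer name='hpet' present='yes'/>
       </clock>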
  6. I'm running OpenWRT inside a VM that is only used for WireGuard VPN, as my router doesn't support WireGuard itself. The VM is 99-100% idle according to top inside the VM. The qemu-system-x86 process on the Unraid host, on the other hand, runs at a constant ~30% CPU usage according to top. Note: I'm on Unraid 6.9.0-rc.2 and the system itself is a repurposed QNAP TS-451A with a slow Celeron N3060, so having 30% of the CPU blocked full-time is quite annoying.

     top in the VM:

       Mem: 40100K used, 203720K free, 392K shrd, 9608K buff, 8828K cached
       CPU: 0% usr 0% sys 0% nic 9
  7. That does make sense, as the TS-451A does include an eMMC module that is used for installing the official QNAP firmware. That would indeed be detected as a drive when the MMC module is available. Maybe it could be built but blacklisted in a way that lets the user remove the blacklist entry if they want to use it? I don't mind paying extra for the included eMMC that I wouldn't use. An empty SD slot shouldn't count towards the drive count; as far as I know, Linux only exposes a block device for the slot when a card is inserted.
  8. I'm running Unraid on a QNAP TS-451A, which includes an SD card reader. The kernel in Unraid 6.8.3 and 6.9.0-rc2 doesn't seem to include the relevant kernel configuration. See for example: https://www.thinkwiki.org/wiki/How_to_get_the_internal_SD_card_working

     The Unraid kernel config doesn't have this enabled:

       # CONFIG_MMC is not set

     I'd like to use the SD card slot as (mostly) read-only storage for Docker container configuration files and such, with a high-endurance SD card. Currently, SD cards aren't detected at all.

       root@Tower:~# lspci
       00:00.0
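     A quick way to confirm which MMC options a kernel was built with, assuming the running kernel exposes its configuration via /proc/config.gz (otherwise check the .config shipped with the release):

       # Show all MMC/SD related options in the running kernel's config
       zcat /proc/config.gz | grep -E '^(# )?CONFIG_MMC'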
  9. Sorry, didn't want to sound pretentious! I just wanted to give context: I personally find this functionality very important for my peace of mind. The self-test that was started yesterday evening is still running, so with spin-down disabled it seems to be fine for now. I will do some more tests when the self-test is done. Yes, I wondered about this too. I'm pretty sure it is a related problem though, since spin-down was still enabled then… I definitely started the extended tests, because I monitored the progress for a few hours. The next day, every mention of the test
  10. I seem to have the same issue. I'm setting up a new Unraid server with the same version right now, and the drives are unable to finish a SMART extended test. In fact, I don't even see the error message mentioned by the original poster; it is as if the test never even started - both the self-test log and the last SMART test result pretend that no test ever ran. I just added a fourth drive, so the system is now in the clearing state and seems to have started a SMART test on all drives. I disabled spin-down just to be sure and will see what happens tomorrow. For what it's worth, re
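      For reference, starting an extended test and reading the self-test log from the console looks like this (standard smartctl invocations; replace /dev/sdX with the actual drive):

        # Start a SMART extended (long) self-test in the background
        smartctl -t long /dev/sdX
        # Check progress and results in the self-test log afterwards
        smartctl -l selftest /dev/sdX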
  11. Still on 6.8.3, yes. I'm looking forward to a few changes in 6.9, but I still prefer to wait for a stable release.
  12. It's also still broken for me and drives me crazy. Every time I log in, I see that broken icon 😡 😉 I fixed this manually for now by replacing the PNG in /var/lib/docker/unraid/images/ with the correct one and removing the cached PNG from /var/local/emhttp/plugins/dynamix.docker.manager/images.
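      Roughly what that manual fix looks like on the console; the icon filenames below are assumptions, the actual names depend on the container:

        # Overwrite the broken icon with the correct PNG (filename is container-specific)
        cp /path/to/correct-icon.png /var/lib/docker/unraid/images/mycontainer-icon.png
        # Drop the cached copy so the web UI picks up the new one
        rm /var/local/emhttp/plugins/dynamix.docker.manager/images/mycontainer-icon.png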
  13. I fixed this for now by creating the Docker network manually and not checking the custom networks in the Unraid Docker settings.

        docker network create -d macvlan \
          --subnet=10.100.80.0/24 --gateway=10.100.80.1 \
          --subnet=fd3b:2815:be50::8/64 --gateway=fd3b:2815:be50::8 \
          --ipv6 \
          --aux-address='dhcp2=10.100.80.2' \
          --aux-address='dhcp3=10.100.80.3' \
          --aux-address='dhcp4=10.100.80.4' \
          --aux-address='dhcp5=10.100.80.5' \
          -o parent=br0.8 \
          vlan8

      A nice side effect is that I can now use the full address range and tell Docker to not do something with the very small DHCP ran
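      With the network created like this, a container can be given its static addresses directly; the addresses and image below are placeholders within the ranges above:

        # Run a container with fixed IPv4/IPv6 addresses on the macvlan network
        docker run -d --name dns --network vlan8 \
          --ip 10.100.80.53 --ip6 fd3b:2815:be50::53 \
          some/dns-image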
  14. I'm trying to run a container with a static IPv4 and IPv6 address (specifically a DNS server that will be advertised by the router, hence the requirement for static IP addresses). For this I'm trying to create a Docker macvlan network so I can run containers that are reachable from other VLANs via the router. The network itself is set up correctly and works like every other VLAN I have:

        root@noatun:~# ip addr show dev br0.8
        17: br0.8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
            link/ether b4:2e:99:ad:6c:51 brd ff:ff:ff:ff:ff:ff
            inet 10.100.8
  15. No, this was just a testing container. The actual container I have the issue with is the only mongo:4.0 container (in fact, the only mongo container at all) on the system. Since it is not a really important issue, I will probably wait for 6.9 to check this again. Thanks for the help so far!
  16. Sorry, I should have mentioned that: I'm on 6.8.3. I tried clearing the cache and also tried it with the dev tools (and the disable-cache option); it doesn't seem to make a difference. I just tried it with a new container and it happens again. Initial container configuration:

        Initial icon URL: https://i.imgur.com/KDZALBF.png
        Updated icon URL: https://i.imgur.com/ajVjfzk.png

      When trying to change the icon URL to the updated URL, the icon still stays the same.
  17. That is what I assumed, but the new image is under a completely different URL and Unraid still doesn't download it again. I even tried setting it to the icon URL of one of the other containers, still to no avail.
  18. After setting the icon URL once, Unraid never seems to download it again after it changes. I added an icon with the wrong dimensions. After fixing the icon and uploading it to a new URL, every time I update the container to the new icon URL, or remove and recreate it, it loads the old icon again. It seems like the icon is cached somewhere, but I wasn't able to find it. Any ideas where to start to fix this?
  19. That worked! I had to set the "Docker Hub URL", then the container showed up. Thanks for the help!
  20. Doesn't look like it. Neither Safari nor Firefox seems to block anything, and there are no errors in the consoles. The Unraid host is whitelisted in Firefox too.
  21. I just installed this plugin and it doesn't seem to register any Docker containers. I have one stopped and one running, but the settings page is empty. Not really sure how to proceed… can I help debug this in any way?
  22. Is it possible to block/allow ports for containers running in a VLAN? I'd like to allow all devices in my local LAN to access all Docker containers running on Unraid by default, which I did by simply allowing forwarding from the LAN VLAN to the Docker VLAN, but I'd still like to be able to specify which ports of a container should be accessible. I could do this on the router by disabling the default forward rule and explicitly allowing forwarding only to specific ports, but then I'd have to configure Docker things in two places (containers in Unraid, allowed ports on the router). I'd
  23. Sorry for bringing more noise to this issue, but having just built an unRAID server with 4 HDDs and 2 NVMe SSDs, I want to make sure not to thrash them right away. My intention was to run both SSDs in an encrypted BTRFS pool, but as far as I understand, I'd be affected by this bug then. Right now one of the SSDs is installed and running as encrypted BTRFS. It only has two VMs running on it: one is Home Assistant, the other an older Debian server with a few Docker containers that I want to migrate some day. Home Assistant is around 6GB in size, the older server is approx. 20GB i