Strayer
Content Count: 23
Joined: -
Last visited: -
Community Reputation: 0 (Neutral)
Rank: Newbie
-
Accidentally changed pool slots, now can't mount cache device anymore
Strayer replied to Strayer's topic in General Support
Yeah, I'm really curious how I managed to break it this far. I also assume Unraid showed me a warning and I just didn't notice it? Really weird. Anyhow, I was lucky and could restore the drive. That's what you get for carelessly clicking things without thinking. 🤦♂️
-
Accidentally changed pool slots, now can't mount cache device anymore
Strayer replied to Strayer's topic in General Support
Phew. I managed to fix it. I noticed wipefs in the disk logs in the UI:

Apr 21 08:45:58 noatun root: /dev/nvme0n1p1: 6 bytes were erased at offset 0x00000000 (crypto_LUKS): 4c 55 4b 53 ba be
Apr 21 08:45:58 noatun root: /dev/nvme0n1p1: 6 bytes were erased at offset 0x00004000 (crypto_LUKS): 53 4b 55 4c ba be

Those are the LUKS headers. After writing those bytes back in place, I was able to open the cache device again. For reference, if anyone else runs into this:

echo -en '\x4c\x55\x4b\x53\xba\xbe' | dd of=/dev/nvme0n1p1 bs=1 conv=notrunc seek=$((0x
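The command above is cut off in the quoted post. Reconstructing from the offsets and byte values in the wipefs log, the full restore should look roughly like the sketch below; double-check the device path and offsets against your own log before writing anything back:

# Restore the primary LUKS magic ("LUKS" + 0xba 0xbe) at offset 0x0
echo -en '\x4c\x55\x4b\x53\xba\xbe' | dd of=/dev/nvme0n1p1 bs=1 conv=notrunc seek=$((0x00000000))
# Restore the secondary LUKS2 header magic ("SKUL" + 0xba 0xbe) at offset 0x4000
echo -en '\x53\x4b\x55\x4c\xba\xbe' | dd of=/dev/nvme0n1p1 bs=1 conv=notrunc seek=$((0x00004000))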
I accidentally changed the pool size of my xfs-encrypted cache to 2, didn't realize the mistake and started the array. After realizing what I'd done, I stopped the array and tried to undo the change, but the pool size couldn't be changed anymore. I tried to create a new cache pool and assigned the device there, but now it says "Unmountable: Volume not encrypted". It seems like the LUKS headers got messed up somehow when I started the array with the increased pool size. Can I recover from this in any way? I'm already dumping the current cache disk contents to a backup file so I can
-
Docker recently gained the option to also set IPv6 NAT rules to forward IPv6 traffic to containers: https://github.com/moby/moby/pull/41622 It is relatively easy to enable, but requires enabling experimental features. As a quick test, I added this daemon.json to a Debian VM that has Docker 20.10 installed: { "ipv6": true, "ip6tables": true, "experimental": true, "fixed-cidr-v6": "fd00:dead:beef::/48" } Now Docker will automatically create IPv6 NAT rules to forward ports to containers: root@debian:~# docker run --rm -p 80:80 nginx [...] fd15:4
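To double-check what Docker actually set up, the generated rules should be visible in the nat table of ip6tables (assuming the feature mirrors the IPv4 side and uses the same DOCKER chain, which is not spelled out in the post):

# Show the IPv6 NAT rules Docker created for published ports
ip6tables -t nat -L DOCKER -n
# Same rules in rule-spec form
ip6tables -t nat -S DOCKER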
-
qemu-system-x86 process at ~30% CPU even when Linux VM is 99-100% idle
Strayer replied to Strayer's topic in VM Engine (KVM)
I played around with this a bit more, and changing <timer name='hpet' present='no'/> to yes reduced the idle CPU usage down to ~5% on the Unraid host, which is much more acceptable. I'll have to monitor the VM since I'm not really sure if this has any downsides.
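For reference, that timer sits in the <clock> element of the VM's libvirt XML (editable via the VM's XML view or virsh edit). A minimal sketch of the relevant part, with the other timer entries that Unraid generates omitted here:

<clock offset='utc'>
  <!-- changed from present='no'; leave the other <timer> entries as they are -->
  <timer name='hpet' present='yes'/>
</clock>
-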
I'm running OpenWRT inside a VM that is only used for WireGuard VPN, as my router doesn't support it itself. The VM is running at 99-100% idle according to top inside the VM. The qemu-system-x86 process on the Unraid host, on the other hand, runs at a constant ~30% CPU usage according to top.

Note: I'm on Unraid 6.9.0-rc.2, and the system itself is a repurposed QNAP TS-451A with a slow Celeron N3060, so having 30% of the CPU blocked full-time is quite annoying.

top in the VM:

Mem: 40100K used, 203720K free, 392K shrd, 9608K buff, 8828K cached
CPU: 0% usr 0% sys 0% nic 9
-
That does make sense, as the TS-451A does include an eMMC module that is used for installing the official QNAP firmware. That would indeed be detected as a drive when the MMC module is available. Maybe it could be built but blacklisted in a way that lets the user remove the blacklist entry if they want to use it? I don't mind paying extra for the included eMMC that I wouldn't use. An empty SD slot shouldn't count towards the drive count; as far as I know, Linux only shows a block device for the slot if something is in there.
-
[6.9.0-rc2] Extended SMART self test aborted by host due to spindown
Strayer commented on John_M's report in Prereleases
Sorry, I didn't want to sound pretentious! I just wanted to put in context that I personally find this functionality very important for my peace of mind. The self-test that was started yesterday evening is still running, so it seems to be fine for now with spin-down disabled. I will do some more tests when the self-test is done. Yes, I wondered about this too. I'm pretty sure this is a related problem, though, since the spin-down was still enabled then… I definitely started the extended tests, because I monitored the progress for a few hours. The next day, every mention of the test
-
[6.9.0-rc2] Extended SMART self test aborted by host due to spindown
Strayer commented on John_M's report in Prereleases
I seem to have the same issue. I'm setting up a new Unraid server with the same version right now and the drives are unable to finish a SMART extended test. In fact, I don't even see the error message mentioned by the original poster; it's just as if the test never even started - both the self-test log and the last SMART test result pretend that no test ever ran. I just added a fourth drive, the system is now in the clearing state and seems to have started a SMART test on all drives. I disabled the spindown just to be sure and will see what happens tomorrow. For what it's worth, re
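For anyone checking from the shell rather than the web UI, the usual smartctl calls show whether a test is actually running and what the self-test log says; /dev/sdX is a placeholder for the drive in question:

# Start an extended (long) self-test
smartctl -t long /dev/sdX
# Self-test execution status (shows percentage remaining while a test runs)
smartctl -c /dev/sdX
# Self-test log; an aborted or never-run test shows up (or is missing) here
smartctl -l selftest /dev/sdX
-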
Still on 6.8.3, yes. I'm looking forward to a few changes in 6.9, but I still prefer to wait for a stable release.
-
It's also still broken for me and it drives me crazy. Every time I log in, I see that broken icon 😡 😉 I fixed this manually for now by replacing the PNG in /var/lib/docker/unraid/images/ with the correct one and removing the cached PNG from /var/local/emhttp/plugins/dynamix.docker.manager/images.
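Roughly what that manual fix looks like from the shell; the file names below are placeholders and an assumption on my part (the icons followed a <container-name>-icon.png pattern here), so adjust them to whatever actually sits in those two directories:

# Overwrite the stored icon with the correct PNG (paths/names are placeholders)
cp /path/to/correct-icon.png /var/lib/docker/unraid/images/mycontainer-icon.png
# Remove the cached copy so the web UI re-reads the replaced file
rm /var/local/emhttp/plugins/dynamix.docker.manager/images/mycontainer-icon.png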
-
I fixed this for now by creating the Docker network manually and not checking the custom networks in the Unraid Docker settings.

docker network create -d macvlan \
  --subnet=10.100.80.0/24 --gateway=10.100.80.1 \
  --subnet=fd3b:2815:be50::8/64 --gateway=fd3b:2815:be50::8 \
  --ipv6 \
  --aux-address='dhcp2=10.100.80.2' \
  --aux-address='dhcp3=10.100.80.3' \
  --aux-address='dhcp4=10.100.80.4' \
  --aux-address='dhcp5=10.100.80.5' \
  -o parent=br0.8 \
  vlan8

Nice side effect is that I can now use the full address range and tell Docker to not do something with the very small DHCP ran
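A container then gets its static addresses by joining that network with the usual --ip/--ip6 flags. A sketch, with made-up addresses and image name (not from the post):

# Run a container with fixed IPv4/IPv6 addresses on the manually created macvlan network
docker run -d --name dns \
  --network vlan8 \
  --ip 10.100.80.53 \
  --ip6 fd3b:2815:be50::53 \
  some-dns-image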
-
I'm trying to run a container with a static IPv4 and IPv6 address (specifically a DNS server that will be propagated by the router, thus the requirement for static IP addresses). For this I'm trying to create a Docker macvlan network so I can run containers reachable from other VLANs via the router. The network itself is set up correctly and works like every other VLAN I have:

root@noatun:~# ip addr show dev br0.8
17: br0.8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b4:2e:99:ad:6c:51 brd ff:ff:ff:ff:ff:ff
    inet 10.100.8
-
No, this was just a testing container. The actual container I have the issue with is the only mongo:4.0 container (even the only mongo container at all) on the system. Since it is not really an important issue, I will probably wait for 6.9 to check this again. Thanks for the help so far!
-
Sorry, I should have mentioned that. I'm on 6.8.3. I tried clearing the cache and also tried it with dev tools open (and the disable-cache option); it doesn't seem to make a difference. I just tried it with a new container and it happens again. Initial container configuration:

Initial icon URL: https://i.imgur.com/KDZALBF.png
Updated icon URL: https://i.imgur.com/ajVjfzk.png

When trying to change the icon URL to the updated URL, the icon still stays the same.