vitaprimo

Members · Posts: 66

Everything posted by vitaprimo

  1. Thanks, and I'm sorry, I meant to answer earlier but I got food poisoning, the slept-in-the-shower kind. 🤢 On the plus side, I may have lost some weight. I'm not using the disks anymore. Even though they haven't shown any signs of failure, I know they're old; but pretending they're good, I kept trying different things so I'd know how to proceed when an actual emergency presents itself. The disks wouldn't mount again formatted as ZFS, either in their own pool or in the Unraid pool, as a group or individually (after Tools → New Config, of course). I remembered, though, that ZFS label data can persist on a disk even after it has been reformatted to something else, so I changed the format to Btrfs and they finally mounted again. I don't know what to make of it; I'm just leaving it out there for whomever it serves. Thanks again. 🙇‍♂️
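     Addendum, in case it helps anyone searching later: the cleaner way to deal with leftover ZFS labels is to clear them explicitly rather than round-tripping through another filesystem. A rough sketch of what I mean, assuming the disk is out of any pool and /dev/sdX really is the disk in question (these are destructive, double-check the device first):

     # list any leftover filesystem/RAID signatures still on the disk
     wipefs /dev/sdX
     # clear the old ZFS label from the former pool-member partition
     zpool labelclear -f /dev/sdX1
     # or wipe all signatures so the disk looks blank to every tool
     wipefs -a /dev/sdX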
  2. When Unraid is virtualized, or when it's using iSCSI disks (scenarios where disks can grow in size), how does it respond if a disk's size is increased? Can it detect and adjust to the change? Will it crash? I guess that's it. =]
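     For context, this is roughly what I'd expect to have to do by hand if it doesn't auto-detect: a sketch, assuming a ZFS pool member on /dev/sdX whose backing vDisk/LUN was just grown (pool and device names are only examples):

     # ask the kernel to re-read the (now larger) device size
     echo 1 > /sys/class/block/sdX/device/rescan
     # then let ZFS claim the new space on that vdev
     zpool online -e tank /dev/sdX1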
  3. Scratch that, I couldn't wait. I ran zpool import disk1 to import the main array without starting it, and it succeeded, so that answers the question. As for what I actually wanted to do, though, it didn't quite work; the problem from earlier, the system hanging forever (with some micro kpanics) when attempting to mount the pool, is still there.

     [Wed22@ 6:15:58][root@zx3:~] #〉zpool import alpha
     Message from syslogd@zx3 at Nov 22 06:16:46 ...
     kernel:VERIFY3(size <= rt->rt_space) failed (281442912784384 <= 2054406144)
     Message from syslogd@zx3 at Nov 22 06:16:46 ...
     kernel:PANIC at range_tree.c:436:range_tree_remove_impl()

     I haven't given up though; I think I still might have an idea or two. Thanks! =]
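     What I plan to try next, for the record: a read-only import, possibly with a rewind. Since the panic is coming from the write/allocation path (z_wr_iss), importing read-only might dodge it entirely. A sketch, untested here:

     # import without mounting, read-only, so nothing gets allocated or written
     zpool import -N -o readonly=on alpha
     # dry-run a rewind to an earlier transaction group (-n reports, doesn't act)
     zpool import -F -n alpha
     # if that looks sane, actually rewind (still read-only to be safe)
     zpool import -N -F -o readonly=on alpha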
  4. It shows up!

     Last login: Wed Nov 22 03:15:25 on ttys003
     [Wed22@ 5:36:29][v@zx9:~] $〉ssh zx3
     Linux 6.1.49-Unraid.
     [Wed22@ 5:36:32][root@zx3:~] #〉zpool import
        pool: disk1
          id: 9807385397724693529
       state: ONLINE
      action: The pool can be imported using its name or numeric identifier.
      config:
              disk1   ONLINE
                sdd1  ONLINE
        pool: alpha
          id: 1551723972850019203
       state: DEGRADED
      status: One or more devices contains corrupted data.
      action: The pool can be imported despite missing or damaged devices. The fault tolerance of the pool may be compromised if imported.
         see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-4J
      config:
              alpha        DEGRADED
                raidz1-0   DEGRADED
                  sdb1     ONLINE
                  sdc1     ONLINE
                  sdd1     UNAVAIL  invalid label
     [Wed22@ 5:36:35][root@zx3:~] #〉zpool list
     no pools available
     [Wed22@ 5:38:17][root@zx3:~] #〉

     sdd1 is not inserted, BTW; no idea why it says invalid label. Now I just need to figure out the order in which to start the array, or whether to start it at all. I think the pool should be mountable without starting the array with zpool import alpha, though I'll keep reading for clues (I'm using the manpages from Fedora 39's ZFS), then I'll skim Unraid's documentation one last time. It's very little VM data, of which I have a backup, or rather a version, but the one on these disks has been OCPD'd to the max. Thank you! ❤️
  5. Oh yeah, my bad. I just caught on. Thanks for correcting me. I'll do that and come back. It might take a little bit, I had reinserted the ESXi SD card to check on something. It goes inside the giant heavy toaster. 😔 Thanks!
  6. Since my ZFS pool fails to mount, I thought maybe I could nudge it into working by changing enough things about it that the system is forced to look into it… or something. It's a RAIDZ; it should be able to work with any one disk missing. I removed one disk and booted the system. I logged in to mount the pool manually (auto-mount is temporarily disabled), but it tells me I have a missing cache disk and won't let me start the array without extra steps. I'm not sure what it's talking about, since I don't have a cache anything; it's an all-flash ZFS pool, there's no need. I have one other obligatory disk in the regular array, but it's empty and doesn't have any of the supporting pools (parity, cache). Does it refer to this pool, the RAIDZ one, as the cache? And a follow-up: when it says "remove the missing cache disk", does it mean the individual disk drive or the whole volume "disk"? And a follow-up to the follow-up: if it means the whole pool, how do I get it to mount degraded then? I can't pull a disk while it's mounted to degrade it live, because the whole problem is that it won't mount at all when intact; that's exactly why I'm trying to get it to mount degraded. That's all, thanks.
  7. Huh. I thought it was going to take longer to find it.

     Nov 19 19:48:46 zx3 monitor: Stop running nchan processes
     Nov 19 19:48:47 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Nov 19 19:48:50 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Nov 19 19:48:50 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Nov 19 19:48:53 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Nov 19 19:48:56 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Nov 19 19:48:58 zx3 root: error: /plugins/unassigned.devices/UnassignedDevices.php: wrong csrf_token
     Nov 19 19:49:52 zx3 kernel: mdcmd (31): set md_num_stripes 1280
     Nov 19 19:49:52 zx3 kernel: mdcmd (32): set md_queue_limit 80
     Nov 19 19:49:52 zx3 kernel: mdcmd (33): set md_sync_limit 5
     Nov 19 19:49:52 zx3 kernel: mdcmd (34): set md_write_method
     Nov 19 19:49:52 zx3 kernel: mdcmd (35): start STOPPED
     Nov 19 19:49:52 zx3 kernel: unraid: allocating 15750K for 1280 stripes (3 disks)
     Nov 19 19:49:52 zx3 kernel: md1p1: running, size: 71687336 blocks
     Nov 19 19:49:52 zx3 emhttpd: shcmd (205): udevadm settle
     Nov 19 19:49:53 zx3 emhttpd: Opening encrypted volumes...
     Nov 19 19:49:53 zx3 emhttpd: shcmd (206): touch /boot/config/forcesync
     Nov 19 19:49:53 zx3 emhttpd: Mounting disks...
     Nov 19 19:49:53 zx3 emhttpd: mounting /mnt/disk1
     Nov 19 19:49:53 zx3 emhttpd: shcmd (207): mkdir -p /mnt/disk1
     Nov 19 19:49:53 zx3 emhttpd: /usr/sbin/zpool import -d /dev/md1p1 2>&1
     Nov 19 19:49:56 zx3 emhttpd: pool: disk1
     Nov 19 19:49:56 zx3 emhttpd: id: 9807385397724693529
     Nov 19 19:49:56 zx3 emhttpd: shcmd (209): /usr/sbin/zpool import -N -o autoexpand=on -d /dev/md1p1 9807385397724693529 disk1
     Nov 19 19:50:01 zx3 emhttpd: shcmd (210): /usr/sbin/zpool online -e disk1 /dev/md1p1
     Nov 19 19:50:01 zx3 emhttpd: /usr/sbin/zpool status -PL disk1 2>&1
     Nov 19 19:50:01 zx3 emhttpd: pool: disk1
     Nov 19 19:50:01 zx3 emhttpd: state: ONLINE
     Nov 19 19:50:01 zx3 emhttpd: scan: scrub repaired 0B in 00:00:01 with 0 errors on Tue Nov 14 00:00:02 2023
     Nov 19 19:50:01 zx3 emhttpd: config:
     Nov 19 19:50:01 zx3 emhttpd: NAME STATE READ WRITE CKSUM
     Nov 19 19:50:01 zx3 emhttpd: disk1 ONLINE 0 0 0
     Nov 19 19:50:01 zx3 emhttpd: /dev/md1p1 ONLINE 0 0 0
     Nov 19 19:50:01 zx3 emhttpd: errors: No known data errors
     Nov 19 19:50:01 zx3 emhttpd: shcmd (211): /usr/sbin/zfs set mountpoint=/mnt/disk1 disk1
     Nov 19 19:50:02 zx3 emhttpd: shcmd (212): /usr/sbin/zfs set atime=off disk1
     Nov 19 19:50:02 zx3 emhttpd: shcmd (213): /usr/sbin/zfs mount disk1
     Nov 19 19:50:02 zx3 emhttpd: shcmd (214): /usr/sbin/zpool set autotrim=off disk1
     Nov 19 19:50:02 zx3 emhttpd: shcmd (215): /usr/sbin/zfs set compression=on disk1
     Nov 19 19:50:03 zx3 emhttpd: mounting /mnt/alpha
     Nov 19 19:50:03 zx3 emhttpd: shcmd (216): mkdir -p /mnt/alpha
     Nov 19 19:50:03 zx3 emhttpd: shcmd (217): /usr/sbin/zpool import -N -o autoexpand=on -d /dev/sdb1 -d /dev/sdc1 -d /dev/sdd1 1551723972850019203 alpha
     Nov 19 19:50:29 zx3 kernel: VERIFY3(size <= rt->rt_space) failed (281442912784384 <= 2054406144)
     Nov 19 19:50:29 zx3 kernel: PANIC at range_tree.c:436:range_tree_remove_impl()
     Nov 19 19:50:29 zx3 kernel: Showing stack for process 25971
     Nov 19 19:50:29 zx3 kernel: CPU: 3 PID: 25971 Comm: z_wr_iss Tainted: P IO 6.1.49-Unraid #1

     Then comes the trace: [ look at me, saying things authoritatively as if I knew what they mean 😆 ]

     Nov 19 19:50:29 zx3 kernel: Call Trace:
     Nov 19 19:50:29 zx3 kernel: <TASK>
     Nov 19 19:50:29 zx3 kernel: dump_stack_lvl+0x44/0x5c
     Nov 19 19:50:29 zx3 kernel: spl_panic+0xd0/0xe8 [spl]
     Nov 19 19:50:29 zx3 kernel: ? memcg_slab_free_hook+0x20/0xcf
     Nov 19 19:50:29 zx3 kernel: ? zfs_btree_insert_into_leaf+0x2ae/0x47d [zfs]
     Nov 19 19:50:29 zx3 kernel: ? slab_free_freelist_hook.constprop.0+0x3b/0xaf
     Nov 19 19:50:29 zx3 kernel: ? bt_grow_leaf+0xc3/0xd6 [zfs]
     Nov 19 19:50:29 zx3 kernel: ? bt_grow_leaf+0xc3/0xd6 [zfs]
     Nov 19 19:50:29 zx3 kernel: ? zfs_btree_find_in_buf+0x4c/0x94 [zfs]
     Nov 19 19:50:29 zx3 kernel: ? zfs_btree_find+0x16d/0x1b0 [zfs]
     Nov 19 19:50:29 zx3 kernel: ? rs_get_start+0xc/0x1d [zfs]
     Nov 19 19:50:29 zx3 kernel: range_tree_remove_impl+0x77/0x406 [zfs]
     Nov 19 19:50:29 zx3 kernel: ? range_tree_remove_impl+0x3fb/0x406 [zfs]
     Nov 19 19:50:29 zx3 kernel: space_map_load_callback+0x70/0x79 [zfs]
     Nov 19 19:50:29 zx3 kernel: space_map_iterate+0x2d3/0x324 [zfs]
     Nov 19 19:50:29 zx3 kernel: ? spa_stats_destroy+0x16c/0x16c [zfs]
     Nov 19 19:50:29 zx3 kernel: space_map_load_length+0x93/0xcb [zfs]
     Nov 19 19:50:29 zx3 kernel: metaslab_load+0x33b/0x6e3 [zfs]
     Nov 19 19:50:29 zx3 kernel: ? slab_post_alloc_hook+0x4d/0x15e
     Nov 19 19:50:29 zx3 kernel: ? __slab_free+0x83/0x229
     Nov 19 19:50:29 zx3 kernel: ? spl_kmem_alloc_impl+0xc1/0xf2 [spl]
     Nov 19 19:50:29 zx3 kernel: ? __kmem_cache_alloc_node+0x118/0x147
     Nov 19 19:50:29 zx3 kernel: metaslab_activate+0x36/0x1f1 [zfs]
     Nov 19 19:50:29 zx3 kernel: metaslab_alloc_dva+0x8bc/0xfce [zfs]
     Nov 19 19:50:29 zx3 kernel: ? preempt_latency_start+0x2b/0x46
     Nov 19 19:50:29 zx3 kernel: metaslab_alloc+0x107/0x1fd [zfs]
     Nov 19 19:50:29 zx3 kernel: zio_dva_allocate+0xee/0x73f [zfs]
     Nov 19 19:50:29 zx3 kernel: ? kmem_cache_free+0xc9/0x154
     Nov 19 19:50:29 zx3 kernel: ? spl_kmem_cache_free+0x3a/0x1a5 [spl]
     Nov 19 19:50:29 zx3 kernel: ? preempt_latency_start+0x2b/0x46
     Nov 19 19:50:29 zx3 kernel: ? _raw_spin_lock+0x13/0x1c
     Nov 19 19:50:29 zx3 kernel: ? _raw_spin_unlock+0x14/0x29
     Nov 19 19:50:29 zx3 kernel: ? tsd_hash_search+0x70/0x7d [spl]
     Nov 19 19:50:29 zx3 kernel: zio_execute+0xb1/0xdf [zfs]
     Nov 19 19:50:29 zx3 kernel: taskq_thread+0x266/0x38a [spl]
     Nov 19 19:50:29 zx3 kernel: ? wake_up_q+0x44/0x44
     Nov 19 19:50:29 zx3 kernel: ? zio_subblock+0x22/0x22 [zfs]
     Nov 19 19:50:29 zx3 kernel: ? taskq_dispatch_delay+0x106/0x106 [spl]
     Nov 19 19:50:29 zx3 kernel: kthread+0xe4/0xef
     Nov 19 19:50:29 zx3 kernel: ? kthread_complete_and_exit+0x1b/0x1b
     Nov 19 19:50:29 zx3 kernel: ret_from_fork+0x1f/0x30
     Nov 19 19:50:29 zx3 kernel: </TASK>

     And it just haaa…nnngs. I guess that's it.
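     If it's useful to the devs, I can also try poking at the exported pool's metadata offline with zdb instead of importing it; something like this, as far as I understand the tool (no guarantees):

     # walk the pool's metadata without importing it (-e = exported/not-imported pool)
     zdb -e alpha
     # dump the metaslab/space-map information, which is what the panic is complaining about
     zdb -e -m alpha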
  8. Well, it is:

     ...
       pool: alpha
         id: 1551723972850019203
      state: UNAVAIL
     status: The pool is formatted using an incompatible version.
     action: The pool cannot be imported. Access the pool on a system running newer software, or recreate the pool from backup.
        see: http://www.sun.com/msg/ZFS-8000-A5
     config:
             alpha       UNAVAIL  newer version
               raidz1-0  ONLINE
                 disk/by-id/ata-KINGSTON_SV300S3...

     Oh wait, do you mean as long as the guest OS expects the partition to be the first, and not the second as in the case of TrueNAS? That makes a little more sense. I couldn't help myself, however, and sort of already tried importing the pool on another system, Fedora. It seems it comes with outdated ZFS. Well, not "comes", since it's an after-install, but you get the idea. 🫤 I'll update to Fedora 39 (this is 38) to see if it's just a compatibility thing that gets sorted out on its own. If it doesn't, I'm going deep on FreeBSD, maybe even OpenIndiana -- or not, that's kind of a lot -- just FreeBSD then. 😃 For what it's worth, if Fedora can identify the filesystem and reassemble the zpool, even though it didn't occur to me to mark the disks until way after I had taken them out of the caddies, that gives me a little hope. That, and the fact that ZFS is not quite there yet in Unraid, and the fact that the unresponsive zpool came with the cutest little kpanics in the log, which I kinda forgot to mention earlier. I saved the text somewhere; I'll come back and paste it as soon as I find it, maybe it helps the devs. Nevertheless, I don't know why I'm rambling on and on instead of just saying thank you, because that's what I logged in for. Thanks.
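     Before going down the FreeBSD rabbit hole, I'll at least confirm it's only a version/feature gap; a quick check on both boxes, roughly:

     # show the OpenZFS userland and kernel module versions on this system
     zfs version
     # list the pool versions and feature flags this build understands
     zpool upgrade -v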
  9. Just about when I was flipping my little server into prod, a problem that had already happened once appeared again. The first time I just figured it was my fault, since it was a testing system. Not this time. I had recreated my servers from ZFS snapshot clones, reducing about half a terabyte (OS data only) of very high-IO data to less than 100 GB of super lean, deduplicated, compressed datasets. It was meticulous, it was thought through. The original vDisks still exist, but I kinda want to rescue my work on it. Is the ZFS filesystem implemented in Unraid the standard ZFS (OpenZFS on Linux)? Would the pool mount (assuming it's OK) on Fedora, FreeBSD, or anywhere else? Thanks.
  10. 😂 I'm definitely gonna be the idiot of the week, aren't I? I haven't bothered to check there in ages since pinning isn't something I needed. That said, to answer my own question from before: would it work? Dynamically (re-)allocating cores would still be more useful if, say, two VMs with overlapping cores spiked their loads, so they could be scheduled onto idle cores instead of sharing the maxed-out core(s). After all, oversubscription is kind of the whole point of hypervisors, right? I guess I'll have to set up a new VM to test. This is when I wish I had an orchestrator for Unraid.
  11. Um, that's kinda it. I'll elaborate… I want to assign a number of cores to a VM, but not which cores it should use; I'd like to leave that to the hypervisor to decide on its own. That way I don't need a spreadsheet to keep track of 'em. With containers it's easy, you just add --cpus=#, I believe (might be a little off). Just now, while gathering details to write this, I noticed that when you switch to XML view there are some hard-to-miss CPU-related tags:

     <vcpu placement='static'>4</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='8'/>
       <vcpupin vcpu='1' cpuset='20'/>
       <vcpupin vcpu='2' cpuset='9'/>
       <vcpupin vcpu='3' cpuset='21'/>
     </cputune>

     Of those, the <cputune> element obviously seems responsible for the core assignments. I thought maybe getting rid of that tag would solve the issue, until I noticed the placement='static' attribute on the <vcpu> element above it. I don't know what the opposite of "static" would be here: dynamic, auto, reactive, responsive, elastic, something-NUMA, sky's the limit. If I were to just remove the attribute and the element below it, like this:

     <vcpu>4</vcpu>

     Would that work? tenquiu veri motsh*

     Thanks, for real this time.
     __________________________________________________________________________________
     *: "thank you very much", as it would be spelled in Spanish 😆
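     To be clear about what I'd try, since I'm guessing: either drop <cputune> entirely and keep placement='static' (which, without a cpuset attribute or any <vcpupin> entries, shouldn't pin anything as far as I can tell), or switch to placement='auto', which supposedly lets numad pick the initial placement. Something like:

     <!-- no <cputune>: vCPUs float wherever the host scheduler puts them -->
     <vcpu placement='static'>4</vcpu>

     <!-- or: initial placement chosen automatically (numad), still no pinning -->
     <vcpu placement='auto'>4</vcpu>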
  12. Didn't work.

     [Sat11@ 1:05:54][root@zx3:~] #〉docker network create \
     > --attachable \
     > --driver macvlan \
     > --gateway 10.10.0.1 \
     > --subnet 10.10.0.0/24 \
     > --ipv6 \
     > --gateway XXXX:XXXX:XXXX:XaXX:: \
     > --subnet XXXX:XXXX:XXXX:XaXX::/120 \
     > --opt parent="br0" z0a00
     Error response from daemon: failed to allocate gateway (XXXX:XXXX:XXXX:XaXX::): Address already in use

     (Sorry for the Xs, it's a global address.)

     It's the same error for every network, so I removed the IPv6 part and tried again:

     [Sat11@ 1:07:21][root@zx3:~] #〉docker network create \
     > --attachable \
     > --driver macvlan \
     > --gateway 10.10.0.1 \
     > --subnet 10.10.0.0/24 \
     > --opt parent="br0" z0a00
     Error response from daemon: network dm-27dcdc7bb8a6 is already using parent interface br0

     Again, negative. And if I try to see what dm-27dcdc7bb8a6 is:

     [Sat11@ 1:07:42][root@zx3:~] #〉docker network inspect dm-27dcdc7bb8a6
     []
     Error: No such network: dm-27dcdc7bb8a6

     I mean… WT-holy-F! 🤬
     ________________________________________________________________________________________________
     +info (trimmed/related):

     LINKS
     [Sat11@ 1:18:58][root@zx3:~] #〉ip l
     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
     …
     7: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff
     8: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff permaddr e4:11:5b:bc:c2:90
     9: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff permaddr e4:11:5b:bc:c2:92
     10: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP mode DEFAULT group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff permaddr e4:11:5b:bc:c2:94
     11: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP mode DEFAULT group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff
     …
     14: bond0.10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0.10 state UP mode DEFAULT group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff
     20: br0.10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff
     …
     23: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN mode DEFAULT group default
        link/ether 02:42:b9:0c:7b:5a brd ff:ff:ff:ff:ff:ff

     ADDRESSES
     [Sat11@ 1:18:21][root@zx3:~] #〉ip a
     1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
        link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
        inet 127.0.0.1/8 scope host lo
           valid_lft forever preferred_lft forever
        inet6 ::1/128 scope host
           valid_lft forever preferred_lft forever
     …
     7: eth0: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff
     8: eth1: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff permaddr e4:11:5b:bc:c2:90
     9: eth2: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff permaddr e4:11:5b:bc:c2:92
     10: eth3: <BROADCAST,MULTICAST,SLAVE,UP,LOWER_UP> mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff permaddr e4:11:5b:bc:c2:94
     11: bond0: <BROADCAST,MULTICAST,MASTER,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0 state UP group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff
        inet6 fe80::e611:5bff:febc:c28e/64 scope link
           valid_lft forever preferred_lft forever
     …
     14: bond0.10@bond0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master br0.10 state UP group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff
     …
     17: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff
        inet 10.1.0.13/24 metric 1 scope global br0
           valid_lft forever preferred_lft forever
        inet6 XXXX:XXXX:XXXX:X1XX::d/120 metric 1 scope global
           valid_lft forever preferred_lft forever
        inet6 fe80::e611:5bff:febc:c28e/64 scope link
           valid_lft forever preferred_lft forever
     …
     20: br0.10: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
        link/ether e4:11:5b:bc:c2:8e brd ff:ff:ff:ff:ff:ff
     …
     23: docker0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue state DOWN group default
        link/ether 02:42:b9:0c:7b:5a brd ff:ff:ff:ff:ff:ff
        inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
           valid_lft forever preferred_lft forever

     GATEWAYS/ROUTES v4
     [Sat11@ 1:22:41][root@zx3:~] #〉ip r
     default via 10.11.11.1 dev shim-br0.11
     default via 10.11.11.1 dev br0.11 metric 1
     10.1.0.0/24 dev br0 proto kernel scope link src 10.1.0.13 metric 1
     10.11.11.0/24 dev shim-br0.11 proto kernel scope link src 10.11.11.13
     10.11.11.0/24 dev br0.11 proto kernel scope link src 10.11.11.13 metric 1
     10.14.0.0/24 dev br0.14 proto kernel scope link src 10.14.0.13 metric 1
     172.17.0.0/16 dev docker0 proto kernel scope link src 172.17.0.1 linkdown

     GATEWAYS/ROUTES v6
     [Sat11@ 1:26:44][root@zx3:~] #〉ip -6 r
     ::1 dev lo proto kernel metric 256 pref medium
     XXXX:XXXX:XXXX:X1XX::/120 dev br0 proto kernel metric 1 pref medium
     XXXX:XXXX:XXXX:XbXX::/120 dev br0.11 proto kernel metric 1 pref medium
     XXXX:XXXX:XXXX:XeXX::/120 dev br0.14 proto kernel metric 1 pref medium
     fe80::/64 dev br0 proto kernel metric 256 pref medium
     fe80::/64 dev bond0 proto kernel metric 256 pref medium
     fe80::/64 dev br0.11 proto kernel metric 256 pref medium
     fe80::/64 dev bond0.11 proto kernel metric 256 pref medium
     fe80::/64 dev br0.14 proto kernel metric 256 pref medium
     fe80::/64 dev bond0.14 proto kernel metric 256 pref medium
     default via XXXX:XXXX:XXXX:XbXX:: dev br0.11 metric 1 pref medium

     Nowhere to be found.
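     Next thing I'm going to try: clearing Docker's stored network state, since that phantom dm- network smells like a stale entry in its local network database. A sketch only, assuming Docker can be stopped first and that Unraid keeps the usual path for that file:

     # stop the Docker service
     /etc/rc.d/rc.docker stop
     # remove the network database; Docker rebuilds it on startup,
     # so any ghost networks still referencing br0 should disappear
     rm /var/lib/docker/network/files/local-kv.db
     /etc/rc.d/rc.docker start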
  13. Anyway, kidding aside, I'm having a bit of an issue trying to set up networking for Docker. Previously I managed to configure the recently introduced macvtap interface, although I was using a much faster NIC, both to make up for the bonded+bridged 4x single-gig NICs on this host's board and to satisfy the requirement of using the interface directly, as instructed. That other NIC basically belongs to my main firewall, which lives on vSphere. I reasoned, though, that Unraid moves most of the data on the network anyway (not that it's that much; truth is, a single gig should be plenty of bandwidth), so perhaps sharing the NIC would be more efficient, and that's how it ended up in Unraid. Once in Unraid, I had to enable bridging anyway so I could set up a trunk port for the firewall; but the firewall wouldn't work correctly, or as transparently as I expected: if I pinged it, it (the firewall) would reply from Unraid's IP address closest to the source of the echo requests. Because of that interception, IPv6 and multicast weren't working, so I had to put it back on vSphere. Container networking was okay though. But back on the bonded bridge, container networking has been impossible to set up: if I set the addresses in Unraid**, my default gateway is overridden, misrouting the traffic as a consequence. If I set no addresses in Unraid, like I had before, and instead use the docker network commands, it won't let me create the networks because my gateway is allegedly already in use elsewhere. I tried docker network rm $(docker network ls -q) to nuke them all before recreating my own, but it didn't work. And the gateways are not specified anywhere, not manually at least, contrary to what Docker claims when I issue the commands. Any advice? A sketch of what I'm trying to end up with is below.

     **: in Network Settings, so they show up as checkboxes in the Docker settings

     I have a feeling the gateways are cached somewhere, hence my being unable to set them. In the meantime, I'll try a restart to see if that clears them. 🤞
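     To be concrete, the end state I'm after is roughly this: one macvlan network per VLAN, each on its own sub-interface, since Docker only allows a single macvlan network per parent interface. The first network is mine; the second one's name and gateway are made up for illustration:

     docker network create \
       --attachable \
       --driver macvlan \
       --subnet 10.10.0.0/24 --gateway 10.10.0.1 \
       --opt parent=br0.10 \
       z0a00

     docker network create \
       --attachable \
       --driver macvlan \
       --subnet 10.14.0.0/24 --gateway 10.14.0.1 \
       --opt parent=br0.14 \
       z0e00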
  14. Thanks for answering, and sorry for taking this long. Earlier I meant permanently attached storage, by the way, not removable storage; the server has an 8-port SAS/SATA controller. By the looks of it, this is only the first release of Unraid with ZFS support. I thought it had been supported for a while, because after I started using it again it took me a long time to discover it wasn't checking for, or notifying about, updates and was very outdated, so I just assumed ZFS support was already mature. I had been drafting a question for weeks about what you mention, the required Unraid array in a pure ZFS setup (I wasn't sure whether it was my own mistake preventing the pool from starting), so I've been thinking about exploring the pure-Btrfs-pool angle; my question was about that. RAID5/6 aside, Btrfs is kinda low-key superior to ZFS because of its flexibility, but there's nothing in the docs about Btrfs metadata/system/data profiles or about setting up a pool outside the context of cache (a rough example of what I mean is below). Just this morning I was still on it, making some drawings (attached) to help illustrate a post I couldn't get under control, and then… the power went out. Talk about anticlimactic. It's funny now. The silver lining, though, is that I have much less to research now. Thank you very, very much, seriously.
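     Concretely, outside Unraid you'd pick the data/metadata profiles when creating a Btrfs pool, or convert them later. Something like this (devices and mountpoint are placeholders):

     # create a two-device pool with raid1 data and raid1 metadata
     mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc
     # or convert an existing, mounted pool's profiles in place
     btrfs balance start -dconvert=raid1 -mconvert=raid1 /mnt/pool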
  15. So what does the slave option do, again? It was never answered. The closest to an answer was more questions: I'm curious, what exactly is enslaved? The mount in the container? The mount on the host? The [unassigned] device? And what makes it enslaved? Why is one of the other options "Shared"; will "slave" make it… not shared? Is that why it's called slave, as in slave to a single container?
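     My working guess, for whoever lands here with the same question: it maps to mount propagation, where a "slave" bind mount receives new sub-mounts from the host but nothing propagates back out of the container, while "shared" goes both ways. In plain docker run terms it would look something like this (paths and image are made up):

     # slave propagation: host-side (re)mounts under the path appear inside the container
     docker run -v /mnt/disks/usb1:/data:rslave some/image
     # shared propagation: mounts made inside the container propagate back to the host too
     docker run -v /mnt/disks/usb1:/data:rshared some/image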
  16. I've set up the same username with a matching group name several times, giving them a specific UID and GID to match my other servers, but the changes keep getting lost. Only password changes stick. The user loses (reverts from) the custom UID, and the group gets completely erased, which makes the user lose the GID as well. How do I make Unraid not ignore these changes? I already left the AD domain because it was too messy; I could never access anything, even when set as public, and even after verifying Kerberos works on every participating system. I confirmed that /etc/passwd and /etc/group get updated after issuing the commands, yet somehow both files manage to revert. Right now I'm not even sure when they revert: whether a reboot triggers it, or something else. I've stopped the array a few times to make adjustments, but the system itself has stayed up, so that shouldn't factor into this, I think. The only warning I get is when things are already failing. Is this supposed to happen? If so, how do I stop it long enough to save the data I need to save? Thanks.
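     The workaround I'm considering in the meantime, since Unraid runs from RAM and seems to rebuild /etc/passwd and /etc/group from its own config: reapply the IDs at boot from the flash drive's go file. A sketch only; the user and group names are hypothetical and the UID/GID values are just the ones I happen to use elsewhere:

     # appended to /boot/config/go
     groupadd -g 35538 mediagroup 2>/dev/null
     usermod -u 2088 -g 35538 mediauser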
  17. I tried using Unraid as a datastore for a hypervisor, but it's not good. Not good at all. So I flipped it: I made Unraid the hypervisor and started storing data on it for once. The server I converted only has drive bays for 2.5" drives, though. A lot of them, but only 2.5". I only have flash drives in that format; they're faster and much cheaper. I checked just now: the most expensive flash drive at a local store is about USD 35, sales tax included, for a 1 TB unit, versus USD 70 for the same capacity in magnetic. And according to the docs it's fine to use them in the array. I'm not using a cache disk because I don't need one if all the disks are flash already; isn't that the point? That's unless parity is block-based rather than file-based, in which case dual disks might be faster, but there's no indication of that from what I gather. So that's that; that leaves parity. As I understand it, parity is written in sync with the array data (practically and technically/programmatically, as in [mount] sync/async). I'm using the fast mode on hybrid ZFS, split-any, most-free, BTW. Additionally, due to TRIM and star alignment or whatever, parity can't be a flash drive, which limits my options to 7200 RPM magnetic disks, because I don't think they make large enough SAS drives, or at least not without the enterprise price tag. For an all-flash array I think that might still be too slow, so I'm thinking of moving parity off-server with iSCSI and using a RAID 0 or RAID 10 array for the parity volume. Is that allowed? Does Unraid need a disk, or a volume? I don't remember its name, but iSCSI has a command to unallocate/vacate or something to that effect (UNMAP, I think); I see it on vSphere all the time. Is that enough for the parity volume? Thanks.
  18. Never mind. Just as I posted this, I realized I was referencing a MAC address in the container definition; however, that wasn't even it. It was IPv6. There's full IPv6 support in the network, including for containers (there always has been), but somehow, despite each of Unraid's interfaces being dual-stacked and all of the IPv6 information appearing in the Docker section of the settings, it's not working. I also got the "WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap." message again, but at least it deploys the containers now. I'll look it up later, I need some sleep. =/ Thanks anyway!
  19. I'm migrating back to Unraid. It had been a while since I used it, and unsurprisingly, it was outdated. My install was on 6.10, missing out on ZFS and the new approach to container networking, which I found out about skimming the release notes. I keep a file of my (successful) docker run commands that (re)deploys my containers as if they never stopped running when migrated. No need for compose; I kind of loathe compose. =P I digress. Now running 6.12, and regarding the macvtap thing: I deployed a random container to see what syntax Unraid would present, so I could adapt my notes/script file accordingly, but the only difference is the network, which now goes by the Linux interface name rather than the custom name I had defined the networks with. Below is the syntax I used pre-6.12 to deploy containers.

     containerName=fxn
     docker stop "$containerName" ; docker rm "$containerName" ; docker run \
       --detach \
       --restart 'always' \
       --name "$containerName" \
       --network 'z0a00' \
       --ip '10.10.0.44' \
       --ip6 '2001:db8:db9:a00::2c' \
       --mac-address '00:50:56:0a:00:2c' \
       --hostname 'fxn.proxy.domain.tld' \
       --cpus '1' \
       --memory '2048MB' \
       --ulimit 'nofile=65536:65536' \
       -v "/mnt/user/containerbridge/$containerName/config":"/config" \
       -v "/mnt/user/containerbridge/$containerName/data":"/data" \
       -v "/mnt/user/fxn":"/mnt/user/fxn" \
       -v "/netvol/zx0_one/":"/one" \
       -v "/netvol/zx0_dtwo":"/two" \
       -v "/netvol/zx0_three":"/three" \
       -v "/netvol/zx0_four":"/four" \
       -e 'TZ=America/New York' \
       -e 'PUID=2088' \
       -e 'PGID=35538' \
       fxn/fxn

     On 6.12, z0a00 was changed to bond0.10; --network was also shortened to --net, or perhaps it's just auto-completing it, sort of like the ip command does.

     containerName=fxn
     docker stop "$containerName" ; docker rm "$containerName" ; docker run \
       --detach \
       --restart 'always' \
       --name "$containerName" \
       --net 'bond0.10' \
       --ip '10.10.0.44' \
       --ip6 '2001:db8:db9:a00::2c' \
       --mac-address '00:50:56:0a:00:2c' \
       --hostname 'fxn.proxy.domain.tld' \
       --cpus '1' \
       --memory '2048MB' \
       --ulimit 'nofile=65536:65536' \
       -v "/mnt/user/containerbridge/$containerName/config":"/config" \
       -v "/mnt/user/containerbridge/$containerName/data":"/data" \
       -v "/mnt/user/fxn":"/mnt/user/fxn" \
       -v "/netvol/zx0_one/":"/one" \
       -v "/netvol/zx0_dtwo":"/two" \
       -v "/netvol/zx0_three":"/three" \
       -v "/netvol/zx0_four":"/four" \
       -e 'TZ=America/New York' \
       -e 'PUID=2088' \
       -e 'PGID=35538' \
       fxn/fxn

     When I tried it, I got errors with both; one of them I sort of expected, the other I'm not sure I understand.

     With the new syntax:
     WARNING: Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap.

     With the old syntax (and old network):
     docker: Error response from daemon: network z0a00 not found.

     There were also -l options, but they seemed to be container metadata so Unraid can pick it up for management in its GUI:
     -l net.unraid.docker.managed=dockerman
     -l net.unraid.docker.webui=…
     -l net.unraid.docker.icon=…
     and:
     -e HOST_OS=…
     -e HOST_HOSTNAME=…
     -e HOST_CONTAINERNAME=…
     None of that seems related to networking at all… maybe HOSTNAME, but it's a stretch. How is networking added to containers now (via the CLI)? Thanks.
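     What I'll probably try, so the file keeps working unchanged: re-create the old network name myself on top of the new interface and keep using --network z0a00 as before. A sketch with my values; no idea yet whether Unraid will fight me over it:

     # one macvlan network on the VLAN 10 sub-interface, under the old name
     docker network create \
       --driver macvlan \
       --subnet 10.10.0.0/24 --gateway 10.10.0.1 \
       --opt parent=bond0.10 \
       z0a00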
  20. I fixed it! (I'm sorry, I couldn't wait.) I created an interface for management: …and another to serve solely as a bridge without any addressing whatsoever: ( … ) I just realized I need another for (heavy) data transfer, but that should do it. Now fingers crossed it doesn't kpanic after a few hours like it's been doing for a while now.
  21. When Docker is enabled, my custom networks switch the gateways around so that the parent interface (or native VLAN) becomes the default gateway, which is undesired. The default gateway should be on VLAN 11 of the bridge, i.e. br0.11.

     Docker disabled (✓):
     Docker enabled (✕):

     As seen above, routes have high metrics (unimaginatively, 200 plus the VLAN's ID number) to avoid overriding VLAN 11's gateway as the default. When Docker is down, this is obeyed; when Docker is up it's only partly obeyed, and there's what seems like a shortcut of sorts related to Docker's DHCP/IPAM that overrides the default gateway even when the checkboxes for that functionality are unmarked (which I assumed meant it was off). In the Docker settings, regardless of macvlan vs. ipvlan, there aren't any settings that could potentially be used to fix it.

     I have not yet tried removing the network definition entry causing the problem, partly because all inputs in Docker's network definitions except the last in line are grayed out, and partly because I have a feeling the default route will just move down to the next one listed anyway (i.e. from br0 to br0.10). I hope I'm wrong, but just in case, I'll wait for advice before I start wrecking things. Could you guys help me a little? You can just drop a hint if you don't feel like typing and I'll do my homework from there.
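     The stop-gap I'm tempted to use while I wait: put the proper default route back by hand after Docker comes up. Something like this, though which interface Docker actually hangs its route on is a guess on my part:

     # see which default routes are actually installed
     ip route show default
     # delete the unwanted one (device here is a placeholder)
     ip route del default dev br0
     # and make sure mine is in place on VLAN 11
     ip route replace default via 10.11.11.1 dev br0.11 metric 1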
  22. I need to pass this modem to a container. In regular Docker I think you'd just use the -v option to map it as a volume, but my attempts here have left the container unable to boot at all. I made the container privileged (last time I did this, that was a requirement for containers to access devices) and was just going to map the path, but then I found a proper device option, which I tried first. It failed. It also wasn't available anywhere in the tree; I had to paste the path because the / character isn't typable in that field, whereas it is in other modes or other fields. I went back to adding the path as originally planned, but that failed as well. I tried the documentation, nothing there, and it guided me here. Any advice on how to do this? The device appears in the host/Unraid as /dev/ttyUSB0, as in the screenshot above. Thanks.
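     For reference, this is what I'd do in plain Docker; whether the Unraid template translates to exactly this I don't know (image name is a placeholder):

     # pass the serial device through (preferred over a volume mapping)
     docker run --device /dev/ttyUSB0:/dev/ttyUSB0 some/image
     # fallback: privileged container with the device node bind-mounted
     docker run --privileged -v /dev/ttyUSB0:/dev/ttyUSB0 some/image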
  23. I moved my iTunes library from another file server to Unraid. Instead of using the existing database, though, I'm creating a new one. The files are already in place, since it's a copy of the top-level directory; I'm only scrapping the database file for a new one, so there should be minimal disk activity unless stray files were found in need of reorganization. With the new library created, I started the scan in iTunes to add the files to the new library, but it only found 300 tracks. That's tracks, not files; counting metadata files it should be plenty more than what was found, though then again the metadata files for the full set would be much, much bigger too. I'd neglected the library in favor of streaming services and entrusted it to iTunes Match, so I have no idea how big it's supposed to be now, but I remember that even after all the lost data it was somewhere around 14K. That's plenty more than 300. I did another scan and the number rose to 500+. On the third scan it rose again, to around 700. In other words, the files are there, but the scans only seem to find small batches at a time. Before being hosted on Unraid, the library was read from another SMB file share; it had been hosted there (and still is) for like a decade, re-scanned a few times over the network without issue, in the early days of SMB3 as well as later on, and in a stable Active Directory environment. I can even request Kerberos tickets from Unraid; I accidentally found that out the other day thinking I was logged in somewhere else. I've hosted the same library on Red Hat Enterprise Linux, Debian, Synology's DSM, FreeNAS and TrueNAS, macOS Server, and Windows Server, and this is the first time I've seen this behavior. At the same time, I'm admittedly ignorant of the actual manual setup of SMB shares, since all of those platforms have a specialized, easy GUI for it (Cockpit or Webmin on RHEL and Debian), and in the past I've only ever used Unraid as an app server, never a file server. So I'm unaware whether I should have set some tunable and overlooked it.

     Setup and testing

     The array is an all-flash pool with only user shares, distributed using the most-free file-to-disk allocation scheme; I figured that would behave somewhat like a RAID 0 or JBOD, thus offering the least disk contention. The server is basically dedicated to my iTunes library, so the files are small, but not so small that they're a burden on resources. I checked on my client, the iTunes host: a macOS machine that's still able to run iTunes. It's a server system, so it already has some optimizations; nevertheless, I rechecked some basics, like not writing dot files, confirming access times are disabled, limiting SMB to SMB3, disabling packet encryption and packet signing, and of course requesting fresh Kerberos tickets on both systems before re-scanning the library. The most telling test was when I unmounted the share and remounted it using NFS: after a re-scan, it found all the remaining files. Now I just hope it's not unreliable, because just before this the server had stopped responding out of the blue a few times, requiring a cold reset each time. Any advice is welcome.
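     If I do end up back on SMB, the only knob I know to try is Samba's macOS compatibility module, added through /boot/config/smb-extra.conf. This is a guess on my part, not a known fix:

     [global]
       # vfs_fruit: macOS-friendly handling of metadata and Finder/iTunes xattrs
       vfs objects = catia fruit streams_xattr
       fruit:metadata = stream
       fruit:model = MacSamba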
  24. Hey, I was attempting to set up email and I overshot past the config file into the templates, so I got distracted by those; it's the one time I'm thankful for ADHD. I browsed through them and saw they're heavy on trackers and CDNs, and in an inbox there's no reverse proxy to block the scripts with CSPs… plus, I'd kind of like to design my own from scratch. Do you know by chance how these work? Are the placeholders parsed before the email is sent, or is there anything noteworthy about them? And are the placeholders expected to be plain markup, or some kind of script or syntax? Thanks!
  25. Iduhno… I was doing something totally unrelated, got some light inspiration drew it up, got distracted and now I can't remember WTF I was doing. Anyway, it tolerates a little stretching so you don't have to install plugins or write CSS if you like it. I don't think the Unraid logo follows a spacing/scaling formula though. I'm getting distracted again. ****! See ya.