Bastian


Community Answers

  1. Since the containers still exist but are only in the exited state, you can start all of them via docker start $(docker ps -q --filter "status=exited") For someone with some knowledge of bash, that is pretty easy; it is just text processing. All the information is stored in the templates in /boot/config/plugins/dockerMan/templates-user. I am doing something similar with the descriptions of my shares.
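For anyone curious about the text-processing approach mentioned above, here is a minimal sketch. It assumes each dockerMan template is an XML file with a `<Name>` element (which matches Unraid's `my-*.xml` user templates as far as I know); the template contents and a temp directory stand in for the real path for illustration.

```shell
# Sketch: list container names from dockerMan XML templates.
# Stand-in for /boot/config/plugins/dockerMan/templates-user:
TEMPLATES=$(mktemp -d)
printf '<Container>\n  <Name>nginx</Name>\n</Container>\n' > "$TEMPLATES/my-nginx.xml"
printf '<Container>\n  <Name>plex</Name>\n</Container>\n'  > "$TEMPLATES/my-plex.xml"

for f in "$TEMPLATES"/*.xml; do
  # crude but dependency-free: pull the text between <Name> and </Name>
  sed -n 's/.*<Name>\(.*\)<\/Name>.*/\1/p' "$f"
done
```

On a real server you would point `TEMPLATES` at the plugin directory instead of creating sample files.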
  2. Some quick testing with @gluebabys' suggestion. I created two shares:

     - visible: Export = Yes, Security = Private, my user has read/write access
     - nvisible: Export = Yes, Security = Private, my user has no access

     Without any modification to SMB, both shares are visible. I then applied ABE as described in this article, by just adding those two settings to the SMB configuration. It probably needs more testing to figure out edge cases, drawbacks, etc., but it seems to be technically possible.
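The screenshot with the two settings did not survive extraction, so as a reference only: access-based enumeration in Samba is typically enabled with options along these lines. The parameter names are real smb.conf options, but whether these are the exact two from the original post is an assumption.

```ini
[global]
; hide shares the connecting user cannot access
access based share enum = yes
; hide files/directories the user cannot read
hide unreadable = yes
```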
  3. So, I just noticed that my UI isn't working anymore. SSH still works, so I could dig through a little bit, though I can't make much of it. What makes it weirder, most commands related to the disks don't work anymore:

     - I can't list the contents of one of my pools (raidz1 of 5 SSDs)
     - df -h does not work
     - zfs list does not work
     - docker ps does not work
     - diagnostics also does not work (otherwise I would have attached it)

     My first idea was to just reboot the server, but powerdown -r also doesn't work.

     iostat (sdl to sdp are the SSDs):

     ```
     Linux 6.1.82-Unraid (Alpha)     04/02/2024     _x86_64_     (24 CPU)

     avg-cpu:  %user   %nice %system %iowait  %steal   %idle
                4.79    0.01    7.05    0.72    0.00   87.44

     Device       tps    kB_read/s  kB_wrtn/s  kB_dscd/s  kB_read     kB_wrtn    kB_dscd
     loop0        0.00   0.07       0.00       0.00       35844       0          0
     loop1        0.02   0.38       0.00       0.00       198260      0          0
     loop2        0.00   0.01       0.00       2.04       2776        276        1055424
     nvme0n1      0.30   17.15      15.68      15.53      8888813     8129196    8046948
     sda          0.02   1.49       0.00       0.00       770227      1467       0
     sdb          0.44   13.40      7.59       0.00       6946975     3933300    0
     sdc          0.43   13.84      7.59       0.00       7171563     3933736    0
     sdd          23.08  7554.26    15.33      0.00       3915211556  7945944    0
     sde          128.23 625.13     613.32     351.97     323990385   317869372  182419128
     sdf          22.98  7538.51    15.40      0.00       3907046351  7980860    0
     sdg          18.93  5734.18    0.01       0.00       2971900352  7404       0
     sdh          18.69  5680.79    0.01       0.00       2944232616  6432       0
     sdi          18.65  5669.81    0.01       0.00       2938540916  6860       0
     sdj          19.35  5672.81    0.01       0.00       2940097268  7720       0
     sdk          18.54  5669.16    0.01       0.00       2938205876  6500       0
     sdl          130.58 614.48     584.03     351.97     318473885   302689744  182419128
     sdm          132.93 654.28     613.38     352.09     339097085   317901404  182479608
     sdn          124.78 574.18     584.05     352.08     297587537   302698432  182473900
     sdo          126.98 607.63     584.17     352.09     314922621   302761532  182479608
     sdp          133.15 658.90     613.38     352.08     341491741   317899456  182473900
     ```

     syslog looks more frightening and is also the part I can't make anything of:

     ```
     Apr 2 00:24:54 Alpha kernel: Modules linked in: vhci_hcd usbip_host usbip_core xt_CHECKSUM ipt_REJECT nf_reject_ipv4 ip6table_mangle iptable_mangle vhost_net vhost vhost_iotlb xt_nat veth xt_conntrack nf_conntrack_netlink nfnetlink xfrm_user xfrm_algo xt_addrtype br_netfilter nvidia_uvm(PO) md_mod xt_MASQUERADE xt_tcpudp xt_mark iptable_nat tcp_diag inet_diag ip6table_nat nf_nat nf_conntrack nf_defrag_ipv6 nf_defrag_ipv4 tun nct6775 nct6775_core hwmon_vid ip6table_filter ip6_tables iptable_filter ip_tables x_tables macvtap macvlan tap bridge stp llc atlantic igb i2c_algo_bit nvidia_drm(PO) nvidia_modeset(PO) zfs(PO) edac_mce_amd edac_core intel_rapl_msr zunicode(PO) intel_rapl_common iosf_mbi zzstd(O) zlua(O) kvm_amd zavl(PO) nvidia(PO) icp(PO) kvm zcommon(PO) crct10dif_pclmul crc32_pclmul znvpair(PO) crc32c_intel ghash_clmulni_intel sha512_ssse3 sha256_ssse3 spl(O) video sha1_ssse3 btusb aesni_intel drm_kms_helper btrtl crypto_simd nvme btbcm cryptd btintel input_leds wmi_bmof mxm_wmi rapl drm
     Apr 2 00:24:54 Alpha kernel: led_class nvme_core joydev bluetooth mpt3sas backlight k10temp i2c_piix4 syscopyarea raid_class sysfillrect ccp sysimgblt ecdh_generic i2c_core ahci scsi_transport_sas fb_sys_fops ecc libahci wmi button acpi_cpufreq unix [last unloaded: atlantic]
     Apr 2 00:24:54 Alpha kernel: ---[ end trace 0000000000000000 ]---
     Apr 2 00:24:54 Alpha kernel: RIP: 0010:kmem_cache_alloc+0xa4/0x14d
     Apr 2 00:24:54 Alpha kernel: Code: 04 24 74 05 48 85 c0 75 1a 45 89 f0 4c 89 f9 83 ca ff 44 89 e6 48 89 ef e8 2a fc ff ff 48 89 04 24 eb 25 8b 4d 28 48 8b 7d 00 <48> 8b 1c 08 48 8d 8a 00 01 00 00 65 48 0f c7 0f 0f 94 c0 84 c0 74
     Apr 2 00:24:54 Alpha kernel: RSP: 0018:ffffc9002f26bb80 EFLAGS: 00010202
     Apr 2 00:24:54 Alpha kernel: RAX: 6932001749ec0f1c RBX: ffff888108ef0800 RCX: 0000000000000800
     Apr 2 00:24:54 Alpha kernel: RDX: 0000000059a4be0c RSI: 0000000000042c20 RDI: 0000606fc0e0f9e0
     Apr 2 00:24:54 Alpha kernel: RBP: ffff888103e8ac00 R08: 0000000000042c20 R09: 0000000000000000
     Apr 2 00:24:54 Alpha kernel: R10: 0000000000000000 R11: 0000000000000002 R12: 0000000000042c20
     Apr 2 00:24:54 Alpha kernel: R13: ffff888103e8ac00 R14: 0000000000001000 R15: ffffffffa1017efb
     Apr 2 00:24:54 Alpha kernel: FS: 0000153d9fffe6c0(0000) GS:ffff88903d300000(0000) knlGS:0000000000000000
     Apr 2 00:24:54 Alpha kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Apr 2 00:24:54 Alpha kernel: CR2: 00001468e0cf2110 CR3: 000000021cbee000 CR4: 00000000003506e0
     Apr 2 00:29:30 Alpha agetty[54078]: tty1: input overrun
     Apr 2 00:33:33 Alpha kernel: general protection fault, probably for non-canonical address 0x6932001749ec171c: 0000 [#33] PREEMPT SMP NOPTI
     Apr 2 00:33:33 Alpha kernel: CPU: 12 PID: 55442 Comm: worker Tainted: P D O 6.1.82-Unraid #1
     Apr 2 00:33:33 Alpha kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./X399 Professional Gaming, BIOS P3.80 12/04/2019
     Apr 2 00:33:33 Alpha kernel: RIP: 0010:kmem_cache_alloc+0xa4/0x14d
     Apr 2 00:33:33 Alpha kernel: Code: 04 24 74 05 48 85 c0 75 1a 45 89 f0 4c 89 f9 83 ca ff 44 89 e6 48 89 ef e8 2a fc ff ff 48 89 04 24 eb 25 8b 4d 28 48 8b 7d 00 <48> 8b 1c 08 48 8d 8a 00 01 00 00 65 48 0f c7 0f 0f 94 c0 84 c0 74
     Apr 2 00:33:33 Alpha kernel: RSP: 0018:ffffc9004f6bf7b8 EFLAGS: 00010202
     Apr 2 00:33:33 Alpha kernel: RAX: 6932001749ec0f1c RBX: ffff888108ef0800 RCX: 0000000000000800
     Apr 2 00:33:33 Alpha kernel: RDX: 0000000059a4be0c RSI: 0000000000042c20 RDI: 0000606fc0e0f9e0
     Apr 2 00:33:33 Alpha kernel: RBP: ffff888103e8ac00 R08: 0000000000042c20 R09: 0000000000000000
     Apr 2 00:33:33 Alpha kernel: R10: 0000000000000000 R11: 0000000000000003 R12: 0000000000042c20
     Apr 2 00:33:33 Alpha kernel: R13: ffff888103e8ac00 R14: 0000000000001000 R15: ffffffffa1017efb
     Apr 2 00:33:33 Alpha kernel: FS: 0000153da01ff6c0(0000) GS:ffff88903d300000(0000) knlGS:0000000000000000
     Apr 2 00:33:33 Alpha kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Apr 2 00:33:33 Alpha kernel: CR2: 00001468b8cc4270 CR3: 000000021cbee000 CR4: 00000000003506e0
     Apr 2 00:33:33 Alpha kernel: Call Trace:
     Apr 2 00:33:33 Alpha kernel: <TASK>
     Apr 2 00:33:33 Alpha kernel: ? __die_body+0x1a/0x5c
     Apr 2 00:33:33 Alpha kernel: ? die_addr+0x38/0x51
     Apr 2 00:33:33 Alpha kernel: ? exc_general_protection+0x30f/0x345
     Apr 2 00:33:33 Alpha kernel: ? asm_exc_general_protection+0x22/0x30
     Apr 2 00:33:33 Alpha kernel: ? spl_kmem_cache_alloc+0x45/0x4b4 [spl]
     Apr 2 00:33:33 Alpha kernel: ? kmem_cache_alloc+0xa4/0x14d
     Apr 2 00:33:33 Alpha kernel: ? kmem_cache_alloc+0x4e/0x14d
     Apr 2 00:33:33 Alpha kernel: spl_kmem_cache_alloc+0x45/0x4b4 [spl]
     Apr 2 00:33:33 Alpha kernel: ? srso_return_thunk+0x5/0x10
     Apr 2 00:33:33 Alpha kernel: ? percpu_counter_add_batch+0x85/0xa2
     Apr 2 00:33:33 Alpha kernel: abd_alloc_linear+0x63/0x8f [zfs]
     Apr 2 00:33:33 Alpha kernel: vdev_raidz_map_alloc+0x22c/0x2e9 [zfs]
     Apr 2 00:33:33 Alpha kernel: vdev_raidz_io_start+0x35/0x2d7 [zfs]
     Apr 2 00:33:33 Alpha kernel: ? vdev_mirror_rebuilding+0x65/0x65 [zfs]
     Apr 2 00:33:33 Alpha kernel: zio_vdev_io_start+0x22b/0x23d [zfs]
     Apr 2 00:33:33 Alpha kernel: zio_nowait+0xf0/0x10a [zfs]
     Apr 2 00:33:33 Alpha kernel: vdev_mirror_io_start+0x1d9/0x1e0 [zfs]
     Apr 2 00:33:33 Alpha kernel: zio_vdev_io_start+0x22b/0x23d [zfs]
     Apr 2 00:33:33 Alpha kernel: zio_nowait+0xf0/0x10a [zfs]
     Apr 2 00:33:33 Alpha kernel: arc_read+0xd78/0xf60 [zfs]
     Apr 2 00:33:33 Alpha kernel: ? dbuf_rele_and_unlock+0x4ef/0x4ef [zfs]
     Apr 2 00:33:33 Alpha kernel: dbuf_read_impl.constprop.0+0x49d/0x51c [zfs]
     Apr 2 00:33:33 Alpha kernel: dbuf_read+0x2c6/0x4da [zfs]
     Apr 2 00:33:33 Alpha kernel: dmu_buf_hold_array_by_dnode+0x1be/0x41c [zfs]
     Apr 2 00:33:33 Alpha kernel: dmu_read_uio_dnode+0x4e/0xe7 [zfs]
     Apr 2 00:33:33 Alpha kernel: ? srso_return_thunk+0x5/0x10
     Apr 2 00:33:33 Alpha kernel: ? srso_return_thunk+0x5/0x10
     Apr 2 00:33:33 Alpha kernel: ? zfs_rangelock_enter_impl+0x48a/0x4b5 [zfs]
     Apr 2 00:33:33 Alpha kernel: dmu_read_uio_dbuf+0x41/0x59 [zfs]
     Apr 2 00:33:33 Alpha kernel: zfs_read+0x283/0x33a [zfs]
     Apr 2 00:33:33 Alpha kernel: zpl_iter_read+0xb7/0x149 [zfs]
     Apr 2 00:33:33 Alpha kernel: vfs_read+0x105/0x19f
     Apr 2 00:33:33 Alpha kernel: ksys_pread64+0x64/0x84
     Apr 2 00:33:33 Alpha kernel: do_syscall_64+0x6b/0x81
     Apr 2 00:33:33 Alpha kernel: entry_SYSCALL_64_after_hwframe+0x64/0xce
     Apr 2 00:33:33 Alpha kernel: RIP: 0033:0x153ec4bc6e07
     Apr 2 00:33:33 Alpha kernel: Code: 08 89 3c 24 48 89 4c 24 18 e8 15 50 f8 ff 4c 8b 54 24 18 48 8b 54 24 10 41 89 c0 48 8b 74 24 08 8b 3c 24 b8 11 00 00 00 0f 05 <48> 3d 00 f0 ff ff 77 31 44 89 c7 48 89 04 24 e8 65 50 f8 ff 48 8b
     Apr 2 00:33:33 Alpha kernel: RSP: 002b:0000153da01fdca0 EFLAGS: 00000293 ORIG_RAX: 0000000000000011
     Apr 2 00:33:33 Alpha kernel: RAX: ffffffffffffffda RBX: 0000153daa6a3920 RCX: 0000153ec4bc6e07
     Apr 2 00:33:33 Alpha kernel: RDX: 0000000000001000 RSI: 0000153e78ce4000 RDI: 000000000000000c
     Apr 2 00:33:33 Alpha kernel: RBP: 0000153e78ce4000 R08: 0000000000000000 R09: 0000000000000001
     Apr 2 00:33:33 Alpha kernel: R10: 0000000122198000 R11: 0000000000000293 R12: 0000000000000000
     Apr 2 00:33:33 Alpha kernel: R13: 000056230cca4c45 R14: 0000153ec2da5378 R15: 0000153d9ffff000
     Apr 2 00:33:33 Alpha kernel: </TASK>
     ```

     What else can I do? I am at my wits' end.
  4. +1 from me. Sounds like an awesome idea, though it is probably hard to implement with Docker, Samba, and VMs running in the background.
  5. I am not OP, just a passer-by adding his two cents ^^ Reading your post, I was curious how ZFS would behave in that scenario and posted my findings. True, I might have missed the original question a bit 😅
  6. Did some research and stumbled upon this post on SE. Tl;dr: in ZFS, redundant configurations (mirror, raidz, raidz2, raidz3) can self-correct on read; single disks have to set copies to 2 or higher to correct errors. The part about striped pools I didn't understand 😕
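For reference, the copies property mentioned above is set per dataset. This is only a sketch: the pool/dataset names are hypothetical, it needs a live ZFS system, and it falls back to a message where the tools are missing.

```shell
# Keep two copies of every block so a single-disk dataset can
# self-heal on read; tank/important is a hypothetical dataset.
if command -v zfs >/dev/null 2>&1; then
  zfs set copies=2 tank/important
  zfs get -H -o value copies tank/important
else
  echo "zfs not available on this host"
fi
```

Note that copies=2 only protects blocks written after the property is set; existing data has to be rewritten to gain the extra copy.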
  7. I don't know if this is already tracked somewhere else, but IMO this bug isn't solved! I know it is difficult to handle, since the NFS service can be started dynamically while the symlinks cannot, but simply ignoring the global setting can't be part of the solution. Sorry if I sound a little blunt; English isn't my first language and I am still a little upset, since this bug cost me the last 4 hours. My proposed solution:

     - A tooltip on the field itself stating which rule for an exclusive share has been broken
     - Disabling the NFS Export button if NFS has been enabled after the symlink was created (it is a bug either way that you can currently do that)

     With this, you could use the global NFS setting again.
  8. What. The actual. F! This little comment on the bug report had the solution, and I don't know how that could be missed: the global setting for NFS isn't respected. I once used NFS but disabled it globally after I had no use for it anymore. Enabled NFS -> changed all NFS Export settings -> (disabled NFS again) -> restarted the array, and the symlinks are working now.
  9. Stopped both the array and the server multiple times since I moved the files and made the configuration changes.
  10. Just triple-checked all disks and pools; it only exists on that one pool.
  11. Hi all, I've tried to move all my ISOs to an all-SSD pool (raidz1 of 6 x 500 GB) and use it as an exclusive share:

     - I moved all the data to the pool through the share config and mover
     - I verified that all data is only stored on the pool
     - Stopped the array
     - Enabled exclusive shares (they had been disabled on my server; I usually don't use /mnt/user/)
     - Started the array

     Based on the tooltip, all requirements are met:

     - The share's primary storage is set to the pool, secondary is set to none
     - The data only exists on the pool
     - NFS is completely disabled

     On two of my shares, which are also located only on pools, it works, but not on the ISOs share. My share configuration: a compute in the share overview, showing exclusive access on another share and that there is no data on any other drive. The server and array have been restarted multiple times in the meantime. I am at my wits' end; does anyone have an idea? Thanks in advance! alpha-diagnostics-20231225-2035.zip
  12. +1. With Docker Compose labels, Unraid could still save all necessary metadata directly on the stack.
  13. Just to make sure: the truncation only happens if you open the log in the browser. If you open the log through the shell (usually located in /var/lib/docker/containers/), it is the full log, without any truncation. Since the file is located under /var/, it is only stored in memory and doesn't take up any space in the docker image. This also means you lose all logs on reboot.
  14. As you already said, /var/log is mostly logging of the host system. There you can find the log of Unraid itself (syslog), the log of the Docker engine (I presume docker.log), and the logs of various tools running on the host.

     As stupid as it sounds, there is none. What you see in `docker logs` (or the UI) is what you get. You have to take into consideration that Docker is not a normalised ecosystem; it is up to each application and image maintainer how much and what they want to log. There are ways to extend an application's logging. For example, most servers (Spring, ASP, etc.) have different log levels, controlled through environment variables, but that depends very heavily on the exact implementation.

     So, what to do if a container fails? Does it fail on creation (Unraid will prompt a "command failed" on saving)? You should get a pretty telling exception message from Docker with the reason. Does it exit with a non-zero code after creation? Check the docker logs. If the creator printed something, it should be in there. Otherwise you have to contact the creator (through an application thread here in the forum, a GitHub issue, etc.) and attach both the docker log and the docker run command (as printed by Unraid on saving). They wrote the application and might see more than a third party can. There could be additional resources, like an internal log file or a configuration file created by the application, but that varies from application to application.

     I just noticed that the UI only shows the last 100 or so lines. For the full log you can check the actual log file handled by Docker. To get the path, just run `docker inspect --format='{{.LogPath}}' <container id>`.

     I hope this helped and could clear up your question somewhat.
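The `docker inspect` step above can be sketched end to end. The container name is hypothetical, and the block needs a running Docker daemon, so it degrades to a message where docker or the container is missing.

```shell
# Sketch: dump the tail of a container's full JSON log file,
# instead of the ~100 lines the UI shows. "my-container" is a
# hypothetical name; substitute your own container name or ID.
CONTAINER=my-container
if command -v docker >/dev/null 2>&1; then
  if LOGPATH=$(docker inspect --format='{{.LogPath}}' "$CONTAINER" 2>/dev/null); then
    # Each line is a JSON object: {"log":"...","stream":"stdout","time":"..."}
    tail -n 500 "$LOGPATH"
  else
    echo "container $CONTAINER not found"
  fi
else
  echo "docker not available on this host"
fi
```

With the default json-file logging driver the path usually ends up under /var/lib/docker/containers/<id>/<id>-json.log, which matches the location mentioned a couple of answers above.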
  15. In my defence, I am not a front-end developer, and the last time I worked with HTML was 10 years ago. <script /> is not valid syntax; it has to be <script></script>.