
Glassed Silver

Members · 166 posts

Posts posted by Glassed Silver

  1. @Squid I'm really sorry to ping you, but I haven't found any comment from you on the docker update notification spam situation so far (maybe I didn't use the search correctly or used the wrong terms, idk...)

     

    Is this an issue you're aware of, and can we expect a fix sometime soon-ish? I don't think FCP really needs to check on any of my Docker containers: I auto-update a lot of them and manually update the rest myself. If we could exclude that check, it would be really helpful, especially since it would also reduce the processing needed.

     

    Getting daily notifications that a dozen or so containers happen to have an update available is very unhelpful, because it buries any actual errors: I catch myself bulk-swiping away all of my Pushover notifications when I don't have time for minor stuff like that.

     

    In any case thank you so much for this wonderful plug-in!

  2. 2 hours ago, ich777 said:

    That's not how my container works.

    My container pulls the initial version from Mozilla itself, extracts it, and then uses the built-in updater to update itself; it doesn't pull anything Thunderbird-related from the Debian repo.

     

    No plans yet and not for the foreseeable future for my containers.

     

    EDIT: maybe I can look into that when real life calms down a bit but I assume that will not be the case this year…

    Fair enough, but then how do I get it to download a version that isn't so old?

     

    latest gave me version 102, which happens to be what Debian's repo has.

     

    Okay, I just investigated. Apparently you use the FF ESR/Thunderbird PPA, and that one only offers 102 as the latest Thunderbird.

     

    It'd be nice if you could add an option, via a variable, to use their thunderbird-next repo instead. That said, it's good to see you don't rely on Debian for the main third-party packages whose upstreams provide reliable repos themselves.

     

    Other than that, thank you so much for the container. :)

  3. Any reason why the Thunderbird container doesn't let me pick unstable?

     

    The Debian repo might not be the best source to reference for packages that need to be very up-to-date...

     

    I'd suggest adding the Mozilla apt repo and sourcing from it on every restart of the container rather than pulling from Debian; a rough sketch of what I mean is below. Being stuck 14-15 versions behind is no laughing matter when you're trying to run the latest release and unstable isn't available as a tag.
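
    Something along these lines inside the container would do it, assuming a Debian/Ubuntu base; the exact PPA/repo name is an assumption on my part and would need checking:

    # hypothetical startup snippet, not the container's actual script
    add-apt-repository -y ppa:mozillateam/thunderbird-next
    apt-get update
    apt-cache policy thunderbird    # check which version the repo actually offers
    apt-get install -y thunderbird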

     

    And if it's not too much to ask, do you have any plans to switch to KasmVNC over noVNC?

  4. I don't have the GPU Stat plugin and still get this error logged. Since unRAID primarily works off a web UI and is typically extended with heaps of plugins that seem to rely on this setting being set correctly (probably higher than its default), I'd suggest LimeTech look into this a little more closely.

     

    I recently redid my Docker config when I switched from a btrfs vdisk to folders on a ZFS cache pool, and the more containers I added back from my stored templates, the slower the UI got; I bet this has something to do with it as well. I have over 90 containers (before anyone feels an itch: I'm not here to justify my setup, and you won't see me entertain that discussion...).

     

    unRAID's web UI HAS to scale better for large setups; this has been an issue for years now, and as you extend your use cases for unRAID, the UI punishes you heavily for it.

  5. 13 hours ago, jademonkee said:

    Just adding to the voices on this issue:

    I use MACVLAN on Docker, and have had no problems with that. I have Unifi gear (USG and 2x APs, with the controller running in Docker), and have no problems (except for it complaining that my bonded eth on the server shares an IP address).

    If there's anything I can do to help troubleshoot this problem (contributing to a known-working hardware list, for instance), feel free to reach out.

    Could it be that none of your Docker containers use br0?
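
    (Easy to check from the unRAID shell, assuming the custom Docker network really is named br0 on your box:)

    docker network ls            # is there a br0 custom network at all?
    docker network inspect br0   # the "Containers" section lists anything attached to it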

  6. 1 hour ago, wirenut said:

    I am one who has had an unclean shutdown after upgrading.

    I followed the instructions for the upgrade to 6.12.3. I did have to use the command-line instructions; the upgrade went fine and the server was up for 6 days. This morning I needed to reboot the server.

    I spun up all disks.

    I individually stopped all my dockers and my VM.

    I hit the reboot button.

    It is about 4 hours into the unclean-shutdown parity check with no errors so far; the log repeated this while shutting down:

     

    Jul 22 08:34:04 Tower root: umount: /mnt/cache: target is busy.
    Jul 22 08:34:04 Tower emhttpd: shcmd (5468228): exit status: 32
    Jul 22 08:34:04 Tower emhttpd: Retry unmounting disk share(s)...
    Jul 22 08:34:09 Tower emhttpd: Unmounting disks...
    Jul 22 08:34:09 Tower emhttpd: shcmd (5468229): umount /mnt/cache
    Jul 22 08:34:09 Tower root: umount: /mnt/cache: target is busy.
    Jul 22 08:34:09 Tower emhttpd: shcmd (5468229): exit status: 32
    Jul 22 08:34:09 Tower emhttpd: Retry unmounting disk share(s)...

     

    attached diagnostics captured from reboot.

     

    not sure what/why it happened.

    tower-diagnostics-20230722-0834.zip

    Probably a good idea to post this into a dedicated thread if you ask me.
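
    Before you do, it might also be worth capturing what still holds /mnt/cache open the next time a reboot hangs like that; assuming lsof and fuser are available on your box, something along these lines:

    # list processes that still have files open below the cache mount point
    lsof +D /mnt/cache 2>/dev/null | head -n 40
    # or, more compact: which PIDs keep the mount busy and how
    fuser -vm /mnt/cache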

     

    off-topic/rant:

    The longer I am on 6.11.5/6.12.x, the more I wish I had heeded the cautionary tales about the MACVLAN issues and stayed on 6.9.2 until they were resolved.

    That being said, a few weeks ago I didn't know I'd get a Fritz!Box router...

  7. 20 hours ago, JorgeB said:

    IMHO yes, you still cannot remove devices, change pool topology, etc.

    Sure, those are definitely aspects that still need work, but for homelabbers drawn to unRAID's unique selling points this lifts ZFS from a filesystem that is very difficult to plan around to something workable, where the remaining hurdles are a lot easier (and cheaper) to mitigate.

     

    It most certainly is 80% of the way. Way more than a little if you ask me.

  8. On 6/29/2023 at 7:27 PM, JorgeB said:

    I saw the video a couple of days ago, but thanks for linking the pull request with more details. Good that it looks like this is finally going to get done; it's a nice feature mainly for hobbyists, which I assume covers most Unraid users, and it makes ZFS raidz a little more flexible.

    A little? In my opinion this is the ONE change most homelabbers will have been waiting for if they want to expand as they go, rather than moving data around, recreating pools, and sinking money into drives they don't yet need storage-wise, etc.

     

    I for one am ECSTATIC and cannot wait for this to land in unRAID. Glorious times; this is by far the single most important feature I've been waiting for in unRAID.
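
    For anyone wondering what that will actually look like: as far as I understand the linked pull request, you grow an existing raidz vdev by attaching single disks to it, roughly like below. Pool, vdev and device names here are made up, and the exact syntax may still differ by the time it lands in unRAID.

    zpool status tank                     # note the raidz vdev name, e.g. raidz1-0
    zpool attach tank raidz1-0 /dev/sdx   # grow that vdev by one disk
    zpool status tank                     # shows the expansion/reflow progress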

  9. 9 hours ago, tucansam said:

    I've never seen this many X.X.y releases in a row in maybe forever.  Are the major releases not ready for prime-time before they're rolled out?

     

    More eyes = more bugs found.

     

    Beta tests and RCs are run by few people; gold releases are run by everyone except the cautious few.

  10. 19 minutes ago, faxxe71 said:

    Update done from 6.11.5 to 6.12.3:

    update all Apps and Dockers

    disabled VM (have one running)

    disabled Docker (21 always running); I use ipvlan, with a few containers on different IPs on br0

    backup from the boot stick

    update and reboot properly

    enabled the docker service and 21 dockers came up correctly

    enabled the VM service and 1 VM starts correctly

     

    Everything works flawlessly and it has now been up for 24 h.

     

    -faxxe

     

    Yeah, IPVLAN is fine; the problem is that it's not a viable option for all of us.

     

    LimeTech seems to like to pretend it is, unless they're simply too busy catching up with this mess and want to announce improvements once they've figured things out. Personally, I think communication should come first, but sadly this is the kind of behavior I always feared I'd run into at some point with a proprietary, commercial product that leans so heavily on community development and support.

     

    It'll be a happy day when LT steers this ship around again. I love unRAID regardless, but for a server OS this is becoming a bit of a sour tale.

  11. At this point I'd already be happy to see LimeTech publicly acknowledge the issue and communicate that they're looking into it.

     

    The status quo seems to be letting the community figure it out with a variety of workarounds and hacks, which I find problematic at best and concerning at worst.

  12. 2 hours ago, ich777 said:

    This is a bit hard since I don't know what card the users run and why. Users may have a reason to use such a card, but from an efficiency standpoint they are really bad nowadays.

    But downloads of the legacy drivers are decreasing from release to release, and the real reason I don't want to do a "legacy" branch of the driver is that Nvidia could drop the legacy driver at any time, as they have done with many things in the past. :/

     

    All other branches will update properly even if you upgrade Unraid to a newer version. :)

     

    Thanks! :)

    1) I think you misunderstood what I meant; my point is that you should tell users when you fall back to latest.

     

    2) As for the different point that Nvidia could drop legacy drivers anytime.... Sure... Then update the plugin to drop the channel?

     

    In any case that wasn't even my point. :)

  13. 12 minutes ago, ich777 said:

    The main issue is that driver version 470.129.06 doesn't exist for the new Unraid version, because Nvidia also releases new legacy drivers from time to time and the new version number is 470.199.02.

     

    It's a safety measure on my part that it falls back to the latest driver so that at least a driver is installed. I know it is inconvenient and involves another reboot, but I won't change that in the plugin because I encourage users to upgrade to something more recent like an NVIDIA T400.

    Sorry but a GT710 is not worth the money... :/

    If your plugin is that opinionated, maybe you should provide that information within it?

     

    Not trying to sound sour; I don't even have a GPU in my unRAID server, though I may at some point. It would simply avoid confusion. Just thinking aloud. :)
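
    (Side note for anyone who hits that silent fallback: after the reboot you can at least confirm which driver you actually ended up on, assuming nvidia-smi is available once the plugin has installed a driver:)

    nvidia-smi --query-gpu=driver_version --format=csv,noheader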

  14. 43 minutes ago, ich777 said:

    What router do you have?

     

    I know that the Fritzbox can't handle that, but what router do you have?

     

    Anyway, in my opinion it is better to avoid crashes instead of having call traces…

     

    For what application(s) do you need static IP address(es)?

    I do have a FritzBox, indeed.

     

    Right now I have a stable system only because I deactivated Docker, but obviously at some point I'd like to use one of the main features of the OS I paid for again.

     

    As for what I need static IPs for: services that need to be reachable at all times at a reliable address, regardless of DHCP and DNS.
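
    To make it concrete, this is the kind of setup I mean; the subnet, gateway, parent interface, container name and image are placeholders for my network, not a recommendation:

    # macvlan network on top of br0 so containers get their own address on the LAN
    docker network create -d macvlan \
      --subnet=192.168.178.0/24 --gateway=192.168.178.1 \
      -o parent=br0 lan_macvlan
    # pin a container to a fixed address on that network
    docker run -d --name=example-app --network=lan_macvlan --ip=192.168.178.50 nginx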

  15.   

    20 hours ago, ich777 said:

    Uninstall the SR-IOV plugin and reboot; the maintainer has not yet uploaded the new packages that are required for 6.12.3.

    You can also go back to 6.12.2 and install the SR-IOV plugin again if you really need it and wait until the maintainer uploads the updated plugin packages so you can upgrade to 6.12.3

     

    Something is also wrong with your modprobe.d files: you have an i915.conf and an i915.conf.conf, and one of them is set up wrong:

    options i915 force_probe=4c8b
    options i915 enable_guc=2blacklist i915

     

    Please also switch from MACVLAN to IPVLAN in your Docker settings; you have to stop the Docker service to change that setting.

    Sadly MACVLAN is needed for static IPs and for some routers that don't support multiple IPs for a single MAC address...

  16. Alright, that sounds like a good compromise; thank you so much for being willing to help on those terms.

     

    Here's a snippet from around the mount attempt. If you need more, I'll merrily provide it, like you suggested. :)

     

    Jun 20 22:43:20 Ahri kernel: scsi 1:0:6:0: Direct-Access     ATA      TOSHIBA MG08ACA1 0102 PQ: 0 ANSI: 6
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:6:0: SATA: handle(0x0010), sas_addr(0x5001438024fdabe7), phy(6), device_name(0x5000039af8d2f414)
    Jun 20 22:43:20 Ahri kernel: sd 1:0:0:0: [sdb] Attached SCSI disk
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:6:0: enclosure logical id (0x5001438024fdabe0), slot(7)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:6:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:6:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
    Jun 20 22:43:20 Ahri kernel: sdc: sdc1
    Jun 20 22:43:20 Ahri kernel: sd 1:0:1:0: [sdc] Attached SCSI disk
    Jun 20 22:43:20 Ahri kernel: sdf: sdf1
    Jun 20 22:43:20 Ahri kernel: sd 1:0:4:0: [sdf] Attached SCSI disk
    Jun 20 22:43:20 Ahri kernel: sde: sde1
    Jun 20 22:43:20 Ahri kernel: sd 1:0:3:0: [sde] Attached SCSI disk
    Jun 20 22:43:20 Ahri kernel: sdd: sdd1
    Jun 20 22:43:20 Ahri kernel: sd 1:0:2:0: [sdd] Attached SCSI disk
    Jun 20 22:43:20 Ahri kernel: sdg: sdg1
    Jun 20 22:43:20 Ahri kernel: sd 1:0:5:0: [sdg] Attached SCSI disk
    Jun 20 22:43:20 Ahri kernel: sd 1:0:6:0: Attached scsi generic sg7 type 0
    Jun 20 22:43:20 Ahri kernel: end_device-1:0:6: add: handle(0x0010), sas_addr(0x5001438024fdabe7)
    Jun 20 22:43:20 Ahri kernel: sd 1:0:6:0: [sdh] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
    Jun 20 22:43:20 Ahri kernel: sd 1:0:6:0: [sdh] 4096-byte physical blocks
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:7:0: Direct-Access     ATA      TOSHIBA MG08ACA1 0102 PQ: 0 ANSI: 6
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:7:0: SATA: handle(0x0011), sas_addr(0x5001438024fdabe8), phy(7), device_name(0x5000039af8d2e434)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:7:0: enclosure logical id (0x5001438024fdabe0), slot(8)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:7:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:7:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
    Jun 20 22:43:20 Ahri kernel: sd 1:0:6:0: [sdh] Write Protect is off
    Jun 20 22:43:20 Ahri kernel: sd 1:0:6:0: [sdh] Mode Sense: 7f 00 00 08
    Jun 20 22:43:20 Ahri kernel: sd 1:0:6:0: [sdh] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Jun 20 22:43:20 Ahri kernel: sdh: sdh1
    Jun 20 22:43:20 Ahri kernel: sd 1:0:6:0: [sdh] Attached SCSI disk
    Jun 20 22:43:20 Ahri kernel: sd 1:0:7:0: Attached scsi generic sg8 type 0
    Jun 20 22:43:20 Ahri kernel: end_device-1:0:7: add: handle(0x0011), sas_addr(0x5001438024fdabe8)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:8:0: Direct-Access     ATA      Samsung SSD 860  4B6Q PQ: 0 ANSI: 6
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:8:0: SATA: handle(0x0012), sas_addr(0x5001438024fdabed), phy(12), device_name(0x5002538e90804b01)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:8:0: enclosure logical id (0x5001438024fdabe0), slot(13)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:8:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:8:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
    Jun 20 22:43:20 Ahri kernel: sd 1:0:7:0: [sdi] 27344764928 512-byte logical blocks: (14.0 TB/12.7 TiB)
    Jun 20 22:43:20 Ahri kernel: sd 1:0:7:0: [sdi] 4096-byte physical blocks
    Jun 20 22:43:20 Ahri kernel: sd 1:0:8:0: Attached scsi generic sg9 type 0
    Jun 20 22:43:20 Ahri kernel: end_device-1:0:8: add: handle(0x0012), sas_addr(0x5001438024fdabed)
    Jun 20 22:43:20 Ahri kernel: sd 1:0:8:0: [sdj] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
    Jun 20 22:43:20 Ahri kernel: sd 1:0:7:0: [sdi] Write Protect is off
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:9:0: Direct-Access     ATA      Samsung SSD 860  4B6Q PQ: 0 ANSI: 6
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:9:0: SATA: handle(0x0013), sas_addr(0x5001438024fdabee), phy(13), device_name(0x5002538e702b1ef3)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:9:0: enclosure logical id (0x5001438024fdabe0), slot(14)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:9:0: atapi(n), ncq(y), asyn_notify(n), smart(y), fua(y), sw_preserve(y)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:9:0: qdepth(32), tagged(1), scsi_level(7), cmd_que(1)
    Jun 20 22:43:20 Ahri kernel: sd 1:0:9:0: Attached scsi generic sg10 type 0
    Jun 20 22:43:20 Ahri kernel: end_device-1:0:9: add: handle(0x0013), sas_addr(0x5001438024fdabee)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:10:0: Enclosure         HP       Gen8 ServBP 12+2 3.30 PQ: 0 ANSI: 5
    Jun 20 22:43:20 Ahri kernel: sd 1:0:7:0: [sdi] Mode Sense: 7f 00 00 08
    Jun 20 22:43:20 Ahri kernel: sd 1:0:9:0: [sdk] 1953525168 512-byte logical blocks: (1.00 TB/932 GiB)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:10:0: set ignore_delay_remove for handle(0x0014)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:10:0: SES: handle(0x0014), sas_addr(0x5001438024fdabf9), phy(24), device_name(0x0000000000000000)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:10:0: enclosure logical id (0x5001438024fdabe0), slot(0)
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:10:0: qdepth(1), tagged(0), scsi_level(6), cmd_que(0)
    Jun 20 22:43:20 Ahri kernel: sd 1:0:8:0: [sdj] Write Protect is off
    Jun 20 22:43:20 Ahri kernel: sd 1:0:8:0: [sdj] Mode Sense: 7f 00 00 08
    Jun 20 22:43:20 Ahri kernel: sd 1:0:9:0: [sdk] Write Protect is off
    Jun 20 22:43:20 Ahri kernel: sd 1:0:9:0: [sdk] Mode Sense: 7f 00 00 08
    Jun 20 22:43:20 Ahri kernel: sd 1:0:8:0: [sdj] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Jun 20 22:43:20 Ahri kernel: sd 1:0:9:0: [sdk] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Jun 20 22:43:20 Ahri kernel: scsi 1:0:10:0: Attached scsi generic sg11 type 13
    Jun 20 22:43:20 Ahri kernel: end_device-1:0:10: add: handle(0x0014), sas_addr(0x5001438024fdabf9)
    Jun 20 22:43:20 Ahri kernel: sdj: sdj1
    Jun 20 22:43:20 Ahri kernel: sdk: sdk1
    Jun 20 22:43:20 Ahri kernel: sd 1:0:8:0: [sdj] Attached SCSI disk
    Jun 20 22:43:20 Ahri kernel: sd 1:0:9:0: [sdk] Attached SCSI disk
    Jun 20 22:43:20 Ahri kernel: sd 1:0:7:0: [sdi] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
    Jun 20 22:43:20 Ahri kernel: sdi: sdi1
    Jun 20 22:43:20 Ahri kernel: sd 1:0:7:0: [sdi] Attached SCSI disk
    Jun 20 22:43:20 Ahri kernel: ata2: SATA link down (SStatus 0 SControl 300)
    Jun 20 22:43:20 Ahri kernel: ata3: SATA link down (SStatus 0 SControl 300)
    Jun 20 22:43:20 Ahri kernel: ata4: SATA link down (SStatus 0 SControl 300)
    Jun 20 22:43:20 Ahri kernel: ata5: SATA link down (SStatus 0 SControl 300)
    Jun 20 22:43:20 Ahri kernel: ata6: SATA link down (SStatus 0 SControl 300)
    Jun 20 22:43:20 Ahri kernel: mgag200 0000:01:00.1: vgaarb: deactivate vga console
    Jun 20 22:43:20 Ahri kernel: Console: switching to colour dummy device 80x25
    Jun 20 22:43:20 Ahri kernel: [drm] Initialized mgag200 1.0.0 20110418 for 0000:01:00.1 on minor 0
    Jun 20 22:43:20 Ahri kernel: fbcon: mgag200drmfb (fb0) is primary device
    Jun 20 22:43:20 Ahri kernel: mgag200 0000:01:00.1: [drm] drm_plane_enable_fb_damage_clips() not called
    Jun 20 22:43:20 Ahri kernel: BTRFS: device fsid b8c88925-d8be-4ab9-8ef8-17484fdafe91 devid 1 transid 4309837 /dev/sdd1 scanned by udevd (1083)
    Jun 20 22:43:20 Ahri kernel: BTRFS: device fsid 878890c5-81f9-4f8f-8f69-818e2af4175e devid 1 transid 360242 /dev/sdf1 scanned by udevd (1057)
    Jun 20 22:43:20 Ahri kernel: BTRFS: device fsid c2923828-e4c3-483e-bc45-ba3fe7ef2b6a devid 1 transid 5985837 /dev/sdb1 scanned by udevd (1038)
    Jun 20 22:43:20 Ahri kernel: BTRFS: device fsid 943d58d4-9214-47a8-8d61-01272b03f9ba devid 1 transid 368310 /dev/sdg1 scanned by udevd (1083)
    Jun 20 22:43:20 Ahri kernel: BTRFS: device fsid 71dab375-beb9-4327-bea1-6a7ac48dcd5f devid 1 transid 133339 /dev/sde1 scanned by udevd (1038)
    Jun 20 22:43:20 Ahri kernel: BTRFS: device fsid 979bf416-df16-4476-91f0-ea4e6c4084b2 devid 1 transid 1796375 /dev/sdc1 scanned by udevd (1057)
    Jun 20 22:43:20 Ahri kernel: BTRFS: device fsid d5b29014-e26e-4407-a52d-73db030f39ef devid 2 transid 44263972 /dev/sdk1 scanned by udevd (1057)
    Jun 20 22:43:20 Ahri kernel: Console: switching to colour frame buffer device 128x48
    Jun 20 22:43:20 Ahri kernel: BTRFS: device fsid d5b29014-e26e-4407-a52d-73db030f39ef devid 1 transid 44263972 /dev/sdj1 scanned by udevd (1083)
    Jun 20 22:43:20 Ahri kernel: mgag200 0000:01:00.1: [drm] fb0: mgag200drmfb frame buffer device
    Jun 20 22:43:20 Ahri kernel: igb 0000:02:00.3: removed PHC on eth3
    Jun 20 22:43:20 Ahri kernel: igb 0000:02:00.2: removed PHC on eth2
    Jun 20 22:43:20 Ahri kernel: igb 0000:02:00.1: removed PHC on eth1
    Jun 20 22:43:20 Ahri kernel: igb 0000:02:00.0: removed PHC on eth0
    Jun 20 22:43:20 Ahri kernel: igb: Intel(R) Gigabit Ethernet Network Driver
    Jun 20 22:43:20 Ahri kernel: igb: Copyright (c) 2007-2014 Intel Corporation.
    Jun 20 22:43:20 Ahri kernel: igb 0000:02:00.0: added PHC on eth0
    Jun 20 22:43:20 Ahri kernel: igb 0000:02:00.0: Intel(R) Gigabit Ethernet Network Connection

     

  17. I noticed that my docker image had become unreadable, as had /mnt/user/system and anything below it.

     

    I decided to reboot, and now my cache pool of two drives won't mount. Bringing the array into maintenance mode yielded this from the read-only filesystem check against my cache pool:
     

    [1/7] checking root items
    [2/7] checking extents
    [3/7] checking free space tree
    free space info recorded 1351 extents, counted 1361
    There are still entries left in the space cache
    cache appears valid but isn't 6182821691392
    [4/7] checking fs roots
    [5/7] checking only csums items (without verifying data)
    [6/7] checking root refs
    [7/7] checking quota groups skipped (not enabled on this FS)
    Opening filesystem to check...
    Checking filesystem on /dev/sdj1
    UUID: d5b29014-e26e-4407-a52d-73db030f39ef
    found 759680454656 bytes used, error(s) found
    total csum bytes: 236747020
    total tree bytes: 1881374720
    total fs tree bytes: 1275772928
    total extent tree bytes: 314228736
    btree space waste bytes: 386624833
    file data blocks allocated: 7470265802752
    referenced 690431000576

     

    Should I run the check without the read-only parameter?

     

    I will provide any diagnostics that are asked for, but I will not read megabytes of compressed files trying to censor them for privacy.
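
    For reference, these are the commands I mean, run with the array in maintenance mode against the device reported above; I would only run the repair variant if someone who knows btrfs tells me to, since it can make things worse:

    # read-only check, safe to repeat
    btrfs check --readonly /dev/sdj1
    # repair run, potentially destructive, only on explicit advice
    # btrfs check --repair /dev/sdj1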

  18. Getting rid of the Docker tab sounds horrible, and insisting on keeping that Action Centre notification enabled is... a really odd choice, to be fair.

     

    Docker containers and plugins are not just different delivery mechanisms for the same thing, in my honest opinion.

     

    Plugins typically extend the capabilities of the host OS, whereas Docker applications are more like user apps. To me those are very different things.

     

    I'm really puzzled by those views. I also don't think merging the app store and the presentation of already-installed apps into one unified tab is major UX progress.

     

    More nesting does not always mean more simplicity and a better overview...

  19. 6 hours ago, Dexter84 said:

    Hi everyone,

     

    I now have 6.10 running as well. I think the Docker IPv6 issue has now resolved itself as expected. As expected, Docker now assigns IPv6 addresses on its own. You don't need a VLAN either (although I'm considering doing it like mgutt, so that I don't flood my network with all the container addresses).

    Do the Docker containers actually show up as visible hosts on the network (e.g. in the router) even if you don't map any ports?

     

    Or does Docker keep them to itself and leave it at that?

     

    I'd prefer it if none of the containers had IPv6 addresses of their own and could merely resolve and reach IPv6; something like a Docker-internal IPv6-to-IPv4 translation. But if I end up with dozens more entries in my router afterwards, then I guess that's how it has to be... :/
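
    The closest thing I'm aware of is giving containers a private ULA prefix that gets NATed instead of globally routed addresses, roughly like this in /etc/docker/daemon.json. Whether and where unRAID exposes these options I haven't checked, and older Docker releases also want "experimental": true for ip6tables, so treat it purely as a sketch:

    # sketch only: NATed ULA IPv6 instead of public addresses (merge into any existing daemon.json)
    echo '{ "ipv6": true, "fixed-cidr-v6": "fd00:dead:beef::/64", "ip6tables": true }' > /etc/docker/daemon.json
    # restart the Docker service afterwards for the change to take effect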

  20. 6 hours ago, lukeoslavia said:

    Hello, I am having a couple of minor issues with the container, maybe permission related? 

    When we try to upload any image (or file) we get the following error in the log:

    Error code: EACCES
    Original stack:
    Error: EACCES: permission denied, open '/directus/uploads/90e3355a-57c1-4807-9119-6319c0c54935.png'
    08:00:04 request errored POST 503 /files 21ms

     

    I was hoping maybe you knew of a solution.

     

    Also, when I go to edit my admin account info, a dialog box pops up as the page opens with:

    [FORBIDDEN] You don't have permission to access this.

    What did you set for PUID and PGID? Confirm they match what you get when you run this from the container's shell:

    id

     

    What you set and what is displayed there should definitely match; this is basically a sanity check for my template.
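
    For example, with the common unRAID defaults of PUID=99 and PGID=100 you'd expect the numbers in the id output to look like the line below; the names in brackets depend on the container image and don't matter:

    uid=99(nobody) gid=100(users) groups=100(users)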

     

     

    Where I expect a deviation is in your files, as you already guessed. To confirm, and to fix if needed:

     

    From the container's shell again, run this command against the mapped locations of the uploads, extensions and, if applicable, database folders:

    ls -lh

     

    Like so:

    ls -lh /directus/uploads/
    ls -lh /directus/extensions/
    ls -lh /directus/database/

    You only need to check the last one if you use SQLite rather than a dedicated database backend.

     

    To fix the ownership if needed, run the following; it should work from the container's shell again. If not, change the paths to the mapped host paths and run it from unRAID's shell:

     

    chown -R <PUID>:<PGID> /directus/uploads/
    chown -R <PUID>:<PGID> /directus/extensions/
    chown -R <PUID>:<PGID> /directus/database/

     

    Lastly, set the permissions on the same directories:

    chmod -R 755 /directus/uploads/
    chmod -R 755 /directus/extensions/
    chmod -R 755 /directus/database/

     

    That should do it 👍
