isvein

Comments posted by isvein

  1. 26 minutes ago, JorgeB said:

    The ARC should only start to get populated if you read/write from a ZFS filesystem. Do you have something installed, like the Cache Dirs plugin, that would generate reads?

    I don't have that one installed, but maybe it's some other plugin; I'm going to check it out more :)
    It makes sense that the cache gets r/w's since I have Docker there.

    But so far it just started to fill up within about the first 5 minutes after a reboot.

    I do have Dynamix File Integrity installed, but if that were generating reads and writes, I would expect it to do so on the XFS drive too, and that one was showing 0 r/w.
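
    In case it's useful, this is roughly how I plan to watch the ARC while I check for the culprit (assuming the arc_summary tool is available; the grep pattern is just what I would look at):

    # current ARC size and hit/miss counters straight from the kernel stats
    grep -E '^(size|c_max|hits|misses) ' /proc/spl/kstat/zfs/arcstats
    # friendlier overview, if arc_summary is installed
    arc_summary | head -n 40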

  2. OK, so I think I got it, maybe.
    I turned off the ARC cache on all datasets and then restarted the server.
    After it started, the ARC used 7% of RAM and built up to 34%, and as it did, the reads and writes increased too.
    So I think this has something to do with ZFS reading and writing data to RAM "all the time".
    Clearly some data is cached to RAM even if you turn it off on the datasets, otherwise the ZFS RAM usage would not increase.

    But it would be good to get some info on this from someone who knows ZFS better :)
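
    For reference, this is roughly how I turned the caching off per dataset and checked it (the pool/dataset names are just examples from my setup):

    # cache only metadata in the ARC for a dataset
    zfs set primarycache=metadata cache/appdata
    # or turn ARC caching off for it completely
    zfs set primarycache=none cache/appdata
    # confirm the setting across all datasets in the pool
    zfs get -r primarycache cache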

  3. Found out it seems to have nothing to do with Docker, at least.
    I cleared the R/W counters and waited around 20 minutes with the Docker service turned off, and as you can see, there is only R/W on the ZFS drives, not the XFS one.
    I'm starting to think this is not an error or a bug, but just how ZFS works; going to test some more.

    drives.JPG
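
    The kind of test I have in mind next is simply watching per-drive activity for a while (the pool name is just an example; the number is the interval in seconds):

    # show reads/writes per vdev and per drive every 30 seconds
    zpool iostat -v cache 30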

  4. I tested it now: set drive spin-down to 30 minutes (just so I wouldn't have to wait for hours) and kept away from the "Main" page, and they did indeed spin down. 10 of the 11 drives in my array are ZFS.
    But once I clicked on "Main", all the ZFS drives spun up again, as expected per the info from @Iker (the XFS one stays spun down, as also expected).

  5. 1 hour ago, bonienl said:

     

    Unraid is not involved in custom networks created by the user; it is, however, managing macvlan/ipvlan custom networks.

     

    When you create your own custom networks using the docker CLI, it is your own responsibility to create such a network correctly.

     

    That I totally understand, and that is fine, but it still doesn't explain why

    "docker network create test --subnet 192.168.2.0/24 --gateway 192.168.2.1" does not work as expected, but

    "docker network create test --subnet 172.2.0.0/16 --gateway 172.2.0.1" does work.

    By working I mean I can access the containers from the LAN (in my case that is 192.168.0.0/24)
    (i.e. the network does or does not get routed correctly. By not setting a driver, they both default to the bridge driver, so both should work.)
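
    If anyone wants to reproduce it, this is essentially what I am comparing (the network names are just examples):

    # this one does not behave for me
    docker network create --subnet 192.168.2.0/24 --gateway 192.168.2.1 test192
    # this one works
    docker network create --subnet 172.2.0.0/16 --gateway 172.2.0.1 test172
    # compare the resulting configs
    docker network inspect test192 test172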

  6. On 5/18/2023 at 7:32 AM, isvein said:

    There still seem to be problems with Docker running off a folder on ZFS :(
    If I look in, say, NginxProxyManager:

     

    [app         ] [5/18/2023] [7:28:47 AM] [Global   ] › ℹ  info      Manual db configuration already exists, skipping config creation from environment variables
    [app         ] [5/18/2023] [7:28:49 AM] [Migrate  ] › ℹ  info      Current database version: none
    [app         ] [5/18/2023] [7:28:51 AM] [Setup    ] › ℹ  info      Added Certbot plugins certbot-dns-domeneshop~=0.2.8 
    [app         ] [5/18/2023] [7:28:51 AM] [Setup    ] › ℹ  info      Logrotate Timer initialized
    [app         ] [5/18/2023] [7:28:51 AM] [Setup    ] › ℹ  info      Logrotate completed.
    [app         ] [5/18/2023] [7:28:51 AM] [IP Ranges] › ℹ  info      Fetching IP Ranges from online services...
    [app         ] [5/18/2023] [7:28:51 AM] [IP Ranges] › ℹ  info      Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
    [app         ] [5/18/2023] [7:28:56 AM] [IP Ranges] › ✖  error     getaddrinfo EAI_AGAIN ip-ranges.amazonaws.com
    [app         ] [5/18/2023] [7:28:56 AM] [SSL      ] › ℹ  info      Let's Encrypt Renewal Timer initialized
    [app         ] [5/18/2023] [7:28:56 AM] [SSL      ] › ℹ  info      Renewing SSL certs close to expiry...
    [app         ] [5/18/2023] [7:28:56 AM] [IP Ranges] › ℹ  info      IP Ranges Renewal Timer initialized
    [app         ] [5/18/2023] [7:28:56 AM] [Global   ] › ℹ  info      Backend PID 437 listening on port 3000 ..

    I get what looks like a network error.
    This error does not happen if the container is set to the default bridge.

    Tried again, following the procedure in this post:
     

    Quote

    First, create a docker user share configured as follows:

    Share name: docker

    Use cache pool: Only

    Select cache pool: name of your ZFS pool

    Next, on Docker settings page:

    Enable docker: Yes

    Docker data-root: directory

    Docker directory: /mnt/user/docker

    I even deleted my old "Docker" share and created a new one with the name "docker".
    The same thing happens, same errors: containers won't work on custom networks created in the terminal with "docker network create "name" --subnet "custom-subnet"" and then setting a fixed IP for the container(s).

    Not setting a static IP on a container with a custom subnet also does NOT work.

    What does work is a custom Docker network created WITHOUT the --subnet flag, so you get a 172.18.0.0/16 subnet, and with no static IP on the container.

    ------------------

    So I checked the settings for both of these networks using "docker network inspect "name"" and I see that the 172 subnet gets a gateway in its config, but my custom-subnet network does NOT get a gateway.
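
    Roughly the check I used, for reference (the network name is just a placeholder):

    # print only the IPAM config (subnet/gateway) of a network
    docker network inspect -f '{{json .IPAM.Config}}' "name"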

    But even creating a network with, for example, "docker network create "name" --subnet "192.168.2.0/24" --gateway "192.168.2.1"" does not work; the config gets the gateway, but the container can't connect.

    So something in the back-end seems to not route things right and/or not set up custom subnets correctly when Docker is on a ZFS folder.

    -----------
    When using the same command, for example "docker network create "name" --subnet "192.168.2.0/24" --gateway "192.168.2.1"", while Docker is on a btrfs image (still on the ZFS pool), it works.

    Also found out that if you don't use the --gateway flag, the Docker network does not list a gateway, but it still works on the btrfs image.

    Just using "docker network create "name"", so you get a 172 subnet, does create a gateway in the config though.
    --------------

    So I can't read it any other way than this: when Docker is in a folder on ZFS, either something strange happens with the routing when creating a custom network with a custom subnet, or the Docker implementation on Unraid is not meant to work with custom networks with custom subnets.

     

    -----
    Edit again:
    I tried to make a custom subnet of 172.50.0.0/16 and that DOES work 😮 (just to see what would happen if I used the same address range as the auto-generated ones),
    even with a custom IP address for the container.
    So for some reason, Docker on my side does NOT like 192.168.x.0/24 subnets when running from a folder.

    If this is what it takes, I guess I can change the subnets of my custom networks :)
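
    So the workaround I'll probably go with, more or less (the network name, container name and image are just examples):

    # custom network in the 172.x range, which works for me even with Docker in a folder on ZFS
    docker network create --subnet 172.50.0.0/16 --gateway 172.50.0.1 customnet
    # attach a container to it with a fixed IP inside that subnet
    docker run -d --name npm-test --network customnet --ip 172.50.0.10 nginx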
     

  7. There still seem to be problems with Docker running off a folder on ZFS :(
    If I look in, say, NginxProxyManager:

     

    [app         ] [5/18/2023] [7:28:47 AM] [Global   ] › ℹ  info      Manual db configuration already exists, skipping config creation from environment variables
    [app         ] [5/18/2023] [7:28:49 AM] [Migrate  ] › ℹ  info      Current database version: none
    [app         ] [5/18/2023] [7:28:51 AM] [Setup    ] › ℹ  info      Added Certbot plugins certbot-dns-domeneshop~=0.2.8 
    [app         ] [5/18/2023] [7:28:51 AM] [Setup    ] › ℹ  info      Logrotate Timer initialized
    [app         ] [5/18/2023] [7:28:51 AM] [Setup    ] › ℹ  info      Logrotate completed.
    [app         ] [5/18/2023] [7:28:51 AM] [IP Ranges] › ℹ  info      Fetching IP Ranges from online services...
    [app         ] [5/18/2023] [7:28:51 AM] [IP Ranges] › ℹ  info      Fetching https://ip-ranges.amazonaws.com/ip-ranges.json
    [app         ] [5/18/2023] [7:28:56 AM] [IP Ranges] › ✖  error     getaddrinfo EAI_AGAIN ip-ranges.amazonaws.com
    [app         ] [5/18/2023] [7:28:56 AM] [SSL      ] › ℹ  info      Let's Encrypt Renewal Timer initialized
    [app         ] [5/18/2023] [7:28:56 AM] [SSL      ] › ℹ  info      Renewing SSL certs close to expiry...
    [app         ] [5/18/2023] [7:28:56 AM] [IP Ranges] › ℹ  info      IP Ranges Renewal Timer initialized
    [app         ] [5/18/2023] [7:28:56 AM] [Global   ] › ℹ  info      Backend PID 437 listening on port 3000 ..

    I get what looks like a network error.
    This error does not happen if the container is set to the default bridge.
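
    The error itself (getaddrinfo EAI_AGAIN) looks like a DNS lookup failure inside the container, so a quick check along these lines might narrow it down (the network name is just an example):

    # try resolving the same host from a throwaway container on the custom network
    docker run --rm --network customnet busybox nslookup ip-ranges.amazonaws.com
    # and compare against the default bridge
    docker run --rm busybox nslookup ip-ranges.amazonaws.com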

  8. Tried to change my cache to ZFS and run Docker from a folder instead of an image today.

    While Docker works fine with the old image file, with a folder, custom networks won't work; the containers seem to have no route, even though I tried to create the Docker network both with and without the --gateway "ip" option.

    Has anyone else tried this?
    I tried to search but could not find a thread about it.

    For all I know this may not be ZFS related at all, but it worked fine with the btrfs cache drive.
    Guess I'll just use the image for now.
    :)
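
    For what it's worth, this is roughly how I checked that the containers have no usable route (the network name is just an example):

    # look at the routing table a container gets on the custom network
    docker run --rm --network customnet busybox ip route
    # a working network should show a default route via the network's gateway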

  9. 18 hours ago, KyleK29 said:

     

    JorgeB gave a great answer. Once you're past the Pool/Vdev config, you have datasets which appear as just normal folders, but are a collection within the larger pool. You can also nest datasets within datasets if you want to get granular with your snapshots. 

     

    For example, on my system I have 4x4TB drives configured into 1 ZFS pool with 2 vdev mirror groups of 2 drives (~7TB of usable space). 

     

    - A few side notes: LZ4 compression is really good, and there have been some tests showing it can perform faster (I/O-wise) when enabled. 

    - If you're worried about redundancy, Vdev mirrors are the way to go. They can be a tad more friendly. See: https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/

    - To the end-user, this just looks like a folder (zfspool) of folders.

     

    You can get an idea of the flexibility of this in the screenshot below, using the Sanoid plugin to handle scheduling of snapshots of sub-datasets for the individual dockers/VMs running. The interface is from the ZFS Master plugin. I imagine a lot of these plugin features will eventually make their way into native unRAID at some point, since ZFS already provides this stuff - it's just the interface.

     

    (screenshot: ZFS Master plugin interface showing datasets and snapshots)

     

    Now it got complicated 😵💫

     

    But thanks 😃
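
    To make the nested-datasets idea concrete for myself, this is roughly what I picture (the pool and dataset names are just made-up examples):

    # a parent dataset for appdata, with lz4 compression inherited by its children
    zfs create -o compression=lz4 zfspool/appdata
    # nested datasets per container, so each one can be snapshotted on its own
    zfs create zfspool/appdata/nginxproxymanager
    zfs create zfspool/appdata/nextcloud
    # snapshot just one of them
    zfs snapshot zfspool/appdata/nextcloud@before-update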

  10. I'm trying to understand the ZFS lingo:

    - a vdev is a set of drives in a given RAID layout (it can be compared to a RAID array like RAID 0, 1, 5, 6, etc.)

    - a ZFS pool is a logical collection of 1 or more vdevs that looks like a single point of access to the user

    - vdevs are independent even when they are in a ZFS pool, so say 1 mirror vdev and 1 stripe vdev in the same ZFS pool does not make it equal to RAID 10

    - if one drive in any vdev goes down, you lose access to the entire ZFS pool

    Do I understand that correctly?
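
    To check my understanding, I tried mapping the terms onto an example layout (the pool name and disk names are purely made up):

    # one pool made of two vdevs: a 2-disk mirror vdev plus a single-disk (stripe) vdev
    # (-f is needed because zpool warns about mixing mirror and single-disk vdevs)
    zpool create -f tank mirror sdb sdc sdd
    # zpool status then shows the pool -> vdev -> disk hierarchy
    zpool status tank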

  11. 2 hours ago, itimpi said:

    Think of Exclusive as being equivalent to what used to be Use Cache=Only, but with a new performance optimisation that allows access to bypass the overhead of the FUSE layer that implements User Shares for the other modes.

    Maybe that is why Nextcloud suddenly loaded pictures and stuff way faster, even though my appdata lives only on a data SSD pool in RAID 1 btrfs 😮

  12. So I'm trying to understand the new "exclusive" mode, and reading about it I have a hard time getting it, so I checked my shares, since I thought that a share that is ONLY on a pool (for example the cache pool) should have exclusive on.
    Found out that the system, tmp and VM shares were "exclusive: yes", but my appdata was "no".

    So I checked the drives one by one, and sure enough I found "appdata" on one of the drives, but the folder was empty.
    But this fits the description of how exclusive works: it only cares whether the share name (in my example "appdata") exists on any other disks in the array or other pools, files or no files.

    So I removed the "appdata" folder from the disk, restarted the array, and then "appdata" showed "exclusive: yes".

    But what I still don't get is what it does in practice. I think it has something to do with how the share gets mounted, but that is as far as I get 😐
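
    If it really is about how the share gets mounted, I guess something like this should show the difference (the share name is just my example; this is only my guess at how to check it):

    # show which filesystem actually backs the user share path
    findmnt -T /mnt/user/appdata
    # my understanding: a normal share resolves to the shfs (FUSE) mount of /mnt/user,
    # while an exclusive share should appear as a direct mount of the pool path instead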

  13. Woop! This fixed the top bar/menu for me. :D
    It's no longer cut off at the bottom. This happened in different browsers, so I'm not sure if it was something in the update that fixed it or a local thing.

    Docker still just spins on update and never finishes, but if I click on Docker in the menu again, updates are shown (if there are any) and they update normally, so it's not a big deal for me.

  14. 10 hours ago, dalben said:

    I thought the same, but then looking at the logs I realised that loading RC2 rendered my docker.img as a read-only volume. btrfs complained of a corrupt super block. Not sure if it's RC2 related or a coincidence.

     

    Now in the process of rebuilding my docker image.

     

    Edit: Looks like my entire cache pool/drive is read-only and is also getting read errors. It was working before. A reboot hasn't helped. Need to dig in and see what's happening.

     

    Diags attached if anyone wants to dig in deeper.

    tdm-diagnostics-20230320-0601.zip

    Checked this myself, but here everything is RW, and when it works, it updates just fine.
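
    For anyone wanting to do the same check, this is roughly what I looked at (the mount point is just my cache pool's path):

    # is the pool mounted read-only?
    findmnt -o TARGET,FSTYPE,OPTIONS /mnt/cache
    # any accumulated btrfs read/write/corruption errors per device?
    btrfs device stats /mnt/cache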