Jclendineng

Report Comments posted by Jclendineng

  1. First of all, great work! I did as instructed and macvlan works well so far.  Question:

     

    I have 3 NICs: eth0, eth1, and eth2. I disabled bridging on eth0; eth1/eth2 are bonded with LACP, with a few VLANs that Docker is using. Is it recommended that I disable bridging on ALL interfaces and only use bonding? Current Docker networks are below (see the sketch after the network list):

     

    root@Unraid:~# docker network ls
    NETWORK ID     NAME      DRIVER    SCOPE
    410ede2fc44b   br1       macvlan   local
    8ef814bdd249   br1.10    macvlan   local
    24c58ff2a324   br1.15    macvlan   local
    18d59eef887b   br1.20    macvlan   local
    a65a073f4cd7   bridge    bridge    local
    8002478340ab   eth0      macvlan   local
    f6e5f48f1e23   host      host      local
    5882761c0802   none      null      local
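
    For reference, a minimal sketch of the equivalent docker CLI call for one of these VLAN macvlan networks. Unraid normally creates these automatically; the subnet, gateway, and VLAN 10 addressing here are assumptions, not values from this post:

        # Hypothetical addressing for the br1.10 (VLAN 10) network
        docker network create -d macvlan \
          --subnet 192.168.10.0/24 \
          --gateway 192.168.10.1 \
          -o parent=br1.10 \
          vlan10-example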

  2. 11 hours ago, scot said:

    A few issues that are fairly critical, though technical users may be able to work around them. Realistically, these need solutions for the regular Unraid person experimenting with ZFS.

     

    First, when creating a 6.12-RC5 system that uses a ZFS raid as primary storage, it is suggested to use a second USB key as a starting point to get the array running. This does work, but what is not clear to the user is that this USB key becomes the storage location for Docker/VM images, even if you go in and select the ZFS drive as the location for appdata etc. after the Unraid system starts. It does not automatically move the data from the USB key to ZFS, and newly downloaded images still seem to be created on the USB key, which is invariably going to be extremely slow.

     

    This results in the UI becoming completely unresponsive during Docker image installs, VM starts and stops, and more, without any clear, easy-to-see reason why. Easy solutions should be available, or at least documented, telling the user how to officially get everything onto ZFS.

     

    As per other posts this will probably be a non-issue in 6.13, but we are not there yet, and this should be dealt with now rather than waiting, IMO.

     

    Second: using ZFS as the storage location for Docker images/containers breaks shutdown/reboot/stopping of the array. The cause is simple: if you stop the array, the Docker containers will not be shut down until the ZFS subsystem is stopped, but ZFS is busy because of those containers, so it cannot be stopped. The order of operations needs to be corrected to stop the Docker containers and VMs first, then the ZFS subsystem.

     

    Third: ZFS is being assigned logical device names. It is MUCH safer to use disk-by-id to assign drives (a device hardware GUID instead of /dev/sda), so that if you remove a drive and put it back it ALWAYS gets the correct identification, and failures etc. are easier to deal with.

     


                sdf1                               ONLINE       0     0     0
                sdg1                               ONLINE       0     0     0

    VS:

                ata-Hitachi_HDS722020ALA330_[..]   ONLINE       0     0     0
                ata-Hitachi_HDS722020ALA330_[..]   ONLINE       0     0     0

     

    Number 3 is a good point (see the by-id re-import sketch at the end of this comment). I've never had any issues with number 2 despite numerous restarts, and number 1 isn't really an issue: it's in the release notes, and for now it already requires the user to be somewhat familiar with ZFS. It's a power-user feature and I didn't have any issues with this particular item. That said, like you said, it will be resolved in 6.13… I don't have any issues not having an array though; setup is super simple (moving from SSD to platter would be nice, that I am looking forward to).
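
    For reference on point three, an existing pool can usually be switched to stable by-id device names by re-importing it with the by-id search path. A minimal sketch, assuming a hypothetical pool named tank that can be taken offline first:

        zpool export tank
        zpool import -d /dev/disk/by-id tank
        zpool status tank    # devices should now show ata-*/wwn-* identifiers instead of sdX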

  3. So I upgraded, and it looks like nothing changed per se other than naming. I have a data pool and a cache pool, with a dummy USB as the array. I can only use the mover to copy from data/cache to the array, as opposed to setting cache as primary and data as secondary and having the mover move between cache and data. It would be nice to move between pools, since I have an NVMe pool as a cache pool and a data pool with only platter drives (a manual workaround is sketched below). Otherwise a solid update: nothing broke, and 10 or so datasets and 30-odd Docker containers all started just fine after the upgrade.
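
    Until the mover supports pool-to-pool moves, one manual workaround is a ZFS send/receive between the pools. A minimal sketch, using hypothetical dataset names nvmepool/appdata and datapool/appdata; verify the copy before destroying anything:

        zfs snapshot nvmepool/appdata@migrate
        zfs send nvmepool/appdata@migrate | zfs receive datapool/appdata
        # after verifying the data on datapool, the source can be removed:
        # zfs destroy -r nvmepool/appdata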

  4. It hasn't happened since. My assumption is that I was snapshotting the Docker dataset, it was at 10k snapshots or so, and the whole server reset because of that, so this is most likely related to the other report I posted. I "resolved" that in the other thread by limiting snapshots and maintaining good snapshot-cleanup scripts. Resolving this, as I'm 99% sure that was the cause.

  5. On 4/21/2023 at 2:21 AM, JorgeB said:

    You don't have to use an image, just be aware of the legacy datasets and best practices; the release notes have more info.

    So I followed best practices. I'll mark this as "fixed", but for people reading this: I disabled snapshots on the Docker dataset. That will not prevent snapshots from being created when Docker images are updated, and the release notes still don't say anything about managing them, but that info can be found in the link JorgeB posted above. Basically, don't auto-snapshot Docker images: they are legacy datasets and lead to 5-minute-plus slowdowns on the "Main" page once you reach 8000 snapshots or so. I worked around this by setting my default page at login to the dashboard, so if snapshots got out of hand I could still log in; I don't auto-snapshot the Docker dataset, and I have a user script that automatically cleans up snapshots and Docker images that aren't cleaned up on their own (a cleanup sketch follows below). That leaves me with around 600 snapshots for around 30 Docker applications, and an 8-second load time when I hit the Main page, which is acceptable.
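
    The user script mentioned above isn't posted here; a minimal sketch of the same idea, assuming the Docker dataset lives at a hypothetical cache/docker and that unused images are pruned first:

        #!/bin/bash
        # Remove unreferenced Docker images so their ZFS layer datasets can be freed
        docker image prune -af

        # Destroy every snapshot under the Docker dataset (adjust the name to your layout)
        zfs list -H -t snapshot -o name -r cache/docker | while read -r snap; do
            zfs destroy "$snap"
        done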

  6. 42 minutes ago, JorgeB said:

    This is not an Unraid issue:

     

    https://stackoverflow.com/questions/52387244/how-to-clean-up-docker-zfs-legacy-shares

     

    Also see the RC3 release notes for more info about recommended practices for this.

    Yes, I set it up per the release notes, but looking through the link you posted, maybe this isn't a great idea; it seems like the Docker ZFS storage driver isn't quite ready for prime time (a quick way to see the legacy datasets it creates is sketched below)… thank you!
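
    For anyone wondering what the linked thread refers to: the Docker ZFS storage driver creates one legacy-mounted child dataset per image layer. A minimal sketch of how to inspect them, assuming the Docker data lives under a hypothetical cache/docker dataset:

        # Each image layer appears as a child dataset with a 'legacy' mountpoint
        zfs list -r -o name,mountpoint,used cache/docker | grep -c legacy    # count them
        zfs list -r -o name,mountpoint,used cache/docker | head              # inspect a few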

  7. On 4/16/2023 at 4:30 AM, rachid596 said:

    Hello,

    Since I upgraded yesterday from RC2 to RC3, my server is very slow and the UI is unresponsive. My CPU is always at 90% load; usually it is at 5%.

    I think it's a kernel problem; I've attached my diagnostics.

     

    Thanks.

    tower-diagnostics-20230416-1008.zip

    Mine is the same way. I can't access the UI; I had a crash after upgrading to RC3 but didn't get any logs. It takes about 5 minutes for the UI to finally load, but then it just hangs… potentially a plugin issue? RC2 worked fine, but it's hard to tell with zero release notes.

  8. I had a complete server crash. I'll post a bug report, but I didn't have any indication of a cause. I upgraded, and a while later there was a complete hard reset.
     

    Edit: a diff of the changelog would be nice… the current changelog isn't really a changelog. IIRC, previous RCs listed, in brackets, the RC in which each change was added; I vote that makes a reappearance!

  9. I'll file a bug report if this is indeed a bug, but the unraid-api is constantly using 100% of 1-2 entire CPU cores. It is always using 100% of one specific core plus 1 or 2 random others… is that expected?

     

     7955 root      20   0   14.4g   3.7g  50432 R 310.2   1.5   3485:00 unraid-api   - 3 cores here?! 

     

    Edit: I placed a bug report, as I'm also seeing errors with "My Servers"; maybe related (a quick way to reproduce the CPU measurement above is sketched below).
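
    For anyone wanting to reproduce the measurement above, a minimal sketch using top in batch mode; the process name comes from the snippet, and the pgrep usage is an assumption about what is available on the system:

        # One batch-mode sample of just the unraid-api process; %CPU above 100 means more than one core
        top -b -n 1 -p "$(pgrep -d, -f unraid-api)" | tail -n 3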

  10. "cache" in this sense only means "which drives the writing occurs on first" so for ZFS you would select pools as "Cache Only", I have 2 pools, a ssd pool and a platter disk pool, I select each dataset (or share) and mark each as <Pool> Only in cache settings, so each dataset is tied to the pool its on, you should create a pool, create datasets, then mark those datasets as only writing to THAT pool (called cache for the time being)

     

    An unrelated question: would it be possible to get CORS settings in the UI? I know reverse proxies are not supported, but we all use them, and it would be nice to be able to add the reverse proxy as an allowed origin.
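
    A minimal sketch of the dataset side of the setup described above; the pool and share names (ssdpool/appdata, platterpool/media) are hypothetical, and the "<Pool> Only" selection still happens per share in the Unraid UI:

        # One dataset per share, each living on the pool that share should be tied to
        zfs create ssdpool/appdata        # fast pool, e.g. for container configs
        zfs create platterpool/media      # bulk pool for large files
        zfs list -o name,mountpoint,used  # confirm where each dataset mounts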