gyto6

Posts posted by gyto6

  1. Hi everyone,

     

    As stated in the title, my Unraid (UEFI) install can't detect the GPU, and the logs look really odd.

    For context, my motherboard is set to run fully in UEFI, and peripherals can only boot with EFI. Virtualization, VT-d and Interrupt Remapping are enabled. Above 4G Decoding (useless in my case) is enabled, and reverting any of these settings doesn't help Unraid bring up the GPU.

     

    At first, I couldn't detect the GPU, since I had left the PCIe lanes on auto or x16.

    After mistakenly setting my PCIe bifurcation to x4x4x4x4 for the GPU slot and rebooting, my graphics card finally showed up!


    As you can see in the logs:

     

    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.0: [10de:1eb1] type 00 class 0x030000
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.0: reg 0x10: [mem 0xfa000000-0xfaffffff]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.0: reg 0x14: [mem 0xffe0000000-0xffefffffff 64bit pref]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.0: reg 0x1c: [mem 0xfff0000000-0xfff1ffffff 64bit pref]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.0: reg 0x24: [io  0xe000-0xe07f]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.0: reg 0x30: [mem 0xfb000000-0xfb07ffff pref]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.0: PME# supported from D0 D3hot D3cold
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.0: 63.008 Gb/s available PCIe bandwidth, limited by 8.0 GT/s PCIe x8 link at 0000:00:03.0 (capable of 126.016 Gb/s with 8.0 GT/s PCIe x16 link)
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.1: [10de:10f8] type 00 class 0x040300
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.1: reg 0x10: [mem 0xfb080000-0xfb083fff]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.2: [10de:1ad8] type 00 class 0x0c0330
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.2: reg 0x10: [mem 0xfff2000000-0xfff203ffff 64bit pref]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.2: reg 0x1c: [mem 0xfff2040000-0xfff204ffff 64bit pref]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.2: PME# supported from D0 D3hot D3cold
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.3: [10de:1ad9] type 00 class 0x0c8000
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.3: reg 0x10: [mem 0xfb084000-0xfb084fff]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:06:00.3: PME# supported from D0 D3hot D3cold
    May 22 20:16:21 rohrer-enard kernel: pci 0000:00:03.0: PCI bridge to [bus 06]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:00:03.0:   bridge window [io  0xe000-0xefff]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:00:03.0:   bridge window [mem 0xfa000000-0xfb0fffff]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:00:03.0:   bridge window [mem 0xffe0000000-0xfff20fffff 64bit pref]
    May 22 20:16:21 rohrer-enard kernel: pci 0000:00:03.2: PCI bridge to [bus 07]

    With x4 or x8 lanes, the GPU comes up.

    [screenshot: Unraid.png]

     

     

    But if I set it back to x16, the GPU doesn't come up. It's kind of "skipped": the enumeration jumps from device 05:00 to 08:00.

     

    May 22 21:39:51 rohrer-enard kernel: pci 0000:05:00.0: [8086:f1a8] type 00 class 0x010802
    May 22 21:39:51 rohrer-enard kernel: pci 0000:05:00.0: reg 0x10: [mem 0xfb200000-0xfb203fff 64bit]
    May 22 21:39:51 rohrer-enard kernel: pci 0000:00:02.3: PCI bridge to [bus 05]
    May 22 21:39:51 rohrer-enard kernel: pci 0000:00:02.3:   bridge window [mem 0xfb200000-0xfb2fffff]
    May 22 21:39:51 rohrer-enard kernel: pci 0000:00:03.0: PCI bridge to [bus 06]
    May 22 21:39:51 rohrer-enard kernel: pci 0000:00:1c.0: PCI bridge to [bus 07]
    May 22 21:39:51 rohrer-enard kernel: pci 0000:08:00.0: [1a03:1150] type 01 class 0x060400
    May 22 21:39:51 rohrer-enard kernel: pci 0000:08:00.0: enabling Extended Tags
    May 22 21:39:51 rohrer-enard kernel: pci 0000:08:00.0: supports D1 D2
    May 22 21:39:51 rohrer-enard kernel: pci 0000:08:00.0: PME# supported from D0 D1 D2 D3hot D3cold
    May 22 21:39:51 rohrer-enard kernel: pci 0000:00:1c.2: PCI bridge to [bus 08-09]
    May 22 21:39:51 rohrer-enard kernel: pci 0000:00:1c.2:   bridge window [io  0xe000-0xefff]
    May 22 21:39:51 rohrer-enard kernel: pci 0000:00:1c.2:   bridge window [mem 0xfa000000-0xfb0fffff]
    May 22 21:39:51 rohrer-enard kernel: pci_bus 0000:09: extended config space not accessible


    I already tried full legacy boot in Unraid and Legacy OpROM on the motherboard; it didn't work.

    I disabled every virtualization setting on the motherboard, with no better result.

    I finally installed Windows Server, set the GPU PCIe lanes to x16, and the GPU showed up!

     

    Does this error ring a bell for anyone? Is it an Unraid bug or a misconfiguration?

     

    I tried a few parameters in the GRUB/boot configuration, but so far it hasn't helped.
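
    For reference, the kind of parameters I mean, added to the kernel command line (Unraid normally uses the append line of syslinux.cfg rather than GRUB); these are generic PCIe-enumeration knobs I experimented with, not a confirmed fix:

    # /boot/syslinux/syslinux.cfg - example append line, parameters to be tested one at a time
    append pci=realloc pcie_aspm=off initrd=/bzroot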

    Thanks for your help; I attached the log files for better support.
     

    Attachments: x8 syslog.txt, x16 syslog.txt, Unraid.png

  2. 1 hour ago, Arragon said:

    Does that mean it is safe now to go to 6.10? I remember people having problems, especially with Docker on ZFS (datasets; ZVOLs apparently seemed OK). Can anyone confirm?

    Got the message @ich777 mentioned, and it did indeed download ZFS at boot.
     

    ZVOLs and datasets are still working; everything is fine on my side.

  3. 18 hours ago, anylettuce said:

    Well, moving my ZFS pool over to TrueNAS was a failure. Looks like an AMD issue. I have since moved back and started over fresh with Unraid and the existing ZFS pool.

    What is the best way to get this to connect to another Unraid setup? It's mostly going to serve a Linux environment.

    Sanoid/Syncoid or the zfs send/receive commands will get you through replication over SSH.
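
    For illustration, a minimal sketch of a one-shot replication over SSH (pool, dataset, snapshot and host names are placeholders to adapt):

    # Take a snapshot and send it to the other Unraid/Linux box over SSH
    zfs snapshot pool/data@migration
    zfs send pool/data@migration | ssh root@other-server zfs receive -F backup/data
    # Incremental follow-ups only send the differences, e.g.:
    #   zfs send -i pool/data@migration pool/data@migration2 | ssh root@other-server zfs receive backup/data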

    Maybe you were concerned about another way to clone your pool?

    For the WAN part, we are going to open access to Nginx from the Internet over HTTP + HTTPS. This comes down to forwarding ports 80 and 443 to the IP of our Nginx (192.168.3.253, as a reminder).

     

    On the Livebox, log into the administration interface by entering your box's IP address in a browser, then go to "Réseau" (Network) and "NAT/PAT".

     

    [screenshot]

    Select "Secure Web Server HTTPS" and "Web Server HTTP", TCP only unless you know what you are doing, and select your Nginx as the target. Leave "External IPs" set to all.

     

    [screenshot]

    Log into your OVH account in a new tab, then: 1. Web Cloud (at the top) -> 2. Expand "Noms de domaine" (domain names) on the left and click your domain -> 3. DynHost -> 4. "Gérer les accès" (manage access).

    [screenshot]

    Finally, click "Créer un identifiant" (create a login), then create a suffix (or login name) and its password. Leave a * in "Sous-domaine" (subdomain) so these credentials can manage any subdomain. Save this information somewhere and confirm.

    [screenshot]

    Go back to the previous window and this time select "Ajouter un DynHost" (add a DynHost). Here we simply want "domaine.com", so we leave the subdomain field blank. For test.domaine.com, you would enter "test" as the subdomain. Finally, fill in the WAN IP of your server or home, which you can check at https://ipv4.lafibre.info/.

    If an error appears, go to the "Zone DNS" section and make sure there is no A record left in the zone.

    [screenshot]

    Finally, we go back to the Livebox interface, this time under DynDNS. Enter the domain name we want to reach, the ID (the suffix) defined earlier and the password, and that's it!

    [screenshot]
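
    If you want to check the DynHost credentials by hand before relying on the Livebox, here is a rough sketch (domain, login and password are placeholders; the nic/update URL is OVH's standard DynHost endpoint, to be double-checked against OVH's documentation):

    # Push the current WAN IP to OVH DynHost manually
    WANIP=$(curl -s https://api.ipify.org)   # any "what is my IP" service works
    curl -u 'domaine.com-suffix:password' \
      "https://www.ovh.com/nic/update?system=dyndns&hostname=domaine.com&myip=${WANIP}"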

    Let me be a bit more thorough on the subject:

     

    The server must have a fixed LAN IP; the WAN side can be dynamic as long as a DynDNS is configured.

     

    In Nginx, depending on your registrar, you can provide your API keys so that it connects on its own and automatically generates a new SSL certificate before it expires (it handles that task by itself).

     

    I'll take my own setup as an example:

     

    My server has a fixed LAN IP, as do the Nginx Docker container, the Pi-hole container and a NextCloud container. I have an Orange box whose WAN address changes dynamically, and finally a domain name at OVH.

     

    The IPs -> Orange box: 192.168.3.1 / Pi-hole: 192.168.3.254 / Nginx-Proxy-Manager: 192.168.3.253 / NextCloud: 192.168.3.252 / dynamic WAN IP

     

    [screenshot]

    Pi-hole acts as the DHCP server and hands out its own IP as the primary DNS address. The gateway address is the Orange box's, 192.168.3.1.


    [screenshot]

    To reach NextCloud from the LAN as well as from the WAN, we proceed as follows:

    In Local DNS -> DNS Records, we bind our domain name to Nginx's IP address, since Nginx is what will serve the SSL certificate for HTTPS.

     

    [screenshot]

    Now we add our domain name in Nginx and point it to NextCloud's IP address. Note that NextCloud only accepts HTTPS by default (it forces HTTPS), so we set "scheme" accordingly. Conversely, Photoprism is only reachable over HTTP, so "scheme" would have to be set that way instead. Enter the access port matching the one you defined in the Docker Port Mappings.
    "Scheme" defines how Nginx reaches the site, not how the end client connects.

     

    [screenshot]

    Port Mappings of the Docker container

     

    [screenshot]

    Now we move on to the SSL part:

    Depending on the registrar, some information has to be filled in under "Credentials File Content". For OVH, you have to create the tokens in OVH's API space: https://api.ovh.com/createToken/

     

    [screenshot]

    Enter your OVH credentials and set the rights as in the illustration. Note that these are the most permissive rights; once everything is working it is advisable to tighten them to something like this:


    GET /domain/zone/

    GET /domain/zone/{domain.ext}/

    GET /domain/zone/{domain.ext}/status

    GET /domain/zone/{domain.ext}/record

    GET /domain/zone/{domain.ext}/record/*

    POST /domain/zone/{domain.ext}/record

    POST /domain/zone/{domain.ext}/refresh

    DELETE /domain/zone/{domain.ext}/record/*

    Replace domain.ext with your root domain name (without a subdomain such as test.domaine.com, for example).

     

    Clicking Create Keys provides the required keys. Save them: you will never see them again, but you will need them if you ever lose your configuration.

     

    By selecting "Request a New SSL Certificate" and entering the previously obtained keys in "Credentials File Content", we can save the configuration. Certbot will then run for a little while (about a minute).
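
    For reference, a minimal sketch of what that "Credentials File Content" looks like for OVH with certbot's dns-ovh plugin (the file name is arbitrary and the values are placeholders for the keys from createToken; field names should be double-checked against the plugin's documentation):

    # This content is normally pasted straight into the "Credentials File Content" field
    cat > ovh.ini <<'EOF'
    dns_ovh_endpoint = ovh-eu
    dns_ovh_application_key = APPLICATION_KEY_FROM_CREATETOKEN
    dns_ovh_application_secret = APPLICATION_SECRET_FROM_CREATETOKEN
    dns_ovh_consumer_key = CONSUMER_KEY_FROM_CREATETOKEN
    EOF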

     

    [screenshot]

     

    The result above then appears: every client request addressed to Nginx with this domain name is encrypted with the Let's Encrypt certificate. Nginx itself will talk to the application over HTTP or HTTPS depending on the configuration selected in "scheme".

     

    The LAN part is done; let's turn to the WAN.

     

  6. 16 hours ago, Andrea3000 said:


    Thank you for the reply.
    Maybe I misunderstood that article, but what I would like to do is to let Unraid and ZFS handle the snapshots of the data on the server, even when the Mac is off.

    I would then like to use the Time Machine interface on the Mac to browse through the snapshots and restore files and folders if I need.

     

    Is that possible?

    It is.

     

    Concerning automatic snapshots, I'll let you make your own choice. Don't forget that a snapshot is not a backup. But sending your snapshots (with Sanoid or ZnapZend) to another system over SSH, a USB disk (not preferred) or iSCSI is a backup, provided the second system is out of reach of the first (fire, theft...).

     

    I gather that most of your files live on your Unraid only, so what matters is the SMB config. It requires the "fruit" argument so macOS can handle ZFS snapshots the way TrueNAS does.

     

    https://blog.gwlab.page/building-nas-with-zfs-afp-for-time-machine-d8d67add1980
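
    To make that concrete, here is a minimal sketch of the kind of share section I mean, with the "fruit" modules enabled (share name, path, user and snapshot-name format are assumptions to adapt; on Unraid the SMB Extras box writes to /boot/config/smb-extra.conf):

    cat >> /boot/config/smb-extra.conf <<'EOF'
    [TimeMachine]
       path = /mnt/pool/timemachine
       valid users = youruser
       writeable = yes
       vfs objects = catia fruit streams_xattr shadow_copy2
       fruit:time machine = yes
       # shadow_copy2 exposes the ZFS snapshots to macOS as previous versions;
       # shadow:format must match your actual snapshot naming scheme
       shadow:snapdir = .zfs/snapshot
       shadow:sort = desc
       shadow:format = autosnap_%Y-%m-%d_%H:%M:%S_hourly
    EOF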

  7. 17 minutes ago, Jack8COke said:

     

     

    I'm sorry but I'm not able to do this. Do I have to create a dataset before I run your command? Or do I do it with the first command already? Because I can not find docker under /mnt/ssdpool.

    Which label type do I have to use when I use the cfdisk command?

    When I run the last command it says:

    mount: /mnt/ssdpool/docker: mount point does not exist.

     

     

    Create the missing mount-point folder and it'll work.

    For the label type, always GPT. If you're not using GPT, it's because you know what you're doing.
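
    A minimal sketch, reusing the names from the earlier commands (the device path may appear as /dev/zvol/ssdpool/docker-part1 on some setups):

    mkdir -p /mnt/ssdpool/docker                           # create the missing mount point
    mount /dev/ssdpool/docker-part1 /mnt/ssdpool/docker    # now the mount succeeds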

  8. 47 minutes ago, Jack8COke said:

    Hello,

     

    I'm trying to use ZFS because of the snapshot and shadow copy functions. But I have the problem that I can not use Docker on my single-NVMe ZFS pool. I created everything according to the first post, but every time I try to add a container it stalls and I can not restart the machine or stop the array; I have to hard-reset the system. I use Unraid 6.9.2. Below you can see a screenshot where it has said "Please wait" for a small container like AdGuard for an hour now. 2 of 4 CPU cores are at 100%. Do you have an idea?

     

    Yep, create a ZVOL formatted with the same filesystem as your docker.img and mount it. Put your docker.img there (copy the old one to the new location) and you're good.

    Create a script to mount the ZVOL automatically at the first "Start Array".

     

    Refer to my older post for the specific commands.
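
    A minimal sketch of such a script (for instance with the User Scripts plugin set to "At First Array Start Only"; the pool, ZVOL and mount-point names are the ones from my example and should be adapted):

    #!/bin/bash
    # Mount the btrfs-formatted ZVOL that holds docker.img before Docker starts
    ZVOL=/dev/ssdpool/docker-part1    # may be exposed as /dev/zvol/ssdpool/docker-part1 on some setups
    MNT=/mnt/ssdpool/docker
    mkdir -p "$MNT"
    mountpoint -q "$MNT" || mount "$ZVOL" "$MNT"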

  9. Thanks again for this welcome update.

     

    I'd like to suggest an idea. Every time we perform an action in the "SNAPS" UI, the window automatically closes. Would it be possible to keep the window for the selected dataset open?

     

    Alternatively, being able to select several snapshots with checkboxes for bulk operations (probably only relevant for deletion), for example after a round of snapshot tests.

     

    For context, I tested Sanoid earlier and, due to a bad configuration, it took 30 snapshots in a row on each of my 20 datasets. I ended up using the CLI to do the cleanup, since the window closed itself after each deletion.
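
    For anyone in the same situation, a sketch of the kind of CLI cleanup I mean (the dataset name and the "autosnap" pattern are placeholders; the echo makes it a dry run to review before actually destroying anything):

    # List Sanoid-style snapshots of one dataset and destroy them one by one
    zfs list -H -t snapshot -o name pool/dataset \
      | grep '@autosnap_' \
      | xargs -r -n1 echo zfs destroy    # remove "echo" once the list looks right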

     

    And thanks for your work!

  10. 12 minutes ago, steini84 said:


    Just to put it out there. I have personally moved to sanoid/syncoid



    Good to know. 😆

     

    For now, I'm having too much trouble with Sanoid, so I'm getting my hands on ZnapZend to understand where I might have gone wrong with Sanoid, which does indeed seem a bit more powerful.

  11. 6 hours ago, Iker said:

    I'll request a specific thread for the plugin support, so we can stop flooding this thread :P.

     

    Sounds like a good idea indeed. 😁

     

    Thanks again, I'm getting the hang of it. For an unknown reason, the "SNAPS" button stays grey, even when the array, Docker and VMs are stopped.

    Do I need to manually create a snapshot to get it available? *Some tests have shown that YES*
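
    In case it helps anyone else, the manual snapshot in question is simply (dataset and snapshot names are placeholders):

    zfs snapshot pool/dataset@manual-test    # create one snapshot by hand
    zfs list -t snapshot pool/dataset        # confirm it shows up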

     

    It would be even cooler if we could trigger a snapshot on the local pool with a single button in your GUI.

     

    Thanks again for this welcomed update! 😄

     

    Edit: the plugin cannot manage snapshots at the pool level for now, only per dataset. I don't know whether this is by design.

  12. On 4/2/2022 at 5:30 AM, te5s3rakt said:

    I’ve been planning a new NAS (replacing a Synology unit), and was toying between TrueNAS and Unraid.

    TrueNAS was very appealing at first, due to ZFS. As mentioned above, it looks very shiny to new users, as it looks like it can do no wrong.

    All this talk of bit rot scares me. Some people are 100% "it's real, protect your shit or definitely lose it". Then I'm all panicked, and change my mind to TrueNAS. But then I read others' counter posts, and they're all "it's not really a thing on modern hardware, get ECC RAM, stop panicking, and call it a day". Both camps make good points 🤷‍♂️.

    This is why I got out of PC gaming and bought an Xbox a decade ago. Tech can be so opinion based sometimes lol 🤦‍♂️.

    Ultimately though, I believe I've settled on Unraid. I'm so over "managing" the tech in my house, and Unraid sounds stupid simple to use. I just want to power something on, boot up plex/emby, and let it do its thing. The last two NASes I've had over the last 8 years have been Synology units, and I literally probably logged into their respective GUIs maybe half a dozen times between them.

     

    Mmm, I do think about that sweet ZFS though 🤔.

    Was considering running this plugin to get the best of both worlds. I wonder what Unraid official support will look like though? How will it work? And is it worth just waiting for that?

    Think that’ll literally just copy this plugin into installer, and call it “officially supported” now?

    Or do we think it’ll be implemented another way. Not sure they’ve mentioned how they’ll support it, only that they’re looking at it hey?

    I didn't quite get everything...

     

    All I can say is that managing ZFS isn't just installing the plugin and watching the system become brilliant and powerful. ZFS manages volumes, filesystems and backups in a customizable way that requires real knowledge of your hardware, your volume aggregation and your file workload. In the end you have to take ownership of ZFS, which offers a lot of improvements depending on your devices, and you'll spend months or years optimizing your ZFS system and correcting your mistakes.

     

    Whereas RAID on HDDs was fairly easy, since there were few innovations to account for and drives were mostly designed around their end use, RAID on SSDs first raised many concerns around write amplification, overprovisioning, cell degradation and so on. Now we have NVMe drives with namespaces, sets, endurance groups and over-the-top speeds, not to mention low-latency technologies (Optane, Z-NAND), for now mostly used as SLOG for databases.

     

    There's a ton to discuss about drive technologies before even talking about ZFS: datasets, L2ARC, interrupt troubles with NVMe drives, L2ARC sizing... tons to talk about.

    I don't know TrueNAS well enough, but all I can say is that its GUI only helps a little in getting your hands on ZFS. What helps most is practicing.

     

    I've read on the forum that ZFS is unofficially expected for Unraid 6.11, but if you're expecting the GUI to do all the work, you shouldn't.

     

    It'll work, the way a smartwatch works for telling the time. But most ZFS users would take a mechanical watch instead, because they like to know how it works. It costs more, but they love it, take care of it, and in the end the watch lasts longer.

  13. 22 hours ago, BVD said:

     

    I've a theory on that, but it's probably an unpopular one -

    ZFS has gotten so much more popular in recent years, with a lot of folks diving in head first. Hearing how it protects data, the ease of snapshots and replication, how easy it makes backups, that they migrate everything over without first learning what it really is and isn't, what it needs, how to maintain it, and what the tradeoffs are.

    Then when something eventually goes sideways (neglected scrubs, power outage during a resilver, whatever, they change xattr with data already in the pool, set sync to disabled for better write performance, any number of things both environmental or user inflicted), the filesystem is resilient enough that it 'still works', so it's expected that anything on it still should as well... 

    Hell, I've been neck deep in storage my entire career, and using ZFS since Sun was still Sun, and I *STILL* find myself having to undo stupid crap I did in haste on occasion. 

    The fact that you partitioned the disks first wouldn't change any functional behavior in the driver (similar to running zfs on sparse files, the same code/calls are used). Either 'it's fixed', or I'd simply taxed the driver beyond what it'd optimized for at the time, at least that's my feeling anyway.

    Well, only the truth hurts.

     

    I share that point of view: everyone is looking for the best ZFS optimization for the best performance, and most of the "tutorials" you can find on the net don't mention the risks involved. Most of the people writing them aren't even aware of them. The best sources of information I've found, apart from LVL1Tech, are the websites listed at the beginning of this topic.
    Good books are rare, by the way. Mostly in English, of course, and for already-invested users they say little about the daily operations needed to maintain ZFS. Even if you ultimately have to figure out how to manage your own ZFS, it's a bit disappointing to buy a 20€ book that teaches you to create a pool from scratch but never explores the difference in data transfer between a ZFS send/receive and a ZFS clone.

    As for ZFS on Unraid itself, I haven't run into any trouble on Unraid 6.10.

    The only issues were storage-controller problems: my NVMe SSD suddenly stopped being recognized with Intel VT-d (IOMMU) enabled, whatever the Unraid version, a well-known problem with the Crucial P5.
    And one of my partitions (mounted via /dev/sdx at the time) whose device name suddenly changed after a reboot, so I recreated the pool with /dev/disk/by-id/ and haven't had any problems since. That wasn't tied to a specific Unraid version either.
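
    For illustration, a rough sketch of recreating a pool with persistent device paths (pool layout and disk IDs are placeholders):

    ls -l /dev/disk/by-id/ | grep nvme    # find the stable identifiers for each drive
    zpool create ssdpool mirror \
      /dev/disk/by-id/nvme-EXAMPLE_SSD_SERIAL1 \
      /dev/disk/by-id/nvme-EXAMPLE_SSD_SERIAL2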



     

  14. 6 hours ago, ich777 said:

    I run everything (Docker, libvirt, system) on my ZFS Mirror without any formatting.

     

    The only difference is that I first created a partition and then created a mirror with the corresponding partitions like /dev/sdx1 /dev/sdy1

     

    Is it possible that this is the reason why everything runs without a hitch on ZFS on my system?

    Do you mean that with ZFS Master you don't see tons of datasets and snapshots generated by Docker?

     

    Referring to @BVD's comments, I didn't suffer from any performance issue. The only problem was that all those datasets generated by Docker kept flooding the GUI and any zfs list command.

     

    Thanks for the clarification @Iker, ZVOL support could be interesting. Meanwhile, I simply mounted the ZVOL inside a dataset's path to get it listed in the GUI.

    Thanks again @BVD! I'm finally able to run the docker.img inside a ZVOL.

     

    For those interested:

     

    zfs create -V 20G pool/docker                   # -V creates a ZVOL (a 20 GiB block device)
    cfdisk /dev/pool/docker                         # easily create a partition (GPT label)
    mkfs.btrfs -q /dev/pool/docker-part1            # format it with the desired filesystem (btrfs, to match docker.img)
    mkdir -p /mnt/pool/docker                       # create the mount point
    mount /dev/pool/docker-part1 /mnt/pool/docker   # mount it where docker.img will live

     

     

  16. 32 minutes ago, BVD said:

    You've got a couple options - 

    1. Create a zvol instead, format it, and keep docker writing there (which is now a block storage device)

    2. Your plan, creating a fileset and using a raw disk image

     

    In either case, couple things you can do - 

    * redundant_metadata=most - containers are throwaway, no real reason to have doubly redundant metadata when you can just pull the container back down anyway; wasted resources

    * primarycache=none (or metadata at most) - containers might be (probably imperceptibly, given you're on NVMe and not SATA) slower to initially start, but once they do, the OS is already handling memory caching anyway, leaving ZFS often duplicating efforts (and using memory by doing so)

    I've got a whole whack of performance tuning notes lying around for one thing or another - if the forum ever gets markdown support, I'll post em up, maybe they can be helpful to someone else lol

    Well, thank you for all this precious advice!

     

    My problem right now is that I have to mount the ZVOL on Unraid, and I'm figuring out how to do this without iSCSI. The docker .img as it stands is indeed not functional for me (tons of bugs), and trying this on a USB key is not a good idea at all.
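
    As an aside, a minimal sketch of applying the tuning BVD suggests above, assuming a hypothetical dataset pool/docker (property names worth double-checking against your ZFS version):

    # Containers are disposable, so relax metadata redundancy and ARC caching for this dataset
    zfs set redundant_metadata=most pool/docker
    zfs set primarycache=metadata pool/docker    # or "none"; the OS page cache already caches container data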

     

    Thanks again. ;)

  17. 2 hours ago, BVD said:

     

    I think you probably meant to post that for the ZFS Master plugin? This one just adds the ZFS driver. Maybe there's a hover over help menu or something? Not sure.

    To the question at hand though, it looks like you're using the docker ZFS driver (using folders instead of .img) - personally, I'd recommend against that. The data within these directories are just the docker layers that get rebuilt each time you update a container. Doing it this way just makes managing ZFS a mess, as you end up with all sorts of unnecessary junk volumes and/or snapshots listed out every time you do a zfs list / zfs get / etc. Plus, it creates so danged many volumes that, once you get to a decent number of containers (or if you've automated snapshots created for them), filesystem latencies can get pretty stupid.

    Indeed, I posted in the wrong topic once again... My mistake.

     

    Indeed, using the directory driver for Docker really is a mess because of all these datasets and snapshots, as you described very well.
     

    I'll migrate to the docker .img. I took a screenshot of every container's configuration, since I won't get them back from Previous Apps, and put the .img file in a dataset with a 1M recordsize. I don't think a dataset needs much more tuning than that, given how lightly the Docker files themselves are used and the fact that the NVMe drives deliver most of the performance.
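
    A minimal sketch of that dataset, assuming a hypothetical pool/dataset name:

    # Dedicated dataset for docker.img, with a large recordsize suited to one big image file
    zfs create -o recordsize=1M pool/docker-img
    # Then point the Docker vDisk location at /mnt/pool/docker-img/docker.img in Unraid's Docker settings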

     

    Thanks again! @BVD

  18. Hi everyone,

     

    I noticed this a while ago: some of the Docker datasets display a yellow icon, even the snapshots, as you can see in the screenshot. These are the ones managed by Docker itself, not mine.

     

    [screenshot]

     

    I imagine it means the dataset or snapshot is damaged, if the color rules are the same as for pools.

     

    Everything keeps working regardless. I'd simply like to ask how you handle this on your own servers, or whether you just let it be?