Everything posted by glennv

  1. An ignore option for a specific state/status message that resets when the state and/or status message changes. That is an interesting idea. That way you will still notice when it changes to a different state than the one you ignored.
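     Something along these lines, purely as a sketch of the reset behaviour (the file location is made up and I don't know how the plugin stores things internally):

         IGNORE_FILE=/tmp/zfs_ignored_status            # hypothetical spot to remember the ignored message
         current=$(zpool status -x)
         if [ -f "$IGNORE_FILE" ] && [ "$current" = "$(cat "$IGNORE_FILE")" ]; then
             exit 0                                     # same state/status the user chose to ignore: stay quiet
         fi
         rm -f "$IGNORE_FILE"                           # state or message changed, so the ignore resets itself
         if [ "$current" != "all pools are healthy" ]; then
             echo "ZFS status changed: $current"        # this would raise the dashboard warning again
         fi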
  2. I would not overthink it. My 2 cents: if zpool status -x does not report that all pools are healthy, just flag it as not healthy and additionally show the contents of the status and action fields, which are designed to tell you what is going on. So rather than trying to interpret and grade the level of severity, you just spit out what zfs gives us.
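     Roughly like this (just a sketch of the idea, not the plugin's actual code):

         st=$(zpool status -x)
         if [ "$st" = "all pools are healthy" ]; then
             echo "HEALTHY"
         else
             echo "NOT HEALTHY"
             # surface ZFS's own explanation; -A1 only grabs one wrapped line, adjust as needed
             echo "$st" | grep -A1 -E '^[[:space:]]*(status|action):'
         fi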
  3. Personally I think the plugin is correct, as your pool is not 100% healthy, as the message indicates. It is working, but it is not as it should be. This is how it should look: But what would be good is that, if the status is not 100% healthy, the plugin shows the status message, so you know why and can act on it, or ignore it if it is ok with you. The whole purpose of a dashboard. So your situation should "not" be reported as healthy, but maybe as a warning or attention.
  4. Great job, working perfectly fine. Tnx for the quick support.
  5. Sure. Here you go: zpool_status.txt
  6. Tnx. Updated, and it is already better, but only the first of the 3 pools now shows info. The last 2 (virtuals and virtuals2) keep showing no info. If there is any info I can get you to help debug, just let me know. p.s. zpool version reports zfs-2.0.3-1 and zfs-kmod-2.0.3-1. edit: Looks like something with your grabbing of fields/delimiters.
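     In case it helps with the field grabbing: one option (just a sketch, I don't know how the plugin parses it today) is to iterate the pools with the scripted-output flags instead of splitting one big zpool status dump:

         # -H drops headers and makes the output tab separated, so it parses predictably
         zpool list -H -o name,health | while IFS=$'\t' read -r name health; do
             echo "pool=$name health=$health"
             if [ "$health" != "ONLINE" ]; then
                 zpool status "$name"                   # per-pool details only when something is off
             fi
         done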
  7. Yeah, same here. Uninstalled for now. I am sure it will eventually improve, but for now, at least for me, it is not doing what I need it to do. I now back up to a local temp dir and then daily sync it with my B2 cloud account using Syncovery. Fully encrypted full backup in the cloud. p.s. I found another attention point, not a big one: if you are using git yourself in any subdir of the flash (e.g. to manage your scripts or other config/dev work), this unraid.net backup also excludes it, as it itself uses git as its backup method, and at the top /boot level this will conflict with a separate lower-level git tree. I saw the messages in the logs before I uninstalled it. So depending on your restore strategy and the location of the git server (e.g. a GitLab docker on the same unraid server), it may compromise your restore. Warrants yet another warning, as I was not aware it was using git as its backup method.
  8. Cool. Tnx, but I don't see anything yet other than the names of my zfs pools. Here is the output of zpool status: p.s. ZFS is compiled into the kernel with the ich777 kernel build docker.
  9. p.s. I don't believe in backups that only back up certain parts, or exclude certain areas in some situations but something else in other situations, depending on the moon phase etc., however well intended. That can create a nightmare of a mess during restores. And if it has to be that way, then please put it in bold, large, flashing letters on the screen where users set up the actual backup, impossible to miss. A false sense of security is way worse than no security. So not like this screen, which suggests all is good and shiny... and p.s. the appdata backup plugin properly backs up all bz* files but now advises in large letters to move to the unraid.net plugin, which does not back up bz* files. So another possible trap to fall into without a proper warning, or rather with an even worse one (although surely also well intended).
  10. Ok, then let me know how to get a system with gnif's vendor-reset fix for the AMD GPU reset bug implemented without compiling, and I am your man. Until then I guess I have to keep building the kernel with ich777's docker. And for the same reason I have to keep doing my own flash backups, as the dedicated builds are required in case of a restore. I will script it myself and upload to my own B2 cloud account instead.
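      Roughly what I have in mind, as a sketch only (I may end up using Syncovery for the upload instead; the rclone remote "b2", the bucket name and the local paths below are placeholders):

          stamp=$(date +%Y%m%d)
          backup=/mnt/user/backups/flash/flash-${stamp}.tar.gz
          mkdir -p /mnt/user/backups/flash
          tar -czf "$backup" -C /boot .                 # full flash backup, custom bz* files included
          rclone copy "$backup" b2:my-flash-backups/    # push the archive to the B2 bucket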
  11. Ouch, thanks for telling me, as that is a no-go for me then (it would break my ZFS stuff during a restore, which includes all my docker images + data and VMs, and a restore should be designed to be as uneventful as possible; having to remember that it is only a partial backup is not good). I will stick with my own backups for now. Pity, it was the only reason I wanted to use this plugin. Could you make it an extra user option to include the bz* files in the backups?
  12. Weird. Zero issues here. Have it on 6.9.1 (on 2 servers), not yet upgraded to 6.9.2.
  13. I just found the issue. The .sha256 files on my system contained 2 values, while with your command there is only 1, which makes more sense. So I replaced them and now it works. Thanks.
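      For anyone hitting the same thing, this is roughly how I fixed it (a sketch; the set of bz* files on your flash may differ, and I am assuming the check simply wants the bare hash in each .sha256 file):

          # rewrite each .sha256 file so it contains only the hash value
          for f in /boot/bzimage /boot/bzroot /boot/bzroot-gui /boot/bzmodules /boot/bzfirmware; do
              [ -f "$f" ] || continue
              sha256sum "$f" | awk '{print $1}' > "${f}.sha256"
          done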
  14. @ljm42 I checked and these files already exist. I upgraded using the kernel builder docker from ich777 (as I need to build with the AMD GPU reset bug patch) and then placed the resulting files in /boot. Do I need to recreate these? Same issue on my backup unraid server (upgraded with the same method).
  15. Just installed the plugin, purely for the cloud usb flash backup, but after logging in and then clicking activate I immediately get this error:
  16. If you want to run all VMs at the same time and want a GPU for each, you need 3 separate GPUs. You cannot actively share a GPU between VMs. If you don't need GPU acceleration you can run a VM without a GPU and just use VNC/RDP to access it, but it depends on what you want to do with your VMs.
  17. I did it without realising there was an issue before; it just worked. I am running with ZFS compiled into 6.9.1 via the docker kernel builder from ich777, so not the plugin. Not sure if that makes any difference; I doubt it, but I am not the expert here on that. I backed up the docker.img just in case so I could go back, and then reinstalled all dockers from the previous apps option, and it was super smooth. First one by one to be sure, then the rest in one big sweep. As long as you have no persistent data inside the dockers but keep it cleanly outside the image (in appdata), you are fine. If your normal docker updates work fine, this also works fine.
  18. I think a large factor in how your experiences with VMs are shaped is the usage of the VMs. If you have 24x7 heavily active production VMs running vfx renders, code compiles etc. and your btrfs/zfs system crashes, then you will see more of the "recovery power" of the filesystem and also more easily find its flaws. My issues were repeatable and, unfortunately or fortunately, happened in a period when I had lots of total system crashes due to GPU issues. So this tested the skills of both filesystems to the limit. Under these same circumstances, hosted on the same SSDs on the same OS version, with everything else the same, btrfs failed more than once, zfs passed all tests. That is just my experience, but of course every system/setup is different and can lead to different results. Even the version of btrfs/zfs/unraid etc. can have a large effect on the results. In the end we all stick with what we trust.
  19. I had soooooooo many problems with hosting VM images on the btrfs pool. Several corruptions, often unrepairable. And with a VM image a corruption is pretty bad, as you have to rebuild the whole VM if you are unlucky. Being smart and using snapshots did not help me, as eventually one of my main VMs was dead (when I had to rebuild the whole pool as it was unrecoverably broken). Moved to ZFS and not a single issue since. The host can crash like hell, and often did (sometimes more than once per day in the time I was fighting AMD GPU reset bugs), but the ZFS pools laugh at me about it and continue as if nothing happened (just telling me not to worry, I repaired everything for you). So not going back, ever. I only use btrfs for the cache at the moment, as I need a proper raided cache, which requires btrfs, but the day it doesn't I am gone from btrfs. Stung once, twice, thrice, byebye.
  20. Why haven't you moved yet to hosting docker in a folder on zfs instead of hosting a docker image on top of zfs? I moved to it last week and it works fine, and I don't have to worry about a docker.img file anymore. It's also more transparent, as you can just browse the content of all images etc.
  21. Not sure if this is still relevant, as 2 days ago (not knowing about this post) I moved from the docker image to a folder on zfs (a dataset) and everything works just fine. I run 6.9.1 built with zfs included via the kernel helper docker. edit: directly on zfs, so no zvol stuff as mentioned in other posts.
  22. 0. Make a screenshot of your current docker tab (useful for step 5, so you know the exact names of all your currently installed docker templates).
      1. Stop all dockers.
      2. In docker settings: stop the docker service.
      3. Rename the existing docker.img so you have a backup in case things do not go as you expect. Also create a directory named docker at the location where you want your dockers to live now.
      4. In docker settings: change to directory mode and specify the target directory (add a trailing slash or use the picker to select the docker directory you created, where you want your docker images to live), then start the docker service again.
      5. Under your docker tab you should now have nothing, as expected. Go to Apps > Previous Apps and reinstall all your old dockers. Be careful when picking from that list, as all your old attempts/templates etc. will be there as well, so use the list from step 0 as a reference.
      Remark: this assumes that you have all your docker persistent data somewhere else (e.g. in appdata), as any data not in these persistent locations will be wiped by a reinstall of the docker images. That is the same behaviour as when you upgrade a docker to a new version, for example, so if your upgrades typically work fine, this will also work fine. This is normal behaviour, and as good practice you should never have data living inside your docker that is not captured in a persistent location, unless it is temporary, not-required data of course.
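      For reference, steps 2 and 3 roughly correspond to the commands below (a sketch; the paths are the common Unraid defaults and may differ on your system, and doing it via the GUI as described above is the safer route):

          /etc/rc.d/rc.docker stop                      # step 2: stop the docker service
          mv /mnt/user/system/docker/docker.img \
             /mnt/user/system/docker/docker.img.bak     # step 3: keep the old image as a fallback
          mkdir -p /mnt/user/system/docker/docker/      # step 3: the new directory to point docker at
          # step 4 (switching to directory mode) is then done in Settings > Docker in the GUI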
  23. Just an observation and fyi: on ZFS it creates a separate dataset for each docker image layer that is pulled, so now I have 431 extra datasets if I do a 'zfs list'. Also snapshotting/send/receive becomes a pain, but I am thinking of not doing that for the docker datasets, as I can always pull fresh images if needed. Maybe just a few where there is a tight and critical relation between the image version and the appdata content that needs to stay in sync. The rest I will forget about. But for total granular control it is of course great.
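      What I will probably do is simply skip the docker datasets when snapshotting, something like this (a sketch; "tank" and "tank/docker" are placeholders for wherever your pool and docker directory dataset live):

          stamp=$(date +%Y%m%d-%H%M)
          # snapshot every dataset in the pool except the per-image docker datasets
          zfs list -H -o name -r tank | grep -v '^tank/docker/' | while read -r ds; do
              zfs snapshot "${ds}@auto-${stamp}"
          done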
  24. Yeah, but I wanted to test a few. Also I have a few hosted on my own docker repo etc. But most of them were done as you mentioned. Working on the last few of my own custom dockers now and all seems good. The advantage is that my zfs backups (send/receive) now also include the core docker images, so I have more granular control over the content when needed for more complex restore scenarios.
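      The backup part itself is just standard incremental send/receive, roughly like this (a sketch with placeholder pool, dataset and snapshot names):

          zfs snapshot tank/appdata@today
          zfs send -i tank/appdata@yesterday tank/appdata@today | zfs receive -F backup/appdata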
  25. Just went for it and it seems to work fine on zfs. Done 4 dockers, 30+ to go...