Marshalleq

Everything posted by Marshalleq

  1. Confirming that did the trick. It may be unrelated, but I noticed one of the devices listed via zpool list -v is showing as /dev/sdj1, whereas unraid sees it as /dev/sdj - which is what everything else is configured as. I don't use partitions with ZFS.
  2. Hmmm, the downgrade didn't fix it either. I rebooted it the other day with no problem, so I'm not sure what's changed. Just saw your comment - good idea, I was thinking along similar lines.
  3. After the upgrade my main ZFS SSD pool does not start automatically. However, running zpool import -a in the console does import it correctly. I checked that unraid still has the correct device names against those listed for the functioning pool (zpool list -v), and that part of it is correct. I'm trying a downgrade to see if it comes right, but just noting it here in case anyone else comes across it. skywalker-diagnostics-20240222_2054.zip
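For anyone hitting the same thing, this is roughly what I ran from the console to get the pool back and to compare device names - a sketch only, with mypool standing in for your pool name:
```
# Show importable pools without actually importing anything
zpool import

# Import every pool ZFS can find (this is what got my pool back after the upgrade)
zpool import -a

# List the pool with its member devices to compare against what unraid shows
zpool list -v mypool
zpool status -P mypool   # -P prints full device paths, e.g. /dev/sdj1
```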
  4. Just updating that, for the parts that are working, it's definitely working better than znapzend - in particular cleaning up old snapshots. Thanks for the help; I just gotta follow through and stop using Space Invader One's script, then I think, from what I've read, it will give me everything I need.
  5. Nice, I've got this started now using Space Invader One's script, but it's a bit weird and limiting, so I will convert to the method you're using. I'm hoping I can give the replicated copies a different retention to the source snapshots, like znapzend did.
  6. I've been grumpily playing with it. I say grumpily because I really just want znapzend to work, and a few things about Sanoid are pretty horrible. For one, it doesn't run as a daemon, so you have to rely on cron to run it in the background, which makes it less capable and a little bizarre in how the cron schedule aligns to the snapshot schedule. For example, I want hourly and daily snapshots at the source and weekly, monthly and yearly at the destination - that can't be done with my current setup. I might be able to do it if I ditch Space Invader One's script (which honestly we shouldn't have to use anyway) - something to look into, roughly along the lines of the sketch below. I could also do it using two scripts for one location, which would be silly. I don't like that by default it snaps all child datasets either. All in all, it needs more time spent trying to make basic functionality work that should be baked into the OS. I'm nursing a concussion at the moment, so it probably feels harder than it actually is.
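For the record, this is roughly the split I'm after if I drop the script and drive sanoid from plain cron on both ends - an untested sketch only, with ssdpool/data (source) and backup/data (destination) as placeholder dataset names:
```
# --- Source box: hourly + daily autosnaps ---
cat > /etc/sanoid/sanoid.conf <<'EOF'
[ssdpool/data]
use_template = source
recursive = no

[template_source]
hourly = 24
daily = 7
monthly = 0
yearly = 0
autosnap = yes
autoprune = yes
EOF

# --- Destination box: take no new snaps, just prune what replication sends ---
# ('weekly' needs a reasonably recent sanoid; if yours lacks it, approximate with daily/monthly)
cat > /etc/sanoid/sanoid.conf <<'EOF'
[backup/data]
use_template = dest
recursive = no

[template_dest]
weekly = 4
monthly = 12
yearly = 2
autosnap = no
autoprune = yes
EOF

# Cron entry on both boxes - sanoid itself works out whether a snapshot or prune is due.
# (Path depends on where sanoid is installed.)
# */15 * * * * /usr/local/sbin/sanoid --cron
```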
  7. I don't understand why this has turned into such a steaming pile. I looked into sanoid and syncoid out of desperation and they look very complicated to set up. Znapzend is running and starts at boot, yet it never runs a backup job (what I've been checking is sketched below). Foundational stuff like this makes me want to jump ship from unraid. Something in the ZFS implementation from the unraid folks is what has messed it up. If I didn't have a broken arm at the moment, I think I'd just virtualise unraid and be done with it. I mean, they've actually never gotten this foundational stuff right, but plugins have usually filled the gaps. That doesn't even seem possible now. Perhaps because Dev Pack and Nerd Pack are no longer available?
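For anyone else debugging the same thing, these are the znapzend checks I've been running from the console - a sketch only; tank/appdata is a placeholder dataset and the flags may differ between versions, so check znapzend --help on yours:
```
# Show the backup plans znapzend thinks it has (they are stored as ZFS properties)
znapzendzetup list

# The raw properties on the source dataset, in case the plan got mangled
zfs get -s local all tank/appdata | grep org.znapzend

# Run znapzend once in the foreground with debug output instead of as a daemon,
# to see why it never fires a backup job
znapzend --debug --noaction --runonce=tank/appdata
```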
  8. @steini84 How do you find syncoid / sanoid for removing older snapshots that are over the allowance? I.e. if we kept hourly for a week, then daily for a month, then monthly for a year, we should expect only about 4 weeks of daily snapshots to be retained, right? This is something that I never got working in znapzend - it seemed to just keep everything indefinitely. BTW, I never did get znapzend working after ZFS went built-in; it only works for a few hours or days and then stops. So, as per your suggestion, I am looking at Syncoid / Sanoid (still figuring out the difference between the two).
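As far as I can tell, the split is: sanoid takes and prunes the snapshots according to the retention counts in sanoid.conf, and syncoid just replicates them to the other box. Something like the following, sketched from memory with placeholder pool and host names, is what I'm planning to test:
```
# Local snapshots + pruning are handled by sanoid (driven from cron), e.g.:
#   hourly  = 168   # a week of hourlies
#   daily   = 30    # a month of dailies
#   monthly = 12    # a year of monthlies
# Anything over those counts gets pruned on the next sanoid --cron run.

# Replication is a separate syncoid call, e.g. push to a backup host:
syncoid ssdpool/data root@backuphost:backup/data

# Or copy locally between two pools on the same machine:
syncoid ssdpool/data backup/data
```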
  9. Does this mean we can finally untether the license from the array? It really doesn't play nicely with the built-in ZFS support. I have just now had to reboot the whole system due to a single failed ZFS disk - something not typical of a ZFS array and, as far as I know, entirely caused by having to stop the array to change a disk.
  10. Wow good to know! Will have to check mine.
  11. Sort of. But it combines all that and includes scheduling and it's all stored in the drive structure. So it's quite a bit more really.
  12. I'm not sure what the go file changes are, but other than that I don't see any problems. You say you don't want it to be part of the array, but it will be part of its own separate array, accessible under the standard Unraid array method. This does change some things: there are a bunch of ZFS-related things that sort of don't work, like hot-swapping disks without stopping the whole unraid system, because you have to stop the array(s) to change a disk in the GUI. A bit of a problem, and it's making me rethink my whole relationship with unraid at present. But answering you directly: I don't see any problems, and it's very similar to my setup - I have a few extra things in mine, but the import just pulls them in directly. Shares I just keep the same. Autotrim and feature flags like that are irrelevant and will just carry over, and compression is irrelevant to the import process too (see the sketch below for checking these after import). User scripts sound fine too; I use znapzend, which still doesn't work for me despite multiple attempts, so I assume scripts will be better.
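If you want to double-check that sort of thing once the pool is imported, these are the kinds of commands I'd use - just a sketch, with mypool standing in for your pool name:
```
# Import the existing pool (unraid normally does this when you assign the
# devices to a pool in the GUI, but it can also be done from the console)
zpool import mypool

# Pool-level settings and feature flags come along with the pool itself
zpool get autotrim mypool
zpool get all mypool | grep feature@

# Dataset-level settings like compression are stored in the datasets too
zfs get -r compression mypool
```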
  13. As people have finally gotten to (read the whole thread), the issue is a licensing issue, so in a sense this thread is named wrong: a whole bunch of implications are created by the method chosen to enforce the licence. Personally, I think the value in unraid has far surpassed the unraid array now, which is undoubtedly where it all started, and really the array should have nothing to do with how they apply the licence anyway - it was probably just the most convenient method at the time. Unraid has a unique customer focus which nobody else has, which unfortunately has meant some of the more typical NAS features haven't been well implemented yet. These impacts are not normally expected from hosts providing virtualisation capabilities and needing to provide high uptime. Presently, if you want high uptime and you know what you're doing, you would certainly not use unraid. ESXi, Proxmox and TrueNAS Scale, to name a few, and also QNAP- and Synology-style NASes, don't exhibit these issues - I can't think of one product offering virtualisation capabilities that requires you to stop the array for these kinds of changes. Anyone got one? I would suggest we bring visibility to the impacts and ask for fixes to them; while that's sort of done here, it's hidden in a big long thread. By raising it properly, perhaps we can get Limetech to understand that it's a bit of a negative on their product compared to other offerings, and they might do something about it. Perhaps someone can summarise them in the first post by editing it? As a starter for ten, some unexpected impacts that I have noticed include (I'm doing this half asleep, so please correct any you think I'm mistaken on, and note that by 'system' below I mean any customer-facing services, not the core OS):
  - Having to stop the whole system to replace a failed disk - and this now includes ZFS
  - Having to stop the whole system to change the name of a ZFS pool
  - Having to stop the whole system to change the mount point of a ZFS pool
  - Having to stop the whole system to make simple networking changes
  I think there are quite a few more scenarios, and some of them are fair - like isolating CPU cores. When you look at it, it's mostly about disk management, which is a bit embarrassing as that is a fundamental of a NAS. And this is the point, right: in this day and age we expect a bit better, and it's possible to do if Limetech get the message properly. Open to alternative suggestions. Great discussion in this thread!
  14. Yeah, I wish unraid would find another way of enforcing their license. It actually ruins their product a bit. I sort of thought it was OK to do it on their proprietary unraid array, but doing it on open-source ZFS is a bit low. This and a few other things are making me wonder about running unraid as a VM inside Proxmox or TrueNAS Scale lately. For virtualisation, backups and especially networking, unraid is left in the dust by these platforms. Unraid wins in some other areas though, particularly the user interface for docker and VMs, and the docker app store. I haven't used the unraid array for many years now, so that's not an issue; in fact, using ZFS in those other products would be a better experience. It'd be good to know whether unraid have any plans to improve how ZFS and the array integrate with licensing, and what those plans are.
  15. Well, it's your choice, but really I would highly recommend keeping it up to date; there are also a lot of risks in not updating it. Anyway, I'm not normally one of those people who won't answer a question because there's some other thing I don't like (I hate that), but in this case updating would actually solve what I think is your problem - you don't have ZFS installed. So on that, can you confirm: are you asking how to install the ZFS plugin, or how to transfer files using ZFS once you have the plugin installed? Assuming the former - have you tried going into Community Applications, checking if the plugin is there, and installing it? If it's not available there, you will need to get a version of the plugin matching your installed unraid version, as the plugins are compiled for the kernel that ships with the version of unraid you have. I would suggest you start there and report back. I'm not running an old version, so I'm unable to test. Thanks.
  16. I don't get it - what is preventing you guys from just upgrading unraid to the latest version? I know some people are scared of upgrades, but there's really no reason to be. The update has been out for a long time now and is quite stable.
  17. If you can still get it to install, then I think yes. Or just upgrade unraid to the latest version.
  18. Wow, this is going back a bit. I have stopped paying attention to it and can't really say if it's still happening - I assume it is. Last week I changed my docker back from folder type to image type. There are just too many bugs with folder type on the new native ZFS implementation, and I never liked how it made a dataset for each docker container either. Finally, since ZFS went native, I have a properly functioning docker again. Personally, I preferred the plugin - for one thing, I didn't have to stop the array to add or repair disks.
  19. Hi all, does anyone know why this is happening? It actually ends up stopping my VMs and dockers in the end. Despite what the log below alludes to, the driver is actually installed. Is there some new method I am not aware of? Oh, this log for me is at /run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/239d6e30c683fd56432d60fe02198e271c6d56ad3d7b562a74e1c9e51be78d68, which is why it keeps filling up my drive - it's a small 32MB tmpfs partition. Thanks!
```
cat log.json
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Using OCI specification file path: /var/run/docker/containerd/daemon/io.containerd.runtime.v2.task/moby/239d6e30c683fd56432d60fe02198e271c6d56ad3d7b562a74e1c9e51be78d68/config.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Auto-detected mode as 'legacy'","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Ignoring 32-bit libraries for libcuda.so: [/usr/lib/libcuda.so.535.54.03]","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Using prestart hook path: /usr/bin/nvidia-container-runtime-hook","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /dev/dri/card0 as /dev/dri/card0","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /dev/dri/renderD128 as /dev/dri/renderD128","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /usr/lib64/libnvidia-egl-gbm.so.1.1.0 as /usr/lib64/libnvidia-egl-gbm.so.1.1.0","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate glvnd/egl_vendor.d/10_nvidia.json: pattern glvnd/egl_vendor.d/10_nvidia.json not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /etc/vulkan/icd.d/nvidia_icd.json as /etc/vulkan/icd.d/nvidia_icd.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Selecting /etc/vulkan/implicit_layer.d/nvidia_layers.json as /etc/vulkan/implicit_layer.d/nvidia_layers.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate egl/egl_external_platform.d/15_nvidia_gbm.json: pattern egl/egl_external_platform.d/15_nvidia_gbm.json not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate egl/egl_external_platform.d/10_nvidia_wayland.json: pattern egl/egl_external_platform.d/10_nvidia_wayland.json not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/nvidia_drv.so: pattern nvidia/xorg/nvidia_drv.so not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/libglxserver_nvidia.so.535.54.03: pattern nvidia/xorg/libglxserver_nvidia.so.535.54.03 not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate X11/xorg.conf.d/10-nvidia.conf: pattern X11/xorg.conf.d/10-nvidia.conf not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/nvidia_drv.so: pattern nvidia/xorg/nvidia_drv.so not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"warning","msg":"Could not locate nvidia/xorg/libglxserver_nvidia.so.535.54.03: pattern nvidia/xorg/libglxserver_nvidia.so.535.54.03 not found","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounts:","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounting /usr/lib64/libnvidia-egl-gbm.so.1.1.0 at /usr/lib64/libnvidia-egl-gbm.so.1.1.0","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounting /etc/vulkan/icd.d/nvidia_icd.json at /etc/vulkan/icd.d/nvidia_icd.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Mounting /etc/vulkan/implicit_layer.d/nvidia_layers.json at /etc/vulkan/implicit_layer.d/nvidia_layers.json","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Devices:","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Injecting /dev/dri/card0","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Injecting /dev/dri/renderD128","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Hooks:","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Injecting /usr/bin/nvidia-ctk [nvidia-ctk hook create-symlinks --link ../card0::/dev/dri/by-path/pci-0000:81:00.0-card --link ../renderD128::/dev/dri/by-path/pci-0000:81:00.0-render]","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Applied required modification to OCI specification","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Forwarding command to runtime","time":"2023-06-26T11:08:27+12:00"}
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:28+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:28+12:00"}
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:33+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:33+12:00"}
{"level":"info","msg":"Running with config:\n{\n \"AcceptEnvvarUnprivileged\": true,\n \"NVIDIAContainerCLIConfig\": {\n \"Root\": \"\"\n },\n \"NVIDIACTKConfig\": {\n \"Path\": \"nvidia-ctk\"\n },\n \"NVIDIAContainerRuntimeConfig\": {\n \"DebugFilePath\": \"/dev/null\",\n \"LogLevel\": \"info\",\n \"Runtimes\": [\n \"docker-runc\",\n \"runc\"\n ],\n \"Mode\": \"auto\",\n \"Modes\": {\n \"CSV\": {\n \"MountSpecPath\": \"/etc/nvidia-container-runtime/host-files-for-container.d\"\n },\n \"CDI\": {\n \"SpecDirs\": null,\n \"DefaultKind\": \"nvidia.com/gpu\",\n \"AnnotationPrefixes\": [\n \"cdi.k8s.io/\"\n ]\n }\n }\n },\n \"NVIDIAContainerRuntimeHookConfig\": {\n \"SkipModeDetection\": false\n }\n}","time":"2023-06-26T11:08:39+12:00"}
{"level":"info","msg":"Using low-level runtime /usr/bin/runc","time":"2023-06-26T11:08:39+12:00"}
```
  20. Hi, there are not many replies, are there? Did you ever get this going? There seems to be something different about Plex, and I'm wondering if maybe the official Plex container doesn't support it properly. Will try that tonight.
  21. I'm having the same issue. I have NVIDIA_VISIBLE_DEVICES set with my GPU's ID pasted in there, and I have a screen plugged in (it wasn't turned on, though - I assume I don't have to actually have it on, that would be annoying - it was on at the wall). I also have --runtime=nvidia in extra parameters, no spaces. I have the two checkboxes for hardware acceleration on in Plex. I have restarted docker, and the whole machine. I can see the GPU is detected using the GPU stats plugin. There used to be some other parameters needed - driver capabilities and such - but I assume these are no longer required with the new driver? I'm running unraid 6.12.0-rc6 and the official Plex docker. Anything I've missed? Thanks.
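For reference, the template settings I'm describing should boil down to something like the docker run below - a sketch only, with the GPU UUID and host paths as placeholders, and NVIDIA_DRIVER_CAPABILITIES included in case it is still needed:
```
docker run -d --name plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/data \
  plexinc/pms-docker

# If the runtime is wired up, 'docker exec plex nvidia-smi' should list the card.
```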
  22. Just chiming in here because I also get this issue (different container), and have a few times now. I don't understand why it's happening. Sometimes parts of the directory come through, like just the folder name but without the files; another time one directory will come through completely but not the other. The main difference I note is that I'm using the extra dockerhub link rather than the Unraid app store default, in this case for the idrive backup container here: https://hub.docker.com/r/taverty/idrive/ I'd bet if I set this up manually in docker on ubuntu or something it'd work. I think there's some fancy unraid stuff meant to make it easier getting in the way or something; I don't think it's me, but I could be wrong! The first time I ran this container, I had an empty scripts directory and no /etc directory. I deleted it and tried again; this time I have no scripts directory and a full /etc directory. I've triple-checked my paths are right, it makes no sense. And to make matters worse, the data is lost both inside and outside the container, i.e. in this case there is nothing under /opt/idrive/IDriveForLinux/scripts/ or the host's equivalent mount point. If I remove the mapping it all comes back with a container restart. Seems like some kind of weird bug. I've removed the docker file a few times, including deleting the whole image and the host docker directory simultaneously - still the same. If anyone knows any tricks around this, that would be appreciated! Thanks, Marshalleq.
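If anyone wants to compare, this is how I've been checking what docker actually mounted versus what the template says - a sketch, with the container name (idrive) and host path as placeholders:
```
# What bind mounts did docker actually create for the container?
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' idrive

# Compare the contents on each side of the mapping
docker exec idrive ls -la /opt/idrive/IDriveForLinux/scripts/
ls -la /mnt/user/appdata/idrive/scripts/
```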
  23. I just upgraded to rc5. It works perfectly, so the bug I had is gone, well done to the team and thank you to @unraid! Still have the USB key though.
  24. I think this is still manual. I don't expect any ZFS GUI to come out in this release - someone may correct me on that. RC5 came around fast. Anyone know if I can get rid of my USB key that is used to boot the unraid array yet? I'm still on RC2 because RC3 had problems. I have a horrible feeling these will still be present in RC5 - but we'll see! Slightly surprised to hear them say that they expect a stable release in a few weeks.