Marshalleq

Members
  • Posts
    967
  • Joined
  • Last visited

1 Follower

About Marshalleq

  • Birthday October 17

Converted

  • Gender
    Male
  • URL
    https://www.tech-knowhow.com
  • Location
    New Zealand
  • Personal Text
    TT

Recent Profile Visitors

3580 profile views

Marshalleq's Achievements

Collaborator (7/14)

139 Reputation

  1. Confirming that did the trick. It may be unrelated, but I noticed one of the devices listed via zpool list -v is showing as /dev/sdj1, whereas Unraid sees it as /dev/sdj, which is what everything else is configured as. I don't use partitions with ZFS.
  2. Hmmm, the downgrade didn't fix it either. I rebooted it the other day with no problem, so I'm not sure what's changed. Just saw your comment - good idea, I was thinking something similar.
  3. After the upgrade my main ZFS SSD pool does not start automatically. However, running zpool import -a in the console does bring it up correctly. I checked that Unraid still has the correct device names against those listed for the functioning pool (zpool list -v), and that part of it is correct (a zpool check sketch follows this list). I'm trying a downgrade to see if it comes right, but just noting this here in case anyone else comes across it. skywalker-diagnostics-20240222_2054.zip
  4. Just updating that, for the parts that are working, it's definitely working better than znapzend - in particular cleaning up old snapshots. Thanks for the help; I just have to follow through and stop using Space Invader One's script, and then, from what I've read, it will give me everything I need.
  5. Nice, I've got this started now using Space Invader One's script, but it's a bit weird and limiting, so I will convert to the method you're using. I'm hoping I can run replication on a different schedule to the snapshots, like znapzend did.
  6. I've been grumpily playing with it. I say grumpily because I really just want znapzend to work, and a few things about Sanoid are pretty horrible. It doesn't run in the background as a daemon, so you have to rely on cron to run it, which makes it less capable and a little bizarre in how the cron schedule aligns with the snapshot schedule. For example, I want hourly and daily snapshots at the source and weekly, monthly and yearly at the destination - that can't be done. I might be able to do it if I ditch Space Invader One's script, which honestly we shouldn't have to use anyway - something to look into (a sanoid.conf and cron sketch follows this list). I could also do it using two scripts for one location, which would be silly. I don't like that by default it snapshots all child datasets either. All in all, it needs more time spent trying to make basic functionality work that should be baked into the OS. I'm nursing a concussion at the moment, so it probably feels harder than it actually is.
  7. I don't understand why this has turned into such a steaming pile. I looked into Sanoid and Syncoid out of desperation and they look very complicated to set up. Znapzend is running and starts at boot, yet it never runs a backup job. Foundational stuff like this makes me want to jump ship from Unraid. Something in the ZFS implementation from the Unraid folks is what has messed it up. If I didn't have a broken arm at the moment I think I'd just virtualise Unraid and be done with it. I mean, they've actually never gotten this foundational stuff right, but plugins have usually filled the gaps. That's not even possible now, it seems - perhaps because DevPack and NerdPack are no longer available?
  8. @steini84 How do you find Syncoid / Sanoid for removing older snapshots that are over the allowance? I.e. if we did hourly for a week, then daily for a month, then monthly for a year, we should expect only about four weeks of daily snapshots, right? This is something that I never got working in znapzend; it seemed to just keep everything indefinitely. BTW, I never did get znapzend working after ZFS went built-in - it only works for a few hours or days and then stops. So, as per your suggestion, I am looking at Syncoid / Sanoid (still figuring out the difference between them). A retention sketch follows this list.
  9. Does this mean we can finally untether the license from the array? It really doesn't play nicely with the built-in ZFS support. I have just now had to reboot the whole system due to a single failed ZFS disk - something not typical of a ZFS array, and entirely caused by having to stop the array to change a disk, as far as I know.
  10. Wow good to know! Will have to check mine.
  11. Sort of, but it combines all of that, includes scheduling, and it's all stored within the ZFS datasets themselves, so it's quite a bit more really.
  12. I'm not sure what the go file changes are, but other than that I don't see any problems. You say you don't want it to be part of the array, but it will be part of its own separate array, accessible under the standard Unraid array method. This does change some things: there are a bunch of ZFS-related things that sort of don't work, like hot-swapping disks without stopping the whole Unraid system, because you have to stop the array(s) to change a disk in the GUI. That's a bit of a problem which is making me rethink my whole relationship with Unraid at present. But answering you directly, I don't see any problems; it's very similar to my setup - I have a few extra things in mine, but it just pulls them in directly. Shares I just keep the same. Autotrim and feature flags like that are irrelevant and will just continue on, and compression is irrelevant to the import process too. User Scripts sounds fine as well; I use znapzend, which still doesn't work for me despite multiple attempts, so I assume scripts will be better.
  13. As people have finally gotten to (read the whole thread), the issue is a licensing one, so in a sense this thread is kind of misnamed: a whole bunch of implications are created by the method chosen to enforce the licence. Personally, I think the value in Unraid has far surpassed the Unraid array now, which is undoubtedly where it all started, and it really should have nothing to do with how they apply the licence anyway - it was probably just the most convenient method at the time. Unraid has a unique customer focus which nobody else has, which unfortunately has meant some of the more typical NAS features haven't been well implemented yet. These impacts are not normally expected from hosts providing virtualisation capabilities that need to provide high uptime. Presently, if you want high uptime and you know what you're doing, you would certainly not use Unraid. ESXi, Proxmox and TrueNAS Scale, to name a few, and also QNAP- and Synology-style NASes, don't exhibit these issues - I can't think of one product offering virtualisation that requires you to stop the array for these kinds of changes. Anyone got one? I would suggest we bring visibility to the impacts and ask for fixes to them; while that's sort of done here, it's hidden in a big long thread. By raising it like that, perhaps we can get Limetech to understand that it's a bit of a negative on their product compared to other offerings, and they might do something about it. Perhaps someone can summarise them in the first post by editing it? As a starter for 10, some unexpected impacts that I have noticed include (I'm doing this half asleep, so please correct any you think I'm mistaken on, and note that by 'system' below I mean any customer-facing services, not the core OS):
     • Having to stop the whole system to replace a failed disk - and this now includes ZFS
     • Having to stop the whole system to change the name of a ZFS array
     • Having to stop the whole system to change the mount point of a ZFS array
     • Having to stop the whole system to make simple networking changes
     I think there are quite a few more scenarios; some of them are fair, like isolating CPU cores. When you look at it, it's mostly about disk management, which is a bit embarrassing as that is a fundamental of a NAS. And this is the point, right: in this day and age we expect a bit better, and it's possible to do if Limetech get the message properly. Open to alternative suggestions. Great discussion in this thread!
  14. Yeah, I wish Unraid would find another way of enforcing their licence; it actually ruins their product a bit. I sort of thought it was OK to do it on their proprietary Unraid array, but doing it on open-source ZFS is a bit low. This and a few other things are making me wonder about running Unraid as a VM inside Proxmox or TrueNAS Scale lately. For virtualisation, backups and especially networking, Unraid is left in the dust by these platforms. Unraid wins in some other areas, though, particularly the user interface for Docker and VMs, and the Docker app store. I haven't used the Unraid array for many years now, so that's not an issue; in fact, using ZFS in those other products would be a better experience. It'd be good to know if Unraid have any plans, and what they are, to improve ZFS and the array's integration with licensing.
  15. Well, it's your choice, but I would really highly recommend keeping it up to date; there are also a lot of risks in not updating it. Anyway, I'm not normally one of those people who won't answer a question because there's some other thing they don't like (I hate that), but in this case updating would actually solve what I think is your problem: you don't have ZFS installed. So on that, can you confirm - are you asking how to install the ZFS plugin, or how to transfer files using ZFS once you have the plugin installed? Assuming the former: have you tried going into the Community Apps store, checking whether the plugin is there, and installing it? If it's not available, you will need to get a version of the plugin matching your installed Unraid version, as the plugins are compiled for the kernel that ships with each Unraid release. I would suggest starting there and reporting back. I'm not running an old version, so I am unable to test. Thanks.
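
For reference, a minimal console sketch of the import and device-name check described in posts 1 and 3 above. The pool name "ssdpool" is a hypothetical placeholder, not the pool from the diagnostics; substitute your own.

    # Import any pools that did not come back automatically after the upgrade
    zpool import -a

    # If a pool is missing entirely, scan by stable IDs instead of sdX names
    # ("ssdpool" is a hypothetical pool name)
    zpool import -d /dev/disk/by-id ssdpool

    # Compare what ZFS reports with what Unraid has configured
    zpool status -v ssdpool
    zpool list -v ssdpool

A member showing as /dev/sdj1 in the ZFS output while Unraid shows /dev/sdj often just means ZFS is addressing the data partition on that disk rather than indicating a real mismatch, but it is worth confirming in zpool status before ruling anything out.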
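
As a rough illustration of the "hourly and daily at the source" half of post 6, here is a minimal Sanoid sketch. The pool/dataset name (ssdpool/appdata), the retention counts and the sanoid binary path are all assumptions; on Unraid the schedule would more likely be driven from the User Scripts plugin than a plain crontab.

    # Source machine: write /etc/sanoid/sanoid.conf (hypothetical names)
    cat > /etc/sanoid/sanoid.conf <<'EOF'
    [template_source]
            hourly = 36
            daily = 14
            weekly = 0
            monthly = 0
            yearly = 0
            autosnap = yes
            autoprune = yes

    [ssdpool/appdata]
            use_template = source
            recursive = yes
    EOF

    # Sanoid has no daemon, so cron (or User Scripts) has to call it on a schedule;
    # --cron both takes and prunes snapshots when they are due, e.g.:
    #   */15 * * * * /usr/local/sbin/sanoid --cron

Running it every 15 minutes is deliberate: sanoid itself decides when an hourly or daily snapshot is actually due, so the cron interval only needs to be at least as frequent as the shortest period in the policy.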
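
And for the retention question in post 8 ("hourly for a week, daily for a month, monthly for a year"), a sketch of how that policy might look on the backup side, together with the syncoid pull that feeds it. Dataset names and the source hostname are hypothetical, and note that sanoid only prunes snapshots that follow its own autosnap naming, so snapshots created by other tools are left alone.

    # Backup machine: retention only, no new snapshots taken here
    cat > /etc/sanoid/sanoid.conf <<'EOF'
    [template_backup]
            # roughly a week of hourlies, a month of dailies, a year of monthlies
            hourly = 168
            daily = 30
            weekly = 0
            monthly = 12
            yearly = 0
            autosnap = no
            autoprune = yes

    [backuppool/replica/appdata]
            use_template = backup
            recursive = yes
    EOF

    # Pull replication from the source; --no-sync-snap keeps syncoid from adding
    # its own extra snapshots, so only sanoid's autosnaps exist on both sides
    syncoid --recursive --no-sync-snap root@source:ssdpool/appdata backuppool/replica/appdata

With autoprune = yes, anything beyond the 168/30/12 allowance is cleaned up on the next sanoid --cron run on the backup host.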