Marshalleq

Everything posted by Marshalleq

  1. I think this is still manual. I don't expect any ZFS GUI to come out in this release - someone may correct me on that. RC5 came around fast. Anyone know if I can get rid of the USB key that is used to boot the Unraid array yet? I'm still on RC2 because RC3 had problems. I have a horrible feeling these will still be present in RC5 - but we'll see! Slightly surprised to hear them say they expect a stable release in a few weeks.
  2. Edit the SMB extras file in /boot (see the sketch below). I'm sure there will be a GUI method coming in the future, but as far as I know this is still it.
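     As a rough sketch - on my system the file is /boot/config/smb-extra.conf (double-check the path on yours), and the share name, path and user below are only placeholders:

       # /boot/config/smb-extra.conf - extra Samba share definitions Unraid pulls into its smb config
       [zfsshare]
           path = /mnt/ssd1pool/share
           browseable = yes
           writeable = yes
           valid users = yourusername

     You'll likely need to restart Samba (stopping and starting the array does it) before the new share shows up.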
  3. Yes, ZFS. Thanks, but I still think this is a bug, given that rolling back to RC2 makes the problem go away. The paths you highlight above are correct. I did reboot a few times to confirm the problem was consistent. The paths were already mounted when navigating via bash; somehow the Unraid system wasn't seeing them though. Obviously I shouldn't have to mount them manually either. So if there's something specific you want me to test I'm happy to do that. You want me to upgrade and manually mount ZFS again? Given I've already tried that I am a little reluctant, but will do it if that's what it takes to justify further investigation.
  4. I hadn't noticed this issue, will look. I've used a docker directory on ZFS for quite a while too, including up to RC2. So we now have to use an image, with BTRFS on it? That seems a bit wrong.
  5. After upgrading to RC3 the following symptoms occurred. Going back to RC2 they went away (diagnostics attached). Symptom 1: The VM service showed as started but all VMs were missing under the VM menu. Symptom 2: All dockers showed as started and could be navigated to, but were not able to see the disks. Symptom 3: The disks were still mounted and navigable via ssh - which I wasn't expecting - at least the ones I checked, which were ssd1pool and hdd2pool. skywalker-diagnostics-20230416-0801.zip
  6. It is still unclear though what exactly changed in RC3, as some of this is most certainly not new - some of it has been around since RC1. I think this is normal for Unraid though, right? They sort of call it an RC3 changelog but lump everything else in. But if I recall correctly they used to put rc1/2/3 next to the items each one applied to so you could tell. For example, I am uncertain if anything in the ZFS section is RC3-specific. Certainly the first half of it isn't.
  7. Aha, it's there now! Perhaps my browser didn't update, or they just added it?! Weird!
  8. Thanks, I think you meant this page, right? Because I've already looked there; it has up to RC2, but no RC3 that I can find.
  9. Has anyone found a changelog for RC3? There are links and headings that say they're a changelog but they all seem to be older releases...
  10. After all the comments, I thought it might be useful to make some terminology clear, because even knowing how this all works you have to be fairly eagle-eyed to understand the explanations and how they relate to the question. Probably the best way to summarise it more clearly is like this:

      The Unraid array
      When people mention the 'array' (at least in how they are replying to you here), they actually mean Unraid's implementation of RAID, which accepts differing sized disks - even though, technically speaking (by my training at least), any form of RAID is an array. Yes, ZFS can now be a single disk inside an Unraid array as of the latest RC. Note that the Unraid array is very much slower than other arrays and it doesn't offer any form of recovery or protection other than being able to rebuild the number of disks that match your parity level. There is no 'bitrot' support as some call it: no checking that files are corrupted, no integrity checking of files as they're written, nor the similar features provided by non-self-healing arrays, as far as I know (someone please correct me if I'm wrong). There used to be a plugin that checked integrity, but it was awkward and I don't think it really did anything worthwhile in the end. I uninstalled it.

      ZFS vs BTRFS
      They're similar and they're not. There's a good write-up here: https://history-computer.com/btrfs-vs-zfs/ - however the things that make the difference to me personally are listed below; some will undoubtedly disagree.

      BTRFS
      If you search these forums you will see that there are a number of people complaining about losing data with BTRFS. It may be better now, but in my opinion losing data even once due to a file system is a major shortfall that isn't worth the risk of trying again. I did try again because people said it was a coincidence, but silly me - another set of lost data. And at the time I had to use BTRFS for my docker image, which consistently had problems. I won't use it anymore because I don't trust it; others will disagree and may (hopefully) be able to point out that the bugs behind this have actually been found and fixed - because choices are good. Having bugs of that nature in a filesystem that's been declared production ready doesn't give me any confidence whatsoever. It should never have happened.

      ZFS
      If you really want a safe file system, ZFS has the longest proven pedigree, probably because it had the most resources thrown at it by Sun Microsystems in the original years while the code was still open. Now that the legal issues are out of the way it has, I would say, the most active and stable development and the most mature core.

      Another option
      If you truly want data protection, whatever file system you choose, a nice trick is to just create a ZFS or BTRFS mirror and run the Unraid array alongside it separately (sketch below). That way you have one array that is self-healing and one that can mix disk sizes. For people who rely on the different-sized-disks feature but want self-healing, I think this is a good middle ground, because self-healing is usually best suited to photographs, documents and the like which you want to last forever. Large binary files that you could perhaps redownload, temp space and that sort of thing are what the Unraid array is well suited for, because essentially all the Unraid array will do is allow you to replace failed disks. At least that's my understanding. Hope that helps.
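      To make the mirror idea concrete, a minimal sketch assuming two spare disks - 'tank' and the device paths are placeholders, so substitute your own (by-id paths are safer than /dev/sdX):

        zpool create -o ashift=12 -m /mnt/tank tank mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2
        zpool status tank    # confirm both disks show up under the mirror vdev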
  11. Well, when you first drop the plugin and update to the Unraid release with ZFS built in, nothing really changes. You can still do zpool import -a and use the original names. The issue comes when you use Unraid's GUI. So what I did was import the pools, list out the disks with zpool list -v, take a screenshot of that, then export the pools (roughly as below). Then you have all the info you need to import into the GUI. The GUI gives you both the UUID and the /dev device name, so it's super easy to select.
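      In command form it was roughly this (ssd1pool and hdd2pool are just my pool names - use your own):

        zpool import -a          # bring in everything the old plugin managed
        zpool list -v            # note which physical devices belong to which pool (screenshot this)
        zpool export ssd1pool    # export each pool again before assigning its disks in the GUI
        zpool export hdd2pool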
  12. I have, and have always had, my ZFS pools under /mnt. With the built-in ZFS I still have them under /mnt and haven't noticed any problems. The only thing I haven't worked out yet is the znapzend plugin - currently not working, but I believe others have gotten it to function. Also, there are pool naming constraints currently. I had to do probably 1-2 hours of work to rename all my dockers, VMs etc. to use the lower-case names (not so bad), but more annoying is the restriction on where numbers can be placed. So I now have hdd1pool instead of HDDPool1. Totally messed up my naming scheme. However, the Unraid GUI is a big improvement for matching UUIDs and /dev device names. A slight negative is that you have to stop the whole server to create the array. I find that quite disappointing, but I don't do it that much so it's not too bad - just enough to be annoying. Got two more disks today, so here I go again: stop the whole server for something that technically doesn't need that to happen.
  13. I’ve run ZFS plugin for years and have never once run dockers and vms on the unraid cache. The whole point for me was to get these off btrfs so that I could get a reliable experience. Which was successful. So I’ve never understood why people keep saying that to be honest. Works for me anyway, perhaps I’m just lucky.
  14. Hi all, anyone noticed a reduction in backup speed in recent times? Due to the LastPass hack, I decided to roll my client-side keys and am in the process of doing a completely new backup. There is a lot of data, but it really is slow. For example, on one of my backups I have 3TB remaining and it is saying it will take 4.8 months to complete. I have a 1G connection that in reality gives me around 700Mb/s upload, so the implied rate is a tiny fraction of that (rough maths below). I suspect they might throttle transfers that take over a certain amount of time or something. Any thoughts? Thanks.
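     For what it's worth, the rough maths (assuming the 3TB / 4.8 month estimate is to be believed):

       echo "scale=2; 3000000 / (4.8*30*24*3600)" | bc    # ~0.24 MB/s implied by the ETA
       echo "scale=2; 700 / 8" | bc                       # ~87.5 MB/s available on a 700Mb/s uplink

     So the backup is running at well under 1% of what the connection can do.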
  15. From the links you posted I wouldn't say it's working as intended; rather, it's working the same as it is for everyone else, that is, there is a bug for Windows-based guests. Which leads me to wonder about the comparisons to Proxmox, because it was said that this issue didn't happen there. Given Proxmox also uses KVM, the only remaining options seem to be that they've implemented a workaround or we're mistaken about the difference. It would be interesting to know if there is a workaround, as if that were true, it is something Unraid could implement also.
  16. Hi all, very exciting to see ZFS finally arrive in Unraid! I guess this is a request, or perhaps validation of the roadmap. Presently I have the following problem: my pools are named SSDPool1, SSDPool2, HDDPool1, HDDPool2 etc. RC2 currently limits pool naming, compared with what is available in native ZFS, to lowercase only, no special characters, and no numbers at the beginning or end of the pool name. So there are at least two of those things I must change. Easy, right? I can just make it ssd1pool, ssd2pool etc. I suppose. Annoying, but I can live with that. However, what I must also do is redo all my docker containers, VMs, probably some plugins and other things I'm not thinking of to reference the new mount point. Yes I CAN do this, however it is a lot more work and, true to the title, not a seamless import. Another option might be to keep the new restricted pool name and manually set the mountpoint back to the original (something like the command sketched below). Can someone from Unraid confirm two things so that I know how to progress? 1 - Are we stuck with this naming restriction, or is it on a plan somewhere to relax / remove it in a later RC? 2 - If I change the mountpoint manually, will Unraid respect that, or is there some reason why I need to keep it the same as the pool name as set in the GUI? Many thanks for all your hard work. Marshalleq.
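      The mountpoint-only option I mean in question 2 would be something like this (ssd1pool and /mnt/SSDPool1 are from my own setup, and I haven't yet confirmed Unraid tolerates it - hence the question):

        zfs set mountpoint=/mnt/SSDPool1 ssd1pool    # keep the GUI-imposed pool name but mount at the old path
        zfs get mountpoint ssd1pool                  # confirm the property stuck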
  17. Hi all, well I tried one pool, which worked technically, but what I assume is a constraint of the design has meant I have to roll it back. Not sure if there's somewhere else I should feed this back to; will look in a moment. The specifics are that Unraid puts non-ZFS limitations on pool names, i.e. no capital letters, symbols, or numbers at the beginning or end. So my SSDPool1 has to become something like ssdpool, which is annoying when I have numbered lists of pool names. That in itself wouldn't be too bad if I didn't have so much referencing the former mount point, which makes using the Unraid GUI version more of a mammoth task than a minor one. For now, I will stick to import -a. Hopefully this is something that will be addressed in the future so that it's easier to import from native ZFS or perhaps other systems such as TrueNAS. Also note it actually renames the ZFS pool as well (it's not just the mount point that changes), so you have to undo this by exporting and then reimporting with zpool import oldpoolname newpoolname, then zfs set mountpoint=mountpoint newpoolname (sketched below). I was excited to do this, but not looking forward to this much work right now. I suspect I could actually just redirect the mountpoint and leave the changed name, but I'm unsure how Unraid will treat that, so I've left it for now.
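      Concretely, the undo looked something like this for my first pool (names are from my setup, so adjust):

        zpool export ssdpool                         # export the pool under the name Unraid gave it
        zpool import ssdpool SSDPool1                # re-import it under the original name
        zfs set mountpoint=/mnt/SSDPool1 SSDPool1    # and point the mountpoint back to the old path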
  18. Finally getting around to it now. I recall someone saying I still need a USB stick though to run a dummy array on. So I'll keep it like that for now, but if anyone knows otherwise I'm keen to hear about it! Thanks.
  19. Thanks @JorgeB. I note that RC2 is out today, but in the changelog I don't see mention of special vdevs etc. There's something about non-root vdevs - is that it? I guess I'm just asking so I don't waste my time importing pools that still can't be imported. Thanks!
  20. Thanks. So I will wait then for those pools, and probably add any pools that don't have those features with the above method. Thanks again.
  21. Thanks, I did actually, but you made me think I missed something, so I revisited. What I was hoping for was not to use any fudged, not-true-ZFS weirdness in Unraid, which the post below seems to border on. I'm not brave enough yet to go sticking ZFS into Unraid pool strangeness because of all the extra bits my pools have that Unraid currently doesn't support - the advice above, because of this, was to 'import only', which I took to mean import -a rather than the GUI. Am I wrong about that? I'm not confident enough yet in what Unraid is actually doing to perform this step. Does the GUI import workaround method below work for the unsupported parts of ZFS like special vdevs? Thankfully I'm not using encryption, but I nearly set it up this week because really I should be doing that these days. Part of my problem might simply be that I've not used Unraid pools since before multiple pools were supported - so there will be a small learning curve there. I have about 30 drives in my ZFS setup: dedup, a special vdev for storing metadata, a 6-disk mirror, a couple of 8-disk raidz2 pools - it's just a bit to unpick, so better to be safe than sorry. Key parts from the announcement I think you're referring to (originally highlighted in red):

      Pools created with the 'steini84' plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).

      When creating a new ZFS pool you may choose "zfs - encrypted", which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

      During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. Future update will include ability to configure the ARC via webGUI, including auto-adjust according to memory pressure, e.g., VM start/stop.
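      On the ARC note above, a minimal sketch of what a custom override might look like - I'm assuming 'config/modprobe.d/zfs.conf' means /boot/config/modprobe.d/zfs.conf on the flash drive, and the 8 GiB figure is only an example value, not a recommendation:

        # /boot/config/modprobe.d/zfs.conf - overrides the auto-generated 1/8-of-RAM ARC limit
        options zfs zfs_arc_max=8589934592    # cap the ARC at 8 GiB (value is in bytes)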
  22. Thanks for pointing this out - no, mine isn't working either; timestamps stopped at the time of the upgrade. So you're saying if I go and start it, it will then just snap everything every hour? That's weird.
  23. Done it. Updated Unraid then removed the ZFS plugin (that's probably better the other way around). Nothing was detected, so I did a zpool import -a and then I was able to stop and start docker. One disk is apparently unavailable, but otherwise it's working as far as I can tell. Will let it settle before trying anything further. For those wondering, the ZFS version is 2.1.9-1.
  24. Ok thanks, so it sounds like when you say import only, the command line is still available to do whatever we want and the limitations are in some kind of GUI implementation? If so I'll do it. Have been looking forward to this; hopefully it isn't too unraided, like some weird Unraid array requirement. Seems not though. Thanks!
  25. Thanks @steini84, it is strangely exciting to be able to remove my USB drives from my setup. Just a note that you may not be out of the woods yet, because according to the release notes only certain ZFS features are supported. I am not sure how those are being limited, but I have more than one pool, cache and special vdevs, way more than 4 drives, dedup and probably other things. I would have thought ZFS was just ZFS - or maybe the limitations are just in the new GUI? Hoping I can just upgrade and that's that. Slightly scared to do it! 😮