Posts posted by Marshalleq

  1. 13 hours ago, ashman70 said:

    I have a backup server running the latest RC of unRAID 6.12. I am contemplating wiping all the disks and formatting them in ZFS so each disk is running ZFS on its own. Would I still get the data protection from bit rot from ZFS by doing this?

    After all the comments, I thought it might be useful to make some of the terminology clear, because even knowing how this all works, you have to be somewhat eagle-eyed to understand the explanations and how they relate to the question.

     

    Probably the clearest way I can think of to summarise it is like this:

     

    The UNRAID Array

    When people here mention the 'array' (at least in how they are replying to you), they actually mean Unraid's own implementation of RAID, which accepts disks of differing sizes - even though, technically speaking (by my training at least), any form of RAID is an array.  Yes, ZFS can now be used on a single disk inside an Unraid array as of the latest RC.  Note that Unraid's array is very much slower than other arrays, and it doesn't offer any form of recovery or protection other than being able to rebuild as many failed disks as you have parity disks.  There is no 'bitrot' support, as some call it: no detection of corrupted files, no integrity checking of files as they're written, nor even the similar features provided by non-self-healing arrays, as far as I know (someone please correct me if I'm wrong).  There used to be a plugin that checked integrity, but it was awkward and I don't think it really did anything worthwhile in the end.  I uninstalled it.

     

    ZFS vs BTRFS.  They're similar and they're not.  There's a good write-up here: https://history-computer.com/btrfs-vs-zfs/ - however, the things that make the difference to me personally are listed below; some will undoubtedly disagree.

     

    BTRFS

    If you search these forums you will see a number of people complaining about losing data with BTRFS.  It may be better now, but in my opinion losing data even once due to a file system is a major shortfall that isn't worth the risk of trying again.  I did try again because people said it was a coincidence, but silly me - another set of lost data.  And at the time I had to use BTRFS for my docker image, which consistently had problems.  I won't use it anymore because I don't trust it; others will disagree and may (hopefully) be able to point out that the bugs behind this have actually been found and fixed - because choices are good.  Having bugs of that nature in a filesystem that is claimed to be production ready doesn't give me any confidence whatsoever, though.  It should never have happened.

     

    ZFS

    If you really want a safe file system, ZFS has the longest proven pedigree - probably because it had the most resources thrown at it by Sun Microsystems in the early years, while the code was still open.  Now that the legal issues are out of the way, it has what I would say is the most active and stable development and the most mature core.

     

    Another option

    If you truly want data protection, whatever file system you choose, a nice trick is to just create a ZFS or BTRFS mirror and run the Unraid array alongside it separately.  That way you have one array that is self-healing and one that can take disks of different sizes.  For people who rely on the different-sized-disks feature but want self-healing, I think this is a good middle ground, because self-healing is usually best suited to photographs, documents and the like, which you want to last forever.  Large binary files that you could perhaps redownload, temp space and that sort of thing are well suited to the Unraid array, because essentially all the Unraid array will do is allow you to replace failed disks.  At least that's my understanding.
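
    To make that concrete, here is a minimal sketch of the self-healing half from the command line, assuming two spare disks at /dev/sdx and /dev/sdy and a made-up pool name 'safe' (on 6.12 you would normally create the pool through the GUI, so treat this as illustration only):

    zpool create safe mirror /dev/sdx /dev/sdy   # two-way mirror for the data you want to last forever
    zpool scrub safe                             # run periodically so silent corruption is found and repaired from the good copy
    zpool status safe                            # check the scrub result and overall pool health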

     

    Hope that helps.

     

  2. Well, when you first drop the plugin and update to Unraid with ZFS built in, nothing really changes.  You can still do zpool import -a and use the original names.  The issue comes when you use Unraid's GUI.  So what I did was import the pools, list out the disks with zpool list -v, take a screenshot of that, then export the pools.  Then you have all the info you need to import into the GUI.  The GUI gives you both the UUID and the /dev device name, so it's super easy to select.
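
    For reference, the rough sequence looked like this (the pool name is a placeholder; run it for each pool you're migrating):

    zpool import -a          # bring in every pool the old plugin was managing
    zpool list -v            # note each pool's layout and member devices (this is what I screenshotted)
    zpool export mypool      # export the pool so the Unraid GUI can take it over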

    • Thanks 1
  3. I have, and have always had, my ZFS pools under /mnt. With the built-in ZFS I still have them under /mnt and haven't noticed any problems. The only thing I haven't worked out yet is the znapzend plugin - currently not working, but I believe others have gotten it to function. Also, there are pool naming constraints currently. I had to do probably 1-2 hours of work to rename all my dockers, VMs etc. to use the lower-case names (not so bad), but more annoying are the restrictions on where numbers can be placed. So I now have hdd1pool instead of HDDPool1, which totally messed up my naming scheme. However, the Unraid GUI is a big improvement for matching UUIDs to /dev device names. A slight negative is that you have to stop the whole server to create the array. I find that quite disappointing, but I don't do it that much, so it's not too bad - just enough to be annoying. Got two more disks today, so here I go again: stop the whole server for something that technically doesn't need it.
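
    If you want to sanity-check the GUI's UUID/device matching from the command line, something like this works (using the renamed hdd1pool above as the example):

    ls -l /dev/disk/by-id/    # map the stable disk IDs to their current /dev/sdX names
    zpool status -P hdd1pool  # list the pool's members with full device paths for comparison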

    • Thanks 1
  4. I've run the ZFS plugin for years and have never once run dockers and VMs on the Unraid cache. The whole point for me was to get these off BTRFS so that I could get a reliable experience, which was successful. So I've never understood why people keep saying that, to be honest. Works for me anyway; perhaps I'm just lucky.

  5. Hi all, has anyone noticed a reduction in backup speed recently?  Due to the LastPass hack, I decided to roll my client-side keys and am in the process of doing a completely new backup.  There is a lot of data, but it really is slow.  For example, on one of my backups I have 3TB remaining and it is saying it will take 4.8 months to complete.  I have a 1G connection that in reality gives me around 700Mb/s upload.  I suspect they might throttle transfers that take over a certain amount of time or something.  Any thoughts?  Thanks.
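
    As a rough sanity check, assuming the ~700Mb/s upload were actually sustained: 3TB is about 24,000,000Mb, and 24,000,000Mb / 700Mb/s is roughly 34,000 seconds, i.e. about 9.5 hours. A 4.8-month estimate for the same 3TB works out to an effective rate of only around 2Mb/s, so something is limiting the transfer to a tiny fraction of the line speed.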

  6. From the links you posted I wouldn't say it's working as intended; rather, it's working the same as it is for everyone else, that is, there is a bug for Windows-based guests.  Which leads me to wonder about the comparisons to Proxmox, because it was said that this issue didn't happen there.  Given Proxmox also uses KVM, the only remaining options seem to be that they've implemented a workaround or we're mistaken about the difference.  It would be interesting to know if there is a workaround, because if so, that is something Unraid could implement as well.

  7. Hi all - well, I tried one pool, which worked technically, but what I assume is a constraint of the design means I have to roll it back.  Not sure if there's somewhere else I should feed this back to; I'll look in a moment.

     

    The specifics are that Unraid puts non-ZFS limitations on pool names, i.e. it requires names with no capital letters, no symbols and no numbers at the beginning or end.  So my SSDPool1 has to become something like ssdpool, which is annoying when I have a numbered series of pool names.

     

    That in itself wouldn't be too bad if I didn't have so much referencing the former mount point, which makes moving to the Unraid GUI version more of a mammoth task than a minor one.  For now, I will stick to zpool import -a.

     

    Hopefully this is something that will be addressed in the future so that it's easier to import from native ZFS or perhaps other systems such as TrueNAS.

     

    Also note it actually renames the ZFS pool as well (it's not just the mount point that changes), so you have to undo this by exporting and then re-importing with zpool import oldpoolname newpoolname, then zfs set mountpoint=<mountpoint> newpoolname.
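
    Concretely, using my pool as the example (the names and mount point are mine - substitute your own), the undo looks something like this:

    zpool export ssdpool                        # export the pool under the name Unraid gave it
    zpool import ssdpool SSDPool1               # re-import it under the original name
    zfs set mountpoint=/mnt/SSDPool1 SSDPool1   # put the mount point back where everything expects it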

     

    I was excited to do this, but not looking forward to this much work right now.

     

    I suspect I could actually just redirect the mountpoint and leave the changed name, but I'm unsure how Unraid would treat that, so I've left it for now.

  8. On 3/20/2023 at 10:34 PM, JorgeB said:

    Yep, that's it, it should now import pools with log, cache, dedup, special and/or spare vdev(s).

    Finally getting around to it now.  I recall someone saying I still need a USB stick to run a dummy array on, though, so I'll keep it like that for now - but if anyone knows otherwise I'm keen to hear about it!  Thanks.

  9. On 3/18/2023 at 8:37 AM, JorgeB said:

    You can import the pools using the GUI, but wait for rc2 if the pool has cache, log, dedup, special and/or spare vdev(s).  Then just create a new pool, assign all devices (including any special vdevs) and start the array.  What you currently cannot do is add/remove those special devices using the GUI, but you can always add them manually and re-import the pool.

    Thanks @JorgeB.  I note that rc2 is out today, but in the changelog I don't see mention of special vdevs etc.  There's something about non-root vdevs - is that it?  I guess I'm just asking so I don't waste my time importing pools that still can't be imported.  Thanks!

  10. 13 hours ago, ich777 said:

    I would recommend that you read the announcement post for Unraid 6.12.0-RC1 on how to import your pools into Unraid; the included ZFS version is also noted there.

    Thanks, I did actually, but you made me think I missed something, so I revisited.  What I was hoping to avoid was any fudged, not-quite-ZFS weirdness in Unraid, which the quoted text below seems to border on.  I'm not brave enough yet to go sticking ZFS into Unraid pool strangeness because of all the extra bits my pools have that Unraid currently doesn't support.  Because of this, the advice above was to 'import only', which I took to mean zpool import -a rather than the GUI - am I wrong about that?  I'm not confident I know enough about what Unraid is actually doing yet to perform this step.  Does the GUI import workaround method below work for the unsupported parts of ZFS like special vdevs?  Thankfully I'm not using encryption, though I nearly set it up this week because really I should be doing that these days.  Part of my problem might simply be that I've not used Unraid pools since before multiple pools were supported - so there will be a small learning curve there.

     

    I have about 30 drives in my ZFS setup.  I have dedup, a special vdev for storing metadata, a 6-disk mirror in there, and a couple of 8-disk raidz2 pools, so it's just a bit to unpick - better to be safe than sorry.

     

    Key parts from the announcement I think you're referring to are quoted below.

    Pools created with the 'steini84' plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).

     

    When creating a new ZFS pool you may choose "zfs - encrypted", which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

     

    During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. Future update will include ability to configure the ARC via webGUI, including auto-adjust according to memory pressure, e.g., VM start/stop.
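
    For what it's worth, that override file presumably just sets the standard OpenZFS module parameter - a minimal sketch, assuming you wanted to cap the ARC at 8GiB (8589934592 bytes):

    # config/modprobe.d/zfs.conf on the flash drive
    options zfs zfs_arc_max=8589934592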

     

     

  11. 4 hours ago, Jclendineng said:

    Does this work for anyone on 6.12? I got it to run once after setup but it doesn’t run automatically, I set the service to start at boot and have a from for every hour. 

    Thanks for pointing this out - no, mine isn't working either; the timestamps stopped at the time of the upgrade.  So you're saying if I go and start it, it will then just snap everything every hour?  That's weird.

  12. Done it. Updated Unraid then removed the ZFS plugin (that's probably better the other way around). Nothing was detected, so I did a zpool import -a, then I was able to stop and start Docker.

     

    One disk is apparently unavailable, but otherwise it's working as far as I can tell. I'll let it settle before trying anything further. For those wondering, the ZFS version is 2.1.9-1.
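
    For anyone in the same spot, chasing the unavailable disk is just the usual commands (the pool name is a placeholder):

    zpool status -v mypool         # identify which device is UNAVAIL and why
    zpool online mypool /dev/sdX   # try to bring the device back if it just dropped off
    zpool clear mypool             # clear the error counters once the pool is healthy again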
     

    • Like 1
  13. Ok thanks, so it sounds like when you say 'import only', the command line is still available to do whatever we want and the limitations are in some kind of GUI implementation?

     

    If so, I'll do it. I've been looking forward to this; hopefully it isn't too 'Unraided', like some weird Unraid array requirement. Seems not though. Thanks!

  14. Thanks @steini84, it is strangely exciting to be able to remove my USB drives from my setup. Just a note that you may not be out of the woods yet, because according to the release notes only certain ZFS features are supported. I am not sure how those are being limited, but I have more than one pool, cache and special vdevs, way more than 4 drives, dedup and probably other things. I would have thought ZFS was just ZFS, or maybe the limitations are just in a new GUI?  Hoping I can just upgrade and that's that. Slightly scared to do it! 😮

    • Like 1
  15. There used to be a beta track for this plugin, if I recall. There was also the automatic builder community kernel script where you could just point it at the ZFS code - not sure if that's still around. That thing was awesome, but I'm guessing the Unraid folks didn't like it much due to the perception that it's too hard to support.

  16. Well I do get 7 results, though I can't say it's alarming given the server has been up for 63 days.

    Screenshot 2022-12-30 at 9.39.39 AM.png

     

    I might also add that I do have some kind of hardware problem that requires me to clear the pool every now and then.  It's probably a disk or a controller, or heat related.  Not too frequent now anyway, but it may well have been 6 or 7 times in the three months.  That might lead me to think you have a similar issue if you have a lot of entries - though I could be completely off here, as I'm just guessing.  But it's likely worth googling why you might get this in your log and whether it can happen when ZFS is detecting errors.
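
    When it does happen, the clear I mentioned is nothing fancy - roughly this, with the pool name swapped for yours:

    zpool status -x      # shows only pools that currently have problems
    zpool clear mypool   # reset the error counters after a transient controller/heat glitch
    zpool scrub mypool   # then scrub to confirm the data is actually intact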

  17. Hey, well just to narrow it down a bit: I run ZFS, ZFS Companion and ZFS Master and have done so for a long time.  I also run znapzend and a few other things, but I have never seen it do this.  I assume from your post this is just printing live in the console?  Maybe take a look to see if something got into cron somehow?  Or User Scripts?

     

    Unless you've got unmounted ZFS disks that you do not want mounted, it's probably pretty safe.  However, one thing that occurs to me is to double-check this isn't a safety feature related to disks dropping out - I seriously doubt it, because I think that would show up more in the array health, i.e. zpool status -x shows nothing.

  18. I really dislike that error!  I get it too; it shouldn't be so persistent in the GUI and should be more informational, as it always comes back after a reboot, which is not really desirable.  BTW, I recall reading that expanders shouldn't be used in JBOD mode - I could be wrong, but perhaps this is the problem.  I've also had this error with an overheating SAS card.  Good luck!
