Everything posted by Marshalleq

  1. Regarding BTRFS, I would highly recommend not using it. You will find there are two opinions on this, but if you search around the forums you will see quite a number of cases like yours, where BTRFS errors suddenly appear and require a reformat to resolve. With that kind of risk hanging over your file system, the benefits of BTRFS are kinda pointless IMO. Right now, the best bet is to use XFS for everything, which means you can't run a mirror as a cache, but IMO a BTRFS failure is far more likely than a disk failure.
  2. I actually forgot all about this - have been away for work. My last post when it was working was January 7. I see it's been working well until January 18, upon which it stopped again (that's 7 days ago). @steini84 Is there some way to turn on extra logging to a file? Thanks.
  3. Yes, I don't know what the cause is, but a few days ago I changed to q35-5.0 and that seemed to fix it. Before that, it didn't lock up but it was painfully slow - which looked like a lockup if you weren't prepared to wait 20 minutes for your VM to boot. So possibly you could drop to 5.0 as well. Edit - 5.0 still had slowness issues, just less than 5.1 - trying 4.2 again. Looking through the changes here, there's not a lot that's changed - so it shouldn't be too hard to pin down.
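     If anyone wants to try the same downgrade from the command line rather than the VM edit form, the machine type lives in the VM's libvirt XML. A minimal sketch, assuming a VM named Windows10 (the name is just an example):

         virsh edit Windows10
         # then change the machine attribute in the <os> block from e.g.
         #   <type arch='x86_64' machine='pc-q35-5.1'>hvm</type>
         # to
         #   <type arch='x86_64' machine='pc-q35-4.2'>hvm</type>
         # and restart the VM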
  4. @Joly0 you don't happen to be running xattr=sa do you? Or any other changes from the norm? I was reading the issues list and there are a few BSD related things - just trying to think of any similarities. @steini84 on the thread above, they've suggested we do a staggered walk through the commits to see where in the code history this happened between 2.0.0 and 2.0.1. I might just give that a go - might need to confirm some things with you first.
  5. This may be our issue: https://github.com/openzfs/zfs/issues/11476
  6. Yeah, the zfs array unmounts separately. I guess the same problem that stops the unraid array unmounting properly also means things are still actively running on the ZFS array. The only other thing I could think of would be to reboot in maintenance mode, or to set the unraid array not to start at boot, but I'm not sure that will give @steini84 the information he wants. I am definitely not running that particular docker BTW. It could be an 'other' common docker component, I suppose. At least now there are two of us with the issue - maybe someone else will get it too and we can start to narrow it down. I might just have time to try running those commands tonight if I'm lucky.
  7. Yes if you unmount the unraid array, then it shouldn't give you that error. The unmounting of the unraid array would stop all the things that are preventing the zfs array from running that command. Out of interest, which docker? Perhaps I'm running the same one and it's related to that. Thanks.
  8. Oh right. I think first you should stop the unraid array (go to the Main screen of the unraid GUI, scroll down and stop it there). This will stop docker, VMs, shares and the like, so that nothing is accessing ZFS - which should then allow the zpool command to run. Then run the command in the console - zpool export -a - which I think will unmount and export the pools as part of the command. I've never run this command before, but in case you can't get your array back, the command to undo it would be the opposite, e.g. zpool import -a. Yeah - docker seems more impacted. There were other screens that were an issue too. Docker was running in the background if I recall - but I do have 128GB RAM in my system, which might have been hiding that particular issue.
  9. @Joly0 can you run the commands requested by @steini84 above? I'm not going to be able to get to it for around a week. That is: stop the array, open up a terminal and type 'zpool export -a', wait for it to finish, then click reboot and report back (the sequence is sketched below). Probably good to run and post diagnostics too. Thanks.
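     For reference, a minimal sketch of that sequence from the console (nothing here is pool-specific):

         # with the Unraid array already stopped, unmount and export every ZFS pool
         zpool export -a

         # after the reboot, if the plugin hasn't re-imported the pools automatically
         zpool import -a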
  10. I've just built 6.9 RC2 with zfs support. However, I don't seem to be able to do GPU passthrough with my NVIDIA 1070 Ti (EVGA). It either complains with the error you get if you have an AMD card, or it boots to a black screen. This was working in 6.9 RC2 with the zfs plugin, but not now with this compiled version. I'm not 100% sure it's the compiled version either - every now and then I get issues which I suspect are resolved by moving the card out of slot 1. Going to try that now anyway. EDIT: Moved my card out of the primary slot and it seems to be working now. What isn't working on ZFS 2.0.1 is rebooting. I'll see how it goes, but I suspect I'll roll back to 2.0.0.
  11. @steini84 Just an update that today I've compiled ZFS version 2.0.1 on 6.9 RC2 using the unraid kernel helper. That seems to be working perfectly so far, which is good, as it points to some other, fixable issue. I'd be interested to know if anyone else has any issues with the 2.0.1 plugin. EDIT: It actually looks like at least one thing is still happening - I can't reboot without hard rebooting by holding the power button down. That, to me, confirms there is some issue with 2.0.1, or some feature it introduces that's incompatible with something in unraid. I'll keep testing. EDIT2: Nope, my docker tab is now also not loading. So right now I definitely don't recommend 2.0.1, given this happens on two machines and two separate builds of ZFS.
  12. So for my main box I got around it by using the community kernel direct download here, in case anyone else gets stuck until this plugin is fixed (or whatever is happening). Note the direct download is 6.8.3 though, with ZFS 2.0.0.
  13. Another curiosity: along with zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz going missing, the zfs plugin gets uninstalled - both at reboot.
  14. @steini84 There's something wrong with 2.0.1. I removed the unassigned devices plugin altogether and hard rebooted (because I have no choice - saved by CoW). Still the same issue, so then did:

         cd /boot/config/plugins/unRAID6-ZFS/packages/
         rm *
         wget https://github.com/Steini1984/unRAID6-ZFS/blob/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz.md5
         wget https://github.com/Steini1984/unRAID6-ZFS/blob/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz

     Hard rebooted again (because no choice) and the server now reboots correctly. However, I must have done something wrong, as ZFS is not working. Looking in the packages directory, only the md5 file remains and the zfs tgz is missing. I tried the whole procedure again and the file is missing again, and the zfs command doesn't work. It's like the tgz file is just being deleted.
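     One possible culprit (a guess, not confirmed in the thread): those are github.com blob URLs, which return the HTML viewer page rather than the file itself, so the plugin's md5 check would fail against whatever was saved and the mismatched package could be discarded. A sketch of the same download via the raw URLs, assuming the files exist at these paths:

         # raw.githubusercontent.com serves the actual file, not the HTML wrapper
         wget https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz.md5
         wget https://raw.githubusercontent.com/Steini1984/unRAID6-ZFS/master/packages/zfs-2.0.0-unRAID-6.9.0-rc2.x86_64.tgz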
  15. There were two plugin updates at the same time: unassigned devices and nerd pack. I assume it'll be some issue with unassigned devices rather than ZFS, but I am still investigating. After the upgrade, the docker tab loads indefinitely and the main tab is either slow or only partially loads. It also won't reboot via GUI or command line reboot requests, which is the most concerning thing. This is happening on both systems I just updated with ZFS and the two plugins.
  16. I got:

         zfs-2.0.1-1
         zfs-kmod-2.0.1-1

     However, for reasons unknown, on both my machines the GUI is now mostly unresponsive. Has anyone installed it and got it working?
  17. @Joly0 Can you point at the specific fix from here? I run lancache too, but I haven't noticed (nor tested for) any issues such as you describe, because I don't understand how a single docker container could create the issue - it sounds like you're saying ZFS / RC2 prevented writes, but we'd be seeing that across the board. Either way, there are some good performance fixes in ZFS 2.0.1. Regarding updates, I'm not sure if it needs to be manually compiled first by @steini84, but either way it's supposed to apply when you reboot.
  18. What part of RAISE would you think replaces trim? I can't see anything even slightly related in there.
  19. Not sure if you're running ZFS or not - but if not, I think spin down will work, assuming you're not hit by one of those SAS spin down issues. If you are running ZFS, I do believe it will largely keep your disks up, relying on their internal idle mechanisms for power / heat reduction.
  20. Did some testing and znapzend is definitely starting at boot, so something is stopping it at some later point - or perhaps it's only intermittently starting at boot on both my servers. So it looks like logs will be key (a sketch of how to capture them is below).
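     To capture what happens between boot and the stall, znapzend can be relaunched with debug output sent to a file - a minimal sketch, assuming the binary is on the PATH (the log path is just an example):

         # stop any running instance, then relaunch with logging enabled
         pkill znapzend
         znapzend --daemonize --debug --logto=/var/log/znapzend.log

         # watch for snapshot activity and errors
         tail -f /var/log/znapzend.log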
  21. So just checked in on it today and it's stopped again. Comparing the last snapshot to the last reboot I can see that they are indeed correlated. So this is frustrating. @steini84 I have attached diagnostics which should include any startup errors and when it did eventually start manually (just now). Will have a look through them again too and post back. Perhaps the auto start 'touch' method outlined at the beginning of this post no longer works? Of note, this is happening on both my unraid systems. skywalker-diagnostics-20210106-1501.zip
  22. I would always install trim on all SSDs. I think it's highly worthwhile and I have never had an issue. I followed your link, but it didn't make immediate sense to me what you mean. Just install trim, I reckon.
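     On ZFS specifically, trim is handled at the pool level rather than by a mount option or a cron'd fstrim - a minimal sketch, assuming a pool named tank (the name is just an example):

         # one-off trim of all eligible vdevs in the pool
         zpool trim tank

         # or have the pool trim continuously as blocks are freed
         zpool set autotrim=on tank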
  23. Hi, as far as I know, yes - provided of course you have the unraid zfs plugin installed. I assume you won't have to upgrade the versions of zfs on the disks first; I would certainly try not to, in case you need to go back to ZFS on freenas. The main thing is that you've got to import your pools; I'm pretty sure the plugin will do that automatically, but if not, it's something like the sketch below.
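     A minimal sketch of a manual import, assuming a pool the FreeNAS box called tank (the name is just an example):

         # list pools visible to this system without importing them
         zpool import

         # import everything found, or one named pool
         zpool import -a
         zpool import tank

         # -f forces the import if the pool wasn't cleanly exported on the old box
         zpool import -f tank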
  24. As per this thread, I would need a ZFS driver added to the docker system. Otherwise, I cannot use docker in directory mode on my ZFS pool. Placing this here so that hopefully this can be added in the near future. Thanks.
  25. Thanks Squid. I half thought I read that, but got lost wondering why it even needed to know its underlying filesystem (and therefore need a driver), so I thought I must have misunderstood. Do you know why it needs a driver for an underlying filesystem? I'll do a feature request; if Unraid actually does end up including ZFS, it won't hurt to have it done already.
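     For context on the 'why': Docker implements image layers with filesystem-specific features (overlay2 on ext4/XFS, and a dedicated zfs driver that builds layers from ZFS snapshots and clones), which is why directory mode has to know what it's sitting on. On a stock Docker install the driver is selected like this - a minimal sketch, not necessarily how Unraid wires it up:

         # /etc/docker/daemon.json - requires /var/lib/docker to live on a ZFS dataset
         #   { "storage-driver": "zfs" }
         # then restart the docker daemon and verify with:
         docker info | grep 'Storage Driver'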