Everything posted by Marshalleq

  1. I also misread @yros - I thought he said there's absolutely no benefit, but he actually said there's no absolute benefit, which is quite different. Anyway, I look at the videos of FreeNAS and am a bit jealous of their GUI. There's a lot to like about how polished that is, and not just with ZFS. In particular, managing snapshots and backups and such through the GUI would be much easier, especially from a monitoring perspective. Even the pool creation and so on would be cool - and a display of what's active and its available size etc.
  2. LOL, of course there's benefit - not everyone that wants ZFS knows the command line; this is Unraid after all, not some enterprise geek OS. Even me, with nearly 30 years in IT spanning back to the command-line days of Novell server OSes, can admit that a GUI can be useful because you don't have to remember stuff. There may not be a benefit from a functionality point of view, but from a user perspective it has a lot of potential benefit. You may even find a lot of extra people start installing it if there were a GUI.
  3. This is one place I'd somewhat disagree. I can use it fine in the command line, but I think a GUI would also be fantastic. @steini84 did you mean to say we don't need to update for every build of ZFS, or that we don't need to update it for every build of Unraid? Thanks.
  4. Sounds like a cool setup. I'm not sure you can use Unraid's default array options with partitions though - generally it uses whole disks. It might be that you'd have to do something with mdadm or similar, or even better, perhaps ZFS can be configured to use them somehow. I'd say what would make more sense is to leave the 2x2TB disks out and set them up separately (a rough sketch of that is below), then you'd be able to do exactly what you're talking about and gain the remainder of the 6TB disks for your ZFS array. Generally when you ping @steini84 he'll update the ZFS build for you - I wasn't sure I wanted to bother with this one, though I was feeling guilty about not testing it (which I can't without ZFS updated), so perhaps it's a good idea. I really want the 5.0 series kernels and this is the pathway to those.
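     By "set them up separately" I mean something along these lines - a minimal sketch only, where the pool name and the device paths are placeholders, so check your own /dev/disk/by-id/ entries before running anything:

         # check the actual disk IDs first - these names below are examples only
         ls -l /dev/disk/by-id/
         # create a mirrored pool from the two 2TB disks
         zpool create -o ashift=12 tank2tb mirror /dev/disk/by-id/ata-DISK_A /dev/disk/by-id/ata-DISK_B
         # confirm the pool came up healthy
         zpool status tank2tb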
  5. @limetech I've just re-read your release plan, as I was confused by this release not being in the kernel 5 series. I now understand that has been flagged for 6.9 rather than 6.8.1. Indicatively, can you advise whether 6.9 is intended to be a more distant target, e.g. 1+ years, or something sooner? Thanks.
  6. That's a bit unusual, isn't it? Have you checked the logs? There are probably some clues in there. I would have thought almost nothing could stop snapshots except maybe faulty hardware or I/O issues. Maybe check the system log too. Sent from my iPhone using Tapatalk
  7. @ezra those don't look problematic to me - only the first few, which is normal, as those may not have had the new snapshot running. Also, after setting up a new snapshot, you do have to end the process; sometimes this doesn't work as expected either, and a reboot or something sets it right without you realising it. Could be that.
  8. Docker startup and shutdown dependencies (e.g. so the database is shut down second and started first), docker grouping and ZFS would be my votes. Sent from my iPhone using Tapatalk
  9. I've actually stopped doing this. The problem was multiple streams: playback seemed to occasionally and randomly fail mid-stream, frustrating the Mrs and causing issues for the other streams too, I assume due to lack of disk space. Sometimes I had to reboot the server to resolve it - I assume it wasn't cleaning up the files properly after it dumped me out of the stream. I was only using a 4GB RAM disk, so I could have made it bigger, but then I found out it was the reason my Live TV recording was failing, and I started wondering about things like thumbnail creation and other stuff. I do have quite a bit of RAM in this machine, but ultimately I thought the amount of RAM required to fix all of this was wasteful, so I turned it off nearly a week ago and put it back on my Enterprise SSD. Hopefully 1DWPD is enough. Work in progress! (For anyone who still wants to try it, a rough sketch of a RAM disk setup is below.)
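     A RAM disk for transcoding is typically just a tmpfs mount that the transcoder is pointed at - roughly something like this, where the path and size are examples only, not my exact setup:

         # create a 4GB RAM-backed scratch area for transcoding (example path and size)
         mkdir -p /tmp/transcode
         mount -t tmpfs -o size=4g tmpfs /tmp/transcode
         # then point the transcoder's temporary directory at /tmp/transcode in its settings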
  10. @eds that doesn't sound normal. I don't have to do that anyway.
  11. @Martyzzz Just a random educated guess - if you're running a separate database, check that the IP address of the database didn't change. That happened to me once, and with a new motherboard install and a new NIC it may be likely. @can0199 yes I have, but I have no idea if I will be much help as it was a while ago now; here's my config if that helps. I have a vague recollection I had to change a config file somewhere. OnlyOffice Document Server Nextcloud
  12. And don't forget you can get good performance through ZFS on Unraid. Who knows, you may be able to use mdadm to get standard RAID 5 too. Unraid has a nice feature set for home, Proxmox for enterprise.
  13. Out of interest, has anyone tried this? I'm interested to know if it broke GPU passthrough like some of the other newer BIOSes did. I suspect it's different on X399 though. Thanks.
  14. What I wasn't sure about is that it said you have to have Nextcloud stopped, which would mean these commands won't work, I assume.
  15. Has anyone here done the 17.02 upgrade yet? And if so, did you come across this? https://help.nextcloud.com/t/warning-regarding-need-bigint-after-17-01-to-17-02-upgrade/66531
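     In case it saves anyone a search: that warning is normally cleared by Nextcloud's own occ conversion command, run with the instance in maintenance mode. A rough sketch, assuming a Docker container named "nextcloud" running as the www-data user (both of those are assumptions - adjust for your own setup):

         # hypothetical container name and user - check yours first
         docker exec -u www-data nextcloud php occ maintenance:mode --on
         docker exec -u www-data nextcloud php occ db:convert-filecache-bigint
         docker exec -u www-data nextcloud php occ maintenance:mode --off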
  16. You don't have to have a standard Unraid array running, as far as I know. Anything that works on ZFS on Linux will work here - so yes, if you're after a striped array, that would work too. So to this end, ZFS does work for array drives - just not Unraid array drives. FreeNAS doesn't necessarily make sense because, as you say, the ecosystem of Unraid is better. FreeNAS having a far better GUI for ZFS is probably the biggest difference. But also, being BSD, you'll have to learn about their BSD jails as the docker equivalent (no native docker) and also bhyve for virtual machines, which it seems people complain about a lot. There can also be driver issues if you have anything but fairly standard hardware, because the Linux kernel has far more drivers than FreeBSD. For these reasons, if Unraid doesn't suit you, I'd actually consider Proxmox before FreeNAS. Proxmox supports ZFS and other standard filesystems and arrays (ZFS doesn't do RAID 5 reliably, for example) and has a more enterprise feature set, which is nice (e.g. proper VM backups, docker, and LXC, which is very cool - think a whole Ubuntu distro in 2MB). That's my 2c anyway. For me, I nearly moved away from Unraid too, but adding ZFS kept me going. Believe it or not, the last version kept killing my disks for some reason, so ZFS gave me the security, and the less important stuff is on the Unraid array. Hope that helps a little.
  17. I'm still not sure if it's needed in Unraid at all. I'm pretty sure it's all handled by the BIOS these days.
  18. Yeah, it's been updated to support all those versions. ZFS is a bit tricky when it comes to understanding disk space, which is where all my hesitation is. It has about 20GB free, but I suspect that's the issue. When you delete something on ZFS, it doesn't actually free the space right away because of the snapshots. I expect this is my issue. (The commands below are how I'd check where the space has gone.)
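     For anyone in the same boat, a quick sketch of how to see whether snapshots are holding the space - replace "poolname" with your actual pool:

         # the USEDSNAP column shows space held only by snapshots
         zfs list -o space -r poolname
         # list the snapshots themselves, biggest consumers last
         zfs list -t snapshot -o name,used -s used -r poolname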
  19. Sorry, I posted above and hadn't realised there were other posts. Yeah, it's on ZFS. I got fed up with BTRFS so changed it out; it has never been an issue until now. Anyway, I'm pretty sure it's something to do with that device now that I've had some time. I'll probably go post on the ZFS plugin page now if I need help. Thanks everyone.
  20. Well, it's not RC9, as it appears to also be happening on RC7 (I managed to downgrade). So to that end, I expect this is a localised system issue. What I can't figure out is why, when I have a perfectly good filesystem on my SSD (checked), the BTRFS docker.img file gets errors and the system remounts it read-only. There must be something obvious I'm not understanding.
  21. Anyone else getting this? I've run a full scrub on the underlying disk and have even deleted the docker.img file and started from scratch. I was able to install one docker image successfully, then the next one came up with a read-only file system. This seems to be what happens after a while, each time I reset. It 'feels' like it's specific to RC9, but I was only on RC8 for a little while; RC7 was working fine for me. I have attached logs because I suspect someone else is going to know much more than me on this one. obi-wan-diagnostics-20191209-0927.zip
  22. Basically, I'm sure there must be a place to download these old versions and install them manually, but I haven't found it yet. I'm having quite a few issues in RC9, and I don't really want to run the old kernel anyway - I only did it because I thought it would be more stable, but for me it's not. I've had my first ever Unraid server crash on it actually, and it's messed up all my dockers, causing quite a lot of havoc. I think RC8 might have been OK, but I was only on it for a day or so. Thanks.
  23. I think it's prudent to add: ZFS does not publish shares itself; it just adds configuration so that Unraid can publish the shares via its own SMB implementation. Also, I don't recall having to do anything to publish my shares at boot - I've seen a few people say that they need to do something and have always been confused by that. But since I did shift some existing shares from the Unraid array to ZFS, I did have to first remove the shares from the Unraid config to get them to work, and also had to make sure the file permissions were right on the ZFS files. Other than that, they do seem to work automatically. (A rough example of the kind of SMB configuration I mean is below.)
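     For anyone wondering what that configuration looks like in practice, a rough sketch of a Samba share definition of the kind that can be added to Unraid's extra SMB configuration - the share name, dataset path and user are placeholders, and where exactly you add it depends on your setup:

         [mydata]
             path = /mnt/tank/mydata
             browseable = yes
             writeable = yes
             valid users = myuser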
  24. I thought this was big enough to post here to create awareness. I don't expect anyone here can do anything about it though as I assume it needs to be resolved at the network / kernel level. https://www.zdnet.com/article/new-vulnerability-lets-attackers-sniff-or-hijack-vpn-connections/?ftag=TRE-03-10aaa6b&bhid=10041925