
Marshalleq

Members
  • Posts

    968
  • Joined

  • Last visited

Everything posted by Marshalleq

  1. Thanks for the reply and link. I'm just setting it up under a folder on a ZFS drive via its path, i.e. /mnt/mountpoint/docker/docker/. When I point it at a non-ZFS location, i.e. /mnt/user/docker/docker/, it works, but if I change it to the ZFS one I get the error. So based on the error in the log, I assume it's detecting ZFS somehow; however, I didn't check what the log says on XFS, so perhaps it's the same and the cause is something else. Either way, I strongly suspect it's some Unraid config issue or bug.
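A quick sanity check for anyone hitting the same thing - the paths are the ones from my setup above, and docker info only answers if the daemon actually managed to start:
      df -T /mnt/mountpoint/docker/docker                  # filesystem type actually backing the folder
      zfs list -o name,mountpoint | grep -i docker         # which dataset (if any) the path sits on
      docker info 2>/dev/null | grep -i 'storage driver'   # which graph driver the daemon settled on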
  2. Hi, for years I've begrudgingly run a btrfs docker image, which was hosted on an SSD with underlying XFS or ZFS with no problem. For a while there has been the option of storing this in a folder instead, which I'd prefer for various reasons. However, it seems that Docker is deciding it won't store files on the filesystem, I assume because it's currently formatted ZFS and Docker is looking for XFS or BTRFS underneath. I don't understand why it would care what filesystem is underneath, given that it's no longer an image. The below is from the docker.log file:
failed to start daemon: error initializing graphdriver: prerequisites for driver not satisfied (wrong filesystem?)
Anyone know why? I'd like to log a ticket, but thought I'd ask here first. Thanks.
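For what it's worth, upstream Docker only uses ZFS through its dedicated zfs graph driver; on kernels of that era the default overlay2 driver would refuse to sit on a ZFS dataset, which is the kind of thing the "wrong filesystem?" message points at. On a stock Docker install you could pin the driver roughly as below, but Unraid generates its own daemon settings, so treat this as a sketch of what would need to happen rather than a supported setting:
      # sketch only - Unraid normally manages the docker daemon config itself
      cat <<'EOF' > /etc/docker/daemon.json
      {
        "storage-driver": "zfs",
        "data-root": "/mnt/mountpoint/docker/docker"
      }
      EOF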
  3. I may have found the problem: it appears the upgrade to RC2 or RC1 renamed the disks and left the USB boot drive with the name of a pre-existing raid array disk, i.e. the USB drive is now named disk1. Unfortunately, Unraid therefore decided to put a backup share onto the USB drive, and periodically an application would try to send data there, filling up the USB drive. Why it was empty when I looked I don't know, so there's some doubt in my mind, but this looks like the likely culprit. Edit - Nope, this is incorrect. I was confusing two USB devices, one that is now a dummy array and another that is the boot device. Geez, sometimes you just need more coffee. But for some reason znapzend has now started working again for the first time, and I don't know why that is yet.
  4. @steini84 Had a quick look by running it manually. I'm getting an out-of-space error, but actually there's a ton of space on the device, 47G in fact - see below. I'll try a few things to see if I can start it. Also, the log has no details since the last successful backup, which was 21 December.
znapzend --logto=/var/log/znapzend.log --daemonize
Warning: unable to close filehandle GEN0 properly: No space left on device at /opt/znapzend-0.20.0/lib/ZnapZend.pm line 827.
znapzend (5197) is running in the background now.
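In case it helps anyone else: on Unraid /var/log is a small RAM-backed tmpfs, so znapzend's "No space left on device" can be about the log location rather than the pool. A quick check along these lines (the truncate is just one example of freeing space):
      df -h /var/log                                    # how full is the log filesystem?
      du -sh /var/log/* 2>/dev/null | sort -h | tail    # what is eating it?
      : > /var/log/znapzend.log                         # example: empty the znapzend log to free space
      znapzend --logto=/var/log/znapzend.log --daemonize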
  5. Mine's been running a lot longer since a) moving to virtio-net, b) updating to the RC series with the newer kernel and c) probably most specifically, removing a rather taxing 24x7 compression routine I run (which I will turn back on at some point). I actually haven't had a crash yet; still monitoring. I also notice, for the first time EVER for me on Unraid, the VM gets to the TianoCore boot screen in under 5 seconds. Previously, that only ever happened for the first run after a reboot; then it would be e.g. 1 minute or even longer before it would get there. I still suspect something about Threadripper has been causing this for me, and I doubt it's gone, just reduced.
  6. I've got the auto-boot file 'touched' into the appropriate directory, but it's not auto-booting. Could be an RC1 thing, I guess? Anyone else got this issue?
  7. I get the feeling this drive power down isn't as much of an issue now. 75% less consumption with the newer drives.
  8. True and yeah you have really brought the whole idea into reality and we are all very grateful for that
  9. Hey! You know, I wasn't even looking at who I was replying to - didn't realise it was you lol. So you're saying you don't want to have to go through all the dockers and repoint their host paths to a new location? It takes a little while but wasn't too bad. This article was meant to be a sort of 'not saying it's for everyone, but hey, I went all ZFS and this is what I found'. The number one thing I notice is that all the sluggish stuff is gone. I think we sometimes don't realise the impact of all those drive spin-downs and whatnot. I'm also quite happy that if I ever get the hump with it, I can move it to TrueNAS Scale or Proxmox. The only real disadvantages are the lack of drive power-downs and the flexibility of drive expansion. Thanks for the RC2 update.
  10. Well according to RC2's release notes I was wrong about SAS and domains. But 1.5 of your issues are sorted in RC2 by the looks.
  11. Even when I did run Unraid's array, I always ran VMs and dockers on ZFS without difficulty - just point them at the path and set the default path in the settings. Making shares is quite different though: you have to use the Samba extra config in settings, but that is one of the simplest configs in the Linux world IMO. Why do you want them in /mnt/user? I've never ever put them in there, even pre-ZFS. Just copy them to ZFS and point everything there. The main thing right now is that there are certain (less used) functions that require the array to be stopped in order to change. That DOES require VMs and dockers to be stopped, unfortunately, though ZFS continues to run I think. But because the array in my case is a dummy array on a USB stick, it is much, much faster to do (like 5 seconds or so). This is actually part of what I was writing about: the sluggish problems of Unraid disappear. Yeah, that sounds good.
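For reference, the Samba extra configuration (Settings → SMB) is just ordinary smb.conf stanzas, so a share on a ZFS dataset needs nothing more than something like this - the dataset path and user are placeholders, not my actual layout:
      # example share on a ZFS dataset - names are placeholders
      [media]
          path = /mnt/tank/media
          browseable = yes
          writeable = yes
          valid users = someuser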
  12. I guess it was a long article and I may not have been the clearest, but I didn't set out to have all my storage in the fast zone; it was a punt that has worked out much better than I imagined. And the main point is that the sluggish parts of Unraid are no longer sluggish. I'm not sure what you mean by the second point, but ZFS does continue to work without the array, and I don't need to stop the Unraid array since it's just running on a USB stick with nothing on it. So VMs and dockers run fine. They all run off ZFS though.
  13. Yeah, I used to run it that way, but right now the performance aspects of using ZFS will keep me from going back there for a while. I'd really forgotten what proper storage was meant to perform like. I mean, I know Unraid has its place and its idea is really well tailored to a certain market, but it really does have some performance-related challenges that you sort of learn to live with after a while. I just hope that there will actually be a way to run it without starting the Unraid array. I don't want ZFS to just be a supported option for a single disk inside the Unraid array; that'd suck. Hopefully they will allow us to set up in either fashion and not restrict us to the original Unraid array.
  14. Hey, anyone else experiencing non-responsive user scripts in RC1 of 6.9? I'm finding I can't edit one at all that I've just created, and can't delete it either. On top of that I can't schedule any of the default ones. Haven't tried rebooting, but that seems a bit extreme. <<Edit>> Actually I think the script I added below was causing it (deleted manually at the console), but oddly I didn't even get to run the script - just using the name below was enough, so maybe the full stops or underscores? Seems unlikely. Either way, just renaming it to something basic got me going.
  15. LOL, I know, right?! This is what amazes me sometimes with the defenders of BTRFS - the basics 'seem' to be overlooked. That said, I've never been sure if it's BTRFS or something wonky with the way Unraid formats the mirror. I know in the past that there definitely WAS a problem with the way Unraid formatted it (I got shot down for suggesting that too), but in the end that was fixed by the wonderful guys at LimeTech. In that prior example, I found that I could fix the issue at the command line but not in the GUI, yet I still had issues with BTRFS getting corrupted, so I ended up giving up. Full disclaimer: I'm pretty light on BTRFS experience - however, having used probably more filesystems than most, in both large production and small residential instances, I think I'm qualified enough to say it's unreliable, at least in Unraid (the only place I've used it). That's an impressive stat. I thought I read that write amplification was solved some time ago in Unraid, but perhaps it's come back (or perhaps a reformat is needed with accurate cluster sizes or something). BTW, I'm also sure I read in a previous beta's release notes that the spin-down issue was solved for SAS drives, which logically should also mean with a controller of some sort. You may be interested in my post this morning on my experience migrating away from Unraid's array here (though still using the Unraid product). Not for the inexperienced, but I'm definitely looking forward to ZFS being baked in.
  16. Hey, I just wanted to share my experiences with ZFS on Unraid now that I've migrated all my data (I don't use the Unraid array at all now). The big takeaway is that the performance improvements and simplicity are amazing - and I'm not just talking about Unraid's Achilles heel, throughput. More details below.
Why
Due to ongoing stability issues I couldn't track down, I ended up buying an IBM P700 with twin Xeons and 96GB RAM. This, I thought, would cover my production software such as Nextcloud, Plex, WordPress and so on (the things that have a customer-facing service). The big challenge was that the disks remained on the other box, so stability issues on that would impact my production instance. The P700 only has 4 official 3.5" disk slots, plus some 5.25" bays I could use to increase that if desired (I didn't).
Solution
My solution was to sell the multitude of 8TB disks I have and buy (through a deal on an auction site) some 6-month-old 16TB EXOS disks. This gives 48TB usable, which is more than enough, with 4.5 years remaining on warranty. Deciding whether to make the array ZFS or Unraid meant weighing up the loss of individual disks powering down, mixed-size disks in an array and the easier expansion (compared to ZFS).
Benefits
It was a big decision in a way, but now I'm reaping the somewhat unexpected benefits of improved performance, a production box with storage, and a play box which does GPU passthrough, back-end automation and such, and can be rebooted without issue. One of the most surprising benefits was the increased speed of Plex library scanning. This was not something I was expecting, nor thought was possible at all. On the Unraid array it would take a significant amount of time to complete a manual scan of the library. On ZFS, the scan is under 5 seconds! I can only guess this is some clever in-memory directory caching within ZFS. I must go and read up to find out about that. Things I've noticed so far include:
- Seriously fast directory scanning, e.g. Plex
- No spin-up delay
- Faster throughput
- General system responsiveness improvement
- Only four drives, which being more modern don't use much (if any) more power despite being spun up more
Other optional benefits are obvious (one-liner examples at the end of this post):
- Variable cluster sizes for optimised storage / speed (e.g. the media dataset can have 1MB while the documents dataset can have 8K)
- NVMe read caching (L2ARC) enables VMs to be run performantly from HDD rather than SSD if desired
- Can migrate away from Unraid to something else if desired and keep my disk pool as is, e.g. Proxmox / FreeNAS / TrueNAS Scale
- Throughput
- Snapshots
- Compression
- Encryption
- Send / receive as a backup option
- Super reliable CoW filesystem, unlike BTRFS in my opinion
- Best-in-class data integrity; Unraid's array can't even come close to that
Downsides
- Upgrade options are more limited: I can either add an extra mirror, or upgrade all 4 drives to larger ones when that is eventually needed
- With a lot of drives, the power bill may be higher due to the lack of individual drive power-down, but newer helium-filled drives reduce this, and given I've gone from 11 drives to 4 it's unlikely to be an issue
Something cool that happened: while migrating data, one of the drives got unplugged accidentally (dodgy cable). When I noticed, I just shut down the system, rebooted, and it automatically resilvered the drive back to a known good state in 3 minutes. If this had been Unraid (or any other array) it would have had to write the whole 16TB again. You gotta love ZFS. Anyway, those are my thoughts so far.
Oh yes, to get around the requirement that Unraid won't start Docker or VMs without an Unraid array started, I just pointed it at a cheap USB drive and put the array on that. It works well and I've been doing that for a few months now with no issues. My hope is this requirement will change in the next version of Unraid. Marshalleq
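To make the optional benefits above a bit more concrete, these are the kind of one-liners involved - the pool and dataset names here are examples, not my actual layout:
      zfs set recordsize=1M tank/media          # large records for big sequential media files
      zfs set recordsize=8K tank/documents      # small records for lots of small files
      zpool add tank cache /dev/nvme0n1p1       # add an NVMe partition as an L2ARC read cache
      zfs snapshot tank/documents@2020-12-24    # point-in-time snapshot
      zfs send tank/documents@2020-12-24 | ssh backupbox zfs receive -F backup/documents   # send/receive backup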
  17. There have been arguments on both sides of the fence (btrfs is great, btrfs is not great). My experience has been the latter, and I would recommend not using it in your cache: switch to XFS and forget the mirror. With a BTRFS mirror I did not have a reliable experience, and nor have others. BTRFS 'should' be able to cope with a disconnect, even if it had to be fixed manually; however, if both devices are getting constant hardware disconnects, then I guess that's going to be pretty challenging for any filesystem. From my quick reading this is a kernel issue though, not a hardware issue, so if it were me and I weren't on the latest beta, I'd probably shift to that, given RC1 has the much newer kernel. Running SSDs via an LSI card will reduce their speed as well, BTW. Another thing I'd do is raise it directly on the kernel mailing list (it's probably already there if it's still a current issue), because that fix would eventually find its way into Unraid. Hopefully you're not already on the latest beta / kernel, as that alone might possibly solve it.
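If you do want to interrogate the existing BTRFS mirror before deciding, the error counters and a scrub are the usual first look - /mnt/cache being Unraid's usual cache mount point:
      btrfs device stats /mnt/cache     # per-device read/write/corruption error counters
      btrfs scrub start -B /mnt/cache   # -B stays in the foreground and prints a summary at the end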
  18. I'm fairly sure I recall @limetech stating previously that domain related features are not functioning and it's something they need to get back to.
  19. I run a Threadripper 1950X with an Asus X399 Prime-A motherboard. Originally I didn't have to disable C-states and enable 'power supply typical current idle'; I did have to do that on my previous Ryzen 1700X system. Anyway, recent crashes made me revisit that. The other thing I did was adjust my VM to use the new NIC settings, which from previous testing seem to be much slower but more stable - I should check if that's still the case or not. But I was getting my logs filling up due to having virtio set instead of virtio-net, and I assume having a full drive due to logs isn't great for stability either - maybe it's partitioned though, I haven't checked. Mostly the issues do seem to come when I'm gaming in a Windows 10 VM; other VMs seem OK. So the main difference I can think of is GPU passthrough. Also, mine doesn't always crash per se, but dmesg shows similar kernel messages to those posted at the beginning, with kernel traces etc. (wish I knew how to read those). My memory has also undergone extensive testing. Is any of this common to anyone else here with system crashes?
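For anyone wanting to make the same NIC change by hand, my understanding is it's just the model line in the VM's XML - the bridge name here is an example:
      <interface type='bridge'>
        <source bridge='br0'/>
        <model type='virtio-net'/>  <!-- previously type='virtio' -->
      </interface>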
  20. I've just seen some strange behaviour in this space due to a ghost NIC in my network.cfg. If you've had multiple NICs it might be worth checking that.
  21. I have fairly consistently had problems with the networking GUI in Unraid. I wrote a rather frustrated post about it some years back, but got shot down and didn't come back to it. But post upgrade to 6.9 beta 1, I had a related issue, though in this case I can see why it might have happened. As per this post today, my 10G NIC is not recognised in 6.9 beta 1. Fix Common Problems started alerting me about not being able to communicate with GitHub. However, other things 'seemed' to be working OK, so I figured it was a Fix Common Problems issue. Upon later investigation (much of the system wasn't working, e.g. docker versions were 'not available' in the GUI, and there were other odd tell-tale signs like extremely slow array shutdown), I found docker.cfg still had the missing NIC, called br4, which was not in the GUI but which the system was still attempting to route traffic through. Removing the more obvious routes via the GUI did not help, until eventually I edited network.cfg directly and took the extra card out. But to round out the post a bit, what has been challenging with the GUI in the past is things like:
1 - Unraid adding a second gateway when the NIC is on the same subnet (IMO there should only be one gateway per subnet), and even if two gateways are acceptable, e.g. via prioritisation:
2 - Why was Unraid sending all traffic out the non-primary card?
3 - Changing interface rules so that the primary card is first does not seem to take effect properly, i.e. the interface descriptions and I think even the IP addressing don't move, and the whole thing gets confusing.
So, not to go on about that too much, but it does make it hard to figure out exactly what is happening so I can post intelligently about it. Now that I have two Unraid licences (one a sort of half dev box), perhaps I can add some value here. Anyway, the fix for me was below: I removed everything from IFNAME[1]="br4" onwards, except for the last line, which I assumed needed to become SYSNICS="1". It would be nice if Unraid could detect that a card had been removed and update network.cfg as a result; given the challenges mentioned above, I expect those are what make that difficult to achieve. The Unraid networking GUI is a mostly functional thing that meets most of the needs of its customer base, but it would be great to see an item on the roadmap for an upgrade to something a little more modern in terms of its interface and how it interacts with the system. Thanks for reading.
# Generated settings:
IFNAME[0]="br0"
DHCP_KEEPRESOLV="yes"
DNS_SERVER1="192.168.43.3"
DHCP6_KEEPRESOLV="no"
BRNAME[0]="br0"
BRNICS[0]="eth0"
BRSTP[0]="no"
BRFD[0]="0"
DESCRIPTION[0]="Onboard"
PROTOCOL[0]="ipv4"
USE_DHCP[0]="no"
IPADDR[0]="192.168.43.10"
NETMASK[0]="255.255.255.0"
GATEWAY[0]="192.168.43.3"
METRIC[0]="1"
USE_DHCP6[0]="yes"
MTU[0]="1454"
VLANS[0]="1"
IFNAME[1]="br4"
BRNAME[1]="br4"
BRSTP[1]="no"
BRFD[1]="0"
DESCRIPTION[1]="QNAP_10G"
BRNICS[1]="eth4"
PROTOCOL[1]="ipv4"
USE_DHCP[1]="no"
IPADDR[1]="192.168.43.12"
NETMASK[1]="255.255.255.0"
GATEWAY[1]="192.168.43.3"
METRIC[1]="2"
MTU[1]="1454"
VLANS[1]="1"
SYSNICS="2"
obi-wan-diagnostics-20201219-0920.zip
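For completeness, this is roughly how /boot/config/network.cfg ended after the manual edit described above - the br0 block stays untouched, the br4 block comes out, and the file then simply finishes with:
      USE_DHCP6[0]="yes"
      MTU[0]="1454"
      VLANS[0]="1"
      SYSNICS="1"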
  22. Hi, this is a new 10G card I got on a whim, and I was pleased to see it was supported and working well. However, I upgraded the other day and now it is not detected (it doesn't even show up in the kernel as far as I can see via dmesg / lspci etc.). It's the single-port version listed here: https://www.qnap.com/en/product/nic-marvell-aqtion (QXG-10G1T) Also the official drivers are here: https://www.marvell.com/support/downloads.html I assume this driver just got forgotten in the newer builds with the latest kernel? Thanks for any help you can provide. obi-wan-diagnostics-20201219-0920.zip
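In case it's useful for whoever picks this up: as far as I know the QXG-10G1T is an Aquantia/Marvell AQC107 card handled by the in-kernel "atlantic" driver, so on a working build something should show up for each of these (1d6a being Aquantia's PCI vendor ID):
      lspci -nn | grep -Ei 'aquantia|1d6a'    # is the card enumerated on the PCI bus at all?
      lsmod | grep atlantic                   # is the atlantic module loaded?
      dmesg | grep -Ei 'atlantic|aquantia'    # any driver probe messages?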
  23. Sounds like for you the B series is OK. Have a google on B450 vs X570 if you want to know more about the differences, e.g. https://www.msi.com/blog/msi-b450-b550-x570-comparison If you buy your 16GB and make sure it only occupies two slots, you can upgrade later anyway. For a single direct-play Plex stream this whole thing would be overkill - you could do that on a Raspberry Pi (just to prove the point).
  24. Depends what you want to do with it. For a straight file server with a few dockers that'll be enough, but add VMs and you'll probably want more cores and more memory. If you want a file server and a game machine, you might be able to get away with just adding more memory. I've also heard to stay away from the 'B' motherboards. Don't worry too much about expensive or faster memory; it makes little difference, just get lots of it. Also check the max memory of your motherboard - some of them only go up to 32GB, which is a bit limiting.
  25. Yeah, it seems fairly obvious to avoid the Unraid array mount point for a different kind of array, i.e. a ZFS pool mount - we don't know how the secret sauce of the Unraid array really works. Either way, you can put it straight under /mnt; that's what I do. Just ignore it in Fix Common Problems if you have that come up. So /mnt/data, /mnt/data1 or /mnt/Samsung500G are good examples. Coming from Windows about 15 years ago, it took me a while to figure out what I should call these things; that's what I came up with after talking to an ex-Linux admin.
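A minimal sketch of what that looks like in practice - pool name and devices below are examples only:
      zpool create -m /mnt/data data mirror /dev/sdb /dev/sdc   # new pool mounted straight under /mnt
      zfs set mountpoint=/mnt/data data                         # or relocate an existing pool's mountpoint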