Marshalleq

Everything posted by Marshalleq

  1. TrueNAS is probably more complicated than using the ZFS command line. I wouldn't go there; you'd be disappointed. Also, getting a basic ZFS system running on Unraid is pretty simple really, and there are a lot of friendly people on here that would help you. However, if you're the kind of guy that isn't really into technology or willing to spend a bit of time learning, then I might agree - wait for official ZFS support. Or just get a Mac, put Plex on it and use iPhoto.
  2. It's possible you had one very slow disk, caused by a bad disk or bad cable or similar, if there were those kinds of performance issues on the unraid array - if I recall correctly it will only go as fast as the slowest drive. It should be enough for playing video media - you really only notice it when you want to copy large amounts of data to or from it. Personally, unless you need to use a large number of differently sized disks, I wouldn't bother with the unraid array when you are already experienced with a much better tech - ZFS (which is purportedly going to have much tighter integration in the next version). ZFS adds some really core benefits, like actually telling you when data corrupts and offering to repair it - the unraid array will really only rebuild a disk, it doesn't do much more than that. And if it's speed you want, the capabilities ZFS has for SSD mirrors are also pretty awesome.
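     For example (a minimal sketch, assuming a pool simply named tank - substitute your own pool name):
       zpool scrub tank       # read every block and verify it against its checksum
       zpool status -v tank   # shows scrub progress, any checksum errors, and which files were affected
     With a mirror or RAIDZ layout, ZFS repairs any bad blocks it finds from the good copy during the scrub.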
  3. Yes, the standard unraid array is one big known performance issue; however, depending on what you're doing with it, it may or may not bother you. Putting ZFS on unraid will indeed get around this if you do it right, because basically any array known to man is faster than unraid's standard array - but speed isn't what it was made for, and it does have a use case. So there is really no need to move away from unraid to solve a standard unraid array speed issue - you can simply put ZFS on unraid. Your response was a bit confusing because it sounded like you had ZFS on unraid already, but you also said 'standard array', which is certainly not ZFS on unraid (yet). I suspect you'll be back on unraid, as the features are pretty much better than everything else. You'll be able to bring your ZFS array over easily if you ever decide to do that. Good luck.
  4. I'm not sure if /mnt/disks is a good location - i.e. isn't it used for other things that Unraid automates, and might it interfere? I just put mine under /mnt/data - /mnt/whatever works fine. Fix Common Problems will always complain as far as I know and you just have to ignore it.
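     A minimal sketch of what I mean (pool and dataset names here are just examples - adjust to your own):
       zfs set mountpoint=/mnt/data tank/data   # mount the dataset straight under /mnt instead of /mnt/disks or /mnt/user
       zfs get mountpoint tank/data             # confirm where it will mount
     Child datasets inherit the mountpoint, so anything you create under tank/data automatically mounts below /mnt/data.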
  5. For those that associate the phrase 'hacking' with something negative, I would just like to point out that putting ZFS on Unraid is not at all a hack. The two devs have worked extremely hard on it, including with Limetech, to make the plugin work and update seamlessly alongside Unraid updates etc. In fact it's them we have to thank for some of the nice new plugin features we are now all enjoying. The fact that ZFS currently has no official GUI is just how it is supplied at present (the master code has no GUI by design; however, there are ZFS plugins available to help with this). And yes, many people come to Unraid for the Unraid array - including myself. But just because you run ZFS does not mean Unraid has no value. Actually, the way that Unraid has implemented docker support and plugins is in a class all of its own. I tried TrueNAS and even participated in the beta of TrueNAS SCALE to try to give it a bit more polish; its implementation of docker is just awful and frustrating to use, and a second cousin to its installed Kubernetes. Kubernetes is considered the king of containerisation on TrueNAS, but that is some weird hack implementation that doesn't quite fit home installations or enterprise installations - arguably Kubernetes is really meant for enterprises. So basically I'm just trying to defend Unraid a bit here by saying its array isn't its only good feature. And FYI, looking at the announcement for the latest unraid version, it looks like baked-in ZFS from Limetech is coming in the next Unraid version. Happy Weekend!
  6. I'm another one referred here by the Fix Common Problems plugin. I use tdarr extensively, having been in discussion with the developer since the beginning of its creation. I have never had, and still do not have, any of these issues. However, I do not use the unraid array except as a dummy USB device to start the docker services (I use ZFS), and I do not use NFS (I use SMB). I strongly suspect this issue is more about tdarr triggering an unraid bug of some kind than tdarr itself being the issue.
  7. The question should be reframed to ask whether it means any kind of server or just unraid servers. I answered for unraid servers; others did not.
  8. @jaylo123 It's a known fact that Unraid's 'unraid' array has dawdling speed. There is no workaround for this. The only solution I can think of (which I have done) is to not use the unraid array - and on unraid that pretty much means using a ZFS array. From experience the speed increase was notable. Add to that the rest of the benefits and (to me at least) it's a no-brainer. However, despite ZFS being very well implemented into unraid, you would need to be comfortable with the command line to use it and be prepared to do some reading on how it works. So it isn't for everyone and I'm not trying to push you one way or the other. I'm just saying the 'unraid' array is known to be extremely slow.
  9. Also try removing and re-adding the ZFS plugin. You could also try stable and do it again. But it does work on at least r3 because I'm running that. (Sorry, not sure what you're aware of here so I'm just going to say it - make sure you wait until the ZFS plugin updates its module before rebooting after an unraid version change.) And if you can boot into normal mode (not safe mode), perhaps you need to drop to the command line and reimport the pools.
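     A rough sketch of the reimport from the command line (tank is just an example pool name):
       zpool import        # with no arguments, lists any pools available for import
       zpool import tank   # import a specific pool by name
       zpool import -a     # or import everything it can find
     If the pool wasn't exported cleanly or was last used on another system, you may need to add -f to force it.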
  10. It sounds to me like either the server didn't wait for the updated ZFS module to get built, or it wasn't able to be built. The quick solution to that might be to downgrade the server to the previous Unraid version and go from there. That shouldn't be too hard. Failing that, ping steini84 (or possibly others will know) to ask whether there is a downloadable matching version, along with where to put it to force the process. Sorry, I don't know the process; it may be indicated somewhere in this thread though. Marshalleq
  11. No problem. I haven't actually ever used the GUI options and my setup may be very slightly different. For example, when I create an array I set the mount point simply under /mnt - it seems like you've put yours into /mnt/user/zpool. I'm not sure if that's in a guide or what, but it doesn't seem like a sensible way of doing it, as you may have permissions problems given that it's a user folder. Also, clearly you're talking about SMB sharing. I found that the unraid SMB sharing doesn't really work with ZFS, but luckily ZFS has its own SMB built in - so you can edit the smbextra file in /etc somewhere - sorry, I can't look it up exactly at the moment - I think it's something like /etc/samba/smb.conf/smbextra.conf. Don't worry, the SMB format has examples and is super easy. Also, for me, any ZFS shares in unraid under /mnt I just set to nobody.user and that sorts them out; having them owned by root definitely doesn't work. Just always bear in mind with Linux that folders must always be set to 777, and the files can be whatever you need. Hope that sort of points you in the right direction.
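      As a rough sketch, the kind of entry that goes into that extra SMB config looks like this (share name and path are just examples - point it at your own dataset; nobody:users is the usual Unraid owner):
        [media]
            path = /mnt/tank/media
            browseable = yes
            read only = no
      and the ownership/permissions side of what I described above would be roughly:
        chown -R nobody:users /mnt/tank/media
        find /mnt/tank/media -type d -exec chmod 777 {} +   # folders to 777, per my note; leave the files as whatever you need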
  12. Command line. The useful commands I've noted for the disk-based (L2ARC) cache are:
      Add a cache drive: zpool add data cache /dev/nvme0n1
      See how well it's working (also good for the in-memory cache): arc_summary and arcstat
      Remove the cache drive: zpool remove data /dev/disk/by-id/ata-xxx (not sure if using /dev/disk/by-id was what actually removed it in the end; the second time, zpool remove DISKNAME /dev/nvme0n1 removed it, where DISKNAME is the pool name - data in my case)
      It is sort of diminishing returns but fun to try out because you can actually remove it. I tried it out for a while but the hit rate was quite low. Could be good if you had VMs stored on slow disks or something though. One nice thing is it's persistent now, so it remembers what it cached after a reboot.
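      To see whether the cache device is actually earning its keep (again assuming the pool is called data):
        zpool iostat -v data 5   # per-vdev view every 5 seconds; the cache device gets its own row at the bottom
        arcstat 5                # rolling ARC statistics every 5 seconds
        arc_summary              # one-off report; includes an L2ARC section once a cache device is present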
  13. Basically, I created a new dataset and copied it over with rsync after setting it all up. I have been doing ZFS for a while now. Same for the array with the special vdev - that was a while ago now and wasn't a small task, but I got there in the end. My main gripes are not so much with the web pages, more to do with load times, e.g. startup of the docker container and the forever chugging away in the background. It may just be that my library is big - Plex says I have 114000 tracks / 1092 artists / 8463 albums. I hadn't seen ioztat before - I'm guessing that's better than zpool iostat by going down to dataset level or something? I'm talking about how ZFS, with the default 128K record size, will store a block of up to 128K - it's variable. I believe it will literally turn a 128K block into a 64K one if it sees fit to do that. I've always been suspicious about this though, particularly with databases. The thing with ZFS is that just because one group of people told me something, doesn't mean it's true - there are a lot of details to work through. But I do know that ZFS has variable record sizes. That's very kind, thank you. I may take you up on it in future as it would be fun to see your process of figuring it out. Also, this is actually not on the 1950X (actually now it's a 2950X, so I should update that), this is on the dual Xeon machine. Either way I'm basically unavailable for around 3 weeks due to things going on in my life, so it would have to be after that if it's anything more than comments in a forum.
  14. I'd recommend setting up some sensible defaults in the pool root that you think will apply to the whole drive - e.g. turning compression on, as most things will use it; the default record size can probably be left alone; xattr=sa; and whatever else you want, but those are the main ones from memory. Then you tweak them per dataset.
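      A minimal sketch of that, assuming a pool named tank (properties set on the pool's root dataset are inherited by every dataset created under it, which is the point of setting them there):
        zfs set compression=lz4 tank
        zfs set xattr=sa tank
        zfs get -r compression,xattr,recordsize tank   # check what each dataset has actually inherited
      Then override individual datasets only where they need something different.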
  15. I've done what you suggested on Lidarr, because that's by far the worst performing app for me, but I don't really notice any difference so far - in particular the updating of the library (which annoyingly now seems to scan everything even when triggering only a single artist). My library is probably 6x the size of yours. What I was asking above was: does your understanding of ZFS include why or why not variable record sizes cover the performance of different table sizes in a database? Because I've gone down this path of optimising record sizes before, jumped on some forums and been shot down because they were adamant it's not needed with the variable record size feature of ZFS. Also, by reducing the record size you apparently reduce the available compression (which I checked, and my DB went from 1.3G to 1.6G with the smaller record size, so it appears that at least that comment was correct). Personally, I think you're onto something here, because I assume ZFS cannot be aware of the database page size inside a large single file like it can be with individual files, and cannot align a page to a record without some help - but I could be wrong. Since the official ZFS page has an example covering (I think it's) Postgres, that would seem to confirm it. So I applaud your efforts here and await more commentary from others around any speed changes. What I was hoping for was an increased refresh and scan speed, and for the subsequent 'reading file' pass (which seems to happen twice on the whole library afterward) to be a little quicker. But there are external factors with that. In addition to the Lidarr DB on SSD, my audio is stored on a 6-disk RAIDZ2 but with a special vdev where all the metadata is stored, so it's about as good as I'm going to get without going all SSD. I also use a product called Roon, which is a fantastic but expensive music player. That is the absolute slowest app I have. It runs on Google's LevelDB. Any experience with that, out of interest?
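      For anyone wanting to check the same tradeoff on their own system, the properties to look at are along these lines (the dataset path is just an example):
        zfs get recordsize,compressratio,used tank/appdata/lidarr
      used shows the on-disk size, and compressratio shows how much compression you're actually getting at the current record size.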
  16. I run some *arr apps and notice that on my ZFS array they have probably some of the heavier I/O requirements. It's a 6-disk enterprise Intel SSD set, so it has the IOPS. Most of the time I've looked to optimise it, I come back to the advice that ZFS has variable record sizes so there's no point. But if we use Postgres, perhaps there's an optimisation opportunity to match the page size to the record size. I'd be interested in your thoughts on that - thanks for the link to the Radarr guide.
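      For what it's worth, Postgres defaults to an 8K page size (you can confirm with SHOW block_size; in psql - it returns 8192), so the experiment I have in mind is simply creating the database dataset with a matching record size. Names here are made up:
        zfs create -p -o recordsize=8K tank/appdata/postgres
        zfs get recordsize tank/appdata/postgres
      Whether that actually beats leaving the default 128K and trusting the variable record size is exactly the question.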
  17. Fantastic info! This will be a great addition!
  18. Hey, just alluding to the various places where Limetech have already hinted ZFS is coming - e.g. the video interview I saw quite a while back now, the poll for the next feature in which ZFS won the highest vote, etc...
  19. Just to be clear - nothing in that post says ZFS will be available in 6.11. It only says they're laying the groundwork for it. Yes, that does officially hint that it's coming - but that's happened in a few places already.
  20. Does anyone know if this is specific to the unraid array and/or a specific filesystem, or does it apply to any filesystem? I'm having some weird issues and wondering if this could be the culprit. Thanks.
  21. Just letting you know I got thunderbolt storage working in unraid. Details of how here:
  22. Hi all just wanted to confirm I got thunderbolt storage working in unraid with an add on card. Relatively simple in the end. Details here:
  23. OK, I got it working. Details here:
  24. Just adding a note here as I created a page for people to list their experiences with different devices; I'll summarise them and keep it all up to date. I've also included a sort of live set of install instructions, which will no doubt need work, but we can update it as we go. I am managing to see storage devices minus the disks so far - there's probably some trick to mounting the disks once the SATA connector is detected by thunderbolt. Anyway, the page is here if you'd like to contribute. Many thanks, Marshalleq
  25. This section is to become the howto for how to get this going. For now I will put my experience so far, which is to say it looks like it's working but I'm yet to get any fruit.
      1. Jump pins 3 and 5 as outlined here, install the card and connect your devices.
      2. The card should now show up in Unraid as a PCIe device - lspci shows a number of lines similar to: 03:00.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] (rev 06)
      3. For your storage you should also see a line similar to: [ 44.231417] thunderbolt 0-1: LaCie Rugged THB USB-C
      4. Add the following line to the /etc/udev/rules.d/99-local.rules file: ACTION=="add", SUBSYSTEM=="thunderbolt", ATTR{authorized}=="0", ATTR{authorized}="1"
      5. Navigate to /sys/bus/thunderbolt/devices/domain0/yourdevicefolder
      6. Display the contents of the authorized file: cat authorized
      7. If this is currently a 0 then: echo 1 > authorized
      8. Confirm it is now a 1: cat authorized
      9. Reboot.
      10. Success - storage is now shown in the unraid GUI.
      Note that the udev rule in step 4 effectively disables all thunderbolt security, if you're worried about that. I've requested the bolt package be added to Nerd Tools, as this apparently is the security manager for thunderbolt in Linux. There is also a package called thunderbolt-tools that provides tbtadm, which seems to do something similar. I'm still to understand how much value these tools add considering this works now.
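      If you'd rather not use the blanket udev rule in step 4, a rough command-line sketch of doing the same thing by hand for whatever is currently attached (this just loops over the same sysfs authorized files as steps 5-8):
        for f in /sys/bus/thunderbolt/devices/*/authorized; do
            [ -f "$f" ] || continue                      # skip anything that isn't a device entry
            [ "$(cat "$f")" = "0" ] && echo 1 > "$f"     # authorise only currently-unauthorised devices
        done
      And if the bolt package does make it into Nerd Tools, my understanding is the equivalent would be roughly boltctl list to see attached devices and boltctl authorize / boltctl enroll to approve them, rather than poking sysfs directly.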