Everything posted by Marshalleq

  1. Sorry, I'm not sure if we're allowed to ask here, but could we get the bolt tools for Thunderbolt added to this? I think I have Thunderbolt working in Unraid by disabling its security, but I understand the bolt package lets that happen more elegantly. Many thanks. Marshalleq.
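For reference, device authorisation from userspace is normally driven with boltctl, which is what the bolt package provides. A minimal sketch (the device UUID is a placeholder):

```
# List Thunderbolt devices the kernel has enumerated, with their authorisation status
boltctl list

# Authorise a device and remember it across reboots
boltctl enroll <device-uuid>
```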
  2. Wow, that was a lot to take in. On that two-sided argument, I'm going for the awesome read performance mentioned and think it isn't going to make much difference for writes, especially when I am using a 32-thread Threadripper and a 24-thread dual-Xeon setup.
  3. If I recall correctly, the performance is better and it's multi-threaded, whereas lz4 is very old and single-threaded (could be wrong about the threads). Zstd has differing levels of performance you can set, obviously; I just read up on it at the time and chose it. I don't use SLOGs; I thought they were really only beneficial in rare cases and would need more redundancy because they handle writes. I don't know much about them, sorry. Very nice drive though!
  4. I can tell you what I use, then you can go and read up on those bits. To me these are the key bits in setting up an array. I'm not going to disagree entirely with BVD, but really, like all things technical, research and experimentation are valuable, and it's no reason to fear ZFS and not use it. Some people dive in without doing even a minor bit of forethought, and I assume his commentary is really aimed at that rarer group, who will likely get themselves into trouble with everything, not just ZFS.

Anyway, here's the basic command I use first. If it's an SSD pool, I add -o autotrim=on. Some people are still scared of this, but I've never had even one issue with it; compare that to btrfs, where the issues were quite a few, though that was years ago now.

```
zpool create -f -o ashift=12 -m /mnt/HDDPool1 HDDPool1 raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
```

(See below about the special vdev before creating the above.)

Then the optional tweaking stuff I use:

```
zfs set xattr=sa atime=off compression=zstd-1 HDDPool1
```

And depending on your media, a dataset could be changed thus:

```
zfs set recordsize=1M HDDPool1
```

The default is 128k. (Don't get caught up in this too much, as I keep forgetting ZFS does variable record sizes; 1M might be good if you have a lot of large video files, for example.)

dedup=on (I use this a lot, but only because I have a special vdev, because IMO that means no significant extra RAM is required. However, I've had quite the discussion about that, so not all will agree; definitely do your own research before enabling this one.) It works great if you have lots of VMs and ISOs. My RAM is not used in any way that I've ever been able to notice.

More options:

```
zfs set primarycache=metadata HDDPool1/dataset
zfs set secondarycache=all HDDPool1/dataset
```

Some of the cache options are actually dealt with automatically. The promise with them is to optimise how much of your data will be cached in RAM, depending on e.g. whether you have big files or not, and whether it is valuable, or even possible, to cache them.

And finally, the special vdev mentioned above is very cool. It will store metadata on a second set of disks assigned to the array. So for example, if you have slow hard drives in a raidz2, you could have 3 SSDs (for the same redundancy level), which speeds up seeking and such. It will optionally also store on the SSDs any small file up to the size you specify (which must be less than the recordsize, or you'd be storing everything). As you probably know, small files on HDDs aren't very fast to read and write, so the advantage here is obvious if you have that kind of setup.

```
zpool add -o ashift=12 HDDPool1 special mirror /dev/sdp /dev/sdq /dev/sdr
```

I also set up a fast SSD/NVMe as a level 2 cache. This can be done at any time, and its advantage is just that anything that doesn't get a hit in RAM fails over to the SSD, which again is a way of speeding up reads from HDDs.

```
zpool add HDDPool1 cache /dev/nvme0n1
```

Useful commands:

```
arc_summary
arcstat
```

So what you can probably see is that there is a default way of doing things, and the 'tweaking' mentioned above is really more about understanding your data and how you want to address it via ZFS. Some settings need to be done at array creation, and some can be done later. Most settings that are done later will only apply to newly written data, so you end up having to copy the data off and on again if you get it wrong. I found it super fun to go on the journey and learn it all; I expect you will too.
If you're like me, you'll want to be doing some more reading now! Have a great day. Marshalleq.
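Following on from the small-file threshold mentioned in the post above: the property involved is special_small_blocks. A hedged sketch (HDDPool1/media is a hypothetical dataset):

```
# Store blocks of 128K and under on the special vdev.
# Keep this below the dataset's recordsize, or everything lands on the SSDs.
zfs set special_small_blocks=128K HDDPool1/media

# Confirm the thresholds
zfs get special_small_blocks,recordsize HDDPool1/media
```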
  5. Hi, thanks for posting. I'm interested in what advantage this gives you. I have run the .img file on a ZFS dataset and also run it as a folder, which creates a host of annoying datasets within it. Does this somehow get the best of both worlds? Thanks.
  6. Right out of the gate I can say that you'd be better off with 4 drives in two mirrors: you get the same usable space as 4 drives in raidz2, and more speed. Secondly, I assume an i3 is powerful enough to calculate the parity without bottlenecking, but it might pay to double-check that. In the above configuration, do you have all of that RAM spare, or not much? I have seen having little spare RAM slow things down a lot; this can somewhat be mitigated by setting the available RAM in the go file. It may also pay to performance test each drive individually (see the sketch below), in case one of them is slowing the others down. I had this same problem on a Thunderbolt-connected ZFS cabinet the other day, then found out that running a non-ZFS file system on a single disk (or multiple disks) was also slow. I am yet to get further, but suspect I have one drive that is playing up. It's surprising (and annoying) how often this turns out to be a faulty SATA cable. ZFS will be slower than other RAIDed file systems, but not by that much, so I agree something is wrong. It's probably close to what Unraid array performance is, though, as that is actually very slow. Can't think of anything else right now, sorry, and I appreciate you may have thought of these things already - but sometimes it can trigger a thought for a solution, right? Hope you figure it out; let us know, as it might help me with mine!
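A quick way to baseline each drive on its own - generic Linux, nothing Unraid-specific, and /dev/sdX is a placeholder:

```
# Sequential read speed straight off the device (read-only, safe)
hdparm -t /dev/sdX

# Or with dd, reading 1 GiB while bypassing the page cache
dd if=/dev/sdX of=/dev/null bs=1M count=1024 iflag=direct status=progress
```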
  7. Sorry, just rushing out - yeah, this does seem a little slower than normal, though Unraid is typically quite slow for disk transfers. I'd suggest seeing how fast it copies locally on the box first to get a baseline (maybe use Midnight Commander, launched with mc, as I think that shows the speed). Then, if that works OK, it means it's probably a combination of networking and perhaps the SMB protocol, which is also a bit slow on Unraid for some reason, depending on what you're connecting to it with. There are some tweaks for SMB. But those speeds almost look like you're running a 10Mb/s hub in there somewhere, or a faulty cable slowing it down.
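If mc isn't handy, a local baseline can be had with plain tools - the paths here are placeholders:

```
# Copy a large file between pools and watch the throughput
rsync --progress /mnt/HDDPool1/bigfile.mkv /mnt/cache/

# or the same idea with dd
dd if=/mnt/HDDPool1/bigfile.mkv of=/mnt/cache/bigfile.mkv bs=1M status=progress
```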
  8. This is a really good question. I think there have been a few times when I have wondered the same and done something probably wrong to get it going again. Perhaps a few of us (or anyone with a standard install that hasn't been messed with) should post back here what they have. Mine is set to nobody.users with 777 on everything, so I guess I did that in the past some time. I do note that preferences.xml, plexmediaserver.pid and the scripts folder are set to 644. I have a myriad of different permissions in the cache folder, which I guess is right, because Plex will create those with its defaults as it goes. Probably a good idea would be to set one up from scratch and have a look at those defaults.
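If anyone does want to post theirs, something like this dumps the ownership and modes at the top level - the appdata path is a guess, adjust for your install:

```
# Octal mode, owner:group and path for each entry in the Plex appdata folder
find /mnt/user/appdata/plex -maxdepth 1 -printf '%m %u:%g %p\n'
```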
  9. I suspect bonding 2 NICs that are the same is OK, but in the case above there was one 10G NIC along with some slower ones, which is probably not so great. Unraid networking: never as straightforward as the competitors' implementations.
  10. Hi all, does anyone know in what way Thunderbolt support has been added to Unraid? I ask because I've read that Linux can work without the Thunderbolt header, and even without Thunderbolt support on the motherboard at all, which would make things a lot easier. Otherwise, it seems the answer is to get a Titan Ridge Rev 2.0 card and short the pins. My motherboard only supports TB1, so I also don't know if a TB2 or TB3 card will work or if I'm stuck at TB1. The problem I've got is that the host is a Lenovo P700, and official TB cards for it have been unavailable for years. I'm hoping I can get to TB2 at least, but even TB1 is fine really. Details on it here. I've purchased a nice Thunderbolt-connected 8-bay disk cabinet. I like this solution because, unlike SAS cabinets, Thunderbolt has a lot of options available that suit homes better, particularly that they're quiet and smaller. If we can get Thunderbolt working on Unraid for disk storage, I think there will be a lot more options available to all of us - especially those of us living in apartments. On the chance that a dev comes across this, I'm keen to help - I don't know what would be valuable though, getting a TB card or something. I'm sure there are a few of us who would band together for this. Edit: This looks promising: https://github.com/utopia-team/Thunderbolt Thanks all. Marshalleq
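For anyone wanting to see what the kernel already knows about, these generic checks are a reasonable first step - standard Linux, nothing Unraid-specific:

```
# Is the thunderbolt driver loaded?
lsmod | grep thunderbolt

# Have any controllers or devices been enumerated?
ls /sys/bus/thunderbolt/devices

# Kernel messages from the controller
dmesg | grep -i thunderbolt
```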
  11. Card Status:

Gigabyte GC-Titan Ridge 2.0 (Flashed for Mac) - Installed on Asus X399 Prime-A - Card detected with short-pins trick - Devices autodetect
Gigabyte GC-Titan Ridge 2.0 (Flashed for Mac) - Installed on Lenovo P700 (Thunderbolt disabled in BIOS) - Storage working

Known Issues:

Reboot sometimes freezes the system while Thunderbolt is plugged in
Bolt tools are needed to properly authorise devices
Devices may become deauthorised on reboot
  12. This page documents two parts: what is working and what is not working with Thunderbolt on Unraid.

Part 1: The generic subsystems (for want of a better name - perhaps someone can suggest something technically more accurate)
Part 2: Specific matches of PCIe hardware to host systems

To date, the following Thunderbolt subsystems are confirmed working on Unraid:

Storage: Working
Networking: TBC
Display: Passthrough
  13. Hi, thanks for starting this wonderful plugin. With my datasets (there are quite a lot), I wonder if it might be possible to show the pools in a list and only show/hide the datasets by clicking on the pool, i.e. an expand/collapse feature. That way we could have a summary status across the whole system and more easily find problems without scrolling, potentially missing them in the process. Plus it would be a lot cleaner; currently mine takes up about 3-4 screens of scrolling. Thanks. <Edit> I think I should have opened my eyes - 'Show Datasets'!
  14. I came here because I too am looking for this - not because I'm disagreeing with the point above, but because MegaCli can query information on the card from the CLI, which is very useful. I am currently trying to figure out why a RAID cabinet is not hot-swapping disks properly, which has led me down a whole path with mpt2sas and mpt3sas and ended with me wanting this tool to get info. I think it would be great to have as part of Nerd Tools or something. e.g. MegaCli64 -LDInfo -LALL -aALL
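A couple more queries in the same vein, assuming the standard MegaCli64 binary:

```
# Adapter summary: firmware level, status, capabilities
MegaCli64 -AdpAllInfo -aALL

# List every physical drive behind the controller
MegaCli64 -PDList -aALL
```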
  15. Hi all, does anyone know why the above might be happening? It's a drive in an external cabinet connected via SAS. I'm thinking it's something up with the external cabinet. The symptom is that I cannot replug the drives without powering down the whole system, power cycling the external cabinet as well, and then bringing everything back up in reverse order. If I hot-unplug them, I have to power down to replug them. Occasionally they don't start up even after doing that. Diagnostics attached. Thanks. skywalker-diagnostics-20220226-2136.zip
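One generic thing worth trying before a full power cycle is forcing a SCSI bus rescan - this is standard Linux rather than anything cabinet-specific, and won't help if the enclosure's expander itself is wedged:

```
# Ask every SAS/SATA host adapter to rescan its bus for newly attached devices
for host in /sys/class/scsi_host/host*; do echo "- - -" > "$host/scan"; done
```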
  16. What kind of controllers? I assume encryption is off and the CPU you have is reasonable? If that is the typical target file size, have you tried setting recordsize=1M on the dataset? There are quite a few optimisations you can make, but that's probably the most obvious one for large files. You could also ensure your ashift is set correctly; perhaps it's not? Kinda guessing so far, to be honest. If it's still the same, you could post your zpool get all and zfs get all output, which might show something else.
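To check the ashift actually in use on an existing pool (a sketch; 'tank' is a placeholder, and zdb may need pointing at the pool's cachefile with -U on some systems):

```
# ashift is recorded per vdev in the pool config
zdb -C tank | grep ashift

# plus the property dumps mentioned above
zpool get all tank
zfs get recordsize,compression,atime tank
```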
  17. What kind of data are you writing there? There are a few settings that can optimise for e.g. small files, large files, databases, etc. It does sound a little slow for one of those drives to me.
  18. You know what, guys, I just found out the tool is open-sourced here. So to that end, surely someone on this thread knows how to fix it. The drive-scanning thing seems to be happening on Mac as well as Windows, so I'm guessing it's specific to the app, not the OS - but I'm no developer.
  19. I think the point is that this has little to do with USB 2/3 or the stick you use: the manual method works, and the creator hasn't worked for many years. @limetech need to at least publish on their download page that this is the case, so that people don't waste hours wondering what they did wrong. Even an .img file written with a 3rd-party imaging tool would be better. Or just list the things that actually work.
  20. Oh, sorry, you're right. I seem to recall I had to do something extra to get it to work, and that it needed to be redone after a reboot; that was what I was referring to - but perhaps the situation is resolved now! I can certainly type zdb now and stuff comes up, so I guess I'm golden!
  21. Just picking up the end of this conversation, so apologies if I've misunderstood - if there is a plugin, system scripts, or similar, zdb would be a good candidate to include.
  22. Randomly, I came across this OpenZFS man page, which lists a device type specifically for storing dedup tables. I was not aware this device type was available. So this would, I guess, be to split the dedup tables out from the other metadata and small-file blocks that come with a special vdev, for those who want to do that - though I suspect that's pretty niche, as the special device type should offer additional performance improvements in most cases. ZFS is awesome.
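For reference, it takes the same shape as adding a special vdev - a sketch, with placeholder devices:

```
# Dedicated allocation class that holds only the dedup tables
zpool add HDDPool1 dedup mirror /dev/sdx /dev/sdy
```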
  23. Hey, I went through a few of these tests. You can add the special vdev at any time, but you do have to recopy the data. A quick way I ended up figuring out to do that was to first rename a dataset to a new name, then send/receive it back to the original name, then delete the renamed set (see the sketch below). There's also a rebalance script I could dig up, but there are caveats to that, so I ended up just doing the rename. I've read that you can take a special vdev out again in certain circumstances, but to be honest it was not very clear and sounded scary; most people in that discussion concluded it wasn't for them. Remember, the special vdev holds all the information about where the files are stored (I guess it's like ZFS's FAT), so if it dies, the array is effectively dead, because it doesn't know how to find the files. Though again, only files that have been modified since the special vdev was added are on it, so I'm not sure if the whole pool would die or not.

EDIT: From that other thread: "Supposedly, special vdevs can be removed from a pool IF the special and regular vdevs use the same ashift, and if everything is mirrors. So it won't work if you have raidz vdevs or mixed ashift."

Honestly, the best thread I found on it is below, with a fantastic run of comments and questions at the bottom of it - worth reading if you're considering doing it. The opening paragraph from the article:

"Introduction. ZFS Allocation Classes: It isn't storage tiers or caching, but gosh darn it, you can really REALLY speed up your zfs pool. From the manual: Special Allocation Class. The allocations in the special class are dedicated to specific block types. By default this includes all metadata, the indirect blocks of user data, and any deduplication tables. The class can also be provisioned to accept small file blocks."

Link: https://forum.level1techs.com/t/zfs-metadata-special-device-z/159954

Happy reading!
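The rename-and-copy-back trick above looks roughly like this - a hedged sketch with placeholder names; verify the new copy before destroying anything:

```
# Move the dataset aside, then copy it back so the data is rewritten
# (and its metadata lands on the new special vdev)
zfs rename HDDPool1/data HDDPool1/data_old
zfs snapshot -r HDDPool1/data_old@migrate
zfs send -R HDDPool1/data_old@migrate | zfs receive HDDPool1/data

# Only once you've checked the new copy:
zfs destroy -r HDDPool1/data_old
```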
  24. Have you found some doc that says performance of dedup with a special vdev is bad? I mean, it's going to be slower than RAM in most cases, but that doesn't mean we will notice it, or that it becomes unusable. The other link that keeps being posted above is ambiguous. I've heard otherwise, and that aligns with my experience. Or are you just speaking generally from educated guesses? (Genuine question.) I have mine with HDDs, so that's probably why I don't notice it.
  25. @subivoodoo Great feedback! Did you by chance try the special vdev (or want to), to see what the difference is in terms of RAM usage for dedup? I figure for this test any small SSD would do (though typically you'd want it mirrored).
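For measuring that, the dedup table statistics are visible directly from zpool - 'tank' is a placeholder pool name:

```
# Histogram of the dedup table, including its in-core size
zpool status -D tank
```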