Marshalleq

Members
  • Posts
    874
  • Joined

  • Last visited

1 Follower

About Marshalleq

  • Birthday October 17

Converted

  • Gender
    Male
  • URL
    https://www.tech-knowhow.com
  • Location
    New Zealand
  • Personal Text
    TT

Recent Profile Visitors

2638 profile views

Marshalleq's Achievements

Collaborator (7/14)

117 Reputation

  1. Does anyone know if this is specific to the Unraid array and/or a specific filesystem, or does it apply to any filesystem? I'm having some weird issues and wondering if this could be the culprit. Thanks.
  2. Just letting you know I got Thunderbolt storage working in Unraid. Details of how to do it are here:
  3. Hi all, just wanted to confirm I got Thunderbolt storage working in Unraid with an add-on card. Relatively simple in the end. Details here:
  4. OK, I got it working. Details here:
  5. Just adding a note here, as I created a page for people to list their experiences with different devices; I'll summarise them and keep it all up to date. I've also included a rough set of live install instructions, which will no doubt need work, but we can update them as we go. So far I'm managing to see the storage devices minus the disks; there's probably some trick to mounting the disks once the SATA connector is detected by Thunderbolt. Anyway, the page is here if you'd like to contribute. Many thanks, Marshalleq
  6. This section is to become the how-to for getting this going. For now I'll put my experience so far, which is to say it looks like it's working but I'm yet to see any fruit.
     1. Jump pins 3 and 5 as outlined here, install the card and connect your devices. The card should now show up in Unraid as a PCIe device.
     2. lspci shows a number of lines similar to:
        03:00.0 PCI bridge: Intel Corporation JHL7540 Thunderbolt 3 Bridge [Titan Ridge 4C 2018] (rev 06)
     3. For your storage you should also see a line similar to:
        [ 44.231417] thunderbolt 0-1: LaCie Rugged THB USB-C
     4. Add the following line to the /etc/udev/rules.d/99-local.rules file:
        ACTION=="add", SUBSYSTEM=="thunderbolt", ATTR{authorized}=="0", ATTR{authorized}="1"
     5. Navigate to /sys/bus/thunderbolt/devices/domain0/yourdevicefolder
     6. Display the contents of the authorized file: cat authorized
     7. If this is currently a 0, then: echo 1 > authorized
     8. Confirm it is now a 1: cat authorized
     9. Reboot.
     10. Success: storage is now shown in the Unraid GUI.
Note that step 4 effectively disables all Thunderbolt security, if you're worried about that. I've requested the bolt package be added to Nerd Tools, as this is apparently the security manager for Thunderbolt in Linux. There is also a package called thunderbolt-tools that provides tbtadm, which seems to do something similar. I'm still trying to understand how much value these tools add given this works now.
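For anyone who wants to script it, here's a rough sketch of steps 4 to 8 in one go. Treat it as exactly that, a sketch: yourdevicefolder is still a placeholder that you need to replace with whatever entry dmesg and the sysfs tree show for your device.
    # Auto-authorise Thunderbolt devices on hot-plug (this is the bit that disables the security)
    echo 'ACTION=="add", SUBSYSTEM=="thunderbolt", ATTR{authorized}=="0", ATTR{authorized}="1"' >> /etc/udev/rules.d/99-local.rules
    # Authorise the already-connected device for the current boot
    DEV=/sys/bus/thunderbolt/devices/domain0/yourdevicefolder   # replace with your device's folder
    cat $DEV/authorized        # 0 means not authorised yet
    echo 1 > $DEV/authorized
    cat $DEV/authorized        # should now read 1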
  7. Sorry, I'm not sure if we're allowed to ask here, but could we get the bolt tools for Thunderbolt added to this? I think I have Thunderbolt working in Unraid by disabling the security, but I understand the bolt package would let that happen more elegantly. Many thanks, Marshalleq.
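In case it helps the decision, this is roughly how bolt's command-line tool is used on a regular Linux box; I haven't tried it under Unraid yet, and the UUID is obviously a placeholder:
    boltctl list                   # show attached Thunderbolt devices and whether they're authorised
    boltctl enroll <device-uuid>   # authorise a device and remember it across reboots
    boltctl info <device-uuid>     # check its status afterwards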
  8. Wow, that was a lot to take in. On that two-sided argument I'm going for the awesome read performance mentioned, and I think it isn't going to make much difference for writes, especially when I'm using a 32-thread Threadripper and a 24-thread dual-Xeon setup.
  9. If I recall correctly the performance is better and it's multi-threaded, whereas lz4 is very old and single-threaded. Could be wrong about the threads. Zstd has differing performance levels you can set, obviously. I just read up on it at the time and chose it. I don't use SLOGs; I thought they were really only beneficial in rare cases and would need more redundancy because they sit in the write path? I don't know much about them, sorry. Very nice drive though!
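If it helps, the zstd level is just part of the property value, so it's easy to experiment per dataset; the pool/dataset name below is only an example:
    zfs set compression=zstd-3 HDDPool1/dataset          # zstd-1 is fastest, zstd-19 compresses hardest
    zfs get compression,compressratio HDDPool1/dataset   # confirm what's active and how well it compresses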
  10. I can tell you what I use, then you can go and read up on those bits. To me these are the key bits in setting up an array. I'm not going to disagree entirely with BVD, but really, like all things technical, research and experimentation are valuable, and that's no reason to fear ZFS and not use it. Some people dive in without doing even a minor bit of forethought, and I assume his commentary is really aimed at that rarer group, who will likely get themselves into trouble with everything, not just ZFS.
Anyway, here's the basic command I use first. If it's an SSD pool I add -o autotrim=on; some people are still scared of this, but I've never had even one issue with it. Compare that to btrfs, where the issues were quite a few, though that was years ago now.
    zpool create -f -o ashift=12 -m /mnt/HDDPool1 HDDPool1 raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg
See below about the special vdev before creating the above. Then the optional tweaking stuff I use:
    zfs set xattr=sa atime=off compression=zstd-1 HDDPool1
And depending on your media, a dataset could be changed thus:
    zfs set recordsize=1M HDDPool1
The default is 128k. (Don't get caught up with this too much, as I keep forgetting that ZFS does variable record sizes; 1M might be good if you have a lot of large video files, for example.)
dedup=on: I use this a lot, but only because I have a special vdev, which IMO means no significant extra RAM is required. However, I've had quite the discussion about that, so not everyone will agree; definitely do your own research before enabling this one. It works great if you have lots of VMs and ISOs. My RAM is not used in any way that I've ever been able to notice.
More options:
    zfs set primarycache=metadata HDDPool1/dataset
    zfs set secondarycache=all HDDPool1/dataset
Some of the cache options are actually dealt with automatically. The promise with them is to optimise how much of your data will be cached in RAM, depending on, for example, whether you have big files and whether it is valuable, or even possible, to cache them.
And finally, the special vdev mentioned above is very cool. It will store metadata on a second set of disks assigned to the array. So, for example, if you have slow hard drives in a raidz2, you could add 3 SSDs (for the same redundancy level), which speeds up seeking and such. It optionally will also store on the SSDs any small file up to the size you specify (which must be less than the recordsize, or you'd be storing everything). As you probably know, small files on HDDs aren't very fast to read and write, so the advantage here is obvious if you have that kind of setup.
    zpool add -o ashift=12 HDDPool1 special mirror /dev/sdp /dev/sdr /dev/sdq
I also set up a fast SSD/NVMe as a level 2 cache. This can be done at any time, and its advantage is simply that anything that doesn't get a hit in RAM fails over to the SSD, which again speeds up reads from HDDs.
    zpool add HDDPool1 cache /dev/nvme0n1
Useful commands:
    arc_summary
    arcstat
So what you can probably see is that there is a default way of doing things, and the 'tweaking' mentioned above is really more about understanding your data and how you want to address it via ZFS. Some settings need to be done at array creation and some can be done later. Most settings that are done later will only apply to newly written data, so you end up having to copy the data off and on again if you get it wrong. I found it super fun to go on the journey and learn it all; I expect you will too.
If you're like me, you'll want to be doing some more reading now! Have a great day. Marshalleq.
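One thing I didn't show above is the property that sets the small-file cutoff for the special vdev. As a sketch using my example pool name (64K is just an illustrative value and must stay below the recordsize):
    zfs set special_small_blocks=64K HDDPool1        # blocks of 64K or less get stored on the special vdev
    zfs get special_small_blocks,recordsize HDDPool1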
  11. Hi, thanks for posting. I'm interested in what advantage this gives you. I have run the img file on a ZFS dataset and also run it as a folder, which creates a host of annoying datasets within it. Does this somehow get the best of both worlds? Thanks.
  12. Right out of the gate I can say that you'd be better off with 4 drives in two mirrors: you get the same space and more speed versus 4 drives in raidz2. Secondly, I assume an i3 is powerful enough to calculate the parity needed without bottlenecking, but it might pay to double-check that. In the above configuration, do you have all of that RAM spare, or not much RAM? I have seen it where having not much RAM spare slows things down a lot; this can somewhat be mitigated by setting the available RAM in the go file. It may also pay to performance-test each drive individually, in case one of them is slowing the others down. I had this same problem on a Thunderbolt-connected ZFS cabinet the other day, then found out that running a non-ZFS filesystem on a single (or multiple) disk was also slow. I am yet to progress, but I suspect I have one drive that is playing up. It's surprising (and annoying) how often this can be a faulty SATA cable. ZFS will be slower than other RAIDed filesystems, but not by that much, so I agree something is wrong. It's probably close to Unraid array performance though, as that is actually very slow. Can't think of anything else right now, sorry, and I appreciate you may have thought of these things already, but sometimes it can trigger a thought for a solution, right? Hope you figure it out and let us know; it might help me with mine!
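For the per-drive check I mentioned, something like this is a quick way to spot a straggler; device names are examples, and it's best run while the pool is otherwise idle:
    hdparm -t /dev/sdb                                               # quick buffered sequential read test
    dd if=/dev/sdb of=/dev/null bs=1M count=4096 status=progress     # longer raw read, roughly 4GB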
  13. Sorry, just rushing out. Yeah, this does seem a little slower than normal, though Unraid is typically quite slow for disk transfers. I'd suggest seeing how fast it copies locally on the box first to get a baseline (maybe use Midnight Commander, launched with the mc command, as I think that shows the speed). If that works OK, it means it's probably a combination of networking and perhaps the SMB protocol, which is also a bit slow on Unraid for some reason, depending on what you're connecting to it with. There are some tweaks for SMB. But those speeds almost look like you're running a 10Mb/s hub in there somewhere, or a faulty cable slowing it down.
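To make that concrete, this is the kind of baseline I mean; the paths are placeholders, and iperf3 may need installing (e.g. via Nerd Tools) on both ends:
    # Local copy on the server itself, taking the network and SMB out of the picture
    rsync -ah --progress /mnt/user/share/bigfile.mkv /mnt/disk1/test/
    # Raw network throughput between client and server
    iperf3 -s                 # on the Unraid box
    iperf3 -c <server-ip>     # on the client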
  14. This is a really good question. I think there have been a few times when I have wondered the same and done something probably wrong to get it going again. Perhaps a few of us (or anyone with a standard install that hasn't been messed with) should post back here what they have. Mine is set to nobody.users with 777 on everything, so I guess I did that at some point in the past. I do note that preferences.xml, plexmediaserver.pid and the scripts folder are set to 644. I have a myriad of different permissions in the cache folder, which I guess is right because it will create its defaults as it creates them. Probably a good idea would be to set one up from scratch and have a look at those defaults.
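For what it's worth, this is the sort of reset I'd try if the permissions get into a mess; the appdata path is only my guess at a typical Docker layout, and 775/664 is simply a tighter alternative to blanket 777:
    chown -R nobody:users /mnt/user/appdata/plex          # the usual Unraid ownership convention
    chmod -R u=rwX,g=rwX,o=rX /mnt/user/appdata/plex      # directories end up 775, regular files 664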
  15. I suspect bonding two NICs that are the same is OK, but in the case above there was one 10G NIC along with some slower ones, which is probably not so great. Unraid networking has never been as straightforward as the competitors' implementations.
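If anyone wants to see what their bond is actually negotiating, the kernel exposes it; bond0 and eth0 are the usual names on Unraid, but check your own:
    cat /proc/net/bonding/bond0    # bonding mode plus each member's link state and speed
    ethtool eth0                   # negotiated speed/duplex of an individual NIC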