Everything posted by TexasUnraid

  1. I have 5x 128GB laptop SSDs I was given, running in RAID5, so I kinda need BTRFS lol. This is why I had the extra SSD in the array formatted as XFS. I put docker on the cache this morning and am waiting a few hours to see what kind of writes I end up with. Really considering just updating to the beta and creating a 2nd cache pool to be done with this.
  2. Yeah, I am thinking I will just use the space_cache=v2 mount option and move things over to the cache for now and see what kind of writes I get (a rough sketch of the mount change is below). If they are tolerable then I will wait for the 6.9 RC. If they are still too high I will consider the beta. The multiple cache pools would be really handy for me as well. Keep us posted on how things go and if you notice any bugs with the beta 🙂
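     For reference, a minimal sketch of what I mean, assuming the pool is mounted at /mnt/cache and the running kernel lets you switch the free space cache on a remount (check the result rather than taking my word for it):

        # check which space_cache option the pool is currently mounted with
        mount | grep /mnt/cache

        # ask btrfs to build the v2 free space cache (free space tree) on a live remount
        mount -o remount,space_cache=v2 /mnt/cache

        # confirm space_cache=v2 now shows up in the mount options
        mount | grep /mnt/cache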
  3. Power usage on this card is around ~15w according to my watt meter (not exact, as I made some other changes to the hardware at the same time). Considering this is PCIe Gen3 with 16 ports, that is a bit better than average from my research. Naturally the 8-port PCIe Gen2 cards will use less; I think they were generally closer to 8-10w.
  4. Sure thing, although with all due respect, until a few weeks ago when this was officially acknowledged, this entire topic was nothing but "hacks" to work around the issue. 😉 So if hacks are not allowed to be discussed, what is an official option to fix the issue? I really would love one, I really hate going outside officially supported channels. Thus far the only official option I have heard is to wait for 6.9, which is ? months away from an RC and 6+ months away from an official release? 6.9 does sound like the fix, I am just uncomfortable using betas on an active server; I will always question if an issue is due to the beta or something else. An RC is not ideal but I would consider it.
  5. Yes, agreed on a proper guide being really handy for this, even though I can figure it out the hard way. Well, I tried to use the symlink for a UD device but ran into an issue. I put the symlinks on the cache drive pointing towards the UD drive (roughly as sketched below). This works fine for everything except that unraid does not see these as valid shares and thus does not let me edit the settings for them. So I have no idea how it will treat these shares and whether it will break docker / appdata since they will not be set to cache only?
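     Roughly what I did, as a sketch only, assuming the UD drive is mounted at /mnt/disks/ud_ssd (that mount point name is just an example) and docker is stopped first:

        # move appdata onto the unassigned device
        mv /mnt/cache/appdata /mnt/disks/ud_ssd/appdata

        # leave a symlink behind on the cache so the existing container paths still resolve
        ln -s /mnt/disks/ud_ssd/appdata /mnt/cache/appdata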
  6. Ok, so I was looking into LSI HBA cards and they all seemed overpriced / over-hyped compared to other options. After looking around a lot I was debating between the HP240 and Adaptec, as they both offered more features for less money. I ended up going with the Adaptec 71605 as I got a really good deal on it, and it checked all the boxes except TRIM support for consumer SSDs (like basically all other HBAs):
     - 16 SATA ports
     - PCIe Gen3 x8
     - No need to flash the card for HBA functionality, just switch to HBA mode in the card's BIOS
     - Built-in driver support in unraid and every other OS I have tried it with
     - Can be found for less than $50 fairly easily on eBay (I got mine for a bit over half that during a sale)
     So I have been using it for a bit over a month now and have to say, I am quite happy with my choice. I can hit over 2GB/sec speeds on my 5x 128GB cache pool, but I am CPU bottlenecked at that point (all cores pegged to 100%). Others have reported 4500MB/s combined speeds and still not reaching a bottleneck. The heatsink does need some airflow, like all HBA cards. I built a funnel out of poster board that covers half a 120mm side panel fan and it keeps it nice and cool with the fan at the lowest speed, ~500rpm.
     The BIOS is also pretty simple to figure out: reset all settings to default, don't set up any RAID, and it will work. Although switching it to HBA mode and enabling drive listing on boot is better for our use case IMHO. Overall I am very happy with this card and recommend it.
     The only issue I have had is that the default write-cache setting is "Drive dependent". Sounds fine on paper, but I had several drives just suddenly stop supporting write caching after a reboot, and hdparm could not re-enable it, nor would it work on another computer. This setting should always be set to "Enabled always" and never changed.
     EDIT: I figured out how to fix this from within unraid after some more playing around; I will leave the old windows fix in a zip archive below in case some of that helped get this working. hdparm does not work to fix this, nor does the normal smartctl command. There is a more advanced smartctl command that does work though!
        smartctl -s wcache-sct,on,p /dev/sdX
     This got it working on a drive I added to the server after fixing the others, without having to revert to windows. No need to reboot or anything, it takes effect immediately. The ",p" is supposed to make the command persistent (a quick check/re-enable loop is sketched below). I archived the rest of this thread for future reference in case someone needs it in the attached zip file, including the dskcache windows program.
     dskcache - Tool for fixing write cache on hard drives.zip
     Archived instructions for fixing write-cache disabled on Adaptec HBA card.zip
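     A minimal sketch of the check/re-enable loop I mean, assuming SATA drives that accept the wcache-sct feature; the /dev/sd[a-z] glob is just an example, so adjust it for your hardware:

        #!/bin/bash
        # report the write-cache state of each drive and re-enable it where it got switched off
        for dev in /dev/sd[a-z]; do
            state=$(smartctl -g wcache "$dev" | awk -F': *' '/Write cache is/ {print $2}')
            echo "$dev write cache: $state"
            if [ "$state" = "Disabled" ]; then
                # same command as above; ",p" asks the drive to keep the setting across power cycles
                smartctl -s wcache-sct,on,p "$dev"
            fi
        done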
  7. So there are no outstanding changes from beta 25 to be implemented in the next release? I have not been following the beta thread very closely.
  8. That's what I thought, I am waiting for at least the RC of 6.9 to consider upgrading now that the server is in service. Symlinks / UD sound like a good stopgap, and it makes it simple to swap over to 6.9 as well since the paths will remain the same.
  9. Good to know, interesting use case as well. How would the script know that an attack is taking place? So no gotchas with symlinks on unraid? They work just like on any other linux system (aka, I can look up generic symlink tutorials online)?
  10. Yeah, it was the same root cause as the docker writes. So fix one and you fix them both. So are you saying that I can repartition the SSDs on 6.8 and they will work? Basically make my cache like 6.9 will be (and hopefully compatible as well so I don't need to convert again later)? How would I go about doing this? In the UD thread something was said about the new partition layout not being backward compatible.
  11. The writes are massively inflated with appdata as well as docker. Now that we understand why, it makes sense: the tiny writes that both make will cause the writing of at least 2 full blocks on the drive, plus the filesystem overhead from the free space caching, even if it just wanted to write 1 byte (a rough way to see this is sketched below). Great, so the symlinks won't cause any issues with the fuse file system? If I simply put a symlink in cache pointing towards the UD drive, everything works as expected and the files will be accessible from the /user file system? That could work, I have not actually used symlinks in linux yet but no time like the present to learn lol. Used them a lot in windows.
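     A rough way to see the amplification for yourself, assuming the SSD reports Total_LBAs_Written in 512-byte units (the device and path here are just examples, and the counter on some drives only updates in coarse steps):

        # snapshot the drive's lifetime write counter, append a single byte, flush, check again
        before=$(smartctl -A /dev/sdc | awk '/Total_LBAs_Written/ {print $10}')
        echo -n "x" >> /mnt/cache/appdata/tiny-write-test.log
        sync
        after=$(smartctl -A /dev/sdc | awk '/Total_LBAs_Written/ {print $10}')
        echo "512-byte LBAs written for a 1-byte append: $((after - before))"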
  12. I realized why it won't work: you would have to edit every docker to point to the appdata on the UD drive, which would be a real pain and easy to mess up when having to do it 20+ times. Although, can symlinks be used on unraid? Could I use a symlink between the cache and a UD device so that I could keep the same paths I have now?
  13. I didn't think putting the docker on a UD device would be a good long term option. I always saw UD as a temporary use feature. That said I could be wrong and it would work perfectly fine for docker and appdata. Anyone have any info on this?
  14. Because I already have a cache pool set up and 6.8 does not support multiple cache pools. This drive is not even supposed to be in the server; I stole it out of a laptop since using this drive formatted as XFS resulted in 50-100x fewer writes vs using the cache pool.
  15. Ok, so as I understand it the fix for the excessive writes is the combo of space_cache=v2 and the new partition alignment. The space_cache=v2 I can do now, but it will still be roughly 2x the writes without the alignment fix. The alignment fix cannot be done until 6.9, as unraid will not recognize the partition. Am I on the right track there? (A quick way to check the current alignment is sketched below.) Is there any way to use the new alignment with unraid 6.8? I just got a drive to use for parity, but that means I need to remove the SSD from the array that is currently formatted XFS and has docker/appdata. Debating options now.
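     For reference, a quick way to see what alignment a pool member currently has; the device name is an example, and my understanding is that the old layout starts the data partition at sector 64 while the new 6.9 layout starts it on a 1MiB boundary (sector 2048):

        # look at the Start column for partition 1 (values are in 512-byte sectors)
        fdisk -l /dev/sdc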
  16. Yeah, I figured there would be an RC release before a full release, I just was not sure on the time frame between them. Thanks for the info, so basically I have some time to kill before moving to 6.9 will be an option. That leaves me with a conundrum of what to do, but I think I will move that conversation to the excessive docker writes thread.
  17. Just curious being new around here, what kind of timeline are we looking at for the 6.9 RC and then full release? Weeks? Months? I ask because I am having to use an SSD in the array for docker right now, not a problem since I don't have a parity drive. The issue is I just got a drive to use for parity, so I am trying to figure out what order to do things in:
     - Install parity now and build it (24+ hours), knowing that the SSD could break the first 256GB of parity.
     - Move docker to cache and just eat the excessive writes for a little bit while waiting for 6.9.
     - Wait a few weeks for 6.9 to be released and then install parity after removing the SSD from the array.
     If 6.9 is around the corner I might as well wait and use the extra drive as a backup for the time being. If it is still months away I will most likely run out of space, as I am going to move some of my replaceable backup drives (aka linux ISOs) into the array once I have parity working. I trust parity for replaceable data but not for irreplaceable data.
  18. Perfect, toying with the idea of manually creating a read cache using some commands I found online with one of the secondary cache pools. Snapshots and a read cache / tiered storage are the only features I miss at this point. At the least I want to move some frequently accessed folders to an SSD so they don't spin up the main drives for no reason.
  19. Thanks for the explanation, makes total sense, just never really chased that tail to the conclusion before. Just ordered a drive to use for parity so looking forward to 6.9 and being able to move dockers back onto the cache.
  20. Interesting that the alignment would make that much of a difference; does anyone have a technical explanation of why this is?
  21. I had a question about the cache pools in 6.9 that I was not able to test when I was still using it. I know that only a single cache pool can be selected for a share, but is this a limit of mover or of the file system? AKA, if I manually created the same share on another cache pool than the one selected, would it be combined with the rest of the share and function like a normal cache pool minus mover functionality?
  22. I like to add these tracker lists to the "Automatically add these trackers to new downloads:" section. https://github.com/ngosang/trackerslist The setting is reset on every reboot though? Any way to make it persist?
  23. I think you have done a fantastic job with UD. It has expanded to be a very powerful addition to unraid, and I can firmly say that I would not have been able to move to unraid without it. The ability to use my old drives with UD while setting up unraid was a make or break point for me. The new cache pool features should really expand the capabilities of unraid, and I am really looking forward to them. If they add ZFS as well, that should really blur the lines between freeNAS / unraid and make unraid the de facto option IMHO.
  24. Here is a script that someone else made; I tweaked a few things to make it easier to use. It outputs to a Temp share right now, you can update that as desired, it is the last line.

        #!/bin/bash
        #description=Basic script to display the amount of data written to SSD on drives that support this. Set "argumentDefault" to the drive you want if you will schedule this.
        #argumentDescription= Set drive you want to see here
        #argumentDefault=sdc
        ### replace sd? above with label of drive you want TBW calculated for ###

        device=/dev/"$1"

        sudo smartctl -A $device | awk '
        $0 ~ /Power_On_Hours/ {
            poh=$10;
            printf "%s / %d hours / %d days / %.2f years\n", $2, $10, $10 / 24, $10 / 24 / 365.25
        }
        $0 ~ /Total_LBAs_Written/ {
            lbas=$10; bytes=$10 * 512;
            mb= bytes / 1024^2; gb= bytes / 1024^3; tb= bytes / 1024^4;
            #printf "%s / %s / %d mb / %.1f gb / %.3f tb\n", $2, $10, mb, gb, tb
            printf "%s / %.2f gb / %.2f tb\n", $2, gb, tb
            printf "mean writes per hour: / %.3f gb / %.3f tb", gb/poh, tb/poh
        }
        $0 ~ /Wear_Leveling_Count/ {
            printf "%s / %d (%% health)\n", $2, int($4)
        }
        ' | sed -e 's:/:@:' | sed -e "s\$^\$$device @ \$" | column -ts@

        # Get the TBW of the selected drive
        TBWSDB_TB=$(/usr/sbin/smartctl -A /dev/"$1" | awk '$0~/LBAs/{ printf "%.1f\n", $10 * 512 / 1024^4 }')
        TBWSDB_GB=$(/usr/sbin/smartctl -A /dev/"$1" | awk '$0~/LBAs/{ printf "%.1f\n", $10 * 512 / 1024^3 }')
        TBWSDB_MB=$(/usr/sbin/smartctl -A /dev/"$1" | awk '$0~/LBAs/{ printf "%.1f\n", $10 * 512 / 1024^2 }')

        echo "TBW on $(date +"%d-%m-%Y %H:%M:%S") --> if 2 numbers, Written data first line, read data second line > $TBWSDB_TB TB, which is $TBWSDB_GB GB, which is $TBWSDB_MB MB." >> /mnt/user/Temp/TBW_"$1".log

     I have it set to run daily now but had it set to hourly when I was actively troubleshooting. As far as the graph goes, you should be able to get a very similar graph from the hard disks section in netdata.
  25. Netdata is great for tracking write speed like that, although it looks like he is using something else. Netdata is a must IMHO for a server like this; it has helped me track down several issues already, plus it is nice to be able to see exactly what is happening. For tracking total writes, though, the best option is to use the LBAs-written SMART attribute. There was a script posted earlier that automated logging this over time using user scripts (and a quick one-liner version is below).
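     If you just want a quick one-off number rather than the logging script, something like this works, assuming the drive reports Total_LBAs_Written in 512-byte units (the device name is an example):

        # lifetime host writes, converted from 512-byte LBAs
        smartctl -A /dev/sdc | awk '/Total_LBAs_Written/ {printf "%.2f TB written\n", $10 * 512 / 1024^4}'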