Lev

Everything posted by Lev

  1. Upgrading both the 846 and 826 backplanes to SAS3. Such an easy swap in a Supermicro 847 case. @johnnie.black This has been such a helpful resource, and I stumbled across your posts over on another message board about your SAS3 benchmarks, which were helpful in knowing what performance to expect with SAS3 and DataBolt. I wanted to post here to say thanks for that. Also, if there's anything you want to update in your first post here to reflect your most up-to-date findings, that would be great. I'll post some pics of the backplane swap when I get a chance. It's been fun.
  2. Huge improvement =D Night and day difference for me. Like wow! LIKE WOW!
  3. You made a lot of great points. And you're right, there is likely a better way I could structure things that would reduce the number of UDs.
  4. Super helpful response @Squid, thanks for the details.
  5. The UD refresh animation can vary in length. The fastest is usually 5-6 seconds, the average maybe around 8-10, and the extreme would be 15-20 seconds. The UnRAID Main page itself loads instantly.
  6. No SMB / NFS mounts are in use or created. I also keep the disks spun up to see if that helps. Sometimes the wait for the refresh is longer than others. I suspect this is when there's activity on one or more of the drives and UD has to wait in the queue to get the info it needs from the disks for the refresh. You rock, awesome.
  7. Story time.... 🤣 I think I have a lot of UDs. 16 drives, to be exact, managed by UD. Pretty cool! Quite the growth since 2011 when I first started with UnRAID. I have all 28 data disks in UnRAID utilized. So now, with another 16 disks managed by UD, I've gained some experience with how UD works when managing this many devices, and it has me considering where I want to go next.

     The recent enhancement of 'None' or 'All' to show the mount points has been awesome. It really improved the usability of UD with this many devices; I can easily find mount point names. As I keep adding devices, managing mount point names is becoming more of a chore. Thankfully UD has a great feature in that it uses the disk serial number ID by default as a mount point. This works great with MergerFS, as I can keep the default, add the parent directories I wish on the drive, and MergerFS takes care of the rest. Super easy! 😀

     Two enhancements I think about sometimes, based on this large number of UD devices:

     1. With this many devices in UD, it can take about 10 or more seconds to load the Main UnRAID page with the UD GUI. It has me wondering if there's a way to speed this up, like disabling temperature (hdparm) or some other info UD queries to improve UD loading speed (rough sketch below, after this post).

     2. Of all the switches in the UD GUI, I use the 'Auto-Mount' switch the most. Each time, it triggers the 'refresh animation', and it has me wondering if there's a way to make it work similar to UnRAID Docker's 'autostart'. Those switches don't trigger a GUI refresh when switched, so I'm wondering if a similar method could be leveraged by UD.

     I really love that UD has enabled all this to be possible via the GUI. While I could do it via the console, it wouldn't be as much fun. Really appreciate all the work you've put into UD over the years @dlandon.
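     P.S. On the temperature idea in point 1, here's a rough, untested sketch of what I'm picturing (device names are just placeholders): hdparm -C can report a drive's power state without waking it, so the temperature query could be skipped for drives that aren't spun up:

     # Query temps only on drives that are already spun up (sketch; /dev/sdb etc. are placeholders)
     for dev in /dev/sdb /dev/sdc /dev/sdd; do
         # hdparm -C reports the power state without spinning the drive up
         state=$(hdparm -C "$dev" | awk '/drive state/ {print $NF}')
         if [ "$state" = "active/idle" ]; then
             # read the current temperature from SMART attribute 194
             smartctl -A "$dev" | awk '/Temperature_Celsius/ {print $10}'
         else
             echo "$dev is $state, skipping temperature query"
         fi
     done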
  8. Need some help wrapping my head around the XFS option. Here's what I *think* I know; if someone can correct or affirm, I'd appreciate it 😀 Let's play True or False....

     btrfs has traditionally been the cache drive file system of preference if using Docker. T / F?

     btrfs was the preference because it supported Copy on Write (COW), which was helpful for de-duplication of Docker images. T / F?

     XFS with the new-ish (2018?) reflink=1, which is now the default format option for XFS in UnRAID, also enables COW. T / F?

     Moving forward, I can now select XFS as my cache drive option where Docker images are stored, or point the new Docker folder from @Squid there, and gain all the benefits I had before using btrfs but with the trust and stability that XFS brings. T / F?

     Thanks for playing!
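     For my own testing, here's how I plan to check reflink on a cache drive (untested sketch; the /mnt/cache paths are just examples, but xfs_info and cp --reflink are standard tools):

     # Confirm the filesystem was formatted with reflink support
     xfs_info /mnt/cache | grep reflink        # expect reflink=1

     # Make a COW copy; this fails outright if reflink isn't supported
     cp --reflink=always /mnt/cache/docker.img /mnt/cache/docker-cow.img

     # du shows each file at full size, but df barely moves because the extents are shared
     du -sh /mnt/cache/docker.img /mnt/cache/docker-cow.img
     df -h /mnt/cache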
  9. This is correct. UnRAID's starting sector is 64; it used to be 63, but never 2048. I know it sucks to hear. But consider it a good excuse to buy one more Easystore to shuck and make the long copy process a little less painful 😀
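     If you want to double-check what a disk's starting sector actually is (sdX is a placeholder):

     # Print the partition table; the Start column shows the first sector
     fdisk -l /dev/sdX

     # Or ask sgdisk about partition 1 directly; 'First sector' reads 64 on an UnRAID-partitioned disk
     sgdisk -i 1 /dev/sdX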
  10. I've been puzzled for the last week why most of my XFS partitions were reporting 'sectsz=512' and a few were reporting 'sectsz=4096', and traced it to whether the drive had been partitioned and formatted by UnRAID or Unassigned Devices. I put the detailed log output at the very bottom to show both outputs.

     Researching whether an answer already existed, I had to take a trip down memory lane searching through the posts to recall that UD didn't always create UnRAID-compatible partitions. I remember those days, but didn't find an answer to why the difference. I went deeper into the code and traced it to the difference in the following commands:

     # Create new UnRAID Partition
     sgdisk -o -a 8 -n 1:32K:0 {$dev}

     # Create UnAssigned Disk Partition
     sgdisk -o -a 1 -n 1:32K:0 {$dev}

     What's the real impact? 🤷‍♂️ Both ways work and are functional; however, the UnRAID command aligning on "-a 8" does seem to avoid the alignment warning.

     # UnRAID partition creation
     # sgdisk -Z /dev/sde
     GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
     # sgdisk -o -a 8 -n 1:32K:0 /dev/sde
     Creating new GPT entries in memory.
     The operation has completed successfully.

     # UnAssigned Devices partition creation
     # sgdisk -Z /dev/sde
     GPT data structures destroyed! You may now partition the disk using fdisk or other utilities.
     # sgdisk -o -a 1 -n 1:32K:0 /dev/sde
     Creating new GPT entries in memory.
     Warning: Setting alignment to a value that does not match the disk's physical block size!
     Performance degradation may result!
     Physical block size = 4096
     Logical block size = 512
     Optimal alignment = 8 or multiples thereof.
     The operation has completed successfully.

     Detailed xfs_info output from two drives for comparison.

     Partitioned and formatted by UnRAID v6.8.3:

     # xfs_info /dev/sde1
     meta-data=/dev/sde1              isize=512    agcount=11, agsize=268435455 blks
              =                       sectsz=512   attr=2, projid32bit=1
              =                       crc=1        finobt=1, sparse=1, rmapbt=0
              =                       reflink=1
     data     =                       bsize=4096   blocks=2929721331, imaxpct=5
              =                       sunit=0      swidth=0 blks
     naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
     log      =internal log           bsize=4096   blocks=521728, version=2
              =                       sectsz=512   sunit=0 blks, lazy-count=1
     realtime =none                   extsz=4096   blocks=0, rtextents=0

     Partitioned and formatted by UnAssigned Devices v2020.06.28:

     # xfs_info /dev/sdas1
     meta-data=/dev/sdas1             isize=512    agcount=8, agsize=268435455 blks
              =                       sectsz=4096  attr=2, projid32bit=1
              =                       crc=1        finobt=1, sparse=1, rmapbt=0
              =                       reflink=1
     data     =                       bsize=4096   blocks=1953506633, imaxpct=5
              =                       sunit=0      swidth=0 blks
     naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
     log      =internal log           bsize=4096   blocks=521728, version=2
              =                       sectsz=4096  sunit=1 blks, lazy-count=1
     realtime =none                   extsz=4096   blocks=0, rtextents=0

     Any thoughts on this @dlandon?
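     For anyone following along, here's a quick way to see the logical vs. physical sector sizes that sgdisk is warning about (sdX is a placeholder; blockdev and the sysfs files are standard Linux interfaces):

     # Logical sector size (512 on a 512e drive)
     blockdev --getss /dev/sdX

     # Physical sector size (4096 on a 512e drive)
     blockdev --getpbsz /dev/sdX

     # Same info straight from the kernel
     cat /sys/block/sdX/queue/logical_block_size /sys/block/sdX/queue/physical_block_size

     Note that with "-n 1:32K:0" both partitions actually start at sector 64 (32 KiB / 512 bytes), which is divisible by 8, so I'd guess both end up physically aligned in practice; "-a 1" just permits unaligned placement, which is what triggers the warning.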
  11. Works awesome! No more playing hide and seek trying to guess which drive has the mount I'm looking for. Love the change. Thank you!
  12. An enhancement request... as the number of drives grows in my UD list in the GUI, I find I have to press the (+) to expand each drive to find the one that has the mount point I'm looking for. Not sure what the right enhancement is; perhaps a way to expand them all at once to make all the mount points visible?
  13. Not that I'm aware of.
  14. Nice update @Squid! Much appreciated 😀
  15. You mean like this? My current server
  16. @Addy90 Now that I've got more free time on my hands, I'm going to give CEPH a try. I ordered some more RAM for the 4-server cluster that I'll be setting up for it. In the meantime I'm reading and watching tutorial videos. Thanks again for your post. Using CEPH at home is a big leap from UnRAID, but in the years ahead it will prove to scale beyond my yearly budget for additional hard drives 🤣
  17. I had some fun updating my User Profile for this forum. There's an 'About Me' section, so I added the UnRAID servers I've had over the years. Will try to get more actual photos.
  18. I'm presently on my 3rd generation of UnRAID server.

     2017 to present - SuperMicro SC847 - 36 bays
     2014 to 2017 - Norco 4224 - 24 bays
     2011 to 2014 - Dell Inspiron - I modded the case to hold 10 drives.

     Future - In Test - BackBlaze Storage Pod v3 - 45 drives. Not sure if it'll be UnRAID. Right now I'm testing SnapRAID w/ MergerFS to overcome both the drive limit and move to an asynchronous parity sync to gain write speed direct to the data disks (rough sketch below).
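     Here's a minimal sketch of the SnapRAID + MergerFS setup I'm testing (untested as written; disk paths and names are just examples):

     # /etc/snapraid.conf - one parity drive, content files spread across the data drives
     parity /mnt/parity1/snapraid.parity
     content /mnt/disk1/snapraid.content
     content /mnt/disk2/snapraid.content
     data d1 /mnt/disk1/
     data d2 /mnt/disk2/

     # Pool the data drives into a single mount point with MergerFS
     mergerfs -o defaults,allow_other,category.create=mfs /mnt/disk1:/mnt/disk2 /mnt/storage

     # Parity is updated on demand (e.g. nightly from cron) rather than on every write,
     # which is where the direct-to-disk write speed comes from
     snapraid sync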
  19. No, it doesn't fit my requirements. I overcame the 30 drive limit; there are various methods. I run Unraid VMs within an Unraid host, then mount the SMB shares from the Unraid VMs with Unassigned Devices on the Unraid host. It took some time to get used to. Rebooting the Unraid host takes a lot of effort, making sure each Unraid VM's array is stopped and each VM is powered down before the host.
  20. @cbr600ds2 Welcome, I can see you are new here. Happy to have a new person asking these kinds of questions; you'll fit right in. Rather than continue this thread, I invite you to join an existing one. You will likely have many advanced thoughts if you're asking such things. Try searching the forums, and you're likely to discover a wealth of information from those who have asked the same question years earlier.
  21. Found it... Dumping this here so I can find it in the future.... special thanks to him who will recognize his code. Don't use this unless you know WTF you're doing. This will destroy things.